Magnetised plasma dynamics lies at the heart of many fascinating phenomena in astrophysical, space and laboratory physics. Turbulence in the solar wind and in the interstellar medium, solar, stellar and accretion-disk flares, substorms in the Earth's magnetosphere, and turbulent transport and instabilities in magnetised fusion experiments are just a few examples of remarkable physics problems whose solution is determined by understanding the behaviour of plasmas in a magnetised environment. In many of these cases, (i) the collision frequency is much lower than the typical frequencies of the physical phenomena of interest (e.g., turbulence, magnetic reconnection), i.e., the plasmas are weakly collisional; and (ii) the size of the ion Larmor orbit is several orders of magnitude smaller than the size of the system. Weak collisionality implies that on the timescales of interest the plasma cannot be treated as a fluid; instead, a kinetic description that evolves the particles' distribution functions is required. This is rather unfortunate from the computational point of view, since fully kinetic models live in a six-dimensional phase space (each particle is characterised by its position and velocity vectors). The strong magnetisation, however, implies that the plasma is highly anisotropic, with very different particle motions along and across the magnetic field direction. This anisotropy can be exploited analytically to yield reduced kinetic models, i.e., asymptotic descriptions that reduce the phase space to only 5D or even 4D. This leads to tremendous computational savings and effectively renders possible calculations that would otherwise not be feasible on today's supercomputers.

Gyrokinetics is a rigorous description of strongly magnetised, weakly collisional plasmas. The key idea behind the gyrokinetic formalism is that, because of the strong magnetic (guide) field, the particles' Larmor gyration frequency is much higher than the frequencies of dynamical interest and can thus be averaged over. This allows for a reduction of the dimensionality of the system, from 6D (three position and three velocity coordinates) to 5D (three position coordinates, and velocities parallel and perpendicular to the magnetic field), while retaining all the essential physical effects. Gyrokinetics was originally motivated by the attempt to model microinstabilities in magnetic fusion experiments; in this respect it has been rather successful.
In recognition of its usefulness, the range of applications of gyrokinetics has broadened in recent years; it is now routinely applied to the study of turbulence in magnetised astrophysical systems, and there have also been some studies pioneering its application to the problem of magnetic reconnection. The reduction of dimensionality allowed by gyrokinetics is extremely advantageous from the numerical point of view. Nonetheless, intrinsically multiscale problems such as kinetic turbulence and reconnection remain formidable computational challenges, so further simplification, where possible, is desirable.

One such simplification of gyrokinetics has recently been proposed by Zocco and Schekochihin: the Kinetic Reduced Electron Heating Model (KREHM), a rigorous asymptotic limit of gyrokinetics valid for plasmas such that [krehm_order] $\beta_e \equiv 8\pi n_{0e} T_{0e}/B_0^2 \sim m_e/m_i$, where $\beta_e$ is the electron beta, $n_{0e}$ and $T_{0e}$ are the background electron density and temperature, respectively, and $B_0$ is the background magnetic field. Under this assumption, Ref.  shows that it is possible to reduce the plasma dynamics to a 4D phase space (position and velocity parallel to the magnetic field) while retaining key physics such as phase mixing and electron Landau damping, ion finite Larmor radius effects, electron inertia, electron collisions and Ohmic resistivity. This is a very significant simplification of the full kinetic description, which renders possible truly multiscale kinetic simulations. In particular, because no _ad hoc_ fluid closure is employed, KREHM can be used for detailed studies of energy conversion and dissipation in kinetic turbulence and reconnection, including electron heating via phase mixing and Landau damping.

If taken literally, the ordering imposed by [krehm_order] is somewhat restrictive, and obviously excludes many plasmas of interest. Examples of plasmas where it may hold are some regions of the solar corona, the Large Plasma Device (LAPD) experiment at UCLA, and edge regions in some tokamaks (the electron beta does tend to be rather low in those regions).
However, one may legitimately expect that the plasma behaviour captured by KREHM will qualitatively hold beyond its rigorous limits of applicability, as is so often the case with many other simplified plasma models (MHD being a notorious example of a description known to work rather well far outside its strict limits of validity). Hints that this may indeed be the case are offered in , where a direct comparison of KREHM with a (non-reduced) gyrokinetic model for the linear collisionless tearing mode problem yields very good agreement at values of $\beta_e$ significantly larger than $m_e/m_i$.

This paper reports on the numerical methods and algorithms used in , the first numerical code to solve this particular set of equations. An extensive series of tests and benchmarks is also presented, with considerable attention devoted to Orszag-Tang-type decaying turbulence, both in the fluid and kinetic regimes. The reader interested in the application of this code and physics model to the problem of magnetic reconnection is referred to , where the importance of electron heating via Landau damping in reconnection is demonstrated for the first time.

A second set of equations solved by  is the Kinetic Reduced MHD (KRMHD) equations, which describe the evolution of compressible fluctuations (density and parallel magnetic field) in the long-wavelength regime $k_\perp \rho_i \ll 1$ ($k_\perp$ being the wave number perpendicular to the guide field of a typical perturbation, and $\rho_i$ the ion Larmor radius). These equations are structurally identical to those of KREHM, so their numerical implementation in  is straightforward. We also note that KREHM reduces to the standard reduced-MHD (RMHD) set of equations in the appropriate limit (i.e., when the wavelength of the fluctuations is much larger than all the kinetic scales); thus,  can also be used as an RMHD code (in either 2D or 3D slab geometry). Finally, we remark that under an isothermal closure for the electrons, KREHM reduces to the simple two-field gyrofluid model treated in Refs.  (which is a limit of the more complete models of Snyder and of Schep).

This paper is organised as follows.  presents the different sets of equations integrated by . The kinetic equations are solved by means of a Hermite expansion, which requires some form of closure (or truncation); this is discussed in , where an asymptotically exact nonlinear closure for the Hermite-moment hierarchy is derived.  presents the energy evolution equation for the closed KREHM model, and the normalisations that we adopt are laid out in .  deals with the numerical discretisation of the equations, including a discussion of the implementation of a spectral-like scheme for advection in the direction along the guide field: a combination of an optimal third-order total variation diminishing (TVD) Runge-Kutta method for the time derivative with a seventh-order upwind scheme for the fluxes.
A series of linear and nonlinear benchmarks of the code is presented in , with emphasis on Orszag-Tang-type decaying turbulence test cases. Finally, the main points and results of this paper are summarised in . Also included for reference, in Appendix A, is the recently proposed modification of the KREHM model to allow for background electron temperature gradients.

 solves two distinct sets of equations: (i) the Kinetic Reduced Electron Heating Model (KREHM) equations and (ii) the Kinetic Reduced MHD (KRMHD) equations. These models are briefly discussed below; we refer the interested reader to the original references for a detailed derivation of the equations of each model.

The Kinetic Reduced Electron Heating Model (KREHM), derived in Ref. , is a rigorous asymptotic reduction of standard gyrokinetics applicable to plasmas that satisfy [krehm_order]. In the slab geometry that we adopt, the background magnetic field (the guide field) is assumed to be straight and uniform, $\mathbf{B}_0 = B_0\hat{\mathbf{z}}$. The perturbed electron distribution function, to lowest order in the gyrokinetic expansion, is written as [delta_f_e] $\delta f_e = g_e + \left(\delta n_e/n_{0e} + 2 v_\parallel u_{\parallel e}/v_{the}^2\right) F_{0e}$, where $F_{0e}$ is the equilibrium Maxwellian, $v_{the}$ is the electron thermal speed, $v_\parallel$ is the velocity coordinate parallel to the guide-field direction, $\delta n_e$ is the electron density perturbation (the zeroth moment of $\delta f_e$), and [uepar] $u_{\parallel e} = -j_\parallel/(e n_{0e}) = (e/c\,m_e)\, d_e^2 \nabla_\perp^2 A_\parallel$ is the parallel electron flow (the first moment of $\delta f_e$). In this expression, $j_\parallel$ is the parallel current and $A_\parallel$ is the parallel component of the vector potential (note that, in this model, the parallel ion flow is zero to the order kept in the expansion), and $d_e = c/\omega_{pe}$ is the electron skin depth, where $\omega_{pe}$ is the electron plasma frequency. All moments of $\delta f_e$ higher than $\delta n_e$ and $u_{\parallel e}$ are contained in the "reduced" electron distribution function $g_e$; e.g., the parallel temperature fluctuation is obtained as a $v_\parallel^2$-weighted velocity-space integral of $g_e$.

For notational simplicity, let us introduce the following usual definitions: [eq:ddt] the convective time derivative, $d/dt = \partial/\partial t + (c/B_0)\{\varphi,\cdot\}$, and [eq:bdotgrad] the derivative along the total (guide plus perturbed) magnetic field, $\hat{\mathbf{b}}\cdot\nabla = \partial/\partial z - (1/B_0)\{A_\parallel,\cdot\}$, where $\varphi$ is the electrostatic potential, $\{\cdot,\cdot\}$ is the standard Poisson bracket in the plane perpendicular to the guide field, and $C[\cdot]$ denotes the collision operator. The perturbed electron density and the electrostatic potential are related via the gyrokinetic Poisson law [eq:gk_poisson], which involves the inverse Fourier transform of the usual gyrokinetic kernel written in terms of the modified Bessel function ($\rho_i$ is the ion Larmor radius, with $v_{thi}$ the ion thermal velocity and $\Omega_i$ the ion gyrofrequency).

[eq:ge] is a kinetic equation for the reduced electron distribution function $g_e$. An important observation is that it does not contain an explicit dependence on the perpendicular velocity; if such a dependence is not introduced by the collision operator, the perpendicular velocity can be integrated out exactly. The collisional term in the parallel electron momentum equation (Ohm's law) defines the resistive diffusivity, $\eta \equiv \nu_{ei} d_e^2$. In the Hermite formulation, the moment index $m$ is the velocity-space equivalent of the wave number in the usual Fourier representation of position space. Thus, for example, the formation of fine-scale structure in velocity space (as arises from phase mixing) can conveniently be thought of as a transfer of energy to high $m$'s, much in the same way as the formation of fine scales in real space leads to energy being transferred to high wave numbers in the usual Fourier representation.
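To make the analogy between the Hermite index and a velocity-space "wave number" concrete, the short sketch below (Python, purely illustrative and not part of the code; the test perturbations and function names are ours) projects a parallel-velocity perturbation onto Hermite polynomials using Gauss-Hermite quadrature. A perturbation with fine structure in $v_\parallel$ populates many Hermite moments, exactly as a spatially rough field populates high wave numbers.

    import math
    import numpy as np
    from numpy.polynomial.hermite import hermgauss, hermval

    # Gauss-Hermite nodes/weights for integrals of the form int dv exp(-v^2) f(v),
    # with v standing for v_par / v_the.
    nodes, weights = hermgauss(64)

    def hermite_coeffs(f, M):
        """Project f(v) onto H_m(v), m = 0..M-1, using the orthogonality
        int H_m H_n exp(-v^2) dv = sqrt(pi) 2^m m! delta_mn."""
        fv, coeffs = f(nodes), np.zeros(M)
        for m in range(M):
            c = np.zeros(m + 1); c[m] = 1.0            # selects H_m
            Hm = hermval(nodes, c)
            norm = math.sqrt(math.pi) * 2.0**m * math.factorial(m)
            coeffs[m] = np.sum(weights * fv * Hm) / norm
        return coeffs

    smooth = hermite_coeffs(lambda v: v * np.exp(-0.5 * v**2), 32)
    rough  = hermite_coeffs(lambda v: np.sin(4 * v) * np.exp(-0.5 * v**2), 32)
    for name, c in (("smooth", smooth), ("oscillatory", rough)):
        print(name, "moments above 1% of peak:",
              int(np.sum(np.abs(c) > 1e-2 * np.abs(c).max())))
    # The oscillatory (finely structured) perturbation populates far more moments.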
On the other hand, the Hermite representation introduces a closure problem, in that the equation for $g_m$ couples to the higher-order moment $g_{m+1}$. We shall see in , however, that a rigorous, nonlinear closure can be obtained.

The well-known reduced MHD (RMHD) equations can be obtained from  by taking the collisional limit, , where  and  represent the typical frequencies and perpendicular wave numbers of the fluctuations, and $\rho_s$ is the ion sound Larmor radius. In this limit the isothermal approximation applies, and  thus decouples from . For , [eq:gk_poisson] becomes [eq:rmhd-poisson] , where we have defined  to make contact with the standard terminology. Further defining , we obtain the vorticity equation [eq:vorticity] and the parallel Ohm's law [eq:epar], in which the linear coupling terms are proportional to $v_A$, the Alfvén speed based on the guide field.

A different set of equations solved by  is the Kinetic Reduced Magnetohydrodynamics (KRMHD) model, derived by expanding the gyrokinetic equation in terms of the small parameter $k_\perp\rho_i$; in this sense, it is the long-wavelength limit of gyrokinetics. In this limit, the Alfvénic component of the turbulent fluctuations decouples from the compressive component. The dynamics of the system are completely determined by the Alfvénic fluctuations, which are governed by reduced MHD. The compressive fluctuations, on the other hand, evolve according to a kinetic equation [eq:slowmodekin], where  is related to the perturbed ion distribution function [see Equation (183) of Schekochihin ] and  is a one-dimensional Maxwellian. The parameter  is a linear combination of the physical parameters ion-to-electron temperature ratio, plasma beta and ion charge [see Equation (182) of Schekochihin ]. The structure of  is mathematically similar to that of , the main difference being that this kinetic equation is decoupled from the Alfvénic fluctuations, unlike its KREHM counterpart. Similarly to , one obtains the corresponding set of equations by expanding  in terms of Hermite polynomials, valid for $m \ge 2$. Notice that, unlike , these equations begin at . Additionally, since the term on the right-hand side of  is proportional to the first Hermite polynomial, the parameter  makes an appearance only in the equation for .

The Hermite expansion transforms the original electron drift-kinetic equation, [eq:ge], into an infinite, coupled set of fluid-like equations, [eq:gms] [or, similarly for KRMHD, into ]. Formally, the two representations are exactly equivalent, i.e., no information is lost by introducing the Hermite representation. However, the numerical implementation of equations [eq:gms] obviously requires some form of truncation: given a certain number of Hermite moments, $M$, it is necessary to specify some prescription for $g_{M+1}$. In other words, as in the derivation of any fluid set of equations, the Hermite expansion introduces a closure problem. Attempts to solve this problem have varied, from simply setting $g_{M+1}=0$ (e.g., ) to polynomial closures in which $g_{M+1}$ is extrapolated from a number of previous moments. Particularly noteworthy is the approach followed by Hammett and co-workers, where closures have been carefully designed to rigorously capture the linear Landau damping rates (as well as gyro-radius effects and dominant nonlinearities).
In the system of equations under consideration here, it turns out that an asymptotically exact closure can be obtained in the large-$M$ limit. Let us consider that the collision frequency is small but finite. Then there will be a range of $m$'s for which the collisional term is negligible; one may think of this as the inertial range: energy is injected into low $m$'s via the coupling with Ohm's law, and cascades (phase-mixes) to higher $m$'s. However, as $m$ increases, a dissipation range is eventually encountered, where the collisional term in  [or in ] is no longer subdominant with respect to the other terms. Roughly speaking, in the dissipation range energy arrives at moment $m$ from $m-1$ and is mostly dissipated there; only an exponentially smaller fraction is passed on to $m+1$. One thus expects $g_{m+1} \ll g_m$ in the dissipation range, by definition. The implication is that, for $M$ in the dissipation range, the dominant balance in the equation for $g_{M+1}$ must be [eq:closure] $v_{the}\,\nabla_\parallel g_M \sim -\nu_{ei}(M+1)\, g_{M+1}$ (up to the Hermite coupling coefficient). Solving this balance for $g_{M+1}$ yields the sought closure. The equation for $g_M$ therefore becomes [eq:last_g], which contains a parallel-diffusion-like term (if  , i.e., in the semi-collisional limit, this equation needs to include the term proportional to the electron current [the first term on the RHS of ], becoming Equation (99) of Ref. ). It is easy to see how the exact same reasoning leads to the equivalent closure for .

It can be useful to have an a priori estimate of the value of $M$ required to formally justify the asymptotic closure for a given collision frequency. One such linear estimate is provided in Ref. : if the Hermite spectrum is in steady state, the collisional cutoff, $m_c$, can be shown to occur at [m_cutoff] $m_c = (k_\parallel v_{the}/\nu_{ei})^{2/3}$. Thus, we expect the Hermite closure, , to be valid if $M \gtrsim m_c$. (If  then the collisional cutoff is superseded by ; this does not affect any of the considerations drawn here.) The numerical implementation of  introduces some difficulties and will be discussed in .

Since our primary interest lies in weakly collisional plasmas, one finds that $m_c$ is typically very large. For example, a simple estimate using standard parameters for the solar corona suggests ; certain experiments on JET suggest  in the edge region, considerably smaller than for the solar corona, but still quite large. Further noticing that such cases are invariably tied to a broad range of spatial scales, thereby also requiring high spatial resolutions, renders obvious the impracticability of such computations: not only must one solve a very large set of nonlinear, coupled PDEs, but the stiffness also increases, due to the coefficients proportional to $m$. One possibility for avoiding this problem is to artificially enhance the value of the collision frequency. Note, however, that $m_c \propto \nu_{ei}^{-2/3}$, a relatively weak scaling, implying that cutting the number of necessary $m$'s down to computationally manageable sizes would require drastic increases in the collision frequency. To make matters worse, the collision operator scales only linearly with $m$, implying that one in fact needs to retain $M \gg m_c$ to adequately capture the dissipation range and validate the closure.
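As a rough illustration of how demanding [m_cutoff] becomes at low collisionality, the sketch below (Python; the parameter values are placeholders, not the solar corona or JET estimates quoted above) evaluates the cutoff and its scaling with the collision frequency.

    import numpy as np

    def hermite_cutoff(k_par, v_the, nu_ei):
        """Collisional cutoff in Hermite space, m_c = (k_par*v_the/nu_ei)**(2/3);
        the closure is formally justified for M >~ m_c."""
        return (k_par * v_the / nu_ei) ** (2.0 / 3.0)

    # Placeholder numbers: a fluctuation with parallel scale L_par and a range
    # of electron collision rates, all in consistent (illustrative) units.
    k_par, v_the = 2 * np.pi / 1.0e7, 4.0e8
    for nu_ei in (1.0e3, 1.0e1, 1.0e-1):
        print(f"nu_ei = {nu_ei:8.1e}  ->  m_c ~ {hermite_cutoff(k_par, v_the, nu_ei):10.1f}")
    # Because m_c ~ nu_ei**(-2/3), lowering the collision frequency by a factor
    # of 100 raises the required number of Hermite moments by a factor of ~21.5.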
One way to circumvent these difficulties is to make use of a "hyper-collision" operator, i.e., to add a term of the form $-\nu_h\, m^{n_h} g_m$ to the RHS of . Here, $n_h$ is the order of the hyper-collision operator (a typical value would be ) and $\nu_h$ is a numerically based coefficient defined such that energy arriving at $m=M$ can be dissipated in one timestep: [hyper-coeff] $g_M/\Delta t \sim \nu_h M^{n_h} g_M$. Thus, in practice, one may simply set $\nu_h = 1/(M^{n_h}\Delta t)$. It is worth remarking that if it is possible to choose a value of $M$ that is very deep into the dissipation range, then presumably the issue of which closure to implement becomes less sensitive, and it may be justified to simply set $g_{M+1}=0$; indeed, we have performed simulations with both closures and observed no differences (not reported in this paper). Finally, we point out that, as an alternative to a hyper-collision operator, one may use a spectral filter (in $m$-space), such as the one of Hou and Li, as proposed by Parker and Dellar (see also  for a discussion of this filter in Fourier space).

In the absence of collisions,  conserve a quadratic invariant usually referred to as free energy. This quantity can be defined as the sum of $W_{fluid}$, the "fluid" (electromagnetic) part of the free energy, which involves the combination $(1-\hat\Gamma_0)$, and $H_e$, the electron free energy (i.e., the free energy associated with the reduced electron distribution function). Upon introducing the Hermite expansion of $g_e$, , and allowing for finite collisions (modelled by the Lenard-Bernstein collision operator), one finds that the free energy evolves according to [eq:energy]: the time derivative of $W_{fluid}$ plus the Hermite contributions, $\propto \sum_{m\ge 2} g_m^2$, is balanced by the collisional dissipation, $\propto -n_{0e}T_{0e}\,\nu_{ei}\sum_{m\ge 3} m\, g_m^2$, and by the Ohmic dissipation, $\propto -j_\parallel^2$.

The above equation is exact. However, as discussed in , the numerical implementation of the Hermite expansion requires that only a finite number of Hermite polynomials be kept, and some form of closure of the expansion is required. If we adopt the closure described by , and truncate the expansion at $m=M$,  takes the truncated form [eq:trunc_energy]: the Hermite sums now run up to $m=M$, and an additional dissipation term appears on the right-hand side, proportional to $n_{0e}T_{0e}$ times the square of the parallel gradient of $g_M$, which is due to the specific closure that we have used (and would vanish if, for example, we instead used the simpler closure $g_{M+1}=0$). The same arguments that were invoked to motivate the Hermite closure in  apply here to justify the asymptotic equivalence of the full form of the energy balance and its truncated version: as long as $M$ is large enough to lie in the collisional dissipation range, one expects the terms neglected in going from  to  to be exponentially small. The corresponding equation for KRMHD is Equation (4.7) of Ref. . The closure that we propose in  can be implemented in that set of equations in a similar way, and it is straightforward to obtain the KRMHD counterpart of .

The normalisations that we adopt for the KREHM set of equations ([eq:mom], [eq:ohm], [eq:gk_poisson], [eq:gms]) are:
* Length scales: $(x, y) = (x, y)/\,$ and $z = z/\,$, where  and  are, respectively, the perpendicular and parallel (to the guide field) reference length scales.
* Times: $t = t/\tau_A$, where $\tau_A$ is the parallel Alfvén time.
* Fields: $(\delta n_e, g_m) = \,$, $\varphi = \,$, $A_\parallel = \,$.

Under these normalisations, equations ([eq:mom]), ([eq:ohm]) and ([eq:gms]) become the normalised momentum equation [momentum], Ohm's law [ohm] (which contains the $\rho_s^2$ and $d_e^2$ terms), and the Hermite hierarchy [g2_eq], [gm_eq] for $m>2$, the latter including the collisional damping term $-m\,\nu_{ei}\, g_m$.
In these normalised units the convective derivative takes its dimensionless form. The normalised form of the quasi-neutrality condition is [poisson]. It can immediately be seen that neglecting the coupling to the $g_m$'s reduces the above set of equations to the simpler two-field gyrofluid model treated in . For the KRMHD set of equations, the normalisation of space and time is as above; the only modification is the conversion of the prefactor  into , where  is the ion plasma beta. The normalisation of the Hermite moments is arbitrary, since those equations are linear in .

The RHS of  is conveniently separated into operators acting either in the direction perpendicular or parallel to the guide field. This suggests that an efficient way of integrating those equations is to use operator-splitting techniques to handle each class (perpendicular or parallel) of operators individually.  allows for both Godunov and Strang splitting. Although Godunov splitting is formally only first-order accurate, direct comparisons of the two splitting schemes performed by us (not reported here) yield indistinguishable results. Thus, by default,  employs Godunov splitting (as it is computationally cheaper); all results reported in  are obtained with this option. We now detail the algorithms employed for the perpendicular and parallel steps.

The numerical discretisation of  is a straightforward generalisation of that derived in . For presentational simplicity, let us denote the nonlinear terms (i.e., the Poisson brackets) in  by generalised operators, such that we have: $\partial_t\, \delta n_e = N(\delta n_e, \varphi)$; $(1+k_\perp^2 d_e^2)\,\partial_t A_\parallel = A(\delta n_e, A_\parallel, g_2) - \eta\, k_\perp^2 (A_\parallel - A_{\parallel,eq})$, where the equilibrium field $A_{\parallel,eq}$ is used in tearing mode simulations to prevent the resistive diffusion of the background (reconnecting) magnetic field; $\partial_t g_2 = G_2(\delta n_e, \varphi, g_2, g_3)$; and $\partial_t g_m = G_m(\delta n_e, \varphi, g_{m-1}, g_m, g_{m+1}) - m\,\nu_{ei}\, g_m$.

The integration scheme is then as follows. First we take a predictor step, in which each field is advanced explicitly with the nonlinear operators evaluated at time level $n$; the resistive term in Ohm's law and the collisional term in the $g_m$ equations are treated exactly via integrating factors (exponential factors of the form $e^{-m\nu_{ei}\Delta t}$, and the analogous resistive factor, multiply the corresponding fields). This is followed by a corrector step, in which the nonlinear operators are re-evaluated using the predicted fields and combined with their values at level $n$; the corrector can be iterated $p$ times until the desired level of convergence is achieved. For presentational simplicity, we have not included here the hyper-diffusion and hyper-collision operators, but it is trivial to do so: they are handled in the same way as the resistivity and the collisions.

To deal with the possibility of aliasing instability,  offers two options. One is the standard two-thirds rule, where the Fourier-transformed fields are multiplied by a step function, [two_thirds_rule] $H(k/k_{max}) = 1$ for $k \le \tfrac{2}{3}k_{max}$ and $H(k/k_{max}) = 0$ otherwise, where $k_{max}$ is the maximum wave number for a grid with  points. The second option is the high-order Fourier filter proposed by Hou & Li, [hou_li_filter] $H(k/k_{max}) = \exp\left[-36\,(k/k_{max})^{36}\right]$. Compared to the two-thirds rule, the Hou-Li filter retains 12-15% more active Fourier modes in each direction. For other advantages of this filter, and justification of its specific functional form, the reader is referred to Ref. .
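As a minimal illustration of the two options (Python; illustrative only, not the code's implementation), the two masks can be built and compared as follows. The design rationale is that the smooth Hou-Li filter avoids the abrupt spectral truncation of the two-thirds rule while still strongly suppressing the highest wave numbers.

    import numpy as np

    def dealias_mask(N, rule="hou-li"):
        """Spectral filter H(k/kmax) applied to Fourier-transformed fields.
        'two-thirds': sharp cutoff at k = (2/3) kmax (Orszag's rule).
        'hou-li'   : smooth high-order filter exp(-36 (k/kmax)**36)."""
        k = np.fft.fftfreq(N, d=1.0 / N)      # integer mode numbers
        x = np.abs(k) / (N / 2)               # k / kmax in [0, 1]
        if rule == "two-thirds":
            return (x <= 2.0 / 3.0).astype(float)
        return np.exp(-36.0 * x**36)

    for rule in ("two-thirds", "hou-li"):
        H = dealias_mask(256, rule)
        # Fraction of modes not strongly damped by the filter:
        print(rule, float(np.mean(H > 0.5)))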
Tests reported in Refs.  unanimously confirm the numerical superiority of the Hou-Li filter over two-thirds-rule dealiasing, as will our results presented in .

 has two distinct in-built methods for the integration of the equations in the direction along the guide field, $z$: a MacCormack scheme, and a combination of a third-order total variation diminishing (TVD) Runge-Kutta method for the time derivative with a seventh-order upwind discretisation for the fluxes (TVDRK3UW7 for short). The MacCormack scheme is fairly standard (see, e.g.,  for textbook presentations) and there is no need to detail it here. The TVDRK3UW7 scheme is less conventional and is described below.

The $z$-advection step consists in solving the set of equations [z_system] $\partial_t \mathbf{u} = A\,\partial_z \mathbf{u}$, where $\mathbf{u} = (\delta n_e, A_\parallel, g_2, \ldots, g_M)^T$ is the solution vector and $A$ is a tridiagonal matrix whose only non-zero entries (the super- and sub-diagonals $a_{k,k+1}$ and $a_{k,k-1}$) are the coefficients of the $z$-derivatives coupling each field to its neighbours in the hierarchy. To be able to use upwind schemes, we need to write  in characteristic form, i.e., we need to diagonalise $A$. To do so, we introduce the matrix $P$ such that  becomes $P^{-1}\partial_t\mathbf{u} = P^{-1}AP\,P^{-1}\partial_z\mathbf{u}$. We define $\mathbf{w} = P^{-1}\mathbf{u}$ and solve for $P$ by requiring that $P^{-1}AP = D$, where $D$ is a diagonal matrix. The equation for $\mathbf{w}$ is now in characteristic form, [z_charact] $\partial_t\mathbf{w} = D\,\partial_z\mathbf{w}$; each $w_j$ propagates along $z$ at a speed set by $|D_{jj}|$, in a direction set by the sign of $D_{jj}$. Finally, since the entries of $A$ are independent of $z$, so are the entries of $P$, and  can thus be written in flux-conservative form, [z_fluxform] $\partial_t\mathbf{w} = \partial_z\mathbf{F}$, where $\mathbf{F} = D\mathbf{w}$. As is well known from standard linear algebra, the diagonal entries of $D$ are the eigenvalues of $A$, whereas $P$ is the matrix whose column vectors are the eigenvectors of $A$. In , both eigenvalues and eigenvectors of $A$ are easily obtained with the linear algebra package LAPACK.

As an example, let us consider the simplest possible case: the reduced-MHD limit. The matrix $A$ then reduces to a $2\times 2$ matrix with zero diagonal entries, and it is a trivial exercise to obtain the matrices $P$, $P^{-1}$ and $D$; the latter is diagonal with entries of opposite sign, i.e., the two characteristics propagate in opposite directions along the guide field. In this case the characteristic fields, $\mathbf{w} = P^{-1}\mathbf{u}$, are the sum and difference of the two fields; using  in the limit  to express the electron density perturbation in terms of the electrostatic potential, we immediately recognise the commonly used Elsasser potentials, $w^\pm = \varphi \pm \psi$. Note that the entries of $A$ are constants, independent of either time or space; thus the matrices $P$, $P^{-1}$ and $D$ need only be calculated once per run, with negligible impact on overall code performance.

The derivative of the flux is computed using a seventh-order upwind scheme: $\partial_z F^j\big|_i = (F^j_{i+1/2} - F^j_{i-1/2})/\Delta z$, where, for the $j$-th component of $\mathbf{F}$, the interface value $F^j_{i+1/2}$ is reconstructed from a seven-point stencil biased towards the upwind side, [flux1] for $D_{jj}>0$ and [flux2] for $D_{jj}<0$.
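To make the characteristics construction concrete, here is a minimal sketch for the reduced-MHD limit just discussed (Python, with numpy's eigensolver standing in for the LAPACK call; working directly with the normalised pair (phi, psi) at unit Alfvén speed is our simplification, not the exact variable set evolved by the code).

    import numpy as np

    # RMHD limit in normalised units (v_A = 1): the parallel step couples the
    # stream function phi and flux function psi through
    #     d/dt [phi, psi]^T = A d/dz [phi, psi]^T,   A = [[0, 1], [1, 0]].
    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    # Diagonalise once per run (numpy.linalg.eig wraps LAPACK).
    eigvals, P = np.linalg.eig(A)
    D = np.diag(eigvals)
    Pinv = np.linalg.inv(P)
    assert np.allclose(Pinv @ A @ P, D)      # characteristic form reached

    print(eigvals)   # +1 and -1 (order may vary): counter-propagating Alfven waves
    print(Pinv)      # rows proportional to phi +- psi: the Elsasser potentials

    # The parallel step then advects each characteristic field w_j = (P^-1 u)_j
    # independently, with an upwind-biased stencil chosen by the sign of D_jj.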
For the time integration of  we follow . The time derivative is discretised using an optimal third-order total variation diminishing (TVD) Runge-Kutta method: [tvdrk] $\mathbf{w}^{(1)} = \mathbf{w}^{(n)} + \Delta t\, L(\mathbf{w}^{(n)})$; $\mathbf{w}^{(2)} = \tfrac{3}{4}\mathbf{w}^{(n)} + \tfrac{1}{4}\mathbf{w}^{(1)} + \tfrac{1}{4}\Delta t\, L(\mathbf{w}^{(1)})$; $\mathbf{w}^{(n+1)} = \tfrac{1}{3}\mathbf{w}^{(n)} + \tfrac{2}{3}\mathbf{w}^{(2)} + \tfrac{2}{3}\Delta t\, L(\mathbf{w}^{(2)})$, where $L$ denotes the discretised right-hand side of . The final step is to transform back to the original variables, $\mathbf{u}^{(n+1)} = P\,\mathbf{w}^{(n+1)}$.

Compared to the MacCormack method, the TVDRK3UW7 scheme just described has the disadvantage of being somewhat slower, as it requires three evaluations of the right-hand side (as opposed to only two for MacCormack), and more inter-processor communication is involved in computing the fluxes. This is partially offset by the fact that the TVDRK3UW7 scheme requires far fewer grid points per wavelength than the MacCormack method for adequate resolution, as will be exemplified in .
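A compact sketch of this time stepper (Python; the first-order upwind right-hand side is only a stand-in for the seventh-order fluxes described above, and all names are ours) is:

    import numpy as np

    def tvd_rk3_step(w, L, dt):
        """One step of the optimal third-order TVD (SSP) Runge-Kutta scheme,
        applied to the characteristic fields w; L(w) is the discretised RHS
        (the upwind flux divergence)."""
        w1 = w + dt * L(w)
        w2 = 0.75 * w + 0.25 * w1 + 0.25 * dt * L(w1)
        return w / 3.0 + 2.0 / 3.0 * w2 + 2.0 / 3.0 * dt * L(w2)

    # Toy usage: advect a single right-moving characteristic, dw/dt = -c dw/dz,
    # with a simple first-order upwind RHS standing in for the seventh-order one.
    N, c, dz = 256, 1.0, 1.0 / 256
    z = np.arange(N) * dz
    w = np.exp(-((z - 0.5) / 0.05) ** 2)
    L = lambda w: -c * (w - np.roll(w, 1)) / dz    # periodic, upwind for c > 0
    for _ in range(100):
        w = tvd_rk3_step(w, L, dt=0.5 * dz / c)    # CFL-limited time step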
Expanding the  operator in the closure term of , we find that it becomes [eq:gm]: besides the usual coupling to $g_{M-1}$ and the collisional damping $-\nu_{ei}M g_M$, the equation for $g_M$ acquires a set of parallel-diffusion-like terms inside curly brackets, two of which mix the $z$-derivative with the perpendicular Poisson brackets. As discussed in previous sections, the numerical algorithm employed in  uses operator splitting to deal separately with the $z$-derivatives and with the Poisson brackets (i.e., it splits the dynamics parallel and perpendicular to the magnetic guide field). This raises a difficulty when discretising the equation above, which contains mixed terms (the second and third terms inside the curly brackets) introduced by the closure; this is an especially subtle issue when the $z$-step scheme advects the equations in characteristic form, as is the case for the TVDRK3UW7 scheme that we employ (and would equally be the case for any other upwind scheme). Simple solutions to this problem require abandoning the operator-splitting scheme and forsaking the use of the characteristic form for the $z$-derivative terms of the equations, both of which are not only highly convenient from the point of view of numerical accuracy and stability, but also physically motivated. One possibility would be to treat this equation differently from all other equations solved by the code; although this is certainly possible, at this stage we have chosen not to introduce this additional complexity. As such, the form of  actually implemented in  is [eq:gm_closed], in which the mixed terms are dropped and only the purely parallel closure terms are retained, together with the collisional damping $-\nu_{ei}M g_M$. We emphasise that the dropping of the mixed terms is purely for algorithmic reasons; from the physical point of view those terms are, a priori, as important as the closure terms that are kept, and their implementation is left to future work. A serious drawback of this approach, for example, is that the semi-collisional limit of the KREHM equations (which results from setting , see Section V.C of Ref. ) is, therefore, not correctly captured. On the other hand, note that: (i) for 2D problems, our implementation of the closure is exact; (ii) for simple linear 3D problems [where the background magnetic field is simply the guide field (which is the setup used to investigate Alfvén wave propagation in )], the numerical implementation of the closure is also exact; (iii) in weakly collisional plasmas (which are our main focus), provided that $M$ is sufficiently large to lie in the collisional dissipation range, one expects $g_{M+1}\ll g_M$ and thus the actual functional form of the closure may not be very important; (iv) if we first apply the operator-splitting scheme (i.e., the separation of the perpendicular and parallel operators) and then impose our closure scheme on the parallel and perpendicular equations separately, we obtain  instead of . Finally, we remark that adopting  as the evolution equation for $g_M$ changes the second term on the RHS of the energy balance equation ([eq:trunc_energy]) in the obvious way.

In this section, we report an extensive suite of linear and nonlinear benchmarks of .

To illustrate the relative merits of the two numerical schemes for the $z$-advection available in , we carry out a simple test in the limit of isothermal electrons and cold ions, in which  and  reduce to [iso_cold_ne] and [iso_cold_apar]. The initial condition we adopt is $(z, t=0) = $ . The equations are solved in a periodic box, with . The grid step size is . The time step is set by the CFL condition, where . We choose ,  and ; there is no explicit dissipation in this test. A measure of how well resolved the wave front is is given by the parameter , where  is the number of grid points per wavelength. We test the behaviour of the MacCormack and TVDRK3UW7 schemes for three representative values of  (note that the highest resolvable wave number corresponds to , i.e., ). For each of these cases, the equations are integrated for 10 transit times across the box. Time traces of the energy conservation for both schemes are plotted in . As expected, the TVDRK3UW7 scheme behaves remarkably better than MacCormack; notice, for example, that for the most extreme case, TVDRK3UW7 yields an energy loss after 10 crossing times very similar to that obtained with the MacCormack scheme for the ten-times-better-resolved case.

[Figure: energy conservation for the MacCormack and TVDRK3UW7 schemes. The x-axis is the time normalised by the transit time across the simulation box; the y-axis is the variation in energy normalised by the initial energy, for several values of the resolution parameter (number of grid points per wavelength).]

[Figure: time history of the profiles of  (top) and  (bottom) using the MacCormack scheme (left panels) and the TVDRK3UW7 scheme (right panels), for the most poorly resolved case. The MacCormack scheme is seen to introduce strong Gibbs oscillations, which are remarkably minimised by the TVDRK3UW7 scheme.]

Besides much better energy conservation properties, we find the TVDRK3UW7 scheme to be very robust against spurious Gibbs oscillations, even though it is not a shock-capturing scheme. This is clearly visible in , where we plot the time history of the profiles of  and  obtained with both schemes: the TVDRK3UW7 scheme advects the initial condition with no visible deterioration, unlike the MacCormack scheme.

The linearisation of  in the collisionless limit yields the kinetic Alfvén wave (KAW) dispersion relation, a transcendental relation between the combination $1+\zeta Z(\zeta)$ and the factor $\tfrac{1}{2}k_\perp^2 d_e^2$, where $\zeta$ is the normalised complex frequency and $Z(\zeta)$ is the plasma dispersion function.
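The analytical frequencies and damping rates are obtained by locating the complex roots of such a relation. The sketch below (Python) shows the standard machinery: the plasma dispersion function evaluated via the Faddeeva function, plus a complex secant iteration. Since the exact KREHM prefactors are not restated here, the example root solved for is the textbook Landau-damped Langmuir wave rather than the KAW relation itself.

    import numpy as np
    from scipy.special import wofz   # Faddeeva function w(z)

    def Z(zeta):
        """Plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta)."""
        return 1j * np.sqrt(np.pi) * wofz(zeta)

    def secant_root(D, z0, z1, tol=1e-12, maxit=100):
        """Complex secant iteration for D(zeta) = 0."""
        for _ in range(maxit):
            d0, d1 = D(z0), D(z1)
            z0, z1 = z1, z1 - d1 * (z1 - z0) / (d1 - d0)
            if abs(z1 - z0) < tol:
                return z1
        raise RuntimeError("no convergence")

    # Example with the same ingredients (a 1 + zeta*Z(zeta) bracket): the
    # electrostatic Langmuir-wave relation, 1 + (1/k^2)[1 + zeta*Z(zeta)] = 0,
    # with zeta = omega/(sqrt(2)*k) in units where v_the = omega_pe = 1.
    k = 0.5
    D = lambda omega: 1.0 + (1.0 / k**2) * (1.0 + (omega / (np.sqrt(2) * k)) * Z(omega / (np.sqrt(2) * k)))
    root = secant_root(D, 1.4 - 0.1j, 1.45 - 0.2j)
    print(root)   # expect roughly 1.416 - 0.153j, the standard benchmark value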
On the left plot of  we show a comparison between the analytical values of the frequencies and damping rates, obtained by solving this relation, and those computed with , setting the number of Hermite moments to  and the number of grid points in the $z$-direction to . Very good agreement is observed over several orders of magnitude of the electron skin depth; the maximum relative error, obtained for the highest value of $d_e$, is only a few percent. The right plot shows the frequency and damping rate for fixed  as a function of the number of Hermite moments: for sufficiently many moments the damping rate converges to the analytical value, whereas the frequency shows very little dependence on the number of moments retained.

[Figure: left, KAW frequency and damping rate at fixed  as a function of the electron skin depth; lines are the exact solution of the analytical dispersion relation, data points are obtained from the code. Right: KAW frequency and damping rate obtained from the code for fixed $d_e$, as a function of the total number of Hermite moments kept.]

The tearing mode is a fundamental plasma instability driven by a background current gradient. Tearing leads to the opening, growth and saturation of (one or more) magnetic island(s) via the reconnection of a background magnetic field. It is of intrinsic interest to magnetic confinement fusion devices, where it occurs either in standard or modified form (i.e., neoclassical tearing, microtearing), and it also represents the most basic paradigm for studies of magnetic reconnection. In this section, we present the results of a linear benchmark of  against the gyrokinetic code AstroGK for the tearing mode problem. We consider an in-plane magnetic equilibrium configuration given by , with , where  is the normalising equilibrium scale length. The simulations are performed in a doubly periodic box of dimensions , with  and , such that  yields the tearing instability parameter . Other parameters are , , . All simulations keep .

 shows a plot of the linear growth rate of the tearing mode as a function of the Lundquist number. The case  is obtained by setting , in which case the tearing mode is collisionless, i.e., the frozen-flux condition is broken by electron inertia instead. Calculations with AstroGK are done at three different values of $\beta_e$ and mass ratio (crosses, squares and circles, respectively; this is the same data as plotted in Fig. 2 of Ref. ). As can be seen, the agreement between the two codes improves for smaller $\beta_e$, and is rather good for the smallest value. Though it is expected that gyrokinetics converges to KREHM as $\beta_e$ is decreased, we note that, at least in this particular case, agreement is achieved for $\beta_e$ substantially larger than $m_e/m_i$ (by a factor of ), suggesting that KREHM may remain a reasonable approximation to the plasma dynamics outside its strict asymptotic limit of validity set by the requirement $\beta_e\sim m_e/m_i$.

[Figure: linear tearing mode growth rate versus Lundquist number, compared with AstroGK; as expected, good agreement is obtained in the small-$\beta_e$ limit.]

A nonlinear benchmark is provided by the comparison of the tearing mode saturation amplitude with the prediction of MHD theory. This was reported in Ref. , where it is shown that  accurately reproduces the theoretical prediction in the parameter region where such a prediction is valid [i.e., for  and as long as the islands are larger than the kinetic scales of the problem ()]. Finally, see also Figs. 1 and 3 of Ref.  for more direct comparisons between  and AstroGK in the linear and nonlinear regimes of a collisionless tearing mode simulation.

The Orszag-Tang (OT) vortex problem is a standard nonlinear test for fluid codes, and a basic paradigm in investigations of decaying MHD turbulence. Here we present results from a series of 2D and 3D runs, including a kinetic case.
For easy reference, we summarise the main parameters of each simulation in .

[Table: main parameters for the decaying turbulence runs, with the Orszag-Tang-type initial conditions of  for the 2D runs and of  for the 3D runs; in all cases  and ; run E also includes  Hermite moments.]

To avoid an overly symmetric initial configuration, we adopt the modification of the OT initial conditions proposed in Ref. , namely . The runs are performed in a box of dimensions , at a resolution of  collocation points. In the cases where no hyper-dissipation is used (runs A and B), the resistivity is set to , and the magnetic Prandtl number . The kinetic scales are set to zero, so these are strictly RMHD runs. Magnetic and kinetic energy time traces for runs A and B are shown in the left-hand panel of . We compare the results obtained using the Hou-Li high-order Fourier filter with those obtained with the standard two-thirds dealiasing rule of Orszag. The agreement between the two sets of results is perfect, demonstrating that the Hou-Li filter does as good a job at conserving energy as the two-thirds rule. The right-hand panel shows the time trace of the energy dissipation rate, normalised by the instantaneous total energy. Since no energy is being injected into the system, the RMHD equations should obey the conservation relation [rmhd_inv] (the rate of change of the total energy must equal minus the dissipation rate). In order to demonstrate the accuracy of the code, we overplot the time trace of the energy decay rate; the very good agreement between the two curves is manifest, and in this particular run  is satisfied to better than .

[Figure: left panel, magnetic and kinetic energy time traces obtained with the two dealiasing methods ("Hou-Li" uses the high-order Fourier filter of Ref. ; "dealias." uses the usual two-thirds rule of Ref. ). Right panel: time trace of the energy dissipation rate (for the Hou-Li run), normalised by the instantaneous total energy, with the energy decay rate overplotted; the code conserves energy to better than  in this run. The vertical dotted lines identify the times at which the contours and spectra are plotted.]

Contour plots of the current and vorticity at the times identified by the vertical lines in  are shown in  (top and bottom rows, respectively). The formation of sharp current and vorticity sheets is observed, as expected. At  one can observe a plasmoid erupting from the current sheet in the lower right-hand corner of the plot, in what is perhaps a small-scale version of the observations reported in Ref. ; the role of the tearing instability of current sheets in 2D decaying turbulence has been previously discussed in Refs. .

 shows the total energy spectra obtained from the simulation with the Hou-Li filter (run B), taken at the times identified by the vertical lines in . There is no evidence of pile-up (bottleneck) at the small scales (we note that the only dissipation terms present in this simulation are the standard Laplacian resistivity and viscosity, i.e., there is no hyper-dissipation). Due to the relatively large values of the dissipation coefficients used in this simulation, the inertial range is very limited and it is not possible to cleanly fit a unique power law; for reference, a  slope is indicated in , following the Iroshnikov-Kraichnan prediction and its numerical confirmation reported in Refs. 
(although steeper power laws have also been reported in the literature).

[Figure: total energy spectra for run B at the times indicated in ; a reference slope is shown.]

A much longer and cleaner inertial range is obtained by replacing the standard (Laplacian) dissipation terms with hyper-dissipation (runs A1 and B1). In that case, the spectra shown in  are obtained; the inertial range now shows excellent agreement with the expected power-law slope. Note also the extended inertial range obtained when the Hou-Li filter is used (B1) instead of the standard two-thirds dealiasing.

[Figure: total energy spectra obtained with the Hou-Li filter (blue, full line) and with the standard two-thirds dealiasing rule (red, dashed line). The Hou-Li method results in an extended inertial range for the same number of collocation points, as expected. Neither spectrum shows signs of energy pile-up at the small scales. The reference power law is indicated.]

For the 3D simulations, the initial conditions differ from the 2D case only in that they are modulated in the $z$-direction, as follows: [modot3d-phi] the 2D profile of  is multiplied by $\sin(2\pi z/L_z)$, and [modot3d-psi] $\psi = \left[\cos(4\pi x/L_x + 2.3) + \cos(2\pi y/L_y + 4.1)\right]\cos(2\pi z/L_z)$. We perform three different runs with these initial conditions (runs C, D and E). The first (run C) is a straightforward extension to 3D of run B1, except now with a resolution of . The second (run D) is designed to look at sub-ion-Larmor-radius turbulence (i.e., kinetic Alfvén wave turbulence); thus we set , where , and . The resolution in this case is  (we use a smaller resolution here because the timestep, set by the CFL condition, is now also smaller, due to the dispersive nature of the kinetic Alfvén waves). Finally, run E also includes the velocity-space dependence, represented with  Hermite moments (meaning that it differs from run D in that the electrons are no longer isothermal).

[Figure: total energy spectrum for run C; a reference slope is shown.]

The total energy spectrum obtained for run C is shown in . The inertial range shows very good agreement with the Goldreich-Sridhar power law and, again, is free of bottleneck effects.  shows the magnetic, kinetic and electric energy spectra for run D, where we now focus on sub-ion-Larmor-radius scales. The slopes indicated refer to several power laws that have been widely discussed in the literature. In particular, we see that the separation between the electric and magnetic energy scalings, occurring at around , agrees quite well with the solar wind observations reported by Bale et al. and with the gyrokinetic simulations of Howes . However, instead of the power law for the magnetic energy suggested in those works (discussed in more detail in Ref. ), our data seem to more closely fit a steeper scaling, which is a better fit to the slope often reported in observations (e.g., ) and in agreement with the recent work of Boldyrev and Perez on strong kinetic Alfvénic turbulence.
[Figure: energy spectra for run D at . The blue (full) line is the perpendicular electric energy spectrum, the red (dashed) line is the perpendicular magnetic energy and the green (dash-dot) line is the kinetic energy; several reference slopes are indicated (see text for discussion), and the vertical line marks the ion Larmor radius scale.]

[Figure: run E ( Hermite moments); spectra at  for OT decaying kinetic turbulence; lines represent the same quantities as in the previous figure; see text for a discussion of the power laws indicated.]

 again shows energy spectra, this time for run E, which differs from run D in that it also includes  Hermite moments (i.e., it is a fully kinetic run, whereas D assumes isothermal electrons). Comparing the magnetic spectra in the two cases (runs D and E, both taken at the same time), we see that the spectrum rises at the larger (spatial) scales when the Hermite moments are added, by about an order of magnitude, and run E's spectrum seems to be somewhat steeper. Such differences may be due to Landau damping, which is present in run E but absent in run D. The Hermite spectrum (i.e., the electron free-energy spectrum) for run E is shown in , at different times. A reference slope is indicated; this is the inertial-range slope predicted by Zocco & Schekochihin for the linear phase mixing of kinetic Alfvén waves. Since the number of Hermite moments used is quite small, we get a correspondingly limited inertial range, and thus the agreement with that slope can only be regarded as indicative; however, this tentative agreement lends credence to the idea that Landau damping may be playing a significant role in this simulation. A detailed analysis of kinetic turbulence in the KREHM framework and, in particular, of the relative importance of the different energy dissipation mechanisms available, will be the subject of a future publication. Finally, for completeness, we show in  contour plots of the electron parallel velocity and of the density perturbations, taken at the same time as the spectra of .

[Figure: run E ( Hermite moments); electron free-energy spectra at different times; an indicative power law for the inertial range is also shown.]

[Figure: contour plots of the electron parallel velocity (left) and density perturbations (right) at .]

We turn now to a benchmark of 's implementation of the KRMHD equations. Linearly, slow modes in KRMHD are subject to collisionless damping via the Barnes damping mechanism; an initial perturbation damps at a rate that depends on the parameter . If slow-mode fluctuations are constantly driven with an external force (achieved by adding a forcing term to ), then the system can be thought of as a plasma-kinetic Langevin equation. The mean-squared amplitude of the electrostatic potential for such a system reaches a steady-state saturation level, which can be derived analytically.
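The essence of that calculation can be illustrated with a scalar Langevin model (a sketch only, not the KRMHD derivation: the damping rate gamma and injection power eps below are stand-ins for the Barnes damping rate and the forcing amplitude). A mode damped at rate gamma and driven by white noise of power eps saturates at a mean-squared amplitude eps/(2*gamma):

    import numpy as np

    rng = np.random.default_rng(0)
    gamma, eps, dt, nsteps = 1.0, 1.0, 1e-2, 200_000   # stand-in parameters

    # Scalar Langevin equation: dphi/dt = -gamma*phi + f(t), <f(t)f(t')> = eps*delta(t-t')
    phi, phi2_sum = 0.0, 0.0
    for n in range(nsteps):
        phi += -gamma * phi * dt + np.sqrt(eps * dt) * rng.standard_normal()
        if n > nsteps // 2:                  # average over the saturated state
            phi2_sum += phi**2
    print(phi2_sum / (nsteps // 2), eps / (2 * gamma))
    # The time-averaged <phi^2> agrees with the analytical saturation level
    # to within a few percent for this run length.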
[Figure: steady-state saturation level of the electrostatic potential as a function of ; the solid line is the analytical prediction, the red crosses are numerical results calculated using GANDALF, and the green circles are calculated using .]

In , we compare the steady-state saturation levels computed using  with the analytical predictions and with the numerical results from another code, GANDALF (a fully spectral GPU code that solves the KRMHD equations). Slow-mode fluctuations were driven using white-noise forcing, which injected energy into the system with unit power. The spatial resolution was set to , and  Hermite moments of the distribution function were retained. The system was evolved until it reached a steady state, and the saturation level was then calculated by averaging over the steady-state fluctuations for a few Alfvén times. The saturation amplitudes obtained using  are in near-perfect agreement with those calculated by GANDALF, as well as with the analytical prediction.

 has been used on a variety of computing clusters with different architectures. It is quite easy to install and run, having dependencies only on standard, widely used libraries such as LAPACK and FFTW. Its parallelisation relies on standard MPI routines. As described in detail in , the direction parallel to the field can be integrated by two different numerical methods, both of them fairly scalable in terms of parallel performance. In contrast, the direction perpendicular to the guide field uses standard pseudospectral techniques, which are plagued by well-known limits on scalability due to the inherent non-locality of Fourier transforms. For this reason, if one wishes to increase the number of processors for a given computation, it is more effective to do so by increasing the ratio between the number of processes assigned to the parallel direction and the number assigned to the perpendicular directions. The results of such a test, made on the Helios machine (an Intel Xeon E5 cluster), can be seen in , where the MacCormack method was used in the parallel direction. The initial conditions are the 3D Orszag-Tang vortex given by , with  Hermite moments. We look at strong scaling, keeping the problem size fixed and varying the number of MPI processes, mainly in the parallel direction. This produces supralinear scaling, which breaks down after  cores for the  case and at  cores for the  one. Similar results have been obtained on other clusters, such as Stampede (a mixed Intel Xeon E5 and Intel Xeon Phi coprocessor cluster), Hopper (a Cray XE6) and Edison (a Cray XC30). Currently ongoing optimisation work includes parallelising the computation of the Hermite moments via OpenMP.

[Figure: strong scaling on Helios; the scaling breaks down after  cores for the  case and at  cores for the  one; the vertical axis gives the wall-clock time (in seconds) spent per timestep.]
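A minimal sketch of the kind of two-level process layout described above (Python with mpi4py for brevity; the group sizes and variable names are ours and do not reproduce the code's actual decomposition):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Choose how many process groups handle the parallel (z) direction; the
    # remaining factor handles the perpendicular (x, y) planes. Increasing npz
    # relative to npxy is the cheap way to scale, since the perpendicular FFTs
    # limit scalability.
    npz = 8                      # illustrative; must divide the total size
    npxy = size // npz

    # Processes with the same 'color' share one perpendicular-plane group
    # (the FFTs live here); the complementary split links the planes along z.
    perp_comm = comm.Split(color=rank // npxy, key=rank)   # groups of size npxy
    z_comm    = comm.Split(color=rank %  npxy, key=rank)   # groups of size npz

    if rank == 0:
        print(f"{size} ranks = {npz} (parallel) x {npxy} (perpendicular)")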
solves two different sets of equations : the kinetic reduced electron heating model ( krehm ) of zocco & schekochihin ( which simplifies to conventional reduced - mhd in the appropriate limit ) and the kinetic reduced mhd ( krmhd ) equations of schekochihin .the main numerical methods and the overall algorithm are described .a noteworthy feature of is its spectral representation of velocity - space , achieved via a hermite expansion of the distribution function , as proposed in for krehm and in for the krmhd equations .this representation has the attractive property of converting the kinetic equation for the distribution function into a coupled set of fluid - like equations for each hermite polynomial coefficient the advantage being that such equations are numerically more convenient to solve than the kinetic equation where they stem from . on the other hand ,the hermite expansion introduces a closure problem ( in the sense that the equation for the hermite coefficient of order couples to that of order ) . to address this problem, we present a nonlinear , asymptotically rigorous closure whose validity requires only that collisions are finite , but otherwise as small as required .naturally , the smaller the collision frequency the higher the number of hermite moments that need to be kept to guarantee the accuracy of the closure .realistic values of the collision frequency in the systems that are of primary interest to us ( e.g. , modern fusion devices , space and astrophysical environments ) lead to impractically large number of moments .the adoption of a hyper - collision operator ( the direct translation into hermite space of the usual hyper - diffusion operators used in ( fourier ) -space ) allows us to deal with this problem .together with a pseudo - spectral representation of the plane perpendicular to the background magnetic field , and the option of a spectral - like algorithm for the dynamics along the field , the hermite representation of velocity space implies that is ideally suited to the investigation of magnetised kinetic plasma turbulence and magnetic reconnection , with the unique capability of allowing for the direct monitoring of energy flows in phase - space .a series of linear and nonlinear numerical tests of is presented , with emphasis on orszag - tang - type decaying turbulence , both in the fluid and kinetic limits , where it is shown that recovers the theoretically expected power - law spectra . in this context ,an interesting , novel result that warrants further investigation and will be discussed in a separate publication is the velocity - space ( hermite ) spectrum that is obtained in the 3d kinetic ( sub - ion larmor radius scales ) orszag - tang run presented in ( see ) .this particular form of the hermite spectrum is indicative of linear phase mixing and suggests that this ( and ensuing landau damping ) may be a key energy transfer mechanism in kinetic decaying turbulence .the authors are greatly indebted to alex schekochihin for many discussions and ideas that have been fundamental to this work .nfl thanks paul dellar for pointing out the high - order fourier smoothing method of ref . 
Ravi Samtaney for discussions on high-order integration schemes for advection-type partial differential equations, and Ryusuke Numata for providing the data obtained with AstroGK that appears in  of this paper. This work was partly supported by Fundação para a Ciência e a Tecnologia via grants UID/FIS/50010/2013, PTDC/FIS/118187/2010 and IF/00530/2013, and by the Leverhulme Trust Network for Magnetised Plasma Turbulence. Simulations were carried out at HPC-FF (Juelich), Helios (IFERC), Edison and Hopper (NERSC), Kraken (NCSA) and Stampede (TACC).

A recent paper by Zocco  extends the KREHM model to include a background electron temperature gradient. This extension is also implemented in ; results of ongoing investigations exploring the different instabilities introduced by these terms (namely, the electron temperature gradient mode and the microtearing instability) will be reported elsewhere. For completeness, we note that the KREHM equations with this extension, in the normalised form adopted here (see  for details of the normalisation), are as in ([momentum])-([gm_eq]) but with additional drive terms proportional to the normalised electron temperature gradient: one in Ohm's law, one in the equation for $g_2$, and one (entering via a Kronecker delta $\delta_{m,3}$) in the equation for $g_3$; the gradient parameter involves the electron Larmor radius and the electron temperature gradient scale length.

R. Bruno, V. Carbone, The solar wind as a turbulence laboratory, Living Rev. Solar Phys. 2 (2005) 4.
B. G. Elmegreen, J. Scalo, Interstellar turbulence I: observations and processes, Annu. Rev. Astron. Astrophys. 42 (2004) 211-273.
D. A. Uzdensky, Magnetic interaction between stars and accretion disks, Astrophys. Space Sci. 292 (2004) 573-585.
E. A. Frieman, L. Chen, Nonlinear gyrokinetic equations for low-frequency electromagnetic waves in general plasma equilibria, Phys. Fluids 25 (1982) 502-508.
G. G. Howes, S. C. Cowley, W. Dorland, G. W. Hammett, E. Quataert, A. A. Schekochihin, Astrophysical gyrokinetics: basic equations and linear theory, Astrophys. J. 651 (2006) 590-614.
X. Garbet, Y. Idomura, L. Villard, T. Watanabe, Gyrokinetic simulations of turbulent transport, Nucl. Fusion 50 (2010) 043002.
J. A. Krommes, The gyrokinetic description of microturbulence in magnetized plasmas, Annu. Rev. Fluid Mech. 44 (2012) 175-201.
I. G. Abel, G. G. Plunk, E. Wang, M. Barnes, S. C. Cowley, W. Dorland, A. A. Schekochihin, Multiscale gyrokinetics for rotating tokamak plasmas: fluctuations, transport and energy flows, Rep. Prog. Phys. 76 (2013) 116201.
G. G. Howes, W. Dorland, S. C. Cowley, G. W. Hammett, E. Quataert, A. A. Schekochihin, T. Tatsuno, Kinetic simulations of magnetized turbulence in astrophysical plasmas, Phys. Rev. Lett. 100 (2008) 065004.
A. A. Schekochihin, S. C. Cowley, W. Dorland, G. W. Hammett, G. G. Howes, E. Quataert, T. Tatsuno, Astrophysical gyrokinetics: kinetic and fluid turbulent cascades in magnetized weakly collisional plasmas, Astrophys. J. Suppl. 182 (2009) 310.
G. G. Howes, J. M. TenBarge, W. Dorland, E. Quataert, A. A. Schekochihin, R. Numata, T. Tatsuno, Gyrokinetic simulations of solar wind turbulence from ion to electron scales, Phys. Rev. Lett. 107 (2011) 035004.
B. N. Rogers, S. Kobayashi, P. Ricci, W. Dorland, J. Drake, T. Tatsuno, Gyrokinetic simulations of collisionless magnetic reconnection, Phys. Plasmas 14 (2007) 092110.
M. J. Pueschel, F. Jenko, D. Told, J. Büchner, Gyrokinetic simulations of magnetic reconnection, Phys. Plasmas 18 (2011) 112102.
R. Numata, W. Dorland, G. G. Howes, N. F. Loureiro, B. N. Rogers, T. Tatsuno, Gyrokinetic simulations of the tearing instability, Phys. Plasmas 18 (2011) 112106.
J. M. TenBarge, W. Daughton, H. Karimabadi, G. G. Howes, W. Dorland, Collisionless reconnection in the large guide field regime: gyrokinetic versus particle-in-cell simulations, Phys. Plasmas 21 (2014) 020708.
P. A. Muñoz, D. Told, P. Kilian, J. Büchner, F. Jenko, Gyrokinetic and kinetic particle-in-cell simulations of guide-field reconnection. Part I: macroscopic effects of the electron flows, arXiv:1504.01351.
A. Zocco, A. A. Schekochihin, Reduced fluid-kinetic equations for low-frequency dynamics, magnetic reconnection, and electron heating in low-beta plasmas, Phys. Plasmas 18 (2011) 102309.
M. J. Aschwanden, A. I. Poland, D. M. Rabin, The new solar corona, Annu. Rev. Astron. Astrophys. 39 (2001) 175-210.
D. A. Uzdensky, The fast collisionless reconnection condition and the self-organization of solar coronal heating, Astrophys. J. 671 (2007) 2139.
W. Gekelman, H. Pfister, Z. Lucky, J. Bamber, D. Leneman, J. Maggs, Design, construction, and properties of the large plasma research device - the LAPD at UCLA, Rev. Sci. Instrum. 62 (1991) 2875-2883.
G. Saibene, N. Oyama, J. Lönnroth, Y. Andrew, E. de la Luna, C. Giroud, G. T. A. Huysmans, Y. Kamada, M. A. H. Kempenaars, A. Loarte, D. McDonald, M. M. F. Nave, A. Meigs, V. Parail, R. Sartori, S. Sharapov, J. Stober, T. Suzuki, M. Takechi, K. Toi, H. Urano, The H-mode pedestal, ELMs and TF ripple effects in JT-60U/JET dimensionless identity experiments, Nucl. Fusion 47 (2007) 969.
N. F. Loureiro, A. A. Schekochihin, A. Zocco, Fast collisionless reconnection and electron heating in strongly magnetized plasmas, Phys. Rev. Lett. 111 (2013) 025002.
B. B. Kadomtsev, O. P. Pogutse, Nonlinear helical perturbations of a plasma in the tokamak, Sov. Phys. JETP 38 (1974) 283-290.
H. R. Strauss, Nonlinear, three-dimensional magnetohydrodynamics of noncircular tokamaks, Phys. Fluids 19 (1976) 134-140.
N. Loureiro, G. Hammett, An iterative semi-implicit scheme with robust damping, J. Comput. Phys. 227 (2008) 4518-4542.
P. B. Snyder, G. W. Hammett, W. Dorland, Landau fluid models of collisionless magnetohydrodynamics, Phys. Plasmas 4 (1997) 3974-3985.
T. J. Schep, F. Pegoraro, B. N. Kuvshinov, Generalized two-fluid theory of nonlinear magnetic structures, Phys. Plasmas 1 (1994) 2843-2852.
S. Pirozzoli, Conservative hybrid compact-WENO schemes for shock-turbulence interaction, J. Comput. Phys. 178 (2002) 81-117.
A. Zocco, N. F. Loureiro, D. Dickinson, R. Numata, C. M. Roach, Kinetic microtearing modes and reconnecting modes in strongly magnetised slab plasmas, Plasma Phys. Control. Fusion 57 (2015) 065008.
J. A. Krommes, Fundamental statistical descriptions of plasma turbulence in magnetic fields, Phys. Rep. 360 (2002) 1-352.
F. C. Grant, M. R. Feix, Fourier-Hermite solutions of the Vlasov equations in the linearized limit, Phys. Fluids 10 (1967) 696-702.
F. C. Grant, M. R.
feix , http://adsabs.harvard.edu/abs/1967phfl...10.1356g[transition between landau and van kampen treatments of the vlasov equation ] , phys .fluids 10 ( 1967 ) 13561357 . http://dx.doi.org/10.1063/1.1762288 [ ] .g. joyce , g. knorr , h. k. meier , http://www.sciencedirect.com/science/article/pii/0021999171900349[numerical integration methods of the lasov equation ] , j. comp .phys . 8 ( 1 ) ( 1971 ) 5363 .g. knorr , m. shoucri , http://www.sciencedirect.com/science/article/pii/0021999174900011[plasma simulation as eigenvalue problem ] , j. comp .14 ( 1 ) ( 1974 ) 17 .http://www.sciencedirect.com/science/article/pii/0021999174900011 g. w. hammett , m. a. beer , w. dorland , s. c. cowley , s. a. smith , http://iopscience.iop.org/0741-3335/35/8/006[developments in the gyrofluid approach to tokamak turbulence simulations ] , plasma phys .fusion 35 ( 8) ( 1993 ) 973 .http://dx.doi.org/10.1088/0741-3335/35/8/006 [ ] .s. a. smith , http://w3.pppl.gov/~hammett/sasmith/thesis.html[dissipative closures for statistical moments , fluid moments , and subgrid scales in plasma turbulence ] , ph.d . ,princeton university ( 1997 ) .h. sugama , t .- h .watanabe , w. horton , collisionless kinetic - fluid closure and its application to the three - mode ion temperature gradient driven system , phys .plasmas 8 ( 2001 ) 26172628 . http://dx.doi.org/10.1063/1.1367319 [ ] .a. kanekar , a. a. schekochihin , w. dorland , n. f. loureiro , fluctuation - dissipation relations for a plasma - kinetic langevin equation , j. plasma phys .81 ( 2015 ) 3004 . http://arxiv.org/abs/1403.6257 [ ] , http://dx.doi.org/10.1017/s0022377814000622 [ ] .a. zocco , http://journals.cambridge.org/article_s0022377815000331[linear collisionless landau damping in hilbert space ] , journal of plasma physics firstview ( 2015 ) 17 .http://dx.doi.org/10.1017/s0022377815000331 [ ] .j. t. parker , p. j. dellar , fourier - hermite spectral representation for the vlasov - poisson system in the weakly collisional limit , j. plasma phys .81 ( 2015 ) 3003 . http://arxiv.org/abs/1407.1932 [ ] , http://dx.doi.org/10.1017/s0022377814001287 [ ] .l. gibelli , b. d. shizgal , http://dx.doi.org/10.1016/j.jcp.2006.06.017[spectral convergence of the hermite basis function solution of the vlasov equation : the free - streaming term ] , j. comput .219 ( 2 ) ( 2006 ) 477488 .http://dx.doi.org/10.1016/j.jcp.2006.06.017 [ ] .http://dx.doi.org/10.1016/j.jcp.2006.06.017 g. w. hammett , f. w. perkins , http://link.aps.org/doi/10.1103/physrevlett.64.3019[fluid moment models for andau damping with application to the ion - temperature - gradient instability ] , phys .( 1990 ) 30193022 .http://link.aps.org/doi/10.1103/physrevlett.64.3019 p.b. snyder , g. w. hammett , http://scitation.aip.org/content/aip/journal/pop/8/7/10.1063/1.1374238[a andau fluid model for electromagnetic plasma microturbulence ] , phys .plasmas 8 ( 7 ) ( 2001 ) 31993216 .http://dx.doi.org/10.1063/1.1374238 [ ] .v. borue , s. a. orszag , http://link.aps.org/doi/10.1103/physreve.55.7005[spectra in helical three - dimensional homogeneous isotropic turbulence ] , phys .e 55 ( 6 ) ( 1997 ) 70057009 .http://dx.doi.org/10.1103/physreve.55.7005 [ ] .t. y. hou , r. li , http://www.sciencedirect.com/science/article/pii/s0021999107001623[computing nearly singular solutions using pseudo - spectral methods ] , j. comp .phys . 226 ( 1 ) ( 2007 ) 379397 . http://dx.doi.org/10.1016/j.jcp.2007.04.014 [ ] . http://www.sciencedirect.com/science/article/pii/s0021999107001623 s. 
godunov , http://mi.mathnet.ru/eng/msb4873[a difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics ] , mat .( n. s. ) 47 ( 3 ) ( 1959 ) 271306 .s. a. orszag , http://adsabs.harvard.edu/abs/1971jats...28.1074o[on the elimination of aliasing in finite - difference schemes by filtering high - wavenumber components . ] , j. atmosph .28 ( 1971 ) 10741074 . http://dx.doi.org/10.1175/1520-0469(1971)028<1074:oteoai>2.0.co;2 [ ] . http://adsabs.harvard.edu/abs/1971jats...28.1074o t. grafke , h. homann , j. dreher , r. grauer , http://www.sciencedirect.com/science/article/pii/s0167278907004101[numerical simulations of possible finite time singularities in the incompressible euler equations : comparison of numerical methods ] , phys .d : nonlin .237 ( 1417 ) ( 2008 ) 19321936 . http://dx.doi.org/10.1016/j.physd.2007.11.006 [ ] .t. y. hou , blow - up or no blow - up ?a unified computational and analytic approach to 3d incompressible euler and navier-stokes equations , acta num .18 ( 2009 ) 277346 . http://dx.doi.org/10.1017/s0962492906420018 [ ] .j. dellar , http://www.sciencedirect.com/science/article/pii/s0021999112007012[lattice boltzmann magnetohydrodynamics with current - dependent resistivity ] , j. comp .237 ( 2013 ) 115131 . http://dx.doi.org/10.1016/j.jcp.2012.11.021 [ ] . http://www.sciencedirect.com/science/article/pii/s0021999112007012 r. w. maccormack , http://www.worldscientific.com/doi/abs/10.1142/9789812810793_0002[the effect of viscosity in hypervelocity impact cratering ] , in : frontiers of computational fluid dynamics , world scientific , 2002 , ch . 2 ,http://arxiv.org/abs/http://www.worldscientific.com/doi/pdf/10.1142/9789812810793_0002 [ ] , http://dx.doi.org/10.1142/9789812810793_0002 [ ] . http://www.worldscientific.com/doi/abs/10.1142/9789812810793_0002 e. anderson , z. bai , c. bischof , s. blackford , j. demmel , j. dongarra , j. du croz , a. greenbaum , s. hammarling , a. mckenney , d. sorensen , usersguide , 3rd edition , society for industrial and applied mathematics , philadelphia , pa , 1999 .r. samtaney , http://iopscience.iop.org/1749-4699/5/1/014004[numerical aspects of drift kinetic turbulence : ill - posedness , regularization and a priori estimates of sub - grid - scale terms ] , comput . sci . disc . 5 ( 1 ) ( 2012 ) 014004 .http://dx.doi.org/10.1088/1749-4699/5/1/014004 [ ] .h. p. furth , j. killeen , m. n. rosenbluth , http://pof.aip.org/resource/1/pfldas/v6/i4/p459_s1?isauthorized=no[finite-resistivity instabilities of a sheet pinch ] , phys .fluids 6 ( 4 ) ( 1963 ) 459484 .http://dx.doi.org/10.1063/1.1706761 [ ] .r. numata , g. g. howes , t. tatsuno , m. barnes , w. dorland , http://www.sciencedirect.com/science/article/pii/s0021999110005000[astrogk : astrophysical gyrokinetics code ] , j. comp .229 ( 24 ) ( 2010 ) 93479372 .http://dx.doi.org/10.1016/j.jcp.2010.09.006 [ ] .f. militello , f. porcelli , http://pop.aip.org/resource/1/phpaen/v11/i5/pl13_s1[simple analysis of the nonlinear saturation of the tearing mode ] , phys .plasmas 11 ( 5 ) ( 2004 ) l13l16 .http://dx.doi.org/10.1063/1.1677089 [ ] .d. escande , m. ottaviani , http://www.sciencedirect.com/science/article/pii/s0375960104001781[simple and rigorous solution for the nonlinear tearing mode ] , phys .lett . a 323 ( 34 ) ( 2004 ) 278284 . http://dx.doi.org/10.1016/j.physleta.2004.02.010 [ ] .http://www.sciencedirect.com/science/article/pii/s0375960104001781 n. f. loureiro , s. c. cowley , w. d. dorland , m. g. haines , a. a. 
schekochihin , http://link.aps.org/doi/10.1103/physrevlett.95.235003[x-point collapse and saturation in the nonlinear tearing mode reconnection ] , phys .95 ( 23 ) ( 2005 ) 235003 . http://dx.doi.org/10.1103/physrevlett.95.235003 [ ] . http://link.aps.org/doi/10.1103/physrevlett.95.235003 r. numata , n. f. loureiro , ion and electron heating during magnetic reconnection in weakly collisional plasmas , j. plasma phys .81 ( 2015 ) 3001 . http://arxiv.org/abs/1406.6456 [ ] , http://dx.doi.org/10.1017/s002237781400107x [ ] .d. biskamp , h. welter , http://pop.aip.org/resource/1/pfbpei/v1/i10/p1964_s1[dynamics of decaying twodimensional magnetohydrodynamic turbulence ] , phys .fluids b : plasma phys . 1 ( 10 ) ( 1989 ) 19641979 . http://dx.doi.org/10.1063/1.859060 [ ] . http://pop.aip.org/resource/1/pfbpei/v1/i10/p1964_s1 h. politano , a. pouquet , p. l. sulem , http://scitation.aip.org/content/aip/journal/pop/2/8/10.1063/1.871473[current and vorticity dynamics in threedimensional magnetohydrodynamic turbulence ] , phys . plasmas 2 ( 8) ( 1995 ) 29312939 .http://dx.doi.org/10.1063/1.871473 [ ] .d. biskamp , e. schwarz , http://pop.aip.org/resource/1/phpaen/v8/i7/p3282_s1?isauthorized=no[on two - dimensional magnetohydrodynamic turbulence ] , phys .plasmas 8 ( 7 ) ( 2001 ) 32823292 .n. f. loureiro , a. a. schekochihin , s. c. cowley , instability of current sheets and formation of plasmoid chains , phys .plasmas 14 ( 10 ) ( 2007 ) 100703 .http://arxiv.org/abs/astro-ph/0703631 [ ] , http://dx.doi.org/10.1063/1.2783986 [ ] .n. f. loureiro , a. a. schekochihin , d. a. uzdensky , plasmoid and kelvin - helmholtz instabilities in sweet - parker current sheets , phys . rev .e 87 ( 1 ) ( 2013 ) 013102 . http://arxiv.org/abs/1208.0966 [ ] , http://dx.doi.org/10.1103/physreve.87.013102 [ ] .n. f. loureiro , d. a. uzdensky , a. a. schekochihin , s. c. cowley , t. a. yousef , turbulent magnetic reconnection in two dimensions , mon . not .r. astron .( 2009 ) l146l150 .http://arxiv.org/abs/0904.0823 [ ] , http://dx.doi.org/10.1111/j.1745-3933.2009.00742.x [ ] .r. kinney , j. c. mcwilliams , t. tajima , http://scitation.aip.org/content/aip/journal/pop/2/10/10.1063/1.871062[coherent structures and turbulent cascades in twodimensional incompressible magnetohydrodynamic turbulence ] , phys .plasmas 2 ( 10 ) ( 1995 ) 36233639 . http://dx.doi.org/10.1063/1.871062 [ ] . http://scitation.aip.org/content/aip/journal/pop/2/10/10.1063/1.871062 s. bale , p. kellogg , f. mozer , t. horbury , h. reme , http://link.aps.org/doi/10.1103/physrevlett.94.215002[measurement of the electric fluctuation spectrum of magnetohydrodynamic turbulence ] , phys .94 ( 2005 ) 215002 . http://dx.doi.org/10.1103/physrevlett.94.215002 [ ] . http://link.aps.org/doi/10.1103/physrevlett.94.215002 o. alexandrova , j. saur , c. lacombe , a. mangeney , j. mitchell , s. j. schwartz , p. robert , http://link.aps.org/doi/10.1103/physrevlett.103.165003[universality of solar - wind turbulent spectrum from mhd to electron scales ] , phys . rev . lett . 103( 2009 ) 165003 .http://dx.doi.org/10.1103/physrevlett.103.165003 [ ] .s. boldyrev , j. c. perez , http://stacks.iop.org/2041-8205/758/i=2/a=l44[spectrum of kinetic - lfvn turbulence ] , astrophys . j. lett .758 ( 2 ) ( 2012 ) l44 .http://dx.doi.org/10.1088/2041-8205/758/2/l4 [ ] .j. m. tenbarge , g. g. howes , w. dorland , g. w. 
hammett , http://www.sciencedirect.com/science/article/pii/s0010465513003664[an oscillating langevin antenna for driving plasma turbulence simulations ] , comp .185 ( 2 ) ( 2014 ) 578589 .http://dx.doi.org/10.1016/j.cpc.2013.10.022 [ ] . | we report on the algorithms and numerical methods used in , a novel fluid - kinetic code that solves two distinct sets of equations : ( i ) the kinetic reduced electron heating model ( krehm ) equations [ zocco & schekochihin , phys . plasmas * 18 * , 102309 ( 2011 ) ] ( which reduce to the standard reduced - mhd equations in the appropriate limit ) and ( ii ) the kinetic reduced mhd ( krmhd ) equations [ schekochihin , astrophys . j. suppl . * 182*:310 ( 2009 ) ] . two main applications of these equations are magnetised ( alfvnic ) plasma turbulence and magnetic reconnection . uses operator splitting ( strang or godunov ) to separate the dynamics parallel and perpendicular to the ambient magnetic field ( assumed strong ) . along the magnetic field , allows for either a second - order accurate maccormack method or , for higher accuracy , a spectral - like scheme composed of the combination of a total variation diminishing ( tvd ) third order runge - kutta method for the time derivative with a 7th order upwind scheme for the fluxes . perpendicular to the field is pseudo - spectral , and the time integration is performed by means of an iterative predictor - corrector scheme . in addition , a distinctive feature of is its spectral representation of the parallel velocity - space dependence , achieved by means of a hermite representation of the perturbed distribution function . a series of linear and nonlinear benchmarks and tests are presented , including a detailed analysis of 2d and 3d orszag - tang - type decaying turbulence , both in fluid and kinetic regimes . |
studies of some epidemics , for example , the spread of the black death in europe from 1347 to 1350 , the past influenza pandemics [ rvachev95 ] , the spread of fox rabies in europe , or the spread of rabies among raccoons in the eastern united states and canada , indicate that host and infective interactions and the spatial distributions of their populations should play an important role in the dynamics of spread of many infectious diseases . until recently most mathematical models of the spread of epidemics have described interactions of large numbers of individuals in aggregate form , and often these models have neglected aspects of the spatial distribution of populations , the importance of which has been addressed in . adopting methodologies like cellular automata , coupled map lattices , lattice gas cellular automata or agent - based simulations , new classes of models have been proposed and studied to incorporate , with various levels of abstraction and detail : direct interactions among individuals ; the spatial distribution of population types ( i.e. , infective , susceptible , removed ) ; individuals ' movement ; and effects of social networks on the spread of epidemics . the goal of our work is to study the effects of population interactions and mixing on the spatio - temporal dynamics of spread of epidemics of sir ( susceptible - infected - removed ) type in a realistic population distribution . for the purpose of our study we developed a fully discrete individually - based simulation model that incorporates the random nature of disease transmission . the key feature of this model is the fact that for each individual the set of all individuals with whom he / she interacts may change with time . this results in a time - varying small - world network structure . as a case study we consider census data obtained from statistics canada for southern and central ontario . the data set specifies the population of small areas composed of one or more neighbouring street blocks , called dissemination areas . using these data , we study the effects of two types of interactions among individuals on the spread of epidemics . the first type of interaction is the one among individuals located only in adjacent dissemination areas . the second type of interaction is the one among individuals who , in addition to being in contact with members of their own and adjacent dissemination areas , may also be in contact with individuals located in remote , non - adjacent , dissemination areas . this last case can be seen as a case of `` short - cuts '' among multiple far - away dissemination areas . we investigate spatial correlations in our model and how they can be destroyed by the `` short - cuts '' in population contacts . additionally , we derive a mean field description of our individually - based simulation model and compare the results of the two models . the presented work is a continuation and expansion of our work in [ paper15 , paper16 , paper24 ] and contributes to a better understanding of the spread of epidemics of sir type , including influenza . in order to study how population interactions ( `` population mixing '' ) affect the spread of epidemics we construct an individually - based model in which each individual is represented by a particle , as in our earlier work [ paper15 , paper16 ] . models of this type take various forms , ranging from stochastic interacting particle systems to models based on cellular automata or coupled map lattices [ schon93 , bc93 , duryea99 , benyo2000 ] .
in our model, we consider a set of individuals , labelled with consecutive integers .this set of labels is denoted by .we assume that each individual , at any given time can be in one of three , mutually exclusive , distinct states : susceptible ( s ) , infected ( i ) or removed ( r ) .an individual can change state only in two ways : a susceptible individual who comes in direct contact with an infected individual can become infected with probability ; an infected individual can become removed with probability .the precise description of the model is as follows .the state of the -th individual at the time step is described by a boolean vector variable , where if the -th individual is in the state , where , and otherwise .thus , assume that and , that is , the time is discrete .hence , the only allowed values of the vector are no other values of are possible in sir epidemic model . in sir modelan individual can only be at one state at any give time and transitions occur only from susceptible to infected and from infected to removed .a removed individual does not become susceptible or infected again in sir model .therefore , sir model is suitable for studying spread of influenza in the same season because the same type of influenza virus can infect an individual only once and once the individual is recovered from the flu it becomes immune to this type of virus .we further assume that at time step the -th individual can interact with individuals from a subset of , to be denoted by .using this notation , after one time iteration the becomes and is a sequence of iid boolean random variables such that , , and and is a sequence iid boolean variables such that , .we assume that the sequences and of random variables are independent of each other and of the random variables observe that , if the -th individual interacts with an infected -th individual at time step and then the infection is transmitted from the -th individual to the -th individual at this time step .thus , if some product takes the value 1 , then , meaning that the -th individual has changed its state from susceptible to infected .the key feature of this model is the set , representing all individuals with whom the -th individual may have interacted at time step . in a large human population , it is almost impossible to know for each individual , so we make some simplifying assumptions . first of all , it is clear that the spatial distribution of individuals must be reflected in the structure of .we have decided to use realistic population distribution for southern and central ontario using census data obtained from statistic canada .the selected region is mostly surrounded by waters of great lakes , forming natural boundary conditions. the data set specifies population of so called dissemination areas , that is , small areas composed of one or more neighbouring street blocks. we had access to longitude and latitude data with accuracy of roughly , hence some dissemination areas in densely populated regions have the same geographical coordinates .we combined these dissemination areas into larger units , to be called modified dissemination areas(mda ) .we now define the set using the concept of mdas .this set is characterized by two positive integers and .let us label all mdas in the region we are considering by integers , where in our case . 
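the update rule above can be made concrete with a short simulation sketch . the following python / numpy fragment is an illustrative implementation of one time step of the individual - based sir dynamics ( infection with a fixed probability per infected contact , removal with a fixed probability per step ) ; the function and variable names , the per - contact transmission probability tau and the removal probability gamma are our own illustrative choices , and the contact - set function is left to be supplied by the user ( in the model it is built from the mda neighbourhood structure described next ) .

import numpy as np

# illustrative sketch of one time step of the individual-based sir dynamics.
# states: 0 = susceptible, 1 = infected, 2 = removed.
# contacts(i, t) must return the indices of the individuals that individual i
# may interact with at time step t (user-supplied; not part of this sketch).

rng = np.random.default_rng(0)
tau = 0.05    # assumed per-contact transmission probability
gamma = 0.2   # assumed per-step removal probability

def sir_step(state, contacts, t):
    new_state = state.copy()
    infected = np.flatnonzero(state == 1)
    for i in np.flatnonzero(state == 0):
        # a susceptible individual becomes infected if at least one of its
        # infected contacts transmits, i.e. with probability 1 - (1 - tau)**m,
        # where m is the number of infected contacts at this time step
        m = np.intersect1d(contacts(i, t), infected).size
        if m > 0 and rng.random() < 1.0 - (1.0 - tau) ** m:
            new_state[i] = 1
    # each infected individual is removed with probability gamma, independently
    new_state[infected[rng.random(infected.size) < gamma]] = 2
    return new_state

a susceptible - to - infected transition in this sketch depends only on the number of currently infected contacts , which is consistent with the product form of the update equations above .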
for the -thindividual belonging to the -th mda , the set consists of all individuals belonging to the -th mda plus all individuals belonging to the mdas nearest to the -th mda and the mdas randomly selected among all remaining mdas .while the close neighbours , that is , the nearest mdas , will not change with time , the far neighbours , that is , the randomly selected mdas , will be randomly reselected at each time step .the model described in the previous section involves strong spatial coupling between individuals .before we describe consequences of this fact , we first construct a set of equations which approximate dynamics of the model under the assumption of perfect mixing , in other words , neglecting the spatial coupling . the state of the system described by eq .( [ sirdyn1][sirdyn3 ] ) at time step is determined by the states of all individuals and is described by the boolean random field . under the assumptions of our model ,the boolean field is a markov stochastic process . by taking the expectation of this markov stochastic processwhen the initial configuration is we get the probabilities of the -th individual being susceptible , or infected , or removed at time , that is , $ ] for .since the sequences of random variables and are independent of each other and of the sequences of the random variables , assuming additionally independence of random variables , the expected value of a product of these variables is equal to the product of expected values . under these mean field assumptions , taking expected values of both sides of equations ( [ sirdyn1][sirdyn3 ] ) we obtain mean field approximations neglect spatial correlations , we further assume that is independent of , that is . even though sets have different number of elements for different and , for the purpose of this approximate derivation we assume that they all have the same number of elements , where is the average mda population .all these assumptions lead to third equation in the above set is obviously redundant , since . similarly to the classical kermack - mckendrick model , mean field equations ( [ mf1])-([mf3 ] )exhibit a threshold phenomenon .depending on the choice of parameters , we can have for all , meaning that the infection is not growing and eventually it will die out because in our model no new individuals are being born or arrive from outside the area under consideration during the time of the epidemic .alternatively , we can have for some , meaning that the epidemic is spreading .the intermediate scenario of constant will occur when , that is , when that initially the entire population consists only of susceptible and infective individuals , that is , there are no individuals in the removed group at have .furthermore , if is large , we can assume .solving eq .( [ trecond ] ) for under these assumptions we obtain , assuming the mean field approximation the epidemic can occur only if .the mean - field equations derived in the previous section depend only on the sum of and .this means , for example , that the model with , and the model with , will have the same mean field equations .however , the actual dynamics in these two cases are very different , see figure [ fig : frontlow ] and figure fig : frontdest . depending on the relative size of and ,the epidemic may propagate or die out , as the following analysis shows . 
in order to make the subsequent analysis more convenient , we introduce the parameter , defined as . let denote the expected value of the total number of individuals belonging to class , that is , . we say that an epidemic occurs if there exists such that . for fixed , and , there exists a threshold value of , to be denoted by , such that for each an epidemic occurs , and for it does not occur . obviously depends on , and this is illustrated in figure [ phasetran1 ] , which shows graphs of as a function of for several different values of , where . the graphs were obtained numerically by direct computer simulations of the model . the condition means that the size of the neighbourhood is kept constant , but the proportion of far neighbours ( represented by ) varies . ( caption of figure [ phasetran1 ] : as a function of for , and ; the first line from the bottom represents the mean field approximation . ) figure [ phasetran1 ] also shows the mean - field line given by eq . ( separatmf ) . we observe that the parameter controls the dynamics of the epidemic process in a significant way , shifting the critical line up or down . when , that is , when there are no interactions with far neighbours , the epidemic process has a strictly local nature , and we can observe well defined epidemic fronts propagating in space , regardless of which mda the epidemic starts at . this is illustrated in figure [ fig : frontlow ] , where the epidemic starts at a single centrally located mda with low population density ( figure [ fig : frontlow]a ) , and in figure [ fig : fronthigh ] , where the epidemic starts in an mda with high population density ( figure [ fig : fronthigh]a ) . the simulations were done for the same parameters in both cases except for the different locations of the onsets of epidemics . the figures display mdas that are represented by pixels colored according to the density of individuals of a given type . the red component of the color represents the density of infected individuals , green the density of susceptibles , and blue the density of removed individuals . by density we mean the number of individuals of a given type divided by the size of the population of the mda . ( caption of figure [ fig : frontlow ] : snapshots at , , , with ( a ) , ( b ) , ( c ) and ( d ) ; the initial outbreak is located in an area with low population density ; modified dissemination areas are represented by pixels colored according to the density of individuals of a given type , such that the red component represents the density of infected , green the density of susceptible , and blue the density of removed individuals . ) the epidemic waves propagating outwards can be clearly seen in figure [ fig : frontlow ] and figure [ fig : fronthigh ] , in the successive snapshots ( b ) , ( c ) and ( d ) . the fronts are mostly red . this means that the bulk of infected individuals is located at the fronts . after these individuals gradually recover , the centers become blue . let us now consider slightly modified parameters , taking . this means that we now replace one close mda by one far mda . this does not seem to be a significant change , yet the effect of this change is truly noticeable . ( caption of figure [ fig : frontdest ] : snapshots at , , , with ( a ) , ( b ) , ( c ) and ( d ) ; colour coding is the same as in the previous figure . )
as we can see in figure [ fig : frontdest ] , the epidemic propagates much faster , and there are no visible fronts . the disease quickly spreads over the entire region and large metropolitan areas become red in a short time , as shown in figure [ fig : frontdest](b ) . this suggests that infected individuals are more likely to be found in densely populated regions , and their distribution is dictated by the population distribution , unlike in figure [ fig : frontlow ] or figure [ fig : fronthigh ] , where infected individuals are to be found mainly at the propagating front . in order to quantify the observations of the previous section , we use a spatial correlation function for densities of infected individuals , defined as , where is the distance between the -th and -th individuals , and represents averaging over all pairs satisfying the condition . in the following considerations we take . the distance between two individuals is defined as the distance between the mdas to which they belong . consider now a specific example of the epidemic process described by eq . ( [ sirdyn1]-[sirdyn3 ] ) , where , , and . for this choice of parameters epidemics always occur as long as . figure [ cor3d ] shows graphs of the correlation functions at the peak of each epidemic , so that is the time step at which the number of infected individuals achieves its maximum value . ( caption of figure [ cor3d ] : correlation functions for different values of , where , , and . ) an interesting phenomenon can be observed in the figure under consideration : while the increase of the proportion of far neighbours does destroy spatial correlations , one needs a very high proportion of far neighbours to make the correlation curve completely flat . in [ bonabeau1998 ] it is reported that for influenza epidemics . if we fit a curve to the correlation data shown in figure [ cor3d ] , we obtain values of the exponent as shown in figure [ alpha ] . in order to obtain of comparably small magnitude to that reported in , one would have to take equal to at least , meaning that the vast majority of neighbours would have to be far neighbours . ( caption of figure [ alpha ] : the exponent for different values of the parameter ; the exponent has been obtained by fitting to simulation data . ) in reality , this would require that the vast majority of all individuals one interacted with were not his / her neighbours , coworkers , etc . , but individuals from randomly selected and possibly remote geographical regions . this is clearly at odds with our intuition regarding social interactions , especially outside large metropolitan areas . this prompted us to investigate further and to find out what is responsible for this effect . upon closer examination of the spatial patterns generated in simulations of our individually - based model , we reach the conclusion that it is the inhomogeneity of population sizes in neighbourhoods that makes spatial correlations so persistent . since different mdas have different population sizes , we expect that some individuals will have larger neighbourhood populations than others , and as a result they will be more likely to get infected , even if the proportion of infected individuals is the same in all mdas . this will build up clusters of infected individuals around populous mdas .
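before turning to that test , we note that the correlation analysis described above is straightforward to reproduce . the sketch below computes an empirical distance - binned correlation of infected densities between mdas and extracts a power - law exponent from a log - log fit ; the function names , the binning and the inputs ( per - mda infected densities and coordinates ) are our own illustrative choices rather than the exact procedure used to produce the figures .

import numpy as np

# empirical spatial correlation of infected densities, binned by distance.
# dens_i[a] is the density of infected individuals in mda a (e.g. at the
# epidemic peak) and coords[a] its planar coordinates; both are user-supplied.

def correlation_vs_distance(dens_i, coords, bins):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    prod = np.outer(dens_i, dens_i)
    iu = np.triu_indices(len(dens_i), k=1)       # unordered pairs a < b
    which = np.digitize(d[iu], bins)
    corr = np.array([prod[iu][which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])
    return 0.5 * (bins[:-1] + bins[1:]), corr    # bin midpoints, correlations

def fitted_exponent(r, c):
    # least-squares fit of c(r) proportional to r**(-alpha) on log-log scale
    mask = np.isfinite(c) & (c > 0) & (r > 0)
    slope, _ = np.polyfit(np.log(r[mask]), np.log(c[mask]), 1)
    return -slope                                # estimate of the exponent alpha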
to test if this is indeed the factor responsible for the strong spatial correlations in our model , we replaced all mda population sizes with a constant population size , that is , the average mda population size . as expected , the graphs of the correlation functions obtained in this case were all essentially flat , with the exponent close to zero even in the case of , when we obtained . we conclude that spatial correlations are difficult to destroy if neighbourhood sizes are inhomogeneous : a very significant amount of long - range interactions ( i.e. , very strong mixing ) is required to obtain flat correlation curves . however , for homogeneous neighbourhood sizes , even a relatively small long - range interaction immediately forces the process into the perfect - mixing regime , resulting in the lack of spatial correlations . the model that we developed can be applied to other realistic population distributions and geographical regions . | _ abstract : - _ we study the dynamics of the spread of epidemics of sir type in a realistic spatially - explicit geographical region , southern and central ontario , using census data obtained from statistics canada , and examine the role of population mixing in epidemic processes . our model incorporates the random nature of disease transmission , and the discreteness and heterogeneity of the distribution of the host population . we find that the introduction of a long - range interaction destroys spatial correlations very easily if neighbourhood sizes are homogeneous . for inhomogeneous neighbourhoods , very strong long - range coupling is required to achieve a similar effect . our work applies to the spread of influenza during a single season . |
state - space models ( ssm ) are a very popular class of non - linear and non - gaussian time series models in statistics , econometrics and information engineering ; see for example , , .an ssm is comprised of a pair of discrete - time stochastic processes , and , where the former is an -valued unobserved process and the latter is a -valued process which is observed .the hidden process is a markov process with initial density and markov transition density , i.e. it is assumed that the observations conditioned upon are statistically independent and have marginal density , i.e. we also assume that , and are densities with respect to ( w.r.t . )suitable dominating measures denoted generically as and . for example , if and then the dominating measures could be the lebesgue measures .the variable in the densities of these random variables are the particular parameters of the model .the set of possible values for is denoted .the model ( [ eq : evol])-([eq : obs ] ) is also referred to as a hidden markov model ( hmm ) in the literature . for any sequence and integers ,let denote the set .( when this is to be understood as the empty set . ) equations ( [ eq : evol ] ) and ( [ eq : obs ] ) define the joint density of , which yields the marginal likelihood , let , , be a sequence of functions and , , be the corresponding sequence of additive functionals constructed from as follows on , i.e. is the sums of term of the form , is merely a matter of redefining in the computations to follow.] there are many instances where it is necessary to be able to compute the following expectations recursively in time, .\label{eq : representationadditivefunctional}\ ] ] the conditioning implies the expectation should be computed w.r.t .the density of given , i.e. and for this reason is referred to as a _ smoothed additive functional_. as the first example of the need to perform such computations , consider the problem of computing the score vector , .the score is a vector in and its component is _ { i}=\frac{\partial\log\text { } p_{\theta}\left ( y_{0:n}\right ) } { \partial \theta^{i}}. \label{eq : score}\ ] ] using fisher s identity , the problem of computing the score becomes an instance of ( [ eq : representationadditivefunctional ] ) , i.e. + \sum \limits_{k=0}^{n}\mathbb{e}_{\theta}\left [ \left .\nabla\log g_{\theta } \left ( \left .y_{k}\right\vert x_{k}\right ) \right\vert y_{0:n}\right ] \nonumber\\ & + \mathbb{e}_{\theta}\left [ \left . \nabla\log\mu_{\theta}\left ( x_{0}\right ) \right\vert y_{0:n}\right ] .\label{eq : scoreadditivefunctional}\ ] ] an alternative representation of the score as a smoothed additive functional based on infinitesimal perturbation analysis is given in .the score has applications to maximum likelihood ( ml ) parameter estimation , .the second example is ml parameter estimation using the expectation - maximization ( em ) algorithm .let be a batch of data and the aim is to maximise w.r.t . . given a current estimate , a new estimate is obtained by maximizing the function + \sum \limits_{k=0}^{n}\mathbb{e}_{\theta^{\prime}}\left [ \left .\log g_{\theta } \left ( \left .y_{k}\right\vert x_{k}\right ) \right\vert y_{0:n}\right ] \\ & + \mathbb{e}_{\theta^{\prime}}\left [ \left . \log\mu_{\theta}\left ( x_{0}\right ) \right\vert y_{0:n}\right]\end{aligned}\ ] ] w.r.t . and setting to the maximising argument .a fundamental property of the em algorithm is . for linear gaussian models and finite state - space hmm ,it is possible to perform the computations in the definition of . 
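for a finite state - space hmm the smoothed additive functional can indeed be evaluated exactly by forward filtering followed by a backward smoothing pass over the pairwise marginals , and such exact values are convenient benchmarks for the particle methods discussed below . the following python sketch illustrates this ; the input arrays ( initial distribution , transition matrix , emission probabilities and the increment functions evaluated on the grid of state pairs ) are placeholders chosen by us .

import numpy as np

# exact smoothed additive functional for a finite state-space hmm with states
# {0,...,K-1}: returns sum_k E[ s_k(x_{k-1}, x_k) | y_{0:n} ].
# mu: (K,) initial distribution; f: (K,K) with f[i,j] = p(x_k=j | x_{k-1}=i);
# g: (n+1,K) with g[k,j] = p(y_k | x_k=j); s: (n,K,K) with s[k-1,i,j] = s_k(i,j).

def exact_smoothed_functional(mu, f, g, s):
    n1, K = g.shape
    alpha = np.zeros((n1, K))                    # filtering p(x_k | y_{0:k})
    alpha[0] = mu * g[0]
    alpha[0] /= alpha[0].sum()
    for k in range(1, n1):
        alpha[k] = (alpha[k - 1] @ f) * g[k]
        alpha[k] /= alpha[k].sum()
    beta = alpha[-1].copy()                      # smoothing p(x_k | y_{0:n})
    total = 0.0
    for k in range(n1 - 1, 0, -1):
        pred = alpha[k - 1] @ f                  # predictive p(x_k | y_{0:k-1})
        # pairwise smoothed marginal p(x_{k-1} = i, x_k = j | y_{0:n})
        pair = alpha[k - 1][:, None] * f * (beta / pred)[None, :]
        total += np.sum(pair * s[k - 1])
        beta = pair.sum(axis=1)                  # p(x_{k-1} | y_{0:n})
    return total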
for general non - linear non - gaussian state - space models of the form ( [ eq : evol])-([eq : obs ] ) , we need to rely on numerical approximation schemes .smc methods are a class of algorithms that sequentially approximate the sequence of posterior distributions using a set of weighted random samples called particles . specifically , the smc approximation of , for , is where is the importance weight associated to particle and is the dirac measure with an atom at .the particles are propagated forward in time using a combination of importance sampling and resampling steps and there are several variants of both these steps ; see , for details .smc methods are parallelisable and flexible , the latter in the sense that smc approximations of the posterior densities for a variety of ssms can be constructed quite easily .smc methods were popularized by the many successful applications to ssm .a smc approximation of may be constructed by replacing in eq .( [ eq : representationadditivefunctional ] ) with its smc approximation in eq .( [ eq : filteringdistribution ] ) - we call this the _ path space _method since the smc approximation of , which is a probability distribution on , is used .fortunately there is no need to store the entire ancestry of each particle , i.e. , which would require a growing memory .also , this estimate can be computed recursively . however , the reliance on the approximation of the joint distribution is bad .it is well - known in the smc literature that the approximation of becomes progressively impoverished as increases because of the successive resampling steps , , .that is , the number of distinct samples representing for any fixed diminishes as increases this is known as the _ particle path degeneracy _ problem . hence ,whatever being the number of particles , will eventually be approximated by a single unique particle for all ( sufficiently large ) .this has severe consequences for the smc estimate . in , under favourable mixing assumptions , the authors established an upper bound on the error which is proportional to . under similar assumptions ,it was shown in that the asymptotic variance of this estimate increases at least quadratically with . to reduce the variance ,alternative methods have been proposed .the technique proposed in relies on the fact that for a ssm with good forgetting properties , when the horizon is large enough ; that is observations collected after times bring little additional information about .( see ( * ? ? ?* corollary 2.9 ) for exponential error bounds . )this suggests that a very simple scheme to curb particle degeneracy is to stop updating the smc estimate beyond time .this algorithm is trivial to implement but the main practical problem is that of determining an appropriate value for such that the two densities in eq .( [ eq : forgetting ] ) are close enough and particle degeneracy is low .these are conflicting requirements .a too small value for the horizon will result in being a poor approximation of but the particle degeneracy will be low .on the other hand , a larger horizon improves the approximation in eq .( [ eq : forgetting ] ) but particle degeneracy will creep in .automating the selection of is difficult .additionally , for any finite the smc estimate of will suffer from a non vanishing bias even as . 
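before reviewing the error bounds available for the truncated - horizon approach , it may help to have in mind what the path space method looks like in code . the sketch below is an illustrative bootstrap particle filter that carries the additive functional along each surviving particle path ; the model functions ( sample_mu , sample_f , logg and the increment s ) and the number of particles are placeholders to be supplied by the user , and resampling is performed at every step for simplicity .

import numpy as np

rng = np.random.default_rng(1)

def path_space_estimate(y, sample_mu, sample_f, logg, s, N=500):
    # bootstrap particle filter; S[i] accumulates sum_k s(x_{k-1}, x_k) along
    # the ancestral path of particle i, so the estimate inherits any path
    # degeneracy of the filter. cost is o(N) per time step.
    x = sample_mu(N)                                 # particles at time 0
    lw = logg(y[0], x)
    S = np.zeros(N)
    for k in range(1, len(y)):
        w = np.exp(lw - lw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)             # multinomial resampling
        x_prev, S = x[idx], S[idx]                   # resampled paths and sums
        x = sample_f(x_prev)                         # propagate x_k ~ f(.|x_{k-1})
        S = S + s(x_prev, x)                         # accumulate the increment
        lw = logg(y[k], x)                           # bootstrap weights
    w = np.exp(lw - lw.max())
    w /= w.sum()
    return np.sum(w * S)                             # estimate of the functional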
in , for an optimized value of which is dependent on and the typically unknown mixing properties of the model , the smc estimates of based on the approximation in eq .( [ eq : forgetting ] ) were shown to have an error and bias upper bounded by quantities proportional to and under regularity assumptions .the computational cost of the smc approximation of computed using either the path space method or the truncated horizon method of is .a standard alternative to computing is to use smc approximations of fixed - interval smoothing techniques such as the forward filtering backward smoothing ( ffbs ) algorithm , .theoretical results on the smc approximations of the ffbs algorithm have been recently established in ; this includes a central limit theorem and exponential deviation inequalities . in particular , under appropriate mixing assumptions ,the authors have obtained time - uniform deviation inequalities for the smc - ffbs approximations of the marginals ( * ? ? ? * section 5 ) ; see for alternative proofs and complementary results .let denote the smc - ffbs estimate of .in this work it is established that the asymptotic variance of only grows linearly with time ; a fact which was also alluded to in .the main advantage of the smc implementation of the ffbs algorithm is that it does not have any tuning parameter other than the number of particles .however , the improved theoretical properties comes at a computational price ; this algorithm has a computational complexity of compared to for the methods previously discussed .( it is possible to use fast computational methods to reduce the computational cost to . )another restriction is that the smc implementation of the ffbs algorithm does not yield an online algorithm .the contributions of this article are as follows .* we propose an original online implementation of the smc - ffbs estimate of .a particular case of this new algorithm was presented in , to compute the score vector ( [ eq : score ] ) . however , because it was catered to estimating the score , the authors failed to realise its full generality . * an upper bound for the _ non - asymptotic _ mean square error of the smc - ffbs estimate of is derived under regularity assumptions .it follows from this bound that the asymptotic variance of is bounded by a quantity proportional to .this complements results recently obtained in , .* we demonstrate how the online implementation of the smc - ffbs estimate of can be applied to the problem of recursively estimating the parameters of a ssm from data .we present original smc implementations of recursive maximum likelihood ( rml ) , , and of the online em algorithm , , , , , ( * ? ? ?* section 3.2 . ) .these smc implementations do not suffer from the particle path degeneracy problem .the remainder of this paper is organized as follows . 
in section [ sec : smoothedadditivefunctionals ] the standard ffbs recursion and its smc implementation are presented . it is then shown how this recursion and its smc implementation can be carried out exactly with only a forward pass . a non - asymptotic variance bound is presented in section [ sec : theory ] . recursive parameter estimation procedures are presented in section [ sec : experiments ] and numerical results are given in section [ sec : simulations ] . we conclude in section [ sec : discussion ] and the proof of the main theoretical result is given in the appendix . we first review the standard ffbs recursion and its smc approximation , . this is then followed by a derivation of a forward - only version of the ffbs recursion and its corresponding smc implementation . the algorithms presented in this section do not depend on any specific smc implementation to approximate . recall the definition of in eq . ( [ eq : representationadditivefunctional ] ) . the standard ffbs procedure to compute proceeds in two steps . in the first step , which is the forward pass , the filtering densities are computed using bayes ' formula : the second step is the backward pass that computes the following marginal smoothed densities , which are needed to evaluate each term in the sum that defines : where we compute eq . ( [ eq : standardforwardbackward ] ) commencing at and then , decrementing each time , until . ( integrating eq . ( [ eq : standardforwardbackward ] ) w.r.t . will yield , which is needed for the next computation . ) to compute , backward steps must be executed and then expectations computed . this must then be repeated at time to incorporate the effect of the new observation on these calculations . clearly this is not an online procedure for computing . the smc implementation of the ffbs recursion is straightforward . in the forward pass , we compute and store the smc approximation of for . in the backward pass , we simply substitute this smc approximation in the place of in eq . ( [ eq : standardforwardbackward ] ) . let be the smc approximation of , , initialised at by setting . by substituting for in eq . ( [ eq : conditionaldensity ] ) , we obtain this approximation is combined with ( see eq . ( [ eq : standardforwardbackward ] ) ) to obtain marginalising this approximation will give the approximation to , that is , where the smc estimate of is then given by the backward recursion for the weights , given in eq . ( [ eq : backwardweights ] ) , makes this an off - line algorithm for computing . to circumvent the need for the backward pass in the computation of , the following auxiliary function ( on ) is introduced , it is apparent that the following proposition establishes a forward recursion to compute , which is henceforth referred to as the _ forward smoothing recursion_. for the sake of completeness , the proof of this proposition is given . [ propdn ] for , we have t_{n}^{\theta}\left ( x_{n}\right ) = \int\left [ t_{n-1}^{\theta}\left ( x_{n-1}\right ) + s_{n}\left ( x_{n-1},x_{n}\right ) \right ] \text { } p_{\theta}\left ( \left . x_{n-1}\right\vert y_{0:n-1},x_{n}\right ) dx_{n-1 } , \label{eq : recursionadditivefunctional} where . * proof .
* the proof is straightforward : \begin{aligned } t_{n}^{\theta}\left ( x_{n}\right ) & = \int s_{n}\left ( x_{0:n}\right ) \text { } p_{\theta}\left ( \left . x_{0:n-1}\right\vert y_{0:n-1},x_{n}\right ) dx_{0:n-1}\\ & = \int\left [ \int s_{n-1}\left ( x_{0:n-1}\right ) p_{\theta}\left ( \left . x_{0:n-2}\right\vert y_{0:n-2},x_{n-1}\right ) dx_{0:n-2}\right ] p_{\theta}\left ( \left . x_{n-1}\right\vert y_{0:n-1},x_{n}\right ) dx_{n-1}\\ & + \int s_{n}\left ( x_{n-1},x_{n}\right ) \text { } p_{\theta}\left ( \left . x_{n-1}\right\vert y_{0:n-1},x_{n}\right ) dx_{n-1}.\end{aligned } the integrand in the first equality is while the integrand in the first integral of the second equality is . this recursion is not new and is actually a special instance of dynamic programming for markov processes ; see for example . for a fully observed markov process with transition density , the dynamic programming recursion to compute the expectation of with respect to the law of the markov process is usually implemented using a backward recursion going from time to time . in the partially observed scenario considered here , conditional on is a backward markov process with non - homogeneous transition densities . thus ( [ eq : recursionadditivefunctional ] ) is the corresponding dynamic programming recursion to compute with respect to for this backward markov chain . this recursion is the foundation of the online em algorithm and is described at length in ( pioneered in ) , where the density appearing in is usually written as or as in eq . ( [ eq : conditionaldensity ] ) in , , . the forward smoothing recursion has been rediscovered independently several times ; see , for example . a simple smc scheme to approximate can be devised by exploiting equations ( [ eq : additivesmoothfunctionalsasfunctionoft ] ) and ( [ eq : recursionadditivefunctional ] ) . this is summarised as algorithm smc - fs below . * algorithm smc - fs : forward - only smc computation of the ffbs estimate * \begin{aligned } \widehat{t}_{n}^{\theta}\left ( x_{n}^{\left ( i\right ) } \right ) & = \frac { \sum_{j=1}^{n}w_{n-1}^{\left ( j\right ) } f_{\theta}\left ( x_{n}^{(i)}|x_{n-1}^{(j)}\right ) \left [ \widehat{t}_{n-1}^{\theta}\left ( x_{n-1}^{\left ( j\right ) } \right ) + s_{n}\left ( x_{n-1}^{\left ( j\right ) } , x_{n}^{\left ( i\right ) } \right ) \right ] } { \sum_{j=1}^{n}w_{n-1}^{\left ( j\right ) } f_{\theta}\left ( x_{n}^{(i)}|x_{n-1}^{(j)}\right ) } , \quad1\leq i\leq n,\label{eq : tapproximation}\\ \widehat{\mathcal{s}}_{n}^{\theta } & = \sum_{i=1}^{n}w_{n}^{\left ( i\right ) } \text { } \widehat{t}_{n}^{\theta}\left ( x_{n}^{\left ( i\right ) } \right ) .\label{eq : smcapproxadditivefunctionals}\end{aligned } this algorithm is initialized by setting for . it has a computational complexity of , which can be reduced by using fast computational methods . the rationale for this algorithm is as follows . by using in eq . ( [ eq : mcconditionaldensity ] ) in place of in eq . ( [ eq : recursionadditivefunctional ] ) , we obtain an approximation of which is computed at the particle locations . the approximation of in eq . ( [ eq : smcapproxadditivefunctionals ] ) now follows from eq . ( [ eq : additivesmoothfunctionalsasfunctionoft ] ) by using in place of . it is valid to use the same notation for the estimates in eq . ( [ eq : batchsmcestimate ] ) and in eq . ( [ eq : smcapproxadditivefunctionals ] ) as they are indeed the same . the verification of this assertion may be accomplished by unfolding the recursion in eq . ( [ eq : tapproximation ] ) . in this section , we present a bound on the non - asymptotic mean square error of the estimate of . for the sake of simplicity , the result is established for additive functionals of the type where , and when algorithm smc - fs is implemented using the bootstrap particle filter ; see , for a definition of this vanilla particle filter . the result can be generalised to accommodate an auxiliary implementation of the particle filter , , .
likewise , the conclusion is also valid for additive functionals of the type in ( [ eq : additivefunctional ] ) ; the proof uses similar arguments but is more complicated. the following regularity condition will be assumed . * ( a ) * there exist constants such that for all , and , admittedly , this assumption is restrictive and typically holds when and are finite or are compact spaces . in general , quantifying the errors of smc approximations under weaker assumptions is possible .( more precise but complicated error bounds for the particle estimate of are also presented in under weaker assumptions .) however , when ( a ) holds , the bounds can be greatly simplified to the extent that they can usually be expressed as linear or quadratic functions of the time horizon .these simple rates of growth are meaningful as they have also been observed in numerical studies even in scenarios where assumption a is not satisfied . for a function ,let .the oscillation of , denoted , is defined to be .the main result in this section is the following non - asymptotic bound for the mean square error of the estimate of given in eq .( [ eq : smcapproxadditivefunctionals ] ) .[ nonasymptheo ] assume ( a ) .consider the additive functional in ( [ eq : additivefunctionalsimple ] ) with and for .then , for any and , where is a finite constant that is independent of time , and the particular choice of functions . the proof is given in the appendix .it follows that the asymptotic variance of , i.e. as the number of particles goes to infinity , is upper bounded by a quantity proportional to as the bias of the estimate is ( * ? ? ?* corollary 5.3 ) .let denote the smc estimate of obtained with the standard path space method .this estimate can have a much larger asymptotic variance as is illustrated with the following very simple model .let , i.e. is an i.i.d .sequence , and let and for all where is some real valued function on , and .it can be easily established that the formula for the asymptotic variance of given in , ( * ? ? ?( 9.13 ) , page 304 ) simplifies to ^{2}}{\mu_{\theta}\left ( x\right ) } dx+\frac{n\left ( n-1\right ) } { 2}\int\frac{\pi_{\theta}\left ( \left .x\right\vert y\right ) ^{2}}{\mu_{\theta}\left ( x\right ) } dx\text { } \int\widetilde{s}_{\theta}\left ( x\right ) ^{2}\pi_{\theta}\left ( \left .x\right\vert y\right ) dx \label{eq : asymptoticvariancepathbased}\ ] ] where thus the asymptotic variance increases quadratically with time .note though that the asymptotic variance of converges as tends to infinity to a positive constant .thus path space method can provide stable estimates of ] in this more general time - averaging setting .once again let denote the smc estimate of ] is computed using algorithm smc - fs , will converge to zero as tends to infinity .an important application of the forward smoothing recursion is to parameter estimation for non - linear non - gaussian ssms .we will assume that observations are generated from an unknown ` true ' model with parameter value , i.e. . the static parameter estimation problem has generated a lot of interest over the past decade and many smc techniques have been proposed to solve it ; see for a recent review . 
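to complement the analysis above , the forward - only computation of the ffbs estimate ( algorithm smc - fs ) can be sketched in a few lines of python on top of a bootstrap filter with resampling at every step . this is an illustrative implementation with o(n^2 ) cost per time step , not the exact code used for the experiments reported later ; the model functions ( sample_mu , sample_f , f_density , logg and the increment s ) , the vectorisation conventions and the number of particles are all our own placeholder choices .

import numpy as np

rng = np.random.default_rng(2)

def smc_fs(y, sample_mu, sample_f, f_density, logg, s, N=500):
    # forward-only smc-ffbs estimate of the smoothed additive functional.
    x = sample_mu(N)
    lw = logg(y[0], x)
    w = np.exp(lw - lw.max())
    w /= w.sum()
    T = np.zeros(N)                                  # T_hat at time 0
    for k in range(1, len(y)):
        idx = rng.choice(N, size=N, p=w)             # resample
        x_prev, T_prev = x[idx], T[idx]
        w_prev = np.full(N, 1.0 / N)                 # weights after resampling
        x = sample_f(x_prev)                         # propagate new particles
        # eq. (tapproximation): weighted backward-kernel average of the old
        # statistic plus the new increment, evaluated at each new particle;
        # F[i, j] = f_theta(x_k^(i) | x_{k-1}^(j)), an o(N^2) computation
        F = f_density(x[:, None], x_prev[None, :])
        num = F * w_prev * (T_prev[None, :] + s(x_prev[None, :], x[:, None]))
        T = num.sum(axis=1) / (F * w_prev).sum(axis=1)
        lw = logg(y[k], x)
        w = np.exp(lw - lw.max())
        w /= w.sum()
    return np.sum(w * T)                             # eq. (smcapproxadditivefunctionals)

the o(n^2 ) inner sums are the price paid for avoiding the backward pass ; as noted earlier , they can be reduced with fast computational methods if needed .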
in a bayesian approach to the problem, a prior distribution is assigned to and the sequence of posterior densities is estimated recursively using smc algorithms combined with markov chain monte carlo ( mcmc ) steps , , .unfortunately these methods suffer from the particle path degeneracy problem and will result in unreliable estimates of the model parameters ; see , for a discussion of this issue . given a fixed observation record , an alternative offline mcmc approach to estimate has been recently proposed which relies on proposals built using the smc approximation of . in a ml approach ,the estimate of is the maximising argument of the likelihood of the observed data .the ml estimate can be calculated using a gradient ascent algorithm either offline for a fixed batch of data or online ; see section [ subsec : rml ] .likewise , the em algorithm can also be implemented offline or online .the online em algorithm , assuming all calculations can be performed exactly , is presented in , , , and . for a general ssm for which the quantities required by the online em can not be calculated exactly , an smc implementation is possible , ( * ? ? ?* section 3.2 . ) ; see section [ sec : parameterestimation ] . to maximise the likelihood w.r.t . , we can use a simple gradient algorithm .let be the sequence of parameter estimates of the gradient algorithm .we update the parameter at iteration using where is the score vector computed at and is a sequence of positive non - increasing step - sizes defined in ( [ eq : stepchoice ] ) . for a general ssm, we need to approximate .as mentioned in the introduction , the score vector admits several smoothed additive functional representations ; see eq .( [ eq : score ] ) and .( [ eq : score ] ) , it is possible to approximate the score with algorithm smc - fs . in the online implementation , the parameter estimate at time is updated according to , upon receiving , is updated in the direction of ascent of the predictive density of this new observation .a necessary requirement for an online implementation is that the previous values of the model parameter estimates ( other than ) are also used in the evaluation of at .this is indicated in the notation .( not doing so would require browsing through the entire history of observations . )this approach was suggested by for the finite state - space case and is named rml .the asymptotic properties of this algorithm ( i.e. the behavior of in the limit as goes to infinity ) have been studied in the case of an i.i.d . hidden process by and for an hmm with a finite state - space by . under suitable regularity assumptions ,convergence to and a central limit theorem for the estimation error has been established . for a general ssm, we can compute a smc estimate of using algorithm smc - fs upon noting that is equal to in particular , at time , a particle approximation of is computed using the particle approximation at time and parameter value . similarly , the computation of eq .( [ eq : tapproximation ] ) is performed using and with the estimate of is now the difference of the estimate in eq .( [ eq : smcapproxadditivefunctionals ] ) with the same estimate computed at time . under the regularity assumptions given in section [ sec : theory ] , it follows from the results in the appendix that the asymptotic variance ( i.e. as ) of the smc estimate of computed using algorithm smc - fs is uniformly ( in time ) bounded . 
on the contrary , the standard path - based smc estimate of has an asymptotic variance that increases linearly with .gradient ascent algorithms are more generally applicable than the em algorithm . however , their main drawback in practice is that it is difficult to properly scale the components of the computed gradient vector . for this reasonthe em algorithm is usually favoured by practitioners whenever it is applicable .let be the sequence of parameter estimates of the em algorithm . in the offline approach , at iteration ,the function is computed and then maximized .the maximizing argument is the new estimate .if belongs to the exponential family , then the maximization step is usually straightforward .we now give an example of this .let , , be a collection of functions with corresponding additive functionals and let the collection is also referred to as the _ summary statistics _ in the literature .typically , the maximising argument of can be characterised explicitly through a suitable function , i.e. where {l}=\mathcal{s}_{l , n}^{\theta}\mathcal{s}_{l , n+1}=\gamma_{n+1}\text { } \int s^{l}\left ( x_{n},x_{n+1},y_{n+1}\right ) p_{\theta_{0:n}}(x_{n},x_{n+1}|y_{0:n+1})dx_{n : n+1} \ \ \ \ \ \ \ \ \ \ \ + \left ( 1-\gamma_{n+1}\right ) \int\sum_{k=1}^{n}\left ( \prod\limits_{i = k+1}^{n}\left ( 1-\gamma_{i}\right ) \right ) \gamma_{k}s^{l}\left ( x_{k-1},x_{k},y_{k}\right ) p_{\theta_{0:n}}(x_{0:n}|y_{0:n+1})dx_{0:n}, ] . here is a step - size sequence satisfying the same conditions stipulated for the rml in section [ subsec : rml ] .( the recursive implementation of is standard . )the subscript indicates that the posterior density is being computed sequentially using the parameter at time ( and at time . )references , , and have proposed an online em algorithm , implemented as above , for finite state hmms . in the finite state setting all computations involved can be done exactly in contrast to general ssms where numerical procedures are called for .it is also possible to do all the calculations exactly for linear gaussian models .define the vector valued function as follows : ^{\text{t}}$ ] .computing sequentially using smc - fs is straightforward and detailed in the following algorithm .* smc - fs implementation of online em * . , } { \sum_{j=1}^{n}w_{n-1}^{\left ( j\right ) } f_{\theta_{n-1}}\left ( x_{n}^{(i)}|x_{n-1}^{(j)}\right ) } .\ ] ] it was suggested in ( * ? ? ?* section 3.2 . 
) that the two other smc methods discussed in section [ sec : litreview ] could be used to approximate ; the path space approach to implement the online em was also independently proposed in .doing so would yield a cheaper alternative to algorithm smc - em above with computational cost , but not without its drawbacks .the fixed - lag approximation of would introduce a bias which might be difficult to control and the path space approach suffers from the usual particle path degeneracy problem .consider the step - size sequence in ( [ eq : stepchoice ] ) .if the path space method is used to estimate then the theory in section [ sec : theory ] tells us that , even under strong mixing assumptions , the asymptotic variance of the estimate of will not converge to zero for .thus it will not yield a theoretically convergent algorithm .numerical experiments in appear to provide stable results which we attribute to the fact that this variance might be very small in the scenarios considered is assigned a prior distribution and we estimate , , , the path degeneracy problem has much more severe consequences than in the ml framework considered here as illustrated in .indeed in the ml framework , the filter will have , under regularity assumptions , exponential forgetting properties for any whereas this will never be the case for .in contrast , the asymptotic variance of the estimate converges to zero in time for the entire range under the same mixing conditions . the original implementation proposed herehas been recently successfully adopted in to solve a complex parameter estimation problem arising in robotics .we commence with a study of a scalar linear gaussian ssm for which we may calculate smoothed functionals analytically .we use these exact values as benchmarks for the smc approximations .the model is where and .we compared the exact values of the following smoothed functionals , \quad\mathcal{s}_{2,n}^{\theta } = \mathbb{e}_{\theta}\left [ \left .\sum_{k=1}^{n}x_{k-1}\right\vert y_{0:n}\right ] , \quad\mathcal{s}_{3,n}^{\theta}=\mathbb{e}_{\theta}\left [ \left. \sum_{k=1}^{n}x_{k-1}x_{k}\right\vert y_{0:n}\right ] , \label{eq : benchmarkfunctionals}\ ] ] computed at with the bootstrap filter implementation of algorithm smc - fs and the path space method .comparisons were made after 2500 , 5000 , 7500 and 10,000 observations to monitor the increase in variance and the experiment was replicated 50 times to generate the box - plots in figure [ combinednandn2boxplots ] .( all replications used the same data record . )both estimates were computed using particles . ) for a linear gaussian ssm .estimates were computed with path space method ( left column ) and algorithm smc - fs ( right column ) .the long horizontal line intersecting the box indicates the true value.,scaledwidth=100.0% ] from figure [ combinednandn2boxplots ] it is evident that the smc estimates of algorithm smc - fs significantly outperforms the corresponding smc estimates of the path space method .however one should bear in mind that the former algorithm has computational complexity while the latter is .thus a comparison that takes this difference into consideration is important . 
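for a scalar linear gaussian model of the kind used in this benchmark , smoothed functionals such as those of eq . ( [ eq : benchmarkfunctionals ] ) can also be computed exactly with a kalman filter followed by a rauch - tung - striebel backward pass , which is one way to produce the reference values against which the smc estimates are compared . the sketch below assumes the model form x_k = phi * x_{k-1} + sigma_v * v_k , y_k = x_k + sigma_w * w_k and an arbitrary gaussian prior on x_0 ; the actual coefficients of the experiment are not reproduced here .

```python
import numpy as np

def exact_smoothed_functionals(y, phi, sigma_v, sigma_w, m0=0.0, P0=1.0):
    """exact E[sum x_{k-1}^2 | y], E[sum x_{k-1} | y], E[sum x_{k-1} x_k | y]
    for the assumed model x_k = phi x_{k-1} + sigma_v v_k, y_k = x_k + sigma_w w_k."""
    n = len(y)
    mf = np.zeros(n); Pf = np.zeros(n)        # filtered means / variances
    mp = np.zeros(n); Pp = np.zeros(n)        # one-step predicted means / variances
    m, P = m0, P0
    for k in range(n):
        if k > 0:
            m, P = phi * m, phi ** 2 * P + sigma_v ** 2
        mp[k], Pp[k] = m, P
        K = P / (P + sigma_w ** 2)            # kalman gain
        m, P = m + K * (y[k] - m), (1.0 - K) * P
        mf[k], Pf[k] = m, P
    ms = mf.copy(); Ps = Pf.copy()            # rauch-tung-striebel backward pass
    C = np.zeros(n)
    for k in range(n - 2, -1, -1):
        C[k] = Pf[k] * phi / Pp[k + 1]
        ms[k] = mf[k] + C[k] * (ms[k + 1] - mp[k + 1])
        Ps[k] = Pf[k] + C[k] ** 2 * (Ps[k + 1] - Pp[k + 1])
    # lag-one smoothed covariances: cov(x_k, x_{k+1} | y) = C_k * Ps_{k+1}
    S1 = np.sum(Ps[:-1] + ms[:-1] ** 2)                      # E[sum x_{k-1}^2   | y]
    S2 = np.sum(ms[:-1])                                     # E[sum x_{k-1}     | y]
    S3 = np.sum(C[:-1] * Ps[1:] + ms[:-1] * ms[1:])          # E[sum x_{k-1} x_k | y]
    return S1, S2, S3

# illustrative use on synthetic data (parameter values are assumptions)
rng = np.random.default_rng(1)
phi, sigma_v, sigma_w = 0.8, 1.0, 0.5
x, y = 0.0, []
for _ in range(2500):
    x = phi * x + rng.normal(0.0, sigma_v)
    y.append(x + rng.normal(0.0, sigma_w))
print(exact_smoothed_functionals(np.array(y), phi, sigma_v, sigma_w))
```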
from theorem [ nonasymptheo ] and the discussion after it, we expect the variance of algorithm smc - fs s estimate to grow only linearly with the time index compared to a quadratic in time growth of variance for the path space method .hence , for the same computational effort we argue that , for large observation records , the estimate of algorithm smc - fs is always going to outperform the path space estimates .specifically , for a large enough , the variance of algorithm smc - fs s estimate with particles will be significantly less than the variance of the path space estimate with particles .if the number of observations is small then , taking into account the computational complexity , it might be better to use the path space estimate as the variance benefit of using algorithm smc - fs may not be appreciable to justify the increased computational load .figure [ fig : constantstep ] shows the parameter estimates obtained using the smc implementation of online em for the stochastic volatility model discussed in example [ ex : stochvol ] .the true value of the parameters were and 500 particles were used .smc - em was started at the initial guess .for the first 100 observations , only the e - step was executed .that is the step , which is the m - step was skipped .smc - em was run in its entirety for observations 101 and onwards .the step size used was for and for .figure [ fig : constantstep ] shows the sequence of parameter estimates computed with a very long observation sequence . .true and converged values ( average of the last 1000 iterations ) are indicated on the left and the right of the plot respectively.,scaledwidth=60.0% ]we proposed a new smc algorithm to compute the expectation of additive functionals recursively in time . essentially , it is an online implementation of the ffbs smc algorithm proposed in .this algorithm has an computational complexity where is the number of particles .it was mentioned how a standard path space smc estimator to compute the same expectations recursively in time could be developed .this would have an computational complexity .however , as conjectured in , it was shown here that the asymptotic variance of the smc - ffbs estimator increased linearly with time whereas that of the method increased quadratically .the online smc - ffbs estimator was then used to perform recursive parameter estimation .while the convergence of rml and online em have been established when they can be implemented exactly , the convergence of the smc implementation of these algorithms have yet to be established and is currently under investigation .the authors would like to thank olivier capp , thomas flury , sinan yildirim and ric moulines for comments and references that helped improve the first version of this paper .the authors are also grateful to rong chen for pointing out the link between the forward smoothing recursion and dynamic programming. finally , we are thankful to robert elliott to have pointed out to us references , and .the proofs in this section hold for any fixed and therefore is omitted from the notation .this section commences with some essential definitions .consider the measurable space .let denote the set of all finite signed measures and the set of all probability measures on .let denote the banach space of all bounded and measurable functions equipped with the uniform norm .let , i.e. is the lebesgue integral of the function w.r.t .the measure . if is a density w.r.t .some dominating measure on then , . 
we recall that a bounded integral kernel from a measurable space into an auxiliary measurable space is an operator from into such that the functions are -measurable and bounded , for any . in the above displayed formulae, stands for an infinitesimal neighborhood of a point in .let denote the dobrushin coefficient of which defined by the following formula where stands the set of -measurable functions with oscillation less than or equal to 1 .the kernel also generates a dual operator from into defined by .a markov kernel is a positive and bounded integral operator with . given a pair of bounded integral operators , we let the composition operator defined by . for timehomogenous state spaces , we denote by the -th composition of a given bounded integral operator , with . given a positive function on ,let be the bayes transformation defined by the definitions above also apply if is a density and is a transition density .in this case all instances of should be replaced with and by where and are the dominating measures .the proofs below will apply to any fixed sequence of observation and it is convenient to introduce the following transition kernels , with the convention that , the identity operator .note that .let the mapping , , be defined as follows several probability densities and their smc approximations are introduced to simplify the exposition .the _ predicted filter _ is denoted by with the understanding that is the initial distribution of .let denote its smc approximation with particles .( this notation for the smc approximation is opted for , instead of the usual , to make the number of particles explicit . )the bounded integral operator from into is defined as is defined for any pair of time indices satisfying with the convention that for and . the smc approximation , , is where is the smc approximation of obtained from the smc - ffbs approximation of section [ sec : ffbs ] , i.e. where the backward markov transition kernels are defined through it is easily established that the smc - ffbs approximation of , , is precisely the marginal of where was defined in ( [ eq : filteringdistribution ] ) . finally , we define the following estimates are a straightforward consequence of assumption ( a ) . for time indices , and for , [ lem : kintchine ] let , , be the natural filtration associated with the -particle approximation model and be the trivial sigma field . for any , there exist a finite ( non random ) constant such that the following inequality holds for all and functions s.t . ,[ lem : lperrorfilter]for any , there exists a constant such that the following inequality holds for all and s.t . , (\varphi ) \right\vert ^{r}\right ) ^{\frac{1}{r}}\leq a_{r}~\sum_{k=0}^{n}~b_{k , n}~\beta\left ( \frac{q_{k , n}}{q_{k , n}(1)}\right ) .\label{eq : lperrorfilter}\ ] ] the following decomposition is central the convention that , for . lemma [ lem : dpn ] states that therefore the decomposition can be also written as the convention , for . 
let every term in the r.h.s .of ( [ eq : decompsn ] ) takes the following form the integral operators are defined as follows , , using ( [ eq : decompsn ] ) and ( [ eq : decompsnterm ] ) , is expressed as the first order term is the second order remainder term is non - asymptotic variance bound is based on the triangle inequality bounds are derived below for the individual expressions on the right - hand side of this equation.using the fact that is zero mean and uncorrelated, following results are needed to bound the right - hand side of ( eq : fisrtordererror ) .first , observe that , and .now using the decomposition , ~\psi _ { q_{k , n}(1)}(\phi _ { k}(\eta _ { k-1}^{n}))(dx_{k}^{\prime } ) , \end{array}\]]it follows that linear functionals of the form ( [ eq : additivefunctionalsimple ] ) , it is easily checked that ( s_{q})+\sum_{k < q\leq n}q_{k , q}(s_{q}~q_{q , n}(1))\]]with the convention , the identity operator , for . recalling that , we conclude that ( s_{q})+\sum_{k <q\leq n}\frac{q_{k , q}(q_{q , n}(1)~s_{q})}{q_{k , q}(q_{q , n}(1))}\]]and therefore ( s_{q})+\sum_{k\leq q\leq n}\frac{q_{k , q}(q_{q , n}(1)~s_{q})}{q_{k , q}(q_{q , n}(1))}\]]thus , the estimates in ( [ eq : contractionest ] ) and ( eq : contractionest2 ) for the contraction coefficients , and the estimate in ( [ eq : contractionest ] ) for , it follows that there exists some finite ( non random ) constant such that the bound for any pair of time indexes satisfying , particle number and choice of functions . the desired bound for ( [ eq : oscboundparticleppn ] ) is now obtained by combining this result with lemma [ lem : kintchine]: is a constant whose value does not depend on .concerning the term in ( [ eq : nonasympvartriainequality]). ^{2}\right\ } ^{\frac{1}{2 } } \notag \\ & \leq \sum_{0\leq k\leq n}\frac{1}{\sqrt{n}}b_{k ,n}\mathbb{e}\left\ { \left [ \sqrt{n}\left ( \eta _ { k}-\eta _ { k}^{n}\right ) \overline{d}_{k , n}(1)\times \sqrt{n}v_{k}^{n}\left ( \overline{d}_{k , n}^{n}(\widetilde{s}_{k , n}^{n})\right ) \right ] ^{2}\right\ } ^{\frac{1}{2 } } \notag \\ & \leq \sum_{0\leq k\leq n}\frac{1}{\sqrt{n}}b_{k , n}\mathbb{e}\left\ { \left [ \sqrt{n}\left ( \eta _ { k}-\eta _ { k}^{n}\right ) \overline{d}_{k , n}(1)\right ] ^{4}\right\ } ^{\frac{1}{4 } } \notag \\ & \times \mathbb{e}\left\ { \left [ \sqrt{n}v_{k}^{n}\left ( \overline{d}_{k , n}^{n}(\widetilde{s}_{k , n}^{n})\right ) \right ] ^{4}\right\ } ^{\frac{1}{4 } } \notag \\ & \leq \frac{1}{\sqrt{n}}e(n+1 ) \label{eq : secondorderrem_l2}\end{aligned}\]]where is a constant whose value does not depend on .the second line follows from ( [ eq : contractionest ] ) and the third by the cauchy - schwartz inequality .the final line was arrived at by the same reasoning used to derive bound ( [ eq : fisrtordererror_meansquare ] ) and lemma [ lem : lperrorfilter ]. the assertion of the theorem may be verified by substituting bounds ( [ eq : fisrtordererror_meansquare ] ) and ( eq : secondorderrem_l2 ) into ( [ eq : nonasympvartriainequality ] ) .let denote the largest integer less than or equal to .since the result is obvious for , let . where and it may be verified that and hence the result follows .rou , f. , del moral , p. and guyader , a. ( 2008 ) . a non asymptotic variance theorem for unnormalized feynman - kac particle models , technical report inria-00337392 .available at http://hal.inria.fr/inria-00337392_v1/ elliott , r.j ., ford , j.j . 
and moore , j.b .( 2002 ) on - line almost - sure parameter estimation for partially observed discrete - time linear systems with known noise characteristics . _control sig ._ , * 16 * , 435 - 453 .kantas , n. , doucet , a. , singh , s.s . and maciejowski , j.m .an overview of sequential monte carlo methods for parameter estimation in general state - space models . in _proceedings ifac system identification _ ( sysid ) meeting .poyiadjis , g. , doucet , a. and singh , s.s .particle approximations of the score and observed information matrix in state - space models with application to parameter estimation . _biometrika _ , to appear . | sequential monte carlo ( smc ) methods are a widely used set of computational tools for inference in non - linear non - gaussian state - space models . we propose a new smc algorithm to compute the expectation of additive functionals recursively . essentially , it is an online or forward - only implementation of a forward filtering backward smoothing smc algorithm proposed in . compared to the standard path space smc estimator whose asymptotic variance increases quadratically with time even under favourable mixing assumptions , the asymptotic variance of the proposed smc estimator only increases linearly with time . this forward smoothing procedure allows us to implement on - line maximum likelihood parameter estimation algorithms which do not suffer from the particle path degeneracy problem . _ some key words _ : expectation - maximization , forward filtering backward smoothing , recursive maximum likelihood , sequential monte carlo , smoothing , state - space models . |
let be a measurable space .we equip it with an assumption that will be explained when required .let be any transition probability on .an homogeneous markov process is naturally associated to . in the target problem , we are interested in the probabilities of reaching the target class within steps , namely in the set is a priori given and does not change through the computations .let be a measurable space and let .let be a sub -algebra of such that .a function ] be the -algebra on ] is the unique probability on s.t . for any , )=2^{-n} ] , it is clear that in `` some '' sense , it must happen that , where is the lebesgue measure on the borel sets of ] , and ] , such that .we write .going back to the example , it is simple to check that } \nu_* ] is the standard topology on ] . for any , we have by that where is the cumulative function of , which implies the weak convergence of to and , therefore , } \nu_* ] , for any , and be the standard topology on ] the borel -algebra on ] . is an _-equivalence on _ if it is a discrete equivalence on and whenever .the choice of on is linked to the total variation distance between probability measures .the total variation distance between two probability measures and is defined by . nowthe total variation of a measure is , where the supremum is taken over all the possible partitions of . as , we have that , see . to each the probability measure on with ( and viceversa ) .therefore , since , we have [ exa : e - cut ] define the -cut as follows . , where denotes the entire part of .then is an -equivalence on .indeed , * is at most countable ( since we divide each ] . in particular ,let be random variables .if , then . see appendix [ app : measuring ] .the problem is that even if a partition is more informative than another one , it is not true that it generates a finer -algebra , _ i.e. _ , the following implication is not always true for any couple of random variables and then lemma [ lem : ordering_1 ] is not invertible , if we do not require the further assumption on the measurable space .this last fact connects the space with the theory of blackwell spaces ( see lemma [ lem : a0_a1 ] ) . we will assume the sole assumption . we give here a counterexample to assumption , where two random variables generate two different sigma algebras with the same set of atomsobviously , assumption does not hold .let be a polish space and suppose .let and consider the sequence that determines , i.e. .let . . as a consequence of lemma [ lem : count_gen_new ] ,there exist two random variables such that and .the atoms of are the points of , and then the atoms of are also the points of , since .we recall here the definition of blackwell spaces . a measurable space is said _ blackwell _ if is a countably generated -algebra of and whenever is another countably generated -algebra of such that , and has the same atoms as . a metric space is blackwell if , when endowed with its borel -algebra , it is blackwell .the measurable space is said to be a _ strongly blackwell space _if is a countably generated -algebra of and * if and only if the sets of their atoms coincide , where and are countably generated -algebras with . for what concerns blackwell spaces ,the literature is quite extensive .d. blackwell proved that every analytic subset of a polish space is , with respect to its relative borel -field , a strongly blackwell space ( see ) . therefore ,if is ( an analytic subset of ) a polish space and , then can not be a weakly blackwell space . 
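a small numerical illustration of the target problem and of state lumping through a discrete equivalence may be useful at this point . in the sketch below , a finite chain is reduced by merging states into classes and re - normalising the transition rows with ( assumed ) lumping weights , and the probability of reaching the target class within n steps is evaluated under both the original and the reduced kernels ; the chain , the grouping and the weights are arbitrary choices made only for illustration and do not correspond to the optimal reduction constructed in the paper .

```python
import numpy as np

# original chain on 6 states; state 5 is the (absorbing) target class
P = np.array([
    [0.50, 0.30, 0.10, 0.05, 0.05, 0.00],
    [0.25, 0.50, 0.10, 0.05, 0.05, 0.05],
    [0.05, 0.05, 0.40, 0.30, 0.10, 0.10],
    [0.05, 0.05, 0.30, 0.40, 0.10, 0.10],
    [0.05, 0.05, 0.10, 0.10, 0.40, 0.30],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

def hit_within(P, start, target, n):
    """probability of reaching the target class within n steps (target made absorbing)."""
    Q = P.copy()
    Q[target] = 0.0
    Q[target, target] = 1.0
    d = np.zeros(P.shape[0]); d[start] = 1.0
    for _ in range(n):
        d = d @ Q
    return d[target]

# a discrete equivalence: states {0,1}, {2,3}, {4} merged, the target {5} kept apart
groups = [[0, 1], [2, 3], [4], [5]]
w = np.ones(P.shape[0])                          # lumping weights (uniform, an assumption)
R = np.zeros((len(groups), len(groups)))         # reduced kernel on the quotient space
for a, ga in enumerate(groups):
    wa = w[ga] / w[ga].sum()
    for b, gb in enumerate(groups):
        R[a, b] = wa @ P[np.ix_(ga, gb)].sum(axis=1)

# compare hitting probabilities under the original and the reduced chains
for n in (1, 2, 5, 10):
    print(n, hit_within(P, 0, 5, n), hit_within(R, 0, 3, n))
```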
to see this ,take a base of and .then and have the same set of atoms ( the points of ) but ( or , equivalently , the identity function is not measurable ) .moreover , as any ( at most ) countable set equipped with any -algebra may be seen as an analytic subset of a polish space , then it is a strongly blackwell space .a. maitra exhibited coanalytic sets that are not blackwell spaces ( see ) .m. orkin constructed a nonanalytic ( in fact nonmeasurable ) set in a polish space that is a blackwell space ( see ) .jasiski showed ( see ) that continuum hypothesis ( ch ) implies that there exist uncountable sierpiski and luzin subsets of which are blackwell spaces ( implying in a strong way that blackwell spaces do not have to be lebesgue measurable or have the baire property ) .jasiski also showed that ch implies that there exist uncountable sierpiski and luzin subsets of which are not blackwell spaces ( implying in a strong way that lebesgue measurable sets and sets with the baire property do not have to be blackwell spaces ) .this latter result is strengthened by r.m .shortt in by showing that ch implies the existence of uncountable sierpiski and luzin subsets of which are highly non - blackwell in the sense that all blackwell subspaces of the two sets are countable .note that assumption and assumption ( a1 ) coincide , as the following lemma states .[ lem : a0_a1 ] let be a measurable space . then holds if and only if holds .lemma [ lem : count_gen_new ] in appendix states that is countably generated if and only if there exists a random variable such that .in addition , as a consequence of lemma [ lem : ordering_1 ] , we have only to prove that ( a1 ) implies . by contradiction ,assume ( a1 ) , , but .we have , and then by ( a1 ) and lemma [ lem : count_gen_new ] . on the other hand , as a consequence of eq . , we have that .we call _ weakly blackwell space _ a measurable space such that assumption holds . if is a weakly blackwell space , then is a weakly blackwell space , for any .moreover , every strong blackwell space is both a blackwell space and a weakly blackwell space whilst the other inclusions are not generally true . in , examples are provided of blackwell spaces which may be shown not to be weakly blackwell .the following example shows that a weakly blackwell space need not be blackwell .let be an uncountable set and be the countable cocountable -algebra on . is easily shown to be not countably generated , and therefore is not a blackwell space .take any countably generated -field , i.e. .* since each set ( or its complementary ) of is countable , then , without loss of generality , we can assume the cardinality of to be countable .* each atom of is of the form note that the cardinality of the set is countable , as it is a countable union of countable sets . as a consequence of, we face two types of atoms : 1 . for any , .this is the atom made by the intersections of all the uncountable generators .this is an uncountable atom , as it is equal to .2 . exists such that .this implies that this atom is a subset of the countable set .therefore , all the atoms ( except ) are disjoint subsets of the countable set and hence they are countable .it follows that the number of atoms of is at most countable .thus , is a strongly blackwell space , i.e. is a weakly blackwell space .[ exa : why_a0 ] suppose ] be defined as . then {e , r}{\epsilon_n } \node[3]{s/\epsilon_n } \end{diagram}\ ] ] obviously , . for the induction step , as , we have that is -measurable , and therefore . 
then .therefore , which implies by lemma [ lem : ordering_1 ] , and hence is optimal .[ cor : uniqueness ] does not depend on the choice of . is optimal , .the optimal projection being unique , we are done .let be defined as in theorem [ thm : pi_infty ] and be given by theorem [ thm : cp+tp ] so that for any . then each of definition [ eq : def_of_p_n ] can be rewritten as } p_\infty(x , f)\mu(dz)}{\mu([x]_n ) } , \qquad \forall x\in x , \forall f\in { \mathcal f}_n,\ ] ] where ] since \neq\varnothing ] if , it follows that equation becomes on the other hand , by equation , }\frac{p_\infty(z , a^{(m)}_i)-p(x , a^{(m)}_i)}{\mu([x]_m)}\mu(dz).\ ] ] the definition of states that whenever \mu ] the closure of a set in a given topology .note that the monotonicity of implies {\tau_{\mathrm{str } } } = \cap_n [ [ f]]_{\tau_{\mathrm{str}_n}}\ ] ] where is the ( discrete ) topology on generated by . since is the intersection of all the topologies , we have {\tau_p } \supseteq [ [ f]]_{\tau_{\mathrm{str } } } = \cap_n [ [ f]]_{\tau_{\mathrm{str}_n } } , \qquad \forall f\in 2^x,\forall \mathrm{str}.\ ] ] let be the closed set in so defined i.e. , is the complementary of an open ball in with center and radius . if we show that , then we are done , since the arbitrary choice of and spans a base for the topology . we are going to prove {\tau_{\mathrm{str } } } = \cap_m [ [ f]]_{\tau_{\mathrm{str}_m } } , \qquad \forall \mathrm{str},\ ] ] which implies {\tau_p } = f ] ; we prove the nontrivial inclusion {\tau_{\mathrm{str}_m}} ] .now , {\tau_{\mathrm{str}_m}} ] , where ] .the last part of the proof is a consequence of lemma [ lem : count_gen_new ] and of the first point , since {\pi_f}\subseteq [ x]_{\pi_g}=g^{-1}(\{g(x)\}),\ ] ] or , equivalently , which is the thesis . note that is countable , generated by . then is a measurable equivalency by lemma [ lem : count_gen_new ] .conversely , we can use the standard approximation technique : if is measurable , let for any .since are discrete random variables , are defined through lemma [ lem : discrete_measur ] . by lemma [ lem : ordering_1 ] and eq . , the thesis will be a consequence of the fact that . by definition , which implies . finally , as , we have , which completes the proof .before proving the theorem , we state the following lemma . .let be the identity relation : .by hypothesis , there exists such that , and thus is injective . then .now , take and let be the relation so defined : since any equivalency is measurable , then there exists such that . but , which shows that , i.e. . .since , there exists an injective function .let be a equivalence relationship on , and define the following equivalence on : by definition of , if we denote by the canonical projection of on , then is such that the axiom of choice ensures the existence of a injective map .then is such that . is measurable since .assume is uncountable .by ch , exists s.t . ( i.e. is in bijection with via ) . take a bijection .then the map is a bijective map from to .equip with the borel -algebra and let . is countably generated and its atoms are all the points in and the set .now , take a non - borel set of the real line . is also countably generated , and its atoms are all the points in and the set , too . since and , is not a weakly blackwell space by lemma [ lem : a0_a1 ] . . since is countable ,then is .therefore , lemma [ lem : discrete_measur ] ensures any equivalence is measurable , since .finally , just note that each countable set is strongly blackwell . 
andthus lemma [ lem : a0_a1 ] concludes the proof .patrick billingsley , _ probability and measure _ , third ed ., wiley series in probability and mathematical statistics , john wiley & sons inc . , new york , 1995 , a wiley - interscience publication .mr mr1324786 ( 95k:60001 ) david blackwell , _ on a class of probability spaces _ , proceedings of the third berkeley symposium on mathematical statistics and probability , 19541955 , vol .ii ( berkeley and los angeles ) , university of california press , 1956 , pp . 16 .mr mr0084882 ( 18,940d ) | on a weakly blackwell space we show how to define a markov chain approximating problem , for the target problem . the approximating problem is proved to converge to the optimal reduced problem under different pseudometrics . a computational example of compression of information is discussed . let be an homogeneous markov chain . suppose the process stops once it reaches an absorbing class , called the target , according to a given stopping rule : the resulting problem is called target problem ( tp ) . the idea is to reduce the available information in order to only use the necessary information which is relevant with respect to the target . a new markov chain , associated with a new equivalent but reduced matrix is defined . in the ( large ) finite case , the problem has been solved for tps : in , it has been proved that any tp on a finite set of states has its `` best target '' equivalent markov chain . moreover , this chain is unique and there exists a polynomial time algorithm to reach this optimum . the question is now to find , in generality , an of the markov problem when the state space is measurable . the idea is to merge into one group the points that -behave the same with respect to the objective , but also in order to keep an almost equivalent markov chain , with respect to the other `` groups '' . the construction of these groups is done through equivalence relations and hence each group corresponds to a class of equivalence . in fact , there are many other mathematical fields where approximation problems are faced by equivalences . for instance , in integration theory , we use simple functions , in functional analysis , we use the density of countable generated subspaces and in numerical analysis , we use the finite elements method . in this paper , the approximation is made by means of discrete equivalences , which will be defined in the following . the purpose of any approximation is to reach the exact solution when . we prove that the sequence of approximations tends to the optimal exact equivalence relation defined in , when we refine the groups . finer equivalence will imply better approximation , and accordingly the limit will be defined as a countably generated equivalence . under a very general blackwell type hypothesis on the measurable space , we show that it is equivalent to speak on countably generated equivalence relationships or on measurable real functions on the measurable space of states . if we do not work under this framework of blackwell spaces , we can be faced to paradoxes , as it is explained by , of enlarging , while decreasing the information available to a decision - maker . the of the markov chain depends always upon the kind of objective . in , jerrum deals with ergodic markov chains . his objective is to approximate the stationary distribution by means of a discrete approximating markov chain , whose limit distribution is close in a certain sense to the original one . 
however , unlike our following work , his purpose is not the explicit and unified construction of the approximating process . in this paper , we focus on the target problem . we solve extensively the tp , where the objective is connected with the conditional probability of reaching the target , namely , for any . this part extends the work in , since tps approximation may help to understand the behavior of those tps where the best equivalent markov chain is also very large . the setting of an approximating problem can be extended to a general form , but we will not develop it in this paper . |
microsoft kinect ( hereafter , simply ` kinect ' ) , a low - cost , portable motion - sensing hardware device , was developed by the microsoft corporation ( microsoft , usa ) as an accessory to the xbox video - game console ( 2010 ) .the sensor is a webcamera - type , add - on peripheral device , enabling the operation of xbox via gestures and spoken commands . in 2011 , microsoft released the software - development kit ( sdk ) for kinect , thus enabling the development of applications in several standard programming languages .the first upgrade of the sensor ( ` kinect for windows v2 ' ) , both hardware- and software - wise , tailored to the needs of xbox one , became available for general development and use in july 2014 .the present paper is part of a broader research programme , aiming at involving either sensor in the analysis of motion data of subjects walking or running ` in place ' ( e.g. , on a commercially - available treadmill ) . if successful , kinect could become an interesting alternative to marker - based systems ( mbss ) in capturing data for motion analysis , one with an incontestably high benefit - to - cost ratio .regarding medical applications of this sensor , e.g. , in physiotherapy in the home environment , a number of products are available , e.g. , by ` home team ' ( massachusetts , usa , https://www.hometeamtherapy.com ) and ` reflexion ' ( california , usa , http://www.reflexionhealth.com ) . in particular , it is known that ` reflexion ' aims at increasing the success rates in rehabilitation ; the participation of the us navy in the tests of their product attests to the importance of the availability of such solutions .the validation of the output of the original kinect sensor in static measurements or in case of slow movements was a popular subject in the recent past .data acquired from healthy subjects , performing three simple tasks ( forward reach , lateral reach , and single - leg , eyes - closed standing balance ) were analysed in ref . . in their work , the authors drew attention to systematic effects in the kinect output , in particular for the sternum .even more optimistic were the results obtained ( and the conclusions drawn ) in ref . , which investigated the accuracy in the determination of the joint angles from data acquired with an original kinect sensor and an mbs ; an inclinometer was assumed to provide the reference or baseline solution ( also called ` ground truth ' ) .the authors concluded that the differences in accuracy and reliability between the two measurement systems were small , thus enabling the use of kinect as `` a viable tool for calculating lower extremity joint angles '' .in fact , the kinect results and those obtained with the inclinometer were found to agree to better than .interestingly , the authors also made a point regarding the depth measurements of the kinect sensor , which are subject to increasing uncertainty with increasing distance from the sensor , reaching a maximal value of about cm at the most distal position of the sensor , which ( according to the specifications ) should not exceed about m. naturally , such a dependence introduces bias in the analysis of walking- and running - motion data acquired with a treadmill .these effects are present regardless of the largeness of the subject s stray motion in depth ; for example , the depth uncertainties are different at the two extreme lower - leg positions , i.e. , ahead of and behind the walker / runner .another medical application of kinect was investigated in ref . 
, namely its use for home - based assessment of movement symptoms in patients with parkinson s disease . in that study ,a number of tasks were performed by subjects , ten of which comprised the control group ; parallel data were acquired with kinect and an mbs .the authors reported that the kinect results were generally ( but not in all cases ) found to correlate well with those obtained from the mbs for a variety of movements . regarding the use of kinect in medical / health - related applications , complacency and optimism were impaired after the paper of bonnech appeared .the authors recorded data from subjects , performing four simple tasks ( shoulder abduction , elbow flexion , hip abduction , and knee flexion ) , and compared the results of different sessions pursuing both their reproducibility within each measurement system , as well as an assessment of the differences between the two systems .the authors concluded that the lower body is not tracked well by kinect .the conclusions of ref . constitute rather disturbing news in terms of applications of the sensor in monitoring walking and running behaviour in a medical / health - related environment .the literature on the biomechanics of motion is extensive .some selected works , relevant to the present study , include refs .earlier scientific works are cited therein , in particular in the review articles . * cavagna and kaneko studied the efficiency of motion in terms of the mechanical work done by the subject s muscles . *cavanagh and lafortune studied the ground reaction forces in running at about km / h , as well as the motion of the ` centre of the pressure distribution ' during the stance phase of the right foot of subjects .relatively large variability is seen in their results , partly due to the different characteristics in the motion of rear - foot and mid - foot strikers , partly reflecting the extent of their database in terms of running experience , weekly training , and ( perhaps , more importantly ) habitual individual behaviour .the vertical component of the ground reaction force , which may be as large as three times the subject s body weight , showed sizeable variability .* cairns , burdett , pisciotta , and simon analysed the motion of ten competitive race - walkers in terms of the ankle flexion , of the knee and hip angles , as well as of the pelvic tilt , obliquity , and rotation .the work discussed the main differences between walking and race - walking , and provided explanations for the peculiarity of the motion in the latter case , invoking the goal of achieving higher velocities ( than in normal walking ) while maintaining double support with fully - extended knee and suppressing the vertical undulations of the subject s centre of mass ( cm ) .* unpuu discussed important aspects of the biomechanics of gait , including the variation of relevant physical quantities within the gait cycle . that work may be used as a starting point for those in seek of an overview in the topic .it must be borne in mind that the subjects used in ref . were children .* novacheck also provided an introduction to the biomechanics of motion .5 and 6 of that work contain the variation of the important angles ( projections on the coronal , sagittal , and transverse planes ) within the gait cycle , at three velocities : km / h ( walking ) , km / h ( running ) , and km / h ( sprinting ) .9 therein provides the variation of the joint moments and powers ( kinesics ) in the sagittal plane within the gait cycle . 
* in a subsequent article , schache , bennell , blanch , and wrigley investigated the inter - relations in the movement of the lumbar spine , pelvis , and hips in running , aiming at optimising the rehabilitation process in case of relevant injuries .it is rather surprising that only one study , addressing the possibility of involving kinect in the analysis of walking and running motion ( i.e. , not in a static mode or in slow motion ) , has appeared so far . using similar methodology to the one proposed herein, the authors in this study came to the conclusion that the original sensor is unsuitable for applications requiring high precision ; after analysing preliminary data , we have come to the same conclusion .of course , it remains to be seen whether any improvement ( in the overall quality of the output ) can be obtained with the upgraded sensor .our aim in the present paper is to develop the theoretical background required for the comparison of the output of two measurement systems used ( or intended to be used ) in the analysis of human motion ; we will give all important definitions and outline meaningful tests .although this methodology has been developed for a direct application in the case of the kinect sensors , other applications may use this scheme of ideas in order to obtain suitable solutions in other cases .the tests we propose in section [ sec : tests ] should be sufficient to identify the important differences in the output of two such measurement systems .as such , they should ( in a comparative study ) pinpoint the essential differences in the performance of the two kinect sensors or ( if the second measurement system is an mbs ) enable the validation of the kinect sensors .the material in the present paper has been organised as follows . in section [ sec : systems ] , the output of the two kinect sensors is described ; subsequently , the output , obtained with one popular marker - placement scheme from an mbs , is detailed .a scheme of association of these two outputs is developed .the definitions of important quantities , used in the description of the motion , are given in section [ sec : method ] .section [ sec : acquisition ] describes one possibility for the data acquisition ; in the second part of this section , we explain how one may extract characteristic forms ( waveforms ) from the motion data , representative of the subject s motion within one gait cycle . in section[ sec : tests ] , we outline our proposal for the necessary tests , to be performed on the waveforms obtained in the previous section .the last part contains a short summary of the paper and outlines two directions in future research .to enable the analysis of the data with the same software application , the mbs output , obtained for the specific marker - placement scheme described in subsection [ sec : pig ] , will be transformed into kinect - output format , using reasonable associations between the kinect nodes and the marker locations ; due to the removal of the constant offsets in the data analysis ( see subsection [ sec : analysis3 ] ) , the exact matching between the kinect nodes and the locations at which these markers are placed is not essential . in the original kinect sensor , the skeletal data( ` stick figure ' ) of the output comprises time series of three - dimensional ( 3d ) vectors of spatial coordinates , i.e. , measurements of the ( ,, ) coordinates of the nodes which the sensor associates with the axial and appendicular parts of the human skeleton . 
in coronal ( frontal )view of the subject ( sensor view ) , the kinect coordinate system is defined with the axis ( medial - lateral ) pointing to the left ( i.e. , to the right part of the body of the subject being viewed ) , the axis ( vertical ) upwards , and the axis ( anterior - posterior ) away from the sensor , see fig .[ fig : kinect ] . the nodes to are main - body nodes , identified as hip_center , spine , shoulder_center , and head .the nodes to relate to the left arm : shoulder_left , elbow_left , wrist_left , and hand_left ; similarly , the nodes to on the right arm are : shoulder_right , elbow_right , wrist_right , and hand_right .the eight remaining nodes pertain to the legs , the first four to the left ( hip_left , knee_left , ankle_left , and foot_left ) , the remaining four to the right ( hip_right , knee_right , ankle_right , and foot_right ) leg of the subject. the nodes of the original sensor may be seen in fig .[ fig : nodeskinect ] . in the upgraded kinect sensor ,some modifications have been made in the naming ( and placement ) of some of the nodes .the original node hip_center has been replaced by spine_base ( and appears slightly shifted downwards ) ; the original node spine has been replaced by spine_mid ( and appears slightly shifted upwards ) ; finally , the original node shoulder_center has been replaced by neck ( and also appears slightly shifted upwards ) .five new nodes have been appended at the end of the list ( which was a good idea , as this action enables easy adaption of the analysis code processing the kinect output ) , one of which is a body node ( spine_shoulder , node ) , whereas four nodes pertain to the subject s hands , hand_tip_left ( ) , thumb_left ( ) , hand_tip_right ( ) , and thumb_right ( ) .evidently , emphasis in the upgraded sensor is placed on the orientation of the subject s hands ( i.e. , on gesturing ) . in both versions , parallel to the captured video image, kinect acquires an infrared image , generated by the infrared emitter ( seen on the left of the original sensor in fig .[ fig : kinect ] ) ; captured with a ccd camera , this infrared image provides the means of extracting information on the depth .the sampling rate in the kinect output ( for the video and the skeletal data , for both versions of the sensor ) is hz .the description of the algorithm , used in the determination of the 3d positions of the skeletal joints of the subject being viewed by the original sensor , may be found in ref .candidate values for the 3d positions of each skeletal joint are obtained via the elaborate analysis of each depth image separately .these positions may be used as starting points in an analysis featuring the temporal and kinematic coherence in the subject s motion ; it is not clear whether such a procedure has been hardcoded in the preprocessing ( hardware processing ) of the captured data .shotton define body segments covering the human body , some of which are used in order to localise skeletal joints , some to fill the gaps or yield predictions for other joints . in the development of their algorithm ,shotton generated static depth images of humans ( of children and adults ) in a variety of poses ( synthetic data ) .the application of their method results in the extraction of probability - distribution maps for the 3d positions of the skeletal joints ; their joint proposals represent the modes ( maxima ) in these maps . 
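since the upgraded sensor renames three of the main - body nodes and appends five new nodes at the end of the list , analysis code can carry one node - to - index map per sensor version . the sketch below encodes the two skeletal layouts described above ; the index ordering follows the order in which the nodes are listed in the text and is otherwise an assumption .

```python
# node ordering for the original sensor (20 nodes), in the order listed in the text
KINECT_V1 = [
    "HIP_CENTER", "SPINE", "SHOULDER_CENTER", "HEAD",
    "SHOULDER_LEFT", "ELBOW_LEFT", "WRIST_LEFT", "HAND_LEFT",
    "SHOULDER_RIGHT", "ELBOW_RIGHT", "WRIST_RIGHT", "HAND_RIGHT",
    "HIP_LEFT", "KNEE_LEFT", "ANKLE_LEFT", "FOOT_LEFT",
    "HIP_RIGHT", "KNEE_RIGHT", "ANKLE_RIGHT", "FOOT_RIGHT",
]

# upgraded sensor: three body nodes renamed, five nodes appended (25 nodes in total)
RENAMED = {"HIP_CENTER": "SPINE_BASE", "SPINE": "SPINE_MID", "SHOULDER_CENTER": "NECK"}
KINECT_V2 = [RENAMED.get(n, n) for n in KINECT_V1] + [
    "SPINE_SHOULDER", "HAND_TIP_LEFT", "THUMB_LEFT", "HAND_TIP_RIGHT", "THUMB_RIGHT",
]

def node_index(layout):
    """map node name -> column index into a (frames, nodes, 3) coordinate array."""
    return {name: i for i, name in enumerate(layout)}

IDX_V1, IDX_V2 = node_index(KINECT_V1), node_index(KINECT_V2)
# the first twenty slots are identical in the two layouts, which is what makes
# analysis code written for the original sensor easy to adapt to the upgraded output
assert IDX_V1["KNEE_LEFT"] == IDX_V2["KNEE_LEFT"] == 13
```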
according to the authors , the probability - distribution maps are both accurate and stable , even without the imposition of temporal or kinematic constraints .it must be borne in mind that the ` 3d positions of the joints ' of ref . are essentially produced from the ` 3d positions of the projections of the joints onto the front part of the human body ' after applying a ` shift ' in depth ( i.e. , from the surface to the interior of the human body ) , namely a constant offset ( ) of mm ( see end of section 3 of ref . ) . although the ` computational efficiency and robustness ' of the procedure are praised in ref . , it remains to be seen whether results of similar quality can be obtained in dynamic applications ( e.g. , when the subject is in motion ) .featuring several cameras , viewing the subject from different directions , mbss provide powerful object - tracking solutions , yielding high - quality , low - latency data , at frame rates exceeding that of the kinect sensors .such systems reliably reconstruct the time series of the spatial coordinates of markers ( reflective balls , flat markers , active markers , etc . )directly attached to the subject s body or to special attire worn by the subject .one popular placement scheme of the markers , known as ` plug - in gait ' , uses a total of markers ( see table [ tab : mbsmarkers ] ) . the mbs output for these markersmay be transformed into kinect - output format ( for simplicity , we refer to the naming of the nodes in the original kinect sensor ) by using the following association scheme . *the kinect - equivalent head is assigned to the midpoint of the marker positions lfhd and rfhd .the marker positions lbhd and rbhd , pertaining to the back of the head , are not used . *the kinect - equivalent shoulder_center is taken to be the marker position clav .the marker positions c7 and rbak , which are placed on the back part of the body , are not used in comparisons with data acquired with the original kinect sensor ; in the upgraded kinect sensor , spine_shoulder may be identified with c7 . *the kinect - equivalent spine is estimated as an average of the marker positions t10 , lpsi , and rpsi . *the kinect - equivalent shoulder_left and shoulder_right are taken to be the marker positions lsho and rsho , respectively . regarding the upper part of the body , the marker positions lupa , lfra , rupa , and rfraare not used .* the kinect - equivalent elbow_left and elbow_right are taken to be the marker positions lelb and relb , respectively . *the kinect - equivalent wrist_left and wrist_right are assigned to the midpoints of the marker positions lwra and lwrb , and of rwra and rwrb , respectively .* the kinect - equivalent hand_left and hand_right are taken to be the marker positions lfin and rfin , respectively . *the kinect - equivalent knee_left and knee_right are taken to be the corrected ( according to ref . ) marker positions lkne and rkne , respectively . *the kinect - equivalent ankle_left and ankle_right are taken to be the corrected ( according to ref . ) marker positions lank and rank , respectively .* the kinect - equivalent foot_left and foot_right are taken to be the marker positions ltoe and rtoe , respectively . *the kinect - equivalent hip_left and hip_right positions are evaluated from those of the marker positions lasi , rasi , lpsi , and rpsi , according to ref . . 
regarding the procedure set forth in that paper , a few comments are due .the positions of the hips are obtained therein using a model for the geometry of the pelvis , featuring three parameters ( , , and ) , the values of which had been obtained from a statistical analysis of radiographic data of subjects ; however , the values of these parameters are poorly known ( see page 583 of ref . ) . a simple analysis of the uncertainties given in ref . shows that , when following that method , the resulting uncertainties in the estimation of the positions of the hips are expected to exceed about mm in each spatial direction . as a result, the positions of the hips , calculated from the mbs output according to that procedure , should not be considered as accurate as the rest of the information obtained from the mbs .more importantly , it is not evident how the movement of the pelvis reflects itself in the motion of the four markers which are used in the extraction of its position and orientation ; it is arguable whether any markers , placed on the surface of the human body , can capture the pelvic motion accurately . *the kinect - equivalent hip_center is estimated as an average of the kinect - equivalent hip_left and hip_right , and of the marker position strn . * regarding the lower part of the body , the marker positions lthi , ltib , lhee , rthi , rtib , and rheeare not used . in regardto the markers placed on the human extremities , it must be borne in mind that their positions are also affected by rotations , not only by the translational motion of these extremities ; the markers are placed at some distance from the actual rotation axes , coinciding with the longest dimension of the upper- and lower - extremity bones . for instance , rotating the left humerus by around its long axis ( assumed , for the sake of the argument , to align with the vertical axis ) will result in a movement of the marker lelb along a circular arc , thus affecting its and coordinates . on the other hand , the kinect nodes are rather placed _ on _ ( or , in any case , closer to ) the rotation axes ; as a result, it is expected that they are less affected by such rotations . as such effectscan not be easily accounted for , it is evident that the association scheme , proposed in the present section , can only lead to an approximate comparison of the output of the two measurement systems .we will next describe how one may obtain estimates of three important angles in the sagittal plane , representing the level of flexion of the trunk , of the hip , and of the knee .estimates for the left and right parts of the body will be obtained for the hip and knee angles . * * trunk angle*. this angle is obtained from the ( , ) coordinates of four points , comprising the nodes ( hip_center ) , ( shoulder_center ) , and two midpoints , namely of the nodes ( hip_left ) and ( hip_right ) , and of the nodes ( shoulder_left ) and ( shoulder_right ) .an unweighted least - squares fit on the ( , ) coordinates of these four points ( spine ) , this node should not be included in estimations involving the coordinate . ]yields the slope ( with respect to the axis ) of the optimal straight line .the trunk angle is defined as ; in the upright position , positive for forward leaning . * * hip angle*. 
two definitions of the hip angle have appeared in the literature : the angle may be defined with respect to the trunk or to the axis ; in the present paper , we adopt the latter definition .if the relevant hip coordinates are ( , ) and those of the knee are ( , ) , the hip angle is obtained via the expression : two hip angles will be obtained : the left - hip angle uses the nodes ( hip_left ) and ( knee_left ) ; the right - hip angle uses the nodes ( hip_right ) and ( knee_right ) . ** knee angle*. this is the angle between the femur ( thigh ) and the tibia ( shank ) .two definitions of the knee angle have appeared in the literature : the knee angle may be or in the extended position of the knee ; we adopt the latter definition .it will shortly become clear why we make use of both the sine and the cosine of the knee angle : and where the coordinates of the ankle are denoted as ( , ) , and and are the projected lengths of the femur and the tibia onto the sagittal plane , respectively : and we define the knee angle as : two knee angles will be obtained : the left - knee angle uses the nodes ( hip_left ) , ( knee_left ) , and ( ankle_left ) ; the right - knee angle uses the nodes ( hip_right ) , ( knee_right ) , and ( ankle_right ) .we define four angles in the coronal plane : the lateral trunk , the lateral hip , the lateral knee , and the lateral pelvic angles ; the lateral pelvic angle is also called pelvic obliquity .estimates for the left and right parts of the body will be obtained for the lateral hip and lateral knee angles . * * lateral trunk angle*. the same four points , which had been used in the evaluation of the trunk angle in the sagittal plane , are also used in extracting an estimate of the lateral trunk angle ; of course , the ( , ) coordinates of these points must be used now .in addition to these nodes , node ( spine ) may also be used .the lateral trunk angle is defined with respect to the axis ; in the upright position , positive for tilting in the positive direction ( tilt of the subject to his / her right ) .* * lateral hip angle*. this angle describes hip abduction / adduction in the coronal plane .similarly to the hip angle in the sagittal plane , two definitions of the lateral hip angle are possible : the angle may be defined with respect to the trunk or to the axis ; herein , we adopt the latter definition .if the relevant hip coordinates are ( , ) and those of the knee are ( , ) , the lateral hip angle is obtained via the expression : two lateral hip angles will be obtained : the lateral left - hip angle uses the nodes ( hip_left ) and ( knee_left ) ; the lateral right - hip angle uses the nodes ( hip_right ) and ( knee_right ) . * * lateral knee angle*. this is the projection of the angle between the femur and the tibia onto the coronal plane . where and are now redefined as the projected lengths of the femur and the tibia onto the coronal plane , respectively : and the angle is defined positive when , with respect to the femur direction , the ankle appears ( in coronal view ) ` further away ' from the subject s body .of course , two lateral knee angles may be defined , corresponding to the left and right parts of the human body , and , respectively . ** pelvic obliquity*. this angle is defined as : where ( , ) and ( , ) are the ( , ) coordinates of the left and right hips , respectively . 
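a minimal sketch of how the sagittal - plane hip and knee angles defined above might be evaluated from the node coordinates is given below . since the expressions quoted above are not reproduced verbatim here , the code takes the hip angle as the deviation of the hip - to - knee segment from the downward vertical and the knee angle as the signed angle between the projected femur and tibia ( zero at full knee extension ) ; the overall signs and the choice of the ( z , y ) projection for the sagittal plane are assumptions of this illustration .

```python
import numpy as np

def sagittal_angles(hip, knee, ankle):
    """hip and knee angles (degrees) in the sagittal (z-y) plane from 3d node positions.

    hip, knee, ankle: arrays of shape (..., 3) holding (x, y, z) kinect coordinates.
    the hip angle is taken w.r.t. the vertical (y) axis, the knee angle is the angle
    between the projected femur and tibia, zero when the knee is fully extended;
    both sign conventions are assumptions of this sketch."""
    hip, knee, ankle = (np.asarray(a, float) for a in (hip, knee, ankle))
    femur = knee[..., [2, 1]] - hip[..., [2, 1]]      # (z, y) projection of hip -> knee
    tibia = ankle[..., [2, 1]] - knee[..., [2, 1]]    # (z, y) projection of knee -> ankle
    # hip angle: deviation of the femur from the downward vertical direction (0, -1)
    theta_hip = np.degrees(np.arctan2(femur[..., 0], -femur[..., 1]))
    # knee angle: atan2 of the sine (cross product) over the cosine (dot product)
    cross = femur[..., 0] * tibia[..., 1] - femur[..., 1] * tibia[..., 0]
    dot = (femur * tibia).sum(axis=-1)
    theta_knee = np.degrees(np.arctan2(cross, dot))
    return theta_hip, theta_knee

# example with made-up coordinates (metres), one leg, single frame
hip, knee, ankle = [0.10, 0.90, 2.50], [0.10, 0.50, 2.45], [0.10, 0.10, 2.55]
print(sagittal_angles(hip, knee, ankle))
```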
in regard to motion analysis, a few additional angles may be found in the literature : the pelvic tilt and the angle describing the plantarflexion / dorsiflexion of the foot are defined in the sagittal plane ; the hip , pelvic , and foot rotations in the transverse plane .we do not believe that the kinect output can yield reliable ( if any ) information on these quantities .the knee angle , obtained from the 3d vectors ( ,, ) and ( ,, ) , will be called ` knee angle in 3d ' ; it is easily evaluated using expressions analogous to eqs .( [ eq : eq02])-([eq : eq04 ] ) . in view of the fact that the angle between 3d vectors is invariant under rotations ( so(3 ) rotation group ) and translations in 3d ,the knee angle in 3d is independent of the details regarding the alignment between the relevant coordinate systems ( e.g. , between the kinect sensor and the mbs coordinate systems ) .two last comments are due . 1 .the trunk angle is positive in walking and running ; it is difficult to maintain balance if one leans backwards while moving forwards .however , the trunk angle , obtained from the kinect output , is frequently negative .this is due to the fact that the nodes of the kinect output , which enter the evaluation of , do not represent locations on the spine .2 . due to the properties of the knee joint, the knee angle is expected to satisfy the condition . in practice , even in the fully - extended position , remains ( for many subjects ) positive ; knee hyperextension is a deformity .however , owing to the placement of the nodes by kinect , the knee angle ( estimated from the kinect output ) may occasionally come out negative . to examine further such cases ,we retain eq .( [ eq : eq04 ] ) in the evaluation of the knee angle .one possibility to avoid these effects is to extract robust measures for the selected physical quantities from the data .for instance , one could use the variation of these quantities within the gait cycle or even their range of motion ( rom ) , i.e. , the difference between the maximal and minimal values within the gait cycle .as long as an extremity moves as one rigid object , such measures ( being differences of two values ) are not affected by a constant bias which may be present in the data .we propose that the similarity of corresponding waveforms ( representing the variation of a quantity within the gait cycle , see subsection [ sec : analysis3 ] ) be judged on the basis of one ( or more ) of the following scoring options : pearson s correlation coefficient , the zilliacus error metric , the rms error metric , whang s score , and theil s score .assuming that a ( -centred ) waveform from measurement system ( e.g. , from one of the kinect sensors ) is denoted by and the corresponding ( -centred ) waveform from measurement system ( e.g. , from the mbs ) by , these five scoring options are defined in eqs .( [ eq : eq08])-([eq : eq12 ] ) ( for details on the original works , see ref . ) ; all sums are taken from to , where stands for the number of bins used in the histograms yielding these waveforms .( in our analyses , we normally use . ) in case of identical waveforms ( from the two measurement systems ) , ; all other scores vanish ( ) . 
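the five scoring options listed above can be written compactly in code . since the explicit forms of eqs . ( [ eq : eq08])-([eq : eq12 ] ) are not reproduced in the text above , the expressions used below are the commonly quoted forms of the zilliacus and rms error metrics and of their symmetrised ( whang and theil ) counterparts ; they should be treated as assumptions standing in for the paper s own definitions , although they do satisfy the stated properties ( identical waveforms give r = 1 and all error metrics equal to zero ) .

```python
import numpy as np

def waveform_scores(s1, s2):
    """similarity scores between two zero-centred waveforms of equal length.

    the error-metric expressions below are the commonly quoted forms of the
    zilliacus, rms, whang and theil metrics; they are assumptions standing in
    for eqs. (8)-(12), whose exact normalisation is not reproduced here."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    d = s1 - s2
    pearson_r = np.corrcoef(s1, s2)[0, 1]
    zilliacus = np.sum(np.abs(d)) / np.sum(np.abs(s2))
    rms = np.sqrt(np.sum(d ** 2)) / np.sqrt(np.sum(s2 ** 2))
    whang = np.sum(np.abs(d)) / (np.sum(np.abs(s1)) + np.sum(np.abs(s2)))
    theil = np.sqrt(np.sum(d ** 2)) / (np.sqrt(np.sum(s1 ** 2)) + np.sqrt(np.sum(s2 ** 2)))
    return {"pearson": pearson_r, "zilliacus": zilliacus, "rms": rms,
            "whang": whang, "theil": theil}

# toy check with two noisy copies of the same gait-cycle waveform (100 bins)
t = np.linspace(0.0, 1.0, 100)
base = np.sin(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t)
rng = np.random.default_rng(2)
w1 = base + 0.05 * rng.standard_normal(100)
w2 = base + 0.05 * rng.standard_normal(100)
w1, w2 = w1 - w1.mean(), w2 - w2.mean()          # zero-centred, as in the text
print(waveform_scores(w1, w2))
```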
evidently , whang s score is the symmeterised version of the zilliacus error metric , whereas theil s score is the symmeterised version of the rms error metric .although the differences between the zilliacus and the rms error metric are generally small ( as are those between whang s and theil s scores ) , we make use of all aforementioned scoring options in our research programme . other ways for testing the similarity of the output of different measurement systems have been put forth .for instance , some authors favour the use of the ` coefficient of multiple correlation ' ( cmc ) .ferrari , cutti , and cappello define the cmc as : ^{1/2 } \ , \ , \ , , \ ] ] where the triple array contains the entire data , i.e. , waveforms of dimension ( depends on the gait cycle in ref . ) ; is the number of measurement systems being used in the study ( ` protocols ' , in the language of ref . ) and denotes the number of waveforms obtained within each measurement system . the averages and in eq .( [ eq : eq13 ] ) are defined as : unlike pearson s correlation coefficient , ` directional information ' for the association between the tested quantities is lost when using the cmc in an analysis . in its first definition , the cmc was bound between and .however , the quantity cmc , obtained with eq .( [ eq : eq13 ] ) , is frequently imaginary ( the ratio of the triple sums may be larger than ) ; this is due to the use of , instead of the grand mean ( along with the normalisation factor , instead of ) , in the denominator of the expression . importantly ,it is unclear how the obtained cmc values relate to the goodness of the association between the tested waveforms .the association scheme of ref . is arbitrary ; there is no theoretical justification for such an interpretation of the cmc results . the basic problem in testing the similarity of the waveforms lies with the fact that the established tests in correlation theory enable the acceptance or the rejection of the hypothesis that the observed effects can be accounted for by an underlying correlation of ` strength ' , where .the test when involves the transformation : the variable is expected to follow the -distribution ( student s distribution ) with degrees of freedom ( dof ) .the tests when involve fisher s transformation ; the details may be found in standard textbooks on statistics .no tests are possible when , i.e. , when attempting to judge the _ goodness _ of the association between waveforms , if ideally the waveforms should be identical .the only tests which can be carried out in such a case are those involving , i.e. , investigating the presence of a statistically - significant correlation between the tested waveforms when the null hypothesis for no such effects is assumed to hold . in practice , the one - sided tests for dof result in the rejection of the null hypothesis at the significance level of when and at the significance level of when .formal , well - defined ( in the mathematical sense ) ways to compare waveforms do exist . 
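the test for the null hypothesis mentioned above , namely that the observed correlation between two waveforms is compatible with a true correlation of zero , uses the standard transformation of pearson s r to a student s t statistic with n - 2 degrees of freedom . the short sketch below carries out the one - sided test and also inverts the transformation to give the critical value of r at a chosen significance level ; the waveform length used is an assumed value .

```python
import numpy as np
from scipy import stats

def pearson_significance(r, n):
    """one-sided p-value for the null hypothesis rho = 0, given a sample
    correlation r computed from n paired values (n - 2 degrees of freedom)."""
    t = r * np.sqrt((n - 2) / (1.0 - r ** 2))
    return stats.t.sf(t, df=n - 2)

# with n = 100 bins per waveform (an assumed value), the critical correlation
# at a given one-sided significance level follows by inverting the transformation
n = 100
for alpha in (0.05, 0.01):
    t_crit = stats.t.ppf(1.0 - alpha, df=n - 2)
    r_crit = t_crit / np.sqrt(n - 2 + t_crit ** 2)
    print(alpha, round(float(r_crit), 3))
```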
as a general rule , the application of rigorous tests has the tendency to yield significant discrepancies in many cases , even when a judgment based on a visual inspection of the tested quantities is favourable .a ) one possibility would be to obtain the uncertainties in the histogram bins and make use of a function to assess the goodness of the association .the variability of the output across different sensors could also be assessed and this additional uncertainty could be taken into account in the tests .b ) another possibility would be to invoke analysis of variance ( anova ) , defining the reduced ` within - treatments ' variation as and the reduced ` between - treatments ' variation as appearing in these expressions are two average waveforms : the average waveform obtained with measurement system : and the grand - mean waveform : the ratio is expected to follow fisher s distribution with and dof .the resulting p - value enables a decision on the acceptance or rejection of the null hypothesis , i.e. , of the observed effects being due to statistical fluctuation .c ) a third possibility would be to histogram the _ difference _ of corresponding waveforms obtained from the two measurement systems within the same gait cycle ; the decision on whether the final waveform is significantly different from can be made on the basis of a number of tests , including tests for the constancy and shape of the result of the histogram . nevertheless , to retain simplicity in the present paper , we have decided to make use in the data analysis of the simple scoring options introduced by eqs.([eq : eq08])-([eq : eq12 ] ) .the data acquisition may involve subjects walking and running on commercially - available treadmills .the placement of the treadmill must be such that the motion of the subjects be neither hindered nor influenced in any way by close - by objects .prior to the data - acquisition sessions , the two measurement systems must be calibrated and the axes of their coordinate systems be aligned ( spatial translations are insignificant ) .the measurement systems must then be left untouched throughout the data acquisition .the original kinect sensor also provides information on the elevation ( pitch ) angle at which it is set . during our extensive tests, we discovered that this information is not reliable , at least for the particular device we used in our experimentation . to enable the accurate determination of the elevation angle of the kinect sensor , we set forth a simple procedure .the subject stands ( in the upright position , not moving ) at a number of positions on the treadmill belt , and static measurements ( e.g. , s of kinect data ) at these positions are obtained and averaged .the elevation angle of the kinect sensor may be easily obtained from the slope of the average ( over a number of kinect nodes , e.g. , of those pertaining to the hips , knees , and ankles ) ( , ) coordinates corresponding to these positions .the output data , obtained from the kinect sensor , must be corrected ( off - line ) accordingly , to yield the appropriate spatial coordinates in the ` untilted ' coordinate system . to prevent kinect from re - adjusting the elevation angle during the data acquisition ( which is a problematic feature ) , we attach its body unto a plastic structure mounted on a tripod .it is worth mentioning that , as we are interested in capturing the motion of the subject s lower legs ( i.e. 
, of the ankle and foot nodes ) , the kinect sensors must be placed at such a height that the number of lost lower - leg signals be kept reasonably small .our past experience dictates that the kinect sensor must be placed close to the minimal height recommended by the manufacturer , namely around ft off the ( treadmill - belt ) floor . placing the sensor higher( e.g. , around the midpoint of the recommended interval , namely at ft off the treadmill - belt floor ) leads to many lost lower - leg signals leg ( the ankle and foot nodes are not tracked ) , as the lower leg is not visible by the sensor during a sizeable fraction of the gait cycle , shortly after the toe - off ( to ) instant .the kinect sensor may lose track of the lower parts of the subject s extremities ( wrists , hands , ankles , and feet ) for two reasons : either due to the particularity of the motion of the extremity in relation to the position of the sensor ( e.g. , the identification of the elbows , wrists , and hands becomes problematic in some postures , where the viewing angle of the ulnar bone by kinect is small ) or due to the fact that these parts of the human body are obstructed ( behind the subject ) for a fraction of the gait cycle . assuming that these instances remain rare ( e.g. , below about of the available data in each time series , namely one frame in ) , the missing values may be reliably obtained ( interpolated ) from the well - determined ( tracked ) data .although , when normalised to the total number of the available values , the untracked signals usually appear ` harmless ' as they represent a small fraction of the total amount of measurements , particular attention must paid in order to ensure that no node be significantly affected , as in such a case the interpolation might not yield reliable results .a few velocities may be used in the data acquisition : walking - motion data may be acquired at km / h ; running - motion data at and km / h . at each velocity setting , the subject must be given time ( e.g. , min ) to adjust his / her movements comfortably to the velocity of the treadmill belt .to obtain reliable waveforms from the kinect - captured data , we recommend measurements spanning at least min at each velocity .the subject s motion is split into two components : the motion of the subject s cm and the motion of the subject s body parts relative to the cm .of course , the accurate determination of the coordinates of the subject s physical cm from the kinect or mbs output is not possible . as a result ,the obtained cm should rather be considered to be one reference point , moving synchronously with the subject s physical cm .ideally , these two points are related via a simple spatial translation ( involving an unknown , yet constant 3d vector ) at all times ; if this condition is fulfilled , the obtained cm may be safely identified as the subject s physical cm , because a constant spatial separation between these two points does not affect the evaluation of the important quantities used in the modelling of the motion . at all time frames , we obtain the coordinates of the subject s cm from seven nodes , namely from the first three main - body nodes to , from the shoulder nodes and , as well as from the hip nodes and . being subject to considerable movement in walking and running motion , the node ( head ) is not included in the determination of the coordinates of the subject s cm . 
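a minimal sketch of the two preprocessing steps just described, interpolation of the rare untracked samples and extraction of a cm reference point, is given below; linear interpolation and equal weighting of the seven nodes are assumptions, and the node names are placeholders for whichever labels the acquisition software uses.

import numpy as np

def fill_untracked(t, x, tracked):
    # linearly interpolate the coordinate samples x(t) over the instants at
    # which the node was not tracked; 'tracked' is a boolean mask. this is
    # only reliable while the untracked fraction stays small (about one
    # frame in 100, as suggested above).
    t = np.asarray(t, float)
    x = np.asarray(x, float).copy()
    tracked = np.asarray(tracked, bool)
    x[~tracked] = np.interp(t[~tracked], t[tracked], x[tracked])
    return x

def centre_of_mass(nodes):
    # reference point moving with the subject's physical cm, obtained as the
    # plain average of seven node trajectories (three main-body nodes, the
    # two shoulder nodes and the two hip nodes); the head is excluded.
    # 'nodes' is a dict of (T, 3) arrays; the key names below are placeholders.
    keys = ['spine_base', 'spine_mid', 'spine_shoulder',
            'shoulder_left', 'shoulder_right', 'hip_left', 'hip_right']
    return np.mean([np.asarray(nodes[k], float) for k in keys], axis=0)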
prior to further processing ,the cm offsets ( ,, ) are removed from the data ; thus , the motion is defined relative to the subject s cm at all times .( the angles , defined in subsection [ sec : definitions ] , involve differences of corresponding coordinates ; as a result , they are not affected by the removal of the cm offsets from the data . )the largeness of the ` stray ' motion of the subject may be assessed on the basis of the root - mean - square ( rms ) of the , , and distributions .to investigate the stability of the motion over time , the data may be split into segments . in our data analysis, the duration of these segments may be chosen at will ; up to the present time , we have made use of and s segments in the analysis of the kinect - captured data . within each of these segments , information which may be considered ` instantaneous ' is obtained , thus enabling an examination of the ` stability ' of the subject s motion at the specific velocity ( see subsection [ sec : analysis2 ] ) .the symmetry of the motion for the left and right parts of the human body may be investigated by comparing the corresponding waveforms .finally , the largeness of the motion of the extremities may be examined on the basis of the roms obtained from these waveforms .we subsequently address some of these issues in somewhat more detail .ideally , the period of the gait cycle is defined as the time lapse between successive time instants corresponding to identical postures of the human body ( position and direction of motion of the human - body parts with respect to the cm ) .( of course , the application of ` identicalness ' in living organisms is illusional ; no two postures can ever be expected to be identical in the formal sense . )we define the period of the gait cycle as the time lapse between successive most distal positions of the same lower leg ( i.e. , of the ankle or of the ankle - foot midpoint ) .the arrays of time instants , at which the left or right lower leg is at its most distal position with respect to the subject s instantaneous cm , may be used in timing the waveforms corresponding to the left or right part of the human body .the period of the gait cycle is related to two other quantities which are used in the analysis of motion data . *the stride length is the product of the velocity and the period of the gait cycle : . *the cadence is defined as the number of steps per unit time ; one commonly - used unit is the number of steps per min .it has been argued ( e.g. , by daniels ) that the minimal cadence in running motion should be ( optimally ) steps per min , implying a maximal period of the gait cycle of s. to examine the constancy of the period of the gait cycle throughout each session ( according to our definition , each session involves _one _ velocity ) , the values of the instantaneous period of the gait cycle are submitted to further analysis . the overall constancy is judged on the basis of a simple test , assessing the goodness of the representation of the input data by one overall average value ; the resulting p - value is obtained from the minimal value for the given number of dof , i.e. , for the number of data segments reduced by one unit . to assess statistical significance, we favour the use of the p - value threshold of , which is a popular choice among statisticians . 
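the timing quantities defined above can be extracted as in the sketch below; treating the local maxima of the cm-relative coordinate along the direction of motion as the 'most distal' instants, counting two steps per gait cycle, and assuming gaussian per-segment errors in the constancy test are all assumptions.

import numpy as np
from scipy import stats

def distal_instants(t, z_rel):
    # instants at which the lower leg (ankle or ankle-foot midpoint) is at
    # its most distal position with respect to the cm: local maxima of the
    # cm-relative coordinate z_rel along the direction of motion
    z = np.asarray(z_rel, float)
    idx = np.where((z[1:-1] > z[:-2]) & (z[1:-1] >= z[2:]))[0] + 1
    return np.asarray(t, float)[idx]

def period_stride_cadence(t_distal, v_kmh):
    # gait-cycle periods, stride length (m) and cadence (steps per minute)
    # from the distal instants and the treadmill-belt velocity
    T = np.diff(t_distal)
    T_mean = T.mean()
    stride = (v_kmh / 3.6) * T_mean      # stride length = velocity x period
    cadence = 2 * 60.0 / T_mean          # two steps per gait cycle assumed
    return T, stride, cadence

def constancy_test(T_segments, T_errors):
    # chi-squared test of the representation of the per-segment periods by
    # one overall (weighted) average; dof = number of segments minus one
    T = np.asarray(T_segments, float)
    e = np.asarray(T_errors, float)
    mean = np.sum(T / e ** 2) / np.sum(1.0 / e ** 2)
    chi2 = np.sum(((T - mean) / e) ** 2)
    return chi2, stats.chi2.sf(chi2, len(T) - 1)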
using the time - instant arrays from the analysis of the left and right lower - leg signals ( as described in subsection [ sec : analysis1 ] ) , each time series ( pertaining to a specific node and spatial direction ) is split into one - period segments , which are subsequently superimposed and averaged , to yield a representative movement for the node and spatial direction over the gait cycle .finally , one average waveform for each node and spatial direction is obtained , representative of the motion at the particular velocity .the investigation of the asymmetry in the motion rests on the comparison of the waveforms obtained for corresponding left and right nodes , and spatial directions .average waveforms for all nodes and spatial directions , representing the variation of the motion of that node ( in 3d ) within the gait cycle , are extracted separately for the left and right nodes of the extremities ; waveforms are also extracted for the important angles introduced in subsection [ sec : definitions ] .as mentioned in subsection [ sec : analysis1 ] , the time instant at which the subject s left ( right ) lower leg is at its most distal position ( with respect to the subject s cm ) marks the start of each gait cycle ( as well as the end of the previous one ) , suitable for the study of the left ( right ) part of the human body . in case that left / right ( l / r ) information is not available ( as , for example , for the trunk angle ) , the right lower leg may be used in the timing .all waveforms are subsequently -centred .the removal of the average offsets is necessary , given that the two measurement systems yield output which can not be thought of as corresponding to the same anatomical locations .for instance , according to the ` plug - in gait ' placement scheme , the markers for the shoulder are placed on top of the acromioclavicular joints ; the kinect nodes shoulder_left and shoulder_right match better the physical locations of the shoulder joints .the left and right waveforms yield two new waveforms , identified as the ` l / r average ' ( lra ) and the ` right - minus - left difference ' ( rld ) ; if emphasis is placed on the extraction of asymmetrical features in the motion from the kinect output , the validation of the rlds is mandatory .the comparisons of the waveforms obtained for the nodes of the extremities from the two measurement systems , as well as of those obtained for the important angles defined in subsection [ sec : definitions ] , are sufficient in providing an estimate of the degree of the association of the output of the systems under investigation .if one of these systems is an mbs , such a comparison enables decisions on whether reliable information may be obtained from the tested kinect sensor ( assumed to be the second system ) ; a common assumption in past studies is that the inaccuracy of the mbs output is negligible compared to that of the kinect sensor .( of course , to obtain from the marker positions information on the internal motion , i.e. , on the motion of the human skeletal structure , is quite another issue ; we are not aware of works addressing this subject in detail . )as already mentioned , the theoretical background , developed in the present paper , also applies to a comparative study of the two kinect sensors , identifying the similarities and the differences in their performance , but ( of course ) it can not easily enable decisions on which of the two sensors performs better . 
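a sketch of the waveform extraction described above: each time series is cut at the cycle-start instants, folded onto the fraction of the gait cycle, averaged into bins (100 bins are used here, matching the choice quoted earlier), and zero-centred; the lra and rld waveforms then follow directly.

import numpy as np

def average_waveform(t, x, cycle_starts, n_bins=100):
    # fold the time series x(t) onto the gait cycle using the cycle-start
    # instants and average it into n_bins bins; the result is zero-centred
    t = np.asarray(t, float)
    x = np.asarray(x, float)
    acc = np.zeros(n_bins)
    cnt = np.zeros(n_bins)
    for t0, t1 in zip(cycle_starts[:-1], cycle_starts[1:]):
        sel = (t >= t0) & (t < t1)
        phase = (t[sel] - t0) / (t1 - t0)               # fraction of the gait cycle
        b = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        np.add.at(acc, b, x[sel])
        np.add.at(cnt, b, 1)
    w = acc / np.maximum(cnt, 1)
    return w - w.mean()

def lra_rld(w_left, w_right):
    # left/right average and right-minus-left difference waveforms
    return 0.5 * (w_left + w_right), w_right - w_left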
in summary ,irrespective of whether one of the two measurement systems is an mbs or not , the same tests are performed , but the interpretation of the results is different .we propose tests as follows : * identification of the node levels of the extremities and spatial directions with the worst association ( e.g. , with a similarity - index value in the first quartile of the distribution ) between the waveforms of the two measurement systems . * determination of the similarity of the association between the waveforms pertaining to the upper and lower parts of the human body . * determination of the similarity of the association between the waveforms pertaining to the three spatial directions , , and . * determination of the similarity of the association between the waveforms obtained from the raw lower - leg signals .we propose separate tests for the lra and rld waveforms ( see end of subsection [ sec : analysis3 ] ) ; if the reliable extraction of the asymmetry of the motion is not required in a study , one may use only the lra waveforms . after studying the goodness of the association between the waveforms at fixed velocity, velocity - dependent effects may be investigated .we will now provide additional details on each of these tests .the goodness of the association between the waveforms , obtained from the two measurement systems for the eight node levels of the extremities ( shoulder , elbow , wrist , hand , hip , knee , ankle , and foot ) and spatial directions , may be assessed as follows . separately for each of the five scoring options of subsection [ sec : comparison ] , for each velocity setting , and for each spatial direction , the node levels may be ranked according to the goodness of the association of the waveforms of the two measurement systems .the node level with the worst association may be given the mark of , whereas the one with the best association the mark of .the sum of the ranking scores over all velocities and scoring options yields an ` matrix of goodness of the association ' ( node levels of the extremities , spatial directions ) ; entries in this matrix are restricted between and , where is the number of the velocities used in the data acquisition ; further analysis of the entries of this matrix yields relative information on the goodness of the association for the node levels of the extremities and spatial directions , e.g. , it identifies those pertaining to the first quartile of the similarity - index distribution ( worst association ) . 
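the ranking scheme just described can be sketched as follows; the array layout is an assumption, and error-type metrics must be sign-inverted (or ranked in reverse) beforehand so that larger values always mean better association.

import numpy as np

def goodness_matrix(scores):
    # scores[k, v, n, d]: scoring option k, velocity setting v, node level n
    # (8 levels of the extremities), spatial direction d (3 directions),
    # with larger = better association. for each (k, v, d) the node levels
    # are marked 1 (worst) to 8 (best); the marks are summed over scoring
    # options and velocities, so each entry lies between K*V and 8*K*V
    # (i.e. between 5V and 40V for the five scoring options).
    K, V, N, D = scores.shape
    M = np.zeros((N, D))
    for k in range(K):
        for v in range(V):
            for d in range(D):
                order = np.argsort(scores[k, v, :, d])   # ascending: worst first
                marks = np.empty(N)
                marks[order] = np.arange(1, N + 1)
                M[:, d] += marks
    return M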
to assess the similarity of the waveforms of the two measurement systems , obtained for the nodes of the upper and lower extremities ,one - factor anova tests may be performed , separately for each of the five scoring options of subsection [ sec : comparison ] , on the scores obtained at each velocity setting , for all upper - extremity nodes and spatial directions , and all lower - extremity nodes and spatial directions .the outlined test should be sufficient in determining whether the performance between the two measurement systems for the lower part of the human body ( in relation to its upper part ) deteriorates .it must be also investigated whether the aforementioned results are significantly affected after excluding the nodes with the worst association between the waveforms of the two measurement systems .the goodness of the association between the waveforms , pertaining to the three spatial directions , , and , may be determined after employing anova tests .similarly to the previous tests , it must be investigated whether the results are significantly affected after the exclusion of the nodes with the worst association between the waveforms of the two measurement systems .our past experience indicates that the waveforms , corresponding to the raw lower - leg signals ( i.e. , the offsets of the subject s cm are not be removed from the signals ) , must be examined .this comparison is important for two reasons .first , the lower - leg signals are used in timing the motion ; second , we intend to use these signals in order to obtain the times ( expressed as fractions of the gait cycle ) of the initial contact ( ic ) and the to ; the difference of these two values is the stance fraction .we had noticed in the past that a salient feature in the waveforms obtained from the original kinect sensor is a pronounced peak appearing around the ic ; this peak is less pronounced in the data obtained with the upgraded sensor , e.g. , see figs .[ fig : lly_l ] and [ fig : lly_r ] .although it can not influence the timing of the motion ( because of its position ) , this artefact complicates the determination of the stance fractions , at least when using the original sensor .the goodness of the association between the rld waveforms must be investigated in the case that emphasis is placed on the reliable determination of any asymmetric features in the motion . 
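the one-factor anova comparisons proposed above reduce, in practice, to calls of the following kind; grouping the individual similarity scores (one per node/direction pair, for one scoring option and one velocity setting) as observations is an assumption about the intended layout.

from scipy.stats import f_oneway

def upper_vs_lower(scores_upper, scores_lower):
    # scores_upper / scores_lower: similarity scores of all upper- and
    # lower-extremity node/direction pairs; a small p-value signals that the
    # agreement between the two measurement systems differs between the two
    # halves of the body
    return f_oneway(scores_upper, scores_lower)

def direction_comparison(scores_x, scores_y, scores_z):
    # the same test applied across the three spatial directions
    return f_oneway(scores_x, scores_y, scores_z)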
to establishwhether the differences in the reliability of the lra and rld waveforms are significant , two - sided t - tests may be performed on the score distributions between corresponding lra and rld waveforms , a total of tests ( five scoring options , three tests per scoring option , velocity settings ) .as it is not clear which type of t - tests is more suitable , we propose that three tests be made per case : paired , homoscedastic , and unequal - variance .finally , we address the comparison of the roms obtained from the waveforms of the two measurement systems .it might be argued that one could simply use in a study the roms , rather than the waveforms , as representative of the motion of each node .of course , given that each waveform is essentially replaced by one number , the information content in the roms is drastically reduced compared to that contained in the waveforms .plotted versus one another ( scatter plot ) , the ideal relation between the roms obtained from the two measurement systems should be linear with a slope equal to , both for the lra and for the rld waveforms .the comparison of the two straight - line slopes , obtained in case of the lra and the rld waveforms , provides an independent assessment on the significance of the differences in the reliability of the lra and rld waveforms .our aim in the present paper was to develop the theoretical background required for the comparison of the output of two measurement systems used ( or intended to be used ) in the analysis of human motion ; important definitions are given in section [ sec : method ] , whereas the data acquisition and the first part of the data analysis are covered in section [ sec : acquisition ] .a list of meaningful tests , comprising the second part of the data analysis , is given in section [ sec : tests ] .although this methodology has been developed for a direct application in the case of the microsoft kinect ( ` kinect ' , for short ) sensors , the use of which in motion analysis is our prime objective , its adaption may yield solutions suitable in other cases. the outcome of the proposed tests of section [ sec : tests ] should be sufficient in identifying the important differences in the output of two measurement systems .as such , these tests identify ( in our case ) differences in the performance of the two kinect sensors ( in a comparative study ) or enable conclusions regarding the outcome of the validation of the output of either of the kinect sensors ( if the second measurement system is a marker - based system ( mbs ) ) . as next steps in our research programme, we first intend to conduct a comparative study of the two kinect sensors , after applying the methodology set forth herein . at a later stage , we will attempt to validate the output of the two kinect sensors on the basis of standard mbss .99 http://www.xbox.com/en-gb/kinect/ , http://www.microsoft.com/en-us/kinectforwindows/ r.a .clark , validity of the microsoft kinect for assessment of postural control , gait posture 36 ( 2012 ) 372 - 377 .a. schmitz , mao ye , r. shapiro , ruigang yang , b. noehren , accuracy and repeatability of joint angles measured using a single camera markerless motion capture system , j. biomech .47 ( 2014 ) 587 - 591 .b. galna , accuracy of the microsoft kinect sensor for measuring movement in people with parkinson s disease , gait posture 39 ( 2014 ) 1062 - 1068 .b. 
bonnech , validity and reliability of the kinect within functional assessment activities : comparison with standard stereophotogrammetry , gait posture 39 ( 2014 ) 593 - 598 .cavagna , m. kaneko , mechanical work and efficiency in level walking and running , j. physiol . 268 ( 1977 ) 467 - 481cavanagh , m.a .lafortune , ground reaction forces in distance running , j. biomech . 13 ( 1980 ) 397 - 406 .cairns , r.g .burdett , j.c .pisciotta , s.r .simon , a biomechanical analysis of racewalking gait , med .sports exercise 18 ( 1986 ) 446 - 453 .s. unpuu , the biomechanics of walking and running , clin .sport med .13 ( 1994 ) 843 - 863 .novacheck , the biomechanics of running , gait posture 7 ( 1998 ) 77 - 95 .schache , k.l .bennell , p.d .blanch , t.v .wrigley , the coordinated movement of the lumbo - pelvic - hip complex during running : a literature review , gait posture 10 ( 1999 ) 30 - 47 .a. pfister , a.m. west , s. bronner , j.a .noah , comparative abilities of microsoft kinect and vicon 3d motion capture for gait analysis , j. med .technol . 38 ( 2014 ) 274 - 280 .j. shotton , real - time human pose recognition in parts from single depth images , computer vision and pattern recognition ( cvpr ) , 2011 ieee conference on , issue date : 20 - 25 june 2011 .http://www.idmil.org/mocap/plug-in-gait+marker+placement.pdf , http://wweb.uta.edu/faculty/ricard/classes/kine-5350/pigmanualver1.pdf r.b .davis , s. unpuu , d. tyburski , j.r .gage , a gait analysis data collection and reduction technique , hum .movement sci . 10 ( 1991 ) 575 - 587 .m. mongiardini , m.h . ray , m. anghileri , development of a software for the comparison of curves during the verification and validation of numerical models , european ls - dyna conference , salzburg , austria , may 14 - 15 , 2009 .kadaba , repeatability of kinematic , kinetic , and electromyographic data in normal adult gait , j. orthop .res . 7 ( 1989 ) 849 - 860 .mcginley , r. baker , r. wolfe , m.e .morris , the reliability of three - dimensional kinematic gait measurements : a systematic review , gait posture 29 ( 2009 ) 360 - 369 .p. garofalo , inter - operator reliability and prediction bands of a novel protocol to measure the coordinated movements of shoulder - girdle and humerus in clinical settings , med .comput . 47 ( 2009 ) 475 - 486 .a. ferrari , a.g .cutti , a. cappello , a new formulation of the coefficient of multiple correlation to assess the similarity of waveforms measured synchronously by different motion analysis protocols , gait posture 31 ( 2010 ) 540 - 542 .h. theil , economic forecasts and policy , second edition , north - holland , amsterdam ( 1961 ) .a. ferrari , first in vivo assessment of `` outwalk '' : a novel protocol for clinical gait analysis based on inertial and magnetic sensors , med .comput . 48 ( 2010 ) 1 - 15 .j. daniels , daniels running formula , third edition , human kinetics ( 2013 ) , pp .the nodes of the original kinect sensor .the figure has been produced with carmetal , a dynamic geometry free software ( gnu - gpl license ) , first developed by r. grothmann and recently under e. 
hakenholz .,width=585 ] preliminary results for the waveforms for the raw coordinate of the left lower leg ( ankle ) obtained from one subject , using both kinect sensors .the quantity is the fraction of the gait cycle .the sensors were attached unto a plastic structure mounted on a tripod ; the difference in the values simply reflects the higher position on the mount of the upgraded kinect sensor.,width=585 ] | the present paper aims at providing the theoretical background required for investigating the use of the microsoft kinect ( ` kinect ' , for short ) sensors ( original and upgraded ) in the analysis of human motion . our methodology is developed in such a way that its application be easily adaptable to comparative studies of other systems used in capturing human - motion data . our future plans include the application of this methodology to two situations : first , in a comparative study of the performance of the two kinect sensors ; second , in pursuing their validation on the basis of comparisons with a marker - based system ( mbs ) . one important feature in our approach is the transformation of the mbs output into kinect - output format , thus enabling the analysis of the measurements , obtained from different systems , with the same software application , i.e. , the one we use in the analysis of kinect - captured data ; one example of such a transformation , for one popular marker - placement scheme ( ` plug - in gait ' ) , is detailed . we propose that the similarity of the output , obtained from the different systems , be assessed on the basis of the comparison of a number of waveforms , representing the variation within the gait cycle of quantities which are commonly used in the modelling of the human motion . the data acquisition may involve commercially - available treadmills and a number of velocity settings : for instance , walking - motion data may be acquired at km / h , running - motion data at and km / h . we recommend that particular attention be called to systematic effects associated with the subject s knee and lower leg , as well as to the ability of the kinect sensors in reliably capturing the details in the asymmetry of the motion for the left and right parts of the human body . the previous versions of the study have been withdrawn due to the use of a non - representative database . + _ pacs : _ 87.85.gj ; 07.07.df , , biomechanics , motion analysis , treadmill , marker - based system , kinect - mail : evangelos[dot]matsinos[at]zhaw[dot]ch , evangelos[dot]matsinos[at]sunrise[dot]ch |
as indicated in , astronomical photometry is about the measurement of the brightness of radiating objects in the sky .many factors , such as those coming from the instrument limitation , from a fixed measurement strategy , or the limitation from the mean through which the measurement is taking place , make this area of the science relatively imprecise .the improvement in the dectectors technology plays a key role in the area of optimizing the resulting astronomical photometric measurements . in this sense , a signal - to - noise ratio capable of being configured as part of an optimization framework of the measurement system seems to be a useful input .+ charge - coupled devices ( ccds ) constitutes the state - of - the - art of detectors in many observational fields . enumerates the areas involved in the recent advances of the ccds systems , which are : * manufacturing standards that provide higher tolerances in the ccd process leading directly to a reduction in their noise output . *increased quantum efficiency , especially in the far red spectral regions .* new generation controll electronics with the ability for faster readout , low noise performance , and more complex control functions . * new types of scientific grade ccds with some special properties . any data , in general , is always limited in accuracy and incomplete , therefore , deductive reasoning does not seem to be the proper way to prove a theory .however , and as said in , statistical inference provides a mean for estimating the model parameters and their uncertainties , which is known as data analysis .it also allows assessing the plausability of one or more competing models .the use of a bayesian approach here is also justified in where it is stated that for data with a high signal - to - noise ratio for example , a bayesian analysis can frequently yield many orders of magnitude improvement in model parameter estimation , through the incorporation of relevant prior .this is exactly what we intend through the implementation of what will be described in this paper , and detailed in the following section .the problem to be resolved here consists in the implementation of bayesian inference for a set of configurable parameters which affect the signal - to - noise ratio of a measurement . 
in comparisson with other methodologies , such as anova , which shows serious weaknesswhen outliers are present in the measured data , bayesian parameter inference offers a robust method against outliers .it also allow to improve the results of inference by using the posterior probability density distribution ( pdf ) of one execution as the prior pdf for another execution in a recursive framework .this will lead to an adaptive measurement strategy which can be addressed as a calibration refinement .professional surveys plan the measurement strategy well in advance , taking into account all the relevant factors impacting on the measurement ; this involves the set of specified fix parameters from the detector and also a set of parameters which configure the measurement , such as integration time , diameter of the aperture , etc .once a measurement has finished , the data are archived and their analysis and processing begin .the problem proposed here is to establish a link between the results of a measurement under a specific detector configuration and the refinement , by application of parameter bayesian inference , of the configuration parameters to be applied in a further measurement .the result of this bayesian inference at parameter level is proposed to be injected as additional knowledge for a model selection problem in the context of measurement data analysi ( i.e , photometric cross - matching of multiwavelength astronomical sources ) .for example , let us imagine that we have performed colour measurement in a multi - wavelength survey with ten different instruments , each one under a specific configuration .let us imagine that in the process of model comparison for the cross - matching scenarios , the existence of a source inferred in a bandwith which is not detecting it , is plausible .then , based on this result , a new configuration for that instrument can be inferred in such a way that allows us to explore the refined plausability indicated by the model comparison from the data obtained in the first measurement loop .figure [ fig1 ] shows a block diagram reflect in general lines the idea of the problem proposed here + [ fig1 ]the great majority of detectors used in the astronomical field are silicon - based ones ; this means that the electronics involved in the specification , manufacturing and operational life of these detectors are relvant to the outcome obtained .as detailed in , the excitations of electrons responding to incident photons constitutes the fundamentals for practical flux measurement in almost all nowadays photometric systems .+ in nowadays , ccds are used in many instruments involved in the main current astronomical surveys ; an extensive and increasing bibliography is currently being publishing .therefore our paper will focussed on this type of instruments , however we have tried to retain the generic aspect of the characterization of any other type of detector and obviously the methology presented here is fully valid with any other set of specific parameters . a summary from the relevant information related to detector parameters has been included in this subsection for the sake of clarity .a more detailed information can be found in + in general terms and for the context of our problem , a detector can be characterized by the following parameters , as indicated in : + * * quantum efficiency ( q ) * : it is the ratio between registered events and the incident photon . 
* * information transfer efficiency ( e ) * : it is the ration between the square of the signal - to - noise ratio of the output and the square of the signal - to - noise ratio of the input .+ * * noise equivalent power ( nep)*:it is defined as the optical power producing an output equal to the noise level of the detector . * * linearity and saturation * : one of the most relevant characteristics of a ccd detector is its linear property .this means that , under ideal scenario ( ignoring effects such as , noise , dark current , polarization current , sky background contribution etc ) , the intentsity registered in each pixel ( as elecrons ) is proportional to the incident light .however this linear behaviour has its limits .the most obvious one is the * saturation threshold * , which is measured by the * full - well capacity * * * full - well capacity*:this parameter measures the limit of the accumulated charge before saturation begins .this value is normally included in the technical specification of the detector s supplier . * * event or pixel capacity * : it is defined as how many events can be usually accumulated before some saturation effect takes place . * * working range * : also named * dynamic range * , is essentially the same as event capacity for a noiseless accumulative single detector , but more generally interpreted as the difference between useful maximum and minimum event counts . * * gain(g ) * : the digital image consists of a table of numbers which indicate the intensity registered in each pixel .however , the numbers stored do not mean the quantity of electrons found in each electrode , as this quantity can be huge and this would make the resultant storage files too big .therefore what is normally done is to divide the quantity of electrons by a certain number , named * gain * ; thus , what we register in the file is the number of * counts * obtained when performed the above division .sometimes counts are called * adu * ( analog - to - digital units ) or * dn * ( data number ) .the gain is therefore measured in electrons per count . + some cameras allow the user to choose the gain. then we could choose a small value for faint detections or bit to measure correctly sources of various brightness , but all this without overpassing the limit done by : + * * counts(c ) * : the number of photons that fall on a pixel is related to the counts by : + detector parameters such as quantum efficiency , linearity event capacity and so on , are often characterized by * figures of merit * , which manufacturers quote about their products .the error which all scientific measurements should carry means really uncertainty and it is due to noise . following the line of discussion presented at the begining of this section , once an electron has been excited by a photon , the next step consists in registering this event by the electronics of the detector . in this process ,handling and reducing the number of extraneous electron activity which does not come from the source subject to the pure measurement ( noise ) consitutes a delicate and complex step of this process .a summary from the relevant information related to sources of noise has been included in this section for the sake of clarity . 
for more detailed information about the functional block description of any detector can be found in + there are numerous sources of noises in the ccd images .the following list identify those sources of noise which are more relevant to the problem described in section [ sect:2 ] : * * dark current * ; also named * thermal noise * , it is produced by the spontaneous generation of electrons in the silicium ( si ) due to the themal excitation of the material .+ the noise associated with dark current in a ccd is regarded as having a spatial dependence , in that it relates to minor irregularities in the solid state molecular bonding lattice , associated with material nterfaces and impurities .each pixel generates a slightly different level of dark current , so the noise depends on non - uniformity of the response over the surface as well as the dark current s inherent quasi - poissonian contribution to the electron counts .dark current , following a generally richardson law type dependence on temperature , could fill the potential wells of an uncooled ccd at in , typically , a few seconds , but cooling to say , reduces this to a tolerable few electrons per pixel per second .the standards approach to spatial non - uniformity of response in ccd is through * flat fielding * , although it should be rememberd that there is an additional shot noise contribution to the adopted flat field contribution .+ within the ccd , each pixel has a slightly different gain or quantum efficiency when compared with its neighbors . in order to flatten the relative response for each pixel to the incoming radiation , a flat field imageis obtained and used to perform this calibration .ideally , a flat field image would consist of uniform illumination of every pixel by a light of source of identical spectral response to that of the object frames .once a flat field image is obtained , one them simply divides each object frame by it implementing then instant removal of pixel - to - pixel variations . * * cosmic rays * : they are part of the inhabitants of the interstellar space ; cosmic rays are sub - atomic particles which go at very high speed , near to the speed of light .cosmic rays are very annoying for the astronomers because they interact with the silicium of the coupled charged devices .these electronic micro - aluds , concentrated in a few pixels appear in the images as points , or sometimes , lines very bright . as much the exposition time is , more quantity of cosmic rays will deteriorate it . * * read noise * ; this is a very important contribution to the toal noise of the images . this noise is due to the random and unavoidable errors that are produced during the reading of the image , in the process of amplification and counting of the electrons captured in each pixel .these errors are intrinsic to the nature of the detector device .+ the existance of read noise has always to be taken into account because it affects to all steps towards the obtainment and traitment of the digital images .each camera must have its own level of read noise , documented in its technical specifications .* * shot noise * : this is a statistical noise due to the inherently non - steady photon influx .a poissonian distribution is normal for the arrival of the primary photon stream from a source of constant emissive power . 
* * clocking noise * ; this noise comes from the various high frequency oscillators involvd in the gating circuitry .this noise rises with load and clocking frequency , but it can normally be controlled by manufacturers to a negligible level for astronomical applications . * * atmosphere * : the total noise of detection is not just that of the photoelectric effect on the detector , since the signal has already been deteriorated by the atmosphere .further reading on the problems with atmosphere in the astronomical measurements can be found in for simplification purposes in the resolution of the problem describe in [ sect:2 ] , only the following list of noise sources will be considered representative for the context of our problem .however the methodology can be extended to a more exhaustive list of noises : dark current , read noise , background noise .as detailed in , a careful understanding of the main sources of uncertainties can suggest ways to improve our measurement strategy , this means , the observation , reduction and analysis processes .+ a crucial concept in photometry is the * signal - to - noise ratio ( s / n ) * , which is equivalent to the concept of percentage error + it is very important to assess and , if possible , to reduce the noise of the images.in this direction , the parameter is very useful in the assessment of the feasibility , reliability and quality of the detection . as a general rule , and based on the considerations expressed in , to get reliable photometry and/or astrometry measurements , the minimum threshold must be : + as a first preliminary simplification , under the asumption of photon noise dominating the noise , the counting statistics of the number of photons impacting on a given area per second can be modelled by a gaussian distribution , where the scatter is the square root of the number of photons , therefore : where includes the photon counts for the sky foreground and the sky background , both of them carrying noise components . to obtain the count from the source alone ,the sky background contribution has to be substracted . then , considering this two contributions , we can write the following : the s / n ratio changes with the integration time , and with the telescope aperture, , therefore these two parameters encompass the configuration domain by which the s / n ratio can be optimized in the process explained in section [ sect:2 ] .+ for the telescope aperture , we know that increasing the diameter of the telescope primary, , by a factor of 2 increases the collecting area by factor of 4 , thus for a given integration time , we get : + regarding the integration time dependency with the signal - to - noise ratio , we can write : where is th count rate expressed in .therefore , similarly the number of exposures impact on the noise of the resulting image . 
according to if the exposure is broken into equal short exposures , the error in the mean of the measurements would be : where is the scatter in the individual short exposures and is the number of such short exposures.it is important to keep in mind that if we add images resulting from short exposures , the resulting signal is , and total noise of the resulting image will be as a conclusion , the signal - to - noise ratio of an addition of images from the same object is bigger and therefore better than the signal - to - noise ration of each individual image .so far , photon noise has been assumed to be the dominant contribution of the noise , and therefore the other noise sources contributions have been reduced to zero for the equations above . however , in the case of faint sources detection the dark current and the read noise can play a key role in the , therefore , we will develop the equation , with the integration time dependency and including at least the following sources of noise : photon noise , dark current and read noise .for each of the noise sources , valid approximations will be considered in order to obtain a final equation which is computational cost affordable for a intel mac core 2 duo computer . therefore , the equation for the of a measurement made with a ccd can be given by : where : * is the total number of photons which compound the signal detected from the object * is the total number of photons coming from the background also called sky * is the total number of dark current electrons per pixel * is the total number of electrons per pixel resulting from the read noise .the noise terms in equation [ eq8 ] can be modelled by poisson distributions .the term is used to apply each noise term on a per pixel basis to all the pixels involved in the measurement .a more complete equation taking into account digitization noise within the a / d converter can be found in + as explained in , and using the fact that , a standard error for the measurement can be obtained as : where is equal to the noise terms indicated in [ eq8 ] , and is the correction term between an error in flux ( electrons ) and that same error in magnitudes ( howell , 1993 ) .the equation [ eq8 ] can also be expressed in terms of count rate and integration time , as follows : and in line with the text above , each terms of equation [ eq8 ] can be expressed as follows : where , and are proporcionality constants .in general , as described in , a bayesian probability density function is a measure of our state of knowlege of the value of the parameter .when we acquire some new data , bayes theorem provides a means for combining the information about the parameter coming from the data , through the likelihood function , with the prior probability , to arrive at a posterior probability density , , for the parameter .+ let us be the known model for the signal - to - noise ratio of a multi - wavelength measurement system which identifies cross - matching sources , the set of configurable parameters, , being the integration time and the configurable aperture diameter of the instrument , and the data of the measurement . is the information associated to the model . 
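as a concrete illustration, the sketch below evaluates the count-rate form of the ccd equation given above and feeds it into a simple grid-based inference of the integration time, of the kind developed in the following section and anticipated by the adaptive-loop problem statement; a uniform prior on the integration time and independent gaussian errors on the measured s/n per channel are assumptions, and all rate values are placeholders.

import numpy as np

def ccd_snr(t, R_star, R_sky, R_dark, read_noise, n_pix):
    # count-rate form of the ccd equation: rates in electrons per second,
    # read_noise as an rms electron count per pixel (it therefore enters
    # squared and does not grow with t); digitization noise is neglected
    signal = R_star * t
    noise = np.sqrt(R_star * t + n_pix * (R_sky * t + R_dark * t + read_noise ** 2))
    return signal / noise

def magnitude_error(snr):
    # standard error in magnitudes: 2.5 / ln(10) times the relative flux error
    return (2.5 / np.log(10.0)) / snr

def posterior_integration_time(t_grid, snr_meas, sigma_snr, model, t_min, t_max):
    # grid evaluation of the posterior pdf of the integration time, with a
    # uniform prior on [t_min, t_max]; model(t) returns the predicted s/n in
    # each channel and snr_meas holds the measured values
    prior = ((t_grid >= t_min) & (t_grid <= t_max)).astype(float)
    loglike = np.array([-0.5 * np.sum(((snr_meas - model(t)) / sigma_snr) ** 2)
                        for t in t_grid])
    post = prior * np.exp(loglike - loglike.max())
    dt = t_grid[1] - t_grid[0]                  # uniform grid spacing assumed
    post /= post.sum() * dt
    t_mean = np.sum(t_grid * post) * dt         # posterior-mean estimate
    return post, t_mean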
by application of bayesian inference , the configurable parameters for a established model can be estimated as follows : this equation can be written in the following way : therefore it becomes evident that the denominator of [ eq2 ] is just a normalization factor and we can focus on just the numerator , where is named the probability a priori and is the likelihood . strictly speaking , as it is explained in , bayesian inferencedoes not provide estimates for parameters ; rather , the bayesian solution to the parameter estimation problem is the full posterior pdf , and not just a single point in the parameter space .it is useful to summarize this distribution and one possible candidate of the best - fit value is the posterior mean , we will develop here the bayesian estimation parameter for the integration time , however the consideration of more than one parameter is immediate and in that case multiple integrals will be considered for the marginalization of the corresponding nuisance parameters . a preliminary choice for the prior probability will be a uniform distribution , therefore : where and are the maximum and minimum integration time to be defined for the observation under consideration . in general ,the difference between the data and the model is called the error , therefore : the model here is the one proposed in [ eq9],however for the sake of clarity in the following expressions , we will consider that equation [ eq9 ] can be simplified as follows : being a constant the likelihood for all the channels under consideration in a multiwavelength observation , can be expressed as follows : now we can compute as indicated in [ eq14 ] , and considering a normalizing unity constant for the denominator we obtain the following result : .the main purpose of the methodology presented here is to enable the capability of an observational system to adapt its configurable parameters depending on the results from previous observations . in this way , it is clear that the integration time has a considerably strong impact on the faint sources detections , therefore the approach covered in the previous chapter of this paper intends to infer the optimum integration time in order to increase the probability of detection for the next observation to be planned and defined .the inclusion in the model of the main contributions of noise leads to a more refined parametric inference .a toy example has been built in order to show some preliminary results derived from the approach presented in this paper .figure [ fig2 ] represents the data in terms of integration times from several observations and figure [ fig3 ] shows the probability density corresponding to the posterior probability solved as detailed in the previous section .a real example is planned to be implemented using a model of signal - to - noise ratio in line with the expression presented in equation [ eq20 ] [ fig1 ] [ fig1 ]the authors would like to thank dr .luis manuel sarro baro and dr .tamas budavari for his support and valuable insights in the area of bayesian inference , and mr .georges bernede for his continuous support in the effective conciliation of my professional and academic activities .h. kopka and p. w. daly , _ a guide to latex _, 3rd ed.1em plus 0.5em minus 0.4emharlow , england : addison - wesley , 1999 .steve b. 
howell , _ handbook of ccd astronomy _ , 2nd ed.1em plus 0.5em minus 0.4emnational optical astronomy observatory and wiyn observatory : cambridge university press .edwin budding and osman demircan , _ introduction to astronomical photometry _ , 2nd ed.1em plus 0.5em minus 0.4emcanakkale university , turkey : cambridge university press . w. romanishin , _ an introduction to astronomical photometry using ccds _ ,september 16 , 2000.1em plus 0.5em minus 0.4emuniversity of oklahoma .david galadi enriquez and ignasi ribas canudas , _ manual practico de astronomia con ccd _, 1998.1em plus 0.5em minus 0.4emdepartment of astronomy and metrology of university of barcelona : omega . j. v. wall and c. r. jenkins , _ practical statistics for astronomers _, 2003.1em plus 0.5em minus 0.4emuniversity of oxford and schlumberger cambridge research : cambridge university press .michael p .hobson , andrew h. jaffe , andrew r. liddle , pia mukherjee and david parkinson , _ bayesian methods in cosmology _, 2010.1em plus 0.5em minus 0.4emcavendish laboratory , university of cambridge , imperial college , london , university of sussex : cambridge university press .phil gregory , _bayesian logical data analysis for the physical sciences _, 2005.1em plus 0.5em minus 0.4emdepartment of physics and astronomy , university of british columbia : cambridge university press | key words : signal ; noise ; gain ; quantum efficiency ; count ; rad noise ; dark current ; nuisance parameters + calibration is nowadays one of the most important processes involved in the extraction of valuable data from measurements . the current availability of an optimum data cube measured from a heterogeneous set of instruments and surveys relies on a systematic and robust approach in the corresponding measurement analysis . in that sense , the inference of configurable instrument parameters can considerably increase the quality of the data obtained . + any measurement devoted to scientific purposes contains an element of uncertainty . the level of noise , for example , determines the limit of usability of an image . therefore , a mathematical model representing the reality of the measured data should also include at least the sources of noise which are the most relevant ones for the context of that measurement . + this paper proposes a solution based on bayesian inference for the estimation of the configurable parameters relevant to the signal to noise ratio . the information obtained by the resolution of this problem can be handled in a very useful way if it is considered as part of an adaptive loop for the overall measurement strategy , in such a way that the outcome of this parametric inference leads to an increase in the knowledge of a model comparison problem in the context of the measurement interpretation . + the context of this problem is the multi - wavelength measurements coming from diverse cosmological surveys and obtained with various telescope instruments . as a first step , , a thorough analysis of the typical noise contributions will be performed based on the current state - of - the - art of modern telescope instruments , a second step will then consist of identifying configurable parameters relevant to the noise model under consideration , for a generic context of measurement chosen . then as a third step a bayesian inference for these parameters estimation will be applied , taking into account a proper identification of the nuisance parameters and the adequate selection of a prior probability . 
finally , a corresponding set of conclusions will be derived from the results of the implementation of the method proposed here . |
with advances in technology , it will become indispensable to determine numerous relativistic effects in the propagation of light beyond the first order in the newtonian gravitational constant , in particular in the area of space astrometry .the global astrometric interferometer for astrophysics ( gaia , perryman _2001 ) is already planned to measure the positions and/or the parallaxes of celestial objects with typical uncertainties in the range 1 - 10 ( ) whereas the laser astrometric test of relativity ( lator ) mission will measure the bending of light near the sun to an accuracy of 0.02 ( turyshev _ et al ._ 2004 ) . in this last case , it is clear that the effects of the second order in must be taken into account . to obtain a modelling of the above - mentioned projects , it is necesssary to determine the deflection of light rays between two points and of space - time . in almost all of the theoretical studies devoted to this problem ,the properties of light rays are determined by integrating the differential equations of the null geodesics .this procedure is workable as long as one contents oneself with analyzing the effects of first order in , as it is proven by the generality of the results obtained in the litterature ( klioner 1991 , klioner & kopeikin 1992 , kopeikin 1997 , kopeikin & schfer 1999 , kopeikin & mashhoon 2002 , klioner 2003 ) .unfortunately , analytical solution of the geodesic equations requires cumbersome calculations when terms of second order in are taken into account , even in the case of a static , spherically symmetric space - time ( richter & matzner 1982 , 1983 ) . however , an alternative approach exists and seems to be promising .based on the synge s world function and variational properties of geodesic , it precisely does not require the knowledge of the geodesic and directly provides the time delay of light and the direction of a ray at the point of reception , i.e. at the observation point .in this work we derive the general expression of the angular separation between two point light sources as measured by a given observer in arbitrary motion .we show that the angular distance is fully determined if we calculate several ratios which can be obtained from the knowledge of the synge s world function . throughout this paper , is the speed of light in a vacuum and is the newtonian gravitational constant .the lorentzian metric of space - time is denoted by .we adopt the signature .we suppose that space - time is covered by some global coordinate system .we assume that the curves of equations const are timelike in the neighbourhood of the observer .this condition means that in the vicinity of the observer .we employ the vector notation in order to denote either the ordered set , or the orderer set .given , for instance , denotes if and if ) , the einstein convention of summation on repeated indices being used in each case . the quantity denotes the ordinary euclidean norm of : if , and if .the indices in parentheses characterize the order of a term in a perturbative expansion .theses indices are set up or down , depending on the convenience .to begin with , let us consider a light ray received at point and let us recall how is defined the direction of this ray as measured by an observer moving at with a unit 4-velocity .the three - space relative to the observer at point is the subspace of tangent vectors orthogonal to ( see figure below ) . 
, width=510 ] an arbitrary vector at admits one and only one decomposition of the form v = v__u + v__u , where is colinear to the unit vector and is a vector of the three - space .since and are orthogonal , one has v__u = ( u .v ) u and v__u = v - ( u .v ) u . the vector is called the ( orthogonal ) projection of onto the three - space relative to the observer .its magnitude is given by v__u= .the direction of vector as seen by the observer is the direction of the unit spacelike vector defined as v__u^ = = .consider now a light ray received at and denote by a vector tangent to at . in this work, we always assume that a vector tangent to a light ray is a null vector and is future oriented , so that l^2 = 0 , u.l > 0 .the direction of the ray as measured by the observer is the direction of the vector . by using eq .( [ di1 ] ) , and then taking into account eqs .( [ nul ] ) , it is easy to see that l__u^ = - u .let be an other light ray received at .if denotes a vector tangent to at , the direction of as observed by is given by eq .( [ di2 ] ) in which is substituted for . as a consequence , the angular separation between and as measured by may be defined as the angle between the two vectors and belonging to the same subspace ( see soffel 1988 , brumberg 1991 ) . angle may be characterized without ambiguity by relations as follow _u = - l__u^ .l__u^ , 0 _ u < .taking into account properties of a light ray , we have the following relations as is an unitary vector , we can express with as follows = g_00 + 2 g_0i ^i + g_ij^i^j , where ^i = = .finally , substituting equations ( [ llll]-[hlb ] ) into equation ( [ an1 ] ) yields the fundamental formula ^2 = - _ x_o , where _ 0 = 1 , _i = , _ 0 = 1 , _the determination of the angular distance thus requires explicit computations of the ratios and . they can be obtained by the integration of null geodesic equations .however we will show that they result easily from the knowledge of the synge s world function .synge s world function is a scalar function of the base point and the field point .it is defined by ( synge , 1964 ) and the integral is evaluated on the unique geodesic that links to , being an affine parameter . a fundamental property of is to give an important information concerning the covariant components of the vectors tangent to at and respectively : if is a light ray , we can consider as the observation point . moreover , in this case , we have we recently show that explicit determination of can be obtained from the integration of hamilton - jacobi equations without the knowkledge of ( le poncin - lafitte _ et al .all this means that determination of ratios and require the following steps : * to determine by solving hamilton - jacobi equations , * to impose the condition , * to compute the following relation this paper , we give the general and rigorous expression of the observed angular distance between two point sources of light .the fundamental formulae ( [ an7 ] ) and ( [ cov ] ) show that the theoretical calculation of the angular distance can be carried out when the world function is known . in this idealized sense, one can say that the problem of space astrometry involves one and only one unknown function .an other important point is that the aberration and the gravitational deflection of light can not be treated as completely distinct phenomena . | almost all of the studies devoted to relativistic astrometry are based on the integration of the null geodesic differential equations . 
however , the gravitational deflection of light rays can be calculated by a different method , based on the determination of the bifunction giving half the squared geodesic distance between two arbitrary point - events , the so - called synge 's world function . we give a brief review of the main results obtained by this method . |
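to make the projection formula for the observed direction concrete, the following is a minimal numerical sketch of the angular separation between two null rays as seen by an observer of 4-velocity u, using the unit spacelike projections described above. the metric, the observer and the two tangent vectors are illustrative assumptions (flat space-time is used so the result can be checked against the euclidean angle); none of these numbers come from the text.

import numpy as np

# illustrative assumption: minkowski metric with signature (+,-,-,-),
# matching the convention adopted in the text
g = np.diag([1.0, -1.0, -1.0, -1.0])

def dot(a, b):
    # scalar product a.b with respect to the metric g
    return a @ g @ b

def unit_projection(l, u):
    # projection of the null vector l onto the rest space of u, normalised
    # to a unit spacelike vector: l_perp_hat = l/(u.l) - u
    return l / dot(u, l) - u

def angular_separation(l1, l2, u):
    # cos(phi_u) = - g(l1_perp_hat, l2_perp_hat); the minus sign appears
    # because spacelike unit vectors have squared norm -1 in this signature
    v1, v2 = unit_projection(l1, u), unit_projection(l2, u)
    return np.arccos(np.clip(-dot(v1, v2), -1.0, 1.0))

# static observer and two incoming photons along assumed spatial directions
u = np.array([1.0, 0.0, 0.0, 0.0])                 # unit timelike 4-velocity
n1 = np.array([1.0, 0.0, 0.0])
n2 = np.array([np.cos(0.3), np.sin(0.3), 0.0])
l1 = np.concatenate(([1.0], n1))                    # null, future oriented
l2 = np.concatenate(([1.0], n2))

# in flat space-time the observed angle equals the euclidean angle, 0.3 rad
print(angular_separation(l1, l2, u))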
spin glasses are magnetic materials with strange properties that distinguish them from ordinary ferromagnets . in statistical physics , the study of spin glasses originated with the works of edwards and anderson and sherrington and kirkpatrick in 1975 . in the following decade, the theoretical study of spin glasses led to the invention of deep and powerful new methods in physics , most notably parisi s broken replica method .we refer to for a survey of the physics literature . however , these physical breakthroughs were far beyond the reach of rigorous proof at the time , and much of it remains so till date .the rigorous analysis of the sherrington - kirkpatrick model began with the works of aizenman , lebowitz and ruelle and frhlich and zegarliski in the late eighties ; the field remained stagnant for a while , interspersed with a few nice papers occasionally ( e.g. , ) .the deepest mysteries of the broken replica analysis of the s - k model remained mathematically intractable for many more years until the path - breaking contributions of guerra , toninelli , talagrand , panchenko and others in the last ten years ( see e.g. , , , , , , ) .arguably the most notable achievement in this period was talagrand s proof of the parisi formula .however , in spite of all this remarkable progress , our understanding of these complicated mathematical objects is still shrouded in mystery , and many conjectures remain unresolved . in this articlewe attempt to give a mathematical foundation to some aspects of spin glasses that have been well - known in the physics community for a long time but never before penetrated by rigorous mathematics .let us now embark on a description of our main results .further references and connections with the literature will be given at the appropriate places along the way .consider the following simple - looking probabilistic question : suppose are i.i.d .standard gaussian random variables , and we define , for each ,the quantity then is it true that with high probability , there is a large subset of such that and any two distinct elements of are nearly orthogonal , in the sense that ( in the spin glass literature , the quantity is called the ` overlap ' between the ` configurations ' and . ) to realize the non - triviality of the question , consider a slightly different gaussian field on , defined as where are i.i.d .standard gaussian random variables .then clearly , is maximized at , where .note that for any , it is not difficult to argue from here that if is another configuration that is near - maximal for , then must agree with at nearly all coordinates .thus , the field does not satisfy the ` multiple peaks picture ' that we are investigating about .this is true in spite of the fact that and are approximately independent for almost all pairs .we have the following result about the existence of multiple peaks in the field .it says that with high probability , there is a large collection of configurations satisfying and , that is , for any two distinct , and for each .[ multisk ] let be the field defined in , and define the overlap between configurations by the formula .let then there are constants , , , and such that with probability at least , there is a set satisfying 1 . , 2 . for all , , and 3 . for all .quantitatively , we can take , , and , where is an absolute constant. 
however these are not necessarily the best choices .let us now discuss the implication of this result in spin glass theory .the sherrington - kirkpatrick model of spin glasses , introduced in , is defined through the hamiltonian ( i.e. energy function ) the s - k model at inverse temperature defines a probability measure on through the formula where is the normalizing constant .the measure is called the gibbs measure . according to the folklore in the statistical physics community , the energy landscape of the s - k model has ` multiple valleys ' .although no precise formulation is available , one way to view this is that there are many nearly orthogonal states with nearly minimal energy . for a physical discussion of the ` many states ' aspect of the s - k model, we refer to , chapter iii .a very interesting rigorous formulation was attempted by talagrand ( see , conjecture 2.2.23 ) , but no theorems were proved . although our achievement is quite modest , and may not be satisfactory to the physicists because we do not prove that the approximate minimum energy states correspond to significantly large regions of the state space in fact , one may say that it is not what is meant by the physical term ` multiple valleys ' at all because an isolated low energy state does not necessarily represent a valley it does seem that theorem [ multisk ] is the first rigorous result about the multimodal geometry of the sherrington - kirkpatrick energy landscape .we may call it ` multiple valleys in a weak sense ' .theorem [ multisk ] can be generalized to the following corollary , which shows that weak multiple valleys exist at ` every energy level ' and not only for the lowest energy .[ multisk2 ] let all notation be the same as in theorem [ multisk ] . fix a number ] .suppose a randomly chosen fraction of the couplings are replaced by independent copies to give a perturbed gibbs measure .let be chosen from the original gibbs measure and is chosen from the perturbed measure .let the overlap be defined as in .then where is an absolute constant and the expectation is taken over all randomness .this theorem shows that the system is chaotic if the fraction goes to zero slower than .the derivation of this result is based on the ` superconcentration ' property of the free energy in the s - k model that we present in the next subsection .the notion of perturbation in the above theorem , though natural , is not the only available notion .in fact , in the original physics papers ( e.g. ) , a different manner of perturbation is proposed , which we call continuous perturbation . herewe replace by , where is another set of indepenent standard gaussian random variables and so that the resultant couplings are again standard gaussian .when , we say that the perturbation is small . 
a convenient way to parametrize the perturbation is to set , where is a parameter that we call ` time ' .this nomenclature is natural , because perturbing the couplings up to time corresponds to running an ornstein - uhlenbeck flow at each coupling for time , with initial value .the following theorem says that the s - k model is chaotic under small continuous perturbations .[ chaoscont ] consider the s - k model at inverse temperature .take any .suppose we continuously perturb the couplings up to time , as defined above .let be chosen from the original gibbs measure and be chosen from the perturbed measure .let the overlap be defined as in .then there is an absolute constant such that for any positive integer , the expectation is taken over all randomness .again , the achievement is very modest , and does not come anywhere close to the claims of the physicists .but once again , this is the first rigorous result about chaos of any kind in the s - k model .to the best of our knowledge , the only other instance of a rigorous proof of chaos in any spin glass model is in the work of panchenko and talagrand , who established chaos with respect to small changes in the external field in the spherical s - k model .disorder chaos in directed polymers was established by the author in .a deficiency of both theorems in this subsection is that they do not cover the case of zero temperature , that is , , where gibbs measure concentrates all its mass on the ground state . in principle, the same techniques should apply , but there are some crucial hurdles that can not be cleared with the available ideas .the notion of superconcentration was defined in .the definition in pertains only to maxima of gaussian fields , but it can be generalized to roughly mean the following : a lipschitz function of a collection of independent standard gaussian random variables is superconcentrated whenever its order of fluctuations is much smaller than its lipschitz constant .this definition is related to the classical concentration result for the gaussian measure , which says that the order of fluctuations of a lipschitz function under the gaussian measure is bounded by its lipschitz constant ( see e.g. theorem 2.2.4 in ) , irrespective of the dimension .the free energy of the s - k model is defined as where is the hamiltonian defined in .it follows from classical concentration of measure that the variance of is bounded by a constant multiple of ( see corollary 2.2.5 in ) .this is the best known bound for . when , talagrand ( theorems 2.2.7 and 2.2.13 in ) proved that the variance can actually be bounded by an absolute constantthis is also indicated in the earlier works of aizenman , lebowitz and ruelle and comets and neveu .therefore , according to our definition , the free energy is superconcentrated when .the following theorem shows that is superconcentrated at any .[ superconc ] let be the free energy of the s - k model defined above in .for any , we have where is an absolute constant .this result may be reminiscent of the improvement in the variance of first passage percolation time . however , the proof is quite different in our case since hypercontractivity , the major tool in , does not seem to work for spin glasses in any obvious way . in that sense ,the two results are quite unrelated .our proof is based on our chaos theorem for continuous perturbation ( theorem [ chaoscont ] ) and ideas from . 
on the other hand ,theorem [ superconc ] is used to derive the chaos theorem for discrete perturbation , again drawing upon ideas from .this equivalence between chaos and superconcentration is one of the main themes of , which in a way shows the significance of superconcentration , which may otherwise be viewed as just a curious phenomenon .incidentally , it was shown by talagrand ( , eq . ( 10.13 ) ) that the lower tail fluctuations of are actually as small as order under an unproven hypothesis about the parisi measure .let be an undirected graph .the edwards - anderson spin glass on is defined through the hamiltonian where is again a collection of i.i.d .random variables , often taken to be gaussian .the s - k model corresponds to the case of the complete graph , up to normalization by . for a survey of the ( few ) rigorous and non - rigorous results available for the edwards - anderson model, we refer to newman and stein . unlike the s - k model , there are two kinds of overlap in the e - a model .the ` site overlap ' is the usual overlap defined in .the ` bond overlap ' between two states and , on the other hand , is defined as we show that the bond overlap in the e - a model is not chaotic with respect to small fluctuations of the couplings at any temperature .this does not say anything about the site overlap ; the site overlap in the e - a model can well be chaotic with respect to small fluctuations of the couplings , as predicted in .[ nochaos ] suppose the e - a hamiltonian on a graph is continuously perturbed up to time , according to the definition of continuous perturbation in section [ chaossec ] .let be chosen from the original gibbs measure at inverse temperature and is chosen from the perturbed measure .let the bond overlap be defined as in .let where is the maximum degree of .then where is a positive absolute constant .moreover , the result holds for also , with the interpretation that the gibbs measure at is just the uniform distribution on the set of ground states .an interesting case of the above theorem is when .the result then says that if two configurations are drawn independently from the gibbs measure , they have a non - negligible bond overlap with non - vanishing probability .the fact that this holds at any finite temperature is in contrast with the mean - field case ( i.e. the s - k model ) , where there is a high - temperature phase ( ) where the bond overlap becomes negligible . however , while theorem [ nochaos ] establishes that the bond overlap does not become zero for any amount of perturbation , it does exhibit a sort of ` quenched chaos ' , in the following sense .[ quenchedchaos ] fix and let be as in theorem [ nochaos ]. then that is , if we perturb the system by an amount , the bond overlap between two configurations drawn from the two gibbs measures is approximately equal to the quenched average of the overlap .in physical terms , the overlap ` self - averages ' . the combination of the last two theorems brings to light a surprising phenomenon . on the one hand ,the perturbation retains a memory of the original gibbs measure , because the overlap is non - vanishing in theorem [ nochaos ] . 
on the other hand, the perturbation causes a chaotic reorganization of the gibbs measure in such a way that the overlap concentrates on a single value in theorem [ quenchedchaos ] .the author can see no clear explanation of this confusing outcome .the proof of theorem [ nochaos ] is based on the following result , which says that the free energy is not superconcentrated in the e - a model on bounded degree graphs .this generalizes a well - known result of wehr and aizenman , who proved the analogous result on square lattices .the relative advantage of our approach is that it does not use the structure of the graph , whereas the wehr - aizenman proof depends heavily on properties of the lattice .[ nosuper ] let denote the free energy in the edwards - anderson model on a graph , defined in .let be the maximum degree of . then for any , including ( where the free energy is just the energy of the ground state ), we have the above result is based on a formula ( theorem [ varformula ] ) for the variance of an arbitrary smooth function of gaussian random variables .it will clear from our proofs that the chaos and superconcentration results hold for the -spin versions of the s - k model for even .( see chapter 6 of for the definition of these models and various results . ) in fact , a generalization of theorem [ chaoscont ] is proven in theorem [ pspin ] later , which includes the -spin models for even .it will also be clear that the lack of superconcentration is true in the random field ising model on general bounded degree graphs .( again , the lattice case is handled in .we refer to for the definition of the rfim . )the absence of superconcentration in the rfim implies that the _ site _ overlap is stable under perturbations , instead of the bond overlap as in the e - a model .a simple model where our techniques give sharp results is the random energy model ( rem ) .this is discussed in subsection [ rem ] . in spite of the progress made in this paper over ,many key issues are still out of reach .some of them are as follows : 1 .improve the multiple valley theorem ( theorem [ multisk ] ) so that is a negative power of , preferably better than , which will prove ` strong multiple valleys ' in the sense of .another possible improvement to theorem [ multisk ] can be achieved by increasing to something of the form .3 . prove the chaos theorems ( theorems [ chaosdisc ] and [ chaoscont ] ) for the ground state ( ) of the s - k model .4 . improve the superconcentration result ( theorem [ superconc ] ) so that the right hand side is for some .this is tied to the improvement of the chaos result .if the above is not possible , at least prove a version of the superconcentration result where the right hand side does not depend on , or has a better dependence than .this will solve the question of chaos for .prove that the site overlap in the edwards - anderson model is chaotic with respect to fluctuations in the disorder , even though the bond overlap is not .prove disorder chaos in the s - k model with nonzero external field , that is , if there is an additional term of the form in the hamiltonian .the general nature of the s - k model indicates that any result for may be substantially harder to prove than for .( reportedly , a sketch of the proof in this case will appear in the new edition of . )show that in the e - a model , the variance of tends to zero and the graph size goes to infinity .establish temperature chaos in any of these models .the rest of the paper is organized as follows . 
in section[ sketches ] , we sketch the proofs of the main results . in section [ proofs ] , we present some general results that cover a wider class of gaussian fields .all proofs are given in section [ proofs ] .in this section we give very short sketches of some of the main ideas of this paper .suppose we choose from the gibbs measure at inverse temperature and from the measure obtained by applying a continuous perturbation up to time .let and be the two hamiltonians .suppose and sufficiently slowly so that chaos holds ( i.e. as ) .clearly this is possible by theorem [ chaoscont ]. then due to chaos , and are approximately orthogonal .since , nearly minimizes and nearly minimizes .but , since , .thus , and both nearly minimize .this procedure finds two states that have nearly minimal energy and are nearly orthogonal . repeating this procedure , we find many such states .the details are of this argument are worked out in subsection [ multval ] .let denote when is drawn from the unperturbed gibbs measure at inverse temperature and is drawn from the gibbs measure continuously perturbed up to time .let be the free energy defined in .then we show that the proof of this result ( theorem [ supergauss ] ) is simply a combination of the heat equation for the ornstein - uhlenbeck process and integration - by - parts .the formula directly shows that whenever falls of sharply to zero , which is a way of saying that chaos implies superconcentration . in subsection [ chaosgauss ], we show that is a nonnegative and decreasing function .this proves the converse implication , since the integral of a nonnegative decreasing function can be small only if the function drops off sharply to zero .suppose is drawn from the gibbs measure of the s - k model at inverse temperature , and from the measure continuously perturbed up to time .let be the overlap of and , as usual , and let we have to show that for all , where is some constant that depends only on . by repeated applications of differentiation and gaussian integration - by - parts ,we show that for all and . here denotes the derivative of .such functions are called completely monotone .now , by a classical theorem of bernstein about completely monotone functions , there is a probability measure on such that by hlder s inequality and the above representation , it follows that for , in other words , _ chaos under large perturbations implies chaos under small perturbations_. thus , it suffices to prove that for sufficiently large .the next step is an ` induction from infinity ' .it is not difficult to see that when , after integrating out the disorder , and are independent and uniformly distributed on . from thisit follows that .we use this to obtain a similar bound on for sufficiently large , through the following steps .first , we show that for any and , thus , we have a _ chain of differential inequalities_. it is possible to manipulate this chain to conclude that the right hand side is bounded by if and only if is sufficiently large .( this is related to the fact that when is a standard gaussian random variable , if and only if . )this completes the proof sketch .the details of the above argument are worked out in subsection [ chaosgauss ] .the proof of theorem [ nochaos ] , again , is based on the representation of the variance of the free energy and the representation of the function ( both of which hold for the e - a model as well ) . 
from, it follows that there is a nonnegative random variable such that for all , from this and it follows that next , we prove a simple analytical fact : suppose is a nonnegative random variable and let . then for any , using this inequality for the random variable and the lower bound on the variance from theorem [ nosuper ] , it is easy to obtain the required lower bound on the function , which establishes the absence of chaos .the details of this argument are presented in subsection [ nochaosproof ] .the proof of theorem [ quenchedchaos ] involves a new idea .let , and let be independent copies of . for each , let for each , let denote a configuration drawn from the gibbs measure defined by the disorder . for , we assume that and are independent given . define by a similar logic as in the derivation of , one can show that is a completely monotone function . also , is bounded by .thus , for any , latexmath:[\[\label{pbd0 } and let it turns out that and where is the bond overlap between and . combining these two identities with the inequality , it is easy to complete the proof of theorem [ quenchedchaos ] .the details are in subsection [ quenchedproof ] .let , and let be an independent copy of . for any , let be the array whose component is let be the free energy , considered as a function of .suppose and are constants such that for all , fix , and let be a subset of , chosen uniformly at random from the collection of all subsets of size .let be chosen from the gibbs measure at inverse temperature defined by the disorder , and let be drawn from the gibbs measure defined by .let denote the overlap of and , as usual .the key step is to prove that for some absolute constant , this inequality is the content of theorem [ discgauss ] .the proof is completed by showing that we can choose and such that , and using the superconcentration bound ( theorem [ superconc ] ) on the variance of .the details of the proof are given in subsection [ chaosdiscproof ] .although this result was already proven in for the e - a model on lattices , it may be worth sketching our argument for general bounded degree graphs here .our proof is based on a general lower bound for arbitrary functions of gaussian random variables .the result ( theorem [ varlowbd ] ) goes as follows : suppose is an absolutely continuous function such that there is a version of its gradient that is bounded on bounded sets .let be a standard gaussian random vector in , and suppose and are both finite .then where denotes the usual inner product on .we apply this result to the gaussian vector , taking the function to be the free energy .a few tricks are required to get a lower bound on the right hand side that does not blow up as .incidentally , the above lower bound on the variance of gaussian functionals is based on a multidimensional plancherel formula that may be of independent interest : versions of this formula have been previously derived in the literature using expansions with respect to the multivariate orthogonal hermite polynomial basis ( see subsection [ plancherel ] for references ) .we give a different proof avoiding the use of the orthogonal basis .the results of section [ intro ] are applications of some general theorems about gaussian fields .these are presented in this section , together with the proofs of the theorems of section [ intro ] . 
unlike the previous sections , we proceed according to the theorem - proof format in the rest of the paper .let be a finite set and let be a centered gaussian random vector .let let be an independent copy of , and for each , let fix .for each , define a probability measure on that assigns mass to the point , for each .the average of a function under the measure will be denoted by , that is , we will consider the covariance kernel as a function on , defined as .alternatively , it will also be considered as a square matrix .[ main ] assume that for all . for each ,let let be any convergent power series on all of whose coefficients are nonnegative .then for each , moreover , is a decreasing function of .roughly , the way to apply this theorem is the following : prove that the right hand side is small for some large using high temperature methods , and then use the infimum to show that the smallness persists for small as well .since the application of theorem [ main ] to the s - k model seems to yield a suboptimal result ( theorem [ chaoscont ] ) , one can question whether theorem [ main ] can ever give sharp bounds . in subsection [ rem ]we settle this issue by showing that theorem [ main ] gives a sharp result for derrida s random energy model .let us first extend the definition of to negative .this is done quite simply .let be another independent copy of that is also independent of , and for each , let let us now recall gaussian integration by parts : if is an absolutely continuous function such that has finite expectation , then for any , where denotes the partial derivative of along the coordinate ( see e.g. , appendix a.6 ) .the following lemma is simply a reformulated version of the above identity . for each ,define a simple computation gives ( note that issues like moving derivatives inside expectations are easily taken care of due to the assumption that . )one can verify by computing covariances that and the pair are independent .moreover , so for any , gaussian integration by parts gives the proof is completed by combining the last two steps .[ complete ] let be the class of all functions on that can be expressed as for some nonnegative integer and nonnegative real numbers , and functions in . for any , there is a probability measure on such that for each , in particular , for any , note that any must necessarily be a nonnegative function , since and are independent and identically distributed conditional on , which gives now , if , then for all , and there is nothing to prove .so let us assume .since is a positive semidefinite matrix , there is a square matrix such that .thus , given a function , if we define then by lemma [ basic ] we have from this observation and the definition of , it follows easily that if , then .proceeding by induction , we see that for any , is a nonnegative function ( where denotes the derivative of ) .such functions on are called ` completely monotone ' . the most important property of completely monotone functions ( see e.g. feller , vol .ii , section xiii.4 ) is that any such function can be represented as the laplace transform of a positive borel measure on , that is , moreover , . by taking ,this proves the first assertion of the theorem . for the second ,note that by hlder s inequality , we have that for any , this completes the proof .the next lemma is obtained by a variant of the gaussian interpolation methods for analyzing mean field spin glasses at high temperatures .it is similar to r. 
lataa s unpublished proof of the replica symmetric solution of the s - k model ( to appear in the new edition of ) . for each , define a function as note that where if and otherwise .since is bounded , this proves in particular that .take any nonnegative integer . since is a positive semidefinite matrix ,so is .( to see this , just note that are independent copies of , then . )therefore there exists a matrix such that .define the functions in the following we will denote and by and respectively , for all . let by lemma [ basic ] we get our objective is to get a lower bound for . for this, we can delete the two middle terms in the above expression because they contribute a positive amount .for the fourth term , note that by hlder s inequality , thus , by lemma [ complete ] and the above inequalities , we have now let for . then the inequality simply becomes fix , . for each ,let using ( [ diffineq2 ] ) , we see that inductively , this implies that for any , again , for any , where .thus , . finally , observe that and are independent .this implies that for any , combining , we conclude that the result now follows easily by taking and summing over , using the fact that has nonnegative coefficients in its power series .let be as in the proof of lemma [ hightemp ] . as noted in the proof of lemma [ hightemp ] ,the matrix is positive semidefinite for every nonnegative integer .since has nonnegative coefficients in its power series , it follows that the matrix is also positive semidefinite .let be a matrix such that . then therefore , the function belongs to the class of lemma [ complete ] .the proof is now finished by using lemma [ complete ] and lemma [ hightemp ] , and the observation that ( since has the same law as ) . the claim that is a decreasing function of is automatic because .we are now ready to give a proof of theorem [ chaoscont ] using theorem [ main ] .in fact , we shall prove a slightly general result below , which also covers the case of -spin models for even , as well as further generalizations .let be a positive integer and suppose is a centered gaussian random vector with where is some function on ] . then there is a constant depending only on such that for all and , and any positive integer , by symmetry , it is easy to see that for each , again , it follows from elementary combinatorial arguments that there are positive constants and that do not depend on , such that for any positive integer and any , choosing so that , and , we see from theorem [ main ] ( and the assumption that ) that for any where is a constant that does not depend on .this proves the result for .for we use the last assertion of theorem [ main ] to conclude that is decreasing in . finally , observe that for some constant that depends only on and . in this subsectionwe use theorem [ main ] to prove a multiple valley result for general gaussian fields .let all notation be the same as in subsection [ chaosgauss ] .the idea of the proof is borrowed from the proof of theorem 3.7 in , although there are added complications resulting from the fact that we are trying to derive a result about from a result about finite ( i.e. theorem [ main ] ) . [ multigauss ] let be a positive integer , and let .choose any and .let , and .define where the gibbs average in the last term is taken at inverse temperature .then with probability at least , there exists a set of size such that for any distinct , we have , and for all , . 
given ,let be a random variable chosen from the set such that next , let and define a random function then we have thus , an easy verification shows that therefore is an increasing function of and hence combining this with the observation that , we have now let be i.i.d .copies of .let and then and are independent ( jointly gaussian and all covariances vanish ) , and let be a random variable on whose conditional distribution given is the same as that of given .in particular , by the independence of and , and are also independent . from this observation and the above representation of , we have the last equality holds because and have the same unconditional distribution .for the same reason , we have combining the last two inequalities , we see that thus , now , we clearly have that for any , thus , finally , by gaussian concentration ( see proposition 1.3 in ) , we have . combining all steps ,we get and putting together the last two bounds , we see that the set satisfies the requirements of the theorem .[ multigauss2 ] let all notation be the same as in theorem [ multigauss ] .fix any .let then with probability at least , there exists a set of size such that for any distinct , we have , and for all , .let be an independent copy of , and let note that has the same distribution as .let be a set as in theorem [ multigauss ] .let .then for any , by gaussian concentration ( see e.g. proposition 1.3 in ) , we have and moreover , by the independence of and and a standard result for gaussian random variables ( see e.g. lemma 2.1 in ) , we get therefore , from the above steps and theorem [ multigauss ] , we have that with probability at least , there is a set of size such that for any distinct , we have , and for each , . since and have the same distribution , this completes the proof. these are direct applications of theorem [ multigauss ] and corollary [ multigauss2 ] .consider the gaussian field defined in , and choose , \\delta = ( \log n)^{-1/8 } , \\ & t = ( \log n)^{-1/3 } , \\epsilon = e^{-(\log n)^{1/8}}.\end{aligned}\ ] ] note that and .note that the quantity , according to the notation of theorem [ chaosgauss ] , is just .thus , with the above value of , theorem [ chaoscont ] says that for some absolute constants , again , by the sudakov minoration technique ( see e.g. lemma 2.3 in ) it is not difficult to prove that for some positive absolute constant .invoking theorem [ multigauss ] , we now get where and denote arbitrary absolute constants .this completes the proof of theorem [ multisk ] . to prove corollary [ multisk2 ] , note that the quantity in corollary [ multigauss2 ] can be bounded by since as noted before , this completes the proof of corollary [ multisk2 ] .carrying on with the notation of subsection [ chaosgauss ] , we have the following formula for the variance of the free energy associated with a gaussian field at inverse temperature .this is a direct analog of lemma 3.1 in .we follow the notation of subsection [ chaosgauss ] .note that by lemma [ basic ] , for any smooth we have taking , we get again , since has the same joint law as , we see that . 
combining the steps ,we get this completes the proof .the goal of this subsection is to prove that in the absence of superconcentration , we do not have chaos either .this is an improved version of theorem 3.2 in , where the absence of chaos was proved only up to a finite time , but not for all .note that now , for any , we have thus , if , then now if and only if .combining the steps , we see that this completes the proof of the lemma . by theorem [ supergauss ] , by lemma [ complete ], we see that there is a non - negative random variable such that for all , combined with the formula for the variance derived above , this gives the result now follows from lemma [ lowlmm ] .in this subsection we present a general formula for the variance of a function of independent standard gaussian random variables .after that , we derive a useful lower bound for the variance using this formula .the variance formula looks similar to those in hour and kagan and houdr but it is not the same .various versions of the formula have appeared in houdr and prez - abreu ( , remark 2.3 ) and houdr , prez - abreu and surgailis ( , proposition 10 ) .essentially , this is the parseval identity for the norm of a gaussian functional expressed as a sum of squares of its fourier coefficients in the orthogonal basis of multidimensional hermite polynomials .we present a direct proof that does not involve the multivariate hermite polynomial basis . yetanother proof , based on heat kernel expansions , was suggested to the author in a private communication by michel ledoux .[ varformula ] let be a vector of i.i.d .standard gaussian random variables , and let be a function of with bounded derivatives of all orders . then the convergence of the infinite series is part of the conclusion . let and be i.i.d .copies of , and for each , define let then by lemma [ basic ] , we have for , define , where .then repeating this step times shows that as in the proof of lemma [ complete ] , we observe that the expectations on the right hand side are always nonnegative. we can continuously extend to the closed interval ] that is in with all derivatives non - negative .such functions are known as absolutely monotone ( see feller , p. 223 ) , and their most important property is that they can be represented as a power series , where the coefficients are non - negative and sum to . from thisone can easily deduce that for any , since and , this completes the proof .a great advantage of theorem [ varformula ] is that we can extract lower bounds for the variance just by collecting a subset of the terms in the infinite sum .this is exactly what we do to get the following theorem .we do not actually need the theorem in its full generality ( with respect to the smoothness conditions on ) , but prove it in the general form nonetheless .[ varlowbd ] suppose is an absolutely continuous function such that there is a version of its gradient that is bounded on bounded sets .let be a standard gaussian random vector in , and suppose and are both finite. then where denotes the usual inner product on .first assume that .theorem [ varformula ] gives integration by parts gives thus , for any function , let us now show that the above inequality holds for any bounded lipschitz function . for each and , define then we can write since is a bounded function , it is clear from the above representation that for any , and hence holds for . 
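as a quick illustrative check of the variance formula just proved, assume it is written in the usual hermite-expansion form var f(g) = sum_{k>=1} (1/k!) sum_{i_1,...,i_k} ( E[ d_{i_1} ... d_{i_k} f(g) ] )^2 . for a function whose derivatives of order three and higher vanish the series terminates after two terms, and the two sides can be compared numerically; the particular choice f(g) = g_1 g_2 + a g_1 below is an arbitrary example, not one taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
a = 0.7                                  # illustrative constant

def f(g):
    # f(g) = g1*g2 + a*g1 : all derivatives of order >= 3 vanish
    return g[..., 0] * g[..., 1] + a * g[..., 0]

# right hand side of the variance formula:
#   k = 1 : (E d1 f)^2 + (E d2 f)^2 = a^2 + 0
#   k = 2 : (1/2!) * sum over ordered index pairs of (E d_i d_j f)^2 = (1/2)*(1+1)
rhs = a**2 + 1.0

# left hand side estimated by monte carlo
g = rng.standard_normal((2_000_000, 2))
lhs = f(g).var()

print(lhs, rhs)    # the two numbers should agree to a few decimals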
again , since is lipschitz , where is the lipschitz constant of .this shows that we can take and obtain for .next , we want to show whenever is absolutely continuous and square - integrable under the gaussian measure , and the gradient of is bounded on bounded sets .take any such .let ] is a lipschitz function that is on ] .then the above identity holds for each , and we can pass to the limit using the dominated convergence theorem .( actually , it can be shown that the finiteness of suffices . ) as a last step , we observe that by the cauchy - schwarz inequality , this completes the proof . note that and therefore now , it is easy to verify that is a convex function of , and hence for each , is an increasing function of .thus , is also an increasing function of .moreover , and therefore for all . finally note that by integration by parts , combined with theorem [ varlowbd ] , this completes the proof .we are now ready to complete the proof of theorem [ nosuper ] .consider an undirected graph , and the edwards - anderson spin glass model on as defined in subsection [ ea ] .let denote the average with respect to the gibbs measure at inverse temperature .first , we will work with .let be as in theorem [ nosuper ] .by lemma [ varlmm ] , with , and , we get now , under the gibbs measure at inverse temperature , the conditional expectation of given the rest of the spins is , where is the neighborhood of in the graph . using this fact and the inequality , we get thus , taking , we get finally , to prove the lower bound for , just note that almost surely , and the quantities are all bounded , so we can apply the dominated convergence theorem to get convergence of the variance .this completes the proof of theorem [ nosuper ] . for ,this is just a combination of theorem [ nosuper ] and theorem [ supchaos ] .( note that the notations of the two theorems are related as ; also note that in this case , and therefore . ) next , note that as , the gibbs measure at inverse temperature converges weakly to the uniform distribution on the set of ground states .the same holds for the perturbed gibbs measure .thus , where denotes the gibbs average at inverse temperature . since all quantities are bounded by , we can take expectations on both sides and apply dominated convergence .let , and let be independent copies of .for each , let for each , let denote a configuration drawn from the gibbs measure defined by the disorder . for , we assume that and are independent given .define by lemma [ complete ] , it follows that is a completely monotone function on . also , is bounded by .thus , for any , then by lemma [ basic ] we know that now fix , and let .then and so by , we have now let and define and . then and are both uniformly bounded by , and so since , an application of the cauchy - schwarz inequality and to the above bound gives to complete the proof , note that where is the bond overlap between and .our goal in this subsection is to prove that superconcentration implies chaos under discrete perturbations .accordingly , let us first set the stage for discrete perturbation . henceforth , we deviate from the notation of subsection [ chaosgauss ] .the result of this subsection and its proof are inspired by lemma 2.3 in ; we follow the same notation as in .let be a vector of independent random variables with for each .let be an independent copy of .for any : = \{1,\ldots , n\} ] , chosen uniformly at random from the collection of all subsets of size . 
define as above .let .then the proof of theorem [ discgauss ] is divided into a series of lemmas .first , let us introduce some further conventions . to simplify notation , we will write for . when , we will simply write . for any and such that , let as usual , when , we will simply write . let denote the collection of all subsets of \backslash \{i\} ] such that and ] is .this proves . combining with, we see that } ) ) = \frac{1}{2n}\sum_{i=1}^n \sum_{k=0}^{n-1}\frac{1}{{n-1 \choose k } } \sum_{a\in \ma_{k , i } } \ee(\delta_i f\delta_i f^a).\ ] ] this completes the proof of the lemma .take any and .it is easy to see that given and , the random variables and are i.i.d .therefore , from this and jensen s inequality , it is clear that , and for any \backslash \{i\} ] of size . using and, we get from this and lemma [ disc3 ] , we conclude that for , the same conclusion can be drawn for and by defining and verifying that all steps hold .this completes the proof .consider the s - k hamiltonian defined in as a function of the disorder .fix , and define , where is the free energy defined in .let be an independent copy of , and define as we defined in theorem [ discgauss ] .let ( and assume that is an integer ) , and define a perturbed hamiltonian using the disorder , where is chosen uniformly at random from the set of all subsets of of size .let be sampled from the original gibbs measure , and from the perturbed gibbs measure .an easy verification shows that where is the derivative of with respect to the coordinate . on the other hand , by theorem [ superconc ]we know that finally , note that for any , therefore , we can take and while applying theorem [ discgauss ] . using all the above information , we can now apply theorem [ discgauss ] to conclude that where is an absolute constant .since , we can ignore the second term on the right after replacing by in the first term .this completes the proof .the random energy model ( rem ) , introduced by derrida , is possibly the simplest model of a spin glass .the state space is as usual , but here the energies of states are chosen to be i.i.d .gaussian random variables with mean zero and variance .we show that theorem [ main ] gives a sharp result in the low temperature regime ( ) of this model .we follow the notation of theorem [ main ] .suppose is drawn from the original gibbs measure of the rem and from the gibbs measure perturbed continuously up to time , in the sense of subsection [ chaossec ] .if , there are positive constants and depending only on such that for all and , in the notation of theorem [ main ] , we have if , and if . also , clearly , for each .suppose is drawn from the original gibbs measure and from the gibbs measure perturbed continuously up to time .taking in theorem [ main ] , we get now choose so large that .the above inequality shows that for , and for , a simple computation via theorem [ supergauss ] now gives where is a constant depending only on .now suppose .let , where solves let denote the numbers when enumerated in non - increasing order .it follows from arguments in section 1.2 of talagrand that this point process converges in distribution , as , to a poisson point process with intensity on , where .it is not difficult to extend this argument to show that we skip the details , which are somewhat tedious .( here is required to ensure that the infinite sum converges almost surely . 
)however , .thus , there is a positive constant depending only on such that for any , we can now use theorem [ supchaos ] to prove that for some positive constant depending only on , we have that for any and , however , we also have by theorem [ main ] that is a decreasing function of , and hence combined with , and , this completes the proof ..2 in * acknowledgments . *the author thanks michel talagrand , persi diaconis , daniel fisher , victor prez - abreu , christian houdr , michel ledoux , rongfeng sun , tonci antunovic and partha dey for helpful discussions and comments , and itai benjamini for asking the question that led to theorem [ chaosdisc ] . | we prove that the sherrington - kirkpatrick model of spin glasses is chaotic under small perturbations of the couplings at any temperature in the absence of an external field . the result is proved for two kinds of perturbations : ( a ) distorting the couplings via ornstein - uhlenbeck flows , and ( b ) replacing a small fraction of the couplings by independent copies . we further prove that the s - k model exhibits multiple valleys in its energy landscape , in the weak sense that there are many states with near - minimal energy that are mutually nearly orthogonal . we show that the variance of the free energy of the s - k model is unusually small at any temperature . ( by ` unusually small ' we mean that it is much smaller than the number of sites ; in other words , it beats the classical gaussian concentration inequality , a phenomenon that we call ` superconcentration ' . ) we prove that the bond overlap in the edwards - anderson model of spin glasses is _ not _ chaotic under perturbations of the couplings , even large perturbations . lastly , we obtain sharp lower bounds on the variance of the free energy in the e - a model on any bounded degree graph , generalizing a result of wehr and aizenman and establishing the absence of superconcentration in this class of models . our techniques apply for the -spin models and the random field ising model as well , although we do not work out the details in these cases . |
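as a small numerical companion to the chaos statements summarized above, the sketch below perturbs the couplings of a toy s-k system along the ornstein-uhlenbeck flow and evaluates the mean absolute overlap between a configuration drawn from the original gibbs measure and one drawn from the perturbed measure, for a single realisation of the disorder. the system size, inverse temperature, normalisation and sign convention in the gibbs weights are illustrative assumptions, far too small to see the asymptotic statement; exact enumeration is used instead of sampling.

import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, beta = 10, 1.5                         # illustrative size and inverse temperature

sigma = np.array(list(product([-1, 1], repeat=n)))          # all 2^n configurations
J = np.triu(rng.standard_normal((n, n)), 1)                  # couplings g_ij, i < j

def gibbs(Jmat):
    # assumed normalisation H(sigma) = n^{-1/2} sum_{i<j} g_ij sigma_i sigma_j,
    # gibbs weights proportional to exp(beta*H) (the sign only relabels states)
    H = np.einsum('ki,ij,kj->k', sigma, Jmat, sigma) / np.sqrt(n)
    w = np.exp(beta * (H - H.max()))
    return w / w.sum()

def mean_abs_overlap(t):
    # ornstein-uhlenbeck perturbation of the disorder run for "time" t
    Jp = (np.exp(-t) * J
          + np.sqrt(1.0 - np.exp(-2.0 * t)) * np.triu(rng.standard_normal((n, n)), 1))
    p, q = gibbs(J), gibbs(Jp)
    R = (sigma @ sigma.T) / n                                # overlap R(sigma, tau)
    return p @ np.abs(R) @ q

for t in [0.0, 0.05, 0.2, 1.0]:
    print(t, mean_abs_overlap(t))   # typically decreases with t, even at this tiny size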
the authors acknowledge the support of the deutsche forschungsgemeinschaft , f.w . by grant sfb509 and c.v.f . by sfb237 . here, we generalize the sum over in eq .( 2 ) to include all sites with lattice coordinates and such that and . to achieve a visible segmentation in fig .1c and 1d we use . | we present a fast and robust cluster update algorithm that is especially efficient in implementing the task of image segmentation using the method of superparamagnetic clustering . we apply it to a potts model with spin interactions that are are defined by gray - scale differences within the image . motivated by biological systems , we introduce the concept of neural inhibition to the potts model realization of the segmentation problem . including the inhibition term in the hamiltonian results in enhanced contrast and thereby significantly improves segmentation quality . as a second benefit we can - after equilibration - directly identify the image segments as the clusters formed by the clustering algorithm . to construct a new spin configuration the algorithm performs the standard steps of ( 1 ) forming clusters and of ( 2 ) updating the spins in a cluster simultaneously . as opposed to standard algorithms , however , we share the interaction energy between the two steps . thus the update probabilities are not independent of the interaction energies . as a consequence , we observe an acceleration of the relaxation by a factor of 10 compared to the swendson and wang procedure . the segmentation of images into connected areas or objects is a formidable task and an important step in the process of recognition . nature provides us with many examples of biological systems that solve this and other tasks related to the recognition problem in highly efficient ways . taken as such , the problem is ill - defined : one will distinguish different numbers of objects in a noisy picture depending on the level of contrast and resolution . a physicists answer to the problem has been presented by the method of ` superparamagnetic clustering of data ' where the pixels of an image are represented by a potts model of spins which interact in such a way that neighboring spins corresponding to similar pixels tend to align . then the image - segments ( or objects ) may be identified as subsets or clusters of correlated spins at a given temperature . at high temperature one will find a disordered paramagnetic phase while , when lowering the temperature , superparamagnetic phases occur with clusters of aligned spins . from a theoretical point of view any method of simulating a given spin system is equivalent as long as it preserves general concepts such as detailed balance . for practical purposes it is of course desirable to choose a method that is efficient and best adapted to the model . cluster update algorithms are commonly used to to accelerate the equilibration of large spin systems . as opposed to single spin updates following a metropolis procedure , these algorithms provide a method to update connected clusters of aligned spins simultaneously . our approach to the problem is twofold : on the one hand we introduce to the spin model the concept of ( 1 ) _ global inhibition _ , motivated by the analogy to neural visual systems , on the other hand ( 2 ) we have developed a novel cluster algorithm that utilizes the energy landscape , which underlies the equilibration process , in a more efficient way . 
\(1 ) the concept of global inhibition is found in many biological neural networks and has successfully been applied also in neural computation . we implement it by adding a small global penalty for spins to align . it serves to identify different clusters by different spin labels without need to observe the spin correlations over a longer time period . \(2 ) in a cluster update algorithm the clusters are formed by `` freezing '' bonds between aligned spins with some probability . commonly the clusters are then updated independently . we update the clusters taking into account also the interactions on bonds that were not frozen . in addition the inner surface of the larger clusters is reduced by incorporating islands that they might contain . both of our improvements are implemented while preserving detailed balance . as a result , we observe a significant increase in quality and speed . without loss of generality in the following we will use the problem of segmenting an image into individual objects as an example to describe our approach . specifically , given a picture in form of color ( or gray - scale ) values on the sites of a finite lattice , we have the clustering problem : find ` objects ' i.e. clusters of almost the same color . we define for each pair of nearest neighbors or _ bond _ on the lattice the distance and the mean distance averaged over all bonds . to perform the clustering task we assign a spin variable to each site and for each bond an interaction strength with the normalization in eq.([gray ] ) the color of sites is assumed to be similar when the gray value distance is smaller than the average . then the interaction strength is positive with a maximum value of for equal color . we implement the spin model in such a way that for neighboring sites with similar color the spins have the tendency to align . for this purpose we use a -state potts model with the hamiltonian here , denotes that are nearest neighbors connected by a bond and is the kronecker delta function . the second term is introduced in analogy to neural systems , where it is generally called `` global inhibition '' . it serves to favor different spin values for spins in different clusters as explained below . this is a concept realized in many neural systems that perform recognition tasks . the segmentation problem is then solved by finding clusters of correlated spins in the low temperature equilibrium states of the hamiltonian . we perform this task by implementing a clustering algorithm : in a first step the ` satisfied ' bonds , i.e. those that connect nearest neighbor pairs of identical spins are identified . the satisfied bonds are then ` frozen ' with some probability . sites connected by frozen bonds define the clusters . each cluster is then updated by assigning to all spins inside the cluster the same new value . commonly this is done independently for each cluster . in that sense the external bonds connecting the clusters are ` deleted ' . here , we use a more general cluster algorithm . when choosing a new spin configuration we take these bonds into account . to preserve detailed balance , we adjust the bond freezing probabilities and the interaction on the external bonds . our cluster update algorithm , which we call energy - sharing cluster update ( ecu ) is divided in two basic steps . similar to the swendson & wang cluster algorithm also in our approach the temperature remains fixed and no annealing takes place between the iterations . 
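before listing the two steps, here is a minimal sketch of the ingredients defined so far; the explicit normalisation j_ij = 1 - d_ij / <d> and the form of the global inhibition term are assumptions consistent with the description of eqs. (1) and (2), not the exact expressions of the paper, and the image is a random stand-in.

import numpy as np

rng = np.random.default_rng(2)
q, alpha, L = 10, 0.01, 32            # potts states, assumed inhibition strength, lattice size
img = rng.integers(0, 256, size=(L, L)).astype(float)        # stand-in gray-scale image

# nearest-neighbour bonds of the square lattice
bonds = ([((i, j), (i + 1, j)) for i in range(L - 1) for j in range(L)]
         + [((i, j), (i, j + 1)) for i in range(L) for j in range(L - 1)])

d = np.array([abs(img[a] - img[b]) for a, b in bonds])        # gray-value distances on bonds
J = 1.0 - d / d.mean()                # assumed form of eq. (1): J = 1 for equal color,
                                      # J > 0 exactly when the distance is below average

def energy(s):
    # assumed form of eq. (2): ferromagnetic potts term on the bonds plus a small
    # global inhibition penalising alignment of any pair of sites in the image
    ferro = -sum(Jb for Jb, (a, b) in zip(J, bonds) if s[a] == s[b])
    counts = np.bincount(s.ravel(), minlength=q)
    inhib = alpha * (counts * (counts - 1) / 2).sum() / s.size
    return ferro + inhib

s = rng.integers(0, q, size=(L, L))   # random initial labels as in fig. 1b
print(energy(s))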
* as for any cluster update we first identify the _ satisfied _ bonds with and freeze these with probability when and . here is the product of the boltzmann constant and temperature . + the additional coefficients with allow us to `` share '' the interaction energy with the following additional steps . if one chooses then one obtains the usual swendson - wang clusters which may then be updated independently . * in an intermediate step we identify ` invisible ' islands i.e. clusters according to step ( 1a ) that have a boundary only with _ one _ other cluster and have the same spin value . these islands often delay the spin flip of the larger cluster in step ( 2 ) as their total boundary may be large . for this reason we want to remove them with some finite probability . this step is not indispensable for our algorithm but it further improves its performance . we freeze the bonds between an island and the surrounding cluster with probability where if is a bond connecting an island with a surrounding cluster after step ( 1 ) and otherwise . we impose the condition . note that we do not increase the bond freezing probability beyond the swendson - wang probability and no size limit for the islands is implied . * finally we identify the clusters of spins connected by frozen bonds after steps ( 1a ) and ( 1b ) . on this system of clusters that in similar approaches is referred to as a hyperlattice we perform a metropolis update that updates all spins in each cluster simultaneously to a common new label . the metropolis rate is calculated using the modified hamiltonian as has been shown on general grounds in detailed balance is preserved under the condition that in the modified hamiltonian one uses . this amounts to sharing the interaction energy between the clustering and updating steps . note that the inhibition term in eq . ( [ 3 ] ) does not enter the bond freezing probabilities . for the cluster update it has the effect of favoring a different spin value for each cluster . we have tested the performance of the proposed segmentation method based on the hamiltonian in eq . ( 2 ) with a finite inhibition of in combination with the ecu cluster update algorithm with energy sharing parameters . to our experience the efficiency of the algorithm does not depend sensitively on these parameters . further refinements may be added to improve the segmentation delivered by the ecu algorithm to cope with more delicate recognition problems . we have compared the algorithm to the performance of other known segmentation methods . as methods of reference we have used in particular the method of simulated annealing and the method of superparamagnetic clustering without inhibition ( ) using the standard swendson&wang ( sw ) update . in addition we have tested a variant of the sw update that allows to freeze anti - ferromagnetic bonds when . an example that illustrates the different solutions to the segmentation problem is shown in fig . 1 . let us explain this comparison in some detail . the gray scale values that define the interactions according to eq . ( 1 ) are taken from fig . 1a . some noise is included in this input . all segmentation methods that we consider use state spin variables . a random initial configuration of the spins is shown in terms of a gray scale picture in fig . 1b . as a first reference we show the sequence of a simulated annealing procedure in fig . 1c and 1d . here , the hamiltonian in eq . ( 2 ) with is used to define the metropolis rate of local spin updates . 
after each sweep of spin updates the temperature is lowered by a constant factor . we started with a temperature and lowered by in 1c and in 1d for each sweep . the spin configurations at intermediate steps are shown in fig . 1c and 1d . in the slower annealing procedure the two large rectangles in the image are segmented according to the original picture while the finer structure is not recognized by this algorithm . when the faster schedule is applied as in 1d then even the larger connected areas are divided into artificial segments . obviously the simulated annealing method is inefficient for the segmentation task and due to slowing down at low temperatures the local update is very time consuming . even optimizing the annealing rate during the schedule can not change this picture as an extremely slow rate is needed to indentify the fine structure of the thin border line . in fig . 1e - g we compare different cluster update algorithms that avoid the problem of slowing down and we test the influence of the inhibition term and the energy sharing that are included only in fig . 1 g . comparing the series of spin configurations in fig . 1e and 1 g one notices that the inhibition term in 1 g indeed introduces a forced contrast between different segments as compared to 1e , in particular at and . also the increase in speed is remarkable . + in fig . 1f we test a cluster update algorithm that includes anti - ferromagnetic clustering where in the clustering step ( 1a ) anti - ferromagnetic bonds with and are frozen with probability ] with some appropriately chosen . with this choice the contribution of the non - frozen bonds to the update is clipped at . in our case we share the energies in a proportional way between the clustering and update steps . the alignment of clusters is thus enhanced by also including the stronger bonds with higher energy content . + in summary , the recognition task of segmenting an image may be performed with high efficiency by a simple cluster update algorithm if global inhibition is implemented . furthermore , we believe that our cluster update approach may also be useful for the simulation of other spin models as its efficiency is not dependent on the special properties of the potts model we use here . |
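to complement the verbal description of the cluster update above, the following sketch implements one bond-freezing sweep in the plain swendson & wang limit, i.e. without the energy sharing between the two steps and without the island-removal step of the ecu algorithm. the freezing probability 1 - exp(-j_ij / k_b t) for satisfied ferromagnetic bonds is the standard choice; the function names, the union-find bookkeeping and the uniform couplings in the usage example are illustrative assumptions (the image couplings of eq. (1) would be used in practice).

import numpy as np

def sw_sweep(s, bonds, J, q, kT, rng):
    # freeze satisfied ferromagnetic bonds with probability 1 - exp(-J_ij/kT),
    # build clusters of frozen bonds, then give each cluster a random new label
    flat = {site: k for k, site in enumerate(np.ndindex(s.shape))}
    parent = list(range(len(flat)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), Jab in zip(bonds, J):
        if Jab > 0 and s[a] == s[b] and rng.random() < 1.0 - np.exp(-Jab / kT):
            parent[find(flat[a])] = find(flat[b])

    labels, new = {}, np.empty_like(s)
    for site, k in flat.items():
        r = find(k)
        if r not in labels:
            labels[r] = rng.integers(0, q)   # independent clusters; the ecu
        new[site] = labels[r]                # energy sharing would modify this step
    return new

# minimal usage on a small lattice with uniform couplings
rng = np.random.default_rng(3)
L, q = 16, 10
bonds = ([((i, j), (i + 1, j)) for i in range(L - 1) for j in range(L)]
         + [((i, j), (i, j + 1)) for i in range(L) for j in range(L - 1)])
J = np.ones(len(bonds))
s = rng.integers(0, q, size=(L, L))
for _ in range(20):
    s = sw_sweep(s, bonds, J, q, 0.5, rng)
print(np.bincount(s.ravel(), minlength=q))   # label populations after 20 sweeps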
the frontier field of cold ( k ) and ultracold ( mk ) gas - phase molecular physics has brought forth many innovations . among motivating challenges is the prospect of studying collision processes , especially chemical reactions , under `` matterwave '' conditions .reaching that realm , where quantum phenomena become much more prominent than in ordinary `` warm '' collisions , requires attaining _ relative _ velocities so low that the debroglie wavelength becomes comparable to or longer than the size of the collision partners . that has been achieved recently for reactions of alkali atoms with alkali dimers formed from ultracold trapped alkali atoms by photoassociation or feshbach resonances . with the aim of widening the chemical scope, much effort has been devoted to developing means to slow and cool preexisting molecules .( compilations are given in . ) for chemical reactions , however , as yet it has not proved feasible to obtain sufficient yields at very low collision energies , using either trapped reactants or crossed molecular beams .the major handicap in such experiments is that _ both _reactants must contribute adequate flux with very low translational energy .merged codirectional beams with closely matched velocities offer a way to obtain far higher intensity at very low _ relative _ collision energies , since then _ neither _ reactant needs to be particularly slow .moreover , many molecular species not amenable for slowing techniques become available as reactants .merged beams have been extensively used with ions and/or neutrals formed by charge transfer , to perform experiments at relative energies below 1 ev with beams having kev energies .a key advantage is a kinematic feature that deamplifies contributions to the relative energy by velocity spreads in the parent beams . by virtue of precise control feasible with ions , the velocity spreads are also quite small , typically .1% or less . for thermal molecular beams , such as we consider here , the spreads are usually % or more . that enables attaining low relative collision energy , but much lower energies and improved resolutioncan be obtained by narrowing the spreads to % , which now appears feasible at an acceptable cost in intensity .surprisingly , application of merged beams to low - energy collisions of neutral molecules has been long neglected .we have come across only three previous , very brief suggestions .our treatment accompanies experiments now underway at texas a&m university . in sec .ii , in order to assess the major role of velocity spreads in merged beams , we evaluate the average relative kinetic energy , and its rms spread by integrating over velocity distributions familiar for molecular beams . reduced variable plots are provided that display the dependence on the ratio of most probable velocities in the merged beams , their velocity spreads , and merging angle . in secs .iii and iv we discuss experimental prospects , limitations , and options .for beams with lab speeds and intersecting at an angle , the relative kinetic energy is with the reduced mass . for merged beams it is feasible to restrict the angle to a small spread , fixed by geometry and typically only about a degree or so about . 
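The kinematic deamplification can be illustrated with a short numerical sketch. The masses and beam speeds below are assumed, illustrative values, not taken from the experiments discussed here.

import numpy as np

kB = 1.380649e-23      # J/K
amu = 1.66053907e-27   # kg

def e_rel(v1, v2, theta, mu):
    """Relative kinetic energy E = 0.5*mu*(v1^2 + v2^2 - 2*v1*v2*cos(theta))."""
    return 0.5 * mu * (v1**2 + v2**2 - 2 * v1 * v2 * np.cos(theta))

# illustrative numbers: reduced mass of a 20/40 amu pair, closely matched beam speeds
mu = (20.0 * 40.0) / (20.0 + 40.0) * amu
v1, v2 = 600.0, 610.0   # m/s

for deg in (90.0, 10.0, 1.0, 0.0):
    E = e_rel(v1, v2, np.radians(deg), mu)
    print(f"theta = {deg:5.1f} deg  ->  E/kB = {E / kB:10.4f} K")

Even though both beams move at several hundred m/s, in this example the merged-beam relative energy at a 1 degree intersection corresponds to a fraction of a kelvin, whereas the same beams crossed at 90 degrees collide at several hundred kelvin.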
here we evaluate the average of over the beam velocity distributions , \ ] ] and the rms spread , ^{1/2}.\ ] ] these require only , with k = 1 - 4 , for the individual beams .we obtain analytic expressions for averages over velocity distributions for beams formed by effusive flow , by supersonic expansion , and by a rotating supersonic nozzle .figure 1 illustrates these distributions . for the supersonic beams ,three widths are shown ; the broadest ( 10% ) is typical , the narrowest ( 1% ) is near the best achieved in stark or zeeman molecular decelerators exploiting phase stability and transverse focusing .results given here pertain to molecular flux distributions ; as noted in an appendix , they are readily adapted for number density distributions . the flux distribution , \ ] ]is governed by a single parameter , , with the boltzmann constant , and the source temperature .the averaged powers of the velocity are \ ] ] with and the gamma function . the most probable velocity , , and the rms velocity spread is ^{1/2 } = 0.483 ] .the relative kinetic energy is with c = cos and s = sin . the rms spread is ^{1/2}\ ] ] with figure 2 shows how and vary with the ratio of beam velocities , which is proportional to .the curves given are for intersection angles near zero , pertinent for merged beams . formatched beam velocities , with , both and are smallest .there , for small , and are less than the nominal relative kinetic energy , , for a perpendicular collision ( ) by factors of only about 5 and 3 , respectively . as decreases below , and increase in nearly parallel fashion .accordingly , whether or not the merged beam velocities are closely matched , exceeds appreciably ( by 40% when ) .the resolution of the relative kinetic energy hence is worse than seen in eqs .( 6 ) and ( 7 ) for the beam kinetic energy itself .the velocity distribution for effusive beams is the same as for a bulk gas .thus , and for gas mixtures cooled by cryogenic means can be obtained from the merged beam results by merely setting , equivalent to integrating over all angles of collision . a standard approximation , ^ 2 \}\ ] ] for supersonic beams characterizes the velocity distribution by the flow velocity u along the centerline of the beam , and a width parameter , where , termed the parallel or longitudinal temperature , pertains to the molecular translational motion relative to the flow velocity . according to the thermal conduction model , determined by the pressure within the source , , the nozzle diameter , , and the heat capacity ratio , .likewise , the flow velocity is given by ^{1/2}[1 - ( t_{\|}/t_0)]^{1/2}\ ] ] analytic results for the velocity averages are readily obtained , with the ratio of velocity width to flow velocity , and ^ 2\}dt\ ] ] with . in the appendixwe give exact analytic formulas for the functions ; table i provides polynomial approximations ; for these are accurate to better than 0.03% .the most probable velocity is ^{1/2}\ } \approx u(1 + x^2) ] .the corresponding average beam kinetic energy is and the rms spread in kinetic energy is ^{1/2}\ ] ] for a pair of merged supersonic beams , the averaged relative kinetic energy is where .the rms spread is ^{1/2}\ ] ] with + s^4\left[\langle v_2 ^ 4\rangle-\langle v_2 ^ 2\rangle^2\right]\ ] ] \ ] ] \ ] ] these expressions are akin to eqs .( 8) and ( 9 ) , with and replaced by and . 
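A direct Monte Carlo estimate offers a quick cross-check of these averages. The sketch below samples speeds from the flux-weighted supersonic form f(v) ~ v^3 exp(-((v-u)/(x u))^2) and evaluates the mean relative kinetic energy and its rms spread for two matched merged beams; the reduced mass, flow velocity and merging angle are assumed, illustrative values.

import numpy as np

rng = np.random.default_rng(1)
kB = 1.380649e-23
amu = 1.66053907e-27

def sample_supersonic(u, x, n):
    """Draw speeds from f(v) ~ v^3 * exp(-((v - u)/(x*u))^2) by accepting
    Gaussian proposals with probability proportional to v^3."""
    delta = x * u                    # width parameter of the exponential
    sigma = delta / np.sqrt(2.0)     # matching Gaussian proposal
    vmax = u + 6 * delta             # bound used in the acceptance step
    out = np.empty(0)
    while out.size < n:
        v = rng.normal(u, sigma, size=2 * n)
        v = v[(v > 0) & (v < vmax)]
        keep = rng.random(v.size) < (v / vmax) ** 3
        out = np.concatenate([out, v[keep]])
    return out[:n]

def relative_energy_stats(u1, u2, x1, x2, theta, mu, n=200_000):
    v1 = sample_supersonic(u1, x1, n)
    v2 = sample_supersonic(u2, x2, n)
    E = 0.5 * mu * (v1**2 + v2**2 - 2 * v1 * v2 * np.cos(theta))
    return E.mean(), E.std()

mu = 20.0 * amu          # assumed reduced mass
u = 600.0                # m/s, assumed flow velocity
for x in (0.10, 0.05, 0.01):
    mean, spread = relative_energy_stats(u, u, x, x, np.radians(1.0), mu)
    print(f"x = {x:4.2f}:  <E>/kB = {mean/kB*1e3:8.1f} mK,  dE/kB = {spread/kB*1e3:8.1f} mK")

In this toy calculation the matched-beam collision energy drops steeply as the width parameter x is reduced, until the residual contribution from the small merging angle takes over.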
here , given by eq.(12 ) , may differ for the two beams ( = 1 , 2 ) if their velocity widths ( ) differ .figure 3 displays the dependence of and on the ratio of flow velocities of the beams , , and their velocity widths .curves are shown for and , to illustrate that the dependence on the merging angle is weak if the velocity widths of the beams are fairly large ( _ cf_. fig.2 ) but for becomes significant if the velocity widths are small ( _ cf_. fig . 6 below ) .the dependence on the magnitude of the flow velocities is included simply by adopting units for and that compare them with the relative kinetic energy for collisions at about the most probable beam velocities ( nearly equal to , ) at right angles ( ) , or equivalently in a bulk gas .we designate that by . as seen in panel ( a ) , if the merged beam velocities are precisely matched ( ; ) the collision kinetic energy is very sensitive to the velocity widths .in contrast , when the beam flow velocities are unmatched by more than about 15% ( i.e. , ) , the ratio becomes nearly independent of the velocity widths and grows larger as the unmatch increases .panel ( b ) shows that regardless of whether the beam velocities are matched or not , the spread in relative kinetic energy , , varies strongly with the velocity widths .panel ( c ) plots the ratio , which defines the energy resolution . whereas for closely matched beams is minimal , then approaches its maximal value .indeed , for matched beams with , the resolution ratio , , is near ; that is just as poor as found in fig . 2 for effusive beams .to improve the resolution ratio for matched beams requires narrowing the velocity widths .table ii compares , both for the single beam and matched merged beams , effects of reducing the spread from = 0.1 to 0.01 . for the single beam ,the change in average kinetic energy is very slight , whereas the rms spread in the kinetic energy shrinks tenfold . for the merged beams ,the relative kinetic energy is lowered by a factor of 25 , and its rms spread by a factor of 100 ; so the resolution ratio is only improved fourfold . the resolution ratio can be lowered further if the merged beam velocities are unmatched , but that raises the averaged relative kinetic energy .figure 4 displays the trade - offs involved . to obtain optimally low requires nearly exact matching ; that can provide for = 0.01 or for = 0.02 .however , for exact matching , the resolution ratio is only = 0.35 for = 0.01 and surges to 0.8 for = 0.02 . to attain resolution of 0.2 or 0.3 even with = 0.01 requires = 0.91 or 0.94 and hence would increase to or , respectively .the upshot is , to improve the resolution ratio by a factor of less than 2 ( from 0.35 to 0.2 ) by unmatching , requires increasing by a factor of 25 .g. quemener , n. balakrishnan and a. dalgarno , _inelastic collisions and chemical reactions of molecules at ultracold temperatures , in cold molecules : theory , experiment , applications _, r. v. krems , w. c. stwalley , and b. friedrich , eds . , ( taylor and francis , london , 2009 ) , p. 69 . ,for beams formed ( a ) by effusive flow , ( b ) supersonic expansion from a stationary source , and ( c ) a rotating supersonic source , defined by eqs.(4 ) , ( 10 ) , and ( 18 ) , respectively . parameters for ( a ) are ; m / s ; for ( b ) and ( c ) flow velocities are m / s and m / s and widths = 0.01 , 0.05 , 0.10 . for ( b ) and( c ) the widths also influence somewhat the most probable velocity . 
] .the abscissa scale pertains to the ratio , which ranges from to .results are shown for an intersection angle of and four sets of velocity spreads : = 0.01 ; 0.05 ; 0.1 ; and , . dashed curves included for the caseare for . ] , for relative kinetic energy , ( left ordinate scale ) and resolution ratio , ( right ordinate scale ) .curves pertain to velocity spreads in beams of and 0.02 and merging angle of . ] and . both ( a ) and ( b ) exhibit pronounced minima where the matching condition holds : and hence .full curves show results for slowing mode , with , , and ; dashed curves are for speeding mode , with , , and 2 . ] , for merged supersonic beams from stationary sources .values of are indicated by black dots .abscissa scale is in units of , as used in figs.3 - 5 .( a ) for merging angle and various ratios of the flow velocities , to 0.85 and velocity widths .note log - log plot is used .( b ) for matched flow velocities , and , but merging angle varied to illustrate its role in eq.(20 ) .( c ) for flow velocities unmatched by % ( ) and with , , , to illustrate the reduced role of the merging angle when the first term in eq.(20 ) becomes predominant . ] | molecular collisions can be studied at very low relative kinetic energies , in the millikelvin range , by merging codirectional beams with much higher translational energies , extending even to the kilokelvin range , provided that the beam speeds can be closely matched . this technique provides far more intensity and wider chemical scope than methods that require slowing both collision partners . previously , at far higher energies , merged beams have been widely used with ions and/or neutrals formed by charge transfer . here we assess for neutral , thermal molecular beams the range and resolution of collision energy that now appears attainable , determined chiefly by velocity spreads within the merged beams . our treatment deals both with velocity distributions familiar for molecular beams formed by effusion or supersonic expansion , and an unorthodox variant produced by a rotating supersonic source capable of scanning the lab beam velocity over a wide range . |
Multi-core chip architectures are emerging as a feasible way to utilize effectively the ever-growing resources of a chip. Multi-core chips depend on progress in system software technology (compilers and runtime systems) in order to obtain thread-level parallelism and exploit on-chip concurrency. With multi-core processors, additional speedups can be achieved by exploiting parallelism in data-independent tasks. There is a gradual shift towards recasting current algorithms and designs as parallel algorithms. It is, however, rather difficult to achieve lock-free parallel algorithms with low cache contention. + In the 1970s, papers appeared with ideas on parallel compilation of programming languages, and parallel execution of programs was anticipated. Those papers discussed parallel lexical analysis, syntactic analysis and code generation. With VLSI applications, a prominent increase in research on parallel compilation was observed. + A compiler contains several phases: lexical analyzer, syntactic analyzer, semantic analyzer and code generator. Parsing, or syntax analysis, is the phase of the compiler that analyses the program code according to the grammar of the language. After analysis, it converts the code into another formal representation which acts as input for the succeeding phases of the compiler. + The complexity of software source code is increasing, and compiling a large code base is very time consuming. Two types of parsers are commonly distinguished: top-down and bottom-up parsers. Top-down parsers have less power than bottom-up parsers. LR(k), SLR(k) and LALR(1) are types of bottom-up parsers. Along with their greater power, bottom-up parsers also require more space and more time to parse a string than top-down parsers. Most compiler compilers, like yacc and bison, create LR(1) parsers, and compilers like clang and the Mono C# compiler use LR(1) parsers. So it is evident that programming languages can be represented conveniently by LR(1) grammars. + Parsing is a very time-consuming phase of the compiler, and parsing different files in parallel is not enough: programming languages like C and C++ can include other files (using #include) in a single file, which results in the generation of very long files. If we can parallelize the parsing phase of a single file, it will give performance benefits when compiling the source code. Many significant techniques have already been proposed for building parallel parsers ( , , , , ). A parallel parsing scheme for a programming language is given by . + A block is a section of code that is grouped together. In a language, a block may contain a class definition, or member or method declarations. Another kind of block is a block of statements, also called a compound statement; this block is usually associated with a function body, an if statement or a loop. Programming languages such as C, C++, Java and Python use the concept of blocks heavily. One of the most important properties of blocks with respect to parsing is that they are all independent of each other, i.e. each block can be parsed independently of the other blocks. So we could parse many blocks in parallel.
in this paper , we will propose a technique to parse various blocks of the code in parallel .it can also work as a block parallel incremental parser .our parser is termed as block parallelized parser ( bpp , for short ) .+ our technique of parallel parsing is based on incremental parsing .an incremental parser is the one that parse only those portions of a program that have been modified .whereas an ordinary parser must process the entire program when it is modified .an incremental parser takes only the known set of changes done in a source file and updates its internal representation of source file which may be an abstract syntax tree . by building upon the previously parsed files , the incrementalparser avoids the wasteful re - parsing of entire source file where most of the cod remains unchanged .+ bpp is based on the properties that an incremental parser can parse any part of a source code without the need of parsing the whole source code and different blocks in a source code can be parsed independently of other blocks . in bppthese parts are blocks in a source code . using the property of incremental parser , bpp parse each of the blocks independently of other blocks .each of these blocks are parsed in their own thread .it can be easily seen that bpp follows a divide and conquer approach .it divides the source into different blocks , parse each of them in parallel and at the end conquer these blocks . in our schemethe conquer step does nothing except waiting for all the bpp threads to complete their operations .+ there have been many works on incremental parsing [ incremental parsing references ] .we choose to extend on the works of incremental jump shift reduce parser of .bpp is derived from incremental jump shift reduce parser . in , authors defined incremental jump shift reduce parser for slr ( 1 ) languages only .however , we decided to extend this parser to accept lr(1 ) language because lr(1 ) languages has more power than slr ( 1 ) and nearly all programming languages can be defined in the form of lr(1 ) grammars .we define the incremental categories to be a statement containing a block like class definition or function definition or if statement or for loop statement .then , we give a notion of first non - terminal symbols of a non - terminal symbol .we used this notion to create partitions of a grammar such that a partition includes an incremental category and its first non - terminals .we observed that this scheme gives us a very interesting property in incremental jump shift reduce parser .we used this property to create our block parallelized parser .+ whenever a start of the block symbol is encountered the parser will first check whether the current block is related to any incremental category and can it be parsed independently .if bpp is able to find the incremental category for it , bpp will start parsing the block in parallel . in this paper we developed this bpp for lr(1 ) languages but we believe it can be easily extended to lr(k ) or can be easily converted to lalr ( 1 ) or slr ( 1 ) grammars .we also see that no major changes were done to the current lr(1 ) parsing algorithm and hence , it should be easy to create an abstract syntax tree . 
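As an illustration of the divide-and-conquer structure just described, the following minimal sketch splits a source string into its top-level blocks and hands each one to a thread pool; the per-block parser is only a placeholder, not the LR(1)-based Algorithm 1 developed later, and in CPython the global interpreter lock would limit the actual speedup (the authors' implementation is in C#).

from concurrent.futures import ThreadPoolExecutor

def find_blocks(text, open_sym="{", close_sym="}"):
    """Return (start, end) index pairs of the top-level blocks by bracket matching."""
    blocks, stack = [], []
    for i, ch in enumerate(text):
        if ch == open_sym:
            stack.append(i)
        elif ch == close_sym and stack:
            start = stack.pop()
            if not stack:                      # record only the outermost blocks
                blocks.append((start, i))
    return blocks

def parse_block(text, start, end):
    """Placeholder for the real per-block LR(1) parse; returns a trivial 'subtree'."""
    return {"span": (start, end), "children": find_blocks(text[start + 1:end])}

def block_parallel_parse(text, max_workers=4):
    blocks = find_blocks(text)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(parse_block, text, s, e) for s, e in blocks]
        # "conquer" step: simply wait for every block parser to finish
        return [f.result() for f in futures]

src = "int f(int x) { if (x) { x++; } else { x--; } } int g() { return 0; }"
print(block_parallel_parse(src))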
this parser can also work as an incremental parallel parser which can parse different blocks in parallel .moreover , it could be seen that there is no requirement of any thread synchronization to communicate between different threads of bpp each of which is parsing a block in parallel .this is because no two blocks are related in any way for the purpose of parsing .+ we compiled c # implementation using mono c # compiler 3.12.1 and executed the implementation using mono jit compiler 3.12.1 on machine running fedora 21 with linux kernel 3.19.3 with 6 gb ram and intel core i7 - 3610 cpu with hyperthreading enabled .we found out that our technique showed 28% performance improvement in the case of including header files and 52% performance improvement in the case of excluding header files . + the following paper is designed as follows .section 2 shows some previous work done in parallel parsing .section 3 and 4 provides the terminology we will use and the background required to understand our technique . in section 5 we will extend incremental jump shift reduce parser to accept lr(1 ) grammars . in section 6, we introduced the concept of first non terminals of a non terminal . in section 7we will use this concept to create partitions of the grammar .we also showed that by creating partitions using this concept we get a very interesting property .this property would be used by bpp .we have generalized this property in a theorem and also provided a proof for it . in section 8we presents our block parallelized parser and its parsing algorithm . in section 9we will compare our algorithm with previous work .section 10 shows our evaluation and results .in section 11 and section 12 we complete our document with conclusion and related work .a lot of previous work has been done in parallel parsing of lr ( 1 ) and context free languages .the most recent work done by in parallel parsing of lr(1 ) is an extension of an lr substring parser for bounded context languages ( developed by cormack ) for parallel environment . provided a substring parser for bounded context - lr grammars and simple bounded context - lr grammars . distributes the work of parsing the substrings of a language over different processors .the work was extended to different processors in a balanced binary tree fashion and achieved o(log n ) time complexity of parsing . but constructing a bounded context lr grammar for a programming language is also difficult .c++ is one of the programming languages which can not be parsed by lr ( 1 ) parsing so creating a bounded context grammar is out of question here .+ parallel and distributed compilation schemes can be divided into two broad categories , functional decomposition and data decomposition . and talks about distributed compilation using a scheme based on functional decomposition .functional decomposition scheme divides different phases of compiler : lexer , parser , semantic analyzer into functional component and running each of them on separate processors like an instruction pipeline fashion .the data decomposition scheme divide the input into sections of equal length and parse them in parallel .bpp is data decomposition scheme which parallel the parser by divide and conquer approach .the data decomposition scheme was developed by , , , .these schemes are parsing lr ( k ) in parallel .they divide the input into sections of equal length and then parse them in parallel . , , describes asynchronous algorithms while develops a synchronous algorithm . 
a parallel lr parser algorithm using the error recovery algorithm of .+ has developed an incremental parallel compiler which could be used in interactive programming environment and he developed an incremental parallel parser also . improves upon the divide and conquer parsing technique developed by .they show that while the conquer step of algorithm in is but under certain conditions it improves to .+ describes a grammar partitioning scheme which would help in parsing the language in parallel . in type of statement level parallelism has been developed .the grammar is divided into n different sub - grammars corresponding to n subsets of the language which will be handled by each sub - compiler . for eachn sub grammars required to generate parse tables ( using parser generator ) along with driver routine constitute a parser for sub - compiler . for each sub - compiler ,a requirement of modified scanner is there which recognizes subset of the language .the technique described in requires a lot of modification to the lexical analyzer .a separate lexical analyzer has to be developed for one type of language .the parser of requires automatic tools for its implementation .+ in all the above described techniques constructing abstract syntax tree for a block structured language is difficult . as blocks in a blockare independent on each other , so they can be parsed independently .moreover this scheme would not involve inter - thread communication before a thread completes its parsing of blocks .hence , no shared memory synchronization methods are required to coordinate between different threads .it could be easily seen that the creation of an abstract syntax tree is also very easy . with all these required things in mindwe have developed block parallelized parser for lr ( 1 ) languages .we assume the notation for context free grammar is represented by where _ n _ is set of non - terminals , _t _ is set of terminals , _ s _ is start symbol and _ p _ is set of productions of the grammar .the language generated by _g _ is given as we will use the following conventions .+ given a grammar _g _ , we represent its augmented grammar as , where + here is called the augmented start symbol of and ] , where _ a _ is the lookahead symbol .+ in a programming language , a block represents a section of code grouped together .this section of code can be a group of statements , or a group of declaration statements .for example in java , a block corresponding to class definition contains declaration statements for fields and methods .block corresponding to function definition can contain declaration statement for local variables or expression statements or control flow statements .a top - level block is the starting block of a program which contains definition for classes , functions , import / include statements etc .child blocks are contained in either top - level block or another child block .as we proceed further , block will be in reference to child block .1 , shows an example of top - level and child blocks of a java program .a start block symbol could be `` \ { '' in c style languages , `` begin '' in pascal style languages is represented as terminal . an end block symbol which could be `` } '' in c style languages or `` end '' in pascal style languagesis represented as .we now survey lr ( 1 ) parsers and their generation algorithm as given by .lr ( 1 ) parsers are table driven shift reduce parsers . 
in lr ( 1 ) , `` l '' denotes left - to - right scanning of input symbols and `` r '' denotes constructing right most derivation in reverse .some extra information is indicated with each item : the set of possible terminals which could follow the item s lhs .this set of items is called the lookahead set for the item . here, `` 1 '' signifies that number of lookahead symbols required are 1 .+ lr ( 1 ) parser consists of an input , an output , a stack , a driver routine .a driver routine runs its parsing algorithm which interacts with two tables action and goto . any entry in action and goto tablesare indexed by a symbol which belongs to and the current state .an entry in both the tables can be any one of the following : * if action [ j , a ] = , q then a shift action must be taken . *if action [ j , a ] = , then reduce symbols to a production . * if action [ j , a ] = accept then grammar is accepted . * if action [ j , a ] = error then error has occurred . * if goto [ j , a ] = q then go to state q. an lr ( 1 ) item is of the form ] ~|~[a \rightarrow \alpha .x\beta , a ] \in i\}) ] action and goto tables are created using this collection .following is the procedure to create these tables : 1 .create collection of set of lr(1 ) items .let this collection be 2 .let _ i _ be the state of a parser constructed from .entries in action table are computed as follows : 1 .action [ _ i _ , _ a _ ] = shift _ j _ , if \in i_i ] and 3 .action [ _ i _ , ] 4 .goto [ _ i _ , _ a _ ] = _ j _ , if goto ( , a ) = 3 .all other entries not defined by ( b ) and ( c ) are set to error .the initial state is the one containing the item _ ] = accept then input is accepted . *if tab( ) [ _ j _ , = accept then input for incremental category will be accepted . * if tab( ) [ _ j _ , _ a _ ] = , k then jump to a subtable tab( ) . * if tab( ) [ _ j _ , _ a _ ] = error then error occurred .as jsr generation algorithm was already developed for lr ( 0 ) items and incremental jsr generation algorithm was developed for slr ( 0 ) items . in this section , we will first extend the jsr parser generation algorithm to accept lr ( 1 ) grammar and then we will extend i_jsr parser to accept lr(1 ) grammar .+ generation of subtables first requires the generation of canonical collection of sets of augmented items .an augmented item is a triplet represented as , ff , tf , where i is an lr item , ff called from - field and tf called to - field are the names of parsing sub - tables. from - field represents the sub - table which contains the last action performed by parser and to - field represent the sub - table which contains the next action to be performed .although we focus only lr ( 1 ) items but the procedure for lr ( k ) items is very similar .+ to - field of an augmented item of a state is determined using to function .let us define the to function . 
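The entry types listed above translate directly into a small table-driven driver. The sketch below runs the standard stack-based loop for a toy grammar S -> ( S ) | x with hand-built ACTION and GOTO tables; it is meant only to illustrate the driver, and a JSR-style jump entry would simply switch the active sub-table before continuing.

# toy grammar, assumed for illustration:  S -> ( S ) | x
ACTION = {
    (0, "("): ("shift", 2), (0, "x"): ("shift", 3),
    (1, "$"): ("accept", None),
    (2, "("): ("shift", 2), (2, "x"): ("shift", 3),
    (3, ")"): ("reduce", 2), (3, "$"): ("reduce", 2),
    (4, ")"): ("shift", 5),
    (5, ")"): ("reduce", 1), (5, "$"): ("reduce", 1),
}
GOTO = {(0, "S"): 1, (2, "S"): 4}
PRODS = {1: ("S", 3), 2: ("S", 1)}   # production number -> (lhs, rhs length)

def lr_parse(tokens):
    stack = [0]                        # stack of states
    toks = list(tokens) + ["$"]
    i = 0
    while True:
        state, a = stack[-1], toks[i]
        kind, arg = ACTION.get((state, a), ("error", None))
        if kind == "shift":
            stack.append(arg); i += 1
        elif kind == "reduce":
            lhs, n = PRODS[arg]
            del stack[-n:]             # pop |rhs| states
            stack.append(GOTO[(stack[-1], lhs)])
        elif kind == "accept":
            return True
        else:
            # a JSR "jump" entry would be handled here by switching
            # ACTION/GOTO to the named sub-table instead of failing
            raise SyntaxError(f"unexpected {a!r} in state {state}")

print(lr_parse(list("((x))")))         # True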
function to calls function choose_next .this function selects a parsing table out of those in which the parsing of the remaining string could continue .let be a total ordering relation over the set of the names of parsing sub - tables , such that + from - field of an augmented item is enriched using from function .from takes two arguments , which is a set of items enriched with to - field and , whose items we have to enrich with from - field .+ ,~tf >\in i_t'' ] states procedure is used to generate the collection of set of jsr items .states algorithm first generates the collection of sets of lr(1 ) items using items procedure which was discussed previously .afterwards , it calls to and from functions to generate set of augmented items from the corresponding set of lr(1 ) items .+ we will now extend i_jsr parser to accept lr ( 1 ) languages . this extended parser is based on the previous jsr parsing algorithm .the first function used to compute the set of first symbols related to a non - terminal has to be modified to include the incremental categories also .the reason being an incremental category can also occur in the input string , can be shifted on the stack while parsing and can reduce to a production .moreover , first should now also include the eos markers .hence , the new first becomes + for an incremental category _a _ , there will be a set of items containing item ] .then the state corresponding to this set of items will be the start state of the incremental grammar corresponding to a. correspondingly , there will a single set of items that contains the item ] =<j , k> ] = < j , k> ] = accept ] , where _ d _ is a lookahead symbol .so we have \in i_m ] should be result of a closure of another item .the only such item we can see is ] . as discussed in section 2 , to get a set of lr(1 ) items we have to apply closure ( ] + \in i_m ] + \in i_m ] .let be a set of jsr items enriched with to fields corresponding to all lr(1 ) items in .after applying to procedure over we would get the to field for item ] , ] , ] will be equal to the from field of ] , which is .so , the set of jsr items ( ) corresponding to contains the following items : + \in i_m^a ] + \in i_m^a ] + let after performing shift operation in the state _m _ over the symbol _b _ , parser reaches state _n_. so , we must have + ~|~\forall~g \in first ( \beta f)\}) ] besides other items of the form ] as .let us represent these set of items enriched with to field as .+ after parsing next state will be obtained by performing goto over with symbol _x_. let that state be .now we must have , + .after performing goto we get + \in i_y ] \in i_y ] after applying to procedure over , we get the to fields of items ] ] as . to enrich jsr items of lr(1 )items in we would apply from procedure as .now , we could see that the from field of jsr item for ] which in turn is equal to the to field of ] \in i_y^a ] \in i_y^a ] moreover , the to and from fields of all jsr items of will be .+ so , we have .this shows that the states _ z _ and _ n _ are same .let us name these states as _q_. also , the to fields for the items of and are same i.e. .hence , in this case shift on terminal _b _ in states _ m _ and _ x _ results only in one state _q _ and in the sub - table for incremental category .+ theorem is * proved * in this case .+ _ case 1.2 _ : if is used .x _ is the state reached before performing the shift on _b_. 
then , the set of items , say related to state _ x _ will contain following items : + \in i_y^a ] \in i_y^a ] + after applying to operation to , the to fields of all jsr items for above lr(1 ) items will be .+ let be the state reached after performing shift on _ b _ in the state .so , ~|~\forall~g \in first ( \beta f)~\})$ ] also , the to and from fields of all the jsr items for the above set of lr ( 1 ) items will be .+ hence , .this shows that the states _y _ and _ n _ are same .let us name the state as _q_. moreover , the to fields of lr(1 ) items of and are . hence , in this case shift on terminal _b _ in the states _ m _ and _ x _ results only in one state _q _ in the subtable of .+ theorem is proved in this case .+ + as theorem has been proved in _ case 1.1 _ and _ case 1.2_. so , for * case 1 * also the theorem has been proved .+ we have proved for the case containing both productions .two other cases are when only one of these productions is present .the proof of both of these cases are very similar to the * case 1*. + please note that with the given set of conditions in the theorem , we could nt have the case in which none of the productions belong to this set .theorem 1 is crucial to bpp . in the succeeding sections we will use this theorem to create our parallel parsing algorithm .in this section we will present our block parallelized parser for lr ( 1 ) grammars. we will first give the intuition and working of bpp .then we will present our algorithm and give a proof that our parser can accept all the lr(k ) and lalr ( k ) languages which can be accepted by a shift reduce lr(k ) and lalr ( k ) parser .+ let be the augmented grammar .the incremental categories are the non terminals associated with the blocks to be parsed in parallel .for example , if we want to parse class definitions in parallel then we can define _ class - definition _ as the incremental category .other examples can be _ function - definition _ , _ if - statement _ , _ for - loop _ , _ while - loop _ if they can be parsed in parallel . for most of the programming languagesincluding c , c++ , java , c # above blocks can be parsed in parallel . in generalwe can define an incremental category to be the non terminals which derive a string containing the start of the block symbol , and ending with the end of the block symbol .in mathematical terms , for bpp the set of incremental category _ ic _is defined as : + in the c programming language , a _ function - definition _ can be represented by the following context free grammar productions : + + + + + + + + + + according to the definition of incremental categories above , we can see that _ function - definition _ , _ if - stmt _ , _ while - loop _, _ for - loop _ follows the condition for an incremental category .+ in a programming language there could be many types of blocks like in java , a block related to the class definition may contain only method definitions and declaration of member variables while a block related to the method definition would contain statements including expression statements , variable declarations , if statement or loop etc .this means in a programming language not all blocks contains same type of statements .hence , encountering the start symbol of block does nt give us enough information about what kind of statements the block would contain . 
to overcome this problemwe will modify the productions of incremental category such that a reduce action will happen when the start symbol is encountered .modify the productions of in such a way that every production is split into two productions : + if the productions of incremental category is structured as above then during the parsing of a word related to this incremental category there will be reduce action to reduce the current symbols to production when becomes the look ahead symbol . as each related to only one and vice - verse , we can easily determine which incremental category has to be parsed next .+ now we are in a stage to define block parallelized grammar . given a grammar and a set of incremental categories + we define block parallelized grammar such that + and is partitioned using firstnt as given in section 7 .+ now we can use theorem 1 to create bpp .let us have a string where and . during the parsing _, when is encountered we should have a reduce action to , based on the production .now , we can get associated with . according to theorem 1 , during parsing of the word _ w _ if the parser reaches at state _q _ in the sub table of after performing shift action on _ a _ then during the incremental parsing of the word for the incremental category the parser will also reach the state _q _ in the sub table of after performing the shift action on _a_. this means , we can replace the state reached just before performing shift action on _ a _ with the start state of subtable of and can now be parsed incrementally .+ it is now evident that why the partitions of non terminals should be created as described in section 7 .if the partitions are not created in this way , then it may happen after a shift on _ a _ during the parsing of _ w _ and incremental parsing of may reach the same state but not in the same sub - table . by creating partitions as described in section 8 ,we make sure that when is encountered by bpp then the newly created parallel parser knows in which sub - table it has to continue the parsing in . on the other handif partitions are not created as described in section 8 , then newly created parallel parser would nt know in which sub - table it has to continue parsing in . + this property is used by bpp to parse the block incrementally .algorithm 1 is the bpp parsing algorithm of incremental table . if _= 1 , then the algorithm is for the top level block .otherwise it is for other incremental category .+ this parsing algorithm is for any block be it top level block or child block .lines 1 - 4 initializes different local variables .lines 5 - 32 is the main loop of algorithm which does the whole work . line 6 , gets the top state on the stack .lines 7 - 9 pushes the next state on the stack if there is a shift operation .similarly , lines 10 - 11 changes current table if there is a jump operation .line 12 - 27 are executed if there is reduce action which reduces according to the production .line 14 checks if current input symbol is a start of the block symbol and if reduce action reduces for an incremental category .if yes then lines 15 - 22 gets related to , pops states from stack and pushes these states and start state to a new stack , creates and starts a new bpp for and shifts to the end of block . 
in this casenext symbol will become .if check of line 14 fails then it means this is a regular reduce action not associated with any block .lines 24 - 27 , pops states from stack and shifts to a new state .line 28 returns if there is an accept action .accept action can be for both top level block and child block .line 30 reports error if none of the above cases are satisfied .2 shows an example of how bpp works .it will start parsing the block of function _ f_.when it will encounter an _ if _ block a new bpp in another thread will be created which will parse _ if _ block .parent bpp will move ahead to the end of _ if _ block and will also create another thread to parse _ else _ block . in this way input is divided into different threads parsing each block .+ in this algorithm we have tried to minimize the amount of serial work to be done to get to the end of the block for the new block parser .one bpp does nt have to do any communication with other bpps . also , there are no side effects of the above algorithm .all the variables which are being modified are local variables .hence , there is no need of synchronization .this also reduces any amount of cache contention between different processors .generation of abstract syntax tree or parsing tree is easy using above algorithm and it requires very little change in the above algorithm .+ it may be argued that the step `` go to the end of this block '' is a serial bottleneck for the parallel algorithm . describes an algorithm to perform lexical analysis of string in _o(log n ) _ time using _o(n ) _ processors in a parallel fashion .when performing lexical analysis in parallel as described in , lexer could store the start symbols of a block with its corresponding end symbol for that block .now determining the end of block is just a matter of searching through the data structure .many ways exist to make this searching as fast as possible like using a binary search tree or a hash table .in this section we will show how our technique is better than other techniques . developed a technique which divides whole grammar into n sub- grammars which are individually handled by n sub - compilers .each sub - compiler needs its own scanner which can scan a sub - grammar .it requires an automatic tool to generate sub - compiler .this technique requires significant changes in not only in parser and grammar but also in lexical analyzer phase .contrast to this our block parallelized parser is easy to generate as our technique does not require any change in the grammar and lexical analyzer and it is pretty easy to modify current yacc and bison parser generator tools to support the generation of our parser .+ lr substring parsing technique described in is specifically for bounded context ( 1 , 1 ) grammars .there are no limitations like this to block parallelized parser .although in this paper we have shown how we can create an lr ( 1 ) block parallelized parser but we believe it can be extended to lr ( k ) class of languages and also could be used by lalr ( 1 ) parser .hence , our technique accepts a larger class of grammars .+ , , , all develops algorithm for parsing lr ( k ) class of languages in parallel . 
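The block-spawning branch of Algorithm 1 (lines 14-22 above) can be sketched as follows. The real LR machinery is abstracted away: instead of detecting a reduce action to the head of an incremental category, this toy version treats a fixed set of keywords as block-introducing heads, and the start-of-block to end-of-block map is assumed to be supplied by the lexer, as discussed below. All names here are illustrative.

from concurrent.futures import ThreadPoolExecutor

def build_block_map(tokens):
    """Map the index of each '{' token to the index of its matching '}'."""
    ends, stack = {}, []
    for i, t in enumerate(tokens):
        if t == "{":
            stack.append(i)
        elif t == "}" and stack:
            ends[stack.pop()] = i
    return ends

# stand-in for "the reduce action produced the head of an incremental category"
INCREMENTAL_HEADS = {"if", "else", "while", "for"}

def bpp_parse(tokens, lo, hi, block_end, pool):
    """Parse tokens[lo:hi]; child blocks headed by an incremental category are
    handed to the pool and skipped by the parent (cf. lines 14-22 of Algorithm 1).
    A real implementation should avoid blocking pool workers on deeply nested
    futures; the C# task library used by the authors handles this differently."""
    parsed, children = [], []
    i = lo
    while i < hi:
        t = tokens[i]
        if t == "{" and i > lo and tokens[i - 1] in INCREMENTAL_HEADS:
            end = block_end[i]
            children.append(pool.submit(bpp_parse, tokens, i + 1, end, block_end, pool))
            i = end + 1               # parent jumps past the end of the block
            continue
        parsed.append(t)              # placeholder for ordinary shift/reduce work
        i += 1
    return {"tokens": parsed, "blocks": [c.result() for c in children]}

tokens = "func f { if { a ; } else { b ; } c ; }".split()
ends = build_block_map(tokens)
with ThreadPoolExecutor() as pool:
    print(bpp_parse(tokens, 0, len(tokens), ends, pool))

Each child works on its own token range, so beyond collecting the results no synchronization between the block parsers is needed, in line with the claim above.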
these techniques and in all other techniques the creation of abstract syntax tree is not as easy as itis in our technique .moreover our technique is simpler than all others .+ hence , we could see that block parallelized parser is easy to construct , accepts wider class of languages and supports an easy construction of abstract syntax tree .we implemented lexer , block parallelized parser and shift reduce lr ( 1 ) parser for c programming language supporting a few gnu c extensions required for our tests .implementation was done in c # programming language . to simplify our implementationwe only included function - definition as the incremental category for bpp .moreover , function - definition would still give us a sufficient amount of parallelism as we would see in the evaluation .we modified the lexer phase so that it will keep track of the position of s b and its corresponding e b .this information was stored in the form of a c # dictionary ( which is implemented as a hash table ) with the position of s b as the key and position of e b as the value .as , thread creation has significant overhead so we used c # taskparallel library which is thread pool implementation in c#. our implementation does nt have a c preprocessor implementation .so , we first used gcc to perform preprocessing and the preprocessed file is used as input to our implementation .+ we evaluated the performance of bpp with shift reduce lr ( 1 ) parser by parsing 10 random files from the linux kernel source code .we compiled c # implementation using mono c # compiler 3.12.1 and executed the implementation using mono jit compiler 3.12.1 on machine running fedora 21 with linux kernel 3.19.3 with 6 gb ram and intel core i7 - 3610 cpu with hyperthreading enabled .+ in c programming language , preprocessing ` # include ` could actually generate very long files .normally , the header files contains declarations not function definitions .so , this leads to less amount of parallelism being available .hence we decided to evaluate with including header files and excluding header files .3 shows the performance improvement with respect to shift reduce lr(1 ) parser of 10 random linux kernel files .3 shows performance improvement for both cases including the header files and excluding the header files .as expected we could see that the performance improvement with excluding the header files is more than the performance improvement including the header files .+ the performance improvement in the case of excluding header files matters most for the programming languages like java , python , c # where most of the program is organized into blocks the results because in these programs the amount of parallelism available is high .+ the average performance improvement in the case of excluding header files is 52% and including header files is 28% .in this document we present our block parallelized parser technique which could parse the source code in a parallel fashion .our approach is a divide and conquer approach in which , the divide step divides the source code into different blocks and parse them in parallel whereas the conquer step only waits for all the parallel parsers to complete their operation .it is based on the incremental jump shift reduce parser technique developed by .our technique does nt require any communication in between different threads and does nt modify any global data .hence , this technique is free of thread synchronization .we develop this technique for lr ( 1 ) languages and we believe that it can be extended to 
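A Python analogue of the lexer-side bookkeeping described above might look as follows: a single pass produces the token stream together with a dictionary mapping each start-of-block token to its matching end-of-block token, so that a parser can jump to the end of a block in O(1). The token pattern is a crude illustration, not a real C lexer.

import re

TOKEN_RE = re.compile(r"\s*(\{|\}|[A-Za-z_]\w*|\d+|==|!=|<=|>=|[^\sA-Za-z0-9_])")

def lex_with_block_map(src):
    """One lexer pass returning the token list and a dictionary that maps the
    position of each start-of-block token to its matching end-of-block token
    (the role played by the C# Dictionary described above)."""
    tokens, block_end, stack = [], {}, []
    pos = 0
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            break
        tok = m.group(1)
        index = len(tokens)
        tokens.append(tok)
        if tok == "{":
            stack.append(index)
        elif tok == "}" and stack:
            block_end[stack.pop()] = index
        pos = m.end()
    return tokens, block_end

toks, ends = lex_with_block_map("void f(int a) { if (a) { a = a + 1; } }")
print(toks)
print(ends)   # start-of-block token index -> matching end-of-block token index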
accept lr ( k ) languages and could be converted to an lalr ( 1 ) parser easily .our technique does nt do any major changes in the parsing algorithm of a shift reduce parser hence the abstract syntax tree can be created in the same way as it has been creating in shift reduce parser .moreover , our parser can also work as an incremental block parallelized parser .we implemented block parallelized parser and shift reduce lr ( 1 ) parser for c programming language in c#. the performance evaluation of bpp with shift reduce lr ( 1 ) parser was done by parsing 10 random files from the linux kernel source code .we compiled c # implementation using mono c # compiler 3.12.1 and executed the implementation using mono jit compiler 3.12.1 on machine running fedora 21 with linux kernel 3.19.3 with 6 gb ram and intel core i7 - 3610 cpu with hyperthreading enabled .we found out that our technique showed 28% performance improvement in the case of including header files and 52% performance improvement in the case of excluding header files .our parser accepts lr ( 1 ) languages we would like to extend it to accept lr ( k ) languages . in our technique , the parser determines when to create a new parallel parser thread .if the responsibility of this decision can be given to the lexical analysis phase then the lexical analysis can actually start the parsers in parallel .this will lead to significant performance advantage .moreover , our technique has been applied to languages which does nt have indentation in their syntax like the way python has . shows an efficient way to parse the language which has indentation as a mean to determine blocks .our parser can be extended to accept those languages also .we are working towards developing a block parallelized compiler which could compile different blocks of a language in parallel .block parallelized parser is one of the components of a block parallelized compiler .semantic analysis phase also share the same properties as the syntax analysis phase . in programming languages , an entity like variable or typeis declared before using it .so , in this case also a lot can be done to actually parallelize the semantic analysis phase .neal m. gafter and thomas j. leblanc .parallel incremental compilation .department of computer science , university of rochester : ph .d. thesis .113 gwen clarke and david t. barnard .an lr substring parser applied in a parallel environment , journal of parallel and distributed computing . g. v. cormack .an lr substring parser for noncorrecting syntax error recovery , acm sigplan notices alfred v. aho , monica s. lam , ravi sethi and jeffery d. ullman .2007 . compilers principle tools and techniques second edition , prentice hall floyd , r. w. bounded context synctactic analysis .acm 7 , 21 february 1961 j. h. williams , 1975 . bounded context parsable grammars .information and control 28 , 314 - 334 r. m. schell , 1979 .methods for constructing parallel compilers for use in a multiprocessor environment .d. thesis , univ . of illinois at urbana - champaign j. cohen and s. kolodner .1985 . estimating the speedup in parallel parsing , ieee trans .software engrg .se-11 c. n. fischer , 1975 . on parsing context free languages in parallel environments .d. thesis , cornell univ .m. d. mickunas and r. m. schell .parallel compilation in a multi - processor environment ( extended abstract ) .comput d. ligett , g. mccluskey , and w. m. mckeeman , parallel lr parsing tech . rep . , wang institute of graduate studies , july 1982 edward d. 
willink , meta - compilation for c++ , university of surrey , june 2001 pierpaolo degano , stefano mannucci and bruno mojana , efficient incremental lr parsing for syntax - directed editors , acm transactions on programming languages and systems , vol . 10 , no .3 , july 1988 , pages 345 - 373 .sanjay khanna , arif ghafoor and amrit goel , a parallel compilation technique based on grammar partitioning , 1990 acm 089791 - 348 - 5/90/0002/0385 w. daniel hillis and guy l. steele , data parallel algorithms , communications of the acm , 1986 bison , http://www.gnu.org/software/bison yacc , http://dinosaur.compilertools.net/yacc clang , http://clang.llvm.org mono , http//mono - project.com el - essouki , w. huen , and m. evans , `` towards a partitioning compiler for distributed computing system '' , ieee first international conference on distributed computing systems , october 1979 .thomas j. pennello and frank deremer , `` a forward move algorithm for lr error recovery '' , fifth annual acm symposium on principles of programming language .jean - philipe bernardy and koen claessen , efficient divide - and - conquer parsing of practical context - free languages .l. valiant .general context - free recognition in less than cubic time .j. of computer and system sciences , 10(2):308 - 314 , 1975 .micheal d. adams .`` principled parsing for indentation - sensitive languages '' , acm symposium on principles of programming languages 2013 .january 23 - 25 . | software s source code is becoming large and complex . compilation of large base code is a time consuming process . parallel compilation of code will help in reducing the time complexity . parsing is one of the phases in compiler in which significant amount of time of compilation is spent . techniques have already been developed to extract the parallelism available in parser . current lr(k ) parallel parsing techniques either face difficulty in creating abstract syntax tree or requires modification in the grammar or are specific to less expressive grammars . most of the programming languages like c , algol are block - structured , and in most language s grammars the grammar of different blocks is independent , allowing different blocks to be parsed in parallel . we are proposing a block level parallel parser derived from incremental jump shift reduce parser by . block parallelized parser ( bpp ) can even work as a block parallel incremental parser . we define a set of incremental categories and create the partitions of a grammar based on a rule . when parser reaches the start of the block symbol it will check whether the current block is related to any incremental category . if block parallel parser find the incremental category for it , parser will parse the block in parallel . block parallel parser is developed for lr(1 ) grammar . without making major changes in shift reduce ( sr ) lr(1 ) parsing algorithm , block parallel parser can create an abstract syntax tree easily . we believe this parser can be easily extended to lr ( k ) grammars and also be converted to an lalr ( 1 ) parser . we implemented bpp and sr lr(1 ) parsing algorithm for c programming language . we evaluated performance of both techniques by parsing 10 random files from linux kernel source . bpp showed 28% and 52% improvement in the case of including header files and excluding header files respectively . |
Table [tab1]. Presently available techniques for atomic parity violation measurements:

Technique & Advantages & Disadvantages
Optical rotation in atomic vapor & No electric and magnetic fields are involved; no frequency measurements & Unavoidable systematic effects; poor signal-to-noise ratio at the zero crossing in the dispersion curve
Stark interference in atomic vapor & Measurement procedure is relatively simple & Measured transitions are Doppler broadened
Stark interference in atomic beams & Doppler broadening is reduced; signal-to-noise ratio is larger due to the large number of atoms & Limited by the volume and time of interaction; coherence time is short due to collisions
Light-shift in a single trapped and laser-cooled ion & Absence of Doppler broadening; tractable systematics; long coherence time; large signal-to-noise ratio & Accurate determination of the electric field of the light at the position of the ion in the trap
Interference with a small number of atoms & Large signal-to-noise ratio & Less systematics from collision broadening

The weak interaction between the atomic electrons and the nucleus through the exchange of the Z boson leads to parity violation in atomic systems. Atomic parity violation (APV) has become a subject of keen interest as it has the potential to test the standard model (SM) of particle physics and to search for new physics beyond it. Several experiments have been performed over the last three decades on heavy elements like Cs, Pb, Tl, Bi, etc. There are also proposals with promising prospects for elements like Yb and Fr and for atomic ions like Ba and Ra. One of the most promising candidates is Yb, whose parity non-conserving (PNC) amplitude is so far the largest. This point has also been verified experimentally, but the experimental precision needs to be improved in order to compete with the present benchmark value of the Cs PNC experiment. The experiment on Cs, with an accuracy of , has successfully tested the SM of particle physics; higher precision ( ) is required to search for new physics beyond the SM. The physical parameter that one seeks by combining these experiments with theory is the PNC transition amplitude. In Table [tab1] the presently available techniques are listed along with their advantages and respective challenges. A single trapped and laser-cooled ion is free from unknown perturbations and has a long coherence time. Systematic uncertainties are easily tractable, and the system is therefore favored for such an experiment, even though this has not yet been demonstrated experimentally. Single-ion trapping and laser cooling are routinely done in radio-frequency Paul traps. The possibility of an APV experiment based on such a system was first put forward by Fortson. The overall idea is reviewed here in brief, focusing on Ra as a possible candidate. In Fig. [fig1] the relevant energy levels of singly charged radium (Ra) and barium (Ba) are shown. After confining the radium ion in an rf Paul trap, it can be laser cooled by exciting the transition at nm. A repumping laser at nm is necessary to bring the ion back into the cooling cycle from the metastable state. Atomic parity violation leads to mixing of different parity states with the ground state. Thus the ground state has a small contribution from the state, resulting in a non-zero probability of a dipole transition between the and states, which is normally a forbidden electric dipole transition.
and ( b ) ba,scaledwidth=85.0% ]a transitional dipole interacts with the electric field while a quadrupole interacts with the field gradient . in an experimental setup as shown in fig .[ fig2 ] , it is possible to induce both a dipole transition ( due to apv ) as well as a quadrupole transition between and states . the interference term of these two leads to a measurable frequency change of the larmor frequency between the ground state zeeman sublevels in presence , as compared to , in absence of the laser fields .one of the suitable laser field configurations that produce the needed apv frequency shift is & where and are the electric field amplitudes of the two lasers .an ion placed at the antinode of field will suffer pnc induced electric dipole light - shift while the ion placed at the node of field , will show electric quadrupole light - shift .the quadrupole light - shifts of the zeeman sublevels in the ground state due to the field are of the same magnitude and direction .therefore , field will not lead to any change of the ground state larmor frequency defined by the energy difference between the zeeman sublevels of the ground state . on the contrary , the shifts due to field will increase the larmor frequency .this change in larmor frequency is proportional to the magnitude of the field . in the experiment one measures the larmor frequency with and without these laser fields .the difference of these two frequencies therefore , gives directly the apv light - shift which can be expressed as where and are the rabi frequencies for pnc and quadrupole induced transitions which are respectively proportional to the electric field amplitude and field gradient of the standing wave lasers , ; , are the zeeman sublevels of and states respectively .the distinguished advantage of this technique is that measurement of is free from any fluctuation in the laser frequency and other sources of quadrupole shift ( ) .the statistical uncertainty in the measurement of is given by where is an efficiency factor that depends on experimental conditions , and are the number of ions and coherence time respectively and is the time of observation . though in this experiment , longer coherence time improves the uncertainty in the measurement .accurate determination of from measured depends on precise determination of the electric fields and at the position of the ion in the trap which is a challenge of this experiment and is a major source of systematic uncertainty . also placingthe ion at the antinode of field or node of field is a difficult task .since pnc induced light - shift ( ) is very small ( cycles / s for v / m for ra ) , little fluctuations of the magnetic field resolving zeeman sublevels , will worsen the accuracy of the result . however , in the past decades several experimental techniques have been developed by which the above problems can be solved .one can manipulate the ion position with respect to the standing wave .recently , a technique has been reported by which the nodal point or the nodal line of the trap can be shifted upto few micrometers . 
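A rough sense of the scale of this positioning systematic follows from the standing-wave geometry: an ion displaced by dx from an antinode of a field E0*cos(kx) samples a field that is fractionally off by about (k*dx)^2/2, and the same expansion applies to the field gradient at a node. The sketch below evaluates this for a residual motion of order the harmonic-oscillator ground-state spread x0 = sqrt(hbar/(2*m*w)); the trap frequency and wavelength are assumed, illustrative numbers, not values from this proposal, and the expression is the generic standing-wave expansion rather than the paper's Eqs. (5)-(6).

import numpy as np

hbar = 1.054571817e-34
amu = 1.66053907e-27

def ground_state_spread(mass_amu, trap_freq_hz):
    """r.m.s. extent of the harmonic-oscillator ground state, x0 = sqrt(hbar/(2*m*w))."""
    m = mass_amu * amu
    w = 2 * np.pi * trap_freq_hz
    return np.sqrt(hbar / (2 * m * w))

def field_error(wavelength_m, dx):
    """Fractional deviation of a standing-wave field at an antinode (and of its
    gradient at a node) for a displacement dx:  roughly (k*dx)^2 / 2."""
    k = 2 * np.pi / wavelength_m
    return 0.5 * (k * dx) ** 2

# assumed, illustrative numbers: 1 MHz trap frequency, 800 nm light
for name, mass in (("Ba ion", 138.0), ("Ra ion", 226.0)):
    dx = ground_state_spread(mass, 1.0e6)
    print(f"{name}: x0 = {dx*1e9:5.2f} nm,  (k*x0)^2/2 = {field_error(800e-9, dx):.2e}")

The heavier Ra ion has the smaller residual spread, consistent with the remark below that the Lamb-Dicke parameter scales as the inverse square root of the mass.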
thus the ion can be placed on the geometrical line of the standing wave .systematic uncertainty originating from inaccurate positioning of the ion with respect to the electric field of the laser has been determined in the following .since the ion is cooled to lamb - dicke regime , its motion is confined within its de broglie wavelength ( ) , typically nm for ra and ba .the uncertainty in and fields for antinodal and nodal positions respectively are and these systematic uncertainties tabulated in table [ tab2 ] , have been estimated from the lamb - dicke parameter .there are also some techniques for controlling magnetic field fluctuations .recent improvement of high precision rf spectroscopic techniques opens up the possibility of success of the desired experiment in near future .lll atomic properties & & + stability ( neutral specis ) & stable & years ( ) + in & & + pnc light - shift ( ) ( ) & & + coherence time ( s ) & & + uncertainty ( ) & & + uncertainty from field ( eq .5 ) & & + uncertainty from field ( eq .6 ) & & + quenching rate of & 0.002 & 0.04 + quenching rate of & 0.0033 & 3.36 + quenching rate of & 0.0004 & 0.48 +atomic parity violation effect scales little faster than for heavier element . in is times larger as compared to and times larger than that of atomic cesium .that is why at first sight is seemed to be a promising candidate though there are other advantages over ba .the most recent calculation shows that the pnc amplitude present in is in the unit of , where is the weak charge . in table[ tab2 ] the relevant atomic properties of and have been compared .the lasers required for are in visible and near infra - red region .thus these lasers are available commercially as solid state diode lasers .radium being a heavier element may be confined within smaller orbit than barium as the lamb - dicke parameter is inversely proportional to the square root of mass of the ion . in addition , the known relative systematic uncertainties for ra are three times smaller as compared to ba .the element has a large number of isotopes with significant stability , thus opening up the possibility of the experiment to extract the effect of nuclear structure in apv .pnc amplitude contains both nuclear spin dependent ( nsd ) and independent ( nsi ) parts . from nsd part of , nuclear anapole momentcan be measured .however , in transition in ra or ba , the contribution of nsd part is smaller by few orders than nsi part and hence determination of anapole moment is difficult . to avoid nsi part , a similar experiment explained above can be performed using transition of nuclear spin non - zero ( ) isotopes of ra or ba .pnc allowed transition in these isotopes contains only nsd part and may lead to a direct measurement of nuclear anapole moment . is more favored candidate for an apv experiment than as pnc amplitude for transition is times larger in this isotope .however , there are several disadvantages of choosing ra as a possible candidate . 
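one of these challenges , the determination of the light fields at the position of the ion , can be illustrated with a short numerical sketch . assuming a standing wave of the form e(z) = e0 cos(kz) , the fractional uncertainty of the field at an antinode , and of the field gradient at a node , caused by residual motion of size dx is of order (k dx)^2 / 2 . the wavelength and localization length used below are hypothetical illustrative values , not numbers taken from this proposal .

```python
import numpy as np

# hypothetical illustrative values (not taken from this proposal)
wavelength = 800e-9          # assumed standing-wave laser wavelength [m]
dx = 15e-9                   # assumed residual ion localization (lamb-dicke regime) [m]

k = 2.0 * np.pi / wavelength # wavenumber of the standing wave

# for e(z) = e0*cos(k z):
#  - near an antinode the field behaves as e0*(1 - (k dz)**2 / 2), so the
#    fractional field uncertainty from imperfect localization is ~ (k dx)**2 / 2
#  - near a node the gradient de/dz = -e0*k*sin(k z) varies in the same way,
#    so the fractional gradient uncertainty is also ~ (k dx)**2 / 2
frac_field_err = 0.5 * (k * dx) ** 2   # dipole (e1) field error at the antinode
frac_grad_err = 0.5 * (k * dx) ** 2    # quadrupole field-gradient error at the node

print(f"k*dx = {k * dx:.3e}")
print(f"fractional e1-field uncertainty at the antinode : {frac_field_err:.2e}")
print(f"fractional field-gradient uncertainty at the node: {frac_grad_err:.2e}")
```

with these illustrative numbers the relative field uncertainty is below the per - cent level , which indicates why accurate placement of the ion on the nodal line matters so much for the systematic budget .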
trapping andcooling of ra has not been demonstrated so far .the lack of spectroscopic data on ra is also a problem .theoretical calculation needs to be more accurate ( below 1 % ) in order to compare with the experimentally obtained data .the atomic structure of radium is not well studied .coherence time is smaller for ra ( ) which will reduce the signal to noise ratio .the systematic uncertainties originating from the determination of and at the ion position are too large for ra and demand some special experimental techniques to eliminate those .the production of various isotopes of radium for the study of nuclear structure effects on apv demands well established facilities . at the kvi ,groningen such a facility has been developed where some isotopes of radium may be produced with an aim for performing apv experiment based on single trapped and laser cooled ra .atomic ra has been successfully trapped in a mot and laser cooled at argonne national laboratory in search of permanent electric dipole moment ( edm ) in atoms .thus there is hope for details spectroscopic data on ra to be available shortly which will lead towards the implementation of apv experiment on ra .with an aim to perform high precision rf spectroscopy in search of apv , work has been started by our group rcamos at iacs . has been chosen initially as barium is available commercially .a linear paul trap has been designed .the repumping laser at nm has been frequency stabilized using pound - drever - hall locking technique and frequency doubling of nm laser to produce cooling laser at nm is processing .part of this work is being supported by dst - serc , india .p. mandal and a. sen are thankful to csir , india for entertaining fellowship during the research work .p. mandal acknowledges the organizing committee of laser 2009 workshop , poznan for sponsoring financial support to participate in the workshop .spphys fortson , n. : phys .lett . * 70 * , 2383 ( 1993 ) bouchait , m. a. , bouchait , c. : phys .b * 48 * , 111 ( 1974 ) , rep . prog. phys . * 60 * , 1351 ( 1997 ) ginges , j. s. m. , flambaum , v. v. : phys . rep . * 397 * , 63 ( 2004 ) , marciano , w. j. , j. l. rosner , j. l. : phys . rev. lett . * 65 * , 2963 ( 1990 ) wood , c. s. , bennett , s. c. , cho , d. , masterson , b. p. , roberts , j. l. , tanner , c. e. , wieman , c. e. : science * 275 * , 1759 ( 1997 ) bennett , s. c. , wieman , c. e. : phys .* 82 * , 2484 ( 1999 ) meekohof , d. m. , vetter , p. , majumdar , p. k. , lamoreaux , s. k. , fortson , e. n. : phys .lett . * 71 * , 3442 ( 1993 ) vetter , p. a. , meekhof , d. m. , majumder , p. k. , lamoreaux , s. k. , fortson , e. n. : phys .74 * , 2658 ( 1995 ) macpherson , m. j. d. , zetie , k. p. , warrington , r. b. , stacey , d. n. , hoare , j. p. : phys .lett . * 67 * , 2784 ( 1991 ) demille , david : physlett . * 74 * , 4165 ( 1995 ) bouchait , m. a. : phys .lett . * 100 * , 123003 ( 2008 ) koerber , t. w. , schacht , m. , nagourney , w. , fortson , e. n. : j. phys .b : at . mol .. phys . * 36 * , 637 ( 2003 ) tsigutkin , k. , dounas - frazer , d. , family , a. , stalnaker , j. e. , yashchuk , v. v. , budker , d. : phys .lett . * 103 * , 071601 ( 2009 ) langacker , p. , luo , m. , mann , a. k. : rev .* 64 * , 87 ( 1992 ) majumdar , p. k. , tsai , l. l. : phys .a * 60 * , 267 ( 1999 ) leibfried , d. , blatt , r. , monroe , c. , wineland , d. : rev .* 75 * , 281 ( 2003 ) herskind , p. f. , dantan , a. , albert , m. , marler , j. p. , drewsen , m. : j. phys .b : at . mol .phys . 
* 42 * , 154008 ( 2009 ) sherman , j. a. , andalkar , a. , nagourney , w. , fortson , e. n. : phys .a * 78 * , 052514 ( 2008 ) sherman , j. a. , koerber , t. w. , markhotok , a. , nagourney , w. , fortson , e. n. : phys .* 94 * , 243001 ( 2005 ) koerber , t. w. , schacht , m. h. , hendrickson , k. r. g. , nagourney , w. , fortson , e. n. : phys .lett . * 88 * , 143002 ( 2002 ) sahoo , b. k. , chaudhuri , r. , das , b. p. , mukherjee , d. : phys .rev . lett . * 96 * , 163003 ( 2006 ) dzuba , v. a. , flambaum , v. v. , ginges , j. s. m. : phys .d * 66 * , 076013 ( 2002 ) wansbeek , l. w. , sahoo , b. k. , timmermans , r. g. e. , jungmann , k. , das , b. p. , mukherjee , d. : phys .a * 78 * , 050501(r ) ( 2008 ) fortson , e. n. , pang , y. , wilets , l. : phys . rev . lett . *65 * , 2857 ( 1990 ) geetha , k. p. , singh , a. d. , das , b. p. : phys .a * 58 * , r16 ( 1998 ) koerber , t. w. , thesis , doctor of philosophy , university of washington ( 2003 ) shidling , p. d. , giri , g.s . ,vanderhoek , d.j . ,jungmann , k. , kruithof , w. , onderwater , c.j.g . , sohani , m. , versolato , o.o . ,willmann , l. , wilschut , h.w .instrum . meth .a * 606 * , 305 ( 2009 ) guest , j. r. , scielzo , n. d. , ahmad , i. , bailey , k. , greene , j. p. , holt , r. j. , lu , z. t. , oconnor , t. p. , potterveld , d. h. : phys .* 98 * , 093001 ( 2007 ) drever , r. w. p. , hall , j. l. , kowalski , f. v. , appl .b * 31 * , 97 ( 1983 ) raab , c. , bolle , j. , oberst , h. , eschner , j. , schmidt - kaler , f. , blatt , r. : applb * 67 * , 683 ( 1998 ) | single trapped and laser cooled radium ion as a possible candidate for measuring the parity violation induced frequency shift has been discussed here . even though the technique to be used is similar to that proposed by fortson , radium has its own advantages and disadvantages . the most attractive part of radium ion as compared to that of barium ion is its mass which comes along with added complexity of instability as well as other issues which are discussed here . example.eps gsave newpath 20 20 moveto 20 220 lineto 220 220 lineto 220 20 lineto closepath 2 setlinewidth gsave .4 setgray fill grestore stroke grestore |
today we are excited by the fast advance of physics and applied mathematics in the area of research on complex systems in nature and society [ 1 - 8 ] . an important part of the ground for this success was created more than 60 years ago , when l. f. richardson and f. w. lanchester , both famous british scientists , were the first to raise and apply the idea of mathematical modeling of arms races and military combats [ 9 - 12 ] . for many years the research on wars and other military conflicts was concentrated in the military universities and academies . in the last 20 years , and especially after the terrorist attack of 11 september 2001 , this research has become topical for many physicists and applied mathematicians too [ 13 - 15 ] . in this paper we shall follow the terminology used by epstein , who applied ecological models of the lotka - volterra kind to the description of combats . let us have two conflicting groups , named the `` red group '' and the `` blue group '' . a general form of the model equations of an armed conflict between these groups is where and are the numbers of armed members of the two groups ; and are the `` firing effectiveness '' ( technology level ) of the groups ; and and are linear or nonlinear functions , depending on the character of the conflict . epstein proposed the following class of models of the conflict where are real nonnegative coefficients . if these coefficients are constants the models are called hard models . if the coefficients depend on the number of participants or on the parameters of the environment the models are called soft ones . an important characteristic of the model ( [ epst1 ] ) is the casualty exchange ratio or state equation ( the ratio of eliminated members of the `` red '' and `` blue '' groups ) . for ( [ epst1 ] ) the ratio is where and . the integral form of the above casualty exchange ratio is where and are the numbers of the members of the two groups at the beginning of the conflict ( at ) . for developing intuition and decision skills it is of interest to know and in closed form . such analytical solutions are possible only in a small number of cases . section 2 contains the solution of the system of model equations for selected values of the parameters . several concluding remarks are summarized in section 3 . the linear model of lanchester describes position conflicts such as the battles of the somme and verdun in 1916 . the coefficients in ( [ epst1 ] ) are , . the equation of state is quadratic where can be positive , negative , or . for the case we obtain and the solutions are evidently which means that after an endless position conflict the two groups are destroyed and neither of them wins . however the situation changes if . let us first assume that . from the state equation ( [ steq1 ] ) and the solutions of the model system of equations are where . the obtained solutions satisfy the initial conditions , . in addition at we obtain and . in other words the blue group will win the conflict if it lasts long enough . the red group commanders have to change the strategy if they want to escape defeat . if this does not happen , after the time from the beginning the red group will be completely destroyed ( fig . 1 ) . [ fig . 1 : the conflict ends at a finite time with the destruction of the red group ; nevertheless the blue group suffers heavy losses . ] we note again that this happens when , i.e.
, when the condition ( [ sqlaw ] ) ( but with instead of ) for military combats is known as the square law of lanchester : to stalemate and adversay army two times as numerous as yours , your army must be four times as effective .but in this case our army will be also destroyed .thus the correct statement of the square law is as follows : to stalemate and adversay army two times as numerous as yours , your army must be four times as effective .in other words : in position war , in order to stop army that is time larger than yours your army has to be more than technologically better ( to have more than times larger firepower ) .we now discuss the case .the solutions of the model equations are now the red group wins if i.e. when , . in this casethe coefficients in the general model are and the equation of state is epstein assumes that stalemate occurs when , i.e. , when .let us discuss this in more details .if ( epstein case ) then the solution of the model system of equations is thus at , none of the groups wins , the attack is stopped but the two groups are completely destroyed . this of course is not of favor for the group leaders . in order to consider more realistic scenarios we have to set . in this casethe solutions of the model equations are where and . nowthe sign of determines the asymptotic behavior of the number of members of the two groups . if then and .thus the blue groups wins and the results of the attack is that the red group is defeated -fig.2 ) . , width=529 ] now let .then in this case as winner from the attack scenario is the red group as , . now let us discuss the following detail .let us assume that at the beginning of the attack the blue group has more soldiers than the red group : but the firepower of the red group is larger : .then there must be a moment where the two groups will have equal number of armed members : .this moment can be determined from the equation of state ( [ steq2 ] ) what is interesting that when a solution exists only if and it is ( see fig.3 ) ) , width=529 ] for this case , .the equation of state is what is interesting here is that the large group does not win in any case .let for an example .then the solutions of the model equations are which means that none wins ( ) and the groups are destroyed .this of course is not acceptable for both sides .more realistic are situations where .let first .the solution of the model equations is where and .thus in the asymptotic case and , i.e. the blue groups wins .quite interesting is the case . in this casethe solution of the model system is where , . in this case at obtain .thus at this conflict ends .blue group is completely destroyed and the red group wins . for these models , and .epstein argues that the exponents in the general model ( [ epst1 ] ) should be kept in the interval $ ] .however there is no evidence to sustain such an assertion .here we investigate two models for which some of the exponents equal .let first , .the model equations are the equation of state is if then the equation of the state together with one of the model equations yield no one of the groups wins at . if we obtain where .thus and .if thus and .let now and . the equilibrium condition ( [ icer ] ) becomes ^{\epsilon } , \epsilon = \frac{r}{b}\ ] ] the final solution is where is a typical time of decay .let us form the ratio where or .assuming but and letting we obtain \hskip.5 cm \kappa = \frac{r+b}{r - b}\ ] ] thus at . 
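the square - law behaviour discussed above can be checked with a short numerical sketch . it assumes the classical lanchester aimed - fire form dR/dt = -b B , dB/dt = -r R , consistent with the linear model described here ; the initial strengths and firing effectivenesses below are purely illustrative and are chosen so that the red group is twice as numerous while the blue group is four times as effective , i.e. exactly the stalemate condition r R0^2 = b B0^2 .

```python
import numpy as np

def lanchester_square(R0, B0, r, b, dt=1e-3, t_max=50.0):
    """integrate dR/dt = -b*B, dB/dt = -r*R (aimed-fire / square-law form)
    with a simple euler scheme until one side is annihilated or t_max is reached."""
    R, B, t = R0, B0, 0.0
    while R > 0.0 and B > 0.0 and t < t_max:
        R, B = R - b * B * dt, B - r * R * dt
        t += dt
    return max(R, 0.0), max(B, 0.0), t

# illustrative parameters: red twice as numerous, blue four times as effective
R0, B0 = 2000.0, 1000.0
r, b = 0.01, 0.04

R, B, t = lanchester_square(R0, B0, r, b)
print(f"strengths at t = {t:.1f}:  R = {R:.1f}  B = {B:.1f}")

# the square-law invariant r*R^2 - b*B^2 is conserved along the trajectories
print("invariant at t = 0  :", r * R0 ** 2 - b * B0 ** 2)
print("invariant at the end:", r * R ** 2 - b * B ** 2)
```

in this balanced case both sides decay together and neither is destroyed in finite time ; increasing b slightly makes the blue group the winner , in line with the analysis above .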
nevertheless , both groups , fight to the end ( ) .it is known from the history of the conflicts that epidemic events had frequently occurred particularly in case of attrition and ambush conflicts .the simplest way to account for this effect ( removing of conflict participants because of sickness ) is to modify the general model as follows where and are coefficients of morbility ( sick rate ) removal .generally .we choose to demonstrate the effect of epidemics on the model ( [ steq3 ] ) in the form hence where in the general case ( [ dop4 ] ) the quantity can be zero , positive or negative .for an example if the solution of the model system is where these solutions degenerate into ( [ solution31 ] ) at .a class of mathematical models of armed conflicts ( [ epst1 ] ) was investigated .the purpose was to identify particular cases with exact simple solutions in analytical form and , where and were the armies numbers .it was found that these requirements were met by the linear ( lanchester s ) model known from long ago in the form ( [ steq1 ] ) as well as by several nonlinear models described in this paper .these models demonstrate rich behavior in the time . *no one of the two groups and wins after endless ( ) attrition conflict- fighting to the finish ( [ res1 ] ) , ( [ sol6y ] ) * one of the groups wins after limited in time or after an endless conflict ( see table 1 ) + -2.5 cm + .summary of model predictions .[ cols="^,^,^,^,^,^ " , ] both conflicting groups are defined by their initial numbers , and respective firing efectivenesses per shot . the character of the conflict is modeled by the form of the functions and ( [ model1 ] ) which can be linear or nonlinear .all models can be extended to account for occuring of epidemic events in the fighting groups .the model ( [ dop2 ] ) is an example .99 nature , * 337 * , 1989 , 701 - 704 . j. math .sociology , * 18 * , 1993 , 47 -64 .studies quarterly , * 37 * , 1993 , 55 - 72 .usa , * 99 * , 2002 , 7193 - 7194 .sci.usa , * 102 * , 2005 , 255 - 260 .theoretical population biology , * 66 * , 2004 , 1 - 12 .lett . a , * 349 * , 2006 , 350 - 355 .chaos solitons & fractals , * 33 * , 2007 , 1658 - 1671 .nature , * 135 * , 1935 , 830 - 831. nature , * 148 * , 1941 , 598 - 598 .soc . , * 107 * , 1944 , 242 - 250 .soc . a , * 109 * , 1946 , 130 - 156 .int . security , * 12 * , 1998 , 154 - 165 .american anthropology , * 109 * , 2007 , 318 - 329 .j. infectious diseases , * 11 * , 2007 , 98 - 108 .nonlinear dynamics , mathematical biology and social sciences .addison - wesley , readings , ma , 1997 . | the human society today is far from perfection and conflicts between groups of humans are frequent events . one example for such conflicts are armed intergroup conflicts . the collective behavior of the large number of cooperating participants in these conflicts allows us to describe the conflict on the basis of models containing only few variables . in this paper we discuss several cases of conflicts without use of weapons of non - conventional kind . in the ancient times the chinese writer sun tsu mentioned that the war is an art . we can confirm that the conflict is an art but with much mathematics at the background . * key words * : conflict , attrition , ambush , combat , mathematical models |
since the seminal works on the small - world phenomenon by watts and strogatz and the scale - free property by barabsi and albert , the studies of complex networks have attracted a lot of interests within the physical community .most of the previous works focus on the topological properties ( i.e. non - geographical properties ) of the networks . in this sense, every edge is of length 1 , and the _ topological distance _ between two nodes is simply defined as the number of edges along the shortest path connecting them .to ignore the geographical effects is reasonable for some networked systems ( e.g. food webs , citation networks , metabolic networks ) , where the euclidean coordinates of nodes and the lengths of edges have no physical meanings . yet , many real - life networks , such as transportation networks , the internet , and power grids , have well - defined node - positions and edge - lengths .in addition to the topologically preferential attachment introduced by barabsi and albert , some recent works have demonstrated that the spatially preferential attachment mechanism also plays a major role in determining the network evolution .very recently , some authors have investigated the spatial structures of the so - called _ optimal networks _ .an optimal network has a given size and an optimal linking pattern , and is obtained by a certain global optimization algorithm ( e.g. simulated annealing ) with an objective function involving both geographical and topological measures .their works provide some guidelines in network design . however , the majority of real networks are not fixed , but grow continuously . therefore , to study growing networks with an optimal policy is not only of theoretical interest , but also of practical significance . in this paper , we propose a growing network model , in which , at each time step , one node is added and connected to some existing nodes according to an optimal policy .the degree distribution , edge - length distribution , and topological as well as geographical distances are analytically obtained subject to some reasonable approximations , which are well verified by simulations .consider a square of size with open boundary condition , that is , a open set in euclidean space , where " signifies the cartesian product .this model starts with fully connected nodes inside the square , all with randomly assigned coordinates . since there exists nodes initially , the discrete time steps in the evolutionare counted as .then , at the time step ( ) , a new node with randomly assigned coordinates is added to the network .rank each previously existing node according to the following measure : and the node having the smallest is arranged on the top . here, each node is labelled by its entering time , represents the position of the node , is the degree of the node at time , and is a free parameter . the newly added node will connect to existing nodes that have the smallest ( i.e. on the top of the queue ) .all the simulations and analyses shown in this paper are restricted to the specific case , since the analytical approach is only valid for the tree structure with .however , we have checked that all the results will not change qualitatively if is not too large compared with the network size . [ 0.6 ] : ( a ) , ( b ) , ( c ) , ( d ) .all the four networks are of size and .,title="fig : " ] [ 0.8 ] and .the solid line represents the assumption . in this simulation , we first generate a network of size by using the present optimal growing policy . 
to detect ,some test nodes with randomly assigned coordinates are used , and for each test node , following eq .( 1 ) , the existing nodes with minimal is awarded one point .the test node will be removed from the network after testing .the area for the node is approximately estimated as the ratio of the score node has eventually got after all the testings to the total number of test nodes .the simulation shown here is obtained from 30000 test nodes , and the parameter is fixed.,title="fig : " ] in real geographical networks , short edges are always dominant since constructing long edges will cost more . on the other hand , connecting to the high - degree nodes will make the average topological distance from the new node to all the previous nodes shorter .these two ingredients are described by the numerator and denominator of eq .( 1 ) , respectively .in addition , the weights of these two ingredients are usually not equal .for example , the airline travellers worry more about the number of sequential connections , the railway travellers and car drivers consider more about geographical distances , and the bus riders often simultaneously think of both factors . in the present model , if , only the geographical ingredient is taken into account . at another extreme , if , the geographical effect vanishes .1 shows some examples for different values of . when only the geographical ingredient is considered ( ) , most edges are very short and the degree distribution is very narrow . in the case of , the average geographical length of edges becomes longer , and the degree distribution becomes broader .when , the scale - free structure emerges and a few hub nodes govern the whole network evolution . as becomes very large , the network becomes star - like .at the time step , there are pre - existing nodes .the square can be divided into regions such that if a new node is fallen inside the region , the quantity is minimized , thus the new node will attach an edge to the node . since the coordinate of the new node is randomly selected inside the square , the probability of connecting with the node is equal to the area of . if the positions of nodes are uniformly distributed , statistically , the area of is approximately proportional to with a time factor as .2 shows the typical simulation results , which strongly support the valid of this assumption .accordingly , by using the mean - field theory , an analytic solution of degree distribution can be obtained . however , when , most edges are connected to one single node ( see fig .1d ) , so analytic solution is unavailable . 
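the growth rule described above is easy to simulate directly . since the exact ranking measure is not reproduced in this text , the sketch below assumes the cost w_i = d_i / k_i^sigma ( euclidean distance to the candidate node divided by a power of its degree ) , which matches the qualitative description : sigma = 0 selects the geographically nearest node , while large sigma is dominated by the degrees . the network size , the sigma values and the number of initial nodes are illustrative , and m = 1 edge is attached per new node , as in the analysis .

```python
import numpy as np

def grow_network(N, sigma, N0=5, rng=None):
    """grow a geographical network in the unit square: each new node, placed at a
    random position, connects to the existing node that minimizes an assumed cost
    w_i = dist(new, i) / k_i**sigma (hypothetical form of the ranking measure)."""
    rng = np.random.default_rng(rng)
    pos = rng.random((N, 2))                      # random coordinates in [0, 1)^2
    deg = np.zeros(N, dtype=int)
    edges = [(i, j) for i in range(N0) for j in range(i + 1, N0)]   # initial clique
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    for t in range(N0, N):                        # one new node per time step
        d = np.linalg.norm(pos[:t] - pos[t], axis=1)
        w = d / deg[:t] ** sigma                  # assumed optimal-policy cost
        target = int(np.argmin(w))
        edges.append((t, target))
        deg[t] += 1
        deg[target] += 1
    return pos, deg, edges

for sigma in (0.0, 0.5, 1.0, 1.5):
    _, deg, _ = grow_network(N=3000, sigma=sigma, rng=0)
    print(f"sigma = {sigma:3.1f}:  max degree = {deg.max():4d}   "
          f"fraction of leaves = {(deg == 1).mean():.2f}")
```

the maximum degree grows quickly with sigma , reflecting the broadening of the degree distribution described above ; the same routine is reused in the sketch further below .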
here, we only consider the case of .assume where is a constant that can be determined self - consistently .using the continuum approximation in time variable , the evolving of node s degree reads with the initial condition .the solution is where accordingly , the degree distribution can be obtained as the constant is determined by the condition , where signifies the average degree .the above solution is similar to the one obtained by using the approach of _ rate equation _ proposed by krapivsky _in addition , one should note that if , the mean - field theory yields a solution , which is comparable to the exact analytic solution .clearly , the degree distribution obeys a power - law form at , and an exponential form at .when is in the interval , the networks display the so - called stretched exponential distribution : for small , the distribution is close to an exponential one , while for large , it is close to a power law .this result is in accordance with the situation of transportation networks .if only the geographical ingredient is taken into account ( e.g. road networks ) , then the degree distribution is very narrow . on the contrary , if the topological ingredient plays a major role ( e.g. airport networks ) , then the scale - free property emerges . when both the two ingredients are not neglectable ( e.g. bus networks ) , the degree distribution is intervenient between power - law and exponential ones .[ 0.8 ] ( a ) and ( b ) .the black squares and red curves represent simulated and analytic results , respectively .all the data are averaged over independent runs , with network size ( i.e. ) fixed.,title="fig : " ] [ 0.8 ] ( a ) and ( b ) . the black squares and red curves represent simulated and analytic results , respectively . all the data are averaged over independent runs , with network size ( i.e. ) fixed.,title="fig : " ] fig .3 shows the simulation results for and .the degree distribution follows a power - law form when , which well agrees with the analytic solution . in the case of ,the degree distribution is more exponential .however , it is remarkably broader than that of the erds - rnyi model .note that , the positions of all the nodes are not completely uniformly distributed , which will affect the degree distribution .this effect becomes more prominent when the geographical ingredient plays a more important role ( i.e. smaller ) .therefore , although the simulation result for is in accordance with the analysis qualitatively , the quantitative deviation can be clearly observed .denote by the topological distance between the node and the first node . by using mathematical induction, we can prove that there exists a positive constant , such that .this proposition can be easily transferred to prove the inequality under the condition for . indeed ,since the network has a tree structure , does not depend on time . under the framework of the mean field theory , the iteration equation for reads with the initial condition .( 7 ) can be understood as follows : at the time step , the node has probability to connect with the node .since the average topological distance between the node and the first node is , the topological distance of the node to the first one is if it is connected with the node .according to the induction assumption , note that , statistically , if , therefore where denotes the average over all the nodes . 
substituting inequality ( 9 ) into ( 8) ,we have rewriting the sum in continuous form , we obtain according to the mathematical induction principle , we have proved that the topological distance between the node and the first node , denoted by , could not exceed the order . for arbitrary nodes and ,clearly , the topological distance between them could not exceed the sum , thus the average topological distance of the whole network could not exceed the order either .this topological characteristic is referred to as the small - world effect in network science , and has been observed in a majority of real networks . actually , one is able to prove that the order of in the large limit is equal to ( see appendix a for details ) . furthermore , the iteration equation for general functions and , has the following solution : for the two special cases of ( , ) and ( , ) , the solutions are simply and , respectively . in fig .4 , we report the simulation results about the average distance vs network size . in each case , the data points can be well fitted by a straight line in the semi - log plot , indicating the growth tendency , which agrees well with the analytical solution . [ 0.8 ] ( a ) and ( b ) .as shown in each inset , the data points can be well fitted by a straight line in the semi - log plot , indicating the growth of the average distance approximately obeys the form .all the data are averaged over independent runs , where the maximal network size is ( i.e. ).,title="fig : " ] [ 0.8 ] ( a ) and ( b ) .as shown in each inset , the data points can be well fitted by a straight line in the semi - log plot , indicating the growth of the average distance approximately obeys the form .all the data are averaged over independent runs , where the maximal network size is ( i.e. ).,title="fig : " ]denote by the edge between nodes and , and the geographical length of edge is . when the node is added to the network, the geographical length of its attached edge approximately obeys the distribution where in the large limit .the derivation of this formula is described as follows .the probability of the edge length being between and is given by the summation , where is the probability that falls between and , and the node minimizes the quantity among all the previously existing nodes .this probability is approximately given by straightforwardly , the geographical length distribution of the newly added edge at the time step ( the edge for short ) is obtained as the lower boundary in the integral is replaced by 0 in the last step , which is valid only when .the cumulative length distribution of the edges at time step is given by ,\end{aligned}\ ] ] where the argument of function is .for , the approximate formula for reads and , when , if , the last step in eq .( 16 ) is invalid but the analytic form for can be directly obtained as \end{aligned}\ ] ] therefore , when , is approximately given by where is a numerical constant , and when , has the same form as that of eq .( 19 ) .5 plots the cumulative edge - length distributions . from this figure, one can see a good agreement between the theoretical and the numerical results .furthermore , one can calculate the expected value of the edge s geographical length , as which is valid only for sufficiently large and . 
according to eq .( 22 ) , decreases as as increases , which is consistent with the intuition since all the nodes are embedded into a 2-dimensional euclidean space .it may also be interesting to calculate the total length of all the edges at the time step , as is proportional to for .when , a finite fraction of nodes will be connected with a single hub node and therefore we expect that in this case .therefore , in the large limit , will increase quite abruptly when the parameter exceeds 1 .this tendency is indeed observed in our numerical simulations , as shown in fig .[ 0.8 ] ( a ) and ( b ) .the black squares and red curves represent simulation and analytic results , respectively .all the data are averaged over independent runs , with network size ( i.e. ) fixed.,title="fig : " ] [ 0.8 ] ( a ) and ( b ) . the black squares and red curves represent simulation and analytic results , respectively .all the data are averaged over independent runs , with network size ( i.e. ) fixed.,title="fig : " ]for an arbitrary path from node to , the corresponding geographical length is , where denotes the length of edge .accordingly , the geographical distance between two nodes is defined as the minimal geographical length of all the paths connecting them .now , we calculate the geographical distance between the node and the first node .since our network is a tree graph , does not depend on time . by using the mean field theory, we have or where , according to eq . ( 22 ) , it is not difficult to see that has an upper bound as approaches infinity .one can use the trial solution to test this conclusion : where from eq .( 27 ) , one obtains that .similar to the solution of eq .( 13 ) , as for and as for . however , it only reveals some qualitative property , and the exact numbers are not meaningful .this is because the value of is obtained by the average over infinite configurations for infinite , while in one evolving process is mainly determined by the randomly assigned coordinates of the node .[ 0.8 ] vs parameter with fixed . increases when exceeds the critical value , which well agrees with the theoretical prediction.,title="fig : " ]in many real - life transportation networks , the geographical effect can not be ignored .some scientists proposed certain global optimal algorithms to account for the geographical effect on the structure of a static network . on the other hand , many real networks grow continuously .therefore , we proposed a growing network model based on an optimal policy involving both topological and geographical measures .we found that the degree distribution will be broader when the topological ingredient plays a more important role ( i.e. larger ) , and when exceeds a critical value , a finite fraction of nodes will be connected with a single hub node and the geographical effect will become insignificant .this critical point can also be observed when detecting the total geographical edge - length in the large limit .we obtained some analytical solutions for degree distribution , edge - length distribution , and topological as well as geographical distances , based on reasonable approximations , which are well verified by simulations ..empirical degree distributions for geographical transportation networks .[ cols="<,^,>",options="header " , ] although the present model is based on some ideal assumptions , it can , at least qualitatively , reproduce some key properties of the real transportation networks . 
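the geographical edge - length statistics can be read off the same simulated networks . the sketch below reuses grow_network from the earlier sketch ( sizes and sigma values are again illustrative ) and reports the mean edge length and the total edge length for several sigma , which should show the abrupt increase of the total length once sigma exceeds 1 discussed above .

```python
import numpy as np
# assumes grow_network() from the earlier sketch is available

def edge_lengths(pos, edges):
    """euclidean lengths of all edges of the grown network."""
    e = np.asarray(edges)
    return np.linalg.norm(pos[e[:, 0]] - pos[e[:, 1]], axis=1)

N = 5000
for sigma in (0.5, 1.0, 1.5, 2.0):
    pos, _, edges = grow_network(N=N, sigma=sigma, rng=3)
    ell = edge_lengths(pos, edges)
    print(f"sigma = {sigma:3.1f}:  mean edge length = {ell.mean():.4f}   "
          f"total length = {ell.sum():8.1f}")
```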
in table 1 , we list some empirical degree distributions of transportation networks .clearly , when building a new airport , we tend to firstly open some flights connected with previously central airports which are often of very large degrees .even though the central airports may be far from the new one , to open a direct flight is relatively convenient since one does nt need to build a physical link .therefore , the geographical effect is very small in the architecture of airport networks , which corresponds to the case of larger that leads to an approximately power - law degree distribution .for other four cases shown in table 1 , a physical link , which costs much , is necessary if one wants to connect two nodes , thus the geographical effect plays a more important role , which corresponds to the case of smaller that leads to a relatively narrow distribution .a specific measure of geographical network is its edge - length distribution .a very recent empirical study shows that the edge - length distribution of the highly heterogenous networks ( e.g. airport networks , corresponding to the present model with larger ) displays a single - peak function with the maximal edge - length about five times longer than the peaked value ( see fig .1c of ref . ) , while in the extreme homogenous networks ( e.g. railway networks , corresponding to the present model with ) , only the very short edge can exist ( see fig .1a of ref .these empirical results agree well with the theoretical predictions of the present model .firstly , when is obviously larger than zero , the edge - length distribution is single - peaked with its maximal edge - length about six times longer than the peak value ( see fig . 5 ) .and , when is close to zero , eq .( 17 ) degenerates to the form where denotes the network size . clearly , in the large limit , except a very few initially generated edges , only the edge of very small length can exist .the analytical approach is only valid for the tree structure with .however , we have checked that all the results will not change qualitatively if is not too large compared with the network size . some analytical methods proposed here are simple but useful , andmay be applied to some other related problems about the statistical properties of complex networks .for example , a similar ( but much simpler ) approach , taken in section 4 , can also be used to estimate the average topological distance for some other geographical networks . finally , it is worthwhile to emphasize that , the geographical effects should also be taken into account when investigating the efficiency ( e.g. the traffic throughput ) of transportation networks .very recently , some authors started to consider the geographical effects on dynamical processes , such as epidemic spreading and cascading , over scale - free networks .we hope the present work can further enlighten the readers on this interesting subject .the authors wish to thank dr .hong - kun liu to provide us some very helpful data on chinese city - airport networks .this work was partially supported by the national natural science foundation of china under grant nos .10635040 , 70471033 , and 10472116 , the special research founds for theoretical physics frontier problems under grant no .a0524701 , and specialized program under the presidential funds of the chinese academy of science .substituting eq . ( 4 ) into eq . 
( 7 ) , one obtains that then , define we next prove that in the large limit by using mathematical induction .suppose for sufficiently large , all are less than for with being a constant greater than .then , from eq .( a1 ) , we have therefore , for all . similarly ,suppose for sufficiently large , all are greater than for with being a constant less than .then , from eq .( a1 ) , we have therefore , for all . watts1998 d. j. watts , and s. h. strogatz , nature * 393 * , 440 ( 1998 ) .barabsi , and r. albert , science * 286 * , 509 ( 1999 ) ; a. -l .barabsi , r. albert , and h. jeong , physica a * 272 * , 173 ( 1999 ) .r. albert , and a. -l .barabsi , rev .mod . phys . * 74 * , 47 ( 2002 ) .s. n. dorogovtsev , and j. f. f. mendes , adv . phys . * 51 * , 1079 ( 2002 ) . m. e. j. newman , siam review * 45 * , 167 ( 2003 ). s. boccaletti , v. latora , y. moreno , m. chavez , and d. -u .hwang , phys .rep . * 424 * , 175 ( 2006 ) .d. garlaschelli , g. caldarelli , and l. pietronero , nature * 423 * , 165 ( 2003 ) .k. brner , j. t. maru , and r. l. goldstone , proc .natl . acad .* 101 * , 5266 ( 2004 ) .h. jeong , b. tombor , r. albert , z. n. oltvai , and a. -l .barabsi , nature * 407 * , 651 ( 2000 ) .p. sen , s. dasgupta , a. chatterjee , p. a. sreeram , g. mukherjee , and s. s. manna , phys .e * 67 * , 036106 ( 2003 ) .a. brrrat , m. barthlemy , r. pastor - satorras , and a. vespignani , proc .* 101 * , 3747 ( 2004 ) ; r. guimer , s. mossa , a. turtschi , and l. a. n. amaral , proc .102 * , 7794 ( 2005 ) .m. faloutsos , p. faloutsos , and c. faloutsos , comput .* 29 * , 251 ( 1999 ) .r. pastor - satorras , a. vzquez , and a. vespignani , phys .lett . * 87 * , 258701 ( 2001 ) .r. crucitti , v. latora , and m. marchiori , physica a * 338 * , 92 ( 2004 ) .r. albert , i. albert , and g. l. nakarado , phys .e * 69 * , 025103 ( 2004 ) .s. h. yook , h. jeong , and a. -l .barabsi , proc .* 99 * , 13382 ( 2001 ) .m. barthlemy , europhys . lett . *63 * , 915 ( 2003 ) ; c. herrmann , m. barthlemy , and p. provero , phys . rev .e * 68 * , 026128 ( 2003 ) .s. s. manna and p. sen , phys .e * 66*,066114 ( 2002 ) ; p. sen , k. banerjee , and t. biswas , phys .e * 66 * , 037102 ( 2002 ) ; p. sen and s. s. manna , phys.rev .e * 68 * , 026104 ( 2003 ) ; s. s. manna , g. mukherjee , and p. sen , phys .e * 69 * , 017102 ( 2004 ) ; g. mukberjee , and s. s. manna , phys .e * 74 * , 036111 ( 2006 ) .m. t. gastner , and m. e. j. newman , eur .j. b * 49 * , 247 ( 2006 ) .m. barthlemy , and a. flammini , e - print arxiv : physics/0601203 . b. m. waxman , ieee j. selected areas comm .* 6 * , 1617 ( 1988 ) .zhang , k. chen , y. he , t. zhou , b. -b .su , y. -d .jin , h. chang , y. -p .zhou , l. -c .sun , b. -h .wang , d. -r .he , physica a * 360 * , 599 ( 2006 ) .krapivsky , s. redner , and f. leyvraz , phys .. lett . * 85 * , 4629 ( 2000 ) ; p.l .krapivsky , and s. redner , phys .e * 63 * , 066123 ( 2001 ) .j. laherrere , and d. sornette , eur .j. b * 2 * , 525 ( 1998 ) .p. erds , and a. rnyi , publ .inst . hung .sci . * 5 * , 17 ( 1960 ) .l. a. n. amaral , a. scala , m. barthlemy , and h. e. stanley , proc .97 * , 11149 ( 2000 ) ; s. h. strogatz , nature * 410 * , 268 ( 2001 ). l. a. braunstein , s. v. buldyrev , r. cohen , s. havlin , and h. e. stanley , phys .lett . * 91 * , 168701 ( 2003 ) .v. latora , and m. marchiori , physica a * 314 * , 109 ( 2002 ). k. h. chang , k. kim , h. oshima , and s. -m .yoon , j. korean phys .* 48 * , s143 ( 2006 ) .m. kurant , and p. thiran , phys .lett . 
* 96 * , 138701 ( 2006 ) ; phys .e * 74 * , 036114 ( 2006 ) .j. sienkiewicz , and j. a. hoyst , phys .e * 72 * , 046127 ( 2005 ) .liu , and t. zhou , acta physica sinica ( to be published ) .g. bagler , e - print cond - mat/0409773 .t. zhou , g. yan , and b. -h .wang , phys .e * 71 * , 046141 ( 2005 ) ; z. -m .gu , t. zhou , b. -h .wang , g. yan , c. -p .zhu , and z. -q .discret . impuls .algorithms * 13 * , 505 ( 2006 ) .zhang , l. -l .rong , and f. comellas , physica a * 364 * , 610 ( 2006 ) ; z. -z .zhang , l. -l .rong , and f. comellas , j. phys .a * 39 * , 3253 ( 2006 ) .g. yan , t. zhou , b. hu , z. -q .fu , and b. -h .wang , phys .e * 73 * , 046108 ( 2006 ) ; t. zhou , g. yan , b. -h .wang , z. -q .fu , b. hu , c. -p .zhu , and w. -x .wang , dyn .discret . impuls .algorithms * 13 * , 463 ( 2006 ) .xu , z. -x .wu , and g. chen , e - print arxiv : physics/0604187 ; x. -j .xu , w. -x .wang , t. zhou , and g. chen , e - print arxiv : physics/0606256 . l. huang , l. yang , and k. yang , phys . rev .e * 73 * , 036102 ( 2006 ) . | in this article , we propose a growing network model based on an optimal policy involving both topological and geographical measures . in this model , at each time step , a new node , having randomly assigned coordinates in a square , is added and connected to a previously existing node , which minimizes the quantity , where is the geographical distance , the degree , and a free parameter . the degree distribution obeys a power - law form when , and an exponential form when . when is in the interval , the network exhibits a stretched exponential distribution . we prove that the average topological distance increases in a logarithmic scale of the network size , indicating the existence of the small - world property . furthermore , we obtain the geographical edge - length distribution , the total geographical length of all edges , and the average geographical distance of the whole network . interestingly , we found that the total edge - length will sharply increase when exceeds the critical value , and the average geographical distance has an upper bound independent of the network size . all the results are obtained analytically with some reasonable approximations , which are well verified by simulations . |
during the past decade , much attention has been devoted to accommodating single high - dimensional sources of molecular data ( omics ) in the calibration of prediction models for health traits . for example , microarray - based transcriptome profiling and mass spectrometry proteomics have been established as promising omic predictors in oncology and , to a lesser extent , in metabolic health . nowadays , due to technical advances in the field and evolving biological knowledge , novel omic measures , such as nmr proteomics and metabolomics or nano - lc - ms and uplc glycomics , are emerging as potentially powerful new biomolecular marker sets . as a result , it is increasingly common for studies to collect a range of omic measurements in the same set of individuals , using different measurement platforms and covering different aspects of human biology . this causes new statistical challenges , among which is the evaluation of the ability of novel biomolecular markers to improve predictions based on previously established predictive omic sources , often referred to as their added or incremental predictive ability . an illustrative example of these new methodological challenges is given by our motivating problem . we have access to data from 248 individuals sampled from the helsinki area , finland , within the dietary , lifestyle , and genetic determinants of obesity and metabolic syndrome ( dilgom ) study . one - hundred - thirty - seven highly correlated nmr serum metabolites and 7380 beads from array - based transcriptional profiling of blood leukocytes were measured at baseline in 2007 , together with a large number of clinical and demographic factors which were also measured in 2014 , after seven years of follow - up . our primary goal is the prediction of future body mass index ( bmi ) , using biomolecular markers measured at baseline . more specifically , we would like to compare the predictive ability of the available metabolomics and transcriptomics , and to determine if both should be retained in order to improve single - omic - source predictions of bmi at seven years of follow - up . in our setting , it is necessary to both calibrate the predictive model based on each source of omic predictors and assess the incremental predictive ability of a secondary one relative to the first set , using the same set of observations . hence , in order to avoid overoptimism and provide realistic estimates of performance , it is necessary to control for the re - use of the data , which has already been employed for model fitting within the same observations . this is a very important issue in omic research , where external validation data are hard to obtain . it is well known that biased estimation of model performance due to re - use of the data increases with a large number of predictors , and omic sets are typically high - dimensional ( , sample size and the number of predictors ) . extra difficulties in our setting are the different dimensions ( number of features ) , scales and correlation structures of each omic source , and possible correlation between omic sources induced by partially common underlying biological information .
evaluatingthe added predictive ability of new biomarkers regarding classical , low dimensional , settings has been a topic of intense debate in the biostatistical literature in the last years ( see , for example , and references therein ) .getting meaningful summary measures and valid statistical procedures for testing the added predictive value are difficult tasks , even when considering the addition of a single additional biomarker in the classical regression context .in particular , widely used testing procedures for improvement in discrimination based on area under the roc curve ( auc ) differences and net reclassification index ( nri ) have shown unacceptable false positive rates in recent simulation studies .overfitting is a big problem when comparing estimated predictions coming from nested regression models fitted in the same dataset .moreover , the distributional assumptions of the proposed tests seem inappropriate , translating into poor performance of the aforementioned tests even when using independent validation sets . to date, little attention has been given to the evaluation of the added predictive ability in high - dimensional settings , where the aforementioned problems are larger and new ones appear , such as the simultaneous inclusion in an unique prediction model of predictors sets of very different nature .tibshirani and efron have shown that overfitting may dramatically inflate the estimated added predictive ability of omic sources with respect to a low - dimensional set of clinical parameters . to solve this issue, they proposed to first create a univariate ` pre - validated ' omic predictor based on cross - validation techniques and incorporate it as a new covariate to the regression with low - dimensional clinical parameters . in a subsequent publication , hoefling and tibshirani have shown that standard tests in regression models are biased for pre - validated predictors . as a solution , the authors suggest a permutation test which seems to perform well under independence of clinical and omic sets .boulesteix and hothorn have proposed an alternative method for the same setting of enriching clinical models with a high - dimensional set of predictors .in contrast to , they first obtain a clinical prediction based on traditional regression techniques . in a second step ,the clinical predictor is incorporated as an offset term in a boosting algorithm based on the omic source of predictors .previous calibration of the clinical prediction is not addressed in the second step and the same permutation strategy than hoefling and tibshirani is used to derive p - values . in this paper, we propose a two - step procedure for the assessment of additive predictive ability regarding two high - dimensional and correlated sources of omic predictors . 
to the best of our knowledge, no previous work has addressed this problem before .our approach combines double cross - validation sequential prediction based on regularized regression models and a permutation test to formally assess the added predictive value of a second omic set of predictors over a primary omic source .let the observed data be given by , where is the continuous outcome measured in independent individuals and and are two matrices of dimension and , respectively , representing two omic predictor sources with and features .we assume that we are in a high - dimensional setting ( ) and that the main goal is to evaluate the incremental or added value of beyond in order to predict in new observations .our approach is based on comparing the performance of a primary model based only on with an extended model based on and adjusted by the primary fit based on .we propose a two - step procedure based on the replacement of the original ( high - dimensional ) sources of predictors by their corresponding estimated values of based on a single - source - specific prediction model . in the first step ,we build a prediction model for based on and a given model specification .based on the fitted model , the fitted values are estimated . then , for each individual , we take the residual .we consider as new response and construct a second prediction model based on as predictor source : this is equivalent to including as an offset term ( fixed ) in the model based on for the prediction of the initial outcome : several statistical methods are available to derive prediction models of continuous outcomes in high - dimensional settings . in this work ,we focus on regularized linear regression models , where and the estimation of is conducted by solving , where .the penalty parameter regularizes the coefficients , by shrinking large coefficients in order to control the bias - variance trade - off .the pre - fixed parameter determines the type of imposed penalization .we used two widely used penalization types : ( ridge , i.e. , type penalty ) and ( lasso , i.e. , penalty ) . note that other model building strategies for prediction of continuous outcomes could have been used in this framework , such as the elastic net penalization by setting , or boosting methods , among others . the use of a previously estimated quantity ( ) in the calibration of a prediction model based on ( expressions ( 1 ) and ( 2 ) ) requires , in absence of external validation data , the use of resampling techniques to avoid bias in the assessment of the role of and .we use double cross - validation algorithms , consisting of two nested loops . in the inner loop a cross - validated grid - selection is used to determine the optimal prediction rule , i.e. , for model selection , while the outer loop is used to estimate the prediction performance by application of models developed in the inner loop part of the data ( training sets ) to the remaining unused data ( validation sets ) . in this manner , double cross - validation is capable of avoiding the bias in estimates of predictive ability which would result from use of a single - cross - validatory approach only . 
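a minimal sketch of the two - step fit of models ( 1 ) and ( 2 ) is given below , using ridge regression from scikit - learn for both sources ( lasso or elastic net could be substituted by changing the estimator ) . the toy data dimensions loosely mimic the motivating study , and the penalty values are placeholders ; honest , double cross - validated versions of these predictions are constructed in the later sketches .

```python
import numpy as np
from sklearn.linear_model import Ridge

# toy data with dimensions loosely mimicking the motivating study (illustrative only)
rng = np.random.default_rng(0)
n, p1, p2 = 248, 137, 7380
X1 = rng.normal(size=(n, p1))                 # primary omic source (e.g. metabolomics)
X2 = rng.normal(size=(n, p2))                 # secondary omic source (e.g. transcriptomics)
y = X1[:, :5].sum(axis=1) + 0.5 * X2[:, :5].sum(axis=1) + rng.normal(size=n)

# step 1: penalized regression of y on the primary source X1
m1 = Ridge(alpha=1.0).fit(X1, y)              # lasso / elastic net could be used instead
yhat1 = m1.predict(X1)
r1 = y - yhat1                                # residual outcome for the second step

# step 2: penalized regression of the residuals on X2, i.e. model (2);
# this is equivalent to including yhat1 as a fixed offset in a model based on X2
m2 = Ridge(alpha=1.0).fit(X2, r1)
yhat_combined = yhat1 + m2.predict(X2)
print("in-sample R^2 of the combined fit:",
      1 - np.sum((y - yhat_combined) ** 2) / np.sum((y - y.mean()) ** 2))
```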
in our setting ,the outer loop of a ` double ' cross - validatory calculation allows obtaining ` predictive'-deletion residuals which fully account for the inherent uncertainty of model fitting on the primary source ( ) , before assessing the added predictive ability of , given by .the basic structure of the double cross - validation procedure to estimate unbiased versions of and is as follows : step 1 : : obtain double cross - validation predictions of , , based on : + - ; ; randomly split sample in mutually exclusive and exhaustive sub - samples of approximately equal size - ; ; for each , merge subsamples into + - : : randomly split sample in sub - samples - : : for each , merge subsubsamples into + * ; ; fit regression model for a grid of values of shrinkage parameters , to * ; ; evaluate , in the held - out sub - sample by calculating - : : compute overall cross - validation error : , - : : choose and calculate predictions of in the held - out sub - sample , , - ; ; the vector of predictions of , is obtained by concatenating the , vectors , i.e. , step 2 : : repeat the process detailed in step 1 considering the double cross - validated residuals , as outcome and as set of predictors and obtain the double cross - validation predictions .note that this is equivalent to obtaining the double cross - validation predictions of based on considering as offset variable in the fits of model ( 1 ) . in order to evaluate the performance of the sequential procedure introduced in subsection 2.1 ., we propose three measures of predictive accuracy , denoted by , , and , based on sum of squares of the double cross - validated predictions and , obtained following the procedure described in subsection 2.2 .. these summary measures can be regarded as high - dimensional equivalents of calibration measurements for continuous outcomes in low - dimensional settings , and an extension of previously discussed proposals in the cross - validation literature . denote by the prediction sum of squares based on a vector of predictions , obtained according to some arbitrary model , and by the sum of squared differences between two cross - validated vectors of predictions , e.g. , .let be the simplest cross - validated predictor of , based on the sample mean of only . to summarize the first step of the sequential procedure , we use double cross - validation to estimate the predictive ability of by , represents the proportion of the variation of the response that is expected to be explained by in new individuals , re - scaled by the total amount of prediction variation in the response . in the worst case scenario , when ( as predictive as a null model based on the mean of ) and if . since the computation of , for each of the random splits of the sample based on the observations not belonging to , we proceed in an analogous way to compute the average predicted variation of .hence , in order to get an appropriate re - scaling factor , for each subset , we compute , the mean value of the outcome variable calculated without the observations belonging to .assume that , the contribution of the second omic source , , in the prediction of can be summarized by accounts for the predictive capacity of , after removing the part of variation in that can be attributed to the first source of predictors .its computation relies on the squared difference between ( the double cross - validated predictions resulting from the second step of the proposed procedure in subsection 2.2 . 
) and the corresponding residual from the step 1 ( ) based on , re - scaled by the remaining predicted variation on after the first step of the procedure . as a result, can be regarded as the expected ability of to predict the part of , after adjusting for the predictive capacity of and accounting for all model fitting in the first stage of the assessment .following the same arguments used in deriving expression ( 3 ) , for each subset , the computation of the average variation of is based on , i.e. , excluding the observations belonging to .note that in step 1 of the sequential procedure models are fitted , each based on , providing residuals with expected zero mean ( given specification ( 1 ) ) , i.e. , , .hence , and thus finally , we derive a third summarizing measurement of the overall sequential process , , defined as : represents the total predictive capacity of the overall sequential procedure based on and , i.e. , the combined predictive ability of and given by . note that is based on the same squared difference between and as , but the re - scaling factor refers to the total predictive variation of the original response .the three introduced measures jointly summarize the performance of the two omic sources under study and their interplay in order to predict the outcome .in all the cases higher values are indicative of higher predictive ability .the three measurements vary between 0 ( null predictive ability ) and the maximal value of 1 .the interpretation of is straightforward , as it simply captures the predictive capacity of the firstly evaluated omic source .note that the difference between and relies on the denominator .in general , if is informative , the denominator in expression ( 4 ) will be smaller than in expression ( 6 ) .thus , the residual variation after step 1 will be smaller than the total initial variation .the three summary measures are related by the following expression : consequently , we can rewrite as follows : note that in cases in which , we get that , and viceversa .however , and differ when not zero .specifically , from expression ( 8) , we obtain that . in short , may be regarded as the conditional contribution of for the prediction of with respect to what may be predicted using alone . measures the absolute gain in predictive ability from adding to .note that a given source may present a large conditional but a small absolute ( if , for example , presents high predictive ability itself ) .moreover , due to the relation between and the resulting vector of predictions after combining and , , expression ( 8) implies that . this desirable property may not be fulfilled using alternative combination strategies . 
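the following sketch collects the three summary measures in code, assuming press-type ( prediction sum of squares ) definitions consistent with the description above; the exact notation of the original expressions is not reproduced, and the inputs are the double cross-validated prediction vectors from steps 1 and 2 together with the fold-wise training means used as null predictor.

```python
import numpy as np

def press(y, pred):
    # prediction sum of squares between an outcome vector and a prediction vector
    return np.sum((np.asarray(y) - np.asarray(pred)) ** 2)

def summary_measures(y, yhat1, yhat2, ybar_cv):
    # yhat1, yhat2: double cross-validated predictions from steps 1 and 2;
    # ybar_cv: fold-wise training means, used as the simplest null predictor
    r1 = y - yhat1                                                 # deletion residuals after step 1
    q2_first = 1.0 - press(y, yhat1) / press(y, ybar_cv)           # primary source
    q2_added = 1.0 - press(r1, yhat2) / np.sum(r1 ** 2)            # added value of the second source
    q2_total = 1.0 - press(y, yhat1 + yhat2) / press(y, ybar_cv)   # combined predictive ability
    # the three measures satisfy q2_total = q2_first + (1 - q2_first) * q2_added
    return q2_first, q2_added, q2_total
```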
in practice ,our sequential procedure relies on the realistic assumption of positive predictive ability of the first source of predictors , ( one would only be interested in assessing additional or incremental information on top of an informative source itself ) .accordingly , we advise to conduct our sequential procedure using as primary source only if , which is , furthermore , required to derive expression ( 8) .the summary measures may be used to introduce formal tests for assessing the added or augmented predictive value of over to predict .we propose a permutation procedure to test the null hypothesis against the alternative hypothesis .the test is based on permuting the residuals obtained after applying the first step of our two - stage procedure with the data at hand .our goal is to remove the potential association between and while preserving the original association between and .explicitly , we propose the following algorithm : step 1 : : calculate the residuals based on the predictions of based on , obtained in the first step of the procedure presented in section 2.1 . step 2 : : permute the values of , obtaining and generate values of the response under the null hypothesis : .step 3 : : repeat the two - stage procedure from section 2.1 . for predicting and obtain the corresponding .the procedure is repeated times and the resulting permutation p - values are obtained as follows : where is the number of permutations , and is the actual observed value with the data at hand .note that in * step 2 * , we generate a ` null ' version of the original response and then we repeat the overall two - stage procedure , which implies that the ` null ' residuals used in * step 3 * are not fixed and are , in general , different from .this is necessary in order to capture all the variability of the two - stage procedure and to correctly generate the null hypothesis of interest .moreover , the cross - validation nature of the procedure protects against systematic bias of the residuals obtained in step 3 based on ( see chapter 7 of * ? ? ?. given the aforementioned relations between , , and specified by expression ( 6 ) , note that is equivalent to .this result immediately follows from expression ( 6 ) , given that if and only if ( assuming ) .hence , both tests are equivalent provided that the distribution under the null hypothesis is generated by the aforementioned permutation procedure , i.e. , the p - values , resulting from using as test statistic or are approximately the same . 
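a minimal sketch of this permutation scheme is given below; the helper ` two_stage_q2 ` stands for any routine returning the added-value statistic and the step-1 predictions for given data, and both its name and the p-value convention used are assumptions for the illustration.

```python
import numpy as np

def permutation_pvalue(X1, X2, y, two_stage_q2, n_perm=500, seed=None):
    rng = np.random.default_rng(seed)
    q2_obs, yhat1 = two_stage_q2(X1, X2, y)        # observed statistic and step-1 predictions
    r = y - yhat1                                  # step-1 residuals to be permuted
    exceed = 0
    for _ in range(n_perm):
        y_null = yhat1 + rng.permutation(r)        # response under the null: X2 adds nothing
        q2_null, _ = two_stage_q2(X1, X2, y_null)  # re-run the whole two-stage procedure
        exceed += q2_null >= q2_obs
    return (exceed + 1) / (n_perm + 1)             # one common permutation p-value convention
```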
due to the reliance of our method on resampling techniques, computational cost is a potential limitation of our approach. however, our method is easy to split into independent realizations of the same computational procedure. hence, we can use parallel computing to speed up the procedure.

we simulate two omic predictor sources and a vector containing the continuous outcome. we use the matrix singular value decomposition ( svd ) of each of the two omic sources to generate common ` latent ' factors associated with the sources and the outcome. common eigenvectors in the svd of the two sources introduce correlation among the omic sources. we consider different patterns in terms of the conditional association between the outcome and the second source ( see figure 1 ). the details of the data generation procedure are as follows :

[ figure 1 : path diagrams of the simulated data-generating mechanisms, showing how the latent factors connect the two omic sources and the outcome under the ` null ' and ` alternative ' scenarios. ]

* step 1 * generate a matrix of i.i.d. latent factors shared by the two sources .
* step 2 * define the correlation matrices of the two sources according to a predefined covariance structure of interest . following the recent literature on pathway and network analysis of omics data , we generated them according to a hub observation model ( see figure 2 ) .
* step 3 * draw the two independent sources and obtain the singular value decomposition of each of them .
* step 4 * generate the final correlated sources by manipulation of the left eigenvector matrices from the two decompositions . specifically , for a certain number of predefined columns , the original ( independent ) submatrices are replaced by the common independent latent factors generated in * step 1 * . in this manner , correlation between the two sources is induced , while the within-omic source correlation structures are preserved .
* step 5 * simulate the outcome as a linear combination of the two sources plus noise , where the regression coefficient vectors have lengths equal to the numbers of features of the two sources . using the singular value decomposition , this association can be rewritten in terms of the orthogonal directions given by the left eigenvectors ; consequently , we first generate the coefficients in the eigenvector space and then transform them to the predictor space .

[ figure 2 : within-source correlation structures . left : correlation matrix of the first source , 4 groups of 250 features each . right : correlation matrix of the second source , 2 groups of 50 features each . ]

* simulation 1 ( ` null ' scenarios ) : * the second omic source is non-informative
( i.e. , its regression coefficients are null ) , but it is strongly correlated with the first source , by imposing common first columns of the left eigenvector matrices ( the correlation between omic sources is driven through the maximal variance subspace ) . we considered different assumptions regarding the regression dependence of the outcome on the first source , which has an impact on the ability to calibrate prediction rules based on that source . we consider two situations in which the association with the outcome is unifactorial , in the sense that only one latent factor ( one column of the left eigenvector matrix ) is associated with the outcome , and two multi-factorial situations . one of our objectives is to illustrate how changing the complexity of the calibration of a prediction rule based on the first source ( by formulating the problem through regression on either larger or smaller variance latent factors ) may affect the results . we consider the following ` null ' scenarios :

scenario 1a : the outcome is associated with the high-variance subspace of the first source , corresponding to its largest eigenvalue .
scenario 1b : the association with the outcome relies on a low-variance subspace of the first source . hence , we expect lower estimated predictive ability of the first source , compared to scenario 1a .
scenario 1c : in this setting we consider a bifactorial regression , as the association with the outcome is a combination of the effect of the first two eigenvectors of the first source .
scenario 1d : in this setting we consider a multifactorial regression , as the association with the outcome is a combination of the effect of the first four eigenvectors of the first source .

* simulation 2 ( ` alternative ' scenarios ) : * the outcome is associated with the second source through latent factors not shared with the first source . the following ` alternative ' scenarios are investigated :

scenario 2a : the eigenvector related to the largest eigenvalue of each source is associated with the outcome , and the association between the two sources is generated by sharing the second eigenvectors .
scenario 2b : the association between the outcome and the first source is as in scenario 1a ( driven by its high-variance subspace ) , the outcome is additionally and independently associated with the second source , and the association between the two sources is again generated through shared eigenvectors .
scenario 2c : as scenario 2b , but the association between the outcome and the first source is as in scenario 1b ( driven by a low-variance subspace ) .

figure 3 shows a monte carlo approximation , based on a sample of observations , of the regression coefficients in the studied simulated scenarios . from panels 3(a ) to 3(d ) , we can observe that the different simulation settings differ in the level of imposed sparsity in the association between the outcome and the first source . on the one hand , the scenarios presented in panels 3(a ) ( scenarios 1a , 2a and 2b ) and 3(b ) ( scenario 1b ) are relatively sparse , with most of the simulated coefficients close to zero . on the other hand , the coefficients of scenario 1c ( represented in panel 3(c ) ) and especially of scenario 1d ( represented in panel 3(d ) ) are less sparse , based on a large number of non-null regression coefficients . with regard to the second source , panel 3(e ) ( scenario 2a ) represents a sparser situation than panel 3(f ) ( scenarios 2b and 2c ) .

[ figure 3 : regression coefficients used in the simulated scenarios . ( a)-(d ) : coefficients linking the first source to the outcome , for each of its predictors ; ( a ) corresponds to scenarios 1a , 2a and 2b , ( b ) to scenarios 1b and 2c , and ( c)-(d ) to scenarios 1c and 1d . ( e)-(f ) : independent association between the outcome and the second source in the alternative scenarios ( null in scenarios 1a-1d ) ; ( e ) corresponds to scenario 2a and ( f ) to scenarios 2b and 2c . ]

in our basic setting , we considered observations , features in the first source and features in the second source . for each scenario , we provide the mean values and standard deviations of the three summary measures , based on 5-fold double cross-validation , jointly with the rejection proportions of the permutation test along the monte carlo trials . we evaluated the permutation test introduced in subsection 2.3 . using permutations . we complemented our empirical evaluations of the proposed sequential double cross-validation procedure by extending our basic simulation setting in two directions : we checked the impact of modifying the sample size and the complexity of the problem by varying the number of variables considered in the first stage . additionally , we compared the performance of our procedure based on double cross-validation with two alternative strategies . on the one hand , we provide results based on a two-stage procedure using a single cross-validation loop ( cross-validation is used for model choice , but predictions , and therefore the residuals used as outcome in the second stage , are directly computed on the complete sample ) . on the other hand , we check the impact on the results of over-penalization : specifically , instead of taking the penalty parameter as defined in the inner loop of the double cross-validation procedure presented in subsection 2.1 . , we choose a larger value . both are usual strategies in penalized regression in single-omic prediction frameworks , so it is of practical interest to quantify their impact from the added predictive value point of view . the results of these alternative strategies are provided as supplemental material but discussed in the main text . the results for the sequential double cross-validation procedure are summarized in tables 1 and 2 . the top part of each table contains the results concerning the ` null ' scenarios ( no added value of the second source ) , while the bottom part shows the results of the ` alternative ' scenarios ( added value of the second source ) . table 1 contains results based on ridge regression ( in expression ( 1 ) ) , while table 2 summarizes the results for the lasso penalty type .
forthe four ` null ' scenarios 1a-1d , given that is not independently associated to , we expect and rejection proportions of about 0.05 .the results of the sequential double cross - validation procedure based on ridge regression are satisfactory in this regard , with rejection proportions close to the nominal level in all the studied null scenarios and for different sample sizes ( , ) and levels of complexity of the first step ( , ) .the top part of table 1 shows that the estimated for scenarios 1a , 1c and 1d are large and very similar ( ) .as it was expected , the estimated predictive ability of is lower in scenario 1b and presents a larger variability , since the association between and relies on a small variance subspace . in general , for all 1a-1d scenarios the estimated is close to zero .however , we observe that the sample size influences the estimated and hence , due to the correlation between and , also affects the estimation of .we observe systematically lower values of for than for in all the studied ` null ' scenarios .this feature translates in systematically larger values of for than for .however , the permutation test is able to account for this issue and the level of the test is respected independently of the sample size .analogously , increasing the number of features of the first source ( from to ) while keeping fixed the number of features of ( ) also affects the estimation of and . in this case , the values of are larger and hence , the values of tend to be closer to zero .worth noting is that the level of the test is also well respected in this case .the bottom part of table 1 shows the results for the alternative scenarios . as desirable, the power increases with sample size for all the three studied alternative scenarios .as it was the case for the ` null ' scenarios , increasing the sample size tends to lead to better predictive ability of .this result matches intuition , since larger sample sizes provide more information for model building , and hence , under correct model specification , the resulting predicting models are expected to behave better in new data .an exception to this is scenario 2c , where our double cross - validation procedure seems to overfit with .this is due to the fact that scenario 2c , unlike scenarios 2a and 2b , is characterized as a ` difficult ' prediction problem when considering ( association with is driven by a low - variance subspace of ) . in line with this , the power of the test is different for the three different studied scenarios . the greatest power is reached in scenario 2a , in which the independent association between and is driven through the subspace of maximum variation and the first step of the procedure relies on a relatively ` easy ' prediction problem . even if scenarios 2b and 2c are based on the same independent association between and , the impact of the first source on the power of the test is large .scenario 2b , in which for reaches a power of 71 % , while the rejection rate reduces to 19% in scenario 2c , corresponding to a more ` difficult ' prediction problem in the first stage , reflected in a low and unstable ( for and for ) .[ table1 ] table 2 shows the results for double cross - validation procedure based on the lasso specification ( ) . 
with regard to the ` null ' scenarios , we observe a good performance for scenarios 1a and 1b , with rejection proportions close to the nominal level . interestingly , the rejection proportion of the permutation test increases with the sample size and with the number of features in the first source in scenarios 1c and 1d , which indicates a poor performance of the procedure based on lasso regression in these settings . notably , the poor performance of the lasso specification for scenario 1d does not improve by increasing the sample size ( 7% of rejections , 9% of rejections and 36% of rejections for the three increasing sample sizes ) . the reason behind this difference with the ridge-based results is the mis-specification of the lasso with respect to the underlying data-generating mechanism . lasso regression assumes that the true model is sparse , while , as mentioned , scenario 1c and especially scenario 1d correspond to non-sparse solutions . these findings illustrate how model mis-specification may result in an improvement of predictions by adding a second source of predictors , not because of an independent association with the outcome , but just because of the correlation with the first source of predictors . with respect to the alternative scenarios ( bottom part of table 2 ) , the conclusions are similar to those observed for ridge regression . the power increases with the sample size , and the rejection proportions differ across the three scenarios . however , we observe that ridge outperforms lasso in terms of power , especially for scenarios 2a and 2b . tables s3 and s4 summarize the results for the two aforementioned alternative strategies in the basic setting and two sample sizes : the first corresponds to the strategy in which the sequential double cross-validation is over-shrunk ( by taking a larger penalty than the one selected in the inner cross-validation loop ) and the second represents the sequential procedure based on one single cross-validation loop ( standard residuals as opposed to deletion-based residuals ) . in general , these two alternative strategies provide different estimates for the predictive ability of the two studied sources of predictors . taking the double cross-validation approach as the gold standard , we observe that the over-shrinkage of the predictions in the first step provokes an under-estimation of the predictive ability of the first source , while the single cross-validation strategy provides an over-estimation , especially when the association between the outcome and the first source of predictors is driven through a low-variance space . for example , in scenario 1b , the predictive ability of the first source estimated with a single cross-validation approach is notably larger than that estimated by the double cross-validation approach . moreover , we observe that the effect of re-using the data is larger for small sample sizes , with systematically larger estimates for the smaller sample size . however , under the null hypothesis , the bias introduced in the first step by both alternatives does not translate into an inflated type i error . the method based on double cross-validation but under-fitting by over-penalization controls the false discovery rate under the null hypothesis in a similar fashion to the procedure introduced in subsection 2.1 . with regard to the method based on single cross-validation , its behavior is slightly conservative under the null hypothesis .
for the alternative scenarios , as , and , are underestimated by ` , ' , while ` , ' overfits both .even if power increases with sample size , both methods are systematically less powerful than our proposal, , , which makes it the preferable method from both an estimation and testing point of view .to illustrate the performance of the proposed sequential double cross - validation procedure , and to compare it to the alternative strategies discussed in section 3 , we analyzed data from the dilgom study .we are interested in the ability of serum nmr metabolites and microarray gene expression levels in blood to predict body mass index ( bmi ) at 7 years of follow - up .the metabolomic predictor data consists of quantitative information on 137 metabolic measures , mainly composed of measures on different lipid subclasses , but also amino acids , and creatine .the gene expression profiles were derived from illumina 610-quad snparrays ( illumina inc . ,san diego , ca , usa ) .initially , 35,419 expression probes were available after quality filtering .in addition to the pre - processing steps described by , we conducted a prior filtering approach and removed from our analyses probes with extremely low variation ( see for details on the conducted pre - processing ) . as a result , we retained measures from 7380 beads for our analyses .the analyzed sample contained individuals for which both types of omic measurements and the bmi after 7 years of follow - up ( mean=26 , sd=5 ) were available .we carried out two distinct analyses using the added predictive value assessment approach described in this paper .as a first analysis , we consider the metabolic profile as primary omic source for the prediction of the log - transformed bmi and we evaluated the added predictive value of blood transcriptomics profiles .this approach is the most relevant in practice , because of both biological and economical reasons .on the one hand , metabolome ( which contains , among other , cholesterol measures ) is presumably more predictive of bmi than gene expression in blood . on the other hand ,nmr technology is typically more affordable than available technologies for transcriptomic profiling , so favoring the nmr source seems a sensible approach in our setting .nevertheless , to illustrate the properties of our method , we also consider a second analysis in which we reversed the roles of the omic sources , first fitting a model based on gene expression and then evaluating the added predictive value of the metabolome . as in the simulation study , we considered ridge and lasso regression as prediction models , using the same alternative strategies to the sequential double cross - validation procedure presented in section 3 ( ` , ' ) : ` , ' , and ` , ' .the main findings are summarized in table 3 . 
to check stability of the results , we artificially reduced the sample size of the available dilgom data and checked the impact on the estimation of the added predictive ability with our sequential double cross-validation approach and its corresponding p-value . we also compared our method with a naive approach , consisting of stacking both metabolites and transcriptomics , and hence ignoring their different origin . the results of these two additional analyses are given as supplemental materials .

[ table 3 : application to dilgom data . alternative cross-validation strategies . p-values based on 1000 permutations . ]

[ supplemental tables s3 and s4 : simulation results based on two modifications of the two-stage procedure presented in section 2 : the first relies on single cross-validation ( cross-validation is used for model choice , but predictions , and therefore the residuals used as outcome in the second stage , are directly computed on the complete sample ) ; the second relies on over-penalization , choosing a larger value of the penalty parameter than the one selected in the inner loop of the double cross-validation procedure presented in subsection 2.1 . simulations for these two approaches are based on the same specifications detailed in subsection 3.1 . ]

| enriching existing predictive models with new biomolecular markers is an important task in the new multi-omic era . clinical studies increasingly include new sets of omic measurements which may prove their added value in terms of predictive performance . we introduce a two-step approach for the assessment of the added predictive ability of omic predictors , based on sequential double cross-validation and regularized regression models . we propose several performance indices to summarize the two-stage prediction procedure and a permutation test to formally assess the added predictive value of a second omic set of predictors over a primary omic source . the performance of the test is investigated through simulations . we illustrate the new method through the systematic assessment and comparison of the performance of transcriptomics and metabolomics sources in the prediction of body mass index ( bmi ) using longitudinal data from the dietary , lifestyle , and genetic determinants of obesity and metabolic syndrome ( dilgom ) study , a population-based cohort from finland . * sequential double cross-validation for assessment of added predictive ability in high-dimensional omic applications * mar rodríguez-girondo , perttu salo , tomasz burzykowski , markus perola , jeanine houwing-duistermaat and bart mertens . department of medical statistics and bioinformatics , leiden university medical center , leiden , the netherlands . national institute for health and welfare , helsinki , finland . institute for biostatistics and statistical bioinformatics ( i-biostat ) , hasselt university , hasselt , belgium . department of statistics , leeds university , leeds , united kingdom . * keywords : * _ added predictive ability ; double cross-validation ; regularized regression ; multiple omics sets _
the use of patches in image processing can be seen as an instance of the divide and conquer " principle : since it is admittedly very difficult to formulate a global prior / model for images , patch - based approaches use priors / models for patches ( rather than whole images ) , the combination of which yields the desired image prior / model . to keep the discussion and formulation at their essential and focus on the image modelling aspects, we will concentrate on image denoising , arguably the quintessential image processing problem .nevertheless , much of what will be presented below can be easily extended ( at least in principle ) to more general inverse problems .there are basically two approaches to patch - based image denoising . in earlier methods , , , patches are extracted from the noisy image , then processed / denoised independently ( or maybe even collaboratively , as in bm3d ) , and finally returned to their original locations .since the patches overlap ( to avoid blocking artifacts ) , there are several estimates of each pixel , which are combined by some form of averaging ( unweighted or weighted ) .this approach is also used in the nonlocal bayesian method , and in methods based on gaussian mixtures . for a comprehensive review of these and related methods ,see .arguably , a conceptual flaw of these methods is that they obtain patch estimates without explicitly taking into account that these will subsequently be combined additively . as a consequence , although some of these methods achieve state - of - the - art results , they do not explicitly provide a global image prior / model .a more recent class of approaches does build global image models that are based on a function computed from image patches , but does not treat them as independent by explicitly taking into account that they are overlapping patches of the same image ; this approach was initiated with the _ expected patch log - likelihood _ ( epll ) , and is adopted by most of the recent work , .these methods do not have the conceptual flaw pointed out in the previous paragraph and provide a coherent global image model .the analysis vs synthesis dichotomy in global image models / priors ( _ e.g. _ , based on wavelet frames , or total variation ) has been first formalized in , and further studied in ; more recently , it has been ported to patch - wise models .to the best of our knowledge , this dichotomy has not been pointed out before concerning the way in which patch - level models / priors are used to build a global image models ; that is precisely the central contribution of this paper . in the synthesis vs analysis dichotomy, the epll - type class of patch - based models can be seen as an analysis method ( as explained below in detail ) .this paper shows that there exists the synthesis counterpart of epll ; in other words , that the synthesis vs analysis dichotomy is also present in the way the whole image and the patches are related .we also show that the two formulations are not , in general , equivalent .the remaining sections of this paper are organized as follows . after reviewing the classical analysis / synthesis dichotomy in section [ sec : a_vs_s ] , we shown in section [ sec : patch_a ] that the classical patch - based methods follow an analysis formulation .section [ sec : patch_s ] then introduces a synthesis patch - based formulation , and its relationship with the analysis counterpart is established in section [ sec : rel ] . 
in section [ sec : admm ] , we present an admm algorithm to efficiently perform image denoising under the proposed synthesis formulation .finally , section [ sec : conclusion ] concludes the paper by referring to future work directions .[ sec : a_vs_s ] before addressing patch - based models , we briefly review the analysis and synthesis global formulations of image denoising , where the goal is to estimate an unknown image ( is the total number of pixels in , which is a vectorized version of the corresponding image ) from a noisy version thereof where is a sample of a white gaussian noise field of zero mean and ( known ) variance , that is , . the classical approach to estimate from is to adopt ( or learn ) a prior for the unknown and seek a maximizer of the posterior density ( a _ maximum a posteriori_map estimate ) where , up to an irrelevant constant .the analysis and synthesis formulations build priors for as follows .analysis : : : in this formulation , the prior takes the form where is a constant and a linear ( analysis ) operator . in this case , the map estimate takes the form where map - a stands for _ map - analysis_. synthesis : : : here , the starting point is to assume that is linearly synthesized / represented according to , where is a vector of coefficients , and the prior is formulated on rather than directly on ; once an estimate is obtained , the corresponding estimate of is simply . in summary , and is a negative log - prior on .one of the main distinguishing features of analysis and synthesis formulations is that in the former the object of the estimation procedure is the image itself , whereas in the latter , one estimates a representation from which the image estimate is synthesized .a central tool in most patch - based approaches is a collection of operators that extracts patches from a given image of size ; each can be seen as a binary matrix with size , where is the total number of pixels in each patch ( assumed square , of size ) .the standard way of formulating a patch - based prior is by writing where is a function expressing the patch - wise prior distribution and is a normalizing constant .function may itself be a _ probability density function _( pdf ) , _e.g. _ , a gmm , in which case this prior is an instance of a so - called _ product of experts _ ( poe ) .however , does not need to be a pdf , as long as it takes non - negative values ; in fact , can also be seen as a _ factor graph _ model , where each factor corresponds to a patch and the all factors have the same function .moreover , this prior is also equivalent to the formulation known as epll ( _ expected patch log - likleihood _ ) , although epll was not originally interpreted as a prior . given a noisy image , a map estimate of is given by to tackle the large - scale optimization problem in , the so - called _ half quadratic splitting _strategy replaces it with where , which obviously becomes equivalent to as .the optimization problem in is tackled by alternating between minimizing with respect to and , while slowly increasing .other strategies for setting have also been proposed . an obvious alternative ( as recently mentioned in )is to reformulate as a constrained problem and tackle it with admm ( _ alternating direction method of multipliers _ ) .of course , convergence of admm for this problem can only be guaranteed if the negative log factors are convex ; this is not the case if is a gmm , but it is true if is an norm or the -th power thereof , _e.g. _ , , with . 
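to make the half-quadratic alternation concrete, the following numpy sketch ( a schematic illustration, not a reference implementation of epll ) alternates a patch-wise denoising step with the quadratic image update; ` patch_prox ` stands for the map / proximity-type denoiser associated with whichever patch prior is chosen and is an assumed ingredient here, as are the patch size, the schedule for the penalty weight, and the function names.

```python
import numpy as np

def extract_patches(x, p):
    """all overlapping p x p patches of image x, one per row."""
    H, W = x.shape
    return np.array([x[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def put_patches(P, shape, p, average=False):
    """sum (or average) overlapping patches back onto an image of the given shape."""
    H, W = shape
    out = np.zeros(shape)
    cnt = np.zeros(shape)
    k = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            out[i:i + p, j:j + p] += P[k].reshape(p, p)
            cnt[i:i + p, j:j + p] += 1.0
            k += 1
    return out / cnt if average else (out, cnt)

def hqs_denoise(y, sigma2, patch_prox, p=8, beta0=1.0, n_iter=6):
    """half-quadratic splitting for the patch-analysis (epll-type) objective."""
    x = y.copy()
    beta = beta0
    for _ in range(n_iter):
        Z = patch_prox(extract_patches(x, p), 1.0 / beta)   # patch-wise denoising step
        acc, cnt = put_patches(Z, y.shape, p)                # sum of patch back-projections and overlap counts
        x = (y / sigma2 + beta * acc) / (1.0 / sigma2 + beta * cnt)  # quadratic image update
        beta *= 2.0                                          # slowly increase the coupling
    return x
```

the image update follows from setting the gradient of the quadratic objective to zero; the matrix that would have to be inverted is diagonal, with entries given by the per-pixel patch-overlap counts, which is why it reduces to an element-wise division.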
examining ( with ) or reveals that this formulation seeks a consensus among the patches , in the sense that the several replicates of each pixels that exist in different patches are forced to agree on a common value for that pixel .in other words , the clean patches are not modelled as additively generating a clean image , and are merely used to write a joint prior that factorizes across overlapping patches .the estimation criterion in clearly falls in the analysis - type category , since it considers the image itself as the object to be estimated and as the argument of the prior .however , as a generative model for clean images , its meaning is not very clear , since it is not a trivial task to obtain samples from this distribution . to obtain a more compact notation ,let be the operator ( an matrix ) that extracts the set of patches , _i.e. _ , the prior may then be written as where denotes a density defined in according to with this notation , the map denoising problem can be written as which clearly reveals the analysis nature of this formulation ( see ) .we stress that the analysis / synthesis dichotomy addressed in this paper concerns the way in which an image relates to its patches , not the way the patches themselves are modelled .in fact , the patch - analysis formulation just reviewed is compatible with a synthesis patch model , _e.g. _ , one that models each patch as linear combination of elements of some dictionary , with coefficients equipped with some prior ( for example , a sparsity - inducing prior , as in ) .using the half quadratic splitting approach , the formulation becomes where .naturally , the patch - analysis formulation is also compatible with an analysis patch model , by using a patch prior of the form , _i.e. _ , , for some function and matrix .to identify the analysis / synthesis dichotomy in terms of how the patches are related to the underlying image , not in terms of how the patches are modelled , we refer to the formulation reviewed in this section as _ patch - analysis _ and to its synthesis counterpart ( to be introduced in the next section ) as _ patch - synthesis_.we now present the patch - synthesis formulation , which can be summarized as follows : the clean image is generated by additively combining a collection of patches ; the patches themselves follow some probabilistic model , but are a priori mutually independent .consider a collection of patches and let image be synthesized from these patches by combining them additively according to where matrices are such that they average the values in the several patches that contribute to a given pixel of .a simple 1d example will help clarify the structure of the matrices .consider that ^t ] , the synthesis expression in can be written compactly as with the patches modelled as independent and identically distributed samples of some patch - wise pdf , their joint log - prior is notice that obtaining samples from is as simple as obtaining samples from itself . 
consequently, generating image samples under this synthesis model simply corresponds to generating samples from and then multiplying them by .this is in contrast with the patch - analysis prior , where , even if is a valid pdf , it is not trivial to obtain samples from .the resulting map denoising criterion can now be written as as the patch - analysis model , the patch - synthesis formulation that we have just presented is compatible with both analysis and synthesis priors for the patches , and of course with any valid pdf for vectors in .an analysis formulation simply amounts to choosing a patch prior of the form , _i.e. _ , , for some function and matrix . in a synthesis formulation, each patch is synthesized using some dictionary as , and the follow some prior ; stacking all the patch coefficients in vector , we can write ( where is a block - diagonal matrix with replicas of ) , thus the map denoising criterion becomes where aside for now the choice of the patch priors , let us focus on the relationship between formulations and .the key observation underlying the relationship between these two formulations is that ( assuming the patch structure in both formulations is the same ) where denotes the identity matrix , but in general in other words , is a left pseudo - inverse of . to prove , simply notice that if a collection of patches is extracted from some image and then these patches are used to synthesize an image by averaging the overlapping pixels , an identical image is obtained .of course , the converse is not true , in general : if an image is synthesized by averaging the overlapping pixels of a collection of patches , and then patches are extracted from the synthesized image , there is no guarantee that these patches are equal to the original ones , thus proving . in the trivial and uninteresting cases where the patches are singletons , or non - overlapping, we would have . as shown in ,given the analysis formulation , an equivalent synthesis formulation is where the constraint enforces to be in the subspace spanned by the columns of .notice that this constraint corresponds to having a collection of patches extracted from some image , _i.e. _ , it forces the patches to agree on the value of each shared pixel . since does not enforce this constraint , it is not , in general , equivalent to the patch - analysis formulation .in this section , we derive an instance of admm to deal with . the first step is to rewrite it as a constrained problem , where is the negative log - prior , or regularizer .admm for this problem takes the form the update equation for is a denoising step .due to the separability of the squared norm and of ( see ) , this can be separately solved with respect to each patch : for , which is simply the proximity operator of , computed at . in a bayesian viewpoint ,corresponds to obtaining the map estimate of from observations , assuming additive white gaussian noise of variance and a negative log - prior .if the are convex , this map estimate is unique , due to the strict convexity of the quadratic term in .computing corresponds to solving an unconstrained quadratic problem , the solution being the bottleneck in this update equation seems to be the matrix inversion , since is a huge matrix .however , this inversion can be solved very efficiently by resorting to the sherman - morrison - woodbury matrix inversion formula .in fact , where matrix is diagonal , thus its inversion is trivial . 
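reusing the ` extract_patches ` and ` put_patches ` helpers sketched earlier, the snippet below illustrates numerically the two facts used in this section: averaging the patches extracted from an image recovers the image, whereas extracting patches from an image synthesized from arbitrary patches does not recover them; it also builds the diagonal entries discussed here as the reciprocals of the patch-overlap counts. sizes and the random example are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 16))
p = 4

P = extract_patches(x, p)                            # extract all overlapping patches of x
x_back = put_patches(P, x.shape, p, average=True)    # synthesize an image by averaging them
print(np.allclose(x_back, x))                        # True: synthesis after extraction is the identity

Q = rng.normal(size=P.shape)                         # arbitrary, mutually inconsistent patches
z = put_patches(Q, x.shape, p, average=True)         # image synthesized from them
print(np.allclose(extract_patches(z, p), Q))         # generally False: extraction after synthesis is not

# per the discussion in this section, the diagonal matrix arising in the x-update
# has entries equal to the inverse of the number of patches covering each pixel
_, cnt = put_patches(np.ones_like(P), x.shape, p)
diag_entries = 1.0 / cnt
```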
to prove that is diagonal , recall that , thus the element of matrix is the inner product between the -th and the -th rows of .since each pixel in the -th patch contributes to one and only one pixel in the synthesized image , the rows of have disjoint support , thus , that is , has no non - zero elements outside of its main diagonal , thus is a diagonal matrix . finally , since is a sum of diagonal matrices , it is a diagonal matrix .it is also easy to obtain explicitly the elements of the diagonal of .the diagonal elements are given by if the -th patch contributes to pixel , the sum in contains exactly one non - zero term , thus it is equal to the square of the weight with which the -th patch contributes to the synthesis of pixel . if the -th patch does not contributes to pixel , then .finally , since the weight with which each patch element contributes to each pixel equals the inverse of the number of patches that contributes to that pixel , is equal to the inverse of the number of patches that contribute to pixel . referring to the example in ,we have simply , because each pixel is synthesized from two patches .finally , letting , and denoting , we can write the update equation as where denotes element - wise product between two vectors . the leading cost of this update is that of the matrix - vector products involving and , which is ; all the other operations in have lower computational cost .in this paper , we have revisited patch - based image priors under the light of the synthesis vs analysis dichotomy . after showing that the classical patch - based image models ( namely the epll ) corresponds to an analysis formulation, we have proposed a patch - synthesis formulation , and analyzed its relationship with the analysis formulation , showing that they are , in general , not equivalent .finally , we have shown how to address image denoising under the proposed formulation , via an admm algorithm .we stress again that the purpose of this paper is not to introduce a new particular image prior , but a general patch - based synthesis formulation / framework , which ( to the best of our knowledge ) was missing from the literature on patch - based image processing , and which can be instantiated with many different patch models / priors . for this reason ,we have abstained from presenting experimental results ; these would critically depend on the choice and estimation of a particular patch model , which is not the focus of this paper .s. boyd1 , n. parikh , e. chu , b. peleato , j. eckstein , distributed optimization and statistical learning via the alternating direction method of multipliers " , _ foundations and trends in machine learning _ , vol . 3 , pp .1-122 , 2011 . | in global models / priors ( for example , using wavelet frames ) , there is a well known analysis vs synthesis dichotomy in the way signal / image priors are formulated . in patch - based image models / priors , this dichotomy is also present in the choice of how each patch is modeled . this paper shows that there is another analysis vs synthesis dichotomy , in terms of how the whole image is related to the patches , and that all existing patch - based formulations that provide a global image prior belong to the analysis category . we then propose a synthesis formulation , where the image is explicitly modeled as being synthesized by additively combining a collection of independent patches . 
we formally establish that these analysis and synthesis formulations are not equivalent in general and that both formulations are compatible with analysis and synthesis formulations at the patch level . finally , we present an instance of the _ alternating direction method of multipliers _ ( admm ) that can be used to perform image denoising under the proposed synthesis formulation , showing its computational feasibility . rather than showing the superiority of the synthesis or analysis formulations , the contributions of this paper is to establish the existence of both alternatives , thus closing the corresponding gap in the field of patch - based image processing . |
neutrino energy transfer to the matter adjacent to the nascent neutron star is supposed to trigger the explosion of a massive star ( ) as a type ii supernova .since the energy released in neutrinos by the collapsed stellar iron core is more than 100 times larger than the kinetic energy of the explosion , only a small fraction of the neutrino energy is sufficient to expel the mantle and envelope of the progenitor star .numerical simulations have demonstrated the viability of this neutrino - driven explosion mechanism ( wilson ; bethe & wilson ; wilson et al . ) but the explosions turned out to be sensitive to the size of the neutrino luminosities and the neutrino spectra ( janka ; janka & mller ; burrows & goshy ) both of which determine the power of the neutrino energy transfer to the matter outside the average neutrinosphere .the rate of energy deposition per nucleon via the dominant processes of electron neutrino absorption on neutrons and electron antineutrino absorption on protons is given by : here and are the number fractions of free neutrons and protons , respectively ; the normalization with the baryon density indicates that the rate per baryon is calculated in eq .( [ eq-1 ] ) . denotes the luminosity of either or in units of and is the radial position in .the average of the squared neutrino energy , , is measured in units of and enters through the energy dependence of the neutrino and antineutrino absorption cross sections . is the angular dilution factor of the neutrino radiation field ( the `` flux factor '' , which is equal to the mean value of the cosine of the angle of neutrino propagation relative to the radial direction ) which varies between values much less than unity deep inside the protoneutron star atmosphere , about 0.25 around the neutrinosphere , and 1 for radially streaming neutrinos very far out .the factor determines the local neutrino energy density according to the relation and thus enters the heating rate of eq .( [ eq-1 ] ) . only far away from the neutrinoemitting star , , and dilutes like .although it was found in two - dimensional simulations that convective instabilities in the neutrino - heating region can help the explosion ( herant et al . ; janka & mller , ; burrows et al . ; miller et al . ; shimizu et al . ) by the exchange of hot gas from the heating layer with cold gas from the postshock region , the strength of this convective overturn and its importance for the explosion is still a matter of debate ( janka & mller , mezzacappa et al . 
, lichtenstadt et al .in addition , it turns out that the development of an explosion remains sensitive to the neutrino luminosities and the mean spectral energies even if convective overturn lowers the required threshold values .this is the case because convective instabilities can develop sufficiently quickly only when the heating is fast and an unstable stratification builds up more quickly than the heated matter is advected from the postshock region through the gain radius ( which is the radius separating neutrino cooling inside from neutrino heating outside ) down onto the neutron star surface ( janka & mller , janka & keil ) .`` robust '' neutrino - driven explosions might therefore require larger accretion luminosities ( to be precise : a larger value of the product in eq .( [ eq-1 ] ) ) during the early post - bounce phase , or might call for enhanced neutrino emission from the core .the latter could be caused , for example , by convective neutrino transport within the nascent neutron star ( burrows ; mayle & wilson ; wilson & mayle , ; keil et al . ) or , alternatively , by a suppression of the neutrino opacities at nuclear densities through nucleon correlations ( sawyer ; horowitz & wehrberger ; raffelt & seckel ; raffelt et al . ; keil et al . ; janka et al . ; burrows & sawyer , ; reddy et al . ) , nucleon recoil and blocking ( schinder ) and/or nuclear interaction effects in the neutrino - nucleon interactions ( prakash et al . ; reddy et al . , ) , all of which have to date not been taken into account fully self - consistently in supernova simulations .the diffusive propagation of neutrinos out from the very opaque inner core is determined by the value of the diffusion constant and thus sensitive to these effects . most of the current numerical treatments of neutrino transport , however , are deficient not only concerning their description of the extremely complex neutrino interactions in the dense nuclear plasma but also concerning their handling of the transition from diffusion to free streaming .while the core flux is fixed in the diffusive regime , the accretion luminosity as well as the spectra of the emitted neutrinos depend on the transport in the semitransparent layers around the sphere of last scattering .since neutrino - matter interactions are strongly dependent on the neutrino energy , neutrinos with different energies interact with largely different rates and decouple in layers with different densities and temperatures .the spectral shape of the emergent neutrino flux is therefore different from the thermal spectrum at any particular point in the atmosphere .even more , through the factor in the denominator of eq .( [ eq-1 ] ) the energy deposition rate depends on the angular distribution of the neutrinos in the heating region .a quantitatively reliable description of these aspects requires the use of sophisticated transport algorithms which solve the boltzmann equation instead of approximate methods like flux - limited diffusion techniques ( janka , ; mezzacappa & bruenn ,, ; messer et al .the detection of electron antineutrinos from sn 1987a in the kamiokande ii ( hirata et al . ) and imb laboratories ( bionta et al . 
) and the construction of new , even larger neutrino experiments for future supernova neutrino measurements have raised additional interest in accurate predictions of the detectable neutrino signals from type ii supernovae .neutrino transport in core collapse supernovae is a very complex problem and difficult to treat accurately even in the spherically symmetric case .some of the major difficulties arise from the strong energy dependence of the neutrino interactions , the non - conservative and anisotropic nature of the scattering processes such as neutrino - electron scattering , the non - linearity of the reaction kernels through neutrino fermi blocking , and the need to couple neutrino and antineutrino transport for the neutrino - pair reactions .therefore various simplifications and approximations have been employed in numerical simulations of supernova explosions and neutron star formation .the so far most widely used approximation with a high degree of sophistication is the ( multi - energy - group ) flux - limited diffusion ( mgfld ) ( bowers & wilson , bruenn , myra et al . , suzuki , lichtenstadt et al . ) where a flux - limiting parameter is employed in the formulation of the neutrino flux to ensure a smooth interpolation between the diffusion regime ( where the neutrinos are essentially isotropic ) and the free streaming regime ( where the neutrinos move radially outward ) .although the limits are accurately reproduced , there is no guarantee that the intermediate regime is properly treated . since in a quasi - stationary situation ( e.g. , for the cooling protoneutron star )the flux and the mean energy of the emitted neutrinos are determined in the diffusion regime , little change of these is found when the flux - limiter is varied ( suzuki ) or the transport equation is directly solved , e.g. , by monte carlo calculations ( janka ) .this , however , is not true when the spectral form is considered , because the spectra are shaped in the semitransparent surface - near layers .moreover , significant differences are also expected for problems where the local angular distribution is important in the region between the diffusion and free streaming limits . due to the factor appearing in eq .( [ eq-1 ] ) the hot - bubble heating is such a problem , neutrino - antineutrino pair annihilation is another problem of this kind .in fact , monte carlo simulations ( janka & hillebrandt , ; janka ; janka et al . ) have shown that all flux - limiters overestimate the anisotropy of the radiation field outside the neutrinosphere , i.e. , is enforced too rapidly ( see also cernohorsky & bludman ) .this leads to an underestimation of the neutrino heating in the hot - bubble region between neutrinosphere and supernova shock( [ eq-1 ] ) ) , and sensitivity of the supernova dynamics to the employed flux - limiting scheme must be expected ( messer et al . , lichtenstadt et al . ) .modifications of flux - limited diffusion have been suggested ( janka , ; dgani & janka , cernohorsky & bludman ) by which considerable improvement can be achieved for spherically symmetric , static and time - independent backgrounds ( smit et al . 
) , but satisfactory performance for the general time - dependent and non - stationary case has not been demonstrated yet .therefore the interest turns towards direct solutions of the boltzmann equation for neutrino transport , also because the need to check the applicability of any approximation with more elaborate methods remains .moreover , the rapid increase of the computer power and the wish to become independent of ad hoc constraints on generality or accuracy yield a motivation for the efforts of several groups ( in particular mezzacappa & bruenn ,, and messer et al . ; more recently also burrows ) to employ such boltzmann solvers in neutrino - hydrodynamics calculations of supernova explosions .there are different possibilities to solve the boltzmann equation numerically , one of which is by straightforward discretization of spatial , angular , energy , and time variables and conversion of the differential equation into a finite difference equation which can then be solved for the values of the neutrino phase space distribution function at the discrete mesh points .dependent on the number of angular mesh points , this procedure is called s method . since solvingthe equation is computationally very expensive , there are limitations to the resolution in angle and energy space .therefore tests need to be done whether a chosen ( and affordable ) number of energy and angle grid points is sufficient to describe the spectra well and , in particular , to reproduce the highly anisotropic neutrino distribution outside the neutrinosphere .another , completely different approach to solve the boltzmann equation is the monte carlo ( mc ) method by which the probabilistic history of a large number of sample neutrinos is followed to simulate the neutrino transport statistically ( tubbs ; janka , ; janka & hillebrandt ) . in principle, the accuracy of the results is only limited by the statistical fluctuations associated with the finite number of sample particles .since the mc transport essentially does not require the use of angle and energy grids , it allows one to cope with highly anisotropic angular distributions and to treat with high accuracy neutrino reactions with an arbitrary degree of energy exchange between neutrinos and matter .however , the mc method is also computationally very time consuming , in particular if high accuracy on a fine spatial grid or at high optical depths is needed .therefore it is not the transport scheme of one s choice for coupling it with a hydrodynamics code . in the present work ,we make use of the advantages of the mc method in order to test the accuracy and reliability of a newly developed neutrino transport code that follows the lines of the s scheme described by mezzacappa & bruenn ( ,, ) . in particular, we shall test the influence of the number of radial , energy , and angular mesh points on predicting the spectra and the radial evolution of the neutrino flux in `` realistic '' protoneutron star atmospheres as found in hydrodynamic simulations of supernova explosions ( wilson ) . 
since our investigationsare restricted to static and time - independent backgrounds , we concentrate on generic properties of the transport description which should also hold for more general situations .the radial evolution of the angular distribution of the neutrinos is such a property , because it is primarily dependent on the profile of the opacity and the geometry of the neutrino - decoupling region , but is not very sensitive to the details of the temperature and composition in the neutron star atmosphere .finally , good overall agreement of the mc and s results would strengthen the credibility of the mc transport with its limited ability to yield high spatial resolution .the paper is organized as follows .the details of the boltzmann solver and essential information for the mc method are given in sect .2 . in sect . 3we describe the background models .section 4 presents the results of our comparative calculations , i.e. , neutrino spectra , luminosities , and eddington factors .some of our calculations are also compared against results obtained with a mgfld code developed by suzuki ( ) .the dependence of the results from the s scheme on the energy , angular , and radial grid resolution is discussed , too .it is shown that an angular mesh that varies with the position in the star can improve the angular resolution and the representation of the beamed neutrino distributions without increasing the number of angular mesh points .finally , a summary of our results and a discussion of their implications can be found in sect .our s code is based on a finite difference form of the general relativistic boltzmann equation for neutrinos .we assume spherical symmetry of the star throughout this paper .for the misner - sharp metric ( misner & sharp ) : the boltzmann equation in the lagrangian frame takes the following form : \left ( \frac{f_{\nu}}{\rho_{\rm b } } \right)\right\ } \nonumber \\ & + & \left [ \mu ^{2 } \frac{1}{\rm c } { \rm e}^{- \phi } \frac{\partial \ln ( \rho_{\rm b } r^{3})}{\partial \ , t } \right . \nonumber \\ & - & \frac{1}{\rm c } { \rm e}^{- \phi } \frac{\partial \ln r}{\partial \ , t } \nonumber \\ & - & \left .\mu 4 \pi r^{2 } \rho_{\rm b } \frac{\partial \phi}{\partial m } \right ] \frac{\partial \ \ } { \partial \frac{1}{3 } \varepsilon _ { \nu } ^{3 } } \left\ { \varepsilon _ { \nu } ^{3 } \left ( { \frac{f_{\nu}}{\rho_{\rm b } } } \right ) \right\ } \nonumber \\ & = & \frac{1}{\rho_{\rm b } } \frac{1}{\rm c } { \rm e}^{- \phi } \left ( \frac{\delta f_{\nu}}{\delta \ , t } \right)_{\rm coll } \quad .\end{aligned}\ ] ] in the above formula and are the velocity of light and the gravitational constant , respectively , is the coordinate time and is the baryonic mass coordinate which is related to the circumference radius through the conservation law of the baryonic mass . in view of combination with a lagrangian hydrodynamics code ( yamada ) ,the baryonic mass is chosen to be the independent variable instead of the radius . , and are the metric components which are determined by the einstein equations . in this paper , however , these quantities are given from the background models and set to be constant with time during the neutrino transport calculations . is the neutrino phase space distribution function . under the assumption of spherical symmetry, is a function of , , and , where is the cosine of the angle of the neutrino momentum with respect to the outgoing radial direction and is the neutrino energy . is the baryonic mass density . 
the right hand side of eq. ([eqn:be]) is the so-called collision term, which actually includes absorption, emission, scattering and pair creation and annihilation of neutrinos, details of which are described below. the left hand side, on the other hand, differs somewhat from the form used, for example, in mezzacappa et al. (,,). this is not only because it is fully general relativistic but also because all the velocity dependent terms are expressed as time derivatives, so that it can easily be coupled to the implicit general relativistic lagrangian hydrodynamics code (yamada). in this way a fully coupled, implicit system of the radiation-hydrodynamics equations is formed, in which the time derivatives can be treated easily because they are just off-diagonal components of the matrix set up from the linearized equations. it should be noted, however, that since we assume that the matter background is static in this paper, all these time derivatives are automatically set to zero, although the corresponding terms have already been implemented in the code. the conserved neutrino number in the absence of source terms is represented in terms of the chosen independent variables as . this suggests casting eq. ([eqn:be]) in a conservation form with respect to the neutrino number. it is also evident that the combination of is more convenient to use than itself. in the following, therefore, we define the specific neutrino distribution function as in mezzacappa et al. (,,) by and use this quantity as the dependent variable to be solved for. as mentioned above, the specific neutrino distribution function is a function of , , and . this four-dimensional phase space is discretized and eq. ([eqn:be]) is written as a finite-difference equation. in the time direction we adopt a fully implicit differencing. the discretized specific distribution function is defined at the mesh centers of the spatial, angular and energy grids. here the subscripts refer to the spatial, angular and energy grid points, respectively, and the superscript corresponds to the time step. the value at each cell interface is evaluated by interpolation of the distribution at the two adjacent mesh centers. our finite difference method is essentially the same as that of mezzacappa et al. (,,) with some modifications. for the spatial advection, the upwind difference and the centered difference are linearly averaged with weights determined by the ratio of the mean free path to the distance to the stellar surface, unlike mezzacappa et al. (,,), who used the ratio of the mean free path to the local mesh width. in fact, in the latter case we found that the upwind distribution was given too large a weight in the optically thick region and thus the flux was overestimated. this issue will be addressed later. the angular mesh is determined such that each mesh center and cell width correspond to the abscissas and weights of the gauss-legendre quadrature, respectively. in the angular direction, the neutrino distribution at each interface is simply taken as the upwind value. the advection in energy space is also approximated by an upwind scheme following mezzacappa & matzner ( ), although this does not allow one to conserve both lepton number and energy in non-static situations (which are not considered here) unless a large number of energy zones is used (mezzacappa, private communication).
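to make the interpolation between upwind and centered differencing for the spatial advection concrete, the following sketch illustrates one possible implementation; the weighting formula, the function name and the grid variables are assumptions for illustration and not the actual code used here.

```python
import numpy as np

def radial_advection_interface(f, mfp, r, r_surface):
    """Blend upwind (donor-cell) and centered interface values of a
    cell-centered distribution f for outward radial advection.

    f         : distribution at cell centers, shape (nr,)
    mfp       : local neutrino mean free path at cell centers, shape (nr,)
    r         : cell-center radii, shape (nr,)
    r_surface : radius of the stellar surface
    returns interface values, shape (nr-1,)
    """
    f_upwind = f[:-1]                        # value taken from the inner (upwind) cell
    f_center = 0.5 * (f[:-1] + f[1:])        # centered average of the two neighbors
    # weight -> 0 (centered differencing) deep inside, where the mean free
    # path is much smaller than the distance to the surface;
    # weight -> 1 (upwind differencing) in the optically thin outer layers
    dist = np.maximum(r_surface - 0.5 * (r[:-1] + r[1:]), 1.0e-30)
    w = np.minimum(0.5 * (mfp[:-1] + mfp[1:]) / dist, 1.0)
    return (1.0 - w) * f_center + w * f_upwind
```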
in typical calculations ,105 , 6 and 12 mesh points are used for the spatial , angular and energy discretizations , respectively .the dependence of the results on the numbers of mesh points will be discussed below .the finite - differenced boltzmann equation forms a nonlinear coupled system of equations for all radial grid points which is linearized and solved iteratively by using a newton - raphson scheme .the linearized equations adopt a block tridiagonal matrix form , which can be efficiently solved by the feautrier method .different from the finite difference method , the monte carlo method constructs the statistical ensemble average by following the destinies of individual test particles and performing the average when all particles have been transported . due to the fact that neutrinos are fermionsit is impossible to propagate them independently .instead , the full time - dependent problem has to be simulated by following a large number ( typically ) of sample particles along their trajectories simultaneously in order to be able to construct the local phase space occupation functions and to include anisotropies as well as phase space blocking effects self - consistently into the calculation of the reaction rates and source terms .the modeling of the phase space distribution function from the local particle sample must guarantee the correct approach to chemical equilibrium .also the pauli exclusion principle has to be satisfied by the statistical average .we refer readers to janka ( ) and references therein for details .the stellar background is divided into 15 equispaced spherical shells of homogeneous composition and uniform thermodynamical conditions , the number of which was determined both from physical requirements for spatial resolution and from the requirement to have acceptably small statistical errors in the local neutrino phase space distributions constructed from the chosen number of sample particles . although the monte carlo method is essentially mesh free , about 60 energy bins and approximately 35 angular bins are used only for representing the phase space distribution functions and for calculating the reaction kernels .neutrinos are injected into the computational volume at the inner boundary in the way described in the next section , while particles passing inwardly through the inner boundary are simply forgotten .the outer boundary is treated as a free boundary , where particles escape unhindered and no neutrino is assumed to come in from outside . in the presented calculations we are mainly interested in the neutrino transport in the region where the neutrinos decouple from the stellar background and the emitted spectra form .therefore we calculate the neutrino transport only in the vicinity of the `` neutrinosphere '' . for this reason we have to set an inner boundary condition as well as an outer boundary condition for each model . at the outer boundarywe impose the condition that no neutrinos enter the computational volume from outside . in the boltzmann solver , this is realized by setting where is the radius of the outermost mesh center , and is the radius of the outer surface which is dislocated outward from the outermost mesh center by half a radial cell width . on the other hand, we have to specify the distribution of the neutrinos coming into the computational volume at the inner boundary which is dislocated inward from the innermost mesh center by half a radial cell width . 
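the block tridiagonal system mentioned above can be solved by the forward-elimination/back-substitution recursion that underlies the feautrier method; the following generic sketch (not the authors' code, variable names assumed) solves a system with sub-, main- and super-diagonal blocks a_i, b_i, c_i and right-hand sides d_i.

```python
import numpy as np

def solve_block_tridiagonal(A, B, C, d):
    """Solve A[i] x[i-1] + B[i] x[i] + C[i] x[i+1] = d[i] for i = 0..n-1
    (A[0] and C[n-1] are ignored) by block forward elimination and
    back substitution.

    A, B, C : arrays of shape (n, m, m)
    d       : right-hand sides, shape (n, m)
    returns x of shape (n, m)
    """
    n, m = d.shape
    G = np.zeros((n, m, m))
    h = np.zeros((n, m))
    # forward elimination sweep
    G[0] = np.linalg.solve(B[0], C[0])
    h[0] = np.linalg.solve(B[0], d[0])
    for i in range(1, n):
        M = B[i] - A[i] @ G[i - 1]
        G[i] = np.linalg.solve(M, C[i])
        h[i] = np.linalg.solve(M, d[i] - A[i] @ h[i - 1])
    # back substitution sweep
    x = np.zeros((n, m))
    x[-1] = h[-1]
    for i in range(n - 2, -1, -1):
        x[i] = h[i] - G[i] @ x[i + 1]
    return x
```

each radial zone requires the solution of dense m x m systems, which is the origin of the cubic cost scaling with the block dimension discussed further below.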
to specify the spectrum of the neutrinos entering through the inner boundary, we adopt fermi-dirac distribution functions, in which the temperature, chemical potential and a normalization factor are determined such that the neutrino number density at (where the neutrinos are essentially isotropically distributed because the inner boundary is chosen to be located at high optical depth), the average energy and the width of the energy spectrum, measured by the parameter (see janka & hillebrandt 1989), are reproduced as given by wilson's ( ) models. thus the inner boundary condition is set as : where , and are the fitting parameters, the values of which are summarized in table [tab2] for all considered models and neutrino species. concerning the distribution of neutrinos that leave the computational volume, we impose in eq. ([eqn:ibc]) of the boltzmann solver the condition that it is the same as the distribution at the innermost mesh center. from the physical point of view, however (and this is treated correctly in the monte carlo simulations), it should be determined by the fraction of neutrinos which is emitted or backscattered towards the inner boundary. this is in general different from what the phase space distribution at the innermost mesh center yields, because the mean free path of the neutrinos near the inner boundary is shorter than the mesh width. as a result, the imposed condition for in eq. ([eqn:ibc]) leads to a minor discrepancy between the treatment of the inner boundary condition in the monte carlo and boltzmann computations and sometimes causes a small oscillation of the neutrino distribution near the inner boundary in the latter computations. this issue will be revisited later.

model & neutrino species & temperature [mev] & chemical potential [mev] & normalization
w1 & & 9.56932 & 7.85190 & 1.221490
w1 & & 9.38259 & 7.13610 & 3.874140
w1 & & 10.32539 & 30.87195 & 17.911456
w2 & & 10.52454 & 5.75932 & 3.672680
w3 & & 9.87875 & 11.27370 & 7.108076

the neutrino opacities of dense neutron star matter are still one of the major uncertainties of supernova simulations. theoretical and numerical complications arise from the description and treatment of nucleon thermal motion and recoil (schinder), nuclear force effects and nucleon blocking (prakash et al.; reddy et al.), and nucleon correlations, spatial (sawyer; burrows & sawyer, ; reddy et al.) as well as temporal (raffelt & seckel; raffelt et al.). although in particular nucleon recoil and auto-correlations might play an important role even in the sub-nuclear outer layers of the protoneutron star down to densities below (janka et al.; hannestad & raffelt), we do not concentrate on this problem here but rather employ the standard description of the neutrino opacities, according to which neutrinos interact with isolated nucleons (see, e.g., tubbs & schramm; bruenn). also, as in most other simulations, bremsstrahlung production of neutrino-antineutrino pairs is neglected here, although it may be important as pointed out by suzuki ( ) and more recently by burrows ( ) and hannestad & raffelt ( ).
doing so , we intend to enable comparison with other ( already published ) work and want to avoid the mixing of effects from a different numerical treatment of the transport with those from a non - standard description of neutrino - nucleon interactions or from the inclusion of processes typically not considered in the past .the following neutrino reactions have been implemented in our codes .+ [ 1 ] + ( electron - type neutrino absorption on neutrons ) , + [ 2 ] + ( electron - type anti - neutrino absorption on protons ) , + [ 3 ] + ( neutrino scattering on nucleons ) , + [ 4 ] + ( neutrino scattering on electrons ) .+ in addition to the above reactions , the following processes have also been implemented in both codes , although these reactions are not used in the present paper .+ [ 5 ] + ( electron - type neutrino absorption on nuclei ) , + [ 6 ] + ( neutrino coherent scattering on nuclei ) , + [ 7 ] + ( electron - positron pair annihilation and creation ) , + [ 8 ] + ( plasmon decay and creation ) . + since the pair processes are not taken into account in this paper , we can treat each species of neutrinos separately .a test showed that in the considered protoneutron star atmospheres neutrino pair creation and annihilation as well as processes involving nuclei are unimportant to determine the fluxes and spectra .in this paper we use a simplified equation of state , in which only nucleons , electrons , alpha particles and photons are included .they are all treated as ideal gases .for given density , temperature and electron fraction we derive the mass fractions and the chemical potentials of nucleons and the electron chemical potential from this equation of state .the disregard of nuclei is well justified for the densities and temperatures we are considering , where most of the nuclei are dissociated into free nucleons .this is actually confirmed by comparing our eos with the more realistic eos of wolff that is based on the skyrme - hartree - fock method ( hillebrandt & wolff ) .only very small differences of the nucleon chemical potentials are found for the innermost region where small amounts of nuclei appear and for the outermost region where some contribution from alpha particles is mixed into the stellar medium .we also repeated some of the calculations making use of the wolff eos with the nuclei - related reactions [ 5 ] , [ 6 ] ( neutrino emission , absorption and scattering on nuclei ) turned on and found qualitatively and quantitatively the same results .hence the nuclei - related reactions are switched off in the calculations described below .the time - dependent transport calculations presented here were performed for background profiles which are representative of protoneutron star atmospheres during the quasi - static neutrino cooling phase ( wilson ) . at this stage ,several seconds after core bounce , the typical evolution timescale of density , temperature , and electron fraction is much longer than the timescale for neutrinos to reach a stationary state .therefore our assumption of a static and time - independent background is justified . 
in addition, our interest is focused on the radial evolution of the eddington factors and on a test of the influence of the energy and angle resolution used in the s boltzmann solver. neither aim requires a fully self-consistent approach which takes into account the evolution of the stellar background (in particular of the temperature and composition). in fact, the eddington factors are normalized angular integrals of the radiation intensity and as such reflect very general characteristics of the geometrical structure of the atmosphere where neutrinos and matter decouple. profiles from wilson's ( ) protoneutron star model were taken for three different times, 3.32, 5.77, and 7.81 s after core bounce. with the chosen fundamental variables density, temperature, and electron fraction, the thermodynamical state is defined for the plasma consisting of non-relativistic free nucleons, arbitrarily relativistic and degenerate electrons and positrons, and photons in thermal equilibrium. figures [fig_1][fig_3] show the input used for the three models. in fig. [fig_3] also the general relativistic metric coefficients and are given as provided by wilson's data and used for a comparative general relativistic calculation of the neutrino transport in model w3. all models computed with the boltzmann code are summarized in table [tab3]. models st are the standard models, in which 105 uniform spatial, 6 angular and 12 energy mesh points were used. the energy mesh is logarithmically uniform and covers 0.9 to 110 mev. the numbers of angular grid points and energy grid points were increased in models fa and fe, respectively. in model cs we used the same radial grid as in the monte carlo simulations, where 15 radial zones were chosen. 105 spatial mesh points were again used in model ni, but with no interpolation of density, temperature and electron fraction within the radial grid of the monte carlo simulations. model gr took into account the general relativistic effects. we used a non-uniform spatial mesh in model nu. a different interpolation of upwind differencing and centered differencing was tried for the radial advection term in model di. we assumed the nucleon scattering to be isotropic in model is. as is understood from table [tab3], most of the comparisons were done for the electron-type anti-neutrinos, since they are most important from the observational point of view.

model & background model & species & remarks
st1 & w1 & & interpolation in spatial grid of mc
st2 & w1 & & interpolation in spatial grid of mc
st3 & w2 & & interpolation in spatial grid of mc
st4 & w3 & & interpolation in spatial grid of mc
st5 & w1 & & interpolation in spatial grid of mc
fa & w1 & & interpolation in spatial grid of mc
fe & w1 & & interpolation in spatial grid of mc
cs & w1 & & same spatial grid as in mc
ni & w1 & & no interpolation in spatial grid of mc
gr & w3 & & general relativity included
nu & w1 & & non-uniform spatial mesh
di & w1 & & different interpolation for conservative radial advection
is & w1 & & isotropic scattering

in the following, we consider the neutrino transport results after the time-dependent simulations have reached steady states. first we compare observable quantities such as the luminosity, average energy and average squared energy of the neutrino flux for models st and gr. these quantities are calculated at the outermost spatial zone as where is again the surface radius, and
the redshift corrections from the surface up to infinity are not taken into account. the results are summarized in table [tab4].

 & & st1 & st2 & st3 & st4 & st5 & gr
luminosity & b & & & & & &
luminosity & mc & & & & & &
average energy [mev] & b & 12.8 & 16.3 & 15.9 & 16.2 & 24.3 & 15.5
average energy [mev] & mc & 12.7 & 16.1 & 15.8 & 16.3 & 24.2 & 15.6
average squared energy [mev^2] & b & 198.5 & 329.0 & 309.9 & 324.5 & 727.8 & 300.3
average squared energy [mev^2] & mc & 198.6 & 322.6 & 308.9 & 327.1 & 724.3 & 300.6

the electron-type neutrino has the lowest energy while the muon-type neutrino has the highest. the reason for this is that the electron-type neutrino has the shortest mean free path due to absorptions on the abundant neutrons, and decouples in the surface-near layers where the temperature is lower. in contrast, the muon and tau neutrinos do not interact with particles of the stellar medium by charged currents and therefore their thermal decoupling occurs at a higher temperature. the luminosity of the electron-type anti-neutrino gets smaller as time passes, which is due to the cooling and shrinking of the protoneutron star. as can be seen in the table, the agreement of all quantities between the two methods is very good, which confirms the statistical convergence of the monte carlo simulations. the average energy is in general determined accurately because the energy spectrum is shaped in the region where the neutrino angular distribution is not very anisotropic. moreover, possible effects due to the rather coarse angular resolution of the boltzmann code essentially cancel out when taking the ratios of eqs. ([eqn:emin]) and ([eqn:e2min]). on the other hand, the luminosity and the number flux of the monte carlo computations are also well reproduced by the boltzmann results. this is due to the fact that these quantities are determined deep inside the star and are nearly conserved farther out. in fact, when integrating eq. ([eqn:be]) over angle and energy, multiplying with unity and with the neutrino energy, respectively, and ignoring all time derivatives and general relativistic effects, one gets where and are the number and energy fluxes of neutrinos at radius , respectively. as expected intuitively, the scattering kernels drop out of the right hand side of eq. ([eqn:nexch]), while only the isoenergetic scattering does not contribute on the right hand side of eq. ([eqn:eexch]). in the given boltzmann code, eqs. ([eqn:nexch]) and ([eqn:eexch]) are discretized in a conservative form for the radial advection. therefore it is clear that the boltzmann code can calculate the number and energy fluxes accurately if also the number and energy exchange by the reactions are calculated accurately in the source terms on the right hand sides of the equations. moreover, the radial evolutions of and are entirely determined by the number and energy exchange through reactions, whose net effect is small in an atmospheric layer which is in a stationary state and reemits as much energy and lepton number as it absorbs. changes of the number fluxes and luminosities in the considered protoneutron star atmospheres occur only in regions where the neutrino distribution is still essentially isotropic and possible effects due to an insufficient angular resolution in the boltzmann s scheme do not cause problems. for all these reasons it is not surprising that the same quality of agreement is also found for the radial evolutions of the luminosity, average energy and average squared energy, and that this is also true for the general relativistic case.
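the quantities compared in table [tab4] are low-order moments of the distribution function; on a discrete angle/energy grid they can be accumulated as in the sketch below (schematic definitions with assumed normalizations and variable names, not the code actually used for the table).

```python
import numpy as np

def flux_moments(f, mu, w_mu, eps, d_eps, r):
    """Number flux, luminosity and flux-weighted mean energies at radius r
    from a distribution f(mu, eps) sampled on the angular/energy grid.

    f     : distribution function, shape (n_mu, n_eps)
    mu    : angle cosines, w_mu their quadrature weights
    eps   : energy-bin centers [MeV], d_eps the bin widths
    """
    c = 2.99792458e10                       # cm/s
    phase = eps**2 * d_eps                  # momentum-space volume ~ eps^2 d(eps), constants dropped
    ang = w_mu[:, None] * mu[:, None]
    n_flux  = c * np.sum(ang * f * phase)           # number flux density
    e_flux  = c * np.sum(ang * f * eps * phase)     # energy flux density
    e2_flux = c * np.sum(ang * f * eps**2 * phase)
    luminosity = 4.0 * np.pi * r**2 * e_flux        # up to constant prefactors
    return n_flux, luminosity, e_flux / n_flux, e2_flux / n_flux
```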
in the previous section we discussed only the energy and angle integrated quantities. however, we also provide information about the energy spectra of each neutrino species, because they yield more evidence about the quality of the agreement between the calculations with the different codes. figs. [fig_4] and [fig_5] show energy flux spectra, defined as , at the protoneutron star surface for different cases. for the reasons discussed in section [la], the spectra computed with the boltzmann code (symbols) and the monte carlo code (lines) show excellent agreement. the number and the distribution of the energy bins in the boltzmann code seem to be adequate to reproduce the highly resolved monte carlo spectra. so far we discussed only angle integrated quantities since they are observable. however, we are also interested in the angular distributions of the neutrinos, because information about the angular distributions is important to determine the neutrino heating rate in the hot-bubble region (see eq. ([eq-1])). although it is an advantage of the boltzmann solver over mgfld that one does not have to assume an ad hoc closure relation between the angular moments of the distribution function, one should remember that the usable number of angular mesh points is severely limited. in the feautrier method the computation time increases in proportion to the third power of the dimension of the blocks in the tridiagonal block matrix which has to be inverted when one chooses the radius as the outermost variable of the do-loops. the dimension of one block, in turn, is linearly proportional to the number of angular mesh points. the same dependence holds for the number of energy mesh points and the number of neutrino species. in the standard calculations we use 6 angular mesh points and 12 energy grid points, and we treat a single neutrino species at a time, which corresponds to a block matrix size of 72. on the other hand, the number of spatial grid points is about 100. the cpu time is a few seconds per inversion of the whole matrix on a single vector processor of a fujitsu vpp500. hence, use of more than 10 angular mesh points is almost prohibitive for calculations with three neutrino species, even with a highly parallelized matrix inversion routine (sumiyoshi & ebisuzaki). it is, therefore, important to clarify the sensitivity of the accuracy to the angular resolution. for this reason we consider the flux factor and the eddington factor, which are defined as : here the subscript means that the averages are defined with the energy as weight. in mgfld these factors are related to each other by a closure condition which can be derived from the employed flux limiter (janka, ). for simplicity we discuss here only energy integrated quantities as defined above. the fundamental features are similar for the individual energy groups.
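the cubic scaling of the feautrier inversion with the block dimension quoted above can be made concrete with a quick estimate (illustrative arithmetic only):

```python
# block dimension = (angular mesh points) x (energy mesh points) for one species
dim_standard = 6 * 12        # standard grid       -> 72
dim_fine     = 10 * 12       # finer angular grid  -> 120
print((dim_fine / dim_standard) ** 3)   # ~4.6: roughly a factor of 5 more CPU time per inversion
```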
in figs .[ fig_6][fig_8 ] , we show the radial evolutions of the flux factors and the eddington factors for all neutrino species in case of background model w1 .the upper panels show the flux factors and the lower panels the corresponding eddington factors .the solid lines are the results of the boltzmann simulations ( having the finer radial resolution ) and the filled triangles are those of the monte carlo simulations .as can be seen , near the inner boundary the flux factors are almost zero while the eddington factors are , which implies that the neutrino angular distribution is nearly isotropic , a consequence of the fact that the neutrinos are in equilibrium with the surrounding matter .as we move outward , both factors begin to deviate from these values , reflecting the increase of the mean free path and a more rapid diffusion .farther out , the angular moments increase monotonically towards unity , the value in the free streaming limit , as the angular distribution gets more and more forward peaked with increasing distance from the source . as can be seen clearly in figs .[ fig_6][fig_8 ] , the boltzmann solver tends to _ underestimate _ both angular moments in the outermost region , where the neutrino angular distribution is most forward peaked .this can be explained by the insufficient angular resolution , or , to be more specific , by the fact that in case of the employed gauss - legendre quadrature the maximum angle cosine of the angular grid is significantly less than unity , if not a large number of angular grid points is used .this is directly confirmed by using a larger number of angular mesh points or , in particular , a variable angular mesh ( see sect .[ vam ] ) .the same trend is also present in the general relativistic case , model gr , as shown in fig .[ fig_9 ] .it turned out that the ray bending effect which tends to isotropize the angular distribution of the neutrinos is not very important and that differences between the monte carlo results and the boltzmann results are significantly larger . in fig .[ fig_10 ] we show the flux factor and the eddington factor for model fa where we employed 10 angular mesh points instead of 6 .the long dashed lines are for model fa and the short dashed lines depict the result of model st2 , the corresponding standard model , for comparison .the discrepancy between the boltzmann simulation and the monte carlo simulation is reduced with the increase of the number of angular mesh points .this supports our interpretation that the deviation stems entirely from the insufficient angular resolution and/or the unfavorable location of the angular grid points for the gauss - legendre quadrature used in the boltzmann simulations . 
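the limited reach of the gauss-legendre nodes toward the forward direction can be checked directly; the snippet below (illustrative) prints the largest angle cosine of the 6- and 10-point sets, which stays well below unity.

```python
import numpy as np

for n in (6, 10):
    nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1]
    print(n, nodes.max())
# 6 points : largest node ~ 0.9325
# 10 points: largest node ~ 0.9739
# neither grid contains directions with mu close to 1, so strongly
# forward-peaked distributions cannot be represented accurately
```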
indeed, the degree of improvement is consistent with the fact that our finite difference scheme is of first order for the angular advection, since we always take the upwind differencing as explained above. even with the finer 10-zone angular mesh the deviation of both factors from the exact values given by the monte carlo result is significant. the maximum deviations, for the flux factor and for the eddington factor, are approached as one goes farther out into the optically thin regime. this is visible in fig. [fig_10m], which displays the ratios of the boltzmann to the monte carlo results for the flux factors and the eddington factors in case of 6 and 10 angular bins and the variable angular mesh. (the relatively large discrepancies of the flux factors at smaller radii are explained by slight differences of the treatment of the inner boundary condition, see sect. [sec_boundary], and by the fact that the flux factor adopts very small values in the optically thick region.) it should be mentioned that the flux factors calculated in the recent paper by messer et al. ( ) do not converge to unity but saturate at a nearly constant lower level (around 0.9) even far outside of the neutrinosphere. this reflects the use of for the largest -bin of the angular grid in the 6-point quadrature of the s method. it is interesting to see that this tendency of the boltzmann solver is completely opposite to that of mgfld. janka ( , ) pointed out that all flux limiters used so far tend to overestimate the flux factor and the eddington factor in the optically thin region, which implies that the neutrino angular distribution approaches the free streaming limit much too rapidly (see also messer et al.). in order to confirm this statement, transport calculations with mgfld were done for the same models with three different flux limiters, which are bruenn's (br), levermore & pomraning's (lp) and mayle & wilson's (mw). we refer readers to janka ( ) and suzuki ( ) and references therein for details on the flux limiters. we show in figs. [fig_11] and [fig_12] the flux factors and local number densities of the electron-type neutrino and electron-type anti-neutrino for model w1, respectively. it is clear that all flux limiters overestimate the forward peaking of the angular distributions of the neutrinos in the optically thin region, a trend that holds for all neutrino species and does not depend on the background model. the typical deviation of the mgfld results from the monte carlo results is much larger than that between the boltzmann solver and the monte carlo method. from the lower panels of figs. [fig_11] and [fig_12] we learn that the local neutrino number density, which is given by , is overestimated in case of the boltzmann solver (by about 10%) and underestimated for mgfld (by approximately 30%) in the optically thin region. this is understood from the fact that the number flux, which is defined as , is related to the local neutrino number density by , where denotes the average angle cosine for the neutrino number flux and is calculated from eq. ([eqn:ffac]) with a factor omitted under the integrals in the numerator and denominator. since the number flux is determined deep inside the protoneutron star atmosphere, where the neutrinos are still nearly isotropic, and is conserved farther out, it is not affected by problems with a coarse angular resolution occurring in the optically thin regime.
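since the number flux is essentially fixed deep inside, the inferred number density scales inversely with the flux factor; a small numerical illustration (with assumed numbers) makes the over/underestimation explicit.

```python
c = 2.99792458e10        # cm/s
F_N = 1.0e38             # number flux in neutrinos / cm^2 / s (arbitrary illustrative value)

flux_factor = {"monte carlo": 0.90,                 # reference value in the optically thin region
               "s_n (coarse angular grid)": 0.81,   # ~10% too small
               "mgfld (flux limiter)": 1.00}        # pushed to free streaming too early

for label, ff in flux_factor.items():
    print(label, F_N / (c * ff))     # n = F_N / (c <mu>)
# an underestimated flux factor overestimates the number density (here by ~11%),
# an overestimated one underestimates it
```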
for this reason, the number and energy fluxes agree well between the monte carlo method and the boltzmann solver irrespective of the angular resolution, as long as the boltzmann solver is based on conservative finite differencing in the radial direction. the good agreement of the fluxes is confirmed by fig. [fig_13], which depicts the radial behavior of the number flux in case of models st1, st2 and st5. it is now clear from eq. ([eqn:nfac]) that an underestimation (overestimation) of the flux factor leads to an overestimation (underestimation) of the number density, if the flux is the same. it should be noted that even in the outermost zone of our computational region in the atmosphere of a protoneutron star, the neutrino angular distribution is not as strongly forward peaked as in the hot-bubble region farther out. hence it must be expected that the errors caused by an over- or underestimation of the neutrino number density might be even larger in the hot-bubble region where the neutrino heating takes place. since the neutrino heating rate is proportional to the local neutrino number density (actually the energy density; this is why the inverse of the flux factor appears in eq. ([eq-1])), the application of the boltzmann solver with a relatively small number of angular mesh points may lead to an overestimation of the neutrino energy deposition in the hot bubble in disadvantageous situations, in particular when most of the heating occurs in those regions where the deviation of the flux factor from the correct value is significant. in contrast, all flux limiters used in mgfld underestimate the heating rate significantly. therefore, even for the boltzmann solver, an improvement of the angular resolution or a redistribution of the angular grid points is desirable in order to avoid, a priori, an inaccurate evaluation of the heating rate. as already mentioned, an increase of the number of angular mesh points is not feasible. choosing a variable angular mesh which adjusts the mesh point locations in dependence on time and spatial position might be a solution. this issue will be discussed in the next subsection. here we attempt to improve the angular resolution of the boltzmann solver by redistributing the angular grid points in dependence on time and position, so that their density is enhanced in the forward direction in the optically thin region, where the neutrino angular distribution becomes strongly forward peaked and the boltzmann solver underestimates the flux factor and the eddington factor. this requires adding extra angular advection terms to the numerical scheme which compensate for the motion of the angular mesh points. we assume that the position of each interface of the angular grid is a function of time, baryonic mass and neutrino energy, i.e., . integrating eq. ([eqn:be]) over angular bins then leads to the following additional angular advection fluxes at each angular mesh interface i: $\left(\frac{\partial \mu_{\rm i}}{\partial(\tfrac{1}{3}\varepsilon_{\nu}^{3})}\right)\left(\frac{f_{\nu}}{\rho_{\rm b}}\right)$. it is easy to understand that eqs. ([eq:vat])([eq:vae]) originate from the variability of the angular mesh points, because of the differentials of the with respect to time, mass and neutrino energy. since the neutrino reaction rates are strongly energy dependent, and so is the neutrino angular distribution, it would be desirable to implement the energy dependent angular mesh according to eq. ([eq:vae]).
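the paper does not spell out the rule by which the angular interfaces are moved; purely as an illustration of the idea, the hypothetical mapping below clusters the interfaces toward the forward direction as the local flux factor grows (the functional form and the parameter p_max are assumptions).

```python
import numpy as np

def variable_angular_interfaces(n_bins, flux_factor, p_max=4.0):
    """Hypothetical redistribution of angular-bin interfaces on [-1, 1]:
    p = 1 reproduces a uniform mesh, larger p (i.e. larger flux factor)
    packs the interfaces toward mu = +1."""
    u = np.linspace(0.0, 1.0, n_bins + 1)            # uniform reference coordinate
    p = 1.0 + p_max * np.clip(flux_factor, 0.0, 1.0)
    return 1.0 - 2.0 * (1.0 - u) ** p

# deep inside (flux factor ~ 0) the mesh stays nearly uniform; far outside
# (flux factor ~ 1) most interfaces crowd into the forward hemisphere
print(variable_angular_interfaces(6, 0.0))
print(variable_angular_interfaces(6, 0.9))
```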
in the current preliminary attempt, however, we installed only eqs. ([eq:vat]) and ([eq:vam]) for simplicity. incidentally, since eq. ([eq:vae]) is proportional to in static background calculations, it is anyway negligible for the models considered here. we note that the motion of the mesh points is not calculated implicitly, that is, the angular mesh points for the next time step are determined from the neutrino angular distribution at the current time step and are kept fixed during the implicit calculation of the transport for the next step. in fig. [fig_10] we show both the flux factor and the eddington factor obtained from the computation with the variable angular mesh. the improvement is clear from a comparison with the result of model fa, which employed 10 angular mesh points and is also shown in the figure. it should be emphasized that increasing the number of angular mesh points from 6 to 10 leads to an increase of the cpu time by a factor of about 5, while the additional operations for the variable angular mesh imply a negligible computational load. we repeated all boltzmann calculations with the variable angular mesh method and found that the same improvement could be achieved in all cases. our scheme is stable at least for the static background models, although the stability for dynamical background models remains to be tested. thus we think this method is promising for applying the boltzmann solver to the study of neutrino heating in the hot-bubble region of supernovae, although there is room for improvement concerning the prescription of the motion of the mesh points and the implementation of the energy-dependent angular mesh. in this section we discuss how the numerical results change in dependence on the number of spatial and energy grid points, the boundary condition, and the treatment of the radial advection in the boltzmann solver. in fig. [fig_14] we show the energy spectrum of electron-type anti-neutrinos for model fe, in which we used 18 energy mesh points, compared with the spectrum for the corresponding standard model st2, which has 12 energy zones. no qualitative or quantitative difference is found between the two cases. this is also true for the luminosity and the angular distribution. thus we think that about 15 energy mesh points are sufficient for the calculation of the energy spectrum. these results are in agreement with previous findings by mezzacappa & bruenn (,,). in models cs and ni we reduced the spatial resolution, because the monte carlo simulations were done with only 15 radial mesh points, which were used to represent the stellar background on which the reaction kernels were evaluated. another motivation for testing the sensitivity to the radial resolution is that it is hardly possible to describe the protoneutron star atmosphere with about 100 radial grid points in the context of a full supernova simulation. in model cs we used the same 15 spatial grid points as in the monte carlo simulations. on the other hand, in model ni we used 105 spatial mesh points, but the density, temperature and electron fraction were not interpolated between the grid points of the monte carlo simulations.
in fig. [fig_15] we show the radial evolution of the average energy as defined in eq. ([eqn:emin]) and that of the number flux given by eq. ([eqn:nflx]) for model cs, to be compared with the corresponding result for model st2 in fig. [fig_13]. it is clear that the agreement between the monte carlo and the boltzmann results for both quantities is good. we note also that the angular distribution as well as the energy spectrum are hardly affected by this change of the spatial resolution. model ni agrees with the monte carlo data nearly perfectly (except for the problems with the angular distribution discussed in sect. [fefac]) after averaging over spatial mesh zones in accordance with the way the monte carlo data represent the transport result. since the boltzmann results do not change with the number of radial grid points, we conclude that the quality of the numerical solutions is not degraded very much for simulations with a decreased spatial resolution. minor oscillations of the number flux near the inner boundary can be seen in fig. [fig_13]. this problem results from the fact that one cannot consistently specify the distribution of neutrinos which leave the computational volume at the inner boundary. while this distribution should be determined from the transport result just above the inner boundary, the boltzmann solver requires an ad hoc specification in order to calculate the flux at the inner boundary. this leads inevitably to an inconsistency of the flux in the innermost zone and thus to the observed oscillations. in fact, when an inhomogeneous spatial mesh was used in model nu, in which the innermost grid zone was five times smaller than in the standard models, the oscillations were diminished. finally, we illustrate possible errors which are associated with the treatment of the finite differencing of the spatial advection term in the boltzmann solver. in the radial advection term a linear average of the centered difference and of the upwind difference is used with a weight factor that changes according to the ratio of the neutrino mean free path to some chosen length scale. mezzacappa et al. (,,) took the ratio of the mean free path to the local mesh width in order to construct the weighting. however, we found that this does not work well if the mesh width becomes of the same order as the mean free path but is much smaller than the scale height of the background. this is indeed the case in the inner optically thick region of our standard models with 105 radial zones. in a more recent version of his code, mezzacappa (private communication and ) defines the weighting factors by referring them to the neutrinospheric radius. in fig. [fig_16] the dashed line shows the number flux of muon-type neutrinos for model di, which used the prescription suggested by mezzacappa et al. (,,). the flux is slowly increasing with radius because the upwind differencing is given too large a contribution in the optically thick region, where the centered differencing should actually be chosen.
as demonstrated by the solid line in fig. [fig_16], the constancy of the flux, however, is recovered when the ratio of the mean free path to the distance up to the surface is chosen instead of the ratio of the mean free path to the local mesh width. yet, this issue is probably not very important for realistic calculations of the entire neutron star, since the mesh width is usually not much smaller than the typical scale height of the matter distribution. to finish, we comment briefly on a last model, in which we assumed the nucleon scattering to be isotropic in order to see to what extent the result changes. no significant effect was found by modifying the angular distribution of the dominant scattering reaction. in this paper an extensive comparison was made between a newly developed boltzmann neutrino transport code based on the discrete ordinate (s) method as described by mezzacappa & bruenn (,,), and a monte carlo transport treatment (janka, ), by performing time-dependent calculations of neutrino transport through realistic, static profiles of protoneutron star atmospheres under the assumption of spherical symmetry. in particular, the sensitivity of the results of the boltzmann solver to the employed numbers of radial, angular and energy grid points and to the treatment of the radial advection terms was investigated. the flux factor and eddington factor, which contain information about the angular distribution of the neutrinos in the neutrino-decoupling region, were also compared with the approximate treatment of this regime by a multi-energy-group flux-limited diffusion code (mgfld). the boltzmann and monte carlo results showed excellent agreement for observables such as the luminosity and the flux spectra, which are determined in those regions of the star where the neutrino-matter interactions are still very frequent and thus the neutrino distributions are still nearly isotropic. since the luminosity and the spectra are essentially conserved farther out, the spatial evolutions as well as the surface values exhibit this agreement, as long as the finite differencing of the boltzmann solver is done in a conservative way. some problems, however, were observed concerning the description of the angular distribution of the neutrinos by the boltzmann results in the semitransparent and transparent regimes. due to severe limitations of the number of angular grid points which can be used (typically only 6 to 10 angular bins between zero and 180 degrees are compatible with the steep increase of the computer time required for better resolution), the boltzmann code is not able to describe strong forward peaking of the neutrino distributions very well if a quadrature set is employed for the angular integration whose maximum angle cosine is significantly less than unity. in this case the boltzmann results _ underestimate _ the anisotropy in the optically thin region outside the average `` neutrinosphere '', and the exact limits for the flux factor and the eddington factor at large distances away from the neutrino source cannot be satisfactorily reproduced. this is in agreement with the trends also seen in recent results of messer et al. ( ) and is exactly opposite to the deficiencies of mgfld, which tends to _ overestimate _ the radial beaming of the radiation because flux limiters enforce the free-streaming limit when the optical depth of the stellar background becomes very low (janka, ).
since the dominant energy deposition rate by absorption of electron neutrinos and antineutrinos in the hot - bubble region of the supernova core is inversely proportional to the flux factor ( see eq .( [ eq-1 ] ) ) , which means that the energy transfer from neutrinos to the stellar plasma scales with the neutrino energy density ( or number density ) in the heating region , one can not exclude that the boltzmann solver may lead to an overestimation of the neutrino heating in disadvantageous situations , whereas mgfld yields rates which are definitely too low .for a set of typical post - core bounce situations , messer et al .( ) , however , claim on grounds of numerical tests that the net neutrino heating is converged with s and that the differences between s and s are minor .the problems may be more serious for neutrino reactions like neutrino - antineutrino annihilation which are sensitive to both the flux factor and the eddington factor of the neutrinos ( see janka ) .the description of the angular neutrino distribution by the boltzmann solver and thus the agreement with the highly accurate monte carlo data can be significantly improved by employing a variable angular mesh , even without increasing the total number of angular mesh points .the positions of the angular grid points must be moved at each time step and in every spatial zone such that they are clustering in the forward direction in the optically thin regime .we found that the energy spectra can be well calculated with about 15 energy mesh points .the fact that a reduction of the number of spatial grid points from more than 100 to only 15 in the neutron star atmosphere did not change the quality of the transport results means that the boltzmann code can be reliably applied to realistic simulations which involve the whole supernova core .moreover , it was demonstrated that the details of the interpolation between centered differencing and upwind differencing in the spatial advection term can affect the accuracy of the transport results .the excellent overall agreement of the results obtained with the boltzmann code and the monte carlo method confirms the reliability of both of them .good performance of the s method for solving the boltzmann equation of neutrino transport has been found before by mezzacappa & bruenn ( ,, ) and messer et al .( ) even for realistic dynamic situations .we hope that the work described here also helps to reveal possible deficiencies and weaknesses and thus will serve for further improvements of the numerical treatment of neutrino transport in supernovae and protoneutron stars .encouraging discussions and helpful suggestions by e. mller are acknowledged .we are grateful to a. mezzacappa for critical comments on a first version of the paper and for updating us with the most recent improvements of his code .this work was partially supported by the japanese society for the promotion of science ( jsps ) , postdoctoral fellowships for research abroad , and by the supercomputer projects ( no.97 - 22 and no.98 - 35 ) of the high energy accelerator research organization ( kek ) .t.j . was supported , in part , by the deutsche forschungsgemeinschaft under grantthe numerical calculations were mainly done on the supercomputers of kek .bethe , h. a. and wilson .j. r. 1985 , apj , 295 , 14 bionta , r. m. et al .1987 , phys .lett . , 58 , 1494 bowers , r. l. and wilson , j. r. 1982 , apjs , 50 , 115 bruenn , s. w. 1985 , apjs , 58 , 771 burrows , a. 1987 , apj , 318 , 57 burrows , a. 
1997 , to be published in the proceedings of the 18th texas symposium on relativistic astrophysics , held in chicago , 15 - 20 december 1996 , edited by olinto , a. , frieman , j. and schramm , d. , ( world scientific press , singapore , 1997 ) burrows , a. and goshy , j. 1993 , apj , 416 , 75 burrows , a. , hayes , j. and fryxell , b. a. 1995 , apj , 450 , 830 burrows , a. , sawyer , r. f. 1998a , phys ., c58 , 554 burrows , a. , sawyer , r. f. 1998b , submitted to phys .cernohorsky , j. and bludman , s. a. 1994 , apj , 433 , 250 dgani , r. and janka , h .- th .1992 , a&a , 256 , 428 hannestad , s. and raffelt , g. 1997 , to appear in apj herant , m. , benz , w. , hix , j. , freyer , c. and colgate , s. a. 1994 , apj , 435 , 339 hillebrandt , w. and wolff , r. g. 1985 , nucleosynthesis : challenges and new developments , edited by arnett , w. d. and truran , j. w. , ( university of chicago press , chicago , 1985 ) , p131 hirata , k. et al .1987 , phys .lett . , 58 , 1490 horowitz , c. j. and wehrberger , k. 1991 , phys .b , 266 , 236 janka , h .- th .1987 , nuclear astrophysics ; proceedings of the workshop , tegernsee , germany , apr . 21.24 . , 1987 , ( springer - verlag , berlin and new york , 1987 ) , p319 janka , h .- th .1991a , ph.d .thesis , technische universitt mnchen janka , h .- th .1991b , a&a , 244 , 378 janka , h .- th .1992 , a&a , 256 , 452 janka , h .- th .1993 , frontier objects in astrophysics and particle physics , proc . of the vulcano workshop 1992 , conf .40 , edited by giovannelli , f. and mannocchi , g. , ( sif , bologna , 1993 ) , p345 janka , h .- th . ,dgani , r. and van den horn , l. j. 1992 , a&a , 265 , 345 janka , h .- th . and hillebrandt , w. 1989a , a&as , 78 , 375 janka , h .- th . and hillebrandt , w. 1989b , a&a , 224 , 49 janka , h .- th . and keil , w. 1998 , supernovae and cosmology , proc . of a colloquium in honor of prof .g. tammann on the occasion of his 65th birthday , augst , switzerland , jun . 13 . , 1997 , edited by labhardt , l. , binggeli , b. and buser r. , ( astronomischesinstitut der universitt basel , basel , 1998 ) p7 janka , h .- th . , keil , w. , raffelt , g. and seckel , d. 1996 , phys .lett . , 76 , 2621 janka , h .- th . and mller , e. 1993 , frontiers of neutrino astrophysics , proc . of the international symposium on neutrino astrophysics , takayama / kamioka , japan , oct .19.22 . , 1992 , edited by suzuki , y. and nakamura , k. , ( universal academy press , tokyo , 1993 ) p203 janka ,h .- th . and mller , e. 1996 , a&a , 306 , 167 keil , w. , janka , h .- th . and mller , e. 1996 , apj lett ., 473 , l111 keil , w. , janka , h .- th . and raffelt , g. 1995 , phys . rev . ,d51 , 6635 lichtenstadt , i. , khokhlov , a. m. and wheeler , j. c. 1998 , submitted to apj mayle , r. and wilson , j. r. 1988 , apj , 334 , 909 messer , o. e. b. , mezzacappa , a. , bruenn , s. w. and guidry , m. w. 1998 , submitted to apj , astro - ph 9805276 mezzacappa , a. 1998 , j. computational and applied mathematics , in press mezzacappa , a. and matzner , r. a. 1989 , apj , 343 , 853 mezzacappa , a. and bruenn , s. w. 1993a , apj , 405 , 637 mezzacappa , a. and bruenn , s. w. 1993b , apj , 405 , 669 mezzacappa , a. and bruenn , s. w. 1993c , apj , 410 , 740 mezzacappa , a. et al . 1998 , apj , 495 , 911 miller , d. s. , wilson , j. r. and mayle , r. w. 1993 , apj , 415 , 278 misner , c. w. and sharp , d. h. 1964 , phys .rev , 136 , b571 myra , e. s. et al .1987 , apj , 318 , 744 prakash , m. et al .1997 , phys ., 280 , 1 raffelt , g. g. 
and seckel , d. 1995 , phys .d52 , 1780 raffelt , g. g. , seckel , d. and sigl , g. 1996 , phys . rev . ,d54 , 2784 reddy , s. , prakash , m. and lattimer , j. m. 1997 , apj ., 478 , 689 reddy , s. , prakash , m. and lattimer , j. m. 1998a , to appear in proc .second oak ridge symposium on nuclear and atomic and nuclear astrophysics reddy , s. , prakash , m. and lattimer , j. m. 1998b , phys ., d58 , 1309 sawyer , r. f. 1989 , phys ., c40 , 865 schinder , p. j. 1990 , apjs , 74 , 249 shimizu , t. , yamada , s. and sato , k. 1994 , apj lett . , 432 , l119 smit , j. m. , cernohorsky , j. and dullemond , c. p. 1997 , a&a , 325 , 203 sumiyoshi , k. and ebisuzaki , t. 1998 , parallel computing , 24 , 287 suzuki , h. 1990 , ph.d .thesis , university of tokyo suzuki , h. 1993 , frontiers of neutrino astrophysics , proc . of the international symposium on neutrino astrophysics , takayama / kamioka , japan , oct .19.22 . , 1992 , edited by suzuki , y. and nakamura , k. , ( universal academy press , tokyo , 1993 ) p219 suzuki , h. 1994 , physics and astrophysics of neutrinos , edited by fukugita , m. and suzuki , a. , ( springer - verlag , tokyo , 1994 ) , p763 tubbs , d. l. 1978 , apjs , 37 , 287 tubbs , d. l. and schramm , d. n. 1975 , apj , 201 , 467 wilson , j. r. 1982 , proc .illinois meeting on numerical astrophysics wilson , j. r. 1988 , private communication wilson , j. r. and mayle , r. w. 1988 , phys .163 , 63 wilson , j. r. and mayle , r. w. 1993 , phys .227 , 97 wilson , j. r. , mayle , r. w. , woosley , s. e. and weaver , t. 1986 , ann .n. y. acad ., 470 , 267 yamada , s. 1997 , apj , 475 , 720 | we have coded a boltzmann solver based on a finite difference scheme ( s method ) aiming at calculations of neutrino transport in type ii supernovae . close comparison between the boltzmann solver and a monte carlo transport code has been made for realistic atmospheres of post bounce core models under the assumption of a static background . we have also investigated in detail the dependence of the results on the numbers of radial , angular , and energy grid points and the way to discretize the spatial advection term which is used in the boltzmann solver . a general relativistic calculation has been done for one of the models . we find overall good agreement between the two methods . this gives credibility to both methods which are based on completely different formulations . in particular , the number and energy fluxes and the mean energies of the neutrinos show remarkably good agreement , because these quantities are determined in a region where the angular distribution of the neutrinos is nearly isotropic and they are essentially frozen in later on . on the other hand , because of a relatively small number of angular grid points ( which is inevitable due to limitations of the computation time ) the boltzmann solver tends to underestimate the flux factor and the eddington factor outside the ( mean ) `` neutrinosphere '' where the angular distribution of the neutrinos becomes highly anisotropic . as a result , the neutrino number density is overestimated in this region . this fact suggests that one has to be cautious in applying the boltzmann solver to a calculation of the neutrino heating in the hot - bubble region because it might tend to overestimate the local energy deposition rate . 
a comparison shows that this trend is opposite to the results obtained with a multi - group flux - limited diffusion approximation of neutrino transport , employing three different flux limiters , all of which lead to an underestimation of the hot - bubble heating . the accuracy of the boltzmann solver can be considerably improved by using a variable angular mesh to increase the angular resolution in the semi - transparent regime . + 3.0 cm |
the problem of computing a function from distributed information arises in many different contexts ranging from auctions and financial trading to sensor networks . in order to compute the desired target function , communication between the distributed users is required .if this communication takes place over a shared medium , such as in a wireless setting , the channel introduces interactions between the transmitted signals .this suggests the possibility of harnessing these signal interactions to facilitate the task of computing the desired target function .a fundamental question is therefore whether by jointly designing encoders and decoders for communication and computation , we can improve the efficiency of distributed computation . in this paper, we explore this question by considering computation of a function over a two - user multiple - access channel ( mac ) . in order to focus on the impact of the structural mismatch between the target and channel functions on the efficiency of computation , we ignore channel noise and consider only _ deterministic _ macs here .more formally , the setting consists of two transmitters observing a ( random ) variable and , respectively , and a receiver aiming to compute the function of these variables .the two transmitters are connected to the destination through a deterministic mac with inputs and output , where describes the actions of the channel .a straightforward achievable scheme for this problem is to separate the tasks of communication and computation : the transmitters communicate the values of and to the destination , which then uses these values to compute the desired target function .this requires the receiver to decode message bits .however , the mac itself also computes a function of the two inputs , creating the opportunity of taking advantage of the structure of to calculate .this is trivially possible when and are _ matched _, i.e. , compute the same function on their inputs . in such cases , performing the tasks of communication and computation jointly results in significantly fewer bits to be communicated . indeed , in the matched case only the bits describing the function value are recovered at the receiver .this could be considerably less than the bits resulting from the separation approach .naturally , in most cases the channel and the target function are _ mismatched_. the question is thus whether we can still obtain performance gains over separation in this mismatched situation . in other words , we ask if in general the natural computation done by the channel can be harnessed to help with the computation of the desired target function .we consider two cases : i ) _ one - shot communication _, where the mac is used only once , but the channel input alphabet and output alphabet are allowed to vary as a function of the domain of the target function . in this case ,performance is measured in terms of the scaling needed for the channel alphabets with respect to the computation alphabets , i.e. , how grow with .this is closer to the formulation in the computer science literature .ii ) _ multi - shot communication _ , where the channel alphabets are of fixed size , but the channel can be used several times . in this case , performance is measured in terms of computation rate , i.e. , how many channel uses are needed to compute the target function .this is closer to the formulation considered in information theory .
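the counting behind the matched - versus - separation comparison above is elementary and can be checked directly . the short python sketch below is purely illustrative ( the message size , the integer - sum and greater - than target functions , and the assumption of a perfectly matched channel are our own choices , not taken from this paper ) ; it contrasts the number of bits a receiver must resolve when it first recovers both messages with the number needed when the channel output already equals the target function value .

```python
import math

q = 16                                    # each user holds a message in {0, ..., q-1}
msgs = range(q)

# two hypothetical target functions (illustrative choices)
f_sum = lambda s1, s2: s1 + s2            # integer sum
f_gt  = lambda s1, s2: int(s1 >= s2)      # binary greater-than decision

def values_needed(f):
    """number of distinct values the receiver must resolve to learn f directly."""
    return len({f(s1, s2) for s1 in msgs for s2 in msgs})

# separation: recover the whole pair (s1, s2) first, then compute f locally
bits_separation = 2 * math.log2(q)

for name, f in [("integer sum", f_sum), ("greater-than", f_gt)]:
    bits_matched = math.log2(values_needed(f))   # a matched channel delivers f itself
    print(f"{name:>12s}: separation ~ {bits_separation:.1f} bits, "
          f"matched channel ~ {bits_matched:.2f} bits")
```

for the binary greater - than function the gap is largest : separation resolves the full message pair , while a matched channel would only have to deliver a single bit ; the results below show that for most mismatched channels this gap can nevertheless not be closed .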
as the main result of this paper, we show that separation between computation and communication is essentially optimal for most pairs of target and channel functions ( more precisely , among target functions with given domain and range and channel functions with given input alphabet and output alphabet , separation is optimal except for at most an exponentially small ( in domain size ) fraction of pairs ) .in other words , the structural mismatch between the functions and is in general too strong for joint computation and communication designs to yield any performance gains .we illustrate this with an example for one - shot communication .assume that the variables at the transmitters take on a large range of values , say , and the receiver is only interested in knowing if , i.e. , in a binary target function .then for most macs and one - shot communication , a consequence of theorems [ thm : identityoneuse ] and [ thm : balanced ] in section [ sec : main ] ( illustrated in example [ eg : greater ] ) is that the transmitters need to convey the _ entire _ values of to the destination , which then simply compares them ( it follows from the proofs that , for a given domain , the statements hold for all but an exponentially small fraction of channel functions ) .thus , even though the destination is interested in only a _ single _bit about , it is still necessary to transmit bits over the channel .more generally , theorems [ thm : identityoneuse ] and [ thm : balanced ] in section [ sec : main ] together demonstrate that for most target functions separation of communication and computation is asymptotically optimal for most macs .example [ eg : equality ] illustrates that only for special functions like an equality check ( i.e. , checking whether ) can we significantly improve upon the simple separation scheme . intuitively , this is because the structural mismatch between most target and channel functions is too large to allow for any possibility of direct computation of the target function value without resorting to recovering the user messages first .the technical ideas that enable these observations are based on a connection with results in extremal graph theory such as existence of complete subgraphs and matchings of a given size in a bipartite graph .these connections might be of independent interest . similarly , for multi - shot communication , where we repeatedly use a fixed channel , theorem [ thm : converse_n ] in section [ sec : main ] shows that for most functions , the computation rate is necessarily as small as that for the identity target function describing the entire variables at the destination . in other words ,separation of communication and computation is again optimal for most target and channel functions .
to prove this result, the usual approach using cut - set bound arguments is not tight enough .indeed , example [ eg : n ] shows that the ratio between the upper bound on the computation rate obtained from the cut - set bound and the correct scaling derived in theorem [ thm : converse_n ] can be unbounded .rather , the structures of the target and channel functions have to be analyzed jointly .these results show that , in general , there is little or no benefit in joint designs : computation - communication separation is optimal for most cases .we thus advocate in this paper that separation of computation and communication for multiple - access channels is not just an attractive option from an implementation point of view , but , except for special cases , actually entails little loss in efficiency .the problem of distributed function computation has a rich history and has been studied in many different contexts . in computer science , it has been studied under the branch of communication complexity , for example see and references therein .early seminal work by yao considered interactive communication between two parties . among several other important results ,the paper showed that the number of exchanged bits required to compute most target functions is as large as for the identity function . in the context of information theory ,distributed function computation has been studied as an extension of distributed source coding in .for example , krner and marton showed that for the computation of the finite - field sum of correlated sources linear codes can outperform random codes .this was extended to large networks represented as graphs in and references therein .randomized gossip algorithms have been proposed as practical schemes for information dissemination in large unreliable networks and were studied in the context of distributed computation in among several others . in most of these works , communication channels are represented as orthogonal point - to - point links .when the channel itself introduces signal interaction , as is the case for a mac , there can be a benefit from jointly handling the communication and computation tasks as illustrated in .function computation over macs has been studied in and references therein .there is some work touching on the aspect of structural mismatch between the target and the channel functions . in ,an example was given in which the mismatch between a linear target function with integer coefficients and a linear channel function with real coefficients can significantly reduce efficiency . in ,it was conjectured that , for computation of finite - field addition over a real - addition channel , there could be a gap between the cut - set bound and the computation rate . in , mismatched computation when the network performs linear finite - field operations was studied . to the best of our knowledge ,a systematic study of channel and computation mismatch is initiated in this work .the paper is organized as follows . in section [ sec : problem ] , we formally introduce the questions studied in this paper .we present the main results along with illustrative examples in section [ sec : main ] .most of the proofs are given in section [ sec : proofs ] .throughout this paper , we use sans - serif font for random variables , e.g. , .we use bold font lower and upper case to denote vectors and matrices , e.g. , and .all sets are typeset in calligraphic font , e.g. , .we denote by and the logarithms to the base and , respectively . 
a discrete , memoryless , deterministic two - user mac consists of two _ input alphabets _ and , an _ output alphabet _ , and a deterministic _ channel function _ .given channel inputs , the output of the mac is each transmitter has access to an independent and uniformly distributed _ message _ .the objective of the receiver is to compute a _target function _ of the user messages , see fig .[ fig : mac ] . formally , each transmitter of an _ encoder _ mapping the message into the channel input the receiver consists of a _ decoder _ mapping the channel output into an _ estimate _ of the target function .the _ probability of error _ is we point out that this differs from the ordinary communication setting , in which the decoder aims to recover both messages . instead , in the setting here , the decoder is not interested in , but only in the value of the target function . in the following, it will often be convenient to represent the target function and the channel by their corresponding matrices and , respectively .in other words , for , denote by the -fold use of the same channel matrix .in other words , the matrix describes the actions of the ( memoryless ) channel on the sequence ,x_2[1 ] ) , ( x_1[2],x_2[2 ] ) , \ldots , ( x_1[n],x_2[n ] ) \bigr)\ ] ] of length of channel inputs .a pair of target and channel functions is _-feasible _ , if there exist encoders and a decoder computing the target function over with probability of error at most .we will often consider pairs , in which case the definition of -feasibility allows for coding over uses of the channel . without loss of generality ,we assume that the target function has no two identical rows or two identical columns , since we could otherwise simply eliminate one of them . for ease of exposition , we will focus on the case to simplify notation , we assume without loss of generality that finally , to avoid trivial cases , we assume that all cardinalities are strictly bigger than one , and that .we denote by the collection of all target functions .similarly , we denote by the collection of all channels .the next example introduces several target functions and channels that will be used to illustrate results in the remainder of the paper .[ eg : fnmac ] we start by introducing four target functions . *let .the _ identity _target function is for all . since we will refer to the identity target function repeatedly, we will denote it by the symbol .* let .the _ equality _target function is for all .* let .greater - than _ target function is for all .* a _ random _ target function corresponds to the matrix being a random variable , with each entry chosen independently and uniformly over .the matrix is generated before communication begins and is known at both the transmitters and at the receiver .we now introduce three channels .* let and . the _ binary adder _mac is given by for all , and where denotes ordinary addition .* let and .boolean _ or _ boolean or _mac is for all . *a _ random _ channel corresponds to the matrix being a random variable , with each entry chosen independently and uniformly over .the matrix is generated before communication begins and is known at both the transmitters and at the receiver .the emphasis in this paper is on the asymptotic behavior for large function domains , i.e. , as .we allow the other cardinalities , and to scale as a function of .we use the notation for the relation and analogously for .similarly , we use for the relation and analogously for . 
finally , is short hand for for example , is equivalent is as stands for . ] to as . with slight abuse of notation , we will write to mean that for _ some _ finite . throughout this paper , we are interested in efficient computation of the target function over the channel . in theorems [ thm : identityoneuse ] and[ thm : balanced ] only a single use of the channel is permitted , and efficiency is expressed in terms of the required cardinalities and of the channel alphabets as a function of . in theorems [ thm : identitymultipleuse ] and [thm : converse_n ] , multiple uses of the channel are allowed , and efficiency is then naturally expressed in terms of the number of required channel uses as a function of .finally , all results are stated in terms of the fraction of channels ( in theorems [ thm : identityoneuse ] and [ thm : balanced ] ) or target functions ( in theorem [ thm : converse_n ] ) for which successful computation is possible .the proofs of all the theorems are based on probabilistic methods by using a uniform distribution over choices of channel or target functions .let be the identity target function introduced in example [ eg : fnmac ] , and let be an arbitrary channel matrix . consider any other target function over the same domain , but with possibly different range .assume is -feasible .then is also -feasible , since we can first compute ( and hence and ) over the channel and then simply apply the function to the recovered messages and . this architecture , separating the computation task from the communication task , is illustrated in fig .[ fig : separation ] . as a concrete example , let be the greater - than target function introduced in example [ eg : fnmac ] . the range of has cardinality two . on the other hand ,the identity function has range of cardinality .in other words , for large , the identity target function is considerably more complicated than the greater - than target function . as a result, one might expect that the separation - based architecture in fig .[ fig : separation ] is highly suboptimal in terms of the computation efficiency as described in section [ sec : problem ] . as the main result of this paper ,we prove that this intuition is wrong in most cases . instead, we show that for most pairs of target function and mac , separation of computation and communication is close to optimal .we discuss the single channel - use case in section [ sec : main_single ] , and the channel - uses case in section [ sec : main_multiple ] . in this section, we will focus on the case where the target function needs to be computed using just one use of the channel .the natural value of the upper bound on the probability of error is in this case . in other words, we will be interested in -feasibility .we start by deriving conditions under which computation of the identity target function over a mac is feasible .equivalently , these conditions guarantee that _ any _ target function with same domain cardinality can be computed over a mac by separating communication and computation as discussed above . 
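as a concrete illustration of both the definitions above and the separation architecture of fig . [ fig : separation ] , the following sketch ( the encoders , the decoder and the xor target are our own toy choices ) instantiates the binary adder mac of example [ eg : fnmac ] with two messages per user , evaluates the probability of error of a simple identity - recovering scheme , and then wraps that scheme so that it computes an arbitrary target function instead ; the wrapped scheme can never have a larger error probability than the identity scheme it is built on .

```python
import itertools

# binary adder mac of example [eg:fnmac]: x1, x2 in {0, 1}, y = x1 + x2 in {0, 1, 2}
def g(x1, x2):
    return x1 + x2

enc1 = lambda w: w                        # trivial encoders: send the message itself
enc2 = lambda w: w

def dec_id(y):                            # identity-recovering decoder; y = 1 is ambiguous
    return {0: (0, 0), 1: (0, 1), 2: (1, 1)}[y]

def error_probability(dec, f):
    """evaluate the definition of the probability of error for uniform messages."""
    pairs = list(itertools.product([0, 1], repeat=2))
    wrong = sum(dec(g(enc1(w1), enc2(w2))) != f(w1, w2) for w1, w2 in pairs)
    return wrong / len(pairs)

def separated(dec_id, f):
    """separation: first recover (w1, w2), then apply the target function locally."""
    return lambda y: f(*dec_id(y))

f_id  = lambda w1, w2: (w1, w2)           # identity target
f_xor = lambda w1, w2: w1 ^ w2            # an arbitrary target with the same domain

print("identity  :", error_probability(dec_id, f_id))                      # 0.25
print("xor (sep.):", error_probability(separated(dec_id, f_xor), f_xor))   # 0.0
```

the mis - decoded message pair happens to leave the xor value correct , so the wrapped scheme here is actually error - free ; in general its error probability is simply bounded by that of the underlying identity scheme .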
[thm : identityoneuse ] let be the identity target function , and assume [ eq : identity1 ] then , the proof of theorem [ thm : identityoneuse ] is reported in section [ sec : proofs_identity ] .recall that is the collection of all channels of dimension and range of cardinality .theorem [ thm : identityoneuse ] ( together with the separation approach discussed earlier ) thus roughly implies that any target function with a domain of cardinality can be computed over most macs of input cardinality of order at least and output cardinality of order at least .the precise meaning of `` most '' is that the fraction of channels in for which the statement holds goes to one as .a look at the proof of the theorem shows that the convergence to this limit is , in fact , exponentially fast . in other words ,the fraction of channels for which the theorem fails to hold is exponentially small in the domain cardinality .since the achievable scheme is separation based , this conclusion holds regardless of the cardinality of the range of the target function .similarly , since it is clear that the channel input has to have at least cardinality of order for successful computation , we see that the condition on in theorem [ thm : identityoneuse ] is not a significant restriction . what is significant , however , is the restriction that is at least of order .the next result shows that this restriction on is essentially also necessary .before we state the theorem , we need to introduce one more concept .consider a target function .for a set , consider for ] independent of .assume and [ eq : balanced0 ] let be any -balanced target function .then the proof of theorem [ thm : balanced ] is reported in section [ sec : proofs_balanced ] .recall that the notation is used to indicate that grows at most polynomially in assumption that is quite mild .thus , theorem [ thm : balanced ] roughly states that regardless of the value of , if the cardinality of the channel output is order - wise less than , then any balanced target function with a range of cardinality can not be computed over most macs . here the precise meaning of `` most '' is again that the fraction of channel matrices with at most channel outputs for which successful computation is possible converges to zero , and a look at the proof reveals again that this convergence is , in fact , exponentially fast in .comparing this to theorem [ thm : identityoneuse ] , we see that the same scaling of allows computation of a target function using a separation based scheme ( i.e. , by first recovering the two messages and then applying the target function to compute the estimate ) .thus , for the computation of a given balanced function over most macs , separation of computation and communication is essentially optimal .moreover , since most functions are balanced by , the same also holds for most pairs of target and channel functions .[ eg : greater ] let be the greater - than target function of domain introduced in example [ eg : fnmac ] .note that this target function has range of cardinality , i.e. , is binary .from example [ eg : balanced ] , we know that is balanced for any constant and large enough. 
thus theorem [ thm : balanced ] applies , showing that , for large and most macs , separation of computation and communication is essentially optimal .observe that the receiver is interested in only a _single _ bit of information about .nevertheless , the structure of the greater - than target function is complicated enough that , in order to recover this single bit , the decoder is essentially forced to learn itself . in other words , in order to compute the single desired bit , communication of message bits is essentially necessary .theorem [ thm : balanced ] is restricted to balanced functions . even though only a vanishingly small fraction of target functions is not balanced , it is important to understand this restriction .we illustrate this through the following example .[ eg : equality ] assume and [ eq : equality0 ] let be the equality target function introduced in example [ eg : fnmac ] .then the proof of the above statement is reported in section [ sec : proofs_equality ] .this result shows that the equality function can be computed over a large fraction of macs with output cardinality of order at least .this contrasts with output cardinality of order that is required for successful computation of balanced functions in theorem [ thm : balanced ] .recall from example [ eg : balanced ] that the equality target function is _ not _-balanced for any and large enough .thus , does not contradict theorem [ thm : balanced ] .it does , however , show that for unbalanced functions separation of communication and computation can be suboptimal .in this section , we allow multiple uses of the mac .our emphasis will again be on the asymptotic behavior for large function domains .however , in this section we keep the mac , and hence also the cardinalities of the channel domain and channel range , fixed .instead , we characterize the minimum number of channel uses required to compute the target function .we begin by stating a result for the identity target function introduced in example [ eg : fnmac ] .equivalently , this result applies to _ any _ target function ( with same domain cardinality ) by using a scheme separating communication and computation .let denote the entropy of a random variable .[ thm : identitymultipleuse ] fix a constant independent of , and assume that and are constant . let be the identity target function , and let be any mac .consider any joint distribution of the form , where is specified by the channel function .then , for any satisfying [ eq : identityconstraints ] is -feasible for large enough .the result follows directly from the characterization of the achievable rate region for ordinary communication over the mac , see for example ( * ? ? ?* theorem 14.3.3 ) .using separation , theorem [ thm : identitymultipleuse ] implies that , for large enough , any target function of domain cardinality can be reliably computed over uses of an mac as long as it satisfies the constraints in .the next result states that for most functions these restrictions on are essentially also necessary .[ thm : converse_n ] assume that as stands for . 
] as , that , and that and are constant .let be any mac .then , for any satisfying we must have for some joint distribution of the form , where is specified by the channel function .the proof of theorem [ thm : converse_n ] is presented in section [ sec : proofs_n ] .recall that denotes the collection of all target functions of dimension and range of cardinality .together , theorems [ thm : identitymultipleuse ] and [ thm : converse_n ] thus show that , for any deterministic mac and most target functions , the smallest number of channel uses that enables reliable computation is of the same order as that needed for the identity function .moreover , they show that for most such pairs , separation of communication and computation is essentially optimal even if we allow multiple uses of the channel and nonzero error probability . herethe precise meaning of `` most '' is that the statement holds for all but a vanishing fraction of functions .moreover , the proof of the theorem shows again that this fraction is , in fact , exponentially small in .[ eg : n ] let be the binary adder mac introduced in example [ eg : fnmac ] .define where the maximization is over all independent random variables taking values in the channel input alphabet . denotes the maximum entropy that can be induced at the channel output . for the binary adder mac , it follows from in theorem [ thm : identitymultipleuse ] that the identity function can be reliably computed over if the number of channel uses .on the other hand , theorem [ thm : converse_n ] shows that for most functions of domain and range cardinality , the smallest number of channel uses required for reliable computation is of order .thus , near - optimal performance can be achieved by separating computation and communication .in other words , even though the receiver is only interested in function bits , it is essentially forced to learn the message bits as well .this example also illustrates that the usual way of proving converse results based on the cut - set bound is not tight for most pairs .for example , ( * ? ? ?* lemma 13 ) shows that for reliable computation we need to have where denotes entropy .since has range of cardinality , we have for and as considered here , the tightest bound that can in the _ best case _ be derived via the cut - set approach is thus however , we know that the correct scaling for is . hence , the cut - set bound is loose by an unbounded factor as .we now prove the main results .the proofs of theorems [ thm : identityoneuse ] and [ thm : balanced ] are reported in sections [ sec : proofs_identity ] and [ sec : proofs_balanced ] respectively .the proof of in example [ eg : equality ] is presented in section [ sec : proofs_equality ] .finally , the proof of theorem [ thm : identitymultipleuse ] is covered in section [ sec : proofs_n ] .we start by presenting some preliminary observations in section [ sec : proofs_preliminaries ] .recall our assumption that no two rows or two columns of the target function are identical . as a result, can be computed over the channel with zero error , i.e. , is -feasible , if and only if there exists a submatrix ( with ordered rows and columns ) of such that any two entries and with satisfy , see fig . [fig : scheme ] . on the other hand, this is not necessary if a probability of error can be tolerated .as an example , consider the equality function . 
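before working through the equality example just introduced , it is worth checking numerically the quantity defined in example [ eg : n ] , the largest entropy that independent inputs can induce at the output of the binary adder mac . the coarse grid search below ( our own verification ) recovers the well known value of 1.5 bits at uniform inputs , strictly below log2 ( 3 ) , because the two input distributions can not be coordinated .

```python
import numpy as np

def H(p):                                  # entropy in bits of a probability vector
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

grid = np.linspace(0.0, 1.0, 201)          # candidate values for P(x1 = 1) and P(x2 = 1)
best = max((H(np.array([(1 - p) * (1 - r),             # y = 0
                        p * (1 - r) + (1 - p) * r,     # y = 1
                        p * r])),                      # y = 2
            p, r)
           for p in grid for r in grid)

print(best)    # ~ (1.5, 0.5, 0.5); log2(3) ~ 1.585 is out of reach for independent inputs
```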
for any positive , a trivial decoder that always outputs computes the equality function with probability of error .as , the probability of error is eventually less than .this motivates the following definition .given a target function , a function with is said to be a _-approximation _ of if there exist two mappings and such that in words , the target function is equal to the approximation function for at least a fraction of all message pairs . as before , a -approximation function can be represented by a matrix .we have the following straightforward relation .[ lemma : translation ] consider a target function and mac .if is -feasible , then there exists a -approximation of such that is -feasible .let and be the encoders and the decoder achieving probability of error at most for .let be the range of , and set for all .then is a -approximation of , and is -feasible .we will make frequent use of the chernoff bound , which we recall here for future reference .let be independent random variables , and let by markov s inequality , assume furthermore that each takes value in , and set then , for any , see for example ( * ? ? ?* theorem 4.1 , theorem 4.2 ) .a scheme can compute the identity target function with zero error if and only if the channel output corresponding to any two distinct pairs of user messages is different . in what follows , we will show that such a scheme for computing the identity target function over _ any _ mac exists whenever the elements of take at least distinct values in .we then argue that , if the assumptions on and in are satisfied , a _ random _mac ( as introduced in example [ eg : fnmac ] ) of dimension has at least distinct entries with high probability as .together , this will prove the theorem .note that implies that and for large enough .we will prove the result under these two weaker assumptions on and . since we can always choose to ignore part of the channel inputs, we may assume without loss of generality that . in order to simplify notation, we suppress the dependence of and on in the remainder of this and all other proofs .given an arbitrary mac , create a bipartite graph as follows ( see fig .[ fig : identity ] ) .let the vertices of on each of the two sides of the bipartite graph be .now , consider a value appearing in .this corresponds to a collection of vertex pairs such that .pick exactly one arbitrary such vertex pair and add it as an edge to .repeat this procedure for all values of appearing in .thus , the total number of edges in the graph is equal to the number of distinct entries in the channel matrix .observe that any complete bipartite subgraph of the bipartite graph corresponds to a computation scheme for the identity function . indeed , by construction each edge in corresponds to a different channel output .hence by encoding and as the left and right vertices , respectively , of the subgraph , we can uniquely recover from the channel output .this problem of finding a bipartite subgraph in the bipartite graph is closely related to the zarankiewicz problem , see for example ( * ? ? ?* chapter vi ) .formally , the aim in the zarankiewicz problem is to characterize , the smallest integer such that every bipartite graph with vertices on each side and edges contains a subgraph isomorphic to . the kővári - sós - turán theorem , see for example ( * ? ? ?* theorem vi.2.2 ) , states that using , we now argue that the bipartite graph defined above contains a complete bipartite subgraph if the number of edges in is at least .
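before completing the counting argument , note that the construction just described is easy to experiment with . the sketch below ( toy alphabet sizes of our own choosing ) picks one cell of the channel matrix per distinct output value , builds the corresponding bipartite graph , and searches by brute force for a complete bipartite subgraph , which is exactly a zero - error scheme for the identity target function .

```python
import itertools, random

def identity_scheme_exists(G, k):
    """pick one cell per distinct output value of the channel matrix G, build the
    bipartite graph of the proof, and brute-force search for a complete bipartite
    subgraph with k vertices on each side, i.e. a zero-error identity scheme for
    k messages per user.  only sensible for toy sizes."""
    chosen = {}
    for x1, row in enumerate(G):
        for x2, y in enumerate(row):
            chosen.setdefault(y, (x1, x2))           # one edge per output value
    edge_set = set(chosen.values())

    for R in itertools.combinations(range(len(G)), k):
        for C in itertools.combinations(range(len(G[0])), k):
            if all((x1, x2) in edge_set for x1 in R for x2 in C):
                return True, (R, C)                  # encoders: message i -> R[i], C[i]
    return False, None

random.seed(0)
nx, ny, k = 9, 200, 3                                # toy input/output alphabet sizes
G = [[random.randrange(ny) for _ in range(nx)] for _ in range(nx)]
print(identity_scheme_exists(G, k))
```

for a random channel whose entries take many distinct values the search typically succeeds , in line with the intuition behind theorem [ thm : identityoneuse ] ; shrinking the output alphabet quickly makes it fail .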
by definition, contains if there are at least edges in . by , using the inequality for , which follows from the observation that the left - hand side evaluates to zero at and is monotonically increasing for . substituting into and using the definition of yields thus concluding the proof .consider again the coupon collector problem as in appendix [ sec : appendix_distinct ] , and let denote the number of rounds required to collect distinct coupons .then the event that is at least is equivalent to being at most . following the proof of lemma [ lemma : distinctentries ] , we have from the chernoff bound that where the last inequality follows since by assumption . | we consider the problem of distributed computation of a target function over a two - user deterministic multiple - access channel . if the target and channel functions are matched ( i.e. , compute the same function ) , significant performance gains can be obtained by jointly designing the communication and computation tasks . however , in most situations there is mismatch between these two functions . in this work , we analyze the impact of this mismatch on the performance gains achievable with joint communication and computation designs over separation - based designs . we show that for most pairs of target and channel functions there is no such gain , and separation of communication and computation is optimal .
quantum information theory explores the potential of quantum mechanics in order to process and transmit information . a two - level system , a qubit , constitutes the unit resource for storing information .similarly , a unitary operation on one qubit can be regarded as a basic unit of information processing . in this paperwe explore the possibility of storing quantum dynamics , in particular unitary transformations , in the state of a quantum system , in a manner that the transformation can be performed at a later time and on another system almost perfectly. the problem we address can be well - posed in the context of quantum circuitry .we will say that the _ program _ state of some _ program register _ stores the one - qubit transformation , if some `` fixed '' protocol employing the state is able to perform on an arbitrary _ data _ state of a single qubit _ data register_. here , a `` fixed '' protocol means that the manipulation of the joint state _u [ estat ] does not require knowing the operation nor the state .a device able to transform state ( [ estat ] ) into uu^_,u , [ universal ] where is just some residual state , is known as a _ programmable _ quantum gate . in a similar fashion as modern ( classical ) computers take both the program to be executed and the data to be processed as input bit strings , a programmable or universal quantum gate is a device whose action on an arbitrary data state is determined by the program state .nielsen and chuang analyzed the possibility of constructing such a programmable quantum gate .its total dynamics was described by means of a fixed unitary operator g according to g [ ] = ( u ) , [ perfect ] where only pure data states were considered because this already warranties the mixed state case .notice that the program state and the residual state which was showed to be independent of can always be taken to be pure , by extending the program register with an ancillary system if needed .nielsen and chuang proved that any two inequivalent operations and require orthogonal program states , that is .thus , in order to perfectly store a given operation from some set , a vector state from an orthonormal basis has to be used . the operation can then be implemented by , say , measuring the program register to obtain the value , and gauging correspondingly some convenient experimental device . since the set of unitary operations is infinite ,their result implied that no universal gate can be constructed using finite resources , that is , with a finite dimensional program register .the aim of this work is to present programmable quantum gates with a finite program register , and thus physically feasible .a finite register turns out to be sufficient if a degree of imperfection , no matter how small , is allowed in performing the unkwon operation .we will construct a family of _ probabilistic _ programmable quantum gates , that is programmable quantum gates which work with a given prior probability of a successful implementation of .such a one - qubit gate with was already described in . herewe will achieve any arbitrarily small .we will also consider _ approximate _ programmable quantum gates , which perform an operation very similar to the desired , that is for some transformation fidelity .the second main result is a lower bound on the dimension of the program register of the programmable gate in terms of its degree of imperfection .it implies that the orthogonality result of is robust. 
we will discuss its implications in the context of secure secret computation .finally , operations stored in a quantum state can be teleported .this leads to a new scheme for quantum remote control that only requires unidirectional communication .we start by showing how to store and reimplement , in an imperfect but feasible fashion , an arbitrary one - qubit unitary operation of the form u _ ( i_z ) , [ unialpha ] where .notice that a general one - qubit operation can be obtained by composing three operations of the form of eq .( [ unialpha ] ) with some fixed unitary operations , for instance as .let us consider the state ( e^i+e^-i ) , which someone , say alice , can prepare by applying on a qubit in the standard state .suppose she also prepares , along with , another qubit in some arbitrary state and provides bob , who does nt know nor the complex coefficients and , with the two qubits in state .alice challenges now bob to obtain the state .what bob can do in order to implement the unknown with some probability of success is to perform a c - not operation taking the data qubit in state as the control and the program qubit in state as the target .this will constitute the basic part of our simplest programmable quantum gate .recalling that the c - not gate , i + _ x , permutes the and states of the target ( second qubit ) only if the control ( first qubit ) is in state , it is easy to check that the two - qubit state is transformed according to ( u _ + u_^ ) .therefore , a projective measurement in the basis of the program register will make the data qubit collapse either into the desired state or into the wrong state , with each outcome having prior probability .that is , we have already constructed a probabilistic programmable quantum gate with error rate ( see figure ( [ fig1 ] ) ) .notice that a single qubit has been sufficient for alice to store an arbitrary unitary , i.e. , one from an infinite set , although its recovery only succeeds with probability .if bob obtains instead of , then not only he fails at performing the wished operation , but in addition he does no longer have the initial data state .how can we construct a more efficient programmable gate ?notice that in case of failure , a second go of the previous gate can correct into . indeed , bob needs only apply the gate of fig .( [ fig1 ] ) to , inserting a new program state , namely , which alice can prepare by performing twice the operation on .therefore , if alice supplies the state to bob , he can perform the operation with probability .figure ( [ fig2 ] ) displays a more compact version of this second probabilistic programmable gate , which requires a two - qubit program register and has a probability of failure . in case of a new failure ,the state of the system becomes .bob can insert again this state , together with state , into the elementary gate .if bob has no luck and keeps on obtaining failures , he can try to correct the state as many times as he wishes , provided that the state is available for the attempt . therefore , for any , the -qubit state can be used to implement the transformation with probability .resembles that used in to implement a non - local unitary operation . in the present contextall intermediate measurements and conditional actions can be substituted by a single unitary operation , as described in figures 2 and 3 . in this sectionwe have first presented the several - measurement version for pedagogical reasons . 
] the corresponding probabilistic programmable gate ( see figure ( [ fig3 ] ) ) , consists of the unitary transformation of into ( u _+ u_^(2^n-1 ) ) [ nqubits ] and of a posterior measurement of the program register ( either in state or , ) .its failure probability , , decreases exponentially with the size of the program register .it is interesting to look at how long the program needs to be , on average , until bob succeeds to perform with certainty . with probability he succeeds after using a single - qubit program ; with probability a two - qubit program is sufficient ; etc .the average length of the required program is thus |n= _[ average ] that is , a two qubit register is sufficient , on average , to store an arbitrary so that it can be performed with certainty .a probabilistic programmable gate may either succeed or fail , depending on the result of the final measurement on the program register .an approximate gate , instead , performs a transformation only similar to the desired one , but it is always successful .suppose we want to apply the unitary transformation on but instead another ( general ) transformation is actually performed .a possible way of quantifying how similar these two operations are is by applying both operations to the same state , and then computing the fidelity between the two transformed states , and . when averaged over all possible this reads f(,u ) du^()u .[ fid ] suppose now that after the transformation ( [ nqubits ] ) of the previous probabilistic gate we decide to ignore the state of the -qubit program register .then the programmable gate works approximately , implementing an operation , where .the average fidelity of performance ( [ fid ] ) satisfies .so far we have explicitly constructed programmable quantum gates that perform , either probabilistically or approximately , some class of one - qubit unitary operations .but the previous protocols also allow alice to codify with finite resources any unitary operation acting on an arbitrary number of qubits .indeed , as already mentioned , alice can codify an arbitrary one - qubit unitary operation using only s , and then also combine several of those with c - not gates to obtain . in view of these results, one may wonder whether quasi - perfect programmable gates can be applied , in the context of quantum cryptography and computation , to secretly compute some unitary operation , for instance a precious algorithm , on some initial -qubit state .the idea is that alice gives a program state and the data state to bob , who operates a programmable quantum gate array but ignores .bob is required to compute , but alice does not want bob to know what program he is running on his quantum computer . if the gate is perfect as in ( [ perfect ] ) , then bob can in principle distinguish from any other program state , since they are orthogonal .therefore he can , imperceptibly to alice , make an illegal copy of the program , perform the required transformation using the original program state , and give the computed state to alice . 
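the elementary gate and its repeat - until - success cascade described above are easy to simulate exactly . before examining what an imperfect gate implies for this copying scenario , the numpy sketch below ( our own reconstruction , assuming the stored operation is u_alpha = exp ( i alpha sigma_z ) and the program state is ( e^{i alpha}|0 > + e^{-i alpha}|1 > ) / sqrt ( 2 ) , which is how we read the stripped formulas above ) applies a c - not with the data qubit as control and the program qubit as target and then measures the program qubit : outcome |0 > leaves the data in u_alpha |psi > , outcome |1 > in its inverse , each with probability one half .

```python
import numpy as np

alpha = 0.7                                      # the angle alice wants to store
psi   = np.array([0.6, 0.8j])                    # arbitrary normalised data qubit

U    = np.diag([np.exp(1j * alpha), np.exp(-1j * alpha)])                # exp(i*alpha*sigma_z)
prog = np.array([np.exp(1j * alpha), np.exp(-1j * alpha)]) / np.sqrt(2)  # program qubit

state = np.kron(psi, prog)                       # |data> (x) |program>
cnot  = np.eye(4)[[0, 1, 3, 2]]                  # c-not with the data qubit as control
state = cnot @ state

P0 = np.kron(np.eye(2), np.diag([1, 0]))         # program qubit found in |0>
P1 = np.kron(np.eye(2), np.diag([0, 1]))         # program qubit found in |1>

for P, label, target in [(P0, "success", U @ psi), (P1, "failure", U.conj().T @ psi)]:
    branch = P @ state
    prob   = np.vdot(branch, branch).real
    data   = branch.reshape(2, 2).sum(axis=1)    # surviving column = unnormalised data state
    data  /= np.linalg.norm(data)
    same   = abs(abs(np.vdot(target, data)) - 1) < 1e-12
    print(f"{label}: probability {prob:.2f}, data state as expected: {same}")

# repeat-until-success with fresh programs for alpha, 2*alpha, 4*alpha, ...
print("average program length:", sum(n * 0.5**n for n in range(1, 60)), "qubits")
```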
however , when the gate is slightly imperfect , different programs need no longer be orthogonal .now bob can not determine perfectly well which program he is to run in his computer .if he tries to estimate , then in addition he will necessarily modify the program state , which will result in an improper performance of the gate and then alice who may have simply been testing bob s integrity can , in principle , detect it .that is , it is not possible for bob to copy , even in an approximate form , the program state and at the same time perform the operation alice has commended him with , without this being detectable .we next derive lower bounds on the size of the program register of any quasi - perfect ( i.e. with ) programmable gate , and on the degree of orthogonality between its program states corresponding to similar operations , in terms of its failure parameter .these bounds represent a severe limitation on the degree of reliability that a security scheme based on the above ideas can offer .they indicate that the program vectors and are significantly non - orthogonal ( that is , non - distinguishable ) only when the imperfection parameter makes them effectively equivalent .let us consider a generic imperfect programmable gate acting on a system , so that it can be programmed to perform some or all .it can be described by a unitary operator according to g _ [ ] = ( u ) + , [ rotllo ] where the wrong state is not required to fulfill any requirement for an _ approximate _ gate , whereas it must satisfy ( here is the null vector of the data register ) for any two inequivalent operations and and any two data states and for a _probabilistic _ gate .this last condition is necessary for bob , who ignores both and the data , to be able to know whether the transformation has been successfully performed by measuring the program register .we first notice that the state only depends on through a contribution of order , where from now on for approximate gates and for probabilistic ones . indeed , for any program state , the scalar product of ( [ rotllo ] ) corresponding to any two data states and reads = + o(^p ) , from which , by fixing and considering any , we find that .that is = + o(^ ) . keeping this in mind , we now consider , for any given , the scalar product of ( [ rotllo ] ) corresponding to two unitary operations and , which turns out to read = u^v + o(^ ) .[ productuv ] the scalar product does not depend on . therefore the dependence of on has to be of order , at most .suppose and are very close . that is , u^ v = e^il = i + il - l^2 + o(l^3 ) , where all the eigenvalues of the traceless ( ) hermitian operator , are very small .the largest variation of in ( [ productuv ] ) for two different vectors is .we introduce a distance on the set of operators on , d_u , v ( [ ( u - v)^(u - v)])^. then .subtracting ( [ productuv ] ) for from itself for we conclude that , which finally implies || .[ bound1 ] this bound says that in a programmable quantum gate with a small error rate , two transformations and will have program states with significant overlap ( states and are indistinguishable ) only if and are also very close to each other , .that is , only if and process the data very similarly , then a dishonest bob is unable to distinguish between the corresponding programs . 
the previous result can also be used to derive a lower bound on the dimension of the program register of an imperfect programmable gate with error .for simplicity we will assume that the gate can only be programmed to perform the one - qubit transformations from eq .( [ unialpha ] ) . consider a discrete subset of such transformations , namely those with , , and apply the previous bound to and .we obtain || k^pm , t0 , [ bo ] where is some unimportant constant .we need the following lemma . * lemma : * let be a set of ( normalized ) vectors such that their scalar products satisfy for . then the vectors are linearly independent . _proof : _ the rank of the set is equal to the rank of the matrix , , which has ones in all diagonal entries .the modulus of any entry of the matrix is smaller than .let be a normalized eigenvector of , with eigenvalue .then =[(n\!-\!i){\mbox{}}]_i = \sum_{j=1}^q \nu_{ij } [ { \mbox{}}]_j ] denotes the vector component .let be such that | \geq |[{\mbox{}}]_j|| \varphi \rangle ] ||\varphi \rangle ] is of the order ( random walk ) .this suggests that in order for the set to have rank close to , it is sufficient that , instead of as required in the lemma. let us set .then ( [ bo ] ) becomes , and this means , because of the lemma , that at least an -dimensional hilbert is required to contain .that is , the program register must consists of at least qubits .notice that the previous remark suggests that this bound may be reduced to qubits , in which case the probabilistic programmable gate of figure ( [ fig3 ] ) would require , asymptotically , the smallest possible program register .for a general programmable gate implementing some or all transformations it is straightforward to obtain a similar lower bound on the dimensions of the program register , which also says that its number of qubits grows proportionally to the logarithm of the inverse of the rate error , , for some positive constant .we have shown how to store an arbitrary unitary transformation in the pure state of a finite quantum register , in such a way that it can be performed quasi - perfectly at a later time . once the unknown operation has been encoded in a quantum state ,it can of course be processed using _ any _ known state manipulation technique .an interesting application of our results is in the context of quantum remote control . as introduced by huelga __ in , let us suppose bob wishes to manipulate some data state according to an unknown operation alice , a distant party , can implement by using some device .if the state of alice s device can not be teleported , then the optimal protocol is to use standard teleportation to send the data from bob to alice , who will use the device to process it and will teleport it back to bob .but we now know how to efficiently store operations in quantum states , which can then be teleported .this leads to a new scheme for quantum remote control : alice stores the operation in a quantum state and applies standard teleportation to send it to bob . 
remarkably enough , in this protocol only one - way communication is required in addition to entanglement in order for alice to remotely manipulate bob s data , as opposed to the two - way classical communication of the scheme presented in .this implies that the operation can be teleported independently of whether bob s data state is already available .more specifically , we find that a -qubit program can be teleported from alice to bob by using up ebits of entanglement and by sending classical bits from alice to bob ( recall that the classical communication cost of quantum teleportation of equatorial states , as , requires only one bit per state ) . eq .( [ average ] ) implies that , on average , ebits of entanglement between alice and bob , and classical bits from alice to bob are sufficient for alice to teleport an arbitrary to bob , so that he , ignoring , can perform it with certainty .the storage of quantum transformations turns out to be useful in several other contexts .if state estimation techniques are applied to the system that stores an unknown operation , then we obtain the scheme for estimation of quantum dynamics recently exploited by acín _et al _ .cirac _ et al _ have recently explored the possibilities of encoding operations in quantum states in the context of non - local transformations of a composite system .in particular , they have shown how to implement non - local unitary transformations using less than one ebit of entanglement . in an extension of their work ,dür _ et al _ have considered alternative schemes for storing and manipulating quantum transformations .we have presented a scheme for storing unitary operations in the quantum state of a finite dimensional program register .the operations can be implemented at a later time with some associated error , which decreases exponentially with the number of qubits of the program register .we have presented both probabilistic and approximate programmable quantum gates , and have discussed the possibility of using them to make a secret computation on a public quantum computer .finally , a unidirectional scheme for remote manipulation of quantum states has also been put forward .we thank w. dür for useful comments .g.v . acknowledges a marie curie fellowship ( hpmf - ct-1999 - 00200 , european community ) .this work was also supported by the sfb project 11 on `` control and measurement of coherent quantum systems '' ( austrian science foundation ) , the institute for quantum information gmbh and the project equip ( contract ist-1999 - 11053 , european community ) .
| we show how quantum dynamics ( a unitary transformation ) can be captured in the state of a quantum system , in such a way that the system can be used to perform , at a later time , the stored transformation almost perfectly on some other quantum system . thus programmable quantum gates for quantum information processing are feasible if some small degree of imperfection is allowed . we discuss the possibility of using this fact for securely computing a secret function on a public quantum computer . finally , our scheme for storage of operations also allows for a new form of quantum remote control . |
statically stable density stratified shear layers are ubiquitous in the atmosphere and oceans .such shear layers can become hydrodynamically unstable , leading to turbulence and mixing in geophysical flows .turbulence and mixing strongly influence the atmospheric and oceanic circulation - processes known to play key roles in shaping our weather and climate .hydrodynamic instability is characterized by the growth of wavelike perturbations in a laminar base flow .such perturbations can grow at an exponential rate , transforming the base flow from a laminar to a turbulent state . in the present study, we will theoretically investigate the underlying mechanism(s ) leading to modal ( exponential ) , as well as non - modal , growth of small wavelike perturbations in idealized homogeneous and stratified shear flows .the classical method used to determine flow stability is the normal - mode approach of linear stability analysis . under the normal - mode assumption , a waveform can grow or decay but can not deform .normal - mode perturbations are added to the laminar background flow profile , followed by linearizing the governing navier stokes equations . for inviscid , density stratified shear flows, the normal - mode formalism leads to the celebrated taylor - goldstein equation , derived independently by and .the taylor - goldstein equation is an eigenvalue problem which calculates the wave properties like growth - rate , phase - speed , and eigenfunction associated with each normal - mode . for stability analysis ,the range of unstable wavenumbers and the wavenumber corresponding to the fastest growing normal - mode are of prime interest . in many practical scenarios ,the taylor - goldstein equation is found to accurately capture the onset of instability , and it also provides a first order description of the developing flow structures . the normal - mode approach , however , has shortcomings .firstly , it provides limited insight into the physical mechanism(s ) responsible for hydrodynamic instability .the answer to why an infinitesimal perturbation vigorously grows in an otherwise stable background flow is provided in the form of mathematical theorems - rayleigh - fj s theorem for homogeneous flows , and miles - howard criterion for stratified flows .it is often difficult to form an intuitive understanding of shear instabilities from these theorems . sincelinear instability is the first step towards understanding the more complicated and highly elusive non - linear processes like chaos and turbulence , it is desirable to explore alternative routes in order to provide additional insight . in this context sir g. i. taylorwrites : `` it is a simple matter to work out with the equations which must be satisfied by waves in such a fluid , but the interpretation of the solutions of these equations is a matter of considerable difficulty '' .a second drawback of the normal - mode approach is the normal - mode assumption itself . the extensive work by , , and others have shown that shear allows rapid non - modal transient growth due to non - orthogonal interaction between the modes . developed the `` generalized stability theory '' for obtaining the optimal non - modal growth from the singular value decomposition of the propagator matrix of a linear dynamical system .lord rayleigh was probably the first to inquire about the mechanism behind shear instabilities , and conjectured the possible role of wave interactions .lord rayleigh s hypothesis was later corroborated by sir g. i. 
taylor while he was theoretically studying three - layered flows in constant shear .he explained the mechanism to be as follows : `` thus the instability might be regarded as being due to a free wave at the lower surface forcing a free wave at the upper surface of separation when their velocities in space coincide '' .early explorations of the wave interaction concept have been reviewed in .the first mathematically concrete mechanistic description of stratified shear instabilities was provided by .using idealized velocity and density profiles , holmboe postulated that the resonant interaction between stable propagating waves , each existing at a discontinuity in the background flow profile ( density profile discontinuity produces interfacial gravity waves and vorticity profile discontinuity produces vorticity waves ) , yields exponentially growing instabilities .he was able to show that rayleigh / kelvin - helmholtz ( hereafter , `` kh '' ) instability is the result of the interaction between two vorticity waves ( also known as rayleigh waves ) .moreover , holmboe also found a new type of instability , now known as the `` holmboe instability '' , produced by the interaction between vorticity and interfacial gravity waves .bretherton , a contemporary of holmboe , proposed a similar theory to explain mid - latitude cyclogenesis .he hypothesized that cyclones form due to a baroclinic instability caused by the interaction between two rossby edge waves ( vorticity waves in a rotating frame of reference ) , one existing at the earth s surface and the other located at the atmospheric tropopause .the theories proposed by holmboe and bretherton have been refined and re - interpreted over the years , see . as reviewed in , resonant interaction between two edge waves in an idealized homogeneous or stratified shear layeroccurs when these waves attain a phase - locked state , i.e. they are at rest relative to each other . maintaining this phase - locked configuration , the waves grow equally at an exponential rate .there is also an alternative description of shear instabilities through wave interactions which was put forward by .he introduced the concept of `` negative energy waves '' , which are stable modes , and their introduction into the flow causes a decrease in the total energy .whether a given wave mode has positive or negative energy depends on the frame of reference used .instability results when negative energy mode resonates with a positive energy mode , and this occurs when the waves have the same phase - speed and wavelength .this can be identified by the crossing of dispersion curves for the positive and negative energy modes in a frequency - wavenumber diagram .yet another mechanistic picture of shear instabilities was proposed by lindzen and co - authors ( summarized in ) .this theory , known as the `` over - reflection theory '' , proposes that under the right flow configuration , over - reflection of waves can continuously energize an advective `` orr process '' which is finally responsible for the perturbation growth . the present paper focusses on studying shear instabilities in terms of wave interactions .from the recent review by it can be inferred that the wave interaction theory is in its early phases of development .in fact , there is no strong theoretical justification behind the argument that two progressive waves lock in phase and resonate , thereby producing exponentially growing instabilities in a shear layer .many questions in this context remains unanswered , e.g. 
( a ) is there a condition which determines whether two waves will lock in phase ?( b ) starting from an initial condition , how long does it take for the waves to get phase - locked ?( c ) if exponentially growing instabilities occur after phase - locking , then what kind of instabilities ( if any ) occur prior to phase - locking ?a point worth mentioning here is that the phenomenon of phase - locking occurs in diverse problems ranging from biology to electronics .in fact `` synchronization '' is an area of study ( in dynamical systems theory ) specifically dedicated to this purpose .the history of synchronization goes back to the century when the famous dutch scientist christiaan huygens reported his observation of synchronization of two pendulum clocks .we suspect that there may be an analogy between the fundamental aspects of synchronization theory and that of wave interaction based interpretation of shear instabilities .if exploited successfully , this analogy could be beneficial in answering some of the key questions concerning the origin and evolution of shear instabilities . in recent years, heifetz and co - authors have extensively studied the interaction between rossby edge waves .they performed a detailed analysis of non - modal instability and transient growth mechanisms in idealized barotropic shear layers .modal and non - modal growth occurring due to the interactions between surface gravity waves and a pair of vorticity waves ( existing at the interfaces of a submerged piecewise linear shear layer ) was recently studied by .the goal of our paper is to frame a theoretical model of shear instabilities , which could be applied to different idealized ( broken - line ) shear layer profiles ( e.g. the classical profiles studied by rayleigh , taylor or holmboe ) .the model should be sufficiently generalized in the sense that the constituent wave types need not be specified _ a priori_.the model should be able to capture the transient dynamics , hence should not be limited to normal - mode waveforms .the outline of the paper is as follows . in [ sec : linlin ] we provide a theoretical background of linear progressive waves , and focus on two types of waves - vorticity waves and internal gravity waves .the wave theory in this section is more generalized than that usually reported in the literature . in [ sec : waveint ] we investigate the mechanism of interaction between two progressive waves .we undertake a dynamical systems approach to better understand the wave interaction problem , especially the resonant condition .since the wave interaction formulation is not restricted to normal - mode type instabilities , we investigate non - modal / transient growth processes in [ sec : matstuff ] . finally in [ sec : inst ] we use wave interactions to analyze three well known types of shear instabilities - kh instability ( resulting from the interaction between two vorticity waves ) , taylor - caulfield instability ( resulting from the interaction between two interfacial internal gravity waves ) , and holmboe instability ( resulting from the interaction between a vorticity wave and an interfacial internal gravity wave ) .consider a fluid interface existing at in an unbounded , inviscid , incompressible , two dimensional ( - ) flow .moreover , let the interface be perturbed by an infinitesimal displacement in the direction , given as follows : } \}. 
\label{eq : eta}\ ] ] this displacement manifests itself in the form of stable progressive wave(s ) , the amplitude and phase of which are and , respectively .for example , when this interface is a vorticity interface , it produces a vorticity wave .likewise , two oppositely traveling gravity waves are produced in case of a density interface .we have assumed the interfacial displacement ( or the wave ) to be monochromatic , having a wavenumber .moreover , the interface satisfies the _ kinematic condition _ - a particle initially on the interface will remain there forever .the linearized kinematic condition is given by where is the background velocity in the direction and is the -velocity at the interface .we prescribe the latter to be as follows : }\}. \label{eq : wi}\ ] ] here is the amplitude and is the phase of .the interfacial displacement creates vorticity perturbation _ only _ at the interface , the perturbed velocity field is irrotational everywhere else in the domain .thus in the above equation is the perturbation streamfunction . assuming \} ] .the frequency and the growth - rate of a wave are respectively defined as and ( overdot denotes ) . using these definitions in ( [ eq : singwave ] ) , we get where .( [ eq:1freq ] ) shows that the frequency of a wave consists of two components - ( i ) the doppler shift , and ( ii ) the _ intrinsic frequency _ , where .the phase - speed of the wave is found to be where denotes the _ intrinsic phase - speed_. noting that a wave in isolation can not grow or decay on its own , ( [ eq:13 ] ) demands that .therefore for a stable wave , the vertical velocity field at the interface has to be in quadrature with the interfacial deformation .another point worth mentioning is that an isolated wave can not accelerate or decelerate on its own , hence should be constant .this in turn means is a constant quantity .applying the quadrature condition to ( [ eq:1freq ] ) and ( [ eq:14 ] ) , the magnitudes of the intrinsic frequency and the intrinsic phase - speed become and , respectively .the _ intrinsic direction of motion _ of the wave , however , is determined by . for waves moving to the left relative to the interfacial velocity ( ) , .similarly for right moving waves , .when such a stable progressive wave is acted upon by external influence(s ) ( e.g. when another wave interacts with the given wave , as detailed in [ sec : waveint ] ) , the quadrature condition is no longer satisfied , i.e. .therefore , the wave may grow ( ) or decay ( ) , and its intrinsic frequency and phase - speed may change . in our analyseswe will consider two types of progressive interfacial waves - vorticity waves and internal gravity waves .vorticity waves , also known as rayleigh waves , exist at a vorticity interface ( i.e. regions involving a sharp change in vorticity ) .such interfaces are a common feature in the atmosphere and oceans . in a rotating frame ,the analog of the vorticity wave is the rossby edge wave which exists at a sharp transition in the potential vorticity .when rossby edge waves propagate in a direction opposite to the background flow , they are called `` counter - propagating rossby waves '' . 
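a minimal numerical sketch, in python, may help fix the single-wave relations just described. in the function below the growth rate is taken proportional to the cosine, and the intrinsic part of the frequency to the sine, of the phase difference between the vertical velocity and the interfacial displacement; these sign conventions are reconstructed from the description above (and from the two-interface equations of the next section), and the particular numbers are arbitrary.

\begin{verbatim}
import numpy as np

def single_wave(alpha, U, a_eta, a_w, dphi):
    """frequency, growth rate and phase speed of one interfacial wave:
    gamma = (a_w/a_eta) cos(dphi), omega = alpha*U - (a_w/a_eta) sin(dphi)."""
    ratio = a_w / a_eta
    gamma = ratio * np.cos(dphi)                 # growth rate
    omega = alpha * U - ratio * np.sin(dphi)     # doppler shift plus intrinsic part
    c = omega / alpha                            # phase speed
    return omega, gamma, c

# a wave in isolation must satisfy the quadrature condition dphi = +/- pi/2,
# which makes the growth rate vanish and leaves |intrinsic frequency| = a_w/a_eta
alpha, U, a_eta, a_w = 1.0, 0.5, 1.0, 0.3
for dphi in (np.pi / 2, -np.pi / 2):
    omega, gamma, c = single_wave(alpha, U, a_eta, a_w, dphi)
    print(f"dphi = {dphi:+.2f}:  gamma = {gamma:.1e},  c - U = {c - U:+.3f}")
\end{verbatim}

with the quadrature condition imposed the growth rate vanishes, and only the sign of the phase difference decides whether the wave moves to the left or to the right relative to the interface.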
in order to evaluate the frequency of vorticity waves ,let us consider a velocity profile having the form here the constant is the vorticity , or the shear in the region .( [ eq : v1 ] ) shows that the vorticity is discontinuous at .this condition supports a vorticity wave .an interfacial deformation adds vorticity to the upper layer and removes it from the lower layer , thereby creating a vorticity imbalance at the interface .this imbalance is countered by the background vorticity gradient , which acts as a restoring force , and thereby leads to wave propagation . the horizontal component of the perturbation velocity field set up by the interfacial deformation undergoes a jump at the interface, the value of which can be determined from stokes theorem ( see appendix [ app : b ] ) : by taking an derivative of ( [ eq : stokes ] ) and invoking the continuity relation , we get by substituting ( [ eq : eta ] ) and ( [ eq : w ] ) in ( [ eq : vort12 ] ) we obtain where is the sign function . from ( [ eq : vortwaverel ] ) of a vorticity wave is found to be .the phase - speed can be evaluated by substituting ( [ eq : vortwaverel ] ) into ( [ eq:14 ] ) : if , the vorticity wave moves to the left relative to the background flow .the conventional derivation of the frequency and phase - speed of a vorticity wave can be found in .interfacial gravity waves exist at a density interface , i.e. a region involving sharp change in density .the most common example is the surface gravity wave existing at the air - sea interface .however , in this paper we will only consider waves existing at the interface between two fluid layers having small density difference .such waves are known as interfacial _gravity waves ( hereafter , gravity waves ) , and often arise in the pycnocline region of density stratified natural water bodies like lakes , estuaries and oceans . in the case of gravity waves , can be evaluated by considering the _ dynamic condition_. this condition implies that the pressure at the density interface must be continuous . let the density of upper and lower fluids be and respectively .the background velocity is constant , and is equal to .then the linearized dynamic condition at the interface after some simplification becomes ( ( 3.13 ) of ) : here is the reduced gravity and is the reference density . under boussinesq approximation . by taking an derivative of ( [ eq : dynbound ] ) and using the streamfunction relation , we get substitution of ( [ eq : eta ] ) and ( [ eq : wi ] ) in ( [ eq : gravrel ] ) yields the quantity . on substituting this relation in ( [ eq:12221 ] )we obtain an important aspect of ( [ eq : gravwaverel ] ) is that it has been derived _independent _ of the kinematic condition .the presence of single or multiple interfaces _ does not _ alter the expression in ( [ eq : gravwaverel ] ) , implying that this equation provides a generalized description of .inclusion of the kinematic condition yields an expression for which is simpler , but problem specific .for example , when a single interface is present , inclusion of the kinematic condition in ( [ eq : gravwaverel ] ) produces the well known expression for gravity wave frequency .this can be shown by substituting ( [ eq:1freq ] ) ( note that this equation has been derived from ( [ eq : kincon ] ) , which is the kinematic condition for a single interface . 
) in ( [ eq : gravwaverel ] ) and considering only the positive value : the above equation is the dispersion relation for gravity waves .substitution of ( [ eq : gravwaverel ] ) in ( [ eq:14 ] ) produces the well known expression for the phase - speed of a gravity wave : the above equation shows that each density interface supports two gravity waves , one moving to the left and the other to the right relative to the background velocity .it is not uncommon in the literature to use the expressions ( [ eq : gravwaverel111 ] ) and ( [ eq : gravspeed ] ) _ even _ when multiple interfaces are present .such usage certainly leads to erroneous results , especially when the objective is to study multiple wave interactions .probably the confusion arises from the traditional derivation of gravity wave frequency and phase - speed .this derivation strategy obscures the fact that the kinematic condition ( at an interface ) is influenced by the number of interfaces present in the flow , while the dynamic condition is not .let us now consider a system with two interfaces , one at and the other one at .the linearized kinematic condition at each of these interfaces is given by : it has been implicitly assumed that both waves have the same wavenumber . the r.h.s .of ( [ eq : kin1])-([eq : kin2 ] ) reveal the subtle effect of wave interaction , and can be understood as follows .the effect of extends away from the interface , hence it can be felt by a wave existing at another location , say .therefore the vertical velocity of the wave at gets modified - it becomes the linear superposition of its own vertical velocity and the component of existing at .this phenomenon is also known as `` action - at - a - distance '' , see . on substituting ( [ eq : eta ] ) and ( [ eq : wi ] ) in ( [ eq : kin1])-([eq : kin2 ] ) ,we get proceeding in a manner similar to [ sec : linlin ] , the growth - rate and phase - speed of each wave are found to be \label{eq:24}\\ & & \gamma_{2}=\frac{a_{w_{2}}}{a_{\eta_{2}}}\cos\left(\triangle\phi_{22}\right)+\frac{a_{w_{1}}}{a_{\eta_{2}}}e^{-\alpha\left|z_{1}-z_{2}\right|}\cos\left(\triangle\phi_{21}\right ) \label{eq:25}\\ & & c_{2}=u_{2}-\frac{1}{\alpha}\left[\frac{a_{w_{2}}}{a_{\eta_{2}}}\sin\left(\triangle\phi_{22}\right)+\frac{a_{w_{1}}}{a_{\eta_{2}}}e^{-\alpha\left|z_{1}-z_{2}\right|}\sin\left(\triangle\phi_{21}\right)\right ] .\label{eq:26}\end{aligned}\ ] ] here . when , the two waves decouple , and we recover ( [ eq:13])-([eq:14 ] ) for each wave .as argued in [ sec : linlin ] , a wave in isolation can not grow or decay on its own .therefore , the first term in each of ( [ eq:23 ] ) and ( [ eq:25 ] ) should be equal to zero , implying . in all our analyses, we will be considering a system with a _ left moving top wave _ ( ) and a _right moving bottom wave _ ( ) , the wave motion being relative to the background velocity at the corresponding interface .therefore , we will _ only _ consider counter - propagating waves ( i.e. waves moving in a direction opposite to the background flow at that location ) . 
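the coupled relations (23)-(26) are easy to evaluate numerically. the python sketch below, with arbitrarily chosen amplitudes and phases, computes the growth rate and phase speed of each wave from the phases of the two displacement fields and the two velocity fields; which interface is labelled 1 and which 2, and the specific numbers, are assumptions made only for the illustration.

\begin{verbatim}
import numpy as np

def coupled_rates(alpha, dz, U, a_eta, a_w, phi_eta, phi_w):
    """growth rates and phase speeds of two interfacial waves coupled through
    exp(-alpha*|z1-z2|), following eqs (23)-(26); phi_eta and phi_w are the
    phases of the interface displacements and of the vertical velocities."""
    e = np.exp(-alpha * abs(dz))
    gamma, c = np.zeros(2), np.zeros(2)
    for i in range(2):
        j = 1 - i
        d_ii = phi_w[i] - phi_eta[i]     # phase of the wave's own w relative to its eta
        d_ij = phi_w[j] - phi_eta[i]     # the other wave's w: "action at a distance"
        gamma[i] = (a_w[i]/a_eta[i])*np.cos(d_ii) + (a_w[j]/a_eta[i])*e*np.cos(d_ij)
        c[i] = U[i] - ((a_w[i]/a_eta[i])*np.sin(d_ii)
                       + (a_w[j]/a_eta[i])*e*np.sin(d_ij)) / alpha
    return gamma, c

# a left-moving top wave (w leads eta by pi/2) and a right-moving bottom wave
# (w lags by pi/2), with an arbitrary initial phase shift between the interfaces
alpha, U = 1.0, (0.5, -0.5)
a_eta, a_w = (1.0, 1.0), (0.5, 0.5)
phi_eta, phi_w = (0.0, 0.8), (0.0 + np.pi / 2, 0.8 - np.pi / 2)
for dz in (2.0, 20.0):
    g, c = coupled_rates(alpha, dz, U, a_eta, a_w, phi_eta, phi_w)
    print(f"|z1-z2| = {dz:5.1f}   gamma = {np.round(g, 4)}   c = {np.round(c, 4)}")
\end{verbatim}

the second run, with the interfaces far apart, shows the decoupling mentioned above: the action-at-a-distance terms are suppressed by the evanescent factor and each wave is again neutral.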
provides a detailed explanation how counter - propagating vorticity waves in rayleigh s shear layer naturally satisfies rayleigh - fj s necessary condition of shear instability .let the phase - shift between the bottom and top waves be .therefore and .defining amplitude - ratio , we re - write ( [ eq:23])-([eq:26 ] ) to obtain \label{eq:28}\\ & & \gamma_{2}=r\upomega_{1}e^{-\alpha\left|z_{1}-z_{2}\right|}\sin\phi\label{eq:29}\\ & & c_{2}=u_{2}+\frac{1}{\alpha}\left[\upomega_{2}-r\upomega_{1}e^{-\alpha\left|z_{1}-z_{2}\right|}\cos\phi\right].\label{eq:30 } \end{aligned}\ ] ] the quantity , redefined here for convenience , is constant according to the argument in [ sec : linlin ] . eqs .( [ eq:27])-([eq:30 ] ) describe the linear hydrodynamic stability of the system . unlike the conventional linear stability analysis, we did not impose normal - mode type perturbations ( they only account for exponentially growing instabilities ) in our derivation .therefore the equation - set provides a _ non - modal _ description of hydrodynamic stability in multi - layered shear flows .we refer to this theory as the `` wave interaction theory ( wit ) '' .a schematic description of the wave interaction process is illustrated in figure [ fig : wave_int ] .interestingly , there exists an analogy between wit and the theory behind _ synchronization _ of two coupled harmonic oscillators .synchronization is the process by which interacting , oscillating objects affect each other s phases such that they spontaneously lock to a certain frequency or phase . weakly ( linearly ) coupled oscillators interact only through their phases , however more complicated interaction takes place when the amplitudes of oscillation can not be neglected . the first analytical step to includethe effect of amplitude in the synchronization problem is by assuming the coupling to be weakly non - linear . for the case of twocoupled harmonic oscillators , weakly non - linear theory yields a system of equations ( * ? ?* equation ( 8.13 ) ) , which represents a dynamical system describing the temporal variation of amplitude and phase of each oscillator ( i.e. and ) .once the highest order terms are neglected , these equations become analogous to our wit equation - set ( [ eq:27])-([eq:30 ] ) .notice that the wit equation - set is expressed in terms of and , hence in order to see the correspondence between the two sets , we recall the following relations : and . observing the analogy with the theory of coupled oscillators , we re - frame the wave interaction problem into a dynamical systems problem. subtracting ( [ eq:29 ] ) from ( [ eq:27 ] ) and ( [ eq:30 ] ) from ( [ eq:28 ] ) , we find .\label{eq : dphidt}\end{aligned}\ ] ] the two parameters have the following range of values : and ] ( for example see figure [ fig : manyfigs](d ) ) . 
[ [ b ] ] ( b ) + + + _ mutual growth _ or : not only the two waves lock in phase , they also lock in amplitude , producing the unique steady state amplitude - ratio ; see figure [ fig : summaryfig](b ) .( [ eq : drdt ] ) reveals that implies , which in turn signifies _ resonance _ between the two waves .furthermore , ( [ eq:27 ] ) and ( [ eq:29 ] ) imply that at steady state , meaning that the wave amplitudes grow at an exponential rate .in the previous section we used wit and dynamical systems theory to describe multi - layered inviscid hydrodynamic instabilities in terms of interacting interfacial waves .we found that the `` equilibrium condition '' of the dynamical system basically signifies the `` resonant condition '' of wit .a question which naturally arises is `` _ _ what do these conditions mean in terms of conventional hydrodynamic stability theory _ _ ? '' in this section we will draw parallels between wit ( and the dynamical systems formulation ) and eigenanalysis and singular value decomposition ( svd ) , which are conventionally used to study modal and non - modal instabilities , respectively . the linearized kinematic condition with two interfaces , ( [ eq : kin1])-([eq : kin2 ] ) , can be written in the matrix form by expressing in terms of ( by invoking the definition of ) , and imposing the condition for counter - propagation ( and ) : where \,\,\,\,\,\,\,\,\,\textrm{and}\,\,\,\,\,\,\,\,\boldsymbol{\mathcal{m}}=-i\left[\begin{array}{cc } \alpha u_{1}-\upomega_{1 } & \upomega_{2}e^{-\alpha\left|z_{1}-z_{2}\right|}\\ -\upomega_{1}e^{-\alpha\left|z_{1}-z_{2}\right| } & \alpha u_{2}+\upomega_{2 } \end{array}\right].\ ] ] eq .( [ eq : eettaa ] ) represents the first order perturbation dynamics , and is the linearized dynamical operator . sinceour dynamical system is autonomous is time independent , and the solution is explicit : \boldsymbol{\eta}\left(0\right)=\left[\boldsymbol{\mathcal{u}}\boldsymbol{\sigma}\boldsymbol{v}^{\dagger}\right]\boldsymbol{\eta}\left(0\right ) .\label{eq : boroeqn}\ ] ] the matrix exponential in the above equation is the propagator matrix , which advances the system in time .the transient dynamics of the system is solely governed by the normality of , i.e. whether or not commutes with its hermitian transpose .if they commute ( ) then is normal and has complete set of orthogonal eigenvectors . in this casethe dynamics can be fully understood from the eigendecomposition of the propagator .this basically means expressing as , where is a diagonal matrix containing the complex eigenvalues of ( arranged by the real part of the eigenvalues in the descending order of magnitude ) , and is the corresponding matrix of eigenvectors .alternatively if is non - normal , the interaction between the discrete non - orthogonal modes of produces non - normal growth processes .non - normality can be understood through svd of the propagator .svd is a generalized matrix factorization technique and matches eigendecomposition _ only _ when is hermitian ( ) .svd of the propagator yields , where contains the eigenvectors of , contains the eigenvectors of , and is a diagonal matrix containing the singular values ( , which are real and positive ) arranged in the descending order of magnitude . 
if commutes with its hermitian transpose , then .otherwise , implying that growth - rate higher than the least stable normal - mode is possible .if is a normal matrix , then eigendecomposition of the propagator matrix is sufficient to capture the dynamics .if is non - normal , then eigenanalysis captures the asymptotic dynamics _ only _ for large times .the eigenvalues of the matrix are as follows : \pm\frac{1}{2}\sqrt{\mathscr{d } } , \label{eq : lamd}\ ] ] where ^{2} ] .for our calculations we have chosen the entire time window of non - modal evolution .thus ( [ eq : gg8 ] ) becomes notice that unlike and , the value of is finite as .figure [ fig : manyfigs](b ) depicts the variation of with and for kh instability .the flow is linearly unstable for ( see [ sec : kh ] ) .non - modal gain _ exceeds _modal gain by _ several _ orders of magnitude in the neighbourhood of stability boundaries .this behaviour is especially prominent near the lower boundary .the fact that substantial transient growth occurs near stability boundaries has also been observed for non - modal holmboe instability .the magnitude of is strongly dependent on the initial condition .we observe higher values of when the component waves are long ( smaller values of ) , and initially out of phase ( away from ) .this situation reverses near the upper boundary ; waves starting in - phase exhibit higher magnitudes of . when the n&s condition is not satisfied , and the flow is linearly stable according to the normal - mode theory .the amplitude of a small perturbation is therefore expected to stay constant in this region .however non - modal processes can produce transient growth in the parameter space for which the flow is deemed stable by the normal - mode analysis .this can be shown by studying the evolution of the optimal perturbation .using svd analysis have shown that optimal perturbation evolves such that the final phase - shift .the phase - shift is symmetric in time about , maximizing in ( [ eq : gg ] ) .thus the optimal gain is found to be the optimal gain is maximum when , and is given by is known as the _ global optimal gain_. we take kh instability as an example for studying the non - modal behaviour .the condition in this case translates to ( see [ sec : kh ] ) . from figure[ fig : manyfigs](c ) we find that the perturbation amplitude corresponding to grows to a maximum of times its initial value .similar results have been obtained by , who studied the non - normal behaviour using the enstrophy norm .figure [ fig : manyfigs](d ) shows the corresponding temporal variation of its phase - shift .unlike figure [ fig : summaryfig](a ) , there is no phase - locking since n&s condition is not satisfied here . instead, the two `` marginally stable '' ( as referred to in the classical theory ) wave - modes continue to propagate in opposite directions .hence the phase - shift oscillates between and .the wit formulation proposed in [ sec : waveint ] is quite general in the sense that we have not prescribed the types of the constituent waves .different kinds of shear instabilities may arise depending on the wave types .for example , the interaction between two vorticity waves results in kh instability ; taylor - caulfield instability results from the interaction between two gravity waves in constant shear , while the interaction between a vorticity and a gravity wave produces holmboe instability . 
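to make the comparison between eigenanalysis and svd concrete, here is a small numerical sketch in python. it assembles the operator of eq. (eettaa) for the kelvin-helmholtz configuration analysed below; the values used (background velocities plus and minus one, intrinsic frequencies one half, interface separation two) are read off from that profile and should be regarded as assumptions of the sketch. the code checks that the operator is non-normal and compares the modal growth rate with the optimal amplification obtained from the svd of the propagator.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def kh_operator(alpha):
    # assumed kh parameters: U1, U2 = +1, -1; Omega1 = Omega2 = 1/2; |z1 - z2| = 2
    om, e = 0.5, np.exp(-2.0 * alpha)
    return -1j * np.array([[alpha * 1.0 - om,  om * e],
                           [-om * e,           alpha * (-1.0) + om]])

alpha = 0.4
M = kh_operator(alpha)

# modal picture: the least stable eigenvalue of M
sigma = np.linalg.eigvals(M).real.max()
print("M normal?", np.allclose(M @ M.conj().T, M.conj().T @ M))
print(f"modal growth rate: {sigma:.4f}")

# non-modal picture: the optimal amplification is the largest singular value
# of the propagator expm(M t); for a non-normal M it exceeds exp(sigma*t)
for t in (1.0, 3.0, 10.0):
    gain = np.linalg.svd(expm(M * t), compute_uv=False)[0]
    print(f"t = {t:4.1f}   optimal amplification {gain:8.3f}"
          f"   modal estimate {np.exp(sigma * t):8.3f}")
\end{verbatim}

for short times the optimal amplification exceeds the modal estimate, which is the transient growth discussed above; for large times the two agree up to a constant factor.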
in this section we will use wit to analyze the three above mentioned classical examples of shear instabilities .let us consider a piecewise linear velocity profile this profile is a prototype of barotropic shear layers occurring in many geophysical and astrophysical flows .it supports two vorticity waves , one at and the other at .the shear .we nondimensionalize the problem by choosing a length scale and a velocity scale . in a reference framemoving with the mean flow , the non - dimensional velocity profile becomes this profile , along with the vorticity waves , is shown in figure [ fig : kh](a ) .the top wave is left moving while the bottom wave moves to the right . using the classical normal - mode theory showed that the profile in ( [ eq : kh2 ] ) is linearly unstable for .this instability is often referred to as the `` rayleigh s shear instability '' .however we have addressed it as the `` rayleigh / kelvin - helmholtz '' instability , and used the acronym `` kh '' .figure [ fig : kh](a ) reveals that the kh instability can be analyzed using wit framework . since the two waves involved in the kh instability are vorticity waves , we substitute ( [ eq : vortwaverel ] ) in ( [ eq:27])-([eq:30 ] ) , and after non - dimensionalization we obtain \label{eq : khh2}\\ & & \gamma_{2}=\frac{r}{2}e^{-2\alpha}\sin\phi\label{eq : khh3}\\ & & c_{2}=-1+\frac{1}{2\alpha}\left[1-re^{-2\alpha}\cos\phi\right].\label{eq : khh4 } \end{aligned}\ ] ] eqs .( [ eq : khh1])-([eq : khh4 ] ) provide a _non - modal _ description of kh instability .the equation - set is isomorphic to ( 14a)-(14d ) of and homomorphic to ( 7a)-(7d ) of . while the equation - set described by shows how counter - propagating rossby wave interactions lead to barotropic shear instability , the equation - set formulated by shows how baroclinic instability is produced through the interaction of temperature edge waves .the generalized non - linear dynamical system given by ( [ eq : drdt])-([eq : dphidt ] ) in this case translates to the equilibrium points of this system are , where .\label{eq : nor_mode_kh}\end{aligned}\ ] ] the phase portrait is shown in figure [ fig : kh](b ) .it confirms that the dynamical system is indeed of source - sink type , as predicted in [ sec : waveint ]. the n&s condition for instability expressed via ( [ eq : necessary ] ) in this case reads the range of unstable wavenumbers obtained from the above equation corroborates rayleigh s normal - mode analysis . normal - mode theory also shows that the wavenumber of maximum growth is . the same answer can be obtained from wit by imposing the normal - mode condition and maximizing or with respect to . the fact that kh instability develops into a standing wave instability can be verified by applying the normal - mode condition in ( [ eq : khh2 ] ) and ( [ eq : khh4 ] ) . performing the necessary steps we find , i.e. the waves became stationary after phase - locking .the waves stay in this configuration and grow exponentially , hence the shear layer grows in size .the growth process eventually becomes non - linear , and the shear layer modifies into elliptical patches of constant vorticity .let us consider a uniform shear layer with two density interfaces the shear is constant .we choose as the density scale , as the length scale , and thereby nondimensionalize ( [ eq : tayl0 ] ) . 
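before continuing with the taylor-caulfield configuration, a short numerical sketch of phase-locking for the kh case above may be useful. rather than integrating (khh1)-(khh4) directly, it advances the equivalent linear system for the two interface displacements (with the same assumed parameters as in the previous sketch) and reconstructs the amplitude ratio and phase shift from the solution.

\begin{verbatim}
import numpy as np

def kh_operator(alpha):
    # same assumed kh parameters as in the previous sketch
    om, e = 0.5, np.exp(-2.0 * alpha)
    return -1j * np.array([[alpha - om, om * e], [-om * e, -alpha + om]])

alpha, dt, nsteps = 0.3, 0.01, 4000
M = kh_operator(alpha)
eta = np.array([1.0 + 0.0j, 0.25 * np.exp(0.9j)])  # arbitrary initial amplitudes/phases
for _ in range(nsteps):
    eta = eta + dt * (M @ eta)                      # forward euler is enough for a sketch

R = abs(eta[1]) / abs(eta[0])                       # amplitude ratio of the two interfaces
phi = np.angle(eta[1] / eta[0])                     # phase shift between them
growth = ((M @ eta) / eta).real                     # instantaneous growth rate of each wave
sigma = np.linalg.eigvals(M).real.max()

print(f"R   -> {R:.4f}    (amplitude locking at R = 1)")
print(f"phi -> {phi:.4f} rad    (phase locking)")
print(f"instantaneous growth rates {np.round(growth, 4)}  vs  modal rate {sigma:.4f}")
\end{verbatim}

starting from arbitrary amplitudes and phases, the two waves lock with equal amplitudes and a fixed phase shift whose cosine is consistent with the equilibrium implied by (khh2) and (khh4), and both then grow at the normal-mode rate.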
the physical state of the system is determined by the competition between density stratification and shear , the non - dimensional measure of which is given by the bulk richardson number , where is the reduced gravity , and is the reference density .the dimensionless velocity and density profiles therefore become this flow configuration is shown in figure [ fig : taylor](a ) .notice that a _ homogeneous _ flow with is linearly stable .moreover , a _ stationary _fluid having a density distribution as in ( [ eq : tayl1 ] ) is gravitationally stable .surprisingly , the flow obtained by superimposing the two stable configurations is linearly unstable . was the first to report and provide a detailed theoretical analysis of this rather non - intuitive instability .the first experimental evidence of this instability was provided by .following we refer to it as the `` taylor - caulfield ( tc ) instability '' .the interplay between the background shear and the gravity waves existing at the density interfaces produce the destabilizing effect . found that for each value of , there exists a band of unstable wavenumbers ( and vice - versa ) , shown in figure [ fig : taylor](b ) .this unstable range is given by ( see or ( 2.154 ) of ) : probably the only way to make sense of tc instability is through wave interactions , which has been described in and using normal - mode theory .wit provides the framework for studying non - modal tc instability .substituting ( [ eq : gravwaverel ] ) in ( [ eq:27])-([eq:30 ] ) , and performing non - dimensionalization , we obtain here , and by definition is a positive quantity . from ( [ eq : t2 ] ) and ( [ eq : t4 ] )we construct a quadratic equation for : amongst the two roots , only the positive root is relevant .equations ( [ eq : t1])-([eq : t4 ] ) provide a _ non - modal _ description of tc instability .the equation - set is of coupled nature , and is therefore _ not _ homomorphic to ( [ eq:27])-([eq:30 ] ) .the non - linear dynamical system in this case is given by at phase - locking .substituting this value in ( [ eq : betaevol ] ) gives .therefore and at resonance , which implies that tc instability , like kh , also evolves into a stationary disturbances .the phase portrait is shown in figure [ fig : taylor](c ) .the phase - shift is evaluated from ( [ eq : tphi ] ) : .\label{eq : nor_mode_taylor}\ ] ] the n&s condition for tc instability is given by the latter result corroborates the classical normal - mode result given in ( [ eq : ta1 ] ) .non - modal tc instability has also been studied by . in their paperthe non - modal equation - set is given by ( 3.11a , b)-(3.12a , b ) , which is quite different from our equation - set ( [ eq : t1])-([eq : t4 ] ) .however , these two equation - sets should be identical since they are describing the same physical problem . 
using the equation - set of we obtained the following range of unstable wavenumbers : .this bound is different from the correct stability bound ( [ eq : ta1 ] ) .let us consider the following velocity and density profiles we nondimensionalize ( [ eq : holm0 ] ) exactly like the tc instability problem , which gives us the dimensionless velocity and density profiles : the vorticity interface at the top supports a vorticity wave , while the density interface at the bottom supports two gravity waves .the interaction between the left moving vorticity wave at the upper interface and the right moving gravity wave at the lower interface leads to an instability mechanism , known as the `` holmboe instability '' ( note that the generation of ocean surface gravity waves by wind shear can be viewed as `` non - boussinesq holmboe instability '' , and can therefore be interpreted using wave interactions . ) .the corresponding flow setting is shown in figure [ fig : holmboe](a ) . was the first to consider the instability mechanism resulting from the interaction between vorticity and gravity waves . performing a normal - mode stability analysis, holmboe discovered the existence of an unstable mode characterized by traveling waves .the profile used by holmboe is more complicated than that which we are considering .holmboe s profile , which included multiple wave interactions , was substantially simplified by by introducing the profile in ( [ eq : holm1 ] ) .linear stability analysis shows that corresponding to each value of , there exists a band of unstable wavenumbers , shown in figure [ fig : holmboe](b ) .the stability boundary has been evaluated in appendix [ app : a ] . in order to understand the holmboe instability in terms of wit , we substitute ( [ eq : vortwaverel ] ) and ( [ eq : gravwaverel ] ) in ( [ eq:27])-([eq:30 ] ) .after non - dimensionalization , we obtain like the tc instability case , the holmboe equation - set is also of coupled nature , and is therefore not homomorphic to ( [ eq:27])-([eq:30 ] ) .the non - linear dynamical system is given by : the equilibrium points of this system are , where .\label{eq : nor_mode_holmboe}\end{aligned}\ ] ] the n&s condition for holmboe instability is found to be this provides the range of leading to holmboe instability , and is as follows : where \\ & & c=\left(2\alpha-1+e^{-2\alpha}\right)\left(2\alpha-1\right)^{3}.\end{aligned}\ ] ] eq .( [ eq : bound_holmboe ] ) corroborates the normal - mode result given in appendix [ app : a ] . the phase portrait of holmboe instability , corresponding to an unstable combination of and , is shown in figure [ fig : holmboe](c ) .this phase portrait is slightly different from tc and kh instabilities , because in this case .another feature of holmboe instability is that , unlike tc and kh instabilities , its phase - speed is non - zero at the equilibrium condition .this phase - speed is found to be in the limit of large and , the two phase - locked waves move with a speed of unity to the positive direction .shear instability plays a crucial role in atmospheric and oceanic flows . in the last 50 years, significant efforts have been made to develop a mechanistic understanding of shear instabilities . 
using idealized velocity and density profiles, researchers have conjectured that the resonant interaction between two counter - propagating linear interfacial waves is the root cause behind exponentially growing instabilities in homogeneous and stratified shear layers .support for this claim has been provided by considering interacting vorticity and gravity waves of the normal - mode form . in this paperwe investigated the wave interaction problem in a generalized sense .the governing equations ( [ eq:27])-([eq:30 ] ) of hydrodynamic instability in idealized ( broken - line profiles ) , homogeneous or density stratified , inviscid shear layers have been derived _ without _ imposing the wave type , or the normal - mode waveform .we refer to this equation - set as wave interaction theory ( wit ) . using wit we showed in figures [ fig : summaryfig](a ) and [ fig :summaryfig](b ) that two counter - propagating linear interfacial waves , having _ arbitrary _ initial amplitudes and phases , eventually _ resonate _ ( lock in phase and amplitude ) , provided they satisfy the n&s condition ( [ eq : necessary ] ) . in [ subsub : eigenanalysis ] we showed that the n&s condition is basically the criterion for normal - mode type instabilities .the waves which do not satisfy the n&s condition may exhibit non - normal instabilities , but will never resonate . in other words , such waves will never lock in phase and undergo exponential growth .these facts have been demonstrated in figures [ fig : manyfigs](c ) and [ fig : manyfigs](d ) .we considered three classic examples of shear instabilities - rayleigh / kelvin - helmholtz ( kh ) , taylor - caulfield ( tc ) and holmboe , and derived their discrete spectrum non - modal stability equations by modifying the basic wit equations ( [ eq:27])-([eq:30 ] ) .these equations are respectively given by ( [ eq : khh1])-([eq : khh4 ] ) , ( [ eq : t1])-([eq : t4 ] ) and ( [ eq : h1])-([eq : h4 ] ) .for each type of instability we validated the corresponding equation - set by showing that the n&s condition matches the predictions of the canonical normal - mode theory .although validation is an important first step for checking the non - modal equation - sets , the aim of this work is _ not _ proving well - known results using an alternative theory .the real strength of the non - modal formulation is in providing the opportunity to explore the transient dynamics .non - orthogonal interaction between the two wave modes can lead to rapid transient growth .we show in figure [ fig : manyfigs](b ) that depending on wavenumber and initial phase - shift , non - modal gain can exceed corresponding modal - gain by several orders of magnitude .this implies that the flow may become non - linear during the initial stages of flow development , leading to an early transition to turbulence .instability has been observed for wavenumbers which are deemed stable by the normal - mode theory .all these facts are shown for the example of kh instability ; see figures [ fig : manyfigs](a)-(d ) . 
in order to be able to study the transient dynamics of holmboe and tc instabilities, the analysis in [ sec : matstuff ] needs to be modified in parts .this is because the equations developed in [ sec : matstuff ] are restricted to systems whose non - modal equation - sets are _ homomorphic _ to ( [ eq:27])-([eq:30 ] ) .some examples of homomorphic equation - sets are : ( [ eq : khh1])-([eq : khh4 ] ) representing kh instability , ( 14a)-(14d ) of representing barotropic shear instability , and ( 7a)-(7d ) of representing baroclinic instability of the eady model .another objective of our work is to provide an intuitive description of shear instabilities , and to find connection with other physical processes .how two waves can interact to produce instability has been discussed throughout the paper , and schematically represented in figure [ fig : wave_int ] .we observed an analogy between wit equations and that governing the synchronization of two coupled harmonic oscillators . on the basis of this analogy , we re - framed wit as a non - linear dynamical system .the resonant configuration of the wave equations translated into a steady state configuration of the dynamical system .this dynamical system is of the source - sink type ; the source and the sink nodes being the two equilibrium points .when interpreted in terms of canonical linear stability theory , the source and the sink nodes respectively correspond to the decaying and the growing normal - modes of the discrete spectrum .the analogous two coupled harmonic oscillator system exhibits two normal modes of vibration - the in - phase and the anti - phase synchronization modes .the growing normal - mode of wit is analogous to the in - phase synchronization mode , while the decaying normal - mode is analogous to the anti - phase synchronization mode .although we studied wit in the context of shear instabilities , the framework is actually derived from ( [ eq:23])-([eq:26 ] ) , and is therefore applicable to both sheared and unsheared flows , as well as co - propagating and counter - propagating waves .as an example , one can study the interaction between deep water surface gravity and internal gravity waves ( which gives rise to barotropic and baroclinic modes of oscillations in lakes ; see pg .259 - 261 ) .the framework can also be extended to the weakly non - linear regime to study multiple - wave interactions leading to resonant triads .wit can also be applied to understand liquid - jet instability .finally we focus on the implications of using broken - line profiles of velocity and/or density .these idealizations have allowed us to concentrate only on the discrete spectrum dynamics and understand hydrodynamic instability in terms of interfacial wave interactions .real profiles are always continuous , hence realistic analysis requires inclusion of the continuous spectrum .using green s function , have shown that the continuous spectrum dynamics in a smooth homogeneous shear layer can be understood in terms of infinite number of interacting vorticity ( rossby edge ) waves . 
used the same approach to understand the normal - mode continuous spectrum of smooth stratified shear layers .these studies indicate that the growth process is better understood through the `` orr mechanism '' of shearing of waves , than edge wave interactions .this implies that the intuition brought by wit formulation may become less apparent in continuous systems .however the applicability of wit in continuous systems can only be properly judged when the current version is modified to include the effects of a continuous spectrum .we speculate that the analogy between wit and the synchronization of coupled harmonic oscillators can be exploited to obtain an intuitive understanding of continuous spectrum dynamics .our speculation is based on the work of , who studied the kuramoto model in the continuous limit , and observed superficial similarities between oscillator synchronization and instabilities in ideal plasmas and inviscid fluids .we would like to acknowledge the useful comments and suggestions of prof .neil balmforth of ubc , prof .kraig winters of ucsd , prof .michael mcintrye , prof .colm caulfield and prof .peter haynes of the university of cambridge ( damtp ) , prof .eyal heifetz of tel aviv university , prof .ishan sharma and prof .mahendra verma of iit kanpur , and the anonymous referees .stokes theorem relates the surface integral of the curl of a vector ( velocity in this case ) field over a surface to the line integral of the vector field over its boundary : used this theorem to relate the interfacial displacement with the difference in velocity perturbation ( ) produced at a vorticity interface ; see ( [ eq : stokes ] ) .this equation is referred to as `` eq .( 3.2 ) '' in his paper .however , the relevant steps required to derive this equation were not provided . in order to understand how ( [ eq : stokes ] )is obtained , we first graphically describe the problem in figure [ fig : appen ] .the background velocity is such that the flow is irrotational when , and has a constant vorticity , say , when .when the interface is disturbed by an infinitesimal displacement ( solid black curve in figure [ fig : appen ] ) , the velocity field also changes slightly - the perturbation velocity in the upper layer ( ) becomes and that in the lower layer ( ) becomes .let us consider a circuit a - b - c - d .applying stokes theorem , we obtain where is the area of a - b - c - d , and is the vorticity in this area. therefore we obtain which is basically ( [ eq : stokes ] ) .both interfaces in the holmboe profile ( [ eq : holm1 ] ) individually satisfy the kinematic condition : where is the stream function perturbation at the lower interface .this interface being a density interface also satisfies the dynamic condition : we assume the perturbations to be of normal - mode form : , , and . herethe wave speed is generally complex . defining ^{t}$ ] , we obtain the following eigenvalue problem : where .\ ] ] eq .( [ eq : app2 ] ) generates the following characteristic polynomial : this equation produces complex conjugate roots only when the discriminant is negative . 
since the presence of complex roots signify normal - mode instability , negative values of the discriminant is of our interest .the discriminant ( d ) in this case is given by : -\left(1 - 2\alpha\right)^{3}\left(2\alpha-1+e^{-2\alpha}\right).\ ] ] imposing the condition , we find where \\ & & c=\left(2\alpha-1+e^{-2\alpha}\right)\left(2\alpha-1\right)^{3}.\end{aligned}\ ] ] thus holmboe instability occurs only when the condition in ( [ eq : bbbb ] ) is satisfied . | postulated that resonant interaction between two or more progressive , linear interfacial waves produces exponentially growing instabilities in idealized ( broken - line profiles ) , homogeneous or density stratified , inviscid shear layers . here we have generalized holmboe s mechanistic picture of linear shear instabilities by ( i ) not initially specifying the wave type , and ( ii ) by providing the option for non - normal growth . we have demonstrated the mechanism behind linear shear instabilities by proposing a _ purely _ kinematic model consisting of two linear , doppler - shifted , progressive interfacial waves moving in opposite directions . moreover , we have found a _ necessary _ and _ sufficient _ ( n&s ) _ condition _ for the existence of exponentially growing instabilities in idealized shear flows . the two interfacial waves , starting from _ arbitrary _ initial conditions , eventually phase - lock and resonate ( grow exponentially ) , provided the n&s condition is satisfied . the theoretical underpinning of our wave interaction model is analogous to that of synchronization between two coupled harmonic oscillators . we have re - framed our model into a non - linear autonomous dynamical system , the steady state configuration of which corresponds to the resonant configuration of the wave - interaction model . when interpreted in terms of the canonical normal - mode theory , the steady state / resonant configuration corresponds to the growing normal - mode of the discrete spectrum . the instability mechanism occurring prior to reaching steady state is non - modal , favouring rapid transient growth . depending on the wavenumber and initial phase - shift , non - modal gain can exceed the corresponding modal gain by many orders of magnitude . instability is also observed in the parameter space which is deemed stable by the normal - mode theory . using our model we have derived the discrete spectrum non - modal stability equations for three classical examples of shear instabilities - rayleigh / kelvin - helmholtz , holmboe and taylor - caulfield . we have shown that the n&s condition provides a range of unstable wavenumbers for each instability type , and this range matches the predictions of the normal - mode theory . |
let's start with a game : `` hex '' is a board game for two players , invented by the ingenious danish poet , designer and engineer piet hein in 1942 , and rediscovered in 1948 by the mathematician john nash who got a nobel prize in economics in 1994 ( for his work on game theory , but not really for this game ) . hex , in hein's version , is played on a rhombical board , as depicted in the figure . the rules of the game are simple : there are two players , whom we call white and black . the players alternate , with white going first . each move consists of coloring one `` grey '' hexagonal tile of the board white resp . black . white has to connect the white borders of the board ( marked and ) by a path of his white tiles , while black tries to connect and by a black path . they can not both win : any winning path for white separates the two black borders , and conversely . ( this is not hard to prove ; however , the statement is closely related to the jordan curve theorem , which is trickier than it may seem when judged at first sight : see exercise [ exer : jct ] . ) however , here we concentrate on the opposite statement : there is no draw possible ; when the whole board is covered by black and white tiles , then there always is a winner . ( this is even true if one of the players has cheated badly and ends up with many more tiles than his / her opponent ! it is also true if the board is not really `` square , '' that is , if it has sides of unequal lengths . ) our next figure depicts a final hex position ; sure enough one of the players has won , and the proof of the following `` hex theorem '' will give us a systematic method to find out which one . [ the hex theorem ] if every tile of an $n \times n$ hex board is colored black or white , then either there is a path of white tiles that connects the white borders and , or there is a path of black tiles that connects the black borders and . our plan for this section is the following : we give a simple proof of the hex theorem . we show that it implies the brouwer fixed point theorem , and even conversely : the brouwer fixed point theorem implies the hex theorem . then we prove that one of the players has a winning strategy . and then we see that on a square board , the first player can win , while on an uneven board , the player with the longer borders has a strategy to win . all of this is really quite simple , but it nicely illustrates how a topological theorem enters the analysis of a discrete situation . we follow a certain path _ between _ the black and white tiles that starts in the lower left - hand corner of the hex board on the edge that separates and . whenever this path reaches a corner of degree three , there will be both colors present at the corner ( due to the edge we reach it from ) , and so there will be a unique edge to proceed on that does have different colors on its two sides . our path can never get stuck or branch or turn back onto itself : otherwise we would have found a vertex that has one or three edges that separate colors , whereas this number clearly has to be even at each vertex . thus the path can be continued until it leaves the board , that is , until it reaches or . but that means that we find a path that connects to , or to , and on its sides keeps a white path of tiles resp . a black path . that is , one of white and black has won !
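for readers who prefer to see the statement in executable form, here is a small python sketch that decides the winner of a completely filled board. it uses the standard encoding of the hex board as a square grid in which each cell is also adjacent to two of its diagonal neighbours; the particular filled board at the end is an arbitrary example.

\begin{verbatim}
from collections import deque

def hex_winner(board):
    """board[r][c] in {'W','B'} on an n x n rhombic hex board; each cell is adjacent
    to its horizontal and vertical neighbours and to two diagonal neighbours.
    white tries to join the left and right columns, black the top and bottom rows."""
    n = len(board)
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]

    def connected(colour, sources, target_hit):
        seen = {s for s in sources if board[s[0]][s[1]] == colour}
        queue = deque(seen)
        while queue:
            r, c = queue.popleft()
            if target_hit(r, c):
                return True
            for dr, dc in nbrs:
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n and (rr, cc) not in seen \
                        and board[rr][cc] == colour:
                    seen.add((rr, cc))
                    queue.append((rr, cc))
        return False

    white = connected('W', [(r, 0) for r in range(n)], lambda r, c: c == n - 1)
    black = connected('B', [(0, c) for c in range(n)], lambda r, c: r == n - 1)
    assert white != black        # the hex theorem: exactly one player has won
    return 'W' if white else 'B'

# a completely filled 4 x 4 board; the theorem guarantees a unique winner
board = ["WBWW",
         "BBWB",
         "WBBW",
         "WWBW"]
print(hex_winner(board))         # 'B' for this particular filling
\end{verbatim}

the assertion in the middle is exactly the content of the hex theorem: on a completely filled board, exactly one of the two searches succeeds.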
now this was easy , and ( hopefully ) fun .we continue with a re - interpretation of the hex board in nash s version that buys us two drinks for the price of one : * a -dimensional version of the hex theorem , and * the connection to the brouwer fixed point theorem .the -dimensional _ hex board _ is the graph with vertex set , in which two vertices are connected if .the _ colors _ for the -dimensional hex game are , where we identify `` '' and `` . ''the _ interior _ of the hex board is given by .all the other vertices , in , form the _ boundary _ of the board .the vertices in the boundary of get preassigned colors our drawing depicts the -dimensional hex board , which represents a dual graph for the -board that we used in our previous figures , with the preassigned colors on the boundary .the -dimensional hex game is played between players who take turns in coloring the interior vertices of .the -th player _ wins _ if he achieves a path of vertices of color that connects a vertex whose -th coordinate is to a vertex whose -th coordinate is .there is no draw possible for -dimensional hex : if all interior vertices of are colored , then at least one player has won .the proof that we used for -dimensional hex still works : it just has to be properly translated for the new setting .for this we first check that is the graph of a triangulation of ^d ] lies in the relative interior of a unique simplex , which is given by & & \lfloor x_i - x_j\rfloor\le v_i - v_j\le\lceil x_i - x_j\rceil \mbox{\rm~for all }\big\}.\end{aligned}\ ] ] our picture illustrates the -dimensional case . ) now every full - dimensional simplex in has vertices .a simplex in is _ completely colored _ if it has all colors on its vertices .thus each completely colored -simplex in has exactly two completely colored facets , which are -faces of the complex .conversely , every completely colored -face is contained in exactly two -simplices if it is not on the boundary of ^d]at a different simplex than the one we started from .in particular , the last -simplex in the chain has a completely colored facet ( a -face ) in the boundary , and by construction this facet has to lie in a hyperplane .( in fact , at this point we check that every completely colored -simplex in the boundary of is contained in one of the hyperplanes , with the sole exception of the boundary facet of our starting -simplex . ) and the chain of -simplices thus provides us with an -colored path from the -colored vertex to the -colored vertex in : so the -th player wins .our drawing illustrates the chain of completely colored simplices ( shaded ) and the sequence of ( white ) vertices for the winning path that we get from it .now we will proceed from the discrete mathematics setting of the hex game to the continuous world of topological fixed point theorems . hereare three versions of the brouwer fixed point theorem .[ t : brouwer ] the following are equivalent ( and true ) : every continuous map has a fixed point .every continuous map has a fixed point .every null - homotopic map has a fixed point . ( the term _ null - homotopic _ that appears here refers to a map that can be deformed to a constant map ; see the proof below . )( br1)(br2 ) is trivial , since . for ( br2)(br3 ) let \to s^{d-1} ] instead of the ball : it should be clear that the brouwer fixed point theorem equally applies to self - maps of any domain that is homeomorphic to , resp . of the boundary of such a domain .if ^d\to [ 0,1]^d ] ( namely , one can take ^d\} ] is compact ) . 
furthermore , any continuous function on the compact set ^d ] , and thus at least one component of has to be at least in its absolute value .now the -dimensional hex theorem guarantees , for some , a chain of vertices of color , where and .furthermore , we know for . and at the ends we of the chain know the signs : * ^d ] implies and hence .hence , for some we must have a sign change : * and .all this taken together provides a contradiction , since * whereas assume we have a coloring of .we use it to define a map ^d\to [ 0,n]^d \vv \ww w_i=0 \vv i ] .hence this defines a simplicial map ^d\to [ 0,n]^d ] , calculates a point ^ 2 ] such that for each , we have . the _ size _ of a fractional packing is , and the _ fractional packing number _ is the supremum of the sizes of all fractional packings for .so in a fractional packing , we can take , say , one - third of one set and two - thirds of another , but at each point , the fractions for the sets containing that point must add up to at most 1 .we always have , since a packing defines a fractional packing by setting for and otherwise .similar to the fractional packing , one can also introduce a fractional version of a transversal .a _ fractional transversal _ for a ( finite ) set system on a ground set is a function ] is the number of the hole in contained in .a -interval _ escapes _ through a -hole if it is contained in the union of its holes .the drawing shows a -hole , of type , and a -interval escaping through it : let be the hypergraph with vertex set \times [ t{+}1] ] , coincide .it is this lemma whose proof is topological .we postpone that proof and finish the combinatorial part .let us suppose that a trap was chosen as in the lemma , with for all . if then is a transversal , since all edge weights are 0 and no escapes .so suppose that .let , the _ escape hypergraph _ of , consist of the edges of with nonzero weights .note that indeed , given a matching in , for each edge choose a escaping through gives a matching in .we note that the re - normalized edge weights determine a fractional packing in ( since the weights at each vertex sum up to 1 ) . for the size of this fractional packing , which isthe total weight of all vertices , we find by double counting \times [ t{+}1]}\frac{w_v}w= \frac 1d \sum_{v } 1 = t+1.\ ] ] the last step is to show that can not be small if is large . here is a simple argument leading to a slightly suboptimal bound , namely . given a fractional matching of size in , a matching can be obtained by the following greedy procedure : pick an edge and discard all edges intersecting it , pick among the remaining edges , etc ., until all edges are exhausted .the -weight of plus all the edges discarded with it is at most , while all edges together have weight .thus , the number of steps , and also the size of the matching , is at least .if we set , we get , which contradicts ( [ e : nuhnuf ] ) . therefore ,for this choice of , all the vertex weights must be 0 and as in lemma [ l : samewt ] is a transversal of of size at most .the improved bound for follows similarly using a theorem of fredi , which implies that the matching number of any -uniform -partite hypergraph satisfies .( for , a separate argument needs to be used , based on a theoreom of lovsz stating that for all graphs . ) the tardos kaiser theorem [ t : d - int ] is proved .let denote the standard -dimensional simplex in , i.e. 
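the packing and transversal numbers, and the gap between them, are easy to experiment with on small examples. the python sketch below computes nu and tau by brute force for an arbitrarily chosen family of three 2-intervals; for closed intervals one may restrict the candidate piercing points to right endpoints, which keeps the search finite.

\begin{verbatim}
from itertools import combinations

# each 2-interval is the union of two closed intervals of the real line
family = [
    [(0.0, 1.0), (10.0, 11.0)],
    [(0.5, 1.5), (20.0, 21.0)],
    [(10.5, 11.5), (20.5, 21.5)],
]

def overlap(s, t):
    # do the two 2-intervals share a point?
    return any(a <= y and x <= b for (a, b) in s for (x, y) in t)

def pierced(s, points):
    # is the 2-interval s hit by at least one of the points?
    return any(a <= p <= b for p in points for (a, b) in s)

# packing number nu: size of a largest pairwise disjoint subfamily
nu = max(len(sub) for k in range(len(family) + 1)
         for sub in combinations(range(len(family)), k)
         if all(not overlap(family[i], family[j]) for i, j in combinations(sub, 2)))

# transversal number tau: for closed intervals it suffices to try right endpoints
candidates = sorted({b for s in family for (_, b) in s})
tau = min(len(pts) for k in range(len(candidates) + 1)
          for pts in combinations(candidates, k)
          if all(pierced(s, pts) for s in family))

print(f"nu = {nu}, tau = {tau}")   # here nu = 1 but tau = 2
\end{verbatim}

the three 2-intervals pairwise intersect but have no common point, so nu = 1 while tau = 2; the theorem above says that this gap can never exceed a factor depending only on d.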
the set .a point defines a -point multiset ] be an index set , and , for , be auxiliary pairwise disjoint unit intervals on the real line . in each , we choose distinct points , . the constructed system consists of homogeneous -intervals . for each , we choose auxiliary sets , and we construct as follows : the picture shows an example of for , and : the heart of the proof is the construction of suitable sets and on the ground set . since the should be homogeneous -intervals , we obviously require 1 . for all , and .the condition that every two members of intersect is implied by the following : 1 . for all , , we have or ( or both ) .finally , we want to have no small transversal .since no two -intervals of have a point component in common , a transversal of size intersects no more than members of in their point components , and all the other members of must be intersected in their interval components .therefore , the transversal condition translates to 1 .put for a sufficiently small constant , and let .then , and consequently for any arising from by removing at most sets .a construction of sets and as above was provided by sgall .his results give the following : [ p : sgall ] let be a given integer , let for a sufficiently small constant , and let be -element subsets of ] .let ( note that never contains both and , since no edge of does ) . by the assumption , plusany other edge together intersect in at least vertices .thus , any contains at least vertices of , and consequently no more than vertices of .let be the total weight of the vertices in and the total weight of the vertices in .the edges in contribute solely to , while any other edge contributes at least as much to as to , and so .but this is impossible since all vertex weights are identical and .the claim , and theorem [ t : multipoint ] too , are proved .an interesting open problem is whether in theorem [ t : multipoint ] could be replaced by for some constant independent of .the best known lower bound is .tardos proved the optimal bound for 2-intervals by a topological argument using the homology of suitable simplicial complexes .kaiser s argument is similar to the presented one , but he proves lemma [ l : samewt ] using a rather advanced borsuk ulam - type theorem of ramos concerning continuous maps defined on products of spheres .the method with brouwer s theorem was used by kaiser and rabinovich for a proof of theorem [ t : multipoint ] .alon s short proof of the bound for families of -intervals applies a powerful technique developed in alon and kleitman .for the so - called hadwiger debrunner -problem solved in the latter paper , the quantitative bounds are probably quite far from the truth .it would be interesting to find an alternative topological approach to that problem , which could perhaps lead to better bounds .see , for example , hell . 
the variant of the piercing problem for families of homogeneous -intervals has been considered simultaneously with -intervals ( , , , ) .the upper bounds obtained for the homogeneous case are slightly worse : for homogeneous -intervals , which is tight , and for homogeneous -intervals , .the reason for the worse bounds is that the escape hypergraph needs no longer be -partite , and so fredi s theorem relating to gives a little worse bound ( for , one uses a theorem of lovsz instead , asserting that for any graph ) .sgall s construction answered a problem raised by wigderson in 1985 .the title of sgall s paper refers to a different , but essentially equivalent , formulation of the problem dealing with labeled tournaments .alon proved by the method of that if is a tree and is a family subgraphs of with at most connected components , then .more generally , he established a similar bound for the situation where is a graph of bounded tree - width ( on the other hand , if the tree - width of is sufficiently large , then one can find a system of connected subgraps of with and arbitrarily large , and so the tree - width condition is also necessary in this sense ) . a somewhat weaker bound for trees has been obtained independently by kaiser .[ ex : polytprod ] let and be convex polytopes . show that there is a bijection between the nonempty faces of the cartesian product and all the products , where is a nonempty face of and is a nonempty face of .show that the following `` brouwer - like '' claim resembling lemma [ l : polyt - surj ] is _ not _ true : if is a continuous map of the -ball such that the boundary of is mapped surjectively onto itself , then is surjective .[ ex : homog ] prove the bound for any family of _ homogeneous_ -intervals ( unions of intervals on a single line ) .hint : follow the proof for -intervals above , but encode a candidate transversal by a point of a simplex ( rather than a product of simplices ) .fixed point theorems are `` global - local tools '' : from global information about a space ( such as its homology ) they derive local effects , such as the existence of special points where `` something happens . ''of course , in application to combinatorial problems we need to combine them with suitable `` continuous - discrete tools '' : from continous effects , such as topological information about continuous maps of simplicial complexes , we have to find our way back to combinatorial information .in addition to the usual game of graphs , posets , complexes and spaces , we will in the following exploit the deep topological effects caused by symmetry , that is , by finite group actions . a ( finite ) group _ acts _ on a ( finite ) simplicial complex for the polyhedron ( the geometric realization of a simplicial complex ) . ] if each group element corresponds to a permutation of the vertices of , where composition of group elements corresponds to composition of permutations , in such a way that is a face of for all and for all .this action on the vertices is extended to the geometric realization of the complex , so that acts as a group of simplicial homeomorphisms .the action is _ faithful _ if only the identity element in acts as the identity permutation . in general ,the set is a normal subgroup of .hence we get that the quotient group acts faithfully on , and we usually only consider faithful actions . 
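As a small computational companion to these definitions, one can check mechanically whether a collection of vertex permutations defines a simplicial action, whether the action is faithful, and whether it is vertex transitive. The sketch below is our own illustration; the square-boundary complex and all function names are invented and not taken from the text.

```python
from itertools import combinations

def face_set(facets):
    """All nonempty faces of the complex generated by the given facets."""
    faces = set()
    for f in facets:
        for r in range(1, len(f) + 1):
            faces.update(frozenset(s) for s in combinations(sorted(f), r))
    return faces

def is_simplicial_action(facets, perms):
    """True if every permutation (a dict vertex -> vertex) maps faces to faces."""
    faces = face_set(facets)
    return all(frozenset(g[v] for v in face) in faces
               for g in perms for face in faces)

def acts_faithfully(perms):
    """Only the identity permutation may fix every vertex."""
    return sum(all(g[v] == v for v in g) for g in perms) == 1

def is_vertex_transitive(vertices, perms):
    v0 = next(iter(vertices))
    return {g[v0] for g in perms} == set(vertices)

# Toy example: the boundary of a square (a 1-dimensional complex on vertices 0..3)
# acted on by the cyclic group of rotations.
facets = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
rotations = [{v: (v + k) % 4 for v in range(4)} for k in range(4)]

print(is_simplicial_action(facets, rotations))     # True
print(acts_faithfully(rotations))                  # True: only the identity fixes all vertices
print(is_vertex_transitive(range(4), rotations))   # True
```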
in this case , we can interpret as a subgroup of the _ symmetry group _ of the complex .the action is _ vertex transitive _ if for any two vertices of there is a group element with .a _ fixed point _ ( also known as _ stable point _ ) of a group action is a point that satisfies for all .we denote the set of all fixed points by . note : this is not in general a subcomplex of .let } ] .loops and multiple edges are excluded .thus any graph is determined by its edge set , which is a subset of the set of `` potential edges . ''we identify a _ property _ of graphs with the family of graphs that have the property , and thus with the set family given by ,a)\mbox { has property } \pp\}.\ ] ] furthermore , we will consider only graph properties that are isomorphism invariant ; that is , properties of abstract graphs that are preserved under renumbering the vertices . with the symmetry condition of definition [ d : gp ] , we would accept `` being connected '' , `` being planar , '' `` having no isolated vertices , '' and `` having even vertex degrees '' as graph properties . however ,`` vertex is not isolated , '' `` is a triangle , '' and `` there are no edges between odd - numbered vertices '' are not graph properties .having no edge : : : clearly we have to check every single in order to be sure that it is not contained in , so this property is evasive : its argument complexity is .having at most edges : : : let us assume that we ask questions , and the answer we get is yes for the first questions , and then we get no answers for all further questions , except for possibly the last one . assuming that , this implies that the property is evasive .otherwise , for , the property is trivial .being connected : : : this property is evasive for .convince yourself that for any strategy , a sequence of `` bad '' answers can force you to ask all the questions .being planar : : : this property is trivial for but evasive for . in fact , for one has to ask all the questions ( in arbitrary order ) , and the answer will be unless we get a yes answer for all the questions including the last one .this is , however , not at all obvious for : it was claimed by hopcroft & tarjan , and proved by best , van emde boas & lenstra ( * ? ? ?* example 2 ) . a large star : : : let be the property of being a disjoint union of a star and an arbitrary graph on vertices , andlet be the corresponding set system . then for . for we can easily see this , as follows .test all the edges with . 
that way we will find exactly one vertex with at least neighbors ( otherwise property can not be satisfied ) : that vertex has to be the center of the star .we test all other edges adjacent to : we must find that has exactly neighbors .thus we have identified three vertices that are not neighbors of : at least one of the edges between those three has not been tested .we test all other edges to check that ,a) ] .loops and parallel edges are excluded , but anti - parallel edges are allowed .thus any digraph is determined by its arc set , which is a subset of the set of all `` potential arcs '' ( corresponding to the off - diagonal entries of an adjacency matrix ) .a _ digraph property _ is a property of digraphs ,a)$ ] that is invariant under relabelling of the vertex set .equivalently , a digraph property is a family of arc sets that is symmetric under the action of that acts by renumbering the vertices ( and renumbering all arcs correspondingly ) .a digraph property is _ evasive _ if the associated set system is evasive , otherwise it is _ non - evasive_. for bipartite graph properties we use a fixed vertex set of size , and use as the set of potential edges .bipartite graph property _ is a property of graphs with that is preserved under renumbering the vertices in , and also under permuting the vertices in .equivalently , a bipartite graph property on is a set system that is stable under the action of the automorphism group that acts transitively on .having at most arcs : : : again , this is clearly evasive with if , and trivial otherwise .having a sink : : : a _ sink _ in a digraph on vertices is a vertex for which all arcs going into are present , but no arc leaves , that is , a vertex of out - degree , and in - degree .let be the set system of all digraphs on vertices that have a sink .it is easy to see that . in particular , for `` having a sink '' is a non - trivial but non - evasive digraph property .+ in fact , if we test whether , then either we get the answer yes , then is not a sink , or we get the answer no , then is not a sink .so , by testing arcs between pairs of vertices that `` could be sinks , '' after questions we are down to one single `` candidate sink '' . at this point at least one arc adjacent to has been tested .so we need at most further questions to test whether it is a sink .[ history : the aanderaa rosenberg conjecture][arc ] originally , arnold l. rosenberg had conjectured that all non - trivial digraph properties have quadratic argument complexity , that is , that there is a constant such that for all non - trivial properties of digraphs on vertices one has .however , s. aanderaa found the counter - example ( for digraphs ) of `` having a sink '' ( * ? ? 
?* example 15 ) .we have also seen that `` being a scorpion graph '' is a counter - example for graphs .hence rosenberg modified the conjecture : at least all _ monotone _ graph properties , that is , properties that are preserved under deletion of edges , should have quadratic argument complexity .this is the statement of the _ aanderaa rosenberg conjecture _ .richard karp considerably sharpened the statement , as follows .we will prove this below for graphs and digraphs in the special case when a prime power ; from this one can derive the aanderaa rosenberg conjecture , with .similarly , we will prove that monotone properties of bipartite graphs on a fixed ground set are evasive ( without any restriction on and ) .however , we first return to the more general setting of set systems .any strategy to determine whether an ( unknown ) set is contained in a ( known ) set system in definition [ d : ask]can be represented in terms of a decision tree of the following form .a _ decision tree _ is a rooted , planar , binary tree whose leaves are labelled `` yes '' or `` no , '' and whose internal nodes are labelled by questions ( here they are of the type `` '' ) .its edges are labelled by answers : we will represent them so that the edges labelled `` yes '' point to the right child , and the `` no '' edges pointing to the left child .a _ decision tree for _ is a decision tree such that starting at the root with an arbitrary , and going to the right resp .left child depending on whether the question at an internal node we reach has answer yes or no , we always reach a leaf that correctly answers the question `` '' . the root of a decision tree is at _ level _ , and the children of a node at level have level .the _ depth _ of a tree is the greatest such that the tree has a vertex at level ( a leaf ) .a decision tree for is _ optimal _ if it has the smallest depth among all decision trees for ; that is , if it leads us to ask the smallest number of questions for the worst possible input .let us consider an explicit example . the following figure represents an optimal algorithm for the `` sink '' problem on digraphs with vertices .this has a ground set of size .the algorithm first asks , in the root node at level , whether . in casethe answer is yes ( so we know that is not a sink ) , it branches to the right , leading to a question node at level that asks whether , etc . in casethe answer to the question is no ( so we know that is not a sink ) , it branches to the left , leading to a question node at level that asks whether , etc .for every possible input ( there are different ones ) , after two questions we have identified a unique `` candidate sink '' ; after not more than question nodes one arrives at a leaf node that correctly answers the question whether the graph has a sink node : yes or no .( the number of the unique candidate is noted next to each node at level . ) for each node ( leaf or inner ) of level , there are exactly different inputs that lead to this node .this proves the following lemma .another way to view a ( binary ) decision tree algorithm is as follows . in the beginning, we do not know anything about the set , so we can view the collection of possible sets as the complete boolean algebra of all subsets of . in the first node ( at `` level '' ) we ask a question of the type `` '' ; this induces a subdivision of the boolean algebra into two halves , depending on whether we get answer yes or no .each of the halves is an interval of length of the boolean algebra . 
at level ask a new question , depending on the outcome of the first question .thus we _ independently _ bisect the two halves of level , getting four pieces of the boolean algebra , all of the same size . process is iterated .it stops we do not need to ask a further question on the parts which we create that either contain only sets that are in ( this yields a yes - leaf ) or that contain only sets not in ( corresponding to no - leaves ) . thus the final result is a special type of partition of the boolean algebra into intervals . some of them are yes intervals , containing only sets of , all the others are no - intervals , containing no sets from .if the property in question is monotone , then the union of the yes intervals ( i.e. , the set system ) forms an _ ideal _ in the boolean algebra , that is , a `` down - closed '' set such that with any set that it contains it must also contain all its subsets .consider one interval in the partition of that is induced by any optimal algorithm for . if the leaf , at level , corresponding to the interval is reached through a sequence of yes - answers and no - answers ( with ), then this means that there are sets with and with , such that in other words , the interval contains all sets that give yes - answers when asked about any of the elements of , no - answers when asked about any of the elements of , while the elements of may or may not be contained in .thus the interval has size , and its counting polynomial is now the complete set system is a disjoint union of the intervals , and we get in particular , for an optimal decision tree we have and thus at every leaf of level , which means that all the summands have a common factor of .we can now draw the conclusion , based only on simple counting , that most set families are evasive .this can not of course be used to settle any specific cases , but it can at least make the various evasiveness conjectures seem more plausible .let be any -orbit of , that is , a collection of -sets on which acts transitively .while every set in contains elements , we know from transitivity that every element of is contained in the same number , say , of sets of the orbit . thus , double - counting the edges of the bipartite graph on the vertex set defined by `` '' ( displayed in the figure below ) we find that .thus for we have that divides , while is one single `` trivial '' orbit of size , and does nt appear .hence we have which implies evasiveness by corollary [ c : euler - char ] .* , so we have * and all images under ,that is , all singleton sets : , * and and all images under , so , * and and all their -images , so , * and their -images , so . an explicit decision tree of depth for this is given in our figure below . herethe _ pseudo - leaf _ `` yes(7,10 ) '' denotes a decision tree where we check all elements that have not been checked before , other than the elements and .if none of them is contained in , then the answer is yes ( irrespective of whether or ) , otherwise the answer is no .the fact that two elements need not be checked means that this branch of the decision tree denoted by this `` pseudo - leaf '' does not go beyond depth .similarly , a pseudo - leaf of the type `` yes(7 ) '' represents a subtree of depth .we now concentrate on the case where is closed under taking subsets , that is , is an abstract simplicial complex , which we also denote by . 
in this setting ,the symmetry group acts on as a group of simplicial homeomorphisms .if is a graph or digraph property , then this means that the action of is transitive on the vertex set of , which corresponds to the edge set of the graph in question .again we denote the cardinality of the ground set ( the vertex set of ) by .a complex is _ collapsible _ if it can be reduced to a one - point complex ( equivalently , to a simplex ) by steps of the form are faces of with , where is the _unique _ maximal element of that contains .our figure illustrates a sequence of collapses that reduces a -dimensional complex to a point . in each casethe face that is contained in a unique maximal face is drawn fattened .{eps / collapse.eps}\ ] ] the first implication is clear : for a cone we do nt have to test the apex in order to see whether a set is a face of , since if and only if . the third implication is easy topology : one can write down explicit deformation retractions .the middle implication we will derive from the following lemma , which uses the notion of a _ link _ of a vertex in a simplicial complex : this is the complex formed by all faces such that but . is non - evasive if and only if either is a simplex , or it is not a simplex but it has a vertex such that both the deletion and the link are non - evasive . if no questions need to be asked ( that is , if ) , then is a simplex .otherwise we have some that corresponds to the first question to be asked by an optimal algorithm .if one gets a yes answer , then the problem is reduced to the link , since the faces correspond to the faces of for which . in the case of a no - answerthe problem similarly reduces to the deletion .we use induction on , where is clear .if the vertex corresponds to a good first vertex to ask , then we start with a sequence of collapses of the complex that correspond to a collapsing sequence for the link of in : this is possible by induction , since the link of is non - evasive and has at most vertices .( a non - maximal face in the link that is contained in a unique maximal face provides the same type of face in the complete complex . )thus we can apply collapses to until we get that .then one further collapsing step ( with and ) takes us to the one - point complex .if is the vertex set of , then any point has a unique representation of the form with and .if the group action , with is transitive , then this means that for every there is some with .furthermore , if is a fixed point , then we have for all , and hence we get for all . from thiswe derive for all .hence we get and this is a point in only if is the complete simplex with vertex set . alternatively : the fixed point set of any group action is a subcomplex of the barycentric subdivision , by lemma [ l : fps ] .thus a vertex of the fixed point complex is the barycenter of a face of .since is fixed by the whole group , so is its support , the set .thus vertex transitivity implies that , and .[ the evasiveness conjecture for prime powers : kahn , saks & sturtevant ] all monontone non - trivial graph properties and digraph properties for graphs on a prime power number of vertices are evasive .we identify the fixed vertex set with .corresponding to a non - evasive monotone non - trivial graph property we have a non - evasive complex on a set of vertices . 
by theorem [ t :collapsible ] is collapsible and hence -acyclic .the symmetry group of includes the symmetric group , but we take only the subgroup of all `` affine maps '' that permute the vertex set , and ( since we are considering graph properties ) extend to an action on the vertex set of .then we can easily verify the following facts : is doubly transitive on , and hence induces a vertex transitive group of symmetries of the complex on the vertex set ( interpret as a -dimensional vector space , then any ( ordered ) pair of distinct points can be mapped to any other such pair by an affine map on the line ) ; taking these facts together , we have verified all the requirements of oliver s theorem [ t : oliver ] . hence has a fixed point on , and by lemma [ l : transitive - fixed ] is a simplex , and hence the corresponding ( di)graph property is trivial. from this one can also deduce with a lemma due to kleitman & kwiatowski ( * ? ? ? * thm .2)that every non - trivial monotone graph property on vertices has complexity at least .( for the proof see ( * ? ? ?* thm . 6 ) . )this establishes the aanderaa rosenberg conjecture [ arc ] . on the other hand , the evasiveness conjecture is still an open problem for every that is not a prime power .kahn , saks & sturtevant ( * ? ? ?4 ) report that they verified it for .an interesting aspect of yao s proof is that it does not use a vertex transitive group .in fact , let the cyclic group act by cyclically permuting the vertices in , while leaving the vertices in fixed .the group satisfies the assumptions of oliver s theorem [ t : oliver ] , with .it acts on the complex which is acyclic by theorem [ t : collapsible ] .thus we get from oliver s theorem that the fixed point set is acyclic .this fixed point set is not a subcomplex of ( it does not contain any vertices of ) , but it is a subcomplex of the order complex , which is the barycentric subdivision of ( lemma [ l : fps ] ) .the bipartite graphs that are fixed under are those for which every vertex in is adjacent to none , or to all , of the vertices in ; thus they are complete bipartite graphs of the type for suitable .our figure illustrates this for the case where , , and . monotonicity now implies that the fixed graphs under are _ all _ the complete bipartite graphs of type with for some with .( here is impossible , since then would be a simplex , corresponding to a trivial bipartite graph property . )now we observe that is the order complex ( the barycentric subdivision ) of a different complex , namely of the complex whose vertices are the complete bipartite subgraphs , and whose faces are _ all _ sets of at most vertices . thus is the barycentric subdivision of the -dimensional skeleton of an -dimensional simplex . in particular , this space is not acyclic . even its reduced euler characteristic , which can be computed to be , does not vanish .however , conjectures ( 5 ) and ( 4 ) fail for : a counterexample is provided by the six - vertex triangulation of the real projective plane ( see ( * ? ? ?* section 5.8 ) ) .even conjectures ( 3 ) and possibly ( 2 ) fail for : a counterexample by oliver ( unpublished ) , of dimension , is based on ; see lutz .so , it seems that conjecture ( 1)the monotone version of the generalized aanderaa rosenberg conjecture [ c : gar]may be the right generality to prove , even though its non - monotone version fails by proposition [ e : illies ] . 
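The group-theoretic facts that the Kahn-Saks-Sturtevant argument relies on can be verified by brute force for small primes. The sketch below is our own check, not part of the proof: it builds the group of affine maps x -> ax + b on F_p, confirms that it is doubly transitive on the p points and hence transitive on the potential edges, and that the translations form a normal subgroup of order p (the quotient is the multiplicative group of F_p, which is cyclic), which is the kind of input the argument feeds into Oliver's theorem.

```python
from itertools import combinations

p = 7  # any prime; the argument in the text needs a prime power

# the affine group: maps x -> a*x + b with a in {1,...,p-1}, b in {0,...,p-1}
affine = [(a, b) for a in range(1, p) for b in range(p)]

def apply(g, x):
    return (g[0] * x + g[1]) % p

# double transitivity: every ordered pair of distinct points can be mapped
# onto every other such pair by some group element
pairs = [(x, y) for x in range(p) for y in range(p) if x != y]
doubly_transitive = all(
    any((apply(g, s[0]), apply(g, s[1])) == t for g in affine)
    for s in pairs for t in pairs
)
print("order:", len(affine), "= p(p-1) =", p * (p - 1))
print("doubly transitive on the p points:", doubly_transitive)

# hence the induced action on unordered pairs (the potential edges) is transitive
edges = list(combinations(range(p), 2))
edge_orbit = {frozenset((apply(g, 0), apply(g, 1))) for g in affine}
print("transitive on potential edges:", len(edge_orbit) == len(edges))

# the translations x -> x + b form a normal subgroup of order p;
# the quotient is the cyclic group of nonzero residues under multiplication
translations = {(1, b) for b in range(p)}

def compose(g, h):  # g after h: x -> a_g*(a_h*x + b_h) + b_g
    return ((g[0] * h[0]) % p, (g[0] * h[1] + g[1]) % p)

def inverse(g):
    a_inv = pow(g[0], -1, p)
    return (a_inv, (-a_inv * g[1]) % p)

normal = all(compose(compose(g, t), inverse(g)) in translations
             for g in affine for t in translations)
print("translations form a normal p-subgroup:", normal)
```

For a number of vertices that is not a prime power there is no field of that order, so this particular group is unavailable, consistent with the remark above that the conjecture remains open in that case.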
1 .what kind of values of are possible for graph properties of graphs on vertices ?for monotone properties , it is assumed that one has , and this is proved if is a prime power . in general , it is known that unless , by bollobs & eldridge , see ( * ? ? ?2 . show that the digraph property `` has a sink '' has complexity can you also prove that for any non - trivial digraph property one has ? + ( this is stated in best , van emde boas & lenstra ; there are analogous results by bollobs & eldridge ( * ? ? ?viii.5 ) in a different model for digraphs . ) 3 .show that if a complex corresponds to a non - evasive monotone graph property , then it has a complete -skeleton .4 . give examples of simplicial complexes that are contractible , but not collapsible .( the `` dunce hat '' is a key word for a search in the literature ) 5 .assume that when testing some unknown set with respect to a set system , you always get the answer yes ( unless you have already proved that the answer is no , in which case you would nt ask ) .* show that with this type of answers you _ always _ need questions for _ any _ algorithm ( and thus is evasive ) if and only if satisfies the following property : * * for any there is some such that .* show that for , the family of edge sets of planar graphs satisfies property ( * ) . * give other examples of graph properties that satisfy ( * ) , and are thus evasive .+ ( this is the `` simple strategy '' of milner & welsh ; see bollobs . )let be a vertex - homogeneous simplicial complex with vertices and euler characteristic .suppose that is the prime factorization and let .prove that 7 .let be the set of all words of length in the alphabet , .for subsets , let be the least number of inspections of single letters ( or rather , positions ) that the best algorithm needs in the worst case in order to decide whether `` '' + define the polynomial where for .+ show that e. c. milner and d. j. a. welsh . on the computational complexity of graph theoretical properties . in c. s. j. a. nash - williams and j. sheehan , editors ,fifth british combinatorial conference , aberdeen 1975 _ , utilitas math . , pages 471487 , winnipeg , 1976 . | these lecture notes were written about 15 years ago , with a history that goes back nearly 30 years for some parts . they can be regarded as a `` prequel '' to the book , and one day they may become a part of a more extensive book project . they are not particularly polished , but we decided to make them public in the hope that they might be useful . we refer to for notation and terminology not explained here . |
this paper seeks to investigate the appearance of periodic and non - periodic cycles in the time series of stock market returns .the analysis of the existence of cycles and trends in stock indexes provides us with an essential contribution to the understanding of market efficiency and the distribution of stock indexes returns .cycles in the economic data have been studied extensively , resulting in a number of stylized facts that characterize some cyclical or seasonal effects to financial time series .the study of cycles in economic data dates back to the early 1930s .various techniques to measure seasonality have been widely applied , combining ideas from mathematics , physics , economics and social sciences .these efforts have resulted in research findings of , among other , intraday trading effects , weekend and/or three - day effects , intramonth effects , quarterly and annual cycles , and various multi - year cyclical variations in stock market index returns . a consensus of opinion on the nature or the character of cyclic effects , however , has not been reached .financial markets belong to a class of human - made systems exhibiting complex organization and dynamics , and similarity in behavior .complex systems have a large number of mutually interacting parts , are often open to their environment , and self - organize their internal structure and dynamics , which produce various forms of a large - scale collective behaviors .they operate simultaneously at different scales .the outputs of such systems , time series of records of their activity , display co - existence of collectivity and noise ; the complexity of systems is reflected in time series that exhibit a wealth of dynamic features , including trends and cycles on various scales . the tools to study such systemsthus can not be analytical , but rather must be adapted to enable accurate quantification of their long - range order . in this sense , we have chosen to contribute to the debate about the existence and types of cycles in stock market data in two ways : by way of applying wavelet spectral analysis to study market returns data , and through the use of hurst exponent formalism to study local behavior around market cycles and trends .firstly , we utilized wavelets to study cyclical consistency in time series of stock market indexes ( smis ) .wavelet analysis is appropriate for such a task ; it was originally introduced to study complex signals .we use wavelet - based spectral analysis , which estimates the spectral characteristics of a time - series as a function of time , revealing how the different periodic components of a particular time - series evolve over time .it enables us to compare stock market index time series wavelet spectra from different economies , to examine the similarities in contributions of cycles at various characteristic frequencies to the total energy spectrum , to observe when exactly these contributions happen in time for different economies , and to compare if the ups and downs of each cycle occur simultaneously in different smi series . with this tool we can attempt to address the question of whether the complexity of a financial market is specifically limited to the statistical behavior of each smi time series or parts of an smis series complexitycan be attributed to the overall world market .we use the hurst exponent formalism to attempt to address the question of what type , or types , of cyclical behavior smi data possess . 
in recent years, the application of the hurst - exponent - based analyses has led many researchers to conclude that financial time series possess multi - scaling properties .in addition , these methods have allowed for the examination of local scaling around a given instance of time , so that the complex dynamical properties of various time series can be analyzed locally rather than globally . in this paper, we use the time dependent hurst exponent approach to test the local character of cycles at various characteristic frequencies of smi time series from different economies . in doingso we aim to compare the behavior of each cycle across stock markets and to find ways to classify various markets according their cyclical behavior .we use the wavelet and the hurst exponent approach to analyze three types of smi time series : data belonging to stock markets of developed economies , emerging economies , and of the underdeveloped or transitional economies .previous and recent work by our group and others has demonstrated that smi series exhibit scaling properties connected to the level of growth and/or maturity of the economy the stock market is embedded in , and that all the smi series exhibit the effects of various periodic or non - periodic cycles that are visible in their wavelet spectra .it has been demonstrated that in emerging or transitional markets stock indexes do not fully represent the underlying economies , therefore we wanted to tailor our smi study with this in mind and differentiate between underdeveloped ( transitional ) economies , emerging economies , and developed economies .our study is structured as follows . in sec .2 . we give a brief overview of the smi dataset and the variables that we study . in sec .we describe the general framework of the wavelet transform ( wt ) spectral analysis and the specificities of the use of the morlet wavelet basis for the transformation we apply on our dataset .we apply the wt framework to study the appearance and consistency of cycles across stock markets , and we present our results within this section . in addition , within sec .we have also investigated the statistical effects of the observed seasonal behavior on the spectral behavior of our smi data . 
in sec .we give an introduction to the technique of time - dependent hurts exponent analysis and present the specificities of the detrended moving average ( dma ) method and its time - dependent variation ( tddma ) .we then list the results of the use of tddma on our smi data , and develop a quantitative indicator ( that we have dubbed the development index ) , which may help classify the level of development of a particular market according to the markets local cyclical behavior .we end our paper with a list of conclusions and a few suggestions for future work in sec .in this paper , we investigate data from the following stock markets : the new york stock exchange nyse index , the standard & poor s 500 ( s&p500 ) index , the uk ftse 100 index , the tokyo stock exchange nikkei 225 index , the french cac 40 index , and the german stock market dax index , which we consider developed economies ; the shanghai stock exchange sse composite index , the brazil stock market bovespa index , the johannesburg stock exchange jse index , the turkey stock market xu 100 index , the budapest stock exchange bux index , and the croatian crobex index , which we consider emerging economies ; the tehran tepix index , the egyptian stock market egx 30 index , and the indexes of the developing economies in the western balkans - the belgrade stock exchange belexline index , the montenegrin montex 20 index , the sasx 100 index of the market of bosnia and herzegovina and the birs index of bosnian entity republic of srpska , representing markets of underdeveloped economies . table 1 .lists general characteristics of the smi time series which we have analyzed ; depending mainly on the market development level , they are of varying duration .the variables studied in our paper are the daily price logarithmic returns that are defined as + where is the closure price of the stock market index at day , and the lag period is a time interval of recording of index values .all of the analyzed time series of prices on the stock markets are publicly available ( from the official web - sites of the markets in question or from the yahoo finance database ) , and are given in local currencies .the values of the smi data are listed only for trading days that is , they are recorded according to the market calendar , with all weekends and holidays removed from datasets . ]the wavelet transform ( wt ) was introduced in order to circumvent the heisenberg uncertainty principle problem in classical signal analysis , and achieve good signal localization in both time and frequency that a classical fourier transform approach lacks .namely , in wt the window of examination length is adjusted to the frequency analyzed : slow events are examined with a long window , whilst a shorter window is used for fast events . 
in this way an adequate time resolution for high frequencies and a good frequency resolution for low frequenciesis achieved in a single transform .the continuous wavelet transform of a discrete sequence is defined as the convolution of with wavelet functions in the following way : with and being the scale and translation - in - time ( coordinate ) parameters , and the total length of the smi series studied .the wavelet functions , used in eq.(2 ) , are related to the scaled and translated version of the mother - wavelet , through + in order to examine the existence of cycles in smi data , we used the wavelet scalegrams ( mean wavelet power spectra ) , that are defined by + the scalegram can be related to the corresponding fourier power spectrum via the formula + this formula implies that if the two spectra , and , exhibit power - law behavior , then they should have the same power - law exponent .the meaning of the wavelet scalegram is the same as that of the classical spectrum - it gives a contribution to the signal energy at a specific scale parameter .we are thus able to view and estimate the peaks of wavelet spectra in the same way as one would approach this problem in a classical fourier approach . in this paper , we find it convenient to follow the choice of many researchers before us and use the standard set of morlet wavelet functions as a wavelet basis for our analysis .the choice of the wavelet is an important aspect of any wt analysis , as the wavelet coefficients contain combined information on both the function we analyze and the analyzing wavelet . in applying the wt methodit is important to choose a mother - wavelet that is well localized in both time and frequency , and is at least of zero mean ( of course , these properties remain valid for the corresponding generated wavelet functions ) .the morlet wavelet has proven to possess this optimal joint time - frequency localization , and can thus be used for detecting locations and spatial distribution of singularities in time series .the morlet set is defined by with the corresponding fourier transform where , is the heaviside step function , is the fourier circular frequency , and is the characteristic nondimensional parameter ( a real number ) . for the morlet wavelet ,the relation between the scale and the frequency is given by where is a real number that characterize the particular wavelet .in our study we use .we have calculated wt power spectra for all our smi series , and for all the periods ( durations ) these data series were available to us .we took into consideration only the values of the wt spectra between the minimum time scale of and the statistically meaningful maximum time scale of , and searched for characteristic peaks ( local maxima ) within those limits .figure [ fig1 ] . depicts the typical findings we got for all our smi series , represented by the wt spectra of the dax returns series ( as a primer of a developed market ) , xu 100 returns series ( representing emerging markets ) , and tepix returns series ( representing the underdeveloped economies ) . from figure [ fig1 ] characteristic peaks , or rather characteristic cycle periods around characteristic peaks , can be recognized on different time scales .these include : \1 . 
the working week cycle , or the peak around 5 days .this peak is visible in all our smi series , with the exception of the tepix series , where it is lacking altogether .the exact positions of the peak are not the same in all the series , but all the peaks are embedded in the period ( equivalent to the frequency band in a classical fourier case ) of 2 days to 6 days ( see figure [ fig1 ] , where the borders of all periods are drawn around the corresponding peaks ) .the peak around 7 days , which probably belongs to a calendar - week cycle .this peak is not visible in xu 100 and sse series , while it is visible in all other smi series . in all the series where it is visible, this peak is embedded in the period of 6 to 10 days .the peak around 14 days , probably tracing a two - week cycle .this peak is visible in all out smi series .the period that surrounds this peak is from 10 days to 25 days .in addition to this peak , some series show a very near one at around 20 days ( probably equivalent to a three - week cycle ) ; this additional peak appears in ftse 100 , nyse , cac 40 , dax and sse markets .since the 20-day peak appears in only few ( developed ) economies , we have decided not to list it as a separate cycle , just to mention it in the text .it is also depicted in figure [ fig1 ] ., where the border around it is drawn with dotted vertical line .+ 4 . the monthly cycle , or the peak around 30 days .it is present in all our smi series , and is embedded in the period between 25 days and 60 days . at this point in time , again , an additional peak appears in some of our smi series . where it exists , this peak is positioned at around 45 days , and it appears in ftse 100 , s&p 500 , cac 40 , dax , xu 100 , bovespa and egx 30 markets .for the same reasons as above , we have chosen just to mention this cycle here ( and have shown it s border with dotted vertical line in figure [ fig1 ] ) .+ 5 . the quarterly cycle , or the 90-day peak .this peak was found in all our smi series , and it is positioned in the region between 60 days and 110 days .+ 6 . the 4 - 5 month cycle , or the 150-day peak .this peak was found in all our smi series , and is positioned in the region between 110 days and 190 days .the 6 - 7 month peak , or the semi - annual cycle .this peak is not visible in all our underdeveloped smi series , and in the nyse and nikkei 225 series .where it appears , the peak is positioned in the period between 190 and 250 days .the annual cycle , or the peak around 360 days .this peaks was not found in ftse 100 , sse and egx 30 series . in all other cases ,it is embedded in the region between 250 days and 450 days .+ 9 . the bi - annual cycle , or the 600-day peak .this multi - year cycle was found in all our smi series .it is embedded in a time interval between 450 days and 950 days .other multi - year cycles have been found in the series of developed economies .these data series contain sufficient data points that enable the calculation of wt spectra on higher scales .due to our inability to investigate the existence of these peaks in all our data , we will just mention them here , but will not label them in the way we labelled other peaks and peak regions . 
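The peak structure just described can be explored with a short numerical routine. The sketch below is our own illustration rather than the authors' pipeline: the closing-price series is synthetic (a random walk with a weak 5-day component injected so the demo has something to find), the Morlet parameter is set to omega_0 = 6 as a common default since the value used in the paper is not reproduced here, and the transform is computed by direct correlation with scaled wavelets rather than with a dedicated library. It builds the log returns, the scalegram (the sum over translations of the squared wavelet coefficients at each scale), and lists its local maxima between scales of roughly 2 and 1000 trading days.

```python
import numpy as np

rng = np.random.default_rng(0)

# placeholder data: in practice these would be the daily closing prices of an SMI
n_days = 4096
t = np.arange(n_days)
log_price = np.cumsum(0.01 * rng.standard_normal(n_days)) + 0.005 * np.sin(2 * np.pi * t / 5)
returns = np.diff(log_price)               # r_t = ln P_t - ln P_{t-1}

omega0 = 6.0                               # nondimensional Morlet parameter (assumed value)

def morlet(scale, n_points):
    """L2-normalised Morlet wavelet sampled on an integer grid of length n_points."""
    x = (np.arange(n_points) - n_points // 2) / scale
    return (np.pi ** -0.25) * np.exp(1j * omega0 * x) * np.exp(-x ** 2 / 2) / np.sqrt(scale)

scales = np.geomspace(2, 1000, 120)        # from ~2 days up to ~1000 days
scalegram = np.empty_like(scales)
for i, s in enumerate(scales):
    width = int(min(10 * s, len(returns)))
    # np.correlate conjugates its second argument, so this is the CWT coefficient W(a, b)
    w = np.correlate(returns, morlet(s, width), mode="same")
    scalegram[i] = np.sum(np.abs(w) ** 2)  # E_W(a) = sum over b of |W(a, b)|^2

# crude peak detection on the mean wavelet power spectrum
peaks = [i for i in range(1, len(scales) - 1)
         if scalegram[i] > scalegram[i - 1] and scalegram[i] > scalegram[i + 1]]
# for omega0 = 6 the Fourier period is approximately 1.03 times the scale
for i in peaks:
    print(f"local maximum near a period of ~{1.03 * scales[i]:.1f} trading days")
```

On real index data one would apply a more careful peak criterion and test the maxima against a noise background before reading them as cycles.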
in order to be able to compare and characterize the obtained wavelet spectra of our stock market data , we have calculated relative energy content and relative amplitude of all the regions ( periods , bands ) under characteristic peaks in all our data series .the relative energy content of the -th peak in a wt power spectrum is defined as : where represents the average energy content of the period surrounding the -th peak : and is the total energy content of the wt spectrum of the stock market series analyzed .the energy content is a physical quantity behind a wt power spectrum , so it represents it s natural characteristic .similarly , the relative amplitude of the spectral band under the -th peak is defined as : with its average amplitude , and the total amplitude of the wt power spectrum of the stock market series of interest .the amplitude of the wt power spectrum depends on the variability of the frequency ( scale ) band analyzed - the more constant the frequency , the higher the amplitude .we calculated the relative energy contents and the relative amplitudes for all the obtained peaks in all the analyzed wt spectra .we then performed statistical analysis of three groups of data sets - those belonging to the developed economies , the emerging markets , and the underdeveloped economies .we first performed the shapiro - wilk test for normality of distributions within these three data groups .if normality of distributions existed within our datasets , we performed the one - way anova test to compare our sample means , with the significance level of . if the anova test confirmed the existence of differences of means , the average means for all three groups of datawas compared using the bonferroni method .if , however , the shapiro - wilk test did not confirm the existence of normality of distributions within our dataset , we performed the kruskal - wallis anova test to compare the means , with the significance level of .if the kruskal - wallis anova test confirmed the existence of differences in the groups means , the comparison of average means for all three groups of data was done using the wilcoxon mann - witney method .lists the calculated average values of relative energy content and the relative amplitudes of all the peaks for the three smi groups .the statistically significantly different values between the groups for each of the peaks are marked in bold - if only one value is bolded , then it differs from the other two market groups in a peak group ; if two values are bolded they differ mutually ; and if all three values have been bolded then all the three market groups values differ from each other . 
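A minimal sketch of the statistical comparison described above, written by us for illustration: the group labels and the numbers standing in for the relative energy content of one peak band are placeholders, and the post-hoc steps (pairwise t-tests with a Bonferroni-corrected threshold in the normal case, Mann-Whitney tests otherwise) are one plausible reading of the procedure rather than the authors' exact implementation.

```python
from itertools import combinations
from scipy import stats

def relative_band_energy(scalegram, scales, lo, hi):
    """E_rel of one peak band: mean power inside [lo, hi] over the total power.

    scalegram and scales are numpy arrays, e.g. from the scalegram sketch above;
    this is how each number in the groups below would be produced in practice.
    """
    band = (scales >= lo) & (scales <= hi)
    return scalegram[band].mean() / scalegram.sum()

def compare_groups(groups, alpha=0.05):
    """groups: dict {market type: list of E_rel values}. Returns pairs judged different."""
    normal = all(stats.shapiro(v).pvalue > alpha for v in groups.values())
    if normal:
        omnibus = stats.f_oneway(*groups.values()).pvalue
        posthoc = lambda a, b: stats.ttest_ind(a, b).pvalue
        alpha_pair = alpha / len(list(combinations(groups, 2)))  # Bonferroni correction
    else:
        omnibus = stats.kruskal(*groups.values()).pvalue
        posthoc = lambda a, b: stats.mannwhitneyu(a, b).pvalue
        alpha_pair = alpha
    if omnibus > alpha:
        return []
    return [(a, b) for a, b in combinations(groups, 2)
            if posthoc(groups[a], groups[b]) < alpha_pair]

# placeholder numbers standing in for the E_rel of the 5-day peak in each market
groups = {
    "underdeveloped": [0.010, 0.012, 0.011, 0.009, 0.013],
    "emerging":       [0.020, 0.022, 0.019, 0.023, 0.021],
    "developed":      [0.027, 0.025, 0.028, 0.026, 0.029],
}
print(compare_groups(groups))
```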
and b ) the relative amplitudes for all three market groups .results are depicted for the three peak regions - a small scale region surrounding the peak at 5 days , a mid - scale region surrounding the peak at 150 days , and a large scale region surrounding the peak at 600 days .squares enclose the of the values within the smi group , while the error bars depict the maximum and the minimum value within the same group.[fig2 ] ] our results are also illustrated in figure [ fig2 ] , where average values of the relative energy content and the relative amplitudes for all three market groups , and in three peak regions - a small scale region surrounding the peak at 5 days , a mid - scale region surrounding the peak at 150 days , and a large scale region surrounding the peak at 600 days , are depicted .figure [ fig2 ] and table 2 .show that in the small scales region ( peaks of up to 90 days ) the values of both the relative energy contents and the relative amplitudes under the spectral peaks for the underdeveloped markets are smaller than the values of the two other groups in a clear , statistically significant manner .even more so , for the values of the relative energy content for the two small scale peaks ( peaks at 5 days and at 14 days ) , we can statistically differentiate all three market groups .for the peaks at lager scales ( peaks at 150 days and more ) , the behavior of underdeveloped markets data does not differ from the other two groups , except in the case of a large scale region of the peak at 600 days .these results can be understood in the light of values of slopes of the wavelet spectra of our entire dataset .it has been shown that the smi series from the underdeveloped economies have wt spectra that show highly correlated long - range behavior , with the exponent .the difference of values of and for the peaks in the small scales region could explain the smaller contribution of small scale peaks to the overall spectral behavior in underdeveloped markets data , which bring about greater contribution of higher scale peaks and consequently higher slopes of wavelet spectra for these data .further , the slopes of wavelet spectra for the other two groups of market s data ( for developed and emerging economies ) are either close or equal to , which means that contributions of different peaks in wavelet spectra have to be approximately equal .it seems , therefore , that the transitional markets do not follow the same behavioral pattern as the markets of emerging or developed economies at time scales of days , weeks , and even several months .our results thus show that measures like and for the peaks in the small scales region could be used for partial differentiation between market economies .in order to gain another insight into the local complexity of our smi data , and the possibility of finding means to further quantitatively distinguish the three groups of smi data we use , we have applied the time - dependent detrended moving average ( dma ) algorithm to all our smi series .firstly , in order to study the general statistics of our smi data , we have employed the centered detrended moving average ( cdma ) technique .the variation of a standard cdma method that we use here is performed in three consecutive steps . in the first step , for each sequence of , we construct the sequence of cumulative sums . with assigned the total number of recorded values for a given series ( the smi number plays the role of time ) . 
in the next step ,the entire series of is divided into a set of overlapping segments of the length , and a moving average for each segment is calculated . in the next step a new series , the so - called series of detrended walk is calculated , which we get by removing the moving average from the . where is the window size .hence , the moving average function considers data points in the past and same number of points in the future .finally , one has to calculate the variance about the moving average for each segment and determine the average of these variances over all segments , which brings about the detrended moving average function + by increasing the segment length the function increases as well .when the analyzed time series follows a scaling law , the cdma function is of a power - law type , that is , , with . scaling exponent usually called the hurst exponent of the series . in the case of short - range data correlations ( or no correlations at all ) behaves as . for data with power - law long - range autocorrelationsone may expect that , while in the long - range negative autocorrelation case we have .when scaling exists , the exponent can be related to the fourier and wt power spectrum exponent through the scaling relation .we applied the time - dependent dma algorithm ( tddma ) to the subset of data in the intersection of the smi signal and a sliding window of size , which moves along the series with step .the scaling exponent is calculated for each subset and a sequence of local , time - dependent hurst exponent values is obtained .the minimum size of each subset is defined by the condition that the scaling law holds in the subset , while the accuracy of the technique is achieved with appropriate choice of and .we have chosen windows of up to , with the step for our tddma algorithm , while the scaling features are studied in the nine regions that enclose smi spectral peaks ( listed above and depicted in figure [ fig1 ] . in figure [ fig3 ]we give an example of the calculated tddma values for the three randomly selected representatives of smi market groups , given for a time interval from year 2008 to year 2011 . ] in order to be able to quantify the local behavior of smi data , we have built the hurst vectors .each coordinate of stock market vector corresponds to the mean value of local hurst exponents from a selected interval that includes and borders each peak .our analysis was performed on nine intervals that separate nine market peaks ( listed in the text above , and defined through the use of wt analysis ) , marked by index ( ) , while counts the smi series . from all these valueswe have built the hurst reference smi vector , composed of the mean values of for each coordinate ( peak ) and for all the smis that we have analyzed .we have done that in order to be able to compare efficiency of stock markets on each of the separate nine time intervals .the hurst reference smi vector is thus defined as : for different smi indexes in our dataset . in the case of our dataset, the values of the reference vector are changing with the addition of new smi data ( markets ) , but for this change becomes insignificantly small . 
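The centered detrended-moving-average estimator and its sliding-window variant, which produce the local exponents entering these Hurst vectors, translate directly into a short routine. The sketch below is our own implementation of the recipe as described, not the authors' code: the fit windows, the sliding-window size and step, and the white-noise test signal are illustrative choices, since the exact values used in the paper are not reproduced here.

```python
import numpy as np

def cdma_hurst(x, windows):
    """Centered detrended moving average estimate of the Hurst exponent of series x."""
    y = np.cumsum(x - np.mean(x))                 # profile (cumulative sum) of the series
    sigmas = []
    for n in windows:
        half = n // 2
        kernel = np.ones(n) / n
        moving_avg = np.convolve(y, kernel, mode="valid")   # centered moving average
        detrended = y[half:half + len(moving_avg)] - moving_avg
        sigmas.append(np.sqrt(np.mean(detrended ** 2)))     # sigma_DMA(n)
    # sigma_DMA(n) scales like n**H, so H is the slope of the log-log fit
    slope, _ = np.polyfit(np.log(windows), np.log(sigmas), 1)
    return slope

def time_dependent_hurst(x, window_size, step, fit_windows):
    """Local Hurst exponents from a sliding window moved along the series."""
    return [cdma_hurst(x[i:i + window_size], fit_windows)
            for i in range(0, len(x) - window_size + 1, step)]

# sanity check on uncorrelated noise: the estimate should be close to H = 0.5
rng = np.random.default_rng(1)
noise = rng.standard_normal(20000)
fit_windows = np.unique(np.geomspace(10, 500, 25).astype(int))
print(f"global H of white noise ~ {cdma_hurst(noise, fit_windows):.2f}")
local = time_dependent_hurst(noise, window_size=4000, step=2000, fit_windows=fit_windows)
print("local H values:", np.round(local, 2))
```

The white-noise check should return an exponent close to 0.5; long-range correlated data would give values above it and anti-correlated data below, as described above.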
from the two vectors and have calculated the relative smi hurst unit vectors that we have defined as : the unit vectors give us the information of the direction of the representative point for each market in relation to the hurst reference point that , to a certain accuracy , marks the overall financial status of the markets dataset that we use .however , in the case of our dataset , the distance of representative points from the hurst reference point did not provide us with any relevant additional information about the market development or efficiency .this can be demonstrated through the use of the so - called cosine similarity , which is the scalar euclidean product of two vectors , and which can quantify the level of correlation between vectors .scalar products of are defined as : + where and count smi series ( ) , while numbers peaks ( peak regions ) .we have arranged values of these scalar products so as to form a correlogram of market development .the markets correlogram is depicted in figure [ fig4 ] ; it consists of two block matrices that differentiate strong correlations within the group of underdeveloped markets ( upper left corner ) and within the group of developed markets ( lower right corner ) .the correlogram depicted in figure [ fig4 ] displays the existence of market group that does not belong neither to developed nor to underdeveloped type .members of this third group - the emerging markets - mainly correlate weakly with other two groups and within the group , and show random unpredictable strong correlations with some members ( markets ) in the developed or underdeveloped market group .this inability to look alike clearly differentiates emerging markets in correlogram and in figure [ fig4 ] . .positive correlations of market s hurst unit vectors are given in shades of blue ( for ) , while negative correlations are depicted in shades of red ( ) .horizontal and vertical white lines mark , from left to right , borders between groups of underdeveloped , emerging , and developed markets . [ fig4 ] ] market development correlogram depicted in figure [ fig4 ] prompted us to try to find a unique hurst indicator of market development , based on the projection of market s hurst unit vectors on the direction of development ( or maturity ) of markets dataset analyzed . in order to calculate this indicator ( we dubbed it the development index ) ,we have firstly defined the direction of development in markets indexes hurst space .we have made an assumption that the unit vector of development has to be able to link representative points of ideal markets , that is , markets with hurst vectors made of identical values for all the peak intervals .this could be defined in a following simple way in the general hurst coordinate space : where stands for the vectors made of all unit components . in the next step , in order to project hurst reference vectors to the direction of development , it is necessary to define the representation of the unit vector of development in the hurst space that originates at the hurst reference vector .the relative ( in relation to the market s hurst reference vector ) direction of development would then be : in the case of our dataset , the values of this new vector s components have not significantly changed with the addition of new smi data to dataset for ( being the number of markets in the dataset analyzed ) . 
the relations in eqs .17 - 19 led us to the value of for our dataset of stock market indexes : with the error for each component being .we have defined the development index ( di ) as a projection of hurst unit vectors onto this direction of development : graphical illustration of these projections is given in figure [ fig5 ] ., while the direction of the main axes is given by the unit vector of the direction of developemnt .the development index is calculated as a projection of hurst unit vectors onto the , which is directed to a portion of hurst space where representative points of developed markets are grouped .[ fig5 ] ] values of di for markets in our dataset are given in table [ tab_4 ] .it is visible from table [ tab_4 ] that the three market categories ( underdeveloped , emerging , and developed markets ) can be differentiated by this order parameter .we have decided to define the borders that separate our three market categories following the phenomenological arguments we have set in section 4.1 .namely , the values of the hurst vectors and a correlogram that we have calculated all point to the existence of two distinct groups that are well clustered ( underdeveloped and developed markets ) , divided by a claster of smi time series that transitions between these two groups ( the emerging markets ) .this is how , in the case of our dataset , we have decided to use the symmetry principle and define the border between the group of developed and emerging markets at , while the underdeveloped and emerging markets are , in our case , separated at ( for our dataset , ) . based on the criterion that we have produced , the egyptian stock market index egx30 would be classified as an emerging market , rather than an underdeveloped market as we initially assumed , while the hungarian bux index would classify as developed rather than the emerging market smi .[ tab_4 ] with this procedure we can examine the stock market time series in groups or individually , for any given smi time series .in this paper we have analyzed spectral properties of time series of stock market indexes ( smis ) of developed , emerging , and underdeveloped ( or transitional ) market economies , in order to examine differences and similarities in their seasonal behavior , and to try to re - classify markets in our dataset according to the character of their cyclical behavior .we have used two different well established techniques of data analysis to obtain and verify our findings : the wavelet transformation ( wt ) spectral analysis and the time - dependent hurst exponent approach ( tddma ) . 
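The vector constructions of this section amount to a few lines of linear algebra. The sketch below is ours: the three nine-component rows are invented placeholder exponents rather than the measured values, and the normalization of the development direction is our reading of the construction in eqs. 17-20, so the numbers it prints are purely illustrative.

```python
import numpy as np

# rows: markets, columns: mean local Hurst exponent in each of the nine peak regions
# (placeholder values; in the paper these come from the TDDMA analysis)
H = np.array([
    [0.58, 0.57, 0.59, 0.60, 0.61, 0.62, 0.60, 0.59, 0.58],   # an "underdeveloped"-like market
    [0.52, 0.53, 0.51, 0.54, 0.55, 0.53, 0.52, 0.54, 0.53],   # an "emerging"-like market
    [0.48, 0.47, 0.49, 0.46, 0.48, 0.47, 0.49, 0.48, 0.47],   # a "developed"-like market
])
markets = ["underdeveloped", "emerging", "developed"]

V_ref = H.mean(axis=0)                                       # Hurst reference vector
rel = H - V_ref
units = rel / np.linalg.norm(rel, axis=1, keepdims=True)     # relative Hurst unit vectors

# correlogram of cosine similarities between markets (scalar products of unit vectors)
correlogram = units @ units.T
print(np.round(correlogram, 2))

# direction of development: the unit vector along (1, ..., 1), re-expressed relative to
# the reference point and normalised (our reading of eqs. 17-19)
ones = np.ones_like(V_ref) / np.sqrt(len(V_ref))
w = ones - V_ref
w /= np.linalg.norm(w)

development_index = units @ w                                # DI: projection onto w
for name, di in zip(markets, development_index):
    print(f"{name:15s} DI = {di:+.2f}")
```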
we have found multiple peaks in wavelet spectra of all our smi time series .moreover , we have found all the peaks positioned at roughly the same times ( or time intervals ) in all our data , which points to the similarity in seasonal behavior across different market economies in our dataset .we have identified what can be termed a working - week cycle ( or a 5-day peak ) , a one - week cycle ( or a 7-day peak ) , a two - week cycle ( or a 14-day peak ) , a monthly cycle ( or a 30-day peak ) , a quarterly cycle ( or a 90-day peak ) , a 4- to 5-month cycle ( or a 150-day peak ) , a semi - annual cycle ( or a 6- to 7-month peak ) , an annual cycle ( or a 360-day peak ) , and a bi - annual ( or a 600-days ) multi - year cycle in our dataset .the dissimilarities between smi records from the different economies that we have observed occur only in the lack of a spectral peak in some of the analyzed markets ( in the case of a one - week cycle , a semi - annual cycle , and an annual cycle ) , or a slight lack of synchronization at a particular peak interval ( peaks are not positioned at exactly the same time instances in all the smi series analyzed ) .this prompted us to conclude that the seasonal behavior in different markets is probably a reflection of universality in market behavior , rather than a local characteristic of a particular economy .given that financial markets are human - made complex systems , it is plausible to believe that our findings can be explained by the fact that business cycles are a reflection of common human working habits and behavior .some authors find this commonality even desirable for the optimal functioning of a stock market , as was , for example , shown for the euro monetary area .some researchers , on the contrary , claim that these effects are not significant for the effectiveness of a stock market . 
in order to examine whether the observed seasonal adjustments to the behavior of stock markets could be used as an indicator of the level of development or strength of the economy that underlies the specific stock market ,we have performed a statistical analysis of the properties of wavelet spectra that characterize particular peak behaviors .we have statistically compared the relative energy content and the relative amplitude of each peak between the three groups of smi series that we have analyzed - those belonging to developed economies , emerging economies and economically underdeveloped ( or transitional ) countries .we have found that the underdeveloped markets do not follow the same behavioral pattern as emerging or developed economies at time scales of days , weeks , and even several months .namely , their wt spectra show , in a statistically significant manner , less pronounced effects of fast ( small time scale ) cycles on the overall spectral behavior than in the wt spectra of the smi series from developed and emerging economies .in contrast , developed economies appear to even out all the cyclical ( peak ) effects in their wt spectra , or even to show a larger influence of the fast ( small time scale ) peak regions on their overall spectral behavior .the emerging markets spectra behave somewhere in the middle of these two cases .these observed differences could contribute to the variations in scaling behavior of markets , which has been reported previously .it was proven that the economies of underdeveloped countries have wt spectra that show highly correlated long - range behavior , with the exponent ( ) , opposite to emerging and developed economies , which show uncorrelated or even slightly anti - correlated spectral behavior , with ( ) .the observed sensitivity of scaling exponents to the level of development of economies could be related to the findings we present here - to the relative influence of the small scale spectral peaks on the overall smi spectral behavior .finally , in this paper we propose a way to quantify the level of development of a stock market , based on the relative influence ( or , in some cases , existence ) of wt spectral peak intervals on the overall scaling behavior of smi time series . in order to do that we have used the hurst exponent approach to calculate what we named the development index , which proved , at least in the case of our dataset , to be suitable to rank the smi series that we have analyzed in three distinct groups .further verification of this method remains open for future studies by us , or by other groups . | in this paper we have analyzed scaling properties and cyclical behavior of the three types of stock market indexes ( smi ) time series : data belonging to stock markets of developed economies , emerging economies , and of the underdeveloped or transitional economies . we have used two techniques of data analysis to obtain and verify our findings : the wavelet spectral analysis to study smi returns data , and the hurst exponent formalism to study local behavior around market cycles and trends . we have found cyclical behavior in all smi data sets that we have analyzed . moreover , the positions and the boundaries of cyclical intervals that we have found seam to be common for all markets in our dataset . we list and illustrate the presence of nine such periods in our smi data . 
we also report on the possibilities to differentiate between the level of growth of the analyzed markets by way of statistical analysis of the properties of wavelet spectra that characterize particular peak behaviors . our results show that measures like the relative wt energy content and the relative wt amplitude for the peaks in the small scales region could be used for partial differentiation between market economies . finally , we propose a way to quantify the level of development of a stock market based on the hurst scaling exponent approach . from the local scaling exponents calculated for our nine peak regions we have defined what we named the development index , which proved , at least in the case of our dataset , to be suitable to rank the smi series that we have analyzed in three distinct groups . stock market returns , wavelet analysis , hurst exponent formalism , cycles and trends , development index 82c41 , 82c80 , 91b84 |
the following problem appears in the field of quantum communication and in quantum statistics : a collection of statistical operators with some a priori probabilities ( initial ensemble ) describes the possible initial states of a quantum system and an observer wants to decide in which of these states the system is by means of a quantum measurement on the system itself .the quantity of information given by the measurement is the classical mutual information of the input / output joint distribution ( shannon information ) .interesting upper and lower bounds for , due to the quantum nature of the measurement , are given in the literature , where the measurement is described by a _generalized observable _ or _ positive operator valued _ ( pov ) _ measure _ ; an exception is the paper , which considers also the information left in the post - measurement states .with respect to a pov measure , a more detailed level of description of the quantum measurement is given by an _ instrument _ : given a quantum state ( the preparation ) as input , the instrument gives as output not only the probabilities of the outcomes but also the state after the measurement , conditioned on the observed outcome ( the a posteriori state ) .we can think the instrument to be a channel : from a quantum state ( the pre - measurement state ) to a quantum / classical state ( a posteriori state plus probabilities ) .the mathematical formalization of the idea that an instrument _ is _ a channel is given in section [ instrsec ] , together with a new construction of the a posteriori states . in section [ ment+bounds ] , by using the identification of the instrument with a channel and the notion of quantum mutual entropy , we are able to give a unified approach to various bounds for and for related quantities , which can be thought to quantify the informational performances of the instrument .one of the most interesting inequality is the strengthening ( [ sww ] ) of holevo s bound ( [ holevos_bound ] ) ; in the finite case it has been obtained in ref . where the authors introduce a specific model of the measuring process ( without speaking explicitly of intruments ) and use the strong subadditivity of the von neumann entropy .the introduction of the general notion of instrument , the association to it of a channel and the use of uhlmann s monotonicity theorem allows us to obtain the same result in a more direct way and to extend it to a more general set up . 
in section [ hall ] a new upper bound ( [ newbound ] ) for the classical mutual information is obtained by combining an idea by hall and inequality ( [ sww ] ) .we already gave some results in , mainly in the discrete case .here we give the general results , which are based on the theory of relative entropy on von neumann algebras .continuous parameters appear naturally in quantum statistical problems , but also in the quantum communication set up infinite dimensional hilbert spaces and general initial ensembles are needed .some of the informational quantities presented here have been studied in in the case of instruments describing continual measurements .we denote by the space of bounded linear operators from to , where are banach spaces ; moreover we set .let be a separable complex hilbert space ; a normal state on is identified with a statistical operator , and are the trace - class and the space of the statistical operators on , respectively , and , , .more generally , if belongs to a -algebra and to its dual or predual , the functional applied to is denoted by .let be a measure space , where is a -finite measure. by theorem 1.22.13 of , the -algebra ( -tensor product ) is naturally isomorphic to the -algebra of all the -valued -essentially bounded weakly measurable functions on . moreover( , proposition 1.22.12 ) , the predual of this -algebra is , the banach space of all the -valued bochner -integrable functions on , and this predual is naturally isomorphic to ( tensor product with respect to the greatest cross norm , pp . 45 , 58 , 59 , 67 , 68 ) .let us note that a normal state on is a measurable function , , such that is a probability density with respect to .a _ channel _ ( p. 137 ) , or dynamical map , or stochastic map is a completely positive linear map , which transforms states into states ; usually the definition is given for its adjoint .the channels are usually introduced to describe noisy quantum evolutions , but we shall see that also quantum measurements can be identified with channels .[ channdfn ] let and be two -algebras .a linear map from to is said to be a _ channel _ if it is completely positive , unital ( i.e. identity preserving ) and normal ( or , equivalently , weakly continuous ) . due to the equivalence of w-continuity and existence of a preadjoint , definition [ channdfn ] is equivalent to : is a completely positive linear map from the predual to the predual , normalized in the sense that , { \mathds1}_2 \rangle_2= \langle \rho,{\mathds1}_1\rangle_1 ] .note that we have used a subscript `` c '' for classical quantities , a subscript `` q '' for purely quantum ones and no subscript for general quantities , eventually of a mixed character .a key result which follows from the convexity properties of the relative entropy is _uhlmann s monotonicity theorem _( , theor . 1.5p. 
21 ) , which implies that channels decrease the relative entropy .[ uhltheo ] if and are two normal states on and is a channel from , then | \lambda[\pi]) ] , , ( -additivity ) for every countable family of pairwise disjoint sets in ,\ , a \right\rangle_2= \big\langle { \mathcal{i}}\big(\bigcup_i f_i\big)[\rho],\ , a \big\rangle_2\ , , \qquad\forall \rho\in { \mathcal{t}}({\mathcal{h}}_1 ) , \quad\forall a\in { \mathcal{l}}({\mathcal{h}}_2).\ ] ] unlike the usual definitions of instrument we have introduced two hilbert spaces , an initial one and a final one ; we allow the hilbert space where the quantum system lives to be changed by the measurement , which is the standard set up when quantum channels are considered and which is usefull when we shall construct something similar to the compound state of ohya .the map ] . [ abscont ] it is easy to show that all the measures , , are absolutely continuous with respect to , where is any faithful normal state on .so , we can fix also a -finite measure on such that all the probabilities measures are absolutely continuous with respect to .moreover we complete and extend the instrument to the extended -algebra in the same way as ordinary measures are extended ( problem 3.10 , p. 49 ) : for any set in the extended -algebra , there exist such that ( is the symmetric difference ) with and we define . for the extended objects we use the same symbols as for the original ones .it is always possible to take for a probability measure , but it is convenient to leave more freedom ; for instance , in the case of a discrete one takes for the counting measure or in the case of a measurement of position and/or momentum one takes for the lebesque measure . from now on are two separable complex hilbert spaces , is a complete -finite measure space , is an instrument as in definition [ instrdfn ] and the associated probabilities ( [ probi ] ) are such that then , we introduce the -algebras [ theochann ] let us set \rangle_1 : = \int_\omega f(\omega ) \langle { \mathcal{i}}({\mathrm{d}}\omega)[\rho],a\rangle_2\ , , \quad \forall \rho \in { \mathcal{t}}({\mathcal{h}}_1 ) , \\forall a \in { \mathcal{m}}_2\ , , \ \forallf\in { \mathcal{m}}_3\,;\ ] ] by linearity and continuity the map can be extended to a channel viceversa , the instrument is uniquely determined by the channel .let us note that by approximating with simple functions we get from ( [ itochann ] ) \rangle_1 \leq { \left\vert\rho\right\vert}_{{\mathcal{t}}({\mathcal{h}}_1 ) } { \left\verta\right\vert}_{{\mathcal{l}}({\mathcal{h}}_2 ) } { \left\vertf\right\vert}_{l^\infty} ] is an equivalence class of bochner integrable -valued functions of ; let (\omega) ] , -a.s . 
, and in this case we take the representative to be positive everywhere ; we asked the completeness of just to have the freedom of making modifications inside null sets without having to take care of measurability .moreover , if is normalized , also ] by setting (\omega ) \right\ } \right)^{-1 } \lambda_{\mathcal{i}}[\rho](\omega ) & \text{if } { \operatorname{tr}}_{{\mathcal{h}}_2}\left\{\lambda_{\mathcal{i}}[\rho](\omega ) \right\}>0 \\\tilde \rho \quad \big(\tilde \rho\in { \mathcal{s}}({\mathcal{h}}_2)\text { , fixed } \big ) & \text{if } { \operatorname{tr}}_{{\mathcal{h}}_2}\left\{\lambda_{\mathcal{i}}[\rho](\omega ) \right\}=0 \end{cases}\ ] ] by eqs .( [ rnder])([apstates ] ) we have \ , , \quad \forall f\in { \mathcal{f}}\ , , \qquad \text{(bochner integral)}.\ ] ] this construction gives directly the result by ozawa on the _ existence _ of a family _ of a posteriori states _ , with the small generalization of the use of two hilbert spaces .[ propapost ] let be as above. for any there exists a -a.s .unique family of _ a posteriori states _ for , which means that the function is measurable and that eq .( [ intapstates ] ) holds .theorem [ theochann ] and proposition [ propapost ] generalize immediately to the case of substituted by von neumann algebras with separable predual ; the separability is needed in the results quoted in subsection [ qcalgebra ] and taken from and which are at the bases of the whole construction .in quantum statistics , the following problem of identification of states is a natural one .there is a parametric family of quantum states ( the subscript `` i '' stays for `` initial '' ) , where belongs to some parameter space and it is distributed with some a priori probability .the experimenter has to make inferences on by using the result of some measurement on the quantum system . in quantum communication theory ,the problem of the transmission of a message through a quantum channel is similar .a message is transmitted by encoding the letters in some quantum states , which are possibly corrupted by a quantum noisy channel ; at the end of the channel the receiver attempts to decode the message by performing measurements on the quantum system .so , one has an alphabet and the letters are transmitted with some a priori probabilities .each letter is encoded in a quantum state and we denote by the state associated to the letter as it arrives to the receiver , after the passage through the transmission channel .let us give the formalization of both problems ; we use the language of the quantum communication set up .first of all , we have a -finite measure space ; is the alphabet and the a priori probabilities for the letters are given by , where is a suitable probability density with respect to .the _ letter states _ are with measurable and the mixture can be called the _ initial a priori state_. one calls the _ initial ensemble_. it would be possible to take as ; then , . however , it is convenient to distinguish and , mainly for the cases when one has more initial ensembles .note that is nothing but a random variable in the probability space with value in .let the decoding measurement be represented by the instrument of the previous section with the associated pov measure . 
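Before the general measure-theoretic construction that follows, a finite-dimensional sketch may help fix ideas: a discrete instrument given by Kraus operators acts on each letter state and returns both the outcome probabilities and the a posteriori states, exactly the "probabilities plus a posteriori states" output described above. The qubit letter states, the priors, and the Lüders-type instrument below are toy choices made for illustration, not taken from the paper, which works with von Neumann algebras and general outcome spaces.

```python
import numpy as np

def apply_instrument(kraus, rho):
    """Return outcome probabilities and a posteriori states for one input state."""
    dim = rho.shape[0]
    probs, post = [], []
    for M in kraus:
        out = M @ rho @ M.conj().T
        p = float(np.real(np.trace(out)))
        probs.append(p)
        post.append(out / p if p > 1e-12 else np.eye(dim) / dim)
    return np.array(probs), post

# two qubit letter states (alpha = 0, 1) sent with equal a priori probabilities
letters = [np.array([[1, 0], [0, 0]], complex),          # |0><0|
           0.5 * np.array([[1, 1], [1, 1]], complex)]    # |+><+|
priors = [0.5, 0.5]

# a von Neumann-Lueders instrument for the measurement in the z basis
kraus = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

for a, (p_a, rho_a) in enumerate(zip(priors, letters)):
    probs, post = apply_instrument(kraus, rho_a)
    print(f"letter {a}: p(w|alpha) = {probs}, joint p(alpha,w) = {p_a * probs}")
```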
by using the notations of section [ instrsec ] and , in particular , the radon - nikodim derivative ( [ rnder ] ), we can construct the following probabilities , conditional probabilities and densities : , (\omega)\ } , \\p_{{\mathrm{f}}}(f):= \int_a p_{{\mathrm{f}}|{\mathrm{i}}}(f|\alpha)\ , p_{\mathrm{i}}({\mathrm{d}}\alpha)= p_{\eta_{\mathrm{i}}}(f ) , \qquad q_{{\mathrm{f}}}(\omega):= \frac{p_{{\mathrm{f}}}({\mathrm{d}}\omega)}{q({\mathrm{d}}\omega)}={\operatorname{tr}}_{{\mathcal{h}}_2}\{\lambda_{\mathcal{i}}[\eta_{\mathrm{i}}](\omega)\ } , \\ p_{{\mathrm{i}}{\mathrm{f}}}({\mathrm{d}}\alpha\times { \mathrm{d}}\omega):= p_{{\mathrm{f}}|{\mathrm{i}}}({\mathrm{d}}\omega|\alpha)\,p_{\mathrm{i}}({\mathrm{d}}\alpha ) , \qquad q_{{\mathrm{i}}{\mathrm{f}}}(\alpha,\omega):= \frac{p_{{\mathrm{i}}{\mathrm{f}}}({\mathrm{d}}\alpha \times { \mathrm{d}}\omega)}{\nu({\mathrm{d}}\alpha)\,q({\mathrm{d}}\omega)}=q_{{\mathrm{f}}|{\mathrm{i}}}(\omega|\alpha)q_{\mathrm{i}}(\alpha ) , \\ \label{probz } p_{{\mathrm{i}}|{\mathrm{f}}}(b|\omega):=\frac{p_{{\mathrm{i}}{\mathrm{f}}}(b\times{\mathrm{d}}\omega)}{p_{\mathrm{f}}({\mathrm{d}}\omega)}\ , , \qquad \qquad q_{{\mathrm{i}}|{\mathrm{f}}}(\alpha|\omega):= \frac { p_{{\mathrm{i}}|{\mathrm{f}}}({\mathrm{d}}\alpha|\omega)}{\nu({\mathrm{d}}\alpha ) } = \frac{q_{{\mathrm{i}}{\mathrm{f}}}(\alpha,\omega)}{q_{{\mathrm{f}}}(\omega)};\end{gathered}\ ] ] the subscript `` f '' stays for `` final '' .if we apply the measurement , but we do not do any selection on the system , we obtain the _ post - measurement a priori states _,\qquad \eta_{\mathrm{f}}:= { \mathcal{i}}(\omega)[\eta_{\mathrm{i}}]=\int_a p_{\mathrm{i}}({\mathrm{d}}\alpha)\,\eta_{{\mathrm{f}}}^{\alpha}.\ ] ] by applying the definition ( [ apstates ] ) we can introduce two families of a posteriori states : by using eqs .( [ intapstates ] ) for , ( [ iapriori])([2apost ] ) , one obtains here and in the following integrals on states are in the bochner sense .let us stress that the states , are uniquely defined -almost surely , -a.s . and -a.s .with respect to the algebras given in ( [ alg123 ] ) we have one more von neumann algebra , ; then , we set in particular , we have the identification the states are represented by densities with respect to , , , it is easy to see that the initial ensemble can be seen as a normal state on .by using a superscript which indicates the algebras on which a state is acting , we can write for the initial state and its marginals . 
we already constructed the channel ; by dilating it with the identity we obtain the _ measurement channel _ by applying the measurement channel to the initial state we obtain the final state = \{q_{\mathrm{i}}(\alpha ) \lambda_{\mathcal{i}}[\rho_{\mathrm{i}}(\alpha)](\omega)\ } = \{q_{{\mathrm{i}}{\mathrm{f}}}(\alpha,\omega ) \rho_{{\mathrm{f}}}^{\alpha}(\omega)\},\ ] ] whose marginals are let us note that =\sigma^{0}_{\mathrm{f}}\otimes \sigma^{23}_{\mathrm{f}}\,.\ ] ] holevo s bound ( [ holevos_bound ] ) involves a mean quantum relative entropy , which is often called _holevo s chi - quantity _ , given by in general , given a probability space and a measurable family of statistical operators on some hilbert space , the -quantity of the ensemble is defined by in this definition the set could be itself , see pp .by using the definition ( [ relqentropy ] ) of the quantum relative entropy and the definition of von neumann entropy , when , one has the expressions of the mutual entropies we shall need will contain the -quantities , , , and the mean -quantities the mixtures appearing in these -quantities are given by eqs .( [ iapriori ] ) , ( [ post - m ] ) , ( [ somemixtures ] ) . by using the definitions above and property ( [ cqs2 ] ) , it is easy to compute all the mutual entropies involving the initial and the final state. first of all we get that holevo s -quantity is the initial mutual entropy and that the mutual entropy involving only the classical part of the final state is the shannon input / output classical mutual entropy , i.e. the classical information on the input extracted by the measurement : then , the remaining mutual entropies turn out to be by the chain rules ( [ chain ] ) we get which gives the expression of the `` tripartite '' mutual entropy and the identities uhlmann s monotonicity theorem ( see theorem [ uhltheo ] ) and eqs .( [ defsigmaf ] ) , ( [ tensf ] ) give us the inequality |\lambda[\sigma_{\mathrm{i}}^0\otimes \sigma_{\mathrm{i}}^1])= s(\sigma_{\mathrm{f}}^{023}|\sigma_{\mathrm{f}}^0\otimes \sigma_{\mathrm{f}}^{23});\ ] ] by eqs .( [ initmentr ] ) , ( [ 3mentr ] ) this inequality becomes in this inequality was found in the discrete case ; in it was derived , again in the discrete case , by using relative entropies as here and the general case was announced .roughly , eq . ( [ sww ] ) says that the quantum information contained in the initial ensemble is greater than the classical information extracted in the measurement plus the mean quantum information left in the a posteriori states .inequality ( [ sww ] ) can be seen also as giving some kind of information - disturbance trade - off , a subject to which the paper , which contains a somewhat related inequality , is devoted .holevo s bound , generalized to the continuous case in , is or , in terms of mutual entropies , the derivation of holevo s bound given in is based on a measurement channel involving only the pov measure , not the whole instrument ; the fact that inequality ( [ sww ] ) is stronger than holevo s bound ( [ holevos_bound ] ) is a consequence of the fact that our channel is a refinement of the channel used in ( see the discussion given in ) . 
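The chain of inequalities just discussed can be checked numerically in a toy model. The sketch below computes, for a two-letter qubit ensemble and a Lüders instrument in the z basis, the chi-quantity of the initial ensemble, the Shannon input/output information, and the mean chi-quantity of the a posteriori ensembles, and verifies that chi is at least the sum of the latter two (and hence, a fortiori, the plain Holevo bound). All states, priors and Kraus operators are illustrative choices, not data from the paper.

```python
import numpy as np

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def chi(probs, states):
    mix = sum(p * r for p, r in zip(probs, states))
    return vn_entropy(mix) - sum(p * vn_entropy(r) for p, r in zip(probs, states))

def shannon_mi(p_joint):
    pa, pw = p_joint.sum(axis=1), p_joint.sum(axis=0)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log2(p_joint[mask] / np.outer(pa, pw)[mask])))

letters = [np.array([[1, 0], [0, 0]], complex), 0.5 * np.array([[1, 1], [1, 1]], complex)]
priors = [0.5, 0.5]
kraus = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

p_joint = np.zeros((2, 2))   # rows: letters alpha, columns: outcomes w
post = {}
for a, rho_a in enumerate(letters):
    for w, M in enumerate(kraus):
        out = M @ rho_a @ M.conj().T
        tr = float(np.real(np.trace(out)))
        p_joint[a, w] = priors[a] * tr
        post[a, w] = out / tr if tr > 1e-12 else np.eye(2, dtype=complex) / 2

chi_initial = chi(priors, letters)          # Holevo chi-quantity of the initial ensemble
i_c = shannon_mi(p_joint)                   # classical (Shannon) mutual information
p_w = p_joint.sum(axis=0)
mean_chi_post = sum(
    p_w[w] * chi(p_joint[:, w] / p_w[w], [post[a, w] for a in range(2)])
    for w in range(2))
print(f"chi = {chi_initial:.3f} >= I_c + <chi_post> = {i_c + mean_chi_post:.3f}")
```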
by using one of the identities ( [ idts ] ) ,the inequality ( [ sww ] ) can be rewritten in an equivalent form , which is slightly more symmetric : by restriction of the states ( see remark [ restr ] ) we get the inequality by eqs .( [ 2mentr ] ) and ( [ 3mentr ] ) we get which says that the classical information extracted in the measurement plus the mean quantum information left in the a posteriori states is greater than the quantum information left in the post - measurement a priori states . all the other inequalities which can be obtained from the final state are also consequences of inequality ( [ lb ] ) and identities ( [ idts ] ) .given an instrument and a statistical operator , an interesting quantity , which can be called the _quantum information gain _ , is this is nothing but the quantum entropy of the pre - measurement state minus the mean entropy of the a posteriori states .it is a measure of the gain in purity ( or loss , if negative ) in passing from the pre - measurement state to the post - measurement a posteriori states and it gives no information on the ability of the measurement in identifying the pre - measurement state , ability which is contained in . by using the expression of a -quantity in terms of entropies andmean entropies , as in ( [ altchi ] ) , one can see that , when inequality ( [ sww ] ) is equivalent to here the state is given and has to be thought as any demixture of .an interesting question is when the quantum information gain is positive .groenewold has conjectured and lindblad has proved that the quantum information gain is non negative for an instrument of the von neumann - lders type . the general case has been settled down by ozawa , who in has proved the following theorem in the case .a shorter proof with respect to ozawa s one is based on inequality ( [ iqineq ] ) .[ theooza ] let be two separable complex hilbert spaces , be a measurable space and a completely positive instrument as in definition [ instrdfn ] .then , the instrument sends any pure input state into almost surely pure a posteriori states if and only if , for all statistical operators for which .\(b ) ( a ) is trivial : put a pure state into the definition and you get is pure -a.s . to see ( a ) ( b ) , we take a demixture of into pure states ; then , by ( a ) also the states are pure and ; then , eq .( [ iqineq ] ) gives .in ohya introduced a notion of compound states which involves the input and output states of a quantum channel .taking inspiration from this idea , we are able to produce some inequalities which strengthen a lower bound on given by scutaru in . first of all we need some new families of statistical operators and the relationships among them : the state ( [ scustate ] ) has been introduced by scutaru and the state ( [ ohyastate ] ) is similar to the compound state introduced by ohya for quantum channels .now , let us construct a first compound state on and let us give some of its marginals : for this state we have and remark [ restr ] gives the inequalities which give is scutaru s bound .let us give also a second compound state and some of its marginals : as before we get the inequalities it is possible to obtain these inequalities also by constructing suitable channels and by using the idea of the refinement of a channel .in hall exhibits a transformation on the initial ensemble and on the pov measure which leaves invariant but not the initial -quantity and in this way produces a new upper bound on the classical information . 
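Before turning to Hall's construction below, here is a toy numerical check of the quantum information gain introduced above, the entropy of the pre-measurement state minus the mean entropy of the a posteriori states, for a projective (von Neumann-Lüders) instrument, for which the Groenewold-Lindblad-Ozawa result guarantees non-negativity. The mixed qubit input state is an arbitrary illustrative choice.

```python
import numpy as np

def vn_entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)      # a mixed qubit state
kraus = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

# information gain: S(rho) - sum_w p(w) S(rho_w)
gain = vn_entropy(rho)
for M in kraus:
    out = M @ rho @ M.conj().T
    p = float(np.real(np.trace(out)))
    if p > 1e-12:
        gain -= p * vn_entropy(out / p)
print("quantum information gain:", gain)   # non-negative for this Lueders instrument
```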
inspired by hall s transformation , a new instrument can be constructed in such a way that the analogous of inequality ( [ sww ] ) produces an upper bound on stronger than both hall s and holevo s ones .let us set : = m(\alpha ) \tau m(\alpha)^*\ , , \quad \forall \tau \in { \mathcal{t}}({\mathcal{h}}_1);\ ] ] by eq .( [ iapriori ] ) the operators satisfy the normalization condition then , the position defines an instrument from into with value space .the instrument has been constructed by using only the old initial ensemble .the associated pov measure is now , we can construct the associated channel and a posteriori states , as in section [ instrsec ] . by looking at eq .( [ itochann ] ) one has immediately (\alpha)={\mathcal{g}}(\alpha)[\tau]=m(\alpha ) \tau m(\alpha)^ * , \qquad \forall \tau\in { \mathcal{t}}({\mathcal{h}}_1)\ ] ] and by looking at eq .( [ apstates ] ) one has that , for , is a family of a posteriori states for . let us stress that sends pure states into a.s .pure a posteriori states ; therefore , by theorem [ theooza ] one has let be a c.o.n.s . of eigenvectors of , so that we can write , with and . as in remark [ abscont ] one can show that the complex measures are absolutely continuous with respect to ; therefore the radon - nikodim derivatives exist and the position defines a family of statistical operators ; in an abbreviated way we write now we consider as initial ensemble for ; note that one gets let us consider now holevo s bound for the new set up : the pov measure and the states have been constructed just in order to have as it is easy to verify ; this implies immediately therefore , we have which is the `` continuous '' version of hall s bound . ( 19 ) of .this bound , in the discrete case , is discussed also in refs . .having defined a new instrument and not only a pov measure , we obtain from ( [ sww ] ) the inequality which gives a stronger bound than hall s one ( [ halldualbound ] ) . in order to render more explicit this bound , it is convenient to start from the equivalent form ( [ iqineq ] ) , which now reads by eqs . ( [ malpha ] ) and ( [ apostalpha ] )we obtain ; together with eqs .( [ newiq ] ) , ( [ newe ] ) , ( [ altchi ] ) , this gives therefore , eq .( [ nnn ] ) gives the new bound let us stress that because of eq .( [ newiq ] ) . more explicitly , by eqs .( [ newe ] ) , ( [ newinistat ] ) , ( [ newiq ] ) , we have where is given by ( [ newinistat ] ) and , by eqs . ( [ malpha ] ) , ( [ apostalpha ] ) , ( [ newinistat ] ) , this last quantity is defined similarly to ( [ newinistatrig ] ) , by starting from the diagonalization of .let us stress that the upper bound in ( [ newbound ] ) involves the initial ensemble and the pov measure , not the full instrument , while the bound ( [ sww ] ) involves , and also the a posteriori states of .both bounds ( [ sww ] ) and ( [ newbound ] ) are stronger than holevo s bound ( [ holevos_bound ] ) .99 , _ entropy and information gain in quantum continual measurements _ , in : quantum communication , computing , and measurement 3 , p. tombesi and o. hirota ( eds . ) , kluwer , new york , 2001 , 4957 ; arxiv : quant - ph/0012115 . , _ instrumental processes , entropies , information in quantum continual measurements _ , in : quantum information , statistics , probability , o. hirota ( ed . ) , rinton , princeton , 2004 , 3043 ; arxiv : quant - ph/0401114 . 
,_ instruments and channels in quantum information theory _ , arxiv : quant - ph/0409019 v1 3 sep 2004 ., _ probability and measure _ , wiley , new york , 1995 ., _ on the heisenberg principle , namely on the information - disturbance trade - off in a quantum measurement _ ,fortschr .* 51 * ( 2003 ) , 318330 ; * doi * 10.1002/prop.200310045 . , _ quantum theory of open systems _ , academic press , london , 1976 . , _ an operational approach to quantum probability _ , commun .* 17 * ( 1970 ) , 239260 . ,_ les algbres doprateurs dans lespace hilbertien _, gauthier - villars , paris , 1957 . , _ a problem of information gain by quantal measurements _ , int.j .phys . * 4 * ( 1971 ) , 327338 . ,_ quantum information and correlation bounds _a * 55 * ( 1997 ) , 100113 . ,_ techniques for bounding quantum correlations _ , in : quantum communication ,computing , and measurement , o. hirota , a. s. holevo and c. m. caves ( eds . ) , plenum , new york , 1997 , 5361 ., _ some estimates for the amount of information transmittable by a quantum communication channel _ , probl .* 9 * , no . 3 ( 1973 ) , 177183 ( engl .transl . : 1975 ) . , _ continuous ensembles and the -capacity of infinite - dimensional channels _ , arxiv : quant - ph/0408176 v1 30 aug 2004 ., _ capacity of quantum channels using product measurements _ , j. math . phys .* 42 * ( 2001 ) , 8798 ; arxiv : quant - ph/0004062 . , _ an entropy inequality for quantum measurements _ , commun .math.phys .* 28 * ( 1972 ) , 245249 . , _ efficient measurements , purification , and bounds on the mutual information _ ,rev . a * 68 * art .054302 ( 2003 ) ; arxiv : quant - ph/0306039 . ,_ on compound state and mutual information in quantum information theory _ , ieee trans .theory * it-29 * ( 1983 ) , 770774 . ,_ quantum entropy and its use _ , springer , berlin , 1993 . , _ quantum measuring processes of continuous observables _, j. math.phys .* 25 * ( 1984 ) , 7987 ., _ conditional probability and a posteriori states in quantum mechanics _ , publ .r.i.m.s .kyoto univ .* 21 * ( 1985 ) , 279295 ., _ concepts of conditional expectations in quantum theory _ , j. math.phys .* 25 * ( 1985 ) , 19481955 ., _ on information gain by quantum measurements of continuous observables_ , j. math .* 27 * ( 1986 ) , 759763 ., _ inequalities for quantum entropy : a review for conditions for equality _ , j. math .* 43 * ( 2002 ) , 43584375 ; arxiv : quant - ph/0205064 . , _ -algebras and -algebras_ , springer , berlin , 1971 . , _ limitation on the amount of accessible information in a quantum channel _ , phys .* 76 * ( 1996 ) , 34523455 ., _ lower bound for mutual information of a quantum channel _ , phys.rev .* 75 * ( 1995 ) , 773776 ., _ quantum information theory , the entropy bound , and mathematical rigor in physics _ , in : quantum communication , computing , and measurement , o. hirota , a. s.holevo and c. m. caves ( eds . ) , plenum , new york , 1997 , 1723 . ,_ ultimate information carrying limit of quantum systems _ , phys .* 70 * ( 1993 ) , 363366 . | general quantum measurements are represented by instruments . in this paper the mathematical formalization is given of the idea that an instrument is a channel which accepts a quantum state as input and produces a probability and an a posteriori state as output . then , by using mutual entropies on von neumann algebras and the identification of instruments and channels , many old and new informational inequalities are obtained in a unified manner . 
such inequalities involve various quantities which characterize the performance of the instrument under study ; in particular , these inequalities include and generalize the famous holevo s bound . |
in the context of hybrid medical imaging methods , a physical coupling between a high - contrast modality ( e.g. electrical impedance tomography , optical tomography ) and a high - resolution modality ( e.g. acoustic waves , magnetic resonance imaging ) is used in order to benefit from the advantages of both . without this coupling ,the high - contrast modality , usually modeled by an inverse problem involving the reconstruction of the constitutive parameter of an elliptic pde from knowledge of boundary functionals , results in a mathematically severely ill - posed problem and suffers from poor resolution .the analysis of this coupling usually involves a two - step inversion procedure where the high - resolution modality provides internal functionals , from which we reconstruct the parameters of the elliptic equation , thus leading to improved resolution .a problem that has received a lot of attention recently concerns the reconstruction of the conductivity tensor in the elliptic equation from knowledge of internal power density measurements of the form , where and both solve with possibly different boundary conditions .this problem is motivated by a coupling between electrical impedance imaging and ultrasound imaging and also finds applications in thermo - acoustic imaging .explicit reconstruction procedures for the above non - linear problem have been established in , successively in the 2d , 3d , and isotropic case , and then in the 2d and anisotropic case . in these articles , the number of functionals may be quite large .the analyses in were recently summarized and pushed further in . if one decomposes into the product of a scalar function and a scaled anisotropic structure such that , the latter reference establishes explicit reconstruction formulas for both quantities with lipschitz stability for in , and involving the loss of one derivative for . in the isotropic case ,several works study the above problem in the presence of a lesser number of functionals .the case of one functional is addressed in , whereas numerical simulations show good results with two functionals in dimension .theoretical and numerical analyses of the linearized inverse problem are considered in .the stabilizing nature of a class of internal functionals containing the power densities is demonstrated in using a micro - local analysis of the linearized inverse problem .the above inverse problem is recast as a system of nonlinear partial differential equations in and its linearization analyzed by means of theories of elliptic systems of equations .it is shown in the latter reference that functionals , where is spatial dimension , is sufficient to reconstruct a scalar coefficient with elliptic regularity , i.e. , with no loss of derivatives , from power density measurements .this was confirmed by two - dimensional simulations in .all known explicit reconstruction procedures require knowledge of a larger number of internal functionals . in the present work , we study the linearized version of this inverse problem in the anisotropic case , i.e. 
we write an expansion of the form with known and , and study the reconstructibility of from linearized power densities ( lpd ) .we first proceed by supporting the perturbation away from the boundary and analyze microlocally the symbol of the linearized functionals , and show that , as in , a large enough number of functionals allows us to construct a left - parametrix and set up a fredholm inversion .the main difference between the isotropic and anisotropic settings is that the anisotropic part of the conductivity is reconstructed with a loss of one derivative .such a loss of a derivative is optimal since our estimates are elliptic in nature .it is reminiscent of results obtained for a similar problem in .secondly , we show how the explicit inversion approach presented in carries through linearization , thus allowing for reconstruction of fully anisotropic tensors supported up to the boundary of . in this case, we derive reconstruction formulas that require a smaller number of power densities than in the non - linear case , giving possible room for improvement in the non - linear inversion algorithms . for additional information on hybrid inverse problems in other areas of ( mostly medical ) imaging, we refer the reader to , e.g. , .consider the conductivity equation , where is open , bounded and connected with , and where is a uniformly elliptic conductivity tensor over .we set boundary conditions and call the unique solution to with , and conductivity .we consider the measurement functionals considering an expansion of the form , where the background conductivity is known , uniformly elliptic and so small that the total remains uniformly elliptic , we first look for the frchet derivative of with respect to at . expanding the solutions accordingly as the pde at orders and gives rise to two relations measurements then look like therefore , the component of the frchet derivative of at is where the s are linear functions in according to . in both subsequent approaches ,reconstruction formulas are established under the following two assumptions about the behavior of solutions related to the conductivity of reference .the first hypothesis deals with having a basis of gradients of solutions of over a certain subset .[ hyp : det ] for an open set , there exist such that the corresponding solutions of with boundary condition ( ) satisfy once hypothesis [ hyp : det ] is satisfied , any additional solution of gives rise to a matrix , { \quad\text { where } } \quad z_i:= \nabla \frac{\det ( \nabla u_1,\dots,\overbrace{\nabla u_{n+1}}^i , \dots , \nabla u_n)}{\det ( \nabla u_1,\dots,\nabla u_n)}. \label{eq : zmat}\end{aligned}\ ] ] as seen in , such matrices can be computed from the power densities and help impose orthogonality conditions on the anisotropic part of .once enough such conditions are obtained by considering enough additional solutions , then the anisotropy is reconstructed explicitly via a generalization of the usual cross - product defined in three dimensions . 
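For readability, here is a hedged reconstruction of the linearization written at the start of this section; the displayed formulas above were stripped in extraction, so the notation (gamma_0, delta gamma, v_i, g_i) is ours and may differ from the original in normalisation.

```latex
% Hedged reconstruction (notation ours): write gamma = gamma_0 + eps*dgamma and
% expand the solutions as u_i^eps = u_i + eps*v_i + O(eps^2). Orders eps^0 and
% eps^1 of the conductivity equation give
\begin{align*}
  \nabla\cdot(\gamma_0\nabla u_i) &= 0, & u_i|_{\partial X} &= g_i,\\
  \nabla\cdot(\gamma_0\nabla v_i) &= -\nabla\cdot(\delta\gamma\,\nabla u_i), & v_i|_{\partial X} &= 0,
\end{align*}
% so the Frechet derivative of H_{ij} = \nabla u_i\cdot\gamma\nabla u_j at gamma_0 reads
\[
  dH_{ij} \;=\; \nabla u_i\cdot\delta\gamma\,\nabla u_j
        \;+\; \nabla v_i\cdot\gamma_0\nabla u_j
        \;+\; \nabla u_i\cdot\gamma_0\nabla v_j .
\]
```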
in the linearizedsetting , we find that _one _ additional solution such that has full rank is enough to reconstruct the linear perturbation .we thus formulate our second crucial assumption here : [ hyp : z ] assume that hypothesis [ hyp : det ] holds over some fixed .there exists such that the solution of with boundary condition has a full - rank matrix ( as defined in ) over .[ rem : const ] in the case where is constant , then it is straightforward to see that ( ) fulfill hypothesis [ hyp : det ] over .moreover , if denotes an invertible constant matrix such that , then the boundary condition fulfills hypothesis [ hyp : z ] , since we have . throughout the paper , we use for ( real - valued ) square matrices and the contraction notation , with the transpose matrix of . in the treatment of the non - linear case ,it has been pointed out that hypothesis [ hyp : det ] may not be systematically satisfied globally in dimension . a more general hypothesis to considerwould come from picking a larger family ( of cardinality ) of solutions whose gradients have maximal rank throughout . while this additional technical point would not alter qualitatively the present reconstruction algorithms, it would add complexity in notation which the authors decided to avoid ; see also . in the reconstruction approach developped in for the non - linear problem, it was shown that not every part of the conductivity was reconstructed with the same stability .namely , consider the decomposition of the tensor into the product of a scalar function and a scaled anisotropic structure with .the following results were then established .starting from solutions whose gradients form a basis of over a subset , it was shown that under knowledge of a anisotropic structure , the scalar function was uniquely and lipschitz - stably reconstructible in from power densities .additionally , if one added a finite number of solutions such that the family of matrices defined as in imposed enough orthogonality constraints on , then the latter was explicitely reconstructible over from the mutual power densities of .the latter reconstruction was stable in for power densities in norm , thus it involved the loss of one derivative . passing to the linearized setting now ( recall ) , and anticipatingthat one scalar quantity may be more stably reconstructible than the others , this quantity should be the linearized version of .standard calculations yield and thus the quantity that should be stably reconstructible is .the linearization of the product decomposition above is now a spherical - deviatoric one of the form where is the linear projection onto the hyperplane of traceless matrices .the above inverse problem in - may be seen as a system of partial differential equations for .this is the point of view considered in .however , may be calculated from and the expression plugged back into .this allows us to recast as a linear operator for , which is smaller than the original linear system for , but which is no longer differential and rather pseudo - differential .the objective in this section is to show , following earlier work in the isotropic case in , that such an operator is elliptic under appropriate conditions .we first fix and assume that , so that integrals of the form are well - defined , with a matrix - valued symbol whose entries are polynomials in ( see ) and where the hat denotes the fourier transform .we also assume that and can be extended smoothly by outside . 
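The spherical-deviatoric split mentioned above can be spelled out as follows; since the original display is stripped, this is only a plausible reconstruction based on the product decomposition of the conductivity into a scalar factor and a determinant-one anisotropic structure, and the exact weights used in the paper may differ.

```latex
% Hedged reconstruction, not the original display: with gamma = beta*gtilde and
% det(gtilde) = 1, linearizing around (beta_0, gtilde_0) gives
\[
  \delta\gamma \;=\; \delta\beta\,\tilde\gamma_0 \;+\; \beta_0\,\delta\tilde\gamma,
  \qquad
  \operatorname{tr}\big(\tilde\gamma_0^{-1}\,\delta\tilde\gamma\big) \;=\; 0,
\]
% the trace constraint being the linearization of det(gtilde) = 1. The "spherical"
% part of the perturbation is thus carried by delta beta, while delta gtilde is the
% trace-free ("deviatoric") part, selected by the projection onto traceless matrices
% mentioned above.
```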
as pointed out in , in order to treat this problem microlocally, one must introduce cutoff versions of the operators , which in turn extend to pseudo - differential operators ( ) on .namely , if is a domain satisfying and is a smooth function supported in which is identically equal to on a neighborhood of , the operator can be made a upon considering as a second - order operator on and using standard pseudo - differential parametrices to invert it .we will therefore not distinguish the operators from their pseudo - differential counterparts .the task of this section is then to determine conditions under which a given collection of such functions becomes an elliptic operator of over .using relations and , we aim at writing the operator in the following form with symbol ( pseudo - differential terminology is recalled in sec .[ ssec : prelim ] ) .we first compute the main terms in the symbol expansion of ( call this expansion with homogeneous of degree in ) . from these expressions, we then directly deduce microlocal properties on the corresponding operators .the first lemma shows that the principal symbols can never fully invert for , no matter how many solutions we pick .when hypothesis [ hyp : det ] is satisfied , then the characteristic directions of the principal symbols reduce to a -dimensional subspace of . here and below, we recall that the colon `` '' denotes the inner product for and denotes the symmetric outer product for .[ lem : xieta ] * for any and , the symbol satisfies * suppose that hypothesis [ hyp : det ] holds over some .then for any , if is such that then is of the form for some vector satisfying .since an arbitrary number of zero - th order symbols can never be elliptic with respect to , we then consider the next term in the symbol expansion of .we must also add one solution to the initial collection , exhibiting appropriate behavior , i.e. satisfying hypothesis [ hyp : z ] .the collection of functionals we consider below is thus of the form and emanates from solutions of satisfying hypotheses [ hyp : det ] and [ hyp : z ] . in order to formulate the result, we assume to construct a family of unit vector fields homogeneous of degree zero in , smooth in and everywhere orthonormal .we then define the family of scalar elliptic zeroth - order which can be thought of as a microlocal change of basis after which the operator becomes both diagonal and elliptic . indeed, we verify ( see section [ ssec : proofs ] ) that for any and sufficiently regular , we have the above estimates come from standard result on pseudo - differential operators .the presence of the constant indicates that can be inverted microlocally , but may not injective .composing the measurements with appropriate scalar of order 0 and 1 , we are then able to recover each component of the operator .the well - chosen `` parametrices '' are made possible by the fact that the collection of symbols becomes elliptic over when hypotheses [ hyp : det ] and [ hyp : z ] are satisfied .rather than using the full collection of measurements , we will consider the smaller collection augmented with the measurement operators where , known from the measurements , are the coefficients in the relation of linear dependence we also define the operator with principal symbol .our conclusions may be formulated as follows : [ prop : microloc ] let the measurements defined in satisfy hypotheses [ hyp : det ] and [ hyp : z ] . 
* for and ,there exist such that * for any , there exist such that the following relation holds where the remainder can be expressed as a zeroth - order linear combination of the components and reconstructed in ( i ) . the presence of the term in part _( ii ) _ of prop . [ prop : microloc ] accounts for the loss of one derivative in the inversion process . from prop .[ prop : microloc ] , we can then obtain stability estimates of the form the above stability estimate holds for using the results of proposition [ prop : microloc ] and in fact for any updating by standard methods ( not detailed here ) the parametrices in and to inversions modulo operators in ( i.e. , classical of order ) provided that the coefficients are sufficiently smooth .the presence of the constant indicates that the reconstruction of may be performed up to the existence of a finite dimensional kernel as an application of the fredholm theory as in .equation means that some components of are reconstructed with a loss of one derivative while other components are reconstructed with no loss .the latter components are those that can be spanned by the components and .some algebra shows that the only such linear combination is , which , using the fact that , can be computed as confirming the heuristics of sec .[ sec : heur ] .it can be shown that all other components of ( i.e. any part of in ) are , to some extent , spanned by the components , and as such can not be reconstructed with better stability than the loss of one derivative in light of . combining the above results with, we arrive at the main stability result of the paper : such an estimate holds for any .the above estimate holds with when is an injective ( linear ) operator .injectivity can not be verified by microlocal arguments since all inversions are performed up to smoothing operators ; see in the isotropic setting . in the next section , we obtain an injectivity result , which allows us to set in the above expression . however , the above stability estimate is essentially optimal .an optimal estimate , which follows from the above and the equations for is the following : the left - hand - side inequality is a direct consequence of and the expression of .the right - hand side is a direct consequence of the expression of .the above estimate is clearly optimal .the operator is of order .if it were elliptic , then would be reconstructed with no loss of derivative .however , is not elliptic and the loss of ellipticity is precisely accounted for by the results in lemma [ lem : xieta ] . as we discussed above , it turns out that the only spatial coefficient controlled by is , and hence .now , allowing to be supported up to the boundary , we present a variation of the non - linear resolution technique used in . first considering solutions generated by boundary conditions fulfilling hypothesis [ hyp : det ], we establish an expression for in terms of the remaining unknowns : h^{-1 } { dh } h^{-1 } [ \nabla u]^t - [ \nabla v ] h^{-1 } [ \nabla u]^t - [ \nabla u ] h^{-1 } [ \nabla v]^t ) \gamma_0 , \label{eq : gammaelim}\end{aligned}\ ] ] where ] denote matrices whose -th columns are and , respectively , and where and . in particular we find from the relation [\nabla u]^{-1})^t .\label{eq : trgamma}\end{aligned}\ ] ] plugging back into the second equation in for , one can deduce a gradient equation for the quantity which in turn allows to reconstruct in a lipschitz - stable manner with respect to the lpd ( i.e. without loss of derivative ) . 
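The elimination step just described is, at each point, a linear-algebra identity that can be sanity-checked with random matrices standing in for the fields. The sketch below reproduces the partially visible formula expressing the perturbation through H^{-1}, dH, [grad u] and [grad v], together with a companion trace identity consistent with the stripped relation above; matrix sizes and values are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Du = rng.normal(size=(n, n))          # columns: gradients of the basis solutions u_i
Dv = rng.normal(size=(n, n))          # columns: gradients of the first-order correctors v_i
A = rng.normal(size=(n, n))
g0 = A @ A.T + n * np.eye(n)          # symmetric positive definite background gamma_0
S = rng.normal(size=(n, n))
dg = 0.01 * (S + S.T)                 # symmetric perturbation delta gamma

H = Du.T @ g0 @ Du                                        # background power densities
dH = Du.T @ dg @ Du + Dv.T @ g0 @ Du + Du.T @ g0 @ Dv     # linearized power densities

# eliminate delta gamma from the data (requires knowledge of [grad v]):
Hinv = np.linalg.inv(H)
dg_rec = g0 @ (Du @ Hinv @ dH @ Hinv @ Du.T
               - Dv @ Hinv @ Du.T - Du @ Hinv @ Dv.T) @ g0
print("delta gamma recovered:", np.allclose(dg_rec, dg))

# companion trace identity: tr(g0^{-1} dg) = tr(H^{-1} dH) - 2 tr([grad v][grad u]^{-1})
lhs = np.trace(np.linalg.inv(g0) @ dg)
rhs = np.trace(Hinv @ dH) - 2.0 * np.trace(Dv @ np.linalg.inv(Du))
print("trace identity holds:", np.isclose(lhs, rhs))
```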
now turning to the full reconstruction of , we consider an additional solution generated by a boundary condition fulfilling hyp .[ hyp : z ] .the following proposition then establishes how to reconstruct from : [ prop : scesv ] assume that fulfill hypotheses [ hyp : det ] and [ hyp : z ] over and consider the linearized power densities .then the solutions satisfy a strongly coupled elliptic system of the form where the vector fields are known and only depend on the behavior of , and , and where the functionals are linear in the data .when the vector fields are bounded , system satisfies a fredholm alternative from which we deduce that if with a trivial right - hand side admits no non - trivial solution , then is uniquely reconstructed from .we can then reconstruct from . in the case where is constant, choosing solutions as in remark [ rem : const ] , one arrives at a system of the form where if , so that the system is decoupled and clearly injective . the conclusive theorem for the explicit inversion is thus given by [ thm : explicit ] assume that fulfill hypotheses [ hyp : det ] and [ hyp : z ] over and consider the linearized power densities .assume further that the system with trivial right - hand sides has no non - trivial solution .then is uniquely determined by and we have the following stability estimate we cover the microlocal inversion in sec . [sec : microloc ] .linear algebraic and pseudo - differential preliminaries are given in sec .[ ssec : prelim ] .the leading - order symbols of order of the lpd functionals are computed in sec .[ ssec : symbol0 ] and a proof of lemma [ lem : xieta ] is given .the symbols of order are then computed in [ ssec : symbolm1 ] and the proof proposition [ prop : microloc ] is given in sec .[ ssec : proofs ] .we then treat the explicit inversion in sec .[ sec : explicit ] .starting with some preliminaries in sec .[ ssec : prelim2 ] , we derive some crucial relations in sections [ ssec : der1 ] and [ ssec : der2 ] , before proving proposition [ prop : scesv ] and theorem [ thm : explicit ] in sec .[ ssec : proofs2 ] .[ [ linear - algebra . ] ] linear algebra .+ + + + + + + + + + + + + + + in the following , we consider the matrices with the inner product structure for which admits the orthogonal decomposition . for two vectors and in denote by the matrix with entries , and we also define the symmetrized outer product with denoting the standard dotproduct on , we have the following identities [ [ pseudo - differential - calculus . ] ] pseudo - differential calculus .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + recall that we denote the set of _ symbols _ of order on by , which is the space of functions such that for all multi - indices and and every compact set there is a constant such that we denote the operator as and the set of _ pseudo - differential operators _ ( ) of order on by , where suppose is strictly decreasing and , and suppose for each .we denote an _ asymptotic expansion _ of the symbol as if given two and with respective symbols and and orders and , we will make repetitive use of the symbol expansion of the product operator ( see ( * ? ? ?* theorem ( 8.37 ) ) for instance ) where denotes a symbol of order at most . as we will need to compute products of three , and , we write the following formula for later use , obtained by iteration of in the next derivations , some operators have matrix - valued principal symbols. 
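The product formula referred to above (its display is stripped) is the standard composition rule for pseudo-differential operators; it is reproduced here in one common convention, which may differ from the original's by signs and normalisation.

```latex
% Standard composition asymptotics for pseudo-differential operators A, B of
% orders m_A, m_B (one common convention; the original's may differ):
\[
  \sigma_{AB}(x,\xi) \;\sim\; \sum_{\alpha \ge 0} \frac{(-i)^{|\alpha|}}{\alpha!}\,
    \partial_\xi^{\alpha}\sigma_A(x,\xi)\;\partial_x^{\alpha}\sigma_B(x,\xi),
\]
% so that, modulo a symbol of order m_A + m_B - 2,
\[
  \sigma_{AB} \;=\; \sigma_A\,\sigma_B \;-\; i\,\partial_\xi\sigma_A\cdot\partial_x\sigma_B .
\]
```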
however we will only compose them with operators with scalar symbols , so that the above calculus remains valid .writing and ( understood in the componentwise sense ) , we have thus equation reads , where the operators and have respective symbols for a smooth vector field , we will also need in the sequel to express the operator as , the symbol of which is denoted .we now write as a of with symbol as in . belongs to and we will compute in this paper the first two terms in the expansion of ( call them and ) , which in turn relies on constructing parametrices of of increasing order and doing some computations on symbols of products of based on formula . if is a parametrix of modulo , i.e. , then straightforward computations based on the relation yield the following relation for any , denotes the operator where solves , and standard elliptic theory allows to claim that smoothes by one derivative so that the error operator defined in smoothes by derivatives .in particular , upon computing a parametrix of modulo , the first three terms in are enough to construct the principal part of the symbol modulo .[ [ computation - of - m_ij_0 . ] ] computation of .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + in light of the last remark , we first compute a parametrix of modulo , that is , since , we look for a principal symbol of the form .clearly , we easily obtain . in this case , the principal symbol of at order zero is given by , according to and , admits a somewhat more symmetric expression if pre- and post - multiplied by , the unique positive squareroot of , so that we may write , where we have defined and for any as well as .this last expression motivates the proof of lemma [ lem : xieta ] .* proof of _( i ) _ : * let such that , and denote so that . then using identity and , we get :\hxi_0\odot\eta ' \\ & = ( v_i\cdot \hxi_0 ) ( v_j\cdot \eta ' ) + ( v_i\cdot\eta')(v_j\cdot \hxi_0 ) \\ & \quad - ( \hxi_0\cdot v_i ) ( \hxi_0\cdot\hxi_0 ) ( v_j\cdot \eta ' ) - ( \hxi_0\cdot v_i)(\hxi_0\cdot\eta')(v_j\cdot\hxi_0 ) \\ & \quad - ( \hxi_0\cdot v_j ) ( \hxi_0\cdot \hxi_0 ) ( v_i\cdot\eta ' ) - ( \hxi_0\cdot v_j ) ( \hxi_0\cdot\eta ' ) ( \hxi_0\cdot v_i)\\ & = 0 , \end{aligned}\ ] ] where we have used and , thus _ ( i ) _ holds .* proof of _ ( ii ) _ : * recall that : a_0^{-1 } p a_0^{-1}. \end{aligned}\ ] ] we write as the direct orthogonal sum of three spaces : with respective dimensions , and . decomposing uniquely into this sum , we write .direct calculations then show that since is a basis of , is a basis of and thus implies that therefore with for some , so with proportional to , i.e. such that , thus the proof is complete . in other words , *all * symbols of order zero are orthogonal to the -dependent -dimensional subspace of symmetric matrices .one must thus compute the next term in the symbol exampansion of the operators , i.e. .we will then show that enough symbols of the form will suffice to span the entire space for every and , so that the corresponding family of operators is elliptic as a function of .as the previous section explained , the principal symbols can never span .therefore , we compute the next term in their symbol expansion . we must first construct a parametrix of modulo , i.e. 
of the form [ lem : qm2m3 ] the symbols and defined in have respective expressions {pq } \partial_{x_i}[\gamma_0]_{ij } - 2 [ \gamma_{0}]_{ij } \partial_{x_i } [ \gamma_0]_{pq}\right ) .\label{eq : qm3 } \end{aligned}\ ] ] using formula with , and using the expansions of and , we get in order to match the expansion , the expansion above must satisfy , for large , that is , and now , we easily have and {pq } \xi_p \xi_q { { \bf e}}_i ] .the proof is complete since hyp .[ hyp : det ] ensures that never vanishes and is uniformly invertible , and hyp .[ hyp : z ] ensures that is uniformly invertible . *the operators .proof of prop .[ prop : microloc ] : * as advertised in sec .[ ssec : statmicroloc ] , because of the algebraic form of the symbols of the linearized power density operators , it is convenient for inversion purposes to define the microlocal change of basis as in , i.e. to convince ourselves that this collection forms a microlocally invertible operator of , let us introduce the zero - th order with scalar principal symbol for .then for any , the composition of operators has principal symbol ( repeated indices are summed over ) where we have used the following property , true for any smooth vector field : thus for any , the composition recovers up to a regularization term .this in particular justifies the estimates and the subsequent inversion procedure .we are now ready to prove proposition [ prop : microloc ] .from the fact that is a basis at every point and given their dotproducts , we have the following formula , true for every vector field : * proof of _ ( i ) _ : reconstruction of the components and .* we work with . using with ,straightforward computations yield which means that upon defining with scalar principal symbols relation is satisfied in the sense of operators since the previous calculation amounts to computing the principal symbol of the composition of operators in .+ it remains to construct appropriate operators that will map to the components for , which is where the additional measurements come into play .let as in and construct the as in .it is easy to see that , since the are only functions of , the terms of fixed homogeneity in the symbol expansion of satisfy then from equation and relation , we deduce that , so that .moreover , using equation together with relation , we deduce that is now the principal symbol of . using relation with , the symmetry of and multiplying by , we have the relation using this relation , we deduce the following calculation , for while the second term gives us the missing components , we claim that the first one is spanned by and .indeed we have so we deduce that in light of these algebraic calculations , we now build the parametrices .let , , , and the with respective principal symbols then the relation implies at the principal symbol level .the operator can indeed be expressed as the following zero - th order linear combination of the components and : so that the left - hand side of is expressed as a post - processing of measurement operators only .the proof is complete .for a matrix with columns and the canonical basis , one has the following representation more generally , for two matrices ] , we have the relation finally , for a matrix and ] and ] . 
with this fact in mind , the left - hand side looks like , where we compute ^t \gamma_0 [ \nabla u ] h^{-1 } { { \bf e}}_p \\ & = \gamma_0 z [ \nabla u]^t [ \nabla u]^{-t } { { \bf e}}_p = \gamma_0 z_p.\end{aligned}\ ] ] in equations and , the only unknown is the matrix := [ \nabla v_1,\dots,\nabla v_n] ] , however we do not follow that route here .we now show that provided that we use _ one _ additional solution ( on top of the basis ) such that the matrix is of full rank , then we can reconstruct via a strongly coupled elliptic system of the form , after which we can reconstruct from by formula .we now show how to derive this elliptic system . according to hypothesis [ hyp : z ] , the matrix ] vanish in the expression of .thus system is decoupled and clearly injective . by continuity, we also obtain that is injective for ( not necessarily scalar ) sufficiently close to a constant . starting from the integral version of the elliptic system in the case where , then the fredholm alternative implies . in order to translate inequality into a stability statement, we must bound in terms of the measurements .we have for , and since , expressed in involves the and their derivatives up to second order , if we assume all other multiplicative coefficients to be uniformly bounded , we obtain an estimate of the form thus we obtain in the end , an estimate of the form once is reconstructed , we can reconstruct uniquely from and $ ] using formula , with the stability estimate in order to see that satisfies a gradient equation that improves the stability of its reconstruction , the quickest way is to linearize ( * ? ? ?* equation ( 7 ) ) derived in the non - linear case , which reads as follows : where is the matrix of power densities and is the -th entry of . plugging the expansions , , , and using the fact that the linearized equation at reads from this equation , and using the stability estimates and , it is straighforward to establish the estimate and thus the proof is complete . | this paper concerns the reconstruction of an anisotropic conductivity tensor in an elliptic second - order equation from knowledge of the so - called power density functionals . this problem finds applications in several coupled - physics medical imaging modalities such as ultrasound modulated electrical impedance tomography and impedance - acoustic tomography . we consider the linearization of the nonlinear hybrid inverse problem . we find sufficient conditions for the linearized problem , a system of partial differential equations , to be elliptic and for the system to be injective . such conditions are found to hold for a lesser number of measurements than those required in recently established explicit reconstruction procedures for the nonlinear problem . guillaume bal and chenxi guo franois monard ( communicated by the associate editor name ) |
our general aim is to build up relativity theories as theories in the sense of mathematical logic .so we axiomatize relativity theories within pure first - order logic ( fol ) using simple , comprehensible and transparent basic assumptions ( axioms ) .we strive to prove all the surprising predictions of relativity from a minimal number of convincing axioms .we eliminate tacit assumptions from relativity by replacing them with explicit axioms ( in the spirit of the foundation of mathematics and tarski s axiomatization of geometry ) .we also elaborate logical and conceptual analysis of our theories .logical axiomatization of physics , especially that of relativity theory , is not a new idea , among others , it goes back to such leading scientists as hilbert , reichenbach , carnap , gdel , and tarski .relativity theory was intimately connected to logic from the beginning , it was one of the central subjects of logical positivism . for a short survey on the broader literature ,see , e.g. , .our aims go beyond these approaches in that along with axiomatizing relativity theories we also analyze in detail their logical and conceptual structure and , in general , investigate them in various ways ( using our logical framework as a starting point ) .a novelty in our approach is that we try to keep the transition from special relativity to general relativity logically transparent and illuminating .we `` derive '' the axioms of general relativity from those of special relativity in two natural steps .first we extend our axiom system for special relativity with accelerated observers ( sec.[acc - sec ] ) .then we eliminate the distinguished status of inertial observers at the level of axioms ( sec.[gen - sec ] ) .some of the questions we study to clarify the logical structure of relativity theories are : * what is believed and why ?* which axioms are responsible for certain predictions ? * what happens if we discard some axioms ?* can we change the axioms and at what price ?our aims stated in the first paragraph reflect , partly , the fact that we axiomatize a physical theory .namely , in physics the role of axioms ( the role of statements that we assume without proofs ) is more fundamental than in mathematics . among others ,this is why we aim to formulate simple , logically transparent and intuitively convincing axioms.our goal is that on our approach , surprising or unusual predictions be theorems and not assumed as axioms .for example , the prediction `` no faster than light motion ... '' is a theorem on our approach and not an axiom , see thm.[thm - noftl ] .getting rid of unnecessary axioms is especially important in a physical theory .when we check the applicability of a physical theory in a situation , we have to check whether the axioms of the theory hold or not .for this we often use empirical facts ( outcomes of concrete experiments ) .however , these correspond to existentially quantified theorems rather than to universally quantified statementswhich the axioms usually are .thus while we can easily disprove the axioms by referring to empirical facts , we can verify these axioms only to a certain degree .some of the literature uses the term empirical fact for universal generalization of an empirical fact elevated to the level of axioms , see , e.g. 
, , .we simply call these generalizations ( empirical ) axioms .for one thing , einstein s theory of relativity not just had but still has a great impact on many areas of science .it has also greatly affected several areas in the philosophy of science .relativity theory has an impact even on our every day life , e.g. , via gps technology ( which can not work without relativity theory ) .any theory with such an impact is also interesting from the point of view of axiomatic foundations and logical analysis . since spacetime is a similar geometrical object as space , axiomatization of relativity theories ( or spacetime theories in general )is a natural continuation of the works of euclid , hilbert , tarski and many others axiomatizing the geometry of space .there are many examples showing the benefits of using axiomatic method . for example , if we decompose relativity theories into little parts ( axioms ) , we can check what happens to our theory if we drop , weaken or replace an axiom or we can take any prediction , such as the twin paradox , and check which axiom is and which is not needed to derive it . this kind of reverse thinking helps to answer the why - type questions . for details on answering why - type questions by the methodology of the present work , see , .the success story of axiomatic method in the foundations of mathematics also suggests that it is worth applying this method in the foundations of spacetime theories , .let us note here that euclid s axiomatic - deductive approach to geometry also made a great impression on the young einstein , see . among others, logical analysis makes relativity theory modular : we can change some axioms , and our logical machinery ensures that we can continue working in the modified theory .this modularity might come handy , e.g. , when we want to unify general relativity and quantum theory to a theory of quantum gravity .for further reasons why to apply the axiomatic method to spacetime theories , see , e.g. , , , , , .we aim to provide a logical foundation for spacetime theories similar to the rather successful foundations of mathematics , which , for good reasons , was performed strictly within fol .one of these reasons is that fol helps to avoid tacit assumptions .another is that fol has a complete inference system while second - order logic ( or higher - order logic ) can not have one .still another reason for choosing fol is that it can be viewed as a fragment of natural language with unambiguous syntax and semantics .being a _ fragment of natural language _ is useful in our project because one of our aims is to make relativity theory accessible to a broad audience . _ unambiguous syntax and semantics _are important , because they make it possible for the reader to always know what is stated and what is not stated by the axioms .therefore they can use the axioms without being familiar with all the tacit assumptions and rules of thumb of physics ( which one usually learns via many , many years of practice ) . for further reasons why to stay within folwhen dealing with axiomatic foundations , see , e.g. , ( * ? ? ?* appendix : why fol ? 
) , , , , .before we present our axiom system let us go back to einstein s original ( logically non - formalized ) postulates .einstein based his special theory of relativity on two postulates , the principle of relativity and the light principle : `` the laws by which the states of physical systems undergo change are not affected , whether these changes of state be referred to the one or the other of two systems of coordinates in uniform translatory motion . '' and `` any ray of light moves in the ` stationary ' system of co - ordinates with the determined velocity , whether the ray be emitted by a stationary or by a moving body . '' , see .the logical formulation of einstein s principle of relativity is not an easy task since it is difficult to capture axiomatically what `` the laws of nature '' are in general .nevertheless , the principle of relativity can be captured by our fol approach , see , . instead of formulating the two original principles , we formulate the following consequence of theirs : `` the speed of light signals is the same in every direction everywhere according to every inertial observer '' ( and not just according to the ` stationary ' observer ) . herewe will base our axiomatization on this consequence and call it light axiom .we will soon see that the light axiom can be regarded as the key assumption of special relativity .since we want to axiomatize special relativity , we have to fix some formal language in which we will write up our axioms .let us see the basic concepts ( the vocabulary " of the fol language ) we will use .we would like to speak about motion .so we need a basic concept of things that can move . we will call these object _bodies_. the light axiom requires a distinguished type of bodies called _ photons _ or _light signals_. we will represent motion as the changing of spatial location in time . thus we will use reference frames for coordinatizing events ( meetings of bodies ) .time and space will be marked by _the structure of quantities will be an _ ordered field _ in place of the field of real numbers . for simplicity , we will associate special bodies to reference frames .these special bodies will be called `` observers .'' observations will be formalized / represented by means of the _ worldview relation_. to formalize the ideas above , let us fix a natural number for the dimension of spacetime . to axiomatize theories of the -dimensional spacetime , we will use the following two - sorted fol language : where ( bodies ) and ( quantities ) are the two sorts , ( inertial observers ) and ( light signals or photons ) are one - place relation symbols of sort , and are two - place function symbols of sort , and ( the worldview relation ) is a -place relation symbol the first two arguments of which are of sort and the rest are of sort .atomic formulas and are translated as `` _ _ is an inertial observer _ _ , '' and `` __ is a photon _ _ , '' respectively . to speak about coordinatization, we translate as `` _ _ body coordinatizes body at space - time location _ _ , '' ( i.e. , at space location and at instant ) .sometimes we use the more picturesque expressions _ sees _ or _ observes _ for _ coordinatizes_. 
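for quick reference, the vocabulary just described can be collected into a single signature. the display below is only a compact restatement of the prose (the original displayed formula is not reproduced in the text above); the letters d, B, Q, IOb, Ph and W are the names used here for the dimension parameter, the two sorts and the relation and function symbols.

\[
\bigl\langle\, \mathrm{B},\ \mathrm{Q}\ ;\ \mathrm{IOb},\ \mathrm{Ph},\ +,\ \cdot,\ \mathrm{W}\,\bigr\rangle,
\qquad
\mathrm{IOb},\mathrm{Ph}\subseteq\mathrm{B},\quad
+,\cdot:\mathrm{Q}\times\mathrm{Q}\to\mathrm{Q},\quad
\mathrm{W}\subseteq\mathrm{B}\times\mathrm{B}\times\mathrm{Q}^{\,d},
\]

so the worldview relation is the \((d+2)\)-place symbol whose atomic formula \( \mathrm{W}(m,b,x_1,\dots,x_d) \) is read as "observer \(m\) coordinatizes body \(b\) at the spacetime location \((x_1,\dots,x_d)\)".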
however , these cases of `` seeing '' and `` observing '' have nothing to do with visual seeing or observing ; they only mean associating coordinate points to bodies .the above , together with statements of the form are the so - called _ atomic formulas _ of our fol language , where and can be arbitrary variables of the same sort , or terms built up from variables of sort by using the two - place operations and .the _ formulas _ are built up from these atomic formulas by using the logical connectives _ not _( ) , _ and _ ( ) , _ or _ ( ) , _ implies _ ( ) , _ if - and - only - if _ ( ) and the quantifiers _ exists _ ( ) and _ for all _ ( ) .for the precise definition of the syntax and semantics of fol , see , e.g. , . to meaningfully formulate the light axiom, we have to provide some algebraic structure for the quantities .therefore , in our first axiom , we state some usual properties of addition and multiplication true for real numbers . : : the quantity part is a euclidean field , i.e. , + is a field in the sense of abstract algebra , + the relation defined by is a linear ordering on , and + positive elements have square roots : .the field - axioms ( see , e.g. , ) say that , are associative and commutative , they have neutral elements , and inverses , respectively , with the exception that does not have an inverse with respect to , as well as is additive with respect to .we will use , , , , as derived ( i.e. , defined ) operation symbols . is a mathematical " axiom in spirit .however , it has physical ( even empirical ) relevance .its physical relevance is that we can add and multiply the outcomes of our measurements and some basic rules apply to these operations .physicists usually use all properties of the real numbers tacitly , without stating explicitly which property is assumed and why .the two properties of real numbers which are the most difficult to defend from an empirical point of view are the archimedean property , see , , and the supremum property , see the remark after the introduction of axiom on p .. euclidean fields got their name after their role in tarski s fol axiomatization of euclidean geometry .by we can reason about the euclidean structure of a coordinate system the usual way , we can introduce euclidean distance , speak about straight lines , etc .in particular , we will use the following notation for ( i.e. , and are -tuples over ) if : we will also use the following two notations : for the _ space component _ and the _ time component _ of , respectively . now let us see how the light axiom can be formalized in our fol language . : : for any inertial observer , the speed of light is the same in every direction everywhere , and it is finite .furthermore , it is possible to send out a light signal in any direction .formally : axiom has an immediate physical meaning .this axiom is not only implied by the two original principles of relativity , but it is well supported by experiments , such as the michelson - morley experiment .moreover , it has been continuously tested ever since then .nowadays it is tested by gps technology .axiom says that `` it is _ possible _ for a photon to move from to iff ... '' .so , a notion of possibility plays a role here . in the present paper we work in an extensional framework , as is customary in geometry and in spacetime theory .however , it would be more natural to treat this possibility phenomenon " in a modal logic framework , and this is more emphatically so for relativistic dynamics . 
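for concreteness, the displayed formula behind the light axiom, which is stated only in words above, can be spelled out as follows. this is a reconstruction using the notational conventions introduced earlier (spatial component \( \bar x_s \), time component \( x_t \), euclidean length \( |\cdot| \)), modelled on the cited axiomatizations rather than quoted verbatim; the existentially quantified quantity \( c \) plays the role of the observer's (finite) speed of light.

\[
\mathsf{AxPh}:\qquad
\forall m\,\Bigl[\mathrm{IOb}(m)\ \rightarrow\ \exists c\ \forall \bar x\,\forall\bar y\,
\bigl(\exists p\,(\mathrm{Ph}(p)\wedge\mathrm{W}(m,p,\bar x)\wedge\mathrm{W}(m,p,\bar y))
\ \leftrightarrow\ |\bar y_s-\bar x_s| = c\cdot|y_t-x_t|\bigr)\Bigr].
\]

the left-to-right direction of the equivalence expresses that the speed of photons does not depend on direction or location, while the right-to-left direction expresses that a light signal can be sent out in any direction.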
it would be interesting to explore the use of modal logic in our logical analysis of relativity theory. this investigation would be a nice unification of the works of imre ruzsa s school on modal logic and the works of our tarskian spirited school on axiomatic foundations of relativity theory .robin hirsch s work can be considered as a first step along this road .let us note that does not require that the speed of light be the same for every inertial observer or that it be nonzero .it requires only that the speed of light according to a fixed inertial observer be a quantity which does not depend on the direction or the location .why do we not require that the speed of light is nonzero ?the main reason is that we are building our logical foundation of spacetime theories examining thoroughly each part of each axiom to see where and why we should assume them .another ( more technical ) reason is that it will be more natural to include this assumption ( ) in our auxiliary axiom on page . our next axiom connects the worldviews of different inertial observers by saying that all observers observe the same external " reality ( the same set of events ) .intuitively , by the event occurring for at , we mean the set of bodies observes at .formally : : : all inertial observers coordinatize the same set of events : this axiom is very natural and tacitly assumed in the non - axiomatic approaches to special relativity , too. basically we are done .we have formalized the light axiom .we have introduced two supporting axioms ( and ) for the light axiom which are simple and natural ; however , we can not simply omit them without loosing some of the meaning of .the field axiom enables us to speak about distances , time differences , speeds , etc .the event axiom ensures that different inertial observers see the same events . in principle, we do not need more axioms for analyzing / axiomatizing special relativity , but let us introduce two more simplifying ones .we could leave them out without loosing the essence of our theory , it is just that the formalizations of the theorems would become more complicated .: : any inertial observer sees himself on the time axis : the role of is nothing more than making it easier to speak about the motion of reference frames via the motion of their time axes .identifying the motion of reference frames with the motion of their time axes is a standard simplification in the literature . is a way to formally capture this simplifying identification .our last axiom is a symmetry axiom saying that all inertial observers use the same units of measurements .: : [ axsymd ] any two inertial observers agree about the spatial distance between two events if these two events are simultaneous for both of them ; furthermore , the speed of light is 1 : let us see how states that `` all inertial observers use the same units of measurements . 
''that `` the speed of light is 1 '' ( besides that the speed of light is nonzero ) means only that observers are using units measuring time distances compatible with the units measuring spatial distances , such as light years or light seconds .the first part of means that different observers use the same unit measuring spatial distances .this is so because if two events are simultaneous for both observers , they can measure their spatial distance and the outcome of their measurements are the same iff the two observers are using the same units to measure spatial distances .our axiom system for special relativity contains these 5 axioms only : in an axiom system , the axioms are the price " we pay , and the theorems are the goods " we get for them .therefore , we strive for putting only simple , transparent , easy - to - believe statements in our axiom systems .we want to get all the hard - to - believe predictions as theorems .for example , we prove from that it is impossible for inertial observers to move faster than light relative to each other ( no ftl travel " for science fiction fans ) . in the following, means logical derivability .( no faster than light inertial observers ) [ thm - noftl ] for a geometrical proof of thm.[thm - noftl ] , see .in relativity theory we are often interested in comparing the worldviews of different observers .so we introduce the worldview transformation between observers and as the following binary relation : by thm.[thm - poi ] , the worldview transformations between inertial observers in the models of are poincar transformations , i.e. , transformations which preserve the so - called minkowski - distance of -tuples . for the definition ,we refer to or .[ thm - poi ] for the proof of thm.[thm - poi ] , see ( * ? ? ?* thm.11.10 , 640 . ) or ( * ? ? ?* thm.3.2.2 , 22 . ) . by thm.[thm - poi ] , all predictions of special relativity , such as `` moving clocks slow down , '' are provable from . for details ,see , e.g. , , , .let us illustrate here by a simple example what we mean by logical analysis of a theory .in we have assumed that all observers see the same ( possibly infinite ) meetings of bodies .let us try to weaken to an axiom assuming something similar but only for finite meetings of bodies .a natural candidate is one of the following finite approximations of : : : all inertial observers see the same -meetings of bodies : for example , means only that inertial observers see the same bodies .let us also introduce axiom scheme as the collection of all the axioms . by prop.[prop - meet ], is strictly weaker assumption than and is strictly stronger than all the axioms of together .[ prop - meet ] item follows easily by the formulations of the axioms . to prove item , we are going to construct a model of in which is not valid .let .let all the bodies be inertial observers .let see all the bodies in and none of them in any other coordinate points , i.e. , let hold iff ; and for all let see all the bodies but at coordinate points for all , i.e. , let hold iff and .in this model , all inertial observers see all the possible -meetings .so is valid in this model .however , the only inertial observer who sees the -meeting is .so is not valid in this model .we are going to prove item by a similar model construction .the only difference is that now will be infinite . 
for simplicity ,let be the set of natural numbers .let all the other parts of the model be defined in the same way .now all the inertial observers see all the possible -meetings of the bodies for all natural numbers .so is valid in this model for all natural number .hence is valid in this model .however , only sees the event .so is not valid in this model .now we will use that there are no stationary ( i.e. , motionless ) light signals .so let us formalize this statement .: : inertial observers do not see stationary light signals . [ prop - m3e ] let us make some general observations .by , there is no nondegenerate triangle in whose sides are of slope .this is clear if ; and in the case , this can be shown by contradiction using the fact that the vertical projection of a triangle of this kind is a triangle whose one side is the sum of the other two sides .therefore , and together imply that any inertial observer sees the events in which a particular photon participates on a line of slope .by , and , every inertial observer sees different meetings of photons at different coordinate points .this is so since ( by ) for every pair of points there is a line of slope containing only one of the points .hence , by , there is a photon seen by only at one of the two coordinate points .let us now prove item .let and be inertial observers and let be a coordinate point . to prove , we have to find a coordinate point such that . to find this ,let , and , see fig.[fig - meet ] . by , there are photons , and such that , , and .since sees every photon on a line of slope , he sees the meeting of and only at and does not see the meeting of and .since implies , sees the same meetings of pairs of photons .so there is a where sees and meet . is the only point where sees both and .this is so because sees different meetings of photons at different points but sees the same -meetings as .so if there were another point , say , where sees and , there were photons and such that , and does not see the meeting of and . by axiom has to see the meetings and .the only point where can see these meetings is since the only point where sees and meet .therefore sees the meeting of and at .thus , by , also has to see the meeting of and , but does not see it . hence is the only point where sees both and .let be a body such that .by , has to see the meeting of , and .this point has to be since the only point where and meet is .since was an arbitrary body , we have .the same argument shows that .so as desired .[ l][l] [ l][l] [ br][br] [ tl][tl] [ l][l] [ l][l] [ tl][tl] [ br][br] [ b][b] [ bl][bl] [ tl][tl] [ tl][tl] [ tl][tl] [ tl][tl] ] we are going to prove item , by constructing a model .let be the field of real numbers .let us denote the set of natural numbers by .let .let and be all the inertial observers and let the lines of slope be all the photons .let and see the photon at coordinate point iff . let see all the bodies at iff .let see all the bodies but at iff ( i.e. , iff is in the horizontal hyperplane ) . , vertical lines can be used instead of horizontal hyperplanes , which gives a counterexample with bodies having more natural properties . 
]it is straightforward from this construction that axioms , and are valid in this model .since every line of slope 1 intersects every horizontal hyperplane , and see the same -meetings of bodies .hence is also valid in this model .however , the only inertial observer who sees the meeting is .so is not valid in this model .we prove item by a similar construction .the only difference is that now the set of bodies is ; and the photons are the vertical lines .it is straightforward from the construction that axioms , are valid in this model ( ) .since every vertical line intersects every horizontal hyperplane , and see the same -meetings of bodies .hence is also valid in this model .however , only sees the meeting .so is not valid in this model .prop.[prop - m3e ] shows that a price to weaken axiom to is to assume that there are no stationary light signals .since contains this assumption , we can simply replace with in .a natural continuation of this investigation can be a search for assumptions that allow us to weaken to .a possible candidate is that bodies move along straight lines and the dimension is at least .the proof of item shows that assuming only that bodies move along straight lines is not enough , if .several similar investigations on the logical connections of axioms and predictions , see , e.g. , , on dynamics , , ( * ? ? ?* , 7 ) , on twin paradox , on kinematics , time - dilation and length - contraction , twin paradox , etc .in we restricted our attention to inertial observers .it is a natural idea to generalize the theory by including accelerated observers as well .it is explained in the classic textbook that the study of accelerated observers is a natural first step ( from special relativity ) towards general relativity .we have not introduced the concept of observers as a basic one because it can be defined as follows : an _ observer _ is nothing other than a body who observes " ( coordinatizes ) some other bodies somewhere , this property can be captured by the following formula of our language : our key axiom about accelerated observers is the following : : : at each moment of his life , every accelerated observer sees ( coordinatizes ) the nearby world for a short while in the same way as an inertial observer does . for formulation of in our fol language , see , or . axiom ties the behavior of accelerated observers to those of inertial ones .justification of this axiom is given by experiments .we call two observers _ co - moving _ at an event if they `` see the nearby world for a short while in the same way '' at the event . by this notion says that at each event of an observer s life , he has a co - moving inertial observer .we can think of a dropped spacepod as a co - moving inertial observer of an accelerated spaceship ( at the event of dropping ) . or , if a spaceship switches off its engines , it will move on as a co - moving inertial spaceship would . our next two axiomsensure that the worldviews of accelerated observers are big enough .they are generalized versions of the corresponding axioms for inertial observers , but now postulated for all observers . : : if sees in an event , then can not deny it : : : any observer sees himself in an interval of the time axis : our last two axioms will ensure that the worldlines of accelerated observers are tame " enough , e.g. , they have velocities at each moment . 
in ,the worldview transformations between inertial observers are affine maps , the next axiom will state that the worldview transformations between accelerated observers are approximately affine , wherever they are defined .: : the worldview transformations have linear approximations at each point of their domain ( i.e. , they are differentiable ) . for a precise formalization of ,see , e.g. , .we note that implies that the worldview transformations are functions with open domains .however , if the numberline has gaps , still there can be crazy motions .our last assumption is an axiom scheme supplementing by excluding these gaps .: : every definable , bounded and nonempty subset of has a supremum ( i.e. , least upper bound ) . in `` definable '' means `` definable in the language of , parametrically . '' for a precise formulation of , see or . is a mathematical axiom " in spirit .it is tarski s fol version of hilbert s continuity axiom in his axiomatization of geometry , see , fitted to the language of .when is the field of real numbers , is automatically true . that requires the existence of supremum only for sets definable in the language of instead of every set , is important not only because by this trick we can keep our theory within fol ( which is crucial in a foundational work ) , but also because it makes this postulate closer to the the physical / empirical level .the latter is true because does not speak about `` any fancy subset '' of the quantities , just those `` physically meaningful '' sets which can be defined in the language of our ( physical ) theory . adding this 5 axioms to , we get an axiom system for accelerated observers : as an example we show that the so - called _ twin paradox _ can be naturally formulated and analyzed logically in .our axiomatic approach also makes it possible to analyze the details of the twin paradox ( e.g. , who sees what , when ) with the clarity of logic , see for part of such an analysis . according to the twin paradox ,if a twin makes a journey into space ( accelerates ) , he will return to find that he has aged less than his twin brother who stayed at home ( did not accelerate ) .we formulate the twin paradox in our fol language as follows .: : every inertial observer measures at least as much time as any other observer between any two events and in which they meet ; and they measure the same time iff they have encountered the very same events between and : where .[ thm - twp ] for the proof of thm.[thm - twp ] , see or .item of thm.[thm - twp ] states that can not be replaced with the whole fol theory of real numbers in if we do not want to loose from its consequences .our theory is also strong enough to predict the gravitational time - dilation effect of general relativity via einstein s equivalence principle , see , .our theory of accelerated observers speaks about two kinds of observers , inertial and accelerated ones .some axioms are postulated for inertial observers only , some apply to all observers .we get an axiom system for general relativity by stating the axioms of in a generalized form in which they are postulated for all observers , inertial and accelerated ones equally . in other words, we will change all axioms of in the same spirit as and were obtained from and , respectively .this kind of change can be regarded as a `` democratic revolution '' with the slogan `` all observers should be equivalent , the same laws should apply to all of them . '' here `` law '' translates as `` axiom . 
''this idea originates with einstein ( see his book ( * ? ? ?* part ii , ch.18 ) ) . for simplicity, we will use an equivalent version of the symmetry axiom ( see ( * ? ? ?* thm.2.8.17(ii ) , 138 . ) or ( * ? ? ?* thm.3.1.4 , 21 . ) ) , and we will require the speed of photons to be 1 in ( as opposed to requiring it in ) .: : the velocity of photons an observer meets " is 1 when they meet , and it is possible to send out a photon in each direction where the observer stands . :: meeting observers see each other s clocks slow down with the same rate . for a precise formulation of these axioms ,see , .we introduce an axiom system for general relativity as the collection of the following axioms : axiom system contains basically the same axioms as , the difference is that they are assumed only locally but for all the observers .thm.[grcomp - thm ] below states that the models of are exactly the spacetimes of usual general relativity . for the notion of a lorentzian manifoldwe refer to , and .[ grcomp - thm ] is complete with respect to its standard models , i.e. , with respect to lorentzian manifolds over real closed fields . this theorem can be regarded as a completeness theorem in the following sense .let us consider lorentzian manifolds as intended models of .how can we do that ? we give a method for constructing a model of from each lorentzian manifold ; and conversely, we show that each model of is obtained this way from a lorentzian manifold .after this is elaborated , we have defined what we mean by a formula in the language of being valid in a lorentzian manifold .then completeness means that for any formula in the language of , we have iff is valid in all lorentzian manifolds over real closed fields .this is completely analogous to the way in which minkowskian spacetimes were regarded as intended models of in the completeness theorem of , see ( * ? ? ?* thm.11.28 , 681 . ) and .we call the worldline of an observer _ timelike geodesic _ , if each of its points has a neighborhood within which this observer maximizes measured time ( wrist - watch time ) " between any two encountered events . for formalization of this concept in our fol language , see , e.g. , .according to the definition above , if there are only a few observers , then it is not a big deal that a worldline is a time - like geodesic ( it is easy to be maximal if there are only a few to be compared to ) . to generate a real competition for the rank of having a timelike geodesic worldline, we postulate the existence of many observers by the following axiom scheme of comprehension .: : for any parametrically definable timelike curve in any observers worldview , there is another observer whose worldline is the range of this curve .a precise formulation of can be obtained from that of its variant in . an axiom schema guarantees that our definition of a geodesic coincides with that in the literature on lorentzian manifolds .therefore we also introduce the following theory : so in our theory , our concept of timelike geodesic coincides with the standard concept in the literature on general relativity .all the other key concepts of general relativity , such as curvature or riemannian tensor field , are definable from timelike geodesics .therefore we can treat all these concepts ( including the concept of metric tensor field ) in our theory in a natural way . 
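spelled out semi-formally, the geodesic property used above reads as follows; this is only a paraphrase of the wording of the definition, not the precise first-order formula of the cited works, and \( \mathrm{time}_k(e_1,e_2) \) abbreviates the time ("wrist-watch time") observer \( k \) measures between the events \( e_1 \) and \( e_2 \).

\[
\mathrm{wline}(k)\ \text{is a timelike geodesic}\quad:\Longleftrightarrow\quad
\forall e\in\mathrm{wline}(k)\ \ \exists\ \text{a neighbourhood } U\ni e\ \
\forall e_1,e_2\in U\ \ \forall k'\colon\
\bigl(k\ \text{and}\ k'\ \text{both encounter } e_1\ \text{and } e_2\bigr)
\ \rightarrow\ \mathrm{time}_k(e_1,e_2)\ \ge\ \mathrm{time}_{k'}(e_1,e_2).
\]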
in general relativity , einstein s field equations ( efe )provide the connection between the geometry of spacetime and the energy - matter distribution ( given by the energy - momentum tensor field ) .since in all the geometric concepts of spacetime are definable , we can use einstein s equation as a definition of the energy - momentum tensor , see , e.g. , or , or we can extend the language of with the concept of energy - momentum tensor and assume einstein s equations as axioms .as long as we do not assume anything more of the energy - momentum tensor than its connection to the geometry described by einstein s equations , there is no real difference in these two approaches . in both approaches ,we can add extra conditions about the energy - momentum tensor to our theory , e.g. , the dominant energy condition or , e.g. , that the spacetimes are vacuum solutions .there is observational evidence suggesting that in our physical universe there exist regions supporting potential non - turing computations .namely , it is possible to design a physical device in relativistic spacetime which can compute a non - turing computable task , e.g. , which can decide whether zf set theory is consistent .this empirical evidence is making the theory of hypercomputation more interesting and gives new challenges to the physical church thesis , see , e.g. , .these new challenges do more than simply providing a further connection between logic and spacetime theories ; they also motivate the need for logical understanding of spacetime theories .we have axiomatized both special and general relativity in fol . moreover , via our theory , we have axiomatized general relativity so that each of its axioms can be traced back to its roots in the axioms of special relativity .axiomatization is not our final goal .it is merely an important first step toward logical and conceptual analysis .we are only at the beginning of our ambitious project .andrka , h. , j. x. madarsz , and i. nmeti , with contributions from a. andai , g. sgi , i. sain and cs .tke , 2002 , _ on the logical structure of relativity theories_. research report .budapest , alfrd rnyi institute of mathematics .http://www.renyi.hu/pub/algebraic-logic/contents.html .andrka , h. , j. x. madarsz , and i. nmeti , 2006 , logical axiomatizations of space - time .samples from the literature . in a.prkopa , et al ._ non - euclidean geometries_. berlin , springer .155185 .madarsz , j. x. , i. nmeti , and g. szkely , 2007 , first - order logic foundation of relativity theories . in d.gabbay , et al .( eds . ) , _ mathematical problems from applied logic ii_. berlin , springer . 217252 .h. andrka , j. x. madarsz , + i. nmeti , g. szkely + alfrd rnyi institute of mathematics + of the hungarian academy of sciences + budapest p.o .box 127 , h-1364 hungary + andreka.hu , madarasz.hu + nemeti.hu , turms.hu | the aim of this paper is to give an introduction to our axiomatic logical analysis of relativity theories . |
the cluster - update algorithm introduced for simulations of the potts model by swendsen and wang in 1987 has been a spectacular success , reducing the effect of critical slowing down by many orders of magnitude for the system sizes typically considered in computer simulation studies .a number of generalizations , including an algorithm for continuous - spin systems and the single - cluster variant as well as more general frameworks for cluster updates , have been suggested following the initial work of ref .the single bond update introduced by sweeny several years before swendsen s and wang s work is considerably less well known .this is mostly due to difficulties in its efficient implementation in a computer code , which is significantly more involved than for the swendsen - wang algorithm . in deciding about switching the state of a given bond from inactive to active or _ vice versa _ , one must know the consequences of the move for the connectivity properties of the ensemble of clusters , i.e. , whether two previously disjoint clusters will become connected or an existing cluster is broken up by the move or , instead , the number of clusters will stay unaffected .if implemented naively , these connectivity queries require a number of steps which is asymptotically close to proportional to the number of spins , such that the resulting _ computational critical slowing down _ outweighs the benefit of the reduced autocorrelation times of the updating scheme .even though it was recently shown that the decorrelation effect of the single - bond approach is asymptotically _ stronger _ than that of the swendsen - wang approach , this strength can only be played once the computational critical slowing down is brought under control . here, we use a poly - logarithmic dynamic connectivity algorithm as recently suggested in the computer science literature to perform bond insertion and removal operations as well as connectivity checks in run - times only logarithmically growing with system size , thus removing the problem of algorithmic slowing down . as the mechanism as well asthe underlying data structures for these methods are not widely known in the statistical physics community , we here use the opportunity to present a detailed description of the approach . for the convenience of the reader, we also provide a python class implementing these codes , which can be used for simulations of the random - cluster model or rather easily adapted to different problems where dynamic connectivity information is required .we consider the random - cluster model ( rcm ) which is a generalization of the bond percolation problem introducing a correlation between sites and bonds .it is linked to the -state potts model through the fortuin - kasteleyn transformation , generalizing the potts model to arbitrary real .special cases include regular , uncorrelated bond percolation ( ) as well as the ising model ( ) . to define the rcm , consider a graph with vertex set , ( ) , and edge set , ( ) .we associate an occupation variable with every edge .we say that is _ active _ if and _ inactive _ otherwise .the state space of the rcm corresponds to the space of all ( spanning is spanning if it contains all vertices of . ] ) sub - graphs , a configuration is thus represented as \in\omega ] , the density of active edges , and , the cluster number weight , these expressions define a family of pdfs .it is worthwhile to mention a number of limiting cases . 
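since the displayed equation for the configurational weight has not survived in the text above, it may help to recall the standard form of the random-cluster measure, which is presumably what eq. ( [ eq : rcm_pmf ] ) displays; for a spanning subgraph \(A\subseteq E\) with \(|A|\) active edges and \(k(A)\) connected components (the symbols \(A\), \(v\) and \(k(A)\) are the notation used here),

\[
\pi[A] \;=\; \frac{1}{Z}\,p^{\,|A|}\,(1-p)^{\,|E|-|A|}\,q^{\,k(A)}
\;\propto\; v^{\,|A|}\,q^{\,k(A)},
\qquad v=\frac{p}{1-p},
\]

with \(v\) the edge weight (the "density of active edges" parameter referred to above) and \(q\) the cluster number weight.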
for eq .( [ eq : rcm_pmf ] ) factorizes and corresponds to independent bond percolation with . in the limit of with fixed ratio , on the other hand , it corresponds to bond percolation with local probability and the condition of cycle - free graphs .taking or in the limit of and constant for we obtain the ensemble of uniform spanning trees for connected . naturally , in the latter two limits every edge in a configuration is a bridge .sweeny s algorithm is a local bond updating algorithm directly implementing the configurational weight ( [ eq : rcm_pmf ] ) .we first consider its formulation for the limiting case of independent bond percolation .for an update move , randomly choose an edge with uniform probability and propose a flip of its state from inactive to active or _vice versa_. move acceptance can be implemented with any scheme satisfying detailed balance , for instance the metropolis acceptance ratio where and for insertions and deletions of edges , respectively .this dynamical process is described by the following master equation : where ensures proper normalization of , given the normalization of .the metropolis transition rates are then given by \label{eq : loc_transr}.\end{aligned}\ ] ] eq .( [ eq : glob_transr ] ) expresses the uniform random selection of an edge and the corresponding edge dependent transition rate is defined in eq .( [ eq : loc_transr ] ) .the product of kronecker deltas ensures the single - bond update mechanism , i.e. , that only one edge per step is changed . from here , generalization to arbitrary is straightforward , leading to a modified transition rate .\label{eq : transr_rcm}\end{aligned}\ ] ] we note that this metropolis update is more efficient than a heat - bath variant for any value of apart from , where both rules coincide . clearly , for , to compute the acceptance probability of a given trial move one must find , the change in connected components ( clusters ) induced by the move .this quantity , equivalent to the question of whether the edge is a _ bridge _, is highly non - local . determining it involves deciding whether there exists at least one alternative path of active edges connecting the incident vertices and that does not cross .the dynamic connectivity problem is the task of performing efficient connectivity queries to decide whether two vertices and are in the same ( ) or different ( ) connected components for a dynamically evolving graph , i.e. , mixing connectivity queries with edge deletions and insertions . for a static graph, such information can be acquired in asymptotically constant time after a single decomposition , for instance using the hoshen - kopelman algorithm . under a sequence of edge insertions ( but no deletions ) , it is still possible to perform all operations , insertions and connectivity queries , in practically constant time using a so - called union - and - find ( uf ) data structure combined with path - compression and tree - balance heuristics . 
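to make the union-and-find idea concrete, the following minimal sketch implements such a structure with path compression and union by size; it supports edge insertions and connectivity queries in near-constant amortized time but, as stressed above, no deletions. the class name and interface are illustrative only and are not part of the simulation package discussed later.

....
class unionfind(object):
    """union-find over n vertices with path compression and union by size;
    supports edge insertions (union) and connectivity queries, no deletions."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, v):
        # walk up to the root, flattening the path on the way back (path compression)
        root = v
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[v] != root:
            self.parent[v], v = root, self.parent[v]
        return root

    def connected(self, u, v):
        return self.find(u) == self.find(v)

    def union(self, u, v):
        # insert the edge (u, v): attach the smaller tree below the larger one
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return False      # u and v were already in the same cluster
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        self.size[ru] += self.size[rv]
        return True           # two clusters have been amalgamated
....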
this fact has been used to implement a very efficient algorithm for the ( uncorrelated ) percolation problem .an implementation of sweeny s algorithm , however , requires insertions as well as deletions to ensure balance .hence , we need to be able to remove edges without the need to rebuild the data structure from scratch ..asymptotic run - time scaling at criticality of the elementary operations of insertion or deletion of internal or external edges , respectively , using sequential breadth - first search ( sbfs ) , interleaved bfs ( ibfs ) , union - and - find ( uf ) or the fully dynamic connectivity algorithm ( dc ) as a function of the linear system size . [ cols= " < , < , < , < , < " , ] this goal can be reached using a number of different techniques .building on the favorable behavior of the uf method under edge insertions and connectivity queries , the data structure can be updated under the removal of an external edge ( bridge ) by performing breadth - first searches ( bfss ) through the components connected to the two ends and of the edge .alternatively , one might try to do without any underlying data structure , answering each connectivity query through a separate graph search in breadth - first manner . in both cases ,the process can be considerably sped up by replacing the bfss by _ interleaved _ traversals alternating between vertices on the two sides of the initial edge and terminating the process as soon as one of the two searches comes to an end .as , at criticality of the model , the sizes of the two cluster fragments in case of a bridge bond turn out to be very uneven on average , this seemingly innocent trick leads to dramatic run - time improvements .the asymptotic run - time behavior of insertion and deletion steps for internal and external edges and the algorithms based on bfs or uf data structures is summarized in table [ tab : scaling ] for the case of simulations on the square lattice of edge length .we expect the same bounds with the corresponding exponents to hold for general critical hypercubic lattices . here, denotes the finite - size scaling exponent of the susceptibility and is a geometric exponent related to the two - arm crossing behavior of clusters .we note that for in two dimensions .asymptotically , it is the most expensive operation which dominates the run - time of the algorithm and , as a consequence , it turns out that ( for the square lattice ) a simple bfs with interleaving is more efficient than the approach based on union - and - find , cf . table [ tab : scaling ] . in any case , the implementations discussed so far feature a computational effort for a sweep of bond updates that scales faster than linearly with the system size , thus entailing some computational critical slowing down .it is found in ref . that for most choices of , this effect appears to asymptotically destroy any advantage of a faster decorrelation of configurations by the sweeny algorithm as compared to the swendsen - wang method .an alternative technique based on more complicated data structures allows to perform any mix of edge insertions , deletions and connectivity queries in _ poly - logarithmic _ run - time per operation .poly - logarithmic here denotes polynomials of powers of the logarithm of the independent variable , e.g. , system size , of the form here the base of the logarithm is not important and changes only the coefficients . 
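before turning to the fully dynamic algorithm, the interleaved traversal mentioned above can be made concrete with the following sketch. it assumes the lattice is kept as a dictionary adj mapping every vertex to the set of its currently active neighbours, and that the edge in question has already been removed from that structure; the function name and representation are illustrative and do not reflect the interface of the published code.

....
from collections import deque

def was_bridge(adj, u, v):
    """decide whether the (already removed) edge (u, v) was a bridge by two
    breadth-first searches from u and from v, advanced in alternation.
    returns (True, vertices_of_exhausted_fragment) if it was a bridge and
    (False, None) otherwise."""
    seen = {u: {u}, v: {v}}
    queue = {u: deque([u]), v: deque([v])}
    while queue[u] and queue[v]:
        for start, other in ((u, v), (v, u)):   # alternate between the two sides
            if not queue[start]:
                continue
            x = queue[start].popleft()
            for y in adj[x]:
                if y == other:                  # the two endpoints are still connected
                    return False, None
                if y not in seen[start]:
                    seen[start].add(y)
                    queue[start].append(y)
    # one search exhausted its fragment without ever reaching the other endpoint,
    # so the edge was a bridge; the exhausted side is (up to ties) the smaller fragment
    side = u if not queue[u] else v
    return True, seen[side]
....

because the search stops as soon as the smaller fragment is exhausted, the cost of a bridge deletion is governed by the smaller of the two resulting cluster fragments, which is what produces the favourable exponents quoted in the table above.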
in the following all logarithms are with respect to base .given the observation of generally faster decorrelation of configurations by the sweeny algorithm in the sense of smaller dynamical critical exponents , the use of such ( genuinely ) dynamic connectivity algorithms ( dc ) allows for an asymptotically more efficient simulation of the critical random - cluster model . in the following ,we discuss the basic ideas and some details of the algorithm employed here .the dc algorithm is based on the observation that for a given sub - graph it is possible to construct a spanning forest which is defined by the following properties : * in if and only if in * there exists exactly one path for every pair , with in other words a spanning forest of a graph associates a spanning tree to every component , an example is given in fig .[ spanning_graph_0 ] ( solid lines only ) .one advantage of is that it has fewer edges than , but represents the same connectivity information .for the sub - graph , the distinction of _ tree edges _ and _ non - tree edges _ allows for a cheap determination of . for the case of deleting an edge in know that there is an alternative path connecting the adjacent vertices , namely the path in , so this edge was part of a cycle and we conclude .if we insert an edge whose adjacent vertices are already connected in then we come to the same conclusion .if , on the other hand , we want to insert a tree edge , i.e. , an edge with adjacent vertices , not yet connected , we observe that because and belong to separate spanning trees before the insertion of , the new spanning subgraph obtained by linking and via is still a spanning tree .hence the only modification on the spanning forest for the insertion of a tree edge is the amalgamation of two trees .this can be done in steps by using the following idea of ref . which also supports the deletion of bridges . for a given component in transform the corresponding tree in into a directed circuit by replacing every edge by two directed edges ( arcs ) ] and every vertex by a loop ] and $ ] .the sequence of arcs between these two edges corresponds to and the concatenation of the remaining sequences without the two arcs corresponding to results in representing : \rightarrow[1,2]\rightarrow[2,2]\rightarrow[2,1 ] , \label{eq : et_1}\\ \mathcal{e}_{2 } & = & [ 4,4]\rightarrow[4,3]\rightarrow[3,3]\rightarrow[3,4 ] .\label{eq : et_2}\end{aligned}\ ] ] in summary , we see that by mapping every component in to a tree in and every such tree to a directed circuit which we store in an ets we are able to perform edge insertions / deletions into / from as well as connectivity queries with an amortised computational effort .the remaining operation not implemented efficiently by the provisions discussed so far is the deletion of edges from which are not bridges , i.e. , for which a replacement edge exists outside of the spanning forest .the dc algorithm first executes the tree splitting as in the case of a bridge deletion . additionally , however , it checks for a reconnecting edge in the set of non - tree edges .if such an edge is found , it concludes and merges the two temporary trees as indicated above by using the located non - tree edge , which hence now becomes a tree edge . if , on the other hand , no re - connecting edge is found , no additional work is necessary as the initially considered edge is a bridge . 
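the tour bookkeeping that all of this rests on becomes quite transparent if one momentarily forgets about the balanced search trees and writes each euler tour as a plain python list. the sketch below does exactly that, so every operation costs time linear in the tour length instead of the logarithmic cost achieved with the tree representation; it is meant to illustrate the mechanics only, not to serve as the actual data structure. starting from four isolated vertices, link(make_tour(1), 1, make_tour(2), 2) and link(make_tour(4), 4, make_tour(3), 3) produce exactly the two tours [1,1]→[1,2]→[2,2]→[2,1] and [4,4]→[4,3]→[3,3]→[3,4] quoted above.

....
def make_tour(v):
    # a single-vertex tree is represented by its loop arc [v, v]
    return [(v, v)]

def reroot(tour, v):
    # rotate the circular tour so that it starts at the loop arc of v
    i = tour.index((v, v))
    return tour[i:] + tour[:i]

def link(tour_u, u, tour_v, v):
    # insert the tree edge (u, v): splice v's tour, bracketed by the two
    # arcs (u, v) and (v, u), right after the loop arc of u
    tu, tv = reroot(tour_u, u), reroot(tour_v, v)
    return tu[:1] + [(u, v)] + tv + [(v, u)] + tu[1:]

def cut(tour, u, v):
    # delete the tree edge (u, v): drop the arcs (u, v) and (v, u); the arcs
    # strictly between them form the tour of the subtree hanging off the edge,
    # the remaining arcs form the tour of the other tree
    i, j = tour.index((u, v)), tour.index((v, u))
    if i > j:
        i, j = j, i
    return tour[:i] + tour[j + 1:], tour[i + 1:j]

def connected(tour_of, u, v):
    # given a mapping from vertices to their current tour object, two vertices
    # lie in the same tree iff their loop arcs sit in the same tour
    return tour_of[u] is tour_of[v]
....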
to speed up the search for replacement edges ,we limit it to the smaller of the two parts and as all potential replacement edges must be incident to both components . to allow for efficient searches for non - tree edges incident to a given component using the ets representation ,the search - tree data structures are augmented such that the loop arc for every vertex stores an adjacency list of non - tree edges ( vertices ) incident to it .further , every node in the underlying search tree representing the ets carries a flag indicating if any non - tree edge is available in the sub - tree [ which is a sub - tree of the search tree and not of ] .this allows for a search of replacement edges using the euler tour and ensures that any non - tree edge can be accessed in time .it turns out that exploiting this observation is , in general , beneficial but not sufficient to ensure the amortized time complexity bound indicated for the dc algorithm in general .suppose that the graph consists of a giant homogeneous component with and and the edge deletion results , temporarily setting aside the question of possible replacement edges , in two trees with and incident edges , respectively , where .then the computational effort caused by scanning all possible non - tree edges is clearly . in amortizing onto the insertionsperformed to build up this component , every such non - tree edge carries a weight of . if this case occurs sufficiently frequently , it will be impossible to bound the amortized cost per operation .this problem is ultimately solved in the dc algorithm by the introduction of an edge hierarchy .the intuitive idea is to use the expense of a replacement edge search following a deletion to reduce the cost of future operations on this edge .this is done in such a way as to separate dense from sparse clusters and more central edges from those in the periphery of clusters . by amortizing the cost of non - tree edge scans and level increases over edge insertions it follows that one can reduce the run - time for graph manipulations to an amortized and for connectivity queries .each time an incident non - tree edge is checked and found unsuitable for reconnecting the previously split cluster we promote it to be in a higher level . if we do this many times for a dense component we will be able to find incident non - tree edges very quickly in a higher level .these ideas are achieved in the dc algorithm by associating a level function to each , based on this level function , one then constructs sub - graphs with the property this induces a hierarchy of sub - graphs : as described above , for every sub - graph we construct a spanning forest . clearly the same hierarchy holds for the family of spanning forests . in other wordsthe edges in level connect components / trees of level .if an edge has to be inserted into , then it is associated to a level and hence it is in . to achieve an efficient search for replacement edges ,the algorithm adapts the level of edges after deletions of tree edges in a way which preserves the following two invariants : 1 .the maximal number of vertices in a component in level is .2 . any possible replacement edge for a previously deleted edge with level has level .trivially , both invariants are fulfilled when all edges have level 0 .we now have to specify how exactly the idea of keeping important edges at low levels and unimportant ones at higher levels is implemented . to do this , suppose we deleted an edge from , i.e. 
, at level , and temporarily have where ( say ) is the smaller of the two , i.e., it has less vertices . because of invariant ( i ) it follows that we are allowed to move the tree ( which is now at most half the size of ) to level by increasing the level of all tree edges of by one .after that we start to search for a replacement edge in the set of non - tree edges stored in the ets of in level where it also remains because of the fact that .for every scanned non - tree edge we have two options : * it does not reconnect and and has therefore both ends incident to . in this case, we increase the level of this edge .this implements the idea of moving unimportant edges in `` dense '' components to higher levels . *it does reconnect and hence we re - insert it at level .if we have not found a replacement edge at level we continue at level .the search terminates after unsuccessfully completing the search at level or when a replacement edge was found . in the first case it follows whereas in the second case remains unchanged .implementing this replacement - edge search following any tree - edge deletion introduces an upward flow of edges in the hierarchy of graphs and the level of an edge in the current graph never decreases . focusing on a single edge, we see that it is sequentially moved into levels of smaller cluster size and hence the cost of future operations on this edge is reduced . taking this into account it follows that the insertion of an edge has a cost of for inserting at level plus it also `` carries '' the cost of all possible level increases with cost of each resulting in amortized per insertion .deletions on the other hand imply a split of cost in levels . in case of an existing replacement edge another contribution of caused by an insertion at level 0the contribution of moving tree edges to higher levels and searching for replacement edges ( moving non - tree edges up ) is already paid for by the sequence of previous insertions ( amortization ) .the only missing contribution is the effort for obtaining the next replacement edge in an ets . in total, deletions hence have an amortized computational cost of .we tested the performance of the current dc implementation in the context of sweeny s algorithm in comparison to the simpler approaches based on breadth - first search and union - and - find strategies . while the algorithm discussed hereallows all operations to be performed in poly - logarithmic time , due to the complicated data structures the constants are relatively large .our results show consistency with the poly - logarithmic run - time bounds derived .it appears , however , that very large system sizes are required to clearly see the superior asymptotic performance of the dc algorithm as compared to the bfs and uf implementations .for details see the more elaborate discussion in ref . . as an example , fig .[ fig : fig_dc_run_times ] shows the average run - time per edge operation as a function of the system size for three different choices of parameters .average run - time for sweeny s algorithm for the rcm for different values of at the critical point on the 2d square lattice with periodic boundary conditions . ]apart from run - time considerations , the implementation has a rather significant space complexity .since we maintain overlapping forests over the vertices , the space complexity is .a heuristic suggested in ref . 
to decrease memory consumption is a truncation of higher edge levels as these are , for the inputs or graphs considered in our application , sparsely populated .we checked the impact on our implementation by comparing run - times and memory consumptions for a truncation .we did not see any significant change in the run - time . on the other hand we observed a reduction of almost a factor of two in the memory consumption .this conforms to our observation that during the course of a simulation almost no edges reached levels beyond for system sizes where the actual maximal level according to eq .( [ eq : levels ] ) is . likewise , a number of further optimizations or heuristics are conceivable to improve the typical run - time behavior .this includes a sampling of nearby edges when looking for a replacement edge before actually descending into the edge level hierarchy .a number of such heuristics and experimental comparisons of fully and partially dynamics connectivity algorithms has been discussed in the recent literature , see refs .a full exploration of these possibilities towards an optimal implementation of the dc class of algorithms for the purpose of the sweeny update is beyond the scope of the current article and forms a promising direction for future extensions of the present work .we provide a python class encompassing four different implementations of sweeny s algorithm based on : * sequential breadth - first searches ( sbfs ) * interleaved breadth - first searches ( ibfs ) * union - and - find with interleaved breadth - first searches ( uf ) *poly - logarithmic dynamic connectivities as discussed here ( dc ) the package is built on top of a c library and it is therefore possible to use the library in a stand - alone compiled binary . the necessary source code is also provided . for more details see the related project documentation .the source code is published under the mit license . herewe give a basic usage example , which simulates the rcm with ( the ising model ) at , using an equilibration time of sweeps , a simulation length of sweeps , and random number seed using the dc implementation : .... from sweeny import sweeny sy = sweeny(q=2.,l=64,beta = np.log(1 .+ np.sqrt(2.)),coupl=1 . ,cutoff=1000,tslength=10000,rngseed=1234567,impl='dc ' ) sy.simulate ( ) .... in order to extract an estimate , say , of the binder cumulant we need to retrieve the time series for and , ....sec_cs_moment= sy.ts_sec_cs_moment four_cs_moment = sy.ts_four_cs_moment sec_cs_moment * = sec_cs_moment binder_cummulant = four_cs_moment.mean()/sec_cs_moment.mean ( ) .... once an instance of the sweeny class is created , it is easy to switch the algorithm and parameters as follows : .... sy.init_sim(q=1.3,l=64,beta=np.log(1.+np.sqrt(1.3.)),coupl=1 . ,cutoff=5000,tslength=50000,rngseed=7434,impl='ibfs ' ) ....we have shown how to implement sweeny s algorithm using a poly - logarithmic dynamic connectivity method and we described the related algorithmic aspects in some detail . we hope that the availability of the source code and detailed explanations help to bridge the gap between the computer science literature on the topic of dynamic connectivity problems and the physics literature related to mc simulations of the rcm , specifically in the regime . 
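in that spirit , a deliberately stripped - down sketch of the level - based replacement - edge search described above is given below . it is a toy reading of the procedure , not the implementation shipped with the package : components are recomputed by breadth - first search over the spanning forest instead of being maintained in euler - tour trees , so none of the poly - logarithmic bounds apply , and the class and method names are ours . its only purpose is to make the promotion of unsuitable non - tree edges and the re - insertion of a replacement edge concrete .
....
# toy sketch of the level-based replacement-edge search; components are found
# by breadth-first search over the spanning forest instead of being kept in
# euler-tour trees, so this version is *not* poly-logarithmic.
from collections import deque

class ToyLevelDC:
    def __init__(self):
        self.tree, self.nontree, self.level = set(), set(), {}

    @staticmethod
    def _key(u, v):
        return (u, v) if u < v else (v, u)

    def _component(self, root, lev):
        # vertices reachable from root via tree edges of level >= lev
        seen, queue = {root}, deque([root])
        while queue:
            a = queue.popleft()
            for e in self.tree:
                if self.level[e] >= lev and a in e:
                    b = e[0] if e[1] == a else e[1]
                    if b not in seen:
                        seen.add(b)
                        queue.append(b)
        return seen

    def connected(self, u, v):
        return v in self._component(u, 0)

    def insert(self, u, v):
        e = self._key(u, v)
        self.level[e] = 0
        (self.nontree if self.connected(u, v) else self.tree).add(e)

    def delete(self, u, v):
        e = self._key(u, v)
        if e in self.nontree:                  # non-tree edges are trivial to remove
            self.nontree.discard(e)
            del self.level[e]
            return
        self.tree.discard(e)
        for lev in range(self.level.pop(e), -1, -1):
            cu, cv = self._component(u, lev), self._component(v, lev)
            small = cu if len(cu) <= len(cv) else cv
            for t in self.tree:                # invariant (i): push the smaller tree up
                if self.level[t] == lev and t[0] in small and t[1] in small:
                    self.level[t] = lev + 1
            for f in list(self.nontree):       # scan level-lev non-tree edges on the small side
                if self.level[f] != lev or (f[0] not in small and f[1] not in small):
                    continue
                if f[0] in small and f[1] in small:
                    self.level[f] = lev + 1    # both ends inside: promote, keep searching
                else:
                    self.nontree.discard(f)    # reconnects the two parts:
                    self.tree.add(f)           # use it as the replacement tree edge
                    return
....
in the full algorithm the components , the incident non - tree edges and the level flags are of course obtained from the augmented euler - tour trees described earlier , which is what turns each of these steps into an efficient operation and yields the amortized bounds .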
the availability of an efficient dynamic connectivity algorithm opens up a number of opportunities for further research .this includes studies of the tricritical value where the phase transition of the random - cluster model becomes discontinuous for dimensions as well as the nature of the ferromagnetic - paramagnetic transition for and .e.m.e . would like to thank p. mac carron for carefully reading the manuscript .10 url # 1#1urlprefix[2][]#2 swendsen r h and wang j s 1987 _ phys .lett . _ * 58 * 8688 | we review sweeny s algorithm for monte carlo simulations of the random cluster model . straightforward implementations suffer from the problem of computational critical slowing down , where the computational effort per edge operation scales with a power of the system size . by using a tailored dynamic connectivity algorithm we are able to perform all operations with a poly - logarithmic computational effort . this approach is shown to be efficient in keeping online connectivity information and is of use for a number of applications also beyond cluster - update simulations , for instance in monitoring droplet shape transitions . as the handling of the relevant data structures is non - trivial , we provide a python module with a full implementation for future reference . |
in a recent paper , levin discusses the set of results presented over the last decade by various prominent physicists which led to the conclusion that black holes seem to be susceptible to chaos .levin argues that the most realistic description available of a spinning pair of black holes is chaotic motion , and goes on to complain that in physics and cosmology chaos has not received the attention it deserves in part because the systems studied have been highly idealized . in contrast , in economics we have the interesting fact that even some of the most simple and highly idealized models describing modern economies can easily lead to chaotic dynamics .in this paper we apply the techniques of symbolic dynamics to the analysis of a labor market which shows in almost developed economies large volatility in employment flows .the possibility that chaotic dynamics may arise in modern labor markets had been totally strange to economics until recently , at least as far as we are aware of . in an interesting paper , bhattacharya and bunzel found that the discrete time version of the pissarides - mortensen matching model , as formulated in ljungqvist and sargent , can easily lead to chaotic dynamics under standard sets of parameter values .however , in order to conclude about the existence of chaotic dynamics in the three numerical examples presented in the paper , the authors apply the li - yorke theorem or the mitra sufficient condition which should not be generally applied to all specific simulations because they may lead to misleading conclusions .moreover , in a more recent version of the paper , bhattacharya and bunzel present new results in which chaos is completely removed from the dynamics of the model .this paper explores the matching model so interestingly developed by bhattacharya and bunzel with the following objectives in mind : ( i ) to show that chaotic dynamics may still be present in the model for standard parameter values , for high values of the measure of labor tightness ( that is the ratio of vacancies to the number of workers looking for jobs ) which can occur in economic booms ; ( ii ) to clarify some open questions raised by the authors in their first paper , by providing a rigorous proof of the existence of chaotic dynamics in the model through the computation of topological entropy in a symbolic dynamics setting .therefore , if one is studying whether there are chaotic dynamics or not under certain ranges of parameters values , we suggest that a bifurcation diagram , the variation of the lyapunov exponent , the existence of a periodic orbit of period not equal to a power of two and positive topological entropy are some techniques that can give a clear answer to this problem .why would the labor market in most of the developed economies behave in such a volatile way as the evidence points out ?one possibility , and usually the most favoured one in the dominant view of economics , is that the economyhas an inherently linear structure and is hit by permanent exogenous shocks . as these shocks are entirely unpredictable , they render the dynamics and the cycles hardly predictable and controllable .another more recent view , which should also be considered for discussion because it seems consistent and realistic , is based on the possibility that the economy has a structure that is nonlinear and the cycles are an endogenous manifestation of this characteristic , either with or without external shocks added to the structure . 
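before turning to the model , it is useful to make one of the diagnostics listed above concrete . the sketch below estimates the lyapunov exponent of a one - dimensional map from a long orbit ; since the reduced map of the matching model is only derived in the next section , the logistic map is used here purely as a stand - in , and the parameter values are illustrative assumptions rather than those of the model .
....
# lyapunov exponent of a one-dimensional map estimated from a single long orbit;
# the logistic map is used here only as a stand-in for the reduced map derived below.
import numpy as np

def lyapunov(f, df, x0, n_transient=1_000, n_iter=100_000):
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n_iter):
        x = f(x)
        acc += np.log(abs(df(x)))      # average of log|f'| along the orbit
    return acc / n_iter

r = 3.9                                # illustrative parameter value
f = lambda x: r * x * (1.0 - x)
df = lambda x: r * (1.0 - 2.0 * x)
print(lyapunov(f, df, x0=0.2))         # a positive value is a necessary condition for chaos
....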
in what follows, a very simple and fully deterministic model will be presented that is capable of generating such type of volatility with standard parameter sets and no noise .let us assume that in every period of time there are large flows of workers moving into and out of employment : a certain number of job vacancies is posted by firms and there is a total measure of workers looking for jobs .when a worker and a firm reach an agreement there is a successful match , and the total number of these matches is given by the aggregate matching function intuition behind ( [ eqm(u , v ) ] ) is very simple : the higher is the easier it will be for firms to get a worker with the desired qualifications ; and the higher is the level of vacancies posted by firms , the higher is the probability that a worker will find an appropriate job . for simplicity we will assume as a constant .however , a more adequate treatment would consist of treating as a variable dependent on the level of public provision of information by public agencies with the objective of increasing the number of successful matches .the measure of labor tightness is given by the ratio then , the probability of a vacancy being filled at is given by let be the total number of employed workers at the beginning of and let be defined as the probability of a match being dissolved at . therefore we have where notice that gives the number of undissolved matches prevailing at and passed on to , while represents the number of new matches formed at with the available number of unemployed workers and vacancies .as shown in , the model can be solved for the decentralized outcome of a nash bargaining game between workers and firms but to keep the model as close as possible to the presentation in and we should focus upon the central planner solution to the matching model .the objective function of the central planner is given by , and are parameters that represent , respectively , the productivity of each worker , the lost value of leisure due to labor effort , and the cost that firms incur per vacancy placed in the market .therefore , the planner chooses and the next period s employment level , by solving the following dynamic optimization problem\]]subject to is the time discount rate and an initial condition is given .the lagrangian can be written as + \lambda _ { t}\left [ \left ( 1-s\right ) n_{t}+q\left ( \frac{v_{t}}{1-n_{t}}\right ) v - n_{t+1}\right ] \right\ } .\]]the first order conditions ( foc ) , for an interior solution , are given by = 0 \\ \frac{\partial l}{\partial n_{t+1}}=-\lambda _{ t}+\beta ^{t+1}\left ( \phi -z\right ) + \lambda _ { t+1}\left [ \left ( 1-s\right ) + q^{\prime } \left ( \theta _ { t+1}\right ) \theta _ { t+1}^{2}\right ] = 0.\end{gathered}\ ] ] the very interesting point in and was the manipulation of these foc to arrive at a reduced equation that can lead to chaotic dynamics . 
from the first foc we get andsubstituting this and the corresponding expression for into the second foc we obtain the following parameter definitions and restrictions equation ( [ eqpricipal ] ) gives the law of motion for the index of labor market tightness in the economy under the planner s solution .in other words , given an initial condition equation ( [ eqpricipal ] ) completely characterizes the trajectory of and the whole economy , the backward dynamics of this model can be characterized by the four - parameter one - dimensional family of maps \rightarrow \left [ 0,g_{\max } \right ] , ] .the first point which we would like to emphasize is that all parameter values satisfy the restrictions presented in equation ( [ eqparam ] ) . that is is illustrated in figure [ fig1 ] , where the unimodal map and the typical chaotic time series associated to the unstable fixed point are presented .the first derivative of the unimodal map is , which shows that the equilibrium is unstable . by varying the parameter in the interval ] we recall that positive lyapunov exponent is a necessary condition for chaos .bhattacharya and bunzel suggest three examples for the study of the dynamics of the map . in the first case a period 3-cycle is found , which implies the existence of chaos in the li - yorke sense if the sharkovsky order is applied . in the second example, it was argued that a period 3-orbit could not be found for the following parameter values : since the equation has no solution . since the map is unimodal and mitra s sufficient condition for chaos in unimodal mapsis verified , the conclusion was that for this parameter setting chaotic motion in the backward dynamics could also be found . where is a non - negative interval, is the critical point such that and is the unique fixed point of the map such that mitra states that : if satisfies and , then shows topological chaos . ] finally , in the third example the set of parameters are : for these values no period three orbit is found , neither the sufficient condition of mitra is verified .therefore , it was argued that for this case the very existence of chaos for the unimodal map is questioned on the grounds of a lack of logical proof of such dynamics . in order to clarify some of the issues raised by ,a symbolic dynamics approach is developed for the unimodal map , which allow us to perform the computation of the topological entropy for any choice of parameters , and , of course , permit us to classify the complexity of the map since positive topological entropy implies the existence of chaotic dynamics .we will concentrate on their third example .we consider again the unimodal map \rightarrow % \left [ 0,g_{\max } \right ] .$ ] this kind of map has symbolic dynamics relative to a topological markov partition generated by the orbit of the critical point this is illustrated in figure [ figmarkov ] for the parameter values presented in example 2 .so , any numerical trajectory for the map corresponds to a symbolic sequence where depending on where the point falls in , i.e., symbolic sequences made of these letters may be ordered by the natural lexicographical order and defining the fullshift to be the set of all possible infinite symbolic strings of s and s , then any given infinite symbolic sequence is a singleton in the fullshift space .the bernoulli shift map is defined by . 
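a minimal numerical sketch of this symbolic machinery is the following : the itinerary of an orbit is recorded relative to the critical point , and the topological entropy of a topological markov chain is obtained as the natural logarithm of the spectral radius of its transition matrix , as exploited below . both the map and the transition matrix in the sketch are illustrative stand - ins , not the ones computed for the parameter values of the model .
....
# symbolic itinerary of an orbit relative to the critical point c, and the
# topological entropy of a topological markov chain as the natural logarithm of
# the spectral radius of its transition matrix (map and matrix are illustrative).
import numpy as np

def itinerary(f, x0, c, length):
    x, symbols = x0, []
    for _ in range(length):
        symbols.append('L' if x < c else 'R')
        x = f(x)
    return ''.join(symbols)

def topological_entropy(A):
    eigvals = np.linalg.eigvals(np.asarray(A, dtype=float))
    return float(np.log(np.max(np.abs(eigvals))))

f = lambda x: 3.9 * x * (1.0 - x)      # stand-in unimodal map with critical point c = 0.5
print(itinerary(f, x0=0.2, c=0.5, length=12))

A = [[0, 1, 0, 0],                     # example 4x4 transition matrix of a markov partition
     [0, 0, 1, 1],
     [1, 1, 0, 0],
     [1, 1, 1, 1]]
print(topological_entropy(A))          # a positive value signals chaotic dynamics
....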
in general ,not all symbolic sequences correspond to the trajectory of an initial condition restricting the shift map to a subset of consisting of all the itineraries that are realizable yields the subshift we formulate the result in terms of topological markov chains , a special class of subshifts of finite type where the transition in the symbol sequence is specified by a matrix .any binary matrix generates a special subshift is called the topological markov chain associated with the markov matrix .we say that if the transition from to is possible .the matrix gives a complete description of the dynamics of the unimodal map .the premier numerical invariant of a dynamic system is its topological entropy defined by is the set of words of length occurring in sequences of .moreover , if the topological markov matrix is given then , the topological entropy is the natural logarithm of the spectral radius of . for the parameter values , we found a period 5 orbit : which is shown in figure [ figmarkov ] with the corresponding 4 interval markov partition .the critical point assumes the value and generates the symbolic partition for the map the periodic orbit has the following symbolic address : and in consequence we have the following markov matrix: .\]]the maximal eigenvalue of is given by which implies that the topological entropy is positive : and this shows very clearly that we are dealing with chaotic motion in this set of parameter values .it should be noted that for any other kneading sequence , for a suitable choice of parameters values , we can obtain a markov partition and a markov matrix which totally determine the complexity of the unimodal map .this is a very simple and rigorous way to estimate the topological entropy of a one - dimensional model and to check for the existence of chaos .the general use of the li - yorke theorem or the mitra condition may lead to misleading conclusions about the existence of chaotic dynamics , as it was done in .therefore , in order to obtain relevant answers to whether there are or not chaotic dynamics under certain ranges of parameters values in a 1-dimensional particular model , we suggest that a bifurcation diagram , the variation of the lyapunov exponent , the existence of a periodic point of period not equal to a power of two , and symbolic dynamics are very powerful techniques for that purpose . moreover , the application of these techniques to the matching labor market model so interestingly developed by bhattacharya and bunzel clearly confirmed that a very simple model of the labor market , with well behaved aggregate functions ( continuous , twice differentiable and linearly homogeneous ) do really produce chaotic behavior for a range of parameter sets which had been questioned in , for high values of the measure of labor market tightness . financial support from the fundao cincia e tecnologia , lisbon , is grateful acknowledged , under the contract no pocti/ eco /48628/ 2002 , partially funded by the european regional development fund ( erdf ) . | in this paper we apply the techniques of symbolic dynamics to the analysis of a labor market which shows large volatility in employment flows . in a recent paper , bhattacharya and bunzel have found that the discrete time version of the pissarides - mortensen matching model can easily lead to chaotic dynamics under standard sets of parameter values . 
to conclude about the existence of chaotic dynamics in the numerical examples presented in the paper, the li-yorke theorem or the mitra sufficient condition were applied, which seems questionable because they may lead to misleading conclusions. moreover, in a more recent version of the paper, bhattacharya and bunzel present new results in which chaos is completely removed from the dynamics of the model. our paper explores the matching model so interestingly developed by the authors with the following objectives in mind: (i) to show that chaotic dynamics may still be present in the model for standard parameter values; (ii) to clarify some open questions raised by the authors, by providing a rigorous proof of the existence of chaotic dynamics in the model through the computation of topological entropy in a symbolic dynamics setting. symbolic dynamics, periodic orbits, chaos conditions, backward dynamics, matching and unemployment. |
interdependent behaviour and causality in coupled complex systems continue to attract considerable interest in fields as diverse as solid state science , biology , physiology , climatology . coupling and synchronization effectshave been observed for example in cardiorespiratory interactions , in neural signals , in glacial variability and in milankovitch forcing . in finance , the _ leverage effect _ quantifies the cause - effect relation between return and volatility and eventually financial risk estimates . in dna sequences ,causal connections among structural and compositional properties such as intrinsic curvature , flexibility , stacking energy , nucleotide composition are sought to unravel the mechanisms underlying biological processes in cells .many issues still remain unsolved mostly due to problems with the accuracy and resolution of coupling estimates in long - range correlated signals .such signals do not show the wide - sense - stationarity needed to yield statistically meaningful information when cross - correlations and cross - spectra are estimated . in ,a function , based on the detrended fluctuation analysis - a measure of autocorrelation of a series at different scales - has been proposed to estimate the cross - correlation of two series and .however , the function is independent of the lag , since it is a straightforward generalization of the detrended fluctuation analysis , which is a _ positive - defined _ measure of autocorrelation for long - range correlated series .therefore , holds only for .different from autocorrelation , the cross - correlation of two long - range correlated signals is a _ non - positive - defined function of _ , since the coupling could be delayed and have any sign . in this work ,a method to estimate the cross - correlation function between two long - range correlated signals at different scales and lags is developed .the asymptotic expression of is worked out for fractional brownian motions , being the hurst exponent , whose interest follows from their widespread use for modeling long - range correlated processes in different areas .finally , the method is used to investigate the coupling between ( _ i _ ) returns and volatility of the dax stock index and ( _ ii _ ) structural properties , such as deformability , stacking energy , position preference and propeller twist , of the escherichia coli chromosome .the proposed method operates : ( i ) on the integrated rather than on the increment series , thus yielding the cross - correlation at varying windows , as opposed to the standard cross - correlation ; ( ii ) as a sliding product of two series , thus yielding the cross - correlation as a function of the lag , as opposed to the method proposed in .the features ( i ) and ( ii ) imply higher accuracy , -windowed resolution while capturing the cross - correlation at varying lags .the _ cross - correlation _ of two nonstationary stochastic processes and is defined as : [y^\ast(t+\tau)-\eta_y^\ast(t+\tau)]\big\rangle\ ] ] where and indicate time - dependent means of and , the symbol indicates the complex conjugate and the brackets indicate the ensemble average over the joint domain of and .this relationship holds for space dependent sequences , as for example the chromosomes , by replacing time with space coordinate .( [ crosscovariance ] ) yields sound information provided the two quantities in square parentheses are jointly stationary and thus is a function only of the lag . 
in this work, we propose to estimate the cross - correlation of two nonstationary signals by choosing for and in eq .( [ crosscovariance ] ) , respectively the functions : and the wide - sense stationarity of eq .( [ crosscovariance ] ) can be demonstrated for fractional brownian motions . by taking , , and calculated according to eqs .( [ xtil],[ytil ] ) , writes : \big[b_{h_2}^*(t+\tau)-\widetilde{b}_{h_2}^*(t+\tau)\big]\big\rangle \;\;\;.\end{aligned}\ ] ] when writing and , we assume the same underlying generating noise to produce a sample of and .( [ dcab0 ] ) is calculated in the limit of large ( calculation details are reported in the appendix ) .one obtains : \;\;\;,\end{aligned}\ ] ] where is the _ scaled lag _ and is defined in the appendix .( [ theta ] ) is independent of , since the terms in square parentheses depend only on , and thus eq . ( [ crosscovariance ] )is made wide - sense stationary .it is worthy of note that , in eq .( [ theta ] ) , the coupling between and reduces to the sum of the exponents .( [ theta ] ) , for , reduces to : indicating that the coupling between and scales as the product of and .the property of the variance of fractional brownian motion to scale as is recovered from the eq .( [ zero ] ) for and , i.e. : eq .( [ auto ] ) has been studied in .the leverage effect is a _ stylized fact _ of finance .the level of volatility is related to whether returns are negative or positive .volatility rises when a stock s price drops and falls when the stock goes up .furthermore , the impact of negative returns on volatility seems much stronger than the impact of positive returns ( _ down market effect _ ) . to illustrate these effects, we analyze the correlation between returns and volatility of the dax stock index , sampled every minute from 2-jan-1997 to 22-mar-2004 , shown in fig .[ fig : figure1 ] ( a ) . the returns and volatilityare defined respectively as : and ^ 2}/{(t-1 ) } } \;.$ ] dax stock index : ( a ) prices ; ( b ) returns with ; ( c ) volatility with ; ( d ) volatility with .,width=302 ] fig .[ fig : figure1 ] ( b ) shows the returns for .the volatility series are shown in figs .[ fig : figure1 ] ( c , d ) respectively for and .the hurst exponents , calculated by the slope of the log - log plot of eq .( [ auto ] ) as a function of , are ( return ) , ( volatility ) and ( volatility ) .[ fig : figure2 ] shows the log - log plots of for the returns ( squares ) and volatility with ( triangles ) .the scaling - law exhibited by the dax series guarantees that its behaviour is a fractional brownian motion .the function with and with is also plotted at varying in fig .[ fig : figure2 ] ( circles ) . from the slope of the log - log plot of vs , one obtains , i.e. the average between and as expected from eq .( [ zero ] ) .+ next , the cross - correlation is considered as a function of . the plots of for and with and are shown respectively in fig . [fig : figure3 ] ( a , b ) at different windows .log - log plot of for the dax return ( squares ) and volatility ( triangles ) and of with and ( circles ) .red lines are linear fits .the power - law behaviour is consistent with eqs .( [ zero],[auto]).,width=302 ] cross - correlation with and with ( a ) and ( b ) ; ( c ) with and with . ranges from 100 to 500 with step 100.,width=302 ] plot of the function with and with . and ranges from 100 to 500 with step 100 . 
one can note that the five curves collapse , within the numerical errors of the parameters entering the auto- and cross - corerlation functions .this is in accord with the invariance of the product with the window .,width=302 ] leverage function with volatility windows = , , , .the value of is equal for all the curves.,width=302 ] the function for and , is shown in fig .[ fig : figure3](c ) .the cross - correlation takes negative values at small and reaches the minimum at about 10 - 12 days .this indicates that the volatility increases with negative returns ( i.e. with price drops ) .then changes sign relaxing asymptotically to zero from positive values at large .the positive values of indicate that the volatility decreases when the returns become positive ( i.e. when price rises ) and are related to the restored equilibrium within the market ( _ positive rebound days _ ) .it is worthy of remark that the ( positive ) maximum of the cross - correlation is always smaller than the ( negative ) minimum .this is the stylized fact known as _down market effect_. a relevant feature exhibited by the curves in figs .[ fig : figure3 ] ( a - c ) is that the zeroes and the extremes of occur at the same values of , which is consistent with wide - sense - stationarity for all the values of .a further check of wide sense stationarity is provided by the plot of the function . in fig .[ fig : figure4 ] , is plotted with and with , and , ranges from 100 to 500 with step 100 .one can note that the five curves collapse in accord with the invariance of the product with . in fig .[ fig : figure5 ] , the leverage correlation function according to the definition put forward in , is plotted for different volatility windows .the function has been calculated by means of eq.([crosscovariance ] ) .the negative values of cross - correlation ( at smaller ) and the following values ( _ positive rebound days _ ) at larger can be clearly observed for several volatility windows .the function for the dax stock index , estimated by means of the standard cross - correlation function , is shown in figs .1,2 of ref . . by comparing the curves shown in fig .[ fig : figure5 ] to those of ref . , one can note the higher resolution related to the possibility to detect the correlation at smaller lags ( note the unit is hours , while in ref. is days ) and at varying windows , implying the possibility to estimate the degree of cross - correlation at different frequencies .as a final remark , we mention that the cross correlation function between a fractional brownian motion and its own width can be computed analytically in the large limit , following the derivation in the appendix for two general fbm s .the width of a fbm is one possible definition for the volatility , therefore the derivation in the appendix provides a straightforward estimate of the leverage function .several studies are being addressed to quantify cross - correlations among nucleotide position , intrinsic curvature and flexibility of the dna helix , that may ultimately shed light on biological processes , such as protein targeting and transcriptional regulation .one problem to overcome is the comparison of dna fragments with di- and trinucleotide scales , hence the need of using high - precision numerical techniques .we consider deformability , stacking energy , propeller twist and position preference sequences of the escherichia coli chromosome. 
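the estimator used above for the dax series carries over verbatim to such sequences , with time replaced by the position along the chain . the sketch below is a minimal implementation under the assumption that the detrending functions are simple backward moving averages of window n applied to the integrated ( zero - mean , cumulatively summed ) series ; the synthetic input , window and lag range are illustrative only .
....
# windowed, lag-dependent cross-correlation of two long-range correlated series,
# assuming backward moving averages over a window n as the detrending functions
# applied to the integrated series (illustrative implementation).
import numpy as np

def moving_average(x, n):
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode='valid')       # backward moving average

def cross_correlation(x, y, n, max_lag):
    X = np.cumsum(x - x.mean())                       # integrated series
    Y = np.cumsum(y - y.mean())
    dx = X[n - 1:] - moving_average(X, n)             # fluctuation around the moving average
    dy = Y[n - 1:] - moving_average(Y, n)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = np.empty(lags.size)
    for i, tau in enumerate(lags):
        if tau >= 0:
            corr[i] = np.mean(dx[:dx.size - tau] * dy[tau:])
        else:
            corr[i] = np.mean(dx[-tau:] * dy[:dy.size + tau])
    return lags, corr

rng = np.random.default_rng(1)
x = rng.standard_normal(20_000)
y = np.roll(x, 50) + 0.5 * rng.standard_normal(20_000)   # y lags x by 50 steps
lags, corr = cross_correlation(x, y, n=200, max_lag=200)
print(lags[np.argmax(np.abs(corr))])                      # peak should lie near the imposed lag
....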
the sequences , with details about the methods used to synthetize / measure the structural properties , are available at the cbs database - center for biological sequence analysis of the technical university of denmark ( ) . in order to apply the proposed method , the average value is subtracted from the data , that are subsequently integrated to obtain the paths shown in fig . [fig : figure6 ] .the series are long and have hurst exponents : ( deformability ) , ( position preference ) , ( stacking energy ) , ( propeller twist ) .the cross - correlation functions between deformability , stacking energy , propeller twist and position preference are shown in fig .[ fig : figure7 ] ( a - e ) .there is in general a remarkable cross - correlation along the dna chain indicating the existence of interrelated patches of the structural and compositional parameters .the high correlation level between dna flexibility measures and protein complexes indicates that the conformation adopted by the dna bound to a protein depends on the inherent structural features of the dna .it is worthy to remark that the present method provides the dependence of the coupling along the dna chain rather than simply the values of the linear correlation coefficient . in table 4 of ref . one can find the following values of the correlation obtained by either numerical analysis or experimental measurements ( in parentheses ) over dna fragments : ( a ) ; ( b ) ; ( c ) ; ( d ) ; ( e ) , also for the genomic sequences the function is independent of within the numerical errors of the parameters entering the auto- and cross - correlation functions . in fig .[ fig : figure8 ] , is shown for the deformability , the stacking energy , and . ranges from 100 to 500 with step 100 .structural sequences of the escherichia coli chromosome.,width=302 ] cross - correlation between ( a ) deformability and stacking energy ; ( b ) position preference and deformability ( c ) propeller twist and position preference ; ( d ) propeller twist and stacking energy ; ( e ) propeller twist and deformability . ranges from 100 to 500 with step 100.,width=302 ] plot of the function with the deformability , the stacking energy , and . 
ranges from 100 to 500 with step 100 .one can note that the five curves collapse , within the numerical errors of the parameters entering the auto- and cross - corerlation functions .this is in in accord with the invariance of the product with the window .,width=302 ]a high - resolution , lag - dependent non - parametric technique based on eqs .( [ crosscovariance]-[ytil ] ) to measure cross - correlation in long range - correlated series has been developed .the technique has been implemented on ( _ i _ ) financial returns and volatilities and ( _ ii _ ) structural properties of genomic sequences .the results clearly show the existence of coupling regimes characterized by positive - negative feedback between the systems at different lags and windows .we point out that - in principle - other methods might be generalized in order to yield estimates of the cross - correlation between long - range correlated series at varying and .however , techniques operating over the series by means of a box division , such as dfa and r / s method , are _ a - priori _ excluded .the box division causes discontinuities in the sliding product of the two series at the extremes of each box , and ultimately incorrect estimates of the cross - correlation .the present method is not affected by this drawback , since eqs .( [ crosscovariance]-[ytil ] ) do not require a box division .let us start from eq .( [ dcab0 ] ) : \big[b_{h_2}^*(t+\tau)-\widetilde{b}_{h_2}^*(t+\tau)\big]\big\rangle \;\;\;,\end{aligned}\ ] ] that , after multiplying the terms in parentheses , becomes : \big\rangle \;\;\;.\end{aligned}\ ] ] in general , the moving average may be referred to any point of the moving window , a feature expressed by replacing eqs .( [ xtil],[ytil ] ) with with . in the limit of ,the sums can be replaced by integrals , so that : where , , .for the sake of simplicity , the analytical derivation will be done by using the harmonizable representation of the fractional brownian motion : where is a representation of in the domain . in the followingwe will consider the case of and . by using eq .( [ harmo ] ) , the cross - correlation of two fbms and can be written as : since is gaussian , the following property holds for any : by using eq .( [ gaussian ] ) , after some algebra eq .( [ xy ] ) writes : where is a normalization factor which depends on and . in the harmonizable representation of fbm , takes the following form : \gamma[-(h_1+h_2)]\ ] ] normalized such that when .different representations of the fbm lead to different values of the coefficient .( [ teo2 ] ) can be used to calculate each of the four terms in the right hand side of eq .( [ dca ] ) .the mean value of each term in eq .( [ dca ] ) is obtained from the general formula in eq .( [ teo2 ] ) ; thus , substituting the right hand side of eq .( [ teo2 ] ) and eq .( [ txy ] ) into each term in eq .( [ dca ] ) we obtain : \end{aligned}\ ] ] where each term in round parentheses corresponds to each of the four terms in eq .( [ dca ] ) . 
summing the terms in eq .( [ dca1 ] ) , one can notice that time cancels out , thus one finally obtains : \ ; , \nonumber \\\hspace{-25 mm } & \end{aligned}\ ] ] consistently with the large limit , we take , namely .the integral ( [ integral ] ) admits four different solutions , depending on the values taken by the parameters and .let us consider each case separately .[ [ case-1-hattautheta - and - hattautheta1 ] ] case 1 : and + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \nonumber \\\hspace{-25mm}&\end{aligned}\ ] ] [ [ case-2-hattautheta - and - hattautheta1 ] ] case 2 : and + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \nonumber \\\hspace{-25mm}&\end{aligned}\ ] ] [ [ case-3-hattautheta - and - hattautheta1 ] ] case 3 : and + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \nonumber \\\hspace{-25mm}&\end{aligned}\ ] ] it is easy to see that this case includes the eq . ( 2.5 ) treated in the paper .[ [ case-4-hattautheta - and - hattautheta1 ] ] case 4 : and + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \nonumber \\\hspace{-25mm}&\end{aligned}\ ] ]10 m. rosenblum and a. pikovsky , ( 2007 ) phys . rev .lett . * 98 * , 064101 . t. zhou , l. chen and k. aihara , ( 2005 ) phys. rev .lett . * 95 * , 178103 .s. oberholzer et al .( 2006 ) phys .lett . * 96 * , 046804. m. dhamala , g. rangarajan , ( 2008 ) m. ding , phys .lett . * 100 * , 018701 .f. verdes , ( 2005 ) phys .e * 72 * , 026222. m. palus and m. vejmelka , ( 2007 ) phys .e * 75 * , 056211. t. kreuz et al . ( 2007 ) physica d * 225 * , 29 .lu - chun du and dong - cheng mei , ( 2008 ) j. stat .tass et al .( 1998 ) phys .81 * , 3291 .p. huybers , ( 2006 ) w. curry , nature * 441 * , 7091 . y. ashkenazy , ( 2006 ) climate dynamics * 27 * , 421. f. black , ( 1976 ) j. of fin .econ . * 3 * , 167 .w. schwert , j. of finance ( 1989 ) * 44 * , 1115 .r. haugen , e. talmor , w. torous , ( 1991 ) j. of finance * 44 * , 1115. l. glosten , j. ravi and d. runkle , ( 1992 ) j. of finance * 48 * , 1779 .g. bekaert , g. wu , ( 2000 ) the review of financial studies * 13 * , 1 .s. figlewski , x. wang ( 2000 ) _ is the leverage effect a leverage effect ? _ , working paper , stern school of business , new york .j. p. bouchaud , a. matacz and m. potters , ( 2001 ) phys .* 87 * , 228701 . j. perello and j. masoliver , ( 2003 ) phys .e * 67 * , 037102. t. qiu , b. zheng , f. ren and s. trimper , ( 2006 ) phys .e * 73 * , 065103(r ) .r. donangelo , m. h. jensen , i. simonsen , k. sneppen , ( 2006 ) j. stat .i varga - haszonits and i kondor , ( 2008 ) j. stat .m. montero , ( 2007 ) j. stat .j. moukhtar , e. fontaine , c. faivre - moskalenko and a. arneodo , ( 2007 ) phys .98 * , 178101 .allen , n.d .price , a. joyce and b.o .palsson , ( 2006 ) plos computational biology , * 2 * , e2 .pedersen , l.j .jensen , s. brunk , h.h .staerfeld and d.w .ussery , ( 2000 ) j. mol .biol . * 299 * , 907 .jun , g. oh and s. kim , ( 2006 ) phys .e * 73 * , 066128 .b. podobnik , h.e .stanley , ( 2008 ) phys .lett . * 100 * , 084102. b. b. mandelbrot , j. w. van ness , ( 1968 ) siam rev . * 4 * , 422 .a. carbone , g. castelli , h. e. 
stanley , phys .e * 69 * , 026105 ( 2004 ) .a. carbone , phys .e * 76 * , 056703 ( 2007 ) .s. arianos and a. carbone , physica a * 382 * , 9 ( 2007 ) .a. carbone and h. e. stanley , physica a * 384 * , 21 ( 2007 ) .a. carbone and h. e. stanley , physica a * 340 * , 544 ( 2004 ) . the matlab and c++ codes implementing the proposed method , the dax and e - coli sequences used in this work are downloadable at : | a method for estimating the cross - correlation of long - range correlated series and , at varying lags and scales , is proposed . for fractional brownian motions with hurst exponents and , the asymptotic expression of depends only on the lag ( wide - sense stationarity ) and scales as a power of with exponent for . the method is illustrated on ( _ i _ ) financial series , to show the leverage effect ; ( _ ii _ ) genomic sequences , to estimate the correlations between structural parameters along the chromosomes . * keywords * : persistence ( experiment ) , sequence analysis ( experiment ) , scaling in socio - economic systems , stochastic processes |
the fokker - planck equation has been the focus of many decades of study due to its relevance in physics , finance , probability and statistics . provides a particularly early example examining analytically tractable solutions to the fokker - planck equation , with more contemporary examples provided by . essentially , this work focuses on time dependent densities , , of a diffusion process described by range given that it started at position , governed by , eq.([begin ] ) is defined on some interval on with endpoints where .the continuous functions and are referred to as the diffusion , drift and the sink coefficients respectively .the applications of eq.([begin ] ) are wide ranging . for , eq.([begin ] ) is commonly referred to as the fokker - planck equation and one can readily show that is conserved for .the corresponding solutions to the fokker - planck equation are the time dependent probability densities associated with the ( it ) stochastic langevin process , where is a gaussian white noise term with unit variance .the langevin equation is ubiquitous in a wide range of applications , from its beginnings in brownian motion ( see chap.1 of ) , to finance , biological processes and the synchronisation of networked oscillators . for ,the corresponding continuity equation is sinked , thus we expect the quantity being measured in eq.([begin ] ) to seep away with time .it is conceptually important to develop analytically tractable solutions to eq.([begin ] ) as they are green s functions , which are both inherently mathematically interesting , and highly applicable - for an account of their application in physics see chap.7 of .following , greens / density functions appearing in this work also figure heavily in random matrix theory .we shall indicate some of these connections throughout this work . additionally , many past results for the conserved fokker - planck equation ( we offer as a typical example ) rely simply on the steady state density to gain insights .sinked densities allow no such avenue for inquiry as their solutions decay with time .we shall highlight this behaviour in the proceeding sections .the general strategy for solving eq.([begin ] ) is as follows : we first obtain the _ weight function _ , found by solving the corresponding pearson s equation , for constants and . given the form of the diffusion , drift and sink coefficients , we categorise the spectrum of eq.(1 ) as either discrete or mixed discrete / continuous .this gives us the general form of as , where the sum will be an integral for the continuous parts of the spectrum .we then find the corresponding eigenvalues and eigenfunctions of the system through standard techniques .the final step involves solving for the normalisation constants which satisfy the initial condition .for the discrete spectrum eigenfunctions we utilise the orthogonal polynomial relation ( chap.3 of ) , and for the continuous spectrum eigenfunctions we employ a macrobert s inverse integral transform of the form , see for an instance involving the whittaker functions , and and chap.14 of for examples involving bessel / hankel functions .these inverse integral transforms usually rely on some key results attributable to macrobert which we shall exploit when deriving the continuous normalisation for the romanovski case . 
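as a small self - contained illustration of the final normalisation step for the discrete part of the spectrum , the snippet below numerically verifies the orthogonality relation of the generalised laguerre polynomials under the gamma weight x^{\alpha}e^{-x } on the positive half - line , which ( up to the scaling of its argument ) is the weight of the laguerre case treated later ; the value of \alpha and the use of scipy are illustrative choices and not part of the original derivation .
....
# numerical check of the orthogonality relation used to fix the discrete
# normalisation constants, here for the generalised laguerre polynomials with
# weight x**alpha * exp(-x) on (0, inf); alpha = 1.5 is an illustrative choice.
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, gamma

alpha = 1.5

def weighted_inner(m, n):
    integrand = lambda x: (x**alpha * np.exp(-x)
                           * eval_genlaguerre(m, alpha, x)
                           * eval_genlaguerre(n, alpha, x))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for m in range(4):
    for n in range(4):
        expected = gamma(n + alpha + 1) / math.factorial(n) if m == n else 0.0
        assert abs(weighted_inner(m, n) - expected) < 1e-6
print("orthogonality verified for m, n < 4")
....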
given the pervasive nature of the fokker - planck equation , green s functions and orthogonal polynomials ,most of the cases presented in this work have been fully solved in the literature _ without _ the sink term .what is new in this work is we present the full time dependent solutions for the sinked variants ( ) , and apply these results to obtain new solutions to the stochastic bertalanffy - richards ( b - r ) equation . for an introduction to the application of the b - r equation in population modelling and biological processes see . in the next sectionwe detail the forms of the diffusion , drift and sink coefficients that lead to the orthogonal polynomial eigenfunctions considered in this work . in sec.[1dsl ] we give necessary information about sturm - liouville ( s - l ) operators , the hilbert spaces their eigenfunctions span and how the sink terms in the s - l operators form associated variations of the orthogonal polynomials / eigenfunctions . in sec.[classification ] we detail how the form of the s - l operator determines the exact form of the spectra for the eigenfunctions , along with the corresponding solutions to eq.([begin ] ) . in sec.[applici ] we apply the results of this work to the stochastic b - r equation .finally we offer implications of these results and flag future work .applying the weight function , we decompose in eq.([begin ] ) as , hence the ensuing equation for is where is the s - l operator we require that the operator be negative , i.e. all relevant eigenfunctions of have negative eigenvalues , the eigenvalues may be discrete or continuous depending on boundary conditions , to be specified explicitly in sec.[classification ] . for this work and in eq.([s - lspectrum ] )have the specific forms : focusing on the case , theorem 4 of states that there are six classes of discrete spectrum eigenfunctions to eq.([s - lspectrum ] ) : hermite , laguerre , jacobi , bessel , fisher - snedecor ( shifted - jacobi ) , and romanovski ( pseudo - jacobi ) polynomials - the so - called _ continuous classical orthogonal polynomials_. for the eigenfunctions generally have the same polynomial form , but are multiplied by the diffusion coefficient raised to some power , , specified in sec.([1dsl ] ) of this work .we do not formally consider the hermite and jacobi case in this work ( we only refer to them ) as the hermite case offers nothing new , and the jacobi case only has finite support in . in , hypergeometric polynomial solutions to eq.([s - lspectrum ] ) for quite general forms of the diffusion and drift coefficients were studied .using the liouville transformation ( eq.([changevars ] ) of this work ) to transform eq.([s - lspectrum ] ) into a corresponding schrdinger equation , we note that the laguerre , bessel and romanovski potentials correspond to the _ coulomb _ ( chap.6 of ) , _ morse _ and _ trigonometric scarf _ potentials , albeit with an additional parameter corresponding to the sink coefficient .more recently the laguerre polynomials were applied as solutions to the non - linear madelung fluid equation and a burger s equation with a time - dependent forcing term .both and highlight the connection between the laguerre polynomials and the algebra , which is then exploited to construct the coherent laguerre function , and explore squeezed states in the calogero - sutherland model . 
also highlights connections between lie algebras and the associated laguerre functions .additionally , the non - sinked variant of eq.([solll ] ) for the laguerre case features in the financial cox - ingersoll - ross ( cir ) model , amongst other applications . with regards to the bessel polynomials , in the ladder operators of the associated bessel functions were explored .the non - sinked variant of eq.([solll ] ) for the bessel case was derived as the fokker - planck equation of an ergodic diffusion with reciprocal gamma invariant distribution in , and features in many financial models ( see sec.6.5 of ) .following chap.4 of , the fisher - snedecor polynomials are a variant of the jacobi polynomials under a simple linear transformation such that the corresponding weight function s support is extended to the positive real line . in the non - sinked variant of eq.([solll ] ) for the fisher - snedecor case was derived as the fokker - planck equation for an ergodic diffusion with fisher - snedecor invariant distribution .refer to for the corresponding jacobi expression of eq.([solll ] ) which has finite support in and an entirely discrete spectrum .the romanovski polynomials have received a fair amount of attention lately due to their application in supersymmetric quantum mechanics , quantum chromodynamics , and connections with yang - mills integrals . the non - sinked variant of eq.([solll ] ) for the romanovski case ( without a closed form expression for the continuous spectrum normalisation )was first given in as the fokker - planck equation for an ergodic diffusion with the symmetric scaled student invariant distribution .multidimensional generalisations of the classical polynomials of course exist ( see , amongst other works ) and are a current active field of study .of particular relevance to this work we see in chapters 2 and 3 of that the probability density functions of the eigenvalues of the _ chiral _ , _ laguerre _ , _ jacobi _ and _ cauchy _ ensembles of random matrices give the multidimensional generalisations of the hermite , laguerre , jacobi and romanovski weights respectively .additionally , chap.11 of the aforementioned work considers potentials which correspond to various classes of quantum calogero - sutherland models . 
in particularwe see that propositions 11.3.1 and 11.3.2 give multidimensional generalisations of the corresponding schrdinger hermite , laguerre and jacobi potentials ( amongst other more general cases ) , with the ( restricted ) green s functions of these three cases constructed in chap.11.6 .due to in eq.([s - lspectrum ] ) being a non - positive , self - adjoint s - l operator , the full set of solutions to eq.([s - lspectrum ] ) - present in eq.([solll ] ) - necessarily form a ( weighted ) square - integrable hilbert space with respect to the weighted inner product , the emergence of the continuous spectrum in eq.([solll ] ) for certain classes of eigenfunctions is due to the discrete spectrum eigenfunctions possessing so - called _ finite orthogonality _ ( see chapters 3 and 4 of ) : only a finite subset of the bessel , fisher - snedecor and romanovski polynomials obey eq.([finite1 ] ) .thus the continuous spectrum eigenfunctions are required to construct the hilbert space .following chap.22 of and chapters 7 and 8 of , if the spectrum of is mixed , the corresponding hilbert space is separable into the following orthogonal subspaces , where denotes the subspace of the hilbert space containing _ pure point _( discrete ) spectrum , and denotes the subspace of the hilbert space containing _ absolutely continuous _ spectrum . recently in the associated variants of the laguerre , bessel and romanovski polynomials , respectively , were considered ( the associated fisher - snedecor functions are a simple variation on the romanovski case ) .the new results in this paper involve applying the aforementioned results and constructing the associated sinked densities .we list the canonical forms of the diffusion , drift and sink coefficients of the four relevant cases in tab.[tab1 ] , and give the weight functions and the corresponding support of for each case in tab.[tab2 ] ..canonical forms of , and considered in this work .[ cols="^,^,^,^ " , ] [ tab7 ] hence , the expression for the density of the bessel case is , and the corresponding expression for the fisher - snedecor case is , if exhibits oscillatory behaviour at both boundaries for , and , then eq.([solll ] ) is given by , where ( following sec.5.3 of ) the eigenfunctions with the continuous eigenvalues , , , are the linearly independent solutions to eq.([s - lspectrum ] ) which are square - integrable with , and are valid in the neighbourhood of the natural boundaries ( in this case , ) for which exhibits oscillatory behaviour .similar to spectral category ii , we note that , the discrete normalisation constants in eq.([genexp3 ] ) are given by eq.([ortho1 ] ) , and in [ romnormapp ] we explicitly derive the continuous normalisations , using the aforementioned macrobert s style proof . 
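schematically , and writing the normalisations generically rather than in the exact notation of eqs .( [ genexp3],[ortho1 ] ) , the mixed - spectrum densities discussed in this section have the structure
p(x,t|x') \;=\; W(x)\Big[\sum_{n}\tfrac{y_n(x')\,y_n(x)}{h_n}\,e^{-\Lambda_n t}\;+\;\int_0^{\infty}\tfrac{y_\nu(x')\,y_\nu(x)}{\lambda(\nu)}\,e^{-\Lambda(\nu)t}\,d\nu\Big] ,
where the sum runs only over the finite set of discrete eigenvalues lying below the onset of the continuous spectrum , h_n and \lambda(\nu ) denote the discrete and continuous normalisations , and the delta - function initial condition is recovered at t=0 . this is a schematic form ; the exact eigenfunctions , spectral measure and constants are those specified in the equations and tables referred to above .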
from the forms of in tab.[tab4 ] and the corresponding support of , we see that the romanovski case fall under this particular mixed spectral category , and its highest discrete eigenvalue satisfies , concerning the discrete spectrum eigenfunctions , the associated romanovski functions have the following hypergeometric form , following chap.9.9 of , the eigenvalues and normalisation constants for this case are given by , under the restrictions , rescaling the continuous spectrum eigenfunctions , ( where is the complex conjugate of ) , the eigenvalues are parameterised by , and hypergeometric forms of are ( see and chap.15 of ) , where , the continuous orthogonality relations are given by , where and , ,\nonumber\\ \lambda_{1,2}(\mu)=\lambda_{2,1}(\mu ) = 2^{2\gamma+2 } \pi \left\ { \left| \tilde{\gamma}(\mu ) \right|^2 \cosh\left [ \frac{\pi}{2 } ( \sigma_2 + 2 \mu ) \right ] \right.\nonumber\\ \left. + \left| \tilde{\gamma}(-\mu ) \right|^2 \cosh\left [ \frac{\pi}{2 } ( \sigma_2 - 2 \mu ) \right]\right\}.\label{newresult}\end{aligned}\ ] ] thus the continuous normalisations are given by , we provide the detailed derivation of eq.([newresult ] ) in [ romnormapp ] using the aforementioned macrobert s method .hence the complete density function for the romanovski case is , we give an example of eq.([romres ] ) in fig.[fig2 ] .notice that as time increases the total area of the density ( which begins at unity ) decreases .for we notice that the density is barely distinguishable visually .we compare this to the stationary density of the non - sinked case - the left most density - where area is conserved for all . ) at various times with parameter values , and .,width=491,height=188 ]we now present an application of this work - time dependent distributions corresponding to various instances of the b - r langevin equation .the b - r equation is a deterministic system given by , where .we note that when , the and terms act as growth and decay terms respectively ; the greater the value of , the more pronounced the decay .the choice gives the famous logistic equation .to proceed we consider the two uncorrelated noise terms and with variance , where means an ensemble average over the noise .we perturb the growth and decay coefficients by and respectively to obtain the following ( it ) stochastic langevin equation , where .eq.([ito ] ) can be _ solved exactly _ through the transformation leading to the linear langevin equation , ,\end{aligned}\ ] ] and the formal solution , we refer to the above solution for as formal as it contains integrals of specific instances of the noise terms ( meaning that each solution will be different for different noise instances ) . 
in order to make general statements about the above system, we shall construct its probability density function .following chap.4.5 of , the stochastic process in eq.([ito ] ) obeys the following fokker - planck equation , as mentioned in earlier sections , since the above equation contains no sink term is conserved , and its most natural interpretation is density of probability , where and are the population of a species .our goal of this section is to analytically solve for various cases of eq.([qands ] ) using our polynomial solutions for density functions given in sec.[classification ] .applying the standard decomposition in eq.([decomp ] ) , and the nonlinear transformation in eq.([itotrans ] ) , the resulting expression for is , \xi + b + \frac{\omega \zeta \beta^2}{2 \xi } \right)\frac{\partial}{\partial \xi } \right\}g(\xi , t|\xi').\label{huen}\end{aligned}\ ] ] since the b - r equation is used extensively in population modelling , where the variable represents the number of living members of a species , only eigenfunctions in the range will be considered , hence leaving out the romanovski example .this leaves three relevant cases , laguerre , bessel and fisher - snedecor .the s - l operator on the right hand side of eq.([huen ] ) leads to the heun differential equation ( see chap.31 of ) . due to the heun equation possessing four distinct singular points , there is no equivalent hypergeometric closed form expression for the heun functions .nevertheless , we find the following mapping between the heun system and hypergeometric solutions : * leads to the laguerre case * leads to the bessel case * leads to the fisher - snedecor case we shall only detail the laguerre and fisher - snedecor cases in this work as the bessel case was first solved in and along with is one of the earlier results involving analytical expressions of densities with mixed spectra .the case in eq.([huen ] ) leads to the biconfluent heun equation , whose solution suffers the same non - closed properties as the heun equation ( see chap.31 of ) .additionally , the case leads to the _ bessel process with constant drift _ , which is a peculiar hypergeometric case ( beyond the scope of this work ) where the spectrum is mixed but the discrete part contains an _ infinite _ number of eigenvalues .setting , the weight function for this case is , applying the following change in variables , eq.([huen ] ) becomes , where eq.([brlag ] ) is the standard laguerre fokker - planck equation .hence , due to the initial condition , the time dependent solution for the density in this section is , to the best of our knowledge , eq.([brlag2 ] ) is a new result of a specific example of a b - r fokker - planck equation . making the connection with the langevin equation this densityis generated from , since for the deterministic system is divergent , but the density is normalisable , this particular case is an example of multiplicative noise stabilising the system . setting the weight function for this case is , applying the following change in variables , eq.([huen ] ) becomes , where eq.([brfish ] ) is the standard fisher - snedecor fokker - planck equation .hence the time dependent solution for the density is , as with the laguerre case , to the best of our knowledge , eq.([brfs ] ) is a new result of a specific instance of a b - r fokker - planck equation . 
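to illustrate how these closed - form densities can be checked against direct simulation , the sketch below integrates a stochastic bertalanffy - richards equation with an euler - maruyama scheme and builds a histogram of the ensemble at a fixed time . the drift a x - b x^{1+\omega } ( with \omega = 1 recovering the logistic case ) and the two multiplicative noise terms acting on the growth and decay coefficients are our reading of eq .( [ ito ] ) , and all parameter values are illustrative only .
....
# euler-maruyama integration of a stochastic bertalanffy-richards equation with
# drift a*x - b*x**(1+omega) and multiplicative noise on the growth and decay
# terms; all parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
a, b, omega = 1.0, 0.5, 1.0            # omega = 1 recovers the logistic drift
s1, s2 = 0.3, 0.1                      # noise strengths on growth and decay
dt, n_steps, n_paths = 1e-3, 5_000, 5_000

x = np.full(n_paths, 0.5)              # common initial condition x(0) = 0.5
sqdt = np.sqrt(dt)
for _ in range(n_steps):
    dw1 = sqdt * rng.standard_normal(n_paths)
    dw2 = sqdt * rng.standard_normal(n_paths)
    drift = a * x - b * x ** (1.0 + omega)
    x = x + drift * dt + s1 * x * dw1 - s2 * x ** (1.0 + omega) * dw2
    x = np.clip(x, 1e-12, None)        # keep the population on the positive half-line

# histogram estimate of the density at time t = n_steps * dt
hist, edges = np.histogram(x, bins=100, density=True)
print(x.mean(), edges[np.argmax(hist)])
....
the histogram obtained in this way can be compared , bin by bin , with the expansions in eqs .( [ brlag2 ] ) and ( [ brfs ] ) once the parameter correspondence has been fixed .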
in fig.[fig5 ] , we give a specific example of eq.([brfs ] ) at various times .it is elementary to show that the weight function , which is proportional to the steady state density , peaks at the value . making the connection with the langevin equation this densityis generated from , we see that this example models linear deterministic growth , with linear and quadratic multiplicative stochastic terms .again , like the laguerre case , since the deterministic system is divergent , but the density is normalisable , this system provides another example of multiplicative noise stabilising a system . ) at various times with parameter values , , , and .,width=491,height=188 ]in this work we have given closed form expressions of sinked densities associated with ( at most ) quadratic diffusion and linear drift .the eigenfunctions relating to the discrete part of the spectrum are associated variants of classical orthogonal polynomials . we have given a macrobert s style proof to obtain a new closed form expression for the continuous spectrum normalisation associated with the romanovski density .this technique is sufficiently generalisable , given one knows enough about the analytic continuation properties of the hypergeometric function under consideration .we then applied these results to obtain the time dependent fokker - planck solutions associated with various cases of the b - r langevin equation .given the pervasive nature of langevin equations ( and the densities and green s functions associated with them ) in the physical sciences , we anticipate that these results are a stepping stone to a richer understanding of a variety of processes , both conserved and non - conserved .specifically , we hope that processes involving mixed spectra eigenfunctions become increasingly commonplace , as more analytic examples of solution appear which increase our mathematical understanding and our ability to apply such results in novel ways . paraphrasing the relevant introduction of ; in a world of ever increasing computing power , we must never overlook the benefits provided from analytic solutions in terms of special functions .they provide insight for understanding non - trivial relationships among physical variables with unsurpassed economy of effort , and are an invaluable tool for the validation of more complicated models which require computational treatment .the author graciously acknowledges alexander kalloniatis for his fruitful discussions and constant encouragement .we begin by conveniently labelling the double integral of eq.([newresult ] ) by , focusing on the case , we apply eq.([chi12 ] ) to split up for the region and deform the integral onto the contours and as shown in fig.[figcont ] to obtain , contours and used for the macrobert s proof.,width=377,height=188 ] following chap.14 of and chap.7 of , we may reverse the order of integration as each term in eq.([int11 ] ) falls off like , , on the respective contours and , as .hence becomes , to proceed we note that ( chap.15.5 of ) , and their complex conjugates , obey the same governing s - l equation as and , as they are the corresponding linearly independent solutions in the neighbourhood of the singular point . thus decomposingeither of the aforementioned eigenfunctions as , the resulting governing equation for the s is , we now recast eq.([niceii ] ) in terms of the s . through considering two copies of eq.([definej ] ) , one for and one for , we multiply the equation for by , and vice versa . 
subtracting the two expressionsawards us with , integrating eq.([unamed ] ) over all of , and applying integration by parts we obtain , ^{x \rightarrow \infty}_{x \rightarrow- \infty}. \end{aligned}\ ] ] using eq.([chi12 ] ) the asymptotic forms of the desired limits are given by , where , using the above asymptotic forms , eq.([niceii ] ) becomes , where and , in the following we consider the _ dirichlet integral _ expressions from chap.1 of and chap.3 of : where and the analytic function obeys _dirichlet s conditions _ on the interval on the interval entail : ( i ) contains only a finite number of discontinuities on the interval , ( ii ) contains a finite number of turning points on the interval . ] .thus we deform the contours and back to the real line segment , and let and . assuming that the function obeys dirichlet s conditions, we immediately obtain the following expression for , which is the required form given in eq.([newresult ] ) .the expression for is simply the complex conjugate of the case just considered .the remaining cases can be verified in an equivalent method considered in this appendix .10 m abramowitz and i stegun ( editors ) , _ handbook of mathematical functions with formulas , graphs , and mathematical tables _ , in : applied mathematical series , vol .government printing office , washington d.c ., ( 1972 ) f avram , n leonenko and n uvak , _ spectral representation of transition density of fisher - snedecor diffusion _ , stochastics - an international journal of probability and stochastic processes , 85(2 ) , ( 2013 ) , 346 - 369 l von - bertalanffy , _ problems of organic growth _ , nature , 163 , ( 1949 ) , 156 - 158 a borodin and p salminen , _ handbook of brownian motion _ , birkhauser , boston , ( 1996 ) m brics , j kaupus and r mahnke , _ how to solve the fokker - planck equation treating mixed eigenvalue spectrum ? _ , condensed matter physics , 16(1 ) , 13002 , ( 2013 ) , 1 - 13 p broadbridge , _ the forced burgers equation , plant roots and schrdinger s eigenfunctions _ , journal of engineering mathematics , 36 , ( 1999 ) , 25 - 39 s bykaik and o pashaev , _ exactly solvable madelung fluid and complex burgers equations : a quantum sturm - liouville connection _, journal of mathematical chemistry , 50 , ( 2012 ) , 2716 - 2745 e celeghini and m del almo , _ coherent orthogonal polynomials _ , annals of physics , 335 , ( 2013 ) , 78 - 85 m compean and m kirchbach , _ trigonometric quark confinement potential of qcd traits _ , european physical journal a , 33 , ( 2007 ) , 1 - 4 j cox , j ingersoll and s ross , _ a theory of the term structure of interest rates _ , econometrica , 53 , ( 1985 ) , 385 - 407 b davies , _ integral transforms and their applications _ , ( 3rd ed . 
) , in : texts in applied mathematics , vol .41 , springer - verlag , new york , ( 2002 ) d davydov and v linetsky , _ pricing options on scalar diffusions : an eigenfunction expansion approach _ , operations research , 51 , ( 2003 ) , 185 - 209 n dunford and j schwartz , _ linear operators .part ii : spectral theory , self - adjoint operators in hilbert space _ , wiley , new jersy , ( 1988 ) h fakhri and a chenaghlou , _ ladder operators for the associated laguerre functions _, journal of physics a , 37 , ( 2004 ) , 7499 - 7507 h fakhri and a chenaghlou , _ ladder operators and recursion relations for the associated bessel polynomials _ , physics letters a , 358 , ( 2006 ) , 345 - 353 h fakhri and b mojaveri , _ the remarkable properties of the associated romanovski functions _ , journal of physics a , 44 , ( 2011 ) 195205 p forrester , _ log gases and random matrices _ , in : the london mathematical society monograph series , vol .34 , princeton university press , princeton nj , ( 2010 ) h fu and r sasaki , _ exponential and laguerre squeezed states for algebra and the calogero - sutherland model _ , physical review a , a53 , ( 1996 ) , 3836 - 3844 o garca , _ a stochastic differential equation model for the height growth of forest stands _ , biometrics , 39(4 ) , ( 1983 ) , 1059 - 1072 u graf , _ introduction to hyperfunctions and their integral transforms : an applied and computational approach _ , springer science and business media , new york , ( 2010 ) e hille and r phillips , _ functional analysis and semigroups _ , in : american mathematical society colloquium publications , vol .31 , american mathematical society , rhode island , ( 1957 ) m hortasu , _ heun functions and their uses in physics _ , proceedings of the 13th regional conference on mathematical physics , antalya , turkey , october 27 - 31 , 2010 , edited by u camci and i semiz , world scientific , ( 2013 ) , 23 - 39 a inayat - hussain , _ a new generalization of the hankel integral transform _ , journal of mathematical physics , 30(1 ) , ( 1989 ) , 41 - 44 r koekoek , p leskey and r swarttoue , _ hypergeometric orthogonal polynomials and their q - analogues _ , springer - verlag , berlin , ( 2010 ) t koornwinder , _ two variable analogues of the classical orthogonal polynomials _, in : theory and application of special functions , edited by r askey , academic press , ( 1975 ) , 435495 h langer and w schenki , _ generalised second - order differential equations , corresponding gap diffusions and susuperharmonic transformations _ , mathematische nachrichten , 148 , ( 1990 ) , 7 - 45 n leonenko and n uvak , _ statistical inference for reciporical gamma diffusion process _ , journal of statistical planning and inference , 140 , ( 2010 ) , 30 - 51 n leonenko and n uvak , _ statistical inference for student diffusion process _ , stochastic analysis and applications , 28(6 ) , ( 2010 ) , 972 - 1002 v linetsky , _ the spectral decomposition of the option value _ , international journal of theoretical and applied finance , 7(3 ) , ( 2004 ) , 337 - 384 v linetsky , _ the spectral representation of bessel processes with constant drift : applications in queueing and finance _ , journal of applied probability , 41 , ( 2004 ) , 327 - 344 t macrobert , _ spherical harmonics : an elementary treatise on harmonic functions , with applications _ , ( 2nd ed . 
) , dover publications , new york , ( 1947 ) x mao , g marion and e renshaw , _ environmental brownian noise suppresses explosions in population dynamics _ , stochastic processes and their applications , 97(1 ) , ( 2002 ) , 95 - 110 h mckean , _ elementary solutions for certain parabolic partial differential equations _, transactions of the american mathematical society , 82 , ( 1956 ) , 519 - 548 e merzbacher , _ quantum mechanics _ , ( 3rd ed . ) , wiley , new jersey , ( 1997 ) p morse , _ diatomic molecules according to the wave mechanics .vibrational levels _ , physical review , 34 , ( 1929 ) , 57 - 64 p morse and h feshach , _ methods of theoretical physics _ , in : international series in pure and applied physics , mcgraw hill , new york , ( 1953 ) f olver , d lozier , r boisvert , and c clark ( editors ) , _ nist handbook of mathematical functions _, cambridge university press , new york , ( 2010 ) c quesne , _ extending romanovski polynomials in quantum mechanics _ , journal of mathematical physics , 54 , ( 2013 ) , 102103 a raposo , h weber , d alvarez - castillo and m kirchbach , _ romanovski polynomials in selected physics problems _ , central european journal of physics , 5(3 ) , ( 2007 ) , 253 - 284 m reed and b simon , _ functional analysis _ , in : methods of modern mathematical physics , vol .1 , academic press , san diego , ( 1981 ) f richards , _ a flexible growth function for empirical use _ , journal of experimental botany , 10 , ( 1959 ) , 290 - 300 h risken , _ the fokker - planck equation _ , ( 2nd ed . ) , springer , heidelberg , ( 1989 ) n saad , r hall and h ciftci , _ criterion for polynomial solutions to a class of linear differential equations of second order _ , journal of physics a , 39 , ( 2006 ) , 13445 - 13454 f scarf , _ new soluble energy band problem _ , physical review , 112 , ( 1958 ) , 1137 - 1141 a schenzle and h brand , _ multiplicative stochastic processes in statistical physics _ , physics letters , 69a(5 ) , ( 1979 ) , 313 - 315 z schuss , _ theory and applications of stochastic processes _ , in : series in applied mathematical sciences , vol . 170 , springer , new york , ( 2010 ) i stewart and d tall , _ complex analysis _ , cambridge university press , cambridge , ( 1999 )s strogatz , _ from kuramoto to crawford : exploring the onset of synchronization in populations of coupled oscillators _ , physica d , 143 , ( 2000 ) , 1 - 20 m tierz , _ matrix model and supersymmetric yang - mills integrals _ , physical review d , 76(10 ) , ( 2007 ) , 107701 j weidmann , _ spectral theory of ordinary linear operators _ , in : lecture notes in mathematics , vol .1258 , springer , berlin , ( 1987 ) j wimp , _ a class of integral transforms _edinburgh math .14 , ( 1964 ) , 33 - 40 e wong , _ the construction of a class of stationary markoff processes _ , in : stochastic processes in mathematical physics and engineering , edited by r bellman , ( proceedings of symposia of applied mathematics , vol .26 ) , american mathematical society , providence , r.i . , ( 1964 ) , 264 - 276 m yadav , _ solutions of a system of forced burgers equation in terms of generalized laguerre polynomials _ , acta mathematica scientia , 34b(5 ) , ( 2014 ) , 1461 - 1472 m zuparic and a kalloniatis , _ stochastic ( in)stability of synchronisation of oscillators on networks _ , physica d , 255 , ( 2013 ) , 35 - 51 | we analytically solve for the time dependent solutions of various density evolution models . 
with specific forms of the diffusion , drift and sink coefficients , the eigenfunctions can be expressed in terms of hypergeometric functions . we obtain the relevant discrete and continuous spectra for the eigenfunctions . with non - zero sink terms the discrete spectra eigenfunctions are generalisations of well known orthogonal polynomials : the so - called _ associated - laguerre , bessel , fisher - snedecor _ and _ romanovski functions_. we use a macrobert s proof to obtain closed form expressions for the continuous normalisation of the romanovski density function . finally , we apply our results to obtain the analytical solutions associated with the bertalanffy - richards langevin equation . _ keywords _ : fokker - planck equation , density function , hypergeometric function , classical orthogonal polynomial , bertalanffy - richards langevin equation |
in the era of big data inferring features of complex systems , characterized by many degrees of freedom , is of crucial importance .the high - dimensional setting , where the number of features to extract is not small compared to the number of available data , makes this task statistically or computationally hard .one case of practical interest is the inference of the largest component ( eigenvector ) of correlation matrices .consider independently drawn observations of interacting gaussian variables , _i.e. _ such that the population covariance matrix is not the identity matrix .if is much larger than the empirical covariance matrix computed from the observations converges to , and recovering the top eigenvector is easy .the case where both are large ( sent to infinity at fixed ratio ) has received a lot of attention , both theoretically and practically . from a theoretical point of view , it has been shown , in the case of a covariance matrix with one ( or few compared to ) eigenvalues larger than unity , say , , that recovery is possible if is smaller than the critical value . for larger sampling noise ( ) , the top eigenvector of is essentially orthogonal to the top component of , and is therefore not informative .it is reasonable to expect that the situation will improve in the presence of additional , prior information about the structure of the top component to be recovered , and that recovery will be possible even when is ( not too much ) larger than . that this is indeed the case has been rigorously shown whenall entries are nonnegative , and is supported by strong numerical evidence when the top component is known to have large entries ( finite as ) . in the present work , using techniques from statistical physics we propose explicit conjectures about the critical noise level and its dependence on the signal eigenvalue ( ) and on prior knowledge .the framework is general and can be applied to any kind of entry - wise prior probability , _i.e. _ factorized over the entries of the top component .we show how rigorous results in the nonnegative case of are recovered , and present new results for the large entry prior . 
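the retrieval threshold described above can be checked directly by simulation. the sketch below draws data from the spiked covariance model and compares the empirical squared overlap between the sample and population top eigenvectors with the standard random-matrix (bbp-type) prediction; note that the paper's sampling-noise parameter may be defined differently from the aspect ratio r = N/M used here.

```python
import numpy as np

rng = np.random.default_rng(1)

def squared_overlap(N, M, theta):
    """one realisation of the spiked model: population covariance I + theta * e e^T."""
    e = np.zeros(N); e[0] = 1.0
    X = rng.normal(size=(M, N))
    X[:, 0] *= np.sqrt(1.0 + theta)              # plant the spike along e
    C = X.T @ X / M                              # sample covariance matrix
    _, vecs = np.linalg.eigh(C)
    return float(vecs[:, -1] @ e) ** 2

def bbp_prediction(r, theta):
    """asymptotic squared overlap; zero below the transition at theta = sqrt(r)."""
    return (1.0 - r / theta**2) / (1.0 + r / theta) if theta > np.sqrt(r) else 0.0

theta, N = 1.0, 1000
for r in (0.25, 0.8, 1.5, 4.0):
    print(r, squared_overlap(N, int(N / r), theta), bbp_prediction(r, theta))
```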
the motivation to consider the latter prior stems from computational biology , more precisely , from the study of coevolution between amino acids in protein families .sequences of proteins diverged from a common ancestor widely differ across many organisms , while the protein structure and function are often very well conserved .the constraints induced by structural and functional conservation manifest themselves as correlations between amino acids ( the variables , where is the protein length ) across the different organisms ( the observations ) .recently , it was shown that the eigenmodes of the amino - acid covariance matrix corresponding to _ low _ eigenvalues were informative about three - dimensional contacts on the protein structure .these modes show large entries on the protein sites and amino - acid types in contact ; as the other entries contain diffuse , non - structural signal , the components can not be thought of as being sparse .the presence of large entries in structurally - informative components was empirically assessed through the so - called inverse participation ratio , ( for normalized ) , a quantity that remains finite for components with ( few ) large entries and otherwise vanishes for .we hereafter use this quantity as a prior over the components to facilitate their recovery .we consider the popular spiked covariance model , in which the entries of -dimensional vectors , , are gaussian random variables with zero means and population covariance matrix .all eigenvalues of but one are equal to unity , while the remaining eigenvalue is , with associated eigenvector . as usualwe choose but our results could be transposed to the case with minor modifications .we draw independent samples , and define the sample covariance matrix , with entries .the top eigenvector of is denoted by . in the double limit at fixed ratio , there exists a phase transition at a critical value of the sampling noise separating the _ high - noise regime _ , ,in which and have asymptotically zero squared dot product , and the _ low - noise regime _, , where the squared dot product between and is strictly positive with high probability .the sample covariance matrix obeys a wishart distribution , determined by , and . using bayes formula we may write the likelihood ( density of probability ) for the normalized top component of as follows up to a normalization coefficient .parameter above is equal to .however , it is convenient to consider as a free parameter .the limit corresponds to maximum likelihood inference , while working at low values of may be useful to ensure rapid mixing of monte carlo markov chain sampling of distribution , especially in the presence of prior information , see below .we now assume that prior information over the population eigenvector is available under the form of a potential acting on the entries of .the posterior distribution over the top component now reads up to a normalization coefficient .three choices for the potential are shown in fig .[ fig_pot ] .motivated by previous works on protein sequence analysis , see introduction , we will hereafter mostly concentrate on , with ( fig .[ fig_pot](a ) ) .this potential favors the presence of large entries in the top component , but does not rule out the existence of many entries with small magnitude ( typically , of the order of ) .it is therefore different from sparsity - enforcing potentials , such as in fig .[ fig_pot](b ) .exact results for the location of the transition in the nonnegative case ( fig . 
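since the posterior of eq.([rho2]) is explicit, it can be sampled with a simple metropolis walk on the unit sphere, which also makes the remark above about small beta and mixing easy to test. the sketch below is a rough illustration only: the quartic default for the entry-wise potential and its overall N-scaling are guesses standing in for the large-entry potential of fig.[fig_pot](a), not the paper's actual parametrisation.

```python
import numpy as np

def sample_top_component(C, beta, M, V=lambda u: -0.05 * u**4,
                         n_steps=100_000, step=0.02, seed=0):
    """metropolis sampling of rho(x) ~ exp( (beta*M/2) x^T C x - N * sum_i V(x_i) ) on |x| = 1.
    the quartic default for V and its N-scaling are illustrative guesses."""
    rng = np.random.default_rng(seed)
    N = C.shape[0]
    x = rng.normal(size=N)
    x /= np.linalg.norm(x)
    log_rho = lambda v: 0.5 * beta * M * (v @ C @ v) - N * np.sum(V(v))
    lp = log_rho(x)
    keep = []
    for it in range(n_steps):
        prop = x + step * rng.normal(size=N)
        prop /= np.linalg.norm(prop)             # stay on the unit sphere
        lp_prop = log_rho(prop)
        if np.log(rng.random()) < lp_prop - lp:  # metropolis accept/reject
            x, lp = prop, lp_prop
        if it % 200 == 0:
            keep.append(x.copy())
    return np.array(keep)
```

running this on a sample covariance built as in the previous sketch, at a few values of beta, shows the trade-off directly: lowering beta flattens the landscape and speeds up equilibration at the cost of a broader posterior.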
[ fig_pot](c ) ) were recently derived .our formalism finds back those results , and can be applied to any potential as shown below .corresponding to three prior information about the entries of the principal component : ( a ) large entries are present , ( b ) penalty favoring zero entries , ( c ) all entries are nonnegative ., width=355 ]we assume that the logarithm , divided by , of the normalization coefficient of in eq .( [ rho2 ] ) , is concentrated around its expectation value ] , ] , and the conjugated lagrange multipliers . here, denotes the expectation over the distribution over in eq .( [ rho2 ] ) .the term in eq .( [ logz ] ) depends on the prior potential and on the structure of the normalized population top component , more precisely , on how its entries scale with .we now consider two cases of interest .we assume first that the components of scales as , with finite , and denote by the distribution of the .we focus on the nonnegative entry prior , for which for and 0 for .we obtain \ , \end{split}\ ] ] where erfc is the complementary error function , and ] .these equations correspond to eqs .( 7 ) , ( 8) , ( 9 ) , ( 21 ) and ( 23 ) of .we now assume that has only ` large ' entries , , different from zero in the limit ( with finite ) , and that the other entries decay fast enough with , _e.g. _ are of the order of . then while the above formula is valid for generic we restrict ourselves , in the remaining part of this article , to the potential .furthermore , we assume that the finite entries of are all equal to ; the calculation can be easily extended to , or to the case of nonhomogeneous entries .in addition we assume that ( a1 ) all take identical values in eq .( [ logz ] ) ( homogeneous regime ) ; ( a2 ) has no large ( finite when ) entry on sites such that .the validity of these assumptions will be discussed in the next section .after some elementary algebra the extremization conditions reduce to the following set of coupled equations over and the first entries of : where note that the variables obey the same cubic equation and , hence , can take at most three different values as varies .the equations above admit the solution , corresponding to the ` unaligned ' phase .in addition , in some well - defined regions of the four - dimensional control parameter space solutions with may be found .we give in section iv below results for three cases : ( a ) no prior ( ) ; ( b ) weak exploitation of many data with prior information ( small for finite ) ; ( c ) maximum _ a posteriori _ inference ( finite , and infinite and at fixed ratio ) .we start with the case .extremization conditions over the parameters in eq .( [ logz ] ) give the value of the squared overlap between and for any .we find : for whatever the value of , and , for , those expressions perfectly agree , in the limit , with known results for the spiked covariance model .in addition our formalism gives access to the value of for finite .note that , for , inference of the direction of is possible , _ i.e. _ , even for ( but larger than ) . at the critical noise , .the above results imply that , in the absence of prior information ( ) , inference of the top component direction is possible at low provided the sampling noise is smaller than . 
in other words ,when both and tend to zero with a fixed ratio , the aligned and not - aligned phases correspond , respectively , to , and .we expect the critical ratio to be a drecreasing function of the prior strength , as stronger prior should facilitate the recovery of the large - entry top component .resolution of eq .( [ ppo ] ) gives the phase diagram shown in fig .[ fig_pd ] .several regimes can be identified , depending on : * for , the critical ratio remains unchanged , see fig .[ fig_pd ] , and equal to .as crosses this critical value the overlap continuously increases from 0 to a positive value , see fig .[ fig_p2 ] .* for , the aligned phase ( ) exist for , see dashed line in fig .[ fig_pd ] . as crosses the squared overlap discontinuously jumps from 0 to , see fig .[ fig_p2 ] . * for ,the aligned phase ( ) is thermodynamically stable , meaning that the value of in eq .( [ logz ] ) attached to this phase is larger than the one of the nonligned phase ( ) , , for , see full line in fig .[ fig_pd ] .the value of and of the overlap at the phase transition are the roots of the two coupled equations where the first equation actually implements the condition .the phase transition is illustrated in the middle panel of fig .[ fig_p2 ] for a specific value of .the value of is defined from * for the prior strength is so strong that the inferred component has few large entries whatever the value of . for it is aligned ( ) with ( fig .[ fig_p2 ] ) , while for , it is not , see fig .[ fig_pd ] .plane ; axis are rescaled by and factors , where is the number of nonzero components in .dots locate the points , and , see text .transitions between phases are shown by full lines .the dashed lines show the limit of existence of the aligned phase , while the dot - dashed lines separate the regions with ( above ) and without ( below line ) homogeneity breaking among the large entries of .,title="fig:",width=316 ] -.3 cm assumption ( a1 ) , see section [ poiuy ] , is trivially valid for , but is not necessarily correct for and strong prior strength , for which we expect that will condensate and one component , say , , will be larger than the other components , say , , with ( nonhomogeneous regime ) . the transition line between these two regimesis identified upon imposing that the cubic equation over in eq .( [ logz ] ) admits a two - fold degenerate solution , that is , .the transition line is plotted in the phase diagram of fig .[ fig_pd ] ( dot - dashed line ) , and ends up in the point of coordinate lying on the existence line ( dashed line ) .$ ] , vs. control parameter for three prior strengths and ., title="fig : " ] -.5 cm we now focus on map inference at finite sampling noise , whereas and are both sent to infinity , with a fixed ratio .parameter , hereafter referred to as the slope , controls the relative magnitude of the -dependent and prior terms in , see eq .( [ rho2 ] ) , while the multiplicative factor is introduced in the definition of to compensate for the explicit dependence upon in . for simplicity we present results for only , the extension to larger being rather straightforward .equations ( [ ppo ] ) admit the solution , and another solution , with as , with ratios , having finite limits .after some simple algebra we obtain the following expresson for the slope as a function of the squared overlap for the latter solution : we show in fig .[ fig_s1 ] the representative curve of vs. the slope , for below and above the critical noise level , , in the absence of prior . 
for squared overlap is an increasing function of , starting from a non zero value for : the population eigenvector direction can be estimated without prior at low sampling noise , but the overlap is increased in the presence of prior . for a discontinuous jump is observed from to at a critical value of the slope , while the overlap further increases as exceeds .remarkably , even for large sampling noise values , the presence of a sufficiently strong prior allows us to infer .the value of as a function of the noise level is shown in fig .[ fig_s2 ] ; for large noise levels we have ., vs. slope for sampling noises ( top ) and 0.4 ( bottom ) .note the presence of the discontinuous transition at in the latter case .the randomly condensed solution appears for . here ,.,title="fig:",width=268 ] -.3 cm this aligned phase competes with a nonaligned , but condensed phase , in which assumption ( a2 ) , see section [ poiuy ] , is violated .in other words , for and sufficiently large , has few large entries ( in the limit ) , but not along the directions corresponding to the large components of ; hence , . to describe this new phase we set to 0 in the expression for in eq .( [ logz3 ] ) .the corresponding optimization equations can be solved in the limit , with the result that the nonaligned , condensed regime exists for larger than \ .\ ] ] the value of as a function of the noise level is shown in fig .[ fig_s2 ] . for small ( slightly above )we observe that is larger than , as intuitively expected : it is favorable to condense along the direction of rather than any other direction .it can be checked that the value of in eq .( [ logz ] ) is higher for this phase than for the aligned condensed phase .hence , as soon as exceeds the overlap vanishes .( full line ) and ( dashed line ) as functions of the sampling noise .insert : vs. .the difference vanishes in . here , , _i.e. _ .,title="fig:",width=355 ] -2.6 cm for large noise levels , however , we have , which is asymptotically smaller than , see insert of fig . [ fig_s2 ] .indeed , the threshold slopes and cross at a well - defined value of the noise , , which depends on the top eigenvalue .we show in fig .[ fig_s3 ] the behaviour of vs. , and compare it to the critical noise in the absence of prior .we observe the presence of a region in the plane , above the critical line , where the direction of can be inferred thanks to the large - entry prior .our replica symmetric theory predicts that the benefit of the prior does not extend to very large values of the signal eigenvalue and of the noise , see fig .[ fig_s3 ] .plane . the dashed line divides the plane into the weak noise region ( below line ) , where inference is possible withour prior , and the strong noise region ( above line ) .the full line shows the value of , at which , as a function of . in betweenthe dashed and full lines , inference of the top component is possible in the presence of a prior with appropriate strength .the two lines merge for , . 
here ,.,title="fig:",width=326 ] -1.9 cmthe non rigorous calculations done in this paper suggest that inference of the top component of the population covariance matrix is possible in the presence of prior information , even above the critical noise level of the spiked covariance model , in agreement with rigorous results for the nonnegative case and numerical investigations for the large - entry case .many interesting questions have not been investigated in the present paper : how hard is the recovery problem from a computational point of view ?are there ` dynamical ' phase transitions separating subregions in the aligned phase , where the top component can be recovered in polynomial time or not ?if so how do these line compare to the ` static ' critical lines derived in this paper ?in addition it would be interesting to investigate the validity of the replica - symmetric hypothesis used to derive the results above . though replica symmetry is generally expected to be correct for convex optimization problems what happens in nonconvex situationsis not clear .for instance , inference of the top component with the nonnegative prior gives rise to a nonconvex optimization problem , but all rigorous results are exactly found back within our replica symmetric approach , see section iii.b . from this perspective it would be useful to investigate whether the present results are robust against replica symmetry breaking .i am grateful to m.r .mckay for the invitation to the asilomar 2016 conference , and to s. cocco and d. villamaina for useful discussions .this work has benefited from the financial support of the cnrs inphyniti inferneuro project .johnstone , _ high dimensional statistical inference and random matrices _ , proceedings of the international congress of mathematicians , madrid , spain , 307 - 333 , 2006 .d.c . hoyle and m. rattray , _principal - component - analysis eigenvalue spectra from data with symmetry breaking structure _ , phys .e 69 , 026124 , 2004 . j. baik , g. ben arous g. and s. pch , _ phase transition of the largest eigenvalue for nonnull complex sample covariance matrices _ , ann .33 , 1643 - 1697 , 2005 .a. montanari and e. richard , _ non - negative principal component analysis : message passing algorithms and sharp asymptotics _, ieee transactions on information theory 62 , 1458 - 1484 , 2016 .d. villamaina and r. monasson , _ estimating the principal components of correlation matrices from all their empirical eigenvectors _ , europhys .lett . 112 , 50001 , 2015 .s. cocco , r. monasson and m. weigt , _ from principal component to direct coupling analysis of coevolution in proteins : low eigenvalue modes are needed for structure prediction _ ,plos comp .9 , e:1003176 , 2013 .h. jacquin , a. gilson , e. shakhnovich , s. cocco , r. monasson , _ benchmarking inverse statistical approaches for protein structure and design with exactly solvable models _ , plos comp .12 : e1004889 , 2016 .a. engel and c.v .den broeck , _ statistical mechanics of learning _ , cambridge university press , cambridge , england , 2001. m. advani and s. ganguli , _ statistical mechanics of optimal convex inference in high dimensions _ , phys .x 6 , 031034 , 2016 . y. nakanishi - ohno , t. obuchi , m. okada , y. kabashima , _ sparse approximation based on a random overcomplete basis _, j. stat .p063302 , 1 - 30 , 2016 f. krzakala , m. mzard , f. sausset , y. sun , l. 
zdeborova , _ probabilistic reconstruction in compressed sensing : algorithms , phase diagrams , and threshold achieving matrices _ , j. stat .p08009 , 1 - 57 , 2012 . | the problem of infering the top component of a noisy sample covariance matrix with prior information about the distribution of its entries is considered , in the framework of the spiked covariance model . using the replica method of statistical physics the computation of the overlap between the top components of the sample and population covariance matrices is formulated as an explicit optimization problem for any kind of entry - wise prior information . the approach is illustrated on the case of top components including large entries , and the corresponding phase diagram is shown . the calculation predicts that the maximal sampling noise level at which the recovery of the top population component remains possible is higher than its counterpart in the spiked covariance model with no prior information . random matrix theory , spiked covariance model , prior information , replica method , phase transitions |
the first direct detection of gravitational waves is a key science mission around the world , with many different approaches being advocated .these include ground and space - based laser interferometers , and pulsar timing arrays ( collections of galactic millisecond pulsars ( msps ) ; ) , and it is with the latter of these that we will be concerned with here .it is the exceptional stability of msps , with decade long observations providing timing measurements that show fractional instabilities similar to atomic clocks ( e.g. ) , that makes them key to the pursuit of a wide range of scientific endeavors .for example , observations of the pulsar psr b1913 + 16 provided the first indirect detection of gravitational waves , whilst the double pulsar system psr j0737 - 3039a / b provides precise measurements of several ` post keplerian ' parameters allowing for additional stringent tests of general relativity .current theoretical limits ( ) place the amplitude of a stochastic gravitational wave background ( gwb ) generated by coalescing black holes ( e.g. ) at only a factor 3 - 10 lower than current observational limits ( e.g. ) . in order to make the first tentative detections of these signals as much ancillary data will be needed as possible in order to constrain the other components present in the data .dispersion measure ( dm ) variations are thought to be one of the largest components of noise in pulsar timing data ( e.g. ) , and many different methods exist to describe it ( e.g. , henceforth l13 ; lee et al . submitted 2013 ; ) . in the near future, observations from lofar will allow precise measurements of dm in the direction of pulsars to be used in pta analysis .including this information in subsequent analysis in order to constrain the dm signal and separate it from the gravitational waves will thus be critical . in this articlewe describe how to include such dm measurements as prior information in pulsar timing analysis in order to constrain the signal realisation for the dm by modifying the existing bayesian techniques presented in l13 . in section [ section : bayes ]we give a brief overview of our bayesian methodology , and in section [ section : models ] derive the likelihood that we use to include the additional dm measurements when analysing the simulated data in section [ section : sims ] .finally we will provide some concluding remarks in section [ section : conclusion ] .this research is the result of the common effort to directly detect gravitational waves using pulsar timing , known as the european pulsar timing array ( epta ) .our method for performing pulsar timing analysis is built upon the principles of bayesian inference , which provides a consistent approach to the estimation of a set of parameters in a model or hypothesis given the data , .bayes theorem states that : where is the posterior probability distribution of the parameters , is the likelihood , is the prior probability distribution , and is the bayesian evidence . in parameter estimation , the normalizing evidence factoris usually ignored , since it is independent of the parameters .inferences are therefore obtained by taking samples from the ( unnormalised ) posterior using , for example , standard markov chain monte carlo ( mcmc ) sampling methods .an alternative to mcmc is the nested sampling approach , a monte - carlo method targeted at the efficient calculation of the evidence , that also produces posterior inferences as a by - product . 
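for readers unfamiliar with the evidence-oriented logic of nested sampling, the toy implementation below may help; it uses naive rejection from the prior, whereas multinest replaces that step with clever constrained sampling, so this is purely illustrative. the toy likelihood and prior bounds are arbitrary choices.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(3)

def log_like(theta):
    # toy likelihood: unit 2d gaussian centred at the origin
    return -0.5 * np.sum(theta**2, axis=-1) - np.log(2.0 * np.pi)

def nested_sampling(n_live=400, n_iter=3000, bound=5.0, ndim=2):
    """minimal nested sampling: shrink the prior volume by ~exp(-1/n_live) per iteration,
    accumulate Z = sum_i width_i * L_i, then add the final live-point correction."""
    live = rng.uniform(-bound, bound, size=(n_live, ndim))
    live_logl = log_like(live)
    log_vol, log_z = 0.0, -np.inf
    for _ in range(n_iter):
        worst = np.argmin(live_logl)
        log_width = log_vol + np.log(1.0 - np.exp(-1.0 / n_live))
        log_z = np.logaddexp(log_z, log_width + live_logl[worst])
        while True:                               # naive rejection step (the expensive part)
            trial = rng.uniform(-bound, bound, size=ndim)
            if log_like(trial) > live_logl[worst]:
                break
        live[worst], live_logl[worst] = trial, log_like(trial)
        log_vol -= 1.0 / n_live
    return np.logaddexp(log_z, log_vol + logsumexp(live_logl) - np.log(n_live))

print(nested_sampling(), "analytic:", -np.log((2 * 5.0)**2))  # ln(evidence) for the toy problem
```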
in and this nested sampling frameworkwas built upon with the introduction of the multinest algorithm , which provides an efficient means of sampling from posteriors that may contain multiple modes and/or large ( curving ) degeneracies , and also calculates the evidence . sinceits release multinest has been used successfully in a wide range of astrophysical problems , from detecting the sunyaev - zeldovich effect in galaxy clusters , to inferring the properties of a potential stochastic gravitational wave background in pulsar timing array data . in the following sections we make use of the multinest algorithm to obtain our estimates of the posterior probability distributions for both timing model , and stochastic parameters .recently temponest ( l13 ) was introduced as a means of performing a simultaneous analysis of either the linear or non - linear timing model and additional stochastic parameters using multinest to efficiently explore this joint parameter space , whilst using tempo2 as an established means of evaluating the timing model at each point in that space .we incorporate the likelihood developed in section [ section : models ] into temponest in order to perform the analysis described in section [ section : sims ] .for any pulsar we can write the toas for the pulses as a sum of both a deterministic and a stochastic component : where represents the toas for a single pulsar , with and the deterministic and stochastic contributions to the total respectively , where any contributions to the latter will be modelled as random gaussian processes . writing the deterministic signal due to the timing model as , and the uncertainty associated with a particular toa as : where and represent the efac and equad parameters applied to toa respectively , we can write the probability that the data is described by the timing model parameters and white noise parameters and as : to include the dm variations , which we will denote , we begin by following the same process as in l13 . writing it in terms of its fourier coefficients so that where denotes the fourier transform such that for frequency and time we will have both : and an equivalent cosine term . here the dispersion constant is given by : is the total observing timespan , is the observing frequency for the toa at barycentric arrival time , and the frequency of the signal to be sampled . defining the number of coefficients to be sampled by , we can then include the set of frequencies with values , where extends from 1 to . for typical pta data that for frequency independent spin noise , a low frequency cut off of is sufficient to accurately describe the expected long term variations present in the data , as the quadratic included in the timing model in the form of the spindown parameters acts as a proxy to lower frequency signals . for dm variations ,however , these terms must be accounted for either by explicitly including these low frequencies in the model , or by including a quadratic in dm to act as a proxy , as with the red noise , defined as : with free parameters to be fitted , the barycentric arrival time for toa and elements of a vector : for a single pulsar the covariance matrix of the fourier coefficients will be diagonal , with components where there is no sum over , and the set of coefficients represent the theoretical power spectrum for the dm variations in the residuals . 
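a minimal sketch of the two ingredients just introduced, the efac/equad-rescaled white-noise variances and the frequency-scaled fourier basis for the dm signal, is given below. the convention sigma_hat^2 = (efac*sigma)^2 + equad^2 and the value of the dispersion constant are the usual pulsar-timing ones and are assumed here rather than read off the (garbled) equations above.

```python
import numpy as np

K_DM = 2.41e-4   # assumed standard convention: delay[s] = DM[pc cm^-3] / (K_DM * nu[MHz]**2)

def white_noise_variance(sigma_toa, efac, equad):
    """per-toa variance sigma_hat_i^2 = (efac * sigma_i)^2 + equad^2."""
    return (efac * sigma_toa)**2 + equad**2

def dm_fourier_matrix(t, nu, n_modes):
    """sine/cosine modes at frequencies k/T (k = 1..n_modes), each column scaled by
    1/(K_DM nu^2) so that a single set of coefficients describes a dm-like delay."""
    T = t.max() - t.min()
    scale = 1.0 / (K_DM * nu**2)
    F = np.zeros((len(t), 2 * n_modes))
    for k in range(1, n_modes + 1):
        phase = 2.0 * np.pi * k * (t - t.min()) / T
        F[:, 2*k - 2] = scale * np.sin(phase)
        F[:, 2*k - 1] = scale * np.cos(phase)
    return F

def dm_quadratic(t, nu):
    """linear and quadratic dm proxy terms standing in for the unmodelled low frequencies."""
    tc = (t - t.mean()) / (K_DM * nu**2)
    return np.column_stack([tc, tc * (t - t.mean())])
```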
as discussed in , whilst eq [ eq : bprior ]states that the fourier modes are orthogonal to one another , this does not mean that we assume they are orthogonal in the time domain where they are sampled , and it can be shown that this non - orthogonality is accounted for within the likelihood . instead , in bayesian terms, eq . [ eq : bprior ] represents our prior knowledge of the power spectrum coefficients within the data .we are therefore stating that , whilst we do not know the form the power spectrum will take , we know that the underlying fourier modes are still orthogonal by definition , regardless of how they are sampled in the time domain .it is here then that , should one wish to fit a specific model to the power spectrum coefficients at the point of sampling , such as a broken , or single power law , the set of coefficients should be given by some function , where we sample from the parameters from which the power spectrum coefficients can then be derived .we can then write the joint probability density of the timing model , white noise parameters , power spectrum coefficients and the signal realisation , pr , as : where represents any additional prior information regarding the dm signal realisation . henceforth we will consider to be given by a vector of measurements of the dm at some set of arbitrary times with associated measurement errors , which we will denote and respectively .for our choice of we use an uninformative prior that is uniform in space , and draw our samples from the parameter instead of . given this choice of prior the conditional distributions that make up eq . [ eq : prob ]can be written : \nonumber\end{aligned}\ ] ] where and represents the white noise errors in the residuals , and describes components of the dm model in addition to those contained in the fourier modes , such as the quadratic terms in the timing model which we have separated from the term for clarity .we then also have : \\ \nonumber & \times & \frac{1}{\sqrt{\mathrm{det}\bmath{\psi}}}\exp\left[-\frac{1}{2}(\bmath{l_{dm}}-\bmath{f_{dm}}-\bmath{f_la})^t\bmath{\psi}^{-1}\right.\nonumber \\ & \times & \left.(\bmath{l_{dm}}-\bmath{f_{dm}}-\bmath{f_la})\right].\end{aligned}\ ] ] here is a matrix of fourier modes as in eq .[ eq : fmatrix ] , however with points evaluated at the times that additional dm measurements were made , and is the diagonal noise matrix for the additional dm measurements with values .we then marginalise over all fourier coefficients analytically in order to find the posterior for the remaining parameters alone . 
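in practice the extra prior term above behaves like a set of gaussian 'pseudo-observations' of the dm signal at the measurement epochs, so a simple way to encode it is to stack the measured dm values underneath the timing residuals with their own basis rows and diagonal noise matrix psi. the sketch below assumes the measurements are supplied directly as dm values (so their basis rows carry no 1/(K nu^2) factor); if they were instead quoted as equivalent delays at a reference frequency, that scaling would have to be restored. the efac-like rescaling of psi discussed in the text is the dm_efac argument.

```python
import numpy as np

def augment_with_dm_measurements(resid, Ninv_diag, F_toa, dm_meas, dm_err, F_meas, dm_efac=1.0):
    """stack the dm measurements l_dm beneath the timing residuals.
    F_meas holds the same fourier (and quadratic) dm basis evaluated at the measurement
    epochs, without the radio-frequency scaling, since l_dm is a dm rather than a delay."""
    psi_var = (dm_efac * np.asarray(dm_err))**2
    data = np.concatenate([resid, dm_meas])
    Cinv = np.concatenate([Ninv_diag, 1.0 / psi_var])    # diagonal of the combined noise inverse
    F = np.vstack([F_toa, F_meas])
    return data, Cinv, F
```

the marginalisation over the coefficients then proceeds exactly as for the residuals alone, only with the stacked quantities.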
in order to perform the marginalisation over the fourier coefficients , we first write the log of the likelihood in eq [ eq : prob ] , which , excluding the determinant terms , and denoting as , as , as and as is given by : taking the derivitive of with respect to gives us : which can be solved to give us the maximum likelihood vector of coefficients : re - expressing eq .[ eq : logl ] in terms of : the 3rd term in this expression can then be integrated with respect to the elements in to give : \nonumber \\ & = & ( 2\pi)^m~\mathrm{det } ~ \bmath{\sigma}^{-\frac{1}{2}}.\end{aligned}\ ] ] our marginalised probability distribution for a set of gwb coefficients is then given as : , \nonumber\end{aligned}\ ] ] in much the same way in which we include an efac in the white noise matrix to alter the weighting of the data , we can include an efac in the matrix in order to alter the weight of the dm measurements in the event that the errors provided are under , or over estimated , or if the measurements are not consistent with the timing data .as such we can simply define : in the simulations described in section [ section : sims ] however , we will not include this additional parameter .we can include additional , frequency independent red ` spin ' noise in much the same way as the dm variations .as before we define a matrix of fourier modes for a set of frequencies : and an equivalent cosine term .these rows can then be appended to the fourier matrix in eq.[eq : fmatrix ] , which we will denote here to form a new matrix containing both the red noise and dm terms : similarly the matrix is extended to accommodate the new power spectrum coefficients required to describe the spin noise . the additional dm prior term is then kept the same : in forming the matrix we add the term to only the section of the matrix that corresponds to the autocorrelated terms of the dm modes , and similarly the vector is added only to the part of concerned with the dm fourier modes when forming . in many pulsar timing datasets , phase jumpsare fitted between different groups of observations , or there might be other parameters that are not of interest , such as a constant phase offset , resulting in a potentially significant increase in the number of parameters to be fit for in analysis .if the specific values of such parameters are not of importance we can marginalise analytically over them , greatly reducing the dimensionality of the problem .if we separate the timing model into a contribution from the set of parameters that we wish to parameterise and a contribution from the set of parameters that we plan to marginalise over analytically then we can write the probability that the data is described by the remaining parameters and any additional parameters we wish to include as : using a uniform prior on the parameters , we use the same approach as described in to perform this marginalisation process analytically . 
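both analytic marginalisations described above have the standard gaussian form and can be written compactly. the sketch below gives the marginalised log-likelihood over the fourier coefficients, for an assumed power-law parametrisation of their prior variances, together with one common way of realising the uniform-prior marginalisation over the linear nuisance timing-model parameters; overall constants, sign conventions and the power-law prefactor may differ from the paper's eq.([eq:llikemargin]).

```python
import numpy as np

def powerlaw_phi(freqs, log10_A, gamma, T):
    """diagonal prior variances for the fourier coefficients under a power-law spectrum
    (one common convention; the prefactor is an assumption, not taken from the text)."""
    A, f_yr = 10.0**log10_A, 1.0 / 3.15576e7
    phi = (A**2 / (12.0 * np.pi**2)) * f_yr**-3 * (freqs / f_yr)**(-gamma) / T
    return np.repeat(phi, 2)                     # same variance for sine and cosine mode

def marginalised_loglike(resid, Ninv_diag, F, phi):
    """ln L after integrating out the coefficients a, with Sigma = F^T N^-1 F + Phi^-1."""
    FtNr = F.T @ (Ninv_diag * resid)
    Sigma = F.T @ (Ninv_diag[:, None] * F) + np.diag(1.0 / phi)
    L = np.linalg.cholesky(Sigma)
    x = np.linalg.solve(L, FtNr)                 # so that x @ x = FtNr^T Sigma^-1 FtNr
    logdets = (-np.sum(np.log(Ninv_diag))        # ln det N
               + np.sum(np.log(phi))             # ln det Phi
               + 2.0 * np.sum(np.log(np.diag(L))))   # ln det Sigma
    return -0.5 * (resid @ (Ninv_diag * resid) - x @ x) - 0.5 * logdets

def nuisance_projector(M_design):
    """uniform-prior marginalisation over linear nuisance parameters (jumps, offsets) is
    equivalent to projecting the data onto the complement of the fitted directions."""
    U, _, _ = np.linalg.svd(M_design, full_matrices=True)
    return U[:, M_design.shape[1]:]              # the projection ('G matrix') columns
```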
this results in equation [ eq : llikemargin ] , which is the expression we will be using in the subsequent analysis : , \end{aligned}\ ] ] where , and .[ cols="<,^,^,^",options="header " , ] [ table : sim1 ] in order to check the efficacy of the method described in section [ section : models ] we simulate a dataset for the isolated pulsar j0030 + 0451 .we begin by taking the injected parameter values given in table [ table : sim1 ] and generate a 5 year dataset with uneven sampling in the time domain , but with an average cadence of 2 weeks .we include two observing frequencies at 1440 and 2440 mhz where the higher frequency observations exist only for the latter of the observations , and where no multi - frequency data exists for any given observing epoch of duration 2 weeks .we then add variations in the dm that are described by a power law with functional form with a spectral index of .note that we do not list injected values for the and parameters as these are used simply as proxies to the low frequency dm variations , and as such we do not know a priori what these values will take .we then simulate discrete observations of the total dm signal that will act as our prior , .we generate monthly samples that are scattered around the true signal with an rms of pc .the injected dm signal , simulated dm observations , and the final residuals obtained when subtracting the injected timing model parameters in table [ table : sim1 ] except the dm parameters , are shown in fig .[ fig : res ] .in addition to the timing model and dm parameters , we also include a red noise power law model in our analysis , of the same functional form as the dm spectrum .we initially performed our analysis using a log uniform prior on the red noise amplitude , however as can be seen in fig .[ fig : logred ] the signal is completely unconstrained below some upper limit . when using a log uniform prior this upper limitwill be dependent upon the lower limit chosen for the prior , and as such we instead use a uniform prior on the red noise amplitude , in order to obtain a robust upper limit on the signal . in fig .[ figure : timingposteriors ] we show the one dimensional posteriors for the timing model and stochastic parameters given in table [ table : sim1 ] when including ( blue ) , and not including ( red ) , the simulated dm observations as additional prior information . 
for the timing model parameters the injected value , when known , is given by 0 on the axis , which is in units of the 1-sigma uncertainty returned by tempo2 when not including either the red noise or dm power law model components . the clear result here is that the precision with which the timing model parameters have been recovered has improved significantly when including the additional prior information , between a factor . comparing the posteriors for the amplitude of the red noise power law , when including the additional prior information the upper limit decreases by a factor of , demonstrating how critical such data will be in constraining gravitational wave signals in pulsar timing data . [ figures : ( i ) the injected dm signal , the simulated dm observations and the final timing residuals ( fig . [ fig : res ] ) ; ( ii ) one - dimensional posterior distributions for the timing model parameters ( position , spin , proper motion , dm and its derivatives , parallax ) and for the red noise and dm power law amplitudes and spectral indices , with and without the additional dm prior ( fig . [ figure : timingposteriors ] ) . ] we have discussed a method of including discrete measurements of the dispersion measure in the direction of a pulsar as prior information in the analysis of that pulsar . by using an existing bayesian framework , this prior information can be simply folded into the analysis and used to constrain the dm signal realisation in pulsar timing data . we have shown that this method can be applied where no multi - frequency data exists across much of the dataset , and does not require simultaneous multi - frequency data to be present for any observing epoch . we have shown that , as expected , including this prior information can greatly increase both the precision of the timing model parameters recovered from the analysis , as well as increase the sensitivity to red noise in the data . clearly the level of improvement in real data will be entirely dependent on the dataset in question , however the inclusion of such prior information will likely prove extremely useful both to test the validity of existing models for dm , and to improve constraints in future analysis . many thanks to jason hessels for assistance in the production of this work . | here we present a bayesian method of including discrete measurements of dispersion measure due to the interstellar medium in the direction of a pulsar as prior information in the analysis of that pulsar . we use a simple simulation to show the efficacy of this method , where the inclusion of the additional measurements results in both a significant increase in the precision with which the timing model parameters can be obtained , and an improved upper limit on the amplitude of any red noise in the dataset . we show that this method can be applied where no multi - frequency data exists across much of the dataset , and where there is no simultaneous multi - frequency data for any given observing epoch .
including such information in the analysis of upcoming international pulsar timing array ( ipta ) and european pulsar timing array ( epta ) data releases could therefore prove invaluable in obtaining the most constraining limits on gravitational wave signals within those datasets . _ keywords _ : methods : data analysis , pulsars : general , pulsars : individual |
online auctions are becoming an increasingly important component of consumers shopping experience .on ebay , for instance , several million items are offered for sale every day .what makes online auctions popular forms of commerce is their availability of almost any kind of item , whether it be new or used , and their constant accessibility at any time of the day , from any geographical region in the world .moreover , the auction mechanism often engages participants in a competitive environment and can result in advantages for both the buyers and the sellers [ ] . in this paperwe study online auctions from the point of view of the bidder seller network that they induce .every time a bidder purchases from a seller , both bidder and seller are linked .buying from a seller indicates that the bidder likes the product and trusts the seller thus , it establishes a relationship between bidder and seller .many sellers list more than one auction ( i.e. , they sell multiple items across different auctions ) , so repeat transactions by the same bidder across different auctions of the same seller measure the strength of this relationship , that is , it measures the strength of a bidder s _ loyalty _ to a particular seller . studying loyalty in auction networksmuch of the existing auction literature focuses on only the seller and the level of _ trust _ she signals to the bidders [ e.g. , ] . to that end , a seller s _ feedback score _( i.e. , the number of positive ratings minus the number of negative ratings ) is often scrutinized [ e.g. , ] and it has been shown that higher feedback scores can lead to price - premiums for the seller [ see ; ] . in this paper we study a complementary determinant of a bidder s decision process : _loyalty_. loyalty is different from trust .trust is often associated with reliability or honesty ; and trust may be a necessary ( but not sufficient ) prerequisite for loyalty .loyalty , however , is a stronger determinant of a bidder s decision process than trust .loyalty refers to a state of being faithful or committed .loyalty incorporates not only the level of confidence in the outcome of the transaction , but also satisfaction with the product , the price , and also with previous transactions by the same seller . moreover , loyal bidders are often willing to make an emotional investment or even a small sacrifice to strengthen a relationship .this paper makes two contributions to the literature on online auctions : first , we propose a novel way to _ measure _ e - loyalty from the bipartite network of bidders and sellers ; then , we investigate the _ effect _ of e - loyalty on the outcome of an auction and the statistical challenges associated with it . more specifically , our goal is to understand and learn from loyalty networks . to that end , we first measure a seller s perceived loyalty by its induced bidder loyalty distribution .then , borrowing ideas from _ functional data analysis _ , we capture key elements of that distribution using functional principal component analysis .the resulting principal component scores capture different aspects of loyalty - strength , -skew and -variability .we then investigate the impact of these loyalty scores on the outcome of an auction such as its final price .we would like to point out that the goal of this paper is not to develop new auction theory ( i.e. 
, it is not our goal to develop a game - theoretic model under market equilibrium considerations ) .rather , our goal is to mine a rich set of auction data for new patterns and knowledge . in that sense ,our work is exploratory rather than confirmatory .however , as it is often the case with exploratory work , we hope that our work will also inspire the development of new theory .in particular , we hope that our work will bring the attention to the many statistical challenges associated with the study of online markets .studying e - loyalty networks is challenging from a statistical point of view because of the asymmetric nature of the network .just as in many offline markets , online auctions are dominated by few very large sellers ( `` megasellers '' ) .megasellers have a large supply of products and thus account for a large number of all the transactions .statistically , this dominance results in a clustering of the data and , as a result , a violation of standard ols model assumptions . in this paperwe investigate several remedies to this clustering effect via random effects models and weighted least squares .however , our investigation shows that neither approach fully eases all problems .we thus conclude that the data is too segmented to be captured by a single model and compare our analyses with the results of a data - clustering approach .this paper has implications for future research in online markets .many online markets are characterized by a few large `` players '' that dominate most of the interactions and many , many small players with occasional interactions .this is often referred to as the `` long tail effect '' in online markets [ see , e.g. , ] . for instance , on ebay , megasellers dominate the marketplace .the statistical implication is that repeat interactions by these megasellers are no longer independent and , hence , the assumptions of ols break down .while this may not always create a problem , this research shows that , first , the conclusions from an ols regression are significantly different from models that account for the clustering induced by megasellers , and , second , that it is not at all obvious how to best account for this clustering . in particular , this research puts the spotlight on the findings from previous researchers [ e.g. , ; ; ] who , despite similar data - scenarios , rely their conclusions on the ols modeling assumptions .[ see also who , in the context of trust and online auctions alone , count over 6 papers relying on ols modeling techniques . ]this paper is organized as follows . in section [ sec2 ]we introduce our data and we motivate the existence of auction networks . in section [ sec3 ]we use seller bidder networks to derive several key measures of e - loyalty .we investigate the effect of e - loyalty on the outcome of an auction in section [ sec4 ] and explore different modeling alternatives .the paper concludes with final remarks in section [ sec5 ] .in this section we describe the data for our study .we start by describing the online auction mechanism in general and our data in particular .then , we motivate the network structure induced by the auction mechanism and show several snapshots of our network data . in online auctions participants bid for products or services over the internet . 
while there are different types of auction mechanisms , one of the most popular types ( a variant of whichcan also be found on http://ebay.com[ebay.com ] ) is the _ vickrey _ auction , in which the initial price starts low and is bid up successively .online auctions have experienced a tremendous popularity recently , which can be attributed to several features : since the auction happens online , it is not bound by any temporal or geographical constraints , in stark contrast to its brick - and - mortar counterpart ( e.g. , at _sotheby s _ ) .it also fosters social interactions since it engages participants in a competition . as a result, it attracts a large number of buyers and sellers which offers advantages for both sides : sellers find a large number of potential customers which often results in higher prices and lower costs . on the other hand, buyers find a large variety of products which enables them to locate rare products and to choose between products with the lowest price .one of the most well - known online auctions is ebay , but there are many more ( e.g. , ubid , prosper , swoopo or overstock ) , each of which offers a variety of different products and services .the data for this research originates from ebay online auctions and we describe details of the data next ..2d2.2d3.2@ * attribute * & & & + auction duration ( days ) & 3.50 & 3.00 & 4.43 + starting price ( usd ) & 3.77 & 3.33 & 5.64 + closing price ( usd ) & 6.61 & 4.25 & 9.15 + item quantity & 5.42 & 1.00 & 129.47 + bid count & 3.16 & 1.00 & 4.26 + size ( bead diameter ) & 6.41 & 6.00 & 3.35 + pieces ( # of beads per item ) & 124.30 & 48.00 & 343.79 + 220pt.2d3.2d6.2@ * attribute * & & & + volume & 163.90 & 6.00 & 999.00 + conversion rate & 0.67 & 0.67 & 0.33 + seller feedback & 2054.00 & 264.00 & 12400.00 + we study the complete bidding records of _ swarovski _ fine beads for every single auction that was listed on ebay between april , 2007 , and january , 2008 .( note that the data were obtained directly from ebay so we have a complete set of bidding records for that time frame . )our data contains a total of 36,728 auctions out of which 25,314 transacted .there are 365 unique sellers and 40,084 bidders out of which 19,462 made more than a single purchase .each bidding record contains information on the auction format , the seller , the bidder , as well as on product details .tables [ tab : tab 1][tab : tab 3 ] summarize this information .table [ tab : tab 1 ] shows information about the auctions and the product sold in each auction .we can see that the typical auction - length is 3 days .the product sold in each auction ( packages of beads for crafts and artisanship ) is of relatively small value and , thus , both the average starting and closing prices are low .while many ebay auctions sell only one item at a time ( e.g. , laptop or automobile auctions ) , auctions in the crafts category often feature multi - unit auctions , that is , the seller offers multiple counts of the same item and bidders can decide how many of these items they wish to purchase . in our datathe average item - quantity per auction is 5.42 .auctions thrive under competition among bidders and while the average number of bids is slightly larger than 3 , the median is only 1 . 
as pointed out above , the items sold in these auctions are packages of swarovski beads .the value of a bead is , in part , defined by its size , and the average diameter of our beads equals 6.41 millimeters .another measure for the value of an item is the number of pieces per package ; we can see that there are on average over 124 beads in each package , but this number varies significantly from auction to auction .220pt.2d2.2d3.2@ * attribute * & & & + volume & 3.62 & 1.00 & 14.29 + item quantity & 5.05 & 2.00 & 29.25 + bidder feedback & 228.10 & 70.00 & 559.53 + we are primarily interested in the bipartite network between bidders and sellers .one main factor influencing this network is the size of the seller .we can see ( table [ tab : tab 2 ] ) that the average seller - volume ( i.e. , number of auctions per seller ) is over 163 .a seller s auction will only transact if ( at least one ) bidder places a bid . while low transaction rates ( or `` conversion rates '' ) are a problem for many ebay categories ( e.g. , automobiles ) , in our datathe average conversion rate is 67% per seller , which is considerably high .one factor driving conversion rates is a seller s perceived level of trust .trust is often measured using a seller s feedback rating computed as the sum of positive ( `` '' ) and negative ( `` '' ) ratings .trust averages over 2000 in our data .table [ tab : tab 3 ] shows the corresponding attributes of the bidders .bidders win on average almost 4 auctions ( `` volume '' ) and , in every auction , they purchase on average over 5 items .( recall the multi - unit auctions with several items per listing . )the bidder feedback [ computed as the sum of positive ( `` '' ) and negative ( `` '' ) feedback ] captures a bidder s experience with the auction - process and its average is over 220 in our data , signaling highly experienced bidders .interactions in an online auction result in a network linking its participants . bidders bidding on one auctionare linked to other bidders who bid on the same auction .sellers selling a certain product are linked to other sellers selling the same product . in this studywe focus on the network _ between _ buyers and sellers . each time a bidder transacts with a particular seller ,both are linked .a seller can set up more than one auction , thus , repeat transactions ( i.e. , purchases ) measure the strength of this link .for instance , a bidder transacting 10 times with the same seller has a stronger link compared to a bidder who transacts only twice . in our analyses ,we only consider edges with link - strength of at least 4 .that is , we disregard all bidder seller transactions with frequency less than 4 .while there exists no recommended or ideal cut - off , our investigations suggest that results vary for smaller values but stabilize for link - strengths of 4 and higher . in that sense, the network strength measures an important aspect about the relationship between buyers and sellers : customer loyalty .we would like to emphasize that one can measure loyalty in different ways .while one could count all the repeat _ bids _ a bidder places on auctions hosted by the same seller , we only count the number of _ winning bids _( i.e. , the number of transactions ) . while both bids and winning bids indicate a relationship between buyers and sellers , a winning bid signals a much stronger commitment and is thus much more indicative of a buyer s loyalty . in this paperwe investigate loyalty relationships across auctions . 
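to make the construction of the bidder seller network concrete , the short sketch below builds the weighted bipartite graph from a table of winning bids and discards all links with strength below 4 , as described above . the file name , the column names and the use of the networkx library are assumptions made purely for illustration and are not taken from the original analysis .

```python
# minimal sketch: weighted bidder-seller network with a link-strength cutoff of 4.
# the file and column names (transactions.csv, bidder_id, seller_id) are hypothetical.
import pandas as pd
import networkx as nx

transactions = pd.read_csv("transactions.csv")   # one row per winning bid

# link strength = number of repeat transactions for each bidder-seller pair
strength = (transactions
            .groupby(["bidder_id", "seller_id"])
            .size()
            .reset_index(name="n_transactions"))

# keep only links with strength >= 4, the cutoff used in the text
strong = strength[strength["n_transactions"] >= 4]

# bipartite graph: bidders on one side, sellers on the other,
# edge weight = number of transactions between the pair
G = nx.Graph()
G.add_nodes_from(strong["bidder_id"].unique(), side="bidder")
G.add_nodes_from(strong["seller_id"].unique(), side="seller")
G.add_weighted_edges_from(
    strong[["bidder_id", "seller_id", "n_transactions"]].itertuples(index=False))
```

the edge weights of this graph are exactly the link strengths referred to above , so loyalty measures can be read off the graph directly .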
studying cross - auction relationshipsis rather rare in the literature on online auctions , and it has gained momentum only recently [ ; ; ; ] . in this work we consider network effects between auction participants and their impact on the outcome of an auction .consider figure [ fig : net 2 ] which shows the top 10% of high volume _sellers_. sellers are marked by white triangles , bidders are marked by red squares .a ( black ) line between a seller and bidder denotes a transaction .the width of the line is proportional to the number of transactions and hence measures the strength of a link .we can see that some sellers interact with several hundred different bidders ( with 895 , on average ) ; we can also see that some sellers are `` exclusive '' in the sense that they are the only ones that transact with a set of bidders ( see , e.g. , at the margins of the network ) , while other sellers `` share '' are a common set of bidders .serving bidders exclusively vs. sharing them with other sellers has huge implications on the outcome of the auction .figure [ fig : net 4 ] shows another subset of the data . in this figurewe display only the top 10% of all _ bidders _ with the highest number of transactions .we can see that many of these high - volume bidders transact with only one seller ( note the many the red triangles which are connected with only a single arc to the network ) and are hence very loyal to the same person .figure [ fig : net 3 ] shows only new buyers ( i.e. , bidders who won an auction for the first time ) . this network exemplifies the market share of a seller with the effect of repeat buyers removed .we can see that the market is dominated by few mega sellers , yet smaller sellers still attract some of the buyers .we can identify 5 mega - sellers , 3 high - volume sellers , and many medium- and low - volume sellers .since these are only first - time buyers , loyalty does not yet play a role in bidders decisions .however , the fact that most first - time bidders `` converge '' to only a few mega - sellers suggests that this is a very difficult market for low - volume sellers to enter .as pointed out above , bidder seller networks capture loyalty of participants . while most sellers and bidders are linked to one another , here we only focus on the sub - graphs created by each bidder seller pair .next , we describe an innovative way to extract _ loyalty measures _ from these graphs .our loyalty measures map the entire network of bidders and sellers into a few seller - specific numbers . for each seller ,these numbers capture both the _ proportion of bidders _ loyal to that seller , as well as the _ degree of loyalty _ of each bidder .we derive the measure in two steps .first , we derive , for each seller , the _ loyalty distribution _ ; then , we summarize that distribution in a few numbers using functional principal component analysis . we describe each step in detail below .note that there exists more than one way for extracting loyalty information from network data .we chose the route of loyalty distributions since they capture the two most important elements of loyalty : the proportion of customers loyal to one s business , and the degree of their loyalty .notice , in particular , that we do not try to dichotomize loyalty ( i.e. , categorize it into loyal vs. disloyal buyers ) : since we do not believe that loyalty can be turned on or off arbitrarily , we allow it to range on a continuous scale between 0 and 1 . 
this will allow us to quantify the impact of the _ shape _ of a seller s loyalty distribution on his or her bottom line .for instance , it will allow us to answer whether sellers with _ pure _ loyalty ( i.e. , all buyers 100% loyal ) are better off compared to sellers with more variation among their customer base .we would also like to caution that the resulting analysis is complex since we first have to characterize the infinite - dimensional loyalty distributions in a finite way , and subsequently interpret the resulting characterizations .the resulting interpretations are more complex than , say , employing user - defined measures of loyalty ( e.g. , summary statistics such as the number of loyal buyers or the proportion of at least 70% loyal buyers ) .while such user - defined measures are easy to interpret , there is no guarantee that they capture all of the relevant information .( for instance , measuring the `` number of loyal bidders '' would first require us to define a cutoff at which we consider one buyer to be loyal and another one to be disloyal any such cutoff is necessarily arbitrary and would lead to a dichotomization which we are trying to avoid . )rather than employing arbitrary , user - defined measures , we set out to let the data speak freely and first look for ways to summarize the information captured in the loyalty distributions in the most exhaustive way .this will lead us to the notion of principal component loyalty scores and their interpretations .we will elaborate on both aspects below .consider the hypothetical seller bidder network in figure [ fig : illustration ] . in that network, we have 4 sellers ( labeled `` a , '' `` b , '' `` c '' and `` d '' ) and 10 bidders ( labeled 110 ) .an arc between a seller bidder pair denotes an interaction , and the width of the arc is proportional to the number of repeat - interactions between the pair .consider bidder 1 who has a total of 10 interactions , all of which are with seller a ; we can say that bidder 1 is 100% loyal to seller a. this is similar for bidders 2 and 3 , who have a total of 8 and 6 interactions , respectively , all of which are , again , with seller a. in contrast , bidders 4 and 5 are only 80% and 70% loyal to seller a since , out of their total number of interactions ( both 10 ) , they share 2 with seller b and 3 with seller c , respectively .all - in - all , seller a attracts mostly highly loyal bidders .this is different for seller d who attracts mostly little loyal bidders , as he shares all of his bidders with either seller b or c. c + + + for each seller , we can summarize the proportion of loyal bidders and the degree of their loyalty in the associated _loyalty distribution_. the loyalty distributions for sellers a d are displayed in the right panel of figure [ fig : illustration ] .the -axis denotes the degree of loyalty ( e.g. , 100% or 80% loyal ) , and the -axis denotes the corresponding density .we can see that the shape of all four distributions is very different ; while seller a s distribution is very left - skewed ( mostly high - loyal bidders ) , seller d s distribution is very right - skewed ( mostly little - loyal bidders ) .the distributions of sellers b and c fall somewhat in between , yet they are still very distinct from one another . note that our definition of loyalty is similar to the concept of in- and out - degree analysis .more precisely , we first measure the proportion of interactions for each buyer ( i.e. , the normalized distribution of out - degree ) . 
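the toy network just described translates directly into loyalty distributions ; the sketch below reproduces the numbers given in the text for bidders 1 - 5 and seller a ( bidders 6 - 10 are omitted since their counts are not spelled out ) and evaluates each seller 's distribution on a common grid , so that it can later be fed into the principal component summarization described in the next subsection .

```python
# minimal sketch: per-bidder loyalty shares and per-seller loyalty distributions
# for the hypothetical network of the text (only bidders 1-5 are spelled out there).
import numpy as np

# transaction counts {bidder: {seller: n}}
counts = {
    1: {"A": 10},
    2: {"A": 8},
    3: {"A": 6},
    4: {"A": 8, "B": 2},   # 80% loyal to seller A
    5: {"A": 7, "C": 3},   # 70% loyal to seller A
}

# loyalty of bidder b to seller s = share of b's transactions going to s
loyalty = {b: {s: n / sum(d.values()) for s, n in d.items()}
           for b, d in counts.items()}

# a seller's loyalty distribution = loyalty values of the bidders linked to it
sellers = sorted({s for d in counts.values() for s in d})
loyalty_dist = {s: [loyalty[b][s] for b in counts if s in counts[b]]
                for s in sellers}
print(loyalty_dist["A"])          # [1.0, 1.0, 1.0, 0.8, 0.7]

# evaluate each distribution on a common grid (a histogram with 10 bins)
grid = np.linspace(0.0, 1.0, 11)
hist = {s: np.histogram(loyalty_dist[s], bins=grid, density=True)[0]
        for s in sellers}
```

stacking the rows of `hist` into a matrix gives exactly the kind of input used for the grid - based functional principal component analysis discussed below .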
then , we measure the perceived loyalty of each seller , which can be viewed as the distribution of the weighted in - degree .this definition of loyalty is very similar to the concept of brand - switching in marketing .in essence , if we have a fixed number of brands ( sellers in our case ) and a pool of buyers ( i.e. , bidders ) , then we measure the switching - behavior from one brand to another .while the loyalty distributions in figure [ fig : illustration ] capture all of the relevant information , we can not use them for further analysis ( especially modeling ) .thus , our next step is to characterize each loyalty distribution by only a few numbers . to that end, we employ a very flexible dimension reduction approach via functional data analysis . in order to investigate the effect of loyalty on the outcome of an auction ,we first need to characterize a seller s loyalty distribution .while one could characterize the distributions via summary statistics ( e.g. , mean , median or mode ) , figure [ fig : illustration ] suggests that loyalty is too heterogeneous and too dispersed .therefore , we resort to a very flexible alternative via functional data analysis [ ] . by functional datawe mean a collection of continuous functional objects such as curves , shapes or images .examples include measurements of individuals behavior over time , digitized 2- or 3-dimensional images of the brain , or recordings of 3- or even 4-dimensional movements of objects traveling through space and time . in our application , we regard each seller s loyalty distribution as a functional observation .we capture similarities ( and differences ) across distributions via _ functional principal component analysis _( fpca ) , a functional version of principal component analysis [ see ] .in fact , while operate on the true probability distributions , these are not known in our case ; hence , we apply fpca to the observed ( empirical ) distribution function , which may introduce an extra level of estimation error .functional principal component analysis is similar in nature to ordinary pca ; however , rather than operating on data - vectors , it operates on functional objects . in our context, we take the observed loyalty distributions [ i.e. , the histograms figure [ fig : illustration](b ) ] as input . while one could also first smooth the observed histograms , we decided against it since the results were not substantially different .ordinary pca operates on a set of data - vectors , say , , where each observation is a -dimensional data - vector .the goal of ordinary pca is to find a projection of into a new space which maximizes the variance along each component of the new space and at the same time renders the individual components of the new space orthogonal to one another . in other words , the goal of ordinary pca is to find a pc vector for which the principal component scores ( pcs ) maximize subject to this yields the first pc , .in the next step we compute the second pc , , for which , similarly to above , the principal component scores maximize subject to and the _ additional constraint _ second constraint ensures that the resulting principal components are orthogonal .this process is repeated for the remaining pc , .the functional version of pca is similar in nature , except that we now operate on a set of continuous curves rather than discrete vectors . 
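since the displayed equations have not survived in this copy of the text , the standard form of the maximization problem is restated below ; this is the textbook statement of ( functional ) principal component analysis and is given only as a reconstruction of what the missing formulas most likely contained ( the data are assumed to be centered ) .

```latex
% ordinary pca (first component; data x_{ij}, i = 1..n observations, j = 1..p variables)
\max_{\xi_1}\;\frac{1}{n}\sum_{i=1}^{n} f_{i1}^{2},
\qquad f_{i1}=\sum_{j=1}^{p}\xi_{1j}\,x_{ij},
\qquad \text{subject to}\quad \sum_{j=1}^{p}\xi_{1j}^{2}=1 ;
% the second component maximizes the same criterion with the extra constraint
\sum_{j=1}^{p}\xi_{1j}\,\xi_{2j}=0 .
% functional pca: the sums over j become integrals over the argument t
f_{i1}=\int \xi_{1}(t)\,x_{i}(t)\,dt ,\qquad
\int \xi_{1}(t)^{2}\,dt=1 ,\qquad
\int \xi_{1}(t)\,\xi_{2}(t)\,dt=0 .
```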
as a consequence, summation is replaced by integration .more specifically , assume that we have a set of curves , each measured on a continuous scale indexed by .the goal is now to find a corresponding set of pc curves , , that , as previously , maximize the variance along each component and are orthogonal to one another . in other words , we first find the pc function , , whose pcs maximize subject to similarly to the discrete case , the next step involves finding for which the pcs maximize subject to and the additional constraint in practice , the integrals in ( [ eq : fpc score])([eq : fpc contraint ] ) are approximated either by sampling the predictors , , on a fine grid or , alternatively , by finding a lower - dimensional expression for the pc functions with the help of a basis expansion . for instance, let be a suitable basis expansion [ ] , then we can write for a set of basis coefficients . in that fashion , the integral in , for example , ( [ eq : fpc contraint ] ) becomes where .for more details , see . in this workwe use the grid - approach .-origin . ]common practice is to choose only those eigenvectors that correspond to the largest eigenvalues , that is , those that explain most of the variation in . by discarding those eigenvectors that explain no or only a very small proportion of the variation, we capture the most important characteristics of the observed data patterns without much loss of information .in our context , the first 2 eigenvectors capture over 82% of the variation in loyalty distributions .since our loyalty measures are based on their principal component representations , interpretation has to be done with care .figure [ fig : pc loadings ] shows the first 2 principal components ( pcs ) .the first pc ( top panel ) shows a growing trend and , in particular , it puts large negative weight on the lowest loyalty scores ( between 0 and 0.2 ) while putting positive weight on medium to high loyalty scores ( 0.4 and higher ) .thus , we can say that the first pc contrasts the extremely disloyal distributions from the rest .table [ pc_summaries ] ( first row ) confirms this notion : notice the large negative correlation with the minimum ; also , the large correlation with the skewness indicates that pc1 truly captures extremes in the loyalty distributions scores and shape .we can conclude that pc1 distinguishes distributions of `` pure disloyalty '' from the rest .the second pc has a different shape .the second pc puts most ( positive ) weight on the highest loyalty scores ( between 0.8 and 1 ) ; it puts negative weight on scores at the medium and low scores ( between 0.4 and 0.6 ) and thus contrasts average loyalty from extremely high loyalty .indeed , table [ pc_summaries ] ( second row ) shows that pc2 has a high positive correlation with the maximum and a high negative correlation with the median . in that sense, it distinguishes the mediocre loyalty from the stars .while the above interpretations help our understanding of the loyalty components , their overall impact is still hard to grasp , especially because every individual loyalty distribution will by nature of the principal component decomposition comprise of a different _ mix _ between pc1 and pc2 .moreover , as we apply fpca to observed densities ( i.e. 
, histograms ) , individual values of each density function must be heavily correlated .this adds additional constraints on the pcs and their interpretations .hence , in the following , we discuss five _ theoretical _ loyalty distributions and their corresponding representation via pc1 and pc2 .210pt.2d2.2cd2.2c@ & & & & & + pc1 & 0.55 & -0.2 & 0.52 & -0.99 & 0.77 + pc2 & -0.78 & -0.05 & 0.81 & -0.02 & 0.63 + take a look at figure [ fig : theoretic loyalty ] .it shows five plausible loyalty distributions as they may develop out of a bidder seller network .we refer to these distributions as `` theoretic loyalty distributions '' and we can characterize them by their specific shapes . for instance , the first distribution is comprised of 100% loyal buyers and we hence refer to it as `` pure loyalty ; '' in contrast , the last distribution is comprised of 100% disloyal buyers and we hence name this distribution `` pure disloyalty ; '' the distribution in the center ( `` somewhat loyal '' ) is interesting since it is comprised mostly of buyers that exhibit some loyalty but do not purchase exclusively from only one seller. table [ theoretical_loyalties ] shows the corresponding pc scores .we can see that the theoretical distribution corresponding to _ pure loyalty _ scores very high on pc1 since it is very right - skewed and does not have any values lower than 0.9 ; in contrast , notice the pc1 scores for _ pure disloyalty _ : while it is the mirror image of _ pure loyalty _ , it scores ( in absolute terms ) higher than the former because it is not only very ( left- ) skewed , but its extremely small values weigh heavily ( and negatively ) with the first part of the pc1-shape , which is in contrast to the positive values of _ pure loyalty _ which do not receive as much weight .as for pc2 , table [ theoretical_loyalties ] shows that _ pure loyalty _scores even higher on that component as its values are extremely large , much larger than the typical ( median ) loyalty values . in contrast , _ pure disloyalty _ has very small pc2 values , as low scores are given very little weight by pc2 ..2d2.2d2.2@ & * pure lyty * & * strong lyty * & & & + pc1 & 0.56 & 0.47 & 0.32 & -0.04 & -0.64 + pc2 & 0.72 & 0.51 & -0.08 & 0.35 & -0.01 + we can make similar observations for the remaining theoretical loyalty distributions .for instance , the distribution of _ somewhat loyal _ scores high on pc1 since it does not have many low values ; but it also only receives an average score on pc2 since it does not have many high values either . in the following , we will use these theoretical loyalty distributions to shed more light on the relationship between loyalty and the outcome of an auction .our goal is to investigate the effect of loyalty on the outcome of an auction .for instance , we would like to see whether sellers who attract exclusively high - loyal bidders elicit price - premiums , or whether more variability in buyers loyalty leads to a higher price .to that end , we start out , in similar fashion to many previous studies on online auctions [ e.g. , ] , with an ordinary least squares ( ols ) modeling framework .that is , we investigate a model of the form where is a matrix of covariates and follows the standard linear model assumptions . for the choice of the covariates , we are primarily interested in the effect of loyalty on the price of an auction ( i.e. 
, the first 2 pc scores from the previous section are our main interest ) .however , we also want to control for factors other than loyalty which are also known to have an impact on price ; these factors include auction characteristics ( auction duration ) , item characteristics ( item quantity , size and pieces ) , seller characteristics ( seller feedback , i.e. , reputation and seller volume ) and auction competition ( number of bids , i.e. , bid - count ) .we first investigate a standard ols model that relates these covariates to price .however , we will show that an ols approach leads to violations of the model assumptions .the reason lies in the asymmetry of the bidder seller network : the presence of high - volume sellers ( i.e. , seller nodes with extremely high degree ) biases the analysis and leads to wrong conclusions .in particular , high - volume sellers have many repeat interactions which result in a strong clustering of the data and thus violate the i.i.d .assumption of ols .we investigate several remedies to this problem .first , we investigate two `` standard '' remedies via random effect ( re ) models and weighted least squares ( wls ) .our results show that although both remedies ease the problem , none removes it completely .we thus argue that the data is too heterogeneous to be modeled within a single model and compare our results with that of a data - segmentation strategy .many studies employ an ols modeling framework to investigate phenomena in online auctions such as the effect of the auction format , the impact of a seller s reputation , or the amount of competition [ e.g. , ; ; ] . however , one problem with an ols model approach is the presence of repeat observations on the same item .for instance , if we want to study the effect of a seller s reputation ( measured by her feedback score ) , then repeat auctions by the same seller will severely overweight the effect of high - volume sellers in the ols model .this problem is typically not addressed in the online auction literature .we face a very similar problem when modeling the effect of e - loyalty ..2cc@ * coefficient * & & * std .error * & * -value * + ( intercept ) & -1.64 & 0.06 & 0.00 + auction duration & 0.02 & 0.00 & 0.00 + log(item quantity 1 ) & -0.04 & 0.04 & 0.30 + bid count & 0.06 & 0.00 & 0.00 + log(pieces ) & 0.42 & 0.00 & 0.00 + size & 0.07 & 0.00 & 0.00 + log(seller feedback 1 ) & -0.00 & 0.01 & 0.52 + loyalty - pc1 & -0.17 & 0.05 & 0.00 + loyalty - pc2 & -1.00 & 0.07 & 0.00 + log(volume ) & 0.16 & 0.01 & 0.00 + [ 3pt ] aic & + -squared & 0.77 + [ 6pt ] ( intercept ) & -0.58 & 0.22 & 0.00 + auction duration & 0.01 & 0.00 & 0.00 + log(item quantity 1 ) & -0.21 & 0.07 & 0.00 + bid count & 0.05 & 0.00 & 0.00 + log(pieces ) & 0.27 & 0.00 & 0.00 + size & 0.03 & 0.00 & 0.00 + log(seller feedback 1 ) & 0.07 & 0.04 & 0.11 + loyalty - pc1 & -0.40 & 0.22 & 0.07 + loyalty - pc2 & -0.15 & 0.22 & 0.51 + log(volume ) & 0.05 & 0.04 & 0.20 + [ 3pt ] aic & + -squared & + [ 6pt ] ( intercept ) & -1.59 & 0.12 & 0.00 + auction duration & 0.01 & 0.00 & 0.01 + log(item quantity 1 ) & 0.30 & 0.10 & 0.00 + bid count & 0.08 & 0.00 & 0.00 + log(pieces ) & 0.26 & 0.01 & 0.00 + size & 0.00 & 0.00 & 0.61 + log(seller feedback 1 ) & -0.01 & 0.01 & 0.59 + loyalty - pc1 & -1.00 & 0.08 & 0.00 + loyalty - pc2 & 1.33 & 0.11 & 0.00 + log(volume ) & 0.24 & 0.01 & 0.00 + [ 3pt ] aic & + -squared & 0.43 + for illustration , take the ols regression model in the top panel of table [ effect of theoretical loyalties ] . 
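a minimal sketch of how the three specifications discussed in this section ( pooled ols , a random seller intercept , and weighted least squares ) might be fit with statsmodels is given below ; the data frame , its column names and the particular weighting rule are assumptions made for illustration , not a description of the code actually used for the tables .

```python
# minimal sketch of the three specifications compared in this section:
# pooled ols, a random seller intercept, and weighted least squares.
# the data frame and its column names are hypothetical; no missing values assumed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("auctions.csv")          # one row per transacted auction
formula = ("log_price ~ duration + log_quantity + bid_count + log_pieces"
           " + size + log_feedback + loyalty_pc1 + loyalty_pc2 + log_volume")

# (1) pooled ols: ignores the clustering induced by repeat sellers
ols_fit = smf.ols(formula, data=df).fit()

# (2) random effects: one random intercept per seller
re_fit = smf.mixedlm(formula, data=df, groups=df["seller_id"]).fit()

# (3) wls: weights inversely proportional to the residual variance within each
#     seller cluster (one of several possible weighting rules; clusters with a
#     single auction would need special handling)
resid_var = ols_fit.resid.groupby(df["seller_id"]).var()
weights = 1.0 / df["seller_id"].map(resid_var)
wls_fit = smf.wls(formula, data=df, weights=weights).fit()

print(ols_fit.rsquared, wls_fit.rsquared)
print(re_fit.summary())
```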
in this modelwe estimate the dependency of ( log-)price on loyalty ( measured by pc1 and pc2 ) , controlling for all other factors described above .note that this model appears to fit the data very well ( -squared 77% ) .however , it is curious to see that seller feedback has a negative sign and is statistically insignificant .this contradicts previous findings which found that an increased level of trust leads to price premiums [ ; ; ] .figure [ fig : res2 ols ] shows the residuals corresponding to the above model .the top half shows the residuals plotted against seller - volume ; the bottom half shows the residual distribution . for each type of graph we present 4 different views : one graph ( left graphs in first and third panel ) gives an overview ; the other graphs zoom in by seller volume ( low , medium and high volume , respectively ) .notice that the residuals are rather skewed : a large proportion of residuals are negative ( see , e.g. , top left graph ) , implying that our model over - estimates price effects of loyalty .moreover , we can also see that the residual - variation increases for larger seller volumes .if we zoom in on both the low - volume and medium - volume sellers , we can see that the true effect of model misspecification is confounded with seller volume : while price effects of low - volume sellers are underestimated ( note the positive - skew in the residual distribution for low - volume sellers ) , the effects are overestimated for medium - volume sellers ( negative - skew ) ; only high volume sellers appear to be captured well by the model .thus , the ols regression model blends low volume and medium volume sellers but represents neither of them adequately .we have seen in the previous section that an ols approach does not result in a model that can be interpreted without concerns .we thus investigate two alternate models , a random effects ( re ) model and weighted least squares ( wls ) .random effects models are often employed when there are repeat observations on the same subject or when the data is clustered [ e.g. , ] .since we have many repeat auctions by the same seller , adding a random , seller - specific effect to the model in ( [ eq : regression model ] ) lends itself as a natural remedy for ols .while re models have become popular only recently with the advent of powerful computing and efficient algorithms , wls has been around for a longer time as a possible solution for heteroscedasticity [ ] .while the principle of wls is powerful , it assumes that the matrix of weights is known ( or at least known up to a parameter value ) , which reduces its practical value . in our context , we use weights that are inversely proportional to the residual variance in each cluster .we will now compare both approaches and see if they result in more plausible models for e - loyalty.=-1 table [ effect of theoretical loyalties ] ( second and third panels ) shows the results of the re and wls models , respectively .we can see that wls results in a very poor model fit ( both in terms of -squared and aic ) . while the re model results in much better model fit ( compared to both the wls and the ols model ) , it is curious that seller feedback is insignificant , similar to the ols model above . 
in fact , it is quite curious that none of the seller - related variables ( feedback , loyalty or volume ) are significant in the re model .this finding suggests that none of the actions taken by the seller affect the outcome of an auction , which contradicts both common practitioner knowledge as well as previous research on the topic [ ; ; ] .figure [ fig : res2 random ] shows the residuals of the re model .we can see that the magnitude of the residuals has decreased , suggesting a better model fit .this is expected as the random effects account for seller - specific variation due to individual selling strategies ( e.g. , seller - specific auction parameters or product descriptions ) , which all may lead to differences in final price .but we can also see that the re model still suffers from heteroscedasticity ( much larger residual variance for high volume sellers compared to low volume sellers ) .figure [ fig : res2 wls ] shows the corresponding residuals of the wls approach . while we would have expected that wls tames the heteroscedasticity somewhat , it appears that model fit has become worse .( this is also supported by the much poorer values of -squared and aic . )one possible reason is that weights have to be chosen by the user ( inversely proportional to seller volume , in our case ) , which may not result in the most appropriate weighting of the data .none of the proposed modeling alternatives so far have lead to models with reasonable residuals or economically defendable conclusions .in fact , we have seen that the model fit differs systematically by the seller volume .we take this as evidence that the data may be segmented into different _seller volume clusters_. we have seen earlier ( e.g. , figures [ fig : net 2 ] and [ fig : net 4 ] ) that sellers of different magnitude exhibit quite different effects on bidders. we will thus now first cluster the data and then model each data - segment separately .we first cluster the data by seller volume ( low , medium and high ) and then apply ols regression within each segment , resulting in three different regression models , one for each segment .we select the clusters with the objective of minimizing the residuals mean squared errors within each cluster .this results in the following three segments : low volume sellers40 transactions or less ; medium volume sellers40350 transactions ; high volume sellers more than 350 transactions .figure [ fig : res cluster ] shows the residuals of the resulting three models .we can see that the model fit is much better compared to the previous modeling approaches . in each segmentthe magnitude of the residuals is very small , all residuals scatter around the origin , and we also no longer find evidence for heteroscedasticity in any of the three segments .in fact , the model fit statistics ( see table [ reg : cluster ] ) suggest that the segmentation approach leads to a much better representation of the data compared to either ols , re or wls models. table [ reg : cluster ] shows the parameter estimates for each segment .we can see that the relationship between loyalty , trust and price varies from segment to segment .in fact , while for the low volume sellers the significance of all seller - related variables ( feedback , loyalty or volume ) is low , both feedback and volume are much more significant than loyalty .( note the much smaller -values of seller feedback and volume . 
)this suggests that while seller - related actions may not play much of a role for low volume sellers ( such as rookie sellers and sellers that are new to the market ) , trust is much more important compared to loyalty .this makes sense as low volume sellers have not much of a chance to establish a loyal customer base due to the infrequency of their transactions .this is different for medium volume sellers . for medium volume sellers , loyalty and volumeare more significant than feedback .this suggests that with increasing frequency of transactions , repeat transactions ( i.e. , loyalty ) have a more dominant effect on a seller s bottom line .this effect is even more pronounced for high volume sellers .this suggests that high volume sellers are most affected by the actions of repeat customers .it is also interesting that both feedback and loyalty are significant for high volume sellers .this suggests that in the presence of two sellers with the same reputation , buyers `` act with memory '' and return to repeat their previous shopping experience . in order to precisely quantify the effect of loyalty in each segment , consider table [ tblols ] . in that table , we present , for each of the 5 theoretical loyalty distributions from figure [ fig : theoretic loyalty ] , their corresponding combined effect on the regression model .that is , we compute the combined effect of pc1 and pc2 , holding all other variables in the model constant .300pt.2cc@ * coefficient * & & * std .error * & * -value * + ( intercept ) & -0.51 & 0.36 & 0.16 + auction duration & 0.04 & 0.02 & 0.10 + log(item quantity 1 ) & -0.06 & 0.10 & 0.50 + bid count & 0.12 & 0.01 & 0.00 + log(pieces ) & 0.15 & 0.06 & 0.02 + size & 0.06 & 0.03 & 0.05 + log(seller feedback 1 ) & 0.08 & 0.04 & 0.07 + loyalty - pc1 & -0.22 & 0.16 & 0.18 + loyalty - pc2 & -0.03 & 0.13 & 0.82 + log(volume ) & -0.10 & 0.06 & 0.07 + [ 3pt ] aic & & & + -squared & 0.76 & & + [ 6pt ] ( intercept ) & -0.49 & 0.38 & 0.20 + auction duration & -0.05 & 0.01 & 0.00 + log(item quantity 1 ) & -0.11 & 0.19 & 0.58 + bid count & 0.17 & 0.01 & 0.00 + log(pieces ) & 0.03 & 0.01 & 0.05 + size & 0.01 & 0.01 & 0.58 + log(seller feedback 1 ) & -0.01 & 0.02 & 0.65 + loyalty - pc1 & -0.24 & 0.12 & 0.04 + loyalty - pc2 & 0.28 & 0.19 & 0.15 + log(volume ) & 0.33 & 0.08 & 0.00 + [ 3pt ] aic & + -squared & 0.75 + [ 6pt ] ( intercept ) & 1.68 & 0.10 & 0.00 + auction duration & 0.02 & 0.00 & 0.00 + log(item quantity 1 ) & -0.25 & 0.04 & 0.00 + bid count & 0.05 & 0.00 & 0.00 + log(pieces ) & 0.39 & 0.00 & 0.00 + size & 0.06 & 0.00 & 0.00 + log(seller feedback 1 ) & 0.42 & 0.01 & 0.00 + loyalty - pc1 & 7.39 & 0.16 & 0.00 + loyalty - pc2 & -9.41 & 0.17 & 0.00 + log(volume ) & -0.78 & 0.02 & 0.00 + [ 3pt ] aic & + -squared & 0.83 + .2d2.2d2.2d2.2d2.2@ & & & & & + cluster 1 ( low ) & -0.14 & -0.12 & -0.07 & 0.00 & 0.14 + cluster 2 ( medium ) & 0.07 & 0.03 & -0.10 & 0.11 & 0.15 + cluster 3 ( high ) & -2.60 & -1.29 & 3.14 & -3.61 & -4.62 + we can see that in clusters 1 and 2 , the effect of loyalty is considerably small , consistent with the small ( and insignificant ) coefficients for low and medium volume sellers in table [ reg : cluster ] . for cluster 3 ,it is interesting that only the distribution corresponding to _ somewhat loyal _ buyers results in a positive price effect .in fact , we can see that extreme loyalty ( i.e. 
, the distributions for both _ pure loyalty _ and _ pure disloyalty _ ) has negative implications for a seller s bottom line .while the effect of disloyal bidders is easier to explain ( disloyal bidders may `` shop around '' more actively in the search for lower prices and , as a result , drive down a seller s revenue ) , the negative effect of purely loyal bidders may be due to the fact that a bidder who exclusively interacts with the same seller may form an opinion about that seller s `` going price '' which results in a less competitive auction process ( and thus renders the transaction into a fixed - price transaction ) .thus , our results show that the effect of loyalty is surprisingly `` nonlinear '' in that a mix of somewhat loyal bidders results in the most competitive auction environment and thus the highest price for the seller .another way of quantifying the impact of loyalty is via the difference between pure loyalty and pure disloyalty .notice that the difference in estimated coefficients equals ( 2 , which ( as the response is on the log - scale ) implies that , all else equal , sellers with a purely loyal customer base extract price premiums 200% higher compared to sellers with a purely disloyal customer base .in this paper we investigate loyalty of online transactions . loyalty is an important element to many business models , and it is especially difficult to manage in the online domain where consumers are offered different choices that are often only a mouse - click away .we study loyalty in online auctions .we derive online loyalty from the network of sellers and bidders and find that while bidder s loyalty can have a strong impact on the outcome of an auction , the magnitude of its impact varies depending on the size of the seller .we want to point out that while we find that loyalty has a strong effect on price , we do not determine the cause of loyalty .a buyer s loyalty can have many different causes such as a high - quality product , a speedy delivery , or an otherwise seamless service . while loyalty could also be caused by price itself ( i.e. , a buyer returning to the same seller because of a low price ) , it is unlikely in our setting due to the auction process .recall that in an auction the price is not fixed .thus , a seller offering a top notch product and an outstanding service will sooner or later see an increase in bidders and , as a result , more competition and thus a higher price for her product .thus , loyalty is unlikely to be caused merely by bargain sellers.=1 also , we want to emphasize that while we find many repeat transactions between the same seller bidder pair in our data , the frequency of these repeat interactions may depend on the type of product and the buyer s demand for this product . in our case ( beads , i.e. , arts and crafts ) ,buyers have frequently re - occurring demand for the same product and , hence , the chances that a buyer will seek out the same seller rise drastically . on the other hand , if we were to consider the market for a product in which repeat transactions are less common ( such as computers , digital cameras , automobiles , etc . ) , our loyalty networks would likely not be as dense. 
nevertheless , it would be equally important for sellers to understand what factors drive consumers to spend money and we believe that loyalty networks are one way to address that question .there are several statistical challenges when studying loyalty networks .first , deriving quality measures from the observed networks requires a method that can capture both the intensity as well as the size of loyalty .we accomplish this using ideas from functional data analysis .second , modeling the effect of loyalty is complicated by the extreme skew of loyalty networks .our analysis shows that many different approaches can lead to model misspecification and , as a consequence , to economically wrong conclusions .similar problems likely exist in other studies on online markets ( e.g. , those that study seller feedback or reputation where one also records repeat observations on the same seller ) .our analysis leads us to conclude that the data is too segmented to be treated by a single model and thus propose a data - clustering approach .another statistical challenge revolves around _ sampling _ bidder seller networks .as pointed out earlier , we have the complete set of bidding records for a certain product ( swarovski beads , in this case ) for a certain period of time ( 6 months ) . as a result , we have the complete bidder seller network for this product , for this time frame . while sampling would be an alternative , it would result in an incomplete network ( since we would no longer observe all nodes / arcs ) . as a result, we would no longer be able to compute loyalty without error , which would bring up an interesting statistical problem .but we caution that sampling would have to be done very carefully . while one could , at least in theory , sample randomly across all different ebay categories , it would bring up several problems .the biggest problem is that we would now be attempting to compare loyalty across different product types .for instance , we would be comparing , say , a bidder s loyalty for purchasing beads ( a very low price , low stake item ) with that of purchasing digital cameras , computers , or even automobiles ( all of which are high price and high stakes ) , which would be conceptually very questionable .we also want to mention that we treat the bidder seller network as _ static _ over time .our data spans a time - frame of only 6 months and we assume that loyalty is static over this time - frame .this assumption is not too unrealistic as many marketing models consider loyalty to be static over much longer time frames [ ; ; ] . while incorporating a temporal dimension ( e.g. , by using a network with a sliding window or via down - weighting older interactions ) would be an intriguing statistical challenge , it is not quite clear how to choose the width of the window or the size of the weights . moreover , we also explicitly tested for learning effects by buyers over time and could not find any strong statistical evidence for it . 
and finally , in this work we address one specific kind of network dependence , namely , that between buyers and sellers .we argue that the lack of independence among observations on the same sellers leads to a clustering - effect and we investigate several remedies to this challenge .however , the dependence structure may in fact be far more complex .as bidders are linked to sellers which , in turn , are linked again to other bidders , the true dependence structure among the observations may be far more complex .this may call for innovative statistical methodology and we hope to have sparked some new ideas with our work .bailey , j. , gao , g. , jank , w. , lin , m. , lucas , h. c. and viswanathan , s. ( 2008 ) .the long tail is longer than you think : the surprisingly large extent of online sales by small volume sellers .technical report , rh smith school of business , university of maryland .available at ssrn : http://ssrn.com/abstract=1132723 .donkers , b. , verhoef , p. c. and de jong , m. ( 2003 ) .predicting customer lifetime value in multi - service industries .technical report , erim report series reference no .ers-2003 - 038-mkt .available at ssrn : http://ssrn.com/abstract=411666 .jank , w. and zhang , s. ( 2008 ) .an automated and data - driven bidding strategy for online auctions . technical report , rh smith school of business , univ .available at ssrn : http://ssrn.com/abstract=1427212 . | creating a loyal customer base is one of the most important , and at the same time , most difficult tasks a company faces . creating loyalty online ( e - loyalty ) is especially difficult since customers can `` switch '' to a competitor with the click of a mouse . in this paper we investigate e - loyalty in online auctions . using a unique data set of over 30,000 auctions from one of the main consumer - to - consumer online auction houses , we propose a novel measure of e - loyalty via the associated network of transactions between bidders and sellers . using a bipartite network of bidder and seller nodes , two nodes are linked when a bidder purchases from a seller and the number of repeat - purchases determines the strength of that link . we employ ideas from functional principal component analysis to derive , from this network , the loyalty distribution which measures the perceived loyalty of every individual seller , and associated loyalty scores which summarize this distribution in a parsimonious way . we then investigate the effect of loyalty on the outcome of an auction . in doing so , we are confronted with several statistical challenges in that standard statistical models lead to a misrepresentation of the data and a violation of the model assumptions . the reason is that loyalty networks result in an extreme clustering of the data , with few high - volume sellers accounting for most of the individual transactions . we investigate several remedies to the clustering problem and conclude that loyalty networks consist of very distinct segments that can best be understood individually . . |
tunneling is a supreme quantum effect .every introductory text on quantum mechanics gives the paradigm example of a particle tunneling through a one - dimensional potential barrier despite having a total energy less than the barrier height .indeed , the reader typically works through a number of excercises , all involving one - dimensional potential barriers of one form or another modelling several key physical phenomena ranging from atom transfer reactions to the decay of alpha particles . however , one seldom encounters coupled multidimenisonal tunneling in such texts since an analytical solution of the schrdinger equation in such cases is not possible .interestingly , the richness and complexity of the tunneling phenomenon manifest themselves in full glory in the case of multidimensional systems .thus , for instance , the usual one - dimensional expectation of increasing tunneling splittings as one approaches the barrier top from below is not necessarily true as soon as one couples another bound degree of freedom to the tunneling coordinate . in the context of molecular reaction dynamics ,multidimensional tunneling can result in strong mode - specificity and fluctuations in the reaction rates .in fact , a proper description of tunneling of electrons and hydrogen atoms is absolutely essential even in molecular systems as large as enzymes and proteins .although one usually assumes tunneling effects to be significant in molecules involving light atom transfers it is worth pointing out that neglecting the tunneling of even a heavy atom like carbon is the difference between a reaction occuring or not occuring . in particular, one can underestimate rates by nearly hundred orders of magnitude .interestingly , and perhaps paradoxically , several penetrating insights into the nature and mechanism of multidimensional barrier tunneling have been obtained from a phase space perspective .the contributions by creagh , shudo and ikeda , and takahashi in the present volume provide a detailed account of the latest advances in the phase space based understanding of multidimensional barrier tunneling . what happens if there are no coordinate space barriers ?in other words , in situations wherein there are no static energetic barriers separating reactants " from the products " does one still have to be concerned about quantum tunneling ?one such model potential is shown in fig .[ fig1 ] which will be discussed in the next section .here we have the notion of reactants and products in a very general sense .so , for instance , in the context of a conformational reaction they might correspond to the several near - degenerate conformations of a specific molecule . naively one might expect that tunneling has no consequences in such cases .however , studies over last several decades have revealed that things are not so straightforward . 
despite the lack of static barriers ,the dynamics of the system can generate barriers and quantum tunneling can occur through such dynamical barriers .this , of course , immediately implies that dynamical tunneling is a very rich and subtle phenomenon since the nature and number of barriers can vary appreciably with changes in the nature of the dynamics over the timescales of interest .this would also seem to imply that deciphering the mechanism of dynamical tunneling is a hopeless task as opposed to the static potential barrier case wherein elegant approximations to the tunneling rate and splittings can be written down .however , recent studies have clearly established that even in the case of dynamical tunneling it is possible to obtain very accurate approximations to the splittings and rate . in particular , it is now clear that unambiguous identification of the local dynamical barriers is possible only by a detailed study of the structure of the underlying classical phase space .the general picture that has emerged is that dynamical tunneling connects two or more classically disconnected regions in the phase space . more importantly , and perhaps ironically , the dynamical tunneling splittings and rates are extremely sensitive to the various phase space structures like nonlinear resonances , chaos and partial barriers .it is crucial to note that although purely quantum approaches can be formulated for obtaining the tunneling splittings , any mechanistic understanding requires a detailed understanding of the phase space topology . in this sense, the phenomenon of dynamical tunneling gets intimately linked to issues related to phase space transport .thus , one now has the concept of resonance - assisted tunneling ( rat ) and chaos - assisted tunneling ( cat ) and realistic systems typically involve both the mechanisms .since the appearance of the first book on the topic of interest more than a decade ago , there have been several beautiful experimental studies that have revealed various aspects of the phenomenon of dynamical tunneling .the most recent one by chaudhury __ realizes the paradigmatic kicked top model using cold atoms and clearly demonstrate the dynamical tunneling occuring in the underlying phase space .interestingly , good correspondence between the quantum dynamics and classical phase space structures is found despite the system being in a deep quantum regime . as another example , i mention the experimental observation by flling __ of second order co - tunneling of interacting ultracold rubidium atoms in a double well trap .the similarities between this system and the studies on dynamical tunneling using molecular effective hamiltonians is striking .in particular , the description of the cold atom study in terms of superexchange ( qualitative and quantitative ) is reminiscent of the early work by stuchebrukhov and marcus on understanding the role of dynamical tunneling in the phenomenon of intramolecular energy flow .further details on the experimental realizations can be found in this volume in the articles by steck and raizen , and hensinger .an earlier review provides extensive references to the experimental manifestations of dynamical tunneling in molecular systems in terms of spectroscopic signatures .undoubtedly , in the coming years , one can expect several other experimental studies which will lead to a deeper understanding of dynamical tunneling and raise many intriguing issues related to the subject of classical - quantum correspondence . 
as remarked earlier , it seems ironic that a pure quantum effect like tunneling should bear the marks of the underlying classical phase space structures . however , it is useful to recall the statement by heller that tunneling is only meaningful with classical dynamics as the baseline . thus , insights into the nature of the classical dynamics translate into a deeper mechanistic insight into the corresponding quantum dynamics . indeed , one way of thinking about classical - quantum correspondence is that classical mechanics is providing us with the best possible " basis " to describe the quantum evolution . the wide range of contributions in this volume is a testimony to the richness of the phenomenon of dynamical tunneling and the utility of such a classical - quantum correspondence perspective . in this article i focus on the specific field of quantum control and show how dynamical tunneling can lead to useful insights into the control mechanism . the hope is that more such studies will eventually result in control strategies which are firmly rooted in the intuitive classical world , yet account for the classically forbidden pathways and mechanisms in a consistent fashion . this is a tall order , and some may even argue it is an unnecessary effort in these days of fast computers and smart and efficient algorithms to solve fairly high dimensional quantum dynamics . however , in this context , it is useful to remember the following , written by born , heisenberg , and jordan nearly eighty years ago : " _ the starting point of our theoretical approach was the conviction that the difficulties that have been encountered at every step in quantum theory in the last few years could be surmounted only by establishing a mathematical system for the mechanics of atomic and electronic motions , which would have a unity and simplicity comparable with the system of classical mechanics ... further development of the theory , an important task will lie in the closer investigation of the nature of this correspondence and in the description of the manner in which symbolic quantum geometry goes over into visualizable classical geometry . _ " the above remark was made in an era when computers were nonexistent . nevertheless , it is remarkably prescient since even in the present era one realizes the sheer difficulty in implementing an all - quantum dynamical study on even relatively small molecules . in any case , it is not entirely unreasonable to argue that large scale quantum dynamical studies will still require some form of an implicit classical - quantum correspondence approach to grasp the underlying mechanistic details .
with the above remark in mind i start things off by revisiting the original paper by davis and heller since , in my opinion , it is ideal from the pedagogical point of view . three decades ago , davis and heller in their pioneering study gave a clear example of dynamical tunneling . a short recount of this work including the famous plots of the classical trajectories and the associated quantum eigenstates can be found in heller s article in this volume . however , i revisit this model here in order to bring forth a couple of important points that seem to have been overlooked in subsequent works . first , the existence of another class of eigenstate pairs , called circulating states , which can exert considerable influence on the usual tunneling doublets at higher energies . second , a remark in the original paper which can be considered as a harbinger for chaos - assisted tunneling . as shown below , there are features in the original model that are worth studying in some detail even after three decades since the original paper was published . the hamiltonian of choice is the two degrees of freedom ( 2dof ; in what follows the acronym dof stands for degrees of freedom ) barbanis - like model with the labels ` s ' and ` u ' denoting the symmetric and unsymmetric stretch modes respectively . the above hamiltonian has also been studied in great detail to uncover the correspondence between classical stability of the motion and quantum spectral features , wavefunctions , and energy transfer . the potential is symmetric with respect to as shown in fig . [ fig1 ] but there is no potential barrier . davis and heller used the parameter values , and for which the dissociation energy . note that the masses are taken to be unity and one is working in units such that . the key observation by davis and heller was that despite the lack of any potential barriers several bound eigenstates came in symmetric - antisymmetric pairs and with energy splittings much smaller than the fundamental frequencies _ i.e. , _ . in fig . [ fig1 ] the various splittings between adjacent eigenstates are shown and it is clear that several " tunneling " pairs appear above a certain threshold energy .
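since the explicit form of the hamiltonian and the parameter values have not survived in this copy of the text , the sketch below uses the standard barbanis - type form with unit masses and the values commonly quoted for the davis - heller study ( omega_s = 1.0 , omega_u = 1.1 , lambda = -0.11 ) ; these numbers are an assumption here . the code evaluates the saddle ( dissociation ) energy of the potential and collects the poincaré surface of section used in the figures discussed below .

```python
# minimal sketch of the barbanis-like model: potential, saddle (dissociation)
# energy, and the (u, p_u) poincare surface of section at s = 0.
# the parameter values below are commonly quoted for the davis-heller study
# and are an assumption here (unit masses, hbar = 1).
import numpy as np
from scipy.integrate import solve_ivp

w_s, w_u, lam = 1.0, 1.1, -0.11

def potential(s, u):
    return 0.5 * w_s**2 * s**2 + 0.5 * w_u**2 * u**2 + lam * s * u**2

# saddle point: dV/ds = 0 and dV/du = 0 with u != 0
s_x = -w_u**2 / (2.0 * lam)
u_x = np.sqrt(-w_s**2 * s_x / lam)
print("saddle (dissociation) energy:", potential(s_x, u_x))   # 15.125 for these values

def eom(t, y):
    s, u, ps, pu = y
    return [ps, pu, -(w_s**2 * s + lam * u**2), -(w_u**2 * u + 2.0 * lam * s * u)]

def section(y0, t_max=2000.0):
    """collect (u, p_u) each time a trajectory crosses s = 0 with p_s > 0."""
    sol = solve_ivp(eom, (0.0, t_max), y0, max_step=0.01, rtol=1e-9, atol=1e-9)
    s, u, ps, pu = sol.y
    hit = (s[:-1] < 0.0) & (s[1:] >= 0.0) & (ps[1:] > 0.0)   # crossings located to within one step
    return u[1:][hit], pu[1:][hit]
```

initial conditions at a fixed total energy can be generated by choosing ( u , p_u ) with s = 0 and solving the energy equation for p_s ; running a bundle of such trajectories produces sections of the kind discussed next .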
in the bottom rowsome of the chaotic orbits have been suppressed , for clarity of the figure , by showing them in gray.,width=566,height=377 ] how can one understand the onset of such near degeneracies in the system ?the crucial insight that davis and heller provided was that the appearance of such doublets is correlated with the large scale changes in the classical phase space .the nature of the phase space with increasing total energy is shown in fig .[ fig2 ] using the poincar surface of section .such a surface of section , following standard methods , is constructed by recording the points , with momentum , of the intersection of a trajectory with the the plane in the phase space .such a procedure for several trajectories with specific total energy generates a typical surface of section as shown in fig .[ fig2 ] and clearly indicates the global nature of the phase space .it is clear from the figure that the phase space for is mostly regular while higher energy phase spaces exhibit mixed regular - chaotic dynamics .one of the most prominent change happens when the symmetric stretch periodic orbit becomes unstable around . in fig .[ fig2 ] the consequence of such a bifurcation can be clearly seen for as the formation of two classically disjoint regular regions .in fact , the two regular regions signal a : resonance between the stretching modes .the crucial point to observe here is that classical trajectories initiated in one of the regular regions can not evolve into the other regular region . with increasing energythe classically disjoint regular regions move further apart and almost vanish near the dissociation energy .the result presented in fig .[ fig1 ] in fact closely mirrors the topological changes , shown in fig .[ fig2 ] , in the phase space . thus , in fig .[ fig1 ] a sequence of eigenstates with very small splittings begins right around the energy at which the symmetric periodic orbit becomes unstable .the unsymmetric stretch , however , becomes unstable at a higher energy and one can observe in fig .[ fig1 ] another sequence that seemingly begins near this point .note that at higher energies it is not easy to identify any more sequences , but small splittings are still observed .the important thing to note here is that not only do the splittings but the individual eigenstates also correlate with the changes in the phase space . in fig .[ fig3 ] we show a set of four eigenstates to illustrate an important point - as soon as the : resonance manifests itself in the phase space , the tunneling doublets start to form and an integrable normal form approximation is insufficient to account for the splittings .note that the phase space is mostly regular at the energies of interest .a much more detailed analysis can be found in the paper by farrelly and uzer .we begin by noting that the pair of eigenstates ( counting from the zero - point ) and are split by about whereas the pair and are separated by about . 
in both cases the splitting is smaller than the fundamental frequencies , which are of order unity .note that the latter pair of states appears to be a part of the sequence in fig .[ fig1 ] that starts right after the bifurcation of the symmetric stretch periodic orbit .since the original mode frequencies are nonresonant , one can obtain a normal form approximation to the original hamiltonian and see if the obtained splittings can be explained satisfactorily .in other words , the observed are coming from the perturbation that couples both the modes and in such a case there is no reason for classifying them as tunneling doublets .however , from the coordinate space representations of the states shown in fig .[ fig3 ] one observes that there is an important difference between the two pairs of states .the pair seems to have a perturbed nodal structure and hence one can approximately assign the states using the zeroth - order quantum numbers . inspecting the figure leads to the assignment and for states and respectively .if the above arguments are correct then the splitting should be obtainable from the normal form hamiltonian . since the theory of normal formsis explained in detail in several textbooks , i will provide a brief derivation below .begin by using the unperturbed harmonic action - angle variables defined via the canonical transformation ( similar set for mode ) : to express the original hamiltonian as where .since is purely oscillatory the angle average and thus to the normal form hamiltonian can be identified with the zeroth - order above .the first nontrivial correction arises at and can be obtained using the generating function where are the coefficients in the fourier expansion of the oscillatory part of .one now obtains the correction to the hamiltonian as with the bar denoting angle averaging of the poisson bracket involving and . performing the calculations the normal form at obtained as with \ ] ] the primitive bohr - sommerfeld quantization yields the quantum eigenvalues perturbatively to .the procedure can be repeated to obtain the normal form hamiltonian at higher orders .for instance , farrelly and uzer have computed the normal form out to and used pad resummation techniques to improve in cases when the zeroth - order frequencies are near - resonant . for our qualitative discussions ,the normal form is sufficient . : resonance is just starting to appear .the upper panel shows the coordinate space representations and the lower panels show the corresponding husimi distributions in the surface of section of the phase space .the first two states ( ) can be assigned approximate zeroth - order quantum numbers whereas the last two states ( ) show perturbed nodal features .clear difference in the phase space nature of the eigenstates can be seen .see text for details.,width=604,height=453 ] ) of the davis - heller system involving three states . as in the previous figure , the and phase space husimi representationsare also shown .the first and the last states appear to be strongly mixed states .the middle state seems relatively cleaner .does chaos play a role in this case ?detailed discussions in the text.,width=566,height=377 ] notice that the normal form hamiltonian above is integrable since it is ignorable in the angle variables . indeed , using the normal form and the approximate assignments of the eigenstates shown in fig . [ fig3 ] one finds which is in fair agreement with the actual numerical value .thus , a large part of the splitting can be explained classically . 
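the estimate just described is easy to reproduce schematically once a normal form is in hand : quantize each action as j = ( n + 1/2 ) hbar and difference the resulting energies . in the fragment below both the second - order coefficients and the zeroth - order assignments are hypothetical placeholders - they must be replaced by the coefficients from the angle - averaged construction described above and by the labels read off the computed eigenfunctions .

```python
hbar = 1.0

def normal_form(Js, Ju, ws=1.0, wu=1.1, lam=-0.11):
    """stand-in for the O(lam**2) normal form Hbar(J_s, J_u).

    H0 = ws*Js + wu*Ju is the exact zeroth-order piece; H2 below is a
    hypothetical placeholder and should be replaced by the coefficients
    obtained from the angle-averaged Poisson-bracket construction.
    """
    H0 = ws*Js + wu*Ju
    H2 = -(Ju**2)/(2.0*ws) - (Js*Ju)/(wu - ws)   # placeholder only
    return H0 + lam**2*H2

def ebk(ns, nu):
    """primitive Bohr-Sommerfeld energy: J = (n + 1/2)*hbar for each mode."""
    return normal_form((ns + 0.5)*hbar, (nu + 0.5)*hbar)

# splitting between two states assigned zeroth-order labels (ns, nu);
# the labels used here are illustrative, not those of the actual doublet
dE = abs(ebk(4, 2) - ebk(2, 4))
print(f"normal-form estimate of the near-degenerate splitting: {dE:.3e}")
```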
on the other hand , although the states and seem to have an identifiable nodal structure ( cf .[ fig3 ] ) , it is clear that they are significantly perturbed .nevertheless , persisting with an approach based on counting the nodes , states and can be assigned as and respectively .using the normal form one estimates and this is about a factor of two larger than the numerically computed value .thus , the splitting in this case is not accounted for solely by classical considerations .one might argue that a higher order normal form might lead to better agreement , but the phase space husimi distributions of the eigenstates shown in fig .[ fig3 ] suggests otherwise .the husimi disributions , when compared to the classical phase spaces shown in fig .[ fig2 ] , clearly show that the pair are localized in the newly created : resonance zone .hence , this pair of states is directly influenced by the nonlinear resonance and the splitting between them can not be accurately described by the normal form hamiltonian .indeed , as discussed by farrelly and uzer , in this instance one needs to consider a resonant hamiltonian which explicitly takes the : resonance into account .this provides a clear link between dynamical tunneling , appearance of closely spaced doublets and creation of new , in this case a nonlinear resonance , phase space structures . the choice of states in fig .[ fig3 ] is different from what is usually shown as the standard example for dynamical tunneling pairs .however , in the discussion above the states were chosen intentionally with the purpose of illustrating the onset of near - degeneracy due to the formation of a nonlinear resonance . in a typical situation involving near - integrable phase spacesthere are several such resonances ranging from low orders to fairly high orders .the importance of a specific resonance depends sensitively on the effective value of the planck s constant .indeed , detailed studies have shown that excellent agreement with numerically computed splittings can be obtained if proper care is taken to include the various resonances .clear and striking examples in this context can be found in the contributions by schlagheck _et al . _ and bcker _ et al . _in this volume .to finish the discussion of the davis - heller system i show an example which involves states forming tunneling pairs that are fairly complicated both in terms of their coordinate space representations as well as their phase space husimi distributions .the example involves three states , , and around , which is rather close to the dissociation energy . in the original work davis andheller noted that the splittings seem to increase by an order of magnitude in this high energy region .notably , they commented that `` _ perhaps the degree of irregularity between the regular regions plays some part _ '' . in fig .[ fig4 ] the variation of energy levels with the coupling parameter is shown .one can immediately see that the three states are right at the center of an avoided crossing .the coordinate space representations of the states shows extensive mixing for states and while the state seems to be cleaner .the husimi distributions for the respective states conveys the same message .comparing to the phase space sections shown in fig .[ fig2 ] it is clear that the husimis for state and seem to be ignoring the classical regular - chaotic divison - a clear indication of the quantum nature of the mixing . 
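the husimi representations referred to throughout are overlaps of the eigenstates with minimum - uncertainty coherent states . the sketch below is a schematic one - dimensional version ( the surface - of - section husimis shown in the figures require an additional projection step ) ; the grids , the width parameter and the dropped normalization constants are assumptions made purely for illustration .

```python
import numpy as np

def husimi_1d(x, psi, xgrid, pgrid, sigma=1.0, hbar=1.0):
    """husimi distribution |<z|psi>|^2 of a 1-d state psi sampled on the grid x.

    the coherent states are minimum-uncertainty gaussians of width sigma
    centred at (x0, p0); overall normalization constants are dropped since
    only the relative phase-space weight matters for plots of this kind.
    """
    dx = x[1] - x[0]
    H = np.empty((len(xgrid), len(pgrid)))
    for i, x0 in enumerate(xgrid):
        g = np.exp(-(x - x0)**2/(4.0*sigma**2))
        for j, p0 in enumerate(pgrid):
            z = g*np.exp(1j*p0*x/hbar)
            H[i, j] = np.abs(np.sum(np.conj(z)*psi)*dx)**2
    return H

# usage: psi could be one of the numerically computed eigenvectors interpolated
# onto x; xgrid and pgrid define the phase-space window to compare with fig. 2
```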
although it is not possible to strictly assign these states as chaotic , a closer inspection does show substantial husimi contribution at the border between the regular and chaotic regions . interestingly , the pairwise splitting between the states is nearly the same , but localized linear combinations of any two states exhibit two - level dynamics . thus , the situation here is not the generic chaos - assisted tunneling one wherein one of the states is chaotic and interacts with the other two regular states . nevertheless , linear combinations of the three states reveal ( not shown here ) that they are mixed with each other . a look at the coordinate and phase space representations of states and in fig . [ fig4 ] reveals that a different kind of state is causing the three - way interaction . it turns out that in the davis - heller model there is another class of symmetry - related pairs that appear when the unsymmetric mode becomes unstable . in the original work they were referred to as `` circulating '' states , which displayed much larger splittings than the so - called local mode pairs . the case shown in fig . [ fig4 ] involves one of the circulating pairs interacting with the usual local mode doublet , leading to the complicated three - way interaction . the subtle nature of this interaction is evident from the coordinate space representation of state , which exhibits a broken symmetry . interestingly , and as far as i can tell , there has been very little understanding of such three - state interactions in the davis - heller system , and further studies are needed to shed some light on the phase space nature of the relevant eigenstates . in the previous section the intimate connection between dynamical tunneling and phase space structures was introduced . the importance of a nonlinear resonance , and a hint of the role played by the chaos ( cf . [ fig4 ] ) , is evident . several other contributions in this volume discuss the importance of various phase space structures using different models , both continuous hamiltonians and discrete maps . in the molecular context , various mode - mode resonances play a critical role in the process of intramolecular vibrational energy redistribution ( ivr ) . the phenomenon of ivr is at the heart of chemical reaction dynamics , and it is now well established that molecules at high levels of excitation display all the richness , complexity and subtlety that is expected from the nonlinear dynamics of multidimensional systems . in this context , dynamical tunneling is an important agent of ivr and state mixing for a certain class of initial states ( akin to the so - called noon states ) which are typically prepared by the experiments . the importance of the anharmonic resonances to ivr , and the regimes wherein dynamical tunneling mediated by these resonances is expected to be crucial , is described in some detail in this volume by leitner . the phase space perspective on leitner s viewpoint can be found in a recent review ( see also heller s contribution in the present volume ) .
in this regardit is interesting to note that there has been a renaissance of sorts in chemical dynamics with researchers critically examining the validity of the two pillars of reaction rate theory - transition state theory ( tst ) and the rice - ramsperger - kassel - marcus ( rrkm ) theory .since both theories have classical dynamics at their foundation , advances in our understanding of nonlinear dynamics and continuing efforts to characterize the phase space structure of systems with three or more degrees of freedom are beginning to yield crucial mechanistic insights into the dynamics . at the same time, rapid advances in experimental techniques and theoretical understanding of the reaction mechanisms has led researchers to focus on the issue of controlling the dynamics of molecules .what implications might dynamical tunneling have on our efforts to control the atomic and molecular dynamics ? in the rest of this articlei focus on this issue and use two seemingly simple and well studied systems as examples to highlight the role of dynamical tunneling in the context of coherent control .both examples are in the context of periodically driven systems and i refer the reader to the work of flatt and holthaus for an exposition of the close quantum - classical correspondence in such systems . historically , an early indication that dynamical tunneling could be sensitive to the chaos in the underlying phase space came from the study of strongly driven double well potential by lin and ballentine .the model hamiltonian in this case can be written down as with being the frequency of the monochromatic field ( henceforth refered to as the driving field ) and the unperturbed part is the hamiltonian corresponding to a double well potential with two symmetric minima at and a maximum at .following the original work , the parameters of the unperturbed system ( assuming atomic units ) are taken to be , , and for which the potential has a barrier height and supports about eight tunneling doublets .as is well known , in the absence of the driving field , a wavepacket prepared in the left well can coherently tunnel into the right well with the time scale for tunneling being inversely proportional to the tunnel splitting .this unperturbed scenario , however , is significantly altered in the presence of a strong driving field . in the presence of a strong fieldthe phase space of the system exhibits large scale chaos coexisting with two symmetry - related regular regions .lin and ballentine observed that a coherent state localized in one of the regular region tunnels to the other symmetry - related regular region on timescales which are orders of magnitude smaller than in the unperturbed case .it was suspected that the extensive chaos in the system might be assisting the tunneling process . .a coherent state localized in the left regular island tunnels ( indicated by a green arrow ) over to the right regular island . 
the chaotic regions ( gray )have been suppressed for clarity .( b ) monitoring the survival probability of the initial state to determine the timescale of tunneling ( in this case ) .snapshots of the evolving husimi distributions are also shown at specific intervals , with green ( gray ) indicating maxima of the distributions .( c ) the decay time determined as in ( b ) for a variety of field strengths with the initial state localized in the left island .note the fluctuations in the decay time over several orders of magnitude despite very similar nature of the phase spaces over the entire range of the driving field strength .the arrows highlight some of the plateau regions which are crucial for bichromatic control.,width=604,height=529 ] in order to illustrate the tunneling process fig .[ fig5]a shows the stroboscopic surface of section for the case of strong driving with .note the extensive chaos and the two regular islands ( left and right ) in the phase space .a coherent state is placed in the center of the left island and time evolved using the floquet approach which is ideally suited for time - periodic driven systems . in this instance one is interested in the time at which the coherent state localized on the left tunnels over to the regular region on the right . in order to obtain this informationit is necessary to compute the survival probability of the initial coherent state .briefly , floquet states \{ } are eigenstates of the hermitian operator and form a complete orthonormal basis .an arbitrary time - evolved state can be expressed as with being the quasienergy associated with the floquet state .the expansion coefficients are independent of time and given by yielding the expansion measuring time in units of field period ( ) and owing to the periodicity of the floquet states , , the above equation simplifies to with and integer . the time evolution operator is determined by successive application of the one - period time evolution operator _ i.e. _ , ^k.\ ] ] this allows us to express the survival probability of the initial coherent state in terms of floquet states as where the overlap intensities are denoted by . in order to determine the `` lifetime '' ( ) of the initial coherent state, we monitor the time at which goes to zero ( minimum ) at the first instance .in other words , it is the time when the coherent state leaves its initial position _i.e _ , the left regular island of the phase space for the first time . in fig .[ fig5]b the for the initial state of interest is shown along with the snapshots of husimi distribution at specific times . clearly , the initial state tunnels in about , which suggests that chaos assists the dynamical tunneling process . 
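the survival probability defined above can be computed by propagating the coherent state stroboscopically with a split - operator scheme and recording the overlap with the initial state once per field period ; a crude first - minimum estimate of the decay time then follows . in the sketch below the double - well parameters , the field parameters and the centre and width of the coherent state are placeholders , not the lin - ballentine values quoted ( and omitted ) above .

```python
import numpy as np

hbar, m = 1.0, 1.0
B, D = 0.5, 10.0                 # placeholder double well: V0(x) = B x**4 - D x**2
lam, wf = 3.0, 6.0               # placeholder driving strength and frequency
T = 2.0*np.pi/wf
N, L, nsub = 1024, 30.0, 400     # grid points, box size, steps per field period

x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0*np.pi*np.fft.fftfreq(N, d=dx)
dt = T/nsub
kin = np.exp(-1j*hbar*k**2*dt/(2.0*m))

def V(t):
    return B*x**4 - D*x**2 + lam*x*np.cos(wf*t)

def step(psi, t):
    """second-order split-operator step: half potential, full kinetic, half potential."""
    psi = np.exp(-0.5j*V(t)*dt/hbar)*psi
    psi = np.fft.ifft(kin*np.fft.fft(psi))
    return np.exp(-0.5j*V(t + dt)*dt/hbar)*psi

# coherent state centred on the left regular island (centre and width are placeholders)
x0, p0, sig = -3.0, 0.0, 0.4
psi0 = np.exp(-(x - x0)**2/(4.0*sig**2) + 1j*p0*x/hbar)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2)*dx)

psi, surv = psi0.copy(), []
for n in range(500):                       # follow 500 field periods
    for s in range(nsub):
        psi = step(psi, (n*nsub + s)*dt)
    surv.append(np.abs(np.sum(np.conj(psi0)*psi)*dx)**2)
surv = np.array(surv)

# crude decay time: first local minimum of the stroboscopic survival probability
mins = np.where((surv[1:-1] < surv[:-2]) & (surv[1:-1] < surv[2:]))[0]
t_decay = mins[0] + 2 if mins.size else None   # in units of the field period
```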
however , the issue is subtle and highlighted in fig .[ fig5]c which shows the decay time plot for a range of driving field strengths for an intial state localized in the left regular island .it is important to note that the gross features of the phase space are quite similar over the entire range .however , fig .[ fig5]c exhibits strong fluctuations over several orders of magnitude and this implies that a direct association of the decay time with the extent of chaos in the phase space is not entirely correct .( black ) and for ( blue ) and ( red ) as a function of .the local phase structure in the vicinity of the initial coherent state and the survival probability for fixed ( indicated by green line in ( a ) ) corresponding to red shifted , fundamental and blue shifted field frequency is shown in panels ( b ) and ( c ) respectively .the : field - matter resonance in case of is clearly visible and correlates with the very short decay time as compared to the other two cases .note the different time axis scale in the first survival probability plot.,width=529,height=340 ] the above discussion and results summarized in fig .[ fig5 ] bring up the following key question .what is the mechanism by which the initial state decays out of the regular region ? in turn , this is precisely the question that modern theories of dynamical tunneling strive to answer . according to the theory of resonance - assisted tunneling , the mechanism is possibily one wherein couples to the chaotic sea via one or several nonlinear resonances provided certain conditions are satisfied .specifically , the local structure of the phase space surrounding the regular region is expected to play a critical role .the theoretical underpinnings of rat along with several illuminating examples can be found in the contribution by schlagheck _and here we suggest a simple numerical example which points to the importance of the local phase space structure around . preliminary evidence for the role of field - matter nonlinear resonances in controlling the decay of is given in fig .[ fig6 ] , which shows the effect of changing the driving field frequency on the local phase space structures and the decay times . from fig .[ fig6 ] it is apparent that detuning by leads to a significant change in the local phase space structure and the decay time of the initially localized state . 
in fig .[ fig6](b ) , corresponding to the field frequency , a prominent : field - matter resonance is observed .it is plausible that the the decay time is only a few hundred field periods in this case due to assistance from the nonlinear resonance .however , the decay time increases for and becomes even larger by an order of magnitude for , due to absence of the : resonance .this indicates that decay dynamics is highly sensitive to the changes in local phase space structure of the left regular island .therefore , taking into account the relatively large order resonances , since , is unavoidable in order to understand the decay time plot in fig .[ fig5 ] and for smaller values of the effective planck constant one expects a more complicated behavior .note , however that the very high order island chain visible in the last case in fig .[ fig6 ] is unable to assist the decay .this is where we believe that an extensive -scaling study will help in gaining a deeper understanding of the decay mechanism .such an extensive calculation can be found , for example , in the recent work by mouchet , eltschka , and schlagheck on the driven pendulum .there is sufficient evidence in the driven pendulum system for a mechanism in which nonlinear resonances play a central role in coupling initial states localized in regular phase space regions to the chaotic sea .one might be able to provide a clear qualitative and quantitative explanation for the results in fig .[ fig5]c based on the recent advances . and primary field strength ( a ) and ( b ) . as in the earlier plots ,the chaotic regions have been suppressed ( in gray ) for clarity .note the breaking of the symmetry in both cases .however , significant suppression of the deacy of the state localized in the left regular island happens in case ( b ) only.,width=566,height=340 ] this brings us to the second important issue - what is the role of chaos in this dynamical tunneling process ?an earlier work by utermann , dittrich and hnggi on the driven double well system showed that there is indeed a strong correlation between the splittings of the floquet states and the overlaps of their husimi distributions with the chaotic regions in the phase space .however , a different perspective yields clear insights into the role of chaos with potential implications for coherent control . in order to highlight this perspective i start with a simple question : _ is it possible to control the decay of the localized initial state with an appropriate choice of a control field ? _ it is important to note that the lin - ballentine system parameters imply that one is dealing with a multilevel control scenario and there has been a lot of activity over the last few years to formulate control schemes involving multiple levels in both atomic and molecular systems . the driven double well system has been a particular favorite in this regard , more so in recent times due to increased focus on the physics of trapped bose - einstein condensates .more specifically , several studies have explored the possibility of controlling various atomic and molecular phenomenon using bichromatic fields with the relative phase between the fields providing an additional control parameter . 
in the present context ,for example , sangouard _ et al ._ exploited the physics of adiabatic passage to show that an appropriate combination of field leads to suppression of tunneling in the driven double well model .the choice of relative phase between the two fields allowed them to localize the initial state in one or the other well .however , the parameter regimes in the work by sangouard _ et al . _correspond to the underlying phase space being near - integrable and hence a minimal role of the chaotic sea .more relevant to the mixed regular - chaotic phase space case presented in fig .[ fig5 ] is an earlier work by farrelly and milligan wherein it was demonstrated that one can suppress the tunneling dynamics in a driven double well system using a bichromatic field . in other wordsthe original hamiltonian of eq .[ linbalham ] is modified as following with being the same as in eq .[ lbh0ham ] and the additional -field is taken to be the control field . moreover , modulating the turn - on time of the control field can trap the wavepacket in the left or right well of the double well potential .hence , it was argued that the tunneling dynamics in a driven double well can be controlled at will for specific choices of the control field parameters .note that in the presence of control field _i.e. , _ with the relative phase the hamiltonian in eq .[ farmilham ] transforms under symmetry operations as \nonumber \\ &= & h_{0}(x , p)-x[-\lambda_{1}\cos(\omega_{f}t)+\lambda_{2}\cos(2\omega_{f}t ) ] \\ & \neq & h(x , p;t ) .\nonumber\end{aligned}\ ] ] similarly , except at , the discrete symmetry of the hamiltonian is broken under the influence of the additional -field .farrelly and milligan thus argued that the control field with strength smaller than the driving field will lead to localization due to the breaking of the generalized symmetry of the hamiltonian and the floquet states .the impact of a small symmetry breaking control field with can be clearly seen in fig .[ fig7 ] in terms of the changes in the classical phase space structures . however , there is a subtlety which is not obvious upon inspecting the phase spaces shown in fig .[ fig7 ] for two different but close values of the driving field strength . in case of corresponding to fig .[ fig7](a ) , the control field is unable to suppress the decay of the initial state localized in the left regular region .on the other hand , fig .[ fig7](b ) corresponds to and computations show that the control field is able to suppress the decay of to an appreciable extent .thus , although in both cases the control field breaks the symmetry and the resulting phase spaces show very similar structures , the extent of control exerted by the -field is drastically different . 
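the generalized - parity argument invoked above can be checked numerically in a few lines : under the operation ( x , t ) -> ( -x , t + T/2 ) the monochromatic interaction is invariant while the second - harmonic control term is not . the sign conventions , the relative phase and the numerical values used below are placeholders chosen only to make the point .

```python
import numpy as np

wf = 6.0                                   # placeholder field frequency
T = 2.0*np.pi/wf

def V_int(x, t, lam1, lam2, phi=0.0):
    """field-matter term x*(lam1*cos(wf t) + lam2*cos(2 wf t + phi)); signs are placeholders."""
    return x*(lam1*np.cos(wf*t) + lam2*np.cos(2.0*wf*t + phi))

xt = (1.3, 0.7)                            # arbitrary test point (x, t)
for lam2 in (0.0, 0.4):
    same = np.isclose(V_int(*xt, 3.0, lam2), V_int(-xt[0], xt[1] + T/2.0, 3.0, lam2))
    print(f"lam2 = {lam2}: invariant under (x,t) -> (-x, t+T/2)? {same}")
```

for lam2 = 0 the interaction is invariant and the floquet states retain their generalized parity ; any nonzero lam2 ( away from special phases ) breaks it , which is the basis of the localization argument above .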
as a function of the field parameters .the initial state in every case is localized in the left regular region in the classical phase space .notice the convoluted form of the landscape with the regions of low probability indicating little to no control ( red thick arrows ) .the green lines ( thin arrows ) indicate regions where a high degree of control can be achieved.,width=604,height=377 ] the discussions above and the results summarized in fig .[ fig7 ] and fig .[ fig6 ] clearly indicate that the decay of out of the left regular region can be very different and deserves to be understood in greater detail .in fact , computations show that such cases of complete lack of control are present for other values of as well .this is confirmed by inspecting fig .[ fig8 ] which shows the control landscape for the bichromatically driven double well in the specific case of .other choices for also show similarly convoluted landscapes .there are several ways of presenting a control landscape and in fig .[ fig8 ] the time - smoothed survival probability ( cf .eq . [ survprob ] ) associated with the initial state is used to map the landscape as a function of the field strengths . note that the choice of to represent the landscapeis made for convenience ; the decay time is a better choice which requires considerable effort but the gross qualitative features of the control landscape do not change upon using .large ( small ) values of indicate that the decay dynamics is suppressed ( enhanced ) .it is clear from fig .[ fig8 ] that the landscape comprises of regions of control interspersed with regions exhibiting lack of control. such highly convoluted features of control landscape are a consequence of the simple bichromatic choice for the control field and the nonlinear nature of the corresponding classical dynamics . from a control point of view , there are regions on the landscape for which a monotonic increase of leads to increasing control .interestingly , the lone example of control illustrated in farrelly and milligan s paper happens to be located on one of the prominent hills on the control landscape ( shown as a green dot in fig .[ fig8 ] ) . a striking feature that can be seen in fig .[ fig8 ] is the deep valley around which signals an almost complete lack of control even for significantly large strengths of the -field .the valleys in fig .[ fig8 ] correspond precisely to the plateaus seen in the decay time plot shown in fig .[ fig6 ] for ( red arrows ) .it is crucial to note that this wall of no control " is robust even upon varying the relative phase between the driving and the control field . .the primary driving field strength is fixed at .six states that have appreciable overlap with are highlighted by circles .( b ) overlap intensity for and indicates multilevel interactions involving the states shown in ( a ) .husimi distribution function of the floquet states regulating the decay of are also shown .notice that the nature of delocalized states ( magenta ) does not change much with .,width=529,height=453 ] insights into the lack of control for driving field strength ( and other values as well which are not discussed here ) can be obtained by studying the variation of the floquet quasienergies with , the control field strength . 
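a minimal route to the quasienergies and overlap intensities referred to here is to build the one - period propagator column by column with the same split - operator stepper , diagonalize it , and project the initial coherent state onto the resulting floquet states . all parameter values below are placeholders , and the grid is kept deliberately small so that the full propagator matrix can be diagonalized directly .

```python
import numpy as np

hbar, m = 1.0, 1.0
B, D = 0.5, 10.0                               # placeholder double-well parameters
lam1, lam2, wf, phi = 3.0, 0.4, 6.0, np.pi     # placeholder bichromatic field
T = 2.0*np.pi/wf
N, L, nsub = 256, 30.0, 400                    # small grid so U fits in memory

x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2.0*np.pi*np.fft.fftfreq(N, d=x[1] - x[0])
dt = T/nsub
kin = np.exp(-1j*hbar*k**2*dt/(2.0*m))

def V(t):
    return B*x**4 - D*x**2 + x*(lam1*np.cos(wf*t) + lam2*np.cos(2.0*wf*t + phi))

def one_period(psi):
    for s in range(nsub):
        psi = np.exp(-0.5j*V(s*dt)*dt/hbar)*psi
        psi = np.fft.ifft(kin*np.fft.fft(psi))
        psi = np.exp(-0.5j*V((s + 1)*dt)*dt/hbar)*psi
    return psi

# one-period propagator, column by column; its eigenvectors are the floquet states
U = np.column_stack([one_period(np.eye(N, dtype=complex)[:, j]) for j in range(N)])
evals, phi_n = np.linalg.eig(U)
quasi = np.real(1j*hbar*np.log(evals))/T       # quasienergies, defined mod hbar*wf

# overlap intensities p_n = |<phi_n|psi0>|^2 for a coherent state in the left island
x0, p0, sig = -3.0, 0.0, 0.4
psi0 = np.exp(-(x - x0)**2/(4.0*sig**2) + 1j*p0*x/hbar)
psi0 /= np.linalg.norm(psi0)
amps = phi_n.conj().T @ psi0
p_n = np.abs(amps)**2/np.sum(np.abs(amps)**2)
dominant = np.argsort(p_n)[::-1][:6]           # the handful of states carrying psi0
```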
in fig .[ fig9 ] the results of such a computation are shown and it is immediately clear that even in the presence of -control field six states contribute to the decay of over the entire range of .this is further confirmed in fig .[ fig9](b ) where the plot of the overlap intensities shows multiple floquet states participating nearly equally in the decay dynamics of the initial state .the final clue comes from inspecting the husimi distributions shown in fig .[ fig9 ] , highlighting the phase space delocalized nature of some of the participating states . despite the symmetry of the tunneling doubletsbeing broken due to the bichromatic field , two or more of the participating states are extensively delocalized in the chaotic reigons of the phase space . moreover, the participation of the chaotic floquet states persists even for larger values of .hence , using the symmetry breaking property of the -field for control purposes is not very effective when chaotic states are participating in the dynamics ._ therefore , the lack of control , signaled by plateaus in fig .[ fig6 ] and the valleys in fig .[ fig8 ] , is due to the dominant participation by chaotic states _ i.e. , _ chaos - assisted tunneling . _the plateaus arise due to the fact that the coupling between the localized states and the delocalized states vary very little with increasing control field strength - something that is evident from the floquet level motions shown in fig .[ fig9 ] and established earlier by tomsovic and ullmo in their seminal work on chaos - assisted tunneling in coupled quartic oscillators .it is important to note that for the chaotic states , as opposed to the regular states , do not have a definite parity .consequently , the presence of the -field does not have a major influence on the chaotic states .thus , if one or more chaotic states are already influencing the dynamics of at then the bichromatic control is expected to be difficult .an earlier study by latka __ on the bichromatically driven pendulum system also suggested that the ability to control the dynamics is strongly linked to the existence of chaotic states . the model problem in this section and the results point to a direct role of chaos - assisted pathways in the failure of an attempt to bichromatically control the dynamics .however , it is not yet clear if control strategies involving more general fields would exhibit similar characteristics .there is some evidence in the literature which indicates that quantum optimal control landscapes might be highly convoluted if the underlying classical phase space exhibits large scale chaos .nevertheless , further studies need to be done and the resulting insights are expected to be crucial in any effort to control the dynamics of multilevel systems . the driven morse oscillator system has served as a paradigm model for understanding the dissociation dynamics of diatomic molecules .studies spanning nearly three decades have explored the physics of this system in exquisite detail .consequently , a great deal is known about the mechanism of dissociation both from the quantum and classical perspectives .indeed , the focus of researchers nowadays is to control , either suppress or enhance , the dissociation dynamics and various suggestions have been put forward . 
in addition , one hopes that the ability to control a single vibrational mode dynamics can lead to a better understanding of the complications that arise in the case of polyatomic molecular systems wherein several vibrational modes are coupled at the energies of interest .several important insights have originated from classical - quantum correspondence studies which have established that molecular dissociation , in analogy to multiphoton ionization of atoms , occurs due to the system gaining energy by diffusing through the chaotic regions of the phase space .for example , an important experimental study by dietrich and corkum has shown , amongst other things , the validity of the chaotic dissociation mechanism .thus , the formation of the chaotic regions due to the overlap of nonlinear resonances ( field - matter ) , hierarchical structures near the regular - chaotic borders acting as partial barriers , and their effects on quantum transport have been studied in a series of elegant papers . in the context of this present articlean interesting question is as follows .since a detailed mechanistic understanding of the role of various phase space structures of the driven morse system is known , is it possible to design local phase space barriers to effect control over the dissociation dynamics ?in particular , the central question here is whether the local phase space barriers are also able to suppress the quantum dissociation dynamics .there is an obvious connection between the above question and the theme of this volume - quantum mechanics can shortcircuit " the classical phase space barriers due to the phenomenon of dynamical tunneling .thus , such phase space barriers might be very effective in controlling the classical dissociation dynamics but might fail completely when it comes to controlling the quantum dissociation dynamics .there is a catch here , however , since there is also the possibility that the cantori barrier in the classical phase space may be even more restrictive in the quantum case .thus , creation or existence of a phase space barrier invariably leads to subtle competition between classical transport and quantum dynamical tunneling through the barrier . as expected, the delicate balance between the classical and quantum mechanism is determined by the effective planck constant of the system of interest .an earlier detailed review by radons , geisel and rubner is highly reccomended for a nice introduction to the subject of classical - quantum correspondence perspective on phase space transport through kolmogoroff - arnold - moser ( kam ) and cantori barriers . in the driven morse oscillator case , brown and wyatt showed that the cantori barriers do leave their imprint on the quantum dissociation dynamics and act as even stronger barriers as compared to the classical system . maitra and heller in their study on transport through cantori in the whisker map have clearly highlighted the classical versus quantum competition . 
andfrequency connects state and via a two - photon resonance transition .the location of cantori in presence of the field with are indicated as dotted curves in the figure.,width=529,height=415 ] from the above discussion it is apparent that any approach to control the dissociation dynamics by recreating local phase space barriers will face the subtle classical - quantum competition .in fact , it is tempting to think that every quantum control algorithm works by creating local phase space dynamical barriers and the efficiency of the control is decided by the classical - quantum competition .however , at this point of time there is very little work towards making such a connection and the above statement is , at best , a conjecture . for the purpose of this article, i turn to the driven morse oscillator system to provide an example for the importance of resonance - assisted tunneling in controlling the dissociation dynamics .the model system is inspired from the early work by wyatt and brown ( see also the work by breuer and holthaus ) and the hamiltonian can be written as in the dipole approximation .the zeroth - order hamiltonian }^2 , \label{morsepot}\ ] ] represents the morse oscillator modeling the anharmonic vibrations of a diatomic molecule . in the above, is the dipole moment function , is the strength of the laser field , is the driving field frequency , and is the reduced mass of the diatomic molecule with and being the two atomic masses . in eq .[ morsepot ] , is the dissociation energy , is the range of the potential and is the equilibrium bond length of the molecule . rather than attempting to provide a general account as to how rat might interfere with the process of control ,i feel that it is best to illustrate with a realistic molecular example .hopefully , the generality of the arguments will become apparent later on .for the present purpose i choose the diatomic molecule hydrogen fluoride ( hf ) as the specific example .any diatomic molecule could have been chosen but hf is studied here due to the fact that brown and wyatt have already discussed the role of cantori barriers to the dissociation dynamics in some detail .the morse oscillator parameters for hydrogen fluoride are and .these parameters correspond to ground electronic state of the hf molecule supporting bound states .note that atomic units are used for both the molecular and field parameters with time being measured in units of the field period .the only difference between the present work and that of brown and wyatt has to do with the field - matter coupling .brown and wyatt use the dipole function with and , obtained from _ ab - initio _ data on hf . here a linear approximation for is employed with in case of hf .there are quantitative differences in the dissociation probabilities due to the linearization approximation but the main qualitative features remain intact despite the linearization approximation . in fig .[ fig10 ] the morse potential for hf is shown along with a summary of the key features such as the energy region of interest , the quantum state whose dissociation is to be controlled , and the classical phase space structures that might play an important role in the dissociation dynamics .the driving field parameters are chosen as , same as in the earlier work , and the field strength corresponds to about tw/ w/ . 
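the action - angle analysis presented in the following paragraphs uses only the morse hamiltonian in action representation , its nonlinear frequency , and the locations of the field - matter resonance and of a nearby irrational ( cantorus - like ) torus . a schematic version is sketched below ; the morse and field parameters are round placeholders rather than the hf values of the text , and the resonance order , the state labels and the golden - mean winding number are illustrative assumptions .

```python
import numpy as np
from scipy.optimize import brentq

hbar = 1.0
# round placeholder morse parameters (atomic units) -- not the hf values of the text
D0, alpha, mu = 0.2, 1.2, 1800.0
w0 = alpha*np.sqrt(2.0*D0/mu)                 # harmonic frequency at the minimum

def H0(J):
    """morse hamiltonian in action representation: H0(J) = w0 J - (w0 J)**2 / (4 D0)."""
    return w0*J - (w0*J)**2/(4.0*D0)

def Omega(J):
    """nonlinear frequency dH0/dJ; decreases monotonically and vanishes at dissociation."""
    return w0*(1.0 - w0*J/(2.0*D0))

# bound levels from primitive quantization J_v = (v + 1/2) hbar
vmax = int(2.0*D0/(hbar*w0) - 0.5)
E = np.array([H0((v + 0.5)*hbar) for v in range(vmax + 1)])

# a two-photon resonance condition E_v2 - E_v1 ~ 2 hbar wf for a hypothetical pair
v1, v2 = 9, 13                                # hypothetical labels, not those of the text
wf = (E[v2] - E[v1])/(2.0*hbar)

Jmax = 2.0*D0/w0
r, s = 2, 1                                   # assumed order of the field-matter resonance
J_res = brentq(lambda J: r*Omega(J) - s*wf, 1e-8, Jmax*(1 - 1e-8))
g = 0.5*(np.sqrt(5.0) - 1.0)                  # golden mean, used only for illustration
J_cant = brentq(lambda J: Omega(J) - wf/(1.0 + g), 1e-8, Jmax*(1 - 1e-8))
```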
as shown in fig .[ fig10 ] ( inset ) , the focus is on understanding and controlling the dissociation dynamics of the excited morse oscillator eigenstate .there are several reasons for such a choice and i mention two of the most important reasons .firstly , the earlier study has established that of hf happens to be in an energy regime wherein two specific cantori barriers in the classical phase space affect the dissociation dynamics . moreover , for driving frequency , two of the zeroth - order eigenstates and have unperturbed energies such that and hence corresponds to a two - photon resonant situation .secondly , for a field strength of tw/ , the dissociation probability of the ground vibrational state is negligible .far stronger field strengths are required to dissociate the state and ionization process starts to compete with the dissociation at such high intensities .thus , in order to to illustrate the role of phase space barriers in the dissociation dynamics without such additional complications , the specific initial state is chosen .incidentally , such a scenario is quite feasible since a suitably chirped laser field can populate the state very efficiently from the initial ground state and one imagines coming in with the monochromatic laser to dissociate the molecule .the monochromatically driven morse system studied here has a dimensionality such that one can conveniently visualize the phase space in the original cartesian variables .however , since the focus is on suppressing dissociation by creating robust kam tori in the phase space , action - angle variables which are canonically conjugate to are convenient and a natural representation to work with .the action - angle variables of the unperturbed morse oscillator , appropriate for the bound regions , are given by .\end{aligned}\ ] ] [ actionangle ] in the above equations , denotes the dimensionless bound state energy , and for , for . in terms of the action - angle variables it is possible to express the cartesian as follows [ xpactang_chap3 ] \\ p&=&\frac{-(2md_0)^{1/2 } [ e_{0}(j)(1-e_{0}(j))]^{1/2}\sin \theta}{1+\sqrt{e_{0}(j)}\cos \theta},\end{aligned}\ ] ] where . substituting for in terms of , the unperturbed morse oscillator hamiltonian in eq .[ morsepot ] is transformed into where is the harmonic frequency at the minimum . the zeroth - order nonlinear frequencyis easily obtained as and with increasing excitation _i.e. , _ increasing action ( quanta ) , the non - linear frequency decreases monotonically and eventually vanishes , signaling the onset of unbound dynamics leading to dissociation . andfrequency corresponding to a on - resonance situation . 
the -photon resonance is clearly seen in the phase space and marked as : resonance in dark green .the initial morse eigenstate ( thick dashed line ) is situated rather close to a cantorus ( indicated ) , with being the golden ratio .the state is connected to the morse eigenstate via the : resonance .note that the states , and are symmetrically located about the resonance with being localized in the resonance.,width=604,height=415 ] the driven system can now be expressed in terms of the variables as \cos(\omega_{f}t ) .\label{prelim_hamact}\ ] ] in addition , since is a periodic function of , one has the fourier expansion .\ ] ] as a consequence the driven hamiltonian in eq .[ prelim_hamact ] can be written as where the matter - field interaction term is denoted as \cos(\omega_f t ) , \label{pertact } \nonumber\ ] ] the fourier coefficients and are known analytically and given by [ potfour ] , \label{eqn_four1}\\ v_{n}(j)&=&\frac{(-1)^{n+1}}{\alpha n}{\left[\frac{\sqrt{d_{0}e_{0}(j)}}{d_{0}+ \sqrt{d_{0}^{2}-d_{0}e_{0}(j)}}\right]}^{n } \label{eqn_four2}.\end{aligned}\ ] ] the stroboscopic surface of section in the variables is shown in fig .[ fig11 ] and is a typical mixed regular - chaotic phase space .a few important points are worth noting at this stage .firstly , the initial state of interest is located close to a cantorus with with being the golden ratio .the importance of this cantorus to the resulting dissociation dynamics of the state was the central focus of the work by brown and wyatt .in particular , the extensive stickiness around this region can be clearly seen and hence one expects nontrivial influence on the classical dissociation dynamics as well .secondly , a prominent :: nonlinear resonance is also observed in the phase space and represents the classical analog of the quantum -photon resonance condition .interestingly , the area of this resonance is about and , therefore , can support one quantum state .it turns out that the husimi density of the morse state is localized inside the : resonance island .in addition , the states and are nearly symmetrically located about the : resonance . a way to seethis is to use secular pertubation theory on the driven morse hamiltonian .one can show that in the vicinity of the : resonance ( cf .[ fig11 ] ) an effective pendulum hamiltonian is obtained with , and .the resonant action \approx12.6,\ ] ] for the parameters used in this work . using action values ( quantum state ) and ( quantum state ) the energy difference is calculated as in other words , the states and are nearly symmetrical with respect to the state , which is localized in the : resonance . therefore , the nonzero coupling will efficiently connect the states and .moreover , for the given parameters , using the definition of in terms of fourier coefficient , the strength of the resonance is estimated as , clearly a fairly strong resonance .consequently , the situation in fig . 
[ fig11 ] is a perfect example where rat can play a crucial role in the dissociation dynamics .indeed quantum computations ( not shown here ) show that there is a rabi - type cycling of the probabilities between the three morse states .i now turn to the issue of _ selectively _ controlling the dissociation dynamics of the initial morse eigenstate by creating local barriers in the phase space shown in fig .[ fig11 ] , bearing in mind the possibility of quantum dynamical tunneling interfering with the control process .if the classical mechanism of chaotic diffusion leading to dissociation holds in the quantum domain as well then a simple way of controlling the dissociation is to create a local phase space barrier between the state of interest and the chaotic region . in a recent work, huang , chandre and uzer provided the theory for recreating local phase space barriers for time - dependent systems and showed that such barriers indeed suppress the ionization of a driven atomic system .however , huang _ et al ._ were only concerned with the classical ionization process .thus , potential complications due to dynamical tunneling were not addressed in their study .the driven morse system studied here presents an ideal system to understand the interplay of quantum and classical dissociation mechanisms . in what follows ,i provide a brief introduction to the methodology with an explicit expression for the classical control field needed to recreate an invariant kam barrier , preferably an invariant torus with sufficiently irrational frequency . to start with, the nonautonomous hamiltonian is mapped into an autonomous one by considering as an additional angle - action pair . denoting the action and angle variables by and , the original driven system hamiltonian ( see eq .[ hamact2 ] ) can be expressed as with .note that for a fixed driving field strength and the value of corresponding to a diatomic molecule , is also fixed .moreover , for physically meaningful values of for most diatoms and typical field strengths far below the ionization threshold one always has . in the absence of the driving field ( ) ,the zeroth - order hamiltonian is integrable and the phase space is foliated with invariant tori labeled by the action corresponding to the frequency .however , in the presence of the driving field ( ) the field - matter interaction renders the system nonintegrable with a mixed regular - chaotic phase space .more specifically , for field strengths near or above a critical value one generally observes a large scale destruction of the field - free invariant tori leading to significant chaos and hence the onset of dissociation .the critical value itself is clearly dependent on the specific molecule and the initial state of interest .the aim of the local control method is to rebuild a nonresonant torus , with integer , which has been destroyed due to the interaction with the field . assuming that the destruction of is responsible for the significant dissociation observed for some initial state of interest, the hope is that locally recreating the will suppress the dissociation _i.e. 
, _ acts as a local barrier to dissociation .ideally , one would like to recreate the local barrier by using a second field ( appropriately called as the control field ) which is much weaker and distinct from the primary driving field .following huang _ et al ._ such a control field can be analytically derived and has the form where and being a linear operator defined by the classical control hamiltonian can now be written down as in case of the driven morse system the control field can be obtained analytically and to leading order is given by where and it can be shown that .\nonumber\end{aligned}\ ] ] i skip the somewhat tedious derivation of the result above and refer to the original literature as well as a recent thesis for details .note that in the above is the frequency of the invariant torus that is to be recreated corresponding to the unperturbed action and to is located at , assuming the validity of the perturbative treatment . the leading order control field in eq .[ control2 ] is typically weaker than the driving field and has been shown to be quite effective in off - resonant cases in suppressing the classical dissociation .however , in order to study the effect of the control field on the quantum dissociation probabilities , it is necessary to make some simplifications .one of the main reasons for employing the simplified control fields via the procedure given below has to do with the fact that the classical action - angle variables do not have a direct quantum counterpart . essentially , the dominant fourier modes of eq . [ control2 ] are identified and one performs the mapping yielding the simplified control hamiltonian if more than one dominant fourier modes are present then they will appear as additional terms in equation [ qcham ] .note that the above simplified form is equivalent to assuming that the control field is polychromatic in nature , which need not be true in general .nevertheless , a qualitative understanding of the role of the various fourier modes towards local phase space control is still obtained .more importantly , and as shown next , in the on - resonant case of interest here , the simplified control field already suggests the central role played by rat . in fig .[ fig12 ] a summary of the efforts to control the dissociation of the initial state is shown .specifically , fig .[ fig12](a ) and ( b ) show the phase spaces where two different kam barriers , and respectively are recreated .the control hamiltonian in both instances , in cartesian variables , has the following form with tw/ .thus , as desired , the control field strengths are an order of magnitude smaller than the driving field strength .the simplified control hamiltonian reflects the dominance of the fourier mode of the leading order control field in eq .[ control2 ] and obtained using the method outlined before .also , note that the control field comes with a relative phase with respect to the driving field . as expected , from the line of thinking presented in the previous section , fig .[ fig12](c ) and ( d ) show that the kam barriers indeed suppress the classical dissociation significantly ._ however , the quantum dissociation in both cases increases slightly ! 
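the classical dissociation probabilities compared in fig . [ fig12](c ) can be estimated by propagating an ensemble of trajectories started on the unperturbed torus of the initial state and counting the fraction that becomes unbound ; the quantum curves require a wavepacket propagation instead . the sketch below shows the classical side only , using the standard morse action - angle parametrization ( the momentum formula quoted in the text ) ; the parameters , the initial action , the second - harmonic control term and the crude end - of - run dissociation criterion are all placeholder assumptions .

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0
# round placeholder parameters (atomic units) -- not the hf / field values of the text
D0, alpha, xeq, mu = 0.2, 1.2, 1.75, 1800.0
lam1, wf = 0.005, 0.010                       # driving strength and frequency
lam2, phi = 0.0005, np.pi                     # second-harmonic control term
w0 = alpha*np.sqrt(2.0*D0/mu)
T = 2.0*np.pi/wf

def V(x):
    return D0*(1.0 - np.exp(-alpha*(x - xeq)))**2

def eom(t, y, l2):
    x, p = y
    e = np.exp(-alpha*(x - xeq))
    dVdx = 2.0*alpha*D0*e*(1.0 - e)
    return [p/mu, -dVdx - (lam1*np.cos(wf*t) + l2*np.cos(2.0*wf*t + phi))]

def torus_ensemble(J, n):
    """points uniform in angle on the unperturbed torus of action J (needs 0 < E0 < 1)."""
    E0 = (w0*J - (w0*J)**2/(4.0*D0))/D0       # dimensionless energy
    th = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    den = 1.0 + np.sqrt(E0)*np.cos(th)
    xs = xeq + np.log(den/(1.0 - E0))/alpha
    ps = -np.sqrt(2.0*mu*D0)*np.sqrt(E0*(1.0 - E0))*np.sin(th)/den
    return np.column_stack([xs, ps])

def dissociated_fraction(l2, J0, n_traj=100, n_periods=50):
    out = 0
    for y0 in torus_ensemble(J0, n_traj):
        sol = solve_ivp(eom, (0.0, n_periods*T), y0, args=(l2,), rtol=1e-8, atol=1e-10)
        xf, pf = sol.y[:, -1]
        out += (pf**2/(2.0*mu) + V(xf) >= D0) or (xf > xeq + 10.0/alpha)
    return out/n_traj

J0 = (9 + 0.5)*hbar          # hypothetical initial action; the state label is a placeholder
print(dissociated_fraction(0.0, J0), dissociated_fraction(lam2, J0))
```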
_ clearly , the recreated kam barriers are ineffective and suggests that the quantum dissociation mechanism is somehow bypassing the kam barriers .a clue to the surprising quantum results comes from comparing the phase spaces in fig .[ fig11 ] and fig .[ fig12 ] which show the uncontrolled and controlled cases respectively .although , the kam barriers seem to have reduced the extent of stochasticity , the : resonance is intact and appears to occupy slightly larger area in the phase space . thus , this certainly indicates that a significant amount of the quantum dissociation is occuring due to the rat mechanism involving the three morse states , and as discussed before .in particular , the state must still be actively providing a route to couple the initial state to the chaotic region via rat . how can one test the veracity of such an explanation ?one way is to scale the planck constant down from and monitor the quantum dissociation process . reduced implies that the : island can support several states as well as the fact that other higher order resonances now become relevant to the rat mechanism .such a study is not presented here but one would anticipate that the dissociation mechanism can be understood based on the theory of rat that already exists ( see contributions by schlagheck _et al . _ and bcker _ et al . _, for example ) .another way is to directly interfere locally with the resonance and see if the quantum dissociation is actually suppressed . locally interfering with a specific phase space structure , keeping the gross features unchanged ,is not necessarily a straightforward approach .however , the tools in the previous section allow for such local interference and i present the results below . with ) designed to recreate specific kam barriers ( a ) and ( b ) .note that in both cases the dominant fourier amplitude of the leading order control field of eq .[ control2 ] have been utilized and the desired kam barriers are clearly seen ( thick gray ) .the effect of the barriers seen in ( a ) and ( b ) on the classical ( gray circles ) and quantum ( gray squares ) dissociation probabilities are shown in ( c ) and ( d ) respectively .the uncontrolled results are indicated by the corresponding black symbols .interstingly , the classical dissociation is suppressed but the quantum dissociation is slightly enhanced.,width=529,height=377 ] kam barrier . in ( a ) two dominant fourier modes and of the leading order control term in eq .[ control2 ] are retained . in( b ) the simplified control field as in eq .[ qcham ] is used with an effective field strength estimated using and .notice that the desired kam barrier is recreated in ( a ) but not in ( b ) ( thick gray line indicates the expected location ) . in ( c )the classical dissociation probabilities of the uncontrolled ( black circles ) , control using field ( a ) ( open circles ) , and control using field ( b ) ( gray circles ) are shown .the quantum results are shown in ( d ) for the uncontrolled ( black squares ) and control using field ( b ) ( gray squares ) are shown .see text for discussion.,width=529,height=377 ] the control field in eq .[ control2 ] corresponding to the case of fig .[ fig12](b ) _i.e. 
, _ recreating the kam barrier , turns out to be well approximated by a two - mode form ; this comes about due to the fact that two fourier modes , and , are significant in this case . note that the specific kam barrier of interest has a frequency between that of the cantorus ( around which the initial state is localized , cf . [ fig11 ] ) and the : nonlinear resonance . from a perturbative viewpoint , creation of this kam barrier using eq . [ appcontham ] is not expected to be easy due to the proximity to the : resonance and the fact that the fourier component is nothing but the : resonance . nevertheless , fig . [ fig13](a ) shows that the specific kam barrier is restored and , compared to fig . [ fig11 ] , the controlled phase space does exhibit a reduced amount of chaos . consistently , fig . [ fig13](c ) shows that the classical dissociation is suppressed by nearly a factor of two , as in the case shown in fig . [ fig12](b ) wherein only the component of the control hamiltonian was retained . as mentioned earlier , in order to calculate the quantum dissociation probability the control hamiltonian in eq . [ appcontham ] needs to be mapped into a form as in eq . [ qcham ] . since two fourier modes need to be taken into account , an effective control field strength has to be estimated from both ; such a procedure yields a control field that , while still less intense than the primary field , comes with a relative phase of zero . interestingly , as shown in fig . [ fig13](b ) , the resulting simplified control hamiltonian fails to create the desired barrier . moreover , the phase space also exhibits increased stochasticity , and as a consequence the classical dissociation is enhanced ( cf . [ fig13](c ) ) . however , fig . [ fig13](b ) reveals an interesting feature - the : resonance is severely perturbed . this perturbation is a consequence of including the fourier component in the effective control hamiltonian . _ the key result , however , is shown in fig . [ fig13](d ) , where one observes that the quantum dissociation probability is reduced significantly _ . it seems as if the quantum dynamics feels the barrier when there is none ! the surprising and counterintuitive results summarized in fig . [ fig12 ] and fig . [ fig13 ] can be rationalized by a single phenomenon - dynamical ( resonance - assisted , in this case ) tunneling . the main clue comes from the observation that quantum suppression happens as soon as the : resonance is perturbed . although the results here are shown for a specific example , the phenomenon is general . indeed , computations ( not published ) for different sets of parameters have supported the viewpoint expressed above . interestingly , the work of huang _ et al . _ focused on classical suppression of ionization by creating local phase space barriers in the case of the driven one - dimensional hydrogen atom . around the same time , the importance of the rat mechanism for obtaining accurate decay lifetimes of localized wavepackets in the same system was highlighted . work is currently underway to see if the conclusions made in this section hold in the driven atomic system as well , _ i.e. , _ whether the attempt to suppress the ionization by creating local phase space barriers is foiled by the phenomenon of rat . the two examples discussed in this work illustrate a key point - _ dynamical tunneling plays a nontrivial role in the process of quantum control . _ in the first example of the driven quartic double well it is clear that bichromatic control fails in regimes where chaos - assisted tunneling is important .
in the second example of a driven morse oscillator it is apparent that efforts to control the dynamics by building phase space kam barriers fail when resonance - assisted tunneling is possible . in both instances the competition between classical and quantum mechanisms is brought to the forefront . although the two examples shown here represent the failure of specific control schemes due to dynamical tunneling , one must not take this to be a general conclusion . it is quite possible that some other control schemes might owe their efficiency to the phenomenon of dynamical tunneling itself . further classical - quantum correspondence studies on control with more general driving fields are required in order to confirm ( or refute ) the conclusions presented in this chapter . at the same time , the two examples presented here are certainly not the last word ; establishing the role of dynamical tunneling in quantum coherent control requires one to step into the murky world of systems with three or more degrees of freedom . i mention two model systems , currently being studied in our group , in order to stress some of the key issues that might crop up in such high dimensional systems . for example , two coupled morse oscillators driven by a monochromatic field already present a number of challenges , both from the technical and the conceptual viewpoints . the technical challenge arises due to the fact that dimensionality constraints do not allow one to visualize the global phase space structures as easily as done in this chapter . one approach is to use the method of local frequency analysis to construct the arnold web , _ i.e. , _ the network of nonlinear resonances that regulates the multidimensional phase space transport . in a previous work , involving a time - independent hamiltonian system , the utility of such an approach and the validity of the rat mechanism have been established . however , `` lifting '' quantum dynamics onto the arnold web is an intriguing possibility which is still an open issue . on the conceptual side there are several issues with multidimensional systems , and i mention a few of them here . firstly , even at the classical level one has the possibility of transport , like arnold diffusion , which is genuinely a three or more dof effect and has no counterpart in systems with fewer than three dofs . note that arnold diffusion is typically a very long time process and is notoriously difficult to observe in realistic physical systems . moreover , arguments can be made for the irrelevance of arnold diffusion ( or some similar process ) in quantum systems due to the finiteness of the planck constant . secondly , an interesting competition occurs in systems such as the driven coupled morse oscillators . even in the absence of the field the dynamics is nonintegrable , and one can be in a regime where the modes are exchanging energy but none of the modes gains enough energy to dissociate . on the other hand , in the absence of mode - mode coupling , a weak enough field can excite the system without leading to dissociation . however , in the presence of such a weak field and the mode - mode coupling one can have significant dissociation of a specific vibrational mode .
clearly , there is nontrivial competition between transport due to mode - mode resonances , field - mode resonances and the chaotic regions .selective control of such driven coupled systems is an active research area today and the lessons learnt from the two examples suggest that dynamical tunneling in one form or another can play a central role . in the context of studying the potential competition between arnold diffusion and dynamical tunneling ,i should mention the driven coupled quartic oscillator system with the hamiltonian : in the absence of the field the hamiltonian reduces to the case originally studied by tomsovic , bohigas and ullmo wherein the existence of chaos - assisted tunneling was established in exquisite detail . at the same time , with the system represents one of the few examples for which the phenomenon of arnold diffusion has been investigated over a number of years .nearly a decade ago , demikhovskii , izrailev , and malyshev studied a variant of the above hamiltonian to uncover the fingerprint of arnold diffusion on the quantum eigenstates and dynamics .an interesting question , amongst many others , is this : _ will the fluctuations in the chaos - assisted tunneling splittings for , observed for varying , survive in the presence of the field ? _ a related topic which i have not touched upon in this article has to do with the control of ivr using weak external fields . based on the insights gained from studies done until now on field - free ivr( see also the contribution by leitner in this volume ) , it is natural to expect that dynamical tunneling could play spoilsport for certain class of initial states that are prepared experimentally .the issue , however , is far more subtle and more studies in this direction might shed light on the mechanism by which quantum optimal control methods work .for instance , recent proposals on quantum control by takami and fujisaki and the so called quantized ulam conjecture " by gruebele and wolynes take advantage ( implicitly ) of the system having a completely chaotic phase space .dynamical tunneling is not an issue in such cases .however , in more generic instances of systems with mixed regular - chaotic phase space , a quantitative and qualitative understanding of dynamical tunneling becomes imperative .given the level of detail at which one is now capable of studying the clasically forbidden processes , as reflected by the varied contributions in this volume , i expect exciting progress in this direction .i think that this is an appropriate forum to acknowledge the genesis of the current and past research of mine on dynamical tunneling . in this contexti am grateful to greg ezra , whose first suggestion for my postdoc work came in the form of a list of some of the key papers on dynamical tunneling .we never got around to work on dynamical tunneling _ per se _ but those references came in handy nearly a decade later .it is also a real pleasure to thank peter schlagheck for several inspiring discussions on tunneling in general and dynamical tunneling in particular. theoretical study of intramolecular vibrational energy relaxation of acteylinic ch vibration for and in large polyatomic molecules ( cx) , where x or d and y or si . | contribution to the edited volume _ dynamical tunneling : theory and experiment _ , editors srihari keshavamurthy and peter schlagheck , crc press , taylor & francis , boca raton , 2011 . |
the present work concerns the geometric structure of the flow of a vector field in four - dimensional spacetime .we work from the perspective that the generating vector field satisfying some set of governing evolution equations is the primary quantity , and the flow is the secondary ( or derived ) quantity . assuming _ a - priori _ a smooth generating vector field , we introduce a generally covariant measure of the flow geometry called the _ referential gradient of the flow_. the main result of this work is the explicit relation between the referential gradient of the flow and the generating vector field , and is provided for from two equivalent perspectives : a lagrangian specification with respect to a generalized parameter , and an eulerian specification making explicit the evolution dynamics .furthermore , we provide explicit non - trivial conditions which govern the transformation properties of the referential gradient object .the layout of this paper is as follows .section ( [ s : flowandrefgrad ] ) provides the standard differential geometry context ( [ ss : preliminaries ] ) and formalism of mathematical flow representations ( [ ss : generalconnectivitymap ] ) in order to provide a rigorous framework from which to define the referential gradient .the main results of the paper are put forth in section ( [ s : refgraddefined ] ) where we define the referential gradient of the flow . in section ( [ ss : lagrangespecrefgrad ] ) , we prove the lagrangian specification theorem ( [ thm : refgradlagrangian ] ) from which is given a non - local , closed - form functional solution with respect the generating vector field .furthermore , due to the non - local nature of the lagrangian specification of the referential gradient , in section ( [ ss : refgradtransforms ] ) we prove three lemmas that identify the non - trivial referential gradient transformation conditions : lemma ( [ lem : refgrad ] ) provides the condition for manifest covariance ; lemma ( [ lem : refgradgroup ] ) provides for the group property with respect to the connectivity parameter ; and lemma ( [ lem : representationrelations ] ) provides the proper relations between the corresponding referential gradient representations associated with a change of integration variable . in section ( [ ss : refgraddynamics ] ) , we prove the eulerian specification theorem ( [ thm : refgraddynamics ] ) which in a coordinate chart makes explicit the referential gradient dynamics at each point of the manifold . throughout the paper , we work in both coordinate - free language , show coordinate - dependent expressions , and give explicit illustrative examples .greek indices denote spacetime components , and latin indices denote only spatial components ; in addition we employ the standard einstein summation convention .let be a simply connected region of spacetime with ( semi - riemannian ) metric .furthermore , assume the spacetime topology may be foliated as , where is a three - dimensional spatial hypersurface , and is the time axis .let be an atlas of , such that an arbitrary point has coordinate component functions , where and are identified with coordinate time and 3-space , respectively. 
given the atlas over , we have on each chart the local coordinate basis and local dual basis , defined , respectively , via and , where is the kronecker delta .the invariant interval is given in coordinates by , where in general , in the coordinate chart the metric components are given functions of position , .we assume an affine connection , that satisfies the standard axioms ( see e.g. , ref . 3.1 ) . in a coordinate chart ,the connection components are defined via the action on basis vectors , and , such that the covariant derivative of a mixed tensor field is given by , in addition , the covariant derivative with respect to in the direction of the vector field is given by , , where is the contraction map ( with some abuse of notation where the meaning is clear , we write the contraction map , interpreted as the standard scalar product ) .beyond what is physically reasonable , we make no assumptions regarding the metric or the connection . without loss of generality, one may perform particular calculations using the levi - civita connection with components given by , let be a smooth vector field everywhere on ( unless explicitly stated otherwise , by we mean ) . in a chart , the vector fieldmay be written , where the component functions .identify the vector field with a four - velocity field in minkowski spacetime , curvilinear coordinates ( see e.g. , ref . 6.2 ) , the contravariant components of which are given by , where is the velocity field in a co - moving coordinate frame .identify the vector field with the four - magnetic field ( see e.g. , ref . 13.10.2 ) , the contravariant components of which are given by , where is the levi - civita tensor density , and . in the frame of the observer , is the electromagnetic tensor , and are the covariant components of the four - velocity field .[ d : mcflow ] _ ( the flow of a vector field ) _ let be an open set , and and open interval containing .the _ flow _ of is a map , such that for any point , we refer to as the _ generating vector field _ , as the _ connectivity parameter _ ( associated with the generating vector field ) , as the _ reference point _ , and as the _ reference condition_. if for some coordinate chart , then so also lies a segment of in . for that segmentthe coordinate components of equation ( [ e : mcflow ] ) with respect to the basis are , equations ( [ e : mcflowcoords ] ) represent an initial value problem in ( four ) first - order differential equations in the parameter , covariant under a change of coordinates . depending on the nature of the generating vector field , equations ( [ e : mcflowcoords ] ) may be non - autonomous or autonomous in the parameter .hence , for fixed reference condition , by standard theorems of existence , uniqueness , and extension for ordinary differential equations ( see e.g. , refs . ) , the solution exists , is unique , smooth , and maximal for any and all . for a generating vector field identified with the four - velocity field ( [ ex : fourvelocity ] ) , the associated connectivity parameter is identified with time , and equations ( [ e : mcflowcoords ] ) are non - autonomous . for a generating vector field identified with the four - magnetic field ( [ ex : fourmagneticfield ] ) , the associated connectivity parameter is identified with a distance per magnetic field strength , and equations ( [ e : mcflowcoords ] ) are autonomous . 
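since the flow is defined by the initial value problem of equations ( [ e : mcflowcoords ] ) , it can be traced numerically once a generating vector field is prescribed . the sketch below does this for a hypothetical three - dimensional , divergence - free field standing in for the spatial part of a magnetic - field - like generator ; the field , the reference point and the parameter interval are illustrative assumptions , not quantities taken from the paper .

```python
# minimal sketch: numerically trace the flow phi(lambda, p) of a prescribed
# generating vector field b by integrating d(phi)/d(lambda) = b(phi),
# phi(0) = p (the coordinate form of eqs. [e:mcflowcoords]).
# the field below is a hypothetical abc-like field used only for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def b_field(x):
    # divergence-free "abc"-type field; purely an illustrative choice
    return np.array([np.sin(x[2]) + np.cos(x[1]),
                     np.sin(x[0]) + np.cos(x[2]),
                     np.sin(x[1]) + np.cos(x[0])])

def rhs(lam, phi):
    return b_field(phi)

p = np.array([0.1, 0.2, 0.3])             # reference point (assumed)
lam_max = 20.0                             # connectivity-parameter interval
sol = solve_ivp(rhs, (0.0, lam_max), p, dense_output=True,
                rtol=1e-9, atol=1e-12)

# orbit of p sampled at a few connectivity-parameter values
for lam in (0.0, 5.0, 10.0, 20.0):
    print(lam, sol.sol(lam))
```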
by equation ( [ e : mcflowcoords ] )the units of the flow are identified with the position coordinates .hence , the units of the connectivity parameter are coordinate - dependent ; that is , in a given coordinate chart , has coordinate units per generating vector field units . in [ ss : refgraddynamics ] , we return to a full discussion of the coordinate representation of the connectivity parameter . from the flowmay be defined two collections of maps : [ d : mcflow ] _ ( the orbit of ) _ for fixed reference point , the _ orbit of _ is the map from an interval in into ; e.g. , such that , if , then the coordinate maps , are smooth curves with tangent vector everywhere defined by , and equal to , the ( smooth ) generating vector field . for a generating vector field identified with the four - velocity field ( [ ex : fourvelocity ] ) in the non - relativistic limit ,the orbit of is a _streamline_. for a generating vector field identified with the four - magnetic field ( [ ex : fourmagneticfield ] ) in the non - relativistic limit , the orbit of is a _magnetic line of force _ ( or _ magnetic field line _ ) .[ d : mcmap ] _ ( the connectivity map ) _ for fixed connectivity parameter , the _ connectivity map _ is the one - parameter group of ( active ) diffeomorphisms , if , then for all possible values of connectivity parameter , the flow solutions form a lie group .that is to say , for each value of is associated a smooth transformation of the space to itself , such that the connectivity map satisfies the following group properties : for any ; is the identity element ; and is the inverse element , such that . for divergence - free generating vector fields ,the flow is identified with the group of volume - preserving diffeomorphisms ( see e.g. , ref . i.1 ) .the flow represents an equivalence class under a affine transformations of the connectivity parameter , with and ( see e.g. , ref . 7.4 ) .in particular , the connectivity parameter may be identified with an arc length measure along the orbit from the reference point , where by a change of variable with , for a smooth generating vector field , provided for all , it can be shown the transformation ( [ e : arclengthcondition ] ) exists and is unique .furthermore , under transformation ( [ e : arclengthcondition ] ) , the generating vector field for the re - parametrized flow , is the unit direction of the generating vector field , such that , additionally , if , then equation ( [ e : arclengthmcflow ] ) finds a coordinate expression similar to that of ( [ e : mcflow ] ) , we call the re - parametrized flow of equations ( [ e : arclengthmcflow ] ) , the _ arc length representation _ , reflecting the fact that the connectivity parameter itself is identified with an arc length measure along the orbit issuing from the reference point . in the arc length representation of the flow for a generating vector field identified with the four - velocity field ( [ ex : fourvelocity ] ) , the connectivity parameter is a measure of the total time along the streamline . in the arc length representation of the flow for a generating vector field identified with the four - magnetic field ( [ ex : fourmagneticfield ] ) in the non - relativistic limit , the connectivity parameter is a measure of the total spatial distance along the magnetic line of force . unless explicitly noted we will work with the flow representation associated with the full generating vector field . 
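the re - parametrization of equations ( [ e : arclengthcondition ] ) and ( [ e : arclengthmcflow ] ) can be checked numerically : integrating ds / d(lambda) = |b| along an orbit gives the arc - length parameter , and the flow of the unit field traces the same orbit . the sketch below performs this check , using an ordinary euclidean norm as a simplifying stand - in for the norm induced by the metric ; the field and the reference point are the same illustrative assumptions as above .

```python
# sketch of the arc-length representation: integrate the flow of the full
# field b and of its unit direction b/|b| and check that they trace the same
# orbit, with the parameters related by ds/d(lambda) = |b|.
# a flat (euclidean) norm is assumed here purely for simplicity.
import numpy as np
from scipy.integrate import solve_ivp

def b_field(x):
    return np.array([np.sin(x[2]) + np.cos(x[1]),
                     np.sin(x[0]) + np.cos(x[2]),
                     np.sin(x[1]) + np.cos(x[0])])

p = np.array([0.1, 0.2, 0.3])

# flow of the full generator, augmented with s(lambda) = int |b| d(lambda)
def rhs_full(lam, y):
    b = b_field(y[:3])
    return np.concatenate([b, [np.linalg.norm(b)]])

full = solve_ivp(rhs_full, (0.0, 10.0), np.concatenate([p, [0.0]]),
                 rtol=1e-10, atol=1e-12)
end_point, total_arclength = full.y[:3, -1], full.y[3, -1]

# flow of the unit generator, run up to the arc length found above
def rhs_unit(s, y):
    b = b_field(y)
    return b / np.linalg.norm(b)

unit = solve_ivp(rhs_unit, (0.0, total_arclength), p,
                 rtol=1e-10, atol=1e-12)

print("same orbit end point:", np.allclose(end_point, unit.y[:, -1], atol=1e-6))
```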
for completenessthough , the flow representations and are implicitly related everywhere via the unit generating field , in a coordinate chart , equation ( [ e : flowarclengthflowrelation ] ) is , where is the metric given in the coordinate chart .given a generating vector field , the determination of the flow with a particular reference condition , represents an initial value problem , via equations ( [ e : mcflow ] ) .the geometric flow structure may be discerned by examining the dependence on the reference condition ( see e.g. , ref . 32 ) .let be an open ball of radius about the reference point , and be a bounded open interval containing and with , such that for all points and any .denote the non - zero _ reference shift vector _ such that , where the unit direction vector .[ d : refgrad ] _ ( the referential gradient of the flow ) _the _ referential gradient of the flow _ is the matrix - valued function defined via the vector , provided the limit exists for arbitrary reference shift vector . the vector is the referential gradient of the flow corresponding to the direction of the reference shift vector .it follows immediately for , definition ( [ e : formalrefgrad ] ) reduces to , thus .a somewhat weaker form of the definition of the referential gradient for any is such that for any reference shift vector and , geometrically , for every , the referential gradient is a measure of the relative change of the flow with respect to a shift in the reference point . in other words , for a given orbit , the referential gradient contains the deformation information of all `` neighboring '' orbits of similar length .let be in a coordinate chart .the reference shift direction vector is given by , and the coordinate expression for is , provided the limit exists . when , the referential gradient corresponding to the reference shift direction vector is , hence .one may always construct a coordinate representation of the referential gradient from the coordinate representation of the flow via definition ( [ e : formalrefgradcoords ] ) .however , in typical physical systems of interest , the governing equations describe the evolution of the generating vector field , and the associated flow is derived therefrom .hence , the following theorem gives the explicit functional dependence of the referential gradient of the flow on the generating vector field . _( generalized lagrangian specification of the referential gradient ) _ [ thm : refgradlagrangian ] let be a smooth , complete generating vector field with associated flow .the referential gradient , for and , is the unique solution to the differential equation , where is the identity .furthermore , in , the solution is given explicitly by the absolutely and uniformly convergent , path - ordered exponential , f \left ( \ 0 , p \\right ) \\ % f \left ( \ \lambda , p \\right ) = \mathcal{p } \text{exp } \biggl [ \ \int_{0}^{\lambda } dt \ \nabla b \bigl ( \ \phi_{p } \left ( t \right ) \\bigr ) \ \biggr ] f \left ( \ 0 , p \\right ) \\ \\\ ] ] the system of differential equations ( [ e : mcflow ] ) and ( [ e : refgradholonomy ] ) together with their reference conditions are often referred to as the _ system of equations of variations _ for the flow ( see e.g. , ref . 32 ) .moreover , equations ( [ e : refgradholonomy ] ) and ( [ a : pexpreferentialgradient ] ) constitute a generalized _ lagrangian specification of the referential gradient _ ( with respect to the connectivity parameter ) . 
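in practice , the lagrangian specification of theorem ( [ thm : refgradlagrangian ] ) can be evaluated by co - integrating the flow and the equations of variations dF / d(lambda) = F . grad b , with F(0) equal to the identity . the sketch below does this under the simplifying assumption of a flat connection ( so that grad b reduces to the ordinary jacobian ) and for a linear test field b(x) = a x , for which the exact answer is the matrix exponential exp(lambda a) ; both the field and the reference point are illustrative assumptions .

```python
# sketch of the equations of variations: co-integrate the flow
# d(phi)/d(lambda) = b(phi) and the referential gradient
# dF/d(lambda) = (grad b)(phi) . F, with F(0) the identity.
# a flat connection is assumed, so grad b is the ordinary jacobian;
# for the linear test field b(x) = A x the exact answer is expm(lambda * A).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

A = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.1, 0.0],
              [0.0, 0.0, -0.2]])          # illustrative constant matrix

def b_field(x):
    return A @ x

def jacobian(x):
    return A                              # grad b for the linear field

def rhs(lam, y):
    phi, F = y[:3], y[3:].reshape(3, 3)
    dF = jacobian(phi) @ F
    return np.concatenate([b_field(phi), dF.ravel()])

p = np.array([1.0, 0.0, 0.5])             # reference point (assumed)
y0 = np.concatenate([p, np.eye(3).ravel()])
lam_max = 2.0
sol = solve_ivp(rhs, (0.0, lam_max), y0, rtol=1e-10, atol=1e-12)

F_numeric = sol.y[3:, -1].reshape(3, 3)
print("max deviation from expm(lam*A):",
      np.abs(F_numeric - expm(lam_max * A)).max())
```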
in [ ss : refgraddynamics ] , we construct an _ eulerian specification of the referential gradient _ equivalent to equations ( [ e : refgradholonomy ] ) .we begin by deriving equation ( [ e : refgradholonomy ] ) . using the fundamental relation between the connection flow and generating field , equations ( [ e : mcflow ] ) , for any , we construct the flow difference , \\ % \\ % \end{split } % \displaystyle % \phi \left ( 0 , p + h \right ) - \phi \left ( 0 , p \right ) = h \\% \end{array}\ ] ] equation ( [ e : refgradholonomy ] ) follows upon dividing equation ( [ e : mcflowdifference ] ) by and in the limit .consider the lhs of equation ( [ e : mcflowdifference ] ) .the vector does not depend on the connectivity parameter . using definition ( [ e : formalrefgrad ] ), it follows immediately, \\ % \\ % \displaystyle % \lim_{\vert h \vert \rightarrow 0 } \\frac{1}{\vert h \vert } \ \frac{\partial}{\partial \lambda } \biggl ( \ \phi \bigl ( \lambda , p + \vert h \vert \ \hat{h } \bigr ) - \phi \bigl ( \lambda , p \bigr ) \\biggr ) = \frac{\partial}{\partial \lambda } \\biggl [ \ f \left ( \ \lambda , p \ \right ) \cdot \hat{h } \ \biggr ] \\ % \\ \displaystyle \lim_{\vert h \vert \rightarrow 0 } \ \frac{1}{\vert h \vert } \ \frac{\partial}{\partial \lambda } \biggl ( \ \phi \bigl ( \lambda , p + \vert h \vert \ \hat{h } \bigr ) - \phi \bigl ( \lambda , p \bigr ) \\biggr ) = \frac{\partial f \left ( \ \lambda , p \ \right)}{\partial \lambda } \cdot \hat{h } \\ \\ % \end{array}\ ] ] consider the rhs of equation ( [ e : mcflowdifference ] ) .let and denote the non - zero shift vector such that , where the shift direction vector . by assumption ,the generating vector field is smooth for all , thus we may taylor expand the rhs of equation ( [ e : mcflowdifference ] ) ( see appendix [ a : generalizedfirstordertaylorexpansion ] ) . noting the identity , the rhs of equation ( [ e : mcflowdifference ] ) may be written , hence , \\% \displaystyle % & = \lim_{\vert h \vert \rightarrow 0 } \\frac{1}{\vert h \vert } \\biggl [ \ b \bigl ( \ y + \vert w \vert \ \hat{w } \\bigr ) - b \bigl ( y \bigr ) \ \biggr ] \\ % \\ % \displaystyle % & = \lim_{\vert h \vert \rightarrow 0 } \\frac{1}{\vert h \vert } \\biggl [ \ w \cdot \nabla b \bigl ( y \bigr ) + w \cdot \int^{1}_{0 } ds \\biggl ( \ \nabla b \bigl ( \ y + s \ \vert w\vert \ \hat{w } \\bigr ) - \nabla b \bigl ( y \bigr ) \\biggr ) \ \biggr ] \\ % \\\displaystyle \lim_{\vert h \vert \rightarrow 0 } \\frac{1}{\vert h \vert } & \ \biggl [ \ b \bigl ( \ \phi \bigl ( \lambda , p + \vert h \vert \ \hat{h } \bigr ) \\bigr ) - b \bigl ( \ \phi \bigl ( \lambda , p \bigr ) \\bigr ) \ \biggr ] \\\displaystyle & = \lim_{\vert h \vert \rightarrow 0 } \\frac{w}{\vert h \vert } \cdot \nabla b \bigl ( y \bigr ) \\\displaystyle & + \lim_{\vert h \vert \rightarrow 0 } \\frac{w}{\vert h \vert } \cdot \int^{1}_{0 } ds \\biggl ( \ \nabla b \bigl ( \ y + s \ \vert w \vert \ \hat{w } \\bigr ) - \nabla b \bigl ( y \bigr ) \\biggr ) \\ \end{split } \end{array}\ ] ] recall , ; by definition ( [ e : formalrefgrad ] ) , it remains to show the integral remainder goes to zero in the limit .it suffices to show in the limit . by assumption ,the generating vector field is smooth , then by standard theorems of the smoothness of ode solutions , so also is the associated flow smooth ( e.g. , ref . 
, theorem 17.19 ) ; thus , the integral remainder term in equation ( [ e : mcflowdifferencerhs1 ] ) is zero since , and , \\ \displaystyle & = \bigl ( \ f \left ( \ \lambda , p \ \right ) \cdot \hat{h } \\bigr ) \cdot \int^{1}_{0 } ds \\biggl ( \ \nabla b \bigl ( y \bigr ) - \nabla b \bigl ( y \bigr ) \\biggr ) = 0 \\ \end{split } % \\% \displaystyle % \lim_{\vert h \vert \rightarrow 0 } \\frac{w}{\vert h \vert } \cdot \int^{1}_{0 } ds \\biggl ( \ \nabla b \bigl ( \ y + s \ \vert w \vert \ \hat{w } \\bigr ) - \nabla b \bigl ( y \bigr ) \\biggr ) \ \biggr ] = 0 \\ % \end{array}\ ] ] hence , the rhs of equation ( [ e : mcflowdifference ] ) is , \\ \displaystyle & = \bigl ( \ f \left ( \ \lambda , p \ \right ) \cdot \hat{h } \\bigr ) \cdot \nabla b \bigl ( \ \phi \bigl ( \lambda , p \bigr ) \\bigr ) \\ \end{split}\ ] ] equating the lhs with the rhs , respectively , equations ( [ e : mcflowdifferencelhs ] ) and ( [ e : mcflowdifferencerhs ] ) , \\ % \\ % \displaystyle % \frac{\partial f \left ( \ \lambda , p \ \right)}{\partial \lambda } \cdot \hat{h } = \bigl ( \ f \left ( \ \lambda , p \ \right ) \cdot \hat{h } \\bigr ) \cdot \nabla b \bigl ( \ \phi \bigl ( \lambda , p \bigr ) \\bigr ) \\ \displaystyle \frac{\partial}{\partial \lambda } \\bigl ( \ f \left ( \ \lambda , p \ \right ) \cdot \hat{h } \\bigr ) = \bigl ( \ f \left ( \ \lambda , p \\right ) \cdot \hat{h } \\bigr ) \cdot \nabla b \bigl ( \ \phi \bigl ( \lambda , p \bigr ) \\bigr ) \\ % \end{array}\ ] ] finally , since the reference shift vector is arbitrary , and noting the contraction map remains unambiguous for fields , the differential equation ( [ e : refgradholonomy ] ) follows immediately , the reference condition is given by definition ( [ e : formalrefgrad ] ) at .we proceed to construct the solution ( [ a : pexpreferentialgradient ] ) . by assumption ,the generating vector field is smooth and complete for all , and so therefore the gradient is smooth and everywhere non - singular for all .thus , the system of equations ( [ e : refgradholonomy ] ) is a well - posed initial value problem and standard theorems for existence , uniqueness , and extension for ordinary differential equations apply ( see e.g. , refs . ) . to construct solution ( [ a : pexpreferentialgradient ] ), we consider the integral formulation of the initial value problem ( [ e : refgradholonomy ] ) ; by the fundamental theorem of calculus ( see e.g. , ref . 1.4 ) , equation ( [ e : refgradholonomy ] ) is equivalent to the integral equation , where is the identity . for fixed , equation ( [ e : integralnearbyfllinearization ] ) is a linear fredholm equation of the second kind .relaxing the fixed condition , allowing the upper - limit of integration to vary over all , equation ( [ e : integralnearbyfllinearization ] ) is a linear volterra equation of the second kind .these integral equations are well known , and the convergence , existence , and uniqueness properties of the solutions are well defined ( see e.g. , ref . ) . we proceed to show the existence of a solution to equations ( [ e : integralnearbyfllinearization ] ) via the resolvent kernel method . for a non - singular kernel ,equation ( [ e : integralnearbyfllinearization ] ) may be solved directly by a method of iteration , such that if there exists a set of functions that satisfy equations ( [ e : integralnearbyfllinearization ] ) , then these functions also satisfy the iterated set of equations . 
substituting the rhs of equation ( [ e : integralnearbyfllinearization ] ) into the integrand and expanding , we get the two - fold iteration equation , \cdot \nabla b \bigl ( \ \phi \bigl ( \sigma_{0 } , p \bigr ) \ \bigr)\\ % \\ % \end{split } % \\ \begin{split } \displaystyle & f \left ( \ \lambda , p \\right ) = f \left ( \ 0 , p \\right ) + \int_{0}^{\lambda } d\sigma_{0 } \ f \left ( \ 0 , p \ \right ) \cdot \nabla b \bigl ( \ \phi \bigl ( \sigma_{0 } , p \bigr ) \\bigr ) \\ \displaystyle & + \int_{0}^{\lambda } d\sigma_{0 } \\int_{0}^{\sigma_{0 } } d\sigma_{1 }\ f \left ( \ \sigma_{1 } , p \ \right ) \cdot \nabla b \bigl ( \ \phi \bigl ( \sigma_{1 } , p \bigr ) \\bigr ) \cdot \nabla b \bigl ( \ \phi \bigl ( \sigma_{0 } , p \bigr ) \\bigr ) \\ \end{split } \end{array}\ ] ] for ease of notation , considering as a fixed parameter , we define , hence , we may write the 2-fold iteration equation ( [ a : implicitdefgrad1 ] ) as , repeating this procedure -times , and collecting terms , we obtain the -fold iterated equation , \\ % \begin{split } \displaystyle + \int_{0}^{\lambda } d\sigma_{0 } \int_{0}^{\sigma_{0 } } d\sigma_{1 } \ \dotsi \int_{0}^{\sigma_{n } } d\sigma_{n+1 } \ f & \left ( \ \sigma_{n+1 } , p \\right ) \cdot n^{\left ( 0 \right ) } \left ( \sigma_{n+1 } \right ) \\ \displaystyle & \dotsi n^{\left ( 0 \right ) } \left ( \sigma_{1 } \right ) \cdot n^{\left ( 0 \right ) } \left ( \sigma_{0 }\right ) \\ \end{split } \end{array}\ ] ] where the term ( ) is defined by the recursive formula , \cdot n^{\left ( 0 \right ) } \left ( \sigma_{0 } \right ) \\ % \\ % \\ % \end{array}\ ] ] thus , as , there exist a set of functions , , that satisfy equations ( [ e : integralnearbyfllinearization ] ) , and are given by the infinite series , \\\ ] ] where each term is given by the recursion relations ( [ a : nthtermrecurrsionrelation ] ) and ( [ a : order2iterationkernels ] ) .we proceed to show the -fold iterated equation ( [ a : nfolditeration ] ) converges absolutely and uniformly for every , and therefore so does formula ( [ a : defgradgeneralsolution ] ) as . to show convergence of the first -terms in equation ( [ a : nfolditeration ] ) , let for all ; exists since , by assumption , is smooth. then , by the recursive formula ( [ a : nthtermrecurrsionrelation ] ) , the absolute value of each successive iterated kernel is also bounded , where is the ( non - negative ) total length of the integration path .formula ( [ a : boundedkernels ] ) is the term of the absolutely and uniformly convergent exponential series for any ( finite ) , = m \ \sum_{n = 0}^{\infty } \\frac{1}{n ! } \\bigl [ \ m \\vert \lambda \vert \ \bigr]^{n}\ ] ] therefore , the first -terms in equation ( [ a : nfolditeration ] ) are bounded by the absolutely and uniformly convergent series ( [ a : nthkernelconvergence ] ) for every . to show the final iteration term is bounded , let for all be the absolute upper bound of the referential gradient function everywhere along the integration path . exists as long as the gradient of the generating vector field remains smooth and non - singular for any .then , by the relations ( [ a : boundedkernels ] ) , the final iteration term of the -fold iterated equations ( [ a : nfolditeration ] ) satisfies the inequality , ^{n+1 } \\\end{array}\ ] ] formula ( [ a : nthtermiterationconvergence ] ) is the term of the same absolutely and uniformly convergent exponential series ( [ a : nthkernelconvergence ] ) with the coefficient , = m \\sum_{n = 0}^{\infty } \ \frac{1}{n ! 
} \\bigl [ \ m \vert \lambda \vert \ \bigr]^{n}\ ] ] thus , the final term in the -fold iterated equation ( [ a : nfolditeration ] ) tends to zero as , for any finite .consequently , the -fold iterated equation ( [ a : nfolditeration ] ) converges absolutely and uniformly to the functions , given by equations ( [ a : defgradgeneralsolution ] ) .we proceed to show that the functions , given by the absolutely and uniformly convergent series of equations ( [ a : defgradgeneralsolution ] ) , indeed satisfy equations ( [ e : integralnearbyfllinearization ] ) . substituting equations ( [ a : defgradgeneralsolution ] ) and definition ( [ a : order2iterationkernels ] ) into equations ( [ e : integralnearbyfllinearization ] ) , \cdot n^{\left ( 0 \right ) } \left ( \sigma \right ) \\ % \\ \begin{split } \displaystyle f \left ( \ \lambda , p \\right ) & = f \left ( \ 0 , p \\right ) + f \left ( \ 0 , p \ \right ) \cdot \int_{0}^{\lambda } d\sigma \ n^{\left ( 0 \right ) } \left ( \sigma \right ) \\ \displaystyle & + f \left ( \ 0 , p \ \right ) \cdot \int_{0}^{\lambda } d\sigma \int_{0}^{\sigma } d\chi \ \sum_{n = 0}^{\infty } \ n^{\left ( n \right ) } \left ( \chi \right ) \cdot n^{\left ( 0 \right ) } \left ( \sigma \right ) \\ \end{split } % \end{array}\ ] ] expanding the third term on the rhs of equation ( [ a : satisfysolution ] ) , \cdot n^{\left ( 0 \right ) } \left ( \sigma \right ) \\\end{split } \end{array}\ ] ] noting the recursion relation ( [ a : nthtermrecurrsionrelation ] ) for successive terms , ( [ a : expandedthirdterm1 ] ) becomes , \\ \end{split}\ ] ] substituting ( [ a : expandedthirdterm2 ] ) into ( [ a : satisfysolution ] ) , \\ \end{split}\ ] ] collecting terms , we recover equations ( [ a : defgradgeneralsolution ] ) , \\\ ] ] thus , equations ( [ a : defgradgeneralsolution ] ) , with recursion relations ( [ a : nthtermrecurrsionrelation ] ) and ( [ a : order2iterationkernels ] ) , satisfy equations ( [ e : integralnearbyfllinearization ] ) .finally , it remains to be shown the iterated solution ( [ a : defgradgeneralsolution ] ) may be cast in closed form , given by the path - ordered exponential solution ( [ a : pexpreferentialgradient ] ) .noting recursion relations ( [ a : nthtermrecurrsionrelation ] ) and ( [ a : order2iterationkernels ] ) , equation ( [ a : defgradgeneralsolution ] ) may be written , \\ % \\ % \displaystyle % f \left ( \ \lambda , p \\right ) = f \left ( \ 0 , p \ \right ) \ \cdot \ \biggl [ \ i \ + \ \sum_{n = 1}^{\infty } \\idotsint \limits_{\lambda \geq \sigma_{0 } \geq \dotsi \ge \sigma_{n-1 } \geq 0 } d\sigma_{1 } \dotsi d\sigma_{n } \\nabla b \bigl ( \ \phi \bigl ( \sigma_{n } , p \bigr ) \\bigr ) \ \dotsi \nabla b \bigl ( \ \phi \bigl ( \sigma_{0 } , p \bigr ) \\bigr ) \ \biggr ] \begin{split } \displaystyle & f \left ( \ \lambda , p \ \right ) = f \left ( \ 0 , p \ \right ) \cdot \biggl [ \ i + \int_{0}^{\lambda } d\sigma_{0 } \ \nabla b \bigl ( \ \phi \bigl ( \sigma_{0 } , p \bigr ) \\bigr ) \\ \displaystyle & + \sum_{n = 1}^{\infty } \ \int_{0}^{\lambda } d\sigma_{0 } \ \dotsi \int_{0}^{\sigma_{n-1 } } d\sigma_{n } \\nabla b \bigl ( \ \phi \bigl ( \sigma_{n } , p \bigr ) \\bigr ) \ \dotsi \nabla b \bigl ( \ \phi \bigl ( \sigma_{0 } , p \bigr ) \\bigr ) \ \biggr ] \\\end{split } % \\% \end{array}\ ] ] we introduce the _ product ordering operator _ ( also known as the _ path - ordered product operator _ ; see e.g. , ref . 
4.2 ) , the action of which is to permute the factors composing the kernel of each term in equation ( [ a : oinfiteration ] ) such that the integration connectivity parameter values appear in order from smallest to largest .explicitly , for any , the path - ordered product operator is , \\ % % \displaystyle % & = \bigg \lbrace \begin{array}{c c } \nabla b \bigl ( \ \phi \bigl ( \sigma_{i } , p \bigr ) \\bigr ) \cdot \nabla b \bigl ( \ \phi \bigl ( \sigma_{j } , p \bigr ) \\bigr ) \ & \ \ \sigma_{i } < \sigma_{j } \\\nabla b \bigl ( \ \phi \bigl ( \sigma_{j } , p \bigr ) \\bigr ) \cdot \nabla b \bigl ( \ \phi \bigl ( \sigma_{i } , p \bigr ) \\bigr ) \ & \ \ \sigma_{j } < \sigma_{i } \end{array } \\ % \end{split } \end{array}\ ] ] where is the heavyside step function , with . in general ,the ( free ) lie algebra consisting of evaluated at different connectivity parameter values is non - abelian ; e.g. , for any fixed and , \ne 0 \\\ ] ] however , in the special cases that the ( [ e : pocommutator ] ) equals zero , the ordered product operator reduces to the standard product . transforming integration limits from , to a uniform -cube for every , and using the product ordering operator , the term ( ) integration in equation ( [ a : oinfiteration ] ) may be rewritten , % \\ \displaystyle & \int_{0}^{\lambda } d\sigma_{0 } \ \dotsi \int_{0}^{\sigma_{n-1 } } d\sigma_{n } \\nabla b \bigl ( \ \phi \bigl ( \sigma_{n } , p \bigr ) \\bigr ) \ \dotsi \nabla b \bigl ( \ \phi \bigl ( \sigma_{0 } , p \bigr ) \\bigr ) \\ \displaystyle & = \frac{1}{n ! } \ \int_{0}^{\lambda } d\sigma_{0 } \\dotsi \int_{0}^{\lambda } d\sigma_{n } \\mathcal{p } \biggl ( \ \nabla b \bigl ( \ \phi \bigl ( \sigma_{n } , p \bigr ) \\bigr ) \ \dotsi \nabla b \bigl ( \ \phi \bigl ( \sigma_{0 } , p \bigr ) \\bigr ) \ \biggr ) \\ \end{split}\ ] ] furthermore , the nested integral on rhs of equation ( [ a : poinfiteration ] ) is simply independent factor integrations . hence , from equations ( [ a : oinfiteration ] ) and ( [ a : poinfiteration2 ] ) , we may formally define the _ path ordered exponential function _ , via its series representation , cases in which the ( free ) lie algebra is abelian , i.e. , ( [ e : pocommutator ] ) is equal to zero for every , the path ordered exponential ( [ a : poexponential ] ) reduces to the standard exponential .equation ( [ a : poexponential ] ) is simply a formally compact form of the bracketed terms in equation ( [ a : oinfiteration ] ) .noting is the identity ( given by definition ( [ e : formalrefgrad ] ) at ) , we get the final form of the referential gradient solution ( [ a : pexpreferentialgradient ] ) , for consistency , we note the formal path ordered exponential solution ( [ a : pexpreferentialgradient ] ) trivially reduces to the identity at .it is a nearly trivial matter to write the coordinate representation of equations ( [ e : refgradholonomy ] ) and solution ( [ a : pexpreferentialgradient ] ) .let be in a coordinate chart .the reference shift vector may be written .thus , the vector representing the referential gradient of the flow vector corresponding to the reference shift vector , evolves with the connectivity parameter according to , since the reference shift vector is arbitrary , we may write the first - order differential equations ( [ e : refgradholonomy ] ) for the coordinate - dependent component representation , where the reference condition is given by definition ( [ e : formalrefgrad ] ) at . 
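numerically , the path - ordered exponential of solution ( [ a : pexpreferentialgradient ] ) can be approximated by slicing the connectivity - parameter interval into small steps and multiplying , in order , the matrix exponentials of the kernel evaluated at successive parameter values ; when the kernels at different parameter values commute this collapses to an ordinary exponential . the sketch below illustrates the idea for a hypothetical parameter - dependent , non - commuting kernel n(lambda) and compares the ordered product with direct integration of dF / d(lambda) = n(lambda) F ( in the convention used here , later - parameter factors stand to the left ) .

```python
# sketch: approximate a path-ordered exponential by an ordered product of
# small-step matrix exponentials and compare with direct integration of
# dF/d(lambda) = N(lambda) F, F(0) = I.  the kernel N below is a hypothetical
# non-commuting, parameter-dependent matrix chosen only for illustration.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def N(lam):
    return np.array([[0.0, 1.0 + lam, 0.0],
                     [-1.0, 0.0, 0.3 * np.sin(lam)],
                     [0.2, 0.0, -0.1 * lam]])

lam_max, steps = 1.5, 2000
dlam = lam_max / steps

# ordered product: F ~ exp(N(s_K) dlam) ... exp(N(s_1) dlam), midpoints s_k
F_prod = np.eye(3)
for k in range(steps):
    s_k = (k + 0.5) * dlam
    F_prod = expm(N(s_k) * dlam) @ F_prod

# reference: direct integration of the variational equation
def rhs(lam, f):
    return (N(lam) @ f.reshape(3, 3)).ravel()

sol = solve_ivp(rhs, (0.0, lam_max), np.eye(3).ravel(),
                rtol=1e-11, atol=1e-13)
F_ode = sol.y[:, -1].reshape(3, 3)

print("max difference:", np.abs(F_prod - F_ode).max())
```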
furthermore , in the coordinate chart , solution ( [ a : pexpreferentialgradient ] ) is written , the kernel is the covariant derivative of the generating vector field is given by , where the position is evaluated along the flow .the integration operation of the referential gradient solution ( [ a : pexpreferentialgradient ] ) places powerful non - trivial restrictions on this object s transformation properties . in this section ,we explore the various conditions demanded in order that : 1 ) the components of the referential gradient transform as a tensor , 2 ) the referential gradient forms a group under translations of the connectivity parameter , and 3 ) the relationship between representations of the referential gradient under a change in integration variable .it is important to note , due to the integration the coordinate representation of the lagrangian specification of the referential gradient solution ( [ e : refgradproofcoords ] ) is valid only to the extent that it domain of definition .the first lemma makes explicit the condition under which the referential gradient components in any coordinate representation transform as a tensor under smooth changes of coordinates ; namely , so long as the set . _( coordinate transformation of the referential gradient ) _ [ lem : refgrad ] let and be coordinate charts on . under smooth coordinate transformations , the coordinate representation of the referential gradient transforms as a tensor , if and only if . to prove necessity , we assume , and show the transformation ( [ e : refgradcoordtransform ] ) follows from the series solution ( [ a : oinfiteration ] ) for any . in the chart , the coordinate representation of the referential gradient series solution ( [ a : oinfiteration ] )is given by , where we have made use of the reference condition . by assumption , and each kernel factor covariant derivative of the generating vector field transforms under smooth coordinate transformations as , for every . making use of the identity , the product kernel transforms as , by induction , the product kernel transforms as , upon substitution of ( [ e : kerneltransform ] ) and ( [ e : nthorderproductkerneltransform ] ) into ( [ e : seriessolncoords ] ) , and using the fact that the connectivity parameter is independent of coordinates , equation ( [ e : refgradcoordtransform ] ) follows immediately , \\% \end{split } % \\ f^{\mu}_{\nu } \bigl ( \ \lambda , x_{i } \left ( p \right ) \\bigr ) = \lambda^{\beta}_{\nu } \ \lambda^{\mu}_{\alpha } \f^{\alpha}_{\beta } \bigl ( \ \lambda , x_{j } \left ( p \right ) \\bigr ) % \end{array}\ ] ] to prove sufficiency , let the chart cover the set , and let .for any reference point , the coordinate representation of the referential gradient in the chart is well - defined for all .furthermore , there exists a such that for any , thus equation ( [ e : refgradcoordtransform ] ) is not defined .hence , upon shrinking to the extent that guarantees equation ( [ e : refgradcoordtransform ] ) for smooth coordinate transformations . whereas the previous lemma concerned the tensorial nature of the referential gradient components in any coordinate representation , the second lemma concerns the group properties of the referential gradient with respect to the connectivity parameter .namely , due to the integration , the group property of the referential gradient with respect to translations of connectivity parameter is a direct consequence of the group property of the flow along which it is evaluated . 
_( group structure of the referential gradient ) _ [ lem : refgradgroup ] let and .the referential gradient forms a ( lie ) group in the connectivity parameter .if and only if the flow satisfies for every and , or the representation of in any coordinate chart is a constant . to prove the lemma , it suffices to work in coordinates ; let be in a coordinate chart . to show necessity , we begin with the defining equation ( [ e : refgradholonomycoords ] ) . without loss of generality , fix that .noting , , equation ( [ e : refgradholonomycoords ] ) becomes , suppose the referential gradient satisfies the group condition , equation ( [ e : fgroup ] ) , then we may write , substituting equation ( [ e : refgradholonomycoords ] ) , fixing , we find the necessary condition that the referential gradient is a group in the connectivity parameter follows , equation ( [ e : necessarycondition ] ) is true for the representation of in any coordinate chart if for every and , or is constant ( independent of both the connectivity parameter and the reference point ) . to show sufficiency , we make use of the path - ordered exponential functional solution , equation ( [ e : refgradproofcoords ] ) . for fixed such that , the integral argument in the path - ordered exponential may be written as a sum , where , in the first integral on the rhs we have made the substitution .since the flow forms a group in the connectivity parameter , the second integral on the rhs is equivalent to pivoting off of a different reference point ; namely , .hence , we may write , by the baker - campbell - hausdorff theorem ( see appendix [ a : bchtheorem ] ) , sufficiency follows from the condition that the ( free ) lie algebra is abelian ; e.g. , the following commutator is zero , \\ \\% \begin{split } % \displaystyle % = \int^{\lambda_{2}}_{0 } & d\tau \\nabla_{\eta } \b^{\mu}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \tau , q \bigr ) \\bigr ) \ \bigr ) \ \int_{0}^{\lambda_{1 } } d\sigma \ \nabla_{\nu } \ b^{\eta}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \sigma , p \bigr ) \\bigr ) \ \bigr ) \\ % \displaystyle % & - \int_{0}^{\lambda_{1 } } d\sigma \ \nabla_{\eta } \b^{\mu}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \sigma , p \bigr ) \\bigr ) \ \bigr ) \ \int^{\lambda_{2}}_{0 } d\tau \ \nabla_{\nu } \ b^{\eta}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \tau , q \bigr ) \\bigr ) \\ % \\ % \end{split } % \\% \begin{split } % \displaystyle % = \int_{0}^{\lambda_{2 } } & d\tau \\int^{\lambda_{1}}_{0 } d\sigma \ \biggl ( \ \nabla_{\eta } \b^{\mu}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \tau , q \bigr ) \\bigr ) \ \bigr ) \ \nabla_{\nu } \b^{\eta}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \sigma , p \bigr ) \\bigr ) \ \bigr ) \\ % \displaystyle % & - \nabla_{\eta } \ b^{\mu}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \sigma , p \bigr ) \\bigr ) \ \bigr ) \ \nabla_{\nu } \b^{\eta}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \tau , q \bigr ) \\bigr ) \ \bigr ) \\biggr ) \\ % \\ % \end{split } %\\ \displaystyle = \int_{0}^{\lambda_{2 } } d\tau \ \int^{\lambda_{1}}_{0 } d\sigma \ \biggl [ \ \nabla_{\nu } \ b^{\mu}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \tau , q \bigr ) \\bigr ) , \nabla_{\nu } \ b^{\mu}_{i } \bigl ( \ x_{i } \bigl ( \ \phi \bigl ( \sigma , p \bigr ) \\bigr ) \ \bigr ) \ \biggr ] \\ \end{array}\ ] ] the rhs of equation ( [ e : groupcommutator ] ) is zero if the kernel is zero , to wit , = 0 \\\ ] ] in general , equation ( [ e : groupcommutatorabelian ] ) is non - zero unless the representation of in 
any coordinate chart is : 1 ) evaluated at the same point , or 2 ) independent of the connectivity parameter and reference point .the second case is trivial .the first case requires the condition . substituting for the change of integration variable used in equation ( [ e : pexpdecomp ] ), we recover the condition , .lemma ( [ lem : refgradgroup ] ) simply states that the group property of the referential gradient with respect to translations of the connectivity parameter is a direct consequence of the similar group property of the flow ; the constant representation is the trivial case . in a follow - up paper , we explore the geometric and topological implications of this lemma .the third lemma relates the referential gradient of the respective representations of the flow .recall from section ( [ ss : generalconnectivitymap ] ) , the flow of a vector field is an equivalence class under affine transformations of the connectivity parameter . as such, the flow may always be represented in a form in which the connectivity parameter measures an arc - length from the reference point via relations ( [ e : arclengthcondition ] ) , ( [ e : arclengthmcflow ] ) , and ( [ e : flowarclengthflowrelation ] ) . for ease of notation , by ] we mean , respectively , the lagrangian specification of the referential gradient solution ( [ a : pexpreferentialgradient ] ) constructed from of the full generating vector field with flow representation , and that constructed from the unit generating vector field with arc - length flow representation . _( relations between referential gradient representations ) _ [ lem : representationrelations ] let be a non - null generating vector field .the representations ] are related by , = \mathcal{p } \text{exp } \int^{l}_{0 } ds \\biggl ( \ \nabla b \bigl ( \ \psi \left ( s , p \right ) \\bigr ) + \frac{b}{2 } \ \frac{\nabla b^{2}}{b^{2 } } \bigl ( \ \psi \left ( s , p \right ) \\bigr ) \ \biggr ) \\ \\ \displaystyle f \left [ \ b \ \right ] = \mathcal{p } \text{exp } \int^{\lambda}_{0 } d\sigma \ \biggl ( \ \nabla b \bigl ( \ \phi \left ( \sigma , p \right ) \\bigr ) - \frac{b}{2 } \ \frac{\nabla b^{2}}{b^{2 } } \bigl ( \ \phi \left ( \sigma , p \right ) \\bigr ) \ \biggr ) \\ \end{array}\ ] ] furthermore , if the ( free ) lie algebras are abelian , = 0 \\\int^{\lambda}_{0 } d\sigma \ \nabla b \bigl ( \ \phi \left ( \sigma , p \right ) \\bigr ) \ , \ \int^{l}_{0 } ds \\frac{b}{2 } \ \frac{\nabla b^{2}}{b^{2 } } \bigl ( \ \phi \left ( \sigma , p \right ) \\bigr ) \ \biggr ] = 0 \\ \end{array}\ ] ] then relations ( [ e : representationsrelatations ] ) may be written , = f \left [ \ b \ \right ] \\mathcal{p } \text{exp } \biggl ( \ \int^{l}_{0 } ds \ \frac{b}{2 } \ \frac{\nabla b^{2}}{b^{2 } } \bigl ( \ \psi \left ( s , p \right ) \\bigr ) \ \biggr ) \\ \\ \displaystyle f \left [ \ b \ \right ] = f \left [ \ b \ \right ] \ \mathcal{p } \text{exp } \biggl ( \ - \int^{\lambda}_{0 } d\sigma \ \frac{b}{2 } \ \frac{\nabla b^{2}}{b^{2 } } \bigl ( \ \phi \left ( \sigma , p \right ) \\bigr ) \ \biggr ) \\ \end{array}\ ] ] we note , in the case of a null generating vector field , . 
hence ,the unit generating direction field does not exist , and therefore neither does the representation ] at , and = i ] .the lhs of equation ( [ e : covderdirrefgradcoord ] ) is simply the lie bracket of the vector fields and .hence , the -component of the lie derivative of the vector with respect to the generating vector field is given by , moreover , equation ( [ thm : dirrefgradcommutator2 ] ) follows immediately for a torsion - free metric connection . in the practical application of theorem ( [ thm : refgraddynamics ] ) , it is useful to use the levi - civita connection. furthermore , recall the generating vector field is assumed known _ a - priori _ everywhere . noting the reference shift direction vector is an arbitrary albeit fixed ( i.e. , coordinate - independent ) vector , the eulerian specification of the referential gradient is given by the following set of ( sixteen ) coupled first - order , partial differential equations , with reference condition .the eulerian specification of the referential gradient in minkowski space , , satisfies the following set of coupled first - order partial differential equations , ^{i } \\\displaystyle & = \frac{f^{0}_{\nu } \left ( \ x_{i } \\right)}{c } \\frac{\partial b^{i}_{i } \left ( \ x_{i } \\right)}{\partial t } \\\end{split } \end{array}\ ] ] with reference conditions at the point given by , paper introduces the _ referential gradient of the flow _ of a vector field , a generally covariant measure of the geometric structure of the flow of a vector field in four - dimensional spacetime .we assume _ a - priori _ the generating vector field exists , is everywhere smooth , and satisfies some set of governing evolution equations .the mathematical formalism of flows is provided as background from which the referential gradient object is defined .we provided the explicit relation between the referential gradient of the flow and the generating vector field from two equivalent perspectives : a lagrangian specification with respect to a generalized connectivity parameter , and an eulerian specification making explicit the evolution dynamics at each point of the manifold .the lagrangian specification theorem ( [ thm : refgradlagrangian ] ) yields a general closed - form functional solution with respect the generating vector field in terms of a generalized connectivity parameter . while the eulerian specification theorem ( [ thm : refgraddynamics ] ) makes explicit the referential gradient dynamics at each point of the manifold . due to the integration , we prove three transformation lemmas that identify the conditions under which the referential gradient transforms as a 1 - 1 tensor , forms a group with respect to the connectivity parameter , and the proper change of variable relations between the corresponding referential gradient representations . 
lemma ( [ lem : refgrad ] ) proves manifest covariance of the referential gradient provided the closure of its domain of definition is contained within coordinate chart overlap .lemma ( [ lem : refgradgroup ] ) identifies the necessary and sufficient conditions that the referential gradient forms a group with respect to translations of connectivity parameter ; that is , in general , 1 ) the group property is a direct consequence of the similar group property of the flow , or 2 ) the representation of in any coordinate chart is independent of the connectivity parameter and reference point .finally , since the flow of a vector field represents an equivalence class under affine transformations of the connectivity parameter , lemma ( [ lem : representationrelations ] ) provides the proper relations associated with a change of integration variable ; in particular , between the two most natural representations of the flow . in a follow - up paper , we develop a geometric mechanics and thermodynamics using the lagrangian specification of the referential gradient . in this contextwe explore the importance of lemma ( [ lem : refgradgroup ] ) with respect to topological invariants such as the linking number , as well as the consequences of relaxing the smoothness assumption of the _ a - priori _ generating vector field which lead to some powerful topological constraints on the storage , transport , and release of field energy in a system .furthermore , in this context , lemma ( [ lem : representationrelations ] ) will prove important in application to systems in which only partial information of the generating vector field may known .the author would like to acknowledge l. a. fisk at the university of michigan , and b. j. lynch at the university of california at berkeley for insightful discussions in the development of this research .this research was supported , in part , by nsf grant ags-1043012 , and nasa lws grant nnx10aq61 g .by standard taylor expansion theorem ( see e.g. , ref . , theorem a.58 ) , any smooth vector field for any fixed , may be written , where is the covariant derivative along the vector . in a chart , the vector field , and the covariant derivative along the vector given by , hence the component of equation ( [ ae : generaltaylorexpansion ] ) may be written , ds \\ \end{split } \end{array}\ ] ] it follows immediately the integral remainder is equal to zero when the vector .the exponential of a matrix satisfies the identity ( dinkin s formula , see ref . ) , + \sum_{n } \ a_{n } \ c_{n } \bigl ( \ x , y \ \bigr ) \\biggr ) \\\ ] ] where the are constant coefficients , and the are homogeneous ( lie ) polynomials in and of degree ( i.e. , nested commutators ) . the first few terms are well - known , and given by ( see ref . ) , \ \bigr ] + \bigl [ \ \left [ \ x , y \\right ] , y \ \bigr ] \\ \\\displaystyle a_{2 } = 1/24 \ \ \ & \ \ \ c_{2 } = \bigl [ \ \bigl [ \ x , \left [ \ x , y \\right ] \ \bigr ] , y \ \bigr ] \\ \\\displaystyle a_{3 } = 1/720 \ \ \ & \ \ \ \begin{split } c_{3 } = \bigl [ \ x , \bigl [ \ x , \bigl [ & \ x , \left [ \ x , y\ \right ] \ \bigr ] \ \bigr ] \ \bigr ] \\\displaystyle & - \bigl [ \ \bigl [ \ \bigl [ \ \left [ \ x , y\ \right ] , y \ \bigr ] , y \ \bigr ] , y \ \bigr ] \\ \end{split } \\\end{array}\ ] ] hirsch , m. w. , & smale , s. : _ differential equations , dynamical systems , and linear algebra_. pure and applied mathematics , ed .eilenberg , s. , & bass , h. 
, academic press inc ., harcourt brace jovanovich , 1974 | assuming _ a - priori _ a smooth generating vector field , we introduce a generally covariant measure of the flow geometry called the referential gradient of the flow . the main result is the explicit relation between the referential gradient and the generating vector field , and is provided for from two equivalent perspectives : a lagrangian specification with respect to a generalized parameter , and an eulerian specification making explicit the evolution dynamics . furthermore , we provide explicit non - trivial conditions which govern the transformation properties of the referential gradient object . |
in this paper we derive exact solutions for a diffusive predator - prey system , where , , and are positive parameters . subscripts and denote partial derivatives . the equations are expressed in dimensionless variables , where scalings of space and time have been introduced so as to have the equation in a simple form . the biological meaning of each term has been discussed in , which we briefly review . the model is of the predator - prey kind , with and being the densities of prey and predator . spatial redistribution of the population is governed by diffusive dynamics , with both species having the same diffusivities , . in the absence of the predator , the temporal dynamics of the prey density is of the allee type , with small populations not being viable . the presence of the predator population negatively affects the prey population . in turn , the predator population is totally dependent on prey availability , as the only term in the predator equation that represents the growth of the population is the term , where quantifies the gain in natality due to prey consumption . the parameters and represent the per capita mortality rates of prey and predator respectively , in the linear , small - populations limit . finally , the is a closure relation taking into account the effects of higher trophic levels . to investigate the dynamics of the above diffusive predator - prey system , the authors of ref . assumed the following relations between the parameters , namely and . in other words , it has been assumed that the per capita mortality rates of prey and predator are equal and that the rate of biomass production at the lower level is consistent with the rate of biomass assimilation at the upper level of the web . under this assumption eq . ( [ qqq1 ] ) takes a reduced form . in our work we consider this reduced predator - prey system ( [ qqq6 ] ) only . we construct exact analytic solutions to equation ( [ qqq6 ] ) in order to understand the properties of this model for different parametric values . to do so we employ the ( g'/g ) expansion method . this method has been applied to several nonlinear evolutionary equations . here we demonstrate the utility of this method in exploring the dynamics of a diffusive predator - prey system . while applying the expansion method to equation ( [ qqq6 ] ) we obtain exact solutions for two different wave speeds ( vide eqs . ( [ qqq17 ] ) and ( [ qqq18 ] ) ) . in both cases we give three different types of exact solutions . the plan of the paper is as follows . to begin with , in sec . 2 , we describe the expansion method . in sec . 3 , we consider eq . ( [ qqq6 ] ) and derive exact solutions of it . finally , we present our conclusion in sec . in this section we discuss briefly the method of finding exact solutions for a system of nonlinear partial differential equations ( pdes ) using the ( g'/g ) expansion method . suppose that the system of nonlinear pdes is of the form where and are two unknown functions and and are polynomials in and and their partial derivatives . the expansion method involves the following four steps . let us introduce the travelling wave reduction in the pde ( [ aaa1 ] ) so that the latter becomes where prime denotes differentiation with respect to the new variable .
suppose that the solution of can be expressed by a polynomial in , that is with is the solution of the second order linear damped harmonic oscillator equation in the above and are constants and , .the positive integers and can be determined by substituting ( [ aaa4 ] ) in ( [ aaa3 ] ) and considering the homogeneous balance between the highest order derivative and nonlinear terms appearing in ( [ aaa3 ] ) . substituting ( [ aaa4 ] ) in ( [ aaa3 ] ) and eliminating the variable in the resultant equations by using ( [ aaa5 ] ) one gets two polynomials equations in . now equating each coefficients of and to zeroone obtains a set of algebraic equations for the parameters and . solving these algebraic equations one can get exact values for these coefficients . substituting the values of and and and the general solution of ( [ aaa5 ] ) in ( [ aaa4 ] ) one can obtain three different types of travelling wave solutions for the given system of nonlinear pdes .in this section , we apply the method described in the previous section to the nonlinear pdes ( [ qqq6 ] ) and construct exact solutions . substituting ( [ aaa2 ] ) into ( [ qqq6 ] )we get the following system of ordinary differential equations ( odes ) , namely suppose that the solution of odes ( [ qqq9 ] ) can be expressed by a polynomial in which is of the form ( [ aaa4 ] ) . substituting ( [ aaa4 ] ) and their derivatives in ( [ qqq9 ] ) and performing the homogeneous balance between and and with in resultant equation we find and .so we fix the polynomials ( [ aaa4 ] ) be of the form substituting the expressions ( [ qqq13 ] ) and their derivatives in ( [ qqq9 ] ) and rearranging the resultant equation in the descending powers of we arrive at \big(\frac{g'}{g}\big)^3+[3\alpha_1\lambda - c\alpha_1+k\alpha_1 ^ 2+\frac{\alpha_1 ^ 2}{\sqrt{\delta}}-3\alpha_1 ^ 2\alpha_0-\alpha_1\beta_1]\big(\frac{g'}{g}\big)^2 & & \nonumber \\ + [ ( 2\mu+\lambda^2)\alpha_1-c\lambda \alpha_1-\beta\alpha_1 + 2k\alpha_0\alpha_1+\frac{2\alpha_1\alpha_0}{\sqrt{\delta}}-3\alpha_0 ^ 2\alpha_1-\alpha_1\beta_0-\alpha_0\beta_1]\big(\frac{g'}{g}\big ) & & \nonumber \\ + ( \mu \alpha_1\lambda - c\mu \alpha_1-\beta \alpha_0+\alpha_0 ^ 2+\frac{\alpha_0 ^ 2}{\sqrt{\delta}}-\alpha_0 ^ 3-\alpha_0\beta_0 ) = 0 , \qquad \qquad \qquad\end{aligned}\ ] ] \big(\frac{g'}{g}\big)^3+[3\beta_1\lambda - c\beta_1+k\alpha_1\beta_1 - 3\delta\beta_1 ^ 2\beta_0]\big(\frac{g'}{g}\big)^2 & & \nonumber \\ + [ ( 2\mu+\lambda^2)\beta_1-c\lambda \beta_1-\beta\beta_1+k\alpha_0\beta_1+k\alpha_1\beta_0 - 3\delta\beta_0 ^ 2\beta_1]\big(\frac{g'}{g}\big ) & & \nonumber \\ + ( \mu \beta_1\lambda - c\mu \beta_1-\beta \beta_0+k\alpha_0\beta_0-\delta\beta_0 ^ 3 ) = 0 .\qquad \qquad \end{aligned}\ ] ] equating the coefficients of , to zero in equations ( [ qqq15 ] ) and ( [ qqq16 ] ) we get the following set of algebraic equations , namely solving the above system of algebraic equations ( [ qqa17 ] ) and ( [ qqa18 ] ) we obtain two sets of values for the constants , , , and : since both the sets separately satisfy the algebraic equations in and they individually form a compatible solution . from eachset we derive an exact solution for the nonlinear pdes ( [ qqq6 ] ) . 
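the three families of solutions quoted below all descend from the auxiliary equation g'' + lambda g' + mu g = 0 , whose character is set by the sign of lambda^2 - 4 mu . the sketch below solves the auxiliary equation symbolically and forms the ratio g'/g for one representative parameter pair in each regime ; the numerical values of lambda and mu are placeholders and are not the parameter sets obtained from the algebraic system above .

```python
# sketch: solve the auxiliary equation g'' + lam*g' + mu*g = 0 with sympy and
# inspect g'/g in the three regimes lam**2 - 4*mu > 0, < 0 and = 0, which lead
# to the hyperbolic, trigonometric and rational travelling-wave solutions.
# the numerical values of lam and mu are placeholders for illustration.
import sympy as sp

xi = sp.symbols('xi')
g = sp.Function('g')

for lam, mu in [(3, 1), (1, 1), (2, 1)]:        # delta > 0, < 0, = 0
    ode = sp.Eq(g(xi).diff(xi, 2) + lam * g(xi).diff(xi) + mu * g(xi), 0)
    sol = sp.dsolve(ode, g(xi)).rhs
    ratio = sp.simplify(sol.diff(xi) / sol)
    print(f"lambda={lam}, mu={mu}, delta={lam**2 - 4*mu}:")
    sp.pprint(ratio)
    print()
```

for lambda^2 - 4 mu > 0 the ratio is built from exponentials ( giving front - like , single - structure solutions ) , for lambda^2 - 4 mu < 0 it is built from sines and cosines ( giving periodic solutions ) , and for lambda^2 - 4 mu = 0 it is rational ; this is the distinction that reappears in the discussion of single versus periodic structures in the conclusion .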
to begin with, let us take the values given in the first set; with these values, the solution takes the form given in eq. ( [ qqq19 ] ). it is known that the linear damped harmonic oscillator equation ( [ aaa5 ] ) admits three different types of solutions depending on the sign of the discriminant, namely the three cases given in eqs. ( [ qa17 ] )-( [ qa19 ] ). substituting ( [ qa17 ] )-( [ qa19 ] ) into ( [ qqq19 ] ) we arrive at the corresponding three forms of the travelling wave solutions, in which the remaining constants are arbitrary. [figures 1-3: the resulting solution profiles at a fixed time for representative parameter values.] for the second set of values we end up with the same form of solution, the only difference being in the values of the parameters; again three cases arise, with the corresponding constants arbitrary.

in this paper, we have constructed exact solutions for a diffusive predator-prey system which is modelled by a system of two coupled nonlinear pdes. using the (G'/G)-expansion method, we have derived exact solutions for two different wave speeds. the solutions that we have obtained are singular and cannot be taken at face value as describing actual situations in ecology. notwithstanding this, they present a very interesting property: depending on the sign of the discriminant, the nature of the solution changes from a single localized structure to a periodic one, which is akin to pattern-forming systems. expressing this quantity in terms of the original parameters of the equation, we find that for one sign we have a single structure like that in fig. 1, while for the opposite sign we have a periodic structure. the parameters involved measure the gain in natality obtained by the predator through prey consumption and the predator mortality; it follows that periodic structure formation is controlled by the strength of the mortality. in a more intuitive way, the mortality rate is the inverse of the typical time for a predator population to decay in the absence of prey. if this time is short, we have a periodic pattern, the population continuously decaying and recovering; if this time is long, a smoother dynamics shows up. the equations for which we could find new solutions, besides the previously known ones, have the obvious drawback that a matching of coefficients is necessary. this is the same situation as in most cases in the subject of exact solutions for reaction-diffusion equations, beyond the specific cases of interest in biology. however, the broad insight gained remains of interest, as the systems considered contain many elements of more realistic, non-solvable ones.

rak and ms wish to thank cnpq (brazil) and dst (india) for financial support through major research projects.
an unprecedented volume of observational data and information regarding the distribution and properties of optical sources in the universe is becoming available from ongoing and future sky surveys , both imaging and spectroscopic . these include boss ( baryon oscillation spectroscopic survey ; ) , des ( dark energy survey ; ) , kids ( kilo - degree survey ; ) , sumire , desi ( dark energy spectroscopic instrument ; ) , 4most ( 4-meter multi - object spectroscopic telescope ; ) , j - pas ( javalambre - physics of the accelerated universe astrophysical survey ; ) , lsst ( large synoptic survey telescope ; ) , euclid , and wfirst ( wide - field infrared survey telescope ; ) .combined with microwave background observations from the planck satellite and ground - based telescopes , such as act ( atacama cosmology telescope ) and spt ( south pole telescope ) , the mining of these and other datasets , such as resulting from new radio surveys , e.g. , vlass ( very large array sky survey ) and x - ray surveys , are expected to yield a host of cosmological and astrophysical insights .there are several foundational cosmological questions that the datasets will directly address .perhaps the most pressing is the mysterious cause of the accelerated expansion of the universe whether it is due to dark energy or a modification of general relativity .in addition , the observations will also bear on the ultimate nature of dark matter , provide information on the physics of the early universe by probing primordial fluctuations , and enhance our knowledge of the neutrino family , the lightest known massive particles in the universe . aside from these basic cosmological questions ,the survey data provides an enormous resource for attacking a very large number of astrophysical problems related primarily to understanding the formation of complex structure in the universe . in order to extract the full extent of information from these remarkable surveys , a similar level of effortmust be undertaken in the realm of theory and modeling . 
to attain the necessary realism and accuracy , sophisticated , large scale simulations of structure formationmust be carried out .these simulations address a large variety of tasks : providing predictions for many different cosmological models to solve the inverse problem related to determining cosmological parameters , investigating astrophysical and observational systematics that could mimic physics effects of the dark sector , enabling careful calibration of errors ( by providing precision covariance estimates ) , testing , optimizing , and validating observational strategies with synthetic catalogs , and finally , exploring new and exciting ideas that could either explain puzzling aspects of the observations ( such as cosmic acceleration ) or help to motivate and design the implementation of new types of cosmological probes .the simulations have to be large enough in volume to cover the observed universe ( or at least a large part of it ) and at the same time possess sufficient mass and force resolution ( a spatial dynamic range of roughly a million ) to resolve objects down to the smallest relevant scales .as one example , a recently constructed synthetic galaxy and quasar catalog for desi is shown in figure [ syn_cat ] .this catalog was based on a 1.1 trillion - particle hacc simulation , with a box - size of gpc ( ) .the diverse uses of simulations outlined above also demand fast turn - around times , as not one such simulation is required but , eventually , many hundreds to hundreds of thousands .the simulation codes , therefore , have to be capable of exploiting the largest supercomputing platforms available today and into the future .viewed as a single entity , the field of ` modern ' computational cosmology ( , ) has largely kept pace with the growth of computational power , but new challenges will need to be faced over the next decade .this is due to the failure of dennard scaling ( ) , which underlay the success of moore s law for about two decades . as a consequence of the fact that single - core clock rate and performance have stalled since 2004/5, the design of future microprocessors is branching into several new directions , to overcome the related performance bottlenecks ( ) the key constraint being set by electrical power requirements . the resulting impact on large supercomputersis already being seen ; aside from the familiar large multi - core processor clusters , two major approaches can be easily identified . the first approach is the large homogeneous system , built around a ` system on chip ' ( soc ) design or around many - core nodes , with concurrencies in the multi - million range the ibm blue gene / q is an excellent example of such an approach ; specific examples include sequoia ( 20 pflops peak ) at lawrence livermore national laboratory and mira ( 10 pflops peak ) at argonne national laboratory , supporting up to million - way concurrency ( sequoia ; half of that on mira ) . the second route is to take a conventional cluster but to attach computational accelerators to the cpu nodes .the accelerators can range from the ibm cell processor ( as on los alamos national laboratory s roadrunner , first to break the petaflop barrier ) , to gpus as on titan ( 27 pflops peak ) at oak ridge national laboratory , to the intel xeon phi coprocessor as on stampede ( 10 pflops peak ) at the texas advanced computing center .future evolution of supercomputer systems will almost certainly involve further branching in the space of possible architectures . 
because development of the supercomputing software environment is likely to lag significantly behind the pace of hardware evolution , it appears prudent , if not essential , to develop a code design philosophy and implementation plan for cosmological simulations that can be reconfigured relatively quickly for new architectures , has support for multiple programming paradigms , and at the same time , can extract high levels of sustained performance . with this challenge in mind ,we have designed hacc ( hardware / hybrid accelerated cosmology codes ) , an n - body cosmology code framework that takes full advantage of all available architectures .this paper presents an overview of hacc ` theory and practice ' by covering the methods employed , as well as describing important components of the implementation strategy .an example of hacc s capabilities is shown in figure [ zoom ] , a recent large cosmological run on mira , evolving more than 1 trillion particles ; this is the same simulation on which the results of figure [ syn_cat ] are based . box - size simulation with hacc on 32 blue gene / q racks .the force resolution is kpc and the particle mass , m .the image , taken during a late stage of the evolution , illustrates the global spatial dynamic range covered by the simulation , , although the finer details are not resolved in this visualization.,width=278 ] cosmological simulations can be broadly divided into two classes : gravity - only n - body simulations and ` hydrodynamic ' simulations that incorporate gasdynamics and models of astrophysical processes .since gravity dominates at large scales , and dark matter outweighs baryons by roughly a factor of five , n - body simulations are an essential component in modeling the formation of structure .several post - processing strategies can be used to incorporate missing physics in gravity - only codes , such as the halo occupation distribution ( hod ) approach ( ; ; ; ; ; ; ) , subhalo / halo abundance matching ( s / ham ) ( ; ; ; ; ) or semi - analytic modeling ( sam ) ( ; ; ; ; ; ; ) for adding galaxies to the simulations .whenever a more detailed understanding of the dynamics of baryons is required , gasdynamic , thermal , and radiative processes ( among others ) must be modeled , as well as processes such as star formation and local feedback mechanisms ( outflows , agn / sn feedback ) .a compact review of cosmological simulation techniques and applications can be found in .a concise review of phenomenological galaxy modeling is given in .carrying out a fully realistic first principles simulation program for all aspects of modeling cosmological surveys will not be possible for quite some time .the required gasdynamic simulations are very expensive and there is considerable uncertainty in the modeling and physics inputs .progress is nevertheless possible by combining high - resolution , large - volume n - body simulations and post - processing inputs from simulations that include gas physics to build robust phenomenological models .the parameters of these models would be determined by a set of observational constraints ; yet other observations would then function as validation tests .the hacc approach assumes this starting point as an initial design constraint , but one that can be relaxed in the future .the overall structure of hacc is based on the realization that a large - scale computational framework must not only meet the challenges of spatial dynamic range , mass resolution , accuracy , and throughput , but , as already discussed 
, be cognizant of disruptive changes in computational architectures .as an early validation of its design philosophy , hacc was among the pioneering applications proven on the heterogeneous architecture of roadrunner ( ) , the first computer to break the petaflop barrier . with its multi - algorithmic structure , hacc allows the coupling of mpi with a variety of local programming models mpi+`x ' to readily adapt to different platforms .currently , hacc is implemented on conventional and cell / gpu - accelerated clusters , on the blue gene / q architecture ( ) , and has been run on prototype intel xeon phi hardware .hacc is the first , and currently the only large - scale cosmology code suite world - wide , that has been demonstrated to run at full scale on _ all _ available supercomputer architectures .another key aspect of the hacc code suite is an inbuilt capability for fast `` on the fly '' or _ in situ _ data analysis . because the raw data from each run can easily be at the petabyte ( pb ) scale or larger , it is essential that data compression and data analysis be maximized to the extent possible , before the code output is dumped to the file system . in order to comply with storage and data transfer bandwidth limitations ,the amount of data reduction required is roughly two orders of magnitude .hacc s _ in situ _ data analysis system is designed to incorporate a number of parallel tools such as tools for generating clustering statistics ( power spectra , correlation functions ) , tessellation - based density estimators , a fast halo finder with an associated merger tree capability , real - time visualization , etc .the hacc framework has been used to generate a number of large simulations to carry out a variety of scientific projects .these include a suite of 64 billion particle runs for predicting the baryon acoustic oscillation signal in the quasar ly- forest , as observed by boss ( ) , a high - statistics study of galaxy group and cluster profiles , to better establish the halo concentration - mass relation ( ) , tests of a new matter power spectrum emulator ( ) , and a study of the effect of neutrino mass and dynamical dark energy on the matter power spectrum ( ) .results from other simulation runs will be available shortly .these range from simulation campaigns in the billion particle range per simulation , e.g. , simulations for determining sampling covariance for boss ( ) , to individual simulations in the billion particle class . as an example of the latter , a recently completed simulation on titan used 550 billion particles to evolve a 1.3 gpc periodic box , with mass resolution , m , and force smoothing scale of kpc ( ) .the paper is organized as follows . in section [ sec : feat ] we introduce the basic hacc design and algorithms , focusing on how portability and scaling on very large supercomputers is achieved , including discussions of practical matters such as memory management and parallel i / o . 
a significant aspect of supercomputer architecture is diversity at the level of the computational nodes .as mentioned above , the hacc design adopts different short - range solvers for different nodal architectures , and this feature is discussed separately in section [ sec : specs ] .section [ sec : verif ] presents some results from our extensive code verification program , showcasing a comparison test with gadget-2 , one of the most widely used cosmology codes today .the _ in situ _ analysis tool suite is covered in section [ sec : tools ] , and selected performance results are given in section [ sec : perf ] .we conclude in section [ sec : conc ] with a recap and a discussion of future evolution paths for hacc .in the standard model of cosmology , structure formation at large scales is described by the gravitational vlasov - poisson equation ( ) , a 6-d partial differential equation for the liouville flow ( [ le ] ) of the one - particle phase space distribution , arising from the non - relativistic limit of the vlasov - einstein set of equations , where the poisson equation encodes the self - consistency of the evolution : the expansion history of the universe is given by the time - dependence of the scale factor governed by the specifics of the cosmological model , the hubble parameter , , is newton s constant , is the critical density , , the average mass density as a fraction of , is the local mass density , and is the dimensionless density contrast , in general , the vlasov - poisson equation is computationally very difficult to solve as a partial differential equation because of its high dimensionality and the development of nonlinear structure including complex multistreaming on ever finer scales , driven by the gravitational jeans instability .consequently , n - body methods , using tracer particles to sample are used ; the particles follow newton s equations , with the forces given by the gradient of the scalar potential ( cf . ; for an early comparison of direct and particle methods in a non - gravitational context , see ) .the cosmological n - body problem is characterized by a very large value of the number of interacting particles and a very large spatial dynamic range . if one wishes to track billions of galaxy - hosting halos at a reasonable mass resolution , then hundreds of billions to trillions of tracer particlesare required .since gravity can not be shielded , this obviously precludes the use of brute - force direct particle - particle algorithms for the particle force computation .popular alternatives include pure particle - based methods ( tree codes ) or multi - scale grid - based methods ( amr codes ) , or hybrids of the two ( treepm , particle - particle particle - mesh , p m ) .it is not our purpose here to go into many details of the algorithms and their implementations ; good coverage of the background material can be found in , , , , , , and .the hacc design approach acknowledges that , as a general rule , particle and grid - based methods both have their limitations . for physics , algorithmic , and data structure reasons ,grid - based techniques are better suited to larger ( ` smooth ' ) lengthscales , with particle methods possessing the opposite property .this suggests that higher levels of code organization should be grid - based , interacting with particle information at a lower level of the computational hierarchy . 
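the intuition behind this split can be made concrete with a scan over a single particle pair: a mesh can only supply a smoothed version of the point-mass force, which is accurate beyond a few grid spacings, while the remainder is spatially compact. the sketch below uses a simple gaussian smoothing of assumed width (a fraction of a grid cell) purely for illustration; hacc's actual grid force is the cic/spectral solution described below, and its residual is measured numerically and fit by eq. ( [ fitform ] ).

```python
# illustrative sketch (not hacc code): split the newtonian pair force into a
# slowly varying long-range part that a mesh of spacing dx can represent, and
# a rapidly decaying short-range residual handled particle-by-particle.  the
# gaussian filter width sigma below is an assumed, illustrative choice; hacc
# measures its actual filtered grid force numerically and fits eq. (fitform).
import numpy as np
from scipy.special import erf

dx = 1.0                       # grid spacing (arbitrary units)
sigma = 0.7 * dx               # assumed filter scale, a fraction of a cell
r = np.linspace(0.1 * dx, 8.0 * dx, 400)

f_exact = 1.0 / r**2                                    # newtonian pair force (g*m1*m2 = 1)
# force derived from the gaussian-filtered potential  erf(r / (2 sigma)) / r
f_long = erf(r / (2*sigma)) / r**2 \
         - np.exp(-(r / (2*sigma))**2) / (np.sqrt(np.pi) * sigma * r)
f_short = f_exact - f_long                              # residual for the short-range solver

for n in (1, 2, 3, 4, 5):
    i = np.argmin(np.abs(r - n*dx))
    print(f"r = {n} dx : short-range fraction = {f_short[i] / f_exact[i]:.2e}")
# the residual falls off far faster than 1/r^2, so beyond a few grid cells the
# mesh force alone is an excellent approximation and only a compact,
# near-neighbour correction remains for the particle solver.
```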
following this central idea, hacc uses a hybrid parallel algorithmic structure , splitting the gravitational force calculation into a specially designed grid - based long / medium range spectral particle - mesh ( pm ) component that is retained on all computational architectures , and an architecture - tunable particle - based short / close - range solver .the spectral pm component can be viewed as an upper layer that is implemented using c++/c / mpi , essentially independent of the target architecture , whereas , the bottom or node level is where the short / close - range solvers reside .these are chosen and optimized depending on the target architecture and use different local programming models as appropriate .of the 6 orders of magnitude required for the spatial dynamic range for the force solver , the grid is responsible for 4 orders of magnitude , while the particle methods handle the critical 2 orders of magnitude at the shortest scales where particle clustering is maximal and the bulk of the time - stepping computation takes place .the short - range solvers can employ direct particle - particle interactions , i.e. , a p m algorithm , as on some accelerated systems , or use both tree and particle - particle methods as on the ibm blue gene / q ( ` pptreepm ' with a recursive coordinate bisection ( rcb ) tree ) . as two extreme cases , in non - accelerated systems , the tree solver provides very good performance but has some complexity in the data structure , whereas for accelerated systems , the local approach is more compute - intensive but has a very simple data structure , better - suited for computational accelerators such as cells and gpus .the availability of multiple algorithms within the hacc framework also allows us to carry out careful error analyses , for example , the p m and the pptreepm versions agree to within for the nonlinear power spectrum test in the code comparison suite of ( see section [ sec : verif ] for more details ) .this level of error control easily meets the minimal requirements set by the increased statistical power of next - generation survey observations . in the followingwe first describe the long / medium - range force solver employed by hacc .as mentioned above , this solver remains unchanged across all architectures . after doing this , we provide details of the architecture - specific short - range solvers , and the sub - cycled time - stepping scheme used by the code suite .an important aspect of large - volume cosmological simulations is that the density distribution is very highly clustered , with an overall topology descriptively referred to as the `` cosmic web '' .the clustering is such that the maximum distance moved by a particle is roughly 30 mpc , very much smaller than the overall scale of the simulation box ( ) . with a 3-d domain decomposition , each ( non - cubic ) nodal volume ( mpi rank ) is roughly of linear size 50 - 500 mpc , depending on the simulation run size .the idea behind particle overloading is to ` overload ' the node with particles belonging also to a zone of size roughly 3 - 10 mpc extending out from the nominal spatial boundary for that node ( so - called `` passive '' particles ) .note that copies of these particles essentially a replicated particle cache , roughly analogous to ghost zones in pde solvers will also be held by other processors , in only one of which will they be `` active '' , hence the use of the term ` overloading ' . 
because more than one copy of these particles is held by neighboring domains ,overloading is not the same as the guard zone conventionally used to reduce communication in particle codes .the point of having this particle cache is two - fold .first , for a number of time steps no particle communication across nodes is required .additionally , the cached particle ` skin ' allows particle deposition and force interpolation for the spectral particle - mesh method to be done using information entirely local to a node , thus grid communication is also reduced .the particle cache is refreshed ( replacement of passive particles in each computational domain by active particles from neighboring domains ) at some given number of time steps .this involves only nearest neighbor communication and the penalty is a trivial fraction of the time spent in a single global time step .the second advantage of overloading is that when a short - range solver is added to each computational domain , no communication infrastructure is associated with this step .thus , in principle , short - range solvers can be developed completely independently at the node level , and then inserted into the code as desired .consequently , hacc s weak scaling is purely a function of the properties of the long - range solver . the overloaded pm solver is formally exact each node sends its local density field ( computed with active particles only ) to the global spectral poisson solver , which then returns the force for both active and passive particles to each node . the short - range force calculation for passive particlesis computed in the same way as for the active particles , except that passive particles , which are closer to the outer domain boundary than the short - range force - matching scale , ( defined in the following section ) , do not have their short - range forces computed , and are subject to only long - range forces .this avoids force anisotropy near the overloaded zone boundary , at the expense of a force error on the passive particles that are close to the edge of the boundary . since the role of the passive particles is primarily to provide a buffer` boundary condition ' zone for the active particles near the nodal domain s active zone boundary , consequences of this error are easy to control .overloading has two associated costs : ( i ) loss of memory efficiency because of the overloading zone , and ( ii ) the just - discussed numerical error for the short - range force that slowly leaks in from the edge of the outer ( passive particle ) domain boundary . 
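a one-dimensional cartoon of the overloading bookkeeping is sketched below; the box size, rank count, overload depth and particle number are invented numbers, and the real code exchanges particles between mpi ranks rather than slicing one shared array. it makes the two costs above explicit: the passive copies consume extra memory, and they are simply discarded and rebuilt from neighbouring active particles at each refresh.

```python
# toy 1-d cartoon of particle overloading (illustrative only: the box size,
# rank count, overload depth and particle number are invented, and the real
# code exchanges particles between mpi ranks rather than slicing one array).
import numpy as np

L, nranks, depth = 100.0, 4, 5.0            # box size, ranks, overload depth
edges = np.linspace(0.0, L, nranks + 1)

def in_active(x, r):                        # particle lies in rank r's own domain
    return (x >= edges[r]) & (x < edges[r + 1])

def in_overload(x, r):                      # within 'depth' of the domain (periodic)
    nearest = np.clip(x, edges[r], edges[r + 1])
    d = np.abs(x - nearest)
    d = np.minimum(d, L - d)
    return (~in_active(x, r)) & (d < depth)

def refresh(x):
    # passive copies are discarded and rebuilt from the (authoritative) active
    # particles of the neighbouring domains; this is the cache refresh step.
    return [{"active":  x[in_active(x, r)].copy(),
             "passive": x[in_overload(x, r)].copy()} for r in range(nranks)]

rng = np.random.default_rng(1)
x = rng.uniform(0.0, L, 2000)               # global particle positions
ranks = refresh(x)
print([(len(d["active"]), len(d["passive"])) for d in ranks])
```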
in cosmology applications ,the memory inefficiency can be tolerated to the point where the memory in passive and active particles is roughly equal , but this is not a limitation in the majority of the cases of interest .the second problem is easily mitigated by balancing the boundary thickness against the frequency of the particle cache refresh , `` recycling '' the passive particles after some finite number of time - steps , chosen to be such that the refresh time is smaller than the error diffusion time : each domain gets rid of all of its passive particles and then refreshes the passive zone with ( high accuracy ) active particles exchanged from nearest - neighbor domains .conventional particle - based codes employ a combination of spatial and spectral techniques .the cloud - in - cell ( cic ) scheme used for particle deposition is an example of a real space operation , whereas the fast fourier transform ( fft)-based poisson solver is a -space or spectral operation .the spatial operations in a typical particle code are the particle deposition , the force interpolation , and the finite - differences for computing field derivatives .the spectral operations include the influence ( or pseudo - green ) function fft solve , digital filtering , and spectral differentiation techniques .spatial techniques are often less flexible and more tedious to implement than their spectral counterparts . also , higher - order spatial operations can be complicated and lead to messy and dense communication patterns ( e.g. , indirection ) .accurate p m codes are usually run with triangle - shaped - cloud ( tsc ) , a high - order spatial deposition / filtering scheme , as well as with high - order spatial differencing templates . in terms of grid units ,the cic deposition kernel filters roughly at the level of two grid cells or less with a large amount of associated `` anisotropy noise '' ; tsc filters at about three grid cells with much reduced noise .the resulting long - range / short - range force matching is usually done at four / five grid cells or so ( but can be as small as three grid cells ) .treepm codes can move the matching point to a larger number of grid cells because of the inherent speed of the tree algorithm , so they can continue to use cic deposition ( with some additional gaussian filtering ) .hacc allows the use of both p m and treepm algorithms , using a shorter matching scale than most treepm codes .one advantage of the shorter matching scale is that a low - order polynomial expression can be used in the force representation , which greatly speeds up evaluation of the force kernel .behavior is tuned to occur at a separation of three grid cells , which sets the force - matching scale , .the low level of the force anisotropy noise is shown by the bracketing red lines representing the 1- deviation .the solid curve is the fitting formula of eq .[ fitform].,width=321 ] the hacc long - range force solver uses only cic mass deposition and force interpolation with a gauss - sinc spectral filter to mimic tsc - like behavior .in addition , a fourth - order super - lanczos spectral differentiator ( ) is used , along with a sixth - order influence function .this approach allows the data motion to be simplified as no complicated spatial differentiation is needed .also , by moving more of the poisson - solve to the spectral domain , the inherent flexibility of fourier space can be exploited .for example , the filtering is flexible and tunable , allowing careful force matching at only three grid cells 
.figure [ ppforce ] shows the force - matching with the spectral techniques using a pair of test particles . in this test, multiple realizations of particle pairs were taken at fixed distances , but with random orientations of the separation vector in order to sample the anisotropy imposed by the grid - based calculation of the force .the solution of the poisson equation in hacc s long range force - solver is carried out using large ffts ; the corresponding spectral representation of the inverse ( discrete ) laplacian is the influence function .hacc employs the following three - dimensional , sixth - order , periodic , influence function : ^{-1 } , \label{greenf}\end{aligned}\ ] ] where the sum is over the three spatial dimensions , is the grid spacing , and the physical box size .as mentioned earlier , the cic - deposited density field is spectrally filtered using a sinc - gaussian filter : ^{n_s}. \label{filter}\ ] ] the aim of the filter is to reduce the anisotropy noise as well as control the matching scale where the short - range and long - range forces are matched ; the nominal choices in hacc are and .this filtering reduces the anisotropy `` noise '' of the basic cic scheme by better than an order of magnitude , and allows for the use of the higher - order spectral differencing scheme . instead of solving for the scalar potential , and then using a spatial stencil - based differentiation, hacc uses spectral differentiation within the poisson - solve itself , using fourth - order spectral lanczos derivatives , as previously mentioned .the ( one - dimensional ) fourth - order super - lanczos spectral differentiation for a function , , given at a discrete set of points is where are coefficients in the fourier expansion of .the `` poisson - solve '' in the hacc code is the composition of all the kernels above in one single fourier transform .note that each component of the field gradient requires an independent fft .this entails some extra work , but is a very small fraction of the total force computation , the bulk of which is dominated by the short - range solver .an efficient and scalable parallel fast fourier transform ( fft ) is an essential component of hacc s design , and determines its weak scaling properties .although parallel fft libraries are available , hacc uses its own portable parallel fft implementation optimized for low memory overhead and high performance .since slab - decomposed parallel ffts are not scalable beyond a certain point ( restricted to ) , the hacc fft implementation uses data partitioning across a two - dimensional subgrid , allowing , where is the number of mpi ranks and is the linear size of the 3-d array .the resulting scalable performance is sufficient for use in any supercomputer in the foreseeable future ( ) .the implementation consists of a data partitioning algorithm which allows an fft to be taken in each dimension separately .the data structure of the computing nodes prior to the fft is such as to divide the total space into regular three - dimensional domains .therefore , to employ a two - dimensionally decomposed fft , the distribution code reallocates the data from small ` cubes ' , where each cube represents the data of one mpi rank , to thin two - dimensional ` pencil ' shapes , as depicted schematically in figure [ pencil ] .once the distribution code has formed the pencil data decomposition , a one - dimensional fft can be taken along the long dimension of the pencil .moreover , the same distribution algorithm is employed to carry out the 
remaining two transforms by redistributing the domain into pencils along those respective dimensions .the transposition and fft steps are overlapped and pipelined , with a reduction in communication hotspots in the interconnect .lastly , the dataset is returned to the three - dimensional decomposition , but now in the spectral domain .pairwise communication is employed to redistribute the data , and has proven to scale well in our larger simulations .a demonstration of this is provided by the blue gene / q sytems , where we have run on up to million mpi ranks ( ) . as the grid sizeis increased on a given number of processors , the communication efficiency ( i.e. , the fraction of time spent communicating data between processors ) , remains unchanged .this is an important validation of our implementation design , as the communication cost of the algorithm must not outpace the increase in local computation performance when scaling up in size .further details of the parallel fft implementation will be presented elsewhere .the total force on a particle is given by the vector sum of two components : the long - range force and the short - range force .at distances greater than the force - matching scale , only the long - range force is needed ( at these scales , the ( filtered ) pm calculation is an excellent approximation to the desired newtonian limit , see figure [ ppforce ] ) . at distances less than the force - matching scale , ,the short - range force is given by subtracting the residual filtered grid force from the exact newtonian force . to find the residual filtered pm force , we compute it numerically using a pair of test particles ( since in our case no analytic expression is available ) , evaluating the force at many different distances at a large number of random orientations .the results are fit to an expression that has the correct asymptotic behaviors at small and large separation distances ( cf .we used the particular form : which , with , , , , , , , and , provides an excellent match to the data from the test code ( figure [ ppforce ] ) , with errors much below . at very small distance scales ,the gravitational force must be softened , and this can be implemented using plummer or spline kernels ( see , e.g. , ) . the force expression , eq .( [ fitform ] ) , is complex and to implement faster force evaluations one can either employ look - ups based on interpolation or a simpler polynomial expression. the communication penalty of look - ups can be quite high , whereas an extended dynamic range is difficult to fit with sufficiently low - order polynomials . in our case ,the choice of a short matching scale , , enables the use of a fifth - order polynomial approximation , which can be vectorized for high performance . depending on the target architecture , hacc uses two different short - range solvers , one based on a tree algorithm , the other based on a direct particle - particle interaction ( p m ) .tree methods are employed on non - accelerated systems , while both p m and tree methods can be used on accelerated systems .hacc uses an rcb tree in conjunction with a highly - tuned short - range polynomial force kernel .( an oct - tree implementation also exists , but is not the current production version . ) the implementation of the rcb tree , although not the force evaluation scheme , generally follows the discussion in .( multiple rcb trees are used per rank to enhance parallelism , as described in section [ rcb ] . 
)two core principles underlie the high performance of the rcb tree s design ( ) . _ spatial locality . _the rcb tree is built by recursively dividing particles into two groups .the dividing line is placed at the center of mass coordinate perpendicular to the longest side of the box ( figure [ rcb ] ) .once the line of division is chosen , the particles are partitioned such that particles in each group occupy disjoint memory buffers .local forces are then computed one leaf node at a time .the net result is that the particle data exhibits a high degree of spatial locality after the tree build ; because the computation of the short - range force on the particles in any given leaf node , by construction , deals with particles only in nearby leaf nodes , the cache miss rate during the force computation is very low . _walk minimization ._ in a traditional tree code , an interaction list is built and evaluated for each particle . while the interaction list size scales only logarithmically with the total number of particles ( hence the overall complexity ) ,the tree walk necessary to build the interaction list is a relatively slow operation .this is because it involves the evaluation of complex conditional statements and requires `` pointer chasing '' operations .a direct force calculation scales poorly as grows , but for a small number of particles , a thoughtfully - constructed kernel can still finish the computation in a small number of cycles .the rcb tree exploits our highly - tuned short - range force kernels to decrease the overall force evaluation time by shifting workload away from the slow tree - walking and into the force kernel .up to a point , doing this actually speeds up the overall calculation : the time spent in the force kernel goes up but the walk time decreases faster .obviously , at some point this breaks down , but on many systems , tens or hundreds of particles can be in each leaf node before the crossover is reached .we point out that the force kernel is generally more efficient as the size of the interaction list grows : the relative loop overhead is smaller , and more of the computation can be done using unrolled vectorized code .in addition to the performance benefits of grouping multiple particles in each leaf node , doing so also increases the accuracy of the resulting force calculation : the local force is dominated by nearby particles , and as more particles are retained in each leaf node , more of the force from those nearby particles is calculated exactly . in highly - clustered regions ( with very many nearby particles ) , the accuracy can increase by several orders of magnitude when keeping over 100 particles per leaf node .the p m implementation within hacc follows the standard method of building a chaining mesh to control the number of particle - particle interactions ( ) .this algorithm is used within hacc when working with accelerated systems .we defer further details regarding the architecture - specific implementation of the short - range force to section [ sec : specs ] , where the different alternatives are covered separately .the time - stepping in hacc is based on the widely employed symplectic scheme , as used , e.g. , in the impact code ( ) , the forerunner of mc ( mesh - based cosmology code , see , e.g. 
, ) , in turn the pm precursor of hacc .the basic idea here is not to finite - difference the equations of motion , but to view evolution as a symplectic map on phase space .symplectic integration in hacc approximates the full evolution to second order in the time - step by composing elementary maps using the campbell - baker - hausdorff series expansion ( ) . in pm mode ,the elementary maps are the ` stream ' and ` kick ' maps and corresponding to the free particle ( kinetic ) piece and the one - particle effective potential in the hamiltonian , respectively . in the stream map ,the particle position is drifted using its known velocity , which remains unchanged ; in the kick map , the velocity is updated using the force evaluation , while the position remains unchanged .a symmetric ` split - operator ' symplectic step is termed sks ( stream - kick - stream ) ; a ksk step is another way to implement a second - order symplectic integrator .( in the presence of explicitly time - dependent hamiltonan pieces , the map evaluations have to be implemented at the correct times to maintain second - order accuracy . ) in the presence of both short and long - range forces , we split the hamiltonian into two parts , where contains the kinetic and particle - particle force interaction ( with an associated map ) , whereas , is just the long range force , corresponding to the map . since the long range force varies relatively slowly , we construct a single time - step map by subcycling : the total map being a usual second - order symplectic integrator .this corresponds to a ksk step , where the s is not an exact stream step as in the pm case , but has enough steps composed together to obtain the required accuracy . for typical problemsthe number , , of short time steps for each long time step will range between 3 - 10 , depending on accuracy requirements .because the late - time distribution of particles is highly clustered , there can be a substantial advantage in using different ( synchronized ) local time steps down to the single - particle level . although hacc is currently designed for a regime where extreme dynamic rangeis not needed , as in treating the innermost part of galaxy halos or in tracking orbits around black holes where this advantage is most felt ( e.g. , ) the automatic density information available in the short - range force solvers is used to enable multi - level time - stepping , resulting in speed - ups by a factor of 2 - 3 , with only small effects on the accuracy .more on this topic can be found in section [ sec : specs ] .hacc uses comoving coordinates for positions and velocities .the actual internal representation of all variables is in the dimensionless form : where the fundamental scaling length , , is the length of a single grid cell of the long - range force pm - solver , , where is the box - size and is the number of grid points in a single dimension , is the current value of the hubble parameter , and the background mass density , .hacc uses powers of the scale factor , , as the actual evolution variable , with a nominal default value of ; time - stepping is performed using the variable , .besides easy portability between different architectures , another very important feature of hacc is its highly optimized memory footprint .pushing the simulation limits in large - scale structure formation problems means running simulations with as many particles as possible , and this often implies running as close as possible to the memory limit of the machine . 
as a result , memory fragmentation becomes a serious problem .to make matters worse , hacc is required to allocate and free different data structures during different parts of each time step because there is not enough available memory to hold all such structures at the same time .furthermore , many of these data structures , such as the rcb tree used for the short - range force calculation , have sizes that change dynamically with each new time step .this , combined with other allocations from the mpi implementation , message printing , file i / o , etc . with lifetimes that might outlast a time - step phase ( e.g. long - range force computation , short - range force computation , _ in situ _analysis ) , is a recipe for fatal memory fragmentation problems problems that we actually encountered on the blue gene / q systems . to mitigate this difficultywe have implemented a specialized pool allocator called bigchunk .this allocator grabs a large chunk of memory , and then distributes it to various other subsystems .during the first time step , bigchunk acts only as a wrapper of the system s memory allocator , except that it keeps track of the total amount of memory used during each phase of the time step . before the second time step begins, bigchunk allocates an amount of memory equal to the maximum used during any phase of the previous time step plus some safety factor .subsequent allocations are satisfied using memory from the ` big chunk ' , and all such memory is internally marked as free after the completion of each time - step phase .this en - masse deallocation , the only kind of deallocation supported by bigchunk , allows for an implementation that has minimal overhead , and the time it takes to allocate memory from bigchunk is very small compared to the speed of the system allocator . because the bigchunk memory is not released back to the system , memory fragmentation no longer fatally affects the ability of the time - step phases to allocate their necessarily - large data structures .the persistent state information in the simulation is carried by the particle attributes .while the number of particles in each mpi rank s ( overloaded ) spatial sub - volume is similar , structure formation implies some variance . once the overload cache is filled , the ( total ) number of particles on a rank is fixed until the cache is emptied and refreshed , at which point the number of particles on each rank can change . in order to avoid memory fragmentation with persistent informationwe must anticipate the maximum number of particles any rank will need during the course of a simulation run and allocate that amount of memory for particle information at the start , before any other significant memory allocation occurs .we estimate the maximum number of particles by choosing a maximum representative volume from which bulk motion could move all of the particles into a rank s ( overloaded ) sub - volume and multiply that volume by the average particle density .the actual memory allocations are monitored during runtime , allowing for adjustments to be made while the code is running . 
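a toy version of this en-masse, phase-scoped pooling pattern is sketched below. it is illustrative only: the real allocator hands out raw memory inside a c++ code, and the 10% head-room used here is an assumed safety factor. the point is that "free everything at once" at the end of a phase makes the allocator essentially overhead-free and immune to fragmentation.

```python
# toy sketch of the 'bigchunk'-style pool (illustrative only: the real
# allocator manages raw memory in a c++ code, and the 10% head-room is an
# assumed safety factor).
import numpy as np

class BigChunkPool:
    def __init__(self):
        self.chunk = None          # one large buffer, grabbed once per step
        self.used = 0              # bump pointer: bytes handed out this phase
        self.high_water = 0        # largest per-phase usage seen so far

    def allocate(self, nbytes):
        self.used += nbytes
        self.high_water = max(self.high_water, self.used)
        if self.chunk is not None and self.used <= self.chunk.size:
            return self.chunk[self.used - nbytes:self.used]   # carve from the chunk
        return np.empty(nbytes, dtype=np.uint8)               # first step: system allocator

    def end_phase(self):
        self.used = 0              # en-masse 'free': the only deallocation supported

    def resize_for_next_step(self):
        if self.chunk is None or self.chunk.size < self.high_water:
            self.chunk = np.empty(int(1.1 * self.high_water), dtype=np.uint8)

pool = BigChunkPool()
for phase in range(3):             # e.g. long-range force, short-range force, analysis
    buf = pool.allocate(1 << 20)
    pool.end_phase()
pool.resize_for_next_step()        # sized from the worst phase of the step, plus margin
```

the persistent particle buffers discussed above are sized up front, outside this pool, using the volume-based estimate.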
in practice for the sub - volumes with side lengths that are at least several tens of mpcthis results in allocating memory for an additional skin of particles that is 6 - 10 mpc thick .hacc prints a memory diagnostic at each time step to indicate the extrema of particle memory usage across all ranks , and the amount of extra memory can be adjusted when restarting the code if the initial estimate appears to be insufficient for later times .in addition to the space required for the actual particle information , we also allocate an array of integers to represent a permuted ordering of the particles and a scratch array large enough to hold any single particle attribute .these enable out - of - place re - ordering of the particle information one attribute at a time without additional memory allocation . a plurality of i / o strategies are needed for different use cases , machine architectures , and data sizes because no single i / o strategy works well under all of these conditions .our three main approaches are described below ._ one file per process ._ using one output file per process ( i.e. mpi rank ) is the simplest i / o strategy , and continues to provide the best write bandwidth compared to any other strategy .because every process writes into a separate file , after file creation , there is no locking or synchronization needed in between processes .unfortunately , while simple and portable , one file per process only works for a modest number of processes ( typically less than 10,000 ) .no file system can manage hundreds of thousands of files that would result from checkpointing a large - scale run with one file per process .additionally , as a practical matter , managing hundreds of thousands of files is cumbersome and error - prone . finally , even when the number of files is reasonable , reading the stored data back into a different number of processes than used to write the data requires redistribution .this happens when the output is used for analysis on a smaller cluster ( or machine partition ) , or for visualization . in such cases ,the required reshuffling of all of the data in memory is equivalent to the aggregation done by more complex collective i / o strategies and cancels out the simplicity of the one file per process approach . for improved scalability and flexibility , hacc supports the following additional i / o strategies . _ many processes per file . _the default i / o strategy used by hacc , called genericio , partitions the processes in a system - specific manner , and each partition writes data into a custom self - describing file format .each process writes its data into a distinct region of the file in order to reduce contention for file - system - level page locks . within the region assigned to each process ,each variable is written contiguously .on modern supercomputers , such as ibm blue gene and cray x series machines , dedicated i / o nodes execute special i / o forwarding system software on behalf of a set of compute nodes .for example , on ibm blue gene / q systems , one i / o node is assigned to 128 compute nodes . by writing one file per i / o node ,the total number of files is reduced by at least the ratio of compute nodes to i / o nodes , and a high percentage of the peak available bandwidth can be captured by the reading and writing processes . 
by partitioning the processes by i / o - node assignment and providing each process with a disjoint data region, we are taking advantage of the technique successfully used by the glean i / o library ( ) on the blue gene / p and blue gene / q systems .the i / o implementation can use mpi i / o , in either collective or non - collective mode , or ( non - collective ) posix - level i / o routines .importantly , 64-bit cyclic - redundancy - check ( crc ) codes are computed for each variable for each rank , and this provides a way to validate data integrity .this detects corruption that occurs while the data is stored on disk , while files are being transferred in between systems , and while being transmitted within the storage subsystem(s ) . during a recent run which generated 100 tb of checkpoint files , a single error in a single variable from one writing processwas detected during one checkpoint - reading process , and we were able to roll - back to a previous checkpoint and continue the simulation with valid data .furthermore , over the past two years , crc validation has detected corrupted data from malfunctioning memory hardware on storage servers , misconfigured raid controllers and bugs in ( or miscompiled ) compression libraries .the probability of seeing corrupt data from any of these conditions is small ( even while they exist ) , and overall , modern storage subsystems are highly reliable , but when writing and reading many petabytes of data at many facilities the probability of experiencing faults is still significant .the crc64 code is in the process of being transformed into an open - source project .an example of genericio performance under production conditions on the blue gene / q system mira is given in table [ table : genio - perf ] . in tests , i / o performance very close to the peak achievable has been recorded . under production conditions ,we still achieve about two - thirds of the peak performance . _ a single file using parallel netcdf ._ when peak i / o performance is not required , and the system s mpi i / o implementation can deliver acceptable performance , we can make use of a parallel - netcdf - based i / o implementation ( ) . 
the netcdf format is used by simulation codes from many different science domains, and readers for netcdf have been integrated with many visualization and analysis tools. because netcdf has an established user community, and will likely be supported into the foreseeable future, writing data into a netcdf-based format should make distributing data generated by hacc to outside groups easier than if only custom file formats were supported. the file schema that we developed for the parallel netcdf format is shown in figure [ fig : pnetcdf ]. as the figure shows, particles are organized and indexed according to the spatial subdomains (blocks) of the simulation, one block per process. the spatial extents of each block are also indexed. currently, three modes of reading the parallel netcdf files are supported. first, given some number of processes that need not be the same as the number of blocks, particles can be redistributed uniformly among the new number of processes while being read collectively. second, the particles in a single block can be retrieved given the block id. third, particles in a desired bounding box can be retrieved; the queried bounding box need not match the extents of any one block, and particles overlapping several blocks may be retrieved in this manner. the performance for writing the parallel netcdf output is shown in table [ table : pnetcdf - perf ]. these tests were run on hopper (cray xe6) at the national energy research scientific computing center (nersc) with a lustre file system. because the file system is shared by all running jobs, performance results will vary due to resource contention; the values in table [ table : pnetcdf - perf ] are the means of four runs for each configuration. to put these results in perspective, consider the last row of table [ table : pnetcdf - perf ]. the peak performance expected for the number of object storage targets (osts, 128 in this case) is approximately 26.7 gib/s (1 gib = 2^30 = 1,073,741,824 bytes). this value represents ideal conditions of writing large amounts of data to one file per ost. in contrast, our strategy uses one shared file with collective i/o aggregation and a high-level format with index data in addition to raw particle arrays. even so, we achieve 56% of the peak ideal bandwidth.

table [ table : genio - perf ] : genericio write performance under production conditions on mira (blue gene/q).

  no. particles | no. processes | file size (gib) | write time (s) | write bandwidth (gib/s)
  --            | 512           | 43.8            | 22.0           | 1.90
  --            | 16384         | 1332.4          | 99.0           | 12.88
  --            | 262144        | 43821.6         | 380.5          | 109.9

table [ table : pnetcdf - perf ] : parallel netcdf write performance on hopper (cray xe6).

  no. particles | no. processes | file size (gib) | write time (s) | write bandwidth (gib/s)
  --            | 64            | 4.6             | 3.54           | 1.30
  --            | 512           | 36              | 14.34          | 2.51
  --            | 4096          | 288             | 19.10          | 15.1

in this section we go over the choice of algorithms deployed as a function of nodal architecture, as well as the corresponding optimizations implemented so far (performance optimization is a continuous process).
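before going through the individual architectures, a minimal, non-optimized sketch of the recursive coordinate bisection (rcb) build described above may be useful; the leaf size and the index-array layout below are choices made for this sketch, not hacc's in-place partitioning of its structure-of-arrays buffers.

```python
# minimal recursive-coordinate-bisection (rcb) build (illustrative only: the
# leaf size and the index-array layout are choices made for this sketch).
import numpy as np

def rcb_build(pos, idx, nmax=32, leaves=None):
    """pos: (n, 3) positions; idx: indices of the particles in this node."""
    if leaves is None:
        leaves = []
    if idx.size <= nmax:
        leaves.append(idx)                      # leaf node: up to nmax particles
        return leaves
    sub = pos[idx]
    lo, hi = sub.min(axis=0), sub.max(axis=0)
    axis = int(np.argmax(hi - lo))              # split perpendicular to the longest side
    cut = sub[:, axis].mean()                   # centre-of-mass coordinate (equal masses)
    left, right = idx[sub[:, axis] < cut], idx[sub[:, axis] >= cut]
    if left.size == 0 or right.size == 0:       # guard against degenerate splits
        leaves.append(idx)
        return leaves
    rcb_build(pos, left, nmax, leaves)
    rcb_build(pos, right, nmax, leaves)
    return leaves

rng = np.random.default_rng(0)
pos = rng.random((10000, 3))
leaves = rcb_build(pos, np.arange(pos.shape[0]))
print(len(leaves), max(l.size for l in leaves))
```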
in order to evaluate the short - range force on non - accelerated systems , such as the blue gene / q, hacc uses an rcb tree in conjunction with a highly - tuned short - range polynomial force kernel , as has been discussed in section [ srf ] .an important consideration in this implementation is the tree - node partitioning step , which is the most expensive part of the tree build .the particle data is stored as a collection of arrays the so - called structure - of - arrays format .there are three arrays for the three spatial coordinates , three arrays for the velocity components , in addition to arrays for mass , a particle identifier , etc .our implementation in hacc divides the partitioning operation into three phases .the first phase loops over the coordinate being used to divide the particles , recording which particles will need to be swapped .next , these prerecorded swapping operations are performed on six of the arrays .the remaining arrays are identically handled in the third phase .dividing the work in this way allows the blue gene / q hardware prefetcher to effectively hide the memory transfer latency during the particle partitioning operation and reduces expensive read - after - write dependencies .the premise underlying the multilevel timestepping scheme ( section [ time - stepper ] ) is that particles in higher density regions will require finer short - range time steps .a local density estimate can be trivially extracted from the same rcb tree constructed for evaluating the short - range interparticle forces .each `` leaf node '' in the rcb tree holds some number of particles , the bounding box for those particles has already been computed , and a constant density estimate is used for all particles within the leaf node s bounding box . because the bounding box , and thus the density estimate , changes as the particles are moved , the timestepping level assigned to each leaf node is fixed when the tree is constructed .this implies that , after particles have been moved , the leaf - node bounding boxes may overlap .the force - evaluation algorithm is insensitive to these small overlaps , and the effect on the efficiency of the force calculation is apparently negligible .[ see eq .( [ kicks ] ) ] versus 5 sub - cycle steps for each particle .shown is the ratio of the final power spectra for a small test problem ( 256 , 256 particles ) , going out to the particle nyquist wave number .the multi - level time step approach leads to a speed - up of the full simulation by a factor of , with only a negligible change in the error.,width=302 ] in between consecutive long - range force calculations , each particle is operated on by the kick ( velocity update ) and stream ( position update ) operators of the short - range force .if a particle , based on the density of its leaf node at the beginning of the subcycle , is evolved using kicks , then it needs to be acted on by stream operators each evolving the particle by . to ensure time synchronization , these stream operators are further split such that all particles are acted on by the same number of stream operatorsthe number of kicks used for particles in a leaf node is determined by : where is an adjustable linear scaling parameter . in addition , the maximum level is capped by an additional user - provided parameter .we show examples of accuracy control in the multi - level time stepping scheme in figures [ time1 ] [ time3_dens ] . 
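the specific assignment rule, eq. ( [ kicks ] ), is not reproduced here; purely to illustrate the mechanism, one possible density-to-level mapping and the resulting kick and stream counts might look as follows. the logarithmic rule, the scaling value and the power-of-two level spacing are all assumptions of this sketch.

```python
# illustrative density-to-level rule (an assumption standing in for
# eq. (kicks): the logarithmic form, the scaling value and the power-of-two
# level spacing are all choices of this sketch, not hacc's formula).
import numpy as np

def subcycle_level(rho_leaf, rho_mean, scale=1.0, max_level=4):
    """map a leaf-node density estimate to a time-step refinement level."""
    lvl = np.floor(np.log2(1.0 + scale * rho_leaf / rho_mean)).astype(int)
    return np.clip(lvl, 0, max_level)

rho_mean = 1.0
rho_leaf = np.array([0.2, 1.0, 10.0, 300.0])      # under- to strongly over-dense leaves
lvl = subcycle_level(rho_leaf, rho_mean)
n_kick = 2**lvl                                   # short-range kicks per long step
n_stream = np.full_like(n_kick, 2**lvl.max())     # all leaves share the finest stream
                                                  # count, keeping particles synchronized
print(np.c_[rho_leaf, lvl, n_kick, n_stream])
```

the accuracy tests referred to above are discussed next.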
in figure [ time1 ], we compare the power spectrum obtained from a simulation with 5 sub-cycle steps for each particle with a result that was obtained in the following way: 5 sub-cycles per step per particle are used until ; since the clustering at that point is still modest, this point is reached relatively quickly. after we evolve each particle with at least 2 sub-cycles and allow, depending on the density, two more levels of sub-cycling. in this test, setting the scaling parameter to 20 or 10 leads to accurate results, better than 0.2% out to the particle nyquist wavenumber, , where is the initial inter-particle separation (the test case used 256 particles in a 256 volume with 500 pm steps). in precision cosmology applications, one desires better than accuracy, and this is attained at wavenumbers less than ( ). consequently, the current error limits are comfortably within the required bounds, the error at being only . using the multi-level stepping can speed up the simulation by a factor of two (changing in the range shown leaves the performance unaffected). we have carried out a suite of convergence tests, concluding that the setting with satisfies our accuracy requirements. at much smaller length scales, the power spectrum test above can be augmented by checking the stability of small-scale structures in the halos as the adaptive time-stepping parameters are varied. as typical examples thereof, we show results for the largest and second-largest halos (identified using a 'friends-of-friends' or fof algorithm) in the same simulation discussed above in figures [ time2 ] to [ time3_dens ]. the halo density field is computed via a tessellation-based method in three dimensions and then projected onto a two-dimensional grid; details of this implementation will be presented elsewhere ( ). the angle-averaged (spherical) density profiles are also shown in figures [ time2_dens ] and [ time3_dens ]. as can be seen from these results, aside from trivial differences due to the fof linking noise, the halo substructure depends relatively mildly on the choice of the values of the scaling parameter, , over the chosen ranges used. (figure captions: the largest halo for the same run as in figure [ time1 ], shown for different values of the scaling parameter, where minor variations in the outskirts of the halos are due to fof linking 'noise'; the corresponding density profiles at the same values of the scaling parameter; and the same two comparisons for the second-largest halo.) aside from optimizing the number of force evaluations, one also has to minimize the time spent in evaluating the force kernel. this is a function of the design of the compute nodes. here we provide a description of the blue gene/q-specific short-range force kernel and how it is optimized. (very similar implementations were carried out for cray xe6 and xc30 systems.
) as mentioned earlier , the compactness of the short - range interaction ( cf .section [ sec : lrange ] ) , allows the kernel to be represented as where , is a 5-th order polynomial in , and is a short - distance cutoff ( plummer softening ) .this computation must be vectorized to attain high performance ; we do this by computing the force for every neighbor of each particle at once .the list of neighbors is generated such that each coordinate and the mass of each neighboring particle is pre - generated into a contiguous array .this guarantees that 1 ) every particle has an independent list of particles and can be processed within a separate thread ; and 2 ) every neighbor list can be accessed with vector memory operations , because contiguity and alignment restrictions are taken care of in advance .every particle on a leaf node shares the interaction list , therefore all particles have lists of the same size , and the computational threads are automatically balanced .the filtering of , i.e. , checking the short - range condition , can be processed during the generation of the neighbor list or during the force evaluation itself ; since the condition is likely violated only in a number of `` corner '' cases , it is advantageous to include it into the force evaluation in a form where ternary operators can be combined to remove the need of storing a value during the force computation .each ternary operator can be implemented with the help of the instruction , which also has a vector equivalent .even though these alterations introduce an ( insignificant ) increase in instruction count , the entire force evaluation routine becomes fully vectorizable .there is significant flexibility in choosing the number of mpi ranks versus the number of threads on an individual blue gene / q node . because of the excellent performance of the memory sub - system , a large number of openmp threads up to 32 per node can be run to optimize performance .concurrency in the short - range force evaluation is exposed by , first , building a work queue of leaf - node - vs - tree interactions , and second , executing those interactions in parallel using openmp s dynamic scheduling capability .each work item incorporates both the interaction - list building and the force calculation itself for each leaf node s particles . to further increase the amount of parallel work, hacc builds multiple rcb trees per rank .first , the particles are sorted into fixed bins , where the linear size of each bin is roughly the scale of the short - range force .an rcb tree is then constructed within each bin , and because this process in each bin is independent of all other bins , this is done in parallel .this parallelization of the tree - building process provides a significant performance boost to the overall force computation . when the force on the particles in each leaf node is computed , not only must the parent tree be searched , but so must the other 26 neighboring trees . because of the limited range of the short - range force , only nearest neighbors need to be considered . while searching many neighboring treesadds extra expense , the trees are individually not as deep , and so the resulting walks are less expensive . 
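the following condensed c++/openmp sketch illustrates the structure just described: a pre-gathered, contiguous neighbor list per leaf node, a polynomial force shape evaluated in horner form, the short-range condition folded into a ternary select rather than a branch, and a dynamically scheduled loop over (leaf node, tree) work items. the coefficients, softening, cutoff and all names are placeholders rather than the tuned production kernel, and the force shape shown (a softened newtonian factor times a polynomial in r^2) is only one plausible reading of the description above.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// placeholder short-range force shape: 5th-order polynomial in r^2 times a
// softened 1/r^3 factor; coefficients and constants are illustrative only
static const float P[6]  = {1.0f, -0.1f, 0.05f, -0.01f, 0.001f, -0.0001f};
static const float SOFT2 = 1.0e-4f;  // plummer-type softening, squared
static const float RMAX2 = 9.0f;     // cutoff radius squared (~3 grid cells)

// contiguous, pre-gathered neighbor list shared by all particles of one leaf node
struct NeighborList { std::vector<float> x, y, z, m; };

// short-range force on one particle; the cutoff test is a ternary select,
// mirroring the fsel-style filtering in the text, so the loop has no branches
inline void shortRangeForce(float xi, float yi, float zi, const NeighborList& nb,
                            float& fx, float& fy, float& fz) {
    for (std::size_t j = 0; j < nb.x.size(); ++j) {   // unit-stride, vectorizable
        const float dx = nb.x[j] - xi;
        const float dy = nb.y[j] - yi;
        const float dz = nb.z[j] - zi;
        const float r2 = dx*dx + dy*dy + dz*dz;
        float poly = P[5];                            // horner evaluation in r^2
        for (int k = 4; k >= 0; --k) poly = poly * r2 + P[k];
        const float s = r2 + SOFT2;
        float f = nb.m[j] * poly / (s * std::sqrt(s));
        f = (r2 < RMAX2) ? f : 0.0f;                  // branch-free cutoff filter
        fx += f * dx;  fy += f * dy;  fz += f * dz;   // self term contributes zero
    }
}

// work items pair one leaf node with one of the (up to 27) trees it must query;
// dynamic scheduling balances the per-item cost across openmp threads
struct WorkItem { int leaf; int tree; };

void evaluateShortRange(const std::vector<WorkItem>& queue) {
    #pragma omp parallel for schedule(dynamic)
    for (long w = 0; w < static_cast<long>(queue.size()); ++w) {
        // (1) walk queue[w].tree and gather the contiguous neighbor list of the leaf,
        // (2) call shortRangeForce for every particle owned by queue[w].leaf
        (void)queue[w];
    }
}
```

filtering inside the kernel rather than during list generation keeps the inner loop free of data-dependent branches, which is precisely what makes it fully vectorizable at the cost of a few extra instructions.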
also , because we distribute ` ( leaf node , neighboring tree ) ' pairs among the threads , this scheme also increases the amount of available parallelism post - build ( which helps with thread - level load balancing ) .all told , using multiple trees in this fashion provides a significant performance advantage over using one large tree for the entire domain .the first version of hacc was originally written for the ibm powerxcell 8i - accelerated hardware of roadrunner , the first machine to break the petaflop barrier . this architecture posed three critical challenges , all of which continue to be faced in one way or the other on all accelerated systems .a more detailed description of the cell implementation and the roadrunner architecture is given in .( see also . )the three challenges for a roadrunner - style architecture are as follows .( i ) _ memory balance . _the machine architecture ( figure [ rr_arch ] ) has a top layer of conventional multi - core processors ( in this case , two dual - core amd opterons ) to which are attached ibm powerxcell 8i cell broadband engines ( cell bes ) via an eight - lane pci - e bus .the relative performance of the opterons is small compared to that of the cell bes , by roughly a factor of 1:20 , but they carry half the memory and possess access to a communication fabric that is balanced to their level of computational performance . for large - scale n - body codes, memory is a key limiting factor , therefore the code design must make the best use possible of the cpu layer ( this situation continues in current and future accelerated systems , as discussed further below ) .( ii ) _ communication balance ._ the cell bes dominate the computational resource , but are starved for communication , due to the relatively slow pci - e link to the host cpu ( figure [ rr_arch ] ) . from the point of view of the compute / communication ratio , such a machine is 50 - 100 times out of balance .we note that this situation also continues to hold in the current generation of accelerated systems such as cpu / gpu or cpu / xeon phi machines : the computational approach taken must therefore maximize computation for a given amount of data motion .( iii ) _ multiple programming models . _ accelerated systems have a multi - layer programming model . on the cpu level ,standard languages and interfaces can be used ( hacc uses c / c++/mpi / openmp ) but the accelerator units often have a hardware - specific programming paradigm .( although attempts to overcome this gap now exist , the results in actual practice are not yet compelling . ) for this reason , it is desirable to keep the code on the accelerator ( the cell be in this case ) as simple as possible and avoid elaborate coding .in addition , it also proved advantageous to keep the data structures and communication patterns on the cell be as straightforward as possible to optimize computational efficiency . with these challenges in mind ,hacc is matched to the machine architecture as follows : at the first level of code organization , the medium / long range force is handled by the fft - based method that operates at the opteron layer , as for all other architectures . 
at this layer ,only grid information is stored and manipulated ( except when carrying out analysis steps ) .particles live only at the cell layer .there is a rough memory balance between the grid and particle data , matching well to the memory organization on the machine , and combating the first challenge mentioned above .the particle - grid deposition and grid - particle interpolation steps are performed at the cell layer with only grid information passing between the two layers .this compensates for the limited bandwidth available between the cell bes and the opterons .the local force calculations reside at the cell level .this addresses the second challenge , as aided by the particle overloading discussed in section [ sec : overload ] . because implementing complicated data structures at the cell level is difficult , and conditionals are best avoided , our choice for the local force solve is a direct particle - particle interaction . to make this interaction more efficient , we use a chaining mesh to control the number of interactions ( see , e.g. , )this leads to an efficient , hardware - accelerated p m algorithm , thereby overcoming the third challenge ( ) .as already discussed above , some of the challenges for gpu - accelerated systems are very similar in spirit to those for cell - accelerated systems .the low - level gpu programming model ( opencl or cuda ) adds another layer of complexity and the compute to communication balance is heavily skewed towards computing .one major difference between the two architectures is the memory balance : while on the cell - accelerated systems the cell layer has the same amount of memory as the cpu layer , this is generally not the case for cpu / gpu systems .for example , the cray xk7 system , titan , at oak ridge , has 32 gb of host - side memory on a single node , with only 6 gb of gpu memory .this adds yet another challenge , that of memory imbalance .we have overcome this by partitioning the local data into smaller overlapping blocks , which can fit on device memory . very similar in spirit to particle overloading ,the boundaries of the partitions are duplicated , such that each block can be evolved independently .we emphasize again that the long / medium range calculations on this architecture remain unchanged , and only the short - range force kernel needs to be optimized .the data partitioning is illustrated in figure [ data_part ] .we utilize a two - dimensional decomposition of data blocks , which are in turn composed of slabs that are spatially separated by the extent of the short - range force roughly 3 grid cells ( see figure [ ppforce ] ) . 
in complete analogy to particle overloading, the data blocks are composed of 'active' particles (green) that are updated utilizing the 'passive' particles (yellow and red) from the boundary. the (red) edge slabs are solely streamed, as opposed to performing the full sks time stepping described in section [ time-stepper ]. this mitigates the error inflow from these passive particles, as they can only "see" the interior particles within the domain. we note that since the data has been decomposed into smaller independent work items, these blocks can now be communicated to any nodes that have the available resources to handle the extra workload. hence, this scheme provides for a straightforward load-balancing algorithm by construction (a small illustrative sketch of this decomposition is given below). details of the error analysis and load balancing schemes will be described in an upcoming paper devoted to the gpu implementation of hacc. (figure caption: comparison of the treepm version with the p^3m version (blue short-dashed); we also show the comparison with a gadget-2 run. the agreement is very good: the treepm runs agree with the gpu version of hacc to better than 0.1% up to the particle nyquist wavenumber. the level of agreement with gadget-2 is noteworthy because it is a completely independent code.) (figure caption: the solid curve is the prediction from the extended coyote emulator of ; the agreement across the two runs is at the fraction of a percent level, while the agreement with the emulator is at the 2% level, which is the estimated emulator accuracy.) hacc has been subjected to a large number of standard convergence tests (second-order time-stepping, halo profiles, power spectrum measurements, etc.). in this section we focus mostly on a subset of hacc test results using the setup of the code comparison project, originally carried out in . in that work, a set of initial conditions was created for different problems (mainly different volumes) and a number of cosmological codes were run on those, all at their nominal default settings. the final outputs were compared by measuring a variety of statistics, including matter fluctuation power spectra, halo positions and profiles, and halo mass functions. the initial conditions and final results from the tests are publicly available and have been used subsequently by other groups for code verification, e.g. for gadget-2 , and most recently for nyx . in addition, we will also show some results from recently carried out large-scale simulations. we first restrict attention to the larger-volume simulation (256 ) and compare hacc results with those found for gadget-2, as published in .
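before going further into the code comparison, here is the minimal sketch referenced above of the overlapping-block decomposition: a local domain of cells is cut into blocks whose edges are padded by roughly the short-range force extent, so that each block carries its own boundary and can be evolved, or shipped to another node, independently. the cell-based bookkeeping, the two-dimensional layout and the pad width are illustrative assumptions, not the hacc gpu code.

```cpp
#include <algorithm>
#include <vector>

// one overlapping block: [lo, hi) is the "active" region owned by the block,
// [plo, phi) additionally includes the duplicated "passive" boundary layer
struct Block {
    int lo[2], hi[2];    // active extent in grid cells (2-d decomposition, as in the text)
    int plo[2], phi[2];  // padded extent including the boundary
};

// split a local domain of ncells[0] x ncells[1] cells into nb[0] x nb[1]
// overlapping blocks with a pad of `ghost` cells (~ the short-range force extent)
std::vector<Block> makeBlocks(const int ncells[2], const int nb[2], int ghost) {
    std::vector<Block> blocks;
    for (int bj = 0; bj < nb[1]; ++bj) {
        for (int bi = 0; bi < nb[0]; ++bi) {
            Block b;
            const int idx[2] = {bi, bj};
            for (int d = 0; d < 2; ++d) {
                b.lo[d]  = ncells[d] *  idx[d]      / nb[d];
                b.hi[d]  = ncells[d] * (idx[d] + 1) / nb[d];
                b.plo[d] = std::max(0, b.lo[d] - ghost);
                b.phi[d] = std::min(ncells[d], b.hi[d] + ghost);
            }
            blocks.push_back(b);
        }
    }
    return blocks;
}
```

particles falling in the padded region of a block are the passive copies; only particles in the active region are written back after the block has been evolved.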
while the simulation is only modest in size ( 256 particles ) it does present a relatively sensitive challenge and is capable of detecting subtle errors in the code under test .not only are statistical measures such as the power spectrum robust indicators of code accuracy , but visual inspection of the particle data itself presents a quick qualitative check on code behavior and correctness ; it is particularly valuable in identifying problems at early stages of code development .we use paraview for this purpose ( ) .the code comparison test was run with a force resolution of , very similar to what was used in the gadget-2 simulation .we compare results from different hacc versions ( pptreepm on the blue gene / q and hopper , p m on a cell - accelerated and a gpu - accelerated system ) with those from gadget-2 .the result for the matter power spectrum is shown in figure [ comp_pk ] , where , as in figure [ time1 ] , we show results up to .all the code results are very close to each other ( note the scale on the y - axis ) .the agreement over the full -range is better than 0.5% , including gadget-2 . considering the different implementations of force solvers and time steppers ,this closeness is very reassuring , particularly as no effort was made to fine - tune the level of agreement .the treepm versions of hacc and the cpu / gpu version agree to better than a tenth of a percent .next we present results from a more qualitative , but nevertheless , very detailed comparison , shown in figure [ comp_vis ] . in this test , we identify all particles that belong to halos with at least 100 particles per halo .this is done within paraview , using an fof halo finder with linking length .the particles are colored with respect to halo mass .the most massive halo in the image ( colored in red ) has a mass of .while there are differences in the images as is to be expected the overall agreement is striking .almost all small halos exist in all images ( the ones that are missing are just below the cut of 100 particles , but they do actually exist ) and many of the fine details within the halo structures of the larger halos are well - matched .the mass profiles of the three largest halos in the simulation are shown in figure [ comp_prof ] .the binning shot noise dominates the comparison at small radii , but beyond that the agreement is very good .other quantitative halo comparison statistics are given in table [ tab1 ] , to further illustrate the close match of the results from all of the different algorithms .we illustrate the dynamic range and accuracy of the hacc approach by comparing results for the matter power spectrum from the ` outer rim ' and ` q continuum ' runs in figure [ big_pk ] .the q continuum run on titan had billion particles , a 1.3 gpc box , with a mass resolution , m , and the outer rim run on mira had trillion particles , a 4.225 gpc box , with a mass resolution , m .the numerical results ( run with different short - range force algorithms ) agree to fractions of a percent , and agree at the 2% level with the ( extended ) coyote emulator predictions of , which is at the level of accuracy expected from the emulator .finally , we show results for the halo mass function at very large simulation scales illustrating excellent agreement across different sized simulations carried out using different short - range force implementations .figure [ mf_big ] shows the ( fof , ) halo mass function at resulting from three different simulations with the same cosmology , ( i ) a run from the mira 
universe suite ( billion particles , 2.1 gpc box , mass resolution , m ) , ( ii ) the q continuum run on titan , and ( iii ) the outer rim run on mira . )halo mass function for three large simulations run with hacc , measured at .the box sizes range from gpc and the particle number in each simulation ranges from billion to over one trillion ( mass resolutions range from m to m ; for details , see text).,width=321 ]an entire suite of _ in situ _ analysis tools ( cosmotools ) for hacc is under continuous development , driven by evolving science goals ; cosmotools exists in both _ in situ _ and stand - alone versions ( to be used for off - line processing ) . the overall structure of the _ in situ _framework is shown in figure [ fig : framework ] .output from the analysis is available either at run time or for post - processing . in the former case ,a paraview server can be launched and connected to the simulation through a tool called catalyst . in the latter case ,results of the _ in situ _ analysis are written to a parallel file system and later retrieved .several standard tools for the analysis of cosmological simulations are part of the _ in situ _framework , such as halo finders ( ) and merger tree constructors .the first tool to be part of this framework that works on the full particle data to produce field information is a parallel voronoi tessellation that computes a polyhedral mesh whose cell volume is inversely proportional to the distance between particles .such a mesh representation acts as a continuous density field that affords accurate sampling of both high- and low - density regions .connected components of cells above or below a certain density can also approximate large - scale structures .two important criteria for _ in situ _ analysis filters are that they should scale similarly as the simulation and have minimal memory overhead .the parallel tessellation approach meets these criteria ; full details are given in .the various tools can be turned on through the configuration file for hacc , and the frequency of their execution is also adjustable .hacc runs on a variety of supercomputing platforms and has scaled to the maximum size of some of the fastest machines in the world , including roadrunner at los alamos , sequoia at livermore , titan at oak ridge , mira at argonne , and hopper at nersc .we have carried out detailed scaling and performance studies on the blue gene / q systems ( ) and on titan ; a sample of our results is presented below . for both systems, we carried out weak and strong scaling tests .for the weak scaling tests we fix a physical volume and number of particles per node . when scaling up to more nodes , the volume and particle loading therefore increases , while the mass resolution stays constant .the wall - clock time for a run should hence stay constant if the code scales or , equivalently , the time to solution per particle per step should decrease . the absolute performance measured in tflops per seconds will rise while the percentage per peak will stay constant . for our weak scaling tests ,the particle mass is and the force resolution , 6 kpc .all simulations are for a model .simulations of cosmological surveys focus on large problem sizes , therefore the weak scaling properties are of primary interest . for the weak scaling test on the blue gene / q systems , we ran with 2 million particles in a (100 ) volume per core , using a typical particle loading in actual large - scale simulations on these systems . 
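a compact way of stating the metrics used in these tests is the following; these are the usual definitions from the hpc literature, written out here for reference rather than taken from the measurements themselves:

```latex
\[
  \mathrm{eff}_{\mathrm{weak}}(N) \;=\; \frac{T(N_0)}{T(N)}, \qquad
  \mathrm{eff}_{\mathrm{strong}}(N) \;=\; \frac{N_0\,T(N_0)}{N\,T(N)}, \qquad
  t_{\mathrm{pps}} \;=\; \frac{T}{N_{\mathrm{part}}\,N_{\mathrm{steps}}},
\]
```

where T(N) is the wall-clock time on N nodes, N_0 is the baseline node count, N_part is the total number of particles evolved and N_steps the number of (sub-)steps timed. under ideal weak scaling T(N) stays constant, so the time to solution per particle per step t_pps falls as 1/N because N_part grows with N; under ideal strong scaling the product N T(N) stays constant.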
tests with 4 million particles per core produce very similar results . as demonstrated in figure [ comp ] ( right panel ) , weak scaling is ideal up to 1,572,864 cores ( 96 racks , all of sequioa ) , where hacc attains a peak performance of 13.94 pflops and parallel efficiency of . the largest test simulation on sequoia evolved .6 trillion particles and a ( very high accuracy ) particle substep took ns for the full high - resolution code .the scaling results were obtained by averaging over 50 sub - cycles . on titan we ran with 32 million particles per node in a fixed ( nodal ) physical volume of , representative of the particle loading in actual large - scale runs ( the gpu version was run with one mpi rank per node ) .the results are shown in the left panel of figure [ comp ] .in addition ( not shown ) we have timing results for a 1.1 trillion particle run , where we have kept the volume per node the same but increased the number of particles per node by a factor of two to 64.5 million . as for the blue gene / q systems , hacc weak - scales essentially perfectly up to the full machine .for the strong scaling tests , we chose the same problem size on both systems , a ( 1000 ) volume with 1024 particles .this is a rather small problem and strong scaling is expected to break down at some point .the results for both systems are shown in figure [ comp ] the strong scaling regime is remarkably large .on the blue gene / q system , we demonstrate strong scaling between 512 and 16384 cores ( with somewhat degraded performance at the largest scale ) . for titan, we increase the number of nodes for this problem from 32 to 8192 , almost half of the machine .as can be seen in the left panel in figure [ comp ] , strong scaling only starts degrading after 2048 nodes .( the results for titan have been improved since these earlier tests , they will be reported in ) .the significance of the strong scaling tests is in showing how well a code can perform as the effective memory per core reduces hacc can run very effectively at values as low as 100mb / core , which is roughly equivalent to the memory per core available in next - generation many - core systems .the impressive scale and quality of data from sky survey observations requires a correspondingly strong response in theory and modeling .it is widely recognized that large - scale computing must play an essential role in shaping this response , not only in interpreting the data , but also in optimizing survey strategies and validating data and analysis pipelines .the hacc simulation framework is designed to produce synthetic catalogs and to run large campaigns for precision predictions of cosmological observables .the evolution of hacc continues to proceed in two broad directions : ( i ) the further development of algorithms ( and their optimization ) for future - looking supercomputing architectures , including areas such as power management , fault - tolerance , exploitation of local non - volatile random - access memory ( nvram ) and investigation of alternative programming models , and ( ii ) addition of new physics capabilities in both modeling and simulation ( e.g. , gas physics , feedback processes , etc . ) and in the analysis part of the framework ( e.g. 
, increased sophistication in semi - analytic galaxy modeling , and an associated validation program ) .the use of hacc for large - scale simulation campaigns covers applications such as those required to construct cosmological emulators ( ) , to determine covariance matrices ( ) , to help optimize survey design and test associated pipelines with synthetic catalogs ( ) , and , finally , to carry out mcmc - based parameter estimation across multiple cosmological probes ( ) .a major component of the future use of hacc is therefore related to the production and exploitation of large simulation databases , including public access and analysis ; work in this area is in progress with a number of collaborators .for useful interactions and discussions over time , the authors thank jim ahrens , viktor decyk , nehal desai , wu feng , chung - hsing hsu , rob latham , pat mccormick , rob ross , robert ryne , paul sathre , sergei shandarin , volker springel , joachim stadel , and martin white .we acknowledge early contributions to hacc by nehal desai and paul sathre . running on a number of supercomputers ,a large fraction of which were in their acceptance phase , required the generous assistance and support of many people .we record our indebtedness to susan coghlan , kalyan kumaran , joe insley , ray loy , paul messina , mike papka , paul rich , adam scovel , tisha stacey , william scullin , rick stevens , and tim williams ( argonne national laboratory ) , brian carnes , kim cupps , david fox , and michel mccoy ( lawrence livermore national laboratory ) , lee ankeny , sam gutierrez , scott pakin , sriram swaminarayan , andy white , and cornell wright ( los alamos national laboratory ) , and bronson messer and jack wells ( oak ridge national laboratory ) .argonne national laboratory s work was supported under u.s .department of energy contract de - ac02 - 06ch11357 .part of this research was supported by the doe under contract w-7405-eng-36 .partial support for this work was provided by the scientific discovery through advanced computing ( scidac ) program funded by the u.s .department of energy , office of science , jointly by advanced scientific computing research and high energy physics .this research used resources of the alcf , which is supported by doe / sc under contract de - ac02 - 06ch11357 and resources of the olcf , which is supported by doe / sc under contract de - ac05 - 00or22725 . some of the results presented here result from awards of computer time provided by the innovative and novel computational impact on theory and experiment ( incite ) and ascr leadership computing challenge ( alcc ) programs at alcf and olcf . | current and future surveys of large - scale cosmic structure are associated with a massive and complex datastream to study , characterize , and ultimately understand the physics behind the two major components of the ` dark universe ' , dark energy and dark matter . in addition , the surveys also probe primordial perturbations and carry out fundamental measurements , such as determining the sum of neutrino masses . large - scale simulations of structure formation in the universe play a critical role in the interpretation of the data and extraction of the physics of interest . just as survey instruments continue to grow in size and complexity , so do the supercomputers that enable these simulations . 
here we report on hacc ( hardware / hybrid accelerated cosmology code ) , a recently developed and evolving cosmology n - body code framework , designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond . hacc can run on all current supercomputer architectures and supports a variety of programming models and algorithms . it has been demonstrated at scale on cell- and gpu - accelerated systems , standard multi - core node clusters , and blue gene systems . hacc s design allows for ease of portability , and at the same time , high levels of sustained performance on the fastest supercomputers available . we present a description of the design philosophy of hacc , the underlying algorithms and code structure , and outline implementation details for several specific architectures . we show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed , including benchmarks evolving more than 3.6 trillion particles . cosmology large scale structure of the universe ; n - body simulations |
in the past , divergent series have played a central role in mathematics .mathematicians like for instance lacroix , fourier , euler , laplace , etc .have used them extensively .nowadays they play a marginal role in mathematics and are often not mentioned in the standard curriculum .a number of mathematicians and most students are not aware that they can be of any use . where was the turn ?according to , the turn occured in the time of cauchy and abel , when the need was felt to construct analysis on absolute rigor .so let us go back a little and see what cauchy and abel have said of divergent series .cauchy ( preface of analyse mathmatique `` , 1821 ) : _ jai t forc dadmettre diverses propositions qui paratront peut - tre un peu dures .par exemple quune srie divergente na pas de somme " _( i have been forced to admit some propositions which will seem , perhaps , hard to accept .for instance that a divergent series has no sum '' ) cauchy made one exception : he justified rigorously the use of the divergent stirling series to calculate the -function .we will explain below the kind of argument he made when looking at the example of the euler differential equation .abel ( letter to holmboe , january 16 1826 ) : _ les sries divergentes sont en gnral quelque chose de bien fatal et cest une honte quon ose y fonder aucune dmonstration .on peut dmontrer tout ce quon veut en les employant , et ce sont elles qui ont fait tant de malheurs et qui ont enfant tant de paradoxes . enfin mes yeux se sont dessills dune manire frappante , car lexception des cas les plus simples , par exemple les sries gomtriques , il ne se trouve dans les mathmatiques presque aucune srie infinie do nt la somme soit dtermine dune manire rigoureuse , cest - - dire que la partie la plus essentielle des mathmatiques est sans fondement .pour la plus grande partie les rsultats sont justes il est vrai , mais cest l une chose bien trange .je moccupe en chercher la raison , problme trs intressant . " _( divergent series are , in general , something terrible and it is a shame to base any proof on them . we can prove anything by using them and they have caused so much misery and created so many paradoxes . . finally my eyes were suddenly opened since , with the exception of the simplest cases , for instance the geometric series , we hardly find , in mathematics , any infinite series whose sum may be determined in a rigorous fashion , which means the most essential part of mathematics has no foundation . for the most part , it is true that the results are correct , which is very strange .i am working to find out why , a very interesting problem . " ) the author was struck by this citation when she first read it 27 years ago and this citation has followed her for her whole career . in her point of view , this citation contains the past , present and future of divergent series in mathematics . * the past : * as remarked by abel , divergent series occur very often in many natural problems of mathematics and physics . their use has permitted to do successfully a lot of numerical calculations .one example of this is the computation by laplace of the secular perturbation of the orbit of the earth around the sun due to the attraction of jupiter .the calculations of laplace are verified experimentally , although the series he used were divergent . 
*the present : * in the 20-th century divergent series have occupied a marginal place .however it is during the same period that we have learnt to justify rigorously their use , answering at least partially abel s question . in the context of differential equations the borel summation , generalized by calle and others to multi - summability ,give good results ( see for instance , , ) .* the future : * why do so many important problems of mathematics lead to divergent series ( see for instance ) ? what is the meaning of a series being divergent ? finding an answer tothis question is a fascinating research field .we will illustrate all this on the example of the euler differential equation : since this is a short paper , the list of references is by no means exhaustive .the euler differential equation has the formal solution which is divergent for all nonzero values of .on the other hand , it is a linear differential equation whose solution can be found by quadrature : the integral is convergent for and hence yields a solution of .we can rewrite this solution as in which we make the change of coordinate .this yields the integral is convergent for , so the solution is well defined for and moreover . what is now the link between the divergent power series and the function ?we will show that the difference between and a partial sum is smaller than the first neglected term ( this part has been inspired by ) .this is exactly as in the leibniz criterion for alternating series .for any proof .we use the following formula which is easily checked then using the following formula which implies this yields where , the remainder , is the difference between and the partial sum of the power series .the result follows since the argument given here , which justifies rigorously the use of the divergent series in numerical calculations , is very similar to the argument made by cauchy for the use of the stirling series .in particular , cauchy used the formula . *if we use the power series to approximate the function , how good is the approximation ?* we encounter here the main difference between convergent and divergent series . with convergent series , the more terms we take in the partial sum , the better the approximation .this is not the case with divergent series . indeed ,if we take fixed in the general term tends to .so we are better to take the partial sum for which the first neglected term is minimum . is minimum when . in that case .we use stirling formula to approximate for large : this gives us which is exponentially small for small .not only have we bounded the error made when approximating the function by the partial sum of the power series , but this error is exponentially small for small , i.e. very satisfactory from the numerical point of view. the phenomenon we have described here is not isolated and explains the successes encountered when using divergent series in numerical approximations .looking at what we have done with the euler equation , someone can have the impression we have cheated .indeed we have chosen a linear differential equation , thus allowing us to construct by quadrature a function which is solution of the differential equation .but what about the general case ? in general, once we have a formal solution by means of a power series , we use a _ theory of resummation _ for finding a function which is a solution . 
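to make the preceding discussion concrete before moving on to resummation, the computation can be written out in one common normalization of the euler equation; the precise constants depend on the normalization chosen above, so the following should be read as an illustration rather than a restatement of the text:

```latex
\[
  x^{2}y'(x) + y(x) = x , \qquad
  \hat y(x) \;=\; \sum_{n\ge 0} (-1)^{n}\, n!\, x^{\,n+1}, \qquad
  y(x) \;=\; \int_{0}^{\infty} \frac{e^{-t/x}}{1+t}\, dt ,
\]
\[
  \Bigl|\, y(x) - \sum_{k=0}^{n-1} (-1)^{k}\, k!\, x^{\,k+1} \Bigr|
  \;\le\; n!\, x^{\,n+1},
  \qquad
  \min_{n}\; n!\, x^{\,n+1} \;\approx\; \sqrt{2\pi x}\; e^{-1/x}
  \quad\text{(attained near } n \approx 1/x\text{).}
\]
```

the first line gives the equation, its divergent formal solution and the convergent integral representation obtained by quadrature; the second line is the remainder bound (the error of a partial sum is at most the first neglected term) together with the stirling estimate at the optimal truncation order, which is exponentially small for small positive x.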
in this paper, we will focus on the _ borel method of resummation _ , also called 1-summability .we start with the properties that an adequate theory of summability must satisfy : * properties of a good method of resummation ( see for instance ) : * we consider a series , to which we want to associate a number , called its sum : \(1 ) if is convergent , then should be the usual sum of the series .\(2 ) if and are summable with respective sums and then is summable with sum .\(3 ) a series is _ absolutely summable _ if the series is summable .\(4 ) if and are absolutely summable with respective sums and , then the product of the series , , where , is absolutely summable with sum .\(5 ) if is summable with sum , then is summable with sum .\(6 ) ( in the context of differential equations ) if is summable with sum , then is summable with sum . *the borel method of resummation for a series : * we present it in a way which proves at the same time the property ( 1 ) : the idea is to take a convergent series and to write its sum in a different way . for this purpose ,we use and we write changing the order of the summation and the integral in the last line requires a proof , which we leave as an exercise . a series is _ borel - summable _ if the series has a nonzero radius of convergence , if it can be extended along the positive real axis and if the integral is convergent with value .we call the _ borel sum _ of the series. * example : * in the case of the solution of the euler differential equation we have .then hence which is .we adapt the definition of borel - summability to power series in a manner that will allow to use it for analytic extension . 1. a power series is 1-summable ( borel - summable ) in the direction , where is a half - line from the origin in the complex plane if * the series has a nonzero radius of convergence , and the sum of the series is an analytic function on the disk of convergence , * the function can be extended along the half line , * and the integral is convergent with value .+ we call the sum of the series .a power series is 1-summable if it is 1-summable in all directions , except a finite number of exceptional directions .* example : * the solution of the euler differential equation is 1-summable in all directions except the direction .the problem in the direction comes from the singularity at in or .a theory of resummation is useful if there are theorems associated to it .for instance , for the borel summation , let us cite this theorem of borel : we consider an algebraic differential equation where is a multivariate polynomial . if is a formal solution of and is absolutely borel - summable with borel sum , then is solution of the differential equation and has the asymptotic expansion .\(1 ) the sums of a 1-summable series in the different directions give functions which are analytic extensions one of the other as long as we move the line continuously through directions in which the series is 1-summable .this yields a function defined on a sector with vertex at the origin .more details can be found for instance in , and .\(2 ) the borel sum of a divergent power series can never be uniform in a punctured neighborhood of the origin .it is necessarily ramified .this is what is known in the literature as the _ stokes phenomenon_. 
\(3 ) if a series has a radius of convergence equal to and its sum is a function for , then a theorem of complex analysis states that the function has a least one singularity on the circle .the idea of borel is that a divergent series is a series with radius of convergence .hence , we have at least one singularity hidden in some direction : for the euler differential equation , this is the direction .the operation sends the singularity at a finite distance , as if we had blown the disk of convergence to bring it from radius to radius . is called the _ borel transform _ of .the 1-summability and its extensions have been extensively studied during the 20-th century .extensions of the notion of 1-summability have been obtained by allowing ramifications .then 1-summability in corresponds to -summability in .the notion of multi - summability has also been introduced : a series is multi - summable if it is a finite sum , each being -summable .explicit criteria allow to decide a priori that some systems of differential equations have multi - summable solutions in the neighborhood of certain singular points ( see for instance ) .let us now look at a generalized euler differential equation for almost all functions analytic in the neighborhood of the origin and such that the formal solution of vanishing at the origin is given by a divergent series .only in very special cases the equation has an analytic solution at the origin .* example : * is the analytic solution of what is the difference between the equations and? to understand we will apply successively the two steps : * complexify : we will allow ; * unfold : is a double singular point of each equation .hence , we will introduce a parameter so as to split the double singular point into two simple singular points .* complexification : * let us consider a function which is the borel sum of a solution of , and its analytic extension when we turn around the origin .the function is ( figure [ fig1 ] ) : * uniform for ; * ramified for , and generically ramified for a solution of .the two branches differ by a multiple of ( which is a multiple of a solution of the homogeneous equation ) . * how to understand that generically we should have ramification ? * to answer we unfold and embed into with .we will limit here our discussion to , although all complex values of are of interest .since solutions of the homogeneous equation associated to are given by , the local model of solutions at is given by with analytic .hence , the solution ( corresponding to ) is the unique solution which is analytic and bounded at .similarly the local model for solutions at is given by we now have two cases : \(1 ) if , then is analytic .since is ramified at for , all solutions but are ramified .it is of course exceptional that the extension of at be exactly the solution and , generically , we should expect that the analytic extension of is with .hence the extension of should be ramified at . 
if this ramification holds till the limit we get figure 1(a ) .\(2 ) let us now consider the case .we must again divide the discussion in two cases : \(i ) in the generic case , is ramified : it contains one term of the form .\(ii ) in the exceptional case , is analytic , and so are all solutions through .hence , it is impossible for to be ramified at .this case is excluded in the unfolding of an equation whose solution is ramified .let us now summarize the very interesting phenomenon we have discovered : if the formal solution of is divergent , then in the unfolding : * necessarily is ramified at : the divergence means a form of _ incompatibility _ between the local solutions at two singular points , which remains until the limit at .* * parametric resurgence phenomenon : * for sequences of values of the parameter converging to ( here ) , the pathology of the system is located exactly at one of the singular points . indeed , here, the only way for the system to have an incompatibility is that one of the singular points be pathologic .we have understood why divergence is the rule , and convergence the exception .the phenomena described above are much more general than the context of described here , and are explored by the author and her collaborators in different contexts .the common thread is that -summability occurs when two equilibrium points of a dynamical system coallesce in a double point .there are some rigid dynamics attached to each simple equilibrium , and these rigid dynamics do not match well until the limit when the two equilibria coallesce .divergent series also occur in the phenomena involving small denominators .a source of divergence in this case is the accumulation of particular solutions .for instance , in the case of a fixed point of a germ of analytic diffeomorphism , , with , the divergence of the linearizing series could come from the accumulation of periodic points around the fixed point at the origin .the study of this phenomenon is part of the work for which jean - christophe yoccoz received the fields medal in 1994 .divergent series occur generically in many situations with differential equations and dynamical systems . the divergence of the series carries a lot of geometric information on the solutions .for instance , if the formal power series solution of the euler equation had been convergent , its sum could not have been ramified .y. ilyashenko , in the theory of normal forms of analytic differential equations , divergence is the rule and convergence the exception when the bryuno conditions are violated , _moscow university mathematics bulletin _ , * 36 * ( 1981 ) 1118 . | the present paper presents some reflections of the author on divergent series and their role and place in mathematics over the centuries . the point of view presented here is limited to differential equations and dynamical systems . |
the chemostat model provides the foundation for much of current research in bio - engineering , ecology , and population biology . in the engineering literature ,the chemostat is known as the continuously stirred tank reactor .it has been used for modeling the dynamics of interacting organisms in waste water treatment plants , lakes and oceans . in its basic setting , it describes the dynamics of species competing for one or more limiting nutrients .if there are species with concentrations for and just one limiting nutrient with concentration and dilution rate , then the model takes the form where denotes the per capita growth rate of species and is the time derivative of any variable .( in much of the paper , we simplify our notation by omitting the arguments of the functions .for instance , when no confusion can arise from the context , we denote simply by . ) the functions depend only on the nutrient concentration , and are zero at zero , continuously differentiable and strictly increasing , although non - monotone functions have been the subject of research as well .the conversion of nutrient into new biomass for each species happens with a certain yield and the natural control variables in this model are the input nutrient concentration and the dilution rate .the latter variable is defined as the ratio of the volumetric flow rate ( with units of volume over time ) and the reactor volume which is kept constant .therefore it is proportional to the speed of the pump that supplies the reactor with fresh medium containing the nutrient .the equations ( [ model - full ] ) are then straightforwardly obtained from writing the mass - balance equations for the total amounts of the nutrient and each of the species , assuming the reactor content is well - mixed .the full model ( [ model - full ] ) is illustrated in figure [ chem ] . in the present work ,we consider the case where there is just one species with concentration , in which case the equations ( [ model - full ] ) take the form ( but see theorem [ iss - track ] below for results on chemostats with disturbances , and section [ several ] for models involving several species ) .we assume is a given positive constant , while the per capita growth rate is a monod function ( which is also known as a michaelis - menten function ) taking the form for certain positive constants and that we specify later .the dilution rate is an appropriate continuous positive periodic function we also specify below . since is near zero , one can readily check that ( [ model - reduced ] ) leaves the domain of interest positively invariant ( i.e. , trajectories for ( [ model - reduced ] ) starting in remain in for all future times ) ; see theorem [ iss - track ] for a more general invariance result for perturbed chemostats .since we are taking to be a fixed positive constant , we rescale the variables to reduce the number of parameters . using the change of variables and dropping bars , we eliminate and and so obtain the new dynamics again evolving on the state space . . in the next section ,we briefly review the literature focusing on what makes our approach different . in section[ track ] , we fix the reference signal we wish to track . in section [ define ] , we precisely formulate the definitions and the stability problem we are solving .we state our main stability theorem in section [ thm ] and we discuss the significance of our theorem in section [ discuss ] .we prove our stability result in section [ main ] . 
in section[ several ] , we show that the stability is maintained when there are additional species that are being driven to extinction .we validate our results in section [ simulations ] using a numerical example .we conclude in section [ concl ] by summarizing our findings .the behavior of the system is well understood when and are positive constants , as well as cases where and either of these control variables is held fixed while the other is periodically time - varying .see for periodic variation of and for periodic variation of and the general reference on chemostats . when both and are constants , the so - called `` competitive exclusion principle '' holds , meaning that at most one species survives .mathematically this translates into the statement that system has a steady state with at most one nonzero species concentration , which attracts almost all solutions ; see .this result has triggered much research to explain the discrepancy between the ( theoretical ) competitive exclusion principle and the observation that in real ecological systems , many species coexist .the results on the periodically - varying chemostat mentioned above should be seen as attempts to explain this paradox .they involve chemostats with species , and their purpose is to show that an appropriate periodic forcing for either or can make the species coexist , usually in the form of a ( positive ) periodic solution .few results on coexistence of species are available .an exception is , where a periodic function is designed ( with kept fixed ) so that the resulting system has a ( positive ) periodic solution with an arbitrary number of coexisting periodically varying species .the stability properties of this solution are not known .more recent work has explored the use of state - dependent but time invariant feedback control of the dilution rate to generate coexistence ; see for monotone growth rate functions in the species case , and for the species case .the paper considers feedback control when the growth rate functions are non - monotone . in , , and ,coexistence is proved for models taking into account intra - specific competition . in these models ,the usual growth functions are replaced by functions which are decreasing with respect to the variable .all the results discussed so far apply to a more general model than involving species .this is because the main purpose of these papers is to investigate environmental conditions under which the competitive exclusion principle fails and several species can coexist . 
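for reference before stating our objective, the model described in the introduction can be written out explicitly. this is the standard form consistent with the verbal description there (growth rates depending only on the nutrient, yields, input concentration and dilution rate); the symbols m, a, gamma_i and s_in are labels introduced here for readability and are not taken from the text:

```latex
\[
  \dot s \;=\; d\,\bigl(s_{\mathrm{in}} - s\bigr) \;-\; \sum_{i=1}^{n} \frac{1}{\gamma_i}\,\mu_i(s)\,x_i ,
  \qquad
  \dot x_i \;=\; \bigl(\mu_i(s) - d\bigr)\,x_i , \quad i = 1,\dots,n ,
\]
\[
  \mu(s) \;=\; \frac{m\,s}{a+s}\ \ (m,a>0), \qquad
  \dot s \;=\; d(t)\,(1-s) - \mu(s)\,x , \qquad
  \dot x \;=\; \bigl(\mu(s) - d(t)\bigr)\,x ,
\]
```

where the second line is the single-species case with a monod growth rate after the rescaling mentioned in the introduction, which normalizes the input nutrient concentration to 1 and absorbs the yield.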
herewe will not consider any coexistence problems .our main objective is to provide a proof of stability of a periodic solution based on a lyapunov - type analysis and to investigate the robustness properties of the periodic solution with respect to perturbations .as an illustration we show that the stability of the periodic solution is robust with respect to additional species that are being driven to extinction , or to small disturbances on the initial nutrient concentration or dilution rate .these features set our work apart from the known results on periodically forced chemostat models which do not rely on the construction of a lyapunov function .proving stability in the chemostat usually relies on reduction and monotonicity arguments , and not so often on lyapunov functions ( but see for instance theorem in which uses a lyapunov function introduced in and more recently ) .finally we point out that closely related to our results is where a single - species chemostat with a continuous and bounded ( but otherwise arbitrary ) function and constant dilution rate is investigated ; there it is shown that two positive solutions converge to each other . however , the proof is not based on a lyapunov function .the advantage of having a lyapunov function is that it can be used to _ quantify _ the effect of additional noise terms on the stability of the unperturbed dynamics .in fact , to our knowledge , our work provides the first input - to - state stability analysis of chemostats whose dilution rates and initial concentrations are perturbed by small noise ; see remark [ aboutiss ] for a discussion on the importance of input - to - state stability in control theory and engineering applications .we first choose the dilution rate that will give a reference trajectory for ( [ model ] ) which we show to be stable .we assume a growth rate with constants as which we refer to as a _ reference trajectory _when we choose condition ( [ mbound ] ) then provides constants such that for all : see figure [ refff ] for the graph of for and .we wish to solve the following stability problem : * * * given any trajectory for ( [ model ] ) corresponding to the dilution rate from ( [ chli ] ) and as in ( [ mbound ] ) ( i.e. for any initial value for ) , show that the corresponding deviation of from the reference trajectory ( [ reftraj ] ) asymptotically approaches as .we will solve ( sp ) by proving a far more general tracking result for a single species chemostat acted on by a disturbance vector as follows : (1+u_2(t)-s(t))-\mu(s(t))x(t)\\[.5em ] \dot x(t)&= & x(t)[\mu(s(t))-d(t)-u_1(t ) ] \end{array } \right .. \ ] ] we will quantify the extent to which the reference trajectory ( [ reftraj ] ) tracks the trajectories of ( [ mod1 ] ) . to this end , we need to introduce a priori bounds on and ; see remark [ bound - u ] .our main theoretical tool will be the input - to - state stability ( iss ) property which is one of the central paradigms of current research in nonlinear stability analysis ; see remark [ aboutiss ] .the relevant definitions are as follows .we let denote the set of all continuous functions for which ( i ) and ( ii ) is strictly increasing and unbounded .we let denote the class of all continuous functions for which ( i ) for each , ( ii ) is non - increasing for each , and ( iii ) as for each .consider a general control - affine dynamic evolving on a given subset where is a given subset of euclidean space .( later we specialize to dynamics for the chemostat . 
) for each and , let denote the solution of ( [ gen ] ) satisfying for a given control function ; i.e. the solution of the initial value problem we always assume that such solutions are uniquely defined on all of ( i.e. , ( [ gen ] ) is forward complete and is positively invariant for this system ) and that there exists such that everywhere , where is the usual euclidean norm .for example , is the solution of ( [ mod1 ] ) for the disturbance satisfying the initial condition .[ issdef ] we call ( [ gen ] ) _ input - to - state stable ( iss ) _ provided there exist and such that for all , , , and .here denotes the essential supremum of . by causality , the iss condition ( [ issest ] ) is unchanged if is replaced by the essential supremum } ] . in particular , ( [ issest ] )says as for all initial values and initial times , where is the zero disturbance .[ aboutiss ] the theory of iss systems originated in .iss theory provides the foundation for much current research in robustness analysis and controller design for nonlinear systems , and has also been used extensively in engineering and other applications .the iss approach can be viewed as a unification of the operator approach of zames ( e.g. ) and the lyapunov state space approach .the operator approach involves studying the mapping of initial data and control functions into appropriate spaces of trajectories , and it has the advantages that it allows the use of hilbert or banach space techniques to generalize many properties of linear systems to nonlinear dynamics .by contrast , the state space approach is well suited to nonlinear dynamics and lends itself to the use of topological or geometric ideas .the iss framework has the advantages of both of these approaches including an equivalent characterization in terms of the existence of suitable lyapunov - like functions ; see remark [ pba ] below . for a comprehensive survey on many recent advances in iss theory including its extension to systems with outputs , see . 
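for completeness, the standard objects just introduced can be written in the usual notation of the iss literature; the last display is a reconstruction of ( [ mod1 ] ) from the fragments above, with the dilution rate multiplying the inflow term as in the unperturbed model, and should be read with that caveat:

```latex
\[
  \rho \in \mathcal K_\infty \iff
  \rho:[0,\infty)\to[0,\infty)\ \text{continuous},\ \rho(0)=0,\
  \rho\ \text{strictly increasing and unbounded},
\]
\[
  \beta \in \mathcal{KL} \iff
  \beta(\cdot,t)\in\mathcal K_\infty\ \text{for each } t\ge 0,\quad
  \beta(r,\cdot)\ \text{nonincreasing},\quad
  \beta(r,t)\to 0\ \text{as } t\to\infty ,
\]
\[
  \text{iss:}\qquad
  |y(t;t_0,y_0,u)| \;\le\; \beta\bigl(|y_0|,\,t-t_0\bigr) + \gamma\bigl(\|u\|_\infty\bigr)
  \qquad \text{for all } t\ge t_0 ,
\]
\[
  \dot s(t) \;=\; d(t)\bigl(1 + u_2(t) - s(t)\bigr) - \mu(s(t))\,x(t), \qquad
  \dot x(t) \;=\; x(t)\bigl[\mu(s(t)) - d(t) - u_1(t)\bigr],
\]
```

where \|u\|_\infty denotes the (essential) supremum norm of the disturbance; by causality it may equivalently be taken over [t_0,t], as noted above.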
to specify the bound on our disturbances ,we use the following constants whose formulas will be justified by the proof of our main stability result : now on , we assume the disturbance vector in ( [ mod1 ] ) takes all of its values in a fixed square control set of the form 0 < \bar u\ ; < \ ; \min\left\{\dfrac{c_1}{\sqrt { 8(1 + 2c\kappa c_1 ) } } , \dfrac{d_o}{2}\right\ } \end{array}\ ] ] where , , and are in ( [ ck ] ) ( but see remark [ bound - u ] for related results under less stringent conditions on the disturbance values ) .we will prove the following robustness result : [ iss - track ] choose , , and as in ( [ mbound])-([chli ] ) .then the corresponding solutions of ( [ mod1 ] ) satisfy \\[.25em]\ ; \ ; \ ; \rightarrow\ ; \ ; \ ; \left[\ ; s(t ; t_0 , ( s_0,x_0 ) , \alpha ) > 0 \ ; \ ; \ & \ ; \ ; x(t ; t_0 , ( s_0,x_0 ) , \alpha)>0\ ; \right]\ ; \ ; .\end{array}\ ] ] moreover , there exist and such that the corresponding transformed error vector \left(s(t ; t_0 , ( s_0,x_0 ) , \alpha ) - s_r(t ) , \ln(x(t ; t_0 , ( s_0,x_0 ) , \alpha ) ) - \ln(x_r(t))\right)\end{array}\ ] ] satisfies the iss estimate ( [ issest ] ) for all , , , , and , where .before proving the theorem , we discuss the motivations for its assumptions , and we interpret its conclusions from both the control theoretic and biological viewpoints .[ invariance ] condition ( [ c1ru1 ] ) says is positively invariant for ( [ mod1 ] ) .one may also prove that is positively invariant for ( [ mod1 ] ) , as follows .suppose the contrary .fix , , , and for which the corresponding trajectory for ( [ mod1 ] ) satisfying exits in finite time .this provides a finite constant \} ] , hence also for all ] .since clearly stays in , this contradicts the maximality of .the positive invariance of follows .[ means ] theorem [ iss - track ] says that in terms of the error signals , any componentwise positive trajectory of the unperturbed chemostat dynamics ( [ mod1 ] ) converges to the nominal trajectory ( [ reftraj ] ) , uniformly with respect to initial conditions .this corresponds to putting in ( [ issest ] ) .it also provides the additional desirable robustness property that for an arbitrary -valued control function , the trajectories of the _ perturbed _ chemostat dynamics ( [ mod1 ] ) are `` not far '' from ( [ reftraj ] ) for large values of time . in other words , they `` almost '' track ( [ reftraj ] ) with a small overflow from the iss inequality ( [ issest ] ) .similar results can be shown for general choices of and .for example , we can choose any that admits a constant such that for all and .in this case , we take the dilution rate which is again uniformly bounded above and below by positive constants .the proof of this more general result is similar to the proof of theorem [ iss - track ] we give below except with different choices of the constants and .[ mfre1 ] the robustness result \le \beta(|(s_0,x_0)|,t - t_0 ) + \gamma(|\alpha|_{[t_0,t]})\end{array}\ ] ] of theorem [ iss - track ] differs from the classical iss condition in the following ways : 1 . for biological reasons , negative values of the nutrient level and the species level do not make physical sensehence , only componentwise positive solutions are of interest . therefore ,( [ cru1 ] ) is not valid for all but rather only for .2 . 
our condition ( [ cru1 ] )provides an estimate on the _ transformed _ error component instead of the more standard error .our reasons for using the transformed form of the error are as follows .the function goes to when goes to zero .this property is relevant from a biological point of view .indeed , in the study of biological systems , it is important to know if the concentration of the species is above a strictly positive constant when the time is sufficiently large or if the concentration admits zero in its omega limit set . in the first case ,the species is called _persistent_. the persistency property is frequently desirable , and it is essential to know whether it is satisfied .hence , the function has the desirable properties that ( a ) it goes to if does , ( b ) it is equal to zero when is equal at time to the value of , and ( c ) it goes to if goes to zero .therefore , roughly speaking , if the species faces extinction , then it warns us .[ pba ] our proof of theorem [ iss - track ] is based on a lyapunov type analysis .recall that a function is called an _ iss lyapunov function ( iss - lf ) _ for ( [ gen ] ) provided there exist such that 1 . and 2 . \le -\gamma_3(|y|)+\gamma_4(|u|) ] for any fixed constant , then the chemostat error dynamics instead satisfies the less stringent _ integral _ iss property ; see remark [ iiss - rk ] below for details .the proof of ( [ c1ru1 ] ) is immediate from the structure of the dynamics ( [ mod1 ] ) and the fact that ( which imply that when is sufficiently small ) ; see remark [ invariance ] for a similar argument . it remains to prove the iss estimate ( [ cru1 ] ) for suitable functions and . throughout the proof , all ( in)equalities should be understood to hold globally unless otherwise indicated .also , we repeatedly use the simple `` ( generalized ) triangle inequality '' relation for various choices of , , and that we specify later .fix , , , and , and let denote the corresponding solution of ( [ mod1 ] ) satisfying . for simplicity , we write as , omitting the time argument as before .we first write the error equation for the variables where , , , and .one easily checks that [1+u_2(t)-z(t)]\\ \dot x(t)&= & x(t)[\mu(z(t)-x(t))-d(t)-u_1(t)]\ , .\end{array}\ ] ] therefore , since ( which implies [1-z_r(t)] ] give : .then is in a suitable bounded set , so since are locally bounded when defined to be zero at , one can readily use ( [ a2uiue ] ) to compute constants and such that where was used to get and was used to get .it follows from ( [ s2dodo])-([een ] ) that , in cases 1b-2b , with .condition ( [ goal ] ) is the classical iss lyapunov function decay condition for the transformed error dynamics evolving on our restricted state space .therefore , a slight variant of the classical iss arguments combined with ( [ goal ] ) give the iss estimate asserted by theorem [ iss - track ] . for details , see the appendix below .this concludes the proof .[ iiss - rk ] if our control set is replaced by the larger control set ^ 2 ] gives it follows that , for all , therefore is a bounded function .similarly , and are bounded .we deduce that , , and the components of are uniformly continuous , since their time derivatives ( [ dmodel ] ) are bounded . 
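the qualitative content of theorem [ iss-track ] and of the persistence discussion above can be illustrated numerically before the argument resumes. the sketch below integrates the perturbed chemostat model together with an unperturbed copy started from a different positive initial condition, which plays the role of the reference trajectory here, and reports the late-time size of the transformed error components s - s_r and ln x - ln x_r. the monod-type growth rate, the periodic dilution profile, the disturbances and every parameter value are assumptions chosen for illustration only; they are not the functions constructed in the paper.

```python
import numpy as np

# illustrative ingredients (all assumed): monod growth and a periodic dilution rate
def mu(s, m=1.2, a=0.3):
    return m * s / (a + s)

def dilution(t, d0=0.5, amp=0.2, period=10.0):
    return d0 + amp * np.sin(2.0 * np.pi * t / period)

def rhs(t, state, u1, u2):
    """perturbed chemostat: s' = d(t)(1 + u2 - s) - mu(s) x,  x' = x (mu(s) - d(t) - u1)."""
    s, x = state
    d = dilution(t)
    return np.array([d * (1.0 + u2(t) - s) - mu(s) * x,
                     x * (mu(s) - d - u1(t))])

def integrate(state0, u1, u2, t_end=120.0, dt=1e-3):
    """plain explicit euler integration, adequate for an illustration."""
    n = int(t_end / dt)
    traj = np.empty((n + 1, 2))
    traj[0] = state0
    t = 0.0
    for k in range(n):
        traj[k + 1] = traj[k] + dt * rhs(t, traj[k], u1, u2)
        t += dt
    return traj

if __name__ == "__main__":
    zero = lambda t: 0.0
    small = lambda t: 0.02 * np.cos(0.7 * t)     # bounded disturbances on the dilution and inflow
    ref = integrate(np.array([0.5, 0.5]), zero, zero)    # stand-in for the reference trajectory
    per = integrate(np.array([0.9, 0.1]), small, small)  # perturbed, different initial condition
    err_s = per[:, 0] - ref[:, 0]
    err_x = np.log(per[:, 1]) - np.log(ref[:, 1])        # transformed error component
    tail = len(err_s) // 10
    print("late-time max |s - s_r|      :", float(np.max(np.abs(err_s[-tail:]))))
    print("late-time max |ln x - ln x_r|:", float(np.max(np.abs(err_x[-tail:]))))
```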
reapplying ( [ da2 ] ) therefore implies is finite .it follows from barbalat s lemma and the structure of the function that as .this establishes our stability condition for the multi - species model .notice that is bounded below by a quadratic of the form along the trajectories of ( [ dmodel ] ) , and that is bounded above and below by such quadratics along the trajectories as well , since the trajectories are bounded . from this fact and ( [ surmise ] ) , one can deduce that the trajectories converge _ exponentially _ to zero .to validate our convergence result , we simulated the dynamics ( [ mod1 ] ) with the initial values and and the reference trajectory , using the parameters and and . in this case ,the lower bound on provided by ( [ aer1 ] ) is .it follows from remark [ iiss - rk ] that the convergence of to is robust to disturbances that are valued in ^ 2 ] ( by integrating between and ) gives } \ ; , \end{array}\ ] ] where we enlarged without relabeling .we deduce that }\right)\ ] ] where is defined in ( [ vchoice ] ) .since and for all , we deduce from the formula for that in particular , . from ( [ 49 ] ) and the inequality , } } \end{array}\ ; .\ ] ] the relations and give } } \\ & & \hspace{-.5in}+ \frac{1}{2 } \left(e^{2\sqrt{2 e^{(t_0 - t ) c_5 } \omega\left(|(\tilde{\xi}(t_0),\tilde z(t_0))|\right ) } } - 1\right ) + \frac{1}{2 } \left(e^{2\sqrt{2 c_2 |u|_{[t_o , t ] } } } - 1\right ) .\end{array}\ ] ] the desired iss estimate ( [ issest ] ) now follows immediately from this last inequality and ( [ tal ] ) with the choices this completes the proof of theorem [ iss - track ] .part of this work was done while p. de leenheer and f. mazenc visited louisiana state university ( lsu ) .they thank lsu for the kind hospitality they enjoyed during this period .f. mazenc thanks claude lobry and alain rapaport for illuminating discussions .malisoff was supported by nsf / dms grant 0424011 .de leenheer was supported by nsf / dms grant 0500861 .the authors thank the referees for their comments , and they thank hairui tu for helping with the graphics .the second author thanks ilana aldor for stimulating discussions .f. grognard , f. mazenc , and a. rapaport , polytopic lyapunov functions for the stability analysis of persistence of competing species .proceedings of the 44th ieee confence on decision and control and european control conference ecc 2005 , seville , spain ( 2005 ) 3699 - 3704 .[ also discrete and continuous dynamical systems - series b , to appear . ]smith and p. waltman , the theory of the chemostat .cambridge university press , cambridge , 1995 .sontag , smooth stabilization implies coprime factorization .ieee trans .automatic control 34 ( 1989 ) 435 - 443 .e. d. sontag , the iss philosophy as a unifying framework for stability - like behavior .nonlinear control in the year 2000 , volume 2 , a. isidori , f. lamnabhi - lagarrigue , and w. respondek , eds . , lecture notes in control and information sciences vol .259 , springer - verlag , berlin , 2000 , 443 - 468 .g. zames , on the input - output stability of time - varying nonlinear feedback systems .part i : conditions using concepts of loop gain , conicity , and positivity .ieee trans .automatic control 11 ( 1966 ) 228 - 238 .g. zames , on the input - output stability of time - varying nonlinear feedback systems .part ii : conditions involving circles in the frequency plane and sector nonlinearities .. automatic control 11 ( 1966 ) 465 - 476 . 
| we study the chemostat model for one species competing for one nutrient using a lyapunov - type analysis . we design the dilution rate function so that all solutions of the chemostat converge to a prescribed periodic solution . in terms of chemostat biology , this means that no matter what positive initial levels for the species concentration and nutrient are selected , the long term species concentration and substrate levels closely approximate a prescribed oscillatory behavior . this is significant because it reproduces the realistic ecological situation where the species and substrate concentrations oscillate . we show that the stability is maintained when the model is augmented by additional species that are being driven to extinction . we also give an input - to - state stability result for the chemostat tracking equations for cases where there are small perturbations acting on the dilution rate and initial concentration . this means that the long term species concentration and substrate behavior enjoys a highly desirable robustness property , since it continues to approximate the prescribed oscillation up to a small error when there are small unexpected changes in the dilution rate function . frdric mazenc projet mere inria - inra umr analyse des systmes et biomtrie inra 2 , pl . viala , 34060 montpellier , france michael malisoff department of mathematics louisiana state university baton rouge , la 70803 - 4918 patrick de leenheer department of mathematics university of florida 411 little hall , po box 118105 gainesville , fl 326118105 ( communicated by ? ? ? ? ) |
consider the following class of regression models , with response variable denoted by and -dimensional covariate vector by , where is the unknown parameter vector , is the unobserved error term that is independent of with a completely unspecified distribution , and is a monotone increasing , but otherwise unspecified function .it is easily seen that this class of models contains many commonly used regression models as its submodels that are especially important in the econometrics and survival analysis literature .for example , with , becomes the standard regression model with an unspecified error distribution ; with ( ) , the box - cox transformation model ( box and cox , 1964 ) ; with ] , a censored regression model ( tobin , 1958 ; powell , 1984 ) ; with , the accelerated failure times ( aft ) model ( cox and oakes , 1984 ; kalbfleisch and prentice , 2002 ) ; with having an extreme value density , the cox proportional hazards regression ( cox , 1972 ) ; with having the standard logistic distribution , the proportional odds regression ( bennett , 1983 ) . a basic tool for handling model is the maximum rank correlation ( mrc ) estimator proposed in the econometrics literature by han ( 1987 ) .because both the transformation function and the error distribution are unspecified , not all components of are identifiable . without loss of generality, we shall assume henceforth that the last component , .let be a random sample from .han s mrc estimator , denoted by , is the maximizer of following objective function }i[{{{\bf x}}_i'{\mbox{\boldmath}}({\mbox{\boldmath}})>{{\bf x}}_j'{\mbox{\boldmath}}({\mbox{\boldmath}})}],\ ] ] where \beta\theta\beta\theta\beta\theta\beta\theta\beta\theta ] and . because of the standardization , the rank correlation criterion function is bounded by 1 .it is not difficult to establish a uniform law of large numbers where is the expectation of ; cf .han ( 1987 ) and sherman ( 1993 ) .likewise , we can show that such uniform convergence also holds for , i.e. note that the limit remains the same . in the following theorem, we claim that the estimate obtained from maximizing the smoothed rank correlation function is also asymptotically normal with the same asymptotic covariance matrix as han s mrce .[ thy : normality ] for any given positive definite matrix , let be defined as in and . then , under assumptions 1 - 3 , is consistent , a.s . and asymptotically normal , where is defined as in proposition 1 .in addition , is asymptotically equivalent to in the sense that .recall that defines the sandwich - type variance estimator by pretending that is a standard smooth objective function .theorem [ thy : var formula ] below shows that is consistent .[ thy : var formula ] let be the mrc estimator and be defined by . then , for any fixed positive definite matrix , converges in probability to , the limiting variance - covariance matrix of .the self - induced smoothing uses the limiting covariance matrix as . in practice, we may initially choose the identity matrix for , which is the same way as the initial step in algorithm i. by theorem 2.1 , we know that the one - step estimator in algorithm i converges in probability to the true covariance . however , this one - step estimator depends on the initial choice of .algorithm 1 is an iterative algorithm with the variance - covariance estimator converging to the fixed point of .convergence of algorithm 1 is ensured by the following theorem . for notational simplicity , we let be the vectorization of matrix . 
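before turning to the asymptotic analysis, it helps to have the two criterion functions written down directly. the sketch below evaluates han's rank correlation criterion with the last coefficient fixed at one, together with a smoothed version in which the indicator of a positive index difference is replaced by a normal cdf. the smoothing scale used here, sqrt(x_ij' d x_ij) with d a working positive definite matrix (the identity, as in the initial step of algorithm 1), is an illustrative choice, and the data-generating model in the demo is likewise assumed.

```python
import numpy as np
from scipy.stats import norm

def mrc_objective(theta, x, y):
    """han's maximum rank correlation criterion with beta = (theta, 1):
       average over pairs of 1[y_i > y_j] 1[x_i'beta > x_j'beta]."""
    beta = np.append(theta, 1.0)
    score = x @ beta
    gy = y[:, None] > y[None, :]
    gs = score[:, None] > score[None, :]
    n = len(y)
    return (gy & gs).sum() / (n * (n - 1))

def smoothed_mrc_objective(theta, x, y, d_mat=None):
    """smoothed criterion: the indicator of x_ij'beta > 0 is replaced by
       Phi(sqrt(n) x_ij'beta / sigma_ij); the scale sigma_ij below,
       sqrt(x_ij' D x_ij) with D a working positive definite matrix
       (identity by default), is an illustrative choice."""
    n, p = x.shape
    beta = np.append(theta, 1.0)
    if d_mat is None:
        d_mat = np.eye(p)
    gy = (y[:, None] > y[None, :]).astype(float)
    out = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            xij = x[i] - x[j]
            sigma = np.sqrt(xij @ d_mat @ xij)
            if sigma > 0.0:
                out += gy[i, j] * norm.cdf(np.sqrt(n) * (xij @ beta) / sigma)
    return out / (n * (n - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, theta_true = 200, np.array([2.0])
    x = rng.normal(size=(n, 2))
    # assumed monotone transformation model: y = exp(x'beta + eps) with beta = (2, 1)
    y = np.exp(x @ np.append(theta_true, 1.0) + rng.normal(size=n))
    grid = np.linspace(0.0, 4.0, 41)
    vals = [mrc_objective(np.array([t]), x, y) for t in grid]
    theta_hat = grid[int(np.argmax(vals))]
    print("grid maximiser of the mrc criterion:", theta_hat)
    print("smoothed criterion at that point  :",
          round(smoothed_mrc_objective(np.array([theta_hat]), x, y), 4))
```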
for any function of , where denotes the entry of .[ thy : alg converg ] let be defined as in algorithm 1 .suppose that assumptions 1 - 3 hold .then there exist , , such that for any , there exists , such that for all , for a fixed , represents the fixed point matrix in the iterative algorithm .the above theorem shows that with probability approaching 1 , the iterative algorithm converges to a limit , as , and the limit converges in probability to the limiting covariance matrix .the speed of convergence of to is faster than any exponential rate in the sense that for any .this can be seen from step 2 of algorithm 1 in subsection 2.1 and below , {r , s}=o_p(1),\ ] ] which will be proved in the appendix . here is a small neighborhood of and is a positive definite matrix . in this section, we provide proofs for ( 1 ) asymptotic equivalence of smrce to mrce , ( 2 ) consistency of the induced variance estimator and ( 3 ) convergence of algorithm 1 . some of the technical developments used in the proofs will be given in the appendix .[ proof of theorem [ thy : normality ] ] without loss of generality , we assume as in subsection 2.1 , let be a -variate normal random vector with mean and covariance matrix .define let and . define = { \rm argmax}_{{\mbox{\boldmath } } } { \widetilde\gamma_n}({\mbox{\boldmath}}).\ ] ] let \omega\omega\theta\theta\omega\theta\omega\theta\omega\theta\theta\theta ] .as defined in has the following integral representation , by change of variable , . from , and where and .in view of , to show , it suffices to prove {r , s } + o_p(1)\ ] ] uniformly over . to show, we define by lemma 2(i ) and the boundedness of , we have , where for set denotes its complement .therefore , reduces to {r , s } + o_p(1).\ ] ] to show , we establish a quadratic expansion of for . since for and , it follows that .therefore , by , therefore , the left hand side of equals , where \ddot{k}_{n , r , s}({{\bf t}},{\mbox{\boldmath}})d{{\bf t}},\\ \bf{ii}&=q_n({\mbox{\boldmath}}_0)\times\int_{{\mbox{\boldmath}}_{n , r}\cap{\mbox{\boldmath}}_{n , s}}\ddot{k}_{n , r , s}({{\bf t}},{\mbox{\boldmath}})d{{\bf t}},\\ \bf{iii}&=\frac{{\bf w}_n'}{\sqrt{n}}\times\int_{{\mbox{\boldmath}}_{n , r}\cap{\mbox{\boldmath}}_{n , s}}({{\bf t}}-{\mbox{\boldmath}}_0)\ddot{k}_{n , r , s}({{\bf t}},{\mbox{\boldmath}})d{{\bf t}},\\ \bf{iv}&=\frac{1}{2}\int_{{\mbox{\boldmath}}_{n , r}\cap{\mbox{\boldmath}}_{n , s}}({{\bf t}}-{\mbox{\boldmath}}_0)'{{\bf a}}({\mbox{\boldmath}}_0)({{\bf t}}-{\mbox{\boldmath}}_0)\ddot{k}_{n , r , s}({{\bf t}},{\mbox{\boldmath}})d{{\bf t}}. \end{split}\end{aligned}\ ] ] by the definition of , by lemma 2(ii ) , .furthermore , due to lemma 2(iii ) .note that where the last equality follows from the fact that and are symmetric at and is an odd function of {r} ] . 
combining the approximations for , we get .next we prove by showing , componentwise , {r , s } = [ { { \bf v}}({\mbox{\boldmath}}_0)]_{r , s } + o_p(1)\ ] ] uniformly over for .define }i_{[({{\bf x}}-\tilde { { \bf x}})'{\mbox{\boldmath}}>0 ] } + i_{[y<\tilde y]}i_{[({{\bf x}}-\tilde { { \bf x}})'{\mbox{\boldmath}}<0]},\theta\theta\theta\omega\omega\omega\theta\omega\omega\theta\omega\omega\theta\omega\theta\omega\omega\omega\omega\theta\omega\theta\omega ] as a projection of such that - e f^ * , \\p_{\{i , j\ } } f^ * & = e [ f^ * | { { \bf u}}_i , { { \bf u}}_j ] - e [ f^ * | { { \bf u}}_i ] - e [ f^ * | { { \bf u}}_j ] + e f^*,\\ p_{\{1,2,3\ } } f^ * & = e [ f^ * | { { \bf u}}_1 , { { \bf u}}_2 , { { \bf u}}_3 ] - \sum_{i\neq j } e [ f^ * | { { \bf u}}_i , { { \bf u}}_j ] + \sum_{i=1,2,3}e [ f^ * | { { \bf u}}_i ] - e f^*. \end{split}\end{aligned}\ ] ] we know from hoeffding s decomposition that and are second- and third - order degenerated u - statistics with bounded kernels and thus of order and ; see sherman ( 1992 , corollary 8) . therefore , by lemma 2(vi ) , replacing by in also results in .then combining this and with and , reduces to {r , s } & = 3 \times\int_{{\mbox{\boldmath}}_{n , r}\cap{\mbox{\boldmath}}_{n , s } } u_{n,1}\times\dot{k}_{n , r}({{\bf t}},{\mbox{\boldmath}})\dot{k}_{n , s}({\mbox{\boldmath } } , { \mbox{\boldmath}})d{{\bf t}}d{\mbox{\boldmath}}\\ & + \int_{{\mbox{\boldmath}}_{n , r}\cap{\mbox{\boldmath}}_{n , s } } ef\times\dot{k}_{n , r}({{\bf t}},{\mbox{\boldmath}})\dot{k}_{n ,s}({\mbox{\boldmath}},{\mbox{\boldmath}})d{{\bf t}}d{\mbox{\boldmath}}+ o_p(1 ) . \end{split}\ ] ] let ] and \theta\omega\theta\omega\theta ] , which gives . from and , .from theorem , we know that and . by the mean value theorem , {r , s } & = [ \hat{\mbox{\boldmath}}_n(\hat{\mbox{\boldmath}}_n , \hat{{\mbox{\boldmath}}}_n^{(1 ) } ) - \hat{\mbox{\boldmath}}_n(\hat{\mbox{\boldmath}}_n , { { \bf d}}_0)]_{r , s } + [ \hat{\mbox{\boldmath}}_n(\hat{\mbox{\boldmath}}_n , { { \bf d}}_0 ) - { { \bf d}}_0]_{r , s}\\ & = \left[\frac{\partial}{\partial{\mbox{\boldmath}}}[\hat{{{\bf d}}}_n]_{r , s}\bigg|_{{\mbox{\boldmath}}={\mbox{\boldmath}}^*}\right ] ' \times vech(\hat{{\mbox{\boldmath}}}_n^{(1)}-{{\bf d}}_0 ) + [ \hat{\mbox{\boldmath}}_n(\hat{\mbox{\boldmath}}_n , { { \bf d}}_0 ) - { { \bf d}}_0]_{r , s } , \end{split}\end{aligned}\ ] ] where and thus . in view of lemma 4 and , . again by the mean value theorem , {r , s}= \left[\frac{\partial}{\partial { \mbox{\boldmath}}}[\hat { { { \bf d}}}_n]_{r , s}\bigg|_{{\mbox{\boldmath}}={\mbox{\boldmath}}^*}\right ] ' \times vech(\hat{{\mbox{\boldmath}}}_n^{(k)}-{{\bf d}}_0),\ ] ] where . then by lemma 4 and mathematical induction ,we know that for any and , there exist and , such that for any and , {r , s}\big| \leq \eta \times \big|[\hat{\mbox{\boldmath}}_n^{(k)}-\hat{\mbox{\boldmath}}_n^{(k-1)}]_{r , s}\big| , \mbox { for all } k > k\right)>1-\epsilon,\ ] ] where .note that the inequality inside the above probability implies that converges as and the limit satisfies and .in this section , we extend the approach to the partial rank correlation ( prc ) criterion function , defined by ( 3 ) , of khan and tamer ( 2007 ) for censored data . 
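before the censored-data criterion is developed below, a sketch of what such a partial-rank objective looks like in code may be useful. it restricts attention to pairs whose ordering of latent failure times is observable (the smaller observed time is an uncensored failure) and smooths the remaining indicator exactly as in the earlier sketch; this form is patterned on the khan-tamer partial rank idea and on the smoothing used above, and may differ in detail from the criterion (3) used in the paper.

```python
import numpy as np
from scipy.stats import norm

def smoothed_prc_objective(theta, x, y_obs, delta, d_mat=None):
    """smoothed partial-rank criterion for right-censored data: a pair (i, j)
       contributes only when the smaller observed time is an uncensored
       failure (delta_j = 1), so that the ordering of the latent times is
       known; the indicator of x_ij'beta > 0 is smoothed by a normal cdf
       with the same illustrative scale sqrt(x_ij' D x_ij) as before."""
    n, p = x.shape
    beta = np.append(theta, 1.0)
    if d_mat is None:
        d_mat = np.eye(p)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j or delta[j] == 0 or y_obs[i] <= y_obs[j]:
                continue
            xij = x[i] - x[j]
            sigma = np.sqrt(xij @ d_mat @ xij)
            if sigma > 0.0:
                total += norm.cdf(np.sqrt(n) * (xij @ beta) / sigma)
    return total / (n * (n - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 150
    x = rng.normal(size=(n, 2))
    t = np.exp(x @ np.array([2.0, 1.0]) + rng.normal(size=n))   # latent failure times (assumed model)
    c = rng.exponential(scale=np.quantile(t, 0.8), size=n)      # censoring times
    y_obs, delta = np.minimum(t, c), (t <= c).astype(int)
    for th in (0.0, 1.0, 2.0, 3.0):
        print(th, round(smoothed_prc_objective(np.array([th]), x, y_obs, delta), 4))
```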
under the usual conditional independence between failure and censoring times given covariates and additional regularity conditions , khan and tamer ( 2007 ) developed asymptotic properties for prce that are parallel to those by sherman ( 1993 ) .the same self - induced smoothing can be applied to partial rank correlation criteria function to get \phi\left(\sqrt{n}{{\bf x}}_{ij}'{\mbox{\boldmath}}({\mbox{\boldmath}})/\sigma_{ij}\right ) .\end{split}\end{aligned}\ ] ] we define its maximizer , , as the smoothed partial rank correlation estimator ( sprce ) .let ^{\otimes2}\right\},\ ] ] \right\}^{\otimes2 } , \\\end{split}\end{aligned}\ ] ] ^{-1}\times \hat { { \bf v}}_n^*({\mbox{\boldmath } } , { \mbox{\boldmath}})\times [ \hat { { \bf a}}^*_n({\mbox{\boldmath } } , { \mbox{\boldmath}})]^{-1},\ ] ] where -\delta_i \times i[\tilde y_j > \tilde y_i] ] and a covariance matrix diag .we set ] by two steps .we first generated ' ] and an identity covariance matrix .we then generated as 0 or 2 with equal probability .we set ] + ( v) .+ ( vi) .+ ( vii) + + let , and divide its complement into and . we prove ( i)-(iv ) for . for ,the proofs are similar and omitted . for ( i ) , note that since and \geq0 \theta\theta\theta\theta\theta\omega\theta\omega\theta\theta\theta\sigma\sigma\theta\omega\sigma\sigma\sigma\theta\omega\sigma\sigma ] . first , by theorem 2 , lemma 4 and the mean value theorem, we can show that {r , s } = [ \hat{{{\bf a}}}_n({\mbox{\boldmath } } , { \mbox{\boldmath}})-\hat{{{\bf a}}}_n({\mbox{\boldmath } } , { { \bf d}}_0 ) + \hat{{{\bf a}}}_n({\mbox{\boldmath } } , { { \bf d}}_0)]_{r , s}=[{{\bf a}}({\mbox{\boldmath}}_0)]_{r , s}+o_p(1)$ ] . by matrix differentiation , .thus , where . the rest of the proof is straightforward and thus omitted .[ thy : fast convergence ] for , we have {r , s}=o_p(1).\ ] ] the result follows immediate from lemma 4 and corollary 1 .suppose is the joint density for and is the conditional density of given and .suppose is the conditional density of given and is the conditional density of given and .by change of variable , . therefore , where is the marginal distribution of .therefore if the conditional density has bounded derivatives up to order three for each in the support of space , it is not difficult to show that assumption 3 is satisfied .the sufficient condition can be easily verified in certain common situations such as when the conditional density is normal . | this paper deals with a general class of transformation models that contains many important semiparametric regression models as special cases . it develops a self - induced smoothing for the maximum rank correlation estimator , resulting in simultaneous point and variance estimation . the self - induced smoothing does not require bandwidth selection , yet provides the right amount of smoothness so that the estimator is asymptotically normal with mean zero ( unbiased ) and variance - covariance matrix consistently estimated by the usual sandwich - type estimator . an iterative algorithm is given for the variance estimation and shown to numerically converge to a consistent limiting variance estimator . the approach is applied to a data set involving survival times of primary biliary cirrhosis patients . simulations results are reported , showing that the new method performs well under a variety of scenarios . , , + |
binary classification problems appear from diverse practical applications , such as , financial fraud detection , spam email classification , medical diagnosis with genomics data , drug response modeling , among many others . in these classification problems ,the goal is to predict class labels based on a given set of variables .suppose that we observe a training data set consisting of pairs , where , , and .a classifier fits a discriminant function and constructs a classification rule to classify data point to either class or class according to the sign of .the decision boundary is given by .two canonical classifiers are linear discriminant analysis and logistic regression .modern classification algorithms can produce flexible non - linear decision boundaries with high accuracy .the two most popular approaches are ensemble learning and support vector machines / kernel machines . ensemble learning such as boosting and random forest combine many weak learners like decision trees into a powerful one . the support vector machine ( svm ) fits an optimal separating hyperplane in the extended kernel feature space which is non - linear in the original covariate spaces . in a recent extensive numerical study by ,the kernel svm is shown to be one of the best among 179 commonly used classifiers .motivated by data - piling " in the high - dimension - low - sample - size problems , invented a new classification algorithm named distance weighted discrimination ( dwd ) that retains the elegant geometric interpretation of the svm and delivers competitive performance . since then much work has been devoted to the development of dwd .the readers are referred to for an up - to - date list of work on dwd . on the other hand, we notice that dwd has not attained the popularity it deserves .we can think of two reasons for that .first , the current state - of - the - art algorithm for dwd is based on second - order - cone programming ( socp ) proposed in .socp was an essential part of the dwd development .as acknowledged in , socp was then much less well - known than quadratic programming , even in optimization .furthermore , socp is generally more computationally demanding than quadratic programming .there are two existing implementations of the socp algorithm : in matlab and in r. with these two implementations , we find that dwd is usually more time - consuming than the svm .therefore , socp contributes to both the success and unpopularity of dwd .second , the kernel extension of dwd and the corresponding kernel learning theory are under - developed compared to the kernel svm . although proposed a version of non - linear dwd by mimicking the kernel trick used for deriving the kernel svm , theoretical justification of such a kernel dwd is still absent . on the contrary ,the kernel svm as well as the kernel logistic regression have mature theoretical understandings built upon the theory of reproducing kernel hilbert space ( rkhs ) .most learning theories of dwd succeed to s geometric view of hdlss data and assume that and is fixed , as opposed to the learning theory for the svm where and is fixed .we are not against the fixed and theory but it would be desirable to develop the canonical learning theory for the kernel dwd when is fixed and .in fact , how to establish the bayes risk consistency of the dwd and kernel dwd was proposed as an open research problem in the original dwd paper .nearly a decade later , the problem still remains open . 
in this paper , we aim to resolve the aforementioned issues .we show that the kernel dwd in a rkhs has the bayes risk consistency property if a universal kernel is used .this result should convince those who are less familiar with dwd to treat the kernel dwd as a serious competitor to the kernel svm . to popularize the dwd, it is also important to allow practitioners to easily try dwd collectively with the svm in real applications . to this end, we develop a novel fast algorithm to solve the linear and kernel dwd by using the majorization - minimization ( mm ) principle . compared with the socp algorithm ,our new algorithm has multiple advantages .first , our algorithm is much faster than the socp algorithm . in some examples ,our algorithm can be several hundred times faster .second , dwd equipped with our algorithm can be faster than the svm .third , our algorithm is easier to understand than the socp algorithm , especially for those who are not familiar with semi - definite and second - order - cone programming .this could help demystify the dwd and hence may increase its popularity . to give a quick demonstration, we use a simulation example to compare the kernel dwd and the kernel svm .we drew 10 centers from . for each data point in the positive class , we randomly picked up a center and then generated the point from .the negative class was assembled in the same way except that 10 centers were drawn from .for this model the bayes rule is nonlinear . ]figure [ fig : svm_dwd ] displays the training data from the simulation model where 100 observations are from the positive class ( plotted as triangles ) and another 100 observations are from the negative class ( plotted as circles ) . we fitted the svm and dwd using gaussian kernels .we have implemented our new algorithm for dwd in a publicly available r package ` kerndwd ` .we computed the kernel svm by using the r package ` kernlab ` .we recorded their training errors and test errors . from figure[ fig : svm_dwd ] , we observe that like the kernel svm , the kernel dwd has a test error close to the bayes error , which is consistent with the bayes risk consistency property of the kernel dwd established in section [ sec : learningtheory0 ] .notably , the kernel dwd is about three times as fast as the kernel svm in this example .the rest of the paper is organized as follows .to be self - contained , we first review the svm and dwd in section [ sec : review ] .we then derive the novel algorithm for dwd in section [ sec : computation ] .we introduce the kernel dwd in a reproducing kernel hilbert space and establish the learning theory of kernel dwd in section [ sec : kerdwd ] .real data examples are given in section [ sec : real_data ] to compare dwd and the svm .technical proofs are provided in the appendix .the introduction of the svm usually begins with its geometric interpretation as a maximum margin classifier . consider a case when two classes are separable by a hyperplane such that are all non - negative . without loss of generality , we assume that is a unit vector , i.e. 
, , and we observe that each is equivalent to the euclidean distance between the data point and the hyperplane .the reason is that and , where is any data point on the hyperplane and is the unit normal vector .the svm classifier is defined as the optimal separating hyperplane that maximizes the smallest distance of each data point to the separating hyperplane .mathematically , the svm can be written as the following optimization problem ( for the separable data case ) : the smallest distance is called the _ margin _ , and the svm is thereby regarded as a _ large - margin classifier_.the data points closest to the hyperplane , i.e. , , are dubbed the _ support vectors_. in general , the two classes are not separable , and thus can not be non - negative for all . to handle this issue ,non - negative slack variables , are introduced to ensure all to be non - negative . with these slack variables ,the optimization problem is generalized as follows , to compute svms , the optimization problem is usually rephrased as an equivalent quadratic programming ( qp ) problem , ,\\ \text { subject to } & \;\;\ ; y_i(\beta_0 + { \boldsymbol } x_i^t{\boldsymbol } \beta ) + \xi_i \ge 1,\ \xi_i \ge 0 , \\forall i , \label{eq : quad_svm } \end{aligned}\ ] ] and it can be solved by maximizing its lagrange dual function , , \\\text { subject to } & \;\;\ ; \mu_i \ge 0 \text { and } \sum_{i=1}^n \mu_i y_i = 0 .\label{eq : dual } \end{aligned}\ ] ] by solving , one can show that the solution of has the form being zero only when lies on the support vectors .one widely used method to extend the linear svm to non - linear classifiers is the kernel method , which replaces the dot product in the lagrange dual problem with a kernel function , and hence the solution has the form some popular examples of the kernel function include : ( linear kernel ) , ( polynomial kernel ) , and ( gaussian kernel ) , among others. distance weighted discrimination was originally proposed by to resolve the _ data - piling _ issue . observed that many data points become support vectors when the svm is applied on the so - called high - dimension - low - sample - size ( hdlss ) data , and coined the term data - piling to describe this phenomenon .we delineate it in figure [ fig : dp ] through a simulation example .let be a -dimension vector .we generated points ( indexed from to and represented as triangles ) from as the negative class and another points ( indexed from to and represented as circles ) from as the positive class .we computed and for svm . in the left panel of figure [ fig : dp ] , we plotted for each data point , and we portrayed the support vectors by solid triangles and circles .we observe that out of data points become support vectors .the right panel of figure [ fig : dp ] corresponds to dwd ( will be defined shortly ) , where data - piling is attenuated .a real example revealing the data - piling can be seen in figure 1 of .are plotted for svm and dwd .indices 1 to 50 represent negative class ( triangles ) and indices 51 to 100 are for positive class ( circles ) . in the left panel ,data points belonging to the support vectors are depicted as solid circles and triangles . 
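the data-piling phenomenon described above is easy to reproduce with off-the-shelf software. the sketch below fits a linear soft-margin svm to a high-dimension-low-sample-size two-class gaussian sample and counts how many observations end up as support vectors and how many fitted margins sit at the margin value one; the dimension, sample sizes, class means and cost parameter are assumed values chosen only for illustration, not the settings used for figure [ fig : dp ].

```python
import numpy as np
from sklearn.svm import SVC

# hdlss data-piling illustration; all data-generating settings are assumptions
rng = np.random.default_rng(0)
n_per_class, p = 50, 300
mean = np.zeros(p)
mean[0] = 1.5
x = np.vstack([rng.normal(size=(n_per_class, p)) - mean,    # negative class
               rng.normal(size=(n_per_class, p)) + mean])   # positive class
y = np.concatenate([-np.ones(n_per_class), np.ones(n_per_class)])

svm = SVC(kernel="linear", C=1.0).fit(x, y)
margins = y * svm.decision_function(x)
# with p much larger than n, a large fraction of the points typically become
# support vectors and their fitted margins pile up near 1
print("number of support vectors     :", len(svm.support_))
print("margins within 0.01 of 1      :", int(np.sum(np.abs(margins - 1.0) < 1e-2)))
```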
] viewed data - piling " as a drawback of the svm , because the svm classifier is a function of only support vectors .another popular classifier logistic regression does classification by using all the data points .however , the classical logistic regression classifier is derived by following the maximum likelihood principle , not based on a nice margin - maximization motivation penalized logistic regression approaches the margin - maximizing hyperplane for the separable data case .dwd was first proposed in 2002 . ] . wanted to have a new method that is directly formulated by a svm - like margin - maximization picture and also uses all data points for classification .to this end , proposed dwd which finds a separating hyperplane minimizing the total inverse margins of all the data points : ,\\ \text { subject to } & \;\;\ ; d_i = y_i(\omega_0 + { \boldsymbol } x_i^t{\boldsymbol } \omega ) + \eta_i \ge 0 , \ \eta_i \ge 0,\ \forall i , \text { and } { \boldsymbol } \omega^t { \boldsymbol } \omega=1 . \end{aligned } \label{eq : nonsep_dwd}\ ] ]there has been much work on variants of the standard dwd .we can only give an incomplete list here . introduced the weighted dwd to tackle unequal cost or sample sizes by imposing different weights on two classes . the binary dwd to the multiclass case . proposed the sparse dwd for high - dimensional classification .in addition , the work connecting dwd with other classifiers , e.g. , svm , includes but not limited to lum , dwsvm , and flame . provided a more comprehensive review of the current dwd literature . solved the standard dwd by reformulating as a second - order cone programming ( socp ) program , which has a linear objective , linear constraints , and second - order - cone constraints .specifically , for each , let , , and then , , and .hence the original optimization problem becomes ,\\ \text { subject to } & \;\;\ ; { \boldsymbol } \rho - { \boldsymbol } \sigma = \tilde{{\boldsymbol } y } { \boldsymbol } x { \boldsymbol } \omega + \omega_0 \cdot { \boldsymbol } y + { \boldsymbol } \eta , \\ & \;\;\ ; \eta_i \ge 0 , \( \rho_i ; \sigma_i , 1 ) \in s_3 , \\forall i , \ ( 1 ; { \boldsymbol } \omega)\in s_{p+1 } , \end{aligned } \label{eq : primal_dwd}\ ] ] where is an diagonal matrix with the diagonal element , is an data matrix with the row , and is the form of the second - order cones . after solving and from , a new observation classified by .note that the kernel svm was derived from applying the kernel trick to the dual formulation ( [ eq : soln_svm ] ) . followed the same approach to consider a version of kernel dwd for achieving non - linear classification .the dual function of the problem is ,\\ \text { subject to } & \;\;\ ; { \boldsymbol } y^t { \boldsymbol } \alpha = 0 , \ { \boldsymbol } 0 \le { \boldsymbol } \alpha \le c\cdot { \boldsymbol } 1 , \end{aligned } \label{eq : dual_dwd}\ ] ] where .note that only uses , which makes it easy to employ the kernel trick to get a nonlinear extension of the linear dwd . for a given kernel function ,define the kernel matrix as , .then a kernel dwd can be defined as ,\\ \text { subject to } & \;\;\ ; { \boldsymbol } y^t { \boldsymbol } \alpha = 0 , \ { \boldsymbol } 0 \le { \boldsymbol } \alpha \le c\cdot { \boldsymbol } 1 .\end{aligned } \label{eq : dual_kerdwd}\ ] ] to solve ( [ eq : dual_kerdwd ] ) , used the cholesky decomposition of the kernel matrix , i.e. , and then replaced the predictors in with . 
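the implementation route mentioned at the end of this passage, factorising the kernel matrix and treating the columns of the factor as surrogate predictors, can be checked in a few lines; the gaussian kernel and its bandwidth below are illustrative choices.

```python
import numpy as np

# cholesky trick: with K = R'R, inner products of the columns of R reproduce
# the kernel entries, so a linear method run on these surrogate predictors is
# implicitly working with the kernel
rng = np.random.default_rng(0)
x = rng.normal(size=(30, 3))
sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)                                   # gaussian kernel, illustrative bandwidth
R = np.linalg.cholesky(K + 1e-10 * np.eye(30)).T        # K is approximately R'R
print("factorisation reproduces K:", bool(np.allclose(R.T @ R, K, atol=1e-6)))
```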
also carefully discussed several algorithmic issues that ensure the equivalent optimality in and . * remark 1 . *two dwd implementations have been published thus far : a matlab software and an r package ` dwd ` .both implementations are based on a matlab socp solver ` sdpt3 ` , which was developed by .we notice that the r package ` dwd ` can only compute the linear dwd . * remark 2 . * to our best knowledge ,the theoretical justification for the kernel dwd in is still unclear .the reason is likely due to the fact that the nonlinear extension is purely algorithmic .in fact , the bayes risk consistency of dwd was proposed as an open research problem in .the kernel dwd considered in this paper can be rigorously justified to have a universal bayes risk consistency property ; see details in section [ sec : learningtheory0 ] . also attempted to replace the reciprocal in the dwd optimization problem with the power ( ) of the inverse distances , and also used it as the original definition of dwd .we name the dwd with this new formulation the generalized dwd : ,\\ \text { subject to } & \;\;\ ; d_i = y_i(\omega_0 + { \boldsymbol } x_i^t{\boldsymbol } \omega ) + \eta_i \ge 0 , \ \eta_i\forall i , \text { and } { \boldsymbol } \omega^t { \boldsymbol } \omega=1 , \label{eq : gendwd } \end{aligned}\ ] ] which degenerates to the standard dwd when . the first asymptotic theory for dwd and generalized dwd was given in who presented a novel geometric representation of the hdlss data .assuming are the data from the positive class and are from the negative class . stated that , when the sample size is fixed and the dimension goes to infinity , under some regularity conditions , there exist two constants and such that for each pair of and , as .this result was applied the results to study several classifiers including the svm and the generalized dwd . for ease presentationlet us consider the equal subgroup size case , i.e. , . assumed that the basic conclusion is that when is greater than a threshold that depends on , the misclassification error converges to zero , and when is less than the same threshold , the misclassification error converges to . for more details , see theorem 1 and theorem 2 in . further relaxed the assumptions thereof . * remark 3 .* the generalized dwd has not been implemented yet because the socp transformation only works for the standard dwd ( ) , but its extension to handle the general cases is unclear if not impossible .that is why the current dwd literature only focuses on dwd with .in fact , the generalized dwd with was proposed as an open research problem in .the new algorithm proposed in this paper can easily solve the generalized dwd problem for any ; see section [ sec : computation ] . originally solved the standard dwd by transforming into a socp problem .this algorithm , however , can not compute the generalized dwd with . in this section, we propose an entirely different algorithm based on the majorization - minimization ( mm ) principle .our new algorithm offers a unified solution to the standard dwd and the generalized dwd .our algorithm begins with a formulation of the dwd .lemma [ lm : dwd_loss ] deploys the result .note that the loss function also lays the foundation of the kernel dwd learning theory that will be discussed in section [ sec : kerdwd ] .the generalized dwd classifier in can be written as , where is computed from , \label{eq : dwdlossopt}\ ] ] for some , where [ lm : dwd_loss ] * remark 4 . 
* the proof of lemma 1 provides the one - to - one mapping between in and in .write as the solution to .define considering using , ,\\ \text { subject to } & \ ; d_i = y_i(\omega_0 + { \boldsymbol } x_i^t{\boldsymbol } \omega ) + \eta_i \ge 0 , \\eta_i \ge 0 , \\forall i , \text { and } { \boldsymbol } \omega^t{ \boldsymbol } \omega=1 , \label{eq : gendwd2 } \end{aligned}\ ] ] we have note that , which means that the generalized dwd classifier defined by is equivalent to the generalized dwd classifier defined by . by lemma [ lm : dwd_loss ] , we call the generalized dwd loss. it can be visualized in figure [ fig : dwdloss ] .we observe that the generalized dwd loss decreases as increases and it approaches the svm hinge loss function as .when , the generalized dwd loss becomes we notice that has appeared in the literature . in this workwe give a unified treatment of all values , not just ., and the svm hinge loss . ]we now show how to develop the new algorithm by using the mm principle .some recent successful applications of the mm principle can be seen in , among others .the main idea of the mm principle is easy to understand .suppose and we aim to minimize , defined in .the mm principle finds a majorization function satisfying for any and , and then we generate a sequence by updating via .we first expose some properties of the generalized dwd loss functions , which give rise to a quadratic majorization function of .the generalized dwd loss is differentiable everywhere ; its first - order derivative is given below , the generalized dwd loss function has a lipschitz continuous gradient , which further implies a quadratic majorization function of such that for any and .[ lm : lps ] denote the current solution by and the updated solution by .we settle and without abusing notations .we have that for any , \\ & + \frac{m}{2n } \sum_{i=1}^n \left[y_i(\beta_0 - \tilde\beta_0 ) + y_i { \boldsymbol } x_i^t ( { \boldsymbol } \beta - \tilde{{\boldsymbol } \beta})\right]^2 + \lambda { \boldsymbol } \beta^t { \boldsymbol } \beta \\ \equiv & { \boldsymbol } d(\beta_0 , { \boldsymbol } \beta ) .\label{eq : cled } \end{split}\ ] ] we now find the minimizer of .the gradients of are given as follows : {\boldsymbol } x_i + 2\lambda { \boldsymbol } \beta \notag\\ = & { \boldsymbol } x^t { \boldsymbol } z + \frac{m}{n}(\beta_0 - \tilde{\beta}_0){\boldsymbol } x^t { \boldsymbol } 1 + \frac{m}{n}\sum_{i=1}^n { \boldsymbol } x_i { \boldsymbol } x_i^t ( { \boldsymbol } \beta - \tilde{{\boldsymbol } \beta } ) + 2\lambda { \boldsymbol } \beta \notag\\ = & { \boldsymbol } x^t { \boldsymbol } z + \frac{m}{n}(\beta_0 - \tilde{\beta}_0){\boldsymbol } x^t { \boldsymbol } 1 + \left(\frac{m}{n } { \boldsymbol } x^t { \boldsymbol } x + 2\lambda{\boldsymbol } i_p\right ) ( { \boldsymbol } \beta - \tilde{{\boldsymbol } \beta } ) + 2\lambda \tilde{{\boldsymbol } \beta}\label{eq : grdnt_alp } , \\ \partial\frac{{\boldsymbol } d(\beta_0 , { \boldsymbol } \beta)}{\partial \beta_0 } = & \frac{1}{n } \sum_{i=1}^n v_q'\left(y_i(\tilde{\beta}_0 + { \boldsymbol } x_i ^t \tilde{{\boldsymbol } \beta})\right ) y_i + \frac{m}{n } \sum_{i=1}^n \left[(\beta_0 - \tilde{\beta}_0 ) + { \boldsymbol } x_i^t({\boldsymbol } \beta - \tilde{{\boldsymbol } \beta})\right ] \notag\\ = & { \boldsymbol } 1^t { \boldsymbol } z + m(\beta_0 - \tilde{\beta}_0 ) + \frac{m}{n } { \boldsymbol } 1^t { \boldsymbol } x ( { \boldsymbol } \beta - \tilde{{\boldsymbol } \beta } ) .\label{eq : grdnt_b0}\end{aligned}\ ] ] where is the data matrix with the row , is an 
vector with the element , and is the vector of ones . setting ] is the hinge loss underlying the svm . as shown in figure 3, the generalized dwd loss takes the hinge loss as its limit when . in general , the generalized dwd loss and the hinge loss look very similar , which suggests that the kernel dwd and the kernel svm equipped with the same kernel have similar statistical behavior . the procedure for deriving algorithm [ alg : linear ] for the linear dwdcan be directly adopted to derive an efficient algorithm for solving the kernel dwd .we obtain the majorization function , +\lambda { \boldsymbol } \alpha^t { \boldsymbol } k { \boldsymbol } \alpha \\ & & + \frac{m}{2n } \sum_{i=1}^n \left[y_i(\beta_0 - \tilde\beta_0 ) + y_i { \boldsymbol } k_i^t ( { \boldsymbol } \alpha - \tilde{{\boldsymbol } \alpha})\right]^2 + \frac{1}{n } \sum_{i=1}^n v_q\left(y_i(\tilde{\beta}_0 + { \boldsymbol } k_i ^t \tilde{{\boldsymbol } \alpha})\right ) \end{aligned}\ ] ] and then find the minimizer of which has a closed - form expression .we opt to omit the details here for space consideration .algorithm [ alg : kernel ] summarizes the entire algorithm for the kernel dwd .initialize compute : compute : compute : set = formulated the kernel svm as a non - parametric function estimation problem in a reproducing kernel hilbert space and showed that the population minimizer of the svm loss function is the bayes rule , indicating that the svm directly approximates the optimal bayes classifier . further coined a name fisher consistency " to describe such a result .the vapnik - chervonenkis ( vc ) analysis and the margin analysis have been used to bound the expected classification error of the svm . used the so - called leave - one - out analysis to study a class of kernel machines .the exisiting theoretical work on the kernel svm provides us a nice road map to study the kernel dwd . in this sectionwe first elucidate the fisher consistency of the generalized kernel dwd , and we then establish the bayes risk consistency of the kernel dwd when a universal kernel is employed . let denote the conditional probability . under the 0 - 1 loss ,the theoretical optimal bayes rule is .assume is a measurable function and throughout .the population minimizer of the expected generalized dwd loss ] assume that is the bayes rule and is the solution of , then where and are defined as follows and is the generalized dwd loss , - e_{{\boldsymbol } xy}\bigg[v_q \left(y\tilde{f}({\boldsymbol } x)\right)\bigg],\\ \varepsilon_e=\varepsilon_e(\hat{f}_n ) & = e_{{\boldsymbol } xy}\bigg[v_q \left(y\hat{f}_n({\boldsymbol } x)\right)\bigg ] - \inf_{f\in{\mathcal{h}}_k } e_{{\boldsymbol } xy}\bigg[v_q(yf({\boldsymbol } x))\bigg ] .\end{aligned } \label{eq : two_err}\ ] ] [ lm : err_bd ] in the above lemma is the bayes error rate and is the misclassification error of the kernel dwd applied to new data points .if , we say the classifier is bayes risk consistent .based on lemma [ lm : err_bd ] , it suffices to show that both and approach zero in order to demonstrate the bayes risk consistency of the kernel dwd .note that is deterministic and is called the approximation error . if the rkhs is rich enough then the approximation error can be made arbitrarily small . 
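a compact numerical sketch of the kernel mm iteration summarised in algorithm [ alg : kernel ] is given below. it is not the kerndwd implementation: the piecewise form assumed for the generalized dwd loss, the curvature constant used in the quadratic majorization, the gaussian kernel bandwidth, the ridge jitter and the toy data are all illustrative assumptions, and none of the numerical safeguards of a production solver are included.

```python
import numpy as np

def vq_prime(u, q=1.0):
    """derivative of the generalized dwd loss; the piecewise form below
       (-1 on the linear part, a power tail beyond u = q/(q+1)) is the one
       commonly used in the dwd literature and is assumed here."""
    u = np.asarray(u, dtype=float)
    thr = q / (q + 1.0)
    tail = -(q / ((q + 1.0) * np.maximum(u, thr))) ** (q + 1.0)
    return np.where(u <= thr, -1.0, tail)

def gaussian_kernel(a, b, sigma=1.0):
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def kernel_dwd_mm(x, y, lam=1e-2, q=1.0, sigma=1.0, n_iter=200):
    """mm iterations for the kernel generalized dwd objective
       (1/n) sum V_q(y_i (b0 + K_i'alpha)) + lam * alpha' K alpha;
       a bare-bones sketch, not the kerndwd implementation."""
    n = len(y)
    K = gaussian_kernel(x, x, sigma)
    Kt = np.hstack([np.ones((n, 1)), K])           # [1, K]
    B = np.zeros((n + 1, n + 1))
    B[1:, 1:] = K                                  # penalty block for alpha' K alpha
    M = (q + 1.0) ** 2 / q                         # bound on the second derivative of V_q
    theta = np.zeros(n + 1)                        # (b0, alpha)
    A = (M / n) * Kt.T @ Kt + 2.0 * lam * B + 1e-8 * np.eye(n + 1)   # jitter: numerical safeguard
    for _ in range(n_iter):
        f = Kt @ theta
        rhs = (M / n) * Kt.T @ f - (1.0 / n) * Kt.T @ (y * vq_prime(y * f, q))
        theta = np.linalg.solve(A, rhs)            # minimiser of the quadratic majorization
    return theta, K

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = rng.normal(size=(200, 2))
    y = np.where((x ** 2).sum(1) < 1.2, 1.0, -1.0)  # a nonlinearly separable toy problem
    theta, K = kernel_dwd_mm(x, y, lam=1e-3, q=1.0, sigma=0.7)
    fitted = np.sign(np.hstack([np.ones((200, 1)), K]) @ theta)
    print("training accuracy:", float(np.mean(fitted == y)))
```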
in the literature ,the notation of _ universal kernel _ has been proposed and studied .suppose is the compact input space of and is the space of all continuous functions .the kernel is said to be _universal _ if the function space generated by is dense in , that is , for any positive and any function , there exists an such that .suppose is the solution of , is induced by a universal kernel , and the sample space is compact .then we have * ; * let .when and , for any , by ( 1 ) and ( 2 ) and we have in probability .[ lm : thm ] the gaussian kernel is universal and .thus theorem [ lm : thm ] says that the kernel dwd using the gaussian kernel is bayes risk consistent .this offers a theoretical explanation to the numerical results in figure [ fig : svm_dwd ] .in this section , we investigate the performance of ` kerndwd ` on four benchmark data sets : the bupa liver disorder data , the haberman s survival data , the connectionist bench ( sonar , mines vs. rocks ) data , and the vertebral column data .all the data sets were obtained from uci machine learning repository . for comparison purposes , we considered the svm , the standard dwd ( ) and the generalized dwd models with .we computed all dwd models using our r package ` kerndwd ` and solved the svm using the r package ` kernlab ` .we randomly split each data into a training and a test set with a ratio . for each method using the linear kernel, we conducted a five - folder cross - validation on the training set to tune . for each method using gaussian kernels ,the pair of was tuned by the five - folder cross - validation .we then fitted each model with the selected and evaluated its prediction accuracy on the test set .table [ tab : realdata ] displays the average timing and mis - classification rates .we do not argue that either svm or dwd outperforms the other ; nevertheless , two models are highly comparable .svm models work better on sonar and vertebral data , and dwd performs better on bupa and haberman data . for three out of the four data sets, the best method uses a gaussian kernel , indicating that linear classifiers may not be adequate in such cases . in terms of timing , ` kerndwd ` runs faster than ` kernlab ` in all these examples .it is also interesting to see that dwd with can work slightly better than dwd with on bupa and haberman data , although the difference is not significant .in this paper we have developed a new algorithm for solving the linear generalized dwd and the kernel generalized dwd .compared with the current state - of - the - art algorithm for solving the linear dwd , our new algorithm is easier to understand , more general , and much more efficient .dwd equipped with the new algorithm can be computationally more efficient than the svm .we have established the statistical learning theory of the kernel generalized dwd , showing that the kernel dwd and the kernel svm are comparable in theory .our theoretical analysis and algorithm do not suggest dwd with has any special merit compared to the other members in the generalized dwd family .numerical examples further support our theoretical conclusions .dwd with is called the standard dwd purely due to the fact that it , not other generalized dwds , can be solved by socp when the dwd idea was first proposed . now with our new algorithm and theory , practitioners have the option to explore different dwd classifiers . in the present paperwe have considered the standard classification problem under the 0 - 1 loss . 
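the fisher-consistency property invoked in the learning theory, namely that the population minimizer of the expected generalized dwd loss shares the sign of the bayes rule 2 eta(x) - 1, can be checked by brute force: for each value of eta, minimise eta v_q(f) + (1 - eta) v_q(-f) over a grid of f and compare signs. the loss below is the same assumed piecewise form used in the earlier sketch.

```python
import numpy as np

def vq(u, q=1.0):
    # assumed piecewise form of the generalized dwd loss (see the earlier sketch)
    u = np.asarray(u, dtype=float)
    thr = q / (q + 1.0)
    tail = (q ** q / (q + 1.0) ** (q + 1.0)) / np.maximum(u, thr) ** q
    return np.where(u <= thr, 1.0 - u, tail)

def population_minimizer(eta, q=1.0, grid=np.linspace(-5.0, 5.0, 20001)):
    """brute-force minimiser of eta*V_q(f) + (1 - eta)*V_q(-f) over a grid of f."""
    risk = eta * vq(grid, q) + (1.0 - eta) * vq(-grid, q)
    return grid[int(np.argmin(risk))]

if __name__ == "__main__":
    etas = np.linspace(0.06, 0.94, 12)             # avoids eta = 1/2, where the minimiser is not unique
    for q in (0.5, 1.0, 2.0):
        fstar = np.array([population_minimizer(e, q) for e in etas])
        agrees = bool(np.all(np.sign(fstar) == np.sign(2.0 * etas - 1.0)))
        print(f"q = {q}: sign of the minimiser matches the bayes rule on the grid: {agrees}")
```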
inmany applications we may face the so - called non - standard classification problems .for example , observed data may be collected via biased sampling and/or we need to consider unequal costs for different types of mis - classification . a weighted dwd to handle the non - standard classification problem , which follows the treatment of the non - standard svm in . defined the weighted dwd as follows , , \text { subject to } r_i = y_i(\beta_0 + { \boldsymbol } x_i^t{\boldsymbol } \beta)+\xi_i \ge 0 \text { and } { \boldsymbol } \beta^t { \boldsymbol } \beta=1 , \end{aligned } \label{eq : wdwd}\ ] ] which can be further generalized to the weighted kernel dwd : .\label{eq : kerwtgendwd}\ ] ] gave the expressions for for various non - standard classification problems . solved the weighted dwd with based on the second - order - cone programming .the mm procedure for algorithm [ alg : linear ] and algorithm [ alg : kernel ] can easily accommodate the weight factors s to solve the weighted dwd and weighted kernel dwd .we have implemented the weighted dwd in the r package ` kerndwd ` .* proof of lemma [ lm : dwd_loss ] * write and .the objective function of can be written as .we next minimize over for every fixed by computing the first - order and the second - order derivatives of : if , then for all , and is the minimizer .if , then is the minimizer as and . by plugging in the minimizer into , we obtain where we now simplify .suppose and .we define for each , by setting and , we find that becomes which can be further transformed to with and one - to - one correspondent . if both and , .it is trivial that by , , and , we prove .we now prove .let from , it is not hard to show that is strictly increasing . therefore is a strictly convex function , and its first - order condition , verifies directly .* proof of lemma [ lm : fisher ] * given that , we have that \equiv e_{{\boldsymbol } x } \zeta(f({\boldsymbol } x)) ] , we define and compute its first - order derivative as follows , + \left[\dfrac{q}{2(q+1 ) } + \dfrac{q}{2(q+1)}\left(\dfrac{1+a}{1-a}\right)^{\frac{1}{q+1 } } - \dfrac{q}{q+1}\right ] \ge 0 .\end{aligned}\ ] ] hence for each ] we obtain + e_{\{{\boldsymbol } x : \hat{f}_n({\boldsymbol } x ) \le 0,\ f^\star({\boldsymbol } x ) > 0\}}[2\eta({\boldsymbol } x)-1]\\ & \le e_{\{{\boldsymbol } x : \hat{f}_n({\boldsymbol } x)f^\star({\boldsymbol } x)\le 0\}}|2\eta({\boldsymbol } x)-1|\\ & \le \dfrac{q+1}{q } e_{\{{\boldsymbol } x : \hat{f}_n({\boldsymbol } x)f^\star({\boldsymbol } x)\le 0\ } } \left[1 - \zeta\left(\tilde{f}({\boldsymbol } x)\right)\right ] .\end{aligned } \label{eq : rf}\ ] ] since and share the same sign , implies that .when , 0 is between and , and thus indicates that . from , we conclude that \\ & \le \dfrac{q+1}{q } e_{{\boldsymbol } x } \left[\zeta\left(\hat{f}_n({\boldsymbol } x)\right ) - \zeta\left(\tilde{f}({\boldsymbol } x)\right ) \right]\\ & = \dfrac{q+1}{q } e_{{\boldsymbol } xy } \left[v_q\left(y\hat{f}_n({\boldsymbol } x)\right ) - v_q\left(y\tilde{f}({\boldsymbol } x)\right ) \right]\\ & = \dfrac{q+1}{q}(\varepsilon_a + \varepsilon_e ) .\end{aligned}\ ] ] * part * ( 1 ) .we first show that when is induced by a universal kernel , the approximation error . by definition ,we need to show that for any , there exists such that we first use truncation to consider a truncated version of . 
forany given , we define we have that where \\ & -e_{{\boldsymbol } x : \eta({\boldsymbol } x)>1-\delta}\left [ \eta({\boldsymbol } x ) v_q \left(\tilde{f}({\boldsymbol } x)\right ) + ( 1-\eta({\boldsymbol } x ) ) v_q\left(-\tilde{f}({\boldsymbol } x)\right ) \right],\\ \kappa_- = & e_{{\boldsymbol } x : \eta({\boldsymbol }x)<\delta}\left [ \eta({\boldsymbol } x ) v_q(f_\delta({\boldsymbol } x ) ) + ( 1-\eta({\boldsymbol } x ) ) v_q(-f_\delta({\boldsymbol } x ) ) \right]\\ & -e_{{\boldsymbol } x : \eta({\boldsymbol } x)<\delta } \left [ \eta({\boldsymbol } x ) v_q \left(\tilde{f}({\boldsymbol } x)\right ) + ( 1-\eta({\boldsymbol } x ) ) v_q\left(-\tilde{f}({\boldsymbol } x)\right ) \right ] .\end{aligned}\ ] ] since when , \\ & -e_{{\boldsymbol } x : \eta({\boldsymbol } x)>1-\delta}\left [ \eta({\boldsymbol } x ) v_q\left(\tilde{f}({\boldsymbol } x)\right ) + ( 1-\eta({\boldsymbol } x ) ) v_q\left(-\tilde{f}({\boldsymbol } x)\right ) \right]\\ = & \left[\delta + ( 1-\delta)^{\frac{1}{q+1}}\delta^\frac{q}{q+1 } \right ] - e_{{\boldsymbol } x : \eta({\boldsymbol }x)>1-\delta } \left[1-\eta({\boldsymbol } x ) + \eta({\boldsymbol } x)^{\frac{1}{q+1}}(1-\eta({\boldsymbol } x))^{\frac{q}{q+1 } } \right ] .\end{aligned}\ ] ] we notice that is a continuous function in terms of . since implies that , we conclude that for any given , there exists a sufficiently small such that .we can also obtain in the same spirit .therefore , by lusin s theorem , there exists a continuous function such that .notice that . define then as well .hence where the first inequality comes from the fact that is lipschitz continuous , i.e. , notice that is also continuous .the definition of the universal kernel implies the existence of a function such that by combining , , and we obtain . *part * ( 2 ) .in this part we bound the estimation error .note that rkhs has the following reproducing property : fix any . by the kkt condition of and the representor theorem , we have we define } ] and the convexity of , we have }({\boldsymbol } x_i ) \right ) - \lambda_n ||\hat{f}^{[k]}||^2_{\mathcal{h}_k}\\ \le & - \dfrac{1}{n}\sum_{i=1 , i\neq k}^{n}v'_q\left(y_i \hat{f}_n({\boldsymbol } x_i ) \right ) y_i\left(\hat{f}^{[k ] } ( { \boldsymbol } x_i ) - \hat{f}_n({\boldsymbol } x_i)\right ) + \lambda_n ||\hat{f}_n||^2_{\mathcal{h}_k } -\lambda_n ||\hat{f}^{[k]}||^2_{\mathcal{h}_k}. 
\end{aligned}\ ] ] by the reproducing property , we further have } ( { \boldsymbol } x ) - \hat{f}_n({\boldsymbol } x)\right\rangle_{\mathcal{h}_k } + \lambda_n ||\hat{f}_n||^2_{\mathcal{h}_k } -\lambda_n ||\hat{f}^{[k]}||^2_{\mathcal{h}_k}\\ = & - \dfrac{1}{n}\sum_{i=1 , i\neq k}^{n}v'_q\left(y_i \hat{f}_n({\boldsymbol } x_i ) \right ) y_i\left\langle k({\boldsymbol } x_i , { \boldsymbol } x ) , \hat{f}^{[k ] } ( { \boldsymbol } x ) - \hat{f}_n({\boldsymbol } x)\right\rangle_{\mathcal{h}_k } \\ & - 2 \lambda_n \left\langle \hat{f}_n({\boldsymbol } x ) , \hat{f}^{[k]}({\boldsymbol } x)-\hat{f}_n({\boldsymbol } x ) \right\rangle_{\mathcal{h}_k } - \lambda_n||\hat{f}^{[k]}- \hat{f}_n||^2_{\mathcal{h}_k}\\ = & \dfrac{1}{n } v'_q\left(y_k \hat{f}_n({\boldsymbol } x_k ) \right ) y_k \left\langle k({\boldsymbol } x_k , { \boldsymbol } x ) , \hat{f}^{[k ] } ( { \boldsymbol } x ) - \hat{f}_n({\boldsymbol } x)\right\rangle_{\mathcal{h}_k } - \lambda_n||\hat{f}^{[k]}- \hat{f}_n||^2_{\mathcal{h}_k } , \end{aligned}\ ] ] where the equality in the end holds by .thus , by cauchy - schwartz inequality , }- \hat{f}_n||^2_{\mathcal{h}_k } \le v'_q\left(y_k \hat{f}_n({\boldsymbol } x_k ) \right ) y_k \left\langle k({\boldsymbol } x_k , { \boldsymbol } x ) , \hat{f}^{[k ] } ( { \boldsymbol } x ) - \hat{f}_n({\boldsymbol } x)\right\rangle_{\mathcal{h}_k}\\ \le & \left|v'_q\left(y_k \hat{f}_n({\boldsymbol } x_k ) \right ) \right| ||k({\boldsymbol } x_k , { \boldsymbol } x)||_{\mathcal{h}_k}||\hat{f}^{[k]}- \hat{f}_n||_{\mathcal{h}_k } \le \sqrt{k({\boldsymbol } x_k , { \boldsymbol } x_k ) } \cdot || \hat{f}^{[k]}- \hat{f}_n||_{\mathcal{h}_k } , \end{aligned}\ ] ] which implies }- \hat{f}_n||_{\mathcal{h}_k } \le \dfrac{\sqrt{b}}{n\lambda_n},\ ] ] where . by the reproducing property, we have }- \hat{f}_n||^2_{\mathcal{h}_k } \le b \left(\dfrac{\sqrt{b}}{n\lambda_n}\right)^2 .\end{aligned}\ ] ] by the lipschitz continuity of the dwd loss , we obtain that for each , }({\boldsymbol } x_k ) \right ) - v_q\left(y_k\hat{f}_n({\boldsymbol } x_k ) \right ) \le & |\hat{f}^{[k]}({\boldsymbol } x_k ) - \hat{f}_n({\boldsymbol } x_k)| \le \dfrac{b}{n\lambda_n } , \end{aligned}\ ] ] and therefore , }({\boldsymbol }x_k ) \right ) \le \dfrac{1}{n}\sum_{k=1}^n v_q\left(y_k\hat{f}_n({\boldsymbol } x_k ) \right ) + \dfrac{b}{n\lambda_n}. \label{eq : proof_eq100}\ ] ] let such that by definition of , we have since each data point in is drawn from the same distribution , we have }({\boldsymbol } x_k ) \right ) \right ] = \dfrac{1}{n}\sum_{k=1}^n e_{{\boldsymbol } t_n}v_q\left(y_k\hat{f}^{[k]}({\boldsymbol } x_k ) \right ) = e_{{\boldsymbol } t_{n-1}}e_{{\boldsymbol } xy } v_q\left(y\hat{f}_{n-1}({\boldsymbol } x ) \right ) .\quad \label{eq : proof_eq103 } \end{aligned}\ ] ] by combining we have by the choice of , we see that there exits such that when we have , , and hence \le \inf_{f\in\mathcal{h}_k}e_{{\boldsymbol } xy } v_q\left(yf({\boldsymbol } x ) \right)+\epsilon.\ ] ] because is arbitrary and \ge \inf_{f\in\mathcal{h}_k}e_{{\boldsymbol } xy } v_q\left(yf({\boldsymbol } x ) \right ) ] , which equivalently indicates that since , then by markov inequality , we prove part ( 2 ) .fernndez - delgado , m. , cernadas , e. , barro , s. , and amorim , d. ( 2014 ) , `` do we need hundreds of classifiers to solve real world classification problems ? '' _ the journal of machine learning research _ , 15 , 31333181 .huang , h. , liu , y. , du .y. , perou , c. , hayes , d. , todd , m. 
, and marron , j.s .( 2013 ) , `` multiclass distance - weighted discrimination , '' _ journal of computational and graphical statistics _ , 22(4 ) , 953969 .huang , h. , lu , x. , liu , y. , haaland , p. , and marron , j.s .( 2012 ) , `` r / dwd : distance - weighted discrimination for classification , visualization and batch adjustment , '' _ bioinformatics _ , 28(8 ) , 11821183 .qiao , x. , zhang , h. , liu , y. , todd , m. , marron , j.s .( 2010 ) , `` weighted distance weighted discrimination and its asymptotic properties , '' _ journal of american statistical association _ , 105(489 ) , 401414 .wahba , g. , gu , c. , wang , y. , and campbell , r. ( 1994 ) , `` soft classification , aka risk estimation , via penalized log likelihood and smoothing spline analysis of variance , '' in _santa fe institute studies in the sciences of complexity - proceeding vol _, 20 , addison - wesley publishing co , 331331 .wahba , g. ( 1999 ) , `` support vector machines , reproducing kernel hilbert spaces and the randomized gacv , '' _ advances in kernel methods - support vector learning _ , 6 , 6987 .wang , b. and zou , h. ( 2015 ) , `` sparse distance weighted discrimination , '' _ journal of computational and graphical statistics _, forthcoming . | distance weighted discrimination ( dwd ) is a margin - based classifier with an interesting geometric motivation . dwd was originally proposed as a superior alternative to the support vector machine ( svm ) , however dwd is yet to be popular compared with the svm . the main reasons are twofold . first , the state - of - the - art algorithm for solving dwd is based on the second - order - cone programming ( socp ) , while the svm is a quadratic programming problem which is much more efficient to solve . second , the current statistical theory of dwd mainly focuses on the linear dwd for the high - dimension - low - sample - size setting and data - piling , while the learning theory for the svm mainly focuses on the bayes risk consistency of the kernel svm . in fact , the bayes risk consistency of dwd is presented as an open problem in the original dwd paper . in this work , we advance the current understanding of dwd from both computational and theoretical perspectives . we propose a novel efficient algorithm for solving dwd , and our algorithm can be several hundred times faster than the existing state - of - the - art algorithm based on the socp . in addition , our algorithm can handle the generalized dwd , while the socp algorithm only works well for a special dwd but not the generalized dwd . furthermore , we consider a natural kernel dwd in a reproducing kernel hilbert space and then establish the bayes risk consistency of the kernel dwd . we compare dwd and the svm on several benchmark data sets and show that the two have comparable classification accuracy , but dwd equipped with our new algorithm can be much faster to compute than the svm . * key words : * bayes risk consistency , classification , dwd , kernel methods , mm principle , socp . |
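The DWD proofs above lean repeatedly on two properties of the generalized DWD loss, convexity and Lipschitz continuity with constant 1, whose exact display is not legible in this extraction. The sketch below assumes the form commonly given for the generalized DWD loss, V_q(u) = 1 - u for u <= q/(q+1) and V_q(u) = q^q / ((q+1)^(q+1) u^q) otherwise, and checks both properties numerically. It is an illustrative Python check under that assumption, not the authors' R implementation in `kerndwd`.

```python
import numpy as np

def dwd_loss(u, q=1.0):
    """Generalized DWD loss (assumed form): 1 - u for u <= q/(q+1),
    else q^q / ((q+1)^(q+1) * u^q)."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    out = 1.0 - u
    tail = u > q / (q + 1.0)
    out[tail] = q ** q / ((q + 1.0) ** (q + 1) * u[tail] ** q)
    return out

# Numerical sanity checks of the two properties invoked in the proofs:
# convexity and Lipschitz continuity with constant 1.
grid = np.linspace(-2.0, 4.0, 4001)
for q in (0.5, 1.0, 2.0):
    v = dwd_loss(grid, q)
    slopes = np.diff(v) / np.diff(grid)
    assert np.all(np.diff(slopes) >= -1e-8)        # convex: slopes non-decreasing
    assert np.all(np.abs(slopes) <= 1.0 + 1e-8)    # |V_q'(u)| <= 1 everywhere

print(dwd_loss([0.0, 0.5, 1.0, 2.0], q=1.0))       # -> [1.    0.5   0.25  0.125]
```

At q = 1 this reproduces the original DWD loss; as noted in the DWD literature, letting q grow pushes the curved part of the loss toward the hinge loss.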
autoregressive moving average ( arma ) models have featured prominently in the analysis of time series .the versions initially stressed in the theoretical literature ( e.g. , ) are stationary and invertible . following , unit root nonstationarityhas frequently been incorporated , while `` overdifferenced '' noninvertible processes have also featured .stationary arma processes automatically have short memory with `` memory parameter , '' denoted , taking the value zero , implying a huge behavioral gap relative to unit root versions , where .this has been bridged by `` fractionally - differenced , '' or long memory , models , a leading class being the fractional autoregressive integrated arma ( farima ) . a farima process is given by where is the observable series ; is the lag operator ; ; with for and by convention ; is the indicator function ; and are real polynomials of degrees and , which share no common zeros , and all of their zeros are outside the unit circle in the complex plane ; and the are serially uncorrelated and homoscedastic with zero mean .the reason ( [ a ] ) features the truncated process rather than simply is to simultaneously cover falling in both the stationary region and the nonstationary region ( , where otherwise the process would `` blow up '' ) . in the former case, the truncation implies that is only `` asymptotically stationary . '' in recent years , fractional modeling has found many applications in the sciences and social sciences ; for example , with respect to environmental and financial data . early work on asymptotic statistical theory for fractional models assumed [ and replaced by in ( [ a ] ) ] .assuming , and showed consistency and asymptotic normality of whittle estimates ( of and other parameters , such as the coefficients of and ) , thereby achieving analogous results to those of for stationary arma processes [ i.e. , ( [ aaaa ] ) with and other short memory models .more recently , considered empirical maximum likelihood inference covering this setting .note that and , and much other work , not only excluded but also the short - memory case , as well as negatively dependent processes where . to some degree, other can be covered , for example , for one can first - difference the data , apply the methods and theory of and , and then add 1 to the memory parameter estimate , but this still requires prior knowledge that lies in an interval of length no more than . on the other hand, argued that the same desirable properties should hold without so restricting , in case of a conditional - sum - of - squares estimate , and this would be consistent with the classical asymptotic properties established by for score tests for a unit root and other hypotheses against fractional alternatives , by comparison with the nonstandard behavior of unit root tests against stationary autoregressive alternatives .however , the proof of asymptotic normality in appears to assume that the estimate lies in a small neighborhood of , without first proving consistency ( see also ) . due to a lack of uniform convergence , consistency of this implicitly - defined estimateis especially difficult to establish when the set of admissible values of is large . 
in particular , this is the case when is known only to lie in an interval of length greater than .in the present paper , we establish consistency and asymptotic normality when the interval is arbitrarily large , including ( simultaneously ) stationary , nonstationary , invertible and noninvertible values of .thus , prior knowledge of which of these phenomena obtains is unnecessary , and this seems especially practically desirable given , for example , that estimates near the or boundaries frequently occur in practice , while empirical interest in autoregressive models with two unit roots suggests allowance for values in the region of also , and ( following ) antipersistence and the possibility of overdifferencing imply the possibility that . we in fact consider a more general model than ( [ a ] ) , ( [ aaaa ] ) , retaining ( [ a ] ) but generalizing ( [ aaaa ] ) to where is a zero - mean unobservable white noise sequence, is an unknown vector , , where for all , , is continuous in and , .more detailed conditions will be imposed below .the role of in ( [ b ] ) , like and in ( [ aaaa ] ) , is to permit parametric short memory autocorrelation .we allow for the simplest case farima by taking to be empty . another model covered by ( [ b ] )is the exponential - spectrum one of bloomfield ( which in conjunction with fractional differencing leads to a relatively neat covariance matrix formula ) .semiparametric models ( where has nonparametric autocovariance structure ; see , e.g. , ) afford still greater flexibility than ( [ b ] ) , but also require larger samples in order for comparable precision to be achieved . in more moderate - sized samples , investment in a parametric model can prove worthwhile , even the simple farima( , , ) employed in the monte carlo simulations reported in the supplementary material , while model choice procedures can be employed to choose and in the farima( ) , as illustrated in the empirical examples included in the supplementary material .we wish to estimate from observations , . for any admissible , define noting that ( [ a ] ) implies , . for a givenuser - chosen optimizing set , define as an estimate of where and , where for given , such that , is a compact subset of and .the estimate is sometimes termed `` conditional sum of squares '' ( though `` truncated sum of squares '' might be more suitable ) . it has the anticipated advantage of having the same limit distribution as the maximum likelihood estimate of under gaussianity , in which case it is asymptotically efficient ( though here we do not assume gaussianity ) .it was employed by in estimation of nonfractional arma models ( when is a given integer ) , by in stationary farima models , where , and by in nonstationary farima models , allowing .the following section sets down detailed regularity conditions , a formal statement of asymptotic properties and the main proof details. section [ sec3 ] provides asymptotically normal estimates in a multivariate extension of ( [ a ] ) , ( [ b ] ) .joint modeling of related processes is important both for reasons of parsimony and interpretation , and multivariate fractional processes are currently relatively untreated , even in the stationary case .further possible extensions are discussed in section [ sec4 ] .useful lemmas are stated in section [ sec5 ] . 
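As a concrete illustration of the truncated ("conditional") sum-of-squares objective just defined, the sketch below evaluates it for the simplest member of the class, a FARIMA(0, d, 0) with the short-memory part absent, and minimizes it over a grid of admissible d that straddles the stationary and nonstationary regions. The recursion pi_0 = 1, pi_j = pi_{j-1}(j - 1 - d)/j for the coefficients of (1 - L)^d is standard; the simulation design, grid, and function names are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def frac_diff_coeffs(d, n):
    """Coefficients pi_j, j = 0..n-1, of (1 - L)^d via the standard recursion
    pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j (an assumption of this sketch)."""
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return pi

def css_objective(d, x):
    """Truncated sum of squares (1/n) * sum_t eps_t(d)^2 for a FARIMA(0, d, 0),
    where eps_t(d) = sum_{j=0}^{t-1} pi_j(d) x_{t-j} (truncation handles nonstationary d)."""
    n = len(x)
    eps = np.convolve(x, frac_diff_coeffs(d, n))[:n]
    return np.mean(eps ** 2)

rng = np.random.default_rng(0)
n, d_true = 2000, 0.7                         # memory parameter in the nonstationary region
innovations = rng.standard_normal(n)
x = np.convolve(innovations, frac_diff_coeffs(-d_true, n))[:n]   # truncated (1-L)^{-d} filter

grid = np.arange(-0.45, 2.0, 0.01)            # admissible set spanning several regimes
d_hat = grid[np.argmin([css_objective(d, x) for d in grid])]
print("true d:", d_true, " grid CSS estimate:", round(float(d_hat), 2))
```

Extending this to a full FARIMA(p, d, q) amounts to filtering the fractionally differenced series through the inverted short-memory operator before squaring.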
due to space restrictions, the proofs of these lemmas , along with an analysis of finite - sample performance of the procedure and an empirical application , are included in the supplementary material .our first two assumptions will suffice for consistency . 1 .a. for all , , on a set of positive lebesgue measure ; b. for all , is differentiable in with derivative in , c. for all , is continuous in d. for all , . condition ( i ) provides identification while ( ii ) and ( iv ) ensure that is an invertible short - memory process ( with spectrum that is bounded and bounded away from zero at all frequencies ) .further , by ( ii ) the derivative of has fourier coefficients as , for all , from page 46 of , so that , by compactness of and continuity of in for all , also , writing , we have for all , and ( ii ) , ( iii ) and ( iv ) imply that finally , ( ii ) also implies that assumption a1 is easily satisfied by standard parameterizations of stationary and invertible arma processes ( [ aaaa ] ) in which autoregressive and moving average orders are not both over - specified .more generally , a1 is similar to conditions employed in asymptotic theory for the estimate and other forms of whittle estimate that restrict to stationarity ( see , e.g. , ) and not only is it readily verifiable because is a known parametric function , but in practice satisfying a1 are invariably employed by practitioners . 1 .the in ( [ b ] ) are stationary and ergodic with finite fourth moment , and almost surely , where is the -field of events generated by , , and conditional ( on ) third and fourth moments of equal the corresponding unconditional moments .assumption a2 avoids requiring independence or identity of distribution of , but rules out conditional heteroskedasticity .it has become fairly standard in the time series asymptotics literature since .let ( [ a ] ) , ( [ b ] ) and hold .then as we give the proof for the most general case where , but our proof trivially covers the situation , for which some of the steps described below are superfluous .the proof begins standardly . for , define , . 
for small enough , where .the remainder of the proof reflects the fact that , and thus , converges in probability to a well - behaved function when , and diverges when , while the need to establish uniform convergence , especially in a neighborhood of , requires additional special treatment .consequently , for arbitrarily small , such that , we define the nonintersecting sets , , , .correspondingly , define , , so .thus , from ( [ new1 ] ) it remains to prove each of the four proofs differs , and we describe them in reverse order ._ proof of _ ( [ new2 ] ) _ for ._ by a familiar argument , the result follows if for there is a deterministic function ( not depending on ) , such that where throughout denoting a generic arbitrarily small positive constant , and since , , for we set [ cf .( [ d ] ) ] , , and .we may write where for all , so by jensen s inequality under a1(i ) , we have strict inequality in ( [ ze ] ) for all , so that by continuity in of the left - hand side of ( [ ze ] ) , ( [ 1 ] ) holds .next , write where .because , given a2 , the are stationary martingale differences , then defining , and henceforth writing , ( [ 2 ] ) would hold on showing that \biggr\vert & = & o_{p } ( 1 ) , \\[-2pt ] \label{5 } \sup_{\mathcal{t}_{4}}\biggl\vert\frac{1}{n}\sum_{t=1}^{n}\sum_{j=0}^{t-1}\sum_{k = t}^{\infty}c_{j}c_{k}\gamma _ { j - k}\biggr\vert&=&o_{p } ( 1 ) , \\[-2pt ] \label{5bis } \sup_{\mathcal{t}_{4}}\biggl\vert\frac{1}{n}\sum_{t=1}^{n}\sum_{j = t}^{\infty}\sum_{k = t}^{\infty}c_{j}c_{k}\gamma _ { j - k}\biggr\vert&=&o_{p } ( 1 ) .\end{aligned}\ ] ] we first deal with ( [ 4 ] ) .the term whose modulus is taken is & & \quad { } + \frac{2}{n}\sum_{j=0}^{n-2}\sum_{k = j+1}^{n-1}c_{j}c_{k}\sum_{l = k - j+1}^{n - j}\bigl\ { u_{l}u_{l- ( k - j ) } -\gamma_{j - k}\bigr\}\\[-2pt ] & & \qquad =( a ) + ( b ) .\nonumber\end{aligned}\ ] ] first, it can be readily shown that , uniformly in , , so by lemma [ lemma1 ] .next , by summation by parts , is equal to & & \quad{}-\frac{2}{n}\sum_{j=0}^{n-2}c_{j}\sum_{k = j+1}^{n-2 } ( c_{k+1}-c_{k } ) \sum_{r = j+1}^{k}\sum_{l = r - j+1}^{n - j}\bigl\ { u_{l}u_{l- ( r - j ) } -\gamma_{j - r}\bigr\ } \\[-2pt ] & & \qquad= ( b_{1 } ) + ( b_{2 } ) .\end{aligned}\ ] ] it can be easily shown that , uniformly in , so we have by lemma [ lemma1 ] , where throughout denotes a generic finite but arbitrarily large positive constant .similarly , by lemma [ lemma1 ] , where was introduced in a1(ii ). it can be readily shown that take such that .then this is bounded by because . for small enough , ( [ new3 ] )is bounded by , to complete the proof of ( [ 4 ] ) .next , the term whose modulus is taken in ( [ 5 ] ) is where denotes the spectral density of . by boundedness of ( implied by assumption a1 ) and the cauchy inequality , ( [ ab ] ) is bounded by so the left - hand side of ( [ 5 ] ) is bounded by by lemma [ lemma1 ] , to establish ( [ 5 ] ) . finally , by a similar reasoning , the term whose modulus is taken in ( 5bis ) is bounded by to conclude the proof of ( [ 5bis ] ) , and thence of ( [ 2 ] ) .thus , ( [ new2 ] ) is proved for . with respect to ( [ new2 ] ) for , note from for such , and ( [ 1010 ] ) , that these results follow if _ proof of _ ( [ new2 ] ) _ for . denote , for any sequence , , , the discrete fourier transform and periodogram , respectively , and . for satisfying lemma [ lemma3 ] , setting , where . 
then }}_{{\bolds{\varphi}}\in\psi}\vert\xi ( e^{i\lambda};{\bolds{\varphi } } ) \vert ^{2}\inf_{\delta\in\mathcal{i}_{3}}r_{n } ( { \bolds{\tau}}^{\ast } ) -\sup_{\mathcal{t}_{3}}\frac{1}{n}\vert v_{n } ( { \bolds{\tau } } ) \vert.\ ] ] assumption a1 implies [ see ( [ zf])]}}_{{\bolds{\varphi}}\in\psi}\vert\xi ( e^{i\lambda};{\bolds{\varphi } } ) \vert^{2}>\epsilon.\ ] ] thus , \\[-10pt ] & & { } -\sup_{\mathcal{t}_{3}}\frac{1}{n}\vert v_{n } ( { \bolds{\tau } } ) \vert-\sup_{\mathcal{i}_{3}}\frac{1}{n}\vert w_{n } ( \delta ) \vert,\nonumber\vspace*{-2pt}\end{aligned}\ ] ] where , and by lemma [ lemma2 ] by lemma [ lemma2 ] and ( 0.6 ) in the proof of lemma [ lemma3 ] in the supplementary material ( taking there in both cases) and also by lemma [ lemma3 ] ( with there ) next , note that for where we introduce the digamma function . from ( [ 51 ] ) and the fact that is strictly increasing in , \\[-10pt ] & & { } -\sup_{\mathcal{i}_{3}}\biggl\vert\frac{1}{n}\sum_{t=1}^{n } \mathop{\sum\sum}^{t-1}_{j\neq k}a_{j}a_{k}\varepsilon _ { t - j}\varepsilon _ { t - k}\biggr\vert.\nonumber\vspace*{-2pt}\end{aligned}\ ] ] by a very similar analysis to that of in ( [ ai ] ) , the second term on the right - hand side of ( [ h1 ] ) is bounded by & & \qquad\leq\frac{2}{n}\sup_{\mathcal{i}_{3}}\biggl\vert { \sum}_{j=0}^{n-2}a_{j}{\sum}_{k = j+1}^{n-1}{\sum}_{l = k - j+1}^{n - j}\varepsilon_{l}\varepsilon_{l-(k - j)}\biggr\vert \\[-3pt ] & & \qquad\quad{}+\frac{2}{n}\sup_{\mathcal{i}_{3}}\biggl\vert { \sum}_{j=0}^{n-2}a_{j}{\sum}_{k = j+1}^{n-2}(a_{k+1}-a_{k}){\sum}_{r = j+1}^{k}{\sum}_{l = r - j+1}^{n - j}\varepsilon_{l}\varepsilon _ { l-(k - j)}\biggr\vert,\vspace*{-2pt}\vadjust{\goodbreak}\end{aligned}\ ] ] which has expectation bounded by \\[-8pt ] & & \qquad\leq k\biggl ( 1+\frac{1}{n^{{1/2}}}\sum_{j=1}^{n}j^{-{1/2}-a}\sum_{k=1}^{n}k^{-1+a}\biggr ) \leq k\nonumber\end{aligned}\ ] ] for any . therefore , there exists a large enough such that as .then , noting ( [ cb ] ) , ( [ cb2 ] ) , ( [ cb1 ] ) , ( [ cc ] ) , we deduce ( [ ac ] ) for if now the third term on the right is clearly , whereas , as in the treatment of in ( [ ai ] ) , the second is , so that ( [ 32 ] ) holds as can be made arbitrarily large for small enough .this proves ( [ ac ] ) , and thus ( [ new2 ] ) , for . _ proof of _ ( [ new2 ] ) _ for . take and note that for .it follows from lemma [ lemma2 ] and ( 0.6 ) in the proof of lemma [ lemma3 ] ( see supplementary material ) that \\[-8pt ] & = & o_{p } ( n^{2\eta-{1/2 } } ) = o_{p}(1).\nonumber\end{aligned}\ ] ] it follows from lemma [ lemma3 ] that denote . by ( [ ay ] ) , ( [ az ] ) , it followsthat ( [ ac ] ) for holds if for arbitrarily large as .clearly, defining , , the right - hand side of ( h2 ) is bounded below by for , then by ( [ h3 ] ) , using summation by parts as in the analysis of in ( [ ai ] ) , the expectation of the second term in ( [ h4 ] ) is bounded by which , noting ( [ h5 ] ) , is .next , the first term in ( [ h4 ] ) is bounded below by using ( [ h3 ] ) it can be easily shown that the second term in ( [ h6 ] ) is , whereas the first term is bounded below by _ { 1/n}^{1 } \\ & & \qquad=\frac{\epsilon}{4\eta(2\eta+1)}-o_{p}(n^{-2\eta}).\nonumber\end{aligned}\ ] ] then ( [ ba ] ) holds because the right - hand side of ( [ h7 ] ) can be made arbitrarily large on setting arbitrarily close to zero .this proves ( [ ac ] ) , and thus ( [ new2 ] ) , for ._ proof of _ ( [ new2 ] ) _ for . 
noting that , because .clearly , where for arbitrarily small , the right - hand side of ( [ q1 ] ) is bounded from below by for large enough , so it suffices to show ( [ q2 ] ) as .first where , was defined below ( [ h2 ] ) , and for where ( [ ee ] ) is routinely derived , noting that by summation by parts now noting ( [ zf ] ) and that under a1 , , the required result follows on showing that as .the proof of ( [ qqq2 ] ) is omitted as it is similar to and much easier than the proof of ( [ qqq1 ] ) , which we now give .let . by the cauchyinequality so that by ( [ eee ] ) , noting that , because by a1(ii ) . next , by summation by parts so\\[-8pt ] & & { } + \frac{1}{n^{{1/2}}}{\sum}_{j=1}^{n-2}\sup_{\mathcal{t}_{1}}\vert s_{j+1,n } ( { \bolds{\tau } } ) -s_{j , n } ( { \bolds{\tau } } ) \vert\biggl\vert { \sum}_{k=1}^{j}u_{n - k}\biggr\vert.\nonumber\end{aligned}\ ] ] given that , so as , noting ( [ eee ] ) and stirling s approximation , the expectation of the first term on the right - hand side of ( [ q11 ] ) is bounded by next , noting that , it can be shown that\\[-8pt ] & & { } + \frac{\phi_{j+1 } ( { \bolds{\varphi } } ) } { n^{\delta_{0}-\delta}}{\sum}_{l=1}^{j+1}a_{l } ( \delta_{0}-\delta ) .\nonumber\end{aligned}\ ] ] thus , noting that , uniformly in , , , by previous arguments the contribution of the last term on the right - hand side of ( [ q12 ] ) to the expectation of the second term on the right - hand side of ( [ q11 ] ) is bounded by by identical arguments , the contribution of the first term on the right - hand side of ( [ q12 ] ) to the expectation of the last term on the right - hand side of ( [ q11 ] ) is bounded by \\[-8pt ] & & \qquad\leq\frac{k}{n^{1+\eta}}{\sum}_{j=1}^{n}j^{{1/2}}{\sum}_{k=1}^{j-1}k^{-1-\varsigma}{\sum}_{l = j - k}^{j}l^{-{3/2}+\eta}.\nonumber\end{aligned}\ ] ] given that , the right - hand side of ( [ aa ] ) is bounded by } k^{-\varsigma } ( j - k ) ^{-{3/2}+\eta } \\ & & \qquad\quad{}+\frac{k}{n^{1+\eta}}{\sum}_{j=1}^{n}j^{{1/2}}{\sum}_{k= [ j/2 ] + 1}^{j-1}k^{-\varsigma } ( j - k ) ^{-{3/2}+\eta},\nonumber\end{aligned}\ ] ] where ] , which is bounded away from zero on .thus as and ( [ qqq3 ] ) follows as is arbitrarily small. then we conclude ( [ ac ] ) , and thus ( [ new2 ] ) , for .this requires an additional regularity condition . 1 .a. b. for all , is twice continuously differentiable in on a closed neighborhood of radius about c. the matrix is nonsingular , where . by compactness of and continuity of , , for all , with , where is the element of , a1(ii ) , a1(iv ) and a3(ii ) imply that , as which again is satisfied in the arma case . as with a1, a3 is similar to conditions employed under stationarity , and can readily be checked in general .[ theo2.2 ] let ( [ a ] ) , ( [ b ] ) and hold . then as the proof standardly involves use of the mean value theorem , approximation of a score function by a martingale so as to apply a martingale convergence theorem , and convergence in probability of a hessian in a neighborhood of . from the mean value theorem , ( [ 213 ] ) follows if we prove that where ._ proof of _ ( [ x2 ] ) .it suffices to prove and where . by lemma [ lemma2 ] ,the left - hand side of ( [ x4 ] ) is the vector , where clearly , , and noting that , by a2 , the and are martingale difference sequences .thus , .next , , and equals from ( [ b ] ) and a2 , the expectation is for , and zero otherwise . 
by a1 , has bounded spectral density .thus , ( [ x6 ] ) is bounded by now } \frac { ( t - l ) ^{-1-\varsigma}}{l}+\sum_{l= [ t/2 ] + 1}^{t-1}\frac { ( t - l ) ^{-1-\varsigma}}{l}\\ & \leq & k ( t^{-1-\varsigma}\log t+t^{-1 } ) \leq\frac { k}{t}.\end{aligned}\ ] ] then , so next , by lemma [ lemma2 ] also , and since , denoting euclidean norm . finally , by lemmas[ lemma2 ] and [ lemma4 ] to conclude the proof of ( [ x4 ] ) . next , ( [ x5 ] ) holds by the cramr wold device and , for example , theorem 1 of on showing that and \\[-8pt ] & & \qquad { } -\frac{1}{n}\sum_{t=2}^{n}e\biggl ( \varepsilon _ { t}^{2}\sum_{j=1}^{\infty}\sum_{k=1}^{\infty}\mathbf{m}_{j } ( { \bolds{\varphi}}_{0 } ) \mathbf{m}_{k}^{\prime } ( { \bolds{\varphi}}_{0 } ) \varepsilon_{t - j}\varepsilon_{t - k}\biggr ) \rightarrow_{p}0,\nonumber\end{aligned}\ ] ] because has expectation , noting that the lindeberg condition is satisfied as is stationary with finite variance .now ( [ x7 ] ) follows as , , is -measurable , whereas the left - hand side of ( [ x56 ] ) is because is stationary ergodic with mean zero .this completes the proof of ( [ x5 ] ) , and thus ( x2 ) ._ proof of _ ( [ x3 ] ) .denote by an open neighborhood of radius about , and trivially, because , it follows that , so the first term in is identically zero . also , as in the proof of ( [ x5 ] ) , the second term of is identically .thus , given that by slutzky s theorem and continuity of at , , ( [ x3 ] ) holds on showing for some , as .as , the proof for ( [ x9 ] ) is almost identical to that for ( [ 4 ] ) , noting the orders in lemma [ lemma4 ] . to prove ( [ xxx ] ) , we show that is , the proof for the corresponding result concerning the difference between the second terms in ( [ x12 ] ) , ( [ h12 ] ) being almost identical . by lemma [ lemma4 ] , ( [ h13 ] ) is bounded by\\[-8pt ] & & \qquad{}+\frac{k}{n}\sum_{t=1}^{n}\sum_{j = t}^{\infty } \sum_{k = j+1}^{\infty}j^{\epsilon-1}k^{\epsilon-1 } ( k - j ) ^{-1-\varsigma}\log^{2}k,\nonumber\end{aligned}\ ] ] noting that ( [ h14 ] ) implies that .the first term in ( [ h15 ] ) is bounded by for any .choosing such that , ( [ h16 ] ) is bounded by similarly , the second term in ( [ h15 ] ) can be easily shown to be , whereas the third term is bounded by for any , so choosing again such that , ( [ h17 ] ) is , to conclude the proof of ( [ x3 ] ) , and thus of the theorem .when observations on several related time series are available joint modeling can achieve efficiency gains .we consider a vector given by where , in which , is ( as in the univariate case ) a vector of short - memory parameters , , for all , and , where the memory parameters are unknown real numbers . 
in general, they can all be distinct but for the sake of parsimony we allow for the possibility that they are known to lie in a set of dimension .for example , perhaps as a consequence of pre - testing , we might believe some or all the are equal , and imposing this restriction in the estimation could further improve efficiency .we introduce known functions , , of vector , such that for some we have , .we denote and define [ cf .( [ d ] ) ] where .gaussian likelihood considerations suggest the multivariate analogue to ( [ f ] ) where , assuming that no prior restrictions link with the covariance matrix of .unfortunately our consistency proof for the univariate case does not straightforwardly extend to an estimate minimizing ( [ 36 ] ) if .also ( [ 36 ] ) is liable to pose a more severe computational challenge than ( [ f ] ) since is liable to be larger in the multivariate case and may exceed 1 ; it may be difficult to locate an approximate minimum of ( [ 36 ] ) as a preliminary to iteration .we avoid both these problems by taking a single newton step from an initial -consistent estimate . defining we consider the estimate we collect together all the requirements for asymptotic normality of in : 1 . 1 . for all , is differentiable in with derivative in , 2 . for all , 3 .the in ( [ zb ] ) are stationary and ergodic with finite fourth moment , , almost surely , where is positive definite , is the -field of events generated by , , and conditional ( on ) third and fourth moments and cross - moments of elements of equal the corresponding unconditional moments ; 4 . for all , is twice continuously differentiable in on a closed neighborhood of radius about 5 .the matrix having element is nonsingular , where the being coefficients in the expansion , where is an matrix whose column is the column of and whose other elements are all zero ; 6 . is twice continuously differentiable in , for 7 . is a -consistent estimate of .the components of a4 are mostly natural extensions of ones in a1 , a2 and a3 , are equally checkable , and require no additional discussion .the important exception is ( vii ) .when is a diagonal matrix [ as in the simplest case , when is a farima for ] then can be obtained by first carrying out univariate fits following the approach of section [ sec2 ] , and then if necessary reducing the dimensionality in a common - sense way : for example , if some of the are a priori equal then the common memory parameter might be estimated by the arithmetic mean of estimates from the relevant univariate fits .notice that in the diagonal- case with no cross - equation parameter restrictions the efficiency improvement afforded by is due solely to cross - correlation in , that is , nondiagonality of . when is not diagonal , it is less clear how to use the -consistent outcome of theorem [ theo2.2 ] to form .we can infer that has spectral density matrix . from the diagonal element of this ( the power spectrum of ) , we can deduce a form for the wold representation of , corresponding to ( [ b ] ) .however , starting from innovations in ( [ zb ] ) satisfying ( iii ) of a4 , it does not follow in general that the innovations in the wold representation of will satisfy a condition analogous to ( [ 28 ] ) of a2 , indeed it does not help if we simply strengthen a4 such that the are independent and identically distributed .however , ( [ 28 ] ) certainly holds if is gaussian , which motivates our estimation approach from an efficiency perspective . 
noticethat if is a vector arma process with nondiagonal , in general all univariate ar operators are identical , and of possibly high degree ; the formation of is liable to be affected by a lack of parsimony , or some ambiguity .an alternative approach could involve first estimating the by some semiparametric approach , using these estimates to form differenced and then estimating from these proxies for .this initial estimate will be less - than--consistent , but its rate can be calculated given a rate for the bandwidth used in the semiparametric estimation .one can then calculate the ( finite ) number of iterations of form ( [ 39 ] ) needed to produce an estimate satisfying ( [ 213 ] ) , following theorem 5 and the discussion on page 539 of .let ( [ zc ] ) , ( [ zb ] ) and hold .then as because is explicitly defined in ( [ 39 ] ) , we start , standardly , by approximating by the mean value theorem . then in view of a4(vii ) , ( [ zh ] ) follows on showing for .we only show ( [ x15 ] ) , as ( [ x16 ] ) , ( [ x17 ] ) follow from similar arguments to those given in the proof of ( [ x3 ] ) . noting that , whereas for , equals by similar arguments to those in the proof of theorem [ theo2.2 ] , it can be shown that the left - hand side of ( [ x15 ] ) equals then by the cramr wold device , ( [ x15 ] ) holds if for any -dimensional vector ( with component ) where . as in the proof of ( [ x5 ] ) , ( [ x20 ] )holds by theorem 1 of , for example , noting that to conclude the proof .\(1 ) our univariate and multivariate structures cover a wide range of parametric models for stationary and nonstationary time series , with memory parameters allowed to lie in a set that can be arbitrarily large .unit root series are a special case , but unlike in the bulk of the large unit root literature , we do not have to assume knowledge that memory parameters are 1 .indeed , in monte carlo our method out - performs one which correctly assumes the unit interval in which lies , while in empirical examples our findings conflict with previous , unit root , ones .\(2 ) as the nondiagonal structure of and suggests , there is efficiency loss in estimating if memory parameters are unknown , but on the other hand if these are misspecified , will in general be inconsistently estimated .our limit distribution theory can be used to test hypotheses on the memory and other parameters , after straightforwardly forming consistent estimates of or .\(3 ) our multivariate system ( [ zc ] ) , ( [ zb ] ) does not cover fractionally cointegrated systems because is required to be positive definite .on the other hand , our theory for univariate estimation should cover estimation of individual memory parameters , so long as assumption a2 , in particular , can be reconciled with the full system specification .moreover , again on an individual basis , it should be possible to derive analogous properties of estimates of memory parameters of cointegrating errors based on residuals that use simple estimates of cointegrating vectors , such as least squares .\(4 ) in a more standard regression setting , for example , with deterministic regressors such as polynomial functions of time , it should be possible to extend our theory for univariate and multivariate models to residual - based estimates of memory parameters of errors .\(5 ) adaptive estimates , which have greater efficiency at distributions of unknown , non - gaussian form , can be obtained by taking one newton step from our estimates ( as in ) .\(6 ) our methods of proof should be 
extendable to cover seasonally and cyclically fractionally differenced processes .\(7 ) nonstationary fractional series can be defined in many ways . our definition [ ( [ a ] ) and ( [ zc ] ) ] is a leading one in the literature , and has been termed `` type ii . '' another popular one ( `` type i '' )was used by for an alternate type of estimate .that estimate assumes invertibility and is generally less efficient than due to the tapering required to handle nonstationarity .it seems likely that the asymptotic theory derived in this paper for can also be established in a `` type i '' setting .the proofs of the following lemmas appear in .[ lemma1 ] under with where for any , as , \\[-8pt ] { \sup_{{\bolds{\varphi}}\in\psi}}\vert c_{j+1 } ( { \bolds{\tau } } ) -c_{j } ( { \bolds{\tau } } ) \vert&=&o\bigl ( j^{\max ( \delta_{0}-\delta-2,-1-\varsigma ) } \bigr).\nonumber\end{aligned}\ ] ] [ lemma2 ] under where and for any and [ lemma3 ] under where for any real number [ lemma4 ] under , given an open neighborhood of radius about , as , thank the associate editor and two referees for constructive comments that have improved the presentation .we also thank sren johansen and morten o. nielsen for helpful comments .some of the second author s work was carried out while visiting universidad carlos iii , madrid , holding a ctedra de excelencia . | we consider the estimation of parametric fractional time series models in which not only is the memory parameter unknown , but one may not know whether it lies in the stationary / invertible region or the nonstationary or noninvertible regions . in these circumstances , a proof of consistency ( which is a prerequisite for proving asymptotic normality ) can be difficult owing to nonuniform convergence of the objective function over a large admissible parameter space . in particular , this is the case for the conditional sum of squares estimate , which can be expected to be asymptotically efficient under gaussianity . without the latter assumption , we establish consistency and asymptotic normality for this estimate in case of a quite general univariate model . for a multivariate model , we establish asymptotic normality of a one - step estimate based on an initial -consistent estimate . . |
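Since the multivariate estimator in (39) above is a single Newton step from a root-n-consistent pilot estimate rather than a full minimization of (36), the following minimal sketch illustrates that idea in the simplest univariate FARIMA(0, d, 0) setting, using finite-difference derivatives of the truncated sum-of-squares objective. The pilot value, step size, and helper names are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def frac_diff_coeffs(d, n):
    # coefficients of (1 - L)^d, standard recursion (illustrative assumption)
    pi = np.empty(n)
    pi[0] = 1.0
    for j in range(1, n):
        pi[j] = pi[j - 1] * (j - 1 - d) / j
    return pi

def css(d, x):
    # truncated sum-of-squares objective for a FARIMA(0, d, 0)
    n = len(x)
    return np.mean(np.convolve(x, frac_diff_coeffs(d, n))[:n] ** 2)

def one_newton_step(objective, theta0, h=1e-3):
    """One Newton step from a pilot estimate theta0 (scalar case), with the score
    and Hessian approximated by central finite differences."""
    g = (objective(theta0 + h) - objective(theta0 - h)) / (2.0 * h)
    H = (objective(theta0 + h) - 2.0 * objective(theta0) + objective(theta0 - h)) / h ** 2
    return theta0 - g / H

rng = np.random.default_rng(1)
n, d_true = 2000, 0.4
x = np.convolve(rng.standard_normal(n), frac_diff_coeffs(-d_true, n))[:n]

d_pilot = 0.3                                  # stands in for a root-n-consistent pilot estimate
d_one_step = one_newton_step(lambda d: css(d, x), d_pilot)
print("pilot:", d_pilot, " after one Newton step:", round(float(d_one_step), 3))
```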
this paper considers a decision problem that involves gussing an invisible state after observing , which is correlated with . in decision theory (* section 1.5.2 ) , an optimal decision rule , which minimizes the decision error probability , is called the _bayes decision rule_. let and be random variables that take values in sets and , respectively , where we call a _ state of nature _ or a _parameter _ and an _observation_. let be the joint distribution of .let and be the marginal distributions of and , respectively .let be the conditional distribution of for a given .it is well known that an optimal strategy for guessing the state consists of finding , which maximizes the conditional probability depending on a given observation . formally , by taking an that maximizes for each , we can define the function as which is a bayes decision rule . it should be noted that the discussion throughout this paper does not depend on choosing states with the same maximum probability .when the cardinality of is small , operations ( [ eq : map ] ) and ( [ eq : ml ] ) are tractable by using a brute force search . however , with coding problems , these operations appears to be intractable because grows exponentially as the dimension of grows . in this paper, we assume a situation where operations ( [ eq : map ] ) and ( [ eq : ml ] ) are intractable . in source coding, corresponds to a source output and corresponds to a codeword and side information . in channel coding, corresponds to a codeword and corresponds to a channel output , where the decoding with ( [ eq : map ] ) is called _ maximum a posteriori decoding_. on the other hand , the decoding method that maximizes the conditional probability of a channel is called _ maximum likelihood decoding _ ) as maximizing the likelihood . for this reason ,we might call a _ maximum - likelihood decision rule_. in a series of papers on the coding problem , we have called a maximum - likelihood decoding based on this idea . ] , which is equivalent to maximum a posteriori decoding when is generated subject to the uniform distribution . in this paper, we call the decision rule with the _ maximum a posteriori decision rule_. in this paper , we consider a stochastic decision , where the decision is made randomly subject to a probability distribution .we investigate the relationship between the error probabilities of the stochastic and maximum a posteriori decisions .then , we introduce the construction of stochastic decoders for source / channel codes .for a stochastic decision , we use a random number generator to obtain after observing and let be a decision ( guess ) about the state . formally , we generate subject to the conditional distribution on depending on an observation and let an output be a decision of , where and are conditionally independent for a given , that is , forms a markov chain .the joint distribution of is given as let us call a _ stochastic decision rule_. as a special case , when is given by using a function and is defined as we call or a _ deterministic decision rule_. it should be noted that the maximum a posteriori decision rule is deterministic . throughout this paper , we assume that and are countable sets . it should be noted that the results do not change when is an uncountable set , where the summation should be replaced with the integral . 
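As a concrete illustration of the maximum a posteriori rule and of a stochastic decision rule as just defined, the following sketch implements both on a small toy joint distribution (the numbers, and all names below, are illustrative assumptions): the MAP rule returns the argmax of the posterior column, while the stochastic rule draws the guess from an arbitrary conditional distribution Q, here taken to be the posterior itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint distribution P_{XY}; rows index the state x, columns the observation y
# (the numbers are an illustrative assumption).
P_XY = np.array([[0.30, 0.05, 0.05],
                 [0.10, 0.20, 0.05],
                 [0.05, 0.05, 0.15]])
P_Y = P_XY.sum(axis=0)
P_X_given_Y = P_XY / P_Y                       # column y holds the posterior P_{X|Y}(. | y)

def f_map(y):
    """Deterministic maximum a posteriori decision rule."""
    return int(np.argmax(P_X_given_Y[:, y]))

def stochastic_decision(y, Q):
    """Stochastic decision rule: draw the guess from Q(. | y), independently of X given Y."""
    return int(rng.choice(Q.shape[0], p=Q[:, y]))

y = 1
print("MAP guess for y =", y, ":", f_map(y))
print("posterior-sampled guesses:", [stochastic_decision(y, P_X_given_Y) for _ in range(5)])
```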
the following lemma guarantees that the right hand side of ( [ eq : map ] ) always exists for every .this fact , which is implicitly used in this paper , implies that it is enough to assume that is a countable set .let be a probability distribution on a countable set .then the maximum of on always exists , that is , there is such that for any .the lemma is trivial when is a finite set . in the following ,we assume that is a countable infinite set .since for all , then always exists , that is , for all , and for any there is a such that .we prove the lemma by contradiction .assume that there is no such that .since and for all , there is such that . from the definition of , there is a such that , where the second inequality comes from the assumption . by repeating this argumenttimes so that . ], we have a sequence such that this implies , which contradicts .when is a finite dimensional euclidian space , we can make the same discussion by quantizing uniformly from to a countable set , where the decision is interpreted as guessing with a finite precision .then we can apply the results to parameter estimation problems .let be a support function defined as then the error probability of a ( stochastic ) decision rule is given as .\label{eq : error - random}\end{aligned}\ ] ] in the last equality , corresponds to the error probability of the decision rule after the observation , and corresponds to the average of this error probability .when is defined by using and ( [ eq : deterministic ] ) , the decision error probability of a deterministic decision rule is given as .\label{eq : error - f}\end{aligned}\ ] ] it should be noted that the right hand side of the first equality can be derived directly from ( [ eq : error - random ] ) and the fact that that is , we have when and satisfy ( [ eq : deterministic ] ) .in this section , we discuss the optimality of the maximum a posteriori decision .we introduce the following well - known lemma .[ prop : bayes ] let be a pair consisting of a state and an observation and be the joint distribution of .when we make a stochastic decision with a distribution , an optimal decision rule minimizing the decision error probability satisfies for all such that and . in particular , the maximum a posteriori decision rule defined by ( [ eq : map ] ) minimizes the decision error probability , where is defined by and ( [ eq : deterministic ] ) . from the definition of , we have \notag \\ & = 1-\sum_y p_y(y)\sum_x p_{x|y}(x|y)q_{{\widehat{x}}|y}(x|y ) \notag \\ & = 1 -\sum_y p_y(y ) { \left [ { \sum_{x\neq { f_{\mathrm{map}}}(y)}p_{x|y}(x|y)q_{{\widehat{x}}|y}(x|y ) + p_{x|y}({f_{\mathrm{map}}}(y)|y)q_{{\widehat{x}}|y}({f_{\mathrm{map}}}(y)|y ) } \right ] } \notag \\ & = 1 -\sum_y p_y(y ) { \left [ { \sum_{x\neq{f_{\mathrm{map}}}(y ) } p_{x|y}(x|y)q_{{\widehat{x}}|y}(x|y ) + p_{x|y}({f_{\mathrm{map}}}(y)|y){\left[{1-\sum_{x\neq { f_{\mathrm{map}}}(y)}q_{{\widehat{x}}|y}(x|y)}\right ] } } \right ] } \notag\\ & = 1 -\sum_yp_y(y)p_{x|y}({f_{\mathrm{map}}}(y)|y ) + \sum_yp_y(y ) \sum_{x\neq{f_{\mathrm{map}}}(y)}q_{{\widehat{x}}|y}(x|y){\left[{p_{x|y}({f_{\mathrm{map}}}(y)|y)-p_{x|y}(x|y)}\right]}. \label{eq : optimal}\end{aligned}\ ] ] ( [ eq : optimal ] ) , which is on the top of the next page , where implies that is minimized only when for all such that and ( i.e. 
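To make the two error-probability expressions above concrete, the sketch below evaluates both exactly on a toy joint distribution (an illustrative assumption): the error of a stochastic rule Q, the error of the MAP rule, and, by brute force over all deterministic rules, a check that none of them beats the MAP rule.

```python
import itertools
import numpy as np

# Toy joint distribution (illustrative assumption); rows index the state x,
# columns index the observation y.
P_XY = np.array([[0.30, 0.05, 0.05],
                 [0.10, 0.20, 0.05],
                 [0.05, 0.05, 0.15]])
P_Y = P_XY.sum(axis=0)
P_X_given_Y = P_XY / P_Y
n_x, n_y = P_XY.shape

def error_stochastic(Q):
    """error(Q) = sum_y P_Y(y) sum_x P_{X|Y}(x|y) * (1 - Q(x|y))."""
    return float(sum(P_Y[y] * sum(P_X_given_Y[x, y] * (1.0 - Q[x, y])
                                  for x in range(n_x))
                     for y in range(n_y)))

def error_deterministic(f):
    """error(f) = 1 - sum_y P_Y(y) P_{X|Y}(f(y)|y)."""
    return float(1.0 - sum(P_Y[y] * P_X_given_Y[f[y], y] for y in range(n_y)))

f_map = tuple(int(np.argmax(P_X_given_Y[:, y])) for y in range(n_y))
best_deterministic = min(error_deterministic(f)
                         for f in itertools.product(range(n_x), repeat=n_y))

print("error of the MAP rule        :", round(error_deterministic(f_map), 4))
print("best deterministic rule      :", round(best_deterministic, 4))   # equals the MAP error
print("error of posterior sampling  :", round(error_stochastic(P_X_given_Y), 4))
```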
, ) .in this section , we consider the case where for all , that is , we make a stochastic decision with the conditional distribution of a state for a given observation .it should be noted that is independent of for a given , where the joint distribution of is given as in the following , we call this type of decision a _stochastic decision with the a posteriori distribution_. it should be noted that it may be unnecessary to know ( or compute ) the distribution to make this type of decision . to make this type of decision ,it is sufficient that we have a random number generator subject to the distribution with arbitrary input , where the generated random number is independent of for a given .we have the following theorem . in section [ sec : decoder ] , we apply this theorem to an analysis of stochastic decoders of coding problems .[ thm : random - fmap ] let be a pair consisting of a state and an observation and be the joint distribution of . when we make a stochastic decision with ,the decision error probability of this rule is at most twice the decision error probability of the maximum a posteriori decision rule .that is , we have it should be noted that this theorem can be considerd as a special case of ( * ? ? ?* corollary 1 of theorem 1 ) ( * ? ? ?* eq . ( 29 ) ) which assme a general loss function .although the idea of the proof in ( * ? ? ?* theorem 1 ) is quite simple by assuming that a loss function is symmetric and satisfies the triangle inequality , we show this lemma in a different way by assuming the loss function .we have \notag \\ & = \sum_{y}p_y(y ) { \left[{1-\sum_{x}p_{x|y}(x|y)^2}\right ] } \notag \\ & \leq \sum_{y}p_y(y ) { \left[{1-p_{x|y}({f_{\mathrm{map}}}(y)|y)^2}\right ] } \notag \\ & = \sum_{y}p_y(y ) [ 1-p_{x|y}({f_{\mathrm{map}}}(y)|y ) ] [ 1+p_{x|y}({f_{\mathrm{map}}}(y)|y ) ] \notag \\ & \leq 2\sum_{y}p_y(y ) [ 1-p_{x|y}({f_{\mathrm{map}}}(y)|y ) ] \notag \\ & = 2{\mathrm{error}}({f_{\mathrm{map } } } ) , \label{eq : proof - random - deterministic}\end{aligned}\ ] ] where the second inequality comes from the fact that and the fourth equality comes from ( [ eq : error - f ] ) . here, we introduce the following inequalities , which come from lemma [ prop : bayes ] and theorem [ thm : random - fmap ] . in these inequalities , if either or vanishes as the dimension ( block length ) of goes to infinity , then the other one also vanishes .[ cor : bound ] here , we introduce another lemma that comes from lemma [ prop : bayes ] and theorem [ thm : random - fmap ] .[ cor : random ] let be a pair consisting of a state and an observation and be the joint distribution of .when we make a stochastic decision with , the decision error probability of this rule is at most twice the decision error probability of _ any _ decision rule .that is , we have for any .let us consider a situation where is unknown but can be estimated empirically .then the above corollary implies that the error probability of stochastic decision with the a posteriori distribution is upper bounded by .for example , when we know empirically that a human being can guess with small error probability explicitly .] , then the error probability of a stochastic decision with the a posteriori distribution is also small because it is at most twice the error probability of her / his decision rule . inequality ( [ eq : random ] ) is tight in the sense that there is a pair consisting of and such that ( [ eq : random ] ) is satisfied with equality . 
in fact , by assuming that for all , we have \ ] ] and \notag \\ * & \quad + 2\sum_yp_y(y)p_{x|y}(1|y)[1-q_{{\widehat{x}}|y}(1|y ) ] \notag \\ & = 2\sum_yp_y(y)p_{x|y}(0|y)[1-q_{{\widehat{x}}|y}(0|y ) ] \notag \\ * & \quad + 2\sum_yp_y(y)[1-p_{x|y}(0|y)]q_{{\widehat{x}}|y}(0|y ) \notag \\ & = 2\sum_yp_y(y)p_{x|y}(0|y ) \notag \\ * & \quad + 2\sum_yp_y(y)[1 - 2p_{x|y}(0|y)]q_{{\widehat{x}}|y}(0|y ) \notag \\ & = 2\sum_yp_y(y)p_{x|y}(0|y ) -2\sum_yp_y(y)p_{x|y}(0|y)^2 \notag \\ * & = 2\sum_yp_y(y)p_{x|y}(0|y)[1-p_{x|y}(0|y)]\end{aligned}\ ] ] from ( [ eq : error - random ] ) .let be the variational distance of two probability distributions and on the same set as ( see ( * ? ? ? * eq . ( 11.137 ) ) ) .we have the following lemma .[ thm : random - approx ] let be a pair consisting of a state and an observation and be the joint distribution of .when we make decisions with two stochastic decision rules and for each , we have where and .the lemma is obtained immediately from ( [ eq : vd - max ] ) by considering the decision error event measured by using the joint probability distributions and .formally , we have - \sum_{y}p_y(y ) \sum_{x } p_{x|y}(x|y ) [ 1-q'(x|y ) ] } \right| } \notag \\ & = { \left| { \sum_{y}p_y(y ) \sum_{x } p_{x|y}(x|y ) q({\mathcal{x}}\setminus\{x\}|y ) - \sum_{y}p_y(y ) \sum_{x } p_{x|y}(x|y ) q'({\mathcal{x}}\setminus\{x\}|y ) } \right| } \notag \\ & \leq \sum_{y}p_y(y ) \sum_{x } p_{x|y}(x|y ) { \left| { q({\mathcal{x}}\setminus\{x\}|y ) - q'({\mathcal{x}}\setminus\{x\}|y ) } \right| } \notag \\ & \leq \sum_{y}p_y(y ) \sum_{x } p_{x|y}(x|y ) d(q(\cdot|y),q'(\cdot|y ) ) \notag \\ & = \sum_{y}p_y(y ) d(q(\cdot|y),q'(\cdot|y ) ) \notag \\ & = d(q\times p_y , q'\times p_y ) \label{eq : proof - random - approx}\end{aligned}\ ] ] ( [ eq : proof - random - approx ] ) , which appears on the top of the next page , where the second inequality comes from ( [ eq : vd - max ] ) and the last equality comes from ( [ eq : vd ] ) as applying lemma [ thm : random - approx ] by letting and , we have the following theorem from theorem [ thm : random - fmap ] .let be a pair consisting of a state and an observation and be the joint distribution of .when we make a stochastic decision with , the decision error probability is bounded as this section , we assume that a conditional probability is computable . we make a stochastic decision from a random sequence as a typical example of a random sequence is that generated by a markov chain monte carlo method . here, we assume that and are conditionally independent for a given , that is , the joint distribution of is given as where is a joint probability distribution of for a given .then , we have a decision error probability as follows ; } \notag \\ & = e_{y{\widehat{x}}^{{\overline{t}}}}{\left [ { 1-p_{x|y}({\widehat{x}}_{t(y)}|y ) } \right ] } \notag \\ & = e_{y{\widehat{x}}^{{\overline{t}}}}{\left [ { 1-\max_{t\in\{1,\ldots,{\overline{t}}\}}p_{x|y}({\widehat{x}}_t|y ) } \right]}. \label{eq : seq - error}\end{aligned}\ ] ] we have the following theorem . 
[ thm : seq - random ] let be a pair consisting of a state and an observation and be the joint distribution of .when we make a stochastic decision with a random sequence defined by ( [ eq : ty])([eq : sequence ] ) , the decision error probability defined by ( [ eq : seq - error ] ) satisfies where is a conditional marginal distribution given as from ( [ eq : seq - error ] ) , we have } \notag \\ & \leq e_{y{\widehat{x}}^{{\overline{t}}}}{\left [ { 1-p_{x|y}({\widehat{x}}_t|y ) } \right ] } \notag \\ & = \sum_{y}p_y(y ) \sum_{{\widehat{x}}^{{\overline{t } } } } q_{{\widehat{x}}^{{\overline{t}}}|y}({\widehat{x}}^{{\overline{t}}}|y ) { \left [ { 1-p_{x|y}({\widehat{x}}_t|y ) } \right ] } \notag \\ & = \sum_{y}p_y(y ) \sum_{{\widehat{x}}_t}q_{{\widehat{x}}_t|y}({\widehat{x}}_t|y ) [ 1-p_{x|y}({\widehat{x}}_t|y ) ] \notag \\ & = \sum_{y}p_y(y ) \sum_{{\widehat{x}}_t } p_{x|y}({\widehat{x}}_t|y ) [ 1-q_{{\widehat{x}}_t|y}({\widehat{x}}_t|y ) ] \notag \\ & = { \mathrm{error}}(q_{{\widehat{x}}_t|y})\end{aligned}\ ] ] for any , where the fourth equality comes from the fact that and are probability distributions . from this inequality , we have ( [ eq : seq - random ] ) .applying lemma [ thm : random - approx ] by letting and , we have the following theorem from theorem [ thm : random - fmap ] .this theorem implies that if tends towards as ( e.g. ( [ eq : gibbs - approx ] ) in appendix ) then the upper bound of error probability is close to at most twice the error probability of the maximum a posteriori decision .[ thm : seq - random - approx ] let be a pair consisting of a state and an observation and be the joint distribution of . when we make a stochastic decision with a random sequence defined by ( [ eq : ty])([eq : sequence ] ) , the decision error probability is bounded as in the following , we assume that a random sequence is independent and identically distributed ( i.i.d . ) with a distribution for a given , that is , the conditional probability distribution is given as then we have the following theorem . from this theorem with a trivial inequality , we have the fact that tends towards the error probability of the maximum a posteriori decision , where the difference is exponentially small as the length of a sequence increases .[ thm : seq - iid ] let be a pair consisting of a state and an observation and be the joint distribution of . when we make a stochastic decision with an i.i.d .random sequence defined by ( [ eq : ty])([eq : sequence ] ) and ( [ eq : iid ] ) , the decision error probability defined by ( [ eq : seq - error ] ) satisfies ^{{\overline{t } } } , \label{eq : seq - qiid - fmap}\end{aligned}\ ] ] where is given by ( [ eq : map ] ) .in particlular , when , we have ^{{\overline{t } } } \label{eq : seq - iid - fmap } \\ & \leq { \mathrm{error}}({f_{\mathrm{map}}})+{\left[{1-\inf_{y : p_y(y)>0}\max_x p_{x|y}(x|y)}\right]}^{{\overline{t}}}. 
\label{eq : seq - iid - inffmap}\end{aligned}\ ] ] } \notag \\ & = \sum_{y}p_y(y)\sum_{{\widehat{x}}^{{\overline{t}}}\in{\widehat{{\mathcal{x}}}}^{{\overline{t } } } } { \left [ { 1-\max_{t\in\{1,\ldots,{\overline{t}}\}}p_{x|y}({\widehat{x}}_t|y ) } \right ] } \prod_{t=1}^{{\overline{t}}}q_{{\widehat{x}}|y}({\widehat{x}}_t|y ) + \sum_{y}p_y(y)\sum_{{\widehat{x}}^{{\overline{t}}}\notin{\widehat{{\mathcal{x}}}}^{{\overline{t } } } } { \left [ { 1-\max_{t\in\{1,\ldots,{\overline{t}}\}}p_{x|y}({\widehat{x}}_t|y ) } \right ] } \prod_{t=1}^{{\overline{t}}}q_{{\widehat{x}}|y}({\widehat{x}}_t|y ) \notag \\ & \leq \sum_{y}p_y(y)\sum_{{\widehat{x}}^{{\overline{t}}}\in{\widehat{{\mathcal{x}}}}^{{\overline{t } } } } { \left [ { 1-\max_{{\widehat{x}}}p_{x|y}({\widehat{x}}|y ) } \right ] } \prod_{t=1}^{{\overline{t}}}q_{{\widehat{x}}|y}({\widehat{x}}_t|y ) + \sum_{y}p_y(y)\sum _ { { \widehat{x}}^{{\overline{t}}}:{\widehat{x}}_t\neq{f_{\mathrm{map}}}(y)\ \text{for all}\ t } \prod_{t=1}^{{\overline{t}}}q_{{\widehat{x}}|y}({\widehat{x}}_t|y ) \notag \\ & = \sum_{y}p_y(y ) { \left [ { 1-p_{x|y}({f_{\mathrm{map}}}(y)|y ) } \right ] } + \sum_{y}p_y(y ) \prod_{t=1}^{{\overline{t } } } \sum_{{\widehat{x}}_t\neq{f_{\mathrm{map}}}(y ) } q_{{\widehat{x}}|y}({\widehat{x}}_t|y ) \notag \\ & = { \mathrm{error}}({f_{\mathrm{map } } } ) + \sum_{y}p_y(y ) { \left [ { 1-q_{{\widehat{x}}|y}({f_{\mathrm{map}}}(y)|y ) } \right]}^{{\overline{t } } } \label{eq : proof - iid}\end{aligned}\ ] ] for a given , let be defined as then ( [ eq : seq - qiid - fmap ] ) is shown as ( [ eq : proof - iid ] ) , which appears on the top of the next page , where the inequality comes from the fact that implies and for all . inequality ( [ eq : seq - iid - fmap ] ) is obtained from ( [ eq : proof - iid ] ) by letting . inequalities ( [ eq : seq - iid - inffmap ] ) and ( [ eq : seq - iid - x ] ) are shown by the fact that }^{{\overline{t } } } \notag \\ * & \leq \sum_{y}p_y(y ) \sup_{y : p_y(y)>0 } { \left [ { 1-p_{x|y}({f_{\mathrm{map}}}(y)|y ) } \right]}^{{\overline{t } } } \notag \\ & = \sup_{y : p_y(y)>0 } { \left [ { 1-p_{x|y}({f_{\mathrm{map}}}(y)|y ) } \right]}^{{\overline{t } } } \notag \\ & = { \left [ { 1-\inf_{y : p_y(y)>0}\max_x p_{x|y}(x|y ) } \right]}^{{\overline{t}}}.\end{aligned}\ ] ] when is finite , we have }^{{\overline{t } } } \notag \\ & \leq { \mathrm{error}}({f_{\mathrm{map } } } ) + { \left [ { 1-\frac1{|{\mathcal{x}}| } } \right]}^{{\overline{t } } } \label{eq : seq - iid - x}\end{aligned}\ ] ] from ( [ eq : seq - iid - inffmap ] ) , where the last inequality comes from the fact that for all implies we have the same bound ( [ eq : seq - iid - x ] ) from ( [ eq : seq - qiid - fmap ] ) when is the uniform distribution on for every .this implies that the stochastic decision with an i.i.d .sequence subject to the a posteriori distribution is at least as good as that subject to the uniform distribution .it should be noted that , when increases exponentially as the dimension of increases , should also increase exponentially to ensure that the second term ^{{\overline{t}}} ] by using the above scheme , and reproduces a message corresponding to , where the decoding is successful when is reproduced correctly from . 
from the above discussion , the decoding error probability is at most twice the decoding error probability of the maximal a posteriori decoder .this section introduces a stochastic decoder for the fixed - length lossless compression of a source with an arbitrary small decoding error probability , where the decoder has access to the side information correlated with .let be the joint distribution of and be an encoding map , where is the codeword of .then the joint distribution of a source , side information source , and codeword is given as the decoder receives a codeword and side information . by using a stochastic decoder with the distribution obtain the bound of error probability from theorem [ thm : random - fmap ] .this implies that , when we use the encoding map such that the error probability of the maximum a posteriori decoder vanishes as , the error probability of the stochastic decoder with also vanishes as .in addition , the fundamental limit is achievable with this code , where is the spectrum sup - entropy rate of ( see ( * ? ? ?* theorems 4 and 5 ) ) .it should be noted that the right hand side of ( [ eq : side - information - decoder ] ) is the output distribution of the constrained - random - number generator introduced in .this section introduces a stochastic decoder for the channel code introduced in .let and be random variables corresponding to a channel input and a channel output , respectively .let be the conditional probability of the channel and be the distribution of the channel input .we consider correlated sources with the joint distribution .let be the source code with the decoder side information introduced in the previous section .let be the joint distribution defined by ( [ eq : joint - side - information ] ) .the decoder of this source code obtains the reproduction by using the stochastic decoding with the distribution defined by ( [ eq : side - information - decoder ] ) .let be the error probability of this source code .we can assume that for all , and all sufficiently large there is a function such that where a maximum a posteriori decoder is not assumed for this code . when constructing a channel code , we prepare a map and a vector .we use the stochastic encoder with the distribution for a message generated subject to the uniform distribution on .it should be noted that the right hand side of the above equality is the output distribution of the constrained - random - number generator introduced in .the decoder reproduces satisfying by using the stochastic decoder with the distribution given by ( [ eq : side - information - decoder ] ) and reproduces a message by operating on . in the above channel coding , let us assume that , and the balanced - coloring property of an ensemble , where is the spectrum inf - entropy rate of and .then , from ( * ? ? ?* theorem 1 ) and ( [ eq : errora ] ) , we have the fact that for all and all sufficiently large there are and such that the error probability of this channel code is bounded as it should be noted that the channel capacity ,\ ] ] which is derived in ( * ? ? ?* lemma 1 ) , is achievable by letting , , , , and be the general source that attains the supremum . here, we comment on the decoding with a random sequence introduced in section [ sec : seq ] . for decoding, we can use random sequences generated by markov chains ( random walks ) that converge to the respective stationary distributions ( [ eq : source - decoder ] ) and ( [ eq : side - information - decoder ] ) . 
in this case, we can apply theorem [ thm : seq - random - approx ] to guarantee that the decoding error probability is bounded by twice the error probability of the maximum a posteriori decoding as the sequence becomes longer .we can also use random sequences by repeating stochastic decisions independently with the respective distributions ( [ eq : source - decoder ] ) and ( [ eq : side - information - decoder ] ) , where we can use independent markov chains with independent initial states to generate an i.i.d . random sequence .in this case , we can use theorem [ thm : seq - iid ] to guarantee that the decoding error probability tends to the error probability of the maximum a posteriori decoding as the sequence becomes longer .when implementing ( [ eq : ty ] ) for ( [ eq : source - decoder ] ) and ( [ eq : side - information - decoder ] ) , it is sufficient to calculate the value of the numerator on the right hand side of these qualities because the denominator does not depend on .the numerator value is easy to calculate when the base probability distribution is memoryless .this paper investigated stochastic decision and stochastic decoding problems .it is shown that the error probability of a stochastic decision with the a posteriori distribution is at most twice the error probability of a maximum a posteriori decision .a stochastic decision with the a posteriori distribution may be sub - optimal but acceptable when the error probability of another decision rule ( e.g. the maximum a posteriori decision rule ) is small .furthermore , by generating an i.i.d .random sequence subject to the a posteriori distribution and making a decision that maximizes the a posteriori probability over the sequence , the error probability approaches exponentially the error probability of the maximum a posteriori decision as the sequence becomes longer .when it is difficult to make the maximum a posteriori decision but the error probability of the decision is small , we may use the stochastic decision rule with the a posteriori distribution as an alternative .in particular , when the error probability of the maximum a posteriori decoding of source / channel coding tends towards zero as the block length goes to infinity , the error probability of the stochastic decoding with the a posteriori distribution also tends towards zero .the stochastic decoder with the a posteriori distribution can be considered to be the constrained - random - number generator implemented by using the sum - product algorithm or the markov chain monte carlo method ( see appendix ) .however , the trade - off between the computational complexity and the precision of these algorithms is unclear .it remains a challenge for the future .in this section , we introduce the algorithms for the constrained - random - number generator , which generates random numbers subject to a distribution for a given matrix and vectors , , where is assumed to be memoryless , that is , there is such that for all and .we review the algorithms introduced in and , which make use of the sum - product algorithm and the markov chain monte carlo method , respectively .let be a family of subsets of .for a set of local functions \}_{i=1}^l ] matrix and the right part of is the identity matrix .it should be noted that , when a matrix is not systematic , the elementary transformation and the elimination of redundant rows can be used to obtain an equivalent condition represented by a systematic matrix .let be the -th column of and let .let be the number of iterations .in 
the following , we describe an algorithm for a constrained - random - number generator . it should be noted that steps 2 , 5 , and 7 realize the stochastic decision defined by ( [ eq : ty ] ) and ( [ eq : fy ] ) , which may be skipped by outputting instead of in step 9 .[ lem : gibbs ] for given , let be the probability of at step 7 in the above algorithm , where we can take an arbitrary initial sequence satisfying at step 1 .then for all .s. m. aji and r. j. mceliece , `` the generalized distributive law , '' _ ieee trans .theory _ , vol .46 , no . 2 , pp . 325343 ,. j. o. berger , _ statistical decision theory and bayesian analysis _ , springer - verlag new york inc . , 1985 . t. m. cover , `` estimation by the nearest neighbor rule , '' _ ieee trans . inform .theory _ , vol .14 , no . 1 ,pp . 5055 , jan .t. m. cover and p. e. hart , `` nearest neighbor pattern classification , '' _ ieee trans .theory _ , vol . 13 , no . 1 ,pp . 2127 , jan .t. m. cover and j. a. thomas , _ elements of information theory 2nd . ed ._ , john wiley & sons , inc . , 2006 .han and s. verd , `` approximation theory of output statistics , '' _ ieee trans . inform .theory _ , vol .it-39 , no . 3 , pp .752772 , may 1993 . w. k. hastings , `` monte carlo sampling methods using markov chains and their applications , '' _ biometrica _ , vol .57 , no . 1 ,pp . 97109 , 1970 . f. r. kschischang , b. j. frey , and h. a. loeliger , `` factor graphs and the sum - product algorithm , '' _ ieee transactions on information theory _47 , no . 2 , pp .498519 , feb .j. muramatsu , `` channel coding and lossy source coding using a generator of constrained random numbers , '' _ ieee trans . inform .theory _ , vol .it-60 , no . 5 , pp . 26672686 , may 2014 . j. muramatsu , `` variable - length lossy source code using a constrained - random - number generator , '' _ ieee trans .theory _ , vol .it-61 , no . 6 , pp . 35743592 , junj. muramatsu and s. miyake , `` construction of a channel code from an arbitrary source code with decoder side information , '' _ proc .int . symp . on inform .theory and its applicat ._ , moterey , usa , 2016 , pp .extended version is available at arxiv:1601.05879[cs.it ] , 2016 .y. steinberg and s. verd , `` channel simulation and conding with side information , '' _ ieee trans .theory _ , vol .it-40 , no . 3 , pp .634646 , may 1994 . | this paper investigates the error probability of a stochastic decision and the way in which it differs from the error probability of an optimal decision , i.e. , the maximum a posteriori decision . it is shown that the error probability of a stochastic decision with the a posteriori distribution is at most twice the error probability of the maximum a posteriori decision . furthermore , it is shown that , by generating an independent identically distributed random sequence subject to the a posteriori distribution and making a decision that maximizes the a posteriori probability over the sequence , the error probability approaches exponentially the error probability of the maximum a posteriori decision as the sequence length increases . using these ideas as a basis , we can construct stochastic decoders for source / channel codes . channel coding , decision theory , error probability , maximum a posteriori decision , source coding , source coding with decoder side information , stochastic decision , stochastic decoding |
the interplay between network structure and search dynamics has emerged as a busy sub - field of statistical network studies ( see e.g. refs . ) . consider a simple graph ( where is a set of vertices and is a set of edges unordered pairs of vertices ) .assume information packets travel from a source vertex to a destination .we assume the packages are myopic agents ( at a given timestep they have access to information about the vertices in their neighborhood , but not more ) , have memory ( so they can e.g. perform a depth - first search ) but no previous knowledge of the network . let be the time for a packet to travel between its source and destination .one commonly studied quantity of search efficiency is the expectation value of , , for randomly chosen and . in this work we attempt to find efficient ways to index andutilize these indices for packet navigation .we propose two schemes of indexing the vertices , and corresponding methods for packet navigation .these schemes , along with two depth - first search methods ( not using vertex indices for more than remembering the path ) are examined on four network models .we will first present the indexing and search schemes , then the network models for testing the algorithms , and last numerical results . to ( with ) . on the way from to the packet chooses the neighbor ( of the current vertex ) with lowest index , which here gives a longer route than the optimal .( d ) shows a possible partition of branches of non - root vertices into classes of as similar size as possible ( as done in the asu indexing scheme ) .( e ) shows a possible indexing based on the partition in ( d ) .panel ( f ) displays a search from to with .the shortest path from to is accurately found , but a detour to makes the search from to sub - optimal . ]now we turn to the schemes for assigning indices to the vertices and using them in search processes .our two main schemes are both inspired by search trees .packets first moves towards a root vertex , then towards the destination .unless the network really is a tree , this approach can not be exact a packet is not guaranteed to find the shortest way both from to and from to ) .however , as we will see , one can assign indices such that the search either from to , or from to is certain to be as short as possible .one of our schemes , asd ( accurate search up ) , will be such that the shortest upward search is guaranteed , the other , asd ( accurate search down ) , will have the shortest possible to search . on a technical note , is a set of distinct elements and an indexing scheme is a bijection ] ( where denotes the cardinality of a subgraph ) .let be the largest index in s neighborhood smaller than .assume there is an edge that the search will follow , i.e. that .this means that . by construction, is the only vertex in at a distance ( the distance from the rest of to the root is at least ) . since , we have which contradicts the existence of an edge . 
thus searches from to always follow the edges of , which also means the -searches will be as short as possible .searching upwards , from to , in a graph indexed as above is harder .we know that one shortest path goes via a vertex with smaller index than , but there might sub - optimal paths via indices in the intervals and , and there might also be paths via vertices of index larger than , that is optimal .for example , assume the search tree in fig .[ fig : ill](a ) comes from a graph with the additional edges , and ( see fig .[ fig : ill](b ) ) .then , the shortest path from to via a vertex of lower index is , but there is an equally long path via a vertex of larger index , , and longer paths via vertices both smaller and larger than but smaller than .there thus no general way of finding the shortest way from to . instead , we always choose the vertex with the smallest index in the neighborhood . by this strategya packet will come closer to , in index space , for every step .furthermore , in tree - like parts of the graph , the search will follow the shortest paths .an illustration of the asd search can be found in fig .[ fig : ill](c ) .consider a tree constructed as in the previous section and an indexing such that implies ( i.e. , all indices of a level further from the root is larger than in levels closer to ) .with such an indexing , since the neighbor of a vertex with the smallest index necessarily is one step closer to the root , a packet can always find one shortest way too the root .but once the package is at the root the indices is not of so much help .the search from to has to be , essentially , a depth - first search .there are , however , a few tricks to speed up the search . first , there is no need to search deeper than , then .second , one can choose the indices of one level in the tree in a way to narrow down the search .for example , one can divide the vertices into classes ( defined by e.g. the remainder when the index is divided by ) and index vertices of connected regions of the graph with indices of the same class .the search can then be restricted to the same class as the destination .we will pursue this idea with . to derive the asu indexing scheme ,the first goal is to divide the vertices into classes of connected subgraphs .furthermore , we require all classes to be connected to the root vertex . another aim is to make the classes of as similar sizes as possible .our first step is to make ( the degree , or number of neighbors , of ) parallel depth - first searches .second we group the search trees into groups with maximally similar sizes . in our case , we seek a partition of the search trees into two classes such that the sums of vertices in the respective classes are as close as possible. then we go through the levels , starting from the root , and assign numbers such that vertices of one partition have even indices , while the other has odd numbers ( this assignment might not always work ) . to avoid systematic errors we sample the elements of levels randomly .this construction scheme is illustrated in fig .[ fig : ill](d ) and ( e ) . as a reference, we also run simulations for two depth - first search methods that do not utilize indices .one of them , rnd , is regular depth - first search where the neighbors are traversed in random order . in the other , deg ,the neighbors are chosen in order from high to low degree . 
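as a concrete sketch of how tree - based indices can be assigned and used by a packet , the code below builds a spanning tree , numbers the vertices in depth - first preorder together with subtree sizes , climbs by always moving to the smallest - index neighbour , and descends along the child whose index interval contains the destination . this is an illustrative variant in the spirit of the asd scheme rather than the paper 's exact construction , and the networkx calls and the example graph are assumptions introduced here .

```python
import networkx as nx

def tree_indices(g, root):
    """depth-first preorder indices and subtree sizes on a spanning tree of g
    rooted at `root` (an illustrative indexing in the spirit of the schemes
    above; the paper's exact asd/asu assignments may differ)."""
    tree = nx.bfs_tree(g, root)
    index, size, counter = {}, {}, [0]
    def visit(v):
        index[v] = counter[0]
        counter[0] += 1
        size[v] = 1
        for w in tree.successors(v):
            visit(w)
            size[v] += size[w]
    visit(root)
    return tree, index, size

def navigate(g, tree, index, size, s, t):
    """greedy navigation: climb towards the root by always moving to the
    smallest-index neighbour, then descend along the child whose preorder
    interval [index, index + size) contains the destination index."""
    path, v = [s], s
    while index[v] > 0 and not (index[v] <= index[t] < index[v] + size[v]):
        v = min(g.neighbors(v), key=index.get)        # upward move
        path.append(v)
    while v != t:
        v = next(w for w in tree.successors(v)
                 if index[w] <= index[t] < index[w] + size[w])
        path.append(v)
    return path

g = nx.barabasi_albert_graph(50, 2, seed=3)           # example network, connected
tree, index, size = tree_indices(g, root=0)
print(navigate(g, tree, index, size, s=17, t=42))
```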
just like for the asu and asd methods , a packet is assumed to have knowledge about its neighborhood : if the destination is in the neighborhood of a vertex , then the search will be over the next time step . the efficiency of our indexing and search schemes is more or less directly affected by the network structure . to investigate this relationship we test the search schemes on four different types of network models : modified erdős rényi ( er ) graphs , square lattices , barabási albert ( ba ) , and holme kim ( hk ) networks . to facilitate comparison , we have the same average degree , four ( dictated by the square grid ) , in all networks . the er model is the simplest model for randomly generating simple graphs with vertices and edges . the edges are added one by one to randomly chosen vertex pairs ( the only restriction being that loops or multiple edges are not allowed ) . a problem for our purpose is that er graphs are not necessarily connected ( something required to measure ) . to remedy this we propose the following scheme to make networks connected .
1 . [ step : components ] detect the connected components .
2 . [ step : seq ] go through the connected components sequentially . denote the current component .
  1 . [ step : random_component ] pick a component randomly .
  2 . pick a random edge whose removal would not fragment . if no such edge exists , go to step [ step : seq ] .
  3 . [ step : rnd_vertex ] pick a random vertex of .
  4 . [ step : repl ] replace by . if the edge would exist already ( an unlikely event ) , go to step [ step : repl ] . if there is no vertex such that does not already exist , then go to [ step : seq ] .
3 . if the network is still disconnected , go to step [ step : components ] .
in practice , even for our largest system sizes , the above algorithm converges in a few iterations . the number of edges needed to be added never exceeds a few percent of , and this addition is made with the greatest possible randomness ; hence we believe the essential network structure of the er model is conserved . we use square lattices with periodic boundary conditions . vertices spread out regularly on a -grid such that the vertex with coordinates , , is connected to , , , ( if , we formally let , if we let represent ; and correspondingly for ) . the popular ba model of networks with a power - law degree distribution works as follows ( with our parameter settings ) . start with one vertex connected to two degree - one vertices . iteratively add vertices connected to two other vertices . let the probability of connecting the new vertex to a vertex already present in the network be proportional to ( so called _ preferential attachment _ ) . the hk model is a modification of the ba model to give the network a higher number of triangles . when edges are added from the new vertex to already present vertices , the first edge is added by preferential attachment . the second edge is added to one of s neighbors , forming a triangle . as a function of the graph sizes . in all panels , we display data for the different indexing and search schemes . the shaded areas are unreachable ( corresponding to values smaller than the theoretical minimum , the average distance ) . the different panels correspond to the modified er model , square grid , ba model and hk model networks , respectively . error bars would have been smaller than the symbol sizes .
]we study the search schemes on the four different network topologies numerically .we use independent networks and different -pairs for every network .the network sizes range from to .to with the asd indexing and search scheme . a packet from to travel along the perimeter to and then move towards the center . ] in fig .[ fig : sca ] we display the average search times as a function of system size for our simulations .the most conspicuous feature is that the asd scheme is always , by far , the most efficient .while asu and deg are close to the least efficient method ( rnd ) , asd is rather close to the theoretical limit ( equal to the average distances upper border of the shaded areas in fig .[ fig : sca ] ) . to be more precise, is quite constant , about two times larger than the average distance . the other search schemes ( asu , deg and rnd ) follow faster increasing functional forms .for the square lattice , these three schemes increase , approximately proportional to ( the analytical value for two - dimensional random walks ) whereas for asd , scale like distances in square grids , .one way of interpreting this result is that while asd manages to find the root as fast as it finds the destination from the root , asu fails to find faster than a random search .the slow downward performance of asu is not unexpected the -search in asu only differs from a random depth - first search in that it does not search further than the level of the destination , and that it restricts the search - space to half its original size by dividing the vertices into odd and even indices .the fast upward search of asd is more surprising . in fig .[ fig : wcs ] we show a network where asd performs badly .the average time to search upwards is as .the downward search takes giving a total expected value of .this can be compared to the average distance .for this example , and diverge in a way not seen in the network models .why is the search so much faster in the model networks ?one point is that the worst - case indexing seen in fig . [fig : wcs ] is very unlikely .since the spokes would be sampled randomly , the chance that a vertex at the perimeter not finds in two steps is , the probability of a perimeter vertex to find in steps is , and so on . carrying on this calculation , a vertex at the perimeter reaches in timesteps giving too far from the observed .we note however that for the model networks many other factors that are not present in the wheel - graph of fig .[ fig : wcs ] affect .for example , the high density of short triangles in the hk model networks will introduce many edges between vertices of the same level in which will affect the search efficiency . is approximately linear for the asu , deg and rnd on all network models .the slopes of these curves are , however , a little different .first , the deg method is more efficient ( compared to asu and rnd ) for ba networks , than for the modified er model .this observation ( also made in ref . 
) can be explained by the skewed degree distribution in the ba - network the packet reaches high - degree vertices fast .the packet can see a large part of the network from these hubs , and is therefore more likely to see .more interesting , perhaps , is that asu is more efficient for the networks with a higher density of short cycles ( the square lattice and hk models ) .a rough explanation is that the partition procedure of asu cuts off many edges between vertices at the same distance from .since there are many such edges in these network models , the network will effectively be sparser ( without changing s diameter ) , which results in a better performance .we have investigated navigation in valued graphs , more specifically in indexed graphs graphs where every vertex is associated with a unique number in the interval $ ] .these indices can be assigned to facilitate the packet navigation .the packets are assumed to have no _ a priori _ knowledge about the network , except the neighborhoods of their current positions , but memory enough to perform a depth - first search .we find that one of our investigated methods , asd , is very efficient for four topologically very different network models .the searches with the asd scheme are roughly twice as long as the shortest paths ( scaling in the same way as the average distance ) .navigation on indexed graphs has applications in distributed information systems . if , specifically , the amount of information that can be stored at the vertices were limited , search strategies such as ours would be useful .one such system is the autonomous system level internet where the information stored at each vertex ( with the current protocols ) increase at least as fast as the networks themselves . for most real - world applications ( other examples being _ ad hoc _ networks or peer - to - peer networks ) there are additional constraints so that the algorithms of this paper can not immediately be applied .such networks are typically changing over time , so the indexing should ideally be possible to be extended on the fly as vertices and edges are added and deleted from the network . apart from this ,a future direction for research on indexed graphs is to improve the performance of the algorithms presented in this work .there might be search - tree based algorithm that neither finds the shortest path to the root , nor finds the shortest way to the destination .for some network topologies there might be faster algorithms that are not based on constructing a spanning tree .consider , for example , modular networks ( i.e. networks with tightly connected subgraphs that are only sparsely interconnected ) in such networks the search can be divided into two stages first find the cluster of the destination , then the destination .these two stages should be reflected in a fast navigation algorithm .n. ganguly , l. brusch , and a. deutsch .design and analysis of a bio - inspired search algorithm for peer to peer networks . in o.babaoglu , m. jelasity , a. montresor , c. fetzer , and s. leonardi , editors , _ self - star properties in complex information systems _ , pages 358372 , new york , 2007 .springer - verlag .n. sarshar , p. o. boykin , and v. p. roychowdhury .percolation search in power law networks : making unstructured peer - to - peer networks scalable . in _ proceedings of fourth international conference on peer - to - peer computing _ , pages 29 .ieee , 2004 . | we investigate efficient methods for packets to navigate in complex networks . 
the packets are assumed to have memory , but no previous knowledge of the graph . we assume the graph to be indexed , i.e. every vertex is associated with a number ( accessible to the packets ) between one and the size of the graph . we test different schemes to assign indices and utilize them in packet navigation . four different network models with very different topological characteristics are used for testing the schemes . we find that one scheme outperform the others , and has an efficiency close to the theoretical optimum . we discuss the use of indexed - graph navigation in peer - to - peer networking and other distributed information systems . |
a two - dimensional numerical model of the device was constructed to allow for a greater understanding of the underlying physics of the cell . the model solves the nernst - planck equations with advection in the imposed flow for the concentrations and electrostatic potential , assuming a quasi - neutral bulk electrolyte . fully developed poiseuille flow was assumed in the channel , with reactions occurring at the top and bottom of the channel along thin electrodes ( fig .[ fig : cell_design ] ) . because the channel width far exceeds the height , edge effects can be ignored , validating the two - dimensional assumption . equilibrium potentials along the cathode and anode were determined by the nernst equation assuming dilute solution theory , and activation losses were estimated using the symmetric butler - volmer equation , consistent with existing kinetics data . electrolytic conductivity was assumed to depend on the local ( spatially evolving ) hydrobromic acid concentration , and was calculated using empirical data . for a peclet number of 10,000 and initial concentrations of bromine and hydrobromic acid of 1 m is overlaid ( a ) . assembled cell prior to testing ( b ) . ] the cell was operated galvanostatically at room temperature and atmospheric pressure over a range of flow rates and reactant concentrations . the cell was observed to reach steady state in less than ten seconds , so each data point was collected after sixty seconds of steady state operation to eliminate transient artifacts . polarization data was collected as a function of peclet number for the hblfb using 1 m hbr and 1 m br , and compared with numerical model results ( fig .[ fig : pol_curve ] ) . the peclet numbers of 5,000 , 10,000 , and 15,000 correspond to reynolds numbers of 5.75 , 11.5 , and 17.25 , oxidant flow rates of 0.22 , 0.44 , and 0.66 ml min , and mean velocities of 6.3 , 12 , and 19 mm s , respectively . the oxidant flow rates correspond to stoichiometric currents of 0.7 , 1.4 , and 2.1 a , respectively . hydrogen was flowed in excess at a rate of 25 sccm . the slightly enhanced maximum current density of the observed results compared to the predicted results may be attributed to the roughness of the electrode surface producing chaotic mixing that slightly enhances reactant transport . below limiting current , the agreement between model and experiment is very good . at low current densities , the voltage differences between the different peclet numbers are small , but in each case , the voltage drops rapidly as the cell approaches limiting current , corresponding to the predicted mass transfer limitations . and 1 m hbr . for low concentration reactants , mass transport is the dominant source of loss , and limiting current density can be predicted analytically as a function of peclet number using the lévêque approximation ( inset ) ( a ) . predicted ( dashed ) and observed ( symbols ) cell voltage and power density as a function of current density and concentration for and 3 m hbr . the higher reactant concentrations require concentrated solution theory to accurately model , but allow for much higher current and power density ( b ) . ] these limitations can be estimated analytically for fast reactions by assuming a bromine concentration of zero at the cathode and applying the lévêque approximation to the bromine depletion layer .
for a channel of length and height , fully developed poiseiulle flow , and initial bromine concentration , the resulting partial differential equation and boundary conditions can be expressed in terms of dimensionless bromine concentration , position and , channel aspect ratio , and peclet number . a similarity technique can be applied to convert this to an ordinary differential equation . {\beta\tilde{x}/2\textrm{pe}}}\nonumber\end{aligned}\ ] ] this equation can be solved exactly in terms of the incomplete gamma function , . limiting current can be calculated using faraday s law to determine the distribution of current along the length of the electrode and integrating to obtain limiting current as a function of peclet number , reactant concentration and diffusivity , channel height and aspect ratio , faraday s constant , and the number of moles of electrons transferred per mole of reactant .{\frac{\textrm{pe}}{\beta}}\end{aligned}\ ] ] this result has considerable bearing on how laminar flow systems should be designed and operated .the presence of aspect ratio in the denominator means that shortening the channel results in greater power density , as observed in experiment .in addition , the weak 1/3 dependence on peclet number means that increasing the flow rate beyond a certain point yields minimal benefits .there was excellent agreement between maximum observed current density , maximum numerically predicted current density , and the analytically predicted limiting current density as a function of peclet number ( fig .[ fig : pol_curve]a ) .higher bromine concentrations were also investigated by using a more concentrated electrolyte ( 3 m hbr ) to enhance bromine solubility and move beyond the mass transfer limitations of 1 m br .the performance of this system was investigated at a peclet number of 10,000 by varying the bromine concentration ( fig .[ fig : pol_curve]b ) .using 5 m br and 3 m hbr as the oxidant and electrolyte respectively , a peak power density of 0.795 w was observed when operated near limiting current density .this corresponds to power density per catalyst loading of 1.59 w mg platinum .the open circuit potential of the cell dropped more than might be predicted simply using the nernst equation at the cathode .the drop in open circuit potential is consistent with data on the activity coefficient of concentrated hydrobromic acid available in the literature , as well as previous studies that employed concentrated hydrobromic acid . to account for this effect, empirical data for the activity coefficient of hydrobromic acid as a function of local concentration was incorporated into the boundary conditions .the activity coefficient was assumed to vary slowly enough within the electrolyte that gradients in activity coefficient were neglected in the governing equations . taking into account the activity coefficient of hydrobromic acid , there is good agreement between the model and experiment . as in the low concentration data ,the observed maximum current density is slightly higher than predicted .otherwise the model captures the main features of the data , including the transition from transport limited behavior at low bromine concentrations , evidenced by a sharp drop in cell voltage , to ohmically limited behavior at high bromine concentrations . 
in this ohmically limited regime , mass transport limitations are less important , and the limiting current solution applied to the low concentration results does not apply . charging behavior was also investigated by flowing only hbr and applying a voltage to electrolyze hbr back into br and h . the voltage versus current density behavior of the cell was investigated during charging as a function of hbr concentration at a peclet number of 10,000 ( fig .[ fig : charging_data ] ) . experimental conditions were kept identical to those of the discharge experiments , with the exception that no bromine was externally delivered to the cell . because side reactions , in particular the formation of hypobromous acid and the evolution of oxygen , become dominant before potentials get sufficiently high to observe limiting current , the numerical model cannot accurately describe this behavior at potentials above 1.3 volts , and was not applied to the charging data . at lower voltages , both the electrolyte conductivity and the limiting current density were increased by increasing the hbr concentration , resulting in increased performance ( fig .[ fig : charging_data ] ) . roundtrip voltage efficiency was then calculated by dividing the discharge voltage by the charging voltage for a number of reactant concentrations ( fig .[ fig : power_efficiency ] ) . voltage efficiencies slightly greater than 100% were observed for low power densities due to differences in the open circuit potential that are generated by the variation in bromine concentration between the charging and discharging experiments , but this anomaly becomes unimportant at higher current densities , where the reactant concentration varies spatially much more strongly . using high concentration reactants , a roundtrip efficiency of 90% was observed at 25% of peak power ( 200 mw/ during discharge ) . this appears to be the first publication of roundtrip charging and discharging of a membrane - less laminar flow battery , and compares very favorably to existing flow battery technologies . vanadium redox batteries , for example , have demonstrated voltage efficiencies as high as 91% , but only at a discharge power density nearly an order of magnitude lower than the hblfb . concentration of 1 m. increasing the hbr concentration increases both the conductivity and the limiting current , resulting in superior performance . ] although closed - loop operation was not demonstrated in this study , some insight into coulombic efficiency can be gathered by considering the effect of reactant mixing within the channel . at 25% of peak power , the single - pass coulombic efficiency of the cell is only about 1% . if no attempt were made to separate the reactants at the outlet of the cell , the resulting energy efficiency of the cell would also be very low . however , if the channel outlet were split to conserve the volume of stored electrolyte and oxidant , only the br that had diffused into the electrolyte layer would be lost .
assuming fully developed poiseuille flow , this corresponds to a coulombic efficiency of 72% , or a roundtrip energy efficiency of 66% .several opportunities for improvement on this design remain to be investigated .first , the porous hydrogen anode was selected because of its commercial availability , despite the fact that it was intended for use in a proton exchange membrane fuel cell .the catalyst composition and structure , wettability , and media porosity have not been optimized .previous work has shown that these parameters can impact the power density of laminar flow cells by nearly an order of magnitude .recent work on thin hydrogen oxidation electrodes has demonstrated that excellent catalytic activity can be achieved with platinum loadings as low as 3 g , more than two orders of magnitude lower than the electrodes used in this study . assuming equivalent performance , an hblfb employing such a hydrogen electrode would have a specific power density of roughly 250 w mg platinum , virtually eliminating platinum as a cost - limiting component of the system .second , the channel geometry used in these experiments was relatively long in order to achieve high oxidant utilization .shortening the channel would decrease the average thickness of the depletion layer that develops along the cathode , enabling higher current densities .third , reducing the distance between electrodes would greatly reduce ohmic losses , which are dominant when high concentration reactants are fed into the cell .incorporating further refinements , such as micro patterned chaotic mixing patterns coupled with a non - specific convection barrier or non noble metal catalysts could also improve performance . . the initial data presented here for the hblfb suggests that high power density and high efficiency energy storage is achievable using a membrane - less electrochemical cell operating at room temperature and pressure .the hblfb requires no special procedures or facilities to fabricate , and uses kinetically favorable reactions between abundant , low cost reactants .recent work has shown that a membrane - based hydrogen - bromine flow battery at room temperature can generate 850 mw , or 7% more power than these experiments with the hblfb at room temperature . however , this was achieved using a stoichiometric oxidant flow rate over 8 times larger than that used in this work , as well as an acid - treated porous bromine electrode with substantially greater active area than the bare graphite bromine electrode of the hblfb .this work represents a major advance of the state of the art in flow batteries . to the best of the authors knowledge ,the data presented here represent the highest power density ever observed in a laminar flow electrochemical cell by a factor of three , as well as some of the first recharging data for a membrane - less laminar flow electrochemical cell .although previous work has identified the appropriate scaling laws , the result presented here represents the first exact analytical solution for limiting current density applied to a laminar flow electrochemical cell , and serves as a guide for future designs . 
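as one example of how the analytical result can guide design , the short sketch below evaluates the lévêque - type scaling of the limiting current density , i_lim = c n f d c0 / h ( pe / beta )^(1/3) . the order - one prefactor c involving the incomplete gamma function is not reproduced here and is set to one , and the bromine diffusivity is an assumed order - of - magnitude value , so only relative trends ( the weak 1/3 dependence on flow rate and the gain from shorter channels ) should be read from the numbers .

```python
import numpy as np

F = 96485.0        # faraday constant, c/mol
n = 2.0            # electrons per mole of br2 (br2 + 2e- -> 2br-)
D = 1.0e-9         # m^2/s, assumed order-of-magnitude bromine diffusivity
c0 = 1000.0        # mol/m^3, 1 m bromine
h = 800e-6         # m, channel height set by the gasket thickness
L = 0.014          # m, channel length
C = 1.0            # placeholder for the order-one incomplete-gamma prefactor

def i_lim(peclet, beta):
    # average limiting current density, i_lim = C n F D c0 / h * (pe / beta)**(1/3)
    return C * n * F * D * c0 / h * (peclet / beta) ** (1.0 / 3.0)

beta = L / h       # channel aspect ratio
for pe in (5_000, 10_000, 15_000):
    print(f"pe = {pe:6d}: i_lim ~ {i_lim(pe, beta):7.0f} a/m^2 (with C = 1)")

# doubling the flow rate (pe) only raises i_lim by 2**(1/3) ~ 1.26, while
# halving the channel length gives the same gain at fixed flow velocity
print("relative gain from doubling pe:", round(2 ** (1 / 3), 3))
```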
the hblfb rivals the performance of the best membrane - based systems available today without the need for costly ion exchange membranes , high pressure reactants , or high temperature operation . this system has the potential to play a key role in addressing the rapidly growing need for low - cost , large - scale energy storage and high efficiency portable power systems . a proof of concept electrochemical cell was assembled using a graphite cathode and a commercial carbon cloth gas diffusion anode with 0.5 mg of platinum ( 60% supported on carbon ) obtained from the fuel cell store ( san diego , ca ) . the cell was housed between graphite current collectors and polyvinylidene fluoride ( pvdf ) porting plates ( fig .[ fig : cell_design]b ) . all components were fabricated using traditional cnc machining or die cutting . no additional catalyst was applied to the cathode . the hydrogen flow rate through the porous anode was 25 sccm , and an oxidant stream of bromine mixed with aqueous hydrobromic acid passed through the channel in parallel with an electrolyte stream of aqueous hydrobromic acid . an 800 m thick viton gasket was used to separate the two electrodes and create the channel , which was 1.4 cm long from oxidant inlet to outlet with an active area of 25 mm . a fixed ratio of ten to one was maintained between the electrolyte and oxidant flow rates , and the net flow rate was adjusted to study the performance of the cell as a function of the peclet number , , defined by the average flow velocity , channel height , and bromine diffusion coefficient such that . a scaled , dimensionless model was constructed in comsol multiphysics , and results were calculated over a range of flow rates and reactant concentrations . a complete description of the model has been presented previously . bromine concentration varied along the length of the channel , resulting in strong spatial variations in the current density ( fig .[ fig : cell_design]a ) . current - voltage data was obtained for a range of reactant concentrations and flow rates for comparison to experimental data by averaging the current density along the length of the cell , and calculating solutions over a range of specified cell voltages . the authors acknowledge financial support from the department of defense ( dod ) through the national defense science & engineering graduate fellowship ( ndseg ) program , as well as the mit energy initiative seed fund . w.a.b . performed the experiments and simulations ; m.z.b . developed the theoretical plan , and c.r.b .
the experimental plan .all authors wrote and edited the manuscript .the authors have no competing financial interests . | in order for the widely discussed benefits of flow batteries for electrochemical energy storage to be applied at large scale , the cost of the electrochemical stack must come down substantially . one promising avenue for reducing stack cost is to increase the system power density while maintaining efficiency , enabling smaller stacks . here we report on a membrane - less , hydrogen bromine laminar flow battery as a potential high power density solution . the membrane - less design enables power densities of 0.795 w at room temperature and atmospheric pressure , with a round - trip voltage efficiency of 92% at 25% of peak power . theoretical solutions are also presented to guide the design of future laminar flow batteries . the high power density achieved by the hydrogen bromine laminar flow battery , along with the potential for rechargeable operation , will translate into smaller , inexpensive systems that could revolutionize the fields of large - scale energy storage and portable power systems . low - cost energy storage remains a critical unmet need for a wide range of applications , include grid scale frequency regulation , load following , contingency reserves , and peak shaving , as well as portable power systems . for applications that require the storage of large quantities of energy economically and efficiently , flow batteries have received renewed attention . a wide variety of solutions have been proposed , including zinc - bromine and vanadium redox cells . this includes recent efforts to incorporate novel concepts such as organic electrolytes for greater voltage stability and semisolid reactants for higher reactant energy density or chemistries to reduce reactant cost . one such permutation is the hydrogen bromine flow battery . the rapid and reversible reaction kinetics of both the bromine reduction reaction and the hydrogen oxidation reaction minimize activation losses , while the low cost ( $ 1.39 kg ) and abundance ( 243,000 metric tons produced per year in the united states alone ) of bromine distinguishes it from many other battery chemistries . however , theoretical investigations of such systems have revealed that the perfluorosulfonic acid membranes typically used suffer from low conductivity in the absence of sufficient hydration . in the presence of hydrobromic acid , this membrane behavior is the dominant limitation on overall performance . laminar flow electrochemical cells have been proposed to address many of the challenges that face traditional membrane - based systems . laminar flow cells eliminate the need for an ion exchange membrane by relying on diffusion to separate reactants . eliminating the membrane decreases cost , relaxes hydration requirements , and opens up the possibility for a much wider range of chemistries to be investigated . this flexibility has been exploited in the literature ; examples include vanadium redox flow batteries , as well as methanol , formic acid , and hydrogen fuel cells . however , none of these systems have achieved power densities as high as their membrane - based counterparts . this is largely because the proposed chemistries already work well with existing membrane technologies that have been refined and optimized over several decades . 
more recently , a laminar flow fuel cell based on borohydride and cerium ammonium nitrate employed a porous separator , chaotic mixing , and consumption of acid and base to achieve a power density of 0.25 w . this appears to be the highest previously published power density for a membrane - less laminar flow fuel cell . in this work , we present a membrane - less hydrogen bromine laminar flow battery ( hblfb ) with reversible reactions and a peak power density of 0.795 w at room temperature and atmospheric pressure . the cell uses a membrane - less design similar to previous work , but with several critical differences that allow it to triple the highest previously reported power density for a membrane - less electrochemical cell and also enable recharging . first , where many previous laminar flow electrochemical cell designs were limited to low current operation by the low oxygen concentration at the cathode , the hblfb uses gaseous hydrogen fuel and aqueous bromine oxidant . this allows for high concentrations of both reactants at their respective electrodes , greatly expanding the mass transfer capacity of the system . next , both reactions have fast , reversible kinetics , with no phase change at the liquid electrode , eliminating bubble formation as a design limitation . these two characteristics of the hblfb enable high power density storage and discharge of energy at high efficiency , while avoiding the cost and reliability issues associated with membrane - based systems . |
in line with information theory , we treat a literary text as the output of a stationary and ergodic source that takes values in a finite alphabet and we look for information about the source through a statistical analysis of the text . herewe focus on correlations functions , which are defined after specifying an observable and a product over functions .in particular , given a symbolic sequence * s * ( the text ) , we denote by the symbol in the -th position and by ( ) the substring . as observables , we consider functions that map symbolic sequences * s * into a sequence * x * of numbers ( e.g. , s and s ) .we restrict to local mappings , namely for any and a finite constant .its autocorrelation function is defined as : where plays the role of time ( counted in number of symbols ) and denotes an average over sliding windows , see supporting information ( si ) sec .i for details .the choice of the observable is crucial in determining whether and which `` memory '' of the source is being quantified . only once a class of observables sharing the same properties is shown to have the same asymptotic autocorrelation , it is possible to think about long - range correlations of the text as a whole . in the past ,different kinds of observables and encodings ( which also correspond to particular choices of ) were used , from the huffmann code , to attributing to each symbol an arbitrary binary sequence ( ascii , unicode , 6-bit tables , dividing letters in groups , etc . ) , to the use of the frequency - rank or parts of speech on the level of words .while the observation of long - range correlations in all cases points towards a fundamental source , it remains unclear which common properties these observables share .this is essential to determine whether they share a common root ( conjectured in ref . ) and to understand the meaning of quantitative changes in the correlations for different encodings ( reported in ref . ) . in order to clarify these points we use mappings that avoid the introduction of spurious correlations .inspired by voss and ebeling _ et al . _ we use s that transform the text into binary sequences by assigning if and only if a local matching condition is satisfied at the -th symbol , and otherwise ( e.g. , _ k - th symbol is a vowel _ ) .see si - sec .ii for specific examples . once equipped with the binary sequence associated with the chosen condition we can investigate the asymptotic trend of its .we are particularly interested in the long - range correlated case for which diverges . in this casethe associated random walker spreads super - diffusively as in the following we investigate correlations of the binary sequence using eq .( [ eq.mu ] ) because integrated indicators lead to more robust numerical estimations of asymptotic quantities .we are mostly interested in the distinction between short- and long- range correlations .we use normal ( anomalous ) diffusion of interchangeably with short- ( long- ) range correlations of * x*. an insightful view on the possible origins of the long - range correlations can be achieved by exploring the relation between the power spectrum at and the statistics of the sequence of inter - event times s ( i.e. , one plus the lengths of the cluster of s between consecutive s ) .for the short - range correlated case , is finite and given by : for the long - range correlated case , and eq . 
( [ eq.spectrum ] ) identifies two different origins : ( i ) _ burstiness _ measured as the broad tail of the distribution of inter - event times ( divergent ) ; or ( ii ) long - range correlations of the sequence of s ( not summable ) . in the next section we show how these two terms give different contributions at different linguistic levels of the hierarchy . building blocks of the hierarchy depicted in fig .[ fig.1 ] are binary sequences ( organized in levels ) and links between them .levels are established from sets of semantically or syntactically similar conditions s ( e.g. , vowels / consonants , different letters , different words , different topics ) .each binary sequence is obtained by mapping the text using a given , and will be denoted by the relevant condition in .for instance , * prince * denotes the sequence obtained from the matching condition `` prince '' .a sequence is linked to if for all s such that we have , for a fixed constant .if this condition is fulfilled we say that is _ on top of _ and that belongs to a higher level than . by definition , there are no direct links between sequences at the same level .a sequence at a given level is on top of all the sequences in lower levels to which there is a direct path . for instance , * prince * is on top of * e * which is on top of * vowel*. as will be clear later from our results , the definition of link can be extended to have a probabilistic meaning , suited for generalizations to high levels ( e.g. , `` prince '' is more probable to appear while writing about a topic connected to war ) .we now show how correlations flow through two linked binary sequences . without loss of generalitywe denote a sequence on top of and the unique sequence on top of such that ( sum and other operations are performed on each symbol : for all ) .the spreading of the walker associated with is given by where is the cross - correlation .using the cauchy - schwarz inequality we obtain define , as the sequence obtained reverting on each of its elements .it is easy to see that if then . applying the same arguments above , and using that for any , we obtain and similarly .suppose now that with .in order to satisfy simultaneously the three inequalities above , at least two out of the three have to be equal to the largest value . next we discuss the implications of this restriction to the flow of correlations up and down in our hierarchy of levels . * up . *suppose that at a given level we have a binary sequence with long - range correlations .from our restriction we know that at least one sequence on top of , has long - range correlations with .this implies , in particular , that if we observe long - range correlations in the binary sequence associated with a given letter then we can argue that its anomaly originates from the anomaly of at least one word where this letter appears , higher in the hierarchy of a word containing the given letter is on top of the sequence of that letter .if is long range correlated ( lrc ) then either is lrc or is lrc .being finite the number of words with a given letter , we can recursively apply the argument to and identify at least one lrc word . ] . * down .* suppose is long - range correlated . from eq .( [ eq.sum ] ) we see that a fine tuning cancellation with cross - correlation must appear in order for their lower - level sequence ( down in the hierarchy ) to have . 
from the restriction derived abovewe know that this is possible only if , which is unlikely in the typical case of sequences receiving contributions from different sources ( e.g. , a letter receives contribution from different words ) .typically , is composed by sequences , with , in which case .correlations typically flow down in our hierarchy of levels .* finite - time effects . *while the results above are valid asymptotically ( infinitely long sequences ) , in the case of any real text we can only have a finite - time estimate of the correlations .already from eq .( [ eq.sum ] ) we see that the addition of sequences with different , the mechanism for moving down in the hierarchy , leads to if is computed at a time when the asymptotic regime is still not dominating .this will play a crucial role in our understanding of long - range correlations in real books . in order to give quantitative estimates, we consider the case of being the sum of the most long - range correlated sequence ( the one with ) and many other independent non - overlapping and are non - overlapping if for all for which we have . ]sequences whose combined contribution is written as , with an independent identically distributed binary random variable .this corresponds to the random addition of s with probability to the s of . in this case shows a transition from normal to anomalous diffusion .the asymptotic regime of * z * starts after a time where and are obtained from which asymptotically goes as .note that the power - law sets at only if .a similar relation is obtained moving up in the hierarchy , in which case a sequence in a higher level is built by random subtracting s from the lower - level sequence as ( see si - sec .iii - a for all calculations ) .* burstiness .* in contrast to correlations , burstiness due to the tails of the inter - event time distribution is not always preserved when moving up and down in the hierarchy of levels .consider first going down by adding sequences with different tails of .the tail of the combined sequence will be constrained to the shortest tail of the individual sequences . in the random addition example , with having a broad tail in , the large asymptotic of has short - tails because the cluster of zeros in is cut randomly by .going up in the hierarchy , we take a sequence on top of a given bursty binary sequence , e.g. , using the random subtraction mentioned above .the probability of finding a large inter - event time in is enhanced by the number of times the random deletion merges two or more clusters of s in , and diminished by the number of times the deletion destroys a previously existent inter - event time .even accounting for the change in , this moves can not lead to a short - ranged for if of has a long tail ( see si - sec .iii - b ) .altogether , we expect burstiness to be preserved moving up , and destroyed moving down in the hierarchy of levels . * summary . 
* from eq .( [ eq.spectrum ] ) the origin of long - range correlations can be traced back to two different sources : the tail of ( burstiness ) and the tail of .the computations above reveal their different role at different levels in the hierarchy : is preserved moving down , but there is a transfer of _ information _ from to .this is better understood by considering the following simplified set - up : suppose at a given level we observe a sequence coming from a renewal process with broad tails in the inter - event times with leading to .let us now consider what is observed in * z * , at a level below , obtained by adding to other independent sequences .the long s ( a long sequence of 0 s ) in eq .( [ eq.renewal ] ) will be split in two long sequences introducing at the same time a cut - off in and non - trivial correlations for large . in this case , asymptotically the long - range correlations ( ) is solely due to .burstiness affects only estimated for times .a similar picture is expected in the generic case of a starting sequence with broad tails in both and . .( b , d ) transport defined in eq .( [ eq.mu ] ) .the numerical results show : ( a ) exponential decay of with inset : in log - linear scales ; ( b ) ; ( c ) non - exponential decay of with ; and ( d ) .all panels show results for the the original and -shuffled sequences , see legend . ]is an indicator of the burstiness of the distribution . is a finite time estimator of the global indicator of long - range correlation .a poisson process has .the twenty most frequent symbols ( white circles ) and twenty frequent words ( black circles ) of wrnpc are shown ( see si - tables for all books ) . indicates the case of * vowels * and of * blank space*. the red dashed - line is a lower - bound estimate of due to burstiness ( see si - sec .vi ) . this diagram is a generalization for long - range correlated sequences of the diagrams in ref .equipped with previous section s theoretical framework , here we interpret observations in real texts .we use ten english versions of international novels ( see si - sec .iv for the list and for the pre - processing applied to the texts ) . for each book sequences were analyzed separately : vowel / consonants , at the letter level ( blank space and the most frequent letters ) , and at the word level ( most frequent words , most frequent nouns , and words with frequency matched to the frequency of the nouns ) .the finite - time estimator of the long - range correlations was computed fitting eq .( [ eq.mu ] ) in a broad range of large ] used to compute the finite time is all below ( i.e. ) we have ( see eq .( [ eq.renewal ] ) ) while if the fitting interval is all beyond the cutoff ( i.e. ) we have .interpolating linearly between these two values and using we obtain the lower bound for in fig .[ fig.3 ] .it strongly restricts the range of possible in agreement with the observations and also with obtained for the -shuffled sequences ( see si - sec .vi for further details ) . 
the pre - asymptotic normal diffusion anticipated in sec . * finite - time effects * is clearly seen in fig . [ fig.4 ] . our theoretical model also explains other specific observations :

1 . key - words reach higher values of the exponent than letters . this observation contradicts our expectation for asymptotically long times : * prince * is on top of * e * and the reasoning after eq . ( [ eq.sum ] ) implies the opposite ordering . this seeming contradiction is resolved by our estimate ( [ eq.tt ] ) of the transition time needed for the finite - time estimate to reach the asymptotic value . this is done by imagining a surrogate sequence with the same frequency as `` e '' , composed of * prince * and randomly added 1s . using the fitting values for * prince * in eq . ( [ eq.tt ] ) we obtain a transition time which is larger than the maximum time used to obtain the estimate . conversely , for a sequence with the same frequency as `` prince '' built as a random sequence on top of * e * we obtain the corresponding transition time in the same way . these calculations not only explain the observation , they show that * prince * is a particularly meaningful ( not random ) sequence on top of * e * , and that * e * is necessarily composed of other sequences that dominate for shorter times . more generally , the _ observation _ of long - range correlations at low levels is due to widespread correlations on higher levels .

2 . the sharper transition for keywords . the addition of many sequences explains the slow increase of the local exponent for letters , because sequences with increasingly larger transition times dominate for increasingly longer times . the same reasoning explains the positive correlation between the observed values and the length of the book ( pearson correlation , similar results for other letters ) . the sequence also shows a slow transition and small values , consistent with the interpretation that it is connected to many topics on upper levels . in contrast , the sharp transition for * prince * indicates the existence of fewer independent contributions on higher levels , consistent with the observation of the onset of burstiness . altogether , this strongly supports our model of a hierarchy of levels with keywords ( but not function words ) strongly connected to specific topics , which are the actual correlation carriers . the sharp transition for the keywords appears systematically roughly at the scale of a paragraph ( symbols ) , in agreement with similar observations in refs .

additional insights on long - range correlations are obtained by investigating whether they are robust under different manipulations of the text . here we focus on two non - trivial shuffling methods ( see si - sec . vii for simpler cases for which our theory leads to analytic results ) . consider generating new same - length texts by applying to the original texts the following procedures :

* ( m1 ) keep the position of all blank spaces fixed and place each word - token randomly in a gap of the size of the word .
* ( m2 ) recode each word - type by an equal - length random sequence of letters and replace all its tokens consistently .

note that m1 preserves structures ( e.g. , words and letter frequencies ) that are destroyed by m2 . in terms of our hierarchy , m1 destroys the links to levels above the word level while m2 shuffles the links from the word - to the letter - level . since according to our picture correlations originate from high - level structures , we predict that m1 destroys and m2 preserves long - range correlations .
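a minimal python sketch of the two procedures , under our reading of them ( token placement among same - length gaps for m1 , a consistent random recoding of word types for m2 ) ; the function names and the use of the standard `random` module are our own choices .

```python
import random

random.seed(0)

def m1_shuffle(text):
    """m1: keep all blank spaces fixed and place each word token at random
    in a gap of the same size."""
    words = text.split(" ")
    by_len = {}
    for w in words:
        by_len.setdefault(len(w), []).append(w)
    for ws in by_len.values():
        random.shuffle(ws)
    pools = {k: iter(v) for k, v in by_len.items()}
    return " ".join(next(pools[len(w)]) for w in words)

def m2_shuffle(text):
    """m2: recode each word type by an equal-length random letter string and
    replace all of its tokens consistently."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    mapping = {w: "".join(random.choices(letters, k=len(w)))
               for w in set(text.split(" "))}
    return " ".join(mapping[w] for w in text.split(" "))
```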
indeed , simulations unequivocally show that the long - range correlations present in the original texts ( average over letters in wrnpc and in all books ) are mostly destroyed by m1 and preserved by m2 ( see si - tables for all data ) . at this point it is interesting to draw a connection to the _ principle of the arbitrariness of the sign _ , according to which the association between a given sign ( e.g. , a word ) and the referent ( e.g. , the object in the real world ) is arbitrary . as confirmed by the m2 shuffling , the long - range correlations of literary texts are invariant under this principle because they are connected to the semantics of the text . our theory is consistent with this principle .

[ fig . 4 caption ( fragment ) : local derivative of the transport curve in fig . [ fig.2]bd . results for three sequences in wrnpc are shown ( from top to bottom ) : the noun `` prince '' , the most frequent letter `` e '' , and the word `` so '' ( same frequency as `` prince '' ) . the horizontal lines indicate the fitted values , the error bars , and the fitting range . inset ( from top to bottom ) : the other nouns appearing as outliers in fig . [ fig.3 ] , the most frequent letters after `` e '' , and the words matching the frequency of the outlier nouns . ]

from an information theory viewpoint , long - range correlations in a symbolic sequence have two different and concurrent sources : the broad distribution of the distances between successive occurrences of the same symbol ( burstiness ) and the correlations of these distances . we found that the contribution of these two sources is very different for observables of a literary text at different linguistic levels . in particular , our theoretical framework provides a robust mechanism explaining our extensive observations that on the relevant semantic levels the text is high - dimensional and bursty , while on lower levels successive projections destroy burstiness but preserve the long - range correlations of the encoded text via a flow of information from burstiness to correlations . the mechanism explaining how correlations cascade from high to low levels is generic and extends to levels higher than the word level in the hierarchy in fig . [ fig.1 ] . the construction of such levels could be based , e.g. , on techniques devised to extract information on a `` concept space '' . while long - range correlations have been observed at the concept level , further studies are required to connect them to observations made at lower levels and to distinguish between the two sources of correlations . our results showing that correlation is preserved after random additions / subtractions of 1s help this connection because they show that words can be linked to concepts even if they are not used every single time the concept appears ( a high probability suffices ) . for instance , in ref . a topic can be associated to an axis of the concept space and be linked to the words used to build it . in this case , when the text is referring to a topic there is a higher probability of using the words linked to it , and therefore our results show that correlations will flow from the topic to the word level . at still higher levels , it is insightful to consider as a limit picture the renewal case of eq . ( [ eq.renewal ] ) , for which long - range correlations originate only from burstiness .
this _ limit case _ is the simplest toy model compatible with our results . our theory predicts that correlations take the form of a bursty sequence of events once we approach the semantically relevant topics of the text . our observations show that some highly topical words already show long - range correlations mostly due to burstiness , as expected by observing that topical words are connected to fewer concepts than function words . this renewal limit case is the desired outcome of a successful analysis of anomalous diffusion in dynamical systems and has been speculated to appear in various fields . using this limit case as a guideline we can think of an algorithm able to automatically detect the relevant structures in the hierarchy by recursively pushing the long - range correlations into a renewal sequence .

next we discuss how our results improve previous analyses and open new possibilities of application . previous methods either worked below the letter level or combined the correlations of different letters in such a way that asymptotically the most long - range correlated sequence dominates . only through our results is it possible to understand that indeed a single asymptotic exponent should be expected in all these cases . however , and more importantly , the asymptotic regime is usually beyond the observational range , and an interesting range of finite - time estimates is obtained depending on the observable or encoding . on the letter level , our analysis ( figs . [ fig.2 ] and [ fig.3 ] ) revealed that all of them are long - range correlated with no burstiness ( exponentially distributed inter - event times ) . this lack of burstiness can be wrongly interpreted as an indication that letters and most parts of speech are well described by a poisson process . our results explain that the non - poissonian ( and thus information - rich ) character of the text is preserved in the form of long - range correlations , which are observed also for all frequent words ( even for the most frequent word `` the '' ) . these observations violate not only the strict assumption of a poisson process , they are incompatible with any finite - state markov chain model . these models are the basis for numerous applications of automatic semantic information extraction , such as keyword extraction , authorship attribution , plagiarism detection , and automatic summarization . all these applications can potentially benefit from our deeper understanding of the mechanisms leading to long - range correlations in texts . apart from these applications , more fundamental extensions of our results should : ( i ) consider the mutual information and similar entropy - related quantities , which have been widely used to quantify long - range correlations ( see for a comparison to correlations ) ; ( ii ) go beyond the simplest case of the two - point autocorrelation function and consider multi - point correlations or higher - order entropies , which are necessary for the complete characterization of the correlations of a sequence ; and ( iii ) consider the effect of non - stationarity on higher levels , which could cascade to lower levels and affect the correlation properties . finally , we believe that our approach may help to understand long - range correlations in any complex system for which a hierarchy of levels can be identified , such as human activities and dna sequences .

we thank b. lindner for insightful suggestions and s. graffi for the careful reading of the manuscript . g.c . acknowledges partial support by the firb - project rbfr08uh60 ( miur , italy ) . m. d.
e. acknowledges partial support by the prin project 2008y4w3cy ( miur , italy ) .

schenkel a , zhang j , zhang y ( 1993 ) long range correlation in human writings . _ fractals _ 1:47 - 55 .
alvarez - lacalle e , dorow b , eckmann jp , moses e ( 2006 ) hierarchical structures induce long - range dynamical correlations in written texts . _ proc natl acad sci usa _ 103:7956 - 7961 .
voss r , clarke j ( 1975 ) ` 1/f noise ' in music and speech . _ nature _ 258:317 - 318 .
gilden d , thornton t , mallon m ( 1995 ) 1/f noise in human cognition . _ science _ 267:1837 - 1839 .
muchnik l , havlin s , bunde a , stanley he ( 2005 ) scaling and memory in volatility return intervals in financial markets . _ proc natl acad sci usa _ 102:9424 - 9428 .
rybski d , buldyrev sv , havlin s , liljeros f , makse ha ( 2009 ) scaling laws of human interaction activity . _ proc natl acad sci _ 106:12640 - 12645 .
kello ct , brown gda , ferrer - i - cancho r , holden jg , linkenkaer - hansen k , rhodes t , van orden gc ( 2010 ) scaling laws in cognitive sciences . _ trends cogn sci _ 14:223 - 232 .
press wh ( 1978 ) flicker noises in astronomy and elsewhere . _ comments on astrophysics _ 7:103 .
li w , kaneko k ( 1992 ) long - range correlation and partial spectrum in a noncoding dna sequence . _ europhys lett _ 17:655 - 660 .
peng ck , buldyrev s , goldberger a , havlin s , sciortino f , simons m , stanley he ( 1992 ) long - range correlations in nucleotide sequences . _ nature _ 356:168 - 171 .
voss rf ( 1992 ) evolution of long - range fractal correlations and 1/f noise in dna base sequences . _ phys rev lett _ 68:3805 - 3808 .
manning cd , schütze h ( 1999 ) _ foundations of statistical natural language processing _ ( the mit press , cambridge , massachusetts , usa ) .
stamatatos e ( 2009 ) a survey of modern authorship attribution methods . _ journal of the american society for information science and technology _ 60:538 - 556 .
oberlander j , brew c ( 2000 ) stochastic text generation . _ phil trans r soc lond a _ 358:1373 - 1387 .
usatenko o , yampolskii v ( 2003 ) binary n - step markov chains and long - range correlated systems . _ phys rev lett _ 90:110601 .
amit m , shmerler y , eisenberg e , abraham m , shnerb n ( 1994 ) language and codification dependence of long - range correlations in texts . _ fractals _ 2:7 - 13 .
ebeling w , neiman a ( 1995 ) long - range correlations between letters and sentences in texts . _ physica a _ 215:233 - 241 .
ebeling w , pöschel t ( 1994 ) entropy and long - range correlations in literary english . _ europhys lett _ 26:241 - 246 .
allegrini p , grigolini p , palatella l ( 2004 ) intermittency and scale - free networks : a dynamical model for human language complexity . _ chaos , solitons and fractals _ 20:95 - 105 .
melnyk ss , usatenko ov , yampolskii va ( 2005 ) competition between two kinds of correlations in literary texts . _ phys rev e _ 72:026140 .
herrera jp , pury pa ( 2008 ) statistical keyword detection in literary corpora . _ eur phys j b _ 63:135 - 146 .
montemurro ma , zanette d ( 2010 ) towards the quantification of the semantic information encoded in written language . _ adv comp syst _ 13:135 - 153 .
cover tm , thomas ja ( 2006 ) _ elements of information theory _ ( wiley series in telecommunications and signal processing ) .
herzel h , grosse i ( 1995 ) measuring correlations in symbol sequences . _ physica a _ 216:518 - 542 .
grassberger p ( 1989 ) estimating the information content of symbol sequences and efficient codes . _ ieee transactions on information theory _ 35:669 - 675 .
kokol p , podgorelec v ( 2000 ) complexity and human writings . _ complexity _ 7:1 - 6 .
kanter i , kessler da ( 1995 ) markov processes : linguistics and zipf 's law . _ phys rev lett _ 74:4559 - 4562 .
montemurro ma , pury pa ( 2002 ) long - range fractal correlations in literary corpora . _ fractals _ 10:451 - 461 .
trefn g , floriani e , west bj , grigolini p ( 1994 ) dynamical approach to anomalous diffusion : response of levy processes to a perturbation . _ phys rev e _ 50:2564 - 2579 .
cox dr , lewis paw ( 1978 ) _ the statistical analysis of series of events _ ( chapman and hall , london ) .
lindner b ( 2006 ) superposition of many independent spike trains is generally not a poisson process . _ phys rev e _ 73:022901 .
allegrini p , menicucci d , bedini r , gemignani a , paradisi p ( 2010 ) complex intermittency blurred by noise : theory and application to neural dynamics . _ phys rev e _ 82:015103 .
goh k - i , barabasi a - l , burstiness and memory in complex systems . _ europhys lett _ 81:48002 .
ortuno m , carpena p , bernaola - galvan p , munoz e , somoza am ( 2002 ) keyword detection in natural languages and dna . _ europhys lett _ 57:759 - 764 .
altmann eg , pierrehumbert jb , motter ae ( 2009 ) beyond word frequency : bursts , lulls , and scaling in the temporal distributions of words . _ plos one _ 4:e7678 .
doxas i , dennis s , oliver wl ( 2009 ) the dimensionality of discourse . _ proc natl acad sci usa _ 107:4866 - 4871 .
saussure f de ( 1983 ) course in general linguistics , eds . charles bally and albert sechehaye , trans . roy harris ( la salle , illinois ) .
badalamenti af ( 2001 ) speech parts as poisson processes . _ journal of psycholinguistic research _ 30:31 .
schmitt ao , ebeling w , herzel h ( 1996 ) the modular structure of informational sequences . _ biosystems _ 37:199 - 210 .

supporting information

given an ergodic and stationary stochastic process , correlation functions are defined as an average over different realizations of the process . stationarity guarantees that the correlation depends on the time lag only . in practice , one typically has no access to different realizations of the process but only to a single finite sequence . in our case , any binary sequence * x * is obtained from a single text of finite length through a given mapping .
in such cases it is possible to use the assumption of ergodicity to approximate the correlation function ( [ eq.corr ] ) by a time average , where the average is taken , for each fixed time lag , over all pairs of positions separated by that lag .

consider as an example the sentence `` this paper is a paper of mine '' . by choosing the condition to be _ the k - th symbol is a vowel _ , the projection maps the sentence into the corresponding binary sequence . if the condition is _ the k - th symbol is equal to ` e ' _ then we get the analogous sequence . generally , we can treat any n - gram of letters in the same way , as for example by choosing the condition to be _ the 2 - gram starting at the k - th symbol is equal to ` er ' _ , which projects the sentence using a sliding window . words are encoded using their corresponding n - gram , for example the condition could be _ the 7 - gram starting at the k - th symbol is equal to ` paper ' ( blank spaces included ) _ . it is possible to generalize these procedures to more _ semantic conditions _ that associate a 1 to either all or part of the symbols that appear in a sentence attached to a specified topic . these topics can be quantitatively constructed from the frequency of words using methods such as latent semantic analysis or the procedures to determine the so - called _ concept space _ .

we describe two simple procedures to construct two binary sequences * x * and * z * such that * x * is _ on top of _ * z * . these procedures are based either on the `` addition '' of 1s to * x * or on the `` subtraction '' of 1s of * z * . in the simplest cases of _ random _ addition and subtraction , we explicitly compute how long - range correlations flow from * x * to * z * ( corresponding to a flow from upper to lower levels of the hierarchy ) and how burstiness is preserved when extracting * x * from * z * ( moving from lower to upper levels in the hierarchy ) . recall that a sequence * x * is _ on top of _ * z * if , whenever * x * equals 1 , * z * equals 1 as well , up to a fixed constant . without loss of generality , in the following calculations we fix this constant for simplicity . we now define simple operations that map two binary sequences into a third binary sequence :

* given two generic binary sequences * z * and * ξ * we define their multiplication * y * = * ξ * * z * elementwise . by construction * y * is on top of * z * .
* given two non - overlapping sequences * x * and * y * we define their sum * z * = * x * + * y * elementwise . by construction * x * and * y * are on top of * z * .

we say that two sequences are non - overlapping if they are never equal to 1 at the same position . in general , two independent binary sequences * x * and * ξ * will overlap . a sequence which is non - overlapping with * x * can be constructed from * ξ * as * y * = * ξ * ( * 1 * - * x * ) , where * 1 * denotes the trivial sequence with all 1s . in this case , we say that * z * = * x * + * y * , with * y * = * ξ * ( * 1 * - * x * ) , is a sequence lower than * x * in the hierarchy that is constructed by a _ random addition _ ( of 1s ) to * x * . similarly , if * ξ * is independent of * z * , the sequence * ξ * * z * is a _ random subtraction _ ( of 1s ) of * z * .

consider a sequence * z * constructed as a random addition of 1s to a given long - range correlated sequence * x * , with * y * = * ξ * ( * 1 * - * x * ) and * ξ * a sequence of _ i.i.d . _ binary random variables . the associated random walker spreads anomalously with the same exponent as that of * x * . this asymptotic regime is masked at short times by a pre - asymptotic normal behavior . here we first compute explicitly the spreading of * z * in terms of that of * x * and * y * , and then we compute a bound for the transition time to the asymptotic anomalous diffusion of * z * .
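the operations just defined translate directly into code ; the following python sketch ( our own illustration , with p playing the role of the mean of the i.i.d . sequence ξ ) collects them for binary numpy arrays .

```python
import numpy as np

def multiply(xi, z):
    """y = xi * z : by construction y is on top of z."""
    return xi * z

def add(x, y):
    """z = x + y for non-overlapping sequences (never both 1 at the same position)."""
    assert not np.any(x * y), "sequences must be non-overlapping"
    return x + y

def random_addition(x, p, rng):
    """z = x + xi*(1 - x): add 1s with probability p on the 0s of x (one level down)."""
    xi = (rng.random(len(x)) < p).astype(int)
    return add(x, multiply(xi, 1 - x))

def random_subtraction(z, p, rng):
    """x = xi * z: keep each 1 of z independently with probability p (one level up)."""
    xi = (rng.random(len(z)) < p).astype(int)
    return multiply(xi, z)
```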
as written in eq . ( 5 ) of the main text , and from eqs . ( [ eq.y ] ) and ( [ squared - app ] ) , we obtain the variance of * z * in terms of those of * x * and * y * . the correlation term in eq . ( [ eq.sum ] ) can also be obtained through a direct calculation , which gives $ -\langle \xi \rangle \, \sigma^2_x(t) $ ( eq . [ eq.correl ] ) . finally , inserting eqs . ( [ eq.sigma2 ] ) and ( [ eq.correl ] ) into eq . ( [ eq.sum ] ) we obtain eq . ( [ eq.sumsimple ] ) : as * x * superdiffuses so will * z * , and they both have the same asymptotic behavior . on the other hand , the asymptotic regime is masked at short times by a pre - asymptotic normal behavior , given by the term linear in the time . we stress that , even if the non - overlapping condition for * y * forces both * x * and * y * to have the same asymptotic behavior , their cumulative contributions do not cancel out except in the trivial case .

we now give a bound on the transition time to the asymptotic anomalous diffusion of eq . ( [ eq.sumsimple ] ) . without loss of generality , consider the case in which even the asymptotic anomalous behavior of * x * is masked by a generic pre - asymptotic term , with increasing functions chosen such that the asymptotic behavior is dominated by the anomalous term . the asymptotic behavior in eq . ( [ eq.sumsimple ] ) dominates only after a time at which the anomalous term exceeds the linear one . using the fact that the pre - asymptotic term is positive and monotonically increasing we finally obtain the bound which corresponds to eq . ( 7 ) of the main text . in practice , any finite - time estimate is close to the asymptotic value only if the estimate is performed beyond this transition time ; otherwise it remains close to the normal - diffusion value . as noted in the main text , if * z * = * x * + * y * then the walker of * z * is the sum of the walkers of * x * and * y * . applying to this relation the same arguments as above , a similar pre - asymptotic normal diffusion and transition time appear in the case of random subtraction , moving up in the hierarchy . more specifically , starting from a sequence * z * that is asymptotically anomalous and constructing the random subtraction with * ξ * independent of * z * , we obtain a transition time given by an expression which corresponds to eq . ( [ eq.tt ] ) above after properly replacing the corresponding quantities .
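a compact summary of the calculation above , in our own notation , for z = x + ξ(1−x) with ξ i.i.d. , mean ⟨ξ⟩ , and independent of x ; this is a sketch under these assumptions and is consistent with the correlation term −⟨ξ⟩σ²_x(t) and with the pre - asymptotic term linear in t quoted above , rather than a verbatim copy of the paper ' s equations .

```latex
% variance decomposition for z = x + \xi (1 - x), \xi i.i.d. binary with mean <\xi>,
% independent of x (our notation; a sketch, not necessarily the paper's exact form)
\begin{aligned}
\sigma_z^2(t) &= \sigma_x^2(t) + \sigma_y^2(t) + 2\,\langle \delta x(t)\,\delta y(t)\rangle ,\\
\sigma_y^2(t) &= \langle\xi\rangle^2\,\sigma_x^2(t)
      + \langle\xi\rangle\bigl(1-\langle\xi\rangle\bigr)\bigl(t-\langle x(t)\rangle\bigr),\\
\langle \delta x(t)\,\delta y(t)\rangle &= -\langle\xi\rangle\,\sigma_x^2(t),\\
\sigma_z^2(t) &= \bigl(1-\langle\xi\rangle\bigr)^2\,\sigma_x^2(t)
      + \langle\xi\rangle\bigl(1-\langle\xi\rangle\bigr)\bigl(t-\langle x(t)\rangle\bigr).
\end{aligned}
```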
we consider the case of sequences as in eq . ( 8 ) of the main text : * x * is a sequence emerging from a renewal process with algebraically decaying inter - event times . given now a fixed probability , we consider the random subtraction in which each 1 is eventually set to 0 with that probability . it is easy to see how the inter - event times of the new process are distributed : asymptotically the distribution is dominated by the long tails of the original one . given a large inter - event time , one splits the sum in the second term of the right - hand side accordingly ; the part of the sum beyond a suitably chosen index is exponentially dominated and arbitrarily small , while the remaining finite sum is controlled again by the tail of the original distribution .

in our investigations we considered the english versions of the popular novels listed in the si - tables . the texts were obtained through the gutenberg project ( http://www.gutenberg.org ) . we implement a very mild pre - processing of the text that reduces the number of different symbols and simplifies our analysis : we consider as valid symbols the letters `` a - z '' , the numbers `` 0 - 9 '' , the apostrophe and the blank space . capitalization , punctuation and other markers were removed . a string of symbols between two consecutive blank spaces is considered to be a word . no lemmatization was applied , so that plural and singular forms are considered to be different words .

as described in the main text , the distinction between long - range and short - range correlations requires a finite - time estimate of the asymptotic diffusion exponent of the random walkers associated to a binary sequence . in practice , this corresponds to estimating the tails of the diffusion relation , and it is therefore essential to estimate the upper limit in time for which we have enough accuracy to provide a reasonable estimate . we adopt the following procedure to estimate it . we consider a surrogate binary sequence with the same length and fraction of symbols ( 1s ) , but with the symbols randomly placed in the sequence . for this sequence we know that the diffusion is normal . we then consider instants of time equally spaced on a logarithmic scale ( in practice , times growing by a factor 1.2 ) . we then estimate the local exponent as $ [ \log_{10}\sigma^{2}(1.2\,t)-\log_{10}\sigma^{2}(t) ] / \log_{10}(1.2) $ ( see fig . [ fig.a1]a ) . we recall that our primary interest is in the distinction between normal and anomalous diffusion . the procedure described above is particularly suited for this distinction , and an exponent above one obtained at large times can be confidently regarded as a signature of superdiffusion ( long - range correlation ) . in fig . [ fig.a1 ] we verify that the estimates show no strong dependence on the fraction of 1s in the binary sequence ( inset ) and that the upper limit scales linearly with the sequence length . based on these results , a good estimate of the upper limit is of the order of the text length divided by one hundred , i.e. the safe interval for determining long - range correlation ends two decades before the size of the text . this phenomenological rule was adopted in the estimates for all cases . this is only the upper limit , and the estimate is performed through a least - squares fit in a time interval below it . in practice , we select different fitting intervals around it and report the mean and variance over the different fittings as the estimate and its uncertainty , respectively .
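a sketch of this surrogate - based procedure in python ( our own implementation of the description above ; the tolerance , the factor 1.2 and the function names are illustrative choices ) :

```python
import numpy as np

def local_exponents(seq, base=1.2, t_min=10):
    """local slope of log10 sigma^2(t) between t and base*t, at log-spaced times."""
    n = len(seq)
    c = np.concatenate(([0], np.cumsum(seq)))
    def var(t):
        inc = c[t:] - c[:-t]  # sums of seq over all windows of length t
        return inc.var()
    ts, slopes = [], []
    t = t_min
    while int(base * t) < n // 2:
        s = (np.log10(var(int(base * t))) - np.log10(var(t))) / np.log10(base)
        ts.append(t)
        slopes.append(s)
        t = int(np.ceil(base * t))
    return np.array(ts), np.array(slopes)

def t_max_from_surrogate(n, frac_ones, tol=0.1, seed=0):
    """largest time up to which a random surrogate of the same length and density
    still gives local exponents close to 1 (normal diffusion)."""
    rng = np.random.default_rng(seed)
    surrogate = (rng.random(n) < frac_ones).astype(int)
    ts, slopes = local_exponents(surrogate)
    good = ts[np.abs(slopes - 1.0) < tol]
    return good.max() if good.size else ts[0]

# usage sketch; the text's rule of thumb is t_max of the order of (text length)/100
print(t_max_from_surrogate(1_000_000, 0.1))
```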
we start by clarifying the validity of the inequality ( [ eq.inequality ] ) , which compares the finite - time estimate of the total long - range correlation of a binary sequence * x * with the estimate of the correlation due to burstiness alone ( the latter can be quantified by shuffling * x * using the procedure of the main text ) . equation ( 4 ) of the main text shows that both burstiness and long - range correlations in the sequence of inter - event times contribute to the long - range correlations of a binary sequence . while the burstiness contribution is always positive , the contribution from the correlations of the inter - event times can be positive or negative . in principle , a negative contribution could precisely cancel the burstiness contribution and violate the inequality ( [ eq.inequality ] ) . conversely , this inequality is guaranteed to hold if the asymptotic contribution of the correlations of the inter - event times is positive . we now show that this is the case for the sequences we have argued to provide a good account of our observations . consider , high in the hierarchy , a renewal sequence * x * with a broad ( diverging ) tail in the inter - event time distribution . adding many independent non - overlapping sequences , we construct a lower - level sequence that still has long - range correlation , with the same exponent ( see sec . [ sec.operations ] above ) . for this sequence we know that the broad tail has a cutoff and thus burstiness gives no contribution to the asymptotic exponent . instead , the exponent results solely from the correlations of the inter - event times , which are therefore necessarily positive . it is natural to expect that this positivity of the asymptotic correlation extends to finite times , in which case the ( finite - time ) inequality ( [ eq.inequality ] ) holds . indeed , for small times the inter - event time distribution is not strongly affected by the independent additions , and thus a finite - time estimate will receive contributions from both burstiness and correlations . finally , we have directly tested the validity of eq . ( [ eq.inequality ] ) by comparing the estimates for different sequences * x * to those obtained from the corresponding a2 - shuffled sequences of * x * ( see main text ) . the inequality ( [ eq.inequality ] ) was confirmed for every single sequence we have analyzed , as shown by the fact that the shuffled estimates ( red symbols ) in fig . [ fig.3si ] are systematically below their corresponding original values ( black circles ) .

we now obtain a quantitative lower bound using eq . ( [ eq.inequality ] ) . we consider a renewal sequence with an inter - event time distribution ( [ cutoffpt ] ) characterized by a cut - off time , an anomalous diffusion exponent for the renewal sequence with no cutoff , a lower cut - off ( which we fixed ) , and a normalization constant . we obtain the lower bound as a function of the cutoff by considering how the relevant quantities change with it in the model above . for short times the corresponding walkers have not seen the cutoff and their diffusion will be anomalous . at longer times the diffusion becomes normal . correspondingly , if the fitting interval used to compute the finite - time exponent ( see sec . [ ssec.confidence ] ) lies entirely below the cutoff we obtain the anomalous value , while if the fitting interval lies entirely beyond the cutoff we obtain the normal value . when the cutoff lies inside the fitting interval we approximate the exponent by linearly interpolating between the two ; the burstiness indicator can be computed by directly calculating the first and second moments of the distribution ( [ cutoffpt ] ) . particularly important are the values of this indicator evaluated at the critical values of the cutoff corresponding to the two ends of the fitting interval . using the fact that the indicator is a monotonically increasing function of the cutoff we can obtain explicitly the dependence on it ; in particular , for a binary sequence with distribution ( [ cutoffpt ] ) , $ \hat{\gamma}_{a2} = \gamma_{a2} $ if $ s_m > s_2 $ . the red dashed line in fig . [ fig.3si ] ( fig . 3 of the main text ) was computed using the fitting range corresponding to the book wrnpc ( see sec . [ ssec.confidence ] ) and parameter values compatible with those observed for words with large values of the burstiness indicator .
in addition to the shuffling methods presented in the main text , we discuss here briefly two cases :

* * shuffle words * : mixing the word order kills correlations for scales larger than the maximum word length . even the blank - space sequence becomes uncorrelated , because its original correlations originate ( as in the case of all letters ) from the correlations of the inter - event times and not from the tails of their distribution .

* * keep all blank spaces * in their original positions and fill the empty space between them with : ( a ) two letters , placed randomly with fixed probabilities , or ( b ) the same letters of the book , placed in random positions . by construction , the correlation for the blank space is trivially preserved . what do we expect for the other letters ? the following simple reasoning indicates that long - range correlation should be expected asymptotically in both cases : any letter sequence is on top of the reverted blank - space sequence ; the results in sec . [ sec.operations ] above show that either the selected sequence or its complement is long - range correlated ; and eq . ( [ eq.sumsimple ] ) above shows that any randomly chosen sequence on top of a long - range correlated one is also long - range correlated . in practice these exponents are relevant only if the subsequence is dense enough for the transition time in eq . ( [ eq.ttsub ] ) above to be inside the observation range . for the first shuffling method and for our longest book ( wrnpc ) , we obtain that this happens only for sufficiently large frequencies . since the most frequent letter in a book has a much smaller frequency , we conclude that in practice all sequences obtained using the second shuffling method show normal diffusion for all books of realistic size . these simple calculations show that this mechanism does not explain the correlations observed in the letters of the original text , as has been speculated in ref . ; their origin is in the long - range correlations on higher levels .
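a small python sketch of the two fillings of the second case ( our own reading of the procedure ; probabilities , seed and names are illustrative ) :

```python
import random

random.seed(0)

def fill_gaps_two_letters(text, p=0.5):
    """keep all blank spaces fixed; replace every non-blank symbol by one of two
    letters, 'a' with probability p and 'b' otherwise (first filling)."""
    return "".join(c if c == " " else ("a" if random.random() < p else "b")
                   for c in text)

def fill_gaps_book_letters(text):
    """keep all blank spaces fixed; redistribute the book's own non-blank symbols
    at random among the non-blank positions (second filling)."""
    symbols = [c for c in text if c != " "]
    random.shuffle(symbols)
    it = iter(symbols)
    return "".join(c if c == " " else next(it) for c in text)
```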
[ si - tables ( one per book ) : for the twenty most frequent letters , the blank space , the vowels , and a selection of frequent words and nouns of each book , the tables list the number of occurrences , a burstiness - related indicator with its error , and finite - time correlation exponent estimates with their errors for the original and shuffled sequences . ]
abstract

the complexity of human interactions with social and natural phenomena is mirrored in the way we describe our experiences through natural language . in order to retain and convey such high - dimensional information , the statistical properties of our linguistic output have to be highly correlated in time .
an example is given by the robust observations, still largely not understood, of correlations on arbitrarily long scales in literary texts. in this paper we explain how long-range correlations flow from highly structured linguistic levels down to the building blocks of a text (words, letters, etc.). by combining calculations and data analysis we show that correlations take the form of a bursty sequence of events once we approach the semantically relevant topics of the text. the mechanisms we identify are fairly general and can be equally applied to other hierarchical settings. + published as : link : dx.doi.org/10.1073/pnas.1117723109[proc . nat . acad . sci . usa ( 2012 ) doi : 10.1073/pnas.1117723109 ] literary texts are an expression of the ability of natural language to project complex and high-dimensional phenomena into a one-dimensional, semantically meaningful sequence of symbols. for this projection to be successful, such sequences have to encode the information in the form of structured patterns, such as correlations on arbitrarily long scales. understanding how language processes long-range correlations, a ubiquitous signature of complexity present in human activities and in the natural world, is an important task towards comprehending how natural language works and evolves. this understanding is also crucial to improve the increasingly important applications of information theory and statistical natural language processing, which are mostly based on short-range-correlation methods. take your favorite novel and consider the binary sequence obtained by mapping each vowel into a and all other symbols into a . one can easily detect structures on neighboring bits, and we certainly expect some repetition patterns on the scale of words. but one should certainly be surprised and intrigued when discovering that there are structures (or memory) after several pages, or even on arbitrarily large scales of this binary sequence. in the last twenty years, similar observations of long-range correlations in texts have been related to large-scale characteristics of the novels, such as the story being told, the style of the book, the author, and the language. however, the mechanisms explaining these connections are still missing (see ref. for a recent proposal). without such mechanisms, many fundamental questions cannot be answered. for instance, why did all previous investigations observe long-range correlations despite their radically different approaches? how and which correlations can flow from the high-level semantic structures down to the crude symbolic sequence in the presence of so many arbitrary influences? what information is gained on the large structures by looking at smaller ones? finally, what is the origin of the long-range correlations? in this paper we provide answers to these questions by approaching the problem through a novel theoretical framework. this framework uses the hierarchical organization of natural language to identify a mechanism that links the correlations at different linguistic levels. as schematically depicted in fig. [ fig.1 ], a topic is linked to several words that are used to describe it in the novel. at the lower level, words are connected to the letters they are formed from, and so on. we calculate how correlations are transported through these different levels and compare the results with a detailed statistical analysis in ten different novels.
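to make the vowel-mapping construction above concrete, here is a minimal, illustrative sketch (not taken from the paper) that builds the binary vowel/non-vowel sequence of a plain-text file and estimates a fluctuation exponent from the growth of block-sum fluctuations with block size; an exponent close to 0.5 indicates an uncorrelated sequence, while larger values signal long-range correlations. the file name, the chosen scales, and the use of python/numpy are assumptions made for illustration only.

```python
# Illustrative sketch (not from the paper): build the vowel/non-vowel binary
# sequence of a text and estimate a fluctuation exponent from the growth of
# block-sum fluctuations with block size.  An exponent near 0.5 means no
# long-range correlations; larger values indicate long-range memory.
import numpy as np

def vowel_sequence(path):
    """Map every vowel to 1 and every other character to 0."""
    with open(path, encoding="utf-8") as handle:
        text = handle.read().lower()
    vowels = set("aeiou")
    return np.fromiter((1.0 if ch in vowels else 0.0 for ch in text), dtype=float)

def fluctuation_exponent(x, scales):
    """Fit std. dev. of block sums vs. block size as a power law; return the slope."""
    used, sigmas = [], []
    for s in scales:
        n_blocks = len(x) // s
        if n_blocks < 4:                     # need several blocks for a std. dev.
            continue
        blocks = x[: n_blocks * s].reshape(n_blocks, s)
        used.append(s)
        sigmas.append(blocks.sum(axis=1).std())
    slope, _ = np.polyfit(np.log(used), np.log(sigmas), 1)
    return slope

if __name__ == "__main__":
    seq = vowel_sequence("novel.txt")                      # placeholder file name
    scales = np.unique(np.logspace(1, 4, 20).astype(int))  # illustrative scales
    print("fluctuation exponent:", fluctuation_exponent(seq, scales))
```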
our results reveal that, while approaching semantically relevant high-level structures, correlations unfold in the form of a bursty signal. moving down in levels, we show that correlations (but not burstiness) are preserved, explaining the ubiquitous appearance of long-range correlations in texts. [ fig. 1 caption fragment : ... ) , letters ( a - z ) , words , and topics . ] |
random graphs often appear and have been studied in various fields of natural science and engineering. a recent interest in their applications is the generation of expander graphs, regular graphs that show high connectivity and homogeneity. such graphs have important applications in designing networks of computers, infrastructures, and real and artificial neurons. a way to define and generate expanders is maximization of the spectral gap, which is defined as the difference between the largest eigenvalue and the second largest eigenvalue of the adjacency matrix of a graph. et al. numerically maximized the spectral gap by simulated annealing and generated examples of these graphs. however, in designing networks, we are also interested in quantitative properties; specifically, how the probability of large-spectral-gap graphs behaves when the size of the graphs increases. in this paper we apply a method based on multicanonical monte carlo to the calculation of large deviations in the spectral gap of random graphs. the method can be regarded as an extension of the method introduced in ; in , large deviations in the largest eigenvalue of random matrices are computed by a similar method. multicanonical monte carlo enables us to estimate tails of the distribution whose probability is very small and cannot be computed by naive random sampling. by using the proposed method, we estimate the distribution of the spectral gap of random 3-regular graphs and quantify the probability that the spectral gap is larger than a given ; when , it is shown that for large . an undirected graph is described by the corresponding adjacency matrix , whose entries are defined by . here we denote the eigenvalues of the adjacency matrix by . in the case of -regular graphs, has the trivial largest eigenvalue ; therefore we assume . the difference between the largest eigenvalue and the largest non-trivial eigenvalue of is called the ``spectral gap'' and takes a non-zero value if the corresponding graph is connected. we are interested in tails of the distribution of the spectral gap. in the case of regular graphs, alon and boppana (see ) proved an asymptotic lower bound on the largest non-trivial eigenvalue as . assuming that the limit exists, this gives an asymptotic upper bound for the spectral gap as . graphs with are called ``ramanujan graphs''. on the other hand, friedman proved that for and for any constant , ``most'' random -regular graphs have that satisfies as . these results indicate that the peak of the distribution of is located near for large . et al. studied the distribution around this peak using naive random sampling. none of these studies, however, discusses the extreme tails of the distribution of or , which is the main subject of this paper. here, we will give a brief explanation of multicanonical monte carlo. the aim of this method is to estimate the density of states defined by , where is the dirac -function and denotes a multiple integral in the space of matrix . we define the weight by as a function of , a key quantity of this method. when a weight function is given, we can generate samples from the distribution defined by the weight using the metropolis algorithm. the essential idea is to tune the weight function so as to produce a flat histogram of . after we find an appropriate weight function that gives a flat histogram, an approximate value of the density of states is estimated by .
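as a concrete illustration of the quantities just defined, the short sketch below (an illustrative python/networkx example, not the authors' code) builds a random 3-regular graph, computes its spectral gap from the adjacency spectrum, and checks the ramanujan condition against the asymptotic alon-boppana value; the graph size and the libraries used are arbitrary choices.

```python
# Quick illustration of the spectral gap and the Ramanujan condition for a
# random 3-regular graph (illustrative choices of size and libraries).
import numpy as np
import networkx as nx

def spectral_gap(graph, k):
    """k minus the largest non-trivial adjacency eigenvalue."""
    eig = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(graph)))
    return k - eig[-2]

if __name__ == "__main__":
    k, n = 3, 100
    g = nx.random_regular_graph(k, n, seed=0)
    gap = spectral_gap(g, k)
    ramanujan_gap = k - 2.0 * np.sqrt(k - 1.0)   # asymptotic Alon-Boppana value
    print(f"spectral gap       : {gap:.4f}")
    print(f"ramanujan if gap >= {ramanujan_gap:.4f} : {gap >= ramanujan_gap}")
```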
to obtain this, we modify the weight function step-by-step through the metropolis simulation, using the current guess of the weight function. there are several ways to modify the function. among them, a method proposed by wang and landau is the most successful and is used in this study. for a more accurate estimate of the density of states, we calculate a histogram by performing a long simulation with fixed . then is estimated by , and the probability distribution of is obtained by . practically, we calculate the density only in a prescribed interval . details of the implementation of the metropolis algorithm are as follows. the simulation starts from an arbitrary -regular graph with the desired number of vertices. in each step, a candidate is generated by rewiring edges in the way used in , where the degree of each vertex is not changed; a pair of links and that satisfy is selected and rewired as . then the spectral gap of a candidate is calculated by the householder method, and the accept/reject decision for the transition from the current state to is made by comparing the metropolis ratio with a random number uniformly distributed in $[0,1]$. a candidate with , indicating a disconnected graph, is always rejected; hence an ensemble of connected random -regular graphs is sampled. [ figure caption : of the spectral gap of random 3-regular graphs. the density at , plotted on the vertical axis, is obtained from the probability in a small bin of fixed width around ; these values depend on the bin width in the right tail of the distribution, because the discreteness of the spectra becomes relevant there. the arrows indicate ; graphs whose is above this value are ramanujan. ] using the proposed method, we estimate the distribution of the spectral gap of random 3-regular graphs with the number of vertices . figure [ fig:1 ] shows graphs with the largest spectral gap for and found in the simulations. in figure [ fig:2 ], we show of random 3-regular graphs with the number of vertices ; the computational time is 7 hours for and 251 hours for using a core of an intel xeon x5365. as increases, the probability density becomes sharper and the peak moves closer to around , which is consistent with a theoretical estimate and a numerical experiment. specifically, the probability of graphs with large decreases drastically for large . [ figure caption : s are shown as functions of . each curve corresponds to a different value of . data are well fitted by quadratic functions when . ] to quantify the rate of decrease of the tails of the distribution, we define the probability that is larger than by . here, we assume the probability is negligibly smaller than . in figure [ fig:3 ], the estimated are shown as functions of . for each , is well fitted by a quadratic function of when . this result indicates that decreases for large as , where the rate function is shown in figure [ fig:4 ]. in the region , is no longer a monotonically decreasing function of and asymptotically approaches unity for large . [ figure caption : is plotted. the inset shows a semi-log plot of . in the region , increases exponentially as increases. is the point at which becomes negative. ] a method based on multicanonical monte carlo is introduced for the estimation of large deviations in the spectral gap of random graphs.
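the update step described in the preceding paragraphs can be sketched as follows; this is an illustrative python example rather than the authors' implementation: it uses a degree-preserving double-edge swap as the rewiring move, computes the spectral gap with numpy's dense eigensolver instead of the householder routine mentioned in the text, rejects disconnected candidates, and accepts or rejects moves with a metropolis ratio built from a multicanonical log-weight supplied as a function of the gap. the system size, the number of steps, and the weight itself are placeholders.

```python
# Sketch of the rewiring / accept-reject step (illustrative, not the authors' code).
import numpy as np
import networkx as nx

def spectral_gap(graph, k=3):
    eig = np.linalg.eigvalsh(nx.to_numpy_array(graph))
    return k - np.sort(eig)[-2]

def propose_rewiring(graph, rng):
    """Swap edges (i,j),(k,l) -> (i,l),(k,j) so that every degree is preserved."""
    g = graph.copy()
    edges = list(g.edges())
    while True:
        a, b = rng.choice(len(edges), size=2, replace=False)
        (i, j), (k, l) = edges[a], edges[b]
        if len({i, j, k, l}) == 4 and not g.has_edge(i, l) and not g.has_edge(k, j):
            g.remove_edges_from([(i, j), (k, l)])
            g.add_edges_from([(i, l), (k, j)])
            return g

def metropolis_step(graph, gap, ln_w, rng):
    """One multicanonical accept/reject step; ln_w maps a gap to a log-weight."""
    candidate = propose_rewiring(graph, rng)
    if not nx.is_connected(candidate):          # zero spectral gap: always reject
        return graph, gap
    new_gap = spectral_gap(candidate)
    if np.log(rng.random()) < ln_w(new_gap) - ln_w(gap):
        return candidate, new_gap
    return graph, gap

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    g = nx.random_regular_graph(3, 64, seed=1)
    gap = spectral_gap(g)
    for _ in range(100):                        # flat weight = plain sampling
        g, gap = metropolis_step(g, gap, lambda e: 0.0, rng)
    print("spectral gap after 100 steps:", gap)
```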
by using this method, we calculate the distribution of the spectral gap and the probability for random 3-regular graphs. while naive random sampling provides reasonable estimates of only when is around the peak of the distribution, the proposed method enables us to estimate in a wide region of , including the extreme tails of the distribution. we find that behaves as for large when . our preliminary results indicate that a similar behavior is also seen in the case of random - and -regular graphs, suggesting that it is a general feature of random -regular graphs. the proposed method can be applied to calculations of large deviations in any statistics of any ensemble of random graphs. in the case of non-regular graphs, the spectral gap is defined as the smallest non-trivial eigenvalue of the laplacian matrix; hence we sample matrices instead of by using multicanonical monte carlo. recent studies on gaussian or wishart random matrices showed that the probability of all eigenvalues being negative decreases as when the size of the matrices is large. our results can be regarded as an extension of these results to the spectral gap of random graphs. we thank prof. m. kikuchi for his support and encouragement. this work is supported in part by the global coe program (core research and engineering of advanced materials - interdisciplinary education center for materials science), mext, japan. all simulations were performed on a pc cluster at the cybermedia center, osaka university. | graphs with a large spectral gap are important in various fields such as biology, sociology and computer science. in designing such graphs, an important question is how the probability of graphs with a large spectral gap behaves. a method based on multicanonical monte carlo is introduced to quantify the behavior of this probability, which enables us to calculate extreme tails of the distribution. the proposed method is successfully applied to random -regular graphs and the large deviation probability is estimated. random graph; spectral gap; ramanujan graph; multicanonical monte carlo; large deviation |
let be the frequency of the species in a random sample of size from a multinomial population with a perhaps countably infinite number of species and let be probability measures under which the species has probability of being sampled , where with .let and denote the sum of the probabilities of the unobserved species , and the total number of species represented times in the sample , respectively , that is , where .then is called the sample coverage which is the sum of the probabilities of the observed species . proposed the estimator for .the good estimator has many applications such as shakespeare s general vocabulary and authorship of a poem [ , ] , genom [ ] , the probability of discovering new species in a population [ , ] , network species and data confidentiality [ ] . the problem of predicting .they studied prediction and prediction intervals , and gave a real - data example . on the theoretical aspects , many authors studied the asymptotic properties [ cf .esty ( ) , , and and references therein ] . proved the following asymptotic normality : under the condition where recently , found a necessary and sufficient condition for the asymptotic normality ( [ clt - thm - esty ] ) under the condition that is , under condition ( [ moment - condition-1 ] ) , ( [ clt - thm - esty ] ) holds if and only if both and for any , where for any , in this paper , we consider the moderate deviation problem for the good estimator .it is known that the moderate deviation principle is a basic problem .it provides us with rates of convergence and a useful method for constructing asymptotic confidence intervals .the moderate deviations can be applied to the following nonparameter hypothesis testing problem : where and are two probability measures under which the species has , respectively , probability and of being sampled , where with , .we can define a rejection region of the hypothesis testing by the moderate deviation principle such that the probabilities of type i and type ii errors tend to with an exponential speed .the asymptotic normality provides as the asymptotic variance and approximate confidence statements , but it does not prove that the probabilities of type i and type ii errors tend to with an exponential speed .the moderate deviations can be applied to a hypothesis testing problem for the expected coverage of the sample . have established a general delta method on the moderate deviations for estimators .but the method can not be applied to the good estimator . in order to study the moderate deviation problem for the good estimator, we need refined asymptotic analysis techniques and tail probability estimates . the exponential moments inequalities , the truncation method , asymptotic analysis techniques and the poisson approximation in play important roles .our main results are a moderate deviation principle and a self - normalized moderate deviation principle for the good estimator .the rest of this paper is organized as follows .the main results are stated in section [ sec2 ] . 
some examples and applications to the hypothesis testing problem and the confidence interval are also given in section [ sec2 ] .the proofs of the main results are given in section [ sec3 ] .some basic concepts for large deviations and the proofs of several technique lemmas are given in the .in this section , we state the main results and give some examples and applications .let , , be a function taking values in such that we introduce the following lindeberg - type condition : for any positive sequence with and any , [ rmk-2 - 1 ] for any , in particular , take . if , then ( [ mdp - lindeberg - condition ] ) holds .[ main - thm - mdp ] suppose that the conditions ( [ moment - condition-1 ] ) , ( [ moment - condition-2 ] ) and ( [ mdp - lindeberg - condition ] ) hold .then satisfies a large deviation principle with speed and with rate function .in particular , for any , [ main - thm - mdp - self ] suppose that conditions ( [ moment - condition-1 ] ) , ( [ moment - condition-2 ] ) and ( [ mdp - lindeberg - condition ] ) hold. then satisfies a large deviation principle with speed and with rate function .let , be a sequence of positive numbers such that then theorems [ main - thm - mdp ] and [ main - thm - mdp - self ] give the following estimates which are much easier to understand and apply : and set .then is called the expected coverage of the sample in the literature . by theorems [ main - thm - mdp ] and [ main - thm - mdp - self ] , and lemma [ main - thm - self - lem ] , as an estimator of also satisfies moderate deviation principles .[ main - thm - mdp - expected ] suppose that conditions ( [ moment - condition-1 ] ) , ( [ moment - condition-2 ] ) and ( [ mdp - lindeberg - condition ] ) hold .then and satisfy the large deviation principle with speed and with rate function . considered the problem of predicting , and obtained conditionally unbiased predictors and exact prediction intervals based on a poissonization argument .the moderate deviations for the predictors are also interesting problems . 
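to keep the objects appearing in these statements concrete, the following illustrative sketch (python, not from the paper) computes the good coverage estimator in its standard good-turing form — the estimated unobserved probability mass is the number of singleton species divided by the sample size — and compares it with the true unseen mass for a sample drawn from a zipf-like species distribution; the distribution, its exponent and the sample size are arbitrary choices.

```python
# Illustrative check of the Good coverage estimator (standard Good-Turing form;
# the Zipf-like species distribution and all parameters are placeholder choices).
import numpy as np

def good_unseen_mass(counts, sample_size):
    """Estimated total probability of unobserved species: singletons / n."""
    return np.sum(counts == 1) / sample_size

rng = np.random.default_rng(42)
n_species, exponent, n_sample = 10_000, 1.5, 5_000

p = 1.0 / np.arange(1, n_species + 1) ** exponent   # Zipf-like species probabilities
p /= p.sum()

counts = rng.multinomial(n_sample, p)
estimate = good_unseen_mass(counts, n_sample)
truth = p[counts == 0].sum()                        # true unobserved probability mass
print(f"estimated unseen mass: {estimate:.4f}")
print(f"true unseen mass     : {truth:.4f}")
print(f"estimated coverage   : {1.0 - estimate:.4f}")
```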
in this subsection , we apply the moderate deviations to hypothesis testing problems and confidence interval .let be the unknown total probability unobserved species , and let be the estimator defined by ( [ def - estimator ] ) .first , let us consider a nonparametric hypothesis testing problem .let and be two probability measures under which the species has , respectively , probability and of being sampled , where with , .denote by and suppose that the conditions ( [ moment - condition-1 ] ) , ( [ moment - condition-2 ] ) and ( [ mdp - lindeberg - condition ] ) hold for , , and that consider the nonparameter hypothesis testing we take the statistic as test statistic .suppose that the rejection region for testing the null hypothesis against is , where is a positive constant .the probability of type i error and the probability of type ii error are respectively .it follows therefore , corollary [ main - thm - mdp - expected ] implies that the above result tells us that if the rejection region for the test is , then the probability of type i error tends to with exponential decay speed , and the probability of type ii error tends to with exponential decay speed for all .but the asymptotic normality does not prove that the probabilities of type i and type ii errors tend to with an exponential speed .we also consider a hypothesis testing problem for the expected coverage of the sample .we denote by the probability measures under which the expected coverage of the sample is and set suppose that the conditions ( [ moment - condition-1 ] ) , ( [ moment - condition-2 ] ) and ( [ mdp - lindeberg - condition ] ) hold for for each .let be two real numbers preassigned .consider the hypothesis testing we also take the rejection region , where is a positive constant .when , and when , next , we apply the moderate estimates to confidence intervals . for given confidence level ,set .then by theorem [ main - thm - mdp ] , the confidence interval for is , that is , but the confidence interval contains unknown .we use theorem [ main - thm - mdp - self ] to obtain another confidence interval with confidence level for which does not contain unknown , let us check that some examples in also satisfy moderate deviation principles if , where . for a given decreasing density function on .define , where .two concrete examples are as follows : let , where and . by example 1 in , and , where thus ( [ moment - condition-1 ] ) and ( [ moment - condition-2 ] ) hold . by remark [ rmk-2 - 1 ] , ( [ mdp - lindeberg - condition ] ) also holds .therefore , theorems [ main - thm - mdp ] and [ main - thm - mdp - self ] hold .let , where for some constant .then by example 2 in , and when .thus , ( [ mdp - lindeberg - condition ] ) is equivalent to which holds if and only if .therefore , ( [ moment - condition-1 ] ) , ( [ moment - condition-2 ] ) and ( [ mdp - lindeberg - condition ] ) hold if and only if .in this section we give proofs of the main results .let us explain the idea of the proof of theorem [ main - thm - mdp ] .first , we divide the proof into two cases : case i and case ii , according to the limit and . for case i , by the truncation method and the exponential equivalent method, we simplify our problems to the case which is uniformly bounded . 
for case ii , by the poisson approximation and the exponential equivalent method ,we simplify our problems to the case of independent sums satisfying an analogous lindeberg condition .for the two cases simplified , we establish moderate deviation principles by the method of the laplace asymptotic integral ( lemmas [ laplace - int - lem ] and [ mdp - poisson - approx ] ) . the exponential moment estimate ( lemma [ exp - moment - inq - lem-1 ] ) plays an important role in the proofs of some exponential equivalence ( lemmas [ exp - moment - estimates - lem-1 ] and [ exp - eqv - lemma-1 ] ) . the main technique in the estimate of the laplace asymptotic integral lemma [ laplace - int - lem ] is asymptotic analysis .in particular , we emphasis a transformation defined below ( [ laplace - int - lem - eq-4 ] ) which plays a crucial role in the proof of lemma [ laplace - int - lem ] .we can assume that the population is sampled sequentially , so that , , are i.i.d . under , where ; can be viewed as a multinomial vector under , that is , for all integers , it is obvious that .since for any , we have that and if , then . without loss of generality , we can assume that .\ ] ] otherwise , we consider subsequence. the proof of theorem [ main - thm - mdp ] will be divided into two cases , now let us introduce the structrue of the proofs of main results . in section [ sec3.1 ] ,we give several moment estimates and exponential moment inequalities which are basic for studying the moderate deviations for the good estimator . a truncation method and some related estimates are also presented in the subsection .the proofs of cases i and ii of theorem [ main - thm - mdp ] are given , respectively , in sections [ sec3.2 ] and [ sec3.3 ] . in section [ sec3.4 ] , we prove theorem [ main - thm - mdp - self ] .the proofs of several technique lemmas are postponed to the . for any and , set and [ truncation - lem - l ] if , then for any positive sequence with , in particular , condition ( [ mdp - lindeberg - condition ] ) is valid .similarly to remark [ rmk-2 - 1 ] , for any , therefore , ( [ truncation - lem - l - eq-1 ] ) holds . from lemma 1 in ,under conditions ( [ moment - condition-1 ] ) and ( [ moment - condition-2 ] ) , and if , then . [ sn - comparison - lem-1 ] assume that ( [ mdp - lindeberg - condition ] ) holds . if and then set .then for any , for large enough , therefore , by ( [ mdp - lindeberg - condition ] ) , the above inequality implies that as . on the other hand ,it is clear that for any , when is large enough , which yields that .thus ( [ sn - comparison - lem-1-eq-1 ] ) is valid .[ truncation - lem - rho ] for any , since holds uniformly on for , we obtain that that is , ( [ truncation - lem - rho - eq-1 ] ) holds . in order to obtain the exponential moment inequalities ,we need some concepts of negative dependence ; cf . , .let be real random variables . are said to be negatively associated if for every two disjoint index finite sets , for all nonnegative functions and that are both nondecreasing or both nonincreasing .[ negative - dep - lem] is a sequences of negatively associated random variables , and for each is also negatively associated .let denote the frequency of the species in the sampling , that is , then are zero - one random variables such that . by lemma 8 in , , are negative associated . since , , are i.i.d . under , , , are negative associated . 
noting that and where }(x) ] such that , for large enough , we can write by ( [ mdp - lindeberg - condition ] ) , and therefore , by as , we have that which implies the conclusion of the lemma by the grtner ellis theorem ; cf .theorem 2.3.6 in . by lemmas [ mdp - poisson - approx ] and [ exp - eqv - lemma - basic ], we need the following exponential approximation : for any , let us first give a maximal exponential estimate . its proof is postponed to appendix [ sec5 ] .[ exp - eqv - lemma-1 ] let conditions ( [ moment - condition-1 ] ) , ( [ moment - condition-2 ] ) and ( [ mdp - lindeberg - condition ] ) hold , and let .for any fixed , set , .then for any , }|\zeta_{\lambda_n n}- \zeta_{t n}|\geq\varepsilon a\bigl(b(n)\bigr ) \bigr)=-\infty.\ ] ] proof of theorem [ main - thm - mdp ] under by lemmas [ mdp - poisson - approx ] and [ exp - eqv - lemma - basic ] , we only need to prove ( [ exp - eqv - lemma-2-eq-1 ] ) .set .then has gamma distribution and .therefore , for any and any , }|\zeta _n- \zeta _ { t n}|\geq\frac{\varepsilon a(b(n))}{2 } \biggr).\end{aligned}\ ] ] by lemma [ exp - eqv - lemma-1 ] , }|\zeta_n- \zeta_{t n}|\geq \frac{\varepsilon a(b(n))}{2 } \biggr)=-\infty.\hspace*{-34pt}\ ] ] by chebyshev s inequality , it is easy to get that therefore , we only need to prove that it is sufficient that for any , in fact , by lemma [ exp - moment - inq - lem-1 ] , we can get that for any , where , which implies that ( [ exp - eqv - lemma-2-eq-1 ] ) holds . by the comparison method in large deviations [ cf . theorem 4.2.13 in ] , in order to obtain theorem [ main - thm - mdp - self ] , we need the following lemma .[ main - thm - self - lem ] for any , for , by ( [ exp - moment - estimates - lem-1-eq-1 ] ) , for , for any and , therefore , by lemma [ truncation - lem - rho ] , it suffices to show that & & \quad{}\times\log p_n \biggl ( \frac{1}{b(n ) } \biggl|\sum _ { k\in m_{n\varrho } } \biggl(\delta_{kj}(n)- \frac{1}{j!}(np_{kn})^j e^{-np_{kn } } \biggr ) \biggr|\geq\varepsilon \biggr)\\[-2pt ] & & \qquad=-\infty .\nonumber\end{aligned}\ ] ] now , let us show ( [ main - thm - self - lem - eq-3 ] ) . 
using the partial inversion formula for characteristic function due to [ see also , ] , for any , \hspace*{-4pt}&&\qquad= \frac{n!}{2\pi n^ne^{-n}}\int_{-\pi}^\pi\prod _ { k\in m_{n\varrho}^c } e_n \bigl(\exp \bigl\{iu \bigl(y_k(n)-np_{kn}\bigr ) \bigr\ } \bigr ) \\[-2pt ] \hspace*{-4pt}&&\hspace*{60pt}\qquad\quad{}\times \prod_{k\in m_{n\varrho } } e_n \biggl(\exp \biggl\{iu \bigl(y_k(n)-np_{kn}\bigr)\\[-2pt ] \hspace*{-4pt}&&\qquad\hspace*{147pt } { } + r \biggl(i_{\{y_k(n ) = j\}}-\frac{1}{j!}(np_{kn})^j e^{-np_{kn } } \biggr ) \biggr\ } \biggr)\,du,\end{aligned}\ ] ] where are independent random variables and is poisson distributed with mean .let be defined as in the proof of lemma [ laplace - int - lem ] , that is , set then for any , set .then , and noting that , we obtain that for small enough , } \biggl{\vert}e^{n(e^{i u}-1-iu ) } \prod_{k\in m_{n\varrho } } \vartheta _ k(u,\alpha)\biggr{\vert}\biggr ) \\ & & \qquad\leq - \frac { b(n)n \tau^2(n)}{a^2(b(n ) ) } \biggl(1+o \biggl(\frac { \log n } { n\tau^2(n ) } \biggr)+o ( \varrho ) \biggr ) \to-\infty.\end{aligned}\ ] ] since }\sup_{k\in m_{n\varrho } } |np_{kn}(1-\cos u)|\leq\varrho ] , thus and so this yields that ( [ main - thm - self - lem - eq-3 ] ) holds .proof of theorem [ main - thm - mdp - self ] by lemma [ main - thm - self - lem ] , for any , now , by and the elementary inequality for all , we obtain that therefore , the conclusion of the theorem follows from lemma [ exp - eqv - lemma - basic ] or theorem 4.2.13 in .for the sake convenience , let us introduce some notions in large deviations [ ] .let be a metric space .let , be a sequence of probability spaces and let be a sequence of measurable maps from to .let be a sequence of positive numbers tending to , and let ] is compact for any .then is said to satisfy a large deviation principle ( ldp ) with speed and with rate function , if for any open measurable subset of , and for any closed measurable subset of , assume that satisfies in law and a fluctuation theorem such as central limit theorem , that is , there exists a sequence such that in law , where is a constant and is a nontrivial random variable .usually , is said to satisfy a moderate deviation principle ( mdp ) if satisfies a large deviation principle , where is an intermediate scale between and , that is , and . in this paper ,the following exponential approximation lemma is required .it is slightly different from theorem 4.2.16 in .[ exp - eqv - lemma - basic ] let and , be sequences of measurable maps from to .assume that for each , satisfies a ldp with speed and with rate function .if and for any , the satisfies a ldp with speed and with rate function .set .for any closed subset , where . 
by ( [ exp - eqv - lemma - basic - eq-2 ] ) , for large and .therefore , for large and and so the argument for open sets is similar and is omitted .in this appendix , we give the proofs of several technique lemmas .the proofs of lemmas [ exp - moment - estimates - lem-1 ] and [ exp - eqv - lemma-1 ] are based some exponential moment inequalities for negatively associated random variables and martingales .the refined asymptotic analysis techniques play a basic role in the proof of lemma [ laplace - int - lem ] .proof of lemma [ exp - moment - estimates - lem-1 ] ( 1 ) by lemma [ exp - moment - inq - lem-1 ] , we have that for any , and , \hspace*{-2pt}&&\qquad\leq\frac{a(b(n))}{b(n)}\frac{r^2a^2(b(n))}{b(n ) } \\[-4pt ] \hspace*{-2pt}&&\qquad\quad{}\times\frac{1}{b(n ) } \sum _ { k\in m_{n\varrho}^{c } } \biggl ( \frac{2}{\varrho } np_{kn}e^{-n p_{kn}}+ \sum_{l=1}^j\frac{n!}{(n - l)!l ! } p_{kn}^l e^{-(n - l)p_{kn } } \biggr).\end{aligned}\ ] ] therefore , ( [ exp - moment - estimates - lem-1-eq-1 ] ) holds .\(2 ) similarly to the proof of ( [ exp - moment - estimates - lem-1-eq-1 ] ) , we also have that & & \qquad\leq\frac{r^2a^2(b(n))}{b(n ) } \frac{1}{b(n)}\sum_{k\in m_n^{lc } } \biggl ( \frac{2}{l } np_{kn}e^{-n p_{kn}}+ \sum _ { l=1}^j\frac{n!}{(n - l)!l ! }p_{kn}^l e^{-(n - l)p_{kn } } \biggr)\\[-4pt ] & & \qquad\to0.\end{aligned}\ ] ] finally , let us prove ( [ exp - moment - estimates - lem-1-eq-3 ] ) .by lemma [ exp - moment - inq - lem-1 ] , for any , & & \qquad\leq\sum_{k\in m_n^{lc } } \biggl ( \log \biggl ( \biggl(\exp \biggl\ { \frac{r na(b(n))}{b(n ) } p_{kn } \biggr\}-1 \biggr ) ( 1- p_{kn})^n+1 \biggr ) \\[-4pt ] & & \hspace*{126.5pt}\qquad\quad{}-\frac{r na(b(n ) ) } { b(n)}p_{kn}(1- p_{kn})^n \biggr).\end{aligned}\ ] ] therefore & & \qquad\leq 4\sum_{k\in m_n^{lc } } \biggl(\frac { r na(b(n))}{b(n ) } p_{kn } \biggr)^2e^{-n p_{kn}}i_{\ { { |r| na(b(n))}p_{kn}/{b(n)}\leq1 \ } } \\[-4pt ] & & \qquad\quad { } + 12\sum_{k\in m_n^{lc } } \exp \biggl\{\frac{2|r| na(b(n))}{b(n ) } p_{kn } \biggr\}e^{-n p_{kn}}i_{\ { { |r| na(b(n))}p_{kn}/{b(n)}\geq1 \ } } \\[-4pt ] & & \qquad\leq \frac{4 r^2 a(b(n))}{b(n ) } a_{nl}+ 24 |r| a_{n},\vadjust{\goodbreak}\end{aligned}\ ] ] where , and by the proof of lemma [ truncation - lem - l ] , . by ( [ truncation - lem - l - eq-1 ] ) , therefore , ( [ exp - moment - estimates - lem-1-eq-3 ] ) holds .proof of lemma [ laplace - int - lem ] it is known that where are independent random variables , and is poisson distributed with mean .then , using the partial inversion formula for characteristic function due to [ see also , ] , for any , where and it is obvious that . by stirling s formula , it suffices to show that for any , since uniformly in , we can write that for large enough , where . 
choose a positive function such that and , and define , and then , noting that for large enough , }(1-\cos u)\geq\tau^2(n)/4 ] , and therefore and so now , by we obtain ( [ laplace - int - lem - eq-5 ] ) .the proof of lemma [ laplace - int - lem ] is complete .proof of lemma [ exp - eqv - lemma-1 ] for any , we can write [ cf .( a.1 ) in ] therefore , it suffices to prove that and \\[-8pt ] & & \qquad\hspace*{242pt}\geq \varepsilon a\bigl(b(n)\bigr ) \biggr)\nonumber\\ & & \qquad=-\infty .\nonumber\end{aligned}\ ] ] let us first prove ( [ exp - eqv - lemma-1-eq-3 ] ) .set and .since are independent variables with mean zero and independent of , is a martingale , and by the maximal inequality for supermartingales , we have that for any , for any , and for any , take . then for large enough , therefore , for large enough , where and by ( [ mdp - lindeberg - condition ] ) , then , by under , and , and thus this yields ( [ exp - eqv - lemma-1-eq-3 ] ) by chebyshev s inequality .next , we show ( [ exp - eqv - lemma-1-eq-4 ] ) . noting that it suffices to show that for any , \\[-8pt ] & & \qquad=-\infty\nonumber\end{aligned}\ ] ] and since a similar argument to the proof of ( [ exp - eqv - lemma-1-eq-5 ] ) gives which implies ( [ exp - eqv - lemma-1-eq-6 ] ) .similarly , we can obtain ( [ exp - eqv - lemma-1-eq-7 ] ) .the author is very grateful to the editor , professor t. cai , the associate editor and an anonymous referee for their helpful comments and suggestions .the author is also thankful to the referee for recommending the reference . | in this paper , we consider moderate deviations for good s coverage estimator . the moderate deviation principle and the self - normalized moderate deviation principle for good s coverage estimator are established . the results are also applied to the hypothesis testing problem and the confidence interval for the coverage . |
in quantum physics, the state of the system can be unambiguously determined by measuring several select integrals of motion (observables conserved in time evolution); such observables are said to form a complete set of commuting observables (csco). mathematically, the spectra of the allowed values of the members of the set are the discrete sequences of real numbers , , that obey the following property: for any pair of indices , there exists at least one member of the csco (say ) for which the corresponding elements are non-degenerate (i.e. distinct, ). operationally, for any , the knowledge of the real-number sequence is sufficient to infer what was . in this paper we consider a situation where the indistinguishability occurs due to an insufficient accuracy of the detection. we assume that the spectra of the csco members are drawn from random processes, later chosen to be of the poisson type. we further assume that for a given observable , two of its possible values , and , can _not_ be resolved if they are separated by a distance less than the detection error , ; in this case, they are considered to be degenerate for all practical purposes. the principal goal of this paper is to determine the probability for two observables , and , to form a csco (see the examples depicted in figs. [ f : csco_novel_on ] and [ f : csco_novel_off ]). [ fig. [ f : csco_novel_on ] caption : and , forming a complete set of commuting observables (csco). even if a particular pair of measured values of one of the observables is degenerate (points in ovals), i.e. indistinct given the measurement error, it will not be degenerate vis-a-vis another observable. if both observables are measured with respective errors and , the state of the system , , can be determined unambiguously. ] [ fig. [ f : csco_novel_off ] caption : when and are the only two observables that are measured, the states and are indistinct. ] furthermore, we will concentrate on quantum systems with finite spectra of size , i.e. we will assume that a given system can be in any of available states, where is finite. in real-world applications, grows with the system size. this study is partially inspired by a related issue of the functional dependence between quantum observables (i.e. the ability to predict the value of one observable having measured the values of several others): while being a clear and extremely useful concept in classical physics, the practical significance of the notion of functional dependence in the quantum world is questionable at best. already in 1929, von neumann showed that for any quantum system with states, one can construct a set of as many as conserved quantities, each functionally independent of the others; this is provided that any number of degeneracies is allowed.
on the other hand, sutherland argued that any conserved quantity is functionally dependent on any other conserved quantity with a non-degenerate spectrum. being able to determine the state of the system amounts to being able to predict the value of any other observable. thus, from the functional dependence perspective, we are looking for the probability that all conserved quantities be functionally dependent on two others, chosen beforehand. consider two -element-long finite (ordered) sequences of real numbers ; both sequences are assumed to be -event-long randomly reshuffled fragments of a poisson process. let us elaborate on what this assumption means. consider the monotonically increasing permutations of the above sequences . according to the definition of the poisson process, the probability of finding exactly elements of the sequence in an interval of length is
\[
\mbox{\it prob}\bigl[\,k \mbox{ elements in an interval of length } {\cal i}\,\bigr]
= \frac{e^{-{\cal i}/\overline{\delta i^{(2)}}}\,\bigl({\cal i}/\overline{\delta i^{(2)}}\bigr)^{k}}{k!}\,,
\qquad
\overline{\delta i^{(2)}} \equiv \mbox{\it mean}\bigl[\tilde{i}^{(2)}_{n+1}-\tilde{i}^{(2)}_{n}\bigr]\,,
\]
and analogously for the first sequence. an important particular property of poisson processes is that the intervals between two successive elements of the sequence are mutually statistically independent, and they are distributed according to the _exponential_ law:
\[
\mbox{\it prob}\bigl[\tilde{i}^{(1)}_{n+1}-\tilde{i}^{(1)}_{n} > {\cal i}\bigr] = e^{-{\cal i}/\overline{\delta i^{(1)}}}\,,
\qquad
\mbox{\it prob}\bigl[\tilde{i}^{(2)}_{n+1}-\tilde{i}^{(2)}_{n} > {\cal i}\bigr] = e^{-{\cal i}/\overline{\delta i^{(2)}}}\,. \label{exponential_distribution}
\]
let us now return to the original sequences, which are random permutations of the monotonic ones:
\[
\bigl( i^{(2)}_{1},\,i^{(2)}_{2},\,\ldots,\,i^{(2)}_{n} \bigr)
= \mbox{random permutation}\bigl[ \bigl( \tilde{i}^{(2)}_{1},\,\tilde{i}^{(2)}_{2},\,\ldots,\,\tilde{i}^{(2)}_{n} \bigr) \bigr]\,,
\]
and analogously for the first sequence. it also follows from the properties of poisson processes that the two sequences can be approximately generated by dropping uniformly distributed random numbers on respective intervals of lengths and . for a given sequence, say , call a pair of elements , and , _degenerate_ if the difference between them is less than a given number , called the _detection error_: . assume that the sequence is known, and a device (emulating a process of quantum measurement) produces a particular element of it , . an observer is allowed to measure it in an attempt to determine what the index was (the analogue of a quantum state index), using the sequence as a look-up table. for a perfect measurement device, and in the absence of coinciding elements in the sequence, this task can be easily accomplished. if, however, the accuracy of the measurement is limited, and values separated by less than cannot be distinguished, the measured value may happen to belong to one of the degenerate pairs. if, further, there is no other degenerate pair that it belongs to, the index can only be said to be equal to _either_ _or_ . if there are other relevant degenerate pairs, the uncertainty can be even greater.
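the look-up-table identification procedure just described can be made concrete with a small illustrative sketch (python, not from the paper): given a measured value and a detection error, the candidate state indices are all entries of the known sequence lying within the error window; measuring a second observable and intersecting the two candidate sets removes the ambiguity unless the same pair of indices is degenerate for both observables. all numbers below are placeholders.

```python
# Illustrative sketch of state identification with finite detection errors
# (not from the paper): candidate indices are spectrum entries within the
# detector window; intersecting candidates from two observables resolves
# ambiguities unless a pair of indices is degenerate for both observables.
import numpy as np

def candidates(spectrum, measured_value, error):
    """Indices whose tabulated value is indistinguishable from the measurement."""
    return {i for i, v in enumerate(spectrum) if abs(v - measured_value) < error}

rng = np.random.default_rng(3)
n, mean_spacing = 8, 1.0
spec1 = rng.uniform(0.0, n * mean_spacing, size=n)   # randomly ordered model spectra
spec2 = rng.uniform(0.0, n * mean_spacing, size=n)
delta1, delta2 = 0.3, 0.3                            # detection errors (placeholders)

true_state = 5
c1 = candidates(spec1, spec1[true_state], delta1)
c2 = candidates(spec2, spec2[true_state], delta2)
print("candidates from observable 1:", sorted(c1))
print("candidates from observable 2:", sorted(c2))
print("after combining both        :", sorted(c1 & c2))
```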
however ,if for a given pair of indices , and , both and constitute the respective degenerate pairs , the indices and can never be resolved .the goal of this work is to determine the probability that no above ambiguities exist : = ? \,\ , .% \end{aligned}\ ] ] physically , we are interested in computing the probability that the observables and constitute a complete set of commuting observables ( csco ) .consider a length- fragment of a poisson process , with the mean spacing . in many respects , studying monotonically increasing sequences is easier than their random - order counterparts .this is fortunate since several important conclusions about the latter can be drawn using the former . as a model for a length- fragment of a poisson process with the mean spacing we use pseudorandom numbers uniformly distributed on an interval between 0 and .when ordered in the order it has been generated , the sequence serves as a model for a random permutation of a poisson sequence , , the primary object of interest . when rearranged to a monotonically increasing sequence , a model for a poisson sequence per se , ,is generated . to test the numerical method , we compare , in fig .[ f:_r__probabilityforapairtobedegeneratei ] , numerical and analytical predictions for the probability of two subsequent elements of the monotonically increasing sequence to form a degenerate pair , with respect to a detection error .the analytic prediction , = \\ & \qquad\qquad\qquad\qquad 1-e^{-\delta i/\overline{\delta i } } \stackrel{\delta i \ll \overline{\delta i}}{\approx } \frac{\delta i}{\overline{\delta i } } \,\ , , \label{p } \end{aligned } % \end{aligned}\ ] ] follows directly from one of the properties of the poisson sequences ( [ exponential_distribution ] ) .the numerical and analytical predictions agree very well .( i.e. to be degenerate ) . is the mean spacing between consecutive elements .red dots : numerical model , uncorrelated , uniformly - distributed random numbers on an interval , subsequently rearranged in a monotonically increasing succession .green line : theoretical prediction ( [ p ] ) .numerically , the spectrum consists of 100 elements ; the theoretical prediction is universal .numerical points result from averaging over 1000 monte carlo realizations . ] from ( [ p ] ) it also follows that the average number of degenerate pairs is where is the number of pairs of indices with two neighboring elements . in what follows ,the analytical predictions will be greatly simplified thanks to our ability to assume that degenerate pairs do not share elements , i.e. that a given element can either be isolated or belong to only one degenerate pair .it is clear that the probability of forming a degenerate cluster must be much smaller than the one for an isolated pair , because of the small probability of forming the latter . yet ,a quantitative assessment is due . assuming that that the individual spacings between the subsequent values of are statistically independent , we get : = ( 1-p)p^2 \,\ , .\label{p_c } \end{aligned } % \end{aligned}\ ] ] note that the above probability is the probability for the index to be at the left end of a cluster of degenerate pairs containing more than one pair .[ f:_r__probabilityforapairtobeleftpairofdegenerateclusteri ] compares the above prediction with _ ab initio _ numerics : the agreement is remarkable .the overall conclusion is that for a small probability of forming a degenerate pair , one can safely assume that all the degenerate pairs are isolated . 
[ figure caption : red dots: numerics. green line: theoretical prediction ( [ p_c ] ). it does not depend on the sequence length. the rest of the data is the same as in fig. [ f:_r__probabilityforapairtobedegeneratei ]. ] this result will be heavily used in a subsequent treatment of the more physical random-order sequences. it is instructive to also compute the probability of having no degenerate clusters at all, for two reasons. (a) this is the first measure in this study that characterizes the sequence as a set, with no reference to the order of the elements. the probability of no clusters we will obtain will therefore be identical to the one for the sequence , randomly ordered. on the other hand, the monotonically increasing sequences are conceptually simpler, and they allow for more intuition in designing analytical predictions. (b) using this study we will show that, while the spacings between neighboring members of the monotonic sequence are statistically independent, the degenerate pairs they produce are not statistically independent from the point of view of the randomly permuted sequence. even though the spacings between the consecutive elements of the sequence are statistically independent, the appearance of a cluster with its leftmost edge at an index will prevent the appearance of another cluster at . thus, appearances of clusters at different points are not statistically independent. nevertheless, for a very low cluster probability, a typical distance between the clusters becomes much larger than their typical length, and the above correlations can be neglected. our estimate for the probability of having no clusters at all then reads
\[
\mbox{\it prob}[\mbox{no clusters}] \approx ( 1-p_{c})^{n_{p}} \stackrel{p\to 0,\; n\to\infty,\; p^{2} n \to \mathrm{const}}{\approx} e^{-p_{c} n} \stackrel{p\to0}{\approx} e^{-p^{2} n}\,, \label{p_no_clusters_correct}
\]
where is given by ( [ n_p ] ). above, we neglected limitations on the cluster length closer to the right edge of the spectrum, since we are mainly interested in the limit . as we have mentioned above, the result ( [ p_no_clusters_correct ] ) applies to both the and sequences, being a characteristic of these sequences as a set, not as sequences with a particular order. it is tempting to reinterpret it as the probability that pairs chosen uniformly at random from the pairs of elements of an -member-strong set have no elements in common. the number of ways to choose pairs with no elements in common is ; the number of ways to choose any set of pairs is ; dividing the former by the latter we get the following expression: , where is given by ( [ m_bar ] ). comparing ( [ p_no_clusters_incorrect ] ) to ( [ p_no_clusters_correct ] ), one notices that the former overestimates the probability of cluster formation by a factor of two. an interpretation of this simple relationship requires future research. figure [ f:_r__probabilityofnoclustersi ] illustrates this difference. [ figure caption : red dots: numerics. green line: theoretical prediction ( [ p_no_clusters_correct ] ). blue line: a naive combinatorial hypothesis ( [ p_no_clusters_incorrect ] ). at small detector window widths, the formula involves factorials of large numbers and becomes numerically unreliable. purple line: the same as the blue line but using an asymptotic expression. the rest of the data is the same as in fig. [ f:_r__probabilityforapairtobedegeneratei ]. ]
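the single-sequence estimates ( [ p ] ) and ( [ p_no_clusters_correct ] ) can be checked with a few lines of monte carlo; the sketch below is illustrative (python/numpy, arbitrary sequence length, spacing and error) and mirrors the numerical model described above: uniform random points are sorted, neighboring spacings below the detector error count as degenerate pairs, and the fraction of realizations with no cluster of degenerate pairs is compared with the exponential estimate.

```python
# Monte Carlo check of the single-sequence estimates discussed above
# (illustrative parameters; not the authors' code).
import numpy as np

rng = np.random.default_rng(0)
n, mean_spacing, delta = 100, 1.0, 0.02      # sequence length, spacing, detector error
p = 1.0 - np.exp(-delta / mean_spacing)      # eq. (p): a neighboring pair is degenerate

n_real, n_deg, no_cluster = 20_000, 0, 0
for _ in range(n_real):
    seq = np.sort(rng.uniform(0.0, n * mean_spacing, size=n))
    degenerate = np.diff(seq) < delta        # flags for the n-1 neighboring pairs
    n_deg += degenerate.sum()
    # a "cluster" = two consecutive degenerate pairs sharing an element
    no_cluster += not np.any(degenerate[:-1] & degenerate[1:])

print("fraction of degenerate neighbors :", n_deg / (n_real * (n - 1)), "theory:", p)
print("prob. of no clusters             :", no_cluster / n_real,
      "theory ~", np.exp(-p**2 * n))
```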
the principal goal of our paper is to assess the ability of two quantum conserved quantities to serve as a csco set. the physical realization of our model will involve two conserved quantities in an integrable system. those are indeed known to realize poisson processes, but only after a permutation to a monotonically increasing sequence. for two observables, these permutations are completely unrelated. thus a physically more relevant model involves sequences that are represented by a random permutation of a poisson sequence . numerically, those are modeled by sets of real numbers randomly and independently distributed over an interval, with _no subsequent reordering_. to make a theoretical prediction for the probability for a given pair of two subsequent elements of the sequence to be degenerate, we first assume that the probability for degenerate pairs to share elements (_i.e._ form clusters of degenerate pairs) is low (see fig. [ f:_r__probabilityforapairtobeleftpairofdegenerateclusteri ] and the expression ( [ p_c ] )). in this case, one can assume that each degenerate pair constitutes a pair of consecutive elements in the monotonically increasing counterpart of the sequence. the probability of the latter is (with given by ( [ n_p ] )). this probability must then be multiplied by the probability of a neighboring pair in the monotonic sequence to be degenerate, . we get
\[
p_{\mbox{\scriptsize randomized}} = \frac{2}{n_{p}}\, p \stackrel{n\to\infty}{\approx} \frac{2}{n}\, p\,. \label{p_randomized}
\]
fig. [ f:_r__probabilityforapairtobedegenerateiunordered ] demonstrates that this prediction is indeed correct. [ figure caption : (i.e. to be degenerate). is the mean spacing between consecutive elements before the random permutation. red dots: numerical model, uncorrelated uniformly distributed random numbers on an interval, with _no further rearrangement_. green line: theoretical prediction ( [ p_randomized ] ). numerically, the spectrum consists of 100 elements. numerical points result from averaging over 1000 monte carlo realizations. ] we are now in a position to assess the ability of a single observable to serve as a csco. the probability of that is given by the probability of having no degenerate pairs at all. since this probability is a property of the sequence as a set, it can be estimated using the monotonically increasing counterpart. there, the probability of not having degenerate pairs is the probability that none of the pairs of consecutive indices are degenerate. we get
\[
\mbox{\it prob}[\mbox{no degenerate pairs}] = ( 1-p)^{n_{p}} \stackrel{p\to 0,\; n\to\infty,\; p n \to \mathrm{const}}{\approx} e^{-pn}\,. \label{p_csco_1}
\]
fig. [ f:_r__probabilityofnodegeneratepairsi ] shows that this probability approaches unity only for a detector error as low as the inverse sequence length. furthermore, the expression ( [ p_csco_1 ] ) shows that even if for a given the given observable does form a csco, for larger this property disappears. [ figure caption : physically, this probability corresponds to the probability for a given observable to form a complete set of commuting observables (csco). red dots: numerics. green line: theoretical prediction ( [ p_csco_1 ] ). the rest of the data is the same as in fig. [ f:_r__probabilityforapairtobedegeneratei ]. ]
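the same kind of monte carlo can be used for the randomly ordered sequences: the sketch below (again illustrative, with placeholder parameters) estimates the probability that a single observable has no degenerate pairs at all, to be compared with ( [ p_csco_1 ] ), and the probability that two independently generated spectra leave no pair of indices degenerate in both, anticipating the two-observable formula derived in the next paragraph.

```python
# Monte Carlo estimate of the single- and two-observable CSCO probabilities
# (illustrative sketch with placeholder parameters).
import numpy as np

rng = np.random.default_rng(1)
n, mean_spacing = 100, 1.0
d1 = d2 = 0.03                               # detection errors (placeholders)
p1, p2 = d1 / mean_spacing, d2 / mean_spacing

n_real, single_ok, double_ok = 5_000, 0, 0
for _ in range(n_real):
    s1 = rng.uniform(0.0, n * mean_spacing, size=n)
    s2 = rng.uniform(0.0, n * mean_spacing, size=n)
    deg1 = np.abs(s1[:, None] - s1[None, :]) < d1   # index pairs degenerate for obs. 1
    deg2 = np.abs(s2[:, None] - s2[None, :]) < d2
    np.fill_diagonal(deg1, False)
    np.fill_diagonal(deg2, False)
    single_ok += not deg1.any()                     # observable 1 alone is a CSCO
    double_ok += not (deg1 & deg2).any()            # the pair of observables is a CSCO

# with this window, a single observable rarely suffices, while two almost always do
print("single observable :", single_ok / n_real, "theory ~", np.exp(-p1 * n))
print("two observables   :", double_ok / n_real, "theory ~", np.exp(-2 * p1 * p2))
```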
we are finally ready to address the principal question posed: what is the probability that two observables with poisson spectra, subsequently independently randomly permuted, and , constitute a complete set of commuting observables (csco)? mathematically, the corresponding probability is
\[
\mbox{\it prob}[\mbox{csco}] = \bigl( 1-p_{\mbox{\scriptsize randomized}}^{(1)}\,p_{\mbox{\scriptsize randomized}}^{(2)}\bigr)^{{\cal n}_{p}} \stackrel{n\to\infty}{\approx} e^{-2\, p^{(1)}p^{(2)}}\,, \label{p_csco_2}
\]
where are the respective probabilities for a given pair to be degenerate (see ( [ p_randomized ] )), is the analogue for a monotonic sequence (see ( [ p ] )), and is the respective detection error. we assume the same spacing , , for both sequences. a naive combinatorial interpretation of this probability is the ratio between the number of ways in which pairs can be chosen from two -element-long sets in such a way that no two pairs coincide and its unrestricted analogue: , where is the number of pairs of objects ( [ cal_n_p ] ), and is the number of pairs of neighboring elements in a sequence of length (see ( [ n_p ] )). in this particular case the naive model generates the correct prediction. note that in this case no correlations between the degenerate pairs within a particular sequence are involved, hence the better result. figure [ f:_r__probabilityofnojointdegeneratepairsunordered ] allows one to compare the various predictions made above, for . notice that (a) already at , a csco is reached; (b) from the expression ( [ p_csco_2_2 ] ) one can see that for large , the probability ( [ p_csco_2_2 ] ) does not depend on the length of the spectrum, and thus the csco property persists for large systems. a particular illustration of this phenomenon is presented in fig. [ f:_r__probabilityofnojointdegeneratepairsunordered_vs_n ]. [ figure caption : ), a naive combinatorial hypothesis ( [ p_csco_2_2 ] ), and its large- asymptotic behavior, all three mutually indistinct. note that the former and the latter are analytically shown to coincide. ] [ figure caption : as in fig. [ f:_r__probabilityofnojointdegeneratepairsunordered ], the multicolored line corresponds at the same time to the theoretical prediction ( [ p_csco_2 ] ) and to the large- asymptotic behavior of the combinatorial expression ( [ p_csco_2_2 ] ), which are shown to coincide. the corresponding detector window sizes are and . ] in this work we show that, with a large probability, differing from unity only by the expression in ( [ p_csco_2_2 ] ) (see also figs. [ f:_r__probabilityofnojointdegeneratepairsunordered ] - [ f:_r__probabilityofnojointdegeneratepairsunordered_vs_n ]), two integrals of motion with poisson-distributed spectra form a complete set of commuting observables (csco); i.e. they can be used to unambiguously identify the state of the system, the detection error notwithstanding. for a given detector error, this probability converges to a fixed value as the number of states in the spectrum increases. the above result is contrasted with an analogous result for a single observable. there, (a) only for a very small detector error can this observable be used as a csco, and (b) for a fixed detector error, the probability of a csco falls to zero exponentially as the number of states in the spectrum increases. poisson spectra constitute a popular model for energy spectra of integrable (i.e.
exactly solvable) quantum systems with no true degeneracies. they can also be used to model the spectra of other integrals of motion of integrable systems, provided they are substantially functions of at least two quantum numbers. note that in generic non-integrable quantum systems, energy levels repel each other. there, even a poor energy detector, with an error as large as the energy level spacing, would be able to identify a state: energy can thus serve as a csco. therefore, the physical implication of our principal result can be formulated as follows: given a reasonably small (i.e. smaller than the mean spacing between levels) detector error, two generic integrals of motion of an integrable quantum system can be used as a complete set of commuting observables, no matter how large the system is. on the other hand, if only one integral of motion is used, the appearance of unidentifiable states is unavoidable for large systems. from the functional dependence perspective, we showed that in poorly measured integrable systems, all conserved quantities are (with a close-to-unity probability) functionally dependent on any two a priori chosen generic conserved quantities. results obtained in our article may find applications beyond quantum state identification. they should be generally applicable in a standard pattern recognition setting where an unknown object must be identified indirectly by one, two, or more attributes. for example, using our results, we can show that the chances of an ambiguity in identifying a person who was born on a particular date in a particular town within a specific time interval and who is currently living on a particular street in another town depend neither on the length of the time interval, nor on the number of streets in , but solely on the number of births per day in , the probability of a further migration to , the percentage of -born citizens in , and the average street population in ; in other words, this probability depends only on the relative measures, the absolute measures being irrelevant. the value of this probability is trivially obtainable from the central result of this paper. more generally, _for a large enough set of n patterns with two poisson-distributed numerical identifiers, each measured with a finite error, the probability of having no unidentifiable patterns depends only on the probability of a given value of one identifier to be indistinct from its neighbor above and the analogous probability for the other identifier, and not on the size of the set_. at the same time, if only a date of birth is known, the procedure will frequently produce ambiguous results for a long enough interval of time considered: here, the probability of finding two people born in a given town on the same date within this interval of time approaches unity. generally, a _single poisson identifier becomes unreliable for larger sets of patterns_. the authors shared an equal amount of work on the probabilistic models. e.m. provided the necessary numerical support, along with overseeing the overall logistics of the project. m.o. was solely responsible for the development of the combinatorial models. we are grateful to vanja dunjko and steven g. jackson for helpful comments and to maxim olshanii (olchanyi) for providing the seed idea for this project.
| in this article , we revisit the century - old question of the minimal set of observables needed to identify a quantum state : here , we replace the natural coincidences in their spectra by effective ones , induced by an imperfect measurement . we show that if the detection error is smaller than the mean level spacing , then two observables with poisson spectra will suffice , no matter how large the system is . the primary target of our findings is the integrable ( i.e. exactly solvable ) quantum systems whose spectra do obey the poisson statistics . we also consider the implications of our findings for classical pattern recognition techniques . |
actin network is one of the most relevant structural component inside cytoskeleton of eukaryotic cells , this have a highly dynamic mechanical properties in the cell .its dynamics is essential to many process , such as cell adhesion , mechanosensing and mechanotransduction , motility , and cell shape among others . at the same timethe development of actin based biomimetic systems at micro and nano - scale demand a deep understanding of their mechanical properties at large deformation where the non - linear mechanics effect encode effects that allow the tuning of unexpected effects .in the cell , the dynamics and mechanics of the actin cytoskeleton are regulated by upward sixty known actin - binding proteins ( abps ) defining different levels with emergents behaviour with strong implications in physiological and pathological conditions . in particular the calponin abp was discovered in smooth muscle cells and was studied as possible regulator of actomyosin interaction .non - muscle calponin are known to be involved in actin stabilisation , stress - fibres formation and increase the tensile strength of the tissue under strain among others effect . despite the growing evidence on the role performed by calponin as a structural stabilisers the underlying micro - mechanics still unknown . to study the mechanical effect developed by calponin over the actin structure studied an in - vitro f - actin network crosslinked with calponin to gain insights about its mechanical properties using large deformation rheology .they found that the networks with calponin are able to reach a higher failure stress and strain while reducing the pre - strain of the network .calponin delays the onset of network stiffening , something observed in polymer networks with increased flexibility .they also observed that the micro - structural origin of these behavior was related to the decrease on the persistence length at single filament level . in order to explain the observed effect reported by we develop a mathematical model within the framework of continuum mechanics . in this model, we introduce the hypothesis by which the difference between the two networks , with and without calponin , is interpreted as a alterations in the pre - strain developed by the network entanglement and the regulation of the crosslinks adhesion energy . as a consequence , when the network have a higher internal pre - strain their crosslinks are also near the fluidisation transition . according to the observed stress - strain relationship at the concentrations of cross - linkers and actin used in the experiment, it can be assumed continuum strain , with elasticity originating from the entropic nature of the individual polymers . in the first part of this workwe describe the mechanics of actin networks with rigid crosslinks , using the wormlike chain model in the form proposed by and further developed by , whereas the network is described using an homogenized continuum framework based on the eight chain network .afterwards , we introduce the inelastic effect driven by the crosslinks as alterations in the contour length of the f - actin network . 
to define the phenomenological law that drives the changes in the contour length , we propose a simple model for the gelation process of the network based on the interactions between the entanglement and adhesion energy of crosslinkers .finally we compare with experimental measurements performed by be a fixed reference configuration of the continuos body of interest ( assumed to be stress free ) .we use the notation for the deformation , which transforms a typical material point to a position in the deformed configuration .further , let be the deformation gradient and the local volume ratio . consider a multiplicative split of into spherical ( dilatational ) part , and a uni - modular ( distortional ) part .note that . )eight chains model . ) semi - flexible bundle filaments in which the contour length is defined as the distance between the crosslinks . ) unbinding probabiity.,width=226 ] we use the right and left cauchy - green tensors denoted by and , respectively , and their modified counterparts associated with , and , respectively . hence , the mechanical behavior of the f - actin cross - linked network is modelled by means of a strain energy function ( sef ) based on the wormlike chain model for semi - flexible filaments .this model , proposed firstly by and later in a polynomial and homogenised form by , is defined in terms of four physical parameters related to the network architecture and network deformation ( see figure [ dynamic_net ] ) : i ) the contour length , ; ii ) the persistence length , ; iii ) the end - to - end length at zero deformation , , associated with the network prestress ; and iv ) the macroscopic network stretch from the condition of zero force , . with ,\label{eq_w1}\end{aligned}\ ] ] where is the filament end - to - end distance , the first invariant of . using standard procedures from continuum mechanics , the cauchy stress , , can be derived from direct differentiation of eq .[ eq_w ] with respect to \left[\frac{\frac{l_{c}}{l_{p}}-6\left(1-\frac{\lambda r_{0}}{l_{c}}\right)}{\frac{l_{c}}{l_{p}}-2\left(1-\frac{\lambda r_{0}}{l_{c}}\right)}\right]\left(\bar{\mathbf{b}}-\frac{1}{3}\bar{i}_1\mathbf{i}\right)+p\mathbf{i } , \end{split}\ ] ] where is a constitutive relation for the dilatational part of . for a simple shear test , , and the stress - strain relation results \left[\frac{\displaystyle{\frac{l_{c}}{l_{p}}}-6\left(1-\frac{\lambda r_{0}}{l_{c}}\right)}{\displaystyle{\frac{l_{c}}{l_{p}}}-2\left(1-\frac{\lambda r_{0}}{l_{c}}\right)}\right]\gamma . \\[2pt]\ ] ] as mentioned previously , in vivo or in vitro actin networks experience prestress during network formation and remodelling due to the entanglement and the formation and disruption of crosslinks . 
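To make the fixed-crosslink part of the model concrete, the sketch below evaluates the bracketed stiffening factor quoted above and the resulting shear stress-strain curve. The eight-chain stretch under simple shear is taken as the standard result, and since the modulus prefactor multiplying the bracket is elided in the extracted text it is set to one, so stresses are in arbitrary units; all parameter values are hypothetical.

```python
import numpy as np

def eight_chain_stretch(gamma):
    """Eight-chain stretch under simple shear: lambda = sqrt(I1/3) = sqrt(1 + gamma^2/3)."""
    return np.sqrt(1.0 + gamma**2 / 3.0)

def shear_stress(gamma, l_c, l_p, r0, prefactor=1.0):
    """Normalized shear stress of the network with rigid (fixed) crosslinks.

    The bracketed stiffening factor is the one quoted in the text; the modulus
    in front of it is elided there, so it defaults to 1 (arbitrary units)."""
    lam = eight_chain_stretch(gamma)
    x = 1.0 - lam * r0 / l_c              # distance to the extensibility limit
    stiffening = (l_c / l_p - 6.0 * x) / (l_c / l_p - 2.0 * x)
    return prefactor * stiffening * gamma

gammas = np.linspace(0.0, 1.0, 201)
for r0 in (0.3, 0.5, 0.7):                # hypothetical end-to-end lengths (in units of l_c)
    tau = shear_stress(gammas, l_c=1.0, l_p=10.0, r0=r0)
    print(f"r0/l_c = {r0}: stress at gamma = 1 -> {tau[-1]:.2f} (arbitrary units)")
```

With these assumed parameters a larger initial end-to-end length (i.e. a more pre-strained network) gives a stiffer response, consistent with the qualitative description above.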
in order to account for these effects ,we introduce a passive prestress , in the network model , through the parameter where is a dimensionless parameter associated with the passive prestress .the network is buildup by the interaction between the actin filaments and the crosslinkers .the nature of this interaction defines the mechanical properties of the structure .if these interactions are stable ( for the stress and the time scales of the experiments ) , they provide a strong gelation process and the meshwork shows a solid - like behaviour under deformation .if on the contrary , the crosslinks are not completely stable , but they are associated with a reaction that can proceed in both directions , folding / unfolding or flexible / rigid states of the crosslink , we then speak of a weak gelation process with the meshwork showing a fluid - like behaviour without manifesting a complete unbinding .clearly if the level of deformation exerted over the crosslink exceeds a given threshold it will break irreversibly .these effects are account within the model via the contour length , .we propose the following expression for where defines the unfolding probability encompassing the states of unfolding or flexible cross - link ( see bellow ) , represents the contour length when , and represents the average increment of the contour length when the unbinding probability is one .chemical crosslinks can be modelled as a reversible two - state equilibrium process : where the binding probability encompassing the states folding or rigid cross - link . since only these two states are possible , then .the two - state model has the folded state as the preferred low free energy equilibrium state at zero force and the unfolded state as the high free energy equilibrium state at zero force . represents the difference in the free - energy between these states , represents the external mechanical work that induce the deformation of the crosslink , and represents the thermal energy .as we are only able to measure macroscopic quantities as stress and strain and we aim to develop a constitutive model in the mesoscale , we propose the next phenomenological expression , using the previous expression as motivation : },\ ] ] where the main driving force is , the average stretch over the bundle . in order to simplify the mathematical treatment , we consider to be proportional to the free adhesion energy .parameter gives us an idea of the sharpness of the transition between states and is the strain at which the probability of unbinding transition is 0.5 . if , the network is easy to be remodelled showing a more fluid - like behavior . on the contrary , if , the crosslink is more stable and the probability of transition is low .consequently the network behaves as a solid - like structure .showing an extension of the solid - like regime.,width=264 ] the model of f - actin network with weak crosslinks subjected to simple shear can be summarized as follows : \left[\frac{\displaystyle{\frac{l_{c}}{l_{p}}}-6\left(1-\frac{\lambda r_{0}}{l_{c}}\right)}{\displaystyle{\frac{l_{c}}{l_{p}}}-2\left(1-\frac{\lambda r_{0}}{l_{c}}\right)}\right]\gamma , \nonumber\ ] ] }. 
\nonumber\ ] ] in order to describe qualitatively the behaviour of the coupled set of equations under alterations in the pre - strain and the adhesion energy of the crosslinks we evaluate them in the regime of semi - flexible response i.e .figure [ quant_resp].a describes the effect of an increment on the pre - strain ( 1+ ) on network response , with the remaining parameters kept constant . as can be observed , as the pre - strain increases , the network stiffness increases and is able to reach a higher level of stress ( higher yield point ) .however , the yielding point ( fluidization of the network ) occurs earlier reducing the solid - like regime of the network .figure [ quant_resp].b shows the response of the network for different values of .contrary to the pre - strain , as increases the initial stiffness of the network remains unaltered while the yielding stress and strain increase , extending the solid - like regime .this implies that as increases the crosslinks become more stable .the proposed theory is used to describe the experiments conducted by on the artificially reconstituted f - actin networks crosslinked with filamin , where the network has an actin decorated with and without calponin .we evaluate the proposed model for the set of parameters shown in table [ table1 ] identified by means of nonlinear least - square fit to experiments of monotonic large deformation stress - strain shear tests from .the parameters of the model shown in table [ table1 ] can be divided in two types : ( i ) rigid - wormlike chain parameters , , and which are of the order of magnitude of the values used to describe in experiments of in - vitro f - actin networks and to keep on the regime the regime of semi - flexible entropic elasticity .( ii ) parameters associated with the dynamics of the crosslinks and .these parameters encode the transitions to induce fluidisation of the network and represent an indirect measure of the adhesion force of crosslinks .while describes the transition point in the contour length of the network meshsize ( average distance between crosslinks ) , describes the sharpness of this transition .these values are phenomenological and were identified in order to fit the experimental data . ., width=283 ] we compared the results obtained by the model with the average curve of monotonic simple shear at a constant shear strain rate .the network stiffening response begins at low levels of strain and continues almost linearly until reaching a maximum critical shear stress .after that point , the response change towards a regime of stress - strain softening where the stress decreases slowly towards zero as the shear strain increases , and the structure flows . for low levels of calponinthe networks present a higher level of pre - strain as noted by the higher slope at low level of deformation , which can also can be observed in the figure [ res].c as a larger value of . for low levels of calponin strain stiffening occurs until , with a maximum stress ( see figure [ res].a ) . 
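The crosslink contribution to the coupled system summarized above can be sketched with a logistic unfolding probability driven by the chain stretch. The precise functional form, the use of the stretch rather than the shear strain as the driving variable, and all parameter values are assumptions made for illustration; the half-transition stretch stands in for the crosslink adhesion energy (a larger value corresponds to a more stable crosslink).

```python
import numpy as np

def eight_chain_stretch(gamma):
    return np.sqrt(1.0 + gamma**2 / 3.0)

def unfolding_probability(gamma, lam_half, sharpness):
    """Two-state (folded/unfolded) probability, logistic in the bundle stretch."""
    lam = eight_chain_stretch(gamma)
    return 1.0 / (1.0 + np.exp(-(lam - lam_half) / sharpness))

def contour_length(gamma, l_c0, delta_l_c, lam_half, sharpness):
    """Effective contour length: folded value plus the unfolded increment."""
    return l_c0 + unfolding_probability(gamma, lam_half, sharpness) * delta_l_c

gammas = np.linspace(0.0, 4.0, 801)
for lam_half in (1.2, 1.6):            # weak vs. strong crosslink adhesion (hypothetical)
    p_u = unfolding_probability(gammas, lam_half, sharpness=0.05)
    onset = gammas[np.argmax(p_u >= 0.5)]   # first sampled strain with P_u >= 0.5
    print(f"lam_half = {lam_half}: half of the crosslinks unfolded by gamma ~ {onset:.2f}")
```

Substituting `contour_length(gamma, ...)` for the fixed `l_c` of the previous sketch gives the coupled stress-strain response of the weak-crosslink model; the printout illustrates that a larger adhesion (transition stretch) delays the onset of remodelling, which is the mechanism invoked above for the extension of the solid-like regime.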
on the contrary , for higher levels of calponin , the response shows a lower level of pre - strain ( lower initial stiffness ) , but the yielding point occurs at a larger deformation and higher stress ( experimental values between 3 and 6 ) as shown in figure [ res].a , indicating that calponin extends the solid - like behaviour of the network .this is a remarkable result with respect to pure f - actine networks for which a lower level of pres - train increases the solid - like regime , but reduces the yield stress significantly .figure [ res].b shows the scaled stress - strain curve with respect to the yield point .this figure demonstrates the ability of the model to describe the nonlinear effects .in order to explain the effects of calponin in stabilisation of the crosslinked f - actin networks our propose that the observed effect which promotes by modification in the flexural rigidity , and as a consequence the persistence length , at single filament will integrate at network scale as a change in the internal pre - stress and the adhesion energy over the crosslinks .we condense these observations in the figure [ calp_eff ] . during the gelation process, the network entanglement induces pre - stress across the network which is propagated through the bundles to the chemical crosslinks .the trapped stress in the structure is compensated by the deformation of the bundle and the chemical crosslinks . as a consequence ,it is potentially able to induce conformational changes over the crosslink structure , as was described by .the red and black dots in figure [ calp_eff ] describe the effect of the pre - stress on the energy landscape of the chemical crosslinks .qualitatively , results easy to see that the energy gap is lower under the presence of pre - stress on the crosslink , where the adhesion energy increases , changing from to , with . this can be understood as a combined action of two mechanical regulation pathways over the bonds reaction , where : represents the mechanical work induced by the macroscopic deformation that propagates through the network down to the crosslink . represents the mechanical work introduced during the entanglement also deforms the crosslink structure .on one side , without calponin the bundles are less flexible , and therefore network deformation will deform more the crosslink , which in terms of the energy landscape , it appears as tilted down ( se figure [ calp_eff ] ) , and therefore facilitates the conformational change from the folded configuration to the unfolded configuration .under the perspective of the proposed model , it implies that is shifted to the left , and the pre - strain is higher .therefore , the network is closer to the fluidisation transition and the maximum stress ( yield stress ) is reduced . on the contrary , if the concentration of calponin is higher , the bundle flexibility increases and persistence length decrease implying that the effects associated with trapped stress will be more concentrated on the bundle deformation than on the crosslinks . in consequence the adhesion energy of the crosslink is higher ( more difficult to jump from the folded to the unfolded configuration ) and the level of shear deformation that the network is able to achieve before the solid - fluid transition results also is also higher . 
in summary , the proposed model provides arguments to describe the changes observed in the flexibility of the actin bundles encoded the effects at network scale that drives the increment of adhesion and the whole stabilisation of the structure .this will help to gain better understanding in the complex mechanics behind of the cytoskeleton - like structural building blocks and synthetic bio - structures ..model parameters to fit the experiments of [ cols= " < , < , < " , ] [ table1 ]16 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 m. jensen , j. watt , j. hodgkinson , c. gallant , s. appel , m. el - mezgueldi , t. angelini , k. morgan , w. lehman , and j. moore .effects of basic calponin on the flexural mechanics and stability of f - actin ._ cytoskeleton _ , 690 ( 1):0 4958 , 2012 . h. lpez - menndez and j. f. rodrguez .microstructural model for cyclic hardening in f - actin networks crosslinked by -actinin ._ journal of the mechanics and physics of solids _ , 91:0 2839 , 2016 .jh shin , ml gardel , l mahadevan , p matsudaira , and da weitz .relating microstructure to rheology of a bundled and cross - linked f - actin network in vitro ._ proceedings of the national academy of sciences of the united states of america _ , 1010 ( 26):0 96369641 , 2004 . | the synthetic actin network demands great interest as bio - material due to its soft and wet nature that mimic many biological scaffolding structures . the study of these structures can contribute to a better understanding of this new micro / nano technology and the cytoskeleton like structural building blocks . inside the cell the actin network is regulated by tens of actin - binding proteins ( abp s ) , which makes for a highly complex system with several emergent behavior . in particular calponin is an abp that was identified as an actin stabiliser , but its mechanism is poorly understood . recent experiments using an in vitro model system of cross - linked actin with calponin and large deformation bulk rheology , found that networks with basic calponin exhibited a delayed onset and were able to withstand a higher maximal strain before softening . in this work we propose a mathematical model into the framework of nonlinear continuum mechanics to explain the observation of jansen , where we introduce the hypothesis by which the difference between the two networks , with and without calponin , is interpreted as a alterations in the pre - strain developed by the network entanglement and the regulation of the crosslinks adhesion energy . the mechanics of the f - actin is modelled using the wormlike chain model for semi - flexible filaments and the gelation process is described as mesoscale dynamics for the crosslinks . the model has been validated with reported experimental results from literature , showing a good agreement between theory and experiments . f - actin networks , calponin , chemical crosslinks , adhesion , non - linear rheology , allosteric materials |
the growing interest in the interdisciplinary physics of complex systems , has focussed physicists attention on agent - based modeling of social dynamics , as a very attractive methodological framework for social sciences where concepts and tools from statistical physics turn out to be very appropriate for the analysis of the collective behaviors emerging from the social interactions of the agents .the dynamical social phenomena of interest include residential segregation , cultural globalization , opinion formation , rumor spreading and others .the question that motivates the formulation of axelrod s model for cultural dissemination is how cultural diversity among groups and individuals could survive despite the tendencies to become more and more alike as a result of social interactions .the model assumes a highly non - biased scenario , where the culture of an agent is defined as a set of equally important cultural features , whose particular values ( traits ) can be transmitted ( by imitation ) among interacting agents .it also assumes that the driving force of cultural dynamics is the `` homophile satisfaction '' , the agents commitment to become more similar to their neighbors .moreover , the more cultural features an agent shares with a neighbor , the more likely the agent will imitate an uncommon feature s trait of the neighbor agent . in other words , the higher the cultural similarity , the higher the social influence . the simulations of the model dynamics show that for low initial cultural diversity , measured by the number of different traits for each cultural feature ( see below ) , the system converges to a global cultural state , while for above a critical value the system freezes in an absorbing state where different cultures persist .the ( non - equilibrium ) phase transition between globalization and multiculturalism was first studied for a square planar geometry , but soon other network structures of social links were considered , as well as the effects of different types of noise ( `` cultural drift '' ) , external fields ( modeling _ e.g. _ influential media , or information feedback ) , and global or local non - uniform couplings . in all those extensions of axelrod s model mentioned in the above paragraph , the cultural dynamics occurs on a network of social contacts that is fixed from the outset . however , very often social networks are dynamical structures that continuously reshape . a simple mechanism of network reshaping is agents mobility , and a scenario ( named the axelrod - schelling model ) where cultural agents placed in culturally dissimilar environments are allowed to move has recently been analyzed . in this model ,new interesting features of cultural evolution appear depending on the values of a parameter , the ( in-)tolerance , that controls the strength of agents mobility . a different mechanism of network reshaping has been considered in , where a cultural agent breaks its link to a completely dissimilar neighbor , redirecting it to a randomly chosen agent . at variance with the mobility scenario of the axelrod - schelling model , that limits the scope of network structures to clusters configurations on the starting structure ( square planar lattice , or others ) , the rewiring mechanism allows for a wider set of network structures to emerge in the co - evolution of culture and social ties . 
in this paperwe introduce in the scenario of network rewiring a tolerance parameter controlling the likelihood of links rewiring , in such a way that the limit recovers the case analyzed in , where only links with an associated null cultural overlap are broken .lower values of correspond to less tolerant attitudes where social links with progressively higher values of the cultural overlap may be broken with some probability that depends on these values .the results show a counterintuitive dependence of the tolerance on the critical value . on one hand , as expected from , rewiring promotes globalization for high values of the tolerance , but on the other hand , very low values of ( which enhance the rewiring probability ) show the higher values of .indeed , a non monotonous behavior is observed in : our results unambiguously show that for some intermediate values of the tolerance , cultural globalization is disfavored with respect to the original axelrod s model where no rewiring of links is allowed . in other words , rewiring does not always promote globalization . on the other hand ,the resulting network topology depends on , changing from a poisson connectivity distribution to a fat tailed distribution for .as in axelrod s model , the culture of an agent is a vector of integer variables ( ) , called cultural _ features _ , that can take on values , , the cultural _ traits _ that the feature can assume .the cultural agents occupy the nodes of a network of average degree whose links define the social contacts among them .the dynamics is defined , at each time step , as follows : * each agent imitates an uncommon feature s trait of a randomly chosen neighbor with a probability equal to their _ cultural overlap _ , defined as the proportion of common cultural features , where denotes the kronecker s delta which is 1 if and 0 otherwise .the whole set of agents perform this step in parallel . *each agent disconnects its link with a randomly chosen neighbor agent with probability equal to its _ dissimilarity _ , provided the dissimilarity exceeds a threshold ( _ tolerance _ ) , and rewires it randomly to other non - neighbor agent .the tolerance is a model parameter .first we note that the initial total number of links in the network is preserved in the rewiring process , so the average degree remains constant .however , the rewiring process allows for substantial modifications of the network topological features , _e.g. _ connectedness , degree distribution , etc . 
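A minimal agent-based sketch of one sweep of these dynamics is given below. The update is performed agent by agent rather than in parallel (a simplification of the rule stated above), and the initial graph, population size, and parameter values are hypothetical choices made for illustration.

```python
import random
import networkx as nx

def overlap(culture, i, j):
    """Cultural overlap: fraction of common features between agents i and j."""
    return sum(a == b for a, b in zip(culture[i], culture[j])) / len(culture[i])

def sweep(G, culture, Z, rng=random):
    """One sweep of imitation plus tolerance-controlled rewiring (sequential update)."""
    nodes = list(G.nodes())
    for i in nodes:
        neighbors = list(G.neighbors(i))
        if not neighbors:
            continue
        # imitation: copy one uncommon trait with probability equal to the overlap
        j = rng.choice(neighbors)
        w = overlap(culture, i, j)
        if 0.0 < w < 1.0 and rng.random() < w:
            differing = [f for f in range(len(culture[i])) if culture[i][f] != culture[j][f]]
            culture[i][rng.choice(differing)] = culture[j][rng.choice(differing)] if False else culture[j][differing[rng.randrange(len(differing))]]
        # rewiring: cut a dissimilar link (if dissimilarity exceeds the tolerance Z)
        j = rng.choice(list(G.neighbors(i)))
        dissimilarity = 1.0 - overlap(culture, i, j)
        if dissimilarity > Z and rng.random() < dissimilarity:
            candidates = [k for k in nodes if k != i and not G.has_edge(i, k)]
            if candidates:
                G.remove_edge(i, j)
                G.add_edge(i, rng.choice(candidates))

# hypothetical run: N = 100 agents, F = 3 features, q = 15 traits, <k> = 4, tolerance Z = 0.5
N, F, q, Z = 100, 3, 15, 0.5
G = nx.gnm_random_graph(N, 2 * N, seed=1)      # 2N edges, so average degree 4
culture = [[random.randrange(q) for _ in range(F)] for _ in range(N)]
for _ in range(200):
    sweep(G, culture, Z)
```

Note that the total number of edges is conserved by construction, as required by the model.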
in that respect , except for the limiting situation of very low initial cultural diversity and a very high tolerance ( where the likelihood of rewiring could be very low ) , one should expect that the choices for the initial network of social ties have no influence in the asymptotic behavior of the dynamics .when the threshold tolerance satisfies , only those links among agents with zero cultural overlap are rewired , so the model becomes the one studied in .on the other hand , when the tolerance takes the value , there is not rewiring likelihood and the original axelrod s model is recovered .when rewiring is always possible provided the cultural similarity is not complete , _ , so that it corresponds to the highest intolerance .the usual order parameter for axelrod s model is , where is the average ( over a large number of different random initial conditions ) of the number of agents sharing the most abundant ( dominant ) culture , and is the number of agents in the population .large values of the order parameter characterize the globalization ( cultural consensus ) regime .we also compute the normalized size of the largest network component ( _ i.e. _ , the largest connected subgraph of the network ) .we have studied networks of sizes , ; averaging over - replicas .the considered cultural vectors have cultural features , each one with a variability - .we studied different values of the tolerance threshold and different values of the average connectivity .each simulation is performed for , , , , and fixed .for the sake of comparison with previous results , we will present results for .the behavior of the order parameter for different values of is seen in fig .[ k4_diff_z ] . like in ,three different macroscopic phases are observed with increasing values of , namely a monocultural phase , with a giant cultural cluster , a multicultural one with disconnected monocultural domains , and finally a multicultural phase with continuous rewiring .the nature of the latter phase has been successfully explained in : at very large values of the initial cultural diversity , the expected number of pairs of agents sharing at least one cultural trait becomes smaller than the total number of links in the network , so that rewiring can not stop .here we will focus attention on the first two phases and the transition between them . in figure [ histog ]we show the size distribution of the dominant culture over different realizations , measured for different values of , at a particular fixed value of the tolerance . in the region of values near the transition from globalization to multiculturalism ,the distribution is double peaked , indicating that the transition is first order , as in the original axelrod s model .the transition value , , may be roughly estimated as the value where the peaks of the size distribution are equal in height .the estimates of the transition points for different values of the tolerance are shown in fig .[ qc_vs_z ] .the non monotonous character of the graph seen in this figure reveals a highly non trivial influence of the tolerance parameter on the co - evolution of cultural dynamics and the network of social ties .let us first consider the ( most tolerant ) case that , except for the system size , coincides exactly with the situation considered in , _i.e. _ , only links with zero cultural overlap are rewired . 
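The two observables used above can be evaluated directly on the state produced by the previous sketch: the normalized size of the dominant culture and the normalized size of the largest network component. Averaging them over many random initial conditions while scanning the initial cultural diversity is what locates the transitions discussed here; the snippet below only shows the per-realization measurement.

```python
from collections import Counter
import networkx as nx

def dominant_culture_fraction(culture):
    """S_max / N: fraction of agents sharing the most abundant culture vector."""
    counts = Counter(tuple(c) for c in culture)
    return max(counts.values()) / len(culture)

def largest_component_fraction(G):
    """Normalized size of the largest connected subgraph of the social network."""
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

# `G` and `culture` are the structures produced by the previous sketch
print("S_max / N            :", dominant_culture_fraction(culture))
print("largest component / N:", largest_component_fraction(G))
```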
as discussed in , for values larger than the critical value for a fixed network ( ), rewiring allows redirecting links with zero overlap to agents with some common cultural trait ( compatible agents ) , so reinforcing the power of social influence to reach cultural globalization .once all links connect compatible agents , rewiring stops . from there on , the network structure will remain fixed , and globalization will be reached with the proviso that the network has so far remained connected .this is the case for most realizations ( for ) up to values of .increasing further the cultural diversity , increases the frequency of rewiring events and slows down the finding of compatible agents , favoring the topological fragmentation into network components before rewiring stops . under these conditions ,the asymptotic state will consist of disconnected monocultural components .on one hand , network plasticity allows to connect compatible agents , so promoting globalization ; but on the other hand it may produce network fragmentation , so favoring multiculturalism .what we have seen in the previous paragraph is that for the first effect prevails over the second one up to .going from there to less tolerant situations ( decreasing ) , increases the likelihood of rewiring , making easier that network fragmentation occurs before rewiring stops .this has the effect of decreasing the critical value .in fact , from fig. [ qc_vs_z ] we see that for and multiculturalism prevails for cultural diversities where the original axelrod s model shows cultural globalization . in these casesnetwork plasticity promotes multiculturalism in a very efficient way : agents segregate from neighbors with low cultural similarity and form disconnected social groups where full local cultural consensus is easily achieved , for values low enough to allow a global culture in fixed connected networks . for very low values of the tolerance parameter ,though network fragmentation occurs easily during the evolution , fig .[ qc_vs_z ] shows that globalization persists up to very high values of the initial cultural diversity . to explain this seemingly paradoxical observation, one must realize that network fragmentation is not an irreversible process , provided links connecting agents with high cultural overlap have a positive rewiring probability . under these circumstances , transient connections among different componentsoccur so frequently so as to make it possible a progressive cultural homogenization between components that otherwise would have separately reached different local consensuses .[ time.evolution ] illustrates the time evolution for and different values of .panel ( a ) shows an example of cultural evolution where network fragmentation reverts to a connected monocultural network for . 
panel ( b ) , that corresponds to , shows that social fragmentation persists during the whole evolution , while in panel ( c ) , which corresponds to the most tolerant situation ( ) , the network remains connected all the time .the degree distribution of the network is poissonian centered about for all values , except for where it becomes fat tailed , with several lowly connected ( and disconnected ) sites .for very high values , in the dynamical phase , the network rewiring is esentially random , so is again poisson like , centered around .in this paper we have generalized the scenario for co - evolution of axelrod s cultural dynamics and network of social ties that was considered in , by introducing a tolerance parameter that controls the strength of network plasticity .specifically , fixes the fraction of uncommon cultural features above which an agent breaks its tie with a neighbor ( with probability equal to the cultural dissimilarity ) , so that , the lower the value , the higher the social network plasticity .our results show that the network plasticity , when controlled by the tolerance parameter , has competing effects on the formation of a global culture .when tolerance is highest , network plasticity promotes cultural globalization for values of the initial cultural diversity where multiculturalism would have been the outcome for fixed networks . on the contrary , for intermediate values of the tolerance , the network plasticity produces the fragmentation of the ( artificial ) society into disconnected cultural groups for values of the initial cultural diversity where global cultural consensus would have occurred in fixed networks . for very low values of the tolerance ,social fragmentation occurs during the system evolution , but the network plasticity is so high that it allows the final cultural homogenization of the transient groups for very high values of the cultural diversity .intermediate tolerances promote multiculturalism , while both extreme intolerance and extreme tolerance favor the formation of a global culture , being the former more efficient than the latter .this work has been partially supported by micinn through grants fis2008 - 01240 and fis2009 - 13364-c02 - 01 , and by comunidad de aragn ( spain ) through a grant to fenol group .y. m. was partially supported by the fet - open project dynanets ( grant no .233847 ) funded by the european commission and by comunidad de aragn ( spain ) through the project fmi22/10 . | starting from axelrod s model of cultural dissemination , we introduce a rewiring probability , enabling agents to cut the links with their unfriendly neighbors if their cultural similarity is below a tolerance parameter . for low values of tolerance , rewiring promotes the convergence to a frozen monocultural state . however , intermediate tolerance values prevent rewiring once the network is fragmented , resulting in a multicultural society even for values of initial cultural diversity in which the original axelrod model reaches globalization . |
consider the elliptic self - adjoint operator where are smooth functions in , a smooth bounded open subset of , satisfying for some .it is well - known that the minimum value in the rayleigh - ritz variational formula is attained at some function satisfying the number is usually referred to as the principal eigenvalue of in and is the corresponding principal eigenfunction . for operators of the form and also more general linear operator in divergence form there is a vast literature on computational methods for the principal eigenvalue , see for example , , , .general non - divergence type elliptic operators , namely are not self - adjoint and the spectral theory is then much more involved : in particular , the rayleigh - ritz variational formula is not available anymore . in the seminal paper by m.d .donsker and s.r.s .varadhan , a min - max formula for the principal eigenvalue of a class of elliptic operators including ( [ lnonself ] ) was proved , namely in that papers other representation formulas for were also proposed in terms of large deviations and of the average long run time behavior of the positive semigroup generated by .a further crucial step in that direction is the paper by h. berestycki , l. nirenberg and s.r.s .varadhan , where the validity of formula ( [ pe2intro ] ) is proved under mild smoothness assumptions ( a bounded open set and , , ) .moreover it is proved that is equivalent to following this path of ideas , notions of principal eigenvalue for fully nonlinear uniformly elliptic operators of the form = f(x , u(x ) , du(x ) , d^2 u(x))\ ] ] have been introduced and analyzed in , , , , , . a by now established definition of principal eigenvalue is given by +{\lambda}{\varphi}\le 0\quad\text{in }\}\,\ ] ] where the inequality in is intended in viscosity sense .it is possible to prove under appropriate assumptions , see - , that there exists a viscosity solution of + { \lambda}_1 w_1(x)=0 \qquad & x\in { \omega } , \\ w_1(x)=0 & x\in \partial{\omega}. \end{array } \right.\ ] ] moreover the characterization still holds in this nonlinear setting .as it is well - known , the principal eigenvalue plays a key role in several respects , both in the existence theory and in the qualitative analysis of elliptic partial differential equations as well in applications to large deviations , , bifurcation issues , ergodic and long run average cost problems in stochastic control . for linear non self - adjoint operators and , a fortiori , for nonlinear onesthe principal eigenvalue can be explicitly computed only in very special cases , see e.g. , hence the importance to devise numerical algorithms for the problem .but , apart some specific case ( see for the -laplace operator ) , approximation schemes and computational methods are not available in the literature , at least at our present knowledge .the aim of this paper is to define a numerical scheme for the principal eigenvalue of nonlinear uniformly elliptic operators via a finite difference approximation of formula .more precisely , denoting by the orthogonal lattice in where is a discretization parameter , we consider a discrete operator acting on functions defined on a discrete subset of and the corresponding approximated version of , namely (x)}{{\varphi}(x)}.\ ] ] as for the approximating operators , we consider a specific class of finite difference schemes introduced in , since they satisfy some useful properties for the convergence analysis . 
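As a sanity check on the discrete inf-sup characterization, the sketch below builds a centred-difference discretization of a non-self-adjoint model operator, u maps to -u'' - b(x)u' on (0,1) with homogeneous Dirichlet data (a hypothetical test case, not one taken from the text), computes its principal eigenpair by a direct eigendecomposition, and verifies that the worst-case nodal ratio is minimized by the positive eigenfunction, which is the discrete analogue of the formula quoted above.

```python
import numpy as np

n, h = 49, 1.0 / 50
x = np.linspace(h, 1.0 - h, n)
b = 2.0 * np.sin(np.pi * x)                   # drift coefficient (assumed)

# matrix of the operator -u'' - b u' acting on interior nodal values (zero boundary data)
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = 2.0 / h**2
    if i > 0:
        M[i, i - 1] = -1.0 / h**2 + b[i] / (2 * h)
    if i < n - 1:
        M[i, i + 1] = -1.0 / h**2 - b[i] / (2 * h)

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmin(eigvals.real)                   # principal eigenvalue: smallest real part
lam1 = eigvals[k].real
w1 = np.abs(eigvecs[:, k].real)               # positive principal eigenfunction

def worst_ratio(phi):
    """sup over grid points of the nodal ratio (M phi)_i / phi_i for phi > 0."""
    return np.max(M @ phi / phi)

print("principal eigenvalue      :", lam1)
print("ratio at eigenfunction    :", worst_ratio(w1))                 # ~ lam1 up to rounding
print("ratio at a test function  :", worst_ratio(np.sin(np.pi * x)))  # strictly larger
```

The inequality between the last two lines is exactly the content of the inf-sup formula: any admissible positive grid function gives an upper bound, and the bound is attained only at the principal eigenfunction.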
we prove that if is uniformly elliptic and satisfies in addition some quite natural further conditions , then it is possible to define a finite difference scheme such that the discrete principal eigenvalues and the associated discrete eigenfunctions converge uniformly in , as the mesh step is sent to , respectively to the principal eigenvalue and to the corresponding eigenfunction for the original problem ( [ pe ] ) .it is worth pointing out that the proof of our main convergence result , theorem [ main ] , can not rely on standard stability results for fully nonlinear partial differential equations , see , since the limit problem does not satisfy a comparison principle ( see remark [ convergence ] for details ) .we mention that our approach is partially inspired by the paper where a similar approximation scheme is proposed for the computation of effective hamiltonians occurring in the homogenization of hamilton - jacobi equations which can be characterized by a formula somewhat similar to . in section [ sect2 ]we introduce the main assumptions and we investigate some issues related to the maximum principle for discrete operators .in section [ sect3 ] we study the approximation method for a class of finite difference schemes and we prove the convergence of the scheme . in section [ sect4 ] we show that under some additional structural assumptions on the inf - sup problem can be transformed into a convex optimization problem on the nodes of the grid and we discuss its implementation .a few tests which show the efficiency of our method on some simple examples are reported in section [ sect4 ] as well .we start by fixing some notations and the assumptions on the operator .set , where denotes the linear space of real , symmetric matrices .the function is assumed to be continuous on and locally uniformly lipschitz continuous with respect to for each fixed .we will also suppose that the partial derivatives , , satisfy the following structure conditions : for some constants , , , .a further condition is the positive homogeneity of degree , that is the principal eigenvalue of problem is defined by +{\lambda}{\varphi}\le 0\ \mbox{in}\ { \omega}\},\ ] ] where the differential inequality +{\lambda}{\varphi}\le 0 ] represents the stencil of the scheme , i.e. the points in where the value of is computed for writing the scheme at the point ( we assume that h\to 0 { \omega}_h \partial { \omega}_h,}\\ \text{then } \quad f_h(x , z,[u+\eta]_x)>f_h(x , z,[u]_x)\qquad \end{split}\ ] ] or )\ge f_h(x , z+\tau , [ u+\eta]_x)+c_0\tau\qquad \end{split}\ ] ] for some positive constants .then the maximum principle holds for the operator in .assume by contradiction that satisfies and .let be such that . since on , it is not restrictive to assume that there exists such that . hence {\bar x})\le f_h(\bar x , u(\bar x)-m , [ u - m]_{\bar x } ) \\ & < f_h(\bar x,0,[0]_{\bar x})= 0,\end{aligned}\ ] ] a contradiction .a similar proof can be done with the assumption . 
the assumptions and correspond to the uniform ellipticity and , respectively , to the strict monotonicity of the operator with respect to the zero - order term .the following proposition shows that , as it is known in the continuous case ( see for example ) , the validity of the maximum principle for subsolutions of the operator is equivalent to the positivity of the principal eigenvalue for .[ prop_mp2 ] assume that the scheme is of positive type and that it is positively homogeneous .suppose that for , there exists a nonnegative grid function with in such that +{\lambda}{\varphi}\le 0 { \omega}_h \partial { \omega}_h ] satisfies the maximum principle .suppose by contradiction that .let as in the statement and set ( note that the maximum is taken only with respect to the internal points ) .then is continuous , decreasing , and for . hence there exists such that .moreover , since on , we also have .let be such that and set .then +{\lambda}\psi\le0 ] .it follows that {\bar x})&=f_h(\bar x , v(\bar x)+m , [ u]_{\bar x } ) < f_h(\bar x , v(\bar x)+m , [ v+m]_{\bar x})\\ & \le f_h(\bar x , v(\bar x ) , [ v]_{\bar x})\le f(\bar x)\end{aligned}\ ] ] and therefore a contradiction . a similar proof can be carried on under assumption .in this section we consider a specific class of finite difference schemes introduced in .these schemes satisfy certain pointwise estimates which are the discrete analogues of those valid for a general class of fully nonlinear , uniformly elliptic equations .+ we assume that for all , the stencil ] with and we consider a scheme defined as in , obviously without the dependence on , . [ kr ] under the assumption the eigenvalue problem has a simple eigenvalue which corresponds to a positive eigenfunction .the other eigenvalues correspond to sign changing eigenfunctions .choose large enough so that and set )=l_h ( x , t,[u]_x)-\xi t.\ ] ] let be the positive cone of the nonnegative grid functions in .for a given grid function , by proposition [ prop_wellposed ] and proposition [ prop_comp ] there exists a unique solution to )+f=0\qquad & \text{in ,}\\ u= 0&\text{on . } \end{array } \right.\ ] ] since is a finite dimensional space it follows that defined by is a compact linear operator .moreover , if , then by proposition [ prop_mp ] and if , .+ therefore , by the krein - rutman theorem , the spectral radius of is a simple real eigenvalue with a positive eigenfunction such that .hence for , satisfies ) + { \lambda}_{1,h}w_1=0\qquad & \text{in ,}\\ w_1= 0&\text{on }. \end{array } \right.\ ] ] the following characterization of is a simple consequence of proposition [ prop_mp2 ] .we have +{\lambda}{\varphi}\le 0\ \mbox{in}\ { \omega} ] .a contradiction follows immediately by proposition [ prop_mp2 ] since the eigenfunction corresponding to is positive .hence we have .+ let such that (x)+{\lambda}{\varphi}(x)\le 0 i =- n+1,\dots , n-1 ] , then /{{\sigma}}) ] ; then +\tau { { \sigma}}\ge 0 ] .hence by proposition [ prop_mp2 ] , it follows in , a contradiction , and therefore .we consider now a general discrete operator given by and we study the corresponding eigenvalue problem . 
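For a linear scheme, the Krein-Rutman construction above can be mirrored numerically: the resolvent-type solution operator is an entrywise nonnegative matrix, and power iteration on it returns the positive eigenfunction together with the discrete principal eigenvalue. The sketch below assumes the scheme is represented by a matrix M on the interior nodes (for instance the drift example of the earlier sketch) and that the shift xi is large enough for M + xi*I to be invertible with nonnegative inverse.

```python
import numpy as np

def principal_pair(M, xi=0.0, iters=500):
    """Positive eigenfunction and principal eigenvalue of a linear discrete scheme M.

    T = (M + xi I)^{-1} plays the role of the solution operator; its spectral
    radius rho satisfies lambda_{1,h} = 1/rho - xi."""
    n = M.shape[0]
    B = np.linalg.inv(M + xi * np.eye(n))     # discrete solution operator T
    w = np.ones(n)
    for _ in range(iters):
        w = B @ w
        w /= np.linalg.norm(w)
    rho = w @ (B @ w)                         # Rayleigh quotient; exact at convergence
    return 1.0 / rho - xi, w

# usage (with M from the drift example above):  lam1, w1 = principal_pair(M)
```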
in analogy with formula , we define +{\lambda}{\varphi}\le 0 ] satisfies and therefore by propositions [ prop_wellposed ] and there exists a unique solution to problem .+ let us define by induction a sequence by setting and , for we consider the equation : ) = f(x)-{\lambda}u_{n},\qquad & x\in { \omega}_h,\\ u_{n+1}(x)=0 & x\in \partial{\omega}_h .\end{array } \right.\ ] ] + for any there exists a non negative solution to . for ,existence follows by proposition [ prop_wellposed ] .moreover since is a subsolution to , by proposition [ prop_comp ] we get .the existence of a non negative solution at the -step is proved in a similar way ; moreover the solution is non negative since .+ we claim now that , for any , . for the claim is trivially true since .assume then by induction that .since it follows that is a subsolution of . by proposition [ prop_comp ] , we get that . + let us show now that the sequence is bounded. assume by contradiction that it is false and set .then , by positive homogeneity , is a solution of ) = \frac{f(x)}{|u_{n+1}|_\infty}-{\lambda}\frac{u_{n}}{|u_{n+1}|_\infty},\qquad x\in { \omega}_h.\ ] ] since the sequence is bounded , then up to a subsequence it converges to a function , while converges to where .hence , , on and ) + k{\lambda}\overline u = 0,\qquad x\in { \omega}_h.\ ] ] since and using the fact that for there exists by definition such that +{\lambda}{\varphi}\ge 0 { \omega}_h$,}\ ] ] and let be such that for sufficiently small , . hence =-{\lambda}_{1,h}w_{1,h}\ge -\tau-\eta , \qquad x\in { \omega}_h.\end{aligned}\ ] ] set , and let be the corresponding solution of , while is the solution of then by proposition [ prop_comp ] and the consistency of the scheme for sufficiently small and therefore by , and we get a contradiction to the maximum principle for the operator ( see , ) and therefore .+ we now prove that assume by contradiction that there exists such that we consider a subsequence , still denoted by , such that and we set . by standard stability results satisfies and +({\lambda}_1+\eta ) \underline w\le 0\qquad & \mbox { in } { \omega},\\ \underline w= 0&\mbox { on } \\partial { \omega } , \end{array } \right.\ ] ] in viscosity sense .let be a sequence such that and for all . by , .we claim that assume by contradiction that , hence there exists a sequence such that . by with and get since we get a contradiction for sufficiently small and therefore . + we are in a position to apply the maximum principle for the continuous problem ( see ) , and so we obtain that .but and the positivity of give a contradiction to the definition of . byand we get . + by and a local boundary estimate for , see ( * ? ? ?5.1 ) and ( * ? ? ?* thm.3 . ), we get the equi - continuity of the family and therefore the uniform convergence , up to a subsequence , of to with .the simplicity of the eigenfunction associated to the principal eigenvalue gives the uniform convergence of all the sequence to .in this section we discuss an algorithm for the computation of the principal eigenvalue based on the inf - sup formula .in fact we show that this formula results in a finite dimensional nonlinear optimization problem .we first present the scheme in one dimension .note that since the eigenfunction corresponding to the principal eigenvalue vanishes on the boundary of and it is strictly positive inside , then the minimization in can be restricted to the internal points . 
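Anticipating the one-dimensional scheme made explicit below, here is a sketch of the resulting finite-dimensional min-max problem for the simplest model operator, -u'' on (0,1) with homogeneous Dirichlet data, whose principal eigenvalue is pi^2; this test case and all solver settings are assumptions for illustration and not necessarily the example used in the text. The min-max over the interior nodal values is handled through the standard epigraph reformulation, which plays the role of the fminmax routine mentioned later; convergence of the generic SLSQP solver is not guaranteed, so the exact discrete value is printed for comparison.

```python
import numpy as np
from scipy.optimize import minimize

n = 9                          # interior nodes
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)

def ratios(u):
    """g_i(u) = -L_h u (x_i) / u(x_i) for the centred-difference Laplacian."""
    up = np.concatenate(([0.0], u, [0.0]))            # zero Dirichlet values
    return (2 * up[1:-1] - up[2:] - up[:-2]) / (h**2 * up[1:-1])

u0 = x * (1.0 - x)
u0 /= u0.sum()
z0 = np.concatenate((u0, [ratios(u0).max()]))         # variables z = (u, t)

res = minimize(
    lambda z: z[-1],                                  # minimize the epigraph variable t
    z0,
    method="SLSQP",
    bounds=[(1e-6, 1.0)] * n + [(None, None)],
    constraints=[
        {"type": "ineq", "fun": lambda z: z[-1] - ratios(z[:-1])},  # g_i(u) <= t
        {"type": "eq", "fun": lambda z: z[:-1].sum() - 1.0},        # fix the scale of u
    ],
    options={"maxiter": 500},
)

print("computed lambda_1,h :", res.x[-1])
print("exact discrete value:", 2.0 * (1.0 - np.cos(np.pi * h)) / h**2)
print("continuous value    :", np.pi**2)
```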
by the formula and the homogeneity of , we have (x_i)}{u(x_i)}={{{\cal f}}}\left(x_i , 1 , \frac{u(x_i+h)-u(x_i - h)}{2hu(x_i)},\frac{u(x_i+h)+u(x_i+h)}{h^2u(x_i)}-\frac{2}{h^2}\right).\ ] ] we identify the function with the values , , at the points of the grid ( with ) .assume that is linear or more generally convex in .then the functions , defined by for , is either linear or respectively convex in , .moreover , since , is also convex in .taking the maximum of the functions over the internal nodes of the grid gives a convex function defined by hence the computation of is equivalent to the minimization of the convex function of variables : this problem can be solved by means of standard algorithms in convex optimization .note also that the minimum is unique and the map is sparse , in the sense that the value of at depends only on the values at and .+ in general , if is not convex , the computation of the principal eigenvalue is equivalent to the solution of a min - max problem in .+ to solve min - max problem we use the routine ` fminmax ` available in the optimization toolbox of matlab .this routine is implemented on a laptop and therefore the number of variables is modest. a better implementation of the minimization procedure which takes advantage of the sparse structure of the problem would allow to solve larger problems .* example 1 . * to validate the algorithm we begin by studying the eigenvalue problem : in this case the eigenvalue and the corresponding eigenfunction are given by note that since the eigenfunctions are defined up to multiplicative constant , we normalize the value by taking ( the constraint for is included in the routine ` fminmax ` ) .given a discretization step and the corresponding grid points , , the minimization problem is \ ] ] ( with ) . in table 4.1, we compare the exact solution with the approximate one obtained by the scheme .we report the approximation error for and ( in -norm and -norm ) and the order of convergence for .we can observe an order of convergence close to for and therefore equivalent to one obtained by discretization of the rayleigh quotient via finite elements ( see ) ..space step ( first column ) , eigenvalue error ( second column ) , convergence order ( third column ) , eigenfunction error in ( fourth column ) , eigenfunction error in ( last column ) [ cols="^,^,^,^,^",options="header " , ] | we present a finite difference method to compute the principal eigenvalue and the corresponding eigenfunction for a large class of second order elliptic operators including notably linear operators in nondivergence form and fully nonlinear operators . + the principal eigenvalue is computed by solving a finite - dimensional nonlinear min - max optimization problem . we prove the convergence of the method and we discuss its implementation . some examples where the exact solution is explicitly known show the effectiveness of the method . * msc 2000 * : : : 35j60 , 35p30 , 65m06 . * keywords * : : : principal eigenvalue , nonlinear elliptic operators , finite difference schemes , convergence . |
the last decade has seen a renewed interest in the problem of solving an underdetermined system of equations , , where , by regularizing its solution to be sparse , i.e. , having very few non - zero entries .specifically , if one aims to find with the least number of nonzero entries that solves the linear system , the problem is known as -minimization: the problem is intended to seek _ entry - wise sparsity _ in and is known to be np - hard in general . in compressive sensing ( cs ) literature , it has been shown that the solution to often can be obtained by solving a more tractable linear program , namely , -minimization : this unconventional equivalence relation between and and the more recent numerical solutions to efficiently recover high - dimensional sparse signal have been a very competitive research area in cs .its broad applications have included sparse error correction , compressive imaging , image denoising and restoration , and face recognition , to name a few .in addition to enforcing entry - wise sparsity in a linear system of equations , the notion of _ group sparsity _ has attracted increasing attention in recent years . in this case, one assumes that the matrix has some underlying structure , and can be grouped into blocks : , where and .accordingly , the vector is split into several blocks as , where . in this case , it is of interest to estimate with the least number of blocks containing non - zero entries .the group sparsity minimization problem is posed as where is the indicator function .since the expression can be written as , it is also denoted as , the -norm of .enforcing group sparsity exploits the problem s underlying structure and can improve the solution s interpretability .for example , in a sparsity - based classification ( sbc ) framework applied to face recognition , the columns of are vectorized training images of human faces that can be naturally grouped into blocks corresponding to different subject classes , is a vectorized query image , and the entries in represent the coefficients of linear combination of all the training images for reconstructing .group sparsity lends itself naturally to this problem since it is desirable to use images of the smallest number of subject classes to reconstruct and subsequently classify a query image .furthermore , the problem of robust face recognition has considered an interesting modification known as the _ cross - and - bouquet _ ( cab ) model : , where represents possible sparse error corruption on the observation .it can be argued that the cab model can be solved as a group sparsity problem in , where the coefficients of would be the group .however , this problem has a trivial solution for and , which would have the smallest possible group sparsity .hence , it is necessary to further regularize the entry - wise sparsity in . to this effect ,one considers a mixture of the previous two cases , where one aims to enforce entry - wise sparsity as well as group sparsity such that has very few number of non - zero blocks _ and _ the reconstruction error is also sparse .the _ mixed sparsity _ minimization problem can be posed as where controls the tradeoff between the entry - wise sparsity and group sparsity . due to the use of the counting norm ,the optimization problems in and are also np - hard in general .hence , several recent works have focused on developing tractable convex relaxations for these problems . 
in the case of group sparsity ,the relaxation involves replacing the -norm with the -norm , where .these relaxations are also used for the mixed sparsity case . in this work ,we are interested in deriving and analyzing convex relaxations for general sparsity minimization problems . in the entry - wise case ,the main theoretical understanding of the link between the original np - hard problem in and its convex relaxation has been given by the simple fact that the -norm is a convex surrogate of the -norm .however , in the group sparsity case , a similar relaxation produces a family of convex surrogates , i.e. , , whose value depends on .this raises the question whether there is a preferable value of for the relaxation of the group sparsity minimization problem ?in fact , we consider the following more important question : _ is there a unified framework for deriving convex relaxations of general sparsity recovery problems ?_ we present a new optimization - theoretic framework based on lagrangian duality for deriving convex relaxations of sparsity minimization problems .specifically , we introduce a new class of equivalent optimization problems for , and , and derive the lagrangian duals of the original np - hard problems .we then consider the lagrangian dual of the lagrangian dual to get a new optimization problem that we term as the _ lagrangian bidual _ of the primal problem .we show that the lagrangian biduals are convex relaxations of the original sparsity minimization problems .importantly , we show that the lagrangian biduals for the and problems correspond to minimizing the -norm and the -norm , respectively . since the lagrangian duals for , and are linear programs, there is no duality gap between the lagrangian duals and the corresponding lagrangian biduals .therefore , the bidual based convex relaxations can be interpreted as maximizing the lagrangian duals of the original sparsity minimization problems .this provides new interpretations for the relaxations of sparsity minimization problems .moreover , since the lagrangian dual of a minimization problem provides a lower bound for the optimal value of the primal problem , we show that the optimal objective value of the convex relaxation provides a non - trivial lower bound on the sparsity of the true solution to the primal problem .in what follows , we will derive the lagrangian bidual for the mixed sparsity minimization problem , which generalizes the entry - wise sparsity and group sparsity cases ( also see section [ sec : results ] ) .specifically , we will derive the lagrangian bidual for the following optimization problem : , ~~ { \mbox{s.t.}}~~ \begin{bmatrix } a_1 & \cdots & a_k \end{bmatrix } \begin{bmatrix } { { \boldsymbol}{x}}_1 \\ \vdots \\ { { \boldsymbol}{x}}_k \end{bmatrix } = { { \boldsymbol}{b } } , \end{split } \label{eq : primal0}\ ] ] where and .given any unique , finite solution to , there exists a constant such that the absolute values of the entries of are less than , namely , .note that if does not have a unique solution , it might not be possible to choose a finite - valued that upper bounds all the solutions . in this case, a finite - valued may be viewed as a regularization term for the desired solution . 
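For the entry-wise case, the l1 relaxation mentioned above can be solved as a linear program by splitting the unknown into its positive and negative parts. The sketch below does this with scipy on a small synthetic instance; the instance and all sizes are hypothetical, and exact recovery of the planted solution depends on properties of A that are outside the scope of this illustration.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 60, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# min ||x||_1  s.t.  Ax = b, written with x = x+ - x-, x+, x- >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
print("nonzeros found:", int(np.sum(np.abs(x_hat) > 1e-6)))
```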
to this effect , we consider the following modified version of where we introduce the box constraint that : , ~~ { \mbox{s.t.}}~~ a{{\boldsymbol}{x}}= { { \boldsymbol}{b}}\text { and } \|{{\boldsymbol}{x}}\|_\infty \le m , \end{split } \label{eq : primal1}\ ] ] where is chosen as described above to ensure that the optimal values of and are the same .* primal problem .* we will now frame an equivalent optimization problem for , for which we introduce some new notation .let be an entry - based sparsity indicator for , namely , if and otherwise .we also introduce a group - based sparsity indicator vector , whose entry denotes whether the block contains non - zero entries or not , namely , if and otherwise . to express this constraint ,we introduce a matrix , such that if the entry of belongs to the block and otherwise . finally , we denote the positive component and negative component of as and , respectively , such that . given these definitions, we see that can be reformulated as , ~ { \mbox{s.t.}}\text { ( a ) } { { \boldsymbol}{x}}_+ \ge 0 , \text { ( b ) } { { \boldsymbol}{x}}_- \ge 0 , \text { ( c ) } { { \boldsymbol}{g}}\in \{0,1\}^k , \!\!\\ &\text{(d ) } { { \boldsymbol}{z}}\in \{0,1\}^n \text { ( e ) } a({{\boldsymbol}{x}}_+ - { { \boldsymbol}{x}}_- ) = { { \boldsymbol}{b } } , \text { ( f ) } \pi{{\boldsymbol}{g}}\ge \frac{1}{m}({{\boldsymbol}{x}}_+ + { { \boldsymbol}{x}}_- ) , \text { and ( g ) } { { \boldsymbol}{z}}\ge \frac{1}{m}({{\boldsymbol}{x}}_+ + { { \boldsymbol}{x}}_- ) , \end{split}\ ] ] where and ^\top \in { { \mathbb{r}}}^n$ ] .constraints ( a)(d ) are used to enforce the aforementioned conditions on the values of the solution . while constraint ( e ) enforces the condition that the original system of linear equations is satisfied , the constraints ( f ) and ( g )ensure that the group sparsity indicator and the entry - wise sparsity indicator are consistent with the entries of .* lagrangial dual .* the lagrangian function for is given as where , , , and . in order to obtain the lagrangian dual function ,we need to minimize with respect to , , and .notice that if the coefficients of and , i.e. , and are non - zero , the minimization of with respect to and is unbounded below .to this effect , the constraints that these coefficients are equal to form constraints on the dual variables .next , consider the minimization of with respect to .since each entry only takes values or , its optimal value that minimizes is given as a similar expression can be computed for the minimization with respect to . as a consequence , the lagrangian dual problem can be derived as , { \mbox{s.t.}}\\ & \text{(a ) } \forall i=1,2,4,5 : { { \boldsymbol}{\lambda}}_i \ge { { \boldsymbol}{0 } } , \text { ( b ) } \frac{1}{m}({{\boldsymbol}{\lambda}}_4+{{\boldsymbol}{\lambda}}_5)- a^\top{{\boldsymbol}{\lambda}}_3 -{{\boldsymbol}{\lambda}}_1 = { { \boldsymbol}{0}}\\ & \text{and ( c ) } \frac{1}{m}({{\boldsymbol}{\lambda}}_4+{{\boldsymbol}{\lambda}}_5)+ a^\top{{\boldsymbol}{\lambda}}_3 -{{\boldsymbol}{\lambda}}_2 = { { \boldsymbol}{0}}. 
\end{split}\ ] ] this can be further simplified by rewriting it as the following linear program : , { \mbox{s.t.}}~ \text{(a ) } { { \boldsymbol}{\lambda}}_4 \ge { { \boldsymbol}{0 } } , \text { ( b ) } { { \boldsymbol}{\lambda}}_5 \ge { { \boldsymbol}{0 } } , \text { ( c ) } { { \boldsymbol}{\lambda}}_6 \le 0 , \text { ( d ) } { { \boldsymbol}{\lambda}}_7 \le 0 , \!\!\!\!\!\!\!\!\!\!\!\!\\ & \!\!\!\!\text { ( e ) } { { \boldsymbol}{\lambda}}_6 \le { { \boldsymbol}{\alpha}}-\pi^\top{{\boldsymbol}{\lambda}}_4 , \text { ( f ) } { { \boldsymbol}{\lambda}}_7 \le { { \boldsymbol}{\beta}}-{{\boldsymbol}{\lambda}}_5 \text { and ( g ) } -\frac{1}{m}({{\boldsymbol}{\lambda}}_4+{{\boldsymbol}{\lambda}}_5 ) \le a^\top{{\boldsymbol}{\lambda}}_3 \le \frac{1}{m}({{\boldsymbol}{\lambda}}_4+{{\boldsymbol}{\lambda}}_5 ) . \end{split}\ ] ] notice that we have made two changes in going from to .first , we have replaced constraints ( b ) and ( c ) in with the constraint ( g ) in and eliminated and from .second , we have introduced variables and to encode the min " operator in the objective function of . * lagrangian bidual .* we will now consider the lagrangian dual of , which will be referred to as the _ lagrangian bidual _ of .it can be verified that the lagrangian dual of is given as ^k , \!\!\\ & \!\!\!\!\!\ !\text{(d ) } { { \boldsymbol}{z}}\in [ 0,1]^n \text { ( e ) } a({{\boldsymbol}{x}}_+ - { { \boldsymbol}{x}}_- ) = { { \boldsymbol}{b } } , \text { ( f ) } \pi{{\boldsymbol}{g}}\ge \frac{1}{m}({{\boldsymbol}{x}}_+ + { { \boldsymbol}{x}}_- ) \text { and ( g ) } { { \boldsymbol}{z}}\ge \frac{1}{m}({{\boldsymbol}{x}}_+ + { { \boldsymbol}{x}}_- ) .\end{split}\ ] ] notice that in going from to , the discrete valued variables and have been relaxed to take real values between and .given that and noting that can be represented as , we can conclude from constraint ( g ) in that the solution satisfies .moreover , given that and are relaxed to take real values , we see that the optimal values for and are and , respectively .hence , we can eliminate constraints ( f ) and ( g ) by replacing and by these optimal values .it can then be verified that solving is equivalent to solving the problem : \quad { \mbox{s.t.}}\text { ( a ) } a{{\boldsymbol}{x}}= { { \boldsymbol}{b}}\text { and ( b ) } \|{{\boldsymbol}{x}}\|_\infty \le m.\ ] ] this is the lagrangian bidual for .in this section , we first describe some properties of the biduality framework in general . we will then focus on some important results for the special cases of entry - wise sparsity and group sparsity .the optimal value of the lagrangian bidual in is a lower bound on the optimal value of the np - hard primal problem in .[ thm : optval ] since there is no duality gap between a linear program and its lagrangian dual , the optimal values of the lagrangian dual in and the lagrangian bidual in are the same .moreover , we know that the optimal value of a primal minimization problem is always bounded below by the optimal value of its lagrangian dual . we hence have the required result . since the original primal problem in is np - hard , we note that the duality gap between the primal and its dual in is non - zero in general .moreover , we notice that as we increase ( i.e. , a more conservative estimate ) , the optimal value of the primal is unchanged , but the optimal value of the bidual in decreases .hence , the duality gap increases as increases . 
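The relaxed bidual derived above amounts to minimizing a weighted combination of per-block infinity norms and an entry-wise l1 term, subject to the linear system and the box constraint. The sketch below encodes that as a linear program with auxiliary variables; the block structure, the weights alpha and beta, the box bound M and the data are placeholders, and the exact scaling of the objective is an assumption consistent with the text.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, K, block_len = 15, 8, 5                      # hypothetical dimensions
n = K * block_len
blk = np.repeat(np.arange(K), block_len)        # blk[i] = block index of entry i

A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[blk == 2] = rng.standard_normal(block_len)   # one active block ...
x_true[0] = 3.0                                      # ... plus one stray entry
b = A @ x_true

M = 1.5 * np.abs(x_true).max()        # conservative box bound (placeholder choice)
alpha = np.ones(K)                    # group weight
beta = 0.3 * np.ones(n)               # entry-wise weight (plays the role of lambda)

# Variable vector v = [x (n), s (n), t (K)]; s_i >= |x_i|, t_k >= |x_i| for i in block k.
c = np.concatenate([np.zeros(n), beta / M, alpha / M])
Bmap = np.zeros((n, K)); Bmap[np.arange(n), blk] = 1.0
I, Z = np.eye(n), np.zeros((n, n))
A_ub = np.vstack([
    np.hstack([ I, -I, np.zeros((n, K))]),   #  x - s      <= 0
    np.hstack([-I, -I, np.zeros((n, K))]),   # -x - s      <= 0
    np.hstack([ I,  Z, -Bmap]),              #  x - t_blk  <= 0
    np.hstack([-I,  Z, -Bmap]),              # -x - t_blk  <= 0
])
b_ub = np.zeros(4 * n)
A_eq = np.hstack([A, np.zeros((m, n)), np.zeros((m, K))])
bounds = [(-M, M)] * n + [(0, None)] * n + [(0, None)] * K

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=bounds, method="highs")
x_hat = res.x[:n]
block_peak = np.array([np.abs(x_hat[blk == j]).max() for j in range(K)])
print("bidual objective (lower bound for this mixed primal):", res.fun)
print("blocks with max |entry| > 1e-6:", int(np.sum(block_peak > 1e-6)))
```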
in should preferably be equal to , which may not be possible to estimate accurately in practice .therefore , it is of interest to analyze the effect of taking a very conservative estimate of , i.e. , choosing a large value for . in what follows, we show that taking a conservative estimate of is equivalent to dropping the box constraint in the bidual . for this purpose ,consider the following modification of the bidual : \quad { \mbox{s.t.}}\quad a{{\boldsymbol}{x}}= { { \boldsymbol}{b}},\ ] ] where we have essentially dropped the box constraint ( b ) in .it is easy to verify that , we have that . therefore , we see that taking a conservative value of is equivalent to solving the modified bidual in . notice that by substituting and , the optimization problem in reduces to the entry - wise sparsity minimization problem in .hence , the lagrangian bidual to the -regularized entry - wise sparsity problem is : more importantly , we can also conclude from that solving the lagrangian bidual to the entry - wise sparsity problem with a conservative estimate of is equivalent to solving the problem : which is precisely the well - known -norm relaxation for .our framework therefore provides a new interpretation for this relaxation : the -norm minimization problem in is the lagrangian bidual of the -norm minimization problem in , and solving is equivalent to maximizing the dual of .we further note that we can now use the solution of to derive a non - trivial lower bound for the primal objective function which is precisely the sparsity of the desired solution .more specifically , we can use theorem [ thm : optval ] to conclude the following result : [ corr : l0l1 ] let be the solution to .we have that , the sparsity of , i.e. , is bounded below by . due to the non - zero duality gap in the primal entry - wise sparsity minimization problem ,the above lower bound provided by corollary [ corr : l0l1 ] is not tight in general .notice that by substituting and , the optimization problem in reduces to the group sparsity minimization problem in .hence , the lagrangian bidual of the group sparsity problem is : as in the case of entry - wise sparsity above , solving the bidual to the group sparsity problem with a conservative estimate of is equivalent to solving : which is the convex -norm relaxation of the -min problem . in other words ,the biduality framework selects the -norm out of the entire family of -norms as the convex surrogate of the -norm .finally , we use theorem [ thm : optval ] to show that the solution obtained by minimizing the -norm provides a lower bound for the group sparsity .[ corr : grp ] let be the solution to . for any , the group sparsity of , i.e. , , is bounded below by .the -norm seems to be an interesting choice for computing the lower bound of the group sparsity , as compared to other -norms for finite . 
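A short sketch of the group relaxation singled out by the biduality argument, i.e. minimizing the sum of per-block infinity norms subject to the linear system, posed as an LP, followed by corollary-style lower bounds on the group and entry-wise sparsity. The exact constants in the corollaries did not survive this extraction, so the ceil-of-objective-over-M form used below is an assumption consistent with the theorem; the data and M are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
m, K, block_len = 12, 6, 4                      # hypothetical sizes
n = K * block_len
blk = np.repeat(np.arange(K), block_len)
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[blk == 1] = rng.standard_normal(block_len)   # a single active block
b = A @ x_true
M = np.abs(x_true).max()                              # assumes a valid box bound is known

# min sum_k t_k  s.t.  |x_i| <= t_{blk(i)},  A x = b   (the l_{1,inf} relaxation)
Bmap = np.zeros((n, K)); Bmap[np.arange(n), blk] = 1.0
I = np.eye(n)
A_ub = np.vstack([np.hstack([ I, -Bmap]),
                  np.hstack([-I, -Bmap])])
A_eq = np.hstack([A, np.zeros((m, K))])
c = np.concatenate([np.zeros(n), np.ones(K)])
bounds = [(None, None)] * n + [(0, None)] * K
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
              bounds=bounds, method="highs")
x_hat, t_hat = res.x[:n], res.x[n:]

group_bound = int(np.ceil(t_hat.sum() / M))           # corollary-style group bound (assumed form)
entry_bound = int(np.ceil(np.abs(x_hat).sum() / M))   # corollary-style entry-wise bound
print("lower bound on the group sparsity :", group_bound)
print("lower bound on the entry sparsity :", entry_bound)
print("true group / entry sparsity       :", 1, int(np.sum(x_true != 0)))
```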
for example , consider the case when , where the -norm is equivalent to the -norm .assume that consists of a single block with several columns so that the maximum number of non - zero blocks is .denote the solution to the -minimization problem as .it is possible to construct examples ( also see figure [ fig : entry ] ) where .hence , it is unclear in general if the solutions obtained by minimizing -norms for finite - valued can help provide lower bounds for the group sparsity .we now present experiments to evaluate the bidual framework for minimizing entry - wise sparsity and mixed sparsity .we present experiments on synthetic data to show that our framework can be used to compute non - trivial lower bounds for the entry - wise sparsity minimization problem .we then consider the face recognition problem where we compare the performance of the bidual - based -norm relaxation with that of the -norm relaxation for mixed sparsity minimization .we use boxplots to provide a concise representation of our results statistics .the top and bottom edge of a boxplot for a set of values indicates the maximum and minimum of the values .the bottom and top extents of the box indicate the and percentile mark .the red mark in the box indicates the median and the red crosses outside the boxes indicate potential outliers .* entry - wise sparsity .* we now explore the practical implications of corollary [ corr : l0l1 ] through synthetic experiments .we randomly generate entries of and from a gaussian distribution with unit variance .the sparsity of is varied from to in steps of .we solve with using and , where .we use corollary [ corr : l0l1 ] to compute lower bounds on the true sparsity , i.e. , .we repeat this experiment times for each sparsity level and figure [ fig : entry ] shows the boxplots for the bounds computed from these experiments .we first analyze the lower bounds computed when , in figure [ fig : m0 ] . as explained in section [ sec : entry - results ] , the bounds are not expected to be tight due to the duality gap .notice that for extremely sparse solutions , the maximum of the computed bounds is close to the true sparsity but this diverges as the sparsity of reduces .the median value of the bounds is much looser and we see that the median also diverges as the sparsity of reduces .furthermore , the computed lower bounds seem to grow linearly as a function of the true sparsity .similar trends are observed for and in figures [ fig:2m0 ] and [ fig:5m0 ] , respectively . as expected from the discussion in section [ sec : entry - results ] , the bounds become very loose as increases . 
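A compact sketch of the synthetic protocol just described: for each sparsity level, draw a Gaussian problem, solve the l1 LP, and record the bound for box bounds M0, 2M0 and 5M0. The sizes and trial counts below are reduced placeholders and do not reproduce the paper's settings or figures.

```python
import numpy as np
from scipy.optimize import linprog

def l1_solution(A, b):
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(3)
m, n, trials = 25, 60, 20                       # reduced, hypothetical sizes
for k in (2, 6, 10, 14):                        # sparsity levels of the planted x0
    bounds_per_scale = {1: [], 2: [], 5: []}
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        x0 = np.zeros(n)
        x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        x_hat = l1_solution(A, A @ x0)
        M0 = np.abs(x0).max()
        for c in (1, 2, 5):
            bounds_per_scale[c].append(int(np.ceil(np.abs(x_hat).sum() / (c * M0))))
    med = {c: float(np.median(v)) for c, v in bounds_per_scale.items()}
    print(f"true sparsity {k:2d} -> median lower bound for M0, 2M0, 5M0: "
          f"{med[1]}, {med[2]}, {med[5]}")
```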
in theory, we would like to have _ per - instance certificates - of - optimality _ of the computed solution , where the lower bound is equal to the true sparsity .nonetheless , we note that this ability to compute a per - instance non - trivial lower bound on the sparsity of the desired solution is an important step forward with respect to the previous approaches that require pre - computing optimality conditions for equivalence of solutions to the -norm and -norm minimization problems .we have performed a similar experiment for the group sparsity case , and observed that the bidual framework is able to provide non - trivial lower bounds for the group sparsity also .+ * mixed sparsity .* we now evaluate the results of mixed sparsity minimization for the sparsity - based face recognition problem , where the columns of represent training images from the face classes : and represents a query image .we assume that a subset of pixel values in the query image may be corrupted or disguised .hence , the error in the image space is modeled by a sparse error term : , where is the uncorrupted image . a linear representation of the query image forms the following linear system of equations : where is the identity matrix .the goal of sparsity - based classification ( sbc ) is to minimize the group sparsity in and the sparsity of such that the dominant non - zero coefficients in reveal the membership of the ground - truth observation . in our experiments ,we solve for and by solving the following optimization problem : notice that for , this reduces to solving a special case of the problem in , i.e. , the bidual relaxation of the mixed sparsity problem with a conservative estimate of . in our experiments , we set and compare the solutions to obtained using and .we evaluate the algorithms on a subset of the ar dataset which has manually aligned frontal face images of size for 50 male and 50 female subjects , i.e. , and .each individual contributes 7 un - occluded training images , 7 un - occluded testing images and 12 occluded testing images .hence , we have 700 training images and 1900 testing images . to compute the number of non - zero blocks in the coefficient estimated for a testing image, we find the number of blocks whose energy is greater than a specified threshold .the results of our experiments are presented in figure [ fig : mixed ] .the solution obtained with gives better group sparsity of .however , a sparser error is estimated with .the number of non - zero entities in a solution to , i.e. , the number of non - zero blocks plus the number of non - zero error entries , is lower for the solution obtained using rather than that obtained using .however , the primal mixed - sparsity objective value ( see ) is lower for the solution obtained using .+ + we now compare the classification results obtained with the solutions computed in our experiments . for classification, we consider the non - zero blocks in and then assign the query image to the block , i.e. 
, subject class , for which it gives the least residual .the results are presented in table [ tab : mixed ] .notice that the classification results obtained with ( the bidual relaxation ) are better than those obtained using .since the classification of un - occluded images is already very good using , classification with gives only a minor improvement in this case .however , a more tangible improvement is noticed in the classification of the occluded images .therefore the classification with is in general better than that obtained with , which is considered the state - of - the - art for sparsity - based classification ..classification results on the ar dataset using the solutions obtained by minimizing mixed sparsity .the test set consists of 700 un - occluded images and 1200 occluded images . [ cols="^,^,^,^,^ " , ]we have presented a novel analysis of several sparsity minimization problems which allows us to interpret several convex relaxations of the original np - hard primal problems as being equivalent to maximizing their lagrangian duals .the pivotal point of this analysis is the formulation of mixed - integer programs which are equivalent to the original primal problems . while we have derived the biduals for only a few sparsity minimization problems , the same techniquescan also be used to easily derive convex relaxations for other sparsity minimization problems .an interesting result of our biduality framework is the ability to compute a per - instance certificate of optimality by providing a lower bound for the primal objective function .this is in contrast to most previous research which aims to characterize either the subset of solutions or the set of conditions for perfect sparsity recovery using the convex relaxations . in most cases ,the conditions are either weak or hard to verify .more importantly , these conditions needed to be pre - computed as opposed to verifying the correctness of a solution at run - time . in lieu of this , we hope that our proposed framework will prove an important step towards per - instance verification of the solutions .specifically , it is of interest in the future to explore tighter relaxations for the verification of the solutions .this research was supported in part by aro muri w911nf-06 - 1 - 0076 , arl mast - cta w911nf-08 - 2 - 0004 , nsf cns-0931805 , nsf cns-0941463 and nsf grant 0834470 . the views and conclusions contained in this documentare those of the authors and should not be interpreted as representing the official policies , either expressed or implied , of the army research laboratory or the u.s .government is authorized to reproduce and distribute for government purposes notwithstanding any copyright notation herein . | recent results in compressive sensing have shown that , under certain conditions , the solution to an underdetermined system of linear equations with sparsity - based regularization can be accurately recovered by solving convex relaxations of the original problem . in this work , we present a novel primal - dual analysis on a class of sparsity minimization problems . we show that the lagrangian bidual ( i.e. , the lagrangian dual of the lagrangian dual ) of the sparsity minimization problems can be used to derive interesting convex relaxations : the bidual of the -minimization problem is the -minimization problem ; and the bidual of the -minimization problem for enforcing group sparsity on structured data is the -minimization problem . 
the analysis provides a means to compute per - instance non - trivial lower bounds on the ( group ) sparsity of the desired solutions . in a real - world application , the bidual relaxation improves the performance of a sparsity - based classification framework applied to robust face recognition . |
negative energy sturmian functions ( s - f ) , introduced in the 60ties by m. rotenberg , found many useful applications , such as in the calculation of electron induced ionization collisions , , in the identification of resonances in nucleon - nucleus scattering , in the solution of a schrdinger equation with non - local potentials , for a separable representation of scattering and in the solution of three - body faddeev equations .for the applications involving long range coulomb forces , the analytical expressions for the coulomb green s functions , initially developed by l. hosteler and r. pratt have found many applications .sturmian functions form a complete , discrete set of eigensolutions of a sturm - liouville differential ( or integral ) equation , and form a complete set of basis functions . howeverthe expansion of a wave function into a set of sturmian functions in many cases does not converge well , and methods to improve the convergence , such as pad approximations , have frequently been utilized , . the atom - atom potentials that occur in atomic physics calculation often have strong repulsive cores , in addition to a long range attractive part , and for such potentials it has been difficult to obtain a reliable set of sturmian functions .a method based on spectral expansions in terms of chebyshev functions has been recently developed that overcomes this difficulty , and hence the use of sturmian functions is again of interest .nevertheless , for numerical calculations the expansions have to be truncated , and because of the slow convergence of sturmian expansions , an iterative method to correct for the truncation error becomes desirable .it is the purpose of this study to introduce such a method , based on the original quasi - particle method of weinberg , and , for the purpose of testing it , apply it to the scattering of a particle from a potential as described by the lippmann - schwinger ( l - s ) integral equation . since, the method described here should be applicable to more general kernels of an integral equation , the present study is done in anticipation of solving the more complicated two - dimensional integral equations that occur in the solution of three - body equations in configuration space , .the expected accuracy is to be better than 6 to 7 significant figures , desirable for doing atomic physics calculations . by comparison ,the solution of three - body equations for nuclear physics applications , done commonly in momentum space , achieve an accuracy not better than four significant figures .the present paper examines the convergence of several iteration methods for the solution of the l - s equation describing the scattering of a particle with positive energy by a local potential in one dimension .the insights gained from this example is to guide one for solving a more general integral equation .the method initially suggested by weinberg , also called the `` quasi - particle '' method q - p , requires sturmian functions that are the eigenstates for the exact integral operator .the present method , denoted as method differs from the q - p method in that it uses auxiliary sturmian functions that are based on an auxiliary potential that is not the same as the potential since it may be computationally easier to obtain the auxiliary sturmians rather than the sturmians for the original integral operator method may prove advantageous for the case of very complicated integral kernels . 
for the example , the auxiliary sturmians are used to approximate the scattering potential by a non - local separable representation in terms of auxiliary sturmian functions and subsequently correcting for the truncation error by means of iterations .we find that the rate of convergence of the iterations does not depend significantly on the choice of , provided that the range of is sufficiently larger than the range of .the sturmian functions in all these cases are calculated in configuration space for a positive energy , so that the asymptotic form of the approximated wave function is the same as the exact wave function . when solving for a bound - state wave function , it is customary the represent the potential by negative energy sturmians that are real , but for scattering cases , positive energy sturmians should be preferable .an advantage of the sturmian expansion method over the fourier - grid method for positive energies is that the sturmian method emphasizes only the spatial region where the potential is non - negligible , the asymptotic part of the wave function being already incorporated into the sturmian basis , while in a fourier - grid method , the asymptotic part has to be obtained explicitly . since an accuracy of the scattering solution to is envisaged , the sturmian functions are also required to have the same , or better , accuracy .our approach for solving the sturm - liouville eigenvalue equation utilizes a `` spectral '' expansion method into chebyshev polynomials , , that permits one to prescribe a desired accuracy . with this methodone can obtain sturmian eigenvalues and eigenfunctions with an accuracy of , for example , and because of its stability , one can incorporate the effect of long - range tails of potentials , as was demonstrated in the calculation of the bound state eigenvalue of a helium - helium dimer .the present calculations are performed in configuration space , and can be generalized to include asymptotic coulomb functions .contrary to what is often done at negative energies , our approach does not expand the green s functions into a separable set of sturmians , but rather expands the potential into such a set .the present method to calculate sturmian functions is similar to a method introduced previously , in that it does not use a square well potential sturmian basis set in terms of which the desired sturmians are expanded , and , being based on a spectral method , is considerably more precise .this additional feature permits a more accurate study of the iteration convergence properties . 
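Because the approach relies on a Chebyshev spectral expansion whose accuracy can be prescribed in advance, a small sketch of the coefficient decay for a smooth model function may be useful. The function, interval and tolerance below are illustrative only and are not the potentials or accuracy parameters used in the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Smooth model function on [0, R]; the shape loosely resembles an attractive
# exponential well, purely as an illustration.
R = 20.0
f = lambda r: -4.0 * np.exp(-0.6 * r)

npts = 60
t = C.chebpts1(npts)                 # Chebyshev nodes on [-1, 1]
r = 0.5 * R * (t + 1.0)              # map to [0, R]
coef = C.chebfit(t, f(r), npts - 1)  # interpolating expansion of degree npts-1

tol = 1e-11                          # a "prescribed accuracy", as in the text
keep = int(np.nonzero(np.abs(coef) > tol)[0].max()) + 1
r_test = np.linspace(0.0, R, 500)
err = np.abs(C.chebval(2.0 * r_test / R - 1.0, coef[:keep]) - f(r_test)).max()
print(f"terms kept with |c_n| > {tol:g}: {keep}")
print(f"max interpolation error of the truncated expansion: {err:.2e}")
```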
in sectionii we present the formalism that defines the sturmian functions ; section iii describes the approximate q - p methods and that extend weinberg s q - p original iterative method so that the use of sturmian functions for the integral kernel are avoided , and auxiliary sturmians are used instead ; in section iv numerical examples are presented for the case that the integral kernel is of the lippmann - schwinger type , and section v contains the summary and conclusions .appendices a , b , c and d contain further details .the one - dimensional schrdinger equation for the radial wave function is where is the differential operator , is the energy assumed positive here , is the scattering potential .these quantities are in units of inverse length squared , and were transformed from their energy units by multiplication with the well known factor the wave number is related to the energy by the corresponding lippmann - schwinger integral equation for is the undistorted green s function is given by where if and if . in the present zero angular momentumcase according to eqs .( [ 3 ] ) and ( [ 4 ] ) the asymptotic form of is with near the origin because both as well as the integral term in eq .( [ 3 ] ) go to zero .the sturmian functions , obey eq .( [ 1 ] ) with replaced by where is the sturmian potential chosen conveniently , that need not be equal to the scattering potential , and the s are the eigenvalues to be determined . the sturmian functions obey the boundary conditions where the constant is determined by the normalization of the sturmian function .the functions obey the integral equation version of eq .( [ 8 ] ) with the green s function given by eq .( [ 4 ] ) for positive energies .the s are eigenvalues of the operator , they form an infinite set with point of convergence at , and are related to the s according to with the choice ( [ 4 ] ) and ( [ 5 ] ) of the green s function the sturmians obey the boundary conditions ( [ 9 ] ) , and the s are not square integrable for positive energies or orthogonal to each other . however , they are orthogonal if one includes the potential as a weight function in the bra - ket notation above , is _ not _ the complex conjugate of , as is usually implied . the validity of ( [ 11 ] ) can be shown by replacing by in the integral ( [ 11 ] ) , and subsequently replacing by .the result is , and if , the above identity becomes absurd unless because of the completeness of the sturmian functions , one has the identity which can also be written as depending on whether the delta function applies to the left or to the right of the integrand in an integral .one can understand intuitively the properties of the s as follows .as the index increases , the corresponding values of increase , and hence the potential increases in magnitude .if is real and attractive and the real part of is positive , then the real part of becomes more attractive , and the corresponding eigenfunction becomes more oscillatory inside of the attractive region of the well .so , from one to the subsequent one the eigenfunction acquires one more node inside of the well . according to flux considerations the imaginary part of has to be positive , i.e. 
, the well has to be emissive .near the origin the flux is since however asymptotically the outgoing form of the wave function produces a positive outgoing flux .this outgoing flux is generated by an emissive imaginary potential , exactly the opposite of the case of an optical potential , that absorbs flux .these properties will be verified in the numerical section below . in the examples given below, the scattering potential is of the morse type with a repulsive core near the origin ( ) , and the sturmian potential is either also of the morse type with the repulsive core suppressed ( , but of the same range as the scattering potential , or it is a potential of the woods - saxon type ( ) with a range larger than that of either or the morse potentials are formed by the combination of two exponentials , \label{21}\ ] ] with parameters given in table [ table1 ] [ c]|l||l|l|l| & & & + & & & + & & & + , and the woods - saxon potential is given by\ } \label{21ws}\ ] ] with and resulting dependence of these potentials on is illustrated in fig .[ fig1 ] . [ ptb ]potentials.eps the spectrum of the eigenvalues for the two morse - type potentials , with is illustrated in figs .[ fig2 ] and [ fig3 ] .[ ptb ] lambdas.eps [ ptb ] lambdap_05.eps since the potential is entirely attractive , the real parts of are positive , while the imaginary parts are negative , in accordance with the argument given above .these sturmian eigenfunctions are obtained by an adaptation of hartree s iterative method designed to obtain energy eigenvalues to the schrdinger equation , and described in appendix c. a list of these eigenvalues precise to significant figures is given in the appendix d. some of the eigenfunctions are illustrated in figs .[ fig4 ] and [ fig5 ] , that show that at large distances where the potential becomes small , these functions become linearly dependent .the used in the present examples are normalized such that asymptotically they equal the function , with unit coefficient .[ ptb ] re_phi_s_1_4.eps [ ptb ] imag_phi_s_1_4.eps because potential has both a repulsive and an attractive part , the eigenvalues fall into two categories . in categoryi the eigenvalues have a positive real part and a negative imaginary part , and the corresponding eigenfunctions are large mainly in the attractive regions of the potential well .examples are given in figs .[ fig6 ] and [ fig7 ] .[ ptb ] re_phi_p_05_1_5.eps [ ptb ] imag_phi_p_1_5.eps in category ii the real parts of are negative so as to turn the repulsive piece of the potential near the origin into an attractive well , and the formerly attractive valley into a repulsive barrier .examples of the corresponding sturmian for indices and two of these functions are shown in figs .[ fig8 ] and [ fig9 ] .[ ptb ] phi_p_4_05.eps [ ptb ] phi_p_7.eps for the function has a magnitude less than the sturmian for is similar to that for in that it is also large near the origin ( with an amplitude of and has a node near the functions for and are `` resonant '' in the radial region ] and the function , solution of eq .( [ 30 ] ) , is given by the quantity is defined by the asymptotic limit of and is given by , where and the iterative corrections to are as shown in the appendix a. 
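The displayed formulas for the Morse-type and Woods-Saxon potentials, and the parameter values of Table 1, did not survive this extraction. The sketch below therefore uses the standard two-exponential Morse form, its core-suppressed variant, and the standard Woods-Saxon well with placeholder parameters; strengths are meant to be in units of inverse length squared, as in the text.

```python
import numpy as np

def morse(r, v0, alpha, r0):
    """Standard two-exponential Morse form: repulsive core plus attractive valley.
    The paper's actual parameters (Table 1) are not recoverable here."""
    x = np.exp(-alpha * (r - r0))
    return v0 * (x * x - 2.0 * x)

def morse_no_core(r, v0, alpha, r0):
    """Same range, but with the repulsive exponential suppressed (the V_P choice)."""
    return -2.0 * v0 * np.exp(-alpha * (r - r0))

def woods_saxon(r, v0, R, a):
    """Attractive Woods-Saxon well of radius R and surface diffuseness a."""
    return -v0 / (1.0 + np.exp((r - R) / a))

# Placeholder parameters, chosen only to produce qualitatively similar shapes.
r = np.linspace(0.0, 12.0, 241)
vs = morse(r, v0=6.0, alpha=0.9, r0=2.0)
vp = morse_no_core(r, v0=3.0, alpha=0.9, r0=2.0)
vw = woods_saxon(r, v0=2.5, R=5.0, a=0.6)
print("depth of the scattering potential :", vs.min())
print("depth of the core-suppressed well :", vp.min())
print("depth of the Woods-Saxon well     :", vw.min())
```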
the values of are shown in fig .[ fig12 ] .they begin to decrease nearly monotonically for but the rate of decrease is slow .for example and this shows that without the additional iterations described next , the accuracy of given by alone would be good to only one or two significant figures .for example , the value of obtained from is while the `` exact '' value of , obtained via the spectral integral equation method ( iem ) is in fig .[ fig13 ] the improvement due to the successive weinberg iterations are displayed .[ ptb ] ci_p_05.eps [ ptb ] iter_05.eps for both methods and the operator is again with , and the sturmian potential can be either eqs . ([ 21 ] ) , or the latter defined in appendix in this section the normalization of the sturmian functions is such that for method the elements of the matrix , eq. become , and all other equations described in section iii apply . the separable approximation to the operator eq .( [ 25 ] ) , is given by and as a result of the normalization ( [ 55 ] ) the power of where the matrix has the matrix elements equation ( [ 57 ] ) shows that if the norm of the matrix is larger than unity , iterations performed with not converge .for this reason separating from a part that has a norm less than unity , and performing iterations for on that part , will converge .that is the reason for solving the iterated equation ( [ 48 ] ) rather than the original eq .( [ 3 ] ) , as will be explained further below . for more complicated integral kernelsa similar separation is also feasible with the singular value decomposition method ( svd ) but is not needed for the examples given here . in method decomposes the square of the full operator , into two parts according to eq .( [ 57 ] ) ( is given by and one is defined as in order to solve the once iterated equation ( [ 48 ] ) , one defines the function as the solution of given in the present case by where the solution of the linear equation the terms required for the subsequent iterations, are obtained by solving with a justification for the advantage of method over is as follows .formally , eq . ( [ 66 ] ) can be written as ^{n}\mathcal{f}_{2}^{(n)} ] , as given by eq .( [ 21ws]) this potential is illustrated by the open circles in fig .[ fig25 ] this potential was defined for the purpose of performing a green s function iteration , as follows .the potential is divided into two parts and .the first part has a suppressed repulsive core , and the second part , adds the repulsive core again .the iterations are based on green s functions that explicitly include the distortion due to , and iterate over the effect of in a fashion similar to the born approximation .the green s function method has many applications , but in view of the factor in the greens functions and eq .( [ 4 ] ) , this method does not converge well for low values of the wave number in what follows the eigenvalue subscripts will be dropped , and the iterations which lead to the solution for a particular fixed value of will be described . in the differential equation ( [ 8 ] )the exact values of and are initially unknown , but the equation can still be solved for a guessed value .one solutions , denoted as satisfies the correct boundary conditions at the origin , and is integrated from inside outward. 
the other solution , denoted as , satisfies the correct boundary conditions asymptotically , and is integrated inward towards the intermediate point chosen to lie within the region where is non negligible .the three solutions at the point are denoted as ,and one can renormalize by a factor so that if one multiplies eq .( [ 8 ] ) on both sides by and eq .( [ 12 ] ) by , integrates over from to , and subtracts the results one from the other , after an integration by parts one obtains where the prime denotes a derivative with respect to by a similar procedure one obtains by adding eqs .( [ 15 ] ) and ( [ 16 ] ) , by noting that because and vanish at the origin , and that because and obey the same boundary condition asymptotically ( to within a normalization constant ) , one obtains. \label{17}\ ] ] the above equation is still rigorously valid .the approximation consists in replacing in the first integral by and by in the second integral , and further , by replacing in the left hand side by either or after making these approximations , and by dividing both sides of eq .( [ 17 ] ) by one obtains the final result in the above , replaces as the iteratively corrected value for ; also the factor disappeared because it canceled in the numerator and denominator of eq .( [ 18 ] ) . equation ( [ 18 ] ) is the generalization to sturmian eigenvalues of the iterative calculation of energy eigenvalues , eq .( 7 ) of ref .it can also be applied to negative energies , and can be generalized to the cases of coupling potentials or of non - local potentials , which however goes beyond the scope of the present study .the functions and can be calculated as the solutions of the differential eqs .( [ 12 ] ) and ( [ 13 ] ) by any convenient finite difference method , or they can be obtained as the solutions of the integral equations and or else the whole iteration procedure can be bypassed by obtaining the eigenvalues and eigenfunctions of the integral operator contained in eq .( [ 10 ] ) . in the numerical calculations described below, this latter procedure is carried out ( with questionable accuracy according to ref ) so as to provide the initial values for the iteration , eq .( [ 18 ] ) .the present implementation obtains the solution of eqs .( [ 19 ] ) and ( [ 20 ] ) by using the spectral chebyshev expansion method , , because its high accuracy can be predetermined by the specification of an accuracy parameter , and the method is by now well tested .[ c]|l||l|l| & real part of & imag . part of + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + & & + m. rotenberg , ann . of phys .( n.y . ) * 19 * , 262 ( 1962 ); m. rotenberg , `` theory and applications of sturmian functions '' in adv . in atomic and molec .phys . , ed by d.r .bates and i. esterman , ( academic press , n.y .( 1970 ) ) , vol * 6 , * p233 - 268 ; j. s. ball and d. wong , phys . rev .* 169 * , 1362 ( 1968 ) ; g. h. rawitscher and l. canton , phys . rev .* c 44 * , 60 ( 1991 ) ; l. canton and g. h. rawitscher , j. of phys .g : nucl . and* 17 * , 429 ( 1991 ) ; d. eyre , h. g. miller , phys .b * 153 b * , 5 , ( 1985 ) ; z. c. kuruoglu and f. s. levin , annals of phys . * 163*,120 ( 1985 ) ; a. c. fonseca and m. t. pena , phys . rev . * a 38 * , 4967 ( 1998 ) ; s. yu and j. h. macek , phys . rev .* a 55 * , 3605 ( 1997 ) ; j. m. randazzo , l. u. ancarani , g. gazaneo , a. l. frapiccini and f. d. colavecchia , phys . rev . * a 81 * , 042520 ( 2010 ) ; a. maquet , v. 
veniard , t. a. marian , j. phys * b 31 * , 3743 ( 1998 ) ; c. brezinski , _pade -type approximation and general orthogonal polynomials , _ ( birkhauser , basel , 1980 ) ; c. brezinski and j. van iseghem , _ pade approximation _ , in handbook of numerical analysis ( p. g. ciarlet and j. l. lions , eds . ) , vol .47 222 , north - holland , amsterdam , 1994 ; r. shakeshaft and x. tang , phys. rev . * a 35 * , 3945 ( 1987 ) , r. shakeshaft , _ ibid . _ * a 37 * , 4488 ( 1988 ) ; s. weinberg , phys . rev . * 131 * , 440 ( 1963 ) ; _ ibid ._ * 133 b * 232 ( 1964 ) ; s. weinberg and m. scaldron , _ ibid ._ * 133 b * 1589 ( 1964 ) ; s. weinberg in _ lectures on particles and field theory , brandeis summer institute in theoretical physics _( prentice - hall , englewood cliffs , 1964 ) vol .2 , chaps . 4 and 5 ; w. glckle and g. rawitscher , nucl . phys . * a 790 * , 282c ( 2007 ) ; w. glckle and g. rawitscher , `` three - atom scattering via the faddeev scheme in configuration space '' , physics/0512010 at arxiv.com ; g. rawitscher and w. glckle , epj web of conferences * 3 * , 05012 ( 2010 ) ; e. o. alt , p. grassberger and w. sandhas , phys . rev . *d 1 * , 2581 ( 1970 ) ; c. y. chen and k.t .chung , phys .rev * a 2 , * 1449 ( 1970 ) * ; * p. j. kramer and j. c. y. chen , phys .rev * a 3 * , 568 ( 1971 ) ; w. glckle and r. offerman , phys . rev . * c 16 * , 2039 ( 1977 ) ; r. kosloff in _ dynamics of molecules and chemical reactions , _ edited by r.e .wyatt and j. z. h. zhang ( marcel dekker , new york , 1966 ) ; v. kokoouline , o. dulieu , r. kosloff and f. masnou - seeuws , j. of chem .phys . * 110 * , 9865 ( 1999 ) ; r. a. gonzales , j. eisert , i koltracht , m. neumann and g. rawitscher , j. of comput . phys . * 134 * , 134 - 149 ( 1997 ) ; r. a. gonzales , s .- y .kang , i. koltracht and g. rawitscher , j. of comput . phys . * 153 * , 160 - 202 ( 1999 ) ; a. deloff , ann . phys .( ny ) 322 , 13731419 ( 2007 ) ; l. n. trefethen , _ spectral methods in matlab _ , ( siam , philadelphia , pa , 2000 ) ; john p. boyd , _ chebyshev and fourier spectral methods , _2nd revised ed .( dover publications , mineola , ny , 2001 ) ; g. rawitscher and i. koltracht , computing sci .* 7 * , 58 ( 2005 ) ; g. rawitscher , _ applications of a numerical spectral expansion method to problems in physics : a retrospective , _ in operator theory , advances and applications , vol .203 , edited by thomas hempfling ( birkuser verlag , basel , 2009 ) , pp . 409426 . | years ago s. weinberg suggested the `` quasi - particle '' method ( q - p ) for iteratively solving an integral equation , based on an expansion in terms of sturmian functions that are eigenfunctions of the integral kernel . an improvement of this method is presented that does not require knowledge of such sturmian functions , but uses simpler auxiliary sturmian functions instead . this improved q - p method solves the integral equation iterated to second order so as to accelerate the convergence of the iterations . numerical examples are given for the solution of the lippmann - schwinger integral equation for the scattering of a particle from a potential with a repulsive core . an accuracy of is achieved after iterations , and after iterations . the calculations are carried out in configuration space for positive energies with an accuracy of by using a spectral expansion method in terms of chebyshev polynomials . the method can be extended to general integral kernels , and also to solving a schrdinger equation with coulomb or non - local potentials . |
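As a numerical companion to the Lippmann-Schwinger problem summarized in the preceding passage, here is a minimal Nystrom-style sketch: a Gauss-Legendre grid, the standard outgoing s-wave Green's function g0(r,r') = -(1/k) sin(k r_<) exp(i k r_>), a direct solve of (I - G0 V) psi = sin(kr), a unitarity check of the resulting S-matrix, and a few Weinberg-type eigenvalues of the kernel. The potential, energy and grid are placeholders, and the accuracy is far coarser than the spectral method used in the paper.

```python
import numpy as np

# Gauss-Legendre grid on [0, Rmax]; a coarse stand-in for the spectral grid in the text.
Rmax, npts, k = 15.0, 200, 0.5
x, w = np.polynomial.legendre.leggauss(npts)
r = 0.5 * Rmax * (x + 1.0)
w = 0.5 * Rmax * w

V = -3.0 * np.exp(-0.8 * r)                    # placeholder attractive potential

rl = np.minimum.outer(r, r)                    # r_<
rg = np.maximum.outer(r, r)                    # r_>
G0 = -(1.0 / k) * np.sin(k * rl) * np.exp(1j * k * rg)   # outgoing s-wave Green's function

Kmat = G0 * (V * w)[None, :]                   # kernel of the L-S integral operator
psi = np.linalg.solve(np.eye(npts) - Kmat, np.sin(k * r))

c = -(1.0 / k) * np.sum(np.sin(k * r) * V * psi * w)   # outgoing-wave amplitude
S = 1.0 + 2.0j * c
print("|S| (should be close to 1 for a real potential):", abs(S))

# Weinberg-type eigenvalues eta of G0 V; the sturmian eigenvalues are lambda = 1/eta.
eta = np.linalg.eigvals(Kmat)
eta = eta[np.argsort(-np.abs(eta))][:5]
print("largest |eta| eigenvalues:", np.round(eta, 4))
```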
consider a immersion , where is an open set . here is a hypersurface element ( or a position vector ) , and .the first fundamental form is a symmetric , positive definite metric tensor of , given by .its matrix elements can also be expressed as , where is the euclidean inner product in , .let be the unit normal vector given by the gauss map , where the cross product in is a generalization of that in . here is the normal space of at point .the vector is perpendicular to the tangent hyperplane at .note that , the tangent space at . by means of the normal vector and tangent vector ,the second fundamental form is given by the mean curvature can be calculated from where we use the einstein summation convention , and .let be an open set and suppose is compact with boundary .let be a family of hypersurfaces indexed by , obtained by deforming in the normal direction according to the mean curvature .explicitly , we set we wish to iterate this leading to a minimal hypersurface , that is in all of , except possibly where barriers ( atomic constraints ) are encountered . for our purpose ,let us choose , where is a function of interest .we have the first fundamental form : the inverse matrix of is given by where is the gram determinant .the normal vector can be computed from eq .( [ normal ] ) the second fundamental form is given by i.e. , the hessian matrix of . | we introduce a novel concept , the minimal molecular surface ( mms ) , as a new paradigm for the theoretical modeling of biomolecule - solvent interfaces . when a less polar macromolecule is immersed in a polar environment , the surface free energy minimization occurs naturally to stabilizes the system , and leads to an mms separating the macromolecule from the solvent . for a given set of atomic constraints ( as obstacles ) , the mms is defined as one whose mean curvature vanishes away from the obstacles . an iterative procedure is proposed to compute the mms . extensive examples are given to validate the proposed algorithm and illustrate the new concept . we show that the mms provides an indication to dna - binding specificity . the proposed algorithm represents a major step forward in minimal surface generation . the stability and solubility of macromolecules , such as proteins , dnas and rnas , are determined by how their surfaces interact with solvent and/or other surrounding molecules . therefore , the structure and function of macromolecules depend on the features of their molecule - solvent interfaces . molecular surface was proposed to describe the interfaces and has been applied to protein folding , protein - protein interfaces , protein surface topography , oral drug absorption classification , dna binding and bending , macromolecular docking , enzyme catalysis , calculation of solvation energies , and molecular dynamics . it is of paramount importance to the implicit solvent models . however , the molecular surface model suffers from it being probe dependent , non - differentiable , and being inconsistent with free energy minimization . minimal surfaces are omnipresent in nature . their study has been a fascinating topic for centuries . french geometer , meusnier , constructed the first non - trivial example , the catenoid , a minimal surface that connects two parallel circles , in the 18th century . in 1760 , lagrange discoved the relation between minimal surfaces and a variational principle , which is still a cornerstone of modern mechanics . plateau studied minimal surfaces in soap films in the mid - nineteenth century . 
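For a classical two-dimensional surface in R^3, the fundamental-form and mean-curvature expressions defined above reduce to the familiar H = (EN - 2FM + GL) / (2(EG - F^2)). A quick numerical check on the catenoid mentioned here, with partial derivatives of the parametrization taken by central differences, should return H close to zero everywhere; this is purely illustrative and is not part of the paper.

```python
import numpy as np

def catenoid(u, v):
    return np.array([np.cosh(v) * np.cos(u), np.cosh(v) * np.sin(u), v])

def mean_curvature(u, v, h=1e-4):
    """H from the first and second fundamental forms of a parametrized surface,
    with partial derivatives approximated by central differences."""
    ru  = (catenoid(u + h, v) - catenoid(u - h, v)) / (2 * h)
    rv  = (catenoid(u, v + h) - catenoid(u, v - h)) / (2 * h)
    ruu = (catenoid(u + h, v) - 2 * catenoid(u, v) + catenoid(u - h, v)) / h**2
    rvv = (catenoid(u, v + h) - 2 * catenoid(u, v) + catenoid(u, v - h)) / h**2
    ruv = (catenoid(u + h, v + h) - catenoid(u + h, v - h)
           - catenoid(u - h, v + h) + catenoid(u - h, v - h)) / (4 * h**2)
    n = np.cross(ru, rv); n /= np.linalg.norm(n)
    E, F, G = ru @ ru, ru @ rv, rv @ rv
    L, M, N = ruu @ n, ruv @ n, rvv @ n
    return (E * N - 2 * F * M + G * L) / (2 * (E * G - F * F))

H = [abs(mean_curvature(u, v))
     for u in np.linspace(0, 2 * np.pi, 12)
     for v in np.linspace(-1.5, 1.5, 12)]
print("max |H| over the sample grid:", max(H))   # of order 1e-7, i.e. a minimal surface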
in liquid phase , materials of largely different polarizabilities , such as water and oil , do not mix , and the material in smaller quantity forms ellipsoidal drops , whose surfaces are minimal subject to the gravitational constraint . the self - assembly of minimal cell membrane surfaces in water has been discussed . the schwarz p minimal surface is known to play a role in periodic crystal structures . the formation of -sheet structures in proteins is regarded as the result of surface minimization on a catenoid . a minimal surface metric has been proposed for the structural comparison of proteins . however , to the best of our knowledge , a natural minimal surface that separates a less polar macromolecule from its polar environment such as the water solvent has not been considered yet . the objective of this report is to introduce the theory of and algorithm to generate minimal molecular surfaces ( mmss ) . since the surface free energy is proportional to the surface area , a mms contributes to the molecular stability in solvent . therefore , there must be a mms associated with each stable macromolecule in its polar environment . although minimal surfaces are often generated by evolving surfaces with predetermined curve boundaries , there is no algorithm available that generates minimal surfaces with respect to obstacles , such as atoms . here , we develop such an algorithm based on the theory of differential geometry . for a given initial function that characterizes domain encompassing the biomolecule of interest , we consider an evolution driving by the mean curvature where is a small parameter , is the gram determinant , and . our procedure involves iterating eq . ( [ evolut ] ) until everywhere except for certain protected boundary points where the mean curvatures take constant values . physically , the vanishing of the mean curvature is a natural consequence of surface free energy minimization . consider the surface free energy of a molecule as , where is boundary of the molecule , the energy density and . the energy minimization via the first variation leads to the euler lagrange equation , where . for a homogeneous surface , , a constant , eq . ( [ el ] ) leads to the vanishing of the mean curvature . for a given set of atomic coordinates , we prescribe a step function initial value for , i.e. , a non - zero constant inside a sphere of radius about each atom and zero elsewhere . alternatively , a gaussian initial value can be placed around each atomic center . the value of is updated in the iteration except for at obstacles , i.e. , a set of boundary points given by the collection of all of the van der waals sphere surfaces or any other desired atomic sphere surfaces . here and can be approximated by any standard numerical methods . for simplicity , we use the standard second order central finite difference . due to the stability concern , we choose , where is the smallest grid spacing . the mms is differentiable , probe independent , and consistent with the surface free energy minimization . [ cols="^,^,^ " , ] finally , we employ our mms to study the mechanism of molecular recognition in protein - dna interactions . nmr and molecular dynamics studies suggest that antennapedia achieves specificity through an ensemble of rapidly fluctuating dna contacts . while x - ray structure indicates a well - defined set of contacts due to side chains constraints . in the present work , we reveal flat contacting interfaces which stabilizing the protein - dna complex . figs . 
[ fig.9ant2](a ) and [ fig.9ant2](c ) depict the mmss of antennapedia and dna ( pdb i d : 9ant ) , generated by using and . clearly , the binding site of the dna ( middle groove ) has a large facet , which is absent from the top and bottom grooves of the dna . interestingly , the mms of the protein exhibits a complimentary facet . for a comparison , the molecular surfaces ( mss ) generated by using the program msms with the same set of van der waals radii and a probe radius of 1.5 are depicted in figs . [ fig.9ant2 ] ( b ) and [ fig.9ant2 ] ( d ) . apparently , it is very difficult to recognize the complementary binding interfaces from mss . it is interesting to note that the mms also better reveals the skeleton of the dna s double helix structure . to quantitate the affinity at the contacting site , we compute the mean distance between the mmss of the protein and dna by using about 7200 surface vertices over the binding domain . a small mean distance of 0.4054 unveils a close contact between two facets . relatively small standard deviation of 0.3401 indicates the smoothness of the contacting facets . in contrast , inconclusive mean ( 0.8697 ) and standard deviation ( 0.5818 ) were found from the corresponding mss . this study indicates the great potential of the proposed mms for biomolecular binding sites prediction and recognition . we have introduced a novel concept , the minimal molecular surface ( mms ) , for the modeling of biomolecules , based on the speculation of free energy minimization for stabilizing a less polar molecule in a polar solvent . the mms is probe independent , differentiable , and consistent with surface free energy minimization . a novel hypersurface approach based on the theory of differential geometry is developed to generate the mmss of arbitrarily complex molecules . numerical experiments are carried out on few - atom and many - atom systems to demonstrate the proposed method . it is believed that the proposed mms provides a new paradigm for the studies of surface biology , chemistry and physics , in particular , for the analysis of stability , solubility , solvation energy , and interaction of macromolecules , such as proteins , membranes , dnas and rnas . it has potential applications not only in science , but also in technology , such as vehicle design and packaging problems . 99 l.a . kuhn , m. a. siani , m. e. pique , c. l. fisher , e. d. getzoff and j. a. tainer , the interdependence of protein surface topography and bound water molecules revealed by surface accessibility and fractal density measures , _ j. mol . biol . _ , * 228 * , 13 - 22 ( 1992 ) . f.m . richards , areas , volumes , packing and protein structure , _ annu . rev . biophys . bioeng . _ , * 6 * , 151 - 176 ( 1977 ) . m.l . connolly , analytical molecular surface calculation . , _ j. appl . crystallogr . _ , * 16 * , 548 - 558 ( 1983 ) . r.s . spolar and m.t . jr . record , coupling of local folding to site - specific binding of proteins to dna , _ science _ , * 263 * , 777 - 184 ( 1994 ) . p.b . crowley and a. golovin , cation - pi interactions in protein - protein interfaces , _ proteins - struct . func . bioinf . _ , * 59 * , 231 - 239 ( 2005 ) . c.a.s . bergstrom , m. strafford , l. lazorova , a. avdeef , k. luthman and p. artursson , absorption classification of oral drugs based on molecular surface properties , _ j. medicinal chem . _ , * 46 * , 558 - 570 ( 2003 ) . a.i . dragan , c.m . read , e.n . makeyeva , e.i . milgotina , m.e.a . churchill , c. crane - robinson and p.l . 
privalov , dna binding and bending by hmg boxes : energetic determinants of specificity , _ j. mol . biol . _ , * 343 * , 371 - 393 ( 2004 ) . r.m . jackson and m.j . sternberg , a continuum model for protein - protein interactions : application to the docking problem , _ j. mol . biol . _ , * 250 * , 258 - 275 ( 1995 ) . v.j . licata and n.m . allewell , functionally linked hydration changes in escherichia coli aspartate transcarbamylase and its catalytic subunit , _ biochemistry _ , * 36 * , 10161 - 10167 ( 1997 ) . t.m . raschke , j. tsai and m. levitt , quantification of the hydrophobic interaction by simulations of the aggregation of small hydrophobic solutes in water , _ proc . natl . acad . sci . usa _ , * 98 * , 5965 - 5969 ( 2001 ) . b. das and h. meirovitch , optimization of solvation models for predicting the structure of surface loops in proteins , _ proteins _ , * 43 * , 303 - 314 ( 2001 ) . j. warwicker and h.c . watson , calculation of the electric - potential in the active - site cleft due to alpha - helix dipoles , _ j. mol . biol . _ , * 154 * , 671 - 679 ( 1982 ) . b. honig and a. nicholls , classical electrostatics in biology and chemistry , _ science _ , * 268 * , 1144 - 1149 ( 1995 ) . s. andersson , s.t . hyde , k. larsson and s. lind , minimal surfaces and structures from inorganic and metal crystals to cell membranes and bio polymers , _ chem . rev . _ , * 88 * , 221 - 242 ( 1998 ) . m.w . anderson , c.c . egger , g.j.t . tiddy , j.l . casci and k.a . brakke , a new minimal surface and the structure of mesoporous silicas , _ angew . chem . int . ed . _ , * 44 * , 3243 - 3248 ( 2005 ) . d. pociecha , e. gorecka , n. vaupotic , m. cepic and j. mieczkowski , spontaneous breaking of minimal surface condition : labyrinths in free standing smectic films , _ phys . rev . lett . _ , * 95 * , no . 207801 ( 2005 ) . j.m . seddon and r.h . templer , cubic phases of self - assembled amphiphilic aggregates , _ philos . t. royal soc . london ser . a - math . phys . engng . sci . _ , * 244 * , 377 - 401 ( 1993 ) . chen bl , eddaoudi m , hyde st , okeeffe m , yaghi om , interwoven metal - organic framework on a periodic minimal surface with extra - large pores , _ science _ , * 291 * , 1021 - 1023 ( 2001 ) . e. koh and t. kim , minimal surface as a model of beta - sheets , _ prot . struct . func . bioinf . _ , * 61 * , 559 - 569 ( 2005 ) . a. falicov and f.e . cohen , a surface of minimum area metric for the structural comparison of proteins , _ j. mole . biol . _ , * 258 * , 871 - 892 ( 1996 ) . d.l . chopp , computing minimal - sufaces via level set curvature flow , _ j. comput . phys . _ , * 106 * , 77 - 91 ( 1993 ) . t. cecil , a numerical method for computing minimal surfaces in arbitrary dimension , _ j. comput . phys . _ , * 206 * , 650 - 660 ( 2005 ) . a. gray , modern differential geometry of curves and surfaces with mathematica , _ second edition _ , ( crc press , boca raton , 1998 ) . m. billeter , homeodomain - type dna recognition , _ progr . biophys . mol . biol . _ , * 66 * , 211 - 225 ( 1996 ) . e. fraenkel and c.o . pabo , comparison of x - ray and nmr structures for the antennapedia homeodomain / dna complex , _ nature struc . mol . biol . _ , * 5 * , 692 - 697 ( 1998 ) . m.f . sanner , a.j . olson and j.c . spehner , reduced surface : an efficient way to compute molecular surfaces , _ biopolymers _ , * 38 * , 305 - 320 ( 1996 ) . |
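```

As a recap of the iterative surface-relaxation procedure described in this passage, the following is a minimal sketch on a small 3D grid with two hypothetical atoms. The evolution equation and the step-size constant were stripped from this extraction, so the sketch assumes the common graph mean-curvature form, dS/dt = sqrt(g) div(grad S / sqrt(g)) with g = 1 + |grad S|^2, second-order central differences, a conservative time step, periodic handling of the box boundary via np.roll, and, for simplicity, all grid points inside the atomic spheres held fixed (the paper protects the sphere surfaces).

```python
import numpy as np

n, L = 40, 8.0
h = L / (n - 1)
xs = np.linspace(0, L, n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")

atoms = [((3.0, 4.0, 4.0), 1.2), ((5.0, 4.0, 4.0), 1.2)]   # hypothetical centers and radii
S = np.zeros((n, n, n))
protect = np.zeros((n, n, n), dtype=bool)
for (cx, cy, cz), rad in atoms:
    inside = (X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2 <= rad ** 2
    S[inside] = 1.0                     # step-function initial value around each atom
    protect |= inside                   # obstacle points are never updated

dt = 0.1 * h * h                        # conservative step; the paper's constant is unknown here

def grad(F):
    gx = (np.roll(F, -1, 0) - np.roll(F, 1, 0)) / (2 * h)
    gy = (np.roll(F, -1, 1) - np.roll(F, 1, 1)) / (2 * h)
    gz = (np.roll(F, -1, 2) - np.roll(F, 1, 2)) / (2 * h)
    return gx, gy, gz

def div(fx, fy, fz):
    return ((np.roll(fx, -1, 0) - np.roll(fx, 1, 0))
            + (np.roll(fy, -1, 1) - np.roll(fy, 1, 1))
            + (np.roll(fz, -1, 2) - np.roll(fz, 1, 2))) / (2 * h)

for step in range(500):
    gx, gy, gz = grad(S)
    g = 1.0 + gx * gx + gy * gy + gz * gz      # Gram determinant of the graph metric
    sg = np.sqrt(g)
    update = dt * sg * div(gx / sg, gy / sg, gz / sg)
    update[protect] = 0.0                      # atomic constraints act as obstacles
    S += update

print("largest update at the final step:", float(np.abs(update).max()))
```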
there is a continuous and rapid global growth in data storage needs .archival and backup storage form a specific niche of importance to both businesses and individuals .a recent market analysis from idc stated that the global revenue of the data archival business is expected to reach ] / * * / availdatabybw min( ) availdatabyindex min( availdatabyindex max(availdatabyindex , 0 ) availdata = min(availdatabybw , availdatabyindex ) availdata availdata .we assume in this example a hypothetical system where . ]if we focus on a single time step , then the scheduling problem can be restated as how to choose the best permutation of .we can represent this decision problem using a permutation tree as is depicted in figure [ f : permtree ] .the weight of the edges in this permutation tree correspond to the negative amount that choosing each edge contributes to .choosing the best scheduling algorithm tis the same than finding the shortest path between vertices and in the permutation tree .the bellman - ford algorithm can find the shortest past with cost where and respectively represent the number of edges and vertices in the permutation tree .however , in our permutation tree the number of edges and vertices are both , which makes finding the optimal schedule for even the simplified problem computationally exorbitantly expensive , even for small number of nodes .hence , we consider the general problem described in [ s : optimal ] to be also intractable .in this section we investigate several heuristics for scheduling the in - network redundancy generation .we split the scheduling problem into two parts , following the strategy presented in algorithm [ a : algo ] .the heuristics do not require assumption [ a : simpleprob1 ] , thus allowing the source node to send different amounts of data to each storage node .we however still rely on assumption [ a : simpleprob2 ] , which allows us to model the decision problem with a sorting algorithm , as previously outlined in algorithm [ a : algo ] .thus , the overall scheduling problem is decomposed into the following two decisions : ( i ) how does the source node schedule its uploads ? ( ii )how are redundancy generation triplets sorted ? recall that generating redundancy directly from the source node involves less bandwidth than doing it with in - network techniques ( remark [ r : sourcetraffic ] ) .thus , a good source traffic scheduling should aim at maximizing the source s upload capacity utilization .furthermore , the schedule must also try to ensure that the source injected data can be further used for the in - network redundancy generation . given a hsrc , where , any subset of linearly independent encoded fragmentsforms a basis , denoted by ( see example [ ex : basis ] for an illustration ) .let be the set of all the possible bases .since each storage node stores one redundant fragment , we use to represent all the basis of whose corresponding storage nodes are available at a time step ( and likewise , refer to each combination of such nodes as an _ available basis _ ) : from the set of available basis , , the source node selects one basis and uploads some data to each node .the amount of data the source uploads to each node is set to guarantee that at the end of time step , all these nodes have received the same amount of data , . 
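To illustrate the set of available bases just defined, here is a small sketch that enumerates, for a hypothetical (n, k) code and a made-up set of online nodes, every subset of k online fragments that is linearly independent. A real-valued random matrix stands in for the code's generator matrix (an actual hSRC is defined over a finite field), and one fragment is made deliberately dependent so the filtering is visible.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
k, n = 3, 6                                  # hypothetical (n, k) code dimensions
G = rng.standard_normal((k, n))              # real-valued stand-in for the generator matrix
G[:, 5] = G[:, 0] + G[:, 2]                  # make one encoded fragment linearly dependent
online = [0, 2, 3, 5]                        # storage nodes available at this time step

def available_bases(G, online, k):
    """All size-k subsets of online nodes whose encoded fragments are linearly
    independent, i.e. the set of usable bases at this time step."""
    bases = []
    for subset in combinations(online, k):
        if np.linalg.matrix_rank(G[:, list(subset)]) == k:
            bases.append(subset)
    return bases

for basis in available_bases(G, online, k):
    print("usable basis:", basis)            # (0, 2, 5) is excluded by the rank test
```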
from equations ( [ e :throughput ] ) and ( [ e : innettraf ] ) we know that evening out the data all nodes receives allows to minimize network traffic and maximize insertion throughput .to even out the amount of data each node in basis receives and maximizing the utilization of the upload capacity of the source , the source needs to send to each node an amount of data equal to besides determining how to distribute the upload capacity of the source between nodes of a basis , the source node also needs to select the basis from all the available ones ( if more than one basis is available ) .we consider the following heuristic policies for the source node to select a specific basis : * * random : * is randomly selected from . repeating this procedure for several time steps is expected to ensure that all nodes receive approximately the same amount of data from the source . ** minimum data : * the source selects the basis that on an average has received less redundant data .it means that is the basis that minimizes .this policy tries to homogenize the amount of data all nodes receive . ** maximum data : * the source selects the basis that on an average has received more redundant data .it means that is the basis that maximizes .this policy tries to have a basis of nodes with enough data to allow the in - network redundancy generation for the entire data object even when the source may not be available . * * no basis : * the source does not considers any basis and instead uploads data to all the online nodes .the upload bandwidth of the source is also distributed to guarantee that , after time step , all online nodes have received the same amount of data . at each timestep , once the source allocates its upload capacity to nodes from a specific available basis , the remaining upload / download capacity of the available nodes is used for in - network redundancy generation . for that purpose , the list of _ available triplets _ , ,is determined as follows : then , the set of available triplets is sorted , and the available upload / download capacity of storage nodes allocated according to this priority ( i.e. , the first available triplets have more preference ) .we consider the following sorting heuristics : * * random : * repair triplets are randomly sorted .this policy tries to uniformly distribute the utilization of network resources to maximize the amount of in - network generated data . ** minimum data : * the list of available triplets are sorted in ascending order according to the amount of data the destination is the destination of a triplet , . ]node has received .this policy tries to prioritize the redundancy generation in those nodes that have received less redundant data . ** maximum data : * similarly to the _ minimum data _ policy , however , triplets are sorted in descending order .this policy tries to maximize the amount of data some specific subset of nodes receive , to allow them to sustain the redundancy generation process even when the source is not available . * * maximum flow : * the triplets are sorted in descending order according to the amount of redundant data these nodes can help generate .note that the amount of data a triplet can generate at each time step , where , is given by : this policy tries to maximize the amount of new redundancy generated per time step ..different policy combinations . [ cols="^,^,^",options="header " , ] he have proposed four different policies for the source traffic scheduling problem and four policies for the triplets sorting problem . 
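as a concrete illustration of the triplet-sorting step, the sketch below expresses the four heuristics as sort keys over a list of triplets. the triplet layout (two source nodes and one destination) and the helpers received[.] and flow(.) are hypothetical stand-ins for the quantities defined above, not the implementation used in the experiments.

....
# a sketch of the four triplet-sorting heuristics as sort keys. each triplet
# t = (src1, src2, dst) is assumed to generate redundancy at node dst from the
# fragments held by src1 and src2; 'received' and 'flow' are hypothetical
# stand-ins for the per-node data counters and the per-triplet achievable flow.
import random

def sort_triplets(triplets, policy, received, flow):
    if policy == "random":      # spread network usage uniformly
        out = list(triplets)
        random.shuffle(out)
        return out
    if policy == "min_data":    # favour destinations that have received least
        return sorted(triplets, key=lambda t: received[t[2]])
    if policy == "max_data":    # favour destinations that have received most
        return sorted(triplets, key=lambda t: received[t[2]], reverse=True)
    if policy == "max_flow":    # favour triplets able to generate most data now
        return sorted(triplets, key=flow, reverse=True)
    raise ValueError("unknown policy: %s" % policy)
....

the source-side policies can be written in the same spirit, as a choice function over the set of available bases.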
however , after an extensive experimental evaluation of all polices we will only report for each case the two best policies ( in terms of achieved throughput ) . at the source ,the _ random _ and _ minimum data _ policies consistently outperform the others , and at the storage nodes , the _ maximum flow _ and _ minimum data _ sorting policies for the triplets likewise outperform the others .we will refer to each of the combinations as denoted in table [ t : policies ] .first , the interpretation of the good performance of the random policy in the source node is that the use of random bases favors the diversity of information among the nodes , which in turn enables more redundancy generation triplets .second , it is interesting to note that the _ minimum data _ policy obtains good storage throughput in both cases , which leads us to infer that _ in general , prioritizing redundancy generation in those nodes that have received less data _ is a good strategy to maximize the throughput of the backup process .we considered a -hsrc , which is a code that can achieve a static data resiliency similar to a 3-way replication , but requiring only a redundancy factor of . using this erasure code we simulated various backup processes with different node ( un)availability patterns for a fixed number of time steps .in all the simulated cases we consider three different metrics : a. the maximum amount of data that can be stored in time steps , . b. the amount of data the source node uploads per unit of useful data backed up , c. the total traffic generated per unit of useful data stored , .we evaluate the three metrics for a system using an in - network redundancy generation algorithm and we compare our results with a system using the naive erasure coding backup process , where the source uploads all the data directly to each storage node .our results depict the savings and gains , in percentage , of using an in - network redundancy algorithm with respect to the naive approach . regarding the ( un)availability patterns of nodes and their bandwidth constraints we consider two different distributed storage cases : a. a p2p - like environment where we assume , to simplify simulations , that nodes have an upload bandwidth uniformly distributed between 20kbps and 200kbps , and an asymmetric download bandwidth equal to four times their upload bandwidth . nodes in this category follow two different availability traces from real decentralized application : ( i ) traces from users of an instant messaging ( i m ) service and traces from p2p nodes in the amule kad dht overlay . in both caseswe filter the nodes that on average stay online more than 4 , 6 and 12 daily hours , obtaining different mean availability scenarios .b. real availability traces collected from a google datacenter .the traces contain the normalized i / o load of more than 12,000 servers monitored for a period of one month .we consider that a server is available to upload / download data when its i / o load is under the -percentile load .we consider three different percentiles , , giving us three different node availability constraints . finally the time step duration is set to and we obtain the results by averaging the results of 500 backup processes of time steps each ( 5 days ) . 
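for reference, the three metrics can be computed from simple per-run totals; the sketch below is a plain formalisation of the definitions above (the variable names are ours) and only serves to make the comparison with the naive scheme explicit.

....
# a plain formalisation of the three evaluation metrics; variable names are
# ours, and 'naive' refers to the scheme where the source uploads everything.
def metrics(useful_stored, source_uploaded, total_traffic):
    return (useful_stored,                        # (a) data stored in T steps
            source_uploaded / useful_stored,      # (b) source upload per unit
            total_traffic / useful_stored)        # (c) traffic per unit stored

def relative_change(in_network, naive):
    # percentage change of each metric with respect to the naive scheme
    return [100.0 * (a - b) / b for a, b in zip(in_network, naive)]
....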
before discussing the results, we will like to note that the experiments make a few simplifying assumptions .furthermore , real deployments have somewhat different workload characteristics than what have been considered above .hence , the quantitative results we report are only indicative ( and many more settings could possibly be experimented ) , and instead the specific choices help us showcase the potential benefits of our approach in only a qualitative manner . [ [ storage - throughput . ] ] storage throughput .+ + + + + + + + + + + + + + + + + + + in figure [ f : stored ] we show the increment of the data insertion throughput achieved by the in - network redundancy generation process .we can see how the gain is higher when nodes are more available for redundancy generation .this fact is a consequence of the constraint in eq .( [ e : c : symmetry ] ) requiring redundancy generation triplets to be symmetric , which requires the three involved nodes in each triplet to be available simultaneously .the higher the online availability , the higher the chances to find online three nodes from a triplet .further , we observe that the _ rndflw _ policy achieves significantly better results in comparison to other policies ; the second best policy is _minflw_. it is easy then to see that the _ maximum flow_ heuristic plays an important role on the overall redundancy generation throughput , which tries to maximize the use of those nodes that can potentially generate more redundancy .additionally , a _random _ source selection policy provides more benefits than the _ minimum data _ policy .[ [ network - traffic . ] ] network traffic .+ + + + + + + + + + + + + + + + in figure [ f : traffic ] we show the increment on the required network traffic of the in - network redundancy generation strategy as compared to the traditional redundancy generation .as noted previously ( in remark [ r : sourcetraffic ] ) , the total traffic required for in - network redundancy generation can be up to twice the needed by the traditional process ( i.e. 100% traffic increment ) .however , since the in - network redundancy generation can not always take place due to the node availability constraints , the traffic increment is always below 100% .as it is expected then , the traffic increment is minimized when nodes are less available , in which case the source has to generate and introduce larger amounts of redundancy ( i.e. , less reduction in the data uploaded by source , as shown in figure [ f : source ] ) .it is also important to note that the increase in traffic is approximately the same or even less than the increase in storage throughput even for low availability scenarios .thus the in - network redundancy generation scales well by achieving a better utilization of the available network resources than the classical storage process .[ [ data - uploaded - by - the - source . ] ] data uploaded by the source .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + in figure [ f : source ] we show the reduction of data uploaded by the source . in the traditional approach , the source needs to upload times the size of the actual data to be stored ; of this data is redundant , however the in - network redundancy generation process allows to reduce the amount of data uploaded by the source . in this figurewe can see how in the best case ( _ rndflw _ policy ) our approach reduces the source s load by 40% ( out of a possible 57% ) , yielding 40 - 60% increase in storage throughput. 
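the "possible 57%" quoted above can be read as a simple bound: the naive scheme makes the source upload n/k times the object size d, while in-network generation can at best reduce the source load to the k fragments (one object size) needed to seed the network. the (k, n) = (3, 7) values below are only an illustrative assumption, chosen because they reproduce that figure.

....
% upper bound on the source-load reduction (schematic; (k,n)=(3,7) is assumed
% only for illustration, since it reproduces the 57% figure quoted in the text)
\[
\frac{(n/k)\,D - D}{(n/k)\,D} \;=\; \frac{n-k}{n}
\;=\; \frac{4}{7} \;\simeq\; 57\% \quad \mbox{for } (k,n)=(3,7).
\]
....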
finally , we want to note that the in - network redundancy performance requires finding three available nodes simultaneously , which becomes difficult on environments with fewer backup opportunities . to solve this problem , we would need to look at more sophisticated in - network redundancy generation strategies not subjected to the symmetric constraint ( defined in eq .( [ e : c : symmetry ] ) ) , so that nodes can forward and store partially - generated data .however , the scheduling problem will be much more complicated , and is beyond the reach of this first work .furthermore , in real traces , nodes will have correlation ( e.g. , based on batch jobs ) , which are missing in the synthetic traces , and such correlations can be leveraged in practice . exploring boththese aspects will be part of our future work .in this work we propose and explore how storage nodes can collaborate among themselves to generate erasure encoded redundancy by leveraging novel erasure codes local - repairability property . doing so not only reduces a source node s load to insert erasure encoded data , but also significantly improves the overall throughput of the data insertion process .we demonstrate the idea using self - repairing codes .we show that determining an optimal schedule among nodes to carry out in - network redundancy generation subject to resource constraints of the system ( nodes and network ) is computationally prohibitive even under simplifying assumptions .however , experiments supported by real availability traces from a google data center , and p2p / f2f applications show that some heuristics we propose yield significant gain in storage throughput under these diverse settings , proving the practicality of not only the idea in general , but also that of the specific proposed heuristics .calder , b. , wang , j. , ogus , a. , nilakantan , n. , skjolsvold , a. , mckelvie , s. , xu , y. , srivastav , s. , wu , j. , simitci , h. , haridas , j. , uddaraju , c. , khatri , h. , edwards , a. , bedekar , v. , mainali , s. , abbasi , r. , agarwal , a. , haq , m.f.u . ,haq , m.i.u ., bhardwaj , d. , dayanand , s. , adusumilli , a. , mcnett , m. , sankaran , s. , manivannan , k. , and rigas , l. `` windows azure storage : a highly available cloud storage service with strong consistency . '' in _ acm symposium on operating systems principles ( sosp)_. 2011 .ford , d. , labelle , f. , popovici , f.i ., stokely , m. , truong , v.a . , barroso , l. , grimes , c. , and quinlan , s. `` availability in globally distributed storage systems . '' in _ usenix conference on operating systems design and implementation ( osdi)_. 2010 .hastorun , d. , jampani , m. , kakulapati , g. , pilchin , a. , sivasubramanian , s. , vosshall , p. , and vogels , w. `` dynamo : amazon s highly available key - value store . '' in _symposium on operating systems principles ( sosp)_. 2007 .kubiatowicz , j. , bindel , d. , chen , y. , czerwinski , s. , eaton , p. , geels , d. , gummadi , r. , rhea , s. , weatherspoon , h. , weimer , w. , wells , c. , and zhao , b. `` oceanstore : an architecture for global - scale persistent storage . '' in _ intl .conference on architectural support for programming languages and operating systems ( asplos)_. 2000 .liu , s. , schulze , j.p . ,herr , l. , weekley , j.d ., zhu , b. , van osdol , n. , plepys , d. , and wan , m. `` cinegrid exchange : a workflow - based peta - scale distributed storage platform on a high - speed network . ''_ future generation comp ._ , 27(7):966976 , 2011 . 
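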
| erasure coding is a storage-efficient alternative to replication for achieving reliable data backup in distributed storage systems. during the storage process, traditional erasure codes require a unique source node to create and upload all the redundant data to the different storage nodes. however, such a source node may have limited communication and computation capabilities, which constrain the throughput of the storage process. moreover, the source node and the storage nodes might not be able to send and receive data simultaneously (e.g., nodes might be busy in a datacenter setting, or simply offline in a peer-to-peer setting), which can further threaten the efficacy of the overall storage process. in this paper we propose an ``in-network'' redundancy generation process which distributes the data insertion load among the source and storage nodes, by allowing the storage nodes to generate new redundant data through the exchange of partial information among themselves, thereby improving the throughput of the storage process. the process is carried out asynchronously, utilizing spare bandwidth and computing resources of the storage nodes. the proposed approach leverages the local repairability property of recently proposed erasure codes tailor-made for the needs of distributed storage systems. we show analytically that the performance of this technique relies on an efficient usage of the spare node resources, and we derive a set of scheduling algorithms to maximize this usage. we show experimentally, using availability traces from real peer-to-peer applications as well as google data center availability and workload traces, that our algorithms can, depending on the environment characteristics, increase the throughput of the storage process significantly (up to 90% in data centers and 60% in peer-to-peer settings) with respect to the classical naive data insertion approach. * keywords : distributed storage systems, data insertion, locally repairable codes, self-repairing codes * |
efficient computing of x - ray ( and neutron ) scattering from crystals has been the subject of intense work since computers became available . except in the case of small structures ( atoms ) or small number of reflections ( ) , the method of choice has long been to use the fast - fourier transform of the crystal s scattering density . by computing this density inside the crystal s unit cell over a suitable grid , it is possible to compute structure factors at nodes of the reciprocal lattice . in the case of strained or disordered crystals , the scattering must take into account a large part of the crystal ( or possibly the entire crystal ) instead of a single unit cell , in order to describe the departure from an infinite , triperiodic structure .this requires either using approximations , or a large computing power .moreover , both strain and disorder lead to non - discrete scattering , so that the scattered amplitude must be evaluated on a fine grid around or between bragg diffraction peaks .this type of computations can greatly benefit from fast calculations , which we will present in this paper .this article is organized as follows : in section [ secscattering ] we describe the formulas used for computing the scattering from an atomistic model , how it can be efficiently computed using a graphical processing unit ( gpu ) , and what performance can be achieved . in section [ secpynx ] we present the open - source package pynx which can be used to easily compute scattering with little programming knowledge . in section [ secapplication ]a few examples are given .x - ray and neutron scattering can generally be calculated , in the kinematic approximation , as : \ ] ] where is the scattered amplitude , is the scattering vector , represents the scattering density ( electronic or nuclear ) at position inside the crystal , and denotes the fourier transform .this equation can be used to determine the scattering from a crystal as long as is described on a grid fine enough to resolve the atomic positions , which is easy if the crystal can be described from a single unit cell . in the case of an aperiodic object ( crystal with an inhomogeneous strain or disordered ) , it is simpler to compute the scattering from an atomistic model , which can be obtained using reverse monte - carlo , atomic potentials combined with molecular dynamics or a direct minimization of the crystal energy .the scattered amplitude is then derived from the atomic positions : where is the scattering length ( either the thomson scattering factor for x - rays or the nuclear scattering length for neutrons ) of atom and its position .the number of floating - point operations ( ) required to evaluate equation ( [ eqn : eqnscattatoms ] ) is approximately equal ( see section [ secimplemgpu ] ) to : for a structure with ( e.g. a cube of silicon of ) and points in reciprocal space , this corresponds to , which can be compared to the current computing power of today s consumer micro - processors of per core . 
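for reference, the direct summation over atomic positions takes only a few lines of numpy; the sketch below is a plain cpu implementation (not the gpu code discussed later), with unit scattering lengths, and makes explicit the operation count of roughly 8 floating-point operations per (reflection, atom) pair used in the estimate above.

....
# a plain numpy reference for the direct sum f(hkl) = sum_j exp[2 pi i (h x_j
# + k y_j + l z_j)] (unit scattering lengths assumed); this is a cpu sketch,
# not the gpu implementation described below.
import numpy as np

def fhkl_direct(h, k, l, x, y, z):
    # h, k, l: 1d arrays of reflection coordinates (r.l.u.)
    # x, y, z: 1d arrays of fractional atomic coordinates
    phase = np.outer(h, x) + np.outer(k, y) + np.outer(l, z)
    return np.exp(2j * np.pi * phase).sum(axis=1)

def nops(natoms, nhkl):
    # ~8 floating-point operations per (reflection, atom) pair
    return 8.0 * natoms * nhkl
....

the gpu implementation described in the following sections evaluates the same sum, in parallel over the reflections.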
in the case of large nano ( and micro)-structures , for which the description of the atomic structure is not possible in practice , a model based on continuum elasticity can be used , either with an analytical or numerical approach ( see and references within ) .the most popular method in the case of epitaxial nanostructures currently is the finite element method- see for a recent discussion .this model can then be used to calculate the scattering using groups of atoms : where is the structure factor for the block of atoms ( generally a group of unit cells ) , its original position , and its displacement from the ideal structure .assuming that all blocks of atoms are identical and on a triperiodic grid , it is possible to rewrite eqn .[ eqn : eqnscattblock ] as : \ ] ] where denotes the bragg peak position around which the calculation is made , is the structure factor calculated for a block of atoms , and the displacement field inside the crystal . if the composition of the blocks of atoms vary ( e.g. due to interdiffusion ) , it is also possible to include a variation of the average scattering density in the : \ ] ] where is the relative scattering density in the crystal .both equation ( [ eqn : eqnscattblockft ] ) and ( [ eqn : eqnrhoftdispl ] ) allow the use of a _ fast _ fourier transform , but are only valid as long as : moreover , use of equations ( [ eqn : eqnscattblockft ] ) and ( [ eqn : eqnrhoftdispl ] ) with a _fast _ fourier transform restricts the computation of scattering on a triperiodic grid in reciprocal space - this is a limitation since modern data collection often use 2d detectors , and the measured points in reciprocal space are located on a _ curved _ surface ( the projection of the detector on ewald s sphere ) .furthermore , as the resolution in reciprocal space is inversely proportional to the size in real space , analysis of high - resolution data using a fft calculation demands a large model - even if the extent in reciprocal space is very limited .therefore , even if the speed of the fft is optimal for large crystalline structures - for points in real space , points in reciprocal space are calculated with a cost proportional to instead of - it is still interesting to consider a _direct _ computation using equation ( [ eqn : eqnscattatoms ] ) or ( [ eqn : eqnscattblock ] ) because it allows computation for : * _ any _ assembly of points in reciprocal space * from _ any _ structural model ( no matter how severely distorted or disordered ) in order to achieve the calculations in a reasonable time , it is useful to consider current graphics cards as general - purpose gpu . this has already been reported in the scope of crystallography , for computing scattering maps from disordered crystals , powder pattern computing using the debye equation , and for single - particle electron microscopy . to summarize basic principles behind gpu computing ,it is possible to accelerate any calculation provided that : 1 .it is * highly parallel * , _ i.e. _ the same formula must be applied on large amounts of data , independently 2 .the number of * memory * transfers required is much smaller than the number of mathematical operations 3 .the calculation pathway is determined in advance ( at compilation time ) , which excludes any _ if ... then ... else _ operation in the inner computation loop moreover , many classical functions ( e.g. : , , , fused evaluation , ... 
) are highly optimized on gpus - an algorithm requiring many such operations will be greatly accelerated .equation ( [ eqn : eqnscattatoms ] ) fulfills all requirements , assuming that both the number of atoms and the number of points in reciprocal space are large ( ) .[ figspeed ] indicates the number of reflections for the gpu calculations ( black lines ) .the cpu ( central processing unit ) curves ( red lines ) correspond to a computing using a vectorized ( sse - optimized ) c++ code running on a _ single _ core of an intel core2 quad q9550 running at 2.83 ghz , for ( the curves for and are almost identical).,title="fig:",scaledwidth=95.0% ] the implementation presented in this article uses the cuda toolkit .it is beyond the scope of this article to detail the exact algorithm used for computation , as the implementation is freely available as an open - source project ( see section [ secpynx ] ) .however , it should be noted that the calculations are made in parallel for all reflections , with the atomic coordinates shared between parallel threads ( to minimize memory transfers ) - this method is optimal for large number of atoms . for some configurations ( large number of reflections and small number of atoms ) , it may be more optimal to parallelize on the atoms and share the reflection coordinates between parallel processes .the achieved speed is shown in fig .[ figspeed ] , for a calculation of scattering for a random list of points in reciprocal space , and random coordinates for the atoms - the occupancy of all atoms is assumed in this test to be equal to 1 , and the atomic scattering factor is not evaluated- in practice the atomic scattering factors can be factorized and represent a negligible amount of computing ( see section [ sec : secexampleinas ] ) - the same is true for debye - waller factors . as can be seen in fig .[ figspeed ] , there is a strong dependence of the speed with the number of reflections and the number of atoms per second - the maximum speed ( ) is only reached if both numbers are larger than .each couple ( reflection , atom ) corresponds to 8 floating - point operations ( 3 multiplications , 4 additions , one evaluation ) operation is hardware - accelerated , it is 4 to 8 times slower than a simple addition . if the evaluation is counted as 4 , the achieved speed is . ] , so that the overall speed is equal to .this can be compared to the peak theoretical speed of 1.7 for this graphics card , which is only achieved when using fused add - multiply operations , without any bottleneck due to memory transfers . by comparison , when computing on the cpu ( see fig .[ figspeed ] ) , the maximum speed is reached sooner : for 100 atoms and reflections , or for atoms when using reflections .the top speed for a _ single _ core ( intel core2 quad q9550 running at 2.83 ghz ) , using sse - vectorized sine and cosine functions , is - times slower than the gpu version .un_-optimized ( without using sse code ) c++ code runs about 3 times slower , or times slower than the gpu version .using multiple cores , the speed increases linearly ( except for small number of atoms ) with the number of cores .as was already pointed out by , accuracy is an important issue since gpus are most efficient when using single precision floating - point operations .moreover , the accuracy of operations can be slightly relaxed compared to ieee standards . for example , since single - precision floating - point use 24 bits mantissa ( _ i.e. 
_ a relative accuracy of ) , precision may be expected to become problematic when the total scattered amplitude varies on more than 7 orders of magnitude . a simple test can be made : computing the scattering for a linear , perfectly periodic chain of identical atoms , and comparing it to the analytical formula : ( where is the reciprocal lattice unit and the number of atoms in the chain ) .this is shown in fig.[figaccuracy ] , for chains of atoms of different lengths .the discrepancy between the analytical calculation and the single - precision gpu calculation is clearly visible in the regions where the intensity is minimal - however in practice , the dynamic range where the calculation is reliable is always larger than , and most of the time ( of the points ) around - these numbers refer to the intensity ( squared modulus of the scattered amplitude ) .[ figaccuracy ] ( b ) and ( c ) atoms , using a perfectly periodic chain calculated using the gpu ( black line ) , the analytical model ( red line ) , and for a chain where the atoms are randomly displaced with a gaussian distribution with a standard deviation of ( gray line ) .the atoms where located at , and the h coordinates are located every 0.001 in reciprocal lattice units ( r.l.u.).,scaledwidth=95.0% ] such a dynamic range should be sufficient for most applications , as the practical range for experimentally measured intensities is usually lower , except in the case of perfect crystals . in fig.[figaccuracy ] a gray curve is superposed to the simulations , and corresponds to the gpu calculation for a chain of atoms with random displacements with a gaussian standard deviation of of their fractional coordinate . the error due tothe single - precision computing is generally lower than the noise level represented by the gray curve .we have found that errors due to single precision floating point calculations were not significant _ in practice _ : indeed , most of the time structural models for which this type of computation is used are not ideal ( see examples of simulated calculations using our code in and ) and therefore do not present a very large dynamic range ( larger than 8 orders of magnitude ) .it should however be noted that gpus can also perform calculations using double precision floating point , but with a lower performance , as the number of available processing units are generally smaller ( 8 times in the case of cuda graphics cards with capability less than 2.0 ) than for single precision calculations .more recent graphics cards ( available since mid-2010 ) , using the fermi architecture ( ` http://www.nvidia.com/object/fermi_architecture.html ` ) provide a higher computing power dedicated to double - precision computing ( about half the speed of single - precision ) .writing programs using gpu computing is a relatively complex process , as it is necessary to fine - tune the algorithm , notably in order to optimize memory transfers - which can make a very significant difference in terms of performance .for example an early version of the presented algorithm did not perform in a synchronized way between parallel computing threads , and its performance was slower than the final algorithm used .moreover , all data has to be allocated both in the computer s main memory as well as on the graphics card , which can be tedious to write . 
for this reason, we have written an open - source library , pynx `` python tools for nanostructure xtallography '' , using the python language .the main features of this software package are the following : * computing of scattering for a given list of atomic positions and points in reciprocal space does not require _ any _ gpu - computing knowledge * it is possible to input either a list of coordinates , or also include their occupancy * the shape and order of the and coordinates ( i.e. 1d , 2d or 3d , sorted or not ) is irrelevant - all calculations are made _ in fine _ on 1d vectors * the computation can be distributed on several gpus - e.g. such cards as nvidia s gtx 295 are seen as two independent gpu units - the calculation is distributed transparently over the two gpu * a pure sse - optimized cpu computation is also available when no gpu is available , and can take advantage of all the computing cores available .three modules are available : * ` pynx.gpu ` , which is the main module allowing fast , parallel computation of , either using a gpu or the cpu * ` pynx.fthomson ` , which gives access to the analytic approximation for the x - ray atomic scattering factors extracted from the cctbx library * ` pynx.gid ` , which provides transmission and reflection coefficients at an interface , which is required for grazing incidence diffraction analysis using the distorted wave born approximation ( dwba ) , as is demonstrated in section [ sec : dwba ] .[ figstrainedcube ] .intensities are color - coded on a logarithmic scale.,scaledwidth=95.0% ] to compute the scattering around the reflection of a simple cubic structure with a lateral size of 100 unit cells , the following code is used : .... # import libraries from numpy import arange , float32,newaxis , log10,abs from pynx import gpu # create array of 3d coordinates , 100x100x100 cells x = arange(-50,50,dtype = float32 ) y = arange(-50,50,dtype = float32)[:,newaxis ] z = arange(0,100,dtype = float32)[:,newaxis , newaxis ] # hkl coordinates as a 2d array h = arange(-.1,.1,0.001 ) k=0 l = arange(3.9,4.1,0.001)[:,newaxis ] # the actual computation fhkl , dt = gpu.fhkl_thread(h , k , l , x , y , z , gpu_name="295 " ) # display using matplotlib from pylab import imshow imshow(log10(abs(fhkl)**2),vmin=0 , extent=(-.1,.1,3.9,4.1 ) ) .... in this example , the calculation takes 0.93s on a gtx 295 graphics card .the library used for graphics display is matplotlib ( http://matplotlib.sourceforge.net/ ) . of course scattering from this cubecould be calculated analytically - if we introduce a simple displacement field in the z - direction : , the following line can be inserted after the `` ` z = arange ... ` '' instruction : .... z = z+1e-6*z*(x**2+y**2 ) .... the computed diffraction map is shown in fig.[figstrainedcube ] . 
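as noted above, the unstrained cube can also be calculated analytically: with unit scattering lengths the sum factorises into three geometric series, which provides a quick consistency check of the gpu result. the sketch below only evaluates this closed form; it is not part of pynx.

....
# analytic cross-check for the unstrained 100x100x100 cube: the structure
# factor factorises into three geometric series (unit scattering lengths).
import numpy as np

def laue(q, n):
    # |sum_{m=0}^{n-1} exp(2 pi i q m)| = |sin(pi n q) / sin(pi q)|
    q = np.asarray(q, dtype=float)
    num, den = np.sin(np.pi * n * q), np.sin(np.pi * q)
    safe = np.abs(den) > 1e-12
    return np.where(safe, np.abs(num / np.where(safe, den, 1.0)), float(n))

h = np.arange(-.1, .1, 0.001)
l = np.arange(3.9, 4.1, 0.001)[:, np.newaxis]
intensity = (laue(h, 100) * laue(0.0, 100) * laue(l, 100)) ** 2
....

for the strained cube the sum no longer factorises, which is precisely where the direct summation becomes necessary.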
in the previous example , the atomic scattering factor is not taken into account - since this factor is the same for all atoms of the same type , it is easy to group all atoms of the same type together and calculate first , and then multiply it by the value of the atomic scattering factor dependent on the position in reciprocal space ( and the energy if anomalous scattering terms are to be taken into account ) , as well as the debye - waller factor .these atomic scattering factors can be extracted from the ` pynx.fthomson ` module .let us consider an inas nano - structure , for which we have atomic coordinates in separate files ` in.dat ` and `as.dat ` , each file having 3 columns corresponding to the x , y , z orthonormal coordinates ( in nanometers ) .the scattering around reflection for this data can be calculated in the following way ( the f and f " resonant terms were taken manually from the cctbx library ) : .... # import libraries from numpy import arange , newaxis , sqrt , abs , loadtxt from pynx import gpu , fthomson # hkl coordinates as a 2d array h = arange(-.1,.1,0.001 ) k=0 l = arange(3.9,4.1,0.001)[:,newaxis ] # load orthonormal coordinates xas , yas , zas = loadtxt("as.dat",unpack = true ) xin , yin , zin = loadtxt("in.dat",unpack = true ) # convert to fractional coordinates xas/=.6036 yas/=.6036 zas/=.6036 xin/=.6036 yin/=.6036 zin/=.6036 # compute scattering fhklin , dt = gpu.fhkl_thread(h , k , l , xin , yin , zin , gpu_name="295 " ) fhklas , dt = gpu.fhkl_thread(h , k , l , xas , yas , zas , gpu_name="295 " ) # apply scattering factors at 10kev s=6.036/sqrt(h**2+k**2+l**2 ) fas = fthomson.fthomson(s,"as")-1.64 + 0.70j fin = fthomson.fthomson(s,"in")+0.09 + 3.47j # full structure factor fhkl = fhklas*fas + fhklin*fin .... a specific module ( ` pynx.gid ` ) is available for grazing incidence diffraction - this module allows to compute the complex refraction index of a crystalline material ( the substrate ) and determine the reflection and transmission coefficients at the interface .it is therefore possible to simulate grazing incidence x - ray scattering using the dwba approximation , by taking into account the reflections before and after diffraction by the object at the surface , which influence the shape of the scattering in reciprocal space . in fig.[figdwba ]is shown the simulated scattering for a germanium quantum dot on a silicon substrate .for the sake of demonstrating the use of pynx , a simple analytical model is used : * the quantum dot shape corresponds to a portion of a sphere , with a radius equal to 50 unit cells and a height of 20 unit cells . *the germanium content varies linearly from ( bottom ) to ( top ) * the relaxation ( x , y , z being the fractional coordinates relative to the silicon substrate lattice ) are : + moreover , in this example only the scattering from the quantum dot is calculated , ignoring any contribution from the substrate .the corresponding script is provided as a supplementary file . 
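for completeness, the sketch below shows one way the quantum-dot model of this example could be generated: a spherical cap of radius 50 and height 20 unit cells, with a linearly varying ge occupancy. it is not the supplementary script; the composition end points and the relaxation law are placeholders, and the atomic basis inside each cell is omitted.

....
# a sketch (not the supplementary script) of the quantum-dot model: a spherical
# cap of radius 50 and height 20 unit cells, with a ge occupancy varying
# linearly with height. the composition end points and the relaxation law are
# placeholders, and the atomic basis inside each unit cell is omitted.
import numpy as np

r, height = 50.0, 20.0
zc = height - r                          # sphere centre so that the cap height is 20
x, y, z = [a.astype(float) for a in np.mgrid[-50:51, -50:51, 0:20]]
inside = x**2 + y**2 + (z - zc)**2 <= r**2
x, y, z = x[inside], y[inside], z[inside]

ge_bottom, ge_top = 0.0, 1.0             # placeholder composition end points
ge_occ = ge_bottom + (ge_top - ge_bottom) * z / height
si_occ = 1.0 - ge_occ
# relaxation would enter here as displacements added to (x, y, z) before passing
# coordinates and occupancies to gpu.fhkl_thread (which accepts occupancies,
# as mentioned above).
....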
a more complete description of the scattering , taking into account scattering from the ( strained ) substrate ,could also be added in the future .[ figdwba ] reflection , and is plotted against ( reciprocal lattice unit ) and the outgoing angle .the location ( outgoing angle ) of the intensity maximum as a function of varies as the in - plane strain changes with the height of the corresponding layer in the dot , due to interferences between the four scattered beams .see text for details.,scaledwidth=95.0% ]the pynx library is freely available from the project website at : http://pynx.sourceforge.net .it is open - source software , distributed under the cecill - b license ( http://www.cecill.info/index.en.html ) , a permissive license similar to the freebsd license . although this library has been developed and tested only under linux , it should work on any operating system ( including macos x and windows ) supported by the pycuda library .it has been tested on a variety of graphics card ( 9600 gt , gtx 295 , 8800 gtx ) .although it is recommended to use a dedicated graphics card ( not used for display ) for gpu computing , it is not a requirement - the library automatically cuts the number of atoms in order to decompose the calculation in batches which last less than 5s ( a limit imposed by the cuda library for graphics cards attached to a display ) . andit is also possible to use the cpu for calculations .this library uses the scipy ( http://www.scipy.org ) and pycuda libraries , and optionally the cctbx to determine the refraction index for the computing of transmission and reflection coefficients for grazing incidence x - ray scattering .the main interest from this computing project is the ability to compute scattering for non - ideal structures without any approximation .this is particularly important for strained nano - structures where the calculation often uses a fast fourier transform approximation , even though the displacements from the ideal structure are large .this could also be useful for coherent diffraction imaging in bragg condition for strained nano - structures , especially in order to extend this method to severely distorted lattices ( e.g. near an epitaxial interface with dislocations , a grain boundary , ... ) .a current limitation of this project is related to the toolkit used - the cuda development package is the most popular gpu computing tool available at the moment , but it depends on a single manufacturer , and remains proprietary . an important development in that regard is the creation of the opencl language ( http://www.khronos.org/opencl/ ) , which is intended to allow gpu - computing _ independently of the graphics card_. a future implementation of the proposed algorithm could use opencl and ensure its usability on a larger range of computing equipment . | scattering maps from strained or disordered nano - structures around a bragg reflection can either be computed quickly using approximations and a ( fast ) fourier transform , or using individual atomic positions . in this article we show that it is possible to compute up to using a single graphics card , and we evaluate how this speed depends on number of atoms and points in reciprocal space . an open - source software library ( pynx ) allowing easy scattering computations ( including grazing incidence conditions ) in the python language is described , with examples of scattering from non - ideal nanostructures . 
scattering calculations from atomistic models using graphical processing units are presented, and compared to the speed achieved using normal cpu calculations. an open-source software toolbox (pynx) is presented, with a few examples showing the fast calculation of scattering maps from strained nanostructures, including grazing-incidence conditions. |
nonequilibrium transitions from stuck to flowing phases underlie the physics of a wide range of physical phenomena . in a first class of systemsthe onset of a stuck or frozen state occurs as a result of intrinsic dynamical constraints , due to interactions or crowding , and is usually referred to as _ jamming _ .familiar examples are supercooled liquids that become a glasses upon lowering the temperature , colloidal suspensions that undergo a glass transition due to crowding upon increasing the density or the pressure , foams and granular materials that jam under shear , arrays of dislocations in solids that jam under an applied load . in a second class of systemsthe transition to a stuck state is due to external constraints , such as the coupling to quenched disorder ( pinning centers from material defects in vortex lattices , optical traps in colloids , etc . ) , and is denoted as _ pinning _ .both classes of systems can be driven in and out of glassy states by tuning not only temperature , density or disorder strength , but also an applied external force .the external drive may be a shear stress in conventional glasses or simply a uniform applied force in systems with extrinsic quenched disorder , where even a uniform translation of the system relative to the fixed impurities represents a nontrivial perturbation .vortex lattices in superconductors and charge density waves ( cdws ) in metals can be driven in and out of stuck glassy states by a uniform external current or electric field , respectively . as recognized recently in the context of jamming ,the external drive plays a role not unlike that of temperature in driving the system to explore metastable configurations and should be included as an axis in a complete phase diagram . in this lectures i will focus on zero - temperature depinning transitions of interacting condensed matter systems that spontaneously order in periodic structures and are driven over quenched disorder .the prototype examples are vortex lattices in type - ii superconductors and charge density waves in anisotropic metals .other examples include wigner crystals of two dimensional electrons in a magnetic field moving under an applied voltage , lattices of magnetic bubbles moving under an applied magnetic field gradient , and many others .in general these systems form a lattice inside a solid matrix , provided by the superconducting or conducting material and are subject to pinning by random impurities .the statics of such disordered lattices have been studied extensively .one crucial feature that distinguishes the problem from that of disordered interfaces is that the pinning force experienced by the periodic structure is itself periodic , although with random amplitude and phase . as a result , although disorder always destroys true long - range translational order and yields glassy phases with many metastable states and diverging energy barriers between these states , the precise nature of the glassy state depends crucially on disorder strength . 
at weak disorder the system , although glassy , retains topological order ( the resulting phase has been named bragg glass in the context of vortex lattices ) .topological defects proliferate only above some characteristic disorder strength , where a topologically disordered glass is formed .the driven dynamics of disordered periodic structures have been studied extensively by modeling the system as an overdamped elastic medium that can be deformed by disorder , but is not allowed to tear , that is by neglecting the possible formation of topological defects due to the competition of elasticity , disorder and drive .this model , first studied in the context of charge density waves , exhibits a nonequilibrium phase transition from a pinned to a _unique _ sliding state at a critical value of the driving force .this nonequilibrium transition displays universal critical behavior as in equilibrium _ continuous _ transitions , with the medium s mean velocity acting as an order parameter . while the overdamped elastic medium model may seem adequate to describe the dynamics of driven bragg glasses , many experiments and simulations of driven systems have shown clearly that topological defects proliferate in the driven medium even for moderate disorder strengths .the dynamics near depinning becomes spatially and temporally inhomogeneous , with coexisting moving and pinned degrees of freedom .this regime has been referred to as plastic flow and may be associated with memory effects and even hysteresis in the macroscopic response .the goal of the present lectures is to describe coarse - grained models of driven extended systems that can lead to history - dependent dynamics .such models can be grouped in two classes . in the first classthe displacement of the driven medium from some undeformed reference configuration remains single - valued , as appropriate for systems without topological defects , but the interactions are modified to incorporate non - elastic restoring forces .in the second class of models topological defects are explicitly allowed by removing the constraint of single - valued displacements . herewe will focus on the first class and specifically consider driven periodic media with a linear stress - strain relation , where the stress transfer between displacements of different parts of the medium is nonmonotonic in time and describes viscous - type slip of neighboring degrees of freedom .a general model of this type that encompasses many of the models discussed in the literature was proposed recently by us . here slips between neighboring degrees of freedomare described as viscous force , that allows a moving portion of the medium to overshoot a static configuration before relaxing back to it .it is shown below that such viscous coupling can be considered an effective way of incorporating the presence of topological defects in the driven medium .related models have also been used to incorporate the effect of inertia or elastic stress overshoot in crack propagation in solids .the precise connection between the two classes of models has been discussed in ref . .in section [ sec : singleparticle ] we review the simplest example of depinning transition , obtained when non - interacting particles are driven through a periodic pinning potential . by contrasting the case of periodic and non - periodic pinning, we stress that care must be used in the definition of the mean velocity of the system . 
in section [ sec : extendedmedium ] , we first describe the generic coarse - grained model of a driven elastic medium that exhibits a _ continuous _depinning transition as a function of the driving force from a static to a _unique _ sliding state .next we introduce an anisotropic visco - elastic model as a generic model of a periodic system driven through strong disorder .the model considers coarse - grained degrees of freedom that can slip relative to each other in the directions transverse to the mean motion , due to the presence of small scale defects ( phase slips , dislocations , grain boundaries ) at their boundaries , but remain elastically coupled in the longitudinal directions .the slip interactions are modeled as viscous couplings and a detailed physical motivation for this choice is given in section [ subsec : viscous_couplings ] .most of our current results for these type of models are for the mean - field limit and are presented in section [ sec : mean_field ] .the studies carried out so far for finite - range interactions suggest that the mean - field theory described here may give the correct topology for the phase diagram , although there will of course be corrections to the critical behavior in finite dimensions .finally , we conclude in section [ sec : other_models ] by discussing the relation to other models described in the literature and the connection to experiments .it is instructive to begin with the problem of a single particle driven through a _ periodic _ pinning potential as the simplest illustration of driven depinning . assuming overdamped dynamics , the equation of motion for the position of the particle is where is a friction coefficient ( in the following we choose our units of time so that ) , is the external drive and , with an integer , is a periodic function of period . for simplicitywe choose a piecewise linear pinning force , corresponding to , for . in this case a periodic solution of eq .( [ single_per ] ) is obtained immediately in terms of the time needed to traverse a potential well , or period . introducing an arbitrary time such that if , then , the particle position for is where is given by for and diverges for .in other words if the particle never leaves the initial well , i.e. , it is pinned .the threshold force for depinning is then . in the sliding state the meanvelocity is defined as the average of the instantaneous velocity over the arbitrary initial time .this gives this definition naturally identifies the mean velocity of the particle with the inverse of the period .the logarithmic behavior of near threshold , , is peculiar to a discontinuous pinning force . for an arbitrary pinning force the period is and can be evaluated analytically for various forces .for instance , for a sinusoidal pinning force , , one finds , which gives near threshold , a generic behavior for continuous pinning forces. the main focus of the remainder of this paper will be on the modeling of extended driven systems as collections of interacting degrees of freedom .it will then be important to distinguish two cases . for extended systems that are periodic , such as charge density waves and vortex lattices , the pinning potential is itself periodic as each degrees of freedom sees the same disorder after advancing one lattice constant . 
for non - periodic systems , such as interfaces ,each degree of freedom moves through a random array of defects .when interactions are neglected , an extended _ periodic _ system moving through a periodic random pinning potential can be modeled as a collection of non - interacting particles , where each particle sees its own periodic pinning potential .the pinning potentials seen by different particles may differ in height and be randomly shifted relative to each other , as sketched in fig .the equation of motion for the -th particle at position is then where are random phases uniformly distributed in and the pinning strengths are drawn independently from a distribution .since the displacements are decoupled , they can be indexed by their disorder parameters and instead of their spatial label , i.e. , . the mean velocity of the many - particle system can then be written as an average over the random phases and pinning strengths , where .the average over the random phase of each degree of freedom is equivalent to the average over the random time shift described for the single - particle case and yields , with the period of each particle given in eq .( [ period ] ) .the mean velocity is then where denotes the average over the barrier height distribution . for distributions thathave support at , a system of _ noninteracting _ particles with periodic pinning depins at , as there are always some particles experiencing zero pinning force .( a ) sketch of noninteracting degrees of freedom driven over a random periodic pinning potential in one dimension .spatial coordinates have been discretized so that degrees of freedom are labelled by an index . in ( b ) the case where each degree of freedom interacts elastically with its neighborsthis is a discretized one - dimensional realization of the elastic medium model described by eq .( [ elastic_model ] ) below . ]a different single - particle problem that has been discussed in the literature is that of a particle moving through a _random ( non - periodic ) _ array of defects .the defects can be described as pinning potential wells centered at random positions and/or with random well heights . 
to make contact with the periodic case we consider a particle moving through a succession of evenly spaced pinning potential wells of random heights .the equation of motion is where is the total number of pinning centers and the pinning strengths are drawn independently from a distribution .choosing again the piecewise - linear pinning force , the time to traverse the -th well is simply , with given by eq .( [ period ] ) .the mean velocity of the particle is defined as the total distance travelled divided by the total time and is given by in this case , unless the distribution is bounded from above , there is always a finite probability that the particle will encounter a sufficiently deep potential well to get pinned .therefore for unbounded the particle is always pinned in the thermodynamic limit .if is bounded from above by a maximum pinning strength , this value also represents the depinning threshold .finally , the case of many noninteracting particles driven through a random array of defects is equivalent to that of a single particle , as the mean velocity of each particle can be calculated independently .the mean velocity of the system is then again given by eq .( [ vmean_nonper ] ) .we consider a -dimensional periodic structure driven along one of its symmetry directions , chosen as the direction .the continuum equations for such a driven lattice within the elastic approximation were derived by various authors by a rigorous coarse - graining procedure of the microscopic dynamics .assuming overdamped microscopic dynamics , the equation for the local deformation of the medium ( in the laboratory frame ) from an undeformed reference state is written by balancing all the forces acting on each portion of the system as where is the stress tensor due to interactions among neighboring degrees of freedom , is the driving force and is the periodic pinning force .the periodicity of the pinning force , which contains fourier components at all the reciprocal lattice vectors of the lattice , arises from the coupling to the density of the driven lattice . for conventional short - ranged elasticitythe stress tensor is where and are the compressional and shear moduli of the driven lattice , respectively , and is the strain tensor .it was shown in ref . that deformations of the driven lattice along the direction of the driving force grow without bound due to large transverse shear stresses that generate unbounded strains responsible for dislocation unbinding .for this reason , we focus here on the dynamics of a scalar field , with , describing deformations of the driven lattice along the direction of mean motion .the -dimensional vector denotes the coordinates transverse to the direction of motion .assuming , we obtain a scalar model for the driven elastic medium , given by where denotes the component of the pinning force . for simplicitywe also consider a model that only retains the component of the pinning force at the smallest reciprocal lattice vector and choose our units of lengths so that the corresponding period is 1 .the pinning force is then taken of the form where is a periodic function .the random pinning strengths are drawn independently at every spatial point from a distribution with zero mean and short - ranged correlations to be prescribed below .the random phases are spatially uncorrelated and distributed uniformly in . 
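as an illustration, the sketch below integrates a one-dimensional discretisation of eq. ([elastic_model]) with overdamped euler dynamics, in the spirit of the discretised chain sketched in the figure above. the sinusoidal pinning function, the uniform strength distribution and all parameter values are illustrative choices only, not those of the studies cited below.

....
# a minimal 1d sketch of the discretised elastic-medium model: overdamped
# euler integration of du_i/dt = K (u_{i+1} + u_{i-1} - 2 u_i) + F
# + h_i sin(2 pi (u_i - beta_i)). the sinusoidal pinning, the uniform strength
# distribution and all parameter values are illustrative choices only.
import numpy as np

def mean_velocity(F, L=256, K=1.0, h0=1.0, dt=0.02, steps=40000, seed=0):
    rng = np.random.default_rng(seed)
    h = h0 * rng.random(L)                 # random pinning strengths
    beta = rng.random(L)                   # random phases in [0, 1)
    u = np.zeros(L)
    vbar, nsamp = 0.0, 0
    for it in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u   # periodic boundaries
        v = K * lap + F + h * np.sin(2.0 * np.pi * (u - beta))
        u += dt * v
        if it >= steps // 2:               # discard the initial transient
            vbar += v.mean()
            nsamp += 1
    return vbar / nsamp

# sweeping F and plotting mean_velocity(F) traces out the velocity-force curve
# and locates the depinning threshold of this finite sample.
....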
the model of a driven overdamped elastic medium embodied by eq .( [ elastic_model ] ) has been studied extensively both analytically and numerically .it exhibits a depinning transition at a critical value of the applied force from a static to a _unique _ sliding state .the depinning can be described as a continuous equilibrium transition , with the mean velocity playing the role of the order parameter , and universal critical behavior .the velocity vanishes as is approached from above as .the critical exponent depends only on the system dimensionality and was found to be using a functional rg expansion in .strong disorder can yield topological defects in the driven lattice , making the elastic model inapplicable . in this casethe dynamics becomes inhomogeneous , with coexisting pinned and moving regions . the depinning transition may be discontinuous ( first order ) , possibly with macroscopic hysteresis .several mean - field models of driven extended systems have been proposed to describe this inhomogeneous dynamics . herewe focus on a class of models that retains a single - valued displacement field and a linear stress - strain relation , but assumes that the presence of topological defects can be effectively incorporated at large scales by a non - instantaneous stress transfer that couples to gradients of the local velocity ( rather than displacement ) .more precisely , we consider an anisotropic model of coarse - grained degrees of freedom that can slip relative to each other in at least one of the directions transverse to the mean motion , due to the presence of small scale defects ( phase slips , dislocations , grain boundaries ) at their boundaries , but remain elastically coupled in the longitudinal directions .this model incorporates the anisotropy of the sliding state in the plastic flow region that results either from flow along coupled channels oriented in the direction of the drive ( e.g. , as in the moving smectic phase ) or in layered materials such as the high- cuprate superconductors .it also encompasses several of the models discussed in the literature . for generality , consider a -dimensional medium composed of degrees of freedom that are coupled elastically in direction and can slip relative to each other in the remaining directions .the axis along which the driving force is applied is along one of the directions .the equation of motion for the displacement is given by with the local velocity .this model will be referred to as the visco - elastic ( ve ) model as it incorporates elastic couplings of strength in directions and viscous couplings of strength controlled by a shear viscosity in the remaining directions . a two - dimensional cartoon of this anisotropic model is shown in fig .[ fig : cartoon ] . a two - dimensional realization of the anisotropic driven medium described in the text .spatial coordinates have been discretized in the figure so that degrees of freedom are labelled by indices and , respectively transverse and longitudinal to the direction of the driving force , .each degree of freedom interacts with its neighbors via elastic couplings in the longitudinal direction and via viscous or similar slip couplings in the transverse direction . ] for ( or ) the ve model reduces to the elastic model ( but with isotropic elasticity ) of eq .( [ elastic_model ] ) .conversely , for ( or ) eq .( [ ve_model ] ) reduces to the purely viscous model studied earlier by us . 
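a two-dimensional discretisation of the ve model can be stepped in the same spirit; because the viscous term couples the instantaneous velocities, each time step amounts to solving a small linear system for the velocities. the sketch below is again only illustrative (sinusoidal pinning, arbitrary parameter values), not the model used in the mean-field analysis.

....
# a sketch of one euler step of a 2d discretisation of the ve model: elastic
# couplings (K) along the drive direction (axis 1), viscous couplings (eta)
# between channels (axis 0). since the viscous term involves the velocities,
# each step solves (1 - eta*L_perp) v = K*L_par u + F + pinning. the pinning
# function and all parameter values are illustrative only.
import numpy as np

def ve_step(u, beta, h, K=1.0, eta=2.0, F=0.5, dt=0.02):
    ny, nx = u.shape                       # ny channels, nx sites per channel
    lap_par = np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 2.0 * u
    rhs = K * lap_par + F + h * np.sin(2.0 * np.pi * (u - beta))
    # build (1 - eta * L_perp) with periodic boundaries across the channels
    M = np.eye(ny, k=1) + np.eye(ny, k=-1) - 2.0 * np.eye(ny)
    M[0, -1] += 1.0
    M[-1, 0] += 1.0
    v = np.linalg.solve(np.eye(ny) - eta * M, rhs)   # one solve per column of rhs
    return u + dt * v, v
....

setting eta = 0 in this sketch recovers the elastic model of the previous section, while K = 0 recovers the purely viscous limit discussed next.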
for any distribution of pinning strengths with support at ,the purely viscous model has zero threshold for depinning , but it does exhibit a critical point separating regions of unique and multivalued solutions for the mean velocity . in the ve model ( and ) even when fluid - like shear takes place ,particle conservation gives a sharp depinning transition in flow along the channels .furthermore , as shown below , the model has a sharp mean - field tricritical point separating a region of parameters where depinning is continuous , in the universality class of elastic depinning , from one where depinning become discontinuous and hysteretic .it is important to stress that the ve model still assumes _ overdamped microscopic dynamics_. velocity or viscous couplings can appear generically in the large - scale equations of motion upon coarse - graining the microscopic dynamics of a dissipative medium .in fact , next we show that viscous couplings indeed represent an effective way of incorporating the local dissipation due to the presence of topological defects .the goal of this section is to provide some justification to the anisotropic ve model as an effective description of topological defects in a driven lattice . to this purposewe consider a two dimensional medium and take advantage of the continuum equations developed many years ago by zippelius et al . to describe the time - dependent properties of two - dimensional solids near melting .these authors combined the equations of free dislocation motion with solid hydrodynamics to construct a semimicroscopic dynamical model of a solid with free dislocations .they further showed that the dynamics of such of a `` heavily dislocated solid '' ( an elastic medium with an equilibrium concentration of free dislocations ) is identical to that of the hexatic phase obtained when a two - dimensional solid melts via the unbinding of dislocations .more recently we reconsidered the dynamical equations for the `` heavily dislocated solid '' of ref . and showed that they can be recast in the form of the phenomenological equations of a viscoelastic fluid ( with hexatic order ) introduced many years ago by maxwell . in the presence of free dislocationsthe local stresses in the medium have contributions from both elastic stresses and defect motion . the latter couple again to the the local strains which control the defect dynamics . by eliminating the defect degrees of freedom ,one obtains a linear , although nonlocal , relation between strain and stress , given by \;,\end{aligned}\ ] ] where and the velocity is defined here in terms of the momentum density as , with the equilibrium mass density of the medium .also in eq .( [ sigmave ] ) is the compressional modulus of the liquid and and are the compressional and shear relaxation times , with the dislocation glide and climb mobility , respectively .of course in the presence of dislocations the displacement is no longer single - valued ( although the strain remains single - valued and continuous ) and due to both the motion of vacancy / interstitial defects and of dislocations .the phenomenological maxwell model of viscoelasticity is obtained by assuming that even in the presence of dislocations .then for the viscoelastic stress reduces to the familiar elastic stress tensor given in eq .( [ stress ] ) , conversely for one obtains which describes stresses in a viscous fluid of shear viscosity and bulk viscosity . 
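A quick way to see the two limits just mentioned (elastic response at short times, fluid-like response at long times) is to look at the frequency response of a single Maxwell element with an exponentially relaxing stress kernel. The sketch below uses the textbook Maxwell form of the complex shear modulus; the modulus and relaxation time are hypothetical values, not parameters of the model above.

```python
import numpy as np

mu = 1.0        # high-frequency shear modulus (hypothetical units)
tau = 10.0      # shear relaxation time, inversely proportional to the density of free dislocations

omega = np.logspace(-3, 2, 7)
# complex shear modulus of a Maxwell element: G(w) = mu * i*w*tau / (1 + i*w*tau)
G = mu * 1j * omega * tau / (1.0 + 1j * omega * tau)

for w, g in zip(omega, G):
    print(f"omega={w:9.3e}   G'={g.real:8.4f}   G''={g.imag:8.4f}")
# for omega*tau >> 1 the storage modulus G' -> mu (solid-like shear rigidity),
# for omega*tau << 1 both moduli vanish and the medium flows like a fluid
# with shear viscosity eta = mu*tau (the low-frequency limit of G''/omega).
```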
the first term on the right hand side of eq .( [ sigma_fluid ] ) is the pressure and incorporates the fact that even a liquid has a nonzero long - wavelength compressional elasticity , which is associated with density conservation . as we will see below this terms plays a crucial role in controlling the physics of depinning of a viscoelastic medium .the maxwell viscoelastic fluid has solid - like shear rigidity at high frequency , but flows like a fluid at low frequency . since the relaxation times and are inversely proportional to the density of free dislocations , the maxwell model behaves as a continuum elastic medium on all time scales when and as a viscous fluid when .dislocation climb is much slower than dislocation glide ( ) , resulting in .we therefore assume that the response to compressional deformations is instantaneous on all time scales , but retain a viscoelastic response to shear deformations . letting , we find \;.\end{aligned}\ ] ] we now turn to the case of interest here , where topological defects are generated in a an extended medium driven through quenched disorder . in this casethe medium has no low frequency shear modulus , but particle conservation still requires long wavelength elastic restoring forces to _ compressional _ deformations . on the other hand ,the number of topological defects is not fixed as dislocations are continuously generated and annihilated by the interplay of elasticity , disorder and drive .furthermore , unbound dislocations can be pinned by disorder and do not equilibrate with the lattice . in the plastic region near depinning the dynamics remains very inhomogeneous and fluid - like and the pinning of dislocations by quenched disorder is not sufficient to restore the long wavelength shear - stiffness of the medium .for this reason we propose to describe the effect of topological defects near depinning by replacing elastic _ shear_ stresses by viscoelastic ones , while retaining elastic _compressional _ forces .of course the resulting model that assumes a fixed density of dislocations becomes inapplicable at large driving forces where dislocations heal as the lattice reorders .for the case of interest here of a scalar model describing only deformations along the direction of motion , the viscoelastic model of a driven disordered medium is with .this model naturally incorporates the anisotropy and channel - like structure of the driven medium , where _ shear _ deformations due to gradients in the displacement in the directions transverse to the mean motion ( ) are most effective at generating the large stresses responsible for the unbinding of topological defects .it is instructive to note that due to the exponential form of stress relaxation the integro - differential equation ( [ full_ve ] ) is equivalent to a second order differential equation for the displacement , with an effective friction . in other wordsthe effect of a finite density of dislocations in the driven lattice yields `` inertial effects '' on a scale controlled by the time . the purely viscous model obtained from eq .( [ utwodots ] ) with was analyzed in detail in ref . 
where it was shown that if and are tuned independently , then is a strongly irrelevant parameter in the rg sense .this allows us to consider a simplified form of the equation for the driven medium obtained from eq .( [ utwodots ] ) with , but finite , leading to the general anisotropic viscoelastic model introduced in eq .( [ ve_model ] ) .the mean - field approximation for the ve model is obtained in the limit of infinite - range elastic and viscous interactions . to set up the mean field theory , it is convenient to discretize space in both the transverse and longitudinal directions , using integer vectors for the -dimensional intra - layer index and for the -dimensional layer index .the local displacement along the direction of motion is .its dynamics is governed by the equation , + f+h_{{\ell}}^{{i}}y(u_{{\ell}}^{{i}}-\gamma_{{\ell}}^{{i}})\;,\ ] ] where the dot denotes a time derivative and ( ) ranges over sites ( ) that are nearest neighbor to ( ) .the random pinning strengths are chosen independently with probability distribution and the are distributed uniformly and independently in . for a system with sites , _ one _ mean field theory is obtained by assuming that all sites are coupled with uniform strength , both within a given channel and with other channels . each discrete displacement then couples to all others only through the mean velocity , , and the mean displacement , .we assume that the disorder is isotropic and the system is self averaging and look for solutions moving with stationary velocity : .since all displacements are coupled , they can now be indexed by their disorder parameters and , rather than the spatial indices and .the mean - field dynamics is governed by the equation the cases and need to be discussed separately .when , the mean field equation becomes identical to that of a single particle discussed in section [ sec : singleparticle ] driven by an effective force ( with friction ) . in this casedifferent degrees of freedom move at different velocities even in the mean field limit .the mean field velocity is determined by the self - consistency condition , where the average over the random phases is equivalent to the average over the random times shifts given in eq .( [ vmean_single ] ) . for the case of a piecewise linear pinning force using eq .( [ vmean_per ] ) we find with given by eq .( [ period ] ) .the mean velocity obtained by self - consistent solution of eq .( [ vmean_viscous ] ) is shown in figs .[ fig : viscous_fixed_h ] and [ fig : viscous_broad_h ] for two distributions of pinning strengths .( a ) velocity versus driving force for the purely viscous model ( , ) with a narrow distribution of pinning strength , , for .there is a finite depinning threshold at . in depinning and repinning forces and are shown as functions of . ] ( a ) velocity versus driving force for the purely viscous model ( , ) with a broad distribution of pinning strength , for . in this case there are no stable static ( pinned states ) .the velocity is single valued for and multi - valued for . in this case when is ramped up from zero , the velocity jumps discontinuously at where the system goes from the `` slow - moving '' to the `` fast - moving '' state . here and below `` coexistence '' refers to multistability of the solutions to the equations of motion .when is then ramped down from within the fast - moving state the jump in occurs at the lower value .the forces and become equal at the critical point , as shown in frame ( b ) . 
] for a narrow distribution , , there is a finite threshold , independent of .the velocity is multivalued for any finite .when the force is ramped up adiabatically from the static state the system depins at .when the force is ramped down from the sliding state , the system repins at the lower value .the depinning and repinning forces are shown in fig .[ fig : viscous_fixed_h](b ) .the region where unique and multivalued velocity solutions coexist extend to . for a broad distribution with support at ,e.g. , , the threshold for depinning is zero as some of the degrees of freedom always experience zero pinning and start moving as soon as a force is applied .there is a critical point at . for analytical solution for is multivalued , as shown in fig .[ fig : viscous_broad_h ] .if the force is ramped up adiabatically from zero at a fixed , the system depins discontinuously at , while when the force is ramped down it repins at the lower value , as shown in fig .[ fig : viscous_broad_h ] .the viscous model has also been studied in finite dimensions by mapping it onto the nonequilibrium random field ising model ( rfim ) . in the mapping ,the local velocities correspond to spin degrees of freedom , the driving force is the applied magnetic field and the mean velocity maps onto the magnetization .the rfim has a critical point separating a region where the magnetization versus applied field curve displays hysteresis with a discontinuous jump to a region where there is no jump in the hysteresis curve . in the viscous modelthe critical point separates a region where the velocity curve is smooth and continuous from the region where the `` depinning '' ( from `` slow - moving '' to `` fast - moving '' states ) is discontinuous and hysteretic .the critical point is in the ising universality class , with an upper critical dimension . when , all degrees of freedom are coupled by a spring - like interaction ( the first term on the right hand side of eq .( [ mft_ve ] ) ) to the mean field and can not lag much behind each other .this forces all the periods to be the same , independent of , and yields a nonvanishing threshold for depinning . in this casethe mean field velocity is determined by imposing .it is useful to first review the case where and . in this limit ,( [ mft_ve ] ) reduces to the mean field theory of a driven elastic medium worked out by fisher and collaborators .no moving solution exists above a finite threshold force . for the piecewise linear pinning forcethis is given by for there is a _unique _ moving solution that has a universal dependence on near , where it vanishes as . in mean - fieldthe critical exponent depends on the shape of the pinning force : for the discontinuous piecewise linear force and for generic smooth forces . using a functional rg expansion in ,narayan and fisher showed that the discontinuous force captures a crucial intrinsic discontinuity of the large scale , low - frequency dynamics , giving the general result , in reasonable agreement with numerical simulations in two and three dimensions . for simplicity and to reflect the `` jerkiness '' of the motion in finite - dimensional systems at low velocities , we use piecewise linear pinning below .when the nature of the depinning differs qualitatively from the case , in that hysteresis in the dynamics can take place .again , no self - consistent moving solution exists for , with independent of . 
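The ramp-up/ramp-down protocol referred to in the figures can be reproduced directly with a small simulation of the fully connected (mean-field) model. Since the displayed mean-field equation is not reproduced in this text, the sketch below assumes the form (1 + eta) du/dt = K (ubar - u) + eta vbar + F + h Y(u - gamma), i.e. a coupling of strength K to the mean displacement and of strength eta to the mean velocity; this is a reconstruction based on the verbal description above, and the pinning force, the distribution of pinning strengths, and all parameter values are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 2000                  # degrees of freedom in the mean-field (all-to-all) limit
K = 0.5                   # coupling to the mean displacement (elastic, hypothetical)
eta = 4.0                 # coupling to the mean velocity (viscous, hypothetical)
dt = 0.02

h = rng.uniform(0.5, 1.5, N)        # pinning strengths
gamma = rng.uniform(0.0, 1.0, N)    # random phases
u = np.zeros(N)

def Y(x):
    # piecewise-linear periodic pinning force of period 1 (sawtooth)
    return 0.5 - np.mod(x, 1.0)

def sweep(forces, u):
    """Adiabatic ramp: relax at each force value and record the mean velocity."""
    vbar_trace = []
    for F in forces:
        for _ in range(3000):
            pin = h * Y(u - gamma)
            # assumed mean-field dynamics (reconstruction, see lead-in):
            # (1 + eta) du/dt = K (ubar - u) + eta vbar + F + h Y(u - gamma)
            vbar = F + pin.mean()          # exact mean of the assumed equation
            v = (K * (u.mean() - u) + eta * vbar + F + pin) / (1.0 + eta)
            u += dt * v
        vbar_trace.append(vbar)
    return vbar_trace

forces = np.linspace(0.0, 1.2, 25)
v_up = sweep(forces, u)                 # ramp the drive up from the static state
v_down = sweep(forces[::-1], u)[::-1]   # then back down from the sliding state
for F, a, b in zip(forces, v_up, v_down):
    print(f"F={F:5.2f}   v_up={a:7.4f}   v_down={b:7.4f}")
# a gap between the two branches signals hysteretic depinning; for small eta
# the branches coincide and the transition is continuous.
```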
above threshold ,both unique and multi - valued moving solutions exist , depending on the values of the parameters : , , and the shape of the disorder distribution , . to obtain the mean field solution in the sliding state, we examine the motion during one period during which the displacement advances by 1 .( [ mft_ve ] ) is is easily solved for and , with the result , where . at long times , regardless of the initial condition , approaches a periodic function of period with jumps in its time derivative at times , with an integer .the constant is determined by requiring that if , then .writing , it is easy to see that for an arbitrary value of , the solution will have the form .the mean velocity is then obtained from .averaging over is equivalent to averaging for a fixed over a time period , , with the result , finally , averaging over and using the consistency condition , we obtain + \big\langle\frac{h^2}{k(k+h)}\frac{1}{e^{(k+h)/(1+\eta)\overline{v}}-1 } \big\rangle_h\;,\end{aligned}\ ] ] with the threshold force given in eq . ( [ ft ] ) and given by ( a ) velocity versus driving force for the ve model with and a broad distribution of pinning strength , .the velocity is continuous and single - valued for and becomes multivalued for .the dashed line on he curve for indicates the value where the system repins when the drive is ramped down from the sliding state .frame shows the depinning and repinning forces and as functions of .the tricritical point at separates continuous from hysteretic depinning .pinned and sliding states coexist in the region . ] as in the purely elastic case ( ) only static solutions exist for .for there is a _unique _ sliding solution , provided , with mean velocity near threshold given by giving , as in the purely elastic case . the critical line separating unique from multivalued sliding solutions is determined by , the velocity - force curves and a phase diagram are shown in fig .[ fig : ve_versus_eta ] for .there is a _ tricritical point _ at .in contrast to the purely viscous model with , for finite long - time elasticity ( ) the behavior is _ independent of the shape of the pinning force distribution , _ . for , a continuous depinning transition at a pinned state from a sliding state with _unique _ velocity . in finite dimensions ,this transition is likely to remain in the same universality class as the depinning of an elastic medium ( ) . in our mean - field example, the linear response diverges at , . for is hysteresis with coexistence of stuck and sliding states .numerical simulations of the ve model in two dimensions ( ) indicate a strong crossover ( possibly a tricritical point ) at a critical value of from continuous to hysteretic depinning .although it is always difficult to establish conclusively on the basis of numerics that hysteresis survives in the limit of infinite systems , the size of the hysteresis loop evaluated numerically does appear to saturate to a finite value at large system sizes , indicating that the mf approximation may indeed capture the correct finite - dimensional physics .other models of driven systems with inertial - type couplings have been proposed in the literature .it is useful to discuss in some detail their relationship to the viscoelastic model considered here . 
in the context of charge density waves , littlewood and levy and collaborators modified the fukuyama - lee - rice model that describes the phase of the cdw electrons as an overdamped elastic manifold driven through quenched disorder by incorporating the coupling of the cdw electrons to normal carriers . this was realized via a global coupling in the equation of motion for the phase to the mean velocity of the cdw , not unlike what is obtained by a mean - field approximation of our viscous coupling . the model was argued to account for the switching and non - switching behavior observed in experiments . schwarz and fisher recently considered a model of crack propagation in heterogeneous solids that incorporates stress overshoot , that is , the fact that a moving segment of the crack can sometimes overshoot one or more potential static configurations before settling in a new one , inducing motion of neighboring segments . these effects may arise from elastic waves that can carry stress from one region of the driven medium to another . stress overshoots , just like topological defects in a driven disordered lattice , have an effect similar to that of local inertia and were modeled by fisher and schwarz by adding couplings to gradients in the local crack velocity in the equation of motion for a driven elastic crack . these authors considered an automaton model where time is discrete . it is straightforward to define an automaton version of our ve model , where both the displacement and time are discrete , as shown in ref . it is then apparent that the automaton version of the viscoelastic model given in ref . is identical in its dynamics to the model of crack propagation with stress overshoot studied by schwarz and fisher , provided the strength of the stress overshoot is identified with the combination . the two models differ in the type of pinning considered , as the random force used by schwarz and fisher is not periodic . we find , however , that the two models have identical mean - field behavior , with a mean - field tricritical point separating continuous from hysteretic dynamical transitions . the connection between the viscoelastic and the stress - overshoot model is important because it stresses that distinct physical mechanisms ( inertia , nonlocal stress propagation , unbound topological defects ) at play in different physical systems can be described generically by a coarse - grained model that includes a coupling to local velocities of the driven manifold . finally , in a very recent paper , maimon and schwarz suggested that out of equilibrium a new type of generic hysteresis is possible even when the phase transition remains continuous . driven models with both elastic and dissipative velocity couplings may therefore belong to a novel universality class that exhibits features of both first and second order equilibrium phase transitions . they clearly deserve further study . we now turn briefly to simulations and experiments . for comparison with experiments it is useful to point out that the tricritical point of the viscoelastic model can also be obtained by tuning the applied force and the disorder strength , rather than the applied force and the viscosity . since the phase diagram does not depend on the form of the disorder distribution , , we choose for convenience a sharp distribution , . the phase diagram in the plane is shown in fig . [ fig : vephased_f_vs_h ] .
for weak disorder the depinning is continuous , while for strong disorder it becomes hysteretic , with a region of coexistence of pinned and moving degrees of freedom . the _ tricritical point _ is at , with . mean - field solution of the ve model with a piecewise parabolic pinning potential , and . left frame : phase diagram . right frame : velocity versus drive for ( blue ) , ( red ) and ( black ) . also shown for the discontinuous hysteretic jumps of the velocity obtained when is ramped up and down adiabatically . ] simulations of two - dimensional driven vortex lattices clearly show a crossover as a function of disorder strength from an elastic regime to a regime where the dynamics near depinning is spatially inhomogeneous and plastic , with coexistence of pinned and moving degrees of freedom . in fact a bimodal distribution of local velocity was identified in ref . as the signature of plastic depinning . this local plasticity does not , however , lead to hysteresis in the macroscopic dc response in two dimensions : the mean velocity remains continuous and single - valued , although it acquires a characteristic concave - upward form near depinning that cannot be described by the exponent predicted by elastic models in all dimensions . hysteresis is , however , observed in simulations of three - dimensional layered vortex arrays where the couplings across layers are weaker than the in - layer ones . in this case the phase diagram is qualitatively similar to that obtained for the viscoelastic model . recent experiments in have argued that memory effects originally attributed in this system to `` plasticity '' of the driven vortex lattice are actually due to edge contamination effects . in the experiments a metastable disordered vortex phase is injected into a stable ordered bulk vortex lattice . memory effects may then arise in the macroscopic dynamics during the annealing of the injected disordered phase . edge contamination does not , however , explain the plasticity seen in simulations , where periodic boundary conditions are used . a possible scenario may be that while in the experiments the vortex lattice in the bulk is always in the ordered phase , in the simulations the vortex lattice in the bulk of the sample may be strongly disordered even in the absence of drive . such a disordered vortex lattice would then naturally respond plastically to an external drive . finally , it is worth mentioning one experimental situation where hysteresis of the type obtained in our model is indeed observed in the macroscopic response . this occurs in the context of charge density waves , driven by both a dc and an ac field . in this case the dc response exhibits mode - locking steps . the `` depinning '' from such mode - locked steps was found to be hysteretic . several colleagues and students have contributed to various aspects of this work : alan middleton , bety rodriguez - milla , karl saunders , and jen schwarz . i am also grateful to jan meinke for help with some of the figures . the work was supported by the national science foundation via grants dmr-0305407 and dmr-0219292 . equation ( [ eqmotion ] ) should include a convective derivative that arises from the transformation to the laboratory frame . this is negligible near a continuous depinning transition where the mean velocity is very small and it drops out in the mean field limit considered here . it will therefore be neglected , although it does play a crucial role in the dynamics well into the sliding state .
| invited lecture presented at the xix sitges conference on _ jamming , yielding , and irreversible deformations in condensed matter _ , sitges , barcelona , spain , june 14 - 18 , 2004 . |
it is well known that shannon 's source - channel separation result for point - to - point communication does not hold in general for multi - terminal systems , and thus joint source - channel coding may be required to achieve the optimum . one simple yet intriguing scenario where source - channel separation is known to be suboptimal is broadcasting gaussian sources on gaussian channels . when a single gaussian source is at the encoder , the achievable distortion region is known when the source bandwidth and the channel bandwidth are matched , for which a simple analog scheme is optimal . however , when the source and channel bandwidths are not matched , an exact characterization of the achievable distortion region is not known . the best known coding schemes are based on joint source - channel coding using hybrid signaling , and approximate characterizations were given in ; see references therein for related works . as a simple extension to this problem of single gaussian source broadcasting , when the source is a bandwidth - matched bivariate gaussian and each decoder is interested in one source component , only a partial characterization is known , with the uncoded scheme shown to be optimal under certain signal - to - noise ratio conditions . in this work , we provide a complete characterization of the achievable distortion region for broadcasting bivariate gaussian sources over gaussian channels when each receiver is interested in only one component , where the source bandwidth and the channel bandwidth are matched . we further show that in this joint source - channel coding setting , the gaussian problem is the worst scenario among the sources and channel noises with the same covariances , in the sense that any distortion pair that is achievable in the gaussian setting is also achievable for other sources and channel noises . our work is built on the outer bounds given in , and we show that a hybrid coding scheme ( different from the one given in proposed for the same problem ) can achieve the outer bounds . our main contribution in this work is this new coding scheme and a detailed and systematic analysis of the inner and outer bounds , which result in a complete solution . to the best of our knowledge , this is the first case in the literature in which a hybrid scheme is shown to be optimal for a joint source - channel problem , whereas neither an uncoded scheme alone nor a separation - based scheme alone is optimal . let be a memoryless and stationary bivariate gaussian source with zero mean and covariance matrix where without loss of generality , we can assume . the vector will be written as for . we use to denote the domain of reals . the gaussian memoryless broadcast channel is given by the model where is the channel output observed by the -th receiver , and is the zero - mean independent additive gaussian noise in the channel . thus the channel is memoryless in the sense that is a memoryless and stationary process . the variance of is denoted as , and without loss of generality , we shall assume . throughout the paper , we use the natural logarithm .
, width=453 ]the mean squared error distortion measure is used , which is given by for .the encoder maps a source sample block into a channel input block ; the decoder observing channel output block reconstruct the source within certain distortion ; see fig .[ fig : systemdiag ] .the channel input is subject to an average power constraint .more formally , the problem is defined as follows .an bivariate gaussian source - channel broadcast code is given by an encoding function such that and two decoding functions and their induced distortions where is the expectation operation .in the definition , in the expression is understood as the length- vector addition .it is clear that the performance of any gaussian joint source - channel code depends only on the marginal distribution of , but not the joint distribution .note that this implies that physical degradedness does not differ from statistical degradedness in terms of the system performance .since the gaussian broadcast channel is always statistically degraded , we can assume physical degradedness without loss of generality .a distortion pair is achievable under power constraint , if for any and sufficiently large there exists an bivariate gaussian source - channel broadcast code such that the collection of all the achievable distortion pairs under power constraint for a given bivariate source is denoted by , and this is the region we shall characterize in this work . in fact , we shall determine the following function which clearly provides a characterization of the achievable distortion region . note that is a closed set , and thus the minimization above is meaningful . since the minimum that is achievable is given by when the second receiver is completely ignored , the function is thus only meaningfully defined on the domain ] , where }.\end{aligned}\ ] ] on the other hand if , then \\ d_2^h(p,\sigma^2,\rho , n_1,n_2,d_1)&d_1\in [ d^-_1,d^+_1 ] \end{array}\end{aligned}\ ] ] where }\end{aligned}\ ] ] and _ remark : _ depending on the power constraint , the achievable distortion region may have two operating regimes . 
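Before turning to the two regimes, a small Monte-Carlo sketch makes the setup concrete: a correlated Gaussian pair is transmitted with a simple uncoded (analog) strategy, and each receiver forms a linear MMSE estimate of its own component. The power split between the two components is an arbitrary illustrative choice here, not the optimized mapping analyzed in the paper, and all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# problem parameters (hypothetical values; unit source variances, N1 < N2)
rho, P, N1, N2 = 0.7, 1.0, 0.2, 0.5
n = 200_000

cov = [[1.0, rho], [rho, 1.0]]
S = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # source pairs (S1, S2)

# uncoded (analog) strategy: transmit a fixed linear combination of the two
# components, scaled to meet the average power constraint P
a, b = 0.6, 0.4                                        # illustrative power split, not optimized
X = a * S[:, 0] + b * S[:, 1]
X *= np.sqrt(P / np.mean(X ** 2))

def distortion(s, y):
    # mean squared error of the linear MMSE estimate of s from y
    return np.var(s) - np.cov(s, y)[0, 1] ** 2 / np.var(y)

Y1 = X + rng.normal(0.0, np.sqrt(N1), n)
Y2 = X + rng.normal(0.0, np.sqrt(N2), n)
D1 = distortion(S[:, 0], Y1)        # receiver 1 estimates S1
D2 = distortion(S[:, 1], Y2)        # receiver 2 estimates S2
print(f"uncoded scheme: D1 = {D1:.4f}, D2 = {D2:.4f}")
```

Varying the split (a, b) traces out one family of achievable distortion pairs; whether such points touch the outer bound is exactly the question addressed by the analysis of the two regimes below.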
in the regime where , the uncoded scheme given in is optimal , whereas in the regime , a hybrid scheme given in the next section is optimal , but the uncoded scheme is not .typical achievable distortion regions for these two cases will be illustrated in the next section after the hybrid coding scheme is given , where more observations regarding these schemes can be discussed .consider a source pair whose covariance is given by ( [ eqn : covariance ] ) , and channel noise pair whose variances are given by .we have the following theorem .[ theorem : worstcase ] if , then .this theorem essentially says the gaussian setting has the worst sources and channels among those having the same covariance structure , a result similar to the well - known ones that the gaussian source is the worst source ( ex .13.8 ) and the gaussian channel is the worst channel ( ex .10.1 ) .the proofs of theorem [ theorem : maintheorem ] and theorem [ theorem : worstcase ] are given in the next section .in this section , we shall first review some previous results which provide partial characterization of the achievable distortion region , then give a new hybrid coding scheme , which provides the missing portion of the characterization .finally , we provide a proof for the worst - case property of the gaussian setting .it is straightforward to show that a simple analog scheme by sending directly ( after certain scaling ) achieves the distortion pair ( see also ) for which can not be reduced even when the first receiver is not present .thus we trivially have thus we only need to characterize the function when ] and .the reconstruction in this uncoded scheme is thus naturally the single letter mapping given by ] .as we shall show in the next subsection , this genie - aided outer bound is in fact tight for a certain regime . intuitively speaking ,since the first receiver is stronger than the second receiver , and is not required at the first receiver , the information is redundant at the first receiver in a certain sense , and thus we can expect the genie - aided outer bound to be reasonably good . ,width=453 ] the coding scheme we propose is a hybrid one , where the channel input is given by where is ( roughly ) the quantized version of source sequence after some proper scaling ; is the digital portion of the channel input , and are two scaling parameters to be specified later .more precisely , consider the single - letter distribution where is a zero mean gaussian random variable independent of everything else with variance , and is again a scaling parameter to be specified later . since is a markov string , the joint distribution of is uniquely determined , and they are jointly gaussian .we also need to define the coefficients and =a_kx_d+b_k(\tilde{\alpha } s_1+\tilde{\beta } s_2+z_k ) , \qquad k=1,2.\end{aligned}\ ] ] this proposed hybrid scheme in this work is somewhat similar to the scheme given in for joint source channel coding on the multiple access channel . in what follows ,we only outline the coding scheme and some important analysis steps , but omit the rather technical detailed proof ( a rigorous proof can be straightforwardly adapted from that given in ) . * codebook generation : generate codewords single - letter wise according to the marginal distribution of ; this codebook is revealed to both the encoder and decoders . *encoding : find a sequence in the codebook that is jointly typical with the source sequence ; if successful , the transmitter sends . 
* digital decoding : the -th decoder tries to find a unique codeword in the codebook that is jointly typical with ; the decoder also recovers the sequence after removing the digital codeword . *estimation : if the digital decoding succeeds , then the decoder reconstructs the respective source sequence as .an error occurs in the above scheme if the encoder fails to find a codeword that is jointly typical with , or one of the decoders fails to find the correct digital codeword .note that due to the markov string , we indeed have that the chosen is jointly typical with with high probability in the above scheme . because the second receiver is a degraded version of the first receiver , the error probability can be made arbitrarily small if the following condition holds ( after ignoring the s often seen in the typicality argument ) furthermore , to ensure the power constraint is not violated , we need it is evident that as long as we can find such that ( [ eqn : powerconstraint ] ) holds with equality , because the left hand side of ( [ eqn : powerconstraint ] ) is monotonically increasing in in the range ] and ; * if , then the uncoded scheme is optimal over the range \cup [ d^+_1,d^{\max}_1] ] .it is worth noting that it is always true that .next we use theorem [ theorem : genie ] to write an lower bound for the function .[ prop : lowerbound ] for any ] by choosing in order to prove this proposition , we need to show firstly that the given choice of does not violate the power constraint , i.e. , the condition ( [ eqn : reducedpowercondition ] ) is satisfied ; secondly , the given choice of reduces the distortion pairs in ( [ eqn : innerbound ] ) to those given in ( [ eqn : matchingbound ] ) . notice that and given in ( [ eqn : alphachoice ] ) and ( [ eqn : betachoice ] ) are well - defined and non - negative when ] . in order to show that the lower bound as stated in proposition [ prop : lowerbound ]can be achieved , we first simplify the expression of given in ( [ eqn : innerbound ] ) in terms of , which ( after quite some algebra ) eventually gives substituting our choice of given in ( [ eqn : alphachoice ] ) into ( [ eqn : almostfinald1 ] ) leads to ; again substituting ( [ eqn : alphachoice ] ) into the expression of in ( [ eqn : innerbound ] ) gives the expression stated in the proposition , which completes the proof .readers may wonder how the magic value of was found , which optimizes in the hybrid scheme . indeed , directly optimizing the distortion is extremely cumbersome this way . ] . to circumvent this difficulty, we instead solve for such that the inner bound matches the outer bound , which gives the given expression .this approach is less intuitive , since it is possible that neither the genie - aided outer bound nor the hybrid scheme inner bound is tight , however extensive numerical comparison indeed suggested that these bounds match , which motivated us to take such an approach . in fig .[ fig : regime1 ] and fig .[ fig : regime2 ] we give two typical achievable distortion regions , where for comparison we also include the performance of a simple separation - based scheme where the digital broadcast messages encoding and ( conditioned on the reconstructed ) , respectively . 
in both figures ,each horizontal red line is the performance of the hybrid scheme by varying while keeping fixed .note that the hybrid scheme includes the uncoded scheme as a special case when the digital portion is allocated no power .[ fig : regime1 ] is plotted with the choice of source and channel satisfying the condition , and thus the uncoded scheme is always optimal . for this case , adding digital code in the hybrid scheme is always inferior .in contrast , fig [ fig : regime2 ] is plotted under the condition , and thus uncoded scheme is only optimal at high and low regimes . in the regimethat uncoded scheme is not optimal , it can be seen that even when analog portion does not include , the hybrid scheme can sometimes outperform the uncoded scheme , however , by optimizing in the analog portion , the inner and outer bounds indeed match .moreover , observe that the distortion of achieved by the hybrid scheme is not monotonic in when is fixed ( each red line ) , where the two extreme values of give the uncoded scheme and the hybrid scheme without analog , respectively . .here , , and .[fig : regime1],width=642 ] .here , , and .[fig : regime2],width=642 ] next we prove theorem [ theorem : worstcase ] , i.e. , the worst case property of the gaussian setting .we have shown an optimal scheme in the gaussian setting is the proposed hybrid scheme , and thus we can limit ourselves to the distortion pairs achievable by this scheme .in fact we shall continue to use this scheme and the associated parameters when the sources and channel noises are not gaussian .more precisely , we shall now use instead of to construct the digital source codewords where is still a gaussian random variable with variance of , independent of everything else .the overall covariance structure of the scheme remains intact as in the gaussian case , and thus the same ( mse ) distortion pairs can be achieved , as long as the digital codewords can be correctly decoded at both the decoders , i.e. , with our choices of the parameters , where is the channel output in this non - gaussian setting .note that unlike in the gaussian case , here the broadcast channel is not necessarily degraded , and thus we also need to make sure that the codeword can be correctly decoded at the first decoder .to show the second decoder can succeed ( with high probability ) , we only need to observe that where in the second inequality we substitute of the gaussian version of the problem , because the terms have the same covariance structure , and gaussian distribution maximizes the differential entropy ; in the last but one equality , we add and subtract the same term , and the last equality is due to our specific choice of the parameters in the gaussian problem . similarly , we can write where the second inequality is guaranteed by the relation in the gaussian case , which is indeed a degraded broadcast channel .this completes the proof .we provide a complete solution for the joint source - channel coding problem of sending bivariate gaussian sources over gaussian broadcast channels when the source bandwidth and channel bandwidth are matched .thus this problem joins a limited list of joint source - channel coding problems for which complete solutions are known .possible extension of this work includes the case with more than two users or more than two sources , and approximate characterization for bandwidth mismatched case , which are part of our on - going work . c. tian , s. n. diggavi and s. 
shamai , `` approximate characterizations for the gaussian broadcasting distortion region , '' in _ proc .ieee international symposium on information theory _ , seoul , korea , jul .2009 , pp 2477 - 2482 .r. soundararajan and s. vishwanath , `` hybrid coding for gaussian broadcast channels with gaussian sources , '' in _ proc .ieee international symposium on information theory _ , seoul , korea , jul .2009 , pp .2790 - 2794 . | we provide a complete characterization of the achievable distortion region for the problem of sending a bivariate gaussian source over bandwidth - matched gaussian broadcast channels , where each receiver is interested in only one component of the source . this setting naturally generalizes the simple single gaussian source bandwidth - matched broadcast problem for which the uncoded scheme is known to be optimal . we show that a hybrid scheme can achieve the optimum for the bivariate case , but neither an uncoded scheme alone nor a separation - based scheme alone is sufficient . we further show that in this joint source channel coding setting , the gaussian setting is the worst scenario among the sources and channel noises with the same covariances . gaussian source , joint source - channel coding , squared error distortion measure . |
more than 50 years ago john von neumann traced the first parallels between the architecture of the computer and the brain . since that time computers have become an unavoidable element of modern society , forming a computer network connected by the world wide web ( www ) . the www demonstrates a continuous growth , approaching web pages spread all over the world ( see e.g. http://www.worldwidewebsize.com/ ) . this number starts to become even larger than the number of neurons in the brain . each neuron can be viewed as an independent processing unit connected with about other neurons by synaptic links ( see e.g. ) . about 20% of these links are unidirectional and hence the brain can be viewed as a directed network of neuron links . at present , more and more experimental information about neurons and their links becomes available , and the investigation of properties of neuronal networks attracts an active interest of many groups ( see e.g. ) . the www is also a directed network where a site points to a site but not necessarily vice versa . the classification of web sites and information retrieval from such an enormous data base as the www becomes a formidable challenge of modern society , where search engines like google are used by internet users in everyday life . an efficient way to classify and extract the information from the www is based on the pagerank algorithm ( pra ) , proposed by brin and page in 1998 , which forms the basis of the google search engine . the pra is based on the construction of the google matrix which can be written as ( see e.g. for details ) : here the matrix is constructed from the adjacency matrix of directed network links between nodes so that and the elements of columns with only zero elements are replaced by . the second term in the r.h.s . of ( [ eq1 ] ) describes a finite probability for a www surfer to jump at random to any node so that the matrix elements . this term with the google damping factor stabilizes the convergence of the pra , introducing a gap between the maximal eigenvalue and other eigenvalues . as a result the first eigenvalue has and the second one has . usually the google search uses the value . by the construction so that the asymmetric matrix belongs to the class of perron - frobenius operators which naturally appear in ergodic theory and dynamical systems with hamiltonian or dissipative dynamics . the right eigenvector at is the pagerank vector with positive elements and . the classification of nodes in the decreasing order of values is used to classify the importance of www nodes , as it is described in more detail in . the pagerank can be efficiently obtained by a multiplication of a random vector by , which is of low cost since on average there are only about ten nonzero elements in a typical line of the matrix for the www . this procedure converges rapidly to the pagerank . fundamental investigations of the pagerank properties of the www have been performed in the computer science community ( see e.g. ; involvement of physicists is visible , e.g.
, but less pronounced ) . it was established that the pagerank is satisfactorily characterized by an algebraic decay with being the ordering index and ; the number of nodes with the pagerank scales as with the numerical value of the exponent . it is known that such algebraic dependencies appear in various types of scale - free networks . the pagerank classification finds its applications not only for the www but also for the network of article citations in physical review , as it is described in . this shows that the approach based on the google matrix can be applied to very different types of networks . in this work we construct the google matrix for a model of the brain analyzed in . the properties of the spectrum and the eigenstates of are described in the next section ii . the results are discussed in section iii . to construct the google matrix of the brain we use a directed network of links between neurons generated from the brain model . in total there are links in the network . they form _ outgoing _ links and _ ingoing _ links ( ) , so that there are about 200 outlinks ( or ingoing ) per neuron . these numbers include multiple links between certain pairs of neurons ; certain neurons also have links to themselves ( there is one neuron linked only to itself ) . the number of weighted symmetric links is approximately % . due to the existence of multiple links between the same neurons we constructed two matrices based on unweighted and weighted counting of links . in the first case all links from neuron to neuron are counted as one link , in the second case the weight of the link is proportional to the number of links from to . in both cases the sum of elements in one column is normalized to unity . the distributions of ingoing ( ) and outgoing ( ) links are shown in fig . [ fig1 ] . the weighted distribution of ingoing links has a pronounced peaked structure corresponding to different regions of the brain model considered in . we note that the distribution of links is not of scale - free type . the dependence of the pagerank on is shown in fig . [ fig2 ] . for almost all probability is concentrated on one neuron . this is the only neuron which is linked only to itself . with the increase of up to the main part of probability is concentrated mainly on about 10 neurons , which approximately corresponds to the number of peaks in the distribution of weighted ingoing links in fig . [ fig1 ] ( bottom left panel ) . at the same time the pagerank has a long tail at large where the probability is practically homogeneous . for the peak of probability at is washed out and the pagerank becomes completely delocalized . we note that a delocalization of the pagerank with appears in the ulam networks describing dynamical systems with dissipation . at the same time the www networks remain stable with respect to variation of , as it is discussed in . recently , for the studies of the procedure call network of the linux kernel it was proposed to study the properties of the importance - pagerank , which is given by the eigenvector at for the google matrix constructed from the inverted links of the original adjacency matrix .
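The construction described above (a column-stochastic matrix with dangling columns replaced by 1/N, a damping factor alpha, the PageRank obtained by repeated multiplication, and the importance rank obtained from the network with inverted links) can be sketched in a few lines. The toy adjacency matrix below is hypothetical and only illustrates the procedure; for a network the size of the neuronal model one would use sparse storage rather than dense arrays.

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Build G = alpha*S + (1-alpha)/N from a 0/1 adjacency matrix with
    adj[i, j] = 1 if node j points to node i; columns of S sum to one and
    dangling (all-zero) columns are replaced by 1/N."""
    N = adj.shape[0]
    S = adj.astype(float)
    col_sums = S.sum(axis=0)
    for j in range(N):
        if col_sums[j] == 0:
            S[:, j] = 1.0 / N            # dangling node: jump anywhere
        else:
            S[:, j] /= col_sums[j]
    return alpha * S + (1.0 - alpha) / N

def pagerank(G, tol=1e-10, max_iter=1000):
    N = G.shape[0]
    p = np.ones(N) / N
    for _ in range(max_iter):
        p_new = G @ p                    # one matrix-vector multiplication per step
        p_new /= p_new.sum()
        if np.abs(p_new - p).sum() < tol:
            break
        p = p_new
    return p

# small hypothetical directed network, adj[i, j] = 1 if j -> i
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 1, 0, 0],
                [0, 0, 1, 0]])
p = pagerank(google_matrix(adj, alpha=0.85))
p_star = pagerank(google_matrix(adj.T, alpha=0.85))   # rank built from inverted links
print("pagerank:", np.round(p, 3))
print("importance rank from inverted links:", np.round(p_star, 3))
```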
it was argued that can give additional information about certain important nodes . our results for are shown in panels ( c , d ) of fig . they show that is practically delocalized and flat for all used values of . this indicates that all nodes have practically equal importance . the popularity - importance correlator introduced in and defined as is rather small ( at and at for unweighted , weighted links respectively ) . this shows that there are no correlations between and in our neuronal network , which is similar to the linux kernel case . the spectrum and the right eigenvectors of the google matrix of the brain are defined by the equation the spectrum of is complex and is shown in fig . the color of points is chosen to be proportional to the participation ratio ( par ) defined as . this quantity determines an effective number of sites populated by an eigenstate ; it is often used to characterize the localization - delocalization transition in quantum solid - state systems with disorder ( see e.g. ) . the spectrum has eigenvalues with being close to unity so that there is no gap in the spectrum of in the vicinity of ( we remind that the second term in the r.h.s . of ( [ eq1 ] ) transfers to keeping only one ) . this is different from the spectrum of random scale - free networks characterized by a large gap in the spectrum of . compared to the spectra of the university www networks studied in , the spectrum of in fig . [ fig3 ] is flatter , being significantly compressed toward the real axis . in this respect our neuronal network has a certain similarity with the spectra of vocabulary networks analyzed in ( see fig . 1 there ) . at the same time the spectrum of the matrix of the brain has visible structures in the eigenvalue distribution in the complex plane of , while the vocabulary networks are characterized by a structureless spectrum . the spectrum of fig . [ fig3 ] has global properties similar to those of the ulam networks considered in . it is interesting to note that the spectra of the unweighted and weighted networks of the brain have a similar structure . this supports the view of structural stability of the spectrum of the matrix . it is useful to determine the relaxation rate of eigenstates by the relation . the dependence of the density of states on is shown in fig . [ fig4 ] ( the density is normalized to unity so that corresponds to states ) . the distribution in has a pronounced peak at , while the density of states at small is relatively small ( this is also seen in fig . [ fig3 ] ) . the comparison of unweighted and weighted links shows the stability of the density distribution with respect to such a modification of links . the dependence of the par on is shown in fig . [ fig5 ] ( we note that except for the pagerank is independent of due to the unity rank of matrix , see e.g.
) . the pagerank value of at is very large , being more than half of the total number of neurons . it is clear that this corresponds to a delocalized state . the eigenstates with have relatively small , being close to a localized domain , while eigenstates with have , being delocalized on the main part of the network ; the states with enter the localized domain . for the par is close to . taking as a criterion that the delocalization takes place when , we obtain that the pagerank becomes delocalized at ( see data of figs . [ fig2],[fig6 ] ) . the global dependence of the par of the pagerank on the parameter is shown in fig . [ fig6 ] , with a sharp delocalization of for . of course , the above analysis should be considered as an approximate one since the localization properties should be studied in dependence on the system size , while we consider only one size of . finally , following the approach proposed in , we show in fig . [ fig7 ] the distribution of pagerank values and for all sites . such kinds of distributions can be rather useful in determining sites which have maximal values of and at the same time . however , a detailed analysis of the properties of this distribution would require networks with a larger size , where statistical fluctuations are smaller . in this work we studied the properties of the google matrix of a neuronal network of the brain model discussed in . for this network of neurons we found that the spectrum of the google matrix has a gapless structure at , demonstrating certain similarities with the spectra of university www networks and vocabulary networks studied in . at the same time our neuronal network shows signs of a delocalization transition of the pagerank at the google damping factor , which was absent in the networks studied in . a similar transition in was detected in the ulam networks generated by dissipative dynamical maps . we attribute the appearance of such a delocalization transition to a large number of links per neuron ( 200 ) , which is by a factor of 10 larger than in the www networks ( 20 ) .
of course , our studies have certain limitations since we considered only a fixed size neuronal network and since this network is taken from a model system of brain analysed in .another weak point is that we do not consider the dynamical properties of the network which are probably more important for practical applications .nevertheless , the spectral properties of matrix can be rather useful .indeed , the gapless spectrum of shows that long living excitations can exist in our neuronal network .such relaxation modes with small rates can be the origin of long living oscillations found in numerical simulations .it is quite possible that the properties of spectra of can help to understand in a better way rapid relaxation processes and those with long relaxation times .we conjecture that the rapid relaxation modes correspond to relaxation of local groups of neurons while long living modes can represent relaxation of collective modes representing dynamics of human thoughts .the dynamics of such collective modes can contain significant elements of chaotic dynamics as it was discussed in the frame of the concept of creating chaos in .it is possible that the brain effectively implements dynamics described by the evolution equation which without perturbations converges to the steady - state described by the pagerank ( which may be linked with a sleeping phase ) .external perturbations give excitations of other eigenmodes of discussed here .the evolution of these excitations will be significantly affected by the spectrum of .further development of the google matrix approach to the brain looks to us to be rather promising .for example , a detection of isolated communities and personalized pagerank , represented by other types of matrix in ( [ eq1 ] ) , is under active investigation in the computer science community ( see e.g. ) .such type of problems can find their applications for detection of specific quasi - isolated neuronal networks of brain .the usage of real neuronal networks , similar to those studied in , in combination with the google matrix approach can allow to discover new properties of processes in the brain .the development of parallels between the www and neuronal networks will give new progress of the ideas of john von neumann .99 j. von neumann , _ the computer and the brain _ , new haven , ct , yale uiv . press ( 1958 ) .hoppensteadt and e.m .izhikevich , _ weakly connected neural networks _ , springer - verlag , n.y .izhikevich , _ dynamical systems in neuroscience : the geometry of excitability and bursting _ , the mit press , cambridge , ma ( 2007 ) .o. sporns , _ brain connectivity _ , scholarpedia 2(10 ) ( 2007 ) 4695 . d.j . felleman and d.c .van essen , cereb .cortex 1 ( 1991 ) 1 .laughlin and t.j .sejnowski , science 301 ( 2003 ) 1870 .o. sporns , d.r .chialvo , m. kaiser , and c.c .hilgetag , trends cognitive sci .8 , ( 2004 ) 418 .honey , r. ktter , m. breakspear , and o.sporns , pnas 104 ( 2007 ) 10240 .m. kaiser , phil .r. soc . a 365 ( 2007 ) 3033 .p. hagmann , l. cammoun , x. gigandet , r. meuli , c.j .honey , v.j .weeden , and o.sporns , plos biology 6 ( 2008 ) 1479 .izhikevich and g.m .edelman , pnas 105 ( 2008 ) 3593 .q. wen , a. stepanyants , g.n .elston , a.y .grosberg , and d.b .chklovskii , pnas 106 ( 2009 ) 12536 .s. cocco , s. leibler , and r. monasson , pnas 106 ( 2009 ) 14058 . s. brin and l. page , computer networks and isdn systems * 30 * ( 1998 ) 107 .a. m. langville and c. d. 
meyer , _ google s pagerank and beyond : the science of search engine rankings _ , princeton university press ( princeton , 2006 ) ; d. austin , ams feature columns ( 2008 ) available at www.ams.org/featurecolumn/archive/pagerank.html i.p .cornfeld , s.v .fomin , and y. g. sinai , _ ergodic theory _ , springer ,m. brin and g. stuck , _ introduction to dynamical systems _, cambridge univ . press , cambridge ,uk ( 2002 ) .d. donato , l. laura , s. leonardi and s. millozzi , eur .j. b 38 ( 2004 ) 239 ; g. pandurangan , p. raghavan and e. upfal , internet math . 3 ( 2005 ) 1 .p. boldi , m. santini , and s. vigna , in _ proceedings of the 14th international conference on world wide web _ , a. ellis and t. hagino ( eds . ) , acm press , new york p.557 ( 2005 ) ; s. vigna , * ibid . * p.976 .k. avrachenkov and d. lebedev , internet math . 3 ( 2006 ) 207 .k. avrachenkov , n. litvak , and k.s .pham , in _ algorithms and models for the web - graph : 5th international workshop , waw 2007 san diego , ca , proceedings _ , a. bonato and f.r.k .chung ( eds . ) , springer - verlag , berlin , lecture notes computer sci . 4863 ( 2007 ) 16 .n. litvak , w. r. w. scheinhardt , and y. volkovich , internet math . 4 ( 2007 ) 175 .k. avrachenkov , d. donato and n. litvak ( eds . ) , _ algorithms and models for the web - graph : 6th international workshop , waw 2009 barcelona , proceedings _ , springer - verlag , berlin , lecture notes computer sci .5427 , springer , berlin ( 2009 ) .n. perra and s. fortunato , phys . rev .e 78 ( 2008 ) 036107 .dorogovtsev and j.f.f .mendes , _ evolution of networks _ , oxford univ .press , oxford ( 2003 ) .p. chen , h. xie , s. maslov , and s. redner , j. infometrics 1 ( 2007 ) 8 .f. radicchi , s. fortunato , b. markines , and a. vespignani , phys .e * 80 * ( 2009 ) 056103 .e.m izhikevich , private communication , august ( 2009 ) : the links between neurons have been generated by e.m .izhikevich on the basis of the brain model described in ; the links are available at quantware library , k. frahm and d.shepelyansky ( eds . ) section qnr15 at www.quantware.ups-tlse.fr/qwlib d.l .shepelyansky and o.v.zhirov , phys . rev .e 81 ( 2010 ) 036213 .l. ermann and d.l .shepelyansky , phys .e 81 ( 2010 ) 036221 .b. georgeot , o. giraud and d.l .shepelyansky , preprint arxiv:1002.3342[cs.ir ] ( 2010 ) .chepelianskii , arxiv:1003.5455[cs.se ] ( 2010 ) .f. evers and a.d .mirlin , rev .mod . phys .80 ( 2008 ) 1355 .o. giraud , b. georgeot and d. l. shepelyansky , phys .e 80 ( 2009 ) 026107 .b.v.chirikov , _ creating chaos and the life _ , preprint arxiv : physics/0503072 ( 2005 ) . | we apply the approach of the google matrix , used in computer science and world wide web , to description of properties of neuronal networks . the google matrix is constructed on the basis of neuronal network of a brain model discussed in pnas * 105 * , 3593 ( 2008 ) . we show that the spectrum of eigenvalues of has a gapless structure with long living relaxation modes . the pagerank of the network becomes delocalized for certain values of the google damping factor . the properties of other eigenstates are also analyzed . we discuss further parallels and similarities between the world wide web and neuronal networks . * keywords : * neuronal networks , world wide web ; google matrix ; pagerank |
the emergence of the internet has changed the way of communication radically and , especially , the development of web 2.0 applications has led to some extremely popular online social sites , such as facebook , flickr , youtube , twitter , livejournal , orkut and xiaonei .these sites provide a powerful means of sharing information , finding content and organizing contacts for ordinary people .users can consolidate their existing relationships in the real world through publishing blogs , photos , messages and even states .they also have a chance to communicate with strangers that they have never met on the other end of the world .based on the development and prevalence of the internet , online social sites have reformed the structure of the traditional social network to a new complex system , called the online social network , which attracts a lot of research interests recently as a new social media .recent works about online social networks mainly focus on probing and collecting network topologies , structural analysis , user interactions and content generating patterns . at the same time, some concepts and methods of traditional social networks have also been introduced into current researches : the strength of ties is one of them .the strength of ties was first proposed by granovetter in his landmark paper in 1973 , in which he thought the strength of ties could be measured by the relative overlap of the neighborhood of two nodes in the network .it was interesting that different from the common sense , he found that loose acquaintances , known as weak ties , were helpful in finding a new job .this novel finding has become a hot topic of research for decades . in ,a predictive model was proposed to map social media data to the tie strength . in ,onnela et al . gave a simple but quantified definition to the overlap of neighbors of nodes and as follows : where is the number of common acquaintances , and are the degrees of and , respectively .* in this paper , we define as the strength of the tie between and *. the lower is , the weaker the strength of tie between and is . as a social media ,the core feature of online social networks is the information diffusion .however , the mechanism of the diffusion is different from traditional models , such as susceptible - infected - susceptible ( sis ) , susceptible - infected - recovered ( sir ) and random walk . at the same time, few works have been done to reveal the coupled dynamics of both the structure and the diffusion of online social networks . to meet this critical challenge , in this paper, we aim to investigate the role of weak ties in the information diffusion in online social networks . by monitoring the dynamics of where is the number of connected clusters with nodes , and is the size of the network , a phase transition was found in the mobile communication network during the removal of weak ties first .we find that this phase transition is pervasive in online social networks , which implies that weak ties play a special role in the structure of the network .this interesting finding inspires us to investigate the role of weak ties in the information diffusion . to this end, we propose a model to characterize the mechanism of the information diffusion in online social networks and associate the strength of ties with the process of spread . 
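Because the displayed formula was lost in extraction, the overlap-based tie strength is easiest to convey with a small sketch. It assumes the standard Onnela et al. definition, O_ij = n_ij / [(k_i - 1) + (k_j - 1) - n_ij], with n_ij the number of common acquaintances and k_i, k_j the degrees of the endpoints; the karate-club graph is only a stand-in for the crawled Facebook/YouTube data sets.

    import networkx as nx

    def tie_strength(G, u, v):
        # overlap of neighbourhoods: common acquaintances over all other possible ones
        n_uv = len(set(G[u]) & set(G[v]))
        k_u, k_v = G.degree(u), G.degree(v)
        denom = (k_u - 1) + (k_v - 1) - n_uv
        return n_uv / denom if denom > 0 else 0.0   # dangling dyads get strength 0

    G = nx.karate_club_graph()                      # stand-in for the crawled data sets
    strengths = {(u, v): tie_strength(G, u, v) for u, v in G.edges()}
    weakest = min(strengths, key=strengths.get)     # the weakest tie in this network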
through the simulations on large - scale real - world data sets , we find that selecting weak ties preferentially to republish can not make the information diffuse quickly , while the random selection can .nevertheless , further analysis and experiments show that the coverage of the information will drop substantially during the removal of weak ties even for the random diffusion case .so we conclude that weak ties play a subtle role in the information diffusion in online social networks .we also discuss their potential use for the information diffusion control practices .the rest of this paper is organized as follows .section [ sec : datasets ] introduces the data sets used in this paper . in section[ sec : sroleofweakties ] , we study the structural role of weak ties .the model is proposed in section [ sec : droleofweakties ] , and the role of weak ties in the information diffusion is then investigated .section [ sec : dcontrol ] discusses the possible uses of weak ties in the control of the virus spread and the private information diffusion .finally , we give a brief summary in section [ sec : summary ] .we use two data sets in this paper , i.e. , ` youtube ` and ` facebook ` in new orleans . `youtube ` is a famous video sharing site , and ` facebook ` is the most popular online social site which allows users to create friendships with other users , publish blogs , upload photos , send messages , and update their current states on their profile pages .all these sites have some privacy control schemes which control the access to the shared contents .the data set of ` youtube ` includes user - to - user links crawled from ` youtube ` in 2007 .the data set of ` facebook ` contains a list of all the user - to - user links crawled from the new orleans regional network in ` facebook ` during december 29th , 2008 and january 3rd , 2009 . in both two data sets, we treat the links as undirected . in these data sets ,each node represents a user , while a tie between two nodes means there is a friendship between two users . in general , creating a friendship between two users always needs mutual permission .so we can formalize each data set as an undirected graph , where is the set of nodes and is the set of ties .we use to denote the size of the network , and to denote the size of ties .some characteristics of the data sets are shown in table [ tab : datasets ] .the of the strength of ties is shown in fig .[ fig : cdfofstrength ] ..[tab : datasets]data sets [ cols="^,^,^",options="header " , ] ( color online ) of the strength of ties . ] as we know , online social networks are divided into two types : knowledge - sharing oriented and networking oriented . for the data sets we use , ` youtube ` belongs to the former , while ` facebook ` belongs to the latter , both of which are scale - free networks .+ in this section , we study the structural role of weak ties . 
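The tie-removal experiment of the next section can be sketched as follows: remove ties in order of increasing (or decreasing) strength and, at each step, record the relative size of the giant connected cluster together with a percolation-style average cluster size over the remaining non-giant clusters. The exact normalisation of the cluster-size statistic used in the paper is not recoverable from the extraction, so the sum of n_s s^2 / N convention used below is an assumption; tie_strength is the helper sketched above.

    import networkx as nx

    def removal_curve(G, strengths, weak_first=True, steps=50):
        # order ties by strength, weakest (or strongest) first, and remove them in batches
        edges = sorted(G.edges(), key=lambda e: strengths[e], reverse=not weak_first)
        H, n = G.copy(), G.number_of_nodes()
        batch = max(1, len(edges) // steps)
        curve = []
        for i in range(0, len(edges), batch):
            H.remove_edges_from(edges[i:i + batch])
            sizes = sorted((len(c) for c in nx.connected_components(H)), reverse=True)
            giant, rest = sizes[0], sizes[1:]
            s_avg = sum(s * s for s in rest) / n          # assumed <s> convention
            f_removed = min(i + batch, len(edges)) / len(edges)
            curve.append((f_removed, giant / n, s_avg))
        return curve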
as shown in fig .[ fig : facebook_average_s ] and fig .[ fig : youtube_average_s ] , we find a phase transition ( characterized by ) similar to the one in in online social networks during the removal of weak ties first .this phase transition , however , disappears if we remove the strong ties first .furthermore , it is also found in fig .[ fig : facebook_gcc ] and fig .[ fig : youtube_gcc ] that the relative size of giant connected cluster ( gcc ) , denoted by , shows different dynamics between the removals of weak ties first and strong ties first .we denote the critical fractions of the removed ties at the phase transition point by .it is interesting to note that for ` youtube ` and for ` facebook ` when reaches the submit , which are very close to the case when . in the percolation theory, the existence of the above phase transition means that the network is collapsed , while the network is just shrinking if there is no phase transition when removing the ties .so the above experiments tell us that weak ties play a special role in the structure of online social networks , which is different from the one strong ties play .in fact , they act as the important bridges that connect isolated communities . inwhat follows , we build a model that associates the weak ties with the information diffusion , to discuss the coupled dynamics of the structure and the information diffusion .the information diffusing in online social networks includes blogs , photos , messages , comments , multimedia files , states , etc . because of the privacy control and other features of online social sites , the mechanism of the information diffusion in online social networks is different from traditional models , such as sis , sir and random walk .we start by discussing the procedure of information diffusion in online social networks .the procedure of the diffusion in online social networks can be briefly described as follows : * the user publishes the information , which may be a photo , a blog , etc . *friends of will know when they access the profile page of or get some direct notifications from the online social site .we call this scheme as _ push_. * some friends of , may be one , many or none , will comment , cite or reprint , because they think that it is interesting , funny or important. we call this behavior as _ republish_. * the above steps will be repeated with replaced by each of those who have republished . it is easy to find that the key feature of the information diffusion in online social networks is that the information is pushed actively by the site and only part of friends will republish it . take ` facebook ` as an example , in which _ news feed _ and _ live feed _ are two significant and popular features . news feed constantly updates a user s profile page to list all his or her friends news in ` facebook ` .the news includes conversations taking place between the walls of the user s friends , changes of profile pages , events , and so on .live feed facilitates the users to access the details of the contents updated by news feed .it is updated in a real - time manner after the user s login to the web . 
in fact , news feed aggregates the most interesting contents that a user s friends are posting , while live feed shows to the user all the actions his or her friends are taking in ` facebook ` .the feature of pushing and republishing we have discussed above is indeed more obvious in ` twitter ` , in which all the words you post will be pushed immediately to your followers terminals , including a pc or even a mobile phone , and then they can republish it if they like . however , in real - world situations , the trace of the information is hard to collect , especially for large - scale networks .so it is quite reasonable to build a model to characterize the mechanism and simulate the diffusion .based on the procedure described above , we propose a simple model , where is the navigating factor and represents the strength of the information . in this model , determines how to select neighbors to republish the information , while $ ] is a physical character of the information , which describes how interesting , novel , important , funny or resounding it is .the model is defined as follows : * step 1 : suppose there comes information .set the state of all the nodes in to .the state of a node means is not known to it , otherwise the state is .* step 2 : randomly select a seed node from the network .the degree of is .set to .it publishes the information with strength equal to at time . *step 3 : increase the time by one unit , i.e. , .set each node in the neighborhood of to .add to the set of nodes that have published , denoted by .so . *step 4 : calculate the number of nodes that will republish in the next round : * step 5 : select one node from the neighborhood of with the probability if is not in , then add it to the set of nodes that will republish in the next round , denoted by .so .repeat this step for times .* step 6 : for each node in , execute from step 3 to step 5 recursively until is null or all the nodes in have known .it is easy to find from eq .( [ eq : rounds ] ) that during the diffusion , the number of republishing nodes selected from the neighborhood of is decided by and .it is consistent with the real situation that the user with more friends tends to attract more other users to visit and republish the information .the more interesting or important the information is , the higher the chance that it will be republished .we use parameter in eq .( [ eq : probability ] ) to associate the diffusion with the strength of the ties , which means different values of will lead to different selections of ties as paths for republishing information in the next round .in fact , when , weak ties are to be selected preferentially as paths for republishing .the selection is random when , and the strong ties will be selected with higher priority when .we define the fraction of nodes with the state as the coverage of , denoted by . since it is found that only 1 - 2% friends will republish the information in flickr , we let in the simulations .[ fig : id_visited_t ] shows the numeric experimental results on ` facebook ` and ` youtube ` networks .as can be seen , reaches the maximum when . in other words ,compared with weak or strong ties , selecting the republishing nodes randomly from the neighborhood will make the information spread faster and wider .this is indeed out of our expectation , since previous studies show that weak ties can facilitate the information diffusion in social networks . to understand this, we further explore the process of the information diffusion in details . 
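The simulation below mirrors Steps 1-6 informally. Since the displayed equations were stripped in extraction, two placeholders are assumed: the number of republishers drawn from a publisher's neighbourhood is taken as the ceiling of beta times its degree, and a neighbour is selected with probability proportional to the tie strength raised to the power alpha (negative alpha favours weak ties, alpha = 0 is random, positive alpha favours strong ties). Both choices are illustrative stand-ins for eqs. ([eq:rounds]) and ([eq:probability]), not the paper's exact expressions; tie_strength and strengths come from the earlier sketch.

    import math, random
    import networkx as nx

    def diffuse(G, strengths, alpha=0.0, beta=0.02, seed=None, rng=random):
        # Steps 1-2: everyone is unaware, a random seed publishes at t = 0
        n = G.number_of_nodes()
        v0 = seed if seed is not None else rng.choice(list(G.nodes()))
        known, published, frontier, t = {v0}, {v0}, [v0], 0
        history = [(t, len(known) / n)]
        while frontier:
            t += 1
            next_frontier = []
            for u in frontier:
                nbrs = list(G[u])
                if not nbrs:
                    continue
                known.update(nbrs)                                   # Step 3: push to all friends
                r = min(len(nbrs), math.ceil(beta * G.degree(u)))    # Step 4 (assumed form)
                w = [max(strengths.get((u, x), strengths.get((x, u), 0.0)), 1e-12) ** alpha
                     for x in nbrs]                                  # Step 5 (assumed form)
                total = sum(w)
                for _ in range(r):
                    pick, acc = rng.random() * total, 0.0
                    for x, wx in zip(nbrs, w):
                        acc += wx
                        if acc >= pick:
                            if x not in published:
                                published.add(x)
                                next_frontier.append(x)
                            break
            frontier = next_frontier
            history.append((t, len(known) / n))
        return history          # list of (t, coverage F(t))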
by eq .( [ eq : tiestrength ] ) , we can easily have assume that as increases , increases proportionately , i.e. , .then given a node and its neighbor node , we have , and vice versa .this implies that a neighbor node of tends to have a higher degree if it has a stronger strength of ties with .therefore , when selecting the republishing nodes for the next round from the neighborhood , different will select nodes with different degrees preferentially . for example , when , the weak ties will be selected with higher priority , which means that the nodes with lower degrees will be selected preferentially . however , it is easy to learn from eq .( [ eq : rounds ] ) that , for the node with lower degree , the republishing nodes selected from its neighborhood will be less , which will eventually reduce the total number of republishing nodes and impede the information from further spreading in the network .as to the case of selecting strong ties preferentially , although it will tend to select the nodes with higher degrees to republish , the local trapping will limit the scope of selected nodes into some local areas and make it harder to propagate the information further in the network . to validate the analysis above, we also observe the fraction of the nodes that have published during the diffusion , denoted by . as shown in fig .[ fig : fpub ] , increases more slowly when , and the time - varying properties of are similar to those of in fig .[ fig : id_visited_t ] for different values , respectively .we also monitor the fraction of the nodes that have published in each hop away from the source node , denoted by . as shown in fig .[ fig : flocal ] , when , decreases faster than other cases , in particular the case .it means when , the number of republishing nodes selected from the neighborhood decreases sharply as the information spreading far away from the source , which agrees with our former analysis . as for the case of , increases more and more slowly during the diffusion , because the nodes selected to republish are trapped in some local clusters .in other words , it is hard to find some new nodes to republish the information to the outer space .based on the above results , we can conclude that selecting weak ties preferentially as the path to republish information can not make it diffuse faster .however , this does not mean that weak ties play a trivial role in the information diffusion in online social networks , especially when we recall its special role in the network structure in section [ sec : sroleofweakties ] .let in , we compare the variation of under the situation of removing weak ties first with that of removing strong ties first . as shown in fig .[ fig : remove_ties ] , for the case of removing weak ties first , the coverage of the information decreases rapidly , e.g. , from 0.8 to 0.4 in ` facebook ` when the fraction of removed weak ties reaches about 0.4 .this implies that weak ties are indeed crucial for the coverage of information diffusion in online social networks .to further study the effect of , we conduct experiments with different values , as shown in fig . [fig : coverage_beta ] . 
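A rough way to reproduce the comparison behind fig. [fig:remove_ties] is to knock out a fraction f of ties, weakest or strongest first, and average the final coverage over many runs of the diffusion sketch; diffuse and tie_strength are the hypothetical helpers introduced above, and alpha = 0 corresponds to the random-selection case used in the figure.

    def coverage_after_removal(G, strengths, f, weak_first=True, alpha=0.0, beta=0.02, runs=50):
        edges = sorted(G.edges(), key=lambda e: strengths[e], reverse=not weak_first)
        H = G.copy()
        H.remove_edges_from(edges[:int(f * len(edges))])
        return sum(diffuse(H, strengths, alpha, beta)[-1][1] for _ in range(runs)) / runs

    # e.g. coverage_after_removal(G, strengths, 0.4, weak_first=True)
    #  vs. coverage_after_removal(G, strengths, 0.4, weak_first=False)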
as can be seen ,no matter what the value is , random selection ( ) is still the fastest mode for the information diffusion , although the gap tends to shrink with higher values .it is also shown that when grows , will also rise for all values .that is , the greater the strength of the information is , the more nodes will be attracted to republish it , and the wider it will spread in the network .until now we can conclude that weak ties play a subtle role in the information diffusion in online social networks .on one hand , they are bridges that connect isolated communities and break through the trapping of information in local areas . on the other hand , selecting weak ties preferentially as the path of republishing can not make the information diffuse faster and wider .the growing popularity of the online social networks does not mean that it is safe and reliable .on the contrary , the virus spread and the private information diffusion have made it become a massive headache for it administrators and users .for example , `` kooface '' is a trojan worm on facebook , which spreads by leaving a comment on profile pages of the victim s friends to trap a click on the malicious link .about 63% of system administrators worry that their employees will share too much private information online .so as time goes by , it becomes more and more important and urgent to control the virus spread and the private information diffusion in online social networks . in the light of this, we can make use of the weak ties for the information diffusion control .that is , in the real - world practices , we can assume that the behavior of republishing information is random , i.e. , . then according to the results in fig .[ fig : remove_ties ] , we can make the virus or the private information trapped in local communities by removing weak ties and stop them from diffusing further in the network .online social sites have become one of the most popular web 2.0 applications in the internet . as a new social media ,the core feature of online social networks is the information diffusion .we investigate the coupled dynamics of the structure and the information diffusion in the view of weak ties .different from the recent work , we do not focus on the trace collection and analysis of the real data flowing in the network . instead , inspired by , we propose a model for online social networks and take a closer look at the role of weak ties in the diffusion .we find that the phase transition found in the mobile communication network exists pervasively in online social networks , which means that the weak ties play a special role in the network structure .then we propose a new model , which associates the strength of ties with the diffusion , to simulate how the information spreads in online social networks .contrary to our expectation , selecting weak ties preferentially to republish can not facilitate the information diffusion in the network , while the random selection can . 
through extra analysis and experiments, we find that when , the nodes with lower degrees are preferentially selected for republishing , which will limit the scope of the distribution of republishing nodes in the following rounds .however , even for the random selection case , removal of the weak tie can make the coverage of the information decreases sharply , which is consistent with its special role in the structure .so we conclude that weak ties play a subtle role in the information diffusion in online social networks .on one hand , they play a role of bridges , which connect isolated communities and break through the trapping of information in local areas . on the other hand , selecting weak ties preferentially to republish can not make the information diffuse faster in the network . for potential applications ,we think that the weak ties might be of use in the control of the virus spread and the private information diffusion .this work was supported by national 973 program of china ( grant no.2005cb321901 ) and the fund of the state key laboratory of software development environment ( sklsde-2008zx-03 ) .the second author was supported partially by national natural science foundation of china ( grant no .70901002 and 90924020 ) and beihang innovation platform funding ( grant .no ymf-10 - 04 - 024 ) . | as a social media , online social networks play a vital role in the social information diffusion . however , due to its unique complexity , the mechanism of the diffusion in online social networks is different from the ones in other types of networks and remains unclear to us . meanwhile , few works have been done to reveal the coupled dynamics of both the structure and the diffusion of online social networks . to this end , in this paper , we propose a model to investigate how the structure is coupled with the diffusion in online social networks from the view of weak ties . through numerical experiments on large - scale online social networks , we find that in contrast to some previous research results , selecting weak ties preferentially to republish can not make the information diffuse quickly , while random selection can achieve this goal . however , when we remove the weak ties gradually , the coverage of the information will drop sharply even in the case of random selection . we also give a reasonable explanation for this by extra analysis and experiments . finally , we conclude that weak ties play a subtle role in the information diffusion in online social networks . on one hand , they act as bridges to connect isolated local communities together and break through the local trapping of the information . on the other hand , selecting them as preferential paths to republish can not help the information spread further in the network . as a result , weak ties might be of use in the control of the virus spread and the private information diffusion in real - world applications . |
mesoscopic analysis methods are among the most valuable tools available to applied network scientists and theorists alike .their aim is to identify regularities in the structure of complex networks , thereby allowing for a better understanding of their function , their structure , their evolution , and of the dynamics they support .community detection is perhaps the best - known method of all , but it is certainly not the only one of its kind .it has been shown , for example , that the separation of nodes in a core and a periphery occurs in many empirical networks , and that this separation gives rise to more exotic mesoscopic patterns such as overlapping communities .this is but an example there exist multitudes of decompositions in structures other than communities that explain the shape of networks both clearly and succinctly .the stochastic block model ( sbm ) has proven to be versatile and principled in uncovering these patterns . according to this simple generative model ,the nodes of a network are partitioned in blocks ( the _ planted partition _ ) , and an edge connects two nodes with a probability that depends on the partition .the sbm can be used in any of two directions : either to generate random networks with a planted mesoscopic structure , or to infer the hidden mesoscopic organization of real complex networks , by fitting the model to network datasets perhaps its most useful application .stochastic block models offer a number of advantages over other mesoscopic pattern detection methods .one , there is no requirement that nodes in a block be densely connected , meaning that blocks are much more general objects than communities .two , the sound statistical principles underlying the sbm naturally solve many hard problems that arise in network mesoscopic analysis ; this includes the notoriously challenging problem of determining the optimal number of communities in a network , or of selecting among the many possible descriptions of a network .another consequence of the statistical formulation of the sbm is that one can rigorously investigate its limitations .it is now known , for example , that the sbm admits a _ resolution limit _ akin to the limit that arises in modularity based detection method .the limitations that have attracted the most attention , however , are the _ detectability limit _ and the closely related concept of _ consistency limit _ .the sbm is said to be detectable for some parameters if an algorithm can construct a partition correlated with the planted partition , using no information other than the structure of a single infinitely large instance of the model .it is said to be consistent if one can _ exactly _ recover the planted partition . therefore , consistency begets detectability , but not the other way around .understanding when and why consistency ( or detectability ) can be expected is important , since one can not trust the partitions extracted by sbm if it operates in a regime where it is not consistent ( or detectable ) . 
due to rapid developments over the past few years, the locations of the boundaries between the different levels of detectability are now known for multiple variants of the sbm , in the limit of infinite network sizes .if the average degree scales at least logarithmically with the number of nodes , then the sbm is consistent , unless the constant multiplicative factor is too small , in which cas the sbm is then detectable , but not consistent .if the average degree scales slower than logarithmically , then the sbm is at risk of entering an _ undetectable _ phase where no information on the planted partition can be recovered from the network structure .this happens if the average degree is a sufficiently small constant independent of the number of nodes .these asymptotic results are , without a doubt , extremely useful .many efficient algorithms have been developed to extract information out of hardly consistent instances .striking connections between the sbm and other stochastic processes have been established in the quest to bound the undetectable regime from below .but real networks are not infinite objects .thus , many of the findings of these asymptotic theories do not carry over to real networks one can only _ assume _ that the asymptotic derivations are robust enough to inform us on finite cases .the objective of our contribution is to investigate detectability in _finite _ networks generated by the sbm .the remainder of the paper is organized as follows .we begin by formally introducing the sbm and the necessary background in sec .[ section : stochastic_block_model ] .we use this section to briefly review important notions , including inference ( sec .[ subsection : stochastic_block_model - inference ] ) , as well as the consistency and detectability of the infinite sbm ( sec .[ subsection : stochastic_block_model - related_work ] ) . in sec . [ section : finite_limit ] , we present a necessary condition for detectability , and show that it is always met , on average , by finite instances of the sbm .we then establish the existence of a large equivalence class with respect to this notion of average detectability . in sec .[ section : eta - detectability ] , we introduce the related concept of and investigate the complete detectability distribution , beyond its average . in sec . [section : case_study ] , we apply the perfectly general framework of secs . [ section : finite_limit][section : eta - detectability ] to a constrained variant of the sbm : the general modular graph model of ref .the results of this section hold for a broad range of models , since the general modular graphs encompass the symmetric sbm , the planted coloring model and many other models as special cases .we gather concluding remarks and open problems in sec .[ section : discussion ] .two appendices follow .the first investigates the interplay between noise and our notion of average detectability ( appendix [ appendix : noisy_sbm ] ) ; the second establishes a connection between our framework and random matrix theory ( appendix [ appendix : connection_with_rmt ] ) .the stochastic block model is formally defined as follows : begin by partitioning a set of nodes in blocks of fixed sizes , with .denote this partition by , where is the set of nodes in the ^th^ block . then , connect the nodes in block to the nodes in block with probability . 
in other words, for each pair of nodes , set the element of the adjacency matrix to 1 with probability and to 0 otherwise , where is the block of .note that for the sake of clarity , we will obtain all of our results for simple graphs , where edges are undirected and self - loops ( edges connecting a node to itself ) are forbidden .this implies that and that .we will think of this process as determining the outcome of a random variable , whose support is the set of all networks of nodes . due to the independence of edges , the probability ( likelihood ) of generating a particular network simply given by the product of bernoulli random variables , i.e. , ^{1-a_{ij}}[p_{\sigma(v_i)\sigma(v_j)}]^{a_{ij}}\;,\ ] ] where is the matrix of connection probabilities of element ( sometimes called the affinity or density matrix ) , and is a shorthand for `` '' .it is easy to check that the probability is properly normalized over the set of all networks of distinguishable nodes . a useful alternative to eq. expresses the likelihood in terms of the number of edges between each pair of blocks rather than as a function of the adjacency matrix .notice how the number of edges appearing between the sets of nodes and is at most equal to each of these edges exists with probability .this implies that is determined by the sum of bernoulli trials of probability , i.e. , that is a binomial variable of parameter and maximum .the probability of generating a particular instance can therefore be written equivalently as where and are jointly determined by the partition and the structure of , and denotes `` '' . having a distribution over all networks of nodes , onecan then compute average values over the ensemble .for example , the average degree of node is given by where is the kronecker delta .the expression correctly depends on the block of ; nodes in different blocks will in general have different average degree .averaging over all nodes , one finds the average degree of the network this global quantity determines the density of the sbm when .the sbm is said to be dense if , i.e. , if is a constant independent of .it is said to be sparse if , i.e. , if goes to zero as . in the latter case , a node has a constant number of connections even in an infinitely large network a feature found in most large scale real networks . for finite instances, it will often be more useful to consider the average density directly .it is defined as the number of edges in , normalized by the number of possible edges , i.e. , where , and the dense versus sparse terminology is then clearer : the density of sparse networks goes to zero as , while dense networks have a nonvanishing density . depending on the elements of , the sbm can generate instances reminiscent of real networks with , e.g. , a community structure ( ) or a core - periphery organization ( and ) .however , the sbm really shines when it is used to infer the organization in blocks of the nodes of real complex networks this was , after all , its original purpose . to have inferred the mesoscopic structure of a network ( with the sbm )essentially means that one has found the partition and density matrix that best describes it . in principle , it is a straightforward task , since one merely needs to ( a ) assign a likelihood to each pair of partition and parameters [ see eqs . ] , then ( b ) search for the most likely pair ( , ) . 
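A minimal generator following this definition is sketched below: nodes are assigned to blocks of fixed sizes and each pair i < j is linked independently with probability given by the density matrix entry of their blocks, yielding a simple undirected graph with no self-loops. The density matrix at the end is purely illustrative.

    import numpy as np

    def sample_sbm(block_sizes, P, rng=None):
        rng = np.random.default_rng(rng)
        sigma = np.repeat(np.arange(len(block_sizes)), block_sizes)   # planted partition
        n = sigma.size
        probs = P[sigma[:, None], sigma[None, :]]                     # p_{sigma(i) sigma(j)}
        A = (rng.random((n, n)) < probs).astype(int)
        A = np.triu(A, 1)                                             # keep i < j: simple graph
        return A + A.T, sigma

    P = np.array([[0.30, 0.05],        # illustrative density matrix
                  [0.05, 0.20]])
    A, sigma = sample_sbm([60, 40], P, rng=1)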
since there are exponentially many possible partitions , this sort of enumerative approach is of little practical use .fortunately , multiple approximate and efficient inference tools have been proposed to circumvent this fundamental problem .they draw on ideas from various fields such as statistical physics , bayesian statistics , spectral theory and graph theory , to name a few , and they all produce accurate results in general. one could expect perfect recovery of the parameters and partition from most of these sophisticated algorithms .this is called the consistency property .it turns out , however , that all known inference algorithms for the sbm , as diverse as they might be , fail on this account .and their designs are not at fault , for there exists an explanation of this generalized failure .consider the density matrix of elements , it is clear that the block partition is irrelevant the generated network can not and will not encode the planted partition .thus , no algorithm will be abe to differentiate the planted partition from other partitions .it is then natural to assume that inference will be hard or impossible if , where is a very small perturbation for networks of nodes ; there is little difference between the uniform case and this perturbed case .in contrast , if the elements of are widely different from one another , e.g. , if and for , then easy recovery should be expected .understanding where lies the transition between these qualitatively different regimes has been the subject of much recent research ( see ref . for a thorough survey ) . as a result, the regimes have been clearly separated as follows : ( i ) the undetectable regime , ( ii ) the detectable ( but not consistent ) regime and ( iii ) the consistent regime ( and detectable ) .it has further been established that the scaling of with respect to determines which regime is reached , in the limit .the sbm is said to be _ strongly consistent _ if its planted partition can be inferred perfectly , with a probability that goes to 1 as ( it is also said to be in the _ exact recovery _another close but weaker definition of consistency asks that the probability of misclassifying a node goes to zero with ( the _ weakly consistent _ or _ almost exact recovery _these regimes prevail when scales at least as fast as , where is a matrix of constants . predictably , most algorithms ( e.g. , those of refs . ) work well in the exact recovery phase regime , since it is the easiest of all . in the _ detectable _ ( but not consistent ) regime , exact recovery is no longer possible ( the _ partial recovery _the reason is simple : through random fluctuations , some nodes that belong to , say , block , end up connecting to other nodes as if they belonged to block .they are thus systematically misclassified , no matter the choice of algorithms .this occurs whenever , or , with a function of that scales slower than .the discovery of the third regime the _undetectable regime_arguably rekindled the study of the fundamental limits of the sbm . in this regime , which occurs when and is more or less uniform , it is impossible to detect a partition that is even correlated with the planted one .that is , one can not classify nodes better than at random , and no information on the planted partition can be extracted .thus , some parametrizations of the sbm are said to lie below the _ detectability limit_. 
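For concreteness, the two-block symmetric case admits simple closed-form thresholds summarizing these regimes. The sketch below quotes the standard asymptotic results from the detectability and consistency literature (the Kesten-Stigum bound with c_in = n p_in, c_out = n p_out, and the exact-recovery threshold with p_in = a log(n)/n, p_out = b log(n)/n for two equal blocks); these are not derived in the present paper, and finite-n behaviour, which is exactly what the following sections question, may differ.

    import numpy as np

    def two_block_ssbm_regime(n, p_in, p_out):
        # Kesten-Stigum detectability bound, with c_in = n*p_in, c_out = n*p_out
        c_in, c_out = n * p_in, n * p_out
        detectable = (c_in - c_out) ** 2 > 2 * (c_in + c_out)
        # exact-recovery (consistency) threshold, writing p_in = a*log(n)/n, p_out = b*log(n)/n
        a, b = n * p_in / np.log(n), n * p_out / np.log(n)
        consistent = (np.sqrt(a) - np.sqrt(b)) ** 2 > 2
        if consistent:
            return "consistent (exact recovery expected)"
        return "detectable but not consistent" if detectable else "undetectable (asymptotically)"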
this limit was first shown to exist using informal arguments from statistical physics and random matrix theory , and has since been rigorously investigated in refs . , among others .there exist many efficient algorithms that are reliable above the detectability limit , for almost all parameterizations of the sbm ; noteworthy examples include belief - propagation , and spectral algorithms based on the ordinary and weighted non - backtracking matrix , as well as matrices of self - avoiding walks .when the number of blocks is too large , most of these algorithms are known to fail well above the information theoretic threshold , i.e. , the point where it can be proven that the partition is detectable .it has been therefore conjectured in ref . , that the undetectable regime is further separated in two phases : a truly undetectable regime , and a regime where detection is not achievable _efficiently_. in the latter , it is thought that one _ can _ find a good partition , but only by enumerating all partitions a task of exponential complexity .detectability and consistency are well separated phases of the infinite stochastic block model .a minute perturbation to the parameters may potentially translate into widely different qualitative behaviors .the picture changes completely when one turns to finite instances of the model .random fluctuations are not smoothed out by limits , and transitions are much less abrupt .we argue that , as a result , one has to account for the complete distribution of networks to properly quantify detectability , i.e. , define detectability for _ network instances _ rather than parameters .this , in turn , commands a different approach that we now introduce .consider a single network , generated by the sbm with some planted partition and matrix , where is a matrix of ones , a constant , and a matrix of ( small ) fluctuations .suppose that the average density equals , and consider a second density matrix for which the block structure has no effect on the generative process .if an observer with _ complete knowledge _ of the generative process and its parameters can not tell which density matrix , or , is the most likely to have generated , then it is clear that _ this particular instance _ does not encode the planted partition . as a result, it will be impossible to detect a partition correlated with the planted partition .this idea can be translated into a mathematical statement by way of a likelihood test . for a sbm of average density ,call the ensemble of erds - rnyi graphs of density the ensemble of _ equivalent random networks_. much like the sbm ( see sec .[ section : stochastic_block_model ] ) , its likelihood is given by the product of the density of independent and identically distributed bernoulli variables , i.e. , where is the total number of edges in .the condition is then the following : given a network generated by the sbm of average density and density matrix , one can detect the planted partition if the sbm is more likely than its equivalent random ensemble of density , i.e. , a similar condition has been used in ref . and to pinpoint the location of the detectability limit in infinite and sparse instances of the sbm .but nothing forbids its application to the finite size problem ; one will see shortly that it serves us well in the context of finite size detectability. 
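The likelihood-ratio condition can be checked mechanically on a given instance: compute the SBM log-likelihood of the observed graph under the planted partition, compute the log-likelihood under the equivalent Erdős-Rényi ensemble of the same average density, and declare the instance detectable in this sense when the former wins. The sketch below uses the edge-count form of the likelihood and assumes every entry of P lies strictly between 0 and 1; sample_sbm from the earlier sketch can be used to generate test instances.

    import numpy as np

    def block_edge_counts(A, sigma, q):
        # m_rs and m_rs^max for every block pair r <= s
        counts = []
        for r in range(q):
            for s in range(r, q):
                ir, js = np.where(sigma == r)[0], np.where(sigma == s)[0]
                if r == s:
                    m = np.triu(A[np.ix_(ir, ir)], 1).sum()
                    m_max = len(ir) * (len(ir) - 1) // 2
                else:
                    m = A[np.ix_(ir, js)].sum()
                    m_max = len(ir) * len(js)
                counts.append((r, s, m, m_max))
        return counts

    def sbm_log_likelihood(A, sigma, P):
        return sum(m * np.log(P[r, s]) + (m_max - m) * np.log(1 - P[r, s])
                   for r, s, m, m_max in block_edge_counts(A, sigma, P.shape[0]))

    def er_log_likelihood(A, rho):
        n = A.shape[0]
        m, m_max = np.triu(A, 1).sum(), n * (n - 1) // 2
        return m * np.log(rho) + (m_max - m) * np.log(1 - rho)

    def normalized_ratio(A, sigma, P):
        # L = 2/[n(n-1)] * (log-likelihood under the SBM minus log-likelihood
        # under the equivalent Erdos-Renyi ensemble of average density rho)
        n = A.shape[0]
        counts = block_edge_counts(A, sigma, P.shape[0])
        rho = sum(m_max * P[r, s] for r, s, _, m_max in counts) / (n * (n - 1) / 2)
        diff = sbm_log_likelihood(A, sigma, P) - er_log_likelihood(A, rho)
        return 2 * diff / (n * (n - 1))

The instance is declared detectable, in the likelihood-ratio sense of the condition above, whenever normalized_ratio returns a positive value.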
the ( equivalent ) normalized log - likelihood ratio will be more practical for our purpose .this simple transformation brings the line of separation between models from to , and prevents the resulting quantity from becoming too large .more importantly , it changes products into sums , and allows for a simpler expression + \alpha_{rs}\!\log\!\!\left[\frac{1-p_{rs}}{1-\rho}\right]\!\!\bigg\}. \label{eq : normalized_log_likelihood}\ ] ] ] .each color denotes a block .( e - f ) empirical distribution of the normalized log - likelihood obtained from samples of . the bins in which the instances ( c - d ) fall are colored in red .notice that a negative log - likelihood ratio is associated with some instances in ( f ) ., title="fig : " ] + ] .this expression can be derived in two different ways .the simplest and most obvious of the two derivations is by direct computation of the average eq . over the joint distribution of the random variables ; the second derivation follows from the extensivity of kl divergence for independent random variables.see supplemental material for more details on the mathematical development . ]the bounds of will give us an intuition for what the easiest and hardest detectability problems might look like .the kl divergence is never negative , and eq .shows that the maximum of is ; the average of the normalized log - likelihood is thus confined to the interval an example of parameters that achieves the upper bound would be the sbm of density matrix , , with ] to compute . writing as [ see eq . ] , we find \;,\ ] ] i.e. , a ( approximate ) equation in closed form for .crucially , eq . predicts that can never be smaller than .this comes about because ( i ) and ( ii ) is a sum of variances , i.e. , a positive quantity .there are therefore two possible limits which will yield and : either or .some care must be exerted in analyzing the case ; equations tell us that the distribution of is concentrated on when its average is exactly equal to 0 .we conclude that is never reached but only approached asymptotically , for parameters that yield , with small but different from zero .the consequence of is that at most half of the instances of the sbm can be declared undetectable on the account of the condition .we can immediately reach a few conclusions on the interplay between the notions of average and .first , the symmetries of , ( see sec .[ subsubsection : symmetries_avg_detectability ] ) translates into symmetries for . to see this ,first notice that is conserved under the mapping ^ 2 & \mapsto [ -x_{rs}(1-p_{rs},1-\rho)]^2\;,\\ p_{rs}(1-p_{rs } ) & \mapsto ( 1-p_{rs})p_{rs}\;.\end{aligned}\ ] ] and that a permutation of the indexes only changes the order of summation of the terms of .second , hypersurfaces of constant average detectability need not be hypersurfaces of constant . to investigate this second important aspect of the connection between average detectability and ,let us further approximate eq . .the maclaurin series of the error function is , to the first order , \right\}\notag\;,\\ & \approx \frac{1}{\sqrt{2\pi } } \frac{{\langle \mathcal{l } \rangle}}{s_{q^ * } } + \frac{1}{2}\;. \label{eq : eta_maclaurin}\end{aligned}\ ] ] this is a reasonably accurate calculation of when is small , i.e. 
, close to the _ average _ undetectable regime .( recall that we do not allow diverging for the reasons stated in sec .[ subsec : eta_clt ] ) .it then becomes clear that on the hypersurfaces where is constant ( and close to 0 ) , is conserved rather than itself .equation embodies a trade - off between accuracy ( ) and variance ( ) : in the regions of the hypersurface of constant where the variance is large , must be comparatively small , and vice - versa .now , turning to the complementary case where consequently close to its maximum , we obtain a simple criterion for 1detectability based the asymptotic behavior of .it is reasonable to define a ( small ) threshold beyond which for all practical purposes .the error function goes asymptotically to with large values of its argument , but reaches its maximum of very quickly , so quickly , in fact , that is numerically equal to 1 to the 10^th^ decimal place .asking that the argument of in eq .be greater than this practical threshold , we obtain the inequality for .whenever the inequality holds , the associated ensemble is 1detectable with a tolerance threshold , i.e. , we can say that for all practical purposes , there are no instances of the sbm which are necessarily is not sufficient for detectability , some instances could still be undetectable . ] undetectable .the stochastic block model encompasses quite a few well - known models as special cases ; noteworthy examples include the _ planted partition model _ , the closely related _ symmetric sbm _ ( ssbm ) , the _ core - periphery model _ , and many more .these simplified models are important for two reasons .one , they are good abstractions of structural patterns found in real networks , and a complete understanding of their behavior with respect to detectability is therefore crucial .two , they are simple enough to lend themselves to a thorough analysis ; this contrasts with the general case , where simple analytical expressions are hard to come by . in the paragraphs that follow ,we investigate the _ general modular graph model _ ( gmgm ) , a mathematically simple , yet phenomenologically rich simplified model .thanks to its simpler parametrization , we obtain easily interpretable versions of the expressions derived in secs .[ section : finite_limit][section : eta - detectability ] .the gmgm can be seen as constrained version of the sbm , in which _ pairs _ of blocks assume one of two roles : inner or outer pairs .if a pair of blocks is of the `` inner type '' , then one sets .if a pair of blocks is of the `` outer type '' , then one sets .the resulting density matrices can therefore be expressed as where is a indicator matrix [ if is an inner pair ] , and where is a length vector of ones .a non - trivial example of a density matrix of this form is shown in fig .[ fig : general_modular_graph ] ( a ) .the figure is meant to illustrate just how diverse the networks generated by the gmgm may be , but it is also important to note that the results of this section apply to _ any _ ensemble whose density matrix can be written as in eq . .this includes , for example , the ssbm , obtained by setting and . whilst the parametrization in terms of and is simple, we will prefer an arguably more convoluted parameterization which is also more revealing of the natural symmetries of the gmgm ( in line with the transformation proposed in sec .[ subsubsection : hypersurfaces ] ) .the first natural parameter is the average density , which can be computed from eqs . 
and and which equals [ eq : natural_gmg_params ] \;,\notag\\ & = \beta p_{{\mathrm{in } } } + ( 1-\beta)p_{{\mathrm{out}}}\;,\end{aligned}\ ] ] where is the fraction of _ potential _ edges that falls between block pairs of the inner type .the second natural parameter is simply the difference the absolute value of quantifies the distance between the parameters of the gmgm and that of the equivalent random ensemble ; its sign tells us which type of pairs is more densely connected . in this natural parametrizationthe density matrix takes on the form , i.e. , a uniform matrix of with perturbation proportional to for the inner pairs .it might appear that we have increased the complexity of the model description , since the additional parameter now appears in the definition of the density matrix .it is , however , not the case , because we could consider the combined parameter .therefore , eqs . , together with and , suffice to unambiguously parametrize the model .the average normalized log - likelihood ratio is tremendously simplified in the natural parametrization of the gmgm ; it is straightforward to show that the ratio takes on the compact ( and symmetric ) form \big\ } \\- ( 1-\beta)\big\{h(\rho ) - h\bigl[\rho-\beta\delta\bigr]\big\ } \;,\end{gathered}\ ] ] by using together with the inverse of eqs . : [ eq : natural_gmg_params_inverse ] in fig .[ fig : general_modular_graph ] ( b ) , we plot in the space hereafter the density space for the indicator matrix shown in fig .[ fig : general_modular_graph ] ( a ) ( and unequal block sizes , see caption ) .unsurprisingly , is largest when the block types are clearly separated from one another , i.e. , when is the largest .notice , however , how large separations are _ not _achievable for dense or sparse networks .this is due to the fact that not all pairs map to probabilities in ] .all empirical results are averaged over independent instances of the sbm .we infer the partition of each of these instances with an optimal metropolis - hasting algorithm , seeded with the planted partition and the correct ensemble parameters .( a ) average rnmi of the planted and the inferred partition of the ssbm ( of nodes ) in the density space .solid red lines mark the border of the 1detectability region , with tolerance threshold , see eq . .dotted black lines show the two solutions of , see eq . .( b)(c ) phase transition at constant for networks of nodes ( b ) and nodes ( c ) .circles indicate the fraction of instances for which a correlated partition could be identified , while diamonds show the average of the rnmi ( lines are added to guide the eye ) .blue solid curves show , see eq . .the shaded region lies below the kesten - stigum bound ( here with ) .the dotted lines show the two solutions of ., title="fig : " ] and ] .all empirical results are averaged over independent instances of the sbm .we infer the partition of each of these instances with an optimal metropolis - hasting algorithm , seeded with the planted partition and the correct ensemble parameters .( a ) average rnmi of the planted and the inferred partition of the ssbm ( of nodes ) in the density space .solid red lines mark the border of the 1detectability region , with tolerance threshold , see eq . .dotted black lines show the two solutions of , see eq . 
.( b)(c ) phase transition at constant for networks of nodes ( b ) and nodes ( c ) .circles indicate the fraction of instances for which a correlated partition could be identified , while diamonds show the average of the rnmi ( lines are added to guide the eye ) .blue solid curves show , see eq . .the shaded region lies below the kesten - stigum bound ( here with ) .the dotted lines show the two solutions of ., title="fig : " ] it will be instructive to put our framework to the test and compare its predictions with numerical experiments that involve inference , i.e. , the detection of the planted partition of actual instances of the gmgm . the procedure will be the following : ( i ) generate an instance of the model , ( ii ) run an inference algorithm on the instance , and ( iii ) compute the correlation of the inferred and planted partition .the average detectability should predict the point where the average correlation becomes significant ( see below for a precise definition ) , and should give an upper bound on the fraction of correlated instances . before we proceed, let us emphasize that the outcome of these experiments is influenced by a number of factors .since it is conjectured that there exists a gap between information - theoretically feasible inference and efficiently achievable inference , we will have to be careful in choosing an inference algorithm otherwise we would run at risk of confounding the sources of error .even for the size considered , enumeration is impossible ; we must therefore resort to an approximate algorithm .we use an efficient algorithm based on the metropolis - hasting algorithm of ref . .unlike belief propagation , it works well on dense networks with many short loops . in the spirit of refs . we initialize the algorithm with the planted partition itself , to achieve our information theoretic threshold , even if efficient inference is not possible .we must also define precisely what is meant by correlated inference if we are to quantify our experiment .crucially , we have to account for finite size effects that could introduce spurious correlations .the so - called renormalized normalized mutual information ( rnmi ) of ref . appears a good choice .much like the well - known nmi , the rnmi is bounded to the [ 0 , 1 ] interval , and means that the planted partition and the inferred partition are identical . unlike the nmi , signals the absence of correlation between the two partitions , even in finite networks . in fig .[ fig : general_modular_graph_transition ] ( a ) , we plot in the density space of the gmgm .we use the parameters , and ] is a non - negative factor , and where we have defined .it turns out that the sum is not only globally negative , but that each term is also individually negative , i.e. , \left [ \frac{f(c ) f(\rho)}{f(p_{rs } ) } - 1\right ] \leq 0\qquad \forall r\leq s.\ ] ] this comes about because the sign of the logarithm always matches that of the bracket . to prove this statement, we treat 5 different cases and use the following identities repeatedly : the cases are : 1 . if : the logarithm equals 0 and the upper bound of eq. holds .2 . if and : the logarithm is positive [ see eq . ] .the bracket is also positive , since inequality can be rewritten as using the fact that .this simplifies to , in line with our premise .if and : the logarithm is positive . using our premise , we conclude that and . therefore , , i.e. , the bracket is positive .if and : the logarithm is negative . 
using our premise , we conclude that and .therefore , , i.e. , the bracket is negative .5 . if and : the logarithm is negative .the bracket is also negative , since the converse of inequality can be rewritten as using the fact that .this simplifies to , in line with our premise .this list covers all cases , and therefore completes the proof that , i.e. , that average detectability decreases as a result of the application of a upp .in refs . it is argued that sbm is undetectable when the extremal eigenvalues of the modularity matrix of its instances merge with the so - called `` continuous eigenvalue band '' .it is proved in ref . that this occurs when for the 2 block ssbm with poisson distributed degrees . since we are concerned with the finite case , let us first modify this result to account for binomial distributed degrees instead .it turns out that the corrected condition is found by substituting the expectations of poisson variables [ in the rhs of eq .] by that of binomial variables .this leads to }\;,\ ] ] or , in terms of the natural parameters of the gmgm , this equation bears striking similarity with eq . , our approximate equation for curves of constant .in fact , for the 2 block ssbm ( ) , the latter reads one obtains an exact equivalence between the two expressions by setting . the fact that modularity based spectral methods can not infer a correlated partition if [ eq . ] can thus be understood as stemming from a lack of statistical evidence for the sbm .66ifxundefined [ 1 ] ifx#1 ifnum [ 1 ] # 1firstoftwo secondoftwo ifx [ 1 ] # 1firstoftwo secondoftwo `` `` # 1'''' [ 0]secondoftwosanitize [ 0 ] + 12 ] , where }=\binom{m_{rs}^{\max}}{m_{rs } } p_{rs}^{m_{rs}}(1-p_{rs})^{m_{rs}^{\max}-m_{rs}} ] .the following chain rule holds for the kullback - leibler divergence : where are joint distributions for on the same supports .it is easy to see that if and are independent according to both distributions and , then the divergence is additive , i.e , this property trivially generalizes to an arbitrary number of random variables . because the distribution over all graphs can be seen as a product of independent distributions over all edges , we may write the kullback - leibler divergence as where and are the bernoulli random variables that govern the existence of the edge linking nodes and , according to the distribution and .these variables have two outcomes : either the edge exists with probability ( resp . ) , or does not with probability ( resp . ) . we write as an abbreviation of , the index of the block to which node belongs .the divergence associated with an edge is therefore where is , again , the binary entropy . now , taking for all pairs and summing over all edges [ see eq . ] , we find , after grouping terms by types , \;.\ ] ] the final result follows from the definition of the average density , after one substitutes for .[ thm : symmetries ] all transformations of the parameter space of the sbm that are ( i ) reversible , ( ii ) space - preserving , and ( iii ) valid at every point of the parameter space can be written as [ eq_group : transformation_unconstrained_algebraic ] where and where and are permutations that acts on the set . under the additional constraint that be preserved by and equal to , one must have the ( constrained ) transformation that act on can be seen as an element of group . 
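The average detectability and the eta estimate can also be evaluated numerically from the same ingredients. The sketch below computes the average normalized log-likelihood as a weighted sum of Bernoulli Kullback-Leibler divergences (equivalently, the binary-entropy form derived above), gives a closed form for the general modular graph model in a parametrization reconstructed from p_in = rho + (1-beta)*delta and p_out = rho - beta*delta, and estimates eta as the fraction of sampled instances with a positive ratio, using the sample_sbm and normalized_ratio helpers sketched earlier. The GMGM expression and the Monte Carlo route are reconstructions under those assumptions, not verbatim transcriptions of the stripped equations.

    import numpy as np

    def h(x):
        return -x * np.log(x) - (1 - x) * np.log(1 - x)          # binary entropy (nats)

    def kl_bern(p, q):
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    def average_detectability(block_sizes, P):
        # <L> = sum over r <= s of alpha_rs * D_KL(Bern(p_rs) || Bern(rho))
        sizes = np.asarray(block_sizes)
        n = sizes.sum()
        m_tot = n * (n - 1) / 2
        pairs = [(sizes[r] * (sizes[r] - 1) / 2 if r == s else sizes[r] * sizes[s], P[r, s])
                 for r in range(len(sizes)) for s in range(r, len(sizes))]
        rho = sum(m_max * p for m_max, p in pairs) / m_tot
        return sum((m_max / m_tot) * kl_bern(p, rho) for m_max, p in pairs)

    def gmgm_detectability(rho, delta, beta):
        # closed form for the general modular graph model (reconstructed parametrization)
        p_in, p_out = rho + (1 - beta) * delta, rho - beta * delta
        return beta * (h(rho) - h(p_in)) + (1 - beta) * (h(rho) - h(p_out))

    def eta_monte_carlo(block_sizes, P, trials=300, rng=0):
        # fraction of sampled instances whose normalized log-likelihood ratio is positive
        rng = np.random.default_rng(rng)
        hits = 0
        for _ in range(trials):
            A, sigma = sample_sbm(block_sizes, P, rng=rng)
            hits += normalized_ratio(A, sigma, P) > 0
        return hits / trials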
if , where is the dimension of and , then and can be seen as elements of the symmetric group .the constrained symmetry group is therefore , and it is of order .let us first introduce new notation to clarify the proof of theorem [ thm : symmetries ] .first , we will define vectors and whose entries are the entries of the upper triangle ( and diagonal ) of and .we use to denote the standard scalar product between vectors and in .thus , in particular , we will write the average density as and the average detectability as \ ! p_{rs} + \log\!\!\left[\frac{1-p_{rs}}{1-\rho}\right]\!\right\}\;,\notag \\ & = \braket{\alpha|u(\bm{\alpha},\bm{p})}\ ; , \label{eq : braket_def_l}\end{aligned}\ ] ] where is vector parametrized by , whose entries are given by we also introduce and , two permutation matrices such that and , where is the element of vector ( we use double indexes in the vectors to emphasis the matrix to vector change in notation ) . in this notation , eqs .are given by [ eq_group : transformation_unconstrained_matricial ] where is a diagonal matrix with element on the diagonal , where is the identity matrix , and where is also a diagonal matrix .the first part of theorem [ thm : symmetries ] follows trivially from the definition of the parameter space , and from the fact that transformations which meet requirements ( i ) , ( ii ) and ( iii ) are the symmetries of this space . since takes its value in the standard simplex of dimension andsince is confined to the unit cube , the complete parameter space is given by the cartesian product of both space we can simply compose the symmetry group of both spaces to obtain the complete symmetry group .the symmetry group of the standard simplex and that of the unit cube are well - known results in geometry and algebra : the symmetry group of the -dimensional unit cube is isomorphic to the hyperoctahedral group , and the symmetry group of the -dimensional standard simplex is isomorphic to the symmetric group .their action can be written as in eq . , which proves the first part of the theorem .to prove the second part of the theorem , we look for the subset of all transformations of the form that also preserve , i.e. , transformations in that map to and that satisfy it is easy to check that if and with , then the average density equals with the same constraint on the permutations and , the average of the normalized log likelihood is given by , in the transformed parameters , - \sum_{r\leq s}\alpha_{\pi(r , s)}h[\gamma + ( 1 - 2\gamma)p_{\pi(r , s)}]\;,\\ & = h(\rho ) -\sum_{r\leq s}\alpha_{rs } h(p_{rs } ) = { \langle \mathcal{l}(\bm{\alpha},\bm{p } ) \rangle}\;,\end{aligned}\ ] ] since and permutations can be ignored ( they induce an irrelevant reordering of the sum ) .the constrained transformations therefore preserve .+ the above calculation is the sufficient part of the proof . to complete the proofwe must show that is conserved _ only if _ and .first , we note that by the properties of the scalar product and permutation matrices , we have the following obvious symmetry which is valid for all permutation matrices .we use this symmetry to `` shift '' all permutation matrices to the second part of the scalar product representation of , i.e. we write ( we use the fact that is a permutation matrix by definition ) .now , from eq . 
, it is clear that we will have if and only if where .since is analytic in , we can expand it by using taylor series ; this creates an infinite series of constraints that must all be satisfied .in particular , condition will be satisfied only if this is true if and only if , for all , one has where . here, is the transformed vector , on which the inverse of permutation is also applied .note that in writing condition we have used in the denominators .let us now suppose that tends to the point , which is such that for all except for ( i.e. , ) . in this limit , eq. reads which is trivially satisfied when but not otherwise .let us suppose and expand the equation around . from this second series expansion one concludes that the equality is satisfied if either or . in both cases, the indices must match , which implies that . by repeating the same argument for all ,we conclude that .thus , the map is a symmetry only if .this leaves the last part open , i.e. , the proof that .let us , by contradiction , assume that differs from one set of indices to the other and define the sets and by then one can write where returning to eq . for and using the newfound fact that which implies ( no more permutations ) , we find this can only be true if , i.e. , if or .therefore , , with .[ thm : convexity ] is convex with respect to .this property of is perhaps surprisingly not a consequence of the convexity of the kl divergence .instead , it follows from the log - sum inequality .we prove that is convex with respect to by showing that it satisfies the convexity condition explicitly for all ] . then , consider the instance with edge count matrix ,[n^2/4,0]] ] .when is close to 0 or 1 , this region is a triangle . at intermediate values of , parts of the triangle lie outside the cube of probabilities the region is then a polygon of more than 3 edges , see fig .[ fig : two_block_sbm ] ( b ) for an example .we do not investigate the general 2 blocks sbm as thoroughly as the gmgm , since much of the results and observations are identical to those of the previous case study .it will , however , be instructive to consider the average detectability and symmetries of this simple model .the average log - likelihood is , in the space , \right\}\notag\\ + ( 1-\beta)^2&\left\{h(\rho ) - h\left [ \rho + \frac{\beta^2}{1 - 2\beta(1-\beta)}\delta_x - 2\beta(1-\beta)\delta_y \right]\right\}\notag\\ + 2\beta(1-\beta)&\left\{h(\rho ) - h\left [ \rho + [ 1 - 2\beta(1-\beta)]\delta_y \right]\right\}\;,\end{aligned}\ ] ] our main objective in studying the two - blocks sbm is to showcase anew the technique introduced in the main text to compute the hypersurfaces of constant .the first step is to invert the parametrization upon substitution of these parameters in \;,\ ] ] one finds + \delta_y^2[\alpha_{12}(1-\alpha_{12})]\;.\end{aligned}\ ] ] equation predicts that these hypersurfaces are ellipses in the plane of constant density , centered at , with major axis and minor axis .this result is put to the test in fig .[ fig : two_block_sbm ] ( b ) , where we also show numerical solutions for comparison. agreement is , once again , excellent when is not too large , while leads to significant errors in our prediction . following the ellipses in fig .[ fig : two_block_sbm]-(c ) shows that that the easiest inference problems those where is the largest are the ones in the corner on the edges of the accessible region .these regions are those where block pairs are well segregated , i.e. 
, the bipartite ensemble with [ top - corner ] , the perfectly assortative cases with where is a constant [ bottom edge ] .the purpose of the parametrization is to facilitate the calculation of hypersurfaces of constant . for the gmgm, the natural parametrization also had the added benefit of highlighting the symmetries naturally .this needs not be the case in all variants of the sbm , as we now show .let us first simplify the notation ; we define as the fraction of nodes that belong to block , i.e. , and ( not to be confused with the of the gmgm ) . in the limit where , we then have since the parametrization assigns a special significance to the direction , the symmetries that involve these two blocks [ and _ not _ the pair ] , are the simplest . by direct enumeration, one finds i.e. , the identity , the pure graph complement , the permutation of block pairs ( 0,0 ) and ( 1,1 ) , and the same permutation accompanied by the graph complement . notice how this subset of transformation forms , once again , a group isomorphic to the klein four - group .the transformation equations would be , however , much less compact if we were to list transformations involving , say , cyclic permutations of the blocks .this situation is common : choices of parameters that favor a particular pair of blocks will yield compact symmetry equations for this pair of blocks , but not the others .therefore symmetries are , in general , best expressed directly in terms of and , unless the model has a special and all encompassing parametrization like the gmgm .in this short section , we detail the inference algorithm used in the case studies of the main text .optimal inference can be achieved ( quite inefficiently ) by evaluating the complete posterior distribution of the model for all possible partitions .the idea is to construct the marginal distribution which gives the probability that node is in block , given a network and some generating parameter and [ denotes the index of the block of node in partition .it is easy to see that we can then maximize the probability of guessing the partition correctly by assigning nodes according to while this method is attracting , it runs into an important problem very quickly , for it is impossible to compute the marginal distribution exactly for networks of even moderate size an exponential number of terms is involved .there are multiple ways to circumvent this problem .a popular method is to construct the marginal distribution directly , using a belief propagation ( bp ) algorithm ( see ref . for an overview of related method ) .bp based methods are extremely efficient , since their complexity is of the order of the number of edges in the network .moreover , they work virtually perfectly for all practical purposes .the only problem is that bp algorithms rely on the hypothesis that the network is locally tree - like ; this is never entirely true , but still a reasonable approximation for sparse instances of the sbm .unfortunately , since our analysis calls for accurate results in all regimes ( i.e. , not only on sparse networks ) , we are forced to turn an inefficient alternative of bp , the markov chain monte carlo ( mcmc ) method ._ sample _ from the posterior distribution and can therefore be used to estimate the marginal distribution of eq . 
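as an illustration of the argmax rule above , the marginal distribution can be estimated directly from a list of sampled partitions . the short python sketch below is ours and not part of the reference implementation , and the names ( sampled_partitions , n_blocks ) are illustrative :

import numpy as np

def marginal_assignment(sampled_partitions, n_blocks):
    # estimate pi_i(r) as the fraction of samples in which node i sits in block r,
    # then assign each node to its most probable block (the argmax rule).
    samples = np.asarray(sampled_partitions)   # shape (n_samples, n_nodes), integer labels
    n_samples, n_nodes = samples.shape
    marginals = np.zeros((n_nodes, n_blocks))
    for partition in samples:
        marginals[np.arange(n_nodes), partition] += 1.0
    marginals /= n_samples
    return marginals, marginals.argmax(axis=1)

this estimator only makes sense if the block labels stay aligned across samples ; in our setting this is ensured by seeding the chain with the planted partition , as explained below .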
.we will use one of the simplest implementation of the mcmc method , the metropolis - hastings algorithm .one only needs to specify a _proposal distribution _( given a partition , what is the probability that the next partition in the chain is ? ) and an _ acceptance probability _ proportional to for all proposed transitions .the proposal distribution usually intervenes in the calculation of but will not be needed here , since we opt for a symmetric distribution ( see next paragraph ) .the chain formed by accepting transitions with probability is then ergodic if a number of conditions are met ( e.g. , the proposal distribution must respect the detailed balance ). our proposal distribution will be simple : take a node at random , and assign it to a random block ( see ref . for a more efficient alternative ) .one can easily check that this distribution meets all the necessary conditions for the ergodicity of the markov chain .following an argument analogous to that of ref . , we find that the acceptance probability associated to this proposal distribution is ^{k_r^{(i)}}\!\!\left[\frac{p_{ss}(1-p_{rs})}{p_{rs}(1-p_{ss})}\right]^{k_s^{(i)}}\!\ !\left[\frac{1-p_{rs}}{1-p_{rr}}\right]^{n_r -1}\!\!\left[\frac{1-p_{ss}}{1-p_{rs}}\right]^{n_s } \prod_{l\neq r , s } \left[\frac{p_{ls}(1-p_{rl})}{p_{rl}(1-p_{ls})}\right]^{k_l^{(i)}}\!\!\!\!\left[\frac{1-p_{ls}}{1-p_{rl}}\right]^{n_l}\ ; , \label{eq : a_sbm_single}\end{gathered}\ ] ] if node is switched from block to block , with denoting the number of neighbors of node in block . since there are terms in eq ., transition probabilities can be calculated in time , assuming that products and exponentiations are , and that one keeps items in memory to track the neighborhoods and the block assignments .updating the memberships and neighborhoods is then a operation .the memory cost can be reduced to if the neighborhoods are computed on the fly .this comes at no significant costs since keeping the memory up - to - date is as expensive as computing the memberships every time .we give a reference implementation of the algorithm at www.github.com/jg-you/sbm_canonical_mcmc .also , we note that if one samples from the general modular graph model ( gmgm ) , the algorithmic complexity is dramatically reduced , since the acceptance probability then simplifies to ^{k_s^{(i)}-k_r^{(i)}}\left[\frac{1-p_{in}}{1-p_{out}}\right]^{(n_s - k_s^{(i)})-(n_r - k_r^{(i ) } ) + 1}\ ; , \label{eq : a_ppm_single}\ ] ] i.e. , the transition probability only depends on the blocks of the nodes involved in the mcmc move . 
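a hedged sketch of the corresponding sampler is given below ; rather than transcribing eq . ( [ eq : a_sbm_single ] ) , it accepts a single - node move with probability min(1 , likelihood ratio) computed from the bernoulli terms touched by the move , which is equivalent for a flat prior over partitions . all names are illustrative , the adjacency matrix is assumed dense , blocks is an integer numpy array ( modified in place ) , and the entries of p are assumed strictly between 0 and 1 :

import numpy as np

def mh_sweep(adj, blocks, p, rng):
    # one metropolis-hastings sweep: for each of n proposals, pick a random node,
    # propose a random block, and accept with probability min(1, exp(delta)),
    # where delta is the change of log-likelihood of the observed graph.
    n = len(blocks)
    n_blocks = p.shape[0]
    for _ in range(n):
        i = rng.integers(n)
        r, s = blocks[i], rng.integers(n_blocks)
        if s == r:
            continue
        mask = np.ones(n, dtype=bool)
        mask[i] = False                      # ignore the (absent) self-loop
        a = adj[i, mask]
        p_old = p[r, blocks[mask]]
        p_new = p[s, blocks[mask]]
        delta = np.sum(a * np.log(p_new / p_old)
                       + (1 - a) * np.log((1.0 - p_new) / (1.0 - p_old)))
        if np.log(rng.random()) < delta:     # metropolis acceptance
            blocks[i] = s
    return blocks

repeated sweeps of this kind , initialized as discussed next , produce the samples fed to the marginal estimator above ; in this naive version every proposal costs a number of operations linear in the number of nodes .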
in a normalsetting where the planted partition is not known one would initialize the metropolis - hastings at random or using the partition generated by a simpler algorithm .but since we are interested in _ detectability _ , we might as well initialize the chain with the planted partition itself .if the partition is undetectable , then the algorithm will show no particular preference for this initial configuration and `` wander away '' towards partitions uncorrelated with the planted partition .however , if the partition is detectable , then the chain will concentrate around the initial configuration , and the marginal distribution will yield a distribution correlated with the planted partition .this choice of initial condition is not merely a trick to avoid transients : it also allows correlated recovery even in the _ hard phase _ , where the planted partition is detectable , but exponentially hard to find .in this phase , the planted partition is identifiable but not globally stable , in the sense that many other locally optimal partitions exist .when this is the case , an algorithm that relies on local information such as the metropolis - hastings algorithm cannot be expected to identify the correct partition . by seeding the algorithm with the planted partition ,we ensure that it will be found ( if it is detectable ) . even though we seed the inference algorithm with the planted partition, the chain can still go through a transient before it settles in its steady state , since the planted partition need not be typical of the stationary distribution .this can cause problems , since one must ensure that the chain has settled before the sampling begins otherwise biased results are to be expected . in fig .[ fig : mcmc_calibration ] , we show the distance between the partition currently consider by the algorithm and the initial condition for different parametrization of the gmgm .these results show that the transient can be extremely long when there is little difference between the parameters of the ensemble and that of the equivalent random ensemble .for instance , the bottom panel of fig .[ fig : mcmc_calibration ] show that the algorithm is still in the transient after steps , when it is applied to instances of the gmg with , 12 & 12#1212_12%12[1][0] _( , ) _ _ ( , ) ( ) * * , ( ) * * , ( ) _ _ ( ) * * , ( ) * * , ( ) * * , ( ) ( ) * * , ( ) | it has been shown in recent years that the stochastic block model is undetectable in the sparse limit , i.e. , that no algorithm can identify a partition correlated with the partition used to generate an instance , if the instance is sparse and infinitely large . real networks are however finite objects , and one can not expect all results derived in the infinite limit to hold for finite instances . in this contribution , we treat the finite case explicitly . we give a necessary condition for finite size detectability in the general sbm , using arguments drawn from information theory and statistics . we then distinguish the concept of average detectability from the concept of instance - by - instance detectability , and give explicit formulas for both definitions . using these formulas , we prove that there exist large equivalence classes of parameters , where widely different network ensembles are equally detectable with respect to our definitions of detectability . in an extensive case study , we investigate the finite size detectability of a simplified variant of the sbm , which encompasses a number of important models as special cases . 
these models include the symmetric sbm , the planted coloring model , and more exotic sbms not previously studied . we obtain a number of explicit expressions for this variant , and also show that the well - known kesten - stigum bound does not capture the phenomenon of finite size detectability even at the qualitative level . we conclude with two appendices , where we study the interplay of noise and detectability , and establish a connection between our information - theoretic approach and random matrix theory . |
from the statistical study of financial time series have arisen a set of properties or empirical laws sometimes called `` stylized facts '' or seasonalities .these properties have the characteristic of being common and persistent across different markets , time periods and assets . as it has been suggested , the reason why these `` patterns '' appear could be because markets operate in synchronization with human activities which leave a trace in the financial time series .however using the `` right clock '' might be of primary importance when dealing with statistical properties and the patterns could vary depending if we use daily data or intra - day data and event time , trade time or arbitrary intervals of time ( e.g. , , minutes , etc . ) .for example , it is a well - known fact that empirical distributions of financial returns and log - returns are fat tailed , however as one increases the time scale the fat - tail property becomes less pronounced and the distribution approach the gaussian form . as was stated in , the fact that the shape of the distribution changes with time makes it clear that the random process underlying prices must have a non - trivial temporal structure . in a previous work allez et al . established several new stylized facts concerning the intra - day seasonalities of single and cross - sectional stock dynamics .this dynamics is characterized by the evolution of the moments of its returns during a typical day .following the same approach , we show the bin size dependence of these patterns for the case of returns and , motivated by the work of kaisoji , we extend the analysis to relative prices and show how in this case , these patterns are independent of the size of the bin , also independent of the index we consider but characteristic for each index .these facts could be used in order to detect an anomalous behaviour during the day , like market crashes or intra - day bubbles .the present work is completely empirical but it could offer signs of the underlying stochastic process that governs the financial time series .the data consists in two sets of intra - day high frequency time series , the cac and the s&p . for each of the days of our period of analysis ( march ), we dispose with the evolution of the prices of each of the stocks that composes our indexes during a specific day from a.m. to p.m. the main reasons why we chose to work with these two indexes are : the number of stocks that compose them ( and ) , the time gap between their respective markets and the different range of stock prices ( between and usd for the s&p and between and eu for the cac ) .as the changes in prices are not synchronous between different stocks ( figure [ fig : fig1 ] ) , we manipulated our original data in order to construct a new homogeneous matrix of `` bin prices '' . in order to do this , we divided our daily time interval ] , , ... , b_{k } = [ 16:00 - t,16:00] ] represent averages over the ensemble of stocks in a given bin and day . 
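for concreteness , the resampling step just described can be sketched in a few lines of python ; the function names and the use of log - returns are our choices , and the raw data are assumed to be , for each stock , sorted arrays of timestamps ( in seconds ) and traded prices :

import numpy as np

def bin_prices(times, prices, day_start, day_end, n_bins):
    # homogenise one stock's asynchronous quotes: the price attached to a bin
    # is the closest price registered before the end of that bin
    # (if a bin ends before the first quote, the first available price is used).
    edges = np.linspace(day_start, day_end, n_bins + 1)[1:]   # bin end times
    idx = np.searchsorted(times, edges, side="right") - 1
    idx = np.clip(idx, 0, len(prices) - 1)
    return prices[idx]

def bin_returns(binned_prices):
    # bin (log-)returns from the homogenised price series; works on a
    # (stocks x bins) matrix or on a single stock.
    binned_prices = np.asarray(binned_prices, dtype=float)
    return np.diff(np.log(binned_prices), axis=-1)

stacking the output of bin_prices over stocks and days gives the homogeneous matrix of bin prices used in the rest of the analysis .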
if are the returns , can be seen as the return of an index equi - weighted on all stocks .the following results are in complete agreement with the results previously reported by .figure [ fig : fig2 ] shows the stock average of the single stock mean ] , skewness ] for the cac ( blue ) and the s&p ( green ) , and minute bin .as can be seen in figure [ fig : fig2](a ) , the mean tends to be small ( in the order of ) and noisy around zero .the average volatility reveals the well known u - shaped pattern ( figure [ fig : fig2](b ) ) , high at the opening of the day , decreases during the day and increases again at the end of the day .the average skewness ( figure [ fig : fig2](c ) ) is also noisy around zero .the average kurtosis exhibits an inverted u - pattern ( figure [ fig : fig2](d ) ) , it increases from around at the beginning of the day to around at mid day , and decreases again during the rest of the day . + as the time average of the cross sectional mean is equal to the stock average of the single stock mean , the result we show in figure [ fig : fig3](a ) is exactly the same as the one shown in figure [ fig : fig2](a ) .the time average of the cross sectional volatility ( figure [ fig : fig3](b ) ) reveals a u - shaped pattern very similar to the stock average volatility , but less noisy ( less pronounced peaks ) .the dispersion of stocks is stronger at the beginning of the day and decreases as the day proceeds .the average skewness is noisy around zero without any particular pattern ( figure [ fig : fig3](c ) ) .the cross sectional kurtosis ( figure [ fig : fig3](d ) ) also exhibits an inverted u - pattern as in the case of the single stock kurtosis .it increases from around at the beginning of the day to around at mid day , and decreases again during the rest of the day .this means that at the beginning of the day the cross - sectional distribution of returns is on average closer to gaussian .+ in figure [ fig : fig4 ] , we compare the stock average of single stock volatility ] , volatility ] and kurtosis ] with an average value of zero .the single stock kurtosis takes values between ] with an average value of zero ( figure [ fig : fig8](c ) ) .the average kurtosis starts from a value around in the very beginning of the day and decreases quickly to the mean value in the first minutes of the day ( figure [ fig : fig8](d ) ) .+ similarly as we did in section 3.3 for returns , in figure [ fig : fig9 ] we show a comparative plot between the stock average of the single stock volatility ] , the time average of the cross - sectional volatility and the average absolute value of the equi - weighted index ( figures [ fig : fig4 ] and [ fig : fig9 ] ) .one thing that results interesting to observe , in the case of the returns , is that these `` patterns '' actually depend on the size of the bin .this fact was well illustrated with different values of bin size through figure [ fig : fig11 ] for volatilities and figure [ fig : fig12 ] for kurtosis in which its inverted u - pattern is evident just when we consider `` small '' bin sizes . 
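the seasonalities described above are straightforward to reproduce once the bin returns are arranged as an array of shape ( days , stocks , bins ) ; the sketch below is illustrative and computes , for each bin , the cross - sectional moments averaged over days ( the single stock moments follow the same pattern with the roles of the stock and day axes exchanged ) :

import numpy as np
from scipy.stats import skew, kurtosis

def cross_sectional_moments(returns):
    # returns: array of shape (n_days, n_stocks, n_bins).
    # moments are taken across stocks for each (day, bin), then averaged over days,
    # giving one intra-day curve per moment: the volatility curve shows the u-shape
    # and the kurtosis curve the inverted-u pattern discussed above.
    mean = returns.mean(axis=1).mean(axis=0)
    vol = returns.std(axis=1).mean(axis=0)
    sk = skew(returns, axis=1).mean(axis=0)
    kurt = kurtosis(returns, axis=1, fisher=False).mean(axis=0)  # non-excess kurtosis
    return mean, vol, sk, kurt

applying the same function to relative prices instead of returns gives the curves discussed next .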
in the case of relative prices , the volatilities also exhibit the same kind of intra - day pattern ( figure [ fig : fig9 ] ) , but contrary to the returns , it is independent of the size of the bin and of the index we consider , while remaining characteristic of each index . we suggested in section how this bin size independence of intra - day patterns in relative prices could be used to characterize `` atypical days '' for indexes and `` anomalous behaviours '' in stocks . this was shown in figures [ fig : fig13 ] and [ fig : fig14 ] , where we presented our intra - day seasonalities for the mean ( a ) and volatility ( b ) in blue , together with the cross - sectional moments for 3 randomly picked days ( and the single stock moments for 3 randomly picked stocks ) in light blue , and we saw how the average behaviour of their moments moves along with our intra - day patterns , which was not the case for the day and the stock . esteban guevara thanks anirban chakraborti , frederic abergel , remy chicheportiche and khashayar pakdaman for their support and discussions . special thanks to the european commission , the ecuadorian government and the secretaría nacional de educación superior , ciencia , tecnología e innovación , senescyt . | in this paper we perform a statistical analysis of the returns and relative prices of the cac and the s&p with the purpose of analyzing the intra - day seasonalities of single and cross - sectional stock dynamics . in order to do that , we characterized the dynamics of a stock ( or a set of stocks ) by the evolution of the moments of its returns ( and relative prices ) during a typical day . we show that these intra - day seasonalities are independent of the size of the bin and of the index we consider ( but characteristic for each index ) for the case of the relative prices , but not for the case of the returns . finally , we suggest how this bin size independence could be used to characterize `` atypical days '' for indexes and `` anomalous behaviours '' in stocks . |
the economic system is no doubt a many - particle system - it can be viewed as a collection of numerous interaction agents .so it is possible that methods and concepts developed in the study of strongly fluctuation systems might yield new results in this area .in fact , in the past decades the approaches from statistical physics have been applied in economics and a lot of interesting results , including empirical laws and theoretical models , have been achieved ( see and as a review of econophysics ) . among all these studies , a great deal of researchers is on the agent - based modelling and related non - trivial self - organizing phenomena . in the economic system ,the agents learn from each other , and their activities may be influenced by others actions .these interactions between agents may be simple and local , but they may have important consequence related with the emergence of global structure . to understand the mechanism behind these innovation phenomena , the methods and concepts in phase transitions and critical phenomena are helpful .for instance , in the study of majority and minority game , opinion formation , and computational ecosystems . in this paper , we focus on the formation of labor division . roughly speaking, an economic organizational pattern is said to involve the division of labor if it allocates the labor of different individuals to different activities .hence the specialization of individuals and the number of professional activities are two sides of division of labor .it is a common functional organization observed in many complex systems , and it is a fundamental way to improve efficiency and utilization so as to get global optimization for the system . in order to investigate the mechanism behind the formation of labor division, we have constructed a simple model with many interacting agents .every agent has only two kind of tasks , namely and we describe the level of specialization of agent by his working - time share spent on producing or . each agent make their decisions for working - time in different tasks , and receive payoffs according to their and other agents choices .the agents can adapt by evaluating the performance of their strategies from past experience so as to get maximum returns .the returns for any agent is determined by its production with endogenous technical progress - through the mechanism of learning by doing , and its cooperation with other agents . just like the hamiltonian in statistical physics, the payoff function determines the behavior of the agent in economic system . because of the bounded rationality and incompleteness information in the system , we have introduced a parameter , named social temperature ( in the model , we have absorbed the into the other parameters .such an approach is traditionally used in statistical physics , for example , let , so the new in hamiltonian of ising model means actually . ) , to describe the degree of randomness in decision - making .then we assume that the system should obey the canonical ensemble distribution , that is the probability of a microstate is proportional to its ` boltzmann factor ' determined by the total returns of the microstate : then using the metropolis simulation method , we can get a master equation to investigate the evolution of the system . 
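concretely , the metropolis rule implied by the canonical distribution above reduces to a few lines ; the sketch is generic , and the temperature is written explicitly even though , as noted above , it is absorbed into the other parameters in the model :

import numpy as np

def metropolis_accept(delta_return, temperature, rng):
    # accept a proposed change of working mode with probability
    # min(1, exp(delta_return / temperature)): moves that increase the total
    # return are always accepted, unfavourable moves survive with a boltzmann factor.
    if delta_return >= 0.0:
        return True
    return rng.random() < np.exp(delta_return / temperature)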
with the continual change of system parameters , we have found a so called ` social phase transitions ' phenomenon related to the emergence of labor division .the model is defined in section 2 .the economies of specialization is introduced by increasing returns from learning by doing . and the economies of complementarity is described by an additional payoffs from the combination of two products .section 3 gives the numerical results .the effects of parameters on critical point are discussed in detail .it is revealed that although the technical progress is a key factor that determine whether or not the labor division will happen , the competitive cooperation among agents has very important effects on the critical point .our results are summarized in section 4 .let s consider a system consists of homogeneous agents . in a given time period , any agent has two different kind of tasks , namely and . in any time unit , the fraction of working time on task for agent is denoted by ] , which describes the working pattern for the agent .if equals or , the agent is full specialized in or and we call the system is in complete division of labor .if equals , it means that the agent spends same time interval on tasks and , and we call it as the time - dividing working mode in the following discussion . usually could be any real number between and , in this case the agent does not have any preference on job or focus on the global behavior of the system . based on the above description of the agent , we introduce the following two order parameters to describe the behavior of the system on macroscopic level : describes the intensity of labor division and cooperation in the system .it has three special values , , represent time - dividing working mode , every equals , no - preference working mode , distributed in ( 0,1 ) randomly , and full specialization , every equals or , respectively . gives the agents allocation on tasks and on global ( in average on macroscopic level ) . in the following discussion, we try to specify a suitable evolution rule for based on the similar approach in statistical physics .so we can then determine the dynamical equations for , and get the final steady states for and .analogous with ising model in statistical physics , could be treated as the spin in point of the lattice , and then is the magnetization .so if we can give the hamiltonian of the system , the evolution of the system could be determined . in the economic system ,the agent switches his behavior according to his evaluations on the returns from production .every agent try to get maximum return just as any practical tends to stay in the state with lowest energy .so the payoff function from the agent s production should have the same effect as the hamiltonian . the working mode for each agentshould be determined by its returns .according to the previous literatures on specialization and economic organization , we know that there are several factors that related to the division of labor , such as the increasing returns to specialization and transaction costs. in fact , limited ability on learning and incomplete information on the technology will lead to the labor division directly .but we do nt take them into account in this model . in our model ,every agent know the technology for producing and . 
and for any agentthere is not any comparative advantages for producing or in the initial .the technical progress is achieved by the mechanism of learning by doing without any cost .the payoff function for agent is given by the following formulas : in the functions above , and are the technology for producing and respectively .for simplicity and without losing any generality , we assume that unit production will get unit return .and is the return of unit production . gives the additional benefits for composite and into a final product .so it is related to the less one between and .if agent has some more single product after the composition himself , he can also get another return , named , from the cooperation with other agents .but the return got from the composition with the product from other agent is factored by . because of the incomplete information in the system , any agent could nt know the situation of production on global . so in order to get the corresponding returns for every agent , we used the average - field approach similar with that in statistical physics .that means , for every agent , he will match his product with all the other agents and then the result is the average .this assumption indicate that there are already a public market in economics . in the function for , gives the surplus product for agent , and the sum of gives the surplus product of the all other agents .when these two terms have different signs , their product is negative and , the corresponding payoff from the cooperation , will be also negative . in this case , the surplus product of the agent is the same kind as the final surplus of all the other agents . because of the transaction cost and the diseconomies of incomplementarity , it is rationale that the agent would get negative payoff . the way for technical progress is learning by doing .this mechanism is related to the accumulation of agent s historical production behavior .that means the development of comparative advantages is determined by related working time .let denote the integral working time on or for agent : we have introduced two mechanisms for technical progress , named -mechanism and -mechanism .they are given by following functions : these two mechanisms will give different results .equation([gamma ] ) is an exponential function which gives unlimited growth on technology , while there is an upper limit in equation([mu ] ) .figure([g_m ] ) shows three functions for in our computer simulations .used in the simulations .parameters are labelled in the graph . ] in the production function discussed above , term is the return from the agents cooperation .it introduces interactions among all agents ( the same as the interactions among spins ) .so if we discuss the dynamics in -space , we should construct evolutionary equations for .but they are difficult for both theoretical analysis and computer simulation . 
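the two learning - by - doing mechanisms above are easy to evaluate numerically ; the python forms below are representative sketches only , reproducing the stated qualitative behaviour , unbounded exponential growth for one mechanism and saturating growth for the other , rather than the exact expressions of eqs . ( [ gamma ] ) and ( [ mu ] ) , and all parameter names are illustrative :

import numpy as np

def gamma_mechanism(cumulative_time, rate):
    # unbounded technical progress: exponential in the accumulated working time.
    return np.exp(rate * cumulative_time)

def mu_mechanism(cumulative_time, rate, ceiling):
    # bounded technical progress: grows with the accumulated working time but
    # saturates at a finite ceiling, as in figure ([g_m]).
    return 1.0 + (ceiling - 1.0) * (1.0 - np.exp(-rate * cumulative_time))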
in the following discussion, we try to describe the evolution of the system in -space .that is to discuss the evolutionary behavior of the joint density function in the space sponsored by .then the total production for the system is : is the production of agent given by eq(single ) .the same as the approach of critical dynamics in ising model , the master equation for the joint density function in space is % \end{array } \label{master}\]]where is the transition probability which is determined by the boltzman factors as the following : in the above transition probabilities , is the change of returns related to the variation of working mode for agent . from eq.([total ] ) we can get % \end{array}%\]]with the master equation([master ] ) and certain initial conditions , we can get evolving behaviors of and . then we can discuss the phenomena on labor division .it is difficult to have some theoretical results .so in the next section we will show some numerical simulation results by monte carlo simulation method .based on the master equation([master ] ) and corresponding transition probability([jump ] ) , the simulation is proceed on the following metropolis algorithm : 1 .for any given state with for agent , a new state is randomly selected ; 2 . if , then the transition from to is proceeded ; 3 . if , then a random number is selected .if , the transition from to is proceeded or else the agent keeps the original state .4 . for another agent ,goto step 1 . for any given initial state ,the system will achieve a certain steady state after some transient process .figure([lambda - t ] ) gives a typical evolution behavior under mechanism . , . ]there are several parameters related to the mechanism of labor division in the model. we will show the simple or maybe trivial results of -mechanism first and then emphasize our discussion on the results of -mechanism .and we let in all the simulations before we discuss the effects of parameters and on the system evolution in the end . as shown in([ggamma ] ) because of -mechanism is described by an exponential function , the system will achieve the state of labor division for any given initial state and no matter how small is .that means if the intensity of technical progress is big enough , the agent would definitely specialized in a kind of task .other factors such as competitive cooperation described by parameter have no effects on the long term evolution . in the final stationary state as a function of parameter under -mechanism .the final steady state of the system is determined by or . ]but for the model with -mechanism , has some important effects on the system evolution . as shown in figure([betamu ] ) , there are phase transitions results in labor division and has strong effects on the critical point for parameter . decreases as the increasing .the -mechanism for learning by doing gives a logistic growth for technology .that is much more realistic than the -mechanism .the results reveal that the competitive cooperation among agents is very important for the emergence of labor division . 
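to make the monte carlo procedure concrete , a compact and deliberately simplified version of the whole loop is sketched below ; the payoff keeps only two ingredients of the production function , learning by doing and the composition bonus , the inter - agent cooperation term is omitted , the specialization measure is written as the average of |2x - 1| ( one possible choice , not necessarily the definition used above ) , and every name and parameter value is illustrative :

import numpy as np

def simulate(n_agents=100, n_sweeps=2000, rate=0.01, bonus=0.5,
             temperature=1.0, seed=0):
    # schematic monte carlo for the labor-division model: each agent carries a
    # working-time share x in [0, 1] on one task, its technologies grow with the
    # accumulated time spent on each task (learning by doing), and proposed
    # changes of x are accepted with the boltzmann rule of eq. ([jump]).
    rng = np.random.default_rng(seed)
    x = rng.random(n_agents)            # working-time shares
    time_a = np.zeros(n_agents)         # accumulated time on each task
    time_b = np.zeros(n_agents)

    def payoff(share, ta, tb):
        tech_a = np.exp(rate * ta)      # representative learning-by-doing form
        tech_b = np.exp(rate * tb)
        qa, qb = tech_a * share, tech_b * (1.0 - share)
        return qa + qb + bonus * min(qa, qb)   # production plus composition bonus

    specialization = []
    for _ in range(n_sweeps):
        for i in rng.permutation(n_agents):
            proposal = rng.random()
            delta = (payoff(proposal, time_a[i], time_b[i])
                     - payoff(x[i], time_a[i], time_b[i]))
            if delta >= 0 or rng.random() < np.exp(delta / temperature):
                x[i] = proposal
        time_a += x                     # learning by doing
        time_b += 1.0 - x
        specialization.append(np.mean(np.abs(2.0 * x - 1.0)))
    return x, np.array(specialization)

with these toy ingredients , the specialization measure moves from values around 1/2 ( random shares ) towards 1 when learning dominates the thermal noise , mimicking the transition towards labor division discussed in the following paragraphs .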
however , as discussed in the end of this section , when , that is there is no mechanism of increasing returns , the system has no phase transition to labor division .this result is rationale for the model and it is consistent with the theory on specialization .the comparative advantages in production can only be introduced by a positive feedback caused by increasing returns .so the mechanism of learning by doing is a dominant factor for labor division ., parameter in the final stationary state as a function of parameter under -mechanism .there is phase transition in the system . has strongly effects on the critical point . ] in order to investigate the effects of all terms in production function in detail , we have simulated some special cases .the first case is .then and in equation ( single ) are all zero and technical progress is the only one factor that affects the final results . as shown in figure([no_beta ] ) , there is a phase transition in the system . under the given conditions , the critical point for ( ) is around 50 .-mechanism and without the term that is the payoff from the cooperation among agents , parameter in the final stationary state as a function of parameter for different parameter .the critical point is much larger than that in fig .4 and even in fig . 6 . ]then we have simulated the model with but no .the results are shown in ( [ no_eco ] ) .when is greater than a critical value , the system will stabilized in the state with .that is for any agent .every agent does the tasks and himself and spends the same time on both tasks .the system is in the time - dividing working mode . only when is smaller than , does the system have phase transition to labor division . but the critical value is much larger than that of in figure ( [ betamu ] ) .it is even larger than the in figure ( [ no_beta ] ) .so the term has the effect of anti - specialization . from the simulation results in figure ( [ betamu ] ) andjust as we have indicated in the beginning of this section , is helpful to labor division .actually the term is the benefit for agent get from the cooperation with all the other agents .it should has the same effect as the decreasing of transaction costs .so it is not surprising that this term will enforce specialization .-mechanism , parameter in the final stationary state as a function of parameter when there is only the term for technical progress in the production function .the critical point is much larger than that in fig .another interesting case is the system without technical progress , that is and are all equal 1 . in this case , the system is exactly the same as ising model .let and , then the total returns for the system is where .the distribution of the state for the system is determined by . comprising with ising model ,let the corresponding hamiltonian is let , then the system could be described by canonical ensemble with .that is an anti - ferromagnetic ising model with global interaction .the similar approach shows that this system either doest have phase transition related to labor division .the result is shown in figure([no_gamma ] ) . and as a function of parameter when there is no progress of productivity .the system has no phase transition . 
]parameter reflects the preference for tasks and .so it is expected that parameter , the ratio of returns for and , will affect final results on .the diagram in phase space for when is shown in figure ( [ lambda_2 ] ) , in which we can see fluctuates around zero , and no phase transition with symmetry breaking emerges . and as shown in figure([lambda_a ] ) , indeed has some effects on .but because of the combination mechanism described by parameter , has only a little effect on .it also has some detailed effects on labor division described by , as shown in figure ( [ alpha5_1 ] ) .but could not change the qualitative behavior of phase transition . in the final stationary state as a function of parameter for different .there is no symmetry breaking with the phase transition . ] in the final stationary state as a function of parameter .because of the cooperation mechanism described by parameter , has only a little effect on the final distribution . ] here are other two phenomena that should be pay attention to , first , the model has no scale effect for limited , as shown in figure ( [ 500 - 1000 ] ) .the phase curves for different are almost consistent .it has only some effects on the accurate of simulation .second , when in -mechanism for technical progress , the productivity declines with the time evolution .then the system could not reach the state of specialization .this result is also meaningful for real economic system . in the final stationary state as a function of parameter for different . ] , curves for different population size .the curves are almost consistent . ]a distinctive feature of the organization of a human society is the division of labor . from the classical economic theory[11 ] , the division of labor comes from the development of endogenous comparative advantages .so there is the intrinsic relationship between technical progress and the evolution of the division of labor . buthow and why it emerges from the system consists of identical individuals ?we have studied the formation of labor division by the approach of statistical physics .the results reveal that there is a phase transition with this pattern formation .although the progress of productivity dominated the phase transition occurs or not , the competitive cooperation among the agents has important effects on the critical point .so the market formation and labor division are usually reinforced each other .all the above results give us deep understanding to the evolution of labor division .studying the economy as an evolving complex system , we can avoid the standard economic assumptions of equilibrium based on rational behavior by agents . andthe concepts and methods developed in statistical physics is helpful to uncover fundamental principles governing the evolution of complex adaptive systems .so the approaches presented here have potential applications in a variety of economical , biological and financial problems .this work was supported by the national science foundation under grant no.10175008 and national 973 program under grant no.g2000077307 .we also thanks to all members of professor yang s group and faculties from department of system science for their warm discussion . | the emergence of labor division in multi - agent system is analyzed by the method of statistical physics . considering a system consists of n homogeneous agents . their behaviors are determined by the returns from their production . 
using the metropolis method of statistical physics , which in this model can be regarded as a kind of uncertainty in decision making , we constructed a master equation model to describe the evolution of the agents ' distribution . when we introduce the mechanism of learning by doing to describe the effect of technical progress , together with a formula for the competitive cooperation , the model gives the following results : ( 1 ) as the result of long term evolution , the system reaches a steady state . ( 2 ) when the parameters exceed a critical point , labor division emerges as the result of a phase transition . ( 3 ) although technical progress decides whether or not the phase transition occurs , the critical point is strongly affected by the competitive cooperation . from the above physical model and the corresponding results , we obtain a deeper understanding of labor division . key words : division of labor , statistical physics , phase transition , metropolis method pacs:05.45.-a ; 87.23-n |
in this paper , we are interested in the numerical approximation of a randomly - perturbed system of reaction - diffusion equations that can be written for , with initial conditions and , and homogeneous dirichlet boundary conditions .the stochastic perturbation is a space - time white noise and is a small parameter . in the recent article , we have proved that an averaging principle holds for such a system , and we have exhibited an order of convergence - with respect to - in a strong and in a weak sense : the slow component is approximated thanks to the solution of an averaged equation . in this article , we analyse a numerical method of time discretization which reproduces this averaging effect at the discrete time level .more precisely , our aim is to build a numerical approximation of the slow component , taking care of the stiffness induced by the time scale separation .the heterogeneous multiscale method - hmm - procedure can be used , as it is done in for sdes of the same kind .first we recall the general principle of such a method , which has been developped in various contexts , both deterministic and stochastic - see the review article and the references therein , as well as , , . in system ,the two components evolve at different time scales ; is the slow component of the problem , while the fast component has fast variations in time .we are indeed interested in evaluating the slow component , which can be thought as the mathematical model for a phenomenon appearing at the natural time scale of the experiment , whereas the fast component can often be interpreted as data for the slow component , taking care of effects at a faster time scale .instead of using a direct numerical method , which might require a very small time step size because of the fast component , we use a different solver for each time scale : a macrosolver and a microsolver .the macrosolver leads to the approximation of the slow component ; it takes into account data from the evolution at the fast time scale .the microsolver is then a procedure for estimating the unknown necessary data , using the evolution at the microtime scale , which also depends on the evolution at the macrotime scale .we emphasize a difference with the framework of : instead of analysing various numerical schemes , the infinite dimensional setting implies that we only focus on a semi - implicit euler scheme . in section [ secttheo ] , we state the two main theorems of this article : we show a strong convergence result - theorem [ cvforte ] - as well as a weak convergence result - theorem [ cvfaible ] - which are similar to the available results for sdes . compared to , we propose modified and simplified proofs leading to apparently weaker error estimates ; we made this choice for various reasons .first , even if apparently we get weaker estimates , under an appropriate choice of the parameters the cost of the method remains of the same order .second , the generalization of the final dimensional results would not yield the same bounds , due to the regularity assumptions we make on the nonlinear coefficients of the equations .finally , we can extend the weak convergence result to the situation where the fast equation only satisfies a weak dissipativity assumption . in the case of a linear fast equation - when is equal to - it is well - known that the second equation in is dissipative . 
in the general case , we make assumptions on so that this property is preserved for - see assumptions [ strictdiss ] and [ weakdiss ] below .the fast equation with frozen slow component - defined by in the abstract framework - then admits a unique invariant probability measure , which is ergodic and strongly mixing - with exponential convergence to equilibrium . under the strict dissipativity condition [ strictdiss ], we can prove that the averaging principle holds in the strong and in the weak sense ; moreover the `` fast '' numerical scheme has the same asymptotic behaviour as the continuous time equation .if we only assume weak dissipativity of assumption [ weakdiss ] , the averaging principle only holds in the weak sense , and we can not prove uniqueness of the invariant law of the fast numerical scheme .nevertheless , in the general setting gives an approximation result of the invariant law of the continuous time equation with the numerical method which is used to prove theorem [ cvfaible ] ; the order of convergence is with respect to the timestep size - the precise result is recalled in theorem [ weakdistance ] . the paper is organized as follows . in section [ sectdescri ] ,we give the definition of the numerical scheme .we then state the main assumptions made on the system of equations . in section [ secttheo ]we state the two main theorems proved in this article , while in section [ commsect ] we compare the efficiency of the hmm scheme with a direct one in order to justify the use of a new method . before proving the theorems ,we give some useful results on the numerical schemes .finally the last two sections contain the proof of the strong and weak convergence theorems .instead of working directly with system , we work with abstract stochastic evolution equations in hilbert spaces : with initial conditions , . to get system from , we take ; the linear operators and are laplace operators with homogeneous dirichlet boundary conditions - see example [ exampleab ] - and the nonlinearities and are nemytskii operators - see example [ exfg ] . the process is a cylindrical wiener process on - see section [ sectwiener ] . for precise assumptions on the coefficients ,we refer to section [ sectassumptions ] .we recall the idea of the averaging principle - proved in the previous article : when goes to , can be approximated by defined by the averaged equation the error is controlled in a strong sense by and in a weak sense by , where can be chosen as small as necessary , and where is a constant .the averaged coefficient - see - satisfies ,\ ] ] where is the unique invariant probability measure of the fast process with frozen slow component - more details are given in section [ known ] .to apply the hmm strategy , we need to define a macrosolver and a microsolver .we denote by the macrotime step size , and by the microtime step size .let also be a given final time .the construction of the macrosolver is deeply based on the averaging principle : for can be approximated by . if the averaged coefficient was known , one could build an approximation with a deterministic numerical scheme on the averaged equation ; nevertheless in general it is not the case , and the idea is to calculate an approximation of this coefficient on - the - fly , by using the microsolver .therefore the macrosolver is defined in the following way : for any , with the initial condition . 
has to be defined ; before that , we notice that the above definition leads to a semi - implicit euler scheme - we use implicitness on the linear part , but the nonlinear part is explicit .if we define a bounded linear operator on by , we rather use the following explicit formula we want to be an approximation of . the role of the microsolveris to give an approximation of - the fast process with frozen slow component , when is fixed ; moreover we compute a finite number of independent replicas of the process , in order to approximate theoretical expectations by discrete averages over different realizations of the random variables , in a monte - carlo spirit .therefore the microsolver is defined in the following way : we fix a realization index , and a macrotime step ; then for any as above we can give an explicit formula where for any .the noises are defined by where are independent cylindrical wiener processes on .it is essential to use independent noises at each macrotime step .it is important to remark that this equation is well - posed in the hilbert space - see the conditions required when a cylindrical wiener process is used in section [ sectwiener ] - since is a hilbert - schmidt operator from to , under assumptions given in section [ sectassumptions ] .the missing definition can now be written : is given by is the number of microtime steps that are not used in the evaluation of the average in , while is the number of microtime steps that are then used for this evaluation .each macrotime step then requires the computation of values of the microsolver . at each macrotime step be initialized at time . in our proofs, this is not as important as in , but for definiteness we use the same method : we initialize with the last value computed during the previous macrotime step : , while .the aim of the analysis for hmm schemes is to prove that under an appropriate choice of the parameters of the scheme , we can bound the error by expressions of the following kind , where is chosen as small as necessary : for , we have the strong error estimate and if is a test function of class , bounded and with bounded derivatives , we have the weak error estimate the origin of the three error terms appears clearly in the proofs - see sections [ sectfort ] and [ sectfaible ] : the first one is the averaging error , the second one is the error in a deterministic scheme with the macrotime step , and the third one is the weak error in a scheme for stochastic equations with the microtime step .we recall that in the spde case the strong order of the semi - implicit euler scheme for the microsolver used here is , while the weak order is , while in the sde situation the respective orders are and .the macrosolver is deterministic , so that the order is .precise results for any choice of are given in theorems [ cvforte ] and [ cvfaible ] below , while the choice of these parameters is explained in section [ commsect ] .as mentioned above , system satisfies an averaging principle , and strong and weak order of convergence can be given .the hmm method relies on that idea .the natural assumptions are basically the same as the hypothesis needed to prove those results , but must be strenghtened sometimes . the typical kind of coefficients is specified in examples [ exampleab ] and [ exfg ] . here ,we recall the definition of the cylindrical wiener process and of stochastic integral on a separable hilbert space - its norm is denoted by or just . for more details , see .we first fix a filtered probability space . 
a cylindrical wiener process on defined with two elements : * a complete orthonormal system of , denoted by , where is a subset of ; * a family of independent real wiener processes with respect to the filtration : when is a finite set , we recover the usual definition of wiener processes in the finite dimensional space .however the subject here is the study of some stochastic partial differential equations , so that in the sequel the underlying hilbert space is infinite dimensional ; for instance when , an example of complete orthonormal system is - see example [ exampleab ] .a fundamental remark is that the series in does not converge in ; but if a linear operator is hilbert - schmidt , then converges in for any .we recall that a linear operator is said to be hilbert - schmidt when where the definition is independent of the choice of the orthonormal basis of .the space of hilbert - schmidt operators from to is denoted ; endowed with the norm it is an hilbert space .the stochastic integral is defined in for predictible processes with values in such that a.s ; moreover when ;\mathcal{l}_2(h , k)) ] , with a uniform control with respect to : there exists such that for any ] , we define the operators and by with domains on , the norm and the sobolev norm of are equivalent : when belongs to a space , the exponent represents some regularity of the function .when , we can also define a bounded linear operator in with where . under the previous assumptions on the linear coefficients , it is easy to show that the following stochastic integral is well - defined in , for any : it is called a stochastic convolution , and it is the unique mild solution of under the second condition of assumption [ hypab ] , there exists such that for any we have ; it can then be proved that has continuous trajectories - via the _ factorization method _ , see - and that for any , any , there exists a constant such that for any we now give the assumptions on the nonlinear coefficients .first , we need some regularity properties : [ hypf ] we assume that there exists and a constant such that the following directional derivatives are well - defined and controlled : * for any and , and . * for any , , , . * for any , , , . * for any , , , . * for any , , , .we moreover assume that is bounded .we also need : [ hypf2 ] for defined in the previous assumption [ hypf ] , we have for any and latexmath:[\[\begin{gathered } we assume that the fast equation is a gradient system : for any the nonlinear coefficient is the derivative of some potential .we also assume regularity assumptions as for .[ hypg ] the function is defined through , for some potential .moreover we assume that is bounded , and that the regularity assumptions given in the assumption [ hypf ] are also satisfied for . 
for , we need a stronger hypothesis than for - in order to get proposition [ fbarlip ] .assumption [ hypf2 ] becomes : [ hypg2 ] we have for any , , finally , we need to assume some dissipativity of the fast equation .assumption [ strictdiss ] is necessary to obtain strong convergence in the averaging principle , while assumption [ weakdiss ] is weaker and can lead to the weak convergence .[ strictdiss ] let denote the lipschitz constant of with respect to its second variable ; then where is defined in assumption [ hypab ] .[ weakdiss ] there exist and such that for any the second assumption is satisfied as soon as is bounded , while the first one requires some knowledge of the lipschitz constant of .[ exfg ] we give some fundamental examples of nonlinearities for which the previous assumptions are satisfied : * functions of class , bounded and with bounded derivatives , such that and satisfying fit in the framework , with the choice .* functions and can be * nemytskii * operators : let be a measurable function such that for almost every is twice continuously differentiable , bounded and with uniformly bounded derivatives .then is defined for every by for , we assume that there exists a function with the same properties as above , such that . the strict dissipativity assumption [ hyplg ] is then satisfied when the conditions in assumption [ hypf ] are then satisfied for and as soon as there exists such that and are continuously embedded into - it is the case for and given in example [ exampleab ] , with .assumptions [ hypf2 ] and [ hypg2 ] are also satisfied .we then deduce that the system is well - posed for any , on any finite time interval ] : for any ] .let bounded , of class , with bounded first and second order derivatives .then with the weak dissipativity assumption , for any , , , , , , there exists such that for any , , such that and moreover , if we assume in theorem [ cvfaible ] , the factor is replaced by .if we look at the estimates of theorems [ cvforte ] and [ cvfaible ] at time , the factor is of size .the proofs rely on the following decomposition , which explains the origin of the different error terms : the numerical process is defined in below : it is the solution of the macrosolver with a known , while is solution of the macrosolver using . the continuous processes and are defined at the beginning of section [ sectdescri ] .the first term is bounded thanks to the averaging result , using strong and weak order of convergence results - see the article .the second term is the error in a deterministic numerical scheme , for which convergence results are classical ; we recall the estimate in proposition [ propoerrdet ] .the third term is the difference between the two numerical approximations , and the main task is the control of this part : we show that an extension of the averaging effect holds at the discrete time level , where plays the role of an averaged process for . when we look at the theorems [ cvforte ] and [ cvfaible ] , we first remark that we obtain the same kind of bounds as in the finite dimensional case of . however we notice some differences ; they are due both to the infinite dimensional setting and to different proofs .first , the weak order of the euler scheme for spdes is only - see - while it is for sdes ; in the estimate , the strong order never appears , and this is one of the main theoretical advantages of the method . 
for completeness, we recall that the strong order is in the spde case - see - and for sdes. in fact, at least in the strictly dissipative case, we are comparing the invariant measure of the continuous fast equation with the invariant measure of its numerical approximation - see theorem [ weakdistance ] and corollary [ weakcor ]. in the weakly dissipative case, we use the weak error estimates where the constants do not depend on the final time. we need some regularity on the initial condition in theorem [ cvfaible ], since we apply the weak order averaging theorem of ; nonetheless the parameter does not appear in the different orders of convergence, but only in the constant. in theorem [ cvfaible ], we do not obtain exactly the same estimate as in , since does not appear: as a consequence, the estimate is not improved when is increasing. since we are looking at a weak error estimate, this is natural; first, appeared in the estimate of only because of a strong error estimate used in the proof, whereas here the proof of theorem [ cvfaible ] relies on weak error arguments throughout. second, the corresponding term in the weak estimate of is easily controlled by , which already appears in another part of the error since we use a first order scheme for the macrosolver - and not a general scheme - so that there is no need to obtain a better estimate. the proofs of theorems [ cvforte ] and [ cvfaible ] are inspired by those in , but are different. the strong error is analysed in a global way, as in the proof of the strong order theorem in . moreover, we do not need a counterpart of lemma in , which makes our approach more natural. for the control of the weak error, we introduce an appropriate new auxiliary function. as explained in the introduction, we present here some simplified results and proofs.
in the stricly dissipative case , we could go further in the analysis of the error , thanks to the initialization procedure of the scheme , and to additional exponential convergence results .but after a very technical work , due to the regularity assumptions made on the nonlinear coefficients and we would only obtain a factor instead of in the finite dimensional case - where is any , which can be chosen as small as necessary , and is defined in assumption [ hypf ] and depends on the regularity properties of the nonlinearities and .nevertheless , the cost of the scheme remains of the same order even if we do not use those better estimates .we consider that is fixed , and then the numerical method depends on several parameters ; we would like to give some explicit choices of the parameters for which we have a simple bound , and for which a direct scheme , where the stiffness of the system is not treated , is less efficient , for the cost defined by this is the total number of microtime steps for .we take : then .the time scale separation parameter is supposed to be very small , while we fix some tolerance for the error in the numerical scheme : more precisely , we want the error to be where satisfies either - strong error - or - weak error - if the parameters and are defined by the conditions ( strong error ) and ( weak error ) : the numerical error is dominant with respect to the averaging error .we want to show that we can choose the parameters of the scheme such that each term of the estimate of the theorem [ cvforte ] are of size , except the first one , and such that the cost of the scheme is lower than the cost of a direct scheme .* we first notice that the choice for and is easy : where or , and are fixed .the choice of other parameters changes when we look at the strong or at the weak error . *we first focus on the strong convergence case .we consider the case when either or .we obtain + [ choicestrong ] + [ cols="^,^,^ " , ] + unlike in the finite dimensional situation , the bound depends on a factor - where is linked to the regularity of the nonlinear coefficients and .the charge of this difference could be taken by either or ;the advantage of choosing is that the difference only involves a logarithmic factor on the final cost . as a consequence, we can make no difference between the choices and .+ we can now make a comparison with the cost coming from the use of a direct scheme : since the strong order of euler scheme for spdes is , the error can be bounded by for some constant . therefore to have a bound of size , the time step size much satisfy .this leads to a cost we conclude that in this situation the hmm numerical method is better , since the ratio of the cost tends to when - under the condition .* we now focus on the weak error estimate . here plays no role , so that is the good choice ! the time steps and are still given by .it remains to look for parameters and such that + once again , we need to choose and such that or is large . 
since exponential decrease is faster than polynomial decrease , the best choice is , and therefore , while .+ we then see that , so that the additional factor appearing when only weak dissipativity is satisfied plays no role .+ the corresponding cost of the scheme is then of order .+ we can again compare this cost with the cost coming from the use of a direct scheme : the weak order of euler scheme for spdes is ; to obtain the second estimate of , we need a cost and again the hmm scheme is better than the direct scheme for such a range of parameters .in this section , we just recall without proof the main results on the fast equation with frozen slow component and on the averaged equation , defined below .proofs can be found in for the strict dissipative case , and the extension to the weakly dissipative situation relies on arguments explained below .if , we define an equation on the fast variable where the slow variable is fixed and equal to : this equation admits a unique mild solution , defined on .since is involved at time , heuristically we need to analyse the properties of , with , and by a change of time we need to understand the asymptotic behaviour of when time goes to infinity . under the strict dissipativity assumption [ strictdiss ] ,we obtain a contractivity of trajectories issued from different initial conditions and driven by the same noise : with , for any , we have under the weak dissipativity assumption [ weakdiss ] , we obtain such an exponential convergence result for the laws instead of trajectories .the proof of this result is not staightforward , and can be found in - see also for further references .[ propoexpy1y2 ] with , there exist , such that for any bounded test function , any and any , and that the convergence to equilibrium is exponentially fast .first , let be the centered gaussian probability measure on with the covariance operator - which is positive and trace - class , thanks to assumption [ hypab ] .then defined by where ,+\infty[ ] .we can give uniform estimates on and , defined by [ schmalent ] and [ schmarapide ] : [ lentenum ] there exists such that we have -almost surely for any .the linear operator satifies ; moreover is bounded , so that by we almost surely have for any .the end of the proof is then straightforward ; we also notice that does not depend on the final time .[ rapidenum ] there exists - which does not depend on , on , on or on - such that for any , , , and , we have we introduce defined by the fast numerical scheme with no nonlinear coefficient - see - with the notation : for any , and with the initial condition . is the numerical approximation - using the microsolver scheme with a step size - of the process defined by where .notice that the are independent cylindrical wiener processes , and that the are independent realizations of . using theorem of , giving the strong order for the microsolver - when the initial condition is , with no nonlinear coefficient , with a constant diffusion term and under the assumptions made here - we get the following estimate : for any , there exists , such that for any , , and thanks to and , for any , there exists a constant such that for any , , and we have now for any we define ; it is enough to control , -almost surely . by , we have the following expression : for any since is bounded , and using the inequality , we get therefore we have for any .\ ] ] but .so we get .\ ] ] therefore as a consequence , we get for any and any , , and then . 
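The contraction of two trajectories of the frozen fast equation driven by the same noise can be observed directly on the linear part of the equation: the sketch below runs the semi-implicit Euler microsolver twice with shared noise increments and different initial conditions. Since the noise cancels in the difference, the gap contracts (up to a constant depending on the initial data) like exp(-lambda_1 t), with lambda_1 the first Dirichlet eigenvalue. The truncation level and the step size are illustrative.

```python
import numpy as np

def contraction_check(n_modes=50, tau=1e-3, n_steps=5000, seed=0):
    """Two microsolver trajectories of the frozen fast equation (reduced here to
    its linear part, a stochastic heat equation on (0,1) with Dirichlet b.c.),
    driven by the *same* truncated cylindrical noise but started from different
    initial conditions: their difference contracts at least like exp(-lambda_1 t)."""
    rng = np.random.default_rng(seed)
    lam = (np.arange(1, n_modes + 1) * np.pi) ** 2   # spectrum of the Dirichlet Laplacian
    y1 = rng.standard_normal(n_modes)                 # two different initial conditions
    y2 = np.zeros(n_modes)
    gap0 = np.linalg.norm(y1 - y2)
    for _ in range(n_steps):
        xi = np.sqrt(tau) * rng.standard_normal(n_modes)   # shared noise increments
        y1 = (y1 + xi) / (1.0 + tau * lam)                  # semi-implicit Euler step
        y2 = (y2 + xi) / (1.0 + tau * lam)
    t = n_steps * tau
    print("contraction factor:", np.linalg.norm(y1 - y2) / gap0,
          " vs exp(-pi^2 t) =", np.exp(-np.pi ** 2 * t))

contraction_check()
```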
at the continuous time level ,the averaging principle proved in comes from the asymptotic behaviour of the fast equation with frozen slow component , as described in section [ known ] .the underlying idea of the hmm method in our setting is to prove a similar averaging effect at the discrete time level : we therefore study the asymptotic behaviour of the fast numerical scheme which defines the microsolver , with frozen slow component - in other words we are looking at the evolution of the microsolver during one fixed macrotime step . in section [ known ], we have seen that under the weak dissipativity assumption [ weakdiss ] the fast equation with frozen slow component admits a unique invariant probability measure . at the discrete time level, this assumption only yields the existence of invariant laws ; to get a unique invariant law , we need strict dissipativity [ hyplg ] , and we obtain : [ unicinv ] under assumption [ strictdiss ] , for any and any ; the numerical scheme [ schmarapide2 ] admits a unique ergodic invariant probability measure .moreover , we have convergence to equilibrium in the following sense : for any , there exist and such that for any , , , any lipschitz continuous function from to , and , we have latexmath:[\[\label{estimvitesse } we recall the notation for the effective time step ; the noise is defined with a cylindrical wiener process : .if we fix the slow component , we define with the initial condition . with equation , we associate the transition semi - group : if is a bounded measurable function from to , if and .\ ] ] we also denote by the law of : then =\int_{h}\phi(z)\nu_{m , y}^{x,\tau}(dz).\ ] ] we notice that the semi - group satisfies the feller property : if is bounded and continuous , then is bounded and continuous .the required tightness property is a consequence of the following estimate , which can be proved thanks to regularization properties of the semi - group : for any , , there exists such that for any and moreover if , the embedding of in is compact . 
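For a single Fourier mode the frozen fast equation reduces to an Ornstein-Uhlenbeck process, and both its invariant law and the invariant law of the semi-implicit Euler scheme are explicit Gaussians, so the distance between the two invariant measures discussed above can be written in closed form. The following check compares the empirical variance of a long run of the scheme with the two stationary variances; the eigenvalue and the step size are illustrative.

```python
import numpy as np

def invariant_variance_gap(lam=np.pi**2, tau=0.05, n_steps=200_000, seed=0):
    """Single Fourier mode of the frozen fast equation: an OU process
    dy = -lam*y dt + dW.  The semi-implicit Euler scheme
        y_{m+1} = (y_m + sqrt(tau)*xi_m) / (1 + tau*lam)
    is ergodic; its invariant variance 1/(2*lam + tau*lam**2) differs from the
    continuous one 1/(2*lam) by O(tau), consistent with a weak error in tau."""
    rng = np.random.default_rng(seed)
    y, acc = 0.0, 0.0
    for _ in range(n_steps):
        y = (y + np.sqrt(tau) * rng.standard_normal()) / (1.0 + tau * lam)
        acc += y * y
    print("empirical variance :", acc / n_steps)
    print("discrete invariant  :", 1.0 / (2.0 * lam + tau * lam ** 2))
    print("continuous invariant:", 1.0 / (2.0 * lam))

invariant_variance_gap()
```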
the key estimate to prove uniqueness is the following contractivity property , which holds thanks to assumption [ strictdiss ] : [ theoestim ] for any , there exists such that for any , , , we have -almost surely , then we have the equation if we take the scalar product in of this equation with , we get the left hand - side is equal to and we get therefore we have we remark that [ hyplg ] implies that for any , there exists such that if we have ; therefore as a consequence , there exists a unique ergodic invariant probability measure , which is strongly mixing .moreover one can prove that there exists such that for any and any ; we therefore get {\text{lip}}\int_{h}e^{-cm\tau}|y - z|\mu^{x,\tau}(dz)\\ & \leq c[\phi]_{\text{lip}}e^{-cm\tau}.\end{aligned}\ ] ] we recall that denotes the invariant law of the continuous time fast equation with frozen slow component .thanks to the fast numerical scheme , we can get an approximation result , which is proved in : with test functions of class , we can control the weak error for any time with a convergence of order with respect to the time step .moreover the estimate is easily seen to be independent from the slow component .we define for any of class [ weakdistance ] with the dissipativity condition , for any , for any , there exists such that for any of class , for any , for any and any integer \|\phi\|_{(2)}(1+|y|^3)(((m-1)\tau)^{-1/2+\kappa}+1)\tau^{1/2-\kappa}.\ ] ] as explained in section [ invarfast ] , in general the fast numerical scheme does not admit a unique invariant probability measure , but when the strict dissipativity assumption [ strictdiss ] is satisfied , it admits a unique invariant law . [ weakcor ] under the assumptions of theorem [ weakdistance ]: _ ( i ) if only is satisfied , we have for any |\leq c\|\phi\|_{(2)}(1+|y|^3)(((m-1)\tau)^{-1/2+\kappa}+1)\tau^{1/2-\kappa}+cn(\phi)(1+|y|^2)e^{-cm\tau}.\ ] ] _ _ ( ii ) if moreover is satisfied , _ we recall that in the case of euler scheme for sdes this kind of results holds with the order of convergence .we define a scheme based on the macrosolver , for theoretical purpose , in the situation when is known : we can look at the error between , defined by , and , defined by . herequantities are deterministic , and the following result is classical - see , , or the details of the proofs in : [ propoerrdet]for any , and , there exists , such that for any and to the following remark on the construction of the scheme , we can consider that corresponds to a process evaluated at time . the idea is to build noise processes by concatenation of the for the different .[ remarknoise ] to compute with the microsolver , the total number of microtime steps used is .it is then natural to use only one noise process , for each , and to make an evaluation involving time .more precisely , we can use with being independent cylindrical wiener processes on .then one can define for and for we also introduce the following notation : denotes conditional expectation with respect to the -field we notice that is -measurable , but that is not . the final time is fixed and we recall the notation . 
to simplify notations ,we do not always precise the range of summation in the expressions below : the indices belong to , and belong to .we recall that according to the decomposition of the error , we have to control the first part is controlled thanks to the strong order theorem of : for any , we have for any the second part is deterministic and is controlled thanks to proposition [ propoerrdet ] : where depends on .it remains to focus on the third part .instead of analyzing the local error like in , we adopt a global point of view , and we follow the idea of the proof of theorem in : for any the averaged coefficient is lipschitz continuous , and ; moreover we can define the averaged coefficient with respect to the invariant measure of the fast numerical scheme - which is unique since we assume strict dissipativity : for any the error in can then be decomposed in the following way - the idea of looking at the square of the norm in the second expression is an essential tool of the proof : if we can control the two last terms by a certain quantity , by a discrete gronwall lemma we get for any .first , the third term in is linked to the distance between the invariant measures and - since we assume strict dissipativity for this strong estimate - which is evaluated thanks to theorem [ weakdistance ] and corollary [ weakcor ] for test functions of class .since , using assumption [ hypf2 ] , we can apply corollary [ weakcor ] to obtain and summing we get the other term is more complicated ; in order to get a precise estimate , we expand the square of the norm of the sum .we can then use some conditional expectations , which allow to use exponential convergence to equilibrium via theorem [ unicinv ] .therefore we obtain the following expansion : _ ( i ) _ we first treat . by , for any we have therefore we can see that with the conditional expectation with respect to - see . when , and are independent, so that we treat differently the cases and in the above summation , and we obtain for the first part , we can directly use the exponential convergence to equilibrium result of to have a bound with for the second part , we see that we can treat only the case , and we introduce the conditional expectation with respect to the -field generated by and , when .we get a bound with we therefore get _ ( ii ) _ we now consider , which corresponds to the cross - terms in the expansion of the square of the norm of the quantity . by the definition of , the general term with indices in is bounded by |,\end{gathered}\ ] ] using conditional expectation and the boundedness of .using the exponential convergence result of and lemma [ rapidenum ] , we get the bound |\leq ce^{-m\tau},\ ] ] so that the previous quantity is bounded by summing on , we can now conclude that then by and and the result of theorem [ cvforte ] now follows from .in order to get a better bound for the weak error than for the strong error , we use an auxiliary function which is solution of a kolmogorov equation .thanks to the averaging theorem of , which is proved under the weak dissipativity assumption , the first term above can be controlled by , where is a constant - depending on , for any . for the second term ,since we look at the error made by using a deterministic scheme to approximate a deterministic equation , there is no difference between the strong and the weak orders - since the test function is lipschitz continuous ; so we again use proposition [ propoerrdet ] . 
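The discrete Gronwall lemma invoked in the strong error analysis is elementary but worth making explicit: a recursion of the form a_{k+1} <= (1 + c*tau)*a_k + b propagates into an exponential-in-time bound. The small check below iterates such a recursion with equality and compares the result with the bound; all constants are arbitrary.

```python
import numpy as np

def discrete_gronwall_check(c=2.0, tau=0.01, n=500, a0=0.1, b=1e-3):
    """If a_{k+1} <= (1 + c*tau) * a_k + b for all k, then
       a_n <= exp(c*n*tau) * a0 + b * (exp(c*n*tau) - 1) / (c*tau).
    Here the recursion is iterated with equality and compared with the bound."""
    a = a0
    for _ in range(n):
        a = (1.0 + c * tau) * a + b
    bound = np.exp(c * n * tau) * a0 + b * (np.exp(c * n * tau) - 1.0) / (c * tau)
    print("iterated value:", a, " Gronwall bound:", bound)

discrete_gronwall_check()
```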
for the third term, we see that we have to control some error between two different numerical schemes , in a weak sense .the usual strategy is to decompose this error by means of an auxiliary function satisfying some kind of kolmogorov equation .more precisely , we use the deterministic scheme defining in order to define for any where we explicitly mention dependence of the numerical solution on the initial condition . moreover the second and the third estimates of this lemmareveal some smoothing effect in the equation , due to the semi - group .these results are specific to the infinite dimensional framework ; if we only use the first estimate above , and a simple one like we would not obtain an optimal convergence result . in the general term of, we proceed with a taylor expansion and we get the second term is controlled by using the second estimate of the previous lemma [ regu_n ] . under the assumption that is bounded , we therefore get a term ; when summing over , we get a term , which is already dominated in the final estimate . for the first term, we do not exactly follow the proof of ; we rather define auxiliary functions for in order to keep on looking at a weak error term : then if we define we have by using conditional expectation with respect to defined by , does not depend on , so that in the sequel we fix .[ regpsi_nk ] for any and , there exists a constant , such that for any , any and any , the following derivatives exist and are controlled : for any , , , the proof relies smoothing effect of the semi - group .the lemma is proved below in section [ sect_aux_proofs ] . when the strict dissipativity assumption is satisfied, we can indeed obtain a bound without : we can control the distance between the invariant measures of the continuous and discrete time processes , thanks to the second part of corollary [ weakcor ] .we use the following expression for : by definition , for any we have ; we see that the derivatives in directions are given by and is of class on , with bounded derivatives ; therefore we just need to control and .we use the following estimates of the derivatives of , given in proposition [ fbarlip ] : for any , , , latexmath:[\[\begin{gathered } _ ( ii ) _ for any , we can write since when , and thanks to the previous estimates on and , we get therefore and a discrete gronwall lemma then yields _ ( ii ) _ when we look at the second - order derivative , we see that thanks to the last estimate of lemma [ regu_n ] , we can control the expression by we then notice that since and is bounded , and thanks to assumption [ hypf2 ] ; the other part is controlled thanks to assumption [ hypf2 ] . | we consider the discretization in time of a system of parabolic stochastic partial differential equations with slow and fast components ; the fast equation is driven by an additive space - time white noise . the numerical method is inspired by the averaging principle satisfied by this system , and fits to the framework of heterogeneous multiscale methods . the slow and the fast components are approximated with two coupled numerical semi - implicit euler schemes depending on two different timestep sizes . we derive bounds of the approximation error on the slow component in the strong sense - approximation of trajectories - and in the weak sense - approximation of the laws . the estimates generalize the results of in the case of infinite dimensional processes . |
in species sampling problems , one is interested in the species composition of a certain population ( of plants , animals , genes , etc . ) containing an unknown number of species and only a sample drawn from it is available .the relevance of such problems in ecology , biology and , more recently , in genomics and bioinformatics is not surprising . from an inferential perspective , one is willing to use available data in order to evaluate some quantities of practical interest .the available data specifically consist of a so - called _ basic sample _ of size , , which exhibits distinct species , , with respective frequencies , where clearly .given a basic sample , interest mainly lies in estimating the number of new species , , to be observed in an additional sample of size and not included among the s , .most of the contributions in the literature that address this issue rely on a frequentist approach ( see for reviews ) and only recently an alternative bayesian nonparametric approach has been set forth ( see , e.g. , ) .the latter resorts to a general class of discrete random probability measures , termed _species sampling models _ and introduced by j. pitman in . given a nonatomic probability measure on some complete and separable metric space , endowed with the borel -field , a ( proper ) species sampling model on is a random probability measure where is a sequence of independent and identically distributed ( i.i.d . )random elements taking values in and with probability distribution , the nonnegative random weights are independent from and are such that , almost surely . in the species sampling context , the s act as species tags and is the random proportion with which the species is present in the population . if is an exchangeable sequence directed by a species sampling model , that is , for every and in one has =\prod_{i=1}^n \tilde p(a_i)\ ] ] almost surely , then is termed _ species sampling sequence_. 
besides being an effective tool for statistical inference , species sampling models have an appealing structural property established in .indeed , if is a species sampling sequence , then there exists a collection of nonnegative weights such that , for any vector of positive integers with , and =p_{k_n+1,n}(n_1,\ldots , n_{k_n})p_0 ( \cdot)+\sum_{j=1}^{k_n } p_{j , n}(n_1,\ldots , n_{k_n } ) \delta_{x^*_j } ( \cdot),\ ] ] where is a sample with distinct values .statistical applications involving species sampling models for different purposes than those of the present paper are provided , for example , in .the bayesian nonparametric approach we undertake postulates that the data are exchangeable and generated by a species sampling model .then , conditionally on the basic sample of size , inference is to be made on the number of new distinct species that will be observed in the additional sample of size .interest lies in providing both a point estimate and a measure of uncertainty , in the form of a credible interval , for given .since the conditional distribution of becomes intractable for large sizes of the additional sample , one is led to studying its limiting behaviour as increases .such asymptotic results , in addition to providing useful approximations to the required estimators , are also of independent theoretical interest since they provide useful insight on the behaviour of the models we focus on .the only discrete random probability measure for which a conditional asymptotic result , similar to the one investigated in this paper , is known , is the two - parameter poisson dirichlet process , shortly denoted as . according to ,a process is a species sampling model characterized by=-1 =0 with , and . in this case , provide a result describing the conditional limiting behaviour of . in the present paper ,we focus on an alternative species sampling model , termed _ normalized generalized gamma process _ in .as we shall see in the next section , it depends on two parameters and and , for the sake of brevity , is denoted by .moreover , it is characterized by for any , where is the incomplete gamma function .the process prior has gained some attention in the bayesian literature and it has proved to be useful for various applications such as those considered , for example , in . it is to be noted that the does not feature a posterior structure that is as tractable as the one associated to the process ( see , e.g. , ) . nonetheless , in terms of practical implementation , it is possible to devise efficient simulation algorithms that allow for a full bayesian analysis within models based on a prior .see for a review of such algorithms . in the present manuscript, we will specify the asymptotic behaviour of , given the basic sample , as diverges and highlight the interplay between the conditional distributions of the and the processes . 
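The prediction probabilities characterizing the two-parameter Poisson-Dirichlet process were lost in the display above; in the standard parametrization a new species is generated with probability (theta + sigma*k)/(theta + n) and species j is observed again with probability (n_j - sigma)/(theta + n). Assuming that parametrization, the sketch below draws a basic sample sequentially and records the species frequencies, which is enough, for instance, to look at how the number of distinct species grows with the sample size.

```python
import numpy as np

def sample_pd_counts(n, sigma=0.5, theta=1.0, seed=0):
    """Sequentially sample species labels from a PD(sigma, theta) species
    sampling sequence using the standard prediction rule:
      P(new species | past) = (theta + sigma*k) / (theta + i),
      P(species j   | past) = (n_j - sigma)    / (theta + i),
    where i observations with k distinct species and counts n_j were seen.
    Returns the vector of species frequencies in a sample of size n."""
    rng = np.random.default_rng(seed)
    counts = []                              # n_1, ..., n_k
    for i in range(n):
        k = len(counts)
        p_new = (theta + sigma * k) / (theta + i)
        if rng.random() < p_new:
            counts.append(1)
        else:
            probs = (np.array(counts) - sigma) / (theta + i)
            j = rng.choice(k, p=probs / probs.sum())
            counts[j] += 1
    return np.array(counts)

counts = sample_pd_counts(n=1000)
print("distinct species observed:", len(counts))
```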
since the posterior characterization of a process is far more involved than the one associated to the process ,the derivation of the conditional asymptotic results considered in this paper is technically more challenging .this is quite interesting since it suggests that it is possible to study the limiting conditional behaviour of even beyond species sampling models sharing some sort of conjugacy property .for example , one might conjecture that the same asymptotic regime , up to certain transformations of the limiting random variable , should hold also for the wide class of gibbs - type priors , to be recalled in section [ section2 ] .an up to date account of bayesian nonparametrics can be found in the monograph and , in particular for asymptotic studies , provides a review of asymptotics of nonparametric models in terms of `` frequentist consistency . ''yet another type of asymptotic results are obtained in .the outline of the paper is as follows . in section [ section2 ], one can find a basic introduction to species sampling models and a recollection of some results in the literature concerning the asymptotic behaviour of the number of distinct species in the basic sample , as increases .section [ section3 ] displays the main results , whereas the last section contains some concluding remarks .let us start by providing a succinct description of completely random measures ( crm ) before defining the specific models we will consider and which can be derived as suitable transformations of crms .see for an overview of discrete nonparametric models defined in terms of crms .suppose is a random element defined on some probability space and taking values on the space of boundedly finite measures on such that for any in , with for , the random variables are mutually independent. then is termed _ completely random measure _ ( crm ) .it is well - known that the laplace functional transform of has a simple representation of the type =\mathrm { e}^{-\psi(f)},\ ] ] where \nu(\mathrm{d}s , \mathrm{d } y) ] for . 
by virtue of predictive sufficiency of the number of distinct species observed among the first data , in it has been shown that in the this distribution coincides with \nonumber \\[-8pt ] \\[-8pt ] \nonumber & = & \frac{\mathscr{g}(m , k;\sigma ,-n+j\sigma)}{(n)_m } \frac{\sum_{l=0}^{n+m-1}{n+m-1\choose l}(-1)^{l}\beta^{l/\sigma } \gamma ( j+k-{l}/{\sigma};\beta)}{\sum_{l=0}^{n-1}{n-1\choose l}(-1)^{l}\beta^{l/\sigma}\gamma(j-{l}/{\sigma};\beta)}\end{aligned}\ ] ] for , with denoting the non central generalized factorial coefficient .see for a comprehensive account on generalized factorial coefficients .expression ( [ posteriordistinct ] ) can be interpreted as the `` posterior '' probability distribution of the number of distinct new species to be observed in a further sample of size .now , based on ( [ posteriordistinct ] ) , one obtains the expected number of new species as =\sum _ { k=0}^{m}kp_{m}^{(n , j)}(k),\ ] ] which corresponds to the bayes estimator of under quadratic loss .moreover , a measure of uncertainty of the point estimate can be obtained in terms of -credible intervals that is , by determining an interval with such that \geq\alpha ] , one has &= & \frac{\gamma(n)}{\sigma^{j-1 } \gamma(j ) \prod_{i=1}^k(1-\sigma ) _ { n_i-1 } } \frac{1}{\gamma(n)}\\ & & { } \times\int_0^\infty u^{n-1}\mathrm{e}^{-(\lambda+u)^\sigma } \sigma^j \prod_{i=1}^k \frac{\gamma(n_i-\sigma)}{\gamma(1-\sigma ) } ( u+\lambda ) ^{-n_i+\sigma } \,\mathrm{d}u\end{aligned}\ ] ] and a simple change of variable yields the representation in ( [ eqpostlapl ] ) .proposition [ poststable ] allows one to draw an interesting comparison between unconditional and conditional limits of the number of distinct species . as we have already highlighted in section [ section2 ] , the probability distribution of the for the process and the process arise as a power transformation ( involving the parameter ) of a suitable tilting of the probability distribution of .we are now in the position to show that a similar structure carries over when one deals with the conditional case .resorting to the notation set forth in theorem [ limitresult ] , let to be a random variable whose law coincides with the probability distribution of the conditional total mass of a -stable process given a sample of size containing distinct species .hence , from the laplace transform ( [ condition ] ) in the proof of theorem [ limitresult ] one can easily spot the following identity let now and be the conditional probability distributions of , respectively , the -stable and the generalized gamma processes . according to theorem [ limitresult ] , the probability distributions and are mutually absolutely continuous giving rise to the conditional counterpart of the identity ( [ eqexpontilt ] ) , that is , for any and .in particular , if we denote by the random variable whose probability distribution is obtained by exponentially tilting the probability distribution of as in ( [ tiltingesponenziale ] ) , then one can establish that in other terms , one can easily verify that the probability distribution of the limit random variable in ( [ result ] ) can be also derived by applying to the probability distribution of the same transformation characterizing the corresponding unconditional case . in a similar fashion , one can also derive the conditional counterpart of the identity ( [ eqpolytilt ] ) for the two parameter poisson dirichlet process . 
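Two ingredients recur in the limit laws just described: the positive sigma-stable distribution and its exponential tilting. Both are straightforward to sample, which is convenient if one wants to visualize the limiting random variables. The sketch below uses Kanter's representation for the positive stable law and plain rejection for the exponential tilting; the tilting parameter and the final power transformation appearing in the theorem are not reproduced here, so this is only a building block and not the limit variable itself.

```python
import numpy as np

def positive_stable(sigma, size, rng):
    """Kanter's representation of a positive sigma-stable random variable
    (Laplace transform exp(-lambda**sigma)), for 0 < sigma < 1."""
    u = rng.uniform(0.0, np.pi, size)
    e = rng.exponential(1.0, size)
    a = (np.sin(sigma * u) / np.sin(u)) ** (sigma / (1.0 - sigma)) \
        * np.sin((1.0 - sigma) * u) / np.sin(u)
    return (a / e) ** ((1.0 - sigma) / sigma)

def tilted_positive_stable(sigma, beta, size, rng):
    """Exponentially tilted sigma-stable law, density proportional to
    exp(-beta*s) f_sigma(s): plain rejection with acceptance prob exp(-beta*S)."""
    out = []
    while len(out) < size:
        s = positive_stable(sigma, size, rng)
        keep = rng.uniform(size=size) < np.exp(-beta * s)
        out.extend(s[keep].tolist())
    return np.array(out[:size])

rng = np.random.default_rng(0)
s = positive_stable(0.5, 100_000, rng)
t = tilted_positive_stable(0.5, beta=1.0, size=10_000, rng=rng)
# sanity check: E[exp(-S)] should be close to exp(-1**0.5) = exp(-1)
print(np.mean(np.exp(-s)), np.exp(-1.0))
print("tilted sample mean:", t.mean())
```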
indeed, according to , proposition 2 , one can introduce a probability measure on whose radon nikodm derivative with respect to the dominating measure is given by ^{-\theta}\vspace*{-1pt}\ ] ] for any and with being the probability measure of the random measure conditional on the sample .in particular , if we denote by the random variable whose probability distribution is obtained by polynomially tilting the probability distribution of as in ( [ tiltingpolinomiale ] ) , then one can easily verify that this suggests that the probability distribution of the limiting random variable in ( [ limittwoparameter ] ) can also be derived by applying to the probability distribution of the same transformation characterizing the corresponding unconditional case .the identities ( [ exptiltngg ] ) and ( [ exptilttwoparam ] ) represent the conditional counterparts of the identities ( [ prima ] ) and ( [ seconda ] ) , respectively , given a sample containing distinct species .hence , in the same spirit of , proposition 13 , we have provided a characterization of the distribution of the limiting random variables and in terms of a power transformation ( involving the parameter ) applied to a suitable tilting for the conditional distribution of the total mass of the -stable process . in particular , the identities ( [ exptiltngg ] ) and ( [ exptilttwoparam ] ) characterize the distribution of the limit random variables and via the same transformation characterizing the unconditional case and applied to an exponential tilting and polynomial tilting , respectively , for a scale mixture distribution involving the beta distribution and the -stable distribution . to conclude , there is a connection between the prior , and posterior , total mass of a -stable crm that we conjecture can be extended to any gibbs - type random probability measure and will be object of future research .the authors are grateful to an associate editor and a referee for valuable remarks and suggestions that have lead to a substantial improvement in the presentation .this work is partially supported by miur , grant 2008mk3afz , and regione piemonte . | in bayesian nonparametric inference , random discrete probability measures are commonly used as priors within hierarchical mixture models for density estimation and for inference on the clustering of the data . recently , it has been shown that they can also be exploited in species sampling problems : indeed they are natural tools for modeling the random proportions of species within a population thus allowing for inference on various quantities of statistical interest . for applications that involve large samples , the exact evaluation of the corresponding estimators becomes impracticable and , therefore , asymptotic approximations are sought . in the present paper , we study the limiting behaviour of the number of new species to be observed from further sampling , conditional on observed data , assuming the observations are exchangeable and directed by a normalized generalized gamma process prior . such an asymptotic study highlights a connection between the normalized generalized gamma process and the two - parameter poisson dirichlet process that was previously known only in the unconditional case . |
advances in nonlinear functional analysis and the rich theory of linear distributed parameter systems have led to a growing of body of work on nonlinear infinite - dimensional models .for instance , in a 2nd - order evolution framework ( especially , wave , elastodynamics , or thin plates with no rotational inertia terms ) for an appropriate elliptic operator a linear equation with viscous damping for an unknown may be expressed as with .we will focus on the evolution on a bounded domain and under suitable homogeneous boundary conditions .a nonlinear refinement on the dissipative term may take the form of a feedback law .stability properties of such models have been extensively analyzed . in an infinite - dimensional sittingsuch a nonlinear feedback may change the topology of the problem and uniform stability becomes reliant on the regularity of solutions ( for example , see ) . a more general scenario would account for coefficients that depend on the solution itself : assuming the well - posedness of an associated initial - boundary value problem can be resolved , if the term is not guaranteed to be strictly positive on a _ fixed _ appropriately configured set , then analysis of stability becomes much more involved since the region where the dissipation is active now evolves with the solution and may not always comply with the requirements of the geometric optics .the case when this coefficient vanishes at zero displacement , namely , , will be referred to as _ degenerate _ damping .such a degeneracy naturally arises when investigating energy decay of higher - order norms .for example , the natural energy space for a semilinear wave problem is and . with more regular initial data one can consider behavior of higher - order energy norms , namely for .one approach would be to differentiate the pde in time which via the substitution leads to a degenerately damped problem a particular example can be observed in the relation between maxwell s system and the ( vectorial ) wave equation . for a given mediumdenote the electric permittivity by , magnetic permeability by and conductivity by .then maxwell s system reads with . on a bounded domain ,subject to the _ electric wall _boundary conditions , and for scalar - valued , , with positive lower - bounds , the term exponentially stabilizes this system . in a more accurate nonlinear conduction model the coefficient may depend on the intensity of the electric field . if we consider , for example , for , then differentiating the first equation in time and combining with the equation for gives for example , taking , gives where is the normalized vector .the term has features of the viscous dissipation in this second - order equation , but nonlinear conductivity augments it with a degenerate coefficient .the study of stability for the above models is much more delicate than in the situations where the damping , even nonlinear , depends on the time - derivative only .weighted energy methods from basic energy laws to carleman estimates ( e.g. )have been successfully used to derive stabilization and observability inequalities for distributed parameter systems . 
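Already in the finite-dimensional analogue treated in the appendix, a harmonic oscillator whose damping coefficient vanishes with the displacement, one can see how the dissipation deteriorates at small amplitudes. The sketch below integrates x'' + f(x) x' + x = 0 with a standard adaptive Runge-Kutta solver and compares the energy at a fixed time for linear damping f = 1 and for the degenerate choice f(x) = x^2; the coefficients and the final time are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def oscillator_energy(damping, T=40.0, x0=1.0, v0=0.0):
    """x'' + damping(x)*x' + x = 0; returns the energy (x^2 + x'^2)/2 at time T."""
    rhs = lambda t, y: [y[1], -y[0] - damping(y[0]) * y[1]]
    sol = solve_ivp(rhs, (0.0, T), [x0, v0], rtol=1e-9, atol=1e-12)
    x, v = sol.y[:, -1]
    return 0.5 * (x * x + v * v)

linear     = oscillator_energy(lambda x: 1.0)      # viscous damping
degenerate = oscillator_energy(lambda x: x * x)    # damping vanishing at x = 0
print("energy at T: linear damping", linear, " degenerate damping", degenerate)
```

With these illustrative values the degenerately damped oscillator retains a visible fraction of its initial energy while the viscously damped one is essentially at rest, which is the qualitative effect the paper quantifies.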
however, these methods typically rely on the properties of the coefficients to ensure that suitable geometric optics conditions are satisfied and the control effect suitably propagated " across the physical domain .one can sometimes dispense with geometric optics requirements for smooth enough initial data , yet even then the support of the control / damping term must contain a subset that is time - invariant ( and with any time - dependent coefficients being non - vanishing , e.g. , as in ) . in turn , the analysis of control - effect propagation when the coefficients themselves depend on the solution and possibly go to zero wherever and whenever the solution does would require new techniques .the following semilinear model , if recast in a higher - dimensional setting becomes highly non - trivial even when just regarding local wellposedness . in a one - dimensional frameworkthe nonlinearity is more tractable , but the rigorous stability analysis has long been open .we focus on an elastic string with a _ degenerate damping _ , namely a dissipative term whose coefficient depends on and may vanish with the amplitudes : fixed at the end - points and with a prescribed initial configuration at : the initial data live in the natural function spaces revisited below .function is assumed to be continuous non - negative , hence the term a priori should provide some form of energy dissipation in the model .the scenario of interest is when as , essentially causing the dissipative effect to deteriorate at small amplitudes .we will focus on the polynomial case satisfying the locally lipschitz estimate existence and uniqueness of weak _ finite - energy _ solutions to was proven in by means of galerkin approximations .the advantage of a 1d framework is that the displacement function is absolutely continuous , hence topologically is still in , as in the case of the corresponding linear model .however , in higher - dimensional analogues this embedding property is lost and proving existence becomes a markedly more complex task .first , fractional damping exponents were considered in order to ensure that the damping term is bounded with respect to the finite energy topology .arbitrary damping exponents were subsequently examined in . due tothe loss of regularity solutions had to be characterized via a variational _ inequality _ and established by a rather technical application of kakutani s fixed point theorem .on the other hand , stability analysis even in one dimension poses a challenge that has been open for a number of years . despite the gain in regularity , attributed to sobolev embeddings ,the key difficulty now is that energy estimates require some sort of information on the region where the damping is supported . in both the magnitude and the support of the damping coefficientevolve with the geometry of the state , rendering all standard techniques inapplicable .it is plausible to assume that some sort of a logarithmic uniform decay rate can be verified , possibly by combining ode techniques ( e.g. 
) with pointwise carleman - type estimates .another , though a rather weak , tentative indication of this outcome would be the uniform stability of the corresponding finite - dimensional analogue ( see the appendix ) .yet the situation in infinite dimensions turns out rather different .the goal of this article is to examine analytically and numerically stability properties of the dynamical system associated to : * establish global persistence of regularity in solutions with smooth initial data .besides theoretical interest such a result is useful to justify the convergence estimates for numerical approximations . *prove that a polynomial degeneracy in the damping of the form yields a system that is * not uniformly stable*. * present a numerical scheme that indicates the loss in decay rates .such observations had been performed first and , in fact , served as a motivation for the theoretical results presented here .the notation employed throughout the paper is summarized in section [ sec : notation ] .the two main results on well - posedness and stability are stated in section [ sec : main ] .several auxiliary technical definitions used in the proofs can be found in section [ sec : aux ] .local and global wellposedness are verified respectively in sections [ sec : local ] and [ sec : global ] .they draw upon two regularity lemmas proved earlier in section [ sec : regularity ] .the proof of the lack of uniform stability is the subject of section [ sec : nonuniform ] .numerical results are the subject of section [ sec : numerics ] .the appendix contains results pertaining to the ode analog of the considered problem , namely , a damped harmonic oscillator with the damping coefficient dependent on the displacement .this section serves as a quick reference for the basic notation used thorough the paper with some of the symbols revisited and discussed in more detail later in the text .henceforth will denote the norm on a normed space . for the space we will use with the corresponding inner product denoted by .we will also frequently involve the sobolev space associated to an equivalent inner product and norm the bilinear form will indicate the pairing of and its continuous dual .we will also frequently use spaces of the form ;x ) { \quad\text{or}\quad}l^p(0,t ; x),\ ] ] which will be abbreviated respectively as looking ahead , for the one - dimensional dirichlet laplacian operator ( discussed below ) let us introduce the space equipped with the natural graph norm .for example , ; { \mathscr{d}}(a^{1/2 } ) ) \cap c^1([0,t ] ; { { l^2(\omega)}}) ] be the shorthand for the initial - boundary value problem : with the indicated derivatives taken in the sense of distributions , and subject to boundary conditions and initial data [ def : weak ] suppose and for some , . then we say a function is a weak solution to \!\!\!\;\big]}} ] if a. , b. for any the scalar map is absolutely continuous ( hence a.e .differentiable ) on ] will be described as * regular of order * if it is continuously differentiable in time with the following regularity in classical terminology , weak solutions correspond to order and strong solutions to order .suppose has the weak regularity ( regular of order ) . 
then according to the ( 1d ) sobolev embedding for , the function is well - defined as an element of .in fact we will generalize this statement for the purposes of subsequently analyzing more regular solutions .[ prop : f - regularity ] let for .if , then \in c_t^j { \mathscr{d}}(a^{\frac{n - s - j}{2 } } ) , { \quad\text{for}\quad}s , j\in { \mathbb{n}}\cup\{0\},\quad s+j\leq n\,.\ ] ] in addition , {k=1,\ldots , j-1 } ( 1 + \|{\partial}_t^j \dot{z}\|_{c_t{{l^2(\omega)}}}),\quad j\leq n,\ ] ] where is a polynomial in variables .[ example-1 ] due to a variety of spaces and indices involved in the statement of proposition [ prop : f - regularity ] , it is helpful to look at a basic example .take , so and consider the regularity order .then the condition reads in particular , , which corresponds to square - integrable derivatives , first three of which satisfy zero boundary conditions .we have , for example , & = & 30 \dot{z } \ddot{z}^2 + 20 \dot{z}^2 \dddot{z } + 20 z \ , \ddot{z } \ , \dddot{z } + 10 z \dot{z } { \partial}_t^4z + z^2 { \fbox{}}\,.\end{aligned}\ ] ] thus , for instance , can be estimated using a polynomial of bounds on the functions , , , , and one term involving the norm of the fifth derivative of in time or , equivalently , .this is precisely the conclusion of . likewise ,if we consider , say , the -nd derivative in space and -nd in time to , in we get : = 2\dot{z}^3 + 6z \dot{z } \ddot{z}+z^2 \dddot{z}\ ] ] = & 12\dot{z}_x^2\dot{z}+12z_x\ddot{z}\dot{z}_x+12z_x\dot{z}\ddot{z}_x+2z_x^2\dddot{z}+4z\dddot{z}_xz_x+12z\ddot{z}_x\dot{z}_x+6\dot{z}z_{xx}\ddot{z}+6\dot{z}^2\dot{z}_{xx}\\ & + 2z\dddot{z}z_{xx}+6z\ddot{z}\dot{z}_{xx}+6z\dot{z}\ddot{z}_{xx}+z^2{\fbox{ } } \end{split}\ ] ] we have that ] can be bounded in since by we have .the rest of the terms are in fact in .this confirms that \in c_t { { l^2(\omega)}} ] instead of just .first of all , would give implying that have zero traces .hence so do their time - derivatives and then it is immediately follows that ] belong to .we conclude that if , then ] if it is a weak solution to \!\!\!\;\big]}} ] for any .moreover , that is , for [ ex : square ] if a solution to with has initial data given by the eigenfunctions of ( every ) , then for every .[ thm : unstable ] the dynamical system generated by on the state space corresponding to weak solutions is non - accretive , but is * not * uniformly stable .specifically , the energy functional is continuous non - increasing ; however , for any constants and any time there exists an initial datum such that the corresponding solution trajectory on ] with .then there exist a group of linear operators on such that determines a weak solution to this problem on every ] with if , this problem possesses a unique weak solution .then the continuous mapping for energy functional satisfies \,.\ ] ] in particular , from the gronwall estimate it readily follows \,.\ ] ] moreover , if and , then and ( e.g. see ( * ? ? ? * thm .2.1 , p. 229 ) ( instead of " ) . in the second half of that theorem , which is the one we cite ,it is strengthened to for strong solutions . ] ) suppose is a weak solution of on ] , it satisfies the variational identity let ) ] , .this series that converges to in and its time - derivative converges to in . 
by applying identity to finite sums and passing to the limit we recover .the existence of finite energy solutions to is known .well - posedness for _ regular solutions _ , however , requires more work and relies on the connection between the smoothness in space and smoothness in time as summarized by the diagram below : [ cols="^,^,^ " , ] the purpose of this subsection is to furnish this connection which can be loosely outlined as follows : a weak solution of is regular of order on ] .we split this claim into two propositions . in order not to keep track of how the structure of changes after differentiation we introduce the following , somewhat abstract property : [ def : reg - dependence ] we say two functions have * order dependence *if for every the regularity ( * if * it holds ) * would imply * that where is a non - negative integer .it is helpful to note : * if have order dependence , they trivially have order dependence for any . *if have order dependence , then for , the functions and have order dependence .again , it is helpful to consider an example .let .then it is not hard to check that have order dependence . for instance take and assume . as was shown via and in example [ example-1 ] continuously in time .in particular , .[ prop : regular - differentiable]suppose that is a weak solution on ] with and .assume functions have order dependence for some .suppose , in addition , with , then in other words , .moreover , for a constant dependent only on , , and being continuous monotone increasing with respect to the parameters , . throughout the argument below , the norms in the considered spaces can be ( inductively ) estimated in terms of and , thus ultimately verifying .we will focus in detail on proving the claimed regularity . * cases , . * by assumption we always have and which takes care of the case .if we in addition assume , then the equation implies that . since on the boundary then .conclude : for proceed by induction : suppose the result of this proposition holds for .assume holds .then let us show .* case . *because , then condition implies . as a special case ( using instead of maximal ), we have moreover , functions have a fortiori -st order dependence , so the induction hypothesis gives next , by assumption , we also have to show that , it remains to verify that for we have .to this end introduce since have -th order dependence , then and have -st order dependence ( see definition [ def : reg - dependence ] ) . as we already know , , and via the -st order dependence of , we have for . because , then ; likewise also tells us that is at least in , so . consequently ,applying to the equation for , we conclude that is a weak solution to \!\!\!\;\big]}} ] of \!\!\!\;\big]}} ] with .moreover from the assumption we also have that , i.e. , is regular of order .thus by the induction hypothesis equivalently the only remaining step from here to proving is to show that . to this enddefine then by since and by the -st order dependence ( implied by -th order dependence ) of and , we deduce from that .so confirming that as desired .thus completing the induction argument .it is known that possesses unique solutions .here we extend this result to regular solutions as well .first we formulate it for local in time solutions .[ thm : local ] suppose for a non - negative integer . 
then there exists such that system has a local unique solution that is regular of order on interval ] we have from the local lipschitz property of we obtain + \left[f({\tilde}{\phi}_0)\phi_1 -f({\tilde}{\phi}_0){\tilde}{\phi}_1\right ] \\= & \phi_1 m(\phi_0,{\tilde}{\phi}_0)(\phi_0 - { \tilde}{\phi}_0 ) + f({\tilde}{\phi}_0)(\phi_1 - { \tilde}{\phi}_1)\,.\ , \end{split}\ ] ] now we will rely on the fact that a priori gives : estimate , \right|_{_0}\\ \leq & \left| \phi_1 m(\phi_0,{\tilde}{\phi}_0)(\phi_0 - { \tilde}{\phi}_0 ) + f({\tilde}{\phi}_0)(\phi_1 - { \tilde}{\phi}_1 ) \right|_{_0}\\ & + \left| \sum_{i+j+k = n } c_{ijk}\,{\partial}_x^i\phi_1 \cdot { \partial}_x^j m(\phi_0,{\tilde}{\phi}_0)\cdot { \partial}_x^k(\phi_0 - { \tilde}{\phi}_0 ) \right|_{_0 } + \left| \sum_{i+j = n } d_{ij}\ , { \partial}_x^i f({\tilde}{\phi}_0)\cdot { \partial}_x^j ( \phi_1 - { \tilde}{\phi}_1)\right|_{_0}\\ \leq & { \mathcal{p}}_1\left\ { \|\phi_0\|_{c^{n}({\overline}{{\omega } } ) } , \|{\tilde}{\phi}_0\|_{c^{n}({\overline}{{\omega}})}\right\ } \cdot \|\phi_1\|_{w^{n,2}({\omega } ) } \cdot \|\phi_0 - { \tilde}{\phi}_0 \|_{c^{n}({\overline}{{\omega } } ) } + { \mathcal{p}}_2\left\ { \|{\tilde}{\phi}_0\|_{c^{n}({\overline}{{\omega}})}\right\ } \cdot \| \phi_1-{\tilde}{\phi}_1\|_{w^{2,n}({\omega } ) } \end{split}\ ] ] where are a polynomials in the indicated variables with positive coefficients .now , from so continuing we get because the group is unitary on then yields if and come from a bounded set , then choosing small enough implies that is a contraction .* step 3 : invariance . * finally , we also need to map the the ball in into itself . according to for that it suffices that satisfy for which we can impose the contraction mapping principle implies the claimed local unique solvability .the global existence result is based on a priori bounds on the energy .[ prop : extension ] let be as in with exponent , .assume is a weak solution that is regular of order , defined on some right - maximal interval .suppose and that there exists a continuous function on such that then . by proposition [ prop : f - regularity ] , and the term have -th order dependence . recall that energy controls the norm of the solution ( in fact , it controls the norm , but the continuity with values in is implied by the definition of regular solution ) .then there is a constant dependent on such that for any .hence by theorem [ thm : local ] any initial data of the form and for can be extended to a solution that exists for another time units , independently of .hence can not be finite .[ lem : energy - regular ] let be as in with exponent , .suppose is a weak solution to on ] with and .now call upon the energy identity for to obtain for .let be a weak solution on interval |u^{(k)}(t)|_{_0}^2 = with the decay rate uniform with respect to the finite energy . by interpolation between and conclude uniform a decay ( but at slower rates , where is modified by the interpolation exponent ) for every norm with .+ note that uniform stability of the original system would have required us to include the case , which as we are about to show can not happen .* summarizing : * is precisely the solution to \!\!\!\;\big]}} ] is controlled by its initial lower - order norm * independently of *. 
now we are going to use the smallness of the norm of to show that its norm can not decay too fast .let be the unique solution to _ linear homogeneous _problem \!\!\!\;\big]}} ] .the energy identity for gives note that * ( since ) * independently of * in .* given as in we have . via 1-dimensional embeddings and interpolation for so because independently of in , then by we get for any for * independent of * in .at this point we finally expand the definition of to get ( see ) plugging these observations into the estimate for we obtain : [ lem : decay ] for let denote the weak solution on ] and be the solution to linear homogeneous problem \!\!\!\;\big]}} ] and for any it satisfies \,,\ ] ] with * independent of * . at this point for brevitylet us suppress the superscript " stemming from the choice of the parameter in the definition of initial data .let be as in lemma [ lem : decay ] .suppose for a moment that for some .then via we have to apply this estimate , pick any and find such that ( e.g. , if ) .fix , then there is large enough so that for initial condition yields a solution whose energy satisfies \,,\ ] ] for as in lemma [ lem : decay ] .consequently , by , \,.\ ] ] however , the initial condition had energy ( again , independently of ) .hence the family of initial conditions with the associated solutions , resides in a bounded ball ( of radius ) in , yet the corresponding solutions do not decay to zero uniformly in the topology of .thus the associated dynamical system on is not uniformly stable .this argument demonstrates theorem [ thm : unstable ] for and .the general case follows merely by attaching a factor of to and choosing a potentially smaller in the last step of the argument .for a weak solution of \!\!\!\;\big]}} ] and suppose on ] , then we have for every . by the continuity of the solutionthis is only possible if for .then and we arrive at an equilibrium solution which has to be trivial .this observation completes the proof of theorem [ thm : unstable ] .the strategy for the proof of instability was largely prompted by numerical observations described below .the numerical implementation presented here treats the case of : with and given initial data solution was discretized in space via a ritz - galerkin method .the dynamic problem could be analyzed explicitly using a discretization in time and a runge - kutta scheme , though , rigorous justification of convergence becomes more delicate .another approach is to approximate the successive approximations of theorem [ thm : local ] which , when exact , are guaranteed to converge , at least over small time intervals .the iterates correspond to _ linear _ inhomogeneous pde problems that are resolved using a hybrid scheme : a. for relatively short times find solutions using an approximation of semi - discrete ritz - galerkin method by discretizing time - integrals in the variation of parameter formula .if the error in numerical integration is small , then this approach enjoys an explicit convergence estimate ( for smooth solutions and over finite time intervals ) essentially proportional to the space discretization parameter .b. for larger times , collect the last -points of the semi - discrete approximation and resolve the rest of the iteration using a multi - step method ( adams - bashforth ) .thus , we begin with some initial guess } , \dot{u}_{[0]}) ] to linear inhomogeneous problem , then for small , e.g. see ( * ? ? ?13.1 , p. 
202 ) , and for simplicity taking the initial conditions to be the more accurate ritz projections of the initial data , we get } - u_{[k , h ] } \|_{c_t{\mathscr{h}}^0 } \leq c h { \int_0^t}|\ddot{u}_{[k]}(s)|_2 ds\,.\ ] ] this estimate of course requires sufficiently regular solutions . as theorem [ thm : well - posed ] and example [ ex : square ] show , in order to have the regularity on } ] of the projection of } ]is the constant solution : }(t , x ) = ( { \mathcal{r}}_h^{1 } u_0(x))^2({\mathcal{r}}_h^{0 } u_1(x ) ) { \quad\text{for all}\quad}t\geq 0.\ ] ] let denote the projection of the initial data .we obtain a semi - discrete approximation of the original system for unknown coefficient vector } ] needs to be computed . for this purpose only several values of the matrix exponentials are needed in order to apply the newton - cotes rule on sub - interval ] . as before ,let be the eigenfunction for with eigenvalue .then for initial data for constants , , the solution of can be reduced to a dissipative ode using the ansatz plugging it into equation yields this identity would be implied if for each function solves the 2nd - order nonlinear ode it corresponds to a first - order nonlinear system : function is smooth with respect to the components of and to variable , which now acts as a parameter .this ode system has global differentiable solutions , moreover since is smooth , in fact , analytic in then local solutions are differentiable with respect to ( * ? ? ?3.1 , p. 95 ) . because the initial data is smooth then by theorem [ thm : well - posed ] the unique solution is , among other things , in .consequently must coincide with the solution to the ansatz .in turn , is a dissipative system of odes and can be approximated by a runge - kutta scheme .to get some quantitative estimate on the absolute error of solutions found in section [ particulars ] , at least for initial data of the form , one can consider a piecewise linear interpolation of and then calculate the energy - norm difference from the finite - element solution .the accompanying figures and data demonstrate some of the numerical results .the initial data is considered of the form which permits to compare the finite element solutions to the pointwise runge - kutta solutions described in section [ sec : pointwise - rk ] .figure [ fig:1 ] shows the point - value of displacement next to the displacement value at the same for the corresponding initial boundary value problem _ with linear damping_. figure [ fig:2 ] presents numerical estimates of the energy for solutions obtained by ritz - galerkin finite element scheme and successive approximations .the graphs indicate that the energy decay deteriorates as the frequency of the initial data goes up while the initial finite - energy remains fixed ( independently of ) , thus illustrating the lack of uniform which was rigorously confirmed by theorem [ thm : unstable ] .the initial data are of the form with zero initial velocity .the indicated errors are obtained by comparing each finite - element solution to a piecewise - linear interpolant of the corresponding piecewise rk solution .figure [ fig:3 ] uses multi - step extensions of the same solutions shown in figure [ fig:2 ] to a larger time - scale using ( 5-step ) adams - bashforth method .it also includes the decay of the -norm for these solutions .the research of george avalos and daniel toundykov was partially supported by the national science foundation grant dms-1211232 . 
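A compressed version of the experiment behind figure [fig:2] can be reproduced with a plain spectral Galerkin semi-discretization integrated by an off-the-shelf adaptive Runge-Kutta solver instead of the hybrid successive-approximation/Adams-Bashforth scheme described above. The damping nonlinearity is taken as f(u) = u^2 for concreteness, the initial data are single eigenfunctions rescaled to carry the same finite energy, and all discretization parameters are illustrative; the point is only that the energy remaining at a fixed time grows with the frequency of the initial datum.

```python
import numpy as np
from scipy.integrate import solve_ivp

def string_energy_decay(nu, n_modes=32, n_x=257, T=20.0):
    """Spectral Galerkin sketch for u_tt + u^2 u_t = u_xx on (0,1), u(0)=u(1)=0,
    with initial data u0(x) = sin(nu*pi*x)/(nu*pi) (initial energy 1/4 for every nu)
    and u1 = 0.  Returns the energy at time T; the nonlinear term is projected
    onto the sine basis by trapezoidal quadrature."""
    k = np.arange(1, n_modes + 1)
    lam = (k * np.pi) ** 2
    x = np.linspace(0.0, 1.0, n_x)
    phi = np.sqrt(2.0) * np.sin(np.outer(k, x))               # orthonormal sine basis
    w = np.full(n_x, 1.0 / (n_x - 1))
    w[0] = w[-1] = 0.5 / (n_x - 1)                            # trapezoid weights

    def rhs(t, z):
        c, cdot = z[:n_modes], z[n_modes:]
        u, udot = phi.T @ c, phi.T @ cdot                     # values on the grid
        damping = phi @ (w * u * u * udot)                    # <u^2 u_t, phi_k>
        return np.concatenate([cdot, -lam * c - damping])

    c0 = np.zeros(2 * n_modes)
    c0[nu - 1] = 1.0 / (np.sqrt(2.0) * nu * np.pi)            # u0 = sin(nu pi x)/(nu pi)
    sol = solve_ivp(rhs, (0.0, T), c0, rtol=1e-8, atol=1e-10)
    c, cdot = sol.y[:n_modes, -1], sol.y[n_modes:, -1]
    return 0.5 * np.sum(cdot ** 2 + lam * c ** 2)

for nu in (1, 4, 16):
    print("nu =", nu, " energy at T:", string_energy_decay(nu))
```

One should observe the remaining energy increasing with nu even though every initial datum carries the same finite energy, in line with the non-uniform decay established in theorem [thm:unstable].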
the numerical analysis presented in thiswork and partial theoretical results were obtained during a research experience for undergraduates ( reu ) program on applied partial differential equations in summer 2013 at the university of nebraska - lincoln .this reu site was funded by the national science foundation under grant nsf dms-1263132 .the reu project would not have been possible without assistance of tom clark .to complement the analysis of the infinite - dimensional model it is also interesting to examine the related finite - dimensional version of a degenerately damped harmonic oscillator : for , .rewrite it as a first order evolution problem henceforth , let denote an equivalent norm on : because is smooth and non - negative , then classical ode results guarantee that solutions are unique , exist globally and satisfy below we present stability results which contrast their infinite - dimensional analogues discussed earlier . in particular , the finite - dimensional system is uniformly stable , while for distributed - parameter version the strong stability is open while uniform stability has been proven false .this is a direct consequence of lasalle s invariance principle with the lyapunov function given by the equivalent norm : .we only need to check what kinds of trajectories reside in the invariant set of the system : so for in particular , either or .now suppose that a solution trajectory resides in .if at some we have , then by the continuity in time , it is nonzero on some interval . on that intervalwe must have by the property of .but then on that interval , from equation we get a contradiction since the solution has to be constant and therefore zero .proceed by contradiction .assume for a bounded set there exists some , a bounded sequence of initial data , and a sequence of corresponding times with such that extract a convergent subsequence of initial data , reindexed again by .let denote this limit point and be the corresponding solution . by lemma [ lem : lasalle ] in particular , there exists such that for we have .because the non - linearity is locally lipschitz on , and the system is non - accretive , then there exists such that for any if , the corresponding solutions satisfy \,.\ ] ] since , we can find .next , because converge to , then for we can find find so that and , consequently , \,.\ ] ] then . because the system is non - accretive , then for all , and in particular it holds for .so which contradicts the choice of .roberto triggiani and peng - fei yao .carleman estimates with no lower - order terms for general riemann wave equations . global uniqueness and observability in one shot ., 46(2 - 3):331375 , 2002 .special issue dedicated to the memory of jacques - louis lions .irena lasiecka , roberto triggiani , and x. zhang .global uniqueness , observability and stabilization of nonconservative schrdinger equations via pointwise carleman estimates .i. -estimates ., 12(1):43123 , 2004 .irena lasiecka , roberto triggiani , and x. zhang .global uniqueness , observability and stabilization of nonconservative schrdinger equations via pointwise carleman estimates .ii . -estimates . , 12(2):183231 , 2004 .roberto triggiani and xiangjin xu .pointwise carleman estimates , global uniqueness , observability , and stabilization for schrdinger equations on riemannian manifolds at the -level . in _ control methods in pde - dynamical systems _ , volume 426 of _ contemp ._ , pages 339404 .soc . 
, providence , ri , 2007 .igor chueshov , irena lasiecka , and daniel toundykov .long - term dynamics of semilinear wave equation with nonlinear localized interior damping and a source term of critical exponent . , 20(3):459509 , 2008 .matthias eller and daniel toundykov .carleman estimates for elliptic boundary value problems with applications to the stabilization of hyperbolic systems ., 1(2):271296 , 2012 .evolution equations and control theory .philip hartman . , volume 38 of _ classics in applied mathematics_. society for industrial and applied mathematics ( siam ) , philadelphia , pa , 2002 . corrected reprint of the second ( 1982 ) edition [ birkhuser , boston , ma ; mr0658490 ( 83e:34002 ) ] , with a foreword by peter bates . | presented here is a study of well - posedness and asymptotic stability of a degenerately damped " pde modeling a vibrating elastic string . the coefficient of the damping may vanish at small amplitudes thus weakening the effect of the dissipation . it is shown that the resulting dynamical system has strictly monotonically decreasing energy and uniformly decaying lower - order norms , however , is _ not uniformly stable _ on the associated finite - energy space . these theoretical findings were motivated by numerical simulations of this model using a finite element scheme and successive approximations . a description of the numerical approach and sample plots of energy decay are supplied . in addition , for certain initial data the solution can be determined in closed form up to a dissipative nonlinear ordinary differential equation . such solutions can be used to assess the accuracy of the numerical examples . [ pageinit ] department of computing & mathematical sciences , california institute of technology , ca 91125 + department of mathematics , university of nebraska - lincoln , ne 68588 + carroll college , mt 59625 + school of public health , university of michigan , mi 48104 + department of mathematics & statistics , texas tech university , tx 79409 + _ corresponding author : _ dtoundykov.edu |
_ `` the tension between giving away your information- to let people know what you have to offer- and charging them for it to recover your costs is a fundamental problem in the information economy . '' _ carl shapiro and hal r. varian in information rules : a strategic guide to the network economy , harvard business school press ( 1998 ) .+ the advent of the internet as a big playground for resource sharing among selfish agents with diverse interests , and the emergence of web as a giant platform for hosting information has raised a plethora of opportunities for commerce , as well as , a plenty of new design , pricing and complexity problems .one good example of a multi - billion dollar industry evolved as a consequence of web is the sponsored search advertising ( ssa ) , making fortunes for internet search giants such as google and yahoo ! , and has got tremendous attention in academia recently , due to various interesting research problems originated as a result of this continuously growing industry .one of the most important concern for such an industry or in general for the information economy is the _`` pricing problem''_. for example , for the goods like an ad - slot in the ssa which has no intrinsic value and is perishable , it is not clear what price should it be sold for .similarly , for a digital good , where the cost of reproduction is negligible , the standard way of pricing based on the production cost does not work .therefore , auctions are becoming a popular pricing mechanism in electronic commerce as they automatically adjust prices to market conditions , and specifically prices gets adjusted according to its value to the consumers rather than to the production costs .auction theory has a pretty impressive literature - from the lovely vickrey auction to the sponsored search auctions and the auctions of digital goods .the literature has so far focused mostly on the design of mechanisms that takes the revenue or the efficiency as a yardstick .this is perfectly logical as these are two very important metrics from the viewpoint of the seller / auctioneer and the society respectively .however , the scenarios where the _ capacity _ , which we define as _ `` the number of bidders the auctioneer wants to have a positive probability of getting the item '' _ , is a fundamental concern are ubiquitous in the information economy . for instance , in the sponsored search auctions or in online ad - exchanges , the true value of an ad - slot for an advertiser is inherently derived from the conversion - rate , which in turn depends on whether the advertiser actually obtained the ad - slot or not , which in turn depends on the capacity . in general , the same holds true for all information goods / digital goods . in the present paper , our goal is to first motivate _ capacity _ as a fundamental metric in designing auctions in the information economy and then to initiate study of such a design framework for some simple and interesting scenarios . in section [ mot ] , we motivate the capacity as a fundamental and interesting additional metric on the top of revenue / efficiency for mechanism design . 
in section [ single ] ,we start with the _ capacity _ constrained framework for selling an indivisible item .we propose a simple way to incorporate capacity constraints via designing mechanisms to sell probability distributions .we show that such optimal probability distributions could be identified using a linear programming approach , objective being revenue , efficiency or a related function .further , we define a quantity called _ price of capacity _ to capture the tradeoff between capacity and revenue / efficiency and derive upper bounds on it . in section [ ssa ] ,we discuss the case of sponsored search auctions and also note that the auctioneer controlled probability spikes based auctions suggests a new model for sponsored search advertising , where a click is sold directly and not indirectly via allocating impressions . in section [ conc ] ,we conclude with a list of research directions for future work , inspired by the present paper .[ [ experience - goods ] ] * experience goods : * + + + + + + + + + + + + + + + + + + + + a bidder might not know her true valuation for the item unless she acquires it sometimes meaning that the true value is inherently derived from the actual acquisition of the item .such a good is called an _ experience good_ .experience goods are ubiquitous in the information economy as clearly all information goods are experience goods . sometimes , a particular goodmight also act as an experience for another good .for example , a particular song from a singer might act as an experience for another song of that singer . therefore , in the auction of an experience good , the values for some of the bidders can be known to them as they might have experienced it from an earlier purchase of this or a related item , and for the other bidders the values are still unknown and they have to simply guess this value if they are participating in the auction . for the second kind of bidders , even if their values might turn out to be pretty high , their guesses might not be high enough to actually acquire the item when revenue / efficiency is the only goal .moreover , they might be _ loss - averse_ and would not bid a high value at all , due to the potential risk involved if the item does not turn out to be of high value to them . therefore , it is important that such bidders be given a chance to acquire the item , and consequently capacity becomes a fundamental metric in designing the mechanisms to achieve this goal .[ [ two - fold - exploration - in - sponsored - search - auctions ] ] * two - fold exploration in sponsored search auctions : * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + first , the ad - slots in the ssa are necessarily experience goods for a new advertiser and the estimates for the advertisers getting lower ranked slots is also generally poor as they hardly get any clicks .the value of an ad - slot is derived from the clicks themselves ( i.e. 
rate of conversion or purchase given a click ) , and therefore , unless the bidder actually obtains a slot and receives user clicks , there is essentially no means for her to estimate her true value for the associated keyword .second , even if all the true valuations are known to the corresponding bidders , for each bidder the ssa involves a parameter called _ quality score _ of the bidder which is defined as the expected clickability of the bidder for the associated keyword if she obtains a slot .this parameter is also not known a priori and the auctioneer needs to estimate it .certainly , a model that automatically allows one to estimate these key parameters ( i.e. click - through - rates and true values ) is desirable .indeed , some mechanisms to incorporate explorations for estimating such important parameters has started to appear in literature .capacity as an additional metric can provide a generic framework for designing such exploration based mechanisms . [ [ avoiding - over - exposure - in - online - advertising ] ] * avoiding over - exposure in online advertising : * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + typically , the online ad - exchanges such as right media or doubleclick convince their advertisers that their ads will not be over exposed to users . one way of avoiding such an over - exposure could be via increasing capacity .[ [ uncertainty - and - switching - costs ] ] * uncertainty and switching costs : * + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + let us consider a production company buying raw materials ( multiple units of a good ) from providers and via a reverse - auction and is uncertain about the time these providers might take to deliver the raw material to .the providers are very likely to lie about the delivery times and it is hard to incorporate delivery times in designing the auction . if the goal of the auctioneer ( i.e. the company ) is cost minimization , it will buy the raw material from the provider with the minimum ask ( assume that the ask of is smaller than that of which is smaller than that of ) . now if the provider lied about the delivery time at least for a significant fraction of the total required units , s production gets delayed .if wants to switch to some other provider , run another auction and buys from , now it will buy at a higher cost and their is still a delay in s production as will take its time in delivery too . the time taken by could actually be smaller than that of s for the remaining units , however , still there is a delay .moreover , such delay might persist further as could also lie about its delivery time .it might have been better if would buy not only from to start with and give a chance as well .therefore , one way to reduce such delay times could be via increasing _capacity _ as per our definition .there is a single _ indivisible _ item for sale .there are bidders interested in the item .the bidder has a value for this item , for some bidders s are the actual true values while for some others these are just crude estimates / guesses . ] .the item is sold via an auction on an experiment designed by the seller / auctioneer .the experiment has outcomes with associated probabilities , where . 
therefore, the item is essentially sold via an auction of the probability spikes wherein the auctioneer can choose these probability spikes in advance or adaptively based on bidders reports so as to achieve some defined goal such as maximizing her profit or efficiency or to accommodate a wider pool of bidders .bidders bid on the experiment by reporting bids s to indicate their respective values of the item ._ at most one probability spike is assigned to each bidder ._ thus there are effectively two steps in this auction model .* stage 1 ( _ commit / compete _ ) : the bidders report their bids s and by way of using some mechanism , the auctioneer assigns the probability spikes to them and decides corresponding payments to be made by them . let us call a bidder a _ prospective winner _ if she was assigned one of the probability spikes .* stage 2 ( _ win or lose _ ) : the experiment is performed .if the outcome of the experiment is , then the _ prospective winner _ assigned to the spike is declared the _ winner _ , and is given the item .further , the auctioneer could choose various payment schemes such as - * _ betting : _ every _ prospective winner _ is charged its payment decided in _compete / commit _ stage irrespective of whether she will be a _ winner _ or not .* _ pay - per - acquisition _ : a bidder is charged the amount decided in _compete stage _ only when she is a _ winner _i.e. only when she actually acquires the item. the above model can also be interpreted as selling of a single _ divisible _ item in terms of specified fractional bundles , the bundles corresponding to the probability spikes . without loss of generality ,let us assume that .further , for notational simplicity let .let be the allocation rule and be the payment rule decided in _compete stage_. thus , for , the spike is assigned to the bidder and for , the bidder is not assigned any spike .further , for , is the expected payment to be made by the bidder and otherwise. therefore , the expected utility of the bidder assigned to spike is given by for and is zero otherwise .for the sake of simplicity , let , then the famous _ vcg _ mechanism ranks the bidders by their bids ( and true values s as being a truthful mechanism ) and charges them their respective opportunity costs .that is , is the bidder with the maximum bid and for and zero otherwise . 
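as a concrete illustration of the allocation and opportunity - cost payments just described, the sketch below computes a vcg - style assignment of probability spikes in python. it assumes that bidders report their true values (vcg is truthful), that spikes are sorted in decreasing order and assigned to the highest - ranked bidders, and that the expected payment of a prospective winner equals the loss in value she imposes on the lower - ranked bidders; the example numbers are ours and not taken from the paper.

```python
# Minimal sketch (not the paper's code): VCG-style sale of probability spikes.
# Assumptions: bids equal true values, spikes sorted p_1 >= p_2 >= ..., and at
# most one spike assigned per bidder.

def vcg_spike_auction(bids, spikes):
    """Assign spikes to the highest bidders and charge opportunity costs.

    Returns a list of (bidder_index, spike, expected_payment).  Under the
    pay-per-acquisition variant the per-acquisition price would be
    expected_payment / spike.
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    p = sorted(spikes, reverse=True)
    k = min(len(p), len(bids))
    p_pad = p[:k] + [0.0]          # the "spike" of the first excluded bidder is 0
    result = []
    for rank in range(k):
        # externality on lower-ranked bidders: each bidder j > rank would move up
        # one position (gaining p_{j-1} - p_j of winning probability) if the
        # bidder at `rank` were absent.
        payment = 0.0
        for j in range(rank + 1, min(k + 1, len(order))):
            payment += bids[order[j]] * (p_pad[j - 1] - p_pad[j])
        result.append((order[rank], p_pad[rank], payment))
    return result

if __name__ == "__main__":
    assignment = vcg_spike_auction(bids=[10.0, 7.0, 4.0, 1.0],
                                   spikes=[0.6, 0.3, 0.1])
    for bidder, spike, pay in assignment:
        print(f"bidder {bidder}: spike {spike:.2f}, expected payment {pay:.2f}")
```

in the example, the top bidder receives the largest spike and pays (in expectation) less than her expected value for it, as required by individual rationality; the payment expressions reduce to sums over spike gaps, which is the form used in the revenue and efficiency lemmas below.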
if the payment is done via the _ betting _ model then this is the amount charged to bidder in the _ compete / commit _ stage .if the payment is done via _ pay - per - acquisition _ and is the _ winner _ then she is charged an amount , and therefore , her expected payment is still as is the probability that she wins .therefore , the auctioneer s revenue is let be the spike gaps , then .the condition translates to and translates to therefore , and clearly as _ vcg _ ranks by true values .[ vcgrev ] the revenue of the auctioneer in the _ vcg _ mechanism for selling probability spikes can be expressed as where for all , and for all .further , the efficiency for the _ vcg _ mechanism is further , we have [ vcgeff ] the efficiency in the _ vcg _ mechanism for selling probability spikes can be expressed as where for all , and for all .we say that a linear function of spike - gaps s is _ gap - wise monotone _ if , where s do not depend on gaps s and for all .a mechanism for selling probability spikes is called _ gap - wise monotone _ if the revenue of the auctioneer at the prescribed equilibrium point is _ gap - wise monotone _ and is called _ strongly gap - wise monotone _ if the social value ( i.e. efficiency ) at the prescribed equilibrium point is _ gap - wise monotone _ as well . therefore , from lemma [ vcgrev ] and lemma [ vcgeff ] we obtain the following theorem .[ vcg ] the _ vcg _ mechanism for selling probability spikes is _ strongly gap - wise monotone_. define , .* walrasian equilibrium : * let be an allocation and be a payment rule , then is called a walrasian equilibrium if for all , . following , it is not hard to establish the following lemma .let be an allocation and be a payment rule , then is a walrasian equilibrium iff it is efficient .therefore , at a walrasian equilibrium , bidders are ranked according to their values and efficiency can be written as in the case of vcg i.e where for all .let be a walrasian equilibrium for selling probability spikes then the efficiency at this equilibrium is gap - wise monotone .this means that optimal efficiency is always gap - wise monotone .further , the optimal omniscient auction ( i.e. when the auctioneer knows everyone s true value s ) extracts a revenue equal to , where for all . therefore , the optimal revenue of omniscient auction is also gap - wise monotone . in this section ,we develop a linear programming approach to identify optimal probability spikes subject to the capacity constraints in terms of spikes gaps , where the objective is a gap - wise monotone function . 
for such functions ,it is simpler to put the constraints in terms of spike - gaps than in terms of spikes themselves , however , it wo nt be hard to see that a similar approach can also be developed if we put the constraints in terms of the spikes , as well as , in the case of functions more general than the gap - wise monotone .for the sake of simplicity we omit any such details .let be a _ gap - wise monotone _ function and be a generic set of parameters with the property that and let us consider the following linear programming problem in variables s , the dual problem is and the _ kkt _ conditions are and therefore an optimal solution is as it can be checked to satisfy the __ conditions .the optimal value is clearly , the maximum of the optimal solution over parameters s is attained when for all and in that case now , note that the primal optimal variables s do not depend on the quantities s at all .therefore , so long as is _ gap - wise monotone _ , the optimal solution to the primal remains the same as in equation [ optsol ] .it is quite intuitive as the best possible spike allowed by the capacity constraints is assigned to the best possible bidder ( i.e. ) , and all other spikes are the minimum possible as per the constraints .let be a gap - wise monotone function and the spike - gaps s satisfy conditions [ gap - ep ] , [ epsilonjs0 ] and [ epsilonjs ] , then the optimal choice of spike - gaps are given by equation [ optsol ] and the optimal value of is given by equation [ h - opt ]. given parameters , let us define the _ capacity _ as now consider the parameters satisfying the properties [ epsilonjs0 ] and [ epsilonjs ] such that .given a gap - wise monotone function , we claim that such s satisfying properties [ epsilonjs0 ] and [ epsilonjs ] can always be obtained from s such that as long as , meaning that the _ capacity _ can always be increased without any loss in optimal value as long as we do not shoot over the absolute optimum .we have , now we can always choose s satisfying properties [ epsilonjs0 ] and [ epsilonjs ] by taking suitable , and .in particular , taking , and does the job .an interesting case to consider is when for all and otherwise . andin this case we can increase the capacity without loss in optimal value by taking and otherwise , where now let us define then it is clear that and therefore , as long as capacity can always be increased without any loss in the optimal value as discussed above , however there is a strict decrease in the optimal value if we wish to increase capacity from to .we can naturally define a parameter which we call _ price of capacity _ as follows : thus , _ price of capacity _ is the worst possible loss in optimal value while increasing capacity from to . again let us consider the case when all non - zero , then often our goal will be to maximize efficiency or revenue subject to the capacity constraints , and consequently such a loss may not be considered good . therefore , its really a price that we are paying for increasing capacityas we discussed in the section [ mot ] , one nice motivation for the study of capacity as a metric for mechanism design comes from the sponsored search advertising .we first describe the formal ssa model .formally , in the current models , there are slots to be allocated among ( ) bidders ( i.e. the advertisers ) .a bidder has a true valuation ( known only to the bidder ) for the specific keyword and she bids . 
the expected _ click through rate _ ( ctr ) of an ad put by bidder when allocated slot has the form i.e. separable in to a position effect and an advertiser effect . s can be interpreted as the probability that an ad will be noticed when put in slot and it is assumed that for all and for . can be interpreted as the probability that an ad put by bidder will be clicked on if noticed and is referred to as the _ relevance_(quality score ) of bidder .the payoff / utility of bidder when given slot at a price of per - click is given by . as ofnow , google as well as yahoo ! use schemes closely modeled as rbr(rank by revenue ) with gsp(generalized second pricing ) .the bidders are ranked in the decreasing order of and the slots are allocated as per this ranks . let the be the bidder allocated to the slot according to this ranking rule , then is charged an amount equal to per - click .this mechanism has been extensively studied in recent years .the solution concept that is widely adopted to study this auction game is a refinement of nash equilibrium called _ symmetric nash equilibria(sne ) _ independently proposed by varian and edelman et al . for notational simplicity ,let , then under this refinement , the revenue of the auctioneer at equilibrium is given by in this section , we discuss how to incorporate the capacity constraints in the keyword auctions being currently used by google and yahoo!. we understand that there could be several ways for doing so , however , we consider a very simple and intuitive way of incorporating the capacity constraints via probability spikes as follows : * the first slots are sold as usual to the high - ranked bidders . * the last slot ( i.e. the slot ) is sold via probability spikes among the bidders ranked through . is chosen to accommodate as many more bidders as the auctioneer wants .* there is a single combined auction for both of the above . clearly , the single combined auction is equivalent to the keyword auction with slots with position based ctrs taken as i.e. for , for and otherwise .the revenue of the auctioneer at _ sne _ is now , maximizing as a function of s is equivalent to maximizing the function note that may not be gap - wise monotone , however , it is not hard to see that the similar linear programming analysis as in the section [ genopt ] can be done to compute the price of capacity in the present scenario of keyword auctions as well .we omit the details .[ [ selling - clicks - via - auctioneer - controlled - probability - spikes - a - new - model - for - sponsored - search - advertising ] ] selling clicks via auctioneer - controlled probability spikes : a new model for sponsored search advertising : + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the design framework in section [ single ] , suggests a new model for ssa , where the clicks are sold directly and not indirectly via allocating impressions . 
in the usual pay - per - click model , a click is assigned to one of the advertisers who has been allocated an impression in the page currently being viewed by the user ; thus , the user s collective experience determines the probability that an impression winner would be the beneficiary of the click , and having set up the impressions and the user experience , the auctioneer does not actively control the probability with which a click will be allocated to a bidder .the impressions are sold and clicks are only stochastically related to impressions , even though an advertisers is charged only when she obtains a click . in the new model ,a click could be considered as the indivisible item being sold via probability spikes .instead of putting ads directly in the slots , the auctioneer could put some categories / information related to the specific keyword as a link .an auction based on probability spikes is run whenever a user clicks on this link and the user is directly taken to the landing page of the winning advertiser .in the present paper , our main goal was to motivate _capacity _ as a fundamental metric in designing auctions in the information economy and then to initiate study of such a design framework for some simple and interesting scenarios such as single indivisible item and sponsored search advertising. however , there are myriad of other interesting scenarios where the capacity - enabled framework should be interesting to study . for example , the auctions of digital goods , combinatorial auctions for selling information goods in bundles , double auctions and ad - exchanges etc .further , probably the most important question that remains to be addressed is to identify the best way of putting capacity constraints and to see how generic the approach via probability spikes could be made .we can consider the most general case of selling any set of items e.g. heterogeneous , homogeneous , indivisible , or divisible or any combination their of .let be the set of all possible allocations .further , the elements of are named such that , where is auctioneer s preference over allocations . then , these set of items can be sold with capacity constraints via probability spikes s with , wherein is enforced with probability .the authors thank dork alahydoian , sushil bikhchandani , gunes ercal , himawan gunadhi and adam meyerson for insightful discussions .the work of s.k.s was partially supported by his internship at netseer inc ., los angeles .g. aggarwal , a. goel , r. motwani , truthful auctions for pricing search keywords , ec 2006 .u. birchler and m. butler , information economics , routledge , 2007 .t. borgers , i. cox , m. pesendorfer , v. petricek , equilibrium bids in sponsored search auctions : theory and evidence , technical report , university of michigan ( 2007 ) . s. bikhchandani and j. m. ostroy , from the assignment model to combinatorial auctions . in combinatorial auctionsmit press 2006 .p. crampton , y. shoham , and r. steinberg , combinatorial auctions , mit press , 2006 .gabrielle demange , david gale , and marilda sotomayor , multi - item auctions , jour .political economy , 94 , 863 - 872 , 1986 .b. edelman , m. ostrovsky , m. schwarz , internet advertising and the generalized second price auction : selling billions of dollars worth of keywords , american economic review 2007 .a. v. goldberg , j. d. hartline , a. wright , competitive auctions and digital goods , soda 2001 .a. v. goldberg , j. d. hartline , a. r. karlin , m. saks , a. 
wright , competitive auctions , games and economic behavior , vol .55 , pp:242 - 269 , 2006 . v. krishna , auction theory , academic press , san diego , 2002 .j. kalagnanam and d. parkes , auctions , bidding and exchange design . in : simchi - levi ,wu , shen : supply chain analysis in the ebusiness area , kluwer academic publishers , 2003. s. lahaie , an analysis of alternative slot auction designs for sponsored search , ec 2006 .s. lahaie , d. pennock , revenue analysis of a family of ranking rules for keyword auctions , ec 2007 .p. milgrom , putting auction theory to work , cambridge university press , 2004 .a. mehta , a. saberi , u. vazirani , v. vazirani , adwords and generalized on - line matching , focs 2005 .p. nelson , information and consumer behaviour , the journal of political economy , vol 78 , pp:311 - 329 , 1970 . s. pandey , and c. olston , handling advertisements of unknown quality in search advertising , nips 2006 .s. k. singh , v. p. roychowdhury , m. bradonji , and b. a. rezaei , exploration via design and the cost of uncertainty in keyword auctions , preprint ( available at http://arxiv.org/abs/0707.1053 ) .s. k. singh , v. p. roychowdhury , h. gunadhi , and b. a. rezaei , capacity constraints and the inevitability of mediators in adword auctions , wine 2007 ( to appear ) .s. k. singh , v. p. roychowdhury , h. gunadhi , and b. a. rezaei , diversification in the internet economy : the role of for - profit mediators , preprint ( available at http://arxiv.org/abs/0711.0259 ) .l. s. shapley and m. shubik , the assignment game i : the core , int .j. game theory 1 , no . 2 , 111 - 30 , 1972 .c. shapiro and h. r. varian , information rules : a strategic guide to the network economy , harvard business school press , 1998 .a. tversky ; d. kahneman , loss aversion in riskless choice : a reference - dependent model , the quarterly journal of economics , vol .( nov . , 1991 ) , pp .1039 - 1061 .h. varian , position auctions , to appear in international journal of industrial organization .j. wortman , y. vorobeychik , l. li , and j. langford , maintaining equilibria during exploration in sponsored search auctions , wine 2007 ( to appear ) . | the auction theory literature has so far focused mostly on the design of mechanisms that takes the revenue or the efficiency as a yardstick . however , scenarios where the _ capacity _ , which we define as _ `` the number of bidders the auctioneer wants to have a positive probability of getting the item '' _ , is a fundamental concern are ubiquitous in the information economy . for instance , in sponsored search auctions ( ssa s ) or in online ad - exchanges , the true value of an ad - slot for an advertiser is inherently derived from the conversion - rate , which in turn depends on whether the advertiser actually obtained the ad - slot or not ; thus , unless the capacity of the underlying auction is large , key parameters , such as true valuations and advertiser - specific conversion rates , will remain unknown or uncertain leading to inherent inefficiencies in the system . in general , the same holds true for all information goods / digital goods . we initiate a study of mechanisms , which take capacity as a yardstick , in addition to revenue / efficiency . 
we show that in the case of a single indivisible item one simple way to incorporate capacity constraints is via designing mechanisms to sell probability distributions , and that under certain conditions , such optimal probability distributions could be identified using a linear programming approach . we define a quantity called _ price of capacity _ to capture the tradeoff between capacity and revenue / efficiency . we also study the case of sponsored search auctions . finally , we discuss how general such an approach via probability spikes can be made , and potential directions for future investigations . |
we hereafter propose a method that enables us to build a drp scheme while minimizing the error due to the finite difference approximation , by means of an equivalent matrix equation .+ + consider the transport equation : , \,\,\,t \,\in \,[0,t]\ ] ] with the initial condition . a finite difference scheme for this equation can be written under the form : where : , , , , , denoting respectively the mesh size and time step ( , ) .+ the courant - friedrichs - lewy number ( ) is defined as .+ + a numerical scheme is specified by selecting appropriate values of the coefficients , , , , , , , and in equation ( [ scheme ] ) , which , for sake of usefulness , will be written as : where the `` '' denotes a dependance upon the mesh size , while the `` '' denotes a dependance upon the time step .+ the number of time steps will be denoted , the number of space steps , . in general , .+ in the following : the only dependance of the coefficients upon the time step existing only in the crank - nicolson scheme , we will restrain our study to the specific case : the paper is organized as follows .the building of the drp scheme is exposed in section [ drp ]. the equivalent matrix equation , which enables us to minimize the error due to the finite difference approximation , is presented in section [ sylv ] .a numerical example is given in section [ ex ] .the first derivative is approximated at the node of the spatial mesh by : following the method exposed by c. tam and j. webb in , the coefficients , , and are determined requiring the fourier transform of the finite difference scheme ( [ approx ] ) to be a close approximation of the partial derivative .+ ( [ approx ] ) is a special case of : where is a continuous variable , and can be recovered setting .+ denote by the phase . applying the fourier transform , referred to by , to both sides of ( [ approx_cont ] ) , yields : denoting the complex square root of .+ comparing the two sides of ( [ wavenb ] ) enables us to identify the wavenumber of the finite difference scheme ( [ approx ] ) and the quantity , i. e. : the wavenumber of the finite difference scheme ( [ approx ] ) is thus : to ensure that the fourier transform of the finite difference scheme is a good approximation of the partial derivative over the range of waves with wavelength longer than , the a priori unknowns coefficients , , and must be choosen so as to minimize the integrated error : the conditions that is a minimum are : and provide the following system of linear algebraic equations : which enables us to determine the required values of , , and : problem ( [ scheme ] ) can be written under the following matricial form : where and are square matrices respectively by , by , given by : the matrix being given by : and where is a linear matricial operator which can be written as : where , , and are given by : the second member matrix bears the initial conditions , given for the specific value , which correspond to the initialization process when computing loops , and the boundary conditions , given for the specific values , .denote by the exact solution of ( [ transp ] ) .+ the corresponding matrix will be : {\ , 1\leq i\leq { n_x-1},\ , 1\leq n\leq { n_t}\ , } } \ ] ] where : with , .we will call _ error matrix _ the matrix defined by : consider the matrix defined by : the _ error matrix _ satisfies : minimizing the error due to the approximation induced by the numerical scheme is equivalent to minimizing the norm of the matrices satisfying ( [ eqmtr ] ) . 
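since the coefficient values and the linear system are not reproduced above, the following python sketch only illustrates the dispersion - relation - preserving step described earlier in this section: for an antisymmetric central stencil, the integrated error between the effective wavenumber and the exact one is quadratic in the coefficients, so setting its gradient to zero yields a small linear system that can be assembled and solved numerically. the stencil half - width, the integration range and the quadrature used here are assumptions of this sketch, not the values of the paper.

```python
# Minimal sketch (assumed stencil and integration range): optimize the
# coefficients a_1..a_M of an antisymmetric central difference
#     u'(x_i) ~ (1/dx) * sum_j a_j * (u_{i+j} - u_{i-j})
# by minimizing E(a) = \int_0^{eta_max} (2 sum_j a_j sin(j eta) - eta)^2 d eta,
# where eta = k*dx.  Setting dE/da_i = 0 gives the linear system A a = b.
import numpy as np

def drp_coefficients(half_width=3, eta_max=np.pi / 2, n_quad=2001):
    eta = np.linspace(0.0, eta_max, n_quad)
    w = np.full(n_quad, eta[1] - eta[0])
    w[0] *= 0.5
    w[-1] *= 0.5                              # composite trapezoidal weights
    js = np.arange(1, half_width + 1)
    S = np.sin(np.outer(js, eta))             # S[j-1, :] = sin(j * eta)
    A = 4.0 * (S * w) @ S.T                   # A_ij = 4 \int sin(i eta) sin(j eta) d eta
    b = 2.0 * S @ (w * eta)                   # b_i  = 2 \int eta sin(i eta) d eta
    return np.linalg.solve(A, b)

def effective_wavenumber(a, eta):
    """k_eff * dx of the stencil with coefficients a, as a function of eta = k*dx."""
    js = np.arange(1, len(a) + 1)
    return 2.0 * np.sin(np.outer(js, eta)).T @ a

if __name__ == "__main__":
    a = drp_coefficients()
    print("optimized coefficients:", a)
    eta = np.linspace(0.0, np.pi / 2, 200)
    print("max wavenumber error on [0, pi/2]:",
          np.max(np.abs(effective_wavenumber(a, eta) - eta)))
```

for a single coefficient the procedure gives a_1 = 2/pi on [0, pi/2] instead of the taylor value 1/2, which shows how the optimization trades formal order of accuracy for a better match of the dispersion relation over the resolved range of wavenumbers.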
_ note : _ since the linear matricial operator appears only in the crank - nicolson scheme , we will restrain our study to the case .the generalization to the case can be easily deduced .the problem is then the determination of the minimum norm solution of : which is a specific form of the sylvester equation : where and are respectively by and by matrices , and , by matrices .calculation yields : the singular values of are the singular values of the block matrix , i. e. of order , and of order . + the singular values of are , of order , and , of order .consider the singular value decomposition of the matrices and : where , , , , are orthogonal matrices . , are diagonal matrices , the diagonal terms of which are respectively the nonzero eigenvalues of the symmetric matrices , .+ multiplying respectively [ sylverr ] on the left side by , on the right side by , yields : which can also be taken as : set : we have thus : it yields : one easily deduces : the problem is then the determination of the and satisfying : denote respectively by , the components of the matrices , .+ the problem [ pb ] uncouples into the independent problems : + minimize under the constraint this latter problem has the solution : the minimum norm solution of [ sylverr ] will then be obtained when the norm of the matrix is minimum .+ in the following , the euclidean norm will be considered .due to ( [ ftilde ] ) : and being orthogonal matrices , respectively by , by , we have : also : the norm of is obtained thanks to relation ( [ m0 ] ) .this results in : can be minimized through the minimization of the second factor of the right - side member of ( [ min ] ) , which is function of the scheme parameters .+ is a constant .the quantities , and being strictly positive , minimizing the second factor of the right - side member of ( [ min ] ) can be obtained through the minimization of the following functions : i.e. : setting : one obtains the drp scheme with the minimal error through the minimization of : denote by the non - dimensional time parameter .figure [ erreur1 ] displays the norm of the error for an optimized scheme ( in black ) , where , , are given by ( [ opt_values1 ] ) , and a non - optimized one : numerical results perfectly fit the theoretical ones .+ figure [ erreur2 ] displays the norm of the error for the above optimized scheme ( in black ) , a seven - point stencil drp scheme ( in gray ) , and the fcts scheme ( dashed plot ) . as time increases , the optimized scheme yields , as expected , better results than the fcts one .also , for , results appear to be better than those of the classical drp scheme . for large values of the time parameter , both latter schemes yield the same results .+ figure [ erreurl2 ] displays the norm of the error for the above optimized scheme ( in black ) , the seventh - order drp scheme ( in gray ) , and the fcts scheme ( dashed plot ) . as expected , results coincide .the above results open new ways for the building of drp schemes .it seems that the research on this problem has not been performed before as far as our knowledge goes . in the near future, we are going to extend the techniques described herein to nonlinear schemes , in conjunction with other innovative methods as the lie group theory . | finite difference schemes are here solved by means of a linear matrix equation . the theoretical study of the related algebraic system is exposed , and enables us to minimize the error due to a finite difference approximation , while building a new drp scheme in the same time . 
* keywords * + drp schemes , sylvester equation |
the kloe-2 is an upgraded version of the kloe detector which is installed at da , the e collider located at the frascati laboratories of infn .the newly installed subdetectors are ( i ) the inner tracker for the improvement of the vertex position resolution and the acceptance increase for low transverse momentum tracks ; ( ii ) two pairs of small angle tagging devices for detection of low ( low energy tagger - let ) and high ( high energy tagger - het ) , energy electrons and positrons from reactions ; ( iii ) crystal calorimeters ( ccalt ) for covering of the low polar angle region to increase acceptance for very forward electrons and photons down to 8 deg , and a tile calorimeter ( qcalt ) for detection of photons coming from decays in the drift chamber .currently , the intensive work involving commissioning of new detectors is being performed . at the same time , new programming procedures are being developed , e.g. reconstruction procedures for the inner tracker .one of the tools that can be particularly useful to quickly verify the correctness of reconstruction algorithms or to reveal some malfunctioning of the detectors parts is the so - called event display .this program permits to graphically visualize reconstructed trajectories or energies of detected particles on event by event basis . in this contributionwe present knedle ( _ kloe new event display environment _ ) , the new event display for the kloe-2 detector .didone ( _ dafne interactions display _ ) is a previous event display for the kloe detector . its graphical layout is based on the opacs graphical libraries written entirely in c language .it interfaces the content of several ybos banks created by the reconstruction program with the kloe database .although , didone is still operational , e.g. particle trajectories reconstructed in the drift chamber can be visualized , there are important arguments that favored the development of an entirely new display .the first argument is that the program is outdated .as it was mentioned before , its core is based on opacs package system , which is an abandoned project , without any support available .the use of callback functions combined with a lack of documentation make the current code very hard to maintain .in addition , the didone display runs only on aix server , and a significant amount of time would be needed to make it compatible with modern linux systems .therefore , the decision was made to stop the previous project , and to concentrate on the development of a new solution based on the root libraries .the knedle event display aims to replace and enhance the functionality of didone .the application should contain the graphical user interface with the visualization of different detector components .it was decided that in the first development stage , the event display for inner tracker , following the drift chamber part should be implemented , but the final version will include also other detector s parts . to make it possible to run on personal computers and on the kloe server , the program must work both on aix and linux operating systems .finally the appropriate documentation should be provided along the development of the project .the knedle application is written in c++ with the object - oriented approach .it uses a several root libraries like geometry library to implement detectors geometry .the graphical user interface is implemented using qt - like gui library . 
for the backward compatibility with the root version available on the kloe server ( root 5.08 compiled with xlc++ ) , some of the packages features were deliberately ommited .the application contains a configurable logger , that delivers the login information that can be printed on the screen or saved to a special file .the quality of the code is assured by a set of unit tests .the boost unit test environment was chosen for this purpose .the code documentation can be generated using doxygen .finally , the whole source code is store on a git repository .the knedle internal architecture was designed in a modular form , in order to easily extend it with further detector components . in order to separate a given graphical representation from data ,the model - view - controller pattern has been applied ( see fig . [fig : archi ] ) .the dataprocessor is responsible for reading the data and providing it for the visualization .the edgui manges graphical user interface along with the different detector views .the controller mediates between the dataprocessor and the edgui module .the communication between the controller module and the user interface is implemented using the signal - slot technique .the edroottuplereader is responsible for reading the data in the tree root format .the current version of the event display implements the inner tracker and drift chamber geometries .it permits for the visualization of the x - strips layers ( for the geometry details please refer to ) on the event by event basis .along with the graphical representation , the basic text information is displayed ( see fig . [fig : view ] ) .the button `` save to picture '' permits to save the current view in the `` png '' format .in this article , we presented knedle , the new event display for the kloe-2 detector .it is a root - based application written in c++ , that can be compiled and run on the kloe server ( aix ) and separately on the linux operating system .the current version implements several views of the inner tracker and the drift chamber .it permits to visualize the inner tracker strips by layers .also the procedure to visualize the trajectories in the drift chamber is ready to use .the program supports the root tree format as an input data .the operation with the ybos files is planned to be added in the near future .this work was supported in part by the eu integrated infrastructure initiative hadron physics project under contract number rii3-ct- 2004 - 506078 ; by the european commission under the 7th framework programme through the _ research infrastructures _ action of the _ capacities _ programme , call : fp7-infrastructures-2008 - 1 , grant agreement no .227431 ; by the polish national science centre through the grants no .0469/b / h03/ 2009/37 , 0309/b / h03/2011/40 , 2011/03/n / st2/02641 , 2011/01/d / st2/ 00748 , 2011/03/n / st2/02652 , 2013/08/m / st2/00323 , 2014/12/s / st2/00459 and by the foundation for polish science through the mpd programme and the project homing plus bis/2011 - 4/3 .00 a. balla et al.,_acta phys . pol ._ , vol . 6 , no . 4 , 1053 ( 2013 )a. di cicco and g. morello , _ acta .phys . pol .b _ this number ( 2014 ) .d. babusci et al . , _ nucl .instr . & meth ._ * a617 * , 81 ( 2010 ) .f. archilli et al . , _ nucl .instr . & meth ._ * a617 * , 266 ( 2010 ) .f. happacher et al .197 _ , 215 ( 2009 ) .m. cordelli et al . , _ nucl .instr . & meth ._ * a617 * , 105 ( 2010 ) .boost - website of the project http://www.boost.org/ doxygen - website of the project www.doxygen.orgr. 
brun and f. rademakers , _ nucl. inst . & meth ._ * a389 * 81 - 86 ( 1997 ) .see also http://root.cern.ch/. | in this contribution we describe knedle - the new event display for the kloe-2 experiment . the basic objectives and software requirements are presented . the current status of the development is given along with a short discussion of the future plans . |
in this short note we study the semantics of two basic computational effects , exceptions and states , from a new point of view .exceptions are studied in section [ sec : exceptions ] .the focus is placed on the exception `` flags '' which are set when an exception is raised and which are cleared when an exception is handled .we define the _ exception constructor _ operation which sets the exception flag , and the _ exception recovery _ operation which clears this flag .states are considered in the short section [ sec : states ] . then in section [ sec : dual ] we show that our point of view yields a surprising result : there exists a symmetry between the computational effects of exceptions and states , based on the categorical duality between sums and products . more precisely , the lookup and update operations for states are respectively dual to the constructor and recovery operations for exceptions .this duality is deeply hidden , since the constructor and recovery operations for exceptions are mixed with the control .this may explain that our result is , as far as we know , completely new .states and exceptions are _ computational effects _ : in an imperative language there is no type of states , and in a language with exceptions the type of exceptions which may be raised by a program is not seen as a return type for this program . in this notewe focus on the denotational semantics of exceptions and states , so that the sets of states and exceptions are used explicitly .however , with additional logical tools , the duality may be expressed in a way which fits better with the syntax of effects .other points of view about computational effects , involving monads and lawvere theories , can be found in .however it seems difficult to derive from these approaches the duality described in this note .the syntax for exceptions heavily depends on the language . for instance in ml - like languagesthere are several exception _ names _ , and the keywords for raising and handling exceptions are ` raise ` and ` handle ` , while in java there are several exception _ types _ , and the keywords for raising and handling exceptions are ` throw ` and ` try - catch ` . in spite of the differences in the syntax ,the semantics of exceptions share many similarities .a major point is that there are two kinds of values : the ordinary ( i.e. , non - exceptional ) values and the exceptions .it follows that the operations may be classified according to the way they may , or may not , interchange these two kinds of values : an ordinary value may be `` tagged '' for constructing an exception , then the `` tag '' may be cleared in order to recover the value .first let us focus on the raising of exceptions .let denote the set of _ exceptions_. the `` tagging '' process can be modelled by injective functions called the _exception constructors _ , with disjoint images : for each index in some set of indices , the exception constructor maps a non - exceptional value ( or _ parameter _ ) to an exception .when a function _ raises _ ( or _ throws _ ) an exception of index , the following _ raising _ operation is called : the raising operation is defined as the exception constructor followed by the inclusion of in . given a function and an element , if for some then one says that _ raises an exception of index with parameter into . one says that a function _ propagates exceptions _ when it is the identity on . 
clearly , any function can be extended in a unique way as a function which propagates exceptions .now let us study the handling of exceptions .the process of clearing the `` exception tags '' can be modelled by functions called the _ exception recovery _operations : for each and the exception recovery operation tests whether the given exception is in the image of .if this is actually the case , then it returns the parameter such that , otherwise it propagates the exception . for handling exceptions of indices raised by some function , one provides a function , which may itself raise exceptions , for each in .then the handling process builds a function which propagates exceptions , it may be named or : using the recovery operations , the handling process can be defined as follows .999 = 999 = 999 = 999 = for each , is defined by : + // _ if was an exception before the , then it is just propagated _+ if then return ; + // _now is not an exception _ + compute ; + if then return ; + // _now is an exception _ + for repeat + compute ; + if then return ; + // _now is an exception but it does not have index , for any _ + return .given an exception of the form , the recovery operation returns the non - exceptional value while the other recovery operations propagate the exception .this is expressed by the equations ( [ eq : exceptions - explicit ] ) in figure [ fig : exceptions ] . whenever with the s as coprojections , then equations ( [ eq : exceptions - explicit ] ) provide a characterization of the operations s .* for each index : * a set ( parameters ) * two operations ( exception constructor ) + and ( exception recovery ) * and two equations : which correspond to commutative diagrams , where and are the injections : {{\mathit{m}}_i } \\ { \mathit{exc}}\ar[u]^{c_i } & { \mathit{par}}_i \ar[l]^{t_i } \ar[u]_{{\mathit{id } } } \ar@{}[ul]|{= } \\ } \qquad \qquad \xymatrix=1pc=2pc { { \mathit{par}}_i+{\mathit{exc } } & { \mathit{exc}}\ar[l]_(.3){{\mathit{n}}_i } & { \mathit{par}}_j \ar[l]_{t_j } \\ { \mathit{exc}}\ar[u]^{c_i } & & { \mathit{par}}_j \ar[ll]^{t_j } \ar[u]_{{\mathit{id } } } \ar@{}[ull]|{= } \\ } \ ] ]now let us forget temporarily about the exceptions in order to focus on the semantics of an imperative language .let denote the set of _ states _ and the set of _ locations _ ( also called _variables _ or _ identifiers _ ) . for each location ,let denote the set of possible _ values _ for .for each there is a _ lookup _operation for reading the value of location in the given state .in addition , for each there is an _ update _ operation for setting the value of location to the given value , without modifying the values of the other locations in the given state .this is summarized in figure [ fig : states ] . 
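the constructor / recovery operations and the handling algorithm above can be made concrete with a small executable model. in the python sketch below (an illustration with names of our own choosing, not the paper s formalism) an exception is represented as a tagged triple, tagging plays the role of the constructor, untagging is the recovery operation, and the two branches of `recover` correspond to the two equations of the figure.

```python
# Toy executable model of exceptions as tagged values (names are ours).
# An ordinary value is any x; an exception is the triple ("exc", i, p).

def construct(i, p):                 # t_i : Par_i -> Exc  (tagging)
    return ("exc", i, p)

def is_exception(x):
    return isinstance(x, tuple) and len(x) == 3 and x[0] == "exc"

def recover(i, e):                   # c_i : Exc -> Par_i + Exc  (untagging)
    if is_exception(e) and e[1] == i:
        return e[2]                  # c_i(t_i(p)) = p
    return e                         # c_i(t_j(p)) = t_j(p) for j != i : propagate

def propagate(f):
    """Extend f to a function that is the identity on exceptions."""
    return lambda x: x if is_exception(x) else f(x)

def handle(f, handlers):
    """try { f } catch (i -> g_i), following the algorithm in the text."""
    def wrapped(x):
        if is_exception(x):
            return x                 # an exception raised before is propagated
        y = f(x)
        if not is_exception(y):
            return y
        for i, g in handlers.items():
            r = recover(i, y)
            if not is_exception(r):
                return g(r)          # recovered: run the handler on the parameter
        return y                     # no handler for this index: propagate
    return wrapped

if __name__ == "__main__":
    inverse = lambda x: construct("div0", x) if x == 0 else 1.0 / x
    safe_inverse = handle(inverse, {"div0": lambda p: float("inf")})
    print(safe_inverse(4), safe_inverse(0))   # 0.25 inf
```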
whenever with the s as projections , two states and are equal if and only if for each , and equations ( [ eq : states - explicit ] ) provide a characterization of the operations s .* for each location : * a set ( values ) * two operations ( lookup ) + and ( update ) * and two equations : which correspond to commutative diagrams , where and are the projections : ^{{\mathit{p}}_i } \ar[d]_{u_i } & { \mathit{val}}_i \ar[d]^{{\mathit{id } } } \\ { \mathit{st}}\ar[r]_{l_i } & { \mathit{val}}_i \ar@{}[ul]|{= } \\ } \qquad\qquad \xymatrix=1pc=2pc { { \mathit{val}}_i\times{\mathit{st}}\ar[r]^(.7){{\mathit{q}}_i } \ar[d]_{u_i } & { \mathit{st}}\ar[r]^{l_j } & { \mathit{val}}_j \ar[d]^{{\mathit{id } } } \\ { \mathit{st}}\ar[rr]_{l_j } & & { \mathit{val}}_j \ar@{}[ull]|{= } \\ } \ ] ]our main result is now clear from figures [ fig : exceptions ] and [ fig : states ] .[ theo : duality ] the duality between categorical products and sums can be extended as a duality between the semantics of the lookup and update operations for states on one side and the semantics of the constructor and recovery operations for exceptions on the other side . in an equational presentation of states is given , with seven families of equations .these equations can be translated in our framework , and it can be _ proved _ that they are equivalent to equations ( [ eq : states - explicit ] ) . then by duality we get for free seven families of equations for exceptions .for instance , it can be proved that for looking up the value of a location only the _ previous _ updating of this location is necessary , and dually , when throwing an exception constructed with only the _ next _ recovery operation , with the same index , is necessary . | * abstract . * in this short note we study the semantics of two basic computational effects , exceptions and states , from a new point of view . in the handling of exceptions we dissociate the control from the elementary operation which recovers from the exception . in this way it becomes apparent that there is a duality , in the categorical sense , between exceptions and states . |
complex networks are realistic substrates for simulating many social and natural phenomena . to address the influence of network topology , primarily ,different classes of degree distributions can be considered . meanwhile , for a given distribution of degrees , correlations may give rise to important network structure effects on the studied process .these structural effects may have important consequences , for instance , correlations may shift the epidemic threshold . although correlation effects may be absent in some cases , in other ones , they can not be neglected .despite there are efficient algorithms to generate networks with fixed degree - degree correlations , real joint probabilities of two or more degrees measured in networks of moderate size may be noisy and hard to be modeled . then , operationally , average nearest - neighbors degree distributions or single quantity measures are used .although other variants have been defined in the literature , as quantifier of the tendency of adjacent vertices to have similar or dissimilar degrees , we will consider the standard measure of ( linear ) degree - degree correlations , namely , the assortativity ( pearson ) coefficient where denotes average over edges and and are the degrees of vertices at each end of an edge . despite this coefficientis known to present some drawbacks , it is a standard and commonly used quantity , hence being worth to be analyzed .moreover , it has the advantage of being a single value measure , that is easier to be controlled than other multi - valued quantities . to analyze the influence of correlations , as well as of any other structural feature, it is useful to build ensembles of networks holding that property , while keeping fixed the sequence of degrees . as it will be described in sec .[ sec : ensembles ] , this kind of ensembles can be achieved by means of a suitable rewiring , performed through a standard simulated annealing monte carlo ( mc ) procedure to minimize a given energy - like quantity ( maximum entropy ensemble approach ) , function of the graph property to be controlled ( in our case ) .once tuned , it is important to characterize how other network properties are altered as by product .some interdependencies among certain network properties have already been numerically shown in the literature , for real as well as for artificial graphs .analytical relations have also been derived .because of its crucial role in spreading phenomena , we will focus here on the effect of over typical distance measures as well as on the branching and transitivity of links . as a measure of the average separation between nodes , we consider the average path length . 
in the subsequent calculations we use the expression , where is the number of ( disconnected ) clusters and is the number of nodes in cluster .moreover , being the distance ( number of edges along the shortest path ) between nodes and ( taking if the nodes do not belong to the same cluster ) , then alternatively , in order to avoid the issue of the divergence of the distance between disconnected nodes , we consider the inverse , , of the so - called efficiency where is the number of nodes .it represents a harmonic mean instead of the arithmetic one .we also compute the diameter .the transitivity of links can be measured by the clustering coefficient where is the number of triangles and is the degree of node .we also considered the mean value , , of the local clustering coefficient , defined as , where is the number of connections between the neighbors of vertex .we took when or 1 .other measures that arise in the decomposition of will also be considered . besides detecting interdependencies among structural properties , it is also important to know how these properties depend on the system size .we will analyze these issues for two main classes of degree distributions ( poisson and power - law tailed ) .we will also investigate real networks degree sequences .for each class of networks , we will consider different values of the size , , and the mean degree , , within realistic ranges . as a paradigm of the class of networks with a peaked distribution of degrees , with all its moments finite , we consider the random network of erds and rnyi . within this model ,a network with nodes is assembled by selecting different pairs of nodes at random and linking each pair .the resulting distribution of links is the poisson distribution , where the mean degree is .we also analyze networks of the power - law type , i.e. , with , , corresponding to a wide distribution of degrees , with power - law tails .then , moments of order are divergent .we built power - law networks by means of the configuration model . following this procedure ,one starts by choosing random numbers , drawn from the degree distribution .they represent the number of edges coming out from each node , where these edges have one end attached to the node and another still open .second , two open ends are randomly chosen and connected such that , although multiple connections are allowed , self connections are not .this second stage is repeated until each node attains the connectivity attributed in the first step .if eventually an edge has an open end , then it is discarded .however , for large networks , the fraction of discarded edges is negligible .to draw the set of numbers with probability , with ( hence the normalization factor is ) , we used the inverse transform algorithm .notice that and , then we determined to fit the selected value of ( within a tolerance of at most 1% ) , such that it is worth mentioning that the value is not usually achieved , the natural cut - off being . in order to attain a desired value of , we follow an standard rewiring approach .we want to build an ensemble of networks \{g } with a given value of ( -ensemble ) but that are maximally random in other aspects , i.e. , making the fewer number of assumptions as possible about the distribution .then , we use an exponential random graph model , such that the set of networks \{g } has distribution , where is a hamiltonian or energy - like quantity . 
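A sketch (our own, with the minimum degree k_min and the natural cut-off k_max = N - 1 as free parameters) of the inverse-transform sampling of the degree sequence used in the configuration model described above; the total degree is made even so that all stubs can be paired.

```python
# Inverse-transform sampling of a discrete power-law degree sequence P(k) ~ k^{-gamma}.
import numpy as np

def power_law_degrees(n, gamma, k_min=2, seed=None):
    rng = np.random.default_rng(seed)
    ks = np.arange(k_min, n)                    # natural cut-off k_max = n - 1
    pk = ks.astype(float) ** (-gamma)
    pk /= pk.sum()                              # normalisation factor
    cdf = np.cumsum(pk)
    u = rng.random(n)
    idx = np.minimum(np.searchsorted(cdf, u), len(ks) - 1)
    degrees = ks[idx]
    if degrees.sum() % 2:                       # stub pairing needs an even total
        degrees[rng.integers(n)] += 1
    return degrees

deg = power_law_degrees(10_000, gamma=2.5)
print(deg.mean(), deg.max())
```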
in order to get an -ensemble , with ,we consider where is a real parameter .the ensemble can be simulated by means of a mc procedure : at each step , a rewiring attempt is accepted with probability }\} ] , while for : /{\cal o}(n ) $ ] , meaning , with . to approach the lower limit , one must have minimal .if it is of order greater than the other terms in the numerator , then one can not have negative , because is non - negative and it will dominate the numerator . thus , negative can arise only if is of the same or lower order .but in that case in the large limit .this explains why the lower bound tends to 0 when ( see fig .[ fig : rlim](b ) ) . along this line , however , is not expected to vanish when , but to tend to a small finite value .similarly , to attain a non - null upper bound of , needs to grow like the denominator , otherwise , the upper bound will be negative and also vanish when , leading to the collapse of the upper bound too .however , this does not necessarily happens if is driven to grow enough during rewiring , which is what seems to be happen according to fig .[ fig : rlim](b ) .the connection between and distance measures is not so direct analytically .numerical results showed that , for networks with localized distribution of links , changing modifies significantly the mean path length only when correlations are assortative ( ) and small .these changes could be related to the induced fragmentation , that diminishes by increasing .then , the impact of becomes less important as increases .meanwhile , the influence on the diameter is more dramatic . in power - law networks ,the modification of the mean path length by is a bit more marked even if fragmentation is absent for , while the diameter is not largely affected . in both cases ,the modification of characteristic lengths that occur when varying may affect transport processes and should be taken into account either when interpreting or designing numerical experiments on top of these networks .we acknowledge partial financial support from brazilian agency cnpq .the authors are grateful to professor thadeu penna for having provided the computational resources of the group of complex systems of the universidade federal fluminense , brazil , where some of the simulations were performed .m. a. serrano , m. bogu , r. pastor - satorras , a. vespignani , correlations in complex networks . in _ large scale structure and dynamics of complex networks : from information technology to finance and natural sciences _ , g. caldarelli , a. vespignani , editors , ( world scientific , singapore , 2007 ) . | correlations may affect propagation processes on complex networks . to analyze their effect , it is useful to build ensembles of networks constrained to have a given value of a structural measure , such as the degree - degree correlation , being random in other aspects and preserving the degree sequence . this can be done through monte carlo optimization procedures . meanwhile , when tuning , other network properties may concomitantly change . then , in this work we analyze , for the -ensembles , the impact of on properties such as transitivity , branching and characteristic path lengths , that are relevant when investigating spreading phenomena on these networks . the present analysis is performed for networks with degree distributions of two main types : either localized around a typical degree ( with exponentially bounded asymptotic decay ) or broadly distributed ( with power - law decay ) . size effects are also investigated . 
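The rewiring step itself can be sketched as follows. The choices below are our own schematic ones (energy H = (r - r_target)^2, Metropolis acceptance exp(-dH/T), and a naive O(M) recomputation of r at every proposal); the paper's exact Hamiltonian and annealing schedule may differ. The sketch reuses the `assortativity` function defined earlier.

```python
# Degree-preserving double-edge swaps, annealed towards a target assortativity.
import math, random

def tune_assortativity(edges, r_target, steps=50_000, temp=1e-3, seed=None):
    """Uses assortativity() from the earlier sketch; recomputing it from scratch
    at every proposal is O(M), so this is only meant for small graphs."""
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]
    edge_set = set(edges)
    r = assortativity(edges)
    for _ in range(steps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
        # reject proposals creating self-loops or multi-edges
        if a == d or c == b or new1 == new2 or new1 in edge_set or new2 in edge_set:
            continue
        trial = edges.copy()
        trial[i], trial[j] = new1, new2
        r_new = assortativity(trial)
        dh = (r_new - r_target) ** 2 - (r - r_target) ** 2
        if dh <= 0 or rng.random() < math.exp(-dh / temp):
            edges, r, edge_set = trial, r_new, set(trial)
    return edges, r
```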
|
since poincar , homology has been used as the main descriptor of the topology of geometric objects . in the classical context , however , all homology classes receive equal attention .meanwhile , applications of topology in analysis of images and data have to deal with noise and other uncertainty .this uncertainty appears usually in the form of a real valued function defined on the topological space .persistence is a measure of robustness of the homology classes of the lower level sets of this function , , , .since it s unknown beforehand what is or is not noise in the dataset , we need to capture all homology classes including those that may be deemed noise later . in this paperwe introduce an algebraic structure that contains , without duplication , all these classes .each of them is associated with its persistence and can be removed when the acceptable threshold for noise is set .the last step can be carried out repeatedly in order to find the best possible threshold .the approach follows the approach to analysis of digital images presented in .the topological spaces subject to such analysis are cell complexes .cell complex _ is a combinatorial structure that describes how -dimensional cells are attached to each other along -dimensional cells .cell complexes come from the following two main sources .first , a gray scale image is a real - valued function defined on a rectangle . given a threshold , the lower level set can be thought of as a binary image .each black pixel of this image is treated as a square cell in the plane .these 2-dimensional cells are combined with their edges ( 1-cells ) and vertices ( 0-cells ) and in the -dimensional case , the image is decomposed into a combination of - , - , ... , -cubes .this process is called _thresholding_. the result is a cell complex for each see .second , a point cloud is a finite set in some euclidean space of dimension . given a threshold ,we deem any two points that lie within from each other as `` close '' . in this case , this pair of points is connected by an edge .further , if three points are `` close '' , pairwise , to each other , we add a face spanned by these points . if there are four , we add a tetrahedron , and , finally , any `` close '' points create a -cell .the process is called the _ vietoris - rips construction_. the result is a cell complex for each .next , we would like to quantify the topology of the cell complex it is done via the _ betti numbersof _ : is the number of connected components in ; is the number of holes or tunnels ( 1 for letter o or the donut ; 2 for letter b and the torus ) ; is the number of voids or cavities ( 1 for both the sphere and the torus ) , etc .the betti numbers are computed via _ homology theory _ .one starts by considering the collection _ _ _ _ of all formal linear combinations ( over a ring ) of in called _chains_. combined they form a finitely generated abelian group called the _ chain complex _ , or collectively a -chain can be recorded as an -vector , where is the total number of -cells in .the boundary of a -chain is the chain comprised of all -faces of its cells taken with appropriate signs . then the _ boundary operator _ acts on the chain complex and is represented by a matrix . from the chain complex homology group is constructed by means of the standard algebraic tools . to capture the topological features one concentrates on _ cycles _ , i.e. 
, chains with zero boundary , .further , one can verify whether two given -cycles and are _ homologous _ : the difference between them is the boundary of a -chain ( such as two meridians of the torus ) . in this case , and belong to the same _ homology class _ =[b] ] representing the life - spans , called _ barcodes _ , of the homology classes .our approach is somewhat different .it consists of two steps .first step : we pool all possible homology classes in all elements of the filtration together in a single algebraic structure ( sections 4 and 5 ) .the presence of noise is ignored . the homology group of filtration _ _ _ _ captures all homology classes in the whole filtration without double counting .second step : for a given positive integer the -__noise group _ _ _ _is comprised of the homology classes in with the persistence less than next , we `` remove '' the noise from the homology group of filtration by using the quotient ( sections 6 and 7 ) : in other words : _ if the difference between two homology classes is deemed noise , they are equivalent_. the second step can be repeated as needed .we also discuss the computational aspects of this approach ( section 8) and multiparameter filtrations ( section 9 ) .our approach provides a coarser classification of the homology of filtrations than the one based on barcodes .the reason is that all homology classes with long enough life - spans , i.e. , high persistence , have equal place in the homology group of the filtration regardless of the time of birth and death .in this section we will try to understand the meaning of the homology of the gray scale image in figure 1 . for simplicitywe assume that there are only 2 levels of gray in addition to black and white .a visual inspection of the image suggests that it has three connected components each with a hole .therefore , its - and -homology groups shouldhave three generators each .we now develop an algebraic procedure to arrive at this result .first the image is `` thresholded '' .the lower level sets of the gray scale function of the image form a filtration : a sequence of three binary images , i.e. cell complexes : where the arrows represent the inclusions .suppose are the homology classes that represent the components of and are the holes , clockwise starting at the upper left corner .the homology groups of these images also form sequences one for each dimension 0 and 1 .suppose are the two homology maps , i.e. , homomorphisms of the homology groups generated by the inclusions of the complexes , with included for convenience .these homomorphisms act on the generators , as follows : to avoid double counting , we want to count only the homology classes that do nt reappear in the next homology group .as it turns out , a more algebraically convenient way to accomplish this is to count only the homology classes that go to under these homomorphisms .these classes form the kernels of . now, we choose the homology group of the original , gray scale image to be the direct sum of these kernels: the image has three components and three holes , as expected .in the following sections we provide formal definitions .all cell complexes are finite .suppose we have a one - parameter filtration: are cell complexes and the arrows represent the inclusions and so do .we will denote the filtration by or simply next , homology generates a `` direct system '' of groups and homomorphisms: denote this direct system by or simply the zero is added in the end for convenience . 
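For completeness, a small sketch (our own, with field coefficients so that matrix ranks suffice) of how the Betti numbers described above follow from the boundary matrices: beta_k = dim C_k - rank d_k - rank d_{k+1}.

```python
# Betti numbers from ranks of the boundary matrices (field coefficients).
import numpy as np

def betti_numbers(boundaries, n_cells):
    """boundaries[k] is the matrix of d_k : C_k -> C_{k-1} (boundaries[0] is
    ignored); n_cells[k] is the number of k-cells."""
    ranks = [0]
    for b in boundaries[1:]:
        b = np.asarray(b)
        ranks.append(np.linalg.matrix_rank(b) if b.size else 0)
    ranks.append(0)                              # there are no (top+1)-cells
    return [n_cells[k] - ranks[k] - ranks[k + 1] for k in range(len(n_cells))]

# hollow triangle (three vertices, three edges, no 2-cell): one component, one hole
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])                    # columns = edges, rows = vertices
print(betti_numbers([None, d1], n_cells=[3, 3]))   # [1, 1]
```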
our goal is to define a single structure that captures all homology classes in the whole filtration without double counting .the rationale is that if and there is no other satisfying this condition , then and may be thought of as representing the same homology class of the geometric object behind the filtration . the _ homology group of filtration _ is defined as the product of the kernels of the inclusions: , from each group we take only the elements that are about to die .since each dies only once , there is no double - counting . since the sequence ends with we know that everyone will die eventually .hence every homology class appears once and only once .these are a few simple facts about this group .if is an isomorphism for each then if is a monomorphism for each then and are filtrations .then suppose and are filtrations and is a cell map .then the homology map of the homology groups of these filtrations is well defined as where is the restriction of to the stability of the homology group of a filtration follows from the stability of its persistence diagram , i.e. , the set of points for the generators of the homology groups of the filtration , plus the diagonal .it is proven in that where is the bottle - neck distance between the persistence diagrams of two filtrations generated by tame functions function creates an analogue bottle - neck distance for the set of points and its stability follows from the continuity of .to justify our approach to persistence , we observe that some of the features in the image in figure 1 are more prominent than others . in particular , some of the features have lower contrast .these are the holes in the second and the third rings as well as the third ring itself . by _contrast _ of a lower level set of the gray level function we understand the difference between the highest gray level adjacent to the set and the lowest gray level within the set .an easy computation shows that the homology classes with persistence of 3 or higher among the generators are : however , the set of the classes of high persistence is nt a subgroup of the homology group of the respective complex .instead , we look at the classes with _ low _ persistence , i.e. , the noise . in particular , the classes in of persistence 2 or lower formthe kernel of .we now `` remove '' this noise from the homology groups of the filtration by considering their quotients over these kernels .in particular , the 3-persistent homology groups of the image are: that the output is identical to the homology of a single complex , i.e. , a binary image , with two components and one hole .the way persistence is defined ensures that we can never remove a component as noise but keep a hole in it .observe now that the holes in the second and third rings have the same persistence ( contrast ) and , therefore , occupy the same position in the homology group regardless of their birth dates ( gray level ) .second , if we shrunk one of these rings , its persistence and , therefore , its place in the homology group would nt change .these observations confirm the fact that the homology group of the gray scale image , unlike the barcodes , captures only its topology . 
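Read with field coefficients, the definition above says the homology group of the filtration is the direct sum of the kernels of the inclusion-induced maps (with the last group mapping to 0), so its dimension is a sum of nullities. A toy sketch with made-up matrices for the induced maps:

```python
# Dimension of the homology group of a filtration as a sum of kernel dimensions
# (field coefficients; the induced maps below are hypothetical, for illustration).
import numpy as np

def nullity(m):
    m = np.atleast_2d(np.asarray(m, dtype=float))
    return m.shape[1] - np.linalg.matrix_rank(m)

def dim_filtration_homology(induced_maps):
    """induced_maps[i] represents f_i* : H(K_i) -> H(K_{i+1}); a zero map onto 0
    is appended for the last complex, so every class eventually dies."""
    return sum(nullity(m) for m in induced_maps)

f1 = [[1, 0], [0, 1], [0, 0]]   # 2 classes map injectively into 3: nothing dies
f2 = [[1, 1, 0]]                # 3 classes merge into 1: two die here
f3 = [[0]]                      # final map to 0: the last class dies here
print(dim_filtration_homology([f1, f2, f3]))   # 0 + 2 + 1 = 3
```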
in the case of a vietoris - ripps complex , not only the barcode , [ birth , death ] , but also the persistence , death - birth , of a homology class contains information about the size of representatives of these classes .for example , a set of points arranged in a circle will produce a 1-cycle with twice as large birth , death , and persistence than the same set shrunk by a factor of 2 .however , persistence defined as death / birth will have the desired property of scale independence .the same result can be achieved by an appropriate re - parametrizing of the filtration .in the general context of filtrations the measure of importance of a homology class is its persistence which is the length of its lifespan within the direct system of homology of the filtration . given filtration we say that _ the persistence _ _ of _ _ _ is equal to _ _ if and our interest is in the `` robust '' homology classes , i.e. the ones with high persistence . however , the collection of these classes is not a group as it does nt even contain 0 .so we deal with `` noise '' first .given a positive integer the -noise ( homology ) group _ _ of is the group of all elements of with persistence less than alternatively , we can define these groups via kernels of the homomorphisms of the inclusions : next , we `` remove '' the noise from the homology group .the -persistent ( homology ) group _ _ of with respect to the filtration is defined as point of this definition is that , given a threshold for noise , if the difference between two homology classes is noise , they should be equivalent .next , just as in the case of noise - less analysis , we define a single structure to capture all ( robust ) homology classes .let be a positive integer suppose and let then we have proved that follows that the homomorphism generated by the inclusion is well - defined .next , we use these homomorphisms to define the -noise ( homology ) group _ _ _ _ of filtration _ _ as that the formula is the same as the one in the definiton of since is a restriction of each term in the above definition is a subgroup of the corresponding term in the definition of the proposition below follows . finally , the _ _ _ _ -persistent ( homology ) group of filtration __ is the results about analogous to the ones about in section 5 hold .for 2-dimensional gray scale images , this approach to homology and persistence has been used in an image analysis program . the algorithm described in has complexity of where is the number of pixels in the image . for the general case ,the analysis algorithm may be outlined as follows : 1 .the input is a filtration .2 . the homology groups of its members and the homomorphisms induced by inclusions are computed .3 . the homology group of the filtration is computed .4 . the persistence of all elements of the homology groups is computed .the user sets a threshold for persistence and the -noise group of the filtration is computed .the -persistent homology group of the filtration is computed and given as output .if the user changes the threshold , the last step is repeated as necessary without repeating the rest .the algorithm above computes the homology group of filtration , as defined , incrementally .this may be both a disadvantage and an advantage . in comparison, the persistence complex also contains information about all homology classes of the filtration but its computation does not require computing the homology of each complex of the filtration . 
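To make the Vietoris-Rips thresholding concrete, here is a small brute-force sketch (our own code, exponential in max_dim and only suitable for tiny point clouds) that lists the simplices present at a single threshold r; sweeping r over a grid produces the filtration discussed above.

```python
# Brute-force Vietoris-Rips complex of a point cloud at a single threshold r.
import numpy as np
from itertools import combinations

def vietoris_rips(points, r, max_dim=2):
    """All simplices of dimension <= max_dim whose vertices are pairwise within r."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    close = np.linalg.norm(points[:, None] - points[None, :], axis=-1) <= r
    simplices = [(i,) for i in range(n)]
    for dim in range(1, max_dim + 1):
        for combo in combinations(range(n), dim + 1):
            if all(close[i, j] for i, j in combinations(combo, 2)):
                simplices.append(combo)
    return simplices

pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
print(vietoris_rips(pts, r=1.5))   # three edges and one triangle among the close points
```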
meanwhile , the above algorithm may have to compute the same homology over and over if consecutive complexes are identical . hence , the algorithm has a disadvantage in terms of processing time . on the other hand ,the incremental nature of the algorithm makes its use of memory independent from the length of the filtration .another advantage is that multi - parameter filtrations are dealt with in the exact same manner ( see next section ) .the inefficiency of the above algorithm can be addressed with a proper algebraic tool .this tool is the mapping cone .suppose , for simplicity , that our filtration has only two elements : the mapping cone is , in a sense , a combination of the kernel and the cokernel of .it captures the difference between and on the chain level : everything in is killed unless it also appears in under .then the algorithm is to construct the homology group from the chain complexes of the elements of the filtration and the chain map filtrations come from the same main sources as one - parameter filtrations . first , color images are thresholded according to their three color channels .second , point clouds are thresholded by the closeness of their points and , for example , the density of hte points .let limit our attention to the two - parameter case .a ( finite ) two - parameter filtrations is a table of complexes connected by inclusions these inclusions generate homomorphisms 0s added in the end of each row and each column .define the _ homology group of the filtration _ as analogues of the results in section 5 hold .there are many ways to define persistence in the multiparameter setting .for example , we can evaluate the robustness of a homology class in terms of the pairs of positive integerssatisfying , just as in section 7 , we restrict the homomorphisms generated by the inclusions to the homology classes of low persistence: the -_noise group of _ is defined via these homomorphisms: , the -persistent ( homology ) group of filtration _ _ is defined as | such modern applications of topology as digital image analysis and data analysis have to deal with noise and other uncertainty . in this environment , the data structures often appear `` filtered '' into a sequence of cell complexes . we introduce the homology group of the filtration as the group of all possible homology classes of all elements of the filtration without double count . the second step of analysis is to discard the features that lie outside the user s choice of the acceptable level of noise . |
in texturing , we often encounter the following problem : fill a region with a given collection of small square patches in such a way that patches of a same kind do not appear in a row .we make this problem more precise . for natural numbers and , let be a rectangular grid we call its elements _cells_. for a finite set of tiles with , we call a function a _ tiling _ of with . for a natural number and ,we say satisfies the condition if there is no horizontal strip with more than consecutive s , that is , there is no such that .similarly , we say satisfies the condition for a natural number , if there is no vertical strip with more than consecutive s .consider a set consisting of conditions of the form and with varying and .alternatively , we can think of as functions so that . here , we allow , which will be never violated , for notational simplicity. we will use both notations interchangeably .we say a tiling is -_dappled _ if satisfies all the conditions in .the problem we are concerned is : give an efficient algorithm to produce -dappled tilings , which posses some controllability by the user . notethat enumerating all the -dappled tilings is fairly straightforward ; we can fill cells sequentially from the top - left corner . however , this is not practical since there are exponentially many -dappled tilings with respect to the number of cells , and many of them are not suitable for applications as we see below .. there exist at least tilings which are -dappled .we will create _ draughtboard _ tilings .for each cell , choose any tile and put the same tile at ( if it exists ) .pick any and put them at and ( if they exist ) .one can see that for any the tile at or is different from that at . similarly , the tile of or is different from that of , and hence , the tiling thus obtained is -dappled with any . there are cells of the form , and hence , there are at least draughtboard tilings .it is easy to see that the above argument actually shows that there are at least draughtboard ( and hence , -dappled ) tilings with .of course , draughtboard patters look very artificial and are not suitable for texturing .we would like to have something more natural .therefore , instead of enumerating all of the -dappled tilings , in this paper we provide an algorithm to produce one in such a way that the user has some control over the output . we also discuss a concrete applications with the brick wang tiles studied in , and with flow generation .for the special case of and , the numbers of -dappled tilings for several small and are listed at .no recursive or non - recursive formula for general and nor a generating function is known as far as the authors are aware .we show an example of a draughtboard tiling .let .then for any set of conditions , the following is an -dappled tiling .first , note that the problem becomes trivial when ( we can choose a tile for at step ( i ) below which is different from and ) .so , we assume consists of two elements .fix a set of conditions and we just say dappled for -dappled from now on . given any tiling , we give an algorithm to convert it into a dappled one .we can start with a random tiling or a user specified one .the idea is to perform `` local surgery '' on .we say _ violates _ the condition ( resp . ) at when ( resp . ) . for a cell define its _ weight _ .let be a cell with the minimum weight such that violates any of the conditions or .we modify some values of around to rectify the violation in the following manner . 
1 .set if it does not violate any condition at in .2 . otherwise , set , and .let us take a close look at the step ( ii ) .assume that violated at .this means .note also that since otherwise we could set at the step ( i ) .when , we can set without introducing a new violation at .when , we can set and without introducing a new violation at either of or .a similar argument also holds when is violated at . after the above procedure, the violation at is resolved without introducing a new violation at cells with weight .( we successfully `` pushed '' the violation forward . )notice that each time either the minimal weight of violating cells increases or the number of violating cells with the minimal weight decreases .therefore , by repeating this procedure a finite number of times , we are guaranteed to obtain a dappled tiling transformed from the initially given one .the algorithm works in whatever order the cells of a same weight are visited , but our convention in this paper is in increasing order of .all the examples are produced using this ordering .fix any , , and with .algorithm [ algorithm ] takes a tiling and outputs an -dappled tiling . if is already -dappled , the algorithm outputs .( note that in the below the values of and for negative indices should be understood appropriately ) the sub - routine returns true if violates any of horizontal or vertical conditions at the given cell . in practice ,the check can be efficiently performed by book - keeping the numbers of consecutive tiles of smaller weight in the horizontal and the vertical directions .see the python implementation for details .[ rem : p=1 ] algorithm [ algorithm ] does not always work when or for some .for example , when it can not rectify give two extensions of the main algorithm discussed in the previous section .it is easy to see that our algorithm works when the conditions and vary over cells .that is , and can be functions of as well as .this allows the user more control over the output .for example , the user can put non - uniform constraints , or even dynamically assign constraints computed from the initial tiling .let and , where and . in the left half , long horizontal white strips and long vertical orange stripsare prohibited , while in the right half , long vertical white strips and long horizontal orange strips are prohibited .sometimes we would like to have an -dappled tiling which can be repeated to fill a larger region .for this , we have to require the conditions to be _ cyclic _ ; for example , is violated if there is a cell with ,j)=\cdots f([i - p],j)=t ] is the reminder of divided by .we say a tiling is _ cyclically -dappled _ if it does not violate any of the conditions in in the above cyclic sense .we discuss a modification of our algorithm to produce a cyclically -dappled tiling .however , there are two limitations : it only works for a limited class of conditions ; when , we have to assume should satisfy for all .( see example [ cyclic : fail ] ) .the other drawback is that the algorithm changes an input tiling even when it is already -dappled .this is because it produces an -dappled tiling with additional conditions .let be any tiling .algorithm [ algorithm ] is modified as described in algorithm [ cyc - algorithm ] .we visit cells in increasing order of the weight as in algorithm [ algorithm ] .when the cell is visited , for each 1 .impose no horizontal conditions if 2 .impose if 3 .impose if , where is the smallest non - negative integer such that .( note that by the previous condition . )4 . 
impose otherwise . anddo similarly for . due to the extra condition imposed by ( ii ) , the output is in a restricted class of cyclically -dappled tilings .fix any , , and with for all .algorithm [ cyc - algorithm ] takes a tiling and outputs a cyclically -dappled tiling .[ cyclic : fail ] one might wonder why we can not just impose on the cells in ( ii ) above to make it work when .in this case , we may have to impose in ( iii ) , which is problematic as we see in the following example with : rectifying the cell will introduce a new violation at the one to the down - left , and vice versa . if consists of just two conditions , we can modify algorithm [ cyc - algorithm ] further to ensure the algorithm works even when .the idea is to make the first two rows and columns draughtboard : 1 . , , , and 2 . then , the rest is rectified with algorithm [ cyc - algorithm ] .the second requirement ensures that algorithm [ cyc - algorithm ] works . for the technical details ,refer to the implementation .[ cols="^,^,^ " , ]consider an -dappled tiling with and . we given an interpretation to it so that we can use it to create a crowd simulation .we start with particles spread over the tiling .they move around following the `` guidance '' specified by the tile .more precisely , each particle picks a direction according to the tile on which it locates . for example , assume a particle is at a cell with .then , choose either left or right and move in the direction .when it reaches the centre of an adjacent tile , say with , choose either up or down and proceeds .see the supplementary video .we defined the notion of dappled tilings , which is useful to produce texture patterns free of a certain kind of repetition .we gave an efficient algorithm to convert any tilings to a dappled one .our method has the following advantages .* it produces all the dappled tilings if we start with a random tiling .this is because the algorithm outputs as it is if the input is already -dappled . *it has some control over the distribution of tiles since we can specify the initial .we finish our discussion with a list of future work which encompasses both the theoretical and the practical problems . 1 . a better cyclic algorithm : in [ sec : cyclic ] we gave an algorithm to produce cyclically dappled tilings with some limitations .we would like to develop a better way to get rid of these limitations .2 . conditions specified by subsets : for , we define the condition which prohibits horizontal strips consisting of tiles in .we would like to give an algorithm to produce -dappled tilings , where consists of this kind of generalised conditions .closest dappled tiling : our algorithm takes a tiling as input and produces an -dappled tiling , which is usually not very different from the input . however , the output is not the closest solution in terms of the hamming distance . 
+ for algorithm converts but one of the closest dappled tilings to the input is + it is interesting to find an algorithm to produce an -dappled tiling closest to the given tiling .extension of the flow tiling in [ sec : flow ] : we can consider different kinds of tiles such as emitting / killing tiles , where new particles are born / killed , and speed control tiles , where the speed of a particle is changed .a parallel algorithm : our algorithm is sequential but it is desirable to have a parallelised algorithm .we may use cellular automaton to give one .global constraints : the conditions we consider in the -dappled tiling is _ local _ in the sense that they can be checked by looking at a neighbourhood of each cell .global constraints such as specifying the total number of a particular tile can be useful in some applications. we would like to generalise our framework so that we can deal with global constraints . 7 .boundary condition : given a partial tiling of , we can ask to extend it to an -dappled tiling .a typical example is the case where the tiles at the boundary are specified . in the cyclic setting, it is not even trivial to determine if there is a solution or not .+ consider a -grid with and the following partial tiling : there exists no cyclically -dappled tiling extending ( obtained by filling the cells marked with `` '' ) the given one .this is because in a cyclically -dappled tiling , there should be an equal number of and .this implies there should be exactly two s in each column , which is not the case with the above example .+ for a larger board , where , , and is divisible by , we have a similar example : there exists no cyclically -dappled tiling extending it .this can be checked , for example , by choosing a tile for and continue filling cells which are forced to have either or by the conditions .no matter what tile we choose for , we encounter violation at some point .+ we would like to have a more efficient algorithm to decide and solve tiling problems with boundary conditions .interpretation as a sat problem : the -dappled tiling is a satisfiability problem and it would be interesting to formalise it to give a formal verification of the algorithm .a part of this work was conducted during the imi short term research project `` formalisation of wang tiles for texture synthesis '' at kyushu university .the authors thank kyushu university for the support .the authors are grateful to yoshihiro mizoguchi for his helpful comments .a. derouet - jourdan , y. mizoguchi , and m. salvati , , in _symposium on mathematical progress in expressive image synthesis ( meis2015 ) _ , volume * 64 * of _ mi lecture note series _ , pages 6170 .kyushu university , 2015 . | we consider a certain tiling problem of a planar region in which there are no long horizontal or vertical strips consisting of copies of the same tile . intuitively speaking , we would like to create a dappled pattern with two or more kinds of tiles . we give an efficient algorithm to solve the problem , and discuss its applications in texturing . |
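As a companion to the constructions above, here is a small sketch (our own code and index conventions, not the authors' published implementation): a random draughtboard tiling as in the proof of the lower bound, together with the `violates` check used by the rectification algorithm, where the run of equal tiles is counted towards cells of smaller weight.

```python
# Draughtboard tilings and the local violation check for dappled tilings.
import random

def draughtboard(m, n, q_tiles=2, rng=random):
    """Random draughtboard tiling: 2x2 blocks [[a, b], [b, a]] with a != b, hence
    dappled whenever every condition allows runs of length at least two."""
    T = [[0] * n for _ in range(m)]
    for i in range(0, m, 2):
        for j in range(0, n, 2):
            a = rng.randrange(q_tiles)
            b = rng.choice([t for t in range(q_tiles) if t != a])
            for (di, dj), t in [((0, 0), a), ((1, 1), a), ((1, 0), b), ((0, 1), b)]:
                if i + di < m and j + dj < n:
                    T[i + di][j + dj] = t
    return T

def violates(T, i, j, p, q):
    """Does T break a horizontal bound p[t] or vertical bound q[t] at cell (i, j)?
    Runs are counted towards decreasing indices (cells of smaller weight)."""
    t = T[i][j]
    h = 1
    while i - h >= 0 and T[i - h][j] == t:
        h += 1
    v = 1
    while j - v >= 0 and T[i][j - v] == t:
        v += 1
    return h > p.get(t, float("inf")) or v > q.get(t, float("inf"))

m, n = 6, 8
T = draughtboard(m, n)
p = q = {0: 2, 1: 2}                 # at most two consecutive equal tiles
print(any(violates(T, i, j, p, q) for i in range(m) for j in range(n)))   # False
```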
it has been known that entanglement can be detected with help of special class of maps called positive maps .in particular there is an important criterion saying that acting on a given product hilbert space is separable if and only if for all positive ( but not completely positive ) maps the following operator (\varrho)\ ] ] has all non - negative eigenvalues which usually is written as (\varrho ) \geq 0 \label{positivemaps1}.\ ] ] here by we denote the identity map acting on . since any positivity - preserving map is also hermiticity - preserving , it makes sense to speak about eigenvalues of .however , it should be emphasized that there are many ( and equivalently the corresponding criteria ) and to characterize them is a hard and still unsolved problem ( see , e.g. , ref . and references therein ) . for a long timethe above criterion has been treated as purely mathematical .one used to take matrix ( obtained in some _ prior _ state estimation procedure ) and then put it into the formula ( [ positivemaps1 ] ) .then its spectrum was calculated and the conclusion was drawn. however it can be seen that for , say states acting on and maps , the spectrum of the operator consists of elements , while full _ prior _ estimation of such states corresponds to parameters .the question was raised as to whether one can perform the test ( [ positivemaps1 ] ) physically without necessity of _ prior _ tomography of the state despite the fact that the map is not physically realizable .the corresponding answer was that one can use the notion of structural physical approximation ( spa ) of un physical map which is physically realizable already , but at the same time the spectrum of the state (\varrho)\ ] ] is just an affine transformation of that of the ( unphysical ) operator .the spectrum of can be measured with help of the spectrum estimator , which requires estimation of only parameters which ( because of affinity ) are in one to one correspondence with the needed spectrum of ( [ positivemaps1 ] ) .note that for systems ( the composite system of two qubits ) , similar approaches lead to the method of detection of entanglement measures ( concurrence and entanglement of formation ) without the state reconstruction .the disadvantage of the above method is that realization of spa requires addition the noise to the system ( we have to put some controlled ancillas , couple the system , and then trace them out ) . in ref . the question was raised about the existence of noiseless quantum networks , i.e. , those of which the only input data are : ( i ) unknown quantum information represented by ( ii ) the controlled measured qubit which reproduces us the spectrum moments ( see ref .it was shown that for at least one positive map ( transposition ) the noiseless network exists .such networks for two - qubit concurrence and three - qubit tangle have also been designed . in the present paperwe ask a general question : do noiseless networks work only for special maps ( functions ) or do they exist for any positive map test ? in the case of a positive answer to the latter : is it possible to design a general method for constructing them ? can it be adopted to any criteria other than the one defined in ( [ positivemaps1 ] ) ? for this purposewe first show how to measure a spectrum of the matrix , where is an arbitrary linear , hermiticity - preserving map and is a given density operator acting on , with the help of only parameters estimated instead of . 
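Once the m spectrum moments are known, the spectrum itself follows from Newton's identities; the sketch below (standard textbook algebra, our own code) recovers the eigenvalues from the power sums alpha_k, which is the post-processing step behind the spectrum estimator.

```python
# Recovering the spectrum of an m x m hermitian matrix from its first m power sums.
import numpy as np

def spectrum_from_moments(alpha):
    """alpha[k-1] = sum_i lambda_i^k for k = 1..m; returns the m eigenvalues."""
    m = len(alpha)
    e = [1.0]                                    # elementary symmetric polynomials
    for k in range(1, m + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * alpha[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    coeffs = [(-1) ** i * e[i] for i in range(m + 1)]   # char. poly, descending powers
    return np.sort(np.roots(coeffs).real)        # tiny imaginary round-off dropped

lam = np.array([-0.5, 0.25, 1.0])
alpha = [float(np.sum(lam ** k)) for k in range(1, 4)]
print(spectrum_from_moments(alpha))              # recovers [-0.5, 0.25, 1.0]
```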
for bipartite where gives instead of .this approach is consistent with previous results where arbitrary polynomials of elements of a given state have been considered . in these worksit was shown out that any at most -th degree polynomial of a density matrix can be measured with help of two collective observables on copies of .in fact one can treat the moments of which we analyze below as polynomials belonging to such a class .we derive the explicit form of observables for the sake of possible future application .moreover , approach presented in the present paper allows for quite natural identification of observable that detects an arbitrary polynomial of the state subjected to some transformation .then we provide an immediate application in entanglement detection showing that for suitable the scheme constitutes just a right method for detecting entanglement without prior state reconstruction with the help of either positive map criteria ( [ positivemaps1 ] ) or linear contraction methods discussed later .since matrix is hermitian its spectrum may be calculated using only numbers ^{k}=\sum_{i=1}^{m}\lambda_{i}^{k}\qquad ( k=1,\ldots , m),\ ] ] where are eigenvalues of .we shall show that all these spectrum moments can be represented by mean values of special observables . to this aimlet us consider the permutation operator defined by the formula where and are vectors from .one can see that is just an identity operator acting on .combining eqs .( [ alfy ] ) and ( [ vka ] ) we infer that may be expressed by relation ^{\otimes k}\right\ } \label{alfa}\ ] ] which is generalization of the formula from refs . where was ( unlike here ) required to be a physical operation . at this stage the careful analysis of the right hand side of eq .( [ alfa ] ) shows that is a polynomial of at most -th degree in matrix elements of .this , together with the observation of refs . allows us already to construct a single collective observable that detects .however , for the sake of possible future applications we derive the observable explicitly below . to this aimwe first notice that may be obtained using hermitian conjugation of which again is a permutation operator but permutes states in the reversed order .therefore all the numbers may be expressed as .\ ] ] let us focus for a while on the map . due to its hermiticity - preserving propertyit may be expressed as with and being linearly independent -by- matrices . by the virtue of this fact and some well - known properties of the trace , after rather straightforward algebrawe may rewrite eq .( [ alfa2 ] ) as ,\ ] ] where is a dual map to and is given by .here we have applied a map on the operator instead of applying to .this apparently purely mathematical trick with the aid of the fact that the square brackets in the above contain a hermitian operator allows us to express the numbers as a mean value of some observables in the state . 
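The trace formula above can be checked numerically. The sketch below (our own verification code) builds the cyclic-shift permutation operator on (C^m)^{\otimes k} and confirms Tr[A^k] = Tr{V^(k) A^{\otimes k}} for a random hermitian A, the identity underlying eq. (alfa); the direction of the cycle is immaterial when all tensor factors coincide.

```python
# Numerical check of Tr[A^k] = Tr{ V^(k) A^{\otimes k} } with V^(k) the cyclic shift.
import numpy as np
from functools import reduce

def cyclic_shift(m, k):
    """V^(k)|i_1 ... i_k> = |i_k i_1 ... i_{k-1}> on (C^m)^{\otimes k}."""
    dim = m ** k
    V = np.zeros((dim, dim))
    for idx in range(dim):
        digits = np.unravel_index(idx, (m,) * k)
        shifted = digits[-1:] + digits[:-1]
        V[np.ravel_multi_index(shifted, (m,) * k), idx] = 1.0
    return V

m, k = 3, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A = A + A.conj().T                                   # hermitian, like Lambda(rho)
lhs = np.trace(np.linalg.matrix_power(A, k))
rhs = np.trace(cyclic_shift(m, k) @ reduce(np.kron, [A] * k))
print(np.allclose(lhs, rhs))                         # True
```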
indeed , introducing \ ] ]we arrive at .\label{meanvalues}\ ] ] in general , a naive measurement of all mean values would require estimation of much more parameters that .but there is a possibility of building a unitary network that requires estimation of exactly parameters using the idea that we recall and refine below .finally , let us notice that the above approach generalizes measurements of polynomials of elements of in the sense that it shows explicitly how to measure the polynomials of elements of .of course , this is only of rather conceptual importance since both issues are mathematically equivalent and have the origin in refs . .let be an arbitrary observable ( it may be even infinite dimensional ) which spectrum lies between finite numbers and and be a state acting on . in ref . it has been pointed out that the mean value may be estimated in process involving the measurement of only one qubit .this fact is in good agreement with further proof that single qubits may serve as interfaces connecting quantum devices .below we recall the mathematical details of the measurement proposed in ref . . at the beginning onedefines the following numbers and observe that the hermitian operators and satisfy and as such define a generalized quantum measurement which can easily be extended to a unitary evolution ( see appendix a of ref . for a detailed description ) . consider a partial isometry on the hilbert space defined by the formula the first hilbert space represents the qubit which shall be measured in order to estimate the mean value .the partial isometry can always be extended to unitary such that if it acts on then the final measurement of observable on the first ( qubit ) system gives probabilities `` spin - up '' ( of finding it in the state ) and `` spin - down '' ( of finding in state ) , respectively of the form one of the possible extensions of to the unitary on is the following the unitarity of follows from the fact that operators and commute .due to the practical reasons instead of unitary operation representing povm we shall consider where is an identity operator on the one - qubit hilbert space and is an arbitrary unitary operation that acts on and simplifies the decomposition of into elementary gates .now if we define a mean value of measurement of on the first qubit after action of the network ( which sometimes may be called visibility ) : , \label{vis}\ ] ] where is a projector onto state , i.e. , , then we have an easy formula for the mean value of the initial observable : a general scheme of a network estimating the mean value ( [ meana ] ) is provided in fig .[ fig1 ] . , with a bounded spectrum , in a given state .both and its conjugate standing before can obviously be removed as they give rise to identity , last unitary on the bottom wire can be removed as it does not impact measurement statistics on the top qubit .however , they have been put to simplify subsequent network structure.,width=302 ] we put an additional unitary operation on the bottom wire after unitary ( which does not change the statistics of the measurement on control qubit ) and divided identity operator into two unitaries acting on that wire which explicitly shows how simplification introduced in eq .( [ udet ] ) works in practice .now one may ask if the mean value belongs to some fixed interval , i.e. , where and are real numbers belonging to the spectrum of , i.e. 
, ] satisfies ^{\dagger} ] .let us also put and apply the scheme presented above to detect the spectrum of .it is easy to see that the moments detected in that way are ^{k}= { \mathrm{tr}}\left[\mathcal{r}(\varrho)\mathcal{r}(\varrho)^{\dagger}\right]^{k}=\sum_{i}\gamma_{i}^{k}.\ ] ] from the moments one easily reconstructs and may check the violation of eq . ( [ theorem2 ] ) .we have shown how to detect the spectrum of the operator for arbitrary linear hermiticity - preserving map given the source producing copies of the system in state .the network involved in the measurement is noiseless in the sense of and the measurement is required only on the controlled qubit .further we have shown how to apply the method to provide general noiseless network scheme of detection detecting entanglement with the help of criteria belonging to one of two classes , namely , those involving positive maps and applying linear contractions on product states .the structure of the proposed networks is not optimal and needs further investigations . herehowever we have been interested in quite a fundamental question which is interesting by itself : _ is it possible to get noiseless networks schemes for any criterion from one of the above classes ? _ up to now their existence was known _ only _ for special case of positive partial transpose ( cf . ) . herewe have provided a positive answer to the question . finally , let us note that the above approach can be viewed as an application of collective observables [ see eq .( [ meanvalues ] ) ] .the general paradigm initiated in refs . has been recently fruitfully applied in the context of general concurrence estimates which has been even preliminarily experimentally illustrated .moreover , recently the universal collective observable detecting any two - qubit entanglement has been constructed .it seems that the present approach needs further analysis from the point of view of collective observables including especially collective entanglement witness ( see ) .p. h. thanks artur ekert for valuable discussions .the work is supported by the polish ministry of science and education under the grant no. 1 p03b 095 29 , eu project qprodis ( ist-2001 - 38877 ) and ip project scala .figures were prepared with help of qcircuit package . | we present the general scheme for construction of noiseless networks detecting entanglement with the help of linear , hermiticity - preserving maps . we show how to apply the method to detect entanglement of unknown state without its prior reconstruction . in particular , we prove there always exists noiseless network detecting entanglement with the help of positive , but not completely positive maps . then the generalization of the method to the case of entanglement detection with arbitrary , not necessarily hermiticity - preserving , linear contractions on product states is presented . |
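As an illustration of the linear-contraction variant, the sketch below is our own code and takes the realignment map as the standard example of a map that is a contraction on product states (the indexing convention is the one stated in the docstring and need not match the paper's). It evaluates the trace norm of R(rho) through the eigenvalues gamma_i of R(rho)R(rho)^dagger, exactly as the moments described above would provide them; for a two-qubit Bell state the value 2 > 1 signals entanglement.

```python
# Realignment criterion evaluated through the spectrum of R(rho) R(rho)^dagger.
import numpy as np

def realign(rho, d):
    """R(rho)_{(i k),(j l)} = rho_{(i j),(k l)} for a d x d bipartite state."""
    return rho.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

d = 2
phi = np.zeros(d * d)
phi[0] = phi[3] = 1 / np.sqrt(2)                     # Bell state (|00> + |11>)/sqrt(2)
rho = np.outer(phi, phi)
R = realign(rho, d)
gammas = np.linalg.eigvalsh(R @ R.conj().T)          # squared singular values of R(rho)
trace_norm = np.sum(np.sqrt(np.clip(gammas, 0, None)))
print(trace_norm)                                    # ~2 > 1: detected as entangled
```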
this paper considers a general data assimilation filtering problem with multiple time scales . on a probability space with ,for positive integers we consider the -dimensional process ;\mathbbr^m\times\mathbb r^k\times\mathbb r^n) ] to the process ( e.g. ) , where and where actually , due to the fact that the observation process has constant diffusion , condition [ a : assumption1 ] and the ergodic theorem guarantee that a stronger result holds for any , i.e. , for every data is contained in the filtration for any , we define a new measure on via the relationship under the proper assumptions is an exponential martingale and thus the probability measures and are absolutely continuous with respect to each other , and the distribution of is the same under both and .furthermore , the process is a -brownian motion and is independent of .next , for that is in , we define the measure valued process acting on as \doteq\mathbb{e}_{\theta}^{*}\left [ z_t^{\delta,\theta}f(x^{\delta}_{t},u^{\delta}_{t } ) \big |\mathcal{y}_{t}^\delta\right]\ , \ ] ] a process which , for is well - known to be the unique solution ( see ) to the following equation : &=&\left(\frac{1}{\delta}\phi^{\delta,\theta}_{t}[{\mathcal{l}^{f}}_{\theta}f ] + \phi^{\delta,\theta}_{t}[{\mathcal{l}^{s}}_{\theta}f ] \right)dt + \phi^{\delta,\theta}_{t}[h_{\theta } f ] dy_s^\delta,\quad \mathbb{p}_{\theta}^ { * } \textrm{-a.s . } \ , \nonumber\\ \phi_0^{\theta}[f]&= & \mathbb e_{\theta}f(x_0^\delta , u_0^\delta)\ .\label{eq : zakai } \end{aligned}\ ] ] equation is the zakai equation for nonlinear filtering .furthermore , is actually an unnormalized probability measure which yields the normalized posterior expectations via the kalianpour - striebel formula , \doteq{\mathbb e}_{\theta}\left[f(x_t^\delta , u_t^\delta)\big|\mathcal y_t^\delta\right]=\frac{\phi_t^{\delta,\theta}[f]}{\phi_t^{\delta,\theta}[1]}\quad \mathbb{p}_{\theta},\mathbb{p}_{\theta}^ { * } \textrm{-a.s.}\ .\ ] ] if then we have the innovations process , \qquad\forall t\in[0,t]\ .\ ] ] the process is a -brownian motion under the filtration generated by the observed process , but will only be observable as brownian motion if is equal to the true parameter value . for a suitable test function , the innovation process is used in the nonlinear kushner - stratonovich equation to describe the evolution of ] and ] and =\frac{\bar{\phi}^{\theta}_{t}[f]}{\bar{\phi}^{\theta}_{t}[1]} ] .we conclude this section by mentioning that , under , equation ( [ eq : ksformula ] ) defines a measure - valued process , the conditional distribution , by the formula ={\mathbb e}_{\theta}\left[f(x^{\delta}_t , u^{\delta}_t)\big|\mathcal{y}^{\delta}_t\right]\ .\ ] ] similarly , we define the probability measure - valued processes and by \quad\text { and } \quad \left(\bar{p}_{t},f\right)=\bar{\pi}_{t}[f]\ ] ] the measure - valued process especially , will become handy in proving the consistency and asymptotic normality of the mle based on the reduced estimator .consider and define the following class of test functions }\mathbb{e}_{\theta}\left|f(x^{\delta}_{t},u^{\delta}_{t})\right|^{2+\eta}<\infty \right\}.\ ] ] then , we have the following result which is a generalization of the results of and .[ t : filterconvergence1 ] assume conditions [ a : assumption1 ] . 
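The objects involved can be illustrated in a toy computation. The sketch below uses entirely our own choices (a fast Ornstein-Uhlenbeck component tracking a slow one, observation function h_theta(x,u) = theta*x, the averaged model dU = -U dt + dV with h_bar_theta(u) = theta*u, and a steady-state Kalman-Bucy gain for the averaged filter), whereas the paper's setting is far more general; it simulates the slow-fast signal and observation by Euler-Maruyama and evaluates the reduced log-likelihood on a parameter grid.

```python
# Toy slow-fast simulation and grid search of the reduced log-likelihood
#   L_T(theta) = int pi_s[h_theta] dY_s - (1/2) int pi_s[h_theta]^2 ds .
import numpy as np

def simulate(theta, delta, T=20.0, dt=1e-4, seed=0):
    """dX = -(1/delta)(X - U) dt + (1/sqrt(delta)) dW,  dU = -U dt + dV,
       dY = theta * X dt + dB, integrated with Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = u = y = 0.0
    ys = np.empty(n)
    sq = np.sqrt(dt)
    for i in range(n):
        dw, dv, db = rng.standard_normal(3) * sq
        x += -(x - u) / delta * dt + dw / np.sqrt(delta)
        u += -u * dt + dv
        y += theta * x * dt + db
        ys[i] = y
    return ys

def reduced_loglik(ys, dt, theta, beta=1.0):
    """Reduced log-likelihood with averaged model dU = -beta U dt + dV and
    h_theta(u) = theta * u, using the stationary Kalman-Bucy gain (theta != 0)."""
    p = (-beta + np.sqrt(beta ** 2 + theta ** 2)) / theta ** 2   # stationary Riccati
    gain = p * theta
    dy = np.diff(ys, prepend=0.0)
    m_hat, ll = 0.0, 0.0
    for i in range(len(ys)):
        h = theta * m_hat                                        # pi_t[h_bar_theta]
        ll += h * dy[i] - 0.5 * h * h * dt
        m_hat += -beta * m_hat * dt + gain * (dy[i] - h * dt)
    return ll

dt, delta, theta_true = 1e-4, 1e-3, 0.5
ys = simulate(theta_true, delta, dt=dt)
grid = np.linspace(0.1, 1.0, 19)
print(grid[np.argmax([reduced_loglik(ys, dt, th) for th in grid])])
# grid maximiser of the reduced log-likelihood; consistent as delta -> 0, T -> infinity
```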
for any ,the following hold uniformly in ] converges in -mean - square to ] , one has -\bar{\pi}^{\alpha}_{t}[\bar{h}_{\alpha}]\right|^{2}>0.\ ] ] let us define since we have assumed that is bounded , we get that with probability .we then have the following theorem : [ t : consisitencyreducedlikelihood ] assume conditions [ a : assumption1 ] , [ a : assumption4 ] , [ a : assumption4b ] and [ a : assumptionidentifiability ] . the maximum likelihood estimator based on ( [ eq : intermediatelikelihood ] )is strongly consistent as first and then , i.e. , for any under , we recall that the innovations process \qquad\forall t\in[0,t]\ ] ] is a -brownian motion under the filtration generated from the observed process .hence , under we have ^{\delta}_s -\frac 12\int_0^t\left| \bar{\pi}_s^{\delta,\theta}[\bar{h}_{\theta}]\right|^2ds\\ & = \int_0^t \left[\bar{\pi}_s^{\delta,\theta}[\bar{h}_{\theta}]\cdot\pi^{\delta,\alpha}_{s}[h_{\alpha}]-\frac 12\left| \bar{\pi}_s^{\delta,\theta}[\bar{h}_{\theta}]\right|^2\right]ds + \int_0^t \bar{\pi}_s^{\delta,\theta}[\bar{h}_{\theta } ] d\nu_{s}\ , \end{aligned}\ ] ] where \cdot\pi^{\delta,\alpha}_{s}[h_{\alpha}] ] as a function of and under condition [ a : extraconditionforclt ] , asymptotic normality of the mle corresponding to the reduced log - likelihood holds . to be precise , we have the following theorem .[ t : cltreducedlikelihood ] assume conditions [ a : assumption1 ] , [ a : assumption4 ] , [ a : assumption4b ] , [ a : assumptionidentifiability ] , [ a : extraconditionforclt ] and that ] for notational convenience . based on ( [ eq : equationforreducedmle ] ) and for ,we write for some such that \left(dy_{s}^{\delta}-\bar\pi_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]ds\right)\nonumber\\ & = \int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]\left(dy_{s}^{\delta}-\bar\pi_{s}^{\delta,\alpha}[\bar{h}_{\alpha}]ds-(\bar{\theta}-\alpha)\dot{\bar\pi}_{s}^{\delta,\alpha^{*}}[\bar{h}_{\alpha^{*}}]ds\right)\nonumber\\ & = \int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]dy_{s}^{\delta } -\int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]\cdot\bar\pi_{s}^{\delta,\alpha}[\bar{h}_{\alpha}]ds -(\bar{\theta}-\alpha)\int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]\cdot\dot{\bar\pi}_{s}^{\delta,\alpha^{*}}[\bar{h}_{\alpha^{*}}]ds\ .\nonumber\end{aligned}\ ] ] after some term rearrangement , we obtain \cdot\dot{\bar\pi}_{s}^{\delta,\alpha^{*}}[\bar{h}_{\alpha^{*}}]ds\right)^{-1 } \frac{1}{\sqrt{t}}\int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta } ] d\nu_{s}\nonumber\\ & + \left(\frac{1}{t}\int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]\cdot\dot{\bar\pi}_{s}^{\delta,\alpha^{*}}[\bar{h}_{\alpha^{*}}]ds\right)^{-1 } \frac{1}{\sqrt{t}}\int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]\cdot\left(\pi^{\delta,\alpha}_{s}[h_{\alpha}]-\bar\pi_{s}^{\delta,\alpha}[\bar{h}_{\alpha}]\right)ds\ , \end{aligned}\ ] ] and by taking , ergodcity and theorem [ t : filterconvergence1 ] guarantee that -\bar\pi_{s}^{\delta,\alpha}[\bar{h}_{\alpha}]\right|^{2}ds&=0\ .\end{aligned}\ ] ] the latter statement and condition [ a : extraconditionforclt ] guarantee us that in -probability as first and then \cdot\dot{\bar\pi}_{s}^{\delta,\alpha^{*}}[\bar{h}_{\alpha^{*}}]ds\right)^{-1 } 
\frac{1}{\sqrt{t}}\int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]\cdot\left(\pi_{s}^{\delta,\alpha}[h_{\alpha}]-\bar\pi_{s}^{\delta,\alpha}[\bar{h}_{\alpha}]\right)ds\rightarrow 0\ .\label{eq : limitclt1}\end{aligned}\ ] ] for notational convenience , let us define the random matrix \cdot\dot{\bar\pi}_{s}^{\delta,\theta_{2}}[\bar{h}_{\theta_{2}}]ds\ .\ ] ] since under , the innovations process \qquad\forall t\in[0,t]\ ] ] is a -brownian motion , for the stochastic integral we notice that a time change gives the following equality in distribution for some brownian motion , say \cdot\dot{\bar\pi}_{s}^{\delta,\alpha^{*}}[\bar{h}_{\alpha^{*}}]ds\right)^{-1 } \frac{1}{\sqrt{t}}\int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta } ] d\nu_{s}=\left(f^{\delta}_{t}(\bar{\theta},\alpha^{*})\right)^{-1 } \frac{1}{\sqrt{t}}\int_{0}^{t}\dot{\bar\pi}_{s}^{\delta,\bar{\theta}}[\bar{h}_{\bar\theta}]d\nu_{s}\nonumber\\ & \qquad=\tilde{w}\left(\left(\left(f^{\delta}_{t}(\bar{\theta},\alpha^{*})\right)^{\top}\right)^{-1 } f^{\delta}_{t}(\bar{\theta},\bar{\theta } ) \left(f^{\delta}_{t}(\bar{\theta},\alpha^{*})\right)^{-1}\right)\nonumber \end{aligned}\ ] ] since , theorem [ t : consisitencyreducedlikelihood ] implies that and hence by the almost sure continuity of ] defined by the equation \right)\qquad\forall t\in[0,t]\ ] ] is a -brownian motion under the filtration generated from the observed process .the maximizer satisfies \left(dy_{s}^{\delta}-\bar\pi_{s}^{\delta,\tilde{\theta}}[\bar{h}_{\tilde\theta}]ds\right)=0\ , \ ] ] and the fisher information turns out to be ^{\top}\cdot\nabla_{\alpha}\bar\pi_{s}^{\alpha}[\bar{h}_{\alpha } ] ds \ .\ ] ] the limiting system ( [ eq : modelexamplelimit ] ) uses the well - known kalman - bucy filter . the inference problem for the limiting linear system ( [ eq : modelexamplelimit ] ) was studied in . in ,the author develops mle estimators for based on ( [ eq : modelexamplelimit ] ) , i.e. using as data .however , the difference of our setup with the rest of the literature is that we want to estimate based on observations , which come from the multiscale model ( [ eq : modelexampleprelimit ] ) and not from the limit model ( [ eq : modelexamplelimit ] ) . of course, the limit problem is used in order to derive properties of the estimators , but the actual inference is done based on observations from the multiscale model .let us write , .notice that in the notation of section [ s : problemformulation ] we have and .let us compute the fisher information matrix for this model and derive the conditions under which is strictly positive and the model is identifiable .let us first denote ] then ,\frac{\sigma^{2}}{\bar{a}^{2}(\theta)}\zeta(\theta)\right) ] will satisfy the equation &=&-\bar{\beta}(\theta)\bar{\pi}^{\theta}_{t}[\bar{h}_{\theta}]dt+ \zeta(\theta)\left(d\bar{y}_{t}-\bar{\pi}^{\theta}_{t}[\bar{h}_{\theta}]dt\right)\ .\label{eq : averagedlinearfilter } \ ] ] now notice that if ( i.e. 
, the true parameter value ) , then defined by \right) ] satisfies the averaged linear sde ( [ eq : averagedlinearfilter ] ) , so when we have =\bar{\pi}^{\alpha}_{0}[\bar{h}_{\alpha}]e^{-\bar{\beta}(\alpha)t}+ \zeta(\alpha ) \sigma\int_{0}^{t}e^{-\bar{\beta}(\alpha)(t - s)}d\bar{\nu}_{s}\ , \ ] ] from which it is clear that ] with respect to , at we find that ] -\phi_t^{\delta,\theta}[\bar f_\theta]\right|^{2}\rightarrow 0\qquad\hbox{as } \delta\rightarrow0\ , \ ] ] let us consider an independent copy of , which has the same law as , but which is independent of .we have -\phi_t^{\delta,\theta}[\bar f_\theta]\right)^2\nonumber\\ & = \mathbb e_\theta^*\left(\phi_t^{\delta,\theta}[f-\bar f_\theta]\right)^2\nonumber\\ & = \mathbb e_\theta^*\left[\mathbb e_\theta^*\left[\left(f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\right)\exp\left ( \int_{0}^th_{\theta}(x^{\delta}_{s},u_s^\delta)dy_s^\delta-\frac{1}{2}\int_{0}^t\left|h_{\theta}(x^{\delta}_{s},u_s^\delta)\right|^{2}ds \right)\big|\mathcal y_t^\delta\right]^2\right]\nonumber\\ & = \mathbb e_\theta^*\bigg[\mathbb e_\theta^*\bigg[\big(f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big)\big(f(\widetilde x_t^\delta,\widetilde u_t^\delta)-\bar f_\theta(\widetilde u_t^\delta)\big)\nonumber\\ & \times\exp\left ( \int_{0}^t\left(h_{\theta}(x^{\delta}_{s},u_s^\delta)+h_{\theta}(\widetilde x^{\delta}_{s},\widetilde u_s^\delta)\right)dy_s^\delta-\frac{1}{2}\int_{0}^t\left(\left|h_{\theta}(x^{\delta}_{s},u_s^\delta)\right|^{2}+\left|h_{\theta}(\widetilde x^{\delta}_{s},\widetilde u_s^\delta)\right|^{2}\right)ds \right)\big|\mathcal y_t^\delta\bigg]\bigg]\nonumber\\ & = \mathbb e_\theta^*\bigg[\mathbb e_\theta^*\bigg[\big(f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big)\big(f(\widetilde x_t^\delta,\widetilde u_t^\delta)-\bar f_\theta(\widetilde u_t^\delta)\big)\nonumber\\ & \hspace{2cm}\times\exp\left ( \int_{0}^t\left(h_{\theta}(x^{\delta}_{s},u_s^\delta)+h_{\theta}(\widetilde x^{\delta}_{s},\widetilde u_s^\delta)\right)dy_s^\delta\right.\nonumber\\ & \hspace{5cm}\left.-\frac{1}{2}\int_{0}^t\left(\left|h_{\theta}(x^{\delta}_{s},u_s^\delta)\right|^{2}+\left|h_{\theta}(\widetilde x^{\delta}_{s},\widetilde u_s^\delta)\right|^{2}\right)ds \right)\big|\mathcal f_t^{u,\tilde u , x,\tilde x}\bigg]\bigg]\nonumber\\ & = \mathbb e_\theta^*\bigg[\big(f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big)\big(f(\widetilde x_t^\delta,\widetilde u_t^\delta)-\bar f_\theta(\widetilde u_t^\delta)\big)\exp\left(\int_{0}^{t}h_{\theta}(x^{\delta}_{s},u_s^\delta)h_{\theta}(\widetilde x^{\delta}_{s},\widetilde u_s^\delta)ds \right)\bigg ] \nonumber\\ & = \mathbb e_\theta^ * \bigg[\big(f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big)\big(f(\widetilde x_t^\delta,\widetilde u_t^\delta)-\bar f_\theta(\widetilde u_t^\delta)\big)\nonumber\\ & \hspace{1cm}\times\bigg(\exp\left(\int_{0}^{t}h_{\theta}(x^{\delta}_{s},u_s^\delta)h_{\theta}(\widetilde x^{\delta}_{s},\widetilde u_s^\delta)ds\right)-\exp\left(\int_0^th_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)\bigg)\bigg]\nonumber\\ & ~~+\mathbb e_\theta^*\bigg[\big(f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big)\big(f(\widetilde x_t^\delta,\widetilde u_t^\delta)-\bar f_\theta(\widetilde u_t^\delta)\big)\exp\left(\int_0^th_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)\bigg ]\label{eq : unnormalizedfilter0}\ .\end{aligned}\ ] ] in the 2nd to last line of the above display , the term goes to zero by lemma [ l : 
neededergodicresult ] , \bigg|\\ & \leq 4\|f\|_\infty^2 \mathbb e_\theta^ * \bigg|\exp\left(\int_{0}^{t}h_{\theta}(x^{\delta}_{s},u_s^\delta)h_{\theta}(\widetilde x^{\delta}_{s},\widetilde u_s^\delta)ds\right)-\exp\left(\int_0^th_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right ) \bigg|\\ & \rightarrow 0\ , \end{aligned}\ ] ] and the term in the last line of the display ( [ eq : unnormalizedfilter0 ] ) goes to zero as follows , \bigg|\\ & \leq 4\|f\|_\infty^2 \mathbb e_\theta^*\bigg|\exp\left(\int_0^th_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)-\exp\left(\int_0^{t-\epsilon}h_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)\bigg|\\ & + \bigg|\mathbb e_\theta^*\bigg[\mathbb e_\theta^*\big[f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big|\mathcal f_{t-\epsilon}^{u^{\delta},\bar u}\vee\mathcal f_t^{\tilde u^{\delta},\tilde x^{\delta}}\big]\big(f(\widetilde x_t^\delta,\widetilde u_t^\delta)-\bar f_\theta(\widetilde u_t^\delta)\big)\exp\left(\int_0^{t-\epsilon}h_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)\bigg]\bigg|\\ & \leq 4\|f\|_\infty^2 \mathbb e_\theta^*\bigg|\exp\left(\int_0^th_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)-\exp\left(\int_0^{t-\epsilon}h_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)\bigg|\\ & \hspace{2cm}+2\|f\|_\infty \mathbb e_\theta^*\bigg[\big|\mathbb e_\theta^*\big[f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big|\mathcal f_{t-\epsilon}^{u^{\delta},\bar u}\vee\mathcal f_t^{\tilde u^{\delta},\tilde x^{\delta}}\big]\big|\exp\left(\int_0^{t-\epsilon}h_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)\bigg]\\ & = 4\|f\|_\infty^2 \mathbb e_\theta^*\bigg|\exp\left(\int_0^th_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)-\exp\left(\int_0^{t-\epsilon}h_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)\bigg|\\ & \hspace{2cm}+2\|f\|_\infty \mathbb e_\theta^*\bigg[\big|\mathbb e_\theta^*\big[f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big|\mathcal f_{t-\epsilon}^{u^{\delta},\bar u}\big]\big|\exp\left(\int_0^{t-\epsilon}h_{\theta}(\tilde{x}_s^\delta , \tilde{u}_s^\delta ) \bar h_\theta(\bar u_s)ds\right)\bigg]\\ & \rightarrow 4\|f\|_\infty^2 \mathbb e_\theta^*\bigg|\exp\left(\int_0^t\bar h_{\theta}(\tilde{\bar u}_s ) \bar h_\theta(\bar u_s)ds\right)-\exp\left(\int_0^{t-\epsilon}\bar h_{\theta}(\tilde{\bar u}_s ) \bar h_\theta(\bar u_s)ds\right)\bigg|\ , \text { as } \delta\downarrow 0\end{aligned}\ ] ] where can be arbitrarily small .the limit is taken as , with the conditional expectation being handled in the following way : \\ & = \mathbb e_\theta^*\big[f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big|\mathcal f_{t-\epsilon}^{u^{\delta},\bar u}\big]\qquad\qquad\hbox{by independence of from ,}\\ & = \mathbb e_\theta^*\big[\mathbb e_\theta^*\big[f(x_t^\delta , u_t^\delta)-\bar f_\theta(u_t^\delta)\big|x^\delta_{t-\epsilon},u^{\delta}_{t-\epsilon}\big]\big|\mathcal f_{t-\epsilon}^{u^{\delta},\bar u}\big]\\ & \rightarrow 0\ , \end{aligned}\ ] ] the last convergence is due to the fact that \rightarrow 0 ] -\bar\pi_t^{\delta,\theta } [ f]\right|\rightarrow 0\qquad\hbox{as } \delta\rightarrow0\ , \ ] ] where =\bar \phi_t^{\delta,\theta}[f]/\bar \pi_t^{\delta,\theta}[1] ] as for all .next , we notice that for such that and 
/\phi_t^{\delta,\theta}[1]-\bar\phi_t^{\delta,\theta}[f]/\bar\phi_t^{\delta,\theta}[1]\right|^{p}= \mathbb e^{*}_{\theta}\left|\frac{\phi_t^{\delta,\theta}[f]\bar\phi_t^{\delta,\theta}[1]-\bar\phi_t^{\delta,\theta}[f]\phi_t^{\delta,\theta}[1]}{\phi_t^{\delta,\theta}[1]\bar\phi_t^{\delta,\theta}[1]}\right|^{p}\nonumber\\ & \leq c \left(\mathbb e^{*}_{\theta}\left|\frac{1}{\phi_t^{\delta,\theta}[1]\bar\phi_t^{\delta,\theta}[1]}\right|^{pr_{1}}\right)^{1/r_{1 } } \left(\mathbb e^{*}_{\theta}\left|\phi_t^{\delta,\theta}[f]\bar\phi_t^{\delta,\theta}[1]-\bar\phi_t^{\delta,\theta}[f]\phi_t^{\delta,\theta}[1]\right|^{pr_{2}}\right)^{1/r_{2}}\nonumber\\ & \leq c \left(\mathbb e^{*}_{\theta}\left|\frac{1}{\phi_t^{\delta,\theta}[1]\bar\phi_t^{\delta,\theta}[1]}\right|^{pr_{1}}\right)^{1/r_{1 } } \left(\mathbb e^{*}_{\theta}\left|\phi_t^{\delta,\theta}[f]-\bar\phi_t^{\delta,\theta}[f]\right|^{pr_{2}}+\mathbb e^{*}_{\theta}\left|\phi_t^{\delta,\theta}[1]-\bar\phi_t^{\delta,\theta}[1]\right|^{pr_{2}}\right)^{1/r_{2}}\nonumber\\ & \leq c \left(\mathbb e^{*}_{\theta}\left|\frac{1}{\phi_t^{\delta,\theta}[1]}\right|^{2pr_{1}}+\mathbb e^{*}_{\theta}\left|\frac{1}{\bar\phi_t^{\delta,\theta}[1]}\right|^{2pr_{1}}\right)^{1/r_{1 } } \left(\mathbb e^{*}_{\theta}\left|\phi_t^{\delta,\theta}[f]-\bar\phi_t^{\delta,\theta}[f]\right|^{pr_{2}}+\mathbb e^{*}_{\theta}\left|\phi_t^{\delta,\theta}[1]-\bar\phi_t^{\delta,\theta}[1]\right|^{pr_{2}}\right)^{1/r_{2}}\nonumber\end{aligned}\ ] ] where boundedness of was used . by combining lemma [ l : filterconvergence3 ] and ( [ eq : averagephiconvergence ] ) we get that -\bar\phi_t^{\delta,\theta}[f]\right|^{pr_{2}}+\mathbb e^{*}_{\theta}\left|\phi_t^{\delta,\theta}[1]-\bar\phi_t^{\delta,\theta}[1]\right|^{pr_{2}}\rightarrow 0\end{aligned}\ ] ] in addition , we have }\right|^{2pr_{1}}&\leq \mathbb e^{*}_{\theta}\left(z_t^{\delta,\theta}\right)^{-2pr_{1}}=\mathbb e^{*}_{\theta}\left[e^{*}_{\theta}\left[\left(z_t^{\delta,\theta}\right)^{-2pr_{1}}\right]|\mathcal f_t^ { u^{\delta } , x^{\delta}}\right]\nonumber\\ & = \mathbb e^{*}_{\theta}\left[e^{(2p^{2}r^{2}_{1}+pr_{1})\int_{0}^{t}|h_{\theta}(x^{\delta}_{s},u^{\delta}_{s})|^{2}ds}\right]\nonumber\\ & < \infty.\end{aligned}\ ] ] similarly , we can also obtain }\right|^{2pr_{1}}<\infty ] follows as ( [ eq : ratiophis ] ) .this proves convergence in , and convergence in follows from dominated convergence because the test function was assumed bounded so that -\bar{\pi}_t^{\delta,\theta}[f]\right|^2\leq 2 \|f\|_\infty^2 ] .so , it is enough to prove that -\pi_t^{\delta,\theta}[f_{n}]\right)^2=0\ ] ] and -\bar \pi_t^{\delta,\theta}[f_{n}]\right)^2=0.\ ] ] both of these statements follow from the observation : for such that we have and in particular , letting , so that and , then taking the following similar set of steps as in equation we have -\pi_t^{\delta,\theta}[f_{n}]\right|^2\\ \nonumber & \leq\lim_{n\rightarrow\infty}\limsup_{\delta\downarrow 0}\mathbb e_{\alpha}\mathbb e_\theta\left[\left|f(x_t^\delta , u_t^\delta)-f_n(x_t^\delta , u_t^\delta)\right|^2\big|\mathcal y_t^\delta\right]\\ \nonumber & \leq\lim_{n\rightarrow\infty}\limsup_{\delta\downarrow 0}\mathbb e_{\alpha}^*z_t^{\delta,\alpha}\mathbb e_\theta\left[\left|f(x_t^\delta , u_t^\delta)-f_n(x_t^\delta , u_t^\delta)\right|^2\big|\mathcal y_t^\delta\right]\\ \nonumber & \leq\lim_{n\rightarrow\infty}\limsup_{\delta\downarrow 0}\left(\mathbb e_{\alpha}^*(z_t^{\delta,\alpha})^q\right)^{1/q}\left(\mathbb e_\alpha^*\mathbb e_\theta\left[\left|f(x_t^\delta , 
u_t^\delta)-f_n(x_t^\delta , u_t^\delta)\right|^2\big|\mathcal y_t^\delta\right]^p\right)^{1/p}\\ \nonumber & \leq\lim_{n\rightarrow\infty}\limsup_{\delta\downarrow 0}\left(\mathbb e_{\alpha}^*(z_t^{\delta,\alpha})^q\right)^{1/q}\left(\mathbb e_\alpha^*\mathbb e_\theta\left[\left|f(x_t^\delta , u_t^\delta)-f_n(x_t^\delta , u_t^\delta)\right|^{2p}\big|\mathcal y_t^\delta\right]\right)^{1/p}\\ \nonumber & = \lim_{n\rightarrow\infty}\limsup_{\delta\downarrow 0}\left(\mathbb e_{\alpha}^*(z_t^{\delta,\alpha})^q\right)^{1/q}\left(\mathbb e_\theta^*\mathbb e_\theta\left[\left|f(x_t^\delta , u_t^\delta)-f_n(x_t^\delta , u_t^\delta)\right|^{2p}\big|\mathcal y_t^\delta\right]\right)^{1/p}\\ \nonumber & = \lim_{n\rightarrow\infty}\limsup_{\delta\downarrow 0}\left(\mathbb e_{\alpha}^*(z_t^{\delta,\alpha})^q\right)^{1/q}\left(\mathbb e_\theta\left [ ( z_t^{\delta,\theta})^{-1}\mathbb e_\theta\left[\left|f(x_t^\delta , u_t^\delta)-f_n(x_t^\delta , u_t^\delta)\right|^{2p}\big|\mathcal y_t^\delta\right]\right]\right)^{1/p}\\ & \leq c \lim_{n\rightarrow\infty}\limsup_{\delta\downarrow 0}n^{-\eta/2p^2}\left(\mathbb e_{\alpha}^*\left(z_t^{\delta,\alpha}\right)^q\right)^{1/q}\left(\mathbb e_\theta ( z_t^{\delta,\theta})^{-q}\right)^{1/q}\left ( \mathbb e_{\theta}\left|f(x^{\delta}_{t},u_t^\delta)\right|^{2+\eta}\right)^{1/p^2}\nonumber\\ \nonumber & = 0\ .\end{aligned}\ ] ] the same limit can be shown for -\bar \pi_t^{\delta,\theta}[f_{n}]\right)^2 $ ] , but with and used instead .due to ergodicity , the proof of -\bar \pi_t^{\theta}[f]\right|=0\ , \ ] ] follows similarly and thus omitted .this concludes the proof of the theorem . | we consider partially observed multiscale diffusion models that are specified up to an unknown vector parameter . we establish for a very general class of test functions that the filter of the original model converges to a filter of reduced dimension . then , this result is used to justify statistical estimation for the unknown parameters of interest based on the model of reduced dimension but using the original available data . this allows to learn the unknown parameters of interest while working in lower dimensions , as opposed to working with the original high dimensional system . simulation studies support and illustrate the theoretical results . * keywords . * data assimilation , filtering , parameter estimation , homogenization , multiscale diffusions , dimension reduction . + * subject classifications . * 93e10 93e11 93c70 62m07 62m86 |
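As a rough numerical illustration of the reduced-likelihood estimation idea developed above (a Kalman-Bucy filter for the observed drift plugged into the Girsanov log-likelihood functional, whose maximizer is the estimator), the sketch below estimates a drift parameter of a simple one-dimensional linear-Gaussian surrogate model rather than the paper's multiscale example. All constants, the grid search, and the Euler discretization are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical linear-Gaussian surrogate: dX = -theta X dt + sigma dW,  dY = X dt + dB
theta_true, sigma, T, dt = 1.5, 1.0, 500.0, 0.01
n = int(T / dt)

X = np.zeros(n)
for i in range(n - 1):
    X[i + 1] = X[i] - theta_true * X[i] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
dY = X * dt + np.sqrt(dt) * rng.standard_normal(n)        # observation increments

def reduced_loglik(theta):
    """Kalman-Bucy filter mean m_t for candidate theta, plugged into the
    log-likelihood functional  int m dY - (1/2) int m^2 dt."""
    m, P, ll = 0.0, sigma**2 / (2.0 * theta), 0.0          # start the variance near stationarity
    for i in range(n):
        ll += m * dY[i] - 0.5 * m * m * dt
        m += -theta * m * dt + P * (dY[i] - m * dt)        # dm = -theta m dt + P (dY - m dt)
        P += (-2.0 * theta * P + sigma**2 - P * P) * dt    # Riccati equation for the filter variance
    return ll

grid = np.linspace(0.5, 3.0, 26)
lls = np.array([reduced_loglik(th) for th in grid])
print("true theta:", theta_true, "  grid MLE:", grid[np.argmax(lls)])
```

The same structure carries over when the data come from the multiscale system but the filter and likelihood are built from the averaged model, which is the point of the reduced estimator discussed above.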
the inhomogeneous helmholtz wave equation is this has the well known free - space retarded green function [ 1 , p. 284] is a field point , is a source point and is the wave number , here considered to be a general complex number .the free - space green function ( [ eqn2 ] ) is restricted to values of such that as . for general dispersive waves with where and are real , then is a condition for this to hold . in the limitas these equations reduce to the poisson equation and its corresponding green function .the general retarded solution of the helmholtz equation at a field point for a general source density , subject to the boundary condition that as , is given in terms of the green function as the volume integral is to be taken over all regions of space where the source density is non zero .many problems of practical interest have some element of axial symmetry and are best treated in cylindrical coordinates , the cartesian components of being related to the cylindrical components by .it follows immediately from this relation that the distance between a source point and a field point is given by solution for when is a general circular ring source is of particular interest , with applications such as circular loop antennas [ 2 ] , [ 3 ] , [ 4 ] , the acoustics of rotating machinery [ 5 ] and acoustic and electromagnetic scattering [ 6 ] . for the simpler poisson equation most of the analytical solutions found in the literature for cylindrical geometryare either ring source solutions or can be easily constructed from them by integration or summation .examples are gravitating rings and disks , ring vortices and vortex disks , and circular current loops and solenoids .the source density for a thin circular ring of radius located in the plane is of the form is the angular distribution of the source strength around the ring .this can be most conveniently described by a fourier series of the form the fourier coefficients and are given by [ 7 , p. 1066] equation ( [ eqn4 ] ) , the green function ( [ eqn3 ] ) is even in the variable , where is the angular coordinate of and is the angular coordinate of .it is convenient to exploit this symmetry when substituting equations ( [ eqn5 ] ) and ( [ eqn6 ] ) into ( [ eqn3 ] ) . from the identity obtain on substituting equations ( [ eqn5 ] ) and ( [ eqn7 ] ) into ( [ eqn3 ] ) and performing the volume integration , the odd terms proportional to in equation ( [ eqn7 ] ) do not contribute to the solution as is even in .the remaining integrals from the even terms can be calculated over the reduced interval from to .this gives the solution of the helmholtz equation for a circular ring source with general in the form where the explicit dependence of the solution on the constant ring parameters and has been introduced in these definitions .introducing the neumann factor such that for and for , and defining allows ( [ eqn8 ] ) to be expressed more concisely as from a constant factor , the terms in ( [ eqn9a ] ) are also the coefficients in the fourier expansion of the green function ( [ eqn2 ] ) itself , when the source point is given by . from equations ( [ eqn6 ] ) , ( [ eqn6a ] ) and ( [ eqn6b ] )this is given by the solution of the helmholtz equation for a general ring source can be constructed directly from the coefficients in the fourier expansion of the green function ( [ eqn3 ] ) .this provides in large measure the motivation to analytically construct the fourier series for the helmholtz green function . 
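Before turning to the closed-form expansions, a minimal numerical sketch may help fix ideas: the azimuthal Fourier coefficients of the free-space kernel exp(ikR)/(4*pi*R) can be computed by direct quadrature over the angle difference on [0, pi], and the partial Fourier sum should reconstruct the kernel. The normalization chosen here (the 1/pi factor together with the Neumann factors in the resummation) is one possible convention and may differ from this paper's equations by constant factors.

```python
import numpy as np
from scipy.integrate import quad

def ring_distance(rho, z, a, psi):
    """Distance from a field point (rho, z) to a point on a ring of radius a in the
    plane z' = 0, with azimuthal angle difference psi."""
    return np.sqrt(rho**2 + a**2 - 2.0 * rho * a * np.cos(psi) + z**2)

def fourier_coefficient(m, k, rho, z, a=1.0):
    """m-th azimuthal cosine Fourier coefficient of exp(ikR)/(4 pi R), by quadrature."""
    def integrand_re(psi):
        R = ring_distance(rho, z, a, psi)
        return np.cos(k * R) / (4.0 * np.pi * R) * np.cos(m * psi)
    def integrand_im(psi):
        R = ring_distance(rho, z, a, psi)
        return np.sin(k * R) / (4.0 * np.pi * R) * np.cos(m * psi)
    re, _ = quad(integrand_re, 0.0, np.pi, limit=200)
    im, _ = quad(integrand_im, 0.0, np.pi, limit=200)
    return (re + 1j * im) / np.pi

# sanity check: the partial Fourier sum (with Neumann factors) should reconstruct the kernel
k, rho, z, a, dphi = 2.0, 1.4, 0.6, 1.0, 0.9
gsum = sum((1 if m == 0 else 2) * fourier_coefficient(m, k, rho, z, a) * np.cos(m * dphi)
           for m in range(40))
R = ring_distance(rho, z, a, dphi)
print(gsum, np.exp(1j * k * R) / (4.0 * np.pi * R))   # the two complex values should agree
```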
for the poisson equation with the corresponding fourier expansion of the green functionhas already been given in closed form as [ 8]: a toroidal variable such that and the are the legendre functions of the second kind and half - integral degree , which are also toroidal harmonics .the fourier expansion given by equations ( [ eqn11 ] ) and ( [ eqn12 ] ) can be obtained immediately by writing the green function ( [ eqn2 ] ) for in the form is given by ( [ eqn12 ] ) , and noting that the function has the simple integral representation [ 7 , eqn 8.713 ] alternative derivation of ( [ eqn11 ] ) employs the lipschitz integral [ 7 , eqn 6.611 1] neumann s addition theorem [ 9 , eqn11.2 1] obtain the well known eigenfunction expansion reduces to ( [ eqn11 ] ) on employing the integral [ 7 , eqn 6.612 3],[9 , eqn 13.22]: generalization of ( [ eqn17 ] ) for the helmholtz case is also well known [ 10 , p. 888] can be similarly obtained from neumann s theorem by employing the integral [ 7 , eqn 6.616 2] of the lipschitz integral . equation ( [ eqn19 ] ) gives the fourier coefficients of the helmholtz green function in the form reduces to ( [ eqn17 ] ) in the limit as but unfortunately the integral in ( [ eqn21 ] ) is not given in standard tables for .numerical evaluation of this integral requires care , as the integrand is oscillatory and singular in an infinite range of integration , though the integrand tends exponentially to zero as .equation ( [ eqn9 ] ) is a convenient alternative numerical evaluation of the fourier coefficients , provided is not too large .the integrals ( [ eqn9 ] ) and ( [ eqn21 ] ) contain the additional parameter which is not contained in ( [ eqn14 ] ) and ( [ eqn18 ] ) . as a consequence of this , the closed form generalization of ( [ eqn11 ] ) for the helmholtz case involves two - multidimensional gaussian hypergeometric series , and the main purpose of this article is to present these solutions and various related results .the core idea leading to the solution is expansion of the exponential in ( [ eqn2 ] ) as the absolutely convergent power series [ 4] integral ( [ eqn25 ] ) can be evaluated as a series by binomial expansion and this gives a double series for the fourier coefficient .the expansion of ( [ eqn25 ] ) gives an infinite number of terms for even and a finite number of terms for odd .these two cases are best treated separately and it is therefore convenient to split the summation over in ( [ eqn22 ] ) into odd and even terms .this is equivalent to splitting the green function ( [ eqn2 ] ) such that the half advanced+half retarded green function and the half advanced retarded green function .the corresponding fourier coefficients are split in the same manner such that real , splitting the green function in this way is equivalent to dividing it into its real and imaginary parts , but this is not the case for general complex .it is shown in section 2 that the fourier coefficients in ( [ eqn29 ] ) and ( [ eqn30 ] ) are given respectively by\label{eqn32}\]]where variable is the usual modulus contained in elliptic integral solutions of elementary ring problems and is related to the toroidal variable by function in equation ( [ eqn31 ] ) is one of the standard horn functions [ 11 , eqn 5.7.1 31 ] and is equivalent to the double hypergeometric series kamp de friet function [ 12 , p. 
27] in ( [ eqn32 ] ) is equivalent to the double hypergeometric series = \]] the integral ( [ eqn25 ] ) can also be evaluated using an integral representation for the associated legendre function of the first kind , and it is shown in appendix a that this gives the series expansion : legendre function in equation ( [ eqn39 ] ) reduces to an associated legendre polynomial for odd .the series in ( [ eqn39 ] ) can be split into even and odd terms such that it is shown in appendix a that the even and odd series can be expressed respectively as: in equation ( [ eqn45 ] ) the legendre function is purely imaginary for real . in the static limit as and from the gamma function identity [ 7 , eqn 8.334 2 ] \label{eqn46}\]]then equation ( [ eqn44 ] ) reduces to \label{eqn47}\]]as it must do for consistency with ( [ eqn11 ] ) .the solutions in terms of two - dimensional hypergeometric functions defined by equations ( [ eqn31])-([eqn35 ] ) and ( [ eqn37])-([eqn38 ] ) can be summed over either index to give the solutions as series of special functions .it is shown in section 3 that summation over the index in equation ( [ eqn37 ] ) gives equation ( [ eqn44 ] ) , exactly as given by the integral representation .however , summation over the index in equation ( [ eqn38 ] ) gives instead the series solution hypergeometric identity to reduce the hypergeometric function in equation ( [ eqn48 ] ) to other well - known special functions does not seem to be available in standard tabulations .it might nevertheless be conjectured that ( [ eqn48 ] ) could somehow be reducible to equation ( [ eqn45 ] ) , but this is not in fact the case .it is easily verified numerically that although equations ( [ eqn45 ] ) and ( [ eqn48 ] ) both converge rapidly to the same limit , the individual terms do not match .hence , equation ( [ eqn48 ] ) is a distinct series from equation ( [ eqn45 ] ) .it is also shown in section 3 that summation over the index in equations ( [ eqn37 ] ) and ( [ eqn38 ] ) gives the bessel function series : these two series can be conveniently combined to give a series of hankel functions of the first kind: from the solutions ( [ eqn31 ] ) and ( [ eqn32 ] ) it can be seen that dimensionless fourier coefficients defined by only on the two dimensionless variables and .the functions are given explicitly by equations ( [ eqn31 ] ) and ( [ eqn32 ] ) as: \label{eqn54}\]]where - dimensional hypergeometric series such as ( [ eqn53 ] ) and ( [ eqn54 ] ) are associated with pairs of partial differential equations [ 11 , section 5.9 ] and these can be used to construct ordinary differential equations for with fixed and as the independent variable .it is shown in section 4 that for constant the coefficients both satisfy the same fourth - order ordinary differential equation in : in section 5 an integral representation is derived for this is used to derive a fourth - order ordinary differential equation for in terms of : in the static limit as , equation ( [ eqn58 ] ) reduces to: = 0 \label{eqn59}\]]where legendre s equation of degree [ 7 , eqn 8.820 ] .it is also shown in section 5 that the differential equations ( [ eqn56 ] ) and ( [ eqn58 ] ) , obtained by quite different routes , are equivalent .the special functions used in the analysis are given in table 1 . 
& \text{a kamp\'{e } de f\'{e}riet function } \\ h_{\nu } ^{\left ( 1\right ) } \left ( x\right ) & \text{hankel function of the first kind } \\ \text{h}_{3}\left ( a , b , c , x , y\right ) & \text{the h}_{3}\text { confluent horn function } \\j_{\nu } ( x ) & \text{bessel function of the first kind } \\p_{\nu } ^{\mu } ( x ) & \text{associated legendre function of the first kind } \\q_{\nu } ^{\mu } ( x ) & \text{associated legendre function of the second kind } \\y_{\nu } \left ( x\right ) & \text{bessel function of the second kind } \\\gamma ( x ) & \text{gamma function } \\\delta \left ( x\right ) & \text{dirac delta function}% \end{array}% $ ] recurrence relations for the fourier coefficients for the helmholtz equation were investigated by matviyenko [ 6 ] , but the closed form solutions and differential equations presented here appear to be new .werner [ 3 ] presented an expansion of the fourier coefficient as a series of spherical hankel functions , superficially similar to equation ( [ eqn51 ] ) , but the two expansions are distinct .the two - dimensional hypergeometric series approach applied here to obtain the fourier expansion for the helmholtz green function has recently been applied to obtain the fourier expansion in terms of the amplitude for the legendre incomplete elliptic integral of the third kind [ 13 ] .the numerical performance of the various expressions for the fourier coefficients was investigated using mathematica [ 14 ] and this is examined in appendix c.the power series expansion ( [ eqn24 ] ) for the fourier coefficient can be expressed in the form and are defined by ( [ eqn33 ] ) and ( [ eqn34 ] ) .the term in ( eqna1 ) can be expanded binomially to give as a double series containing integrals of the form integral is given by gradshteyn and ryzhik [ 7 , eqns 3.631 8,12 ] in a form which can be recast as expressing the beta function in ( [ eqna4 ] ) in terms of gamma functions and employing the duplication theorem [ 7 , eqn 8.335 1] after some reduction the alternative form binomial expansion of ( [ eqna1 ] ) gives an infinite series for zero or even and a finite sum for odd , and these two cases must be treated separately .it is therefore convenient to split the series for such that on employing equation ( [ eqna6 ] ) the divided series are given by expansion of the integrals in equations ( [ eqna9 ] ) and ( eqna10 ) gives respectively employing the explicit formula ( [ eqna7 ] ) for in ( eqna11 ) and ( [ eqna12 ] ) gives respectively the gamma identity ( [ eqn46 ] ) has been used to simplify equation ( [ eqna13 ] ) .the substitution in equation ( [ eqna13 ] ) yields after some reduction the double series: in ( [ eqna15]) the pochhammer symbol .the double hypergeometric function in ( [ eqna15 ] ) can be identified as one of the confluent horn functions [ 11 , eqn 5.7.1 31 ] and hence convergence condition given in [ 11 , eqn 5.7.1 31 ] for the double series in equation ( [ eqna15 ] ) is , which always holds .the order of summation in equation ( [ eqna15 ] ) can be reversed , but the order of the arguments and in equation ( [ eqna17 ] ) can not be exchanged . ( [ eqna14 ] ) can be converted to a doubly infinite series by reversing the order of summation , which gives substitution in ( [ eqna18 ] ) gives the further substitution in ( [ eqna19 ] ) gives equation ( [ eqna20 ] ) in terms of pochhammer symbols gives after some reduction the double hypergeometric series this can be expressed as a kamp de friet function as defined by srivastava and karlsson [ 12 , p. 
27]:= \]] in the definition ( [ eqna22 ] ) , and and so on , are the lists of the arguments of the pochhammer symbols of the various types which appear in the products on the right - hand side of the equation .if a list has no members , it is represented by a hyphen . comparing ( [ eqna21 ] ) with ( [ eqna22 ] ) gives \text{. } \label{eqna23}\ ] ]in the static limit as then equation ( [ eqn31 ] ) reduces to from ( [ eqn34 ] ) and the standard hypergeometric identity [ 15 , eqn 7.3.1 71]: reduces to in agreement with equations ( [ eqn11 ] ) and ( [ eqn34 ] ) . the double series given by equation ( [ eqn31 ] )can be summed with respect to either the index or the index in the definition ( [ eqn37 ] ) . summing with respect to in ( [ eqn37 ] )gives a series of bessel functions of the second kind: the gamma function identity ( [ eqn46 ] ) and the bessel function identity \label{eqnb5}\]]have been employed to obtain equation ( [ eqnb4 ] ) .summing instead over the index in ( [ eqn37 ] ) gives the alternative series can be reduced using ( [ eqnb2 ] ) and ( [ eqn46 ] ) to give: dimensionless variables and defined by equations ( [ eqn34 ] ) and ( [ eqn40 ] ) respectively are related by substituting this equation and equation ( [ eqn36 ] ) in equation ( eqnb7 ) gives immediately equation ( [ eqn44 ] ) . summing with respect to in equation ( [ eqna20 ] ) gives the bessel series corresponding summation over the index gives seems to be no hypergeometric transformation listed in standard tables suitable for directly reducing the hypergeometric function in this equation .equations ( [ eqnb4 ] ) and ( [ eqnb9 ] ) can be conveniently combined to give a series of hankel functions of the first kind: .erdlyi et .al . [ 11 , section 5.9 ] tabulate the partial differential equations satisfied by all the functions in horn s list .they employ the notation reproduced below for the various partial derivatives is any function on the list .each function in the list satisfies two partial differential equations , and unfortunately those given in [ 11 , 5.9 34 ] for contain typographical errors .the correct equations can be shown by the methods given in [ 11 , section 5.7 ] to be: p+\beta yq-\alpha \beta z=0 \label{eqnc0}\]] for the particular case considered here we have and so that ( [ eqnc0 ] ) reduces to p+\alpha yq-\alpha ^{2}z=0\text{. } \label{eqnc2}\]]similar equations can also be derived for the kamp de friet function defined by equation ( [ eqn38 ] ) .for the definition \label{eqnc3}\]]and the notation the equations corresponding to ( [ eqnc1 ] ) and ( [ eqnc2 ] ) can be shown to be writing ( [ eqn53 ] ) can be expressed as ordinary differential equation for in terms of can be derived by first obtaining the corresponding differential equation for from ( [ eqnc1 ] ) and ( [ eqnc2 ] ) and then substituting ( [ eqnc7 ] ) into this equation .although straightforward in principle , this procedure is rather intricate in practice , and only the essential elements of the derivation are given below .it is convenient to define a differential operator such that: has by definition the properties: equations ( [ eqnc1 ] ) and ( [ eqnc14 ] ) gives the equation applying the operators and to this equation gives respectively: variable gives a system of four coupled equations : xp-\alpha xyq+\alpha ^{2}xz \label{eqnc23}\ ] ] eliminating gives a system of 3 equations : eliminating gives the two equations : eliminating gives finally the fourth - order equation: d^{3}z\]] d^{2}z\]] dz\]] z=0% \text{. 
} \label{eqnc31}\]]this equation can be converted to standard differential form using the identity: gives: x^{3}\frac{d^{3}z}{dx^{3}}\]] x^{2}\frac{d^{2}z}{dx^{2}}\]] x\frac{dz}{dx}\]] z=0% \text{. } \label{eqnc33}\]]from equation ( [ eqnc7 ] ) , the fourier coefficient is related to by where the constant is given by equation ( [ eqnc7 ] ) .differentiating equation ( [ eqnc34 ] ) gives the relations \label{eqnc35}\]] \label{eqnc36}\]] \label{eqnc37}\]] .\label{eqnc38}\]]and substituting these relations into equation ( [ eqn33 ] ) gives after much reduction the fourth - order differential equation: x^{2}\frac{d^{2}g_{+}^{m}}{dx^{2}}+\]] x\frac{% dg_{+}^{m}}{dx}+y^{2}g_{+}^{m}=0\text{. } \label{eqnc39}\ ] ] equation ( [ eqn54 ] ) can be written in the form equation for can be established in the same manner as for equation ( [ eqnc39 ] ) , but having already derived ( eqnc39 ) , it is enough to establish that and both obey the same differential equation . from equations ( [ eqnc41 ] ) and ( [ eqnc42 ] )then with the definition the notation we have equations ( [ eqnc45])-([eqnc50 ] ) in the kamp de friet differential equations ( [ eqnc4 ] ) and ( [ eqnc5 ] ) gives compare these equations with the partial differential equations from the horn function , we note that and are constants .defining and will satisfy the same ordinary differential equation if and satisfy the same pair of partial differential equations . with the notation substituting these relations in equations ( [ eqnc51 ] ) and ( eqnc52 ) gives \hat{p}+x\left ( 1-x\right ) \hat{r}-y^{2}\hat{t}-\alpha ^{2}w+\left ( 2\alpha -1\right ) y\hat{q% } + 2xy\hat{s}\text{. } \label{eqnc63}\]]eliminating from ( [ eqnc63 ] ) using equation ( [ eqnc62 ] ) gives \hat{p}+\alpha y\hat{q}-\alpha ^{2}w=0\text{. } \label{eqnc64}\]]equations ( [ eqnc62 ] ) and ( [ eqnc64 ] ) obtained from the kamp de friet functionare identical to equations ( [ eqnc1 ] ) and ( [ eqnc2 ] ) obtained from the horn function , so also satisfies the differential equation ( [ eqnc39 ] ) .the integral representation ( [ eqn9 ] ) for the fourier coefficient can be written in the form: function satisfies the partial differential equation this has the elementary separated solution: \label{eqnd4}\]]where is the separation constant .the solution can be constructed as a superposition of the allowable ( i.e. finite at infinity ) elementary solutions given by ( [ eqnd4 ] ) .this gives in the form: ds \label{eqnd5}\]]where the sign is chosen so that the integral converges . as is real and positive , this depends only on the imaginary part of , the appropriate sign being the same as that of , which will be assumed positive here .setting in ( eqnd2 ) and ( [ eqnd5 ] ) gives can be determined from the integral [ 7 , eqn 6.621 1]: , and in equation ( [ eqnd7 ] ) gives the expression for in terms of the gauss hypergeometric function is [ 16 , eqn 8.1.3] equations ( [ eqnd6 ] ) , ( [ eqnd8 ] ) and ( [ eqnd9 ] ) it follows that hence j_{m}\left ( s\right ) s^{-1/2}ds\text{. } \label{eqnd11}\]]for the special case of evanescent waves such that with then with a suitable transformation in the complex plane , this equation can be expressed in the form i_{m}\left ( s\right ) s^{-1/2}ds .\label{eqnd12}\]]the details of this transformation are given in appendix b. 
equation ( eqnd12 ) is straightforward to evaluate numerically as the integrand is not oscillatory and decays exponentially to zero as provided , which from equation ( [ eqn12 ] ) is always the case .this follows immediately from the leading term in the asymptotic approximation as of , which is [ 7 , eqn 8.451 5]: the integral representation ( [ eqnd11 ] ) allows the ordinary differential equations in terms of or satisfied by to be constructed in a straightforward manner .it is convenient to define a new variable such that also a new dependent variable such that is given by is to be regarded as a constant embedded parameter in the odesatisfied by and where is given by \text{. } \label{eqnd16}\]]the various derivatives of are then given by bessel function satisfies the differential equation [ 7 , eqn 8.401] therefore: ( [ eqnd19 ] ) twice with respect to and utilizing equations ( [ eqnd15])-([eqnd17 ] ) gives: this twice by parts yields \right ) ds .\label{eqnd21}\]]since \right ) = \left ( \frac{1}{4}+2i\omega s-\omega ^{2}s^{2}+2\omega \chi -\chi ^{2}s^{-2}\right ) f(s,\omega , \chi ) \label{eqnd22}\]]then the poisson case with , employing ( [ eqnd17 ] ) in ( eqnd23 ) gives legendre s equation ( [ eqn57 ] ) of degree , as must be the case for consistency with equation ( [ eqn11 ] ) .for the helmholtz case , ( [ eqnd23 ] ) must be differentiated twice with respect to before employing ( [ eqnd17 ] ) .this yields the fourth - order linear ode becomes equation ( [ eqn55 ] ) on substituting . equation ( [ eqnd25 ] ) can be converted to an equation in terms of by making the substitutions: collecting terms and simplifying this yields x^{2}\frac{d^{2}% \bar{y}}{dx^{2}}+\ ] ] x\frac{d% \bar{y}}{dx}+y^{2}\bar{y}=0\text{. } \label{eqnd34}\ ] ] inspection of equations ( [ eqnc39 ] ) and ( [ eqnd34 ] ) , obtained by totally different methods , shows that they are identical .the fourier coefficients for the helmholtz green function have been split into their half advanced+half retarded and half advanced retarded components , and these components have been given in closed form in terms of two - dimensional hypergeometric functions .these solutions generalize the well - known solutions of poisson s equation for ring sources , and reduce to them in the static limit when the wave number .the two - dimensional hypergeometric functions can be considered as double series , with the order of summation arbitrary .the two summation choices give different series of special functions for each of the fourier components , and all of these series have been numerically verified , as have the closed form solutions themselves .one series is given in terms of hankel functions , and only a few terms are need far from the ring source for accurate results .a second series in terms of associated legendre functions only requires a few terms in the neigborhood of the ring to give accurate results .the systems of partial differential equations associated with each of the two generalized hypergeometric functions have been used to derive a fourth - order ordinary differential equation in terms of for the fourier coefficients .a completely different approach involving integral representations of the fourier coefficients has been presented in tandem , which derives many of the same results , as well as some new ones .both approaches give exactly the same fourth - order differential equation for the general fourier coefficient , despite the algebra being rather intricate in both cases .another fourth order ordinary differential 
equation in terms of the wave number parameter can also be derived by the methods presented here .the fourier coefficient given by equations ( eqn24 ) and ( [ eqn25 ] ) can be expressed in the form and are defined by equations ( [ eqn12 ] ) and ( [ eqn40 ] ) respectively. evaluation of ( [ app01 ] ) requires the integral can be evaluated for using the integral representation [ 7 , eqn 8.711 2]: is equivalent to substitutions give ( [ app01 ] ) becomes the series ( [ app08 ] ) into even and odd terms gives after some reduction the factorial formulas been used to obtain ( [ app09 ] ) and ( [ app10 ] ) .the index in ( app10 ) runs from rather than from as the associated legendre polynomial is zero for .the substitution in ( [ app10 ] ) gives the alternative form indices in equations ( [ app09 ] ) and ( [ app13 ] ) can be switched to negative values using the relations [ 16 , eqns 8.2.5 , 8.2.1]: \label{app14}\]] gives kind of legendre functions in equations ( [ app16 ] ) and ( [ app17 ] ) can be switched using the whipple relation [ 16 , eqn 8.2.7],[17]: gives after some reduction integral for in equation ( eqnd11 ) for can be considered to be the contribution along the real axis of the contour integral j_{m}\left ( s\right ) s^{-1/2}ds \label{appb1}\]]where is the closed contour shown in the figure below , in the limits as and . the corresponding contribution to the contour integral along the imaginary axis is given by j_{m}\left ( iy\right ) y^{-1/2}\left ( i\right ) ^{1/2}dy \label{appb2}\]]and this can be stated in terms of the modified bessel function of the first kind using the identity [ 7 , eqn 8.406 3] gives immediately i_{m}\left ( y\right ) y^{-1/2}dy\text{. } \label{appb4}\]]the integrand of ( [ appb1 ] ) is analytic everywhere within the contour and therefore from cauchy s theorem , the integral ( [ appb1 ] ) is zero .therefore if the contributions to the contour integral along the two quarter circles vanish in the limits as and , then equation ( [ eqnd12 ] ) has been proven . along the smaller quarter circle we set and the contribution to the contour integral becomes \times\]] this clearly vanishes as . along the larger quarter circle we set and the contribution to the integralis given by \times\]] leading term in the asymptotic approximation of is given by [ 7 , eqn 8.451 1] therefore \times\]] d\theta \label{appb9}\]]which can be expressed as \times\]] + \exp \left [ i\left ( \left ( \omega -1\right ) r\exp \left ( i\theta \right ) + \frac{i\pi m}{2% } + \frac{i\pi } { 4}\right ) \right ] \right ) d\theta \text{. } \label{appb10}\]]inspection of ( [ appb10 ] ) shows that the integrand vanishes exponentially in the limit as provided .this condition always holds and is also the condition for the integral in ( [ eqnd12 ] ) to converge .the series solutions for the fourier coefficients given by equation ( eqn51 ) and equations ( [ eqn44])-([eqn45 ] ) were evaluated using mathematica and the numerical performance was explored for various geometric parameters and wave numbers . for comparison ,the two integrals ( [ eqn9 ] ) and ( [ eqn19 ] ) for the fourier coefficients were also evaluated numerically for the same parameters .all four methods give identical results at locations which are neither too far away nor too close to the ring source . 
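As a hedged illustration of such a numerical cross-check, the sketch below compares two of the routes for an evanescent wavenumber, k = i*kappa, where both integrands are real and non-oscillatory: direct quadrature over the azimuthal angle difference, and the standard cylindrical (Sommerfeld-type) spectral integral over products of Bessel functions obtained from Neumann's addition theorem. The normalization and the parameter values are assumptions chosen only so that the two numbers can be compared; they are not taken from the paper's equations.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

kappa, a, rho, z, m = 1.3, 1.0, 1.6, 0.5, 3   # evanescent case k = i*kappa

# route 1: direct quadrature over the azimuthal angle difference
def g_angular():
    def f(psi):
        R = np.sqrt(rho**2 + a**2 - 2.0 * rho * a * np.cos(psi) + z**2)
        return np.exp(-kappa * R) / (4.0 * np.pi * R) * np.cos(m * psi)
    val, _ = quad(f, 0.0, np.pi, limit=200)
    return val / np.pi

# route 2: cylindrical spectral integral; for |z| > 0 the radial integrand decays
# exponentially at large lambda, so plain quadrature to infinity is adequate
def g_spectral():
    def f(lam):
        gamma = np.sqrt(lam**2 + kappa**2)
        return jv(m, lam * rho) * jv(m, lam * a) * np.exp(-gamma * abs(z)) * lam / gamma
    val, _ = quad(f, 0.0, np.inf, limit=400)
    return val / (4.0 * np.pi)

print(g_angular(), g_spectral())   # the two values should agree to quadrature accuracy
```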
the numerical integration ( [ eqn9 ] ) performs very well at all distances from the ring , whereas the numerical integration ( [ eqn19 ] ) fails when either very close to the ring or too far away .no cases were identified where equation ( [ eqn19 ] ) was superior . the hankel function series ( [ eqn51 ] ) requires fewer and fewer terms for convergence as the distance from the loop increases , and conversely performance decreases as the ring is approached . the associated legendre function series ( [ eqn44 ] ) and ( [ eqn45 ] ) have precisely the opposite performance , with great accuracy close to the ring and failure at large distances from the ring .the two series ( [ eqn44 ] ) and ( [ eqn45 ] ) are well suited to calculations close to the ring as the associated legendre functions themselves each contain the ring singularity as .by contrast , the hankel functions ( [ eqn51 ] ) are not singular at the ring and hence an increasing number of terms are required to model the singularity as the ring is approached . in all cases , there is are always at least one numerical integration and one series solution which can be used to cross check each other . samplenumerical results are given in table 2 for moderate distances from the ring source , and shows the number of terms required by each series to match the numerical integrations exactly .table 3 shows the performance of the hankel series with increasing distance from the ring .the number of hankel terms decreases to very few at large distances from the ring .table 4 shows the performance of the two associated legendre series ( [ eqn44 ] ) and ( [ eqn45 ] ) as the ring is approached .the real part of diverges logarithmically as , whereas the imaginary part tends to a finite limit .it can be seen immediately from table 4 that only 8 terms in each legendre series is sufficient to calculate for the range . | a new method is presented for fourier decomposition of the helmholtz green function in cylindrical coordinates , which is equivalent to obtaining the solution of the helmholtz equation for a general ring source . the fourier coefficients of the helmholtz green function are split into their half advanced+half retarded and half advanced retarded components . closed form solutions are given for these components in terms of a horn function and a kamp de friet function , respectively . the systems of partial differential equations associated with these two - dimensional hypergeometric functions are used to construct a fourth - order ordinary differential equation which both components satisfy . a second fourth - order ordinary differential equation for the general fourier coefficent is derived from an integral representation of the coefficient , and both differential equations are shown to be equivalent . series solutions for the various fourier coefficients are also given , mostly in terms of legendre functions and bessel / hankel functions . these are derived from the closed form hypergeometric solutions or an integral representation , or both . numerical calculations comparing different methods of calculating the fourier coefficients are presented . |
networks have significant benefits in terms of diversity and robustness over non - cooperative networks .consequently , they have been presented as a topology for the next generation of mobile networks .antenna selection , relay selection ( rs ) and diversity maximization are central themes in mimo relaying literature . however , current approaches are often limited to stationary , single relay systems and channels which assume the direct path from the source to the destination is negligible . in this letter ,the problems of transmit diversity selection ( tds ) and rs are formulated as joint discrete optimization problems , where rs refines the set from which tds is made ; leading to improved convergence , performance and complexity .complexity discrete stochastic algorithms ( dsa ) with mean square error ( mse ) cost functions are employed to arrive at a solution .continuous recursive least squares ( rls ) channel estimation ( ce ) is introduced to form a combined framework , where adaptive rs and tds are performed jointly with no forward channel state information ( csi ) .the proposed algorithms are implemented , and bit error - rate ( ber ) and diversity comparisons given against the exhaustive search solution and the unmodified cooperative system .we consider a qpsk , two - phase , decode - and - forward ( df ) , multi - relay mimo system with half - duplex relays .linear minimum mean square error ( mmse ) receivers are used at all nodes and an error - free control channel is assumed .all channels between antenna pairs are flat fading , have a coherence time equal to the period of an symbol packet and are represented by a complex gain .the direct path is non - negligible and has an expected gain of a fraction of that of the indirect paths ; reflecting the increased distance and shadowing involved .an outline system model is given by fig . [fig : system_model ] .the system comprises intermediate relay nodes which lie between single source and destination nodes which have and antennas , respectively .the relay nodes have antennas , where is an integer multiple of in order to reduce feedback requirements .the transmitted data consists of independent , .the source node transmits to the relay and destination nodes during the first phase , and the second phase involves the relay nodes decoding and forwarding their received signals to the destination . the maximum spatial multiplexing gain and diversity advantage simultaneously available in the system and , respectively .the and first phase received signals at the destination and the relay are given by = \mathbf{h}_{\mathrm{sd}}[i]a_{s}\mathbf{t}_{\mathrm{s}}\mathbf{s}[i ] + \eta_{\mathrm{sd}}[i],\ ] ] = \mathbf{h}_{\mathrm{sr}_{n}}[i]a_{\mathrm{s}}\mathbf{t}_{\mathrm{s}}\mathbf{s}[i ] + \eta_{\mathrm{sr}_{n}}[i ] , \label{eq : source_relay}\ ] ] respectively .the matrices and are the source - destination and source - relay channel matrices , respectively .the subscripts s , d and refer to the source , destination and relay nodes , respectively .the quantity is a vector of zero mean additive white gaussian noise , is the data vector , and is the scalar transmit power allocation . 
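Since linear MMSE receivers are used at all nodes, it may help to sketch the basic Wiener-filter reception for a received-signal model of the form above. The sketch assumes perfect channel knowledge, unit-energy i.i.d. QPSK symbols and illustrative antenna counts; none of the constants are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, sigma2, a = 4, 8, 0.1, 1.0   # streams, receive antennas, noise variance, power scaling

H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

# unit-energy QPSK symbols
bits = rng.integers(0, 2, size=(2, N))
s = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
r = a * H @ s + noise

# MMSE (Wiener) receiver: W = R^{-1} P with R = E[r r^H] = a^2 H H^H + sigma2 I, P = E[r s^H] = a H
R = a**2 * H @ H.conj().T + sigma2 * np.eye(M)
P = a * H
W = np.linalg.solve(R, P)          # columns of W are the per-stream filters

s_hat = W.conj().T @ r
s_dec = (np.sign(s_hat.real) + 1j * np.sign(s_hat.imag)) / np.sqrt(2)   # QPSK slicer
print("symbol errors:", int(np.sum(s_dec != s)))
```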
the tds matrix , , is a diagonal matrix where each element on the main diagonal specifies whether the correspondingly numbered antenna is active .the received signal of the second phase at the destination is the sum of the forwarded signals from the relays and is expressed as = \boldsymbol{\boldsymbol{\mathcal{h}}}_{\mathrm{rd}}[i]a_{\mathrm{r}}\boldsymbol{\mathcal{t}}_{\mathrm{r}}[i]\mathbf{\hat{\bar{s}}}[i ] + \eta_{\mathrm{rd}}[i ] , \label{eq : rd_compound}\ ] ] where ] ] \big] ] is the channel matrix .the linear mmse receiver at each relay is given by =\underset{\mathbf{w}_{\mathrm{sr}_{n}}}{\mbox{arg\,min}}\;e\big[\big\vert \mathbf{s}[i]-\mathbf{w}_{\mathrm{sr}_{n}}^{h}[i]\mathbf{r}_{\mathrm{sr}_{n}}[i]\big\vert^{2}\big ] , \label{eq : relay_wiener_filter}\ ] ] resulting in the following wiener filter , , where \mathbf{r}_{\mathrm{sr}_{n}}^{h}[i]\big] ] are the autocorrelation and cross - correlation matrices , respectively .at the destination , the received signals are stacked to give = \big[\mathbf{r}^{t}_{\mathrm{sd}}[i ] \mathbf{r}^{t}_{\mathrm{rd}}[i]\big]^{t} ]is given by =\underset{\mathbf{w}_{\mathrm{d}}}{\mbox{arg\,min}}\;e\big[\big\vert \mathbf{s}[i]-\mathbf{w}_{\mathrm{d}}^{h}[i]\mathbf{r}_{\mathrm{d}}[i]\big\vert^{2}\big ] \label{eq : dest_wiener_filter}\ ] ] and the resulting wiener filter is where \mathbf{r}_{\mathrm{d}}^{h}[i]\big] ] .a qpsk slicer follows mmse reception at all nodes ; the output of which is taken as the symbol estimate . using ( [ eq : relay_wiener_filter ] ) and ( [ eq : dest_wiener_filter ] ) ,the mse at the relay and destination are given by and , respectively , where \mathbf{s}[i]\big] ] is the label of the worst performing relay at the iteration . the current optimum is then chosen and tracked by means of a state occupation probability ( sop ) vector , .this vector is updated at each iteration by adding ] , as a markov chain and the members of as the possible transition states .the current optimum can then be defined as the most visited state .once rs is complete at each time instant , set reduction ( , step 6 ) and tds can take place .to perform tds , modified versions of steps 1 - 5 are used .the considered set is replaced , ; the structure of interest is replaced , ; the best performing matrix is sought ; the sop vector is replaced and from ( [ eq : mmse_opt_function ] ) .finally , the inequality of step 3 is reversed to enable convergence to the lowest mse tds matrix which is the feedback to the relays .convergence of the proposed algorithm to the optimal exhaustive solution is dependent on the independence of the cost function observations and the satisfaction of \big]>\mathcal{f}\big[r[i]\big]\big\ } > \mathrm{pr}\big\{\mathcal{f}\big[r[i]\big]>\mathcal{f}\big[r^{\mathrm{opt}}[i]\big]\big\ } ] for rs and tds ( with the afore mentioned modifications ) . in this work , to minimize complexity , independent observations are not used , therefore the proof of convergence is intractable . however, excellent convergence has been observed under these conditions in and throughout the simulations conducted for this work .significant complexity savings result from the proposed algorithm ; savings which increase with , , , and . 
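The discrete stochastic search itself can be sketched generically: at each iteration a random candidate is drawn, its noisy cost is compared with that of the current state, and a state occupation probability (SOP) vector with a decreasing step size tracks the most visited state, which serves as the running estimate of the optimum. The cost below is a stand-in (noisy observations of fixed means) rather than the MSE expressions of this paper, and the candidate set, step size and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical finite candidate set with unknown mean costs; only noisy samples are observed
true_cost = np.array([1.00, 0.70, 0.90, 0.55, 0.80, 0.65])

def noisy(j):
    """One noisy observation of candidate j's cost (stand-in for an MSE sample)."""
    return true_cost[j] + 0.3 * rng.standard_normal()

K = len(true_cost)
state = rng.integers(K)               # current candidate
sop = np.zeros(K); sop[state] = 1.0   # state occupation probabilities
n_iter = 400

for i in range(1, n_iter + 1):
    cand = rng.integers(K)            # draw a random alternative candidate
    if noisy(cand) < noisy(state):    # keep whichever looks better on this observation
        state = cand
    e = np.zeros(K); e[state] = 1.0
    sop += (1.0 / (i + 1)) * (e - sop)   # decreasing step; sop remains a probability vector

estimate = int(np.argmax(sop))        # "most visited" state is the current optimum estimate
print("estimated best candidate:", estimate, " true best:", int(np.argmin(true_cost)))
```

In the scheme described above the same loop runs over relay subsets and then over transmit-antenna selection matrices, with the SOP update replacing the explicit storage of all past cost observations.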
when , , and , the number of complex multiplications for mmse reception and exhaustive tds , exhaustive tds with rs , iterative tds and iterative tds with rs are , , and , respectively , for each time instant .in this section , simulations of the proposed algorithms ( iterative tds with rs ) are presented and comparisons drawn against the optimal exhaustive solutions ( exhaustive tds with rs ) , the unmodified system ( no tds ) , and the direct transmission ( non - cooperative ) . plots of the schemes with tds only ( exhaustive tds , iterative tds ) are also included to illustrate the performance improvement obtained by rs .equal power allocation is maintained in each phase , where when tds is employed and for the unmodified system .for the rls ce , , and are initialized as identity matrices and the exponential forgetting factor is 0.9 .the initial values of , and are zeros matrices .each simulation is averaged over 1000 packets ( ) ; each made up of pilot symbols .1#21 fig .[ fig : ber_ce ] gives the ber convergence performance of the proposed algorithms .the iterative tds with rs algorithm converges to the optimal ber as does tds with rs and ce , albeit in a delayed fashion due to the ce .the tds with rs scheme exhibits quicker convergence and lower steady state ber .these results and the interdependence between elements of the algorithm confirm that both the rs and tds portions of the algorithm converge to their exhaustive solutions but also the satisfaction of the probability conditions of section [ sec : proposed_algorithms ] . # 1#21 fig . [ fig : snr ] shows the ber versus snr performance of the proposed and conventional algorithms .increased diversity has been achieved whilst maintaining , illustrating that although the maximum available diversity advantage decreases with rs with tds to , the actual diversity achieved has increased .these diversity effects can be attributed to the removal of poor paths which bring little benefit in terms of diversity , but also the increase in transmit power over the remaining paths .the largest gains in diversity are present in region and begin to diminish above this region because relay decoding becomes increasingly reliable and lower power paths become more viable for transmission .this work presented a joint dsa which combines tds and rs along with continuous ce for multi - relay cooperative mimo systems.the scheme exceeds the performance of systems which lack tds and matches that of the optimal exhaustive solution whilst saving considerable computational expense , making it ideal for realtime mobile use .p. clarke and r. c. de lamare , `` joint transmit diversity optimization and relay selection for multi - relay cooperative mimo systems using discrete stochastic algorithms , '' _ ieee communications letters _ , vol.15 , no.10 , pp.1035 - 1037 , october 2011 .p. clarke and r. c. de lamare , `` transmit diversity and relay selection algorithms for multirelay cooperative mimo systems '' _ ieee transactions on vehicular technology _ ,vol.61 , no .3 , pp . 1084 - 1098 , october 2011 .r. c. de lamare and a. alcaim , `` strategies to improve the performance of very low bit rate speech coders and application to a 1.2 kb / s codec , '' _iee proceedings- vision , image and signal processing .1 , feb . 2005 .r. c. de lamare and r. sampaio - neto , `` minimum mean squared error iterative successive parallel arbitrated decision feedback detectors for ds - cdma systems , '' _ ieee transactions on communications .5 , may , 2008 .de lamare and r. 
sampaio - neto , adaptive reduced - rank equalization algorithms based on alternating optimization design techniques for mimo systems , " _ ieee trans .vehicular technology _ , vol .60 , no . 6 , pp.2482 - 2494 , july 2011 .p. li , r. c. de lamare and r. fa , multiple feedback successive interference cancellation detection for multiuser mimo systems , " _ ieee transactions on wireless communications _ , vol .10 , no . 8 , pp . 2434 - 2439 , august 2011 . | we propose a joint discrete stochastic optimization based transmit diversity selection ( tds ) and relay selection ( rs ) algorithm for decode - and - forward ( df ) , cooperative mimo systems with a non - negligible direct path . tds and rs are performed jointly with continuous least squares channel estimation ( ce ) , linear minimum mean square error ( mmse ) receivers are used at all nodes and no inter - relay communication is required . the performance of the proposed scheme is evaluated via bit - error rate ( ber ) comparisons and diversity analysis , and is shown to converge to the optimum exhaustive solution . mimo relaying , transmit diversity , cooperative systems , relay selection |
recurrence intervals , defined as the time periods between consecutive extreme events , have been a topic of extensive research across many fields , financial markets in particular .the primary contribution of the published research is an understanding of the statistical regularities in recurrence intervals .the memory behavior in the underlying process strongly affects the distribution form of recurrence intervals .the interval distribution is exponential if the process has no memory . incorporating a long memory into the underlying processgreatly alters the recurrence interval distribution .for example , the stretched exponential and weibull recurrence interval distribution are analytically and numerically confirmed in a process with a long linear memory .when a process has a long nonlinear memory ( a multifractual process ) , the recurrence intervals are power - law distributed .there is extensive literature that examines the empirical distribution of recurrence intervals in financial markets .the distribution form is found to be dependent on data source , data type , and data resolution .for example , recurrence interval distributions with a power - law tail are found in the daily volatilities in the japanese market , in the minute volatilities in the korean and italian markets , in the daily returns in the us stock markets , in the minute returns in the chinese markets , and in the minute volume in the us and chinese markets .in addition , stretched recurrence interval distributions are also observed in the financial volatility at different resolutions in a range of different markets . the -exponential distribution has also been observed in the recurrence intervals between losses in financial returns , and the corresponding distribution in the chinese stock index future market is a stretched exponential .in addition to the inconsistent findings on the distribution of empirical recurrence intervals , the existence of scaling behaviors in the recurrence interval distribution for the extremes filtered by different thresholds is under debate . analyzing the distribution of recurrence intervalshas indicated that the extreme event filtering threshold should influence the recurrence interval distribution .this indication was supported when the estimated distributional parameters were found to be strongly dependent on the thresholds when the recurrence intervals are fitted by such distribution functions as the stretched exponential distribution and the -exponential distribution . and propose that the distribution of recurrence intervals depends only on the mean recurrence interval , and not on a specific asset or on the time resolution of the data .only a limited amount of research has used recurrence interval analysis to assess and manage risks in financial markets .an improved method for estimating the value at risk ( var ) based on the recurrence interval is significantly more accurate than traditional estimates based on the overall or local return distributions .another way of predicting extremes using statistics of recurrence intervals is also superior to the precursory pattern recognition technique when the underlying process is multifractal . defining a conditional loss probability as the inverse of the expected waiting time before observing another extreme determined by the latest recurrence interval, finds that the risk of extreme loss events is high if the latest recurrence interval is long or short . 
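A minimal sketch of the basic recurrence-interval machinery discussed above: fix a threshold equal to a multiple of the sample standard deviation, collect the waiting times between consecutive exceedances, and estimate the empirical hazard probability W(dt | t) that another extreme occurs within dt given that a time t has already elapsed since the last one. Synthetic heavy-tailed returns stand in for the DJIA series, and the threshold multiplier and horizons are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# heavy-tailed synthetic daily "returns" as a stand-in for the empirical series
ret = rng.standard_normal(100_000) * (0.5 + np.abs(rng.standard_normal(100_000)))

q = 2.0                                   # threshold in units of the sample standard deviation
thresh = q * ret.std()
ext_pos = np.flatnonzero(ret < -thresh)   # days with extreme losses (threshold exceedances)
tau = np.diff(ext_pos)                    # recurrence intervals, in days

print("number of extremes:", ext_pos.size, " mean recurrence interval:", tau.mean())

def hazard(t, dt, tau):
    """Empirical W(dt | t): probability that an extreme occurs within dt more days,
    given that t days have already elapsed since the last extreme."""
    survivors = tau[tau > t]
    if survivors.size == 0:
        return np.nan
    return float(np.mean(survivors <= t + dt))

for t in (0, 5, 20, 60):
    print(f"W(dt=1 | t={t}) = {hazard(t, 1, tau):.3f}")
```

For a memoryless process this hazard is flat in t; the dependence of W(dt | t) on the elapsed time is exactly what the memory effects discussed above produce in empirical data.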
in all of these studies, however, only in-sample tests are conducted, and good performance in in-sample tests cannot ensure good results in out-of-sample tests. in contrast, it has recently been found that the extreme-predicting method based on recurrence interval analysis does provide good predictions in out-of-sample tests. such events as market crashes, currency crises, and bank failures are financial crises in which the value of assets or the equity of financial institutions shrinks rapidly. financial crises shock the real-world economy and can cause recessions or depressions if left unchecked. to reduce investor losses and shocks to the economy and to reduce financial turbulence, much effort has gone into predicting financial extremes. there is a plethora of literature on forecasting financial crises, especially currency crises and bank failures, and most of the research relies on the early warning model (ewm). the ewm identifies the leading indicators of emerging financial problems and uses such techniques as logit (or probit) regressions and intelligence approaches to translate them into the hazard probability of crises occurring in the future, which is used as an early warning signal that indicates whether a crisis is imminent. compared to the vast ewm research predicting bank failures and currency crises, early warning models that monitor stock markets and provide warning signals of market extremes have received little attention. the contributions of the existing literature are as follows. a number of indicators are able to warn of incoming financial extremes. risk aversion indicators have been shown to be useful in predicting stock market crises, but not currency crises. such macroeconomic indicators as yield curve spreads and inflation rates can be used to predict stock market recessions, a global measure of liquidity can predict asset price booms, and the price-to-book ratio can predict emerging price bubbles. such variables of index futures and options as the vix, open interest, dollar volume, put option price, and put option effective spread can predict equity market crises. the average value at risk (avar), defined through an arma-garch model with standard infinitely divisible innovations, has been used as an early warning indicator that can predict both extreme events and highly volatile markets. by constructing two investment networks based on cross-border equity and long-term debt securities portfolios, two network-based indicators (algebraic connectivity and edge density) have been proposed that could have predicted the 2008 global financial crisis, and the interconnectedness in the global network of financial linkages could have predicted the financial crises that occurred during the 1978-2010 period. composite indices averaged from crisis-related variables have also been proposed to predict financial crises. one study uses market volatility as a daily financial condition indicator to determine whether a stock market is unstable or not. another defines a stock market instability index based on the difference between the current market condition and past conditions when the market was stable. a further model predicts stock market collapse by signaling when massive selling by global institutional investors occurs. finally, crisis-related variables have been integrated into a monthly financial market condition indicator, and a support vector machine applied to this indicator can detect market crises.
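the ewm pipeline described above is essentially "indicators in, crisis probability out". the toy sketch below shows the logit variant on synthetic data; the indicator names, the sample sizes, and the 0.2 alarm threshold are illustrative assumptions rather than values taken from any of the cited studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
indicators = np.column_stack([
    rng.normal(size=n),    # stand-in for a yield-curve spread (hypothetical)
    rng.normal(size=n),    # stand-in for an inflation rate (hypothetical)
    rng.normal(size=n),    # stand-in for a liquidity measure (hypothetical)
])
# synthetic, rare crisis labels loosely driven by the indicators
logits = -3.0 + 0.8 * indicators[:, 0] - 0.5 * indicators[:, 1] + 0.6 * indicators[:, 2]
crisis = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(indicators[:1500], crisis[:1500])   # in-sample calibration
hazard = model.predict_proba(indicators[1500:])[:, 1]                # out-of-sample probabilities
alarm = hazard > 0.2                                                 # early warning signal (assumed cut)
print("out-of-sample alarm rate:", round(alarm.mean(), 3))
```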
other studies use a market instability index to capture risk warning levels, quantify the instability level of the current market, and predict its future behavior. there is also a pattern of price trajectories that signals near-future market crashes: the log-periodic power law singularity (lppls) model detects bubbles by combining (i) the economic theory of rational expectation bubbles, (ii) the effect on the market of imitation and herding behaviors among investors and traders, and (iii) the mathematical and statistical physics of bifurcations and phase transitions. the faster-than-exponential (power law with finite-time singularity) increase in asset prices accompanied by accelerating oscillations is the main diagnostic that indicates bubbles, and later work corroborates that the lppls pattern can be used as an early warning signal for market crashes. in addition, the price series can be converted into networks using a visibility graph algorithm, and the degree of the price network then measures the magnitude of the faster-than-exponential growth of stock prices and predicts imminent financial extreme events; on average this indicator performs better than the lppls pattern-recognition indicator. the patterns of financial crises have also been modeled directly to predict financial extreme events. one line of work uncovers the distribution pattern of waiting times between consecutive market extremes and uses it to define a hazard probability that subsequent extremes will occur within a certain time period; this hazard probability performs well in out-of-sample predictions. as an analogue to the seismic activity around earthquakes, an epidemic-type aftershock sequence model (a type of mutually self-exciting hawkes point process) has been used to capture the occurrence dynamics of stock market crashes, and it can serve as an early warning model for predicting the probability of medium-term crashes. we analyze the daily dow jones industrial average (djia) index from 16 february 1885 to 31 december 2015. the logarithmic return of the djia index over a time scale of one day is defined as $r(t) = \ln I(t) - \ln I(t-1)$, where $I(t)$ is the index level on day $t$. figures [fig:index:return](a) and [fig:index:return](b) show plots of the logarithmic djia and its return, respectively. the djia index grows from 30.92 on 16 february 1885 to 17425.03 on 31 december 2015, with a total logarithmic return greater than 6. although the index exhibits a rising trend throughout the sample period, there are falling trends and range-bound stretches in different subperiods. figure [fig:index:return] shows six turbulent periods (highlighted in shadow): the wall street crash of 1929-1932, the oil crisis of 1973-1975, the black monday crash of 1987-1989, the dot-com bubble of 2000-2003, the subprime and 2008 financial crisis of 2007-2009, and the european sovereign debt crisis of 2011-2015. (color online.) figure [fig:index:return]: plots of the logarithmic djia index (a) and its difference, the return (b). an extreme value is usually defined as a peak over a threshold (pot) that is some multiple of the sample standard deviation; the multiplier is a predefined value (see the summary in table 1 of the corresponding reference). although identifying extreme events in terms of pot is widely applied in empirical analysis, the pot approach has drawbacks: a small multiplier will produce many "extreme values," not all of which are truly extreme, and a large multiplier will indicate genuine extremes but not necessarily include all of them. (color online.) figure [fig:evt:threshold]: determining the extreme value threshold for negative, positive, and absolute returns. (a) the tail exponents as a function of the sorted returns. (b) the ks statistics with respect to the sorted returns; the ks statistic is defined as the maximum absolute difference between the empirical and fitted tail distributions. we estimate the shape parameter using the hill estimator, which is a non-parametric method. for a given sample of size $n$, we sort the data in ascending order, $x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)}$, and the value given by the hill estimator is \[ \hat{\gamma}(k) = \frac{1}{k} \sum_{i=1}^{k} \left[ \ln x_{(n-i+1)} - \ln x_{(n-k)} \right], \] where $x_{(n-k)}$ corresponds to the extreme value threshold that will be determined. one way to find the threshold is by (i) estimating $\hat{\gamma}$ for all possible values of $k$, and (ii) plotting $\hat{\gamma}$ against $k$ to find a range of values within which the estimates are stable. in practice, this "stable behavior" between $\hat{\gamma}$ and $k$ is difficult to quantify. for example, fig. [fig:evt:threshold](a) uses djia returns to illustrate the estimated $\hat{\gamma}$ as a function of the sorted djia (negative, positive, and absolute) returns; the values fluctuate strongly and there is no stable range. an alternative approach is to use the ks statistic to measure the agreement between the empirical and fitted tail distributions. the ks statistic quantifies the maximum absolute difference between the two distributions, and the most suitable threshold is associated with the best fit to the tail distribution, that is, with the smallest ks value. figure [fig:evt:threshold](b) shows the ks statistics with respect to the sorted (negative, positive, and absolute) returns; the pronounced low point in each curve allows us to determine the extreme value threshold more easily. for the sake of comparison, we also use the quantiles of 95%, 97.5%, and 99% to define the extremes. definitions based on quantiles are common in the analysis of value-at-risk (var), and previous studies also define the 95% quantile of returns and the 95% quantile of negative returns as extremes and crashes, respectively. by taking into consideration only the times at which extremes occur, we base our prediction of extreme returns on the hazard probability $W(\Delta t | t)$, which measures the probability that, after an extreme return occurring a time $t$ in the past, there is an additional waiting time of at most $\Delta t$ before another extreme return occurs. the hazard probability has been theoretically derived from the distribution of recurrence intervals between extreme events, \[ W(\Delta t | t) = \frac{\int_t^{t+\Delta t} p(\tau)\, {\rm d}\tau}{\int_t^{\infty} p(\tau)\, {\rm d}\tau}, \] where $p(\tau)$ is the probability distribution of the recurrence intervals. once we have the distribution form of $p(\tau)$, the formula for $W(\Delta t | t)$ can be derived from eq. ([eq:wq]). although the recurrence intervals of a poisson process are exponentially distributed, which generates a constant hazard probability for given $\Delta t$, financial processes always exhibit such non-poissonian characteristics as long-term dependence and multifractality in volatilities, medium-term dependence
(e.g., momentum and contrarian behaviors), and multiscaling behaviors in returns. as a result the recurrence intervals are no longer exponentially distributed, and a closed distribution form for the recurrence intervals cannot be derived directly. the non-poissonian features also result in a controversial situation in the empirical analysis of the distribution formula of recurrence intervals: the reported distributions range from a power-law distribution with an exponential cutoff to a stretched exponential distribution, and from a $q$-exponential distribution to a $q$-weibull distribution. here we employ three common functions to fit the recurrence interval distributions. the three formulas are the stretched exponential distribution, \[ p(\tau) = a \exp\left[ -(b\tau)^{\mu} \right], \] the $q$-exponential distribution, \[ p(\tau) = (2-q)\, b \left[ 1 + (q-1)\, b\, \tau \right]^{-\frac{1}{q-1}}, \] and the weibull distribution, \[ p(\tau) = \frac{\alpha}{\beta} \left( \frac{\tau}{\beta} \right)^{\alpha-1} \exp\left[ -\left( \frac{\tau}{\beta} \right)^{\alpha} \right]. \] by putting the three probability distributions of eqs. ([eq:pdf:sexp])-([eq:pdf:wbl]) into eq. ([eq:wq]), we obtain the hazard probability for the stretched exponential distribution, \[ W(\Delta t | t) = \frac{ \gamma_l\!\left( \frac{1}{\mu}, \left[ b (t+\Delta t) \right]^{\mu} \right) - \gamma_l\!\left( \frac{1}{\mu}, (bt)^{\mu} \right) }{ \Gamma_u\!\left( \frac{1}{\mu}, (bt)^{\mu} \right) }, \] the hazard probability for the $q$-exponential distribution, \[ W(\Delta t | t) = 1 - \left[ 1 + \frac{(q-1)\, b\, \Delta t}{1 + (q-1)\, b\, t} \right]^{1-\frac{1}{q-1}}, \] and the hazard probability for the weibull distribution, \[ W(\Delta t | t) = 1 - \exp\left[ \left( \frac{t}{\beta} \right)^{\alpha} - \left( \frac{t+\Delta t}{\beta} \right)^{\alpha} \right], \] where $\gamma_l$ and $\Gamma_u$ are the lower and upper incomplete gamma functions. for fixed $\Delta t$, all three hazard probabilities decrease as $t$ increases, which explains the clustering of extremes in financial returns and volatilities. to use the hazard probability to predict the extremes we must set a hazard threshold that triggers the early warning indicator of an approaching extreme event. if the hazard probability is greater than the hazard threshold, an alarm that an extreme return will occur during the next time interval $\Delta t$ is activated. the hazard threshold is not an arbitrary given value; rather, depending on the risk preferences of investors, it is optimized to balance false alarms against undetected events. the hazard probability thus becomes a binary extreme forecast that equals one when $W(\Delta t | t)$ exceeds the hazard threshold and equals zero otherwise. when comparing the forecasted extremes with the actual events we see (i) correct predictions of an extreme return occurring, (ii) correct predictions of a non-extreme return occurring, (iii) missed events, and (iv) false alarms.
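before turning to how these forecasts are scored, the short sketch below evaluates the $q$-exponential hazard probability numerically by integrating the density directly and turning the result into a binary alarm. the parameter values, the one-day horizon, and the 0.05 alarm threshold are illustrative assumptions, not fitted values.

```python
import numpy as np
from scipy.integrate import quad

q, b = 1.4, 0.05            # q-exponential parameters (assumed, not fitted)
dt = 1.0                    # prediction horizon of one day (assumed)

def q_exponential(tau):
    """q-exponential density, same parameterization as in the text above."""
    return (2.0 - q) * b * (1.0 + (q - 1.0) * b * tau) ** (-1.0 / (q - 1.0))

def hazard(t, dt):
    """W(dt | t): probability of another extreme within dt, given elapsed time t."""
    num, _ = quad(q_exponential, t, t + dt)
    den, _ = quad(q_exponential, t, np.inf)
    return num / den

threshold = 0.05            # placeholder; the paper optimizes this by maximizing usefulness
for t in (1, 5, 20, 100):
    w = hazard(t, dt)
    print(f"t = {t:3d} days:  W = {w:.3f}  alarm = {w > threshold}")
```

as expected from the formulas above, the printed hazard decreases with the elapsed time, which is the clustering signature discussed in the text.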
by counting how many times each outcome occurs we can compute a range of evaluation measurements, including the correct prediction rate, the false alarm rate, and the accuracy. our primary interest here is the correct prediction rate and the false alarm rate, which are defined as \[ r_c = \frac{n_{11}}{n_{11} + n_{01}}, \qquad r_f = \frac{n_{10}}{n_{10} + n_{00}}, \] where $n_{11}$ is the number of extreme returns that are correctly predicted, $n_{00}$ the number of non-extreme returns that are correctly predicted, $n_{01}$ the number of missed events, and $n_{10}$ the number of false alarms. following earlier work, we use the hanssen-kuiper skill score (kss) to assess the validity of extreme forecasts. the kss is the difference between the correct prediction rate and the false alarm rate, ${\rm kss} = r_c - r_f$, and it therefore encompasses both missed-occurrence errors and false-alarm errors; decreasing either error increases the value of the kss. our goal is to find a balanced signal for investors who prefer to avoid either type 1 or type 2 errors, and to take into account whether they use or discard the predictive signals. we therefore define a loss function for a given hazard probability threshold used to issue extreme forecasts, \[ L = \theta\, r_m + (1 - \theta)\, r_f, \] where $r_m$ is the ratio of missed events (type 1 errors), $r_f$ is the ratio of false alarms (type 2 errors), and the parameter $\theta$ is the investor preference for avoiding type 1 versus type 2 errors. we further define the usefulness of extreme forecasts as \[ U = L_{\rm ignore} - L, \] where $L_{\rm ignore}$ is the loss faced by investors when they ignore the predictive signals; $U$ measures the extent to which the extreme forecasting model offers better performance than no model at all. extreme forecasts are useful when $U > 0$, which means that losses using the forecasts are lower than when the forecasts are ignored. the usefulness definition here ignores any influence from the data imbalance, i.e., the fact that non-extreme events occur much more frequently than extreme events. given the hazard probability, we need a hazard threshold that maximizes the usefulness. an alternative approach optimizes the threshold by minimizing the noise-to-signal ratio; however, when we optimize the usefulness there is an explicit marginal rate of substitution between type 1 and type 2 errors, whereas this marginal rate is not controlled when optimizing the noise-to-signal ratio, which can result in an unacceptable level of type 1 and type 2 errors. by introducing the stretched exponential function of eq. ([eq:pdf:sexp]) into the normalization condition of the probability density function, we obtain \[ a = \frac{b\,\mu}{\Gamma(1/\mu)}, \] where $\Gamma(\cdot)$ is the gamma function. the average recurrence interval $\bar{\tau}$ and the percentage of extremes are in one-to-one correspondence, $\bar{\tau} = 1/(1-Q)$, where $Q$ is the quantile used to define the extreme values. for this relation to be valid the extremes must be positive; when extremes are negative, we convert them into positives by multiplying by $-1$. previous work finds that the average recurrence interval is universal, irrespective of the dependence structure of the underlying process.
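a minimal numerical sketch of these scores is given below. it uses synthetic events and hazard values, equal weighting of the two error types (theta = 0.5), and, for the benchmark loss of ignoring the signals, the common convention L_ignore = min(theta, 1 - theta); all of these choices are assumptions of the sketch rather than values taken from the paper.

```python
import numpy as np

def evaluate(events, hazard, h_threshold, theta=0.5):
    """correct-prediction rate, false-alarm rate, KSS, and usefulness for one threshold."""
    alarms = hazard > h_threshold
    n11 = np.sum(alarms & events)        # correctly predicted extremes
    n00 = np.sum(~alarms & ~events)      # correctly predicted quiet days
    n01 = np.sum(~alarms & events)       # missed events (type 1 errors)
    n10 = np.sum(alarms & ~events)       # false alarms (type 2 errors)
    r_c = n11 / (n11 + n01)
    r_f = n10 / (n10 + n00)
    loss = theta * (n01 / (n11 + n01)) + (1.0 - theta) * r_f
    usefulness = min(theta, 1.0 - theta) - loss      # assumed benchmark convention
    return r_c, r_f, r_c - r_f, usefulness

# toy data: rare events and a hazard probability loosely correlated with them
rng = np.random.default_rng(2)
events = rng.uniform(size=5000) < 0.02
hazard = np.clip(0.5 * events + 0.6 * rng.uniform(size=5000), 0.0, 1.0)

# optimize the hazard threshold on a grid by maximizing usefulness
grid = np.linspace(0.01, 0.99, 99)
best = max(grid, key=lambda h: evaluate(events, hazard, h)[3])
r_c, r_f, kss, u = evaluate(events, hazard, best)
print(f"threshold = {best:.2f}  r_c = {r_c:.2f}  r_f = {r_f:.2f}  kss = {kss:.2f}  U = {u:.3f}")
```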
from the definition of expectation, the average recurrence interval can also be written as $\bar{\tau} = \int_0^{\infty} \tau\, p(\tau)\, {\rm d}\tau$. for the stretched exponential distribution we have \[ \bar{\tau} = \frac{\Gamma(2/\mu)}{b\, \Gamma(1/\mu)}. \] by solving eqs. ([eq:se:1]) and ([eq:se:2]) and using the mean recurrence interval $\bar{\tau}$, the parameters $a$ and $b$ of the stretched exponential distribution become \[ b = \frac{\Gamma(2/\mu)}{\Gamma(1/\mu)\,\bar{\tau}}, \qquad a = \frac{b\,\mu}{\Gamma(1/\mu)}, \] and this strategy reduces the number of estimated parameters from three to one. the mean of the $q$-exponential distribution can be written down and used in the same way. (color online.) figure [fig:hp:ri:w]: plots of the hazard probability $W(\Delta t|t)$ for fixed $\Delta t$. the hazard events correspond to the extreme negative returns obtained from the 99% quantile threshold; the analysis is performed in the period from 1885 to 1928. figure [fig:hp:ri:w] shows a plot of the hazard probability as a function of the elapsed time $t$ for the extreme negative returns obtained from the 99% quantile threshold at fixed $\Delta t$. it shows the empirical hazard probability estimated from the real data (filled markers) and the analytical hazard probabilities obtained from the theoretical equations (solid curves). note that although the theoretical lines do not all overlap on the same curve, they all decrease with respect to the elapsed time, as does the empirical hazard probability. the statistics are poor and the empirical hazard probability oscillates strongly, but for a given value of $t$ the analytical hazard probability values are comparable to the empirical ones, suggesting that the analytical hazard probabilities agree with the empirical hazard probability. these decreasing patterns in the hazard probability are also seen in energy futures, spot indices and index futures, and stock returns, indicating that the probability of observing a follow-up extreme return decreases as time elapses. this reveals the existence of extreme return clustering and a potential dependence structure in the triggering processes of the extreme returns, which supports the argument that "many extreme price movements are triggered by previous extreme movements" and that "larger extremes occur more often after big events or frequent events than after tranquil periods". this is caused by the positive herding behavior of investors and the endogenous growth of instability in financial markets. because the results are all similar, we do not show the hazard probabilities for different thresholds and other types of return. using hazard probabilities and an optimized hazard threshold, we build a model to predict the occurrence of positive, negative, and absolute extreme returns in financial markets within a given time period. the hazard probabilities are specified by the distribution parameters of the recurrence intervals between extreme events in the return history. the indicators of incoming extreme events are generated when the hazard probability exceeds the optimized hazard threshold, which maximizes the usefulness of these extreme forecasts. we perform out-of-sample tests to evaluate the predictive power of this extreme-return-prediction model as follows. 1. we mark extreme events according to a specified extreme value or quantile threshold during a given in-sample calibrating period. 2. fitting the recurrence intervals between the marked extreme events, we estimate the stretched exponential, $q$-exponential, or weibull distribution parameters. 3.
using the estimated distribution parameters in the in-sample calibrating period, we determine the hazard probability and find the optimized hazard threshold by maximizing the usefulness. 4. using the distribution parameters and the optimized hazard threshold from the in-sample calibrating period, we forecast the indicators of incoming extreme events within a time period $\Delta t$ and evaluate the forecasting signals. to find the optimized hazard threshold, we vary the hazard threshold in $[0, 1]$ to obtain all possible pairs of $(r_f, r_c)$. plotting $r_c$ with respect to $r_f$, we obtain the well-known "receiver operating characteristic" (roc) curve, and using the roc curve we measure the validity of the predictive power of early warning models. figure [fig:roc:insample] shows the roc curves of extreme negative returns for in-sample tests and out-of-sample tests. the in-sample (out-of-sample) period is from 1885 to 1928 (from 1929 to 1932), and the diagonal line corresponds to a random guess. note that the roc curves of the three fitting distributions overlap exactly on the same curve for both in-sample and out-of-sample tests, suggesting that the results do not depend on the distribution formula used to fit the recurrence intervals. all roc curves are above the random guess line, indicating that both in-sample and out-of-sample tests have better predictive power than a random guess. note also that the out-of-sample curves are lower than the in-sample curves, which confirms the observation that out-of-sample predictions are usually worse than in-sample tests. because they all exhibit very similar patterns, we do not show the roc curves obtained from different thresholds and different types of extreme returns. (color online.) figure [fig:roc:insample]: roc curves of in-sample tests and out-of-sample predictions. the extreme returns correspond to the negative returns at the 99% quantile. the in-sample period covers 1885 to 1928; the out-of-sample period spans 1929 to 1932. because all three fitting distributions give the same roc curve, we evaluate the in-sample and out-of-sample performance of the extreme return prediction model only for the $q$-exponential distribution. we find the optimized hazard threshold, which maximizes the usefulness in the in-sample calibrating period, and estimate such performance measurements as the rate of correct predictions, the false alarm rate, the usefulness, and the kss score during the in-sample and out-of-sample periods. the results are shown in table [tb:pe:performance]. first, we observe that all usefulness values are positive except for the positive returns in the 97.5% quantile in panel a and in the 95% quantile in panel b, indicating that when missing-event and false-alarm errors are weighted equally our model provides more accurate results than the benchmark of ignoring the forecasting signals. second, excluding the above two exceptions, all kss scores are greater than 0, the value corresponding to random guessing, indicating that the rate of correct predictions exceeds that of false alarms. third, note that in most of the results the usefulness and kss scores of the in-sample performance are larger than those of the out-of-sample performance, which is consistent with the observation that out-of-sample predictions are inferior to in-sample tests. we do find one exception in panel e and nine exceptions in panel f in which the out-of-sample predictions surpass the in-sample tests, and this
indicates the predictive power of the tested model. the results also imply that the more data available for the in-sample tests, the better the performance of the out-of-sample predictions, and this is further supported by the predictions during the two recent turbulent periods, which were better than the predictions during other periods. fourth, note that the predictions of the extreme returns in the 99% quantile produce a lower false alarm rate and a higher correct prediction rate than those in the 95% and 97.5% quantiles, and this produces high usefulness and kss scores. the results imply that extreme events defined at a high quantile can be predicted more accurately. table [tb:ri:statistics] shows the statistics of the ljung-box q tests, which exhibit a decreasing pattern as the quantile thresholds increase in all panels, indicating that increasing the quantile threshold decreases the memory strength in the extremes. we neglect the potential dependence structure in the extreme series in our model because the larger the quantile threshold, the weaker the memory in the extremes and the better the forecasting performance. compared to the model based on hawkes processes, our model has the advantages of fewer model parameters, easier estimation methods, and a faster prediction implementation. table [tb:pe:performance]: in-sample (in) and out-of-sample (out) values of the correct prediction rate, the false alarm rate, the usefulness, and the kss score for negative, positive, and absolute extreme returns at the different extreme thresholds, reported in panels a-f for the successive calibration and prediction periods. we have performed a recurrence interval analysis of financial extremes in the djia index during the period from 1885 to 2015. we determine the extreme returns according to a newly proposed extreme-identifying approach, as well as according to quantile thresholds.
with the extreme-identifying approach we are able to locate the optimal extreme threshold, associated with the minimum ks statistic of the tail distributions. we find that the recurrence intervals, the periods of time between successive extremes for the different types of returns and thresholds, follow a $q$-exponential distribution. this allows us to analytically derive the hazard probability that the next extreme event will occur within a time interval $\Delta t$, given the time $t$ elapsed since the last extreme event. the analytical value agrees well with the empirical hazard probability estimated from the real data. using the hazard probability, we develop an extreme-return-prediction model for forecasting imminent financial extreme events: when the hazard probability is greater than the hazard threshold, the model warns that an extreme event is about to occur, and the hazard threshold is obtained by maximizing the usefulness of the extreme forecasts. both in-sample tests and out-of-sample predictions reveal that the signals generated by our prediction model are statistically better than the benchmark of neglecting these signals, and that the distribution formula used to fit the recurrence intervals has no influence on the final outcome of our early warning model. although in most cases the predictive performance of the in-sample tests is better than that of the out-of-sample predictions, expanding the in-sample calibrating period can yield out-of-sample predictions that are better than the in-sample tests. in addition, increasing the extreme-extracting threshold can improve the predictive power of our model in both in-sample tests and out-of-sample predictions. our results may shed new light on the occurrence of extremes in financial markets and on the application of recurrence interval analysis to forecasting financial extremes. z.-q.j. and colleagues acknowledge support from the national natural science foundation of china (71131007 and 71532009), the shanghai "chen guang" project (2012cg34), the program for changjiang scholars and innovative research team in university (irt1028), the china scholarship council (201406745014), and the fundamental research funds for the central universities. g.-j.w. and c.x. acknowledge support from the national natural science foundation of china (71501066, 71373072, and 71521061). a.c. acknowledges support from the brazilian agencies fapeal (ppp 20110902-011-0025-0069/60030-733/2011) and cnpq (pde 20736012014-6). this work was also supported by nsf (grants cmmi 1125290, phy 1505000, and che-1213217) and by doe contract de-ac07-05id14517. babecky, j., havranek, t., mateju, j., rusnak, m., smidkova, k., vasicek, b., 2014. banking, debt, and currency crises in developed countries: stylized facts and early warning indicators. j. financ. stab. 15, 1-17. jiang, z.-q., canabarro, a. a., podobnik, b., stanley, h. e., zhou, w.-x., 2016. early warning of large volatilities based on recurrence interval analysis in chinese stock markets. to appear in quant. finance. jiang, z.-q., zhou, w.-x., sornette, d., woodard, r., bastiaensen, k., cauwels, p., 2010. bubble diagnosis and prediction of the 2005-2007 and 2008-2009 chinese stock market bubbles. 74, 149-162. petersen, a. m., wang, f.-z., havlin, s., stanley, h. e.
, 2010. market dynamics immediately before and after financial shocks: quantifying the omori, productivity, and bath laws. phys. rev. e 82, 036114. son, i. s., oh, k. j., kim, t. y., kim, d. h., 2009. an early warning system for global institutional investors at emerging stock markets based on machine learning forecasting. expert syst. appl. 36, 4951-4957. sornette, d., demos, g., zhang, q., cauwels, p., filimonov, v., zhang, q.-z., real-time prediction and post-mortem analysis of the shanghai 2015 stock market bubble and crash. j. of invest. 4, 77-95. | being able to predict the occurrence of extreme returns is important in financial risk management. using the distribution of recurrence intervals, the waiting time between consecutive extremes, we show that these extreme returns are predictable on the short term. examining a range of different types of returns and thresholds, we find that recurrence intervals follow a $q$-exponential distribution, which we then use to theoretically derive the hazard probability. introduction ------------ predicting such extreme financial events as market crashes, bank failures, and currency crises is of great importance to investors and policy makers because they destabilize the financial system and can greatly shrink asset value. much research has been carried out in an attempt to detect the underlying vulnerabilities and the common precursors to financial extremes. a number of different models have been developed to predict the occurrence of financial distress, including those using probability models, signal approaches, and intelligence approaches. a faster-than-exponential increase in price accompanied by accelerating price oscillations indicates the presence of bubbles. the behavior of these bubbles can be characterized using the log-periodic power-law singularity (lppls) model, which is capable of accurately forecasting a bubble's tipping point. recent research on the occurrence of financial extremes and on the market dynamics around financial crashes has enabled us to better forecast emerging financial crises. we can understand the occurrence pattern of extremes by determining the distribution of waiting times between consecutive financial extremes (the "recurrence intervals") and charting the memory behavior within the occurring extremes. an early warning model based on this waiting-time distribution has been built to predict the probability that extremes will occur within a given time period. following a financial crisis the financial system gradually transitions back to stasis; this relaxation behavior following a financial market crash is similar to the aftershocks following an earthquake. a possible theoretical explanation for bursts of speculative bubbles is the positive herding behavior of traders, which causes local self-excited crashes. this is in accordance with the phenomenon that extremes cluster and are interdependent: approximately 76-85% of occurring extremes have been shown to be triggered by other extremes, and an early warning model has been developed that treats financial crashes as earthquakes and computes the probability that an extreme event will occur within a certain time period. here we extend the probabilistic framework for extreme returns presented in earlier work to predict extremes by using the conditional probability of a future extreme event within a fixed time frame, in which type 1 and type 2 errors are balanced for the current market state. the contributions of our work are fourfold.
* we identify extremes by locating the threshold at the minimum ks value between the empirical and fitted distributions of the extreme values. * we classify the returns as either extreme or non-extreme by means of the extreme threshold, and we assume that the extremes are independent. this simplifies the modeling and reduces the computational complexity of the parameter estimation, while still providing adequate performance in out-of-sample prediction. * we define a hazard probability that depends on the distribution formula of the recurrence intervals between extremes, which translates the problem into finding a suitable distribution form for the recurrence intervals. unlike the hawkes point process, our modeling framework is easy to implement. * instead of using a predefined threshold on the hazard probability, we predict extremes when the hazard probability exceeds an optimized hazard threshold, obtained by maximizing a usefulness function that takes into account an investor's preference for avoiding either type 1 or type 2 errors. we organize the paper as follows. in section 2 we present a brief review of recurrence interval analysis and early warning models. in section 3 we describe the dataset. in section 4 we describe the model and methods. in section 5 we present the results of our recurrence interval analysis for different subperiods. in section 6 we document and discuss the performance of our out-of-sample predictions. in section 7 we present our conclusions. |
we begin with the basic background on the ionospheric operator , followed by that of the ionospheric correction operator .these sections expand upon the mathematical framework created and briefly outlined in morales , et .al . , .the ionospheric operator * a*( ) is the operator which takes an unperturbed map of the sky and maps it to a perturbed map which has been distorted by the ionosphere , put simply , * a*( ) is a generalized coordinate change from to .( the order of the arguments of * a*( ) here , and with other operators later , is indicative of the direction of change . ) in the regime of mwa , it is a very good approximation that the mapping of angles is approximately linear with only a small deviation , the perturbed intensity at is the summed contribution from the intensities at all where this relation holds ; ie , where represents the dirac delta function .for example , if ( only ) and are mapped to according to equation [ delthetadef ] , then . in the limitthat is a continuous variable , this discrete sum for is altered to an integral , where ( notice that we ve used that the magnitude of the delta - function s argument s gradient is approximately 1 here ) . from this equationit is evident that in the limit of continuous , the ionospheric operator equation becomes so far our calculations have been confined to real space , using the variables \ { } .however , as is common for antenna arrays in radio astronomy , the output of the correlator for mwa will actually be the fourier transform of the real space sky intensity , .in addition , the final power spectrum of the sky will also be measured in this fourier transfrom space ( also known as _ visibility space _ , or _ -space _ ) .therefore , correcting for the ionosphere in real space requires inverse fourier transforming to real space , making the correction , and then fourier transforming back to visibility space .the problem with this is that fourier transforming is computationally expensive , especially for a system operating in real time , such as the mwa s _ real time system _ . as such , correcting for the ionosphere in the -plane would greatly reduce computation .we now study the nature of this -plane correction .let represent the fourier transform of .define this fourier transform by and the corresponding inverse fourier transform by now define * a* to be the ionospheric operator in the ; that is , the operator that maps the unperturbed map to the perturbed map , one way to obtain from is to inverse fourier transform to using * f* , apply the ionospheric operator * a* in real space to obtain , and then fourier transform to using * f* . in all , comparison of this to the definition of * a* shows that this expression may be thought of as simply a basis change of * a * from \{ } to \{}. the three operators on the right here have all been previously given .plugging in these predetermined expressions ( equations [ aop ] , [ fourierdef ] , and [ inversefourierdef ] ) and simplifying as much as possible , we obtain notice that the integral over has been evaluated by using the delta function from the expression for ( see equations [ a2 ] and [ aop ] ; here again we use that the magnitude of the delta - function s argument s gradient is approximately 1 ) . 
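as a numerical aside before the properties of this expression are discussed, the sketch below walks through the real-space route just described: inverse-fft the visibilities, undo a known angular shift, and fft back. it is a 1-d toy with a gentle sinusoidal shift and a single unit source, all of which are illustrative assumptions rather than the mwa setup.

```python
import numpy as np

n = 256
pix = 4.0 / 60 * np.pi / 180.0                       # 4-arcmin pixels, in radians
theta = (np.arange(n) - n // 2) * pix                # 1-d sky coordinate

sky = np.zeros(n)
sky[150] = 1.0                                       # single unit source

# ionospheric operator: intensity at theta is observed at theta + delta_theta(theta)
delta = 1.2 * pix * np.sin(2.0 * np.pi * theta / (200.0 * pix))   # gentle sinusoidal shift (assumed)
dest = np.clip(np.round((theta + delta) / pix).astype(int) + n // 2, 0, n - 1)
perturbed = np.zeros(n)
np.add.at(perturbed, dest, sky)                      # discrete version of the delta-function mapping

vis = np.fft.fft(perturbed)                          # what the correlator delivers (visibilities)

# real-space correction: two ffts bracket a simple re-gridding
dirty = np.fft.ifft(vis).real
corrected = dirty[dest]                              # gather each pixel back from where it landed
vis_corrected = np.fft.fft(corrected)

print("source restored to pixel", int(np.argmax(corrected)))   # back at pixel 150 for this gentle shift
```

the two fft calls here are exactly the computational cost that a correction applied directly in the visibility plane is meant to avoid.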
one interesting characteristic of this expression is the non - local nature , by which finding the value of the perturbed intensity at one particular requires knowing the value of the pure intensity at other .this property will also appear in the ionospheric correction operator , found below .the previous section corresponds to the operator which distorts the pure data into the perturbed data , but the reverse process is what actually interests us we want to correct for the effect of the ionosphere to obtain the pure data from the perturbed data .define * a* to be this ionospheric correction operator which corrects for the influence of the ionosphere by mapping the perturbed map of the sky back to the unperturbed map , the mwa will not run during periods of scintillation ( at which times multiple values of are perturbed to the same ) , but will instead run during times when it is a very good approximation that the mapping from to is one - to - one and approximately linear with only a small correction , the derivation of an expression for the ionospheric correction operator in the -plane follows analogously to the derivation of , so i ll merely quote the result : this is the most general expression for obtaining from ; proceeding further requires knowledge of the ionospheric perturbation . numerically solving for using this equationis an incredibly daunting task for an arbitrary choice of , as it involves a double integral over all space for every value of . therefore , unless we find a choice of which offers an analytic solution for , the correction for the ionosphere in the -plane will actually be more computationally expensive than the two fourier transforms necessary to correct for the ionosphere in real space .unfortunately , choices of which lend themselves to analytic solutions are hard to come by .there are a couple , however , and they will be discussed in the following sections .the above equation for contains an exponential with in the exponent . by expanding this exponential, we obtain a form for which may be solved analytically for a couple of choices of .to be explicit , expanding the exponential in , leads to ( interchanging an infinite sum and an integral requires that the sum be uniformly convergent , which will be true for all choices of that we choose . ) with the above expansion , may be solved analytically if we choose here , the are chosen to be purely real , but the are allowed to assume complex values .physically , this choice of corresponds to modeling the integral along the line of sight of the ionosphere s electron density ( is the distance along the line of sight ) as a sum over sinusoidal modes , the reflection by the ionosphere is then related to this by where represents the two - dimensional gradient with respect to .( the actual shift , of course , is a hermitian observable , so only the real part of this is included in . ) notice that this choice ultimately stems from our decision to model density fluctuations from the ionosphere as sinusoidal modes .actually , any orthonormal basis would have sufficed here . once again, this particular choice was made because it allows an analytic solution for the unperturbed intensity .( an alternate choice which likewise offers an analytic solution will be briefly discussed later on in section [ choice2 ] . 
) with this choice of , the intensity becomes \tilde{i}(\vec{u ' } ) .\end{aligned}\ ] ] the math leading to a solution for may be found in the appendix , section [ math1 ] .very briefly , the integral is solved by conveniently redefining the ionospheric modes ( as given below ) , performing a multinomial expansion on the term raised to the power , recognizing that the final product of this expansion leaves the integral in the form a delta function , and then using that delta function to evaluate the integral over .the final solution is where and the summation over all ( with ) is a restricted sum such that and .this equation might look a little daunting , but it may be thought of simply as the addition of many delta functions of varying amplitudes , with those further from the point in question tending to contribute less to the sum .( this is , in fact , similar to what one sees with intermodulation distortion ) .notice that although there are modes distorting the sky , the sums and products above involve .thus , there appear to be _ effective modes _ distorting the sky .this factor of 2 comes from the constraint that be real , as may be more easily seen by following the math provided in the appendix , section [ math1 ] .another important feature of this solution is that it is inherently non - local , with the corrected intensity at a given depending on the values of the perturbed intensity at the appropriate neighboring points .this non - locality , which is also evident in the most general form for ( equation [ iupure ] ) , will create edge effects because the -plane is finite in all practical applications , as will be more easily seen and understood later in section [ trunc10 ] .it should also be pointed out that this equation for is the result of a double expansion : the taylor - series expansion of the exponential containing ( see equation [ n1 ] ) , which is now evident in the summation over , and the expansion of the ionospheric perturbation itself into sinusoidal modes ( see equation [ dtsine ] ) , which is now evident in the restricted sum over . throughout this paperwe will assume that this second expansion is perfect " ; that is , we will assume that we are able to perfectly model the ionosphere with the modes that we assume are provided for us .we will instead study the errors created by truncating the first expansion .as previously stated , the main goal of the -plane correction is to correct for the ionosphere in a less computationally intensive manner than that required for the real space correction . our final expression for , however , contains an infinite sum over .clearly , making the -plane correction computationally feasible will require truncating this sum after a finite number of terms .the next section will explore the effect of such a truncation . but even truncating this sum over can not guarantee the computational feasibility of the -plane correction because of the second expansion over sinusodial modes and its resulting restricted summation . more specifically , the number of terms in the restricted sum over all possible combinations of such that may be calculated through the following trick : if represents the total number of effective modes , then consider the problem of arranging balls and partitions in a straight line . 
herethe -th partition marks the stopping point where ends and begins .for example , if for a particular arrangement 7 balls lie between the 4th and 5th partitions , then = 7 for that arrangement .the total number of ways to arrange these objects is . of course ,exchanging the position of any two of the same object ( ball or partition ) does not lead to a different arrangement , so the total number of terms in the restricted sum such that is given by as an example , suppose that we wish to calculate this sum for 10 modes ( effective modes ) to the 40th order in . using the above formula , we calculate that such a sum has approximately 70 trillion terms . from thiswe see that the -plane correction is only computationally feasible if the number of modes necessary to model the sky _ and _ the number of orders necessary in the expansion of the exponential are relatively small .as stated in the previous section , the -plane correction is only computationally feasible if we truncate the infinite sum over . at this pointwe pause to study the results of truncating this sum after a finite order of correction , . in order to study the qualitative effects of truncating the sum over , we created a simple sky and perturbed it with a simple mode , and then used numerical code to correct for this ( known ) perturbation in the -plane by using the mathematical formula found above in equation [ finaliu ] .the initial , unperturbed sky is shown in the top panel of figure [ pureandbad ] .we will refer to this sky throughout this paper as the _ simple sky_. it contains a pixel array with a spacing of 4 arcmin between pixels , which is the approximate resolution we expect for the final mwa array ( the axes in this and all the following real space figures are labeled in radians ) .this pure sky is a single source sky : the value of the intensity at all pixels is set to 0.0 except at one pixel where the value 1.0 .( the important qualitative results found below would not be altered by including side lobes , so we will leave them out to keep things simpler . ) figure [ uv ] shows the real part of the intensity in the -plane for this pure sky plotted in the third direction ( which is determined by the color scale ) and demonstrates that the uncorrupted , -sky is a simple sinusoid ( as one would expect for the fourier - transform of a delta function ) .in contrast to this plot , the color scale for all the following -space plots is representative of the magnitude of the intensity in the -plane ( although is complex , the important -plane results found below do not require phase information to understand graphically ) .the important feature to take away from this plot is that the absolute magnitude of the intensity is constant and of the order at all points in the -plane ( although we ve only plotted the real part here ) .we then perturb this simple sky with a rather strong mode , as shown in the bottom panel of figure [ pureandbad ] .this distorting mode has radians squared and with a magnitude of 378.0 inverse radians and oriented in the direction as measured from the axis .( these values were chosen for the sole reason that they produce a strong shift of a few pixels ( tens of arcminutes ) , and thus accentuate the qualitative features of the ionospheric correction as seen below .a more realistic distortion will be discussed later in section [ realistic ] . 
)notice that the intensity is still 1.0 at exactly one pixel and 0.0 at every other pixel , but now the location of this pixel has slightly shifted in the direction .( it should be noted that this ionospheric shift was applied to the continuous sky with a delta function at one point , and not to its pixelized represention shown in the top panel of figure [ pureandbad ] ) .although we have not included the plot , the magnitude of the intensity in the perturbed -plane remains of the order of , its value in the pure visibility sky .we now begin correcting for this simple one - mode distortion using various values of .we ( quite naturally ) begin with the first order correction , ( leads to no correction , see equation [ finaliu ] ) . after correcting to first order in the -plane , we inverse fourier transform back to the real space sky shown in the top panel of figure [ order1 ] in order to determine the effect that this first order correction has had on the real sky ( in particular, we would like to know whether it has successfully shifted the single source back to its unperturbed location ) . as it turns out, the correction to one order has not shifted the star from its perturbed location .the cross - like pattern of the star is somewhat interesting , but what is most important about this figure is that the maximum in the intensity has now doubled from 1.0 to 2.0 .a potential clue to this behavior is found by studying the visibility space sky corrected to first order , as shown in the bottom panel of figure [ order1 ] . from this figurewe see that the first order correction in the -plane has created an increase of an order of magnitude in the absolute value of the intensity at those points in the -plane furthest from the origin .another important feature of this figure ( although , as it turns out , it is not the cause of the increase in the real space intensity ) is the ring around the outside edge of the figure .this ring is caused by the previously mentioned fact that the correction in the -plane is non - local ( see equation [ finaliu ] ) , combined with the finite nature of our numerical -plane .more specifically , points near the edge of our -plane may not obtain the full correction in visibility space , because doing so requires pulling values of the intensity that are off the edge of the grid .therefore , values near the edge are never fully corrected .we will later see that the result of this is a small spreading of the initial source ( ie , a loss of precision ) , in real space .we now continue on to the second order correction ( ) .after inverse fourier transforming , we obtain the real space sky shown in figure [ order2 ] ( top panel ) . from this figurewe see that the maximum in the intensity has increased even more , and is now 12 times its unperturbed and uncorrected value .the visibility sky after two orders of correction , as shown in the bottom panel of figure [ order2 ] , has a maximum in the intensity that is now 100 times the value of the pure -sky . 
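the under-correction behavior reported above is easy to reproduce in a reduced setting. the sketch below uses a 1-d sky with a single source and a constant, known shift, so that the full visibility-plane correction is just multiplication by exp(+2*pi*i*u*delta); replacing that factor by its taylor series truncated at order n_max mimics the truncated correction, and the maximum sky intensity first grows with n_max before settling back toward unity. the constant shift and all numerical values are illustrative assumptions.

```python
import math
import numpy as np

n = 256
pix = 4.0 / 60 * np.pi / 180.0                       # 4-arcmin pixels (radians)
delta = 3.0 * pix                                    # constant ionospheric shift (assumed)
u = np.fft.fftfreq(n, d=pix)                         # visibility-plane coordinate

sky = np.zeros(n)
sky[100] = 1.0
vis_perturbed = np.fft.fft(sky) * np.exp(-2j * np.pi * u * delta)   # shifted source

x = 2.0 * np.pi * u * delta                          # phase that the correction must undo
for n_max in (1, 2, 5, 10, 15, 20, 30):
    series = sum((1j * x) ** k / math.factorial(k) for k in range(n_max + 1))
    corrected = np.fft.ifft(vis_perturbed * series).real
    print(f"n_max = {n_max:2d}:  max intensity = {np.abs(corrected).max():10.3f}")
```

with this pixel scale and shift the largest phase on the grid is about 3*pi, so the truncated series only starts to behave once n_max exceeds roughly ten; the same competition between the phase magnitude and the truncation order drives the much larger number of orders needed for the stronger mode used in the figures.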
correcting to higher orders, we see that the problem with intensities that are too high not only persists , but continues to get worse .after 5 orders , the intensity in the real space sky ( figure [ order5 ] , top panel ) is 120 times too large , and that in the visibility space sky ( figure [ order5 ] , bottom panel ) is times too large ; after 10 orders , the intensity in the real space sky ( figure [ order10 ] , top panel ) is 600 times too large , and that in the visibility space sky ( figure [ order10 ] , bottom panel ) is times too large .in addition to the increase in the maximum in real space intensity , the source is also beginning to spread out and look less like a single point source . before continuing ,let s pause to develop an intuition of what is happening here .consider a simple exponential , suppose that we want to approximate this exponential using a taylor expansion , it makes sense that a decent approximation to the original exponential should be possible by truncating this sum after a finite number of terms .but how many terms are necessary ?let s first consider the zeroth order approximation , in which only the term is kept , notice that the zeroth order approximation gives us the right magnitude of 1 , but all information about the phase has been lost .if we instead correct to first order , we obtain this is not even close to the correct answer not only does this not contain the correct phase , but the magnitude is now not even close to being correct . correcting to second order gets us even further from the correct answer , this pattern continues for higher orders as well .in fact , the approximation wo nt begin to look decent until .even more relevant to our observations in the previous section , notice that adding subsequent terms to the approximation does not necessarily make the approximation better until . before then, adding subsequent terms actually makes the approximation worse . drawing from these observations, we expect that the trend we ve seen so far is the result of under correcting in the -plane , and that by going to more and more orders we will eventually obtain a decent correction .we now verify that this intuition is correct by studying higher order corrections .if correct , we expect to see the results gradually get better .we now consider the fifteenth order correction , = 15 , as shown in figure [ order15 ] .the real space sky after 15 orders ( top panel ) is now only 500 times too intense ( versus 600 for 10 orders ) , while the visibility space sky ( bottom panel ) is still approximately times too intense . 
from thisit is unclear that things are getting better , but in the very least the intensities are not getting worse .the shape of the source , however , continues to grow further from a point source .moving on to ( figure [ order20 ] ) is a bit more reassuring .the maximum in the real space sky intensity is now approximately only 50 times its actual value ( top panel ) , although the maximum in the visibility space intensity is still four orders too high ( bottom panel ) .the gradual improvement continues when we skip ahead to 25 orders ( figure [ order25 ] ) .the real space intensity is now only 4 or 5 times too large ( top panel ) , while the visibility space intensity has now dropped to 1000 times too large ( bottom panel ) .the shape of the source , however , continues to grow worse .skipping ahead next to 30 orders of correction shows a dramatic improvement .the top panel of figure [ order30 ] shows that the single source now appears to be a single source of the right order of magnitude in intensity . andin addition , the most intense pixel is now located exactly where it was for the pure sky , so the -plane correction has ( at least in terms of location of the max ) successfully corrected for the shift by the ionosphere .the bottom panel of figure [ order30 ] shows that the maximum in the visibility space sky intensity ( which , as always , occurs near the edge of the grid ) is now only an order of magnitude too big .it appears as if we ve gone over the hump , and are now on our way to decent results .the correction to 35 orders shows minor improvement in real space ( figure [ order35 ] , top panel ) . in the visibility space sky( figure [ order35 ] , bottom panel ) , however , the entire grid now has the correct order of magnitude of , including the most extreme pixels .we therefore now see some of the finer patterns caused by the non - locality of the correction and finite nature of the grid ( as mentioned previously in section [ trunc10 ] ) , which had previously been hidden by the extreme intensities at the corners .it should be noted that the most intense pixel in this fully corrected real space sky in the top panel of figure [ order35 ] still lies at the location of the single source in the original , pure sky . in other words ,the -plane correction has successfully shifted the reflected source back to its initial position , at the cost of minor spreading over a few neighboring pixels .this spreading , which can not be eliminated by correcting to still higher orders , is caused by the finite nature of the -plane and thus can not be avoided .it turns out that corrections to higher orders show negligible improvement over the correction to 35 orders , so the resulting skies , identical to those of figure [ order35 ] , are not shown .so far we have only used a particularly simple sky with one star .figure [ 10stars ] shows ( the absolute value of ) the residual between a more complicated pure sky with 10 stars , and the real space sky obtained after perturbing this pure sky and then correcting in the -plane to 40 orders . 
for comparison s sake the perturbation used here was the same ionospheric mode used above to perturb the simple sky of figure [ pureandbad ] ( ie , that used throughout this section ) .this figure shows the kinds of errors we may expect from the -plane correction .the residual from the star on the top right shows a light cross pattern , indicative of a spreading of the source caused by the process of perturbing the star and then applying the -plane correction .places in the plot with two consecutive pixels of high intensity represent stars which were not shifted back to exactly the same pixel that they started at , but rather to a neighboring pixel .recall that the stars are typically initially shifted 3 or 4 pixels by the ionosphere , so the -plane correction is still providing some improvement with these stars .there are two main points to be taken from our analysis so far .first , from section [ numterms ] we learned that either needing too many modes to model the ionospheric correction or too many orders of correction leads to an unreasonable number of numerical calculations .second , from section [ nmax ] we learned that under correcting in the -plane is a huge mistake and a lot worse than not correcting at all .hence the dilemma : choosing too small leads to the destruction of the data , while choosing too large leads to a computationally infeasible problem .it is therefore advantageous to develop a theoretical prediction of how many orders of correction are necessary .as it turns out , the result will lead to a few tricks which make the problem more reasonable .if we correct to only orders , then the magnitude error in our result must be the absolute value of the sum of all the terms we left out ; more specifically , the steps leading to an upper bound on this error may be found in the appendix , section [ math2 ] .the result is where and is the number of modes , as always .[ see equations [ c_qdef ] and [ d_qdef ] for reminders on how effective modes are related to actual modes .this is our final result for a strict upper bound on the error .unfortunately , this formula is not too enlightening . in order to obtain a theoretical estimate for , we must make a few further approximations . as one would expect , the optimal value of , which represents the number of orders necessary to obtain some level of accuracy in the -plane , is dependent upon the level of accuracy desired .to quantify this , define to be the value of the intensity in the -plane after correcting up through , and to be the value of the intensity in the -plane that one would obtain by employing the full correction and not truncating the sum ( ie , ) .( it should be noted that here also assumes a -plane infinite in extent .this will have effects seen later . ) the fractional error in the -plane correction caused by truncating the sum is then in the appendix ( section [ math3 ] ) you will find the steps leading up to a theoretical prediction of the value of at a given in the -plane necessary to obtain a fractional error less than or equal to if given the ionospheric effective modes distorting the sky , .the result is that the optimal value of is estimated by where is the number of modes and is defined as it was above in equation [ vu ] .this formula is a little hard to digest , so some values for given and are provided in table [ nmaxtab ] . 
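Two of the points made above are easy to check numerically: the "worse before better" behaviour of the truncated series, and the rough size of the required truncation order. In the sketch below the terms are taken to scale like V**n/n!, which is how the appendix bounds them; the phase V = 10 and the 1% target error are illustrative choices, and the loop is a stand-in for, not a reproduction of, the elided equation [finalnmax].

```python
import cmath
from math import factorial

# partial sums of exp(i*V) for a largish phase: the magnitude overshoots by
# orders of magnitude before the series settles down
V = 10.0
partial = 0j
for n in range(41):
    partial += (1j * V) ** n / factorial(n)
    if n % 5 == 0:
        print(f"order {n:2d}: |sum| = {abs(partial):9.2e}, "
              f"error = {abs(partial - cmath.exp(1j * V)):9.2e}")

# smallest truncation order whose first omitted term V**(n+1)/(n+1)! is below eps
def orders_needed(V, eps=1e-2):
    n = 0
    while V ** (n + 1) / factorial(n + 1) > eps:
        n += 1
    return n

for v in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(v, orders_needed(v))
```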
for the example sky and perturbation used in section [ nmax ] , the theoretical predictions for the number of orders necessary is shown graphically in the top panel of figure [ nmaxtheory ] .this figure has , although does not change too significantly when varying , as is seen in table [ nmaxtab ] .recall that the single distorting mode is in the direction , which defines the favored direction seen in this figure . as a check of these theoretical predictions , we used matlab to numerically compute the number of orders necessary to obtain the desired fractional error of .the bottom panel of figure [ nmaxtheory ] shows the difference between these computational results and the theoretical predictions shown in the top panel of figure [ nmaxtheory ] .more specifically , it represents the number of orders of correction theoretically predicted minus the number found computationally .this figure suggests that for a bulk of the -plane , the theoretical prediction is quite accurate , predicting the number of orders to within 5 . near the extremes, however , the finite nature of the -plane causes problems ( remember that the theoretical estimate assumed an infinite -plane ) .in fact , the cool - colored pixels near the corners are pixels which never obtained a fractional error of ( the numerical code cutoff after 50 orders ; all points with fractional errors too high at that point were assigned a value of 50 orders ) .the theory predicts that about 35 orders are required to correct at the most extreme points in the -plane , which is what our previous numerical computations found. an important feature of the theoretical predictions shown in figure [ nmaxtheory ] that is characteristic of all skies is that the necessary number of orders of correction varies with , and in particular it increases as increases along the direction of the mode .therefore , points closer to the origin are corrected in less orders than those further away .[ nmaxtab ] the accuracy of the theoretical prediction here is in no small part due to the existence of only one ionospheric mode in our simple sky model .this reason for this is that the above theoretical estimate ( equation [ finalnmax ] ) was derived from an expression for the upperbound on the error ( equation [ upper ] ) which assumes that all the ionospheric modes in the sky are as strong as the strongest mode at and add constructively ( which may be seen in the appendix , section [ math2 ] , near equation [ uu ] ) . as such , the result is not a bad prediction for only one distorting mode , but tends to ( perhaps significantly ) overestimate the necessary number of orders for multiple distorting modes . in short ,the above mentioned theoretical estimate may perhaps be more accurately called a theoretical _ _ over__estimate . 
given the results seen in section [ nmax ] ( more specifically , the terrible consequences of undercorrecting in the -plane ) , this was done intentionally to ensure that our -plane was adequately corrected .still , it may be useful to obtain a more accurate estimate of the number of orders necessary .one such estimate would be a _strongest mode approximation _ , in which we assume that at any given , the only significant contribution comes from the strongest mode at that point .the contributions from the other modes are assumed to be weak and negligible .this approximation ultimately boils down to setting in the final equation determining from the previous section ( equation [ finalnmax ] ) .this approximation may provide a more accurate estimate of , but it also runs a high risk of underestimating the correct number of orders , which should be avoided if possible. an alternate approximation would be a _significant modes approximation _ , in which only modes at a given with strengths within a certain critical fraction of the strongest mode s are included in the value for used in equation [ finalnmax ] . with a closer study of perturbations from more realistic ionospheric modes , it may be possible to set this critical fraction in such a way as to fairly accurately predict the number of orders necessary .the above analysis suggests two methods for making the -plane correction less time intensive : 1 ) the points furthest out in the -plane take the most time to correct .eliminating them decreases computation time , but at the cost of resolution in the real space sky .2 ) different points in the -plane require different numbers of orders of correction , so write a code that corrects to different numbers of orders depending on the point in the -plane .( in other words , do nt waste time correcting to 35 orders near the origin when 2 is enough . ) for this method , we eliminate the problems caused by under correcting at the extremes in the -plane by setting the values at those extremes to 0 .take , as a visual example , figure [ uvshave ] , in which we have set the values of the pixels in the 25 diagonal rows from the corner to zero ( we _ shaved _ 25 pixels from the corner ) . as a reminder of the real space sky after 25 orders without edge shaving , consider the top panel of figure [ corner25 ] ( a reproduction of figure [ order25 ] , top panel ) .notice that the intensity is approximately 4 or 5 times too high at the brightest points and , even worse , our single point source has turned into some sort of supernova explosion . compare this to the edge - shaved version of the real space sky ( figure [ corner25 ] , bottom panel ) , in which we see a sky that looks almost identical to our fully corrected sky after 35 orders ( top panel , figure [ order35 ] ) . to see this more clearly , consider figure [ residshave ] , which represents ( the absolute value of ) the residual between the real space sky corrected to 40 orders with no edge shaving and the real space sky corrected to 25 orders with edge shaving . from this figurewe see that the result of the edge shaving was to create a small spread around the star , but of an intensity about an order of magnitude lower than the maximum intensity of the star .the relative success of this scheme leads to the question of how low we may push the number of correction orders when edge shaving is introduced .figure [ realshave15 ] shows the result of only correcting to 15 orders , but shaving 75 rows of pixels from the corners in the -plane . 
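A minimal sketch of the edge-shaving step is given below. It assumes the uv grid is stored with the origin at the centre of the array (e.g. after an fftshift), so that the array corners are the most distant uv points; the shave depth is a free parameter.

```python
import numpy as np

def shave_corners(V, depth):
    """Zero the uv-plane pixels lying within `depth` diagonal rows of a corner."""
    n = V.shape[0]
    i, j = np.indices(V.shape)
    di = np.minimum(i, n - 1 - i)          # distance to the nearest edge along each axis
    dj = np.minimum(j, n - 1 - j)
    keep = (di + dj) >= depth              # taxicab distance to the nearest corner
    return V * keep

# example: shave 25 diagonal rows of pixels from each corner of a 128x128 grid
V = np.random.randn(128, 128) + 1j * np.random.randn(128, 128)
V_shaved = shave_corners(V, 25)
print((V_shaved == 0).sum(), "pixels zeroed")
```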
without edge shaving ,the real space sky corrected to 15 orders had a maximum intensity of about 500 ( figure [ order15 ] , top panel ) .now , the total intensity is approximately 1 as it should be , but it is spread over a number of pixels .so while the results are a dramatic improvement over what they had been , for the sake of precision it might be a good idea to correct to higher orders and shave less .the moral : the process works , but be careful about trying to shave too much . for this method , we correct to different numbers of orders at different . to test this method ,we re - wrote our matlab code so that the number of orders of correction at a given was determined by the theoretical estimate from section [ theorynmax ] ( more specifically , equation [ finalnmax ] ) .we then corrected the distortion for the same simple sky used in section [ nmax ] .figure [ residmeth2 ] shows the residual between the real space sky corrected to 40 orders at all points in the -plane and the real space sky corrected to different orders in the -plane .not surprisingly , the residual is incredibly small 5 or 6 orders of magnitude less than the maximum in the intensity of the source .( this is , of course , another sign that the theoretical prediction of the number of orders of correction is pretty good ) .however , this new matlab code presented a small problem : matlab is so much better at manipulating matrices than running for - loops that this second code , which theoretically requires less computation , takes approximately 15 times as long to run .of course , if the -plane correction is eventually used in mwa , a programming language more adept at loops will undoubtfully be used , and this method will potentially save time .previously , we have assumed that took the form given the relative complexity of the results above , it pays to investigate an alternate choice of .the main reason that this was chosen was because it allowed for an analytic solution for , where as shown previously in equation [ iupure ] . without an analytic solution to these integrals for ,the -plane correction becomes more computationally intensive than inverse fourier transforming to real space , correcting for the ionosphere there , and then fourier transforming back to visibility space .therefore , an analytic solution is required for any choice suitable choice of .unfortunately , choices for which allow such analytic solutions are hard to find. there is , however , at least one other such distortion : a polynomial expansion , given by where and are the number of terms in the and directions , respectively , necessary to accurately model the distortion by the ionosphere .the and shown here are real .the analytic derivation of is given in the appendix , section [ math4 ] .the result is where and the sums over and are restricted so that and .while we ve confined the analytic solution of this to the appendix , it should be mentioned that en route to this solution the exponential in from equation [ second ] was taylor - expanded ( which , you may recall , was also the case for the other choice of as a sum over sinusoidal modes , and is explicitly shown in equations [ n1 ] and [ n2 ] ) . in other words ,this solution is likewise characterized by the double expansion mentioned previously at the end of section [ sinmodes ] : one expansion over from expanding the exponential in , and one expansion in resulting from our model for the ionosphere . 
as such, this solution shows many of the unfortunate characteristics of the sinusoidal choice . in particular , there are still restricted sums which contain a number of terms comparable to that calculated in section [ numterms ] , so this choice has the same problem of making the -plane correction unreasonable if too many orders of correction or ionospheric modes are needed .in addition , the correction is still not strictly local : numerical computation of derivatives requires neighboring pixels , with higher orders requiring more neighbors .moreover , numerical computation of derivatives for a finite data set introduces its own set of additional errors , and thus makes this choice much less appealing than the previous one with sinusoidal modes .the ugliness of both of these choices ultimately stems from the inability to solve for analytically without expanding the exponential containing . unless a choice is found which may be solved analytically without this first expansion , it is doubtful that a better choice than the sinusoidal modes will be found .the ionospheric distortion presented throughout this paper was chosen because its particularly strong nature accentuated the subtleties of the -plane correction .the strength of this mode made the -plane correction appear computationally infeasible : any ionosphere which required even 10 of these modes to accurately model would require too much computation ( see section [ numterms ] ) .however , an ionospheric mode which shifts sources on the sky by tens of arcmin is somewhat unrealistic .we conclude by considering a more realistic distortion .the computational feasibility of the -plane correction is determined by the largest value of ( given by equation [ finalnmax ] ) required for any . to calculate this for a realistic sky, we need to know the largest possible value of .it is possible to cast the largest value of , which we label , in a form which better elucidates its physical significance .more specifically , notice that where is the angle between and . for an arbitrary choice of , this cosine term may be significant .however , if we wish to calculate , we may set and = , where is the greatest distance from the origin in the -plane that our antenna s -coverage allows .this is valid for mwa because the -coverage is approximately circular , so that the strongest ionospheric distorting mode is guaranteed to lie along a direction which possesses this maximum displacement in the -plane . with these changes , and based upon our previous definitions of the effective modes in terms of the actual modes ( see equations [ c_qdef ] and [ d_qdef ] ) , we may write as but , let s define in words , is the maximum deflection caused by a single mode that we might observe .in addition , let denote the length of our antenna array s longest baseline .the maximum -plane displacement is then this length measured in units of the wavelength that our antenna is detecting , [ the extra factor of is the result of our convention for fourier transforms ( see equation [ fourierdef ] ) , which differs from that conventionally used in radio astronomy ] . with these substitutions , , in terms of these parameters , the number of orders of correction necessary is ( adapted from equation [ finalnmax ] ) where is ( as before ) the fractional error desired for the correction .it should be noted that this form is only valid for determining the largest among all . for calculating for a particular , equation [ finalnmax ] must be used . 
for mwa, a typical frequency detected will be about 140 mhz , corresponding to meters ( this represents the 21 cm emission for a red shift of . ) we expect the ionosphere to deflect such a wave approximately 0.6 arcmin = radians ( , value is for the night ) .if we consider baselines of approximately 400 meters , then if given the number of modes necessary to accuately model the ionosphere ( which is as of yet undetermined ) , then for the full array may be determined from table [ nmaxtab ] by substituting for .as an example , if a fractional error of is desired and , then and table [ nmaxtab ] shows that 12 orders of correction are necessary .whether such a result is computationally feasible is dependent upon how much time is alloted for the ionospheric correction and the quality of the computers used . as such , it may not be determined here .what is clear , however , is that such a correction is not obviously ruled out on computational grounds ( especially if a technique such as edge shaving is used to reduce ) .[ quick aside : edge shaving alters the above results by substituting the largest left unshaved in place of in the above calculations . ]it should be noted that the values that went into calculating above were estimates , and certainly not set in stone . in particular, we once again emphasise that throughout this paper we have remained ignorant of the details involved in the expansion of the ionosphere , and have no knowledge of how many modes are necessary to sufficiently model the effect of the ionosphere . in addition , the ionospheric deflection is proportional to , with longer wavelengths experiencing greater shifts . therefore , we expect longer wavelengths than the above to require more orders of correction and shorter wavelengths , fewer .if is in fact lower by a factor of 5 , for example , then and the -plane correction is certainly a viable candidate for correcting the ionosphere .in particular , if the strongest mode approximation discussed in section [ strongest ] turns out to be a good approximation , then even with baselines of 1.5 km we may expect a good correction after only 4 orders for the wavelength given above . on the other hand , if is raised by a factor of 5 , then and the -plane correction is clearly computationally infeasible for any reasonable value of .as stated previously , correcting for the ionosphere in the -plane entails multiplying the perturbed data by the ionospheric correction operator * a* , where we have now made the time dependence of these quantities explicit .the above may be thought of as a matrix equation , where and represent our uncorrected and corrected ( respectively ) data arrays , and represents a correction matrix .the entries of this correction matrix are calculated by the appropriate binning of the coefficients in our correction equation ( reproduced from [ finaliu ] ) , where the coefficients " are the quantities preceeding on the right hand side of the equation . if the timescale within which one wishes to recalculate the effect of the ionosphere is small compared to the timescalewithin which the ionosphere significantly changes , then it is possible that the correction matrix * a* has changed very little from that previously calculated .more specifically , if one wishes to calculate the correction matrix at a time shortly after having calculated it at time ( ie , if where is the time scale of significant change in the ionosphere ) , then where represents a small correction matrix . 
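The update just described can be sketched with hypothetical matrices (not the paper's actual operator): store the sparse correction matrix computed at t_0, build a small increment whose nonzero entries are a subset of the existing ones, and add the two. The sparsity pattern is not enlarged, which is the point made next.

```python
import numpy as np
import scipy.sparse as sp

n = 1000
A0 = sp.random(n, n, density=1e-3, format="csr")     # correction matrix at t_0
rows, cols = A0.nonzero()
pick = np.random.rand(rows.size) < 0.3               # perturb a subset of the entries
dA = sp.csr_matrix((1e-3 * np.random.randn(int(pick.sum())),
                    (rows[pick], cols[pick])), shape=(n, n))

A1 = A0 + dA                                         # updated matrix at t_0 + dt
print(A0.nnz, A1.nnz)                                # the matrix is no less sparse
```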
in this regime , it is computationally much easier to calculate the small correction and add it to the previously calculated then to calculate from scratch. therefore , in such a scenario the process of updating the correction matrix is computationally favorable .the first order correction may be analytically calculated as follows : let s assume that we model the ionosphere using the effective modes that we have been throughout this paper ( see equations [ c_qdef ] and [ d_qdef ] ) , and that the represent part of a fixed fourier basis while the are our fitting parameters .( in other words , the are fixed and time independent , while the fluctuate with time ) .let s assume that we ve calculated the correction at a time .more specifically , assume that for all we ve calculated and stored all the relevent terms in the correction equation for , where the time dependence of , , and is now explicit .now we want to correct for the ionosphere at a later time . to compose the correction matrix at time , we could start with the full correction formula , and then construct the new matrix . instead , however , let s assume that we re in the regime of small ionospheric changes , so that to first order where is small . substituting this into the full correction and only keeping terms to first order weobtain in this final equation , the coefficients in this first group of terms exactly replicate those coefficients from time ( see equation [ t_0 ] ) . in other words , these terms represent the previously determined correction matrix .the second group represents the small adjustment to the correction matrix at .notice that these terms have been there own maximum cutoff for , labeled as in the above equation .if the quantity is small , as assumed , then the individual terms in this second sum are also small , and thus a smaller value of is necessary to obtain a desired fractional error for the intensity . in this case , it is computationally favorable to update the correction matrix rather than derive it from scratch .it should also be noted that if , then the nonzero entries of the matrix form a subset of the nonzero entries of the matrix , and thus their sum ( which represents ) is equally as sparse as . in other words , the matrix does not become less sparse through this process of correction ( a fact which is important for large numerical matrix manipulations ) .it is worth mentioning that whether updating the correction matrix is a viable method depends on the time scales of the changing ionosphere .more specifically , the above was calculated keeping only terms to first order in .it is possible that for the time scales considered higher terms are also necessary , or that is not small compared to ; the former situation complicates the math but does not necessarily outrule this method , while the latter pretty much requires that the correction matrix be built from scratch every time .the -plane correction only makes computational sense if the model for the ionospheric perturbation allows for an analytic solution to ( equation [ iupure ] ) .one such model is a sum over sinusoidal modes ( equation [ dtsine ] ) . by running numerical codes with this choice , the most important result discovered was that under correcting in the -plane is worse than not correcting at all ( section [ nmax ] ) .but in addition to this , correcting to too many orders or requiring too many modes to model the effect of the ionosphere may lead to a computationally unreasonable problem ( section [ numterms ] ) . 
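As a closing check on feasibility, the arithmetic quoted above for the MWA can be sketched as follows. The frequency, deflection and baseline follow the text, and the factor of 2*pi follows the paper's Fourier convention; the mode counts and the 1% target error are illustrative guesses, since those values are not pinned down above.

```python
from math import pi, radians, factorial

c = 3.0e8
lam = c / 140e6                    # ~2.14 m at 140 MHz
dtheta = radians(0.6 / 60.0)       # 0.6 arcmin of ionospheric deflection, in radians
baseline = 400.0                   # metres

u_edge = 2 * pi * baseline / lam   # farthest point in the uv-plane (paper's convention)
V_mode = dtheta * u_edge           # strength of one distorting mode at that point
print(f"V per mode ~ {V_mode:.2f}")

def orders_needed(V, eps=1e-2):
    n = 0
    while V ** (n + 1) / factorial(n + 1) > eps:
        n += 1
    return n

for n_eff in (1, 10, 20):          # number of effective modes (illustrative)
    print(n_eff, "effective modes ->", orders_needed(n_eff * V_mode), "orders")
```

With these illustrative inputs the count lands around a dozen orders, the same ballpark as the table-based figure quoted above.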
to help avoid this issue , a theoretical estimate of the number of orders of correction necessary ( which agrees well with the sample sky provided in this paper ) may be used ( section [ theorynmax ] ) .this estimate reveals that the number of orders of correction necessary varies in the -plane .this , however , suggests two methods for alleviating the problem : eliminating those points in the -plane which are particularly troublesome at the cost of precision for the real space sky ( section [ meth1 ] ) and correcting to different orders at different points in the -plane ( section [ meth2 ] ) .both techniques prove successful and make the problem of correcting in the -plane more feasible .in addition , depending on how often the ionosphere s effect is updated compared to the timescales of change in the ionosphere , it may be compuationally favorable to update the previously determined effect of the ionosphere rather than rederive its full effect from scratch each time .the authors would like to thank matias zaldarriaga for helpful conversations .the purpose of this appendix is to rigorously derive some of the mathematical formulas merely stated within the main text .it is included for completeness and for the curious reader ; no new results are derived .in this section , we solve for the unperturbed intensity , using a sum over sinusoidal modes for our ionospheric deflection , with a bit of algebra ( and keeping in mind that and are real but is complex ) , may be written as where is the complex conjugate of . with this choice of ,the expression for becomes before proceeding , it is convenient to convert the summation over as follows : where and with this form , we see that although there are modes distorting the sky , there are terms in the sum .this extra factor of 2 comes from the above constraint that be real .we will refer to these modes labeled by ( ) as _ effective modes_. writing the intensity in terms of effective modes gives the individual terms inside the summation over may be manipulated using the multinomial expansion to give where denotes a restricted sum such that and .the expression for now becomes because of the above performed multinomial expansion , the integral over ( once brought inside the summation ) now takes on the familiar form of a delta - function , and is easily performed to yield the integral over is now a simple delta - function integral , and its integration gives this section , we estimate the error in the -plane accumulated by truncating the infinite sum over after the term . recall that the total correction term is correcting through , the magnitude error in the correction is equal to the absolute value of the sum over all terms left out .more specifically , we now attempt to determine an upper bound on this error .to begin , we bring the absolute value inside the sum , so that all terms now add constructively , we expect the magnitude of will be approximately the same at all points in the -plane . denoting the maximum value of for the uncorrected sky as , we find next , consider the terms in the sum of the form .define to be the maximum value of for a given value of and the effective modes in question .then , where the equality in the equation above occurs because the restricted sum over requires that . 
with this substitution ,the upper bound on our error function becomes now concentrate on the inner summation .define according to i assert that [ aside : before providing the proof of this , we should point out that the fraction is not guaranteed to be an integer , and therefore this factorial and the ones given hereafter should be taken to be given by the gamma - function , \ ] ] the proof is quite short : \i ) start with the given form of , corresponding to , for any .\ii ) any value of which corresponds to an alternative choice for the under the constraint that may be obtained by multiplying this value for by a finite number of factors whose magnitudes are all less than 1 .therefore , is the maximum .an example may be quite useful here : consider the scenario with and . in this case , we assert that which corresponds to the choice for all four .now , let s pick an alternative choice for the ; let s say . for this choice we obtain , but this may be re - written as where the first factor transforms from ( 3,3 ) to ( 2,4 ) and the second transforms from ( 3,3 ) to ( 5,1 ) . therefore .our new knowledge of , when combined with our previous determination of the number of terms in the restricted sum over ( see section [ numterms ] ) , leads us to conclude that therefore , our new upper bound on the error becomes this is our final result for a strict upper bound on the total error .notice that this final step is equivalent to assuming that the contributions from all modes are as strong as the strongest , and add constructively . as such, is clearly an upperbound on the error . for one mode , this last step does not lead to that great of an overestimate . with the addition of more modes, however , this step overemphasises the contribution from weaker modes , and leads to a ( potentially much ) larger overestimate of the error .the goal of this section is to determine the value of necessary to obtain a precision in the -plane equal to if given and the ionospheric modes distorting the sky . based upon the terrible consequences which result from undercorrecting in the -plane ( see section [ nmax ] ) ,we begin with the expression just derived for the upperbound on the error in hopes of avoiding this pitfall . in order to make sense of our expression for an upper bound ( equation [ uu ] ) and derive from it the optimal choice of , we must make a few further approximations .some of these approximations will actually slightly decrease the expression for the error relative to , but are necessary in order to make sense of this ugly expression .to begin , we define so that next , we calculate the ratio and find that where we have employed stirling s approximation , stirling s approximation is best suited for large , but is actually quite accurate for small as well .it gives an answer within 8% of the actual value for , and within 1% for .in other words , by using this approximation we greatly simplify our expression and sacrifice only a little in terms of accuracy .using the fact that we see that for large this ratio reduces to this expression shows that at a critical value of , namely , this ratio is approximately equal to 1 , and . for , ; and for , .in other words , is an increasing function of until , and then decreases from then on .strictly speaking , these results are only valid for large . 
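As a brief aside, the accuracy quoted above for Stirling's approximation is easy to verify; the specific n values the text refers to are elided there, so the check below simply spans n = 1 to 10.

```python
from math import factorial, sqrt, pi, e

for n in range(1, 11):
    stirling = sqrt(2 * pi * n) * (n / e) ** n
    rel_err = abs(stirling - factorial(n)) / factorial(n)
    print(f"n = {n:2d}: relative error = {rel_err:.2%}")
# roughly 8% at n = 1, falling below 1% by n ~ 8
```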
however , in order to get an approximate expression for , we now extend these results to all .the hope is that the approximate value of is varied only slightly by this extension to small .but even if this approximation causes large enough error to raise doubts about our quantitative results for , it should still be good enough to learn something about the qualitative behavior of . recall that setting is the same thing as not correcting in the -plane ( see equation [ finaliu ] ); put differently , the term in the sum is of the order of the uncorrected -plane , . according to the above , successive corrections differ in magnitude from the previous term by a factor of .therefore , the approximate magnitude of the term is we expect the distortions created by the sky to alter the magnitude of the intensity only very slightly , so that .furthermore , for ( which is the case for ) we approximate that the ratio falls quick enough that we may approximate the total remaining error as being enirely due to , , where is the intensity in the -plane after being corrected to orders .therefore , in order to obtain an fractional error less than for our -plane correction , we must correct to enough orders so that therefore the optimal value of is given by some values for given and are given in the table embedded within the main text , table [ nmaxtab ] .a bulk of this paper has assumed that takes the form where and are the number of terms in the and directions , respectively , necessary to accurately model the distortion by the ionosphere .the and here are real . similar to last time ,this choice is chosen because it allows an analytic solution to . with this choice , our earlier equation for , becomes where first , we focus on evaluating . to do this, we first taylor - expand the second exponential , where the sum over is a restricted sum such that .plugging this into our expression for gives where we have assumed that the summations and the integrals may be freely interchanged . to solve this integral, we first introduce an additional parameter ( which we will eventually set to 1 ) and notice that therefore , if we define , then but , thus , plugging this into our earlier expression for , we obtain where but here is of the same form as earlier , and therefore where and the sums over and are restricted so that and .mitchell , d.a . ; greenhill , l.j . ; wayth , r.b .; sault , r.j . ; lonsdale , c.j . ; cappallo , r.j . ; morales , m.f . ; ord , s.m ., `` real - time calibration of the murchison widefield array , '' selected topics in signal processing , ieee journal of , vol.2 , no.5 , pp.707 - 717 , oct .2008 url : http://ieeexplore.ieee.org / stamp / stamp.jsp?arnumber= \linebreak 4703504&isnumber=4703300[http://ieeexplore.ieee.org / stamp / stamp.jsp?arnumber= \linebreak 4703504&isnumber=4703300 ] | as is common for antenna arrays in radio astronomy , the output of the mwa s correlator is the intensity measured in visibility space . in addition , the final power spectrum will be created in visibility space . as such , correcting for the ionosphere in visibility space instead of real space saves the computation required to inverse fourier transform to real space and then fourier transform back ( a significant decrease in computation for systems operating in real time such as the mwa . ) in this paper , we explore this problem of correcting for ionospheric distortions in the -plane . 
the mathematical formula for obtaining the unperturbed data from that reflected by the ionosphere is non - local , which in any practical application creates edge effects because of the finite nature of the -plane ( section [ ic ] ) . in addition , obtaining an analytic solution for the unperturbed intensity is quite difficult , and can only be done using very specific expansions of ionospheric perturbations . we choose one of these models ( with perturbations as sinusoidal modes , section [ sinmodes ] ) and run numerical codes to further study the correction . numerically implementing this correction to too few orders distorts the data in such a way as to be worse than not correcting at all ( section [ nmax ] ) . it is therefore critical to correct to a sufficient number of orders , and we present an analytic estimate for the optimal order ( section [ theorynmax ] ) . this analytic estimate shows that the optimal number of orders varies with , and in particular increases as increases along the direction of an ionospheric distorting mode . based on this observation , we then investigate two methods which save computation ( section [ feasible ] ) . these methods are ( a ) eliminating the intensity at values of which require too many orders , and ( b ) correcting to different orders at different . both methods prove successful , although the first causes a loss of some precision in the real space sky . we conclude by considering an alternate form with which to model ionospheric perturbations ( section [ choice2 ] ) . this alternate form was once again chosen because it lends itself to an analytic solution , but has as many ( if not more ) drawbacks as the original choice . |
the sierpinski gasket described in 1915 by w. sierpiski is a classical fractal .suppose is the solid regular triangle with vertexes let be the contracting similitude for then the sierpinski gasket is the self - similar set , which is the unique invariant set of ifs satisfying sierpinski gasket is important for the study of fractals , e.g. , the sierpinski gasket is a typical example of post - critically finite self - similar fractals on which the dirichlet forms and laplacians can be constructed by kigami , see also strichartz .for the word with letters in , i.e. , every letter for all we denote by latexmath:[ ] the line segment between and we also have some geodesic paths from and also we have but then the geodesic distance between and is in figure 2 , , and .in fact , by observation we have [ c : two]suppose with then if and only if there are at most two letters in in particular , if and then for example , and are neighbors , but and are not . for every denote the geodesic distance on let the average path length of the complex network we can state our main result as follows .we have the asymptotic formula [ r:6 ] since , theorem implies that the evolving networks have small average path length , namely .the paper is organized as follows . in section 2we give notations and sketch of proof for theorem 1,consisting of four steps . in sections 3 - 6, we will provide details for the four steps respectively .our main techniques come from the self - similarity and the renewal theorem .we will illustrate our following four steps needed to prove theorem 1 . given a small solid triangle can find a maximal solid triangle which contains and their boundaries are touching . translating into the language of words , for a given word we can find a unique shortest word such that and for a word where is the maximal suffix with at most two letters appearing , using claim [ c : two ] we have iterating again and again, we obtain a sequence let in particular , we define for we have , then we will prove in section 3 in fact , this proposition shows that is independent of the choice of whenever in this case , we also write write is independent of in fact , is the minimal number of moves for to touch the boundary of * step 2 . * given we consider the average geodesic distance between the empty word and word of length and set we will obtain the limit property of as in section 4 . as illustrated in figure 3 ,proposition [ p : asym ] shows that the _ typical _ geodesic path is the geodesic path between and whose first letters of codings are different . on the other hand , for example the geodesic path between and with the same first letter will give negligible contribution to using with we obtain that , ignoring the terms like we have is the average value of using stolz theorem , we have in fact , suppose or for all is composed of infinite words with letters in then we have a natural mass distribution on such that for any word of length any word let denote the cardinality of letters appearing in word for -almost all let for -almost all suppose and and there is an infinite sequence of integers such that for all and and for all we then let is uniquely determined by word and is symmetric for letters in we find that is a sequence of positive independent identically distributed random variables with for example , for then and , | in this paper , we introduce a new method to construct evolving networks based on the construction of the sierpinski gasket . 
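A hedged sketch of the evolving network described above: nodes are the 3^t level-t triangles (words over {0,1,2}), and two nodes are joined when the corresponding small triangles share a corner point. This geometric touching rule is one natural reading of the neighbour criterion, whose word-based form is stated above with its symbols elided, so the construction should be treated as illustrative. Points are kept exact by storing (x, y/sqrt(3)) as rationals, and the growth of the printed averages with t can be compared against the asymptotic formula of the theorem.

```python
from fractions import Fraction as F
from itertools import product
from collections import deque

Q = [(F(0), F(0)), (F(1), F(0)), (F(1, 2), F(1, 2))]   # big-triangle vertices

def corners(word):
    """Corner points of the triangle coded by `word` (IFS maps x -> (x + q)/2)."""
    pts = Q
    for letter in reversed(word):
        qx, qy = Q[letter]
        pts = [((x + qx) / 2, (y + qy) / 2) for x, y in pts]
    return frozenset(pts)

def average_path_length(t):
    nodes = list(product(range(3), repeat=t))
    cs = {w: corners(w) for w in nodes}
    adj = {w: [v for v in nodes if v != w and cs[w] & cs[v]] for w in nodes}
    total = 0
    for s in nodes:                      # BFS from every node
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    n = len(nodes)
    return total / (n * (n - 1))

for t in (1, 2, 3, 4):
    print(t, round(average_path_length(t), 3))
```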
using self - similarity and renewal theorem , we obtain the asymptotic formula for average path length of our evolving networks . |
double - diffusive instabilities commonly occur in any astrophysical fluid that is stable according to the ledoux criterion , as long as the entropy and chemical stratifications have opposing contributions to the dynamical stability of the system .they drive weak forms of convection described below , and can cause substantial heat and compositional mixing in circumstances reviewed in this paper .two cases can be distinguished . in _ fingering convection _ , entropy is stably stratified ( , but chemical composition is unstably stratified ; it is often referred to as _ thermohaline _ convection by analogy with the oceanographic context in which the instability was first discovered . in _oscillatory double - diffusive convection _( oddc ) , entropy is unstably stratified ( , but chemical composition is stably stratified ; it is related to semiconvection , but can occur even when the opacity is independent of composition . fingering convection can naturally occur at late stages of stellar evolution , notably in giants , but also in main sequence stars that have been polluted by planetary infall ( as first proposed by vauclair , ) , or by material transferred from a more evolved companion star .oddc on the other hand is naturally found in stars in the vicinity of convective nuclear - burning regions , including high - mass core - burning main sequence or red clump stars , and shell - burning rgb and agb stars .it is also thought to be common in the interior of giant planets that have been formed through the core - accretion scenario .beyond competing entropy and compositional gradients , a necessary condition for double - diffusive instabilities to occur is ( and are the microscopic compositional and thermal diffusivities ) .this is usually the case in astrophysical fluids , where is typically _ much _ smaller than one owing to the added contribution of photon and electron transport to the thermal diffusivity .the somewhat counter - intuitive manner in which a high thermal diffusivity can be destabilizing is illustrated in figure [ fig : instab ] . in the fingering case , a small ensures that any small displaced fluid element rapidly adjusts to the ambient temperature of its new position , while retaining its original composition .an element displaced downward thus finds itself denser than the surrounding fluid and continues to sink ; the opposite occurs for an element displaced upward . in the case of oddc, thermal diffusion can progressively amplify any internal gravity wave passing through , by heating a fluid element at the lowest position of its displacement and cooling it near the highest . in both cases ,the efficient development of the instability is conditional on the fluid element being small enough for thermal diffusion to take place .double - diffusive convection is therefore a process driven on very small scales , usually orders of magnitude smaller than a pressure scaleheight .consequently , a common way of studying fingering and odd convection is by a _ local _ linear stability analysis , in which the background gradients of entropy ( related to ) and composition are approximated as being constant ( baines & gill , ) .the governing equations in the boussinesq approximation are then : where , , and are the non - dimensional velocity field , pressure , temperature and mean molecular weight perturbations of the fluid around the background state , pr is the prandtl number ( and is the viscosity ) , and is called the _ density ratio_. 
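For reference, the non-dimensional Boussinesq system usually written for this problem is reproduced below as an assumption (the symbols of eq. [eq:goveqs] are elided above); the upper signs correspond to fingering convection and the lower signs to oddc, consistent with the constant term of the cubic quoted next.

```latex
\begin{align}
\frac{1}{\mathrm{Pr}}\left(\frac{\partial \mathbf{u}}{\partial t}
   + \mathbf{u}\cdot\nabla\mathbf{u}\right) &= -\nabla p + (T-\mu)\,\mathbf{e}_z + \nabla^2\mathbf{u},\\
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T \pm w &= \nabla^2 T,\\
\frac{\partial \mu}{\partial t} + \mathbf{u}\cdot\nabla \mu \pm \frac{w}{R_0} &= \tau\,\nabla^2\mu,\\
\nabla\cdot\mathbf{u} &= 0.
\end{align}
```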
here the unit lengthscale used is the typical horizontal scale of the basic instability , the unit time is , the unit temperature is and the unit compositional perturbation is .the sign in the temperature and composition equations should be used to model fingering convection , while the sign should be used to model oddc .mathematically speaking , this sign change is the only difference between the two processes .assuming perturbations have a spatio - temporal structure of the form where is either one of the dependent variables , is the wavenumber of the perturbation and its growth rate ( which could be complex ) , satisfies a cubic equation : \lambda + \left [ k^6 { \rm pr } \tau \pm l^2 { \rm pr } ( \tau - r_0^{-1 } ) \right ] = 0 \mbox { , } \nonumber \label{eq : cubic}\end{aligned}\ ] ] where again refers to for fingering convection and for oddc , and is the norm of the horizontal component of . is real in the case of fingering convection but complex in the case of oddc , as expected from the physical description of the mechanism driving the instability .the fastest growing mode in both cases is vertically invariant .its growth rate and horizontal wavenumber can be obtained by maximizing over all possible .finally , setting identifies marginal stability , and reveals the parameter range for double - diffusive instabilities to be: note that in both cases corresponds to the ledoux criterion .while linear theory is useful to identify _ when _ double - diffusive convection occurs , nonlinear calculations are needed to determine how the latter saturates , and how much mixing it causes .vertical mixing is often measured via non - dimensional vertical fluxes , called nusselt numbers .the temperature and compositional nusselt numbers are defined here as where and are the dimensional temperature and compositional turbulent fluxes .to reconstruct the dimensional _ total _ fluxes of heat and composition and , we have ( wood , ) where is the thermal conductivity , and is the specific heat at constant pressure .it is worth noting that can also be interpreted as the ratio of the effective to microscopic compositional diffusivities . direct numerical simulations , which solve the fully nonlinear set of equations ( [ eq : goveqs ] ) for given parameter values pr , and from the onset of instability onward , can in principle be run to estimate the functions and nu .however , the actual nonlinear behavior of double - diffusive systems reveals a number of surprises , that must be adequately studied before a complete theory for mixing can be put forward .it has long been known in oceanography that double - diffusive convection has a tendency to drive the growth of structures on scales much larger than that of the basic instability ( cf .stern , ) .this tendency was recently confirmed in the astrophysical context as well ( rosenblum , ; brown , ) .these structures either take the form of large - scale internal gravity waves or thermo - compositional staircases , as shown in figure [ fig : large - scale ] . 
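The "maximise the growth rate over all wavenumbers" step described above, which both of the mixing models discussed below rely on, can be sketched as follows. The constant term of the cubic matches the one quoted above; the lambda and lambda^2 coefficients are written from the standard Boussinesq derivation and should be checked against the paper's eq. [eq:cubic]; sign = +1 selects fingering and sign = -1 selects oddc, and the example parameter values are illustrative rather than stellar.

```python
import numpy as np

def fastest_growing_mode(Pr, tau, R0, sign=+1, lmax=2.0, nl=2000):
    """Growth rate and horizontal wavenumber of the fastest-growing elevator mode."""
    best = (-np.inf, None, None)
    for l in np.linspace(1e-4, lmax, nl):
        k2 = l * l                                   # vertically invariant: k^2 = l^2
        a2 = k2 * (1 + Pr + tau)
        a1 = k2**2 * (Pr + tau + Pr * tau) + sign * Pr * (1 - 1 / R0)
        a0 = Pr * tau * k2**3 + sign * Pr * k2 * (tau - 1 / R0)
        lam = max(np.roots([1.0, a2, a1, a0]), key=lambda z: z.real)
        if lam.real > best[0]:
            best = (lam.real, l, lam)
    return best

growth, l_f, lam = fastest_growing_mode(Pr=0.1, tau=0.1, R0=1.5, sign=+1)   # fingering
print(growth, l_f)
```

For oddc the dominant root is complex (an overstability), so the same routine returns the oscillatory growth rate through lam.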
, .the basic instability first saturates into a near - homogeneous state of turbulence , but later develops large - scale gravity waves .bottom : oddc simulation for pr = , .the basic instability first saturates into a near - homogeneous state , but later develops into a thermo - compositional staircase , whose steps gradually merge until only one is left .the mean nusselt numbers increase somewhat in the presence of waves in the fingering case , and quite significantly when the staircase forms , and at each merger , in the oddc case ., scaledwidth=90.0% ] for density ratios close to one , fingering convection tends to excite large - scale gravity waves , through a process called the _ collective instability _ first discovered by stern ( ) .these waves grow to significant amplitudes , and enhance mixing by fingering convection when they break .the same is true for oddc , but the latter can sometimes also form thermo - compositional staircases excited by a process called the _ ( radko , ) .the staircases spontaneously emerge from the homogeneously turbulent state , and appear as a stack of fully convective , well - mixed regions ( the layers ) separated by thin strongly stratified interfaces .the layers have a tendency to merge rather rapidly after they form .vertical mixing increases significantly when layers form , and with each merger . for these reasons, quantifying transport by double - diffusive convection requires understanding not only how and at what amplitude the basic small - scale instabilities saturates , but also under which circumstances large - scale structures may emerge and how the latter affect mixing .given their ubiquity in fingering and odd convection , it is natural to seek a unified explanation for the emergence of large - scale structures that is applicable to both regimes .mean - field hydrodynamics is a natural way to proceed , as it can capitalize on the separation of scales between the primary instability and the gravity waves or staircases . to understand how mean - field instabilities can be triggered , first note that the intensity of vertical mixing in double - diffusive convection is naturally smaller if the system is closer to being stable , and vice - versa .if a homogeneously turbulent state is spatially modulated by large - scale ( but small amplitude ) perturbations in temperature or chemical composition , then vertical mixing will be more efficient in regions where the _ local _ density ratio is closer to one , and smaller in regions where it is further from one . 
the spatial convergence or divergence of these turbulent fluxes can , under the right conditions , enhance the initial perturbations in a positive feedback loop , in which case a mean - field instability occurs .first discussed separately in oceanography , the collective and -instabilities were later discovered to be different unstable modes of the same mean - field equations by traxler ( ) , in the context of fingering convection .their work has successfully been extended to explain the emergence of thermo - compositional layers in oddc in astrophysical systems by mirouh ( ) .a formal stability analysis of the mean - field equations shows that they are unstable to the ( the layering instability ) whenever the _ flux ratio _ is a _ decreasing _ function of ( radko , ) .similarly , a necessary condition for the collective instability was given by stern ( ) , who argued that large - scale gravity waves can develop whenever where .this criterion is often much less restrictive than the one for the development of the .note that and in ( [ eq : gamma ] ) and ( [ eq : a ] ) are the nusselt numbers associated with the small - scale turbulence present _ before _ any large - scale structure has emerged .traxler ( ) were the first to run a systematic sweep of parameter space to study fingering convection in astrophysics , and to measure and in 3d numerical experiments .however , they were not able to achieve very low values of pr and .brown ( ) later presented new simulations with pr and as low as , but this is still orders of magnitude larger than in stellar interiors , where pr and typically range from to . to bridgethe gap between numerical experiments and stellar conditions , brown ( ) derived a compelling semi - analytical prescription for transport by small - scale fingering convection , that reproduces their numerical results and can be extrapolated to much lower pr and .their model attributes the saturation of the fingering instability to the development of shearing instabilities between adjacent fingers ( see also denissenkov , and radko & smith , ) . for a given set of governing parameters pr , and , the growth rate and horizontal wavenumber of the fastest - growing fingers can be calculated from linear theory ( see section [ sec : linear ] ) . meanwhile ,the growth rate of shearing instabilities developing between neighboring fingers is proportional to the velocity of the fluid within the finger times its wavenumber ( a result that naturally emerges from dimensional analysis , but can also be shown formally using floquet theory ) . 
stating that shearing instabilities can disrupt the continued growth of fingers requires and to be of the same order .this sets the velocity within the finger to be where is a universal constant of order one .meanwhile , linear stability theory also relates the temperature and compositional fluctuations and within a finger to .the turbulent fluxes can thus be estimated _ only using linear theory _ : comparison of these formula with the data helps calibrate .brown ( ) found that using can very satisfactorily reproduce most of their data within a factor of order one or better , except when pr ( which is rarely the case in stellar interiors anyway ) .equation ( [ eq : nusfinger ] ) implies that for low pr and , turbulent heat transport is negligible , while turbulent compositional transport is significant only when is close to one .however , the values of obtained by brown ( ) are still not large enough to account for the mixing rates required by charbonnel & zahn ( ) to explain surface abundances in giants .such large values of might on the other hand be achieved if mean - field instabilities take place . as discussed in section [ sec : large ] , one simply needs to estimate and in order to determine in which parameter regime mean - field instabilities can occur . using ( [ eq : gamma ] ) and( [ eq : nusfinger ] ) it can be shown that is always an _ increasing _ function of at low pr and .this implies that fingering convection is stable to the , and therefore not likely to transition _ spontaneously _ to a state of layered convection in astrophysics .the simulations of brown ( ) generally confirm this statement , except in a few exceptional cases discussed below .by contrast , fingering convection does appear to be prone to the collective instability ( as shown in figure [ fig : large - scale ] ) for sufficiently low . by calculating for a typical stellar fluid with pr and , we find for instance that gravity waves should emerge when or so . in this regime, we expect transport to be somewhat larger than for small - scale fingering convection alone , although probably not by more than a factor of 10 ( see figure [ fig : large - scale ] ) .nevertheless , a first - principles theory for the vertical mixing rate in fingering convection , in the presence of internal gravity waves , remains to be derived . finally ,as first hypothesized by stern ( ) and found in preliminary work by brown ( ) , it is possible that these large - scale gravity waves could break on a global scale and mechanically drive the formation of layers .if this is indeed confirmed , transport could be much larger than estimated in ( [ eq : nusfinger ] ) in the region of parameter space for which .this , however , remains to be confirmed .until we gain a better understanding of the various effects of gravity waves described above , ( [ eq : nusfinger ] ) is our best current estimate for transport by fingering convection in astrophysical objects .an example of the numerical implementation of the model by brown ( ) is now available in mesa , and consists of the following steps .( 1 ) to estimate the local properties of the star , and calculate all governing parameters / diffusivities . 
( 2 ) to estimate the properties of the fastest - growing fingering modes using linear theory ( see section [ sec : linear ] ) and ( 3 ) to apply ( [ eq : nusfinger ] ) to calculate , and then ( [ eq : totalfluxes ] ) to calculate .turbulent heat transport is negligible , so .3d numerical simulations of oddc were first presented by rosenblum ( ) and mirouh ( ) . both explored parameter space to measure , as in the case of fingering convection , the functions and after saturation of the basic odd instability .the values of pr and achieved , however , were not very low , and models are needed once again to extrapolate these results to parameters relevant for stellar interiors .mirouh ( ) proposed an empirical formula for nu ( and nu , via ) , whose parameters were fitted to the experimental data . however , a theory based on first principles is more desirable .we have recently succeeded in applying a very similar method to the one used by brown ( ) to model transport by small - scale oddc . as described in moll ( ) ,a simple approximate estimate for temperature and compositional nusselt numbers can also be derived from the linear theory for the fastest - growing mode ( see section [ sec : linear ] ) , this time in the form of where , and .the constants and must again be fitted to the existing data ; preliminary results suggest that and . by contrast with fingering convection , oddc is subject to both and collective instabilities .both modify the vertical heat and compositional fluxes quite significantly so ( [ eq : nusoddc ] ) should _ not _ be used _ as is _ to model mixing by oddc .it is used , on the other hand , to determine when mean - field instabilities occur .the region of parameter space unstable to layering can again be determined by calculating ( using [ eq : nusoddc ] this time ) , and checking when .moll ( ) ( see also mirouh , ) showed that layering is always possible for pr and below one , provided ] , and argued that their results are consistent with the following empirical transport laws for layered convection : where and are slowly varying functions of and , and where the rayleigh number is , where is the mean step height in the staircase . for numerically achievable pr and , wood ( ) estimated that and .equation ( [ eq : nuslayers ] ) has two important consequences .the first is that turbulent heat transport can be significant in layered convection , provided is large enough .secondly , both nusselt numbers are ( roughly ) proportional to , but nothing so far has enabled us to determine what may be in stellar interiors .indeed , in _ all _ existing simulations of layered oddc to date , layers were seen to merge fairly rapidly until a single one was left . 
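The layered transport law just quoted can be summarized in a few lines. The sketch below treats the exponents and prefactors of the Nu(Ra) power laws, and the layer height H, as free inputs, since the published values are not reproduced in the text above; with the layer Rayleigh number scaling as the fourth power of the layer height, the strong sensitivity of the fluxes to the assumed H is immediately visible. The specific combination of stratification and diffusivities entering Ra_L is also an assumption here.

```python
def layered_nusselt(Ra, A=0.1, a=1.0 / 3.0):
    """Nu = A * Ra**a for layered double-diffusive convection.
    A and a are placeholders to be calibrated against simulations."""
    return A * Ra ** a

def rayleigh_of_layer(H, N2, kappa_T, nu):
    """Ra_L ~ |N^2| H^4 / (kappa_T * nu); the precise buoyancy combination
    should be taken from the published model (assumption here)."""
    return abs(N2) * H ** 4 / (kappa_T * nu)

if __name__ == "__main__":
    kappa_T, nu, N2 = 1e7, 1e1, 1e-6       # placeholder, cgs-like values
    for H in (1e7, 1e8, 1e9):              # trial layer heights
        Ra = rayleigh_of_layer(H, N2, kappa_T, nu)
        print(f"H = {H:.1e}   Ra_L = {Ra:.2e}   Nu_T ~ {layered_nusselt(Ra):.2e}")
```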
whether staircases in stellar interiors evolve in the same way , oreventually reach a stationary state with a layer height smaller than the thickness of the unstable region itself , is difficult to determine without further modeling , and remains the subject of current investigations .odd systems which do not transition into staircases ( ) also usually evolve further with time after saturation of the basic instability , with the small - scale wave - turbulence gradually giving way to larger - scale gravity waves .whether the latter are always excited by the collective instability , or could be promoted by other types of nonlinear interactions between modes that transfer energy to larger scales , remains to be determined .in all cases , these large - scale waves have significant amplitudes and regularly break .this enhances transport , as it did in the case of fingering convection .moll ( ) found that the resulting nusselt numbers are between 1.2 and 2 across most of the unstable range , regardless of pr or .these results are still quite preliminary , however , and their dependence on the domain size ( which sets the scale of the longest waves ) remains to be determined .based on the results obtained so far , and summarized above , a plausible mixing prescription for oddc can be obtained by applying the following steps .( 1 ) to estimate the local properties of the star , and calculate all governing parameters / diffusivities .( 2 ) to estimate the properties of the fastest - growing modes using linear theory ( see section [ sec : linear ] ) ( 3 ) to determine whether layers are expected to form or not by calculating ( using [ eq : nusoddc ] ) for neighboring values of , and evaluating . ( 4 )if the system is layered , then _assume a layer height _( for instance , some small fraction of a pressure scaleheight ) , and calculate the heat and compositional fluxes using ( [ eq : totalfluxes ] ) with ( [ eq : nuslayers ] ) . if the system is not expected to form layers , then calculate these fluxes using ( [ eq : totalfluxes ] ) and instead .the unknown layer height is the only remaining free parameter of this model , and will hopefully be constrained in the future by comparison of the model predictions with asteroseismic results . | much progress has recently been made in understanding and quantifying vertical mixing induced by double - diffusive instabilities such as fingering convection ( usually called thermohaline convection ) and oscillatory double - diffusive convection ( a process closely related to semiconvection ) . this was prompted in parts by advances in supercomputing , which allow us to run direct numerical simulations of these processes at parameter values approaching those relevant in stellar interiors , and in parts by recent theoretical developments in oceanography where such instabilities also occur . in this paper i summarize these recent findings , and propose new mixing parametrizations for both processes that can easily be implemented in stellar evolution codes . |
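As a compact illustration of the four-step mixing prescription enumerated above, the skeleton below organizes it as a single driver routine. It is only a sketch: the stellar-structure inputs, the layering test, the fitted Nusselt laws and the flux conversion are represented by stub functions whose names, signatures and placeholder formulas are ours, not the paper's, and the layer height H remains the free parameter emphasized in the text. Step (2), the linear-theory calculation of the fastest-growing mode, is folded into the calibrated Nusselt laws in this sketch.

```python
from collections import namedtuple

StarPoint = namedtuple("StarPoint", "Pr tau R0 kappa_T grad_T grad_mu")

# --- placeholder ingredients (to be replaced by the calibrated models) ---
def layers_expected(Pr, tau, R0):
    # stub: the real test checks whether the relevant flux ratio is a
    # decreasing function of R0; the threshold below is illustrative only
    return R0 < 0.5 * (Pr + 1.0) / (Pr + tau)

def smallscale_nusselt(Pr, tau, R0):
    return 1.1, 1.5                              # placeholder Nu_T, Nu_mu

def layered_nusselt(Pr, tau, R0, H):
    Ra = H ** 4                                  # schematic Ra ~ H^4 scaling
    nu = 0.1 * Ra ** (1.0 / 3.0)                 # placeholder power law
    return nu, nu

def fluxes_from_nusselt(sp, Nu_T, Nu_mu):
    # schematic: turbulent flux = (Nu - 1) * diffusive flux
    return (Nu_T - 1.0) * sp.kappa_T * sp.grad_T, (Nu_mu - 1.0) * sp.kappa_T * sp.grad_mu

# --- the four-step driver ---
def oddc_mixing(sp, layer_height=None):
    if layers_expected(sp.Pr, sp.tau, sp.R0):                     # step (3)
        if layer_height is None:
            raise ValueError("layered regime: an assumed layer height is required")
        Nu_T, Nu_mu = layered_nusselt(sp.Pr, sp.tau, sp.R0, layer_height)   # step (4a)
    else:
        Nu_T, Nu_mu = smallscale_nusselt(sp.Pr, sp.tau, sp.R0)             # step (4b)
    return fluxes_from_nusselt(sp, Nu_T, Nu_mu)

if __name__ == "__main__":
    sp = StarPoint(Pr=1e-6, tau=1e-7, R0=0.1, kappa_T=1.0, grad_T=1.0, grad_mu=1.0)
    print(oddc_mixing(sp, layer_height=10.0))
```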
the two most characteristic ensembles is , on one side , the one resulting in exponential ( boltzmann - gibbs - ( bg ) ) distributions , , and , on the other side , the one with power distributions , .both are encountered in the realm of the high energy multiparticle production processes investigated in hadronic and nuclear collisions .they are connected there with , respectively , nonperturbative _ soft _ dynamics operating at small s and described by exponential distributions , ) , and with perturbative _ hard _ dynamics , responsible for large s and described by power distributions , .these two types of dynamics are investigated separately .it is usually assumed that they operate in distinct parts of phase space of , separated by .however , it was found recently that the new high energy data covering the whole of phase space ( cf ., for example , ) are best fitted by a simple , quasi - power law formula extrapolating between both ensembles : this formula coincides with the so called tsallis nonextensive distribution for , ^{\frac{1}{1-q } } \stackrel{def}{= } c_q\exp_q\left(-\frac{x}{x_0}\right ) \stackrel{q \rightarrow 1}{\longrightarrow } c_1\exp \left(-\frac{x}{x_0}\right).\label{eq : tsallis}\ ] ] this is the distribution we shall shall concentrate on and discuss . in section [ sec : examples ] we shall discuss some examples of processes leading to such distributions .it depends on the nonextensivity parameter and this can be different depending on whether it arises from a tsallis distribution ( ) or from the nonextensive tsallis entropy ( ) .both are connected by and this relation seems to be confirmed experimentally .this is presented in section [ sec : duality ] . in section [ sec : shannon ] we shall discuss necessary conditions for obtaining a tsallis distribution from shannon information entropy .section [ sec : logosc ] demonstrates that a tsallis distribution can also accommodate the log - periodic oscillations apparently observed in high energy data .our conclusions and a summary are presented in section [ sec : summary ] .in many practical applications , a tsallis distribution is derived from tsallis statistics based on his nonextensive entropy , on the other hand , there are even more numerous examples of physical situations not based on and still leading to quasi - power distributions in the tsallis form . inwhat follows , we shall present some examples of such mechanisms , concentrating on those which allow for an interpretation of the parameter .the first example is _ superstatistics _ ( cf , also ) based on the property that a gamma - like fluctuation of the scale parameter in exponential distribution results in the -exponential tsallis distribution with ( cf .( [ eq : tsallis ] ) ) .the parameter defines the strength of such fluctuations , . from the thermal perspective, it corresponds to a situation in which the heat bath is not homogeneous , but has different temperatures in different parts , which are fluctuating around some mean temperature .it must be therefore described by two parameters : a mean temperature and the mean strength of fluctuations , given by . as shown in ,this allows for further generalization to cases where one also has an energy transfer to / from heat bath .the scale in the tsallis distribution becomes then -dependent : here the parameter depends on the type of energy transfer , cf . 
for illustrative examples from , respectively , nuclear collisions and cosmic ray physics .the second example is the _ preferential attachment approach _ ( used in stochastic networks ) . herethe system under consideration exhibits correlations of the preferential attachment type ( like , for example , `` rich - get - richer '' phenomenon in networks ) and the scale parameter depends on the variable under consideration . if then the probability distribution function , , is given by an equation the solution of which is a tsallis distribution ( again , with ) : ^{\frac{1}{1-q}}. \label{eq : nets}\ ] ] for one again gets the usual exponential distribution .consider now a _tsallis distribution from multiplicative noise _ .we start from the langevin equation , where and denote stochastic processes corresponding to , respectively , multiplicative and additive noises .this results in the following fokker - planck equation for the distribution function , stationary satisfies with in the case of no correlation between noises and no drift term due to additive noise ( i.e. , for ) its solution is a tsallis distribution for , ^{\frac{q}{1-q}}~~{\rmwith}~~ t = \frac{2var(\xi)}{\langle \xi\rangle};~~q = 1 + \frac{2var(\gamma)}{\langle \gamma\rangle}. \label{eq : solutionpp}\ ] ] however , if we insist on a solution in the form of ^n,~~n = \frac{1}{q-1 } , \label{eq : singlep}\ ] ] eq .( [ eq : k1vk2 ] ) has to be replaced by .\label{eq : k1vk2p}\ ] ] one then gets in the form of a tsallis distribution , ( [ eq : singlep ] ) , but with and with -dependent ( reminiscent of from eq .( [ eq : teff ] ) discussed before , cf . ) : \quad{\rm with}~~ t_0=\frac{cov(\xi,\gamma)}{\langle \gamma\rangle},~~t_1 = \frac{\langle \xi\rangle}{2\langle \gamma\rangle}. \label{eq : teff}\ ] ] let us now remember that the usual situation in statistical physics is that out of three variables considered , energy , multiplicity and temperature , two are fixed and one fluctuates .fluctuations are then given by gamma distributions ( in the case of multiplicity distributions where are integers , they become poisson distributions ) and only in the thermodynamic limit ( ) does one get them in the form of gaussian distributions , usually discussed in textbooks . in discussed in detail situations when two or all three variables fluctuate .if all are fixed we have a distribution of the type of this is nothing else but a tsallis distribution with randomly chosen independent points breaks segment into parts , length of which is distributed according to eq .( [ eq : constn ] ) .the length of the such part corresponds to the value of energy ( for ordered ) .one could think of some analogy in physics to the case of random breaks of string in points in the energy space .notice that induced partition differs from _ successive sampling _ from the uniform distribution , ] ) . however , if the available energy is limited , , then the resulting _ conditional probability _ becomes a tsallis distribution with : only if the scale parameter would fluctuate in the same way as in in the case of superstatistics , see . ]^{\frac{1}{1 - q } } , \label{eq : constraints}\\ & & q = \frac{n-3 } { n-2 } < 1 , \qquad \qquad \lambda = \frac{\alpha n}{n-1}. \label{eq : c1}\end{aligned}\ ] ] we end this part by a reminder of how tsallis distribution with arises from _ statistical physics considerations_. consider an isolated system with energy and with degrees of freedom ( particles ) .choose single degree of freedom with energy ( i.e. 
, the remaining , or reservoir , energy is ) .if this degree of freedom is in a single , well defined , state then the number of states of the whole system is and probability that the energy of the chosen degree of freedom is is . expanding ( slowly varying ) around , and ( because ) keeping only the two first termsone gets i.e. , a boltzmann distribution ( or ) . on the other hand , because one usually expects that ( where are of the order of unity and we put and , to account for diminishing the number of states in the reservoir by one , ) , one can write and write the full series for probability of choosing energy : =\nonumber\\ & = & c\left(1 - \frac{1}{\nu - 2}\beta e\right)^{(\nu - 2 ) } = \nonumber\\ & = & \beta(2-q)[1 - ( 1-q)\beta e]^{\frac{1}{1-q } } ; \label{eq : statres}\end{aligned}\ ] ] ( where we have used the equality ] , arises also if in the poisson multiplicity distribution , , one fluctuates the mean multiplicity using gamma distribution with means average value in a given event whereas denotes averages over events ( or ensembles ) . ] .now , identifying fluctuations of mean with fluctuations of , one can express the above observation via fluctuations of temperature .noticing that ( i.e. , that and ] , supplied with constraint , where is some function of , subjected to the usual maxent variational procedure , results in the following form of : , \label{eq : tfroms}\ ] ] with constants and calculated from the normalization of and from the constraint equation .it is now straightforward to check that \label{eq : constrt}\ ] ] results in which translates to ( remembering that ) a tsallis distribution ^{\frac{1}{1 - q}}. \label{eq : tfs}\ ] ] the parameter can be deduced from the additional condition which must be imposed , namely from the assumed knowledge of the ( notice that in the case of bg distribution this would be the only condition ) .so far the physical significance of the constraint ( [ eq : constrt ] ) is not fully understood .its form can be deduced from the idea of varying scale parameter in the form of the preferential attachment , eq .( [ eq : nets ] ) , which in present notation means .as shown in ( [ eq : nets ] ) it results in tsallis distribution ( [ eq : tfs ] ) .this suggest the use of ] and , because , therefore for tsallis distribution becoming for boltzmann - gibbs ( bg ) distribution ( ) .it is interesting that the constraint ( [ eq : constrt ] ) seems to be natural a for multiplicative noise described by the langevine equation : , with traditional multiplicative noise and additive noise ( stochastic processes ) ) ( see for details ) .in fact , there is a connection between the kind of noise in this process and the condition imposed in the maxent approach . for processes described by an additive noise , , the natural condition is that imposed on the arithmetic mean , , and it results in the exponential distributions . 
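As a quick numerical check of the superstatistics route invoked at the start of this section, the snippet below draws the inverse scale parameter of an exponential from a gamma distribution and verifies that the resulting mixture reproduces a q-exponential; with the normalized-exponential mixing used here, the gamma shape and scale are fixed by (q, x0) as written in the comments. Python with numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
q, x0, N = 1.15, 1.0, 400_000

# gamma-distributed inverse scale beta; with a *normalized* exponential in the
# mixture, shape k = (2-q)/(q-1) and scale theta = (q-1)/x0 give exactly the
# q-exponential pdf  f(x) = ((2-q)/x0) * (1 + (q-1) x/x0)**(-1/(q-1))
k, theta = (2.0 - q) / (q - 1.0), (q - 1.0) / x0
beta = rng.gamma(shape=k, scale=theta, size=N)
x = rng.exponential(scale=1.0 / beta)

def tsallis_survival(t):
    """Analytic P(X > t) for the q-exponential above."""
    return (1.0 + (q - 1.0) * t / x0) ** (-(2.0 - q) / (q - 1.0))

for t in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"t = {t:5.1f}   MC P(X>t) = {(x > t).mean():.4f}"
          f"   q-exp prediction = {tsallis_survival(t):.4f}")
```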
for the multiplicative noise , , the natural condition is that imposed on the geometric mean , , which results in a power law distribution .one has to start with some explanatory remarks .the tsallis distribution can be also obtained via maxent procedure from tsallis entropy , where and .now , depending on the condition imposed one gets from ^{-\frac{1}{1-q}}\quad { \rm for}\quad\langle x\rangle_1 \label{eq : cond1}\\ { \rm or}\quad f(x ) & = & ( 2-q)[1+(q-1)x]^\frac{1}{1-q}\quad { \rm for}\quad\langle x\rangle_q .\label{eq : condq}\end{aligned}\ ] ] however , after replacement of by , the distribution ( [ eq : cond1 ] ) becomes the usual tsallis distribution ( [ eq : condq ] ) .therefore one encounters an apparent puzzle , namely the of tsallis distribution does not coincides with the of corresponding tsallis entropy , instead they are connected by relation .the natural question therefore arises : is such a relation seen in data ? as shown in that seems really be the case , at least quantitatively .this is seen when comparing obtained from data on _ distributions _[ duality] ) to obtained from data on _ multiplicities _ in collisions assuming that entropy is proportional to the number of particles produced ( cf .[ duality ] ) .whereas is deduced from a tsallis distribution taken in one of the forms discussed above , is deduced directly from the corresponding entropy of the collision .assume that such collision can be adequately described by a superposition model in which the main ingredients are nucleons which have interacted at least once .assume further that they are identical and independent and produce secondaries of each other .as a result a are produced in one collision and the mean multiplicity is ( where is the mean number of nucleons participating in the collision and the mean multiplicity in elementary collision .the corresponding entropy of such process will then be -sum of entropies of individual collisions and is given by : [ t ] energy dependencies of the parameters obtained from , respectively : multiplicity distributions ( squares ) , from different analysis of transverse momenta distributions in data ( - circles , full symbols ) and from data on from pb+pb collisions ( - half filled circles ) . dependence of the charged multiplicity for nucleus - nucleus collisions divided by the superposition of multiplicities from proton - proton collisions fitted to data on multiplicity taken from ( na49 ) and from compilation .,title="fig : " ] energy dependencies of the parameters obtained from , respectively : multiplicity distributions ( squares ) , from different analysis of transverse momenta distributions in data ( - circles , full symbols ) and from data on from pb+pb collisions ( - half filled circles ) . energy dependence of the charged multiplicity for nucleus - nucleus collisions divided by the superposition of multiplicities from proton - proton collisions fitted to data on multiplicity taken from ( na49 ) and from compilation .,title="fig : " ] ^k = \frac{\left [ 1 + ( 1-q)s^{(1)}_q\right]^{\nu } - 1}{1 - q}. \label{eq : sqnu}\ ] ] notice that = \nu \ln \left [ 1 + ( 1 - q ) s^{(1)}_q\right] ] ( where are coefficients of the expansion ) .this the origin of the usual dressing factor appearing in and used to describe data : \label{eq : r}\ ] ] ( only and terms are kept ) .it turns out that a similar scaling solution can also be obtained in case of a tsallis quasi - power like distribution . 
to this endone must start from stochastic network approach , section [ sec : sn ] and eq .( [ eq : nets ] ) , in which tsallis distribution is obtained by introducing a scale parameter depending on the variable considered . in our case it is resulting in in final difference form ( with change in notation : replaced by ) consider a situation in which .it depends now on the new scale parameter ( in order to keep changes of to be of the order of ) and can be very small but always remains finite .it can now be shown that = ( 1 - \alpha n)f(e) ] with fitting parameters and . in terms of , and have . ] : \right\}. \label{eq : approx}\ ] ] with .in addition to the scale parameter one has two more parameters occurring in the dressing factor , and .the other parameters occurring in eq .( [ eq : r ] ) are expressed by the original parameters in the following way : , , and .one can , however , consider a more involved evolution process , with sequential cascades ; in this case the additional parameter changes parameter in ( [ eq : r ] ) , .it does not affect the slope parameter but changes the frequency of oscillations which now decrease as .comparison with data requires ( cf . for details ) .as mentioned before , one can translate a dressed tsallis distribution into a normal one but with a log - periodically oscillating in scale factor , cf .[ figlpot]a .the formula used there to fit the obtained results resembles that for dressing factor ( [ eq : r ] ) , .\label{eq : t}\ ] ] in fit shown in fig .[ figlpot]a parameters ( generally energy dependent ) are .[ t ] oscillations of scale parameter leading to identical dressed tsallis distribution as shown in fig .[ summary]b ( obtained for cms data at tev and fitted using eq .( [ eq : t ] ) ) . dependencies of from eq .( [ eq : taue ] ) and from eq .( [ eq : st6 ] ) resulting in oscillations of shown in panel . , title="fig : " ] oscillations of scale parameter leading to identical dressed tsallis distribution as shown in fig .[ summary]b ( obtained for cms data at tev and fitted using eq .( [ eq : t ] ) ) . dependencies of from eq .( [ eq : taue ] ) and from eq .( [ eq : st6 ] ) resulting in oscillations of shown in panel ., title="fig : " ] to explain eq .( [ eq : t ] ) one uses a stochastic equation for the temperature evolution written in langevin formulation with energy dependent noise , , and allowing for time dependent but results are for transverse momenta here .however , they are taken at the midrapidity , i.e. , for , and for large transverse momenta , , and in this region one has . ] : assuming now a scenario of _ preferential attachment _ ( cf .section [ sec : sn ] above ) known from the growth of networks ) one has and eq .( [ eq : st2 ] ) has now the form : after straightforward manipulations ( cf . for details ) one gets , for large ( i.e. 
, neglecting terms ) : \frac{dt}{d(\ln e ) } + t\frac{d\xi(t , e)}{d(\ln e ) } = 0.\label{eq : st5}\ ] ] assuming now that noise increases logarithmically with energy , in this case eq .( [ eq : st5 ] ) becomes an equation for the damped hadronic oscillator with solution in the form of log - periodic oscillation of temperature with frequency and depending on initial conditions phase shift parameter : \ln e\right\}\cdot \sin(\omega\ln e + \phi ) .\label{eq : st7}\ ] ] averaging the noise fluctuations over time and taking into account that the noise term can not on average change the temperature , , one arrives at this should now be compared with the parametrization of given by eq .( [ eq : t ] ) and used to fit data in fig.[figlpot ] , of the order of , emerges from the stochastic process with energy dependent noise ; the main contribution comes from the usual energy - independent gaussian white noise . ] .we close with the remark that , instead of using energy dependent noise given by eq .( [ eq : st6 ] ) and keeping the relaxation time constant .we could equivalently keep the energy independent white noise , , but allow for the energy dependent relaxation time , for example in the form of in this case the temperature evolution has the form e^{-t\omega^2/n } \exp\left(-\frac{t}{\tau_0}\right ) , \label{eq : ttau}\ ] ] and gradually approaches its equilibrium value .actually , for , as in our case , this approach towards equilibrium is faster for large .this is because , in addition to the usual exponential relaxation characteristic for case , we have an additional factor .we presented examples of possible mechanisms resulting in quasi - power distributions exemplified by tsallis distribution , eq .( [ eq : tsallis ] ) .our presentation had to be limited , therefore we did not touch thermodynamic connections of this distribution or the possible connection of tsallis distributions with qcd calculations discussed recently .* statistical physics consideration , as well as `` induced partition process '' , results in eq .( [ eq : constn ] ) , i.e. , in tsallis distribution with . fluctuations of the multiplicity modify the parameter which is now equal to , cf .( [ eq : fluctn ] ) .notice that conditional probability for the bg distribution again results in eq .( [ eq : constn ] ) .* fluctuations of the multiplicity are equivalent to results of an application of superstatistics , where the convolution becomes a tsallis distribution , eq .( [ eq : tsallis ] ) , for * differentiating eq .( [ eq : sst ] ) one gets this is nothing else than a `` preferential attachment '' case , again resulting in a tsallis distribution which for becomes a bg distribution , cf . eq .( [ eq : nets ] ) . * replacing in eq .( [ eq : diffsst ] ) differentials by finite differences , cf . eq .( [ eq : deltae ] ) , one gets for the scale invariant relation , eq . ( [ eq : gscaling ] ) , which results in log - periodic oscillations in tsallis distributions . of such well known distributions as the snedecor distribution ( with with integer , for it becomes an exponential distribution ) , can be extended to complex nonextensivity parameter . ]in addition to this line of reasoning , we have also brought in the problem of the apparent duality between the nonextensive parameters obtained from the whole phase space measurements of multiplicity and more local measurements of transverse momenta .this point deserves an experimental and phenomenological scrutiny . 
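To make the dressed spectrum concrete, the snippet below evaluates a Tsallis transverse-momentum distribution multiplied by the log-periodic modulation of the form a + b cos[c ln(E + d) + f] described above, keeping only the lowest terms of the expansion. The numerical parameter values are placeholders, not the published fits, and the overall normalization is arbitrary.

```python
import numpy as np

def qexp(pT, C, q, T):
    """Plain Tsallis (q-exponential) transverse-momentum spectrum."""
    return C * (1.0 + (q - 1.0) * pT / T) ** (-1.0 / (q - 1.0))

def dressing(pT, a, b, c, d, f):
    """Log-periodic modulation R(pT) = a + b*cos(c*ln(pT + d) + f).
    Parameter values used below are placeholders, not fitted values."""
    return a + b * np.cos(c * np.log(pT + d) + f)

pT = np.logspace(-1, 2, 7)                         # illustrative GeV-scale grid
params = dict(a=1.0, b=0.1, c=2.0, d=0.5, f=0.0)   # placeholder values
spectrum = qexp(pT, C=1.0, q=1.1, T=0.15) * dressing(pT, **params)
for p, s in zip(pT, spectrum):
    print(f"pT = {p:8.3f}   dressed f(pT) = {s:.3e}")
```

An equivalent plot can be produced with the modulation moved into a log-periodically oscillating scale parameter T(pT), as done in the figure discussed above.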
finally , we tentatively suggested that , by choosing the right constraints , which account for additive or multiplicative processes considered , one can also get a tsallis distribution directly from the shannon information entropy . c. tsallis , j. stat .phys . * 52 * ( 1988 ) 479 and eur .j. a * 40 * ( 2009 ) 257 ; cf . also c. tsallis ,_ introduction to nonextensive statistical mechanics _ ( berlin 2009 : springer ) . for an updated bibliography on this subject ,see http://tsallis.cat.cbpf.br/biblio.htm .o. j. e. maroney , phys .rev e * 80 * ( 2009 ) 061141 ; t. s. bir , k. rmssy and z. schram , j. phys .g * 37 * ( 2010 ) 094027 ; t. s. bir and p. vn , phys .e * 83 * ( 2011 ) 061147 ; t. s. bir and z. schram , eur .. j. web conf .* 13 * , ( 2011 ) 05004 ; t. s. bir , _ is there a temperature ?conceptual challenges at high energy , acceleration and complexity _ ( springer , new york dordrecht heidelberg london , 2011 ) ; p. vn , g. g. barnafldi , t. s. bir and k. rmssy , j. phys . : conf .* 394 * ( 2012 ) 012002 .g. wilk , z. wodarczyk , acta phys .b * 35 * ( 2004 ) 871 and 2141 ; c. tsallis , eur .j. st * 161 * ( 2008 ) 175 ; d. j. b. soares , c. tsallis , a. m. mariz , l. r. da silva , europhys .* 70 * ( 2008 ) 70 .t. s. bir , g. g. barnafldi , p. van , physica a * 417 * ( 2015 ) 215 ; t. s. bir , p. van , g. g. barnafldi , k. rmssy , _ statisytical power law due to reservoir fluctuations and the universal thermostat independence principle _ , arxiv:1409.5975[cond - mat.stat - mech ] , to be published in entropy .y. huang , h. saleur , c. sammis , d. sornette , europhys .lett . * 41 * ( 1998 ) 43 ; h. saleur , c.g .sammis , d. sornette , j. geophys .( 1996 ) 17661 ; a. krawiecki , k. kacperski , s. matyjaskiewicz , j. a. holyst , chaos solitons fractals * 18 * ( 2003 ) 89 ; j. bernasconi , w. r. schneider , j. stat .30 * ( 1983 ) 355 ; d. stauffer , d. sornette , physica a * 252 * ( 1998 ) 271 ; d. stauffer , physica a * 266 * ( 1999 ) 35 . c. y. wong , g. wilk , acta phys .b * 43 * ( 2012 ) 2047 ; phys . rev .d * 87 * ( 2013 ) 114007 and _ relativistic hard - scattering and tsallis fits to spectra in collisions at the lhc _ , arxiv:1309.7330[hep - ph ] , to be published in the open nuclear and particle physics journal ( 2014 ) .l. j. l. cirto , c. tsallis , c .- y .wong , g. wilk , _ the transverse - momenta distributions in high - energy collisions - a statistical - mechanical approach _arxiv:1409.3278[hep - ph ] and c .- y .wong , g. wilk , l.j .l. cirto , c. tsallis , _ possible implication of a single nonextensive distribution for hadron production in high - energy pp collisions _ ; arxiv:1412.0474[hep - ph ] . | quasi - power law ensembles are discussed from the perspective of nonextensive tsallis distributions characterized by a nonextensive parameter . a number of possible sources of such distributions are presented in more detail . it is further demonstrated that data suggest that nonextensive parameters deduced from tsallis distributions functions , , and from multiplicity distributions ( connected with tsallis entropy ) , , are not identical and that they are connected via . it is also shown that tsallis distributions can be obtained directly from shannon information entropy , provided some special constraints are imposed . they are connected with the type of dynamical processes under consideration ( additive or multiplicative ) . finally , it is shown how a tsallis distribution can accommodate the log - oscillating behavior apparently seen in some multiparticle data . |
the theory of mechanism design has been developed and applied to many branches of economics for decades .nash implementation is a cornerstone of the mechanism design theory .the maskin s theorem provides an almost complete characterization of social choice rules ( scrs ) that are nash implementable : when the number of agents is at least three , the sufficient conditions for nash implementation are monotonicity and no - veto , and the necessary condition is monotonicity .note that an scr is specified by a designer , a desired outcome from the designer s perspective may not be desirable for the agents ( see table 1 in section 3.1 ) .the maskin mechanism ( page 394 , ) constructed in the proof of maskin s sufficiency theorem is an abstract mechanism .people seldom consider how the designer actually receives messages from agents . roughly speaking , there are two distinct manners : direct and indirect manner . in the former manner ,agents report their messages to the designer directly ( _ e.g. _ , speak face to face , hand over , _ etc _ ) , thereby the designer can know exactly that a message is reported by an agent himself , not by any other device . in the latter manner ,agents report messages to the designer through channels ( _ e.g. _ , internet , cable _ etc _ ) . thereby ,when the designer receives a message from a channel , he can not know what has happened on the other side of the channel : whether the message is reported by an agent himself , or generated by some device authorized by an agent .traditionally , nobody notice the difference between the two manners in the maskin mechanism .however , in this paper , we will point out that traditional sufficient conditions on nash implementation may fail if agents report messages to the designer in an indirect manner .the rest of the paper is organized as follows : section 2 recalls preliminaries of the mechanism design theory given by serrano ; section 3 is the main part of this paper , where we will propose a self - enforcing agreement to help agents break through the restriction of maskin s sufficiency theorem .section 4 draws the conclusion .let be a finite set of _ agents _ with , be a finite set of social _ outcomes_. the information held by the agents is summarized in the concept of a _state_. the true state is not verifiable by the designer .we denote by a typical state and by the domain of possible states . at state , each agent is assumed to have a complete and transitive _ preference relation _ over the set .we denote by the profile of preferences in state , and denote by the strict preference part of .fix a state , we refer to the collection as an _ environment_. let be the class of possible environments .a _ social choice rule _ ( scr ) is a mapping .a _ mechanism _ describes a message or strategy set for agent , and an outcome function . is unlimited except that if a mechanism is direct , _i.e. _ , .an scr satisfies _ no - veto _ if , whenever for all and for every agent but perhaps one , then .an scr is _ monotonic _ if for every pair of environments and , and for every , whenever implies that , there holds .we assume that there is _ complete information _ among the agents , _i.e. _ , the true state is common knowledge among them . 
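Since monotonicity and no-veto are purely combinatorial conditions on finite environments, they are easy to check mechanically. The sketch below encodes state-dependent preference rankings and an SCR for a toy example (the states, outcomes and preferences shown are illustrative and are not the Table 1 of this paper) and tests Maskin monotonicity directly from its definition.

```python
from itertools import product

# toy environment: agents, outcomes, and state-dependent rankings (best first)
AGENTS = ("1", "2", "3")
OUTCOMES = ("a", "b", "c")
PREFS = {
    "theta1": {"1": ("a", "b", "c"), "2": ("b", "a", "c"), "3": ("a", "c", "b")},
    "theta2": {"1": ("b", "a", "c"), "2": ("b", "c", "a"), "3": ("b", "a", "c")},
}
SCR = {"theta1": {"a"}, "theta2": {"b"}}       # a social choice rule F

def weakly_prefers(ranking, x, y):
    return ranking.index(x) <= ranking.index(y)

def does_not_fall(state, state2, agent, x):
    """True if every outcome ranked below x at `state` is still ranked below
    x at `state2` for this agent (x does not fall in the agent's ranking)."""
    r1, r2 = PREFS[state][agent], PREFS[state2][agent]
    return all(weakly_prefers(r2, x, y)
               for y in OUTCOMES if weakly_prefers(r1, x, y))

def is_monotonic(scr):
    for s1, s2 in product(scr, repeat=2):
        for x in scr[s1]:
            if all(does_not_fall(s1, s2, i, x) for i in AGENTS) and x not in scr[s2]:
                return False
    return True

print("monotonic:", is_monotonic(SCR))
```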
given a mechanism played in state , a _nash equilibrium _ of in state is a strategy profile such that : .let denote the set of nash equilibria of the game induced by in state , and denote the corresponding set of nash equilibrium outcomes .an scr is _ nash implementable _ if there exists a mechanism such that for every , .maskin provided an almost complete characterization of scrs that were nash implementable .the main results of ref . are two theorems : 1 ) ( _ necessity _ ) if an scr is nash implementable , then it is monotonic . 2 ) ( _ sufficiency _ ) let , if an scr is monotonic and satisfies no - veto , then it is nash implementable . in order to facilitate the following investigation, we briefly recall the maskin mechanism given by serrano as follows : consider a mechanism , where agent s message set is , is the set of non - negative integers . a typical message sent by agent described as .the outcome function is defined in the following three rules : ( 1 ) if for every agent , and , then .( 2 ) if agents send and , but agent sends , then if , and otherwise .( 3 ) in all other cases , , where is the outcome chosen by the agent with the lowest index among those who announce the highest integer .this section is the main part of this paper . in the beginning, we will show an example of scr which satisfies monotonicity and no - veto .it is nash implementable although all agents dislike it .then , we will propose a self - enforcing agreement using complex numbers , by which the agents may break through the maskin s sufficiency theorem and make the scr not nash implementable .let , , . in each state ,the preference relations over the outcome set and the corresponding scr are given in table 1 .the scr is _ pareto - inefficient _ from the agents perspectives because in state , all agents unanimously prefer a pareto - optimal outcome : for each agent , . _ table 1 : an scr satisfying monotonicity and no - veto is pareto - inefficient from the agents perspectives . _ + [ cols="^,^,^,^,^,^ " , ] suppose the true state is . at first sight , might be a unanimous for each agent , because by doing so would be generated by rule 1 of the maskin mechanism .however , has an incentive to unilaterally deviate from to in order to trigger rule 2 ( where stands for any legal value ) , since , ; also has an incentive to unilaterally deviate from to , since , .note that either or can certainly obtain her expected outcome only if just one of them deviates from ( if this case happened , rule 2 would be triggered ) .but this condition is unreasonable , because all agents are rational , nobody is willing to give up and let the others benefit .therefore , both and will deviate from . as a result ,rule 3 will be triggered .since and both have a chance to win the integer game , the winner is uncertain and the final outcome is also uncertain between and . to sum up ,although every agent prefers to in state , can not be yielded in nash equilibrium .indeed , the maskin mechanism makes the pareto - inefficient outcome be nash implementable in state . can the agents find a way to break through the maskin s sufficiency theorem and let the pareto - efficient outcome be nash implementable in state ?interestingly , we will show that the answer may be `` yes '' if agents report messages to the designer through channels ( _ e.g. 
_ , internet ) .in what follows , first we will define some matrices with complex numbers , then we will propose a self - enforcing agreement to help agents break through the maskin s sufficiency theorem . *definition 1 * : let be two matrices , and be two basis vectors : hence , , ; , .* definition 2 * : for agents , suppose each agent possesses a basis vector . is defined as the tensor product of basis vectors : contains basis vectors and elements . is also denoted as .similarly , obviously , there are possible vectors : .* definition 3 * : , _ i.e. _ , where the symbol denotes an imaginary number , and is the conjugate transpose of . *definition 4 * : * definition 5 * : for ] , ,\phi\in[0,\pi/2]\} ] , ] , let latexmath:[ ] , . + * output * : , .+ step 1 : reading from each agent .+ step 2 : computing the leftmost and rightmost columns of .+ step 3 : computing \overrightarrow{\psi}_{1}_{c\cdots cd}\sin^{2}(\theta/2 ) + \$_{d\cdots dd}\cos^{2}(\theta/2)\sin^{2}\phi\end{aligned}\ ] ] since is satisfied , _ i.e. _ , , then the -th agent chooses . as a result , the -th agent belongs to , by condition , can be chosen as . according to step 5 of _ messagecomputing _ , .thus , . in this case , can be chosen as . by symmetry , in state , consider the following strategy : each agent submits , ; each agent submits .then this strategy profile is a nash equilibrium of in state , and the final outcome implemented in nash equilibrium is . | the maskin s theorem is a fundamental work in the theory of mechanism design . in this paper , we propose that if agents report messages to the designer through channels ( _ e.g. _ , internet ) , agents can construct a self - enforcing agreement such that any pareto - inefficient social choice rule satisfying monotonicity and no - veto will not be nash implementable when an additional condition is satisfied . the key points are : 1 ) the agreement is unobservable to the designer , and the designer can not prevent the agents from constructing such agreement ; 2 ) the agents act non - cooperatively , and the maskin mechanism remain unchanged from the designer s perspective . mechanism design ; nash implementation ; social choice . |
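Definitions 1-2 above amount to building n-fold Kronecker products of two-dimensional basis vectors, and Definition 5 introduces two-parameter local operations. The paper's exact matrices and parametrization are not fully reproduced in the text, so the snippet below only illustrates the tensor-product bookkeeping, with a generic two-parameter 2x2 unitary standing in for the agents' local operations as an assumption.

```python
import numpy as np
from functools import reduce

C = np.array([1.0, 0.0])          # basis vector for one message
D = np.array([0.0, 1.0])          # basis vector for the other

def tensor(vectors):
    """n-fold Kronecker product: 2**n components, one per joint message profile,
    with a single nonzero entry for a product of basis vectors."""
    return reduce(np.kron, vectors)

def local_op(theta, phi):
    """A generic two-parameter 2x2 unitary used as a stand-in for the
    omega(theta, phi) of Definition 5 (the paper's exact form is assumed, not quoted)."""
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

n = 3
psi = tensor([C, C, D])                       # the joint vector for messages (C, C, D)
print("dimension:", psi.size, "  nonzero index:", int(np.argmax(psi)))

U = reduce(np.kron, [local_op(np.pi / 2, 0.0)] * n)   # agents act independently
print("norm preserved:", bool(np.isclose(np.linalg.norm(U @ psi), 1.0)))
```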
in this section of the supplementary information , we prove that sampling the distribution given in eq .( 8) is optimal in that it gives the minimum variance of the born rule estimator of all distributions .our goal is to estimate .we will do so by sampling from some distribution , and compute an estimator given by .the variance of this estimator is given by ^ 2}{p({\boldsymbol{\lambda } } ) } - p^2 \,.\ ] ] to minimize the variance , we choose to minimize the first term in this expression . the distribution that minimizes the variance is .the proof follows directly from the cauchy - schwarz inequality .consider ^ 2}{p({\boldsymbol{\lambda } } ) } \right ) \left ( \sum_{{\boldsymbol{\lambda } } } p({\boldsymbol{\lambda } } ) \right ) \notag \\ & = \sum_{{\boldsymbol{\lambda } } } \frac{[w({\boldsymbol{\lambda}})]^2}{p({\boldsymbol{\lambda}})}\end{aligned}\ ] ] the inequality is saturated for , and therefore this distribution minimizes the variance . with this choice ,the variance of the estimator is where .in this section of the supplementary information , we detail ways to exploit an efficiently computable symmetry of the born rule to give a variant of our estimation algorithm . if we replace the quantum circuit and measurement effect with another such that the born rule probability remains the same , then eq . ( 10 ) provides two ( in general ) different estimators for this born rule probability .the rate of convergence of these estimator need not be the same under this symmetry , and so such a variant may provide an advantage . as an example , consider the ` time reversal ' symmetry that exchanges states and measurement effects in a unitary circuit ( with some care taken to appropriately normalise the distributions for states and effects ) .one can define a `` reverse protocol '' which produces a poly precision estimator , provided that the _ total reverse negativity _ of the circuit is polynomially bounded . in general , , as seen from because both and are efficiently computable , one is free to choose the direction of simulation resulting in the faster estimator convergence rate .( we note that while , which suggests that the reverse protocol would have slower convergence when using a high rank effect .however , in such cases , is in general larger than by a similar factor , cancelling the effect of in the ratio . )another symmetry of the born rule is the the regrouping of unitaries into different ` elementary ' gates , such as reexpressing as .different groupings can lead to different estimators , as we demonstrate with a simple example using a grouping of two unitaries into one , .we can estimate by sampling trajectories using and , or directly by sampling using as a single step . while both of these methods will produce an unbiased estimator of the born rule, they will not converge at the same rate in general , as a result of the general inequality \neq \frac{{\left|w_{u}(\lambda_2|\lambda_0)\right|}}{{\mathcal{m}}_{u}(\lambda_0 ) } \,.\ ] ] we note that equality holds in the case where and are both nonnegative . | we present a method for estimating the probabilities of outcomes of a quantum circuit using monte carlo sampling techniques applied to a quasiprobability representation . our estimate converges to the true quantum probability at a rate determined by the total negativity in the circuit , using a measure of negativity based on the 1-norm of the quasiprobability . if the negativity grows at most polynomially in the size of the circuit , our estimator converges efficiently . 
these results highlight the role of negativity as a measure of non - classical resources in quantum computation . estimating the probability of a measurement outcome in a quantum process using only classical methods is a longstanding problem that remains of acute interest today . directly calculating such probabilities using the born rule is inherently inefficient in the size of the quantum system , and efficiently estimating such probabilities for a generic quantum process is expected to be out of reach of classical computers . nonetheless , there are interesting and nontrivial classes of quantum circuits for which we _ can _ efficiently estimate the probabilities of outcomes . the canonical example of such a class is that of stabilizer circuits . such circuits can create highly - entangled states and perform many of the fundamental operations involved in quantum computing ( teleportation , quantum error correction , distillation of magic states ) but the celebrated gottesman - knill theorem allows such circuits to be classically simulated efficiently . other examples include fermionic linear optics / matchgates , and some classes of quantum optics . while these methods may be extended to include bounded numbers of operations outside of the class ( for example , ref . ) , such extensions generally treat all operations outside of the class on an equal footing ( for example , the cost of adding noisy magic states is the same as adding pure magic states ) and so do not provide any insight into the relative resources of different operations . in this letter , we present a general method for estimating outcome probabilities for quantum circuits using quasiprobability representations . simulation methods based on quasiprobability representations have a long history in physics , and have recently been used in quantum computation to identify classes of operations that are efficiently simulatable . our method allows for estimation in circuits wherein the quasiprobabilities may go negative . that is , while making the most efficient use of circuit elements that are represented nonnegatively , it nonetheless provides an unbiased estimator of the true quantum outcome probability regardless of the inclusion of more general elements that are negatively represented . we quantify the performance of this method by providing an upper bound on the rate of convergence of this estimator that scales with a measure of the total amount of negativity in the circuit . _ _ probability estimation.__consider quantum circuits of the following form . the circuit initiates with qudits ( -level quantum systems ) in a product state , evolves through a circuit consisting of elementary gates that act nontrivially on at most a fixed number of qudits ( for example , 1- and 2-qudit gates ) , and terminates with a product measurement , i.e. , an independent measurement of each qudit . universal quantum computation can be achieved with circuits of this form . note that we do not include circuits with intermediate measurements and conditional operations based on their outputs ( we return to this consideration in the discussion ) . we aim to estimate the probability of a fixed outcome where denotes the outcome of the measurement on the qudit . ( note that _ estimation _ of the probability of a fixed outcome is distinct from a _ simulation _ as in refs . , wherein different outcomes are sampled from this distribution . 
) a natural benchmark for the precision of an estimator is the precision that can be obtained from sampling the quantum circuit itself . if we had access to a quantum computer that implemented a circuit in this class , then we could use it to estimate the probability of a fixed outcome by computing the observed frequency of outcome over samples . by the hoeffding inequality , will be within of the quantum probability with probability provided the number of samples satisfies this bound implies that for any fixed , the number of samples required to achieve error scales polynomially in . we call estimators satisfying this property _ poly - precision _ estimators . ( we distinguish these from _ exponential - precision _ estimators , defined as estimators for which scales logarithmically in . ) our central results are a classical algorithm that produces a poly precision estimate of a quantum circuit in the above class , and a bound on the efficiency of this algorithm based on a measure of the circuit s negativity in a quasiprobability representation . _ _ quasiprobability representations.__a quasiprobability representation of a qudit over is defined by a frame and a dual frame , which are ( generally over - complete ) bases for the space of hermitian operators acting on satisfying ] , where = \pm 1 ] , where we have defined to be the _ total forward negativity bound _ of the circuit : let be the average of over independent samples of . using the boundedness and unbiasedness properties of , the hoeffding inequality yields an upper bound on the rate of convergence of the average . specifically , will be within of the quantum probability with probability if a total of samples are taken . consequently , if the total forward negativity bound grows at most polynomially with , then our protocol gives an efficient estimate of the quantum probability to within , with an exponentially small failure probability . that is , for circuits with a polynomially - bounded total forward negativity bound , is a poly - precision estimator of the born rule probability and we can sample efficiently in . we note that the total forward negativity bound of is insensitive to the measurement negativity , instead depending only on . any efficiently computable symmetry of the born rule can be used to give a variant on the procedure defined above . the rate of convergence of the estimator need not be symmetric under these born rule symmetries , and so such a variant may provide an advantage . two examples of such symmetries the time reversal symmetry that exchanges states and measurement effects in a unitary circuit , and the regrouping of unitaries into different elementary gates are explored in the appendix . in particular , a variant procedure is presented for which the total negativity bound is insensitive to the negativity of the initial state . _ example : estimation with the discrete wigner function.__the odd- qudit stabilizer subtheory and the associated discrete wigner function provide a canonical example for demonstrating the use of our algorithm ; see also ref . . using this discrete wigner representation for our estimation algorithm , the nonnegativity of the stabilizer subtheory ensures that stabilizer states , gates , and rank-1 measurements have negativity and so are `` free '' resources . moreover , due to the existence of nonnegatively represented operations that are not in the stabilizer polytope , our approach is efficient on a strictly larger set of circuits than those of ref . . 
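A compact, self-contained sketch of the estimation procedure described above is given below. The "quasiprobabilities" used here are random arrays whose columns sum to one; they are not derived from any actual frame, so the example only exercises the sampling machinery: trajectories are drawn proportionally to absolute values (the variance-minimizing choice argued for in the supplementary material), signs and per-step negativity factors are reweighted, and the sample average is compared with the exact contraction. The Hoeffding-style sample count is quoted up to constants.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 5, 4                                     # phase-space size, circuit depth

def noisy_quasiprob(shape):
    """Random quasiprobability array: mostly positive, a little negativity,
    columns normalized to sum to one (a toy stand-in for a real representation)."""
    w = rng.uniform(0.1, 1.0, size=shape)
    w[rng.random(size=shape) < 0.15] *= -0.3
    return w / w.sum(axis=0, keepdims=True)

W_rho = noisy_quasiprob((d,))
W_U = [noisy_quasiprob((d, d)) for _ in range(T)]
W_E = rng.uniform(0.0, 1.0, size=d)             # a fictitious effect, entries in [0, 1]

# exact "Born rule" contraction  P = W_E . (U_T ... U_1) . W_rho
v = W_rho.copy()
for U in W_U:
    v = U @ v
P_exact = float(W_E @ v)

def one_trajectory():
    est = np.sum(np.abs(W_rho))                               # ||W_rho||_1
    lam = rng.choice(d, p=np.abs(W_rho) / np.sum(np.abs(W_rho)))
    est *= np.sign(W_rho[lam])
    for U in W_U:
        col = U[:, lam]
        M = np.sum(np.abs(col))                               # forward negativity at lam
        nxt = rng.choice(d, p=np.abs(col) / M)
        est *= M * np.sign(col[nxt])
        lam = nxt
    return est * W_E[lam]                                     # effect enters unweighted

M_bound = (np.sum(np.abs(W_rho))
           * np.prod([np.abs(U).sum(axis=0).max() for U in W_U])
           * W_E.max())
eps, delta = 0.02, 0.05
s = int(np.ceil(2.0 * M_bound ** 2 / eps ** 2 * np.log(2.0 / delta)))   # up to constants
print(f"negativity bound M = {M_bound:.3f}, samples for +/-{eps}: {s}")

samples = min(s, 100_000)                        # cap the demo run time
est = np.mean([one_trajectory() for _ in range(samples)])
print(f"exact P = {P_exact:.5f}   MC estimate = {est:.5f}")
```

Because each trajectory estimate is bounded by the product of per-step negativities, the printed sample count grows with that product exactly as the bound above describes.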
circuits with operations possessing negativity strictly greater than 1 , such as magic states and non - clifford gates , can still be estimated but now come at a cost . provided the total negativity bound grows at most polynomially in , our protocol provides an efficient estimator . as an example , consider a circuit with an input state given by a product state of qutrit magic states , with , together with stabilizer states in a 100-qutrit random clifford circuit , and estimate the probability of measuring on the first qutrit of the output . the total forward negativity bound of this circuit scales exponentially in and consequently the number of samples required to guarantee a fixed precision scales exponentially in by eq . . the results of our numerical simulations , shown in fig . [ fig : randomcliffords ] , indicate that our estimator does indeed converge with an appropriately chosen number of samples . moreover , while the true precision of in our simulations is often orders of magnitude better than the target precision , there are circuits that come close to saturating the target precision , suggesting that our bound can not be substantially improved without further detailed knowledge of the circuit . between the estimated probability and the true probability of the outcome for the first qutrit as a function of the number of magic states . each data point represents a random 100-qutrit clifford circuit with the non - magic states initialized to the state . the number of samples was chosen using eq . with target precision ( indicated by the solid line ) with 95% confidence ( ) , so the number of samples increases exponentially with ( color scale ) . ] _ _ discussion.__our results highlight the role of the total negativity of a circuit as a resource required for a quantum computer to outperform any classical computer . in particular , any circuit element that is represented nonnegatively does not contribute to the total negativity bound and can be viewed as a `` free '' resource within the algorithm . other circuit elements have an associated cost quantified by their negativity , _ unless _ they appear at the final timestep of the algorithm . this latter observation motivates us to exploit the time - reversal and other symmetries of the born rule , seeking to minimize the total forward ( or reverse ) negativity bound . in particular , one could seek equivalent circuits wherein negative operations can be replaced with nonnegative ones by using negative initial states or measurements via gate teleportation . by choosing the forward or reverse procedure as appropriate , the efficiency can be made insensitive to the negativity of these initial states or measurements . it also motivates us to identify quasiprobability representations in which many of the circuit elements of interest are represented nonnegatively . interesting and relevant examples abound , beyond the well - studied qudit discrete wigner function . discrete wigner functions for qubits ( ) can be defined for which all stabilizer states with real coefficients ( rebits ) and all css - preserving unitaries are nonnegatively represented . the range of quasiprobability representations introduced in ref . represent discrete subgroups of on a single qubit nonnegatively , but have no nonnegative entangling gates ; as such representations can represent certain non - stabilizer single - qubit states nonnegatively , they may be useful for estimation in circuits for gate synthesis . 
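To make the cost of negativity concrete, the snippet below evaluates a discrete Wigner function for a single qutrit using one common convention (phase-point operators built as displaced parity operators), computes the 1-norm negativity of a maximally non-stabilizer "strange" state, and shows how a Hoeffding-style sample count grows when t copies of a negative state are used as inputs. The construction and the choice of state are ours for illustration; conventions differ between papers, so these numbers should not be read as the exact values behind the figure above.

```python
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)              # X|j> = |j+1 mod 3>
Z = np.diag([w ** j for j in range(d)])
P = np.eye(d)[:, [0, 2, 1]]                    # parity: |j> -> |-j mod 3>

def phase_point(a, b):
    Dab = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
    return Dab @ P @ Dab.conj().T              # displaced parity operator

def wigner(rho):
    return np.array([[np.real(np.trace(phase_point(a, b) @ rho)) / d
                      for b in range(d)] for a in range(d)])

def negativity(rho):
    return np.abs(wigner(rho)).sum()           # equals 1 for nonnegative states

stab = np.zeros((d, d)); stab[0, 0] = 1.0                    # stabilizer state |0><0|
psi = np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0)              # 'strange' state
strange = np.outer(psi, psi.conj())

m = negativity(strange)
print("negativity of |0><0|       :", round(negativity(stab), 6))
print("negativity of strange state:", round(m, 6))

eps, delta = 0.01, 0.05
for t in (1, 5, 10, 20):
    M = m ** t                                  # product-state negativity bound
    print(f"t = {t:2d}   samples ~ {2 * M**2 / eps**2 * np.log(2 / delta):.2e}")
```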
there is also flexibility in how a quasiprobability representation is defined . for example , a given quasiprobability representation can be modified to describe more states nonnegatively at the expense of a decreasing set of nonnegative measurements , and vice versa , by exploiting the structure of the dual frames . overcomplete frames provides freedom in the choice of dual frame , and the negativity of unitaries and measurements will depend on this choice . as the dual frames formalism itself captures the relationship between quantum states and measurements , there is also freedom in the definition of the quasiprobability representation of unitaries beyond that given by eq . . finally , again using the freedom in the choice of dual for overcomplete frames , it is possible to switch between frames throughout a single circuit . these freedoms can be used to minimize the total negativity bound of the circuit , allowing more efficient estimators . our procedure can be applied to infinite - dimensional hilbert spaces using any of the range of quasiprobability representations with continuous phase spaces developed in the study of quantum optics , by performing an appropriate discretization as in ref . . in this case , the negativity of distributions is quantified by integrating the absolute value of the distributions over the phase space , and is directly related to the _ volume _ of negativity . we note that the resulting estimator can be applied to quantum optics experiments including states and measurements with negative wigner function , such as photon number fock states , and so may provide additional insight into the classical simulation cost of boson sampling . while there exist means to efficiently estimate the outcome probability of a specific linear optics circuit with fock state input and measurement , our estimation procedure extends these results by providing a general method for estimating outcome probabilities of such linear optical circuits for any input and output together with a bound on the efficiency of this estimation based on the volume of negativity of these states . in addition , our estimation can easily incorporate squeezing , as well as the loss and noise mechanisms common to linear optics experiments . there are two natural ways to extend our results to circuits that include intermediate measurements and conditional operations based on them . first , one could replace the measurement and conditional operation with a coherently - controlled operation , and delay the measurement to the end . we note that such controlled operations can be negative , even if the measurement and classically - controlled operation are both nonnegative . second , our algorithm can be used to directly estimate the probabilities of the intermediate measurements and to sample from them . in this case , the required precision is exponential in the number of intermediate measurements in order to calculate conditional probabilities for subsequent use in the algorithm . thus , in general , both approaches require resources that are exponential in the number of intermediate measurements . finally , our estimation procedure provides insight into the study of operationally meaningful measures of non - classical resources in quantum computation . negativity in a quasiprobability representation has long been used as an indicator of quantum behaviour , but only recently has it been quantified as a resource for quantum computation . 
our results provide a related operational meaning of this resource : as a measure that bounds the efficiency of a classical estimation of probabilities . the authors are grateful to j. emerson , c. ferrie , s. flammia , r. jozsa and a. krishna for helpful discussions . this work is supported by the arc via the centre of excellence in engineered quantum systems ( equs ) project number ce110001013 and by the u.s . army research office through grant w911nf-14 - 1 - 0103 . 99 s. aaronson and d. gottesman , * 70 * , 052328 ( 2004 ) . l. g. valiant , siam j. comput . * 31 * , 1229 ( 2002 ) . b. m. terhal and d. p. divincenzo , * 65 * , 032325 ( 2002 ) . s. d. bartlett , b. c. sanders , s. l. braunstein , and k. nemoto , phys . rev . lett . * 88 * , 097904 ( 2002 ) . v. veitch , n. wiebe , c. ferrie , and j. emerson , new j. phys . * 15 * , 013037 ( 2013 ) . l. gurvits , in _ mathematical foundations of computer science _ , lecture notes in computer science * 3618 * , 447 - 458 ( 2005 ) . c. gardiner and p. zoller , _ quantum noise : a handbook of markovian and non - markovian quantum stochastic methods with applications to quantum optics _ ( springer - verlag , berlin , 3rd ed . , 2004 ) . v. veitch , c. ferrie , d. gross , and j. emerson , new j. phys . * 14 * , 113011 ( 2012 ) . a. mari and j. eisert , phys . rev . lett . * 109 * , 230503 ( 2012 ) . d. stahlke , phys . rev . a * 90 * , 022302 ( 2014 ) . c. ferrie and j. emerson , new j. phys . * 11 * , 063040 ( 2009 ) . c. ferrie , rep . . phys . * 74 * , 116001 ( 2011 ) . v. veitch , s. a. h. mousavian , d. gottesman and j. emerson , new j. phys . * 16 * , 013009 ( 2014 ) . c. cormick , e. f. galvao , d. gottesman , j .- p . paz , and a. o. pittenger , phys . rev . a * 73 * , 012301 ( 2006 ) . d. gross , j. math . phys . * 47 * , 122107 ( 2006 ) . n. delfosse , p. allard guerin , j. bian , and r. raussendorf , phys . rev . x * 5 * , 021003 ( 2015 ) . j. j. wallman and s. d. bartlett , * 85 * , 062121 ( 2012 ) . a. kenfack and k. yczkowski , j. opt . b : quantum semiclass . * 6 * , 396 ( 2004 ) . s. aaronson and a. arkhipov , proc . acm symposium on theory of computing , san jose , ca pp . 333342 ( 2011 ) . |
one of the burning questions in science today is the understanding of dark matter .the quantification of the distribution of dark matter in our universe , at different scales , is of major interest in cosmology roberts75 , rubin2001,salucci2000,deblok2003 , hayashi2007 . at scales of individual galaxies ,the estimation of the density of the gravitational mass of luminous as well as dark matter content of these systems , is the relevant version of this exercise .readily available data on galactic images , i.e. photometric observations from galaxies , can in principle , be astronomically modelled to quantify the gravitational mass density of the luminous matter in the galaxy , gallazi2009 , bell2001 ; such luminous matter is however , only a minor fraction of the total that is responsible for the gravitational field of the galaxy since the major fraction of the galactic gravitational mass is contributed to by dark matter . thus , the gravitational mass density of luminous matter , along with constraints on the gravitational mass density of the dark matter content , if available , can help the learning of the total gravitational mass density .however , the learning of this physical density is difficult in light of the sparse and noisy measurements that are available , coercing the practitioner to resort to undertaking simplifying model assumptions . in this paper, we present a new way of quantifying the relative support to such a model assumption in two independent data sets , by comparing the posterior probabilities of models given the two data sets .model selection is a very common exercise faced by practitioners of different disciplines , and substantial literature exists in this field kassraftery , ibf_2001,chipman_2001,ghoshsamanta_2001,barabari_2004,tony , cassella_2009 . in this context, some advantages of bayesian approaches , over frequentist methods has been reported ibf_2004,robert_2001 .much has been discussed in the literature to deal with the computational challenge of bayes factors ( * ? ? ?* ; * ? ? ?* ; * ? ? ?* to name a few ) .at the same time , methods have been advanced as possible resolutions when faced with the challenge of improper priors on the system variables aitkin , ibf , tony .however , the computation of posterior , intrinsic or fractional bayes factors persist as a challenge , especially in the context of discrete , non - parametric and multimodal inference on a high - dimensional state space linkbarker_2006 .this paper demonstrates a novel methodology for quantifying support in two ( or more ) independent data sets from the same galaxy , for the hypothesis that the system state space admits a certain symmetry - namely , isotropy .assuming isotropy implies that the probability density function in this high - dimensional state space of the system , depends only on the magnitude of the state space vector , and does not depend on the inclination of this vector to a chosen direction .the null is nested within the alternative in this problem . 
at the same time , one of the data sets is small and the other large .also , the dimensionalities of the model parameter vectors sought under the different hypotheses , are also different .indeed in such a case , testing with multiple hypotheses can be performed , such that one hypothesis suggests that the state space that one of the data sets is sampled from , admits this symmetry while the other hypothesis suggests the same for the other data set ( considering the case of 2 available data sets ) .however , in the two cases , little prior information are available on the system parameter vectors. the priors on the model parameters are in fact non - informative ( uniform ) , marked by unknown multiplicative constants . then, as discussed in ibf_1996a , kassraftery , this results in the ratio of the predictive densities of the data sets , under the models , being defined only up to the ratio of these unknown multiplicative constants .thus , the bayes factor is rendered arbitrary .the computation of the bayes factors is then , in principle possible with posterior bayes factors aitkin , intrinsic bayes factors ibf , ibf_1996a or with fractional bayes factors tony .however in this real - life problem that we discuss here , the implementation of bayes factors gets difficult given that the relative support to an undertaken assumption in two distinct data sets is sought by comparing the posterior probability of models that are characterised by different numbers of parameters . secondly , the more complex model in which the simpler model ( the null ) is nested in this situation ,is intractable .the null or the simpler model assumes the model parameter space to bear a simple symmetry , thus rendering posterior computation possible . on the contrary , the alternative ( or the more complex model ) does not constrain the geometry of the model parameter space in any way . thus, the simpler model is nested within the more complex model . however , under the alternative hypothesis , i.e. in lieu of such simplifying assumptions , it is not possible to compute the posterior probability of the model parameter vectors .this situation is in principle caused by the complexity of the system , compounded by the paucity of measurements . 
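the sensitivity of the bayes factor to the arbitrary normalisation of a flat prior can be seen already in a toy nested comparison. the following sketch, a one-dimensional gaussian mean problem unrelated to the galactic model, computes the marginal likelihoods by direct quadrature and shows the factor scaling with the chosen prior half-width.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=20)                # toy data

def log_marginal_alt(L):
    """marginal likelihood under the alternative, with a flat prior on mu over [-L, L]."""
    integrand = lambda mu: np.exp(stats.norm.logpdf(x, loc=mu, scale=1.0).sum())
    val, _ = integrate.quad(integrand, -L, L, points=[x.mean()])
    return np.log(val) - np.log(2 * L)

log_m0 = stats.norm.logpdf(x, loc=0.0, scale=1.0).sum()    # null: mu fixed to 0
for L in (1.0, 10.0, 100.0):
    print(f"prior half-width {L:>6}: bayes factor B_01 = {np.exp(log_m0 - log_marginal_alt(L)):.3f}")
```

the null is favoured more and more strongly simply because the flat prior is made wider, which is exactly the arbitrariness, up to unknown multiplicative constants, referred to above.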
in the test presented here , learning can be performed - relatively easily under the null - and thereafter , support in the data for the assumption about the symmetry of the model parameter space is quantified .lastly , it is acutely challenging in this high - dimensional non - parametric situation , to achieve intrinsic priors ibf with imaginary training data sets .such training data can be generated from the posterior predictive distribution under the null perez_berger , and is subsequently used to train the prior under the more complex model , to implement it in the computation of the marginal density of the real data .however , the computational difficulty involved in the training of the intrinsic prior even under the null , in this high - dimensional setup , is discouragingly daunting .the implementation of real training data is not possible either , given that any method implemented to learn the gravitational mass density - particularly the similarly motivated bayesian methodology discussed below - might be misled by undersampled data that is likely to result if subsamples are taken from the already small observed data sets that typify this application .it was presented in the last paragraph that the generated training data can not be of practical use in real - life applications of the kind we consider here . however , the implementation of real training data , for the purposes of achieving intrinsic priors , is not possible either .such real training data is obtained as a sub - sample of the available data set . given that for the current application , the data sets are typically small to begin with , estimation of the unknown model parameter vectors using a still smaller ( sub)sample of measurements might be risky in terms of convergence of undertaken inference methods .the difficulties with the computation of bayes factors that we have delineated above , motivates the need for a test that allows for computation of the comparative support in an available data set for one hypothesis to another , and is operational even with non - informative priors , without needing to invoke any other than the available data set , in the computation of the posterior probability distribution of the hypothesis given the data .the test is also motivated to work irrespective of the dimensionality of parameter spaces .motivated by this framework , this paper introduces a new nonparametric test of hypothesis that tests for the existence of global symmetries of the phase space that the available data are sampled from .the test works in parameter space , in the context of non - parametric bayesian inference when for the two or more cases of differently sized samples , little and/or differential prior information are available .this new test involves partitioning the space of the model parameter vectors such that one of the partitions - the subspace - contains those parameters for which the posterior probability density given the data , exceeds the maximal posterior density that can be achieved under the null .outside lie the parameter vectors for which the posterior probability falls below this maximal posterior under the null .the model parameter vectors that lie in are those that underlie the support in the data against the null .thus , the probability of the null given a data set is the complement of the posterior probability density integrated over the subspace ( instead of over the whole of the parameter space ) .such an integral can also be viewed as the probability that a model parameter vector is 
in the subspace . in this treatment , the probability of a null given the data that is computationally simple to achieve , in the context of bayesian nonparametric inference in high - dimensional state spaces .the paper is organised as follows .section [ sec : application ] discusses the application , in the context of which the new test is introduced .the formulation of a phase space as an isotropic scalar - valued function of the velocity and location vectors of a galactic particle , is discussed in section [ sec : isotropy ] .the details of the bayesian estimation of the unknown parameters of the galaxy , is discussed in section [ sec : chassis ] .the estimated functions of a real galaxy are presented in section [ sec : bayesianlearning ] .the null hypotheses that we test are motivated in section [ sec : testing ] . in section [ sec : priors ] , we discuss the availability of priors on the unknown functions in the relevant literature and how this affects the testing .thereafter , shortcomings of the bayes factor computations in the context of this application are delineated in section [ sec : shortcomings ] . in section[ sec : outline ] , the outline of the new test is presented ; in particular , section [ sec : implementation ] is devoted to a detailed discussion about the implementation of this new methodology .the test is illustrated on simulated data in section [ sec : simulated ] and on real data in section [ sec : results ] .the paper is concluded with a discourse on the implications of the results , in section [ sec : discussions ] .as discussed above , it is difficult to learn the gravitational mass density of dark+luminous matter in galaxies , where , . is positive definite , implying that in any infinitesimally small volume inside the galaxy , gravitational mass is non - zero ( and positive ) .other physically motivated constraints on will be discussed in the next section in the context of learning this function , given the data at hand .while photometric measurements are more readily available , direct detection of dark matter has hitherto been impossible , implying that measurements that can allow for quantification of the gravitational mass density of dark matter , are not achievable . instead , there are effects of the total ( dark+luminous ) gravitational mass that can be measured , though astronomical measurements that bear signature of such effects are hard to achieve in `` early - type '' galaxies , the observed image of which is typically elliptical in shape . of some such astronomical measurements , noisy and partially missing velocities of individual galactic particleshave been implemented to learn cote2001,genzel , chakrabarty_somak . from this , when the astronomically modelled gravitational mass density of luminous matter , , is subtracted , the density of the gravitational mass of the dark matter content of early - type galaxies can be learnt . in this paradigm of learning ,the data is referred to as partially missing since the noisy measurements of only one component , namely , the component along the line - of - sight that joins the observer to the particle - of the three - dimensional velocity vector of galactic particles , are typically available .we view these measurables as sampled from the phase space of the system , where is the space of all the states that the system can achieve . for a galaxy , is the space of the spatial vectors and velocity vectors of all galactic particles . 
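a minimal sketch of how the test outlined above could be implemented once posterior samples are available; the function, the toy two-dimensional posterior and all variable names are placeholders rather than the implementation used later in the paper.

```python
import numpy as np

def probability_of_null(log_post_samples, log_post_max_under_null):
    """approximate Pr(null | data) as one minus the fraction of posterior samples whose
    (log) posterior density exceeds the maximal (log) posterior density attainable under
    the null; the samples are assumed to be drawn from the full, unconstrained posterior."""
    log_post_samples = np.asarray(log_post_samples)
    return 1.0 - np.mean(log_post_samples > log_post_max_under_null)

# toy usage: a 2-d gaussian posterior where the null fixes the second coordinate to zero
rng = np.random.default_rng(2)
samples = rng.normal(loc=[1.0, 0.4], scale=0.3, size=(50_000, 2))
log_post = lambda theta: -0.5 * np.sum(((theta - [1.0, 0.4]) / 0.3) ** 2, axis=-1)
null_grid = np.column_stack([np.linspace(0.0, 2.0, 2001), np.zeros(2001)])
log_max_null = np.max(log_post(null_grid))
print(probability_of_null(log_post(samples), log_max_null))
```

in practice the maximal posterior density under the null would itself be obtained from a constrained run of the sampler, rather than on a grid as in this toy example.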
on other occasions ,the dispersion of the line - of - sight component of the velocity vector comprises the measurement . in either case, such kinematic data is expected to track the gravitational field of the galaxy in which the sampled particles play .we ask the question that if the available data include velocity measurements of particles in distinct samples that live in mutually disjoint volumes of , what are the implications for the gravitational mass density estimated using such data sets ?one major implication is that the estimate of obtained using one data set will be in general , different from that obtained using another data set . of course , for the same system , distinct are impossible - in fact , such a fallacious result , if achieved , can be explained as due to the fact that the used data sets have been drawn from mutually insulated volumes of the galactic phase space , the of which do not concur .such is possible , if the galactic phase space is characterised by disjoint volumes , motions in which do not communicate with each other .this is a viable scenario in non - linear dynamics even for systems with moderate complexity ; it is possible that the two types of galactic particles , the data of which are measured , live in distinct basins of attraction that characterise the galactic phase space thomsonstewart . in particular , we configure the question of unequal estimates of gravitational mass density functions , using the available data sets , to the context of the real galaxy ngc 3379 . for this system ,multiple data sets are measured for two distinct types of galactic particles pns , bergond .the 3-d spatial location vector of a particle resident in a galaxy is written as , where only the coordinates of the image of the particle - and - can be measured . also , the 3-d velocity vector of a particle is , with only the component being a measurable , i.e. we can only measure the speed with which the particle is approaching us ( the observer ) or receding from us .the galactic phase space is the space of and of all galactic particles .thus , . if we are convinced that motion tracks the gravitational field due to a given gravitational mass density function , then we can write down ] and $ ] , . here are experimentally chosen constants and the prior on the phase space density is uniform in [ 0,1 ] since is normalised to be 1 for the most bound orbit , i.e. 
for the maximum value of .the priors are used along with the likelihood function ( defined above in equation [ eqn : likeli ] ) in bayes rule to write the posterior probability density + above , the posterior density is referred to as to distinguish it from the posterior probability that we actually sample from , after convolving with the distribution of error in the measurements of , .the measurement errors in and are stated by astronomers to be small enough to be ignored .inputs from astronomers involved in the observed data set are considered when modelling the error distribution .typically , the error distribution is considered gaussian with zero mean and variance that is suggested by the relevant observational astronomer(s ) .thus , + + then and are learnt by sampling from this posterior , using adaptive random - walk metropolis - hastings haario .some of the approximations that underlie sampling from this posterior are now discussed , followed by a brief discussion of the inference .this is the approach used by and + who assumed an isotropic phase space .now , the undertaken model assumption of phase space isotropy also allows for the identification of , with the spherical radius , where , the radial location of the -th particle from the system centre . in other words ,the assumption of an isotropic phase space is inclusive of a spherical spatial geometry . then identifying with the gravitational potential energy ,the connection between and ( poisson equation ) is recalled as in this geometry .then the -th -bin discussed above is synonymous to the -th radial bin , .the unknown functions abide by the physically motivated constraints that , and . the last constraint is intuitively motivated as valid in a gravitationally bound spherical system that we model the galaxy to be .we view the galaxy as being built by stacking spherical layers on top of each other .then owing to gravity being a force that attracts mass towards the centre , the compactness in the packaging of mass is higher near the centre than away from the centre . in other words, increases as decreases , in general bt . following the trend of the various phase space discussed in astronomical literature bt , is also treated as a monotonically increasing function of energy in this implementation of the methodology . in the adopted discrete model , a simultaneous learning of discretised versions of two univariate functions and is attempted , i.e. the target is to learn the vector and the vector as proxies for the unknown functions .given that the gravitational mass density is non - negative and monotonically non - increasing function of , .again , given that , . in our implementation , to ensure monotonic decline in the gravitational mass density function , it is the difference between the gravitational mass densities in the -th and -th radial bins that is proposed , as , where is the folded normal distribution folded with mean and variance , , .the motivation behind this choice of the proposal density is that over the support , it is a relatively easy density to sample from , while satisfying the requirement that in general , when . here , the current difference between the gravitational mass density values in the and -th radial bins is .the variance is the empirical variance of , computed using values of this difference variable from step number to , where is the current step number and is chosen experimentally to be post - burnin haario , . 
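before the boundary conventions are stated below, the following is a schematic version of the update just described: the differences between adjacent bin values are proposed from folded normals, so that the proposed vector stays non-negative and non-increasing, and the proposal scales are adapted from the past of the chain. the variable names, the acceptance step and the adaptation schedule are simplifications, not the chassis implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def propose_monotone(rho, scales):
    """draw each difference rho[i] - rho[i+1] (with rho beyond the last bin taken as 0)
    from a folded normal centred at its current value, then rebuild the vector, which is
    then non-negative and non-increasing by construction."""
    diffs = -np.diff(np.append(rho, 0.0))
    new_diffs = np.abs(rng.normal(loc=diffs, scale=scales))
    return np.cumsum(new_diffs[::-1])[::-1]

def adaptive_mh(log_post, rho0, n_steps, t0=500):
    rho, chain = rho0.copy(), [rho0.copy()]
    scales = np.full(rho0.size, 0.1)
    for step in range(n_steps):
        prop = propose_monotone(rho, scales)
        # note: the folded-normal proposal is not symmetric, so a faithful sampler would
        # include the proposal-density ratio in this acceptance probability
        if np.log(rng.uniform()) < log_post(prop) - log_post(rho):
            rho = prop
        chain.append(rho.copy())
        if step > t0:   # adapt the scales from the empirical spread of past differences
            past = np.asarray(chain[t0:])
            scales = (-np.diff(np.column_stack([past, np.zeros(len(past))]), axis=1)).std(axis=0) + 1e-6
    return np.asarray(chain)

# toy usage with a placeholder log-posterior pulling towards a given decreasing profile
target = np.linspace(1.0, 0.1, 10)
chain = adaptive_mh(lambda r: -50.0 * np.sum((r - target) ** 2),
                    rho0=np.linspace(2.0, 0.2, 10), n_steps=3000)
print(chain[-1])
```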
here and are defined as 0 .then as is varied from to 1 , the proposed -th component of the unknown gravitational mass density vector is . is updated similarly , while following an imposed constraint that is monotonically non - decreasing with energy , .however , unlike in the case of the updating of the gravitational mass density value , a monotonically non - decreasing phase space with energy is not motivated by any physically justifiable constraints , though astronomical literature often suggests phase space density functions that typically increase with energy bt .thus , we realise that when the data comprises even partially missing information on the state space coordinates of a system , this methodology allows for the formulation of the likelihood of the parameters describing the sought system function , in terms of the projection of the of the system state space onto the space of the observables , as long as the unknown function can be embedded within the definition of this .such an approach is useful in contexts similar to what we work with here , for applications other than the current one .a notable feature of this methodology is that even when the data are missing , i.e. sampled from a sub - space of the system state space , it is possible to write the likelihood function as a product of the values of the sub - space density , at each data point , where the density of this sub - space is obtained by integrating the unobserved variables out of the state space .ngc 3379 is one of the few elliptical galaxies , for which kinematic information is available for individual members of two different populations of galactic particles - referred to as planetary nebulae ( pne ) and globular clusters ( gcs ) - over an extensive radial range spanning the outer parts of the galaxy .the data used in the work include measurements of , and of 164 pne reported by pns and of 29 gcs by bergond .we refer to the pne data set as and the gc data set as , with respective sample sizes of =164 and =29 .the learning of the model parameter vectors and is performed in a high - dimensional state space ( dimensionality = ) .the traces of the posterior given either data are presented in figure [ fig : hist_real ] .the marginal posterior densities given either data are shown for one component of the learnt vector , namely ( figure [ fig : marg_real ] ) .the marginal densities are noted to be markedly multimodal . in figure [fig : fig1 ] , we present the vectors and , learnt from using the -th data , .this estimation is performed using the aforementioned bayesian nonparametric methodology , under the model assumption of an isotropic phase space , i.e. a phase space density that is expressed as and approximated in the discretised model of chassis as the vector , ( the -th component of which is the value of the phase space density in the -th energy - bin ) and the vector , ( the -th component of which is the value of the phase space density in the -th radial - bin ) .the learnt 95 hpds are represented as error bars and the modal values are shown as open circles . .table displaying seeds that individual chains run with data and are started with .the initial choice of the gravitational mass density function is one that is sometimes used in astrophysical literature , .the starting phase space density is chosen to be either of the form or . 
here the parameters .the chains run with pne data are assigned names characterised by the prefix `` pne - run '' , while runs performed with the gc data are labelled with prefix `` gc - run '' . [ cols="^,^,^,^,^,^",options="header " , ] comparing the computed and across the chains implies that the assumption of isotropy is more likely to be invalid for the phase space from which the pne data are drawn than from which the gc data are drawn . basically , support in real data for the assumption of isotropyis distinct from that in .this implies that the , where phase space that is sampled from is and .however , both data sets carry information on the phase space coordinates of the same galaxy , i.e. both data sets are sampled from that describe the phase space structure of all or some volume inside the same galactic phase space .thus , where is phase space defined in volume and is phase space defined in volume . in terms of the phase space structure of this real galaxy ngc 3379 , we can then conclude that the phase space of the system is marked by at least two distinct volumes , motions in which do not communicate with each other , leading to distinct orbital distributions being set up in these two volumes , which in turn manifests into distinct for these subspaces of the galactic phase space .the pne and gc samples are drawn from such distinct .of course , such an interpretation would hold up if we can rule out extraneous reasons that might be invoked to explain the differential support in and to the assumption of phase space isotropy .such extraneous factors are systematically dealt with in section [ sec : discussions ] .it merits mention that our result that also suggests that the phase space density that the observed gcs in this galaxy live in , is nearly isotropic .the motivating idea is that if different data sets are sampled from different volumes of the galactic phase space , where there is no communication amongst these volumes , the gravitational mass density estimates obtained using these data sets will be different .none of these individual estimates will however tell us of the gravitational mass density function of the whole galaxy , in general .the worrying implication of this is that interpreting one of these estimates of as the galactic estimate , can be completely erroneous .the estimate achieved given one data set will then reveal no more about the entire galaxy than the very isolated phase space volume from which these data on particle motions are sampled. ideally speaking , the gravitational mass density of the galaxy , at any can be is approximated as + where is the estimate obtained using the -th data set , , where there are available data sets that have been drawn from mutually insulated volumes or sub - spaces within the galactic phase space , namely from .the more the number of such data sets available , i.e. bigger is , the better is the approximation suggested above .however , in this framework we are always running the risk of misidentifying the estimate of a property of a subspace of the galaxy as that of the galactic property . here, the -th data set where is the that describes sub - space . then the statement that and are `` mutually insulated '' implies that motions in do not cross over into and vice versa , resulting in , .thus , are not identically distributed . 
hence collating all data sets , as a single input into any method that attempts learning the galactic gravitational mass density by invoking phase space densities , will not work , if the method demands the data to be .( all methods of learning are underlined by the need to invoke the phase space ) .still , if we are in possession of knowledge of the correlation between the and , we could use the collated data set in such a method to learn the galactic mass density .however , as there is absolutely no measured , simulated or theoretical information available on the correlation structure between mutually insulated sub - volumes inside galaxies , such a modelling strategy is untenable .indeed , the development of any such learning strategy , can imply all available data sets jointly , but will be highly sensitive to the non - linear dynamical modelling of the phase space of the particular galaxy in question . at this stagewe recall that for the majority of galaxies , =1 and is at most 2 for a few galaxies ; examples of these systems include ngc 3379 pns , bergond , cena ( woodley chakrabarty , in preparation ) .thus , we can put a lower bound on the total gravitational mass content of the galaxy , as we now present for ngc 3379 .ngc 3379 is advanced as a dark matter rich galaxy , with the gravitational mass inside a radius of about 20 kpc to be at least as high as about 4 to 10 , where m denotes the mass of the sun and the astronomical unit of length , kiloparsec , is abbreviated as kpc .in the above test , a high support in the gc sample towards an isotropic phase space , along with a moderate support in the sampled pne for the same assumption , indicate that the two samples are drawn from two distinct phase space density functions . the expectation that the implementation of the pne and gc data sets will lead to concurring gravitational mass density estimatesis foreshadowed by the assumption that both data sets are sampled from the same - namely , the galactic - phase space density .the apparent motivation behind this assumption is that since both samples live in the galactic phase space , they are expected to be sampled from the same galactic phase space density , at the galactic gravitational potential . however , such does not necessarily follow if - for example - is a non - analytic function : then , if the gc data are sampled from the density and pne data , where , , then the statement that both the observed samples are drawn from equal phase space densities is erroneous .if this assumption is erroneous , i.e. if observed data and are sampled from unequal phase space , 1 .firstly it implies that the phase portrait of the galaxy ngc 3379 is split into at least two sub - spaces such that the observed gcs live in a sub - space and the observed pne live in a distinct sub - space , .2 . 
secondly that the phase space densities that describe and are unequal .qualitatively we understand that if the galactic phase space is split into isolated volumes , such that the motions in these volumes do not mix and are therefore distinctly distributed in general , the phase space densities of these volumes would be unequal .this is synonymous to saying that is marked by at least two distinct basins of attraction and the two observed samples reside in such distinct basins .thus , a split will readily explain lack of consistency in the estimate of support in the two data sets for the null that the phase space density function that these two data sets are drawn from are isotropic .however , the fundamental question is really about the inverse of this statement .does differential support in and to isotropic pase space necessarily imply a split ?indeed it does , since differential support in and towards isotropy of the that describes the respective native phase space and , implies that the distribution of the phase space vector in the sub - spaces and are distinct .this in turn implies that the phase portrait of this galaxy manifests at least two volumes , the motions in which are isolated from each other . such separation of the motions is possible if these distinct sub - spaces that the two datasets reside in , are separated by separatrices thomsonstewart into isolated volumes , motions across which do not mix . as for a physical reason for the phase space distribution of the pne to be less isotropic than the gc population, we can only speculate at the level of this paper .the gc population being an older component of a galaxy ( than the pne population ) , might have had longer time to equilibrate towards an isotropic distribution .also , pne being end states of stars , the pne population is likely to reside in a flatter component inside the galaxy , than the gc population .galactic phase spaces can be split given that a galaxy is expectedly a complex system , built of multiple components with independent evolutionary histories and distinct dynamical timescales .as an example , at least in the neighbourhood of the sun , the phase space structure of the milky way is highly multi - modal and the ensuing dynamics is highly non - linear , marked by significant chaoticity .the standard causes for the splitting of include the development of basins of attraction leading to attractors , generated in a multistable galactic gravitational potential . basins of attraction could also be triggered around chaotic attractors , which in turn could be due to resonance interaction with external perturbers or due to merging events in the evolutionary history of the galaxy .one worry that astronomers have expressed in the literature about the galaxy ngc 3379 is that the spatial geometry of the system is triaxial and not spherical . for us ,the relevant question to ask is if there is support in the data for the consideration of the gravitational mass density to depend on the spherical radius alone . in our work ,the fact that the methodology assumes the gravitational mass density to be dependent on := is a manifestation of the broader assumption of phase space isotropy ( see section [ sec : isotropy ] ) .thus , our test for phase space isotropy includes testing for the assumption that the gravitational mass density of the galaxy ngc 3379 bears a dependence on the spatial coordinates , only via , . 
in other words , we have already tested for the assumption that the gravitational mass density depends on the components of the spatial vector , via the spherical radius .in contrast to the pne and gc samples observed in this galaxy , if two observed sets of galactic particles can be inferred to have been drawn from the same phase space density , we will expect consistency in the gravitational matter density that is recovered by using such data sets in a mass determination formalism . at the end of the discussion presented above, we will naturally want to know what the true gravitational mass density function of ngc 3379 .however , if distinct density estimates are available at a given radius , from independent data sets as , then the lower limit on the galactic gravitational mass density at is .the above results and arguments suggest that it is inherently risky to refer to the gravitational mass density recovered using an observed particle sample - and the gravitational potential computed therefrom - as the gravitational potential of the galaxy .we have demonstrated this with the example of ngc 3379 and shown that inconsistencies in learnt using distinct observed samples , can not be attributed to any other factor except that these observed samples are drawn from distinct and insular sub - spaces within the galactic phase space , and/or the lack of time - independence in the gravitational mass density or phase space density function .i am thankful to dr .john aston for his kind comments .i acknowledge the support of a warwick centre for analytical sciences fellowship . | in lieu of direct detection of dark matter , estimation of the distribution of the gravitational mass in distant galaxies is of crucial importance in astrophysics . typically , such estimation is performed using small samples of noisy , partially missing measurements - only some of the three components of the velocity and location vectors of individual particles that live in the galaxy are measurable . such limitations of the available data in turn demands that simplifying model assumptions be undertaken . thus , assuming that the phase space of a galaxy manifests simple symmetries - such as isotropy - allows for the learning of the density of the gravitational mass in galaxies . this is equivalent to assuming that the phase space from which the velocity and location vectors of galactic particles are sampled from , is an isotropic function of these vectors . we present a new non - parametric test of hypothesis that tests for relative support in two or more measured data sets of disparate sizes , for the undertaken model assumption that a given set of galactic particle data is sampled from an isotropic phase space . this test is designed to work in the context of bayesian non - parametric , multimodal inference in a high - dimensional state space . in fact , the different models that are being compared are characterised by differential dimensionalities of the model parameter vectors . in addition , there is little prior information available about the unknown parameters , suggesting uninformative priors on the parameters in the different models . the problem of model parameter vectors of distinct dimensionalities and the difficulties of computing bayes factors in this context , are circumvented in this test . 
the test works by identifying the subspace ( of the system phase space ) that is populated by those model parameter vectors , the posterior probability density of which exceed the maximal posterior density achieved under the null . the complement of the probability of the null given a data set is then the integral of the posterior probability density of the model parameters over this sub - space . this integral is the probability that the model parameter lives in this subspace ; in implementational terms this probability is approximated as the fraction of the model parameters that live in this subspace . we illustrate applications of this test with two independent particle data sets in simulated as well as in a real galactic system . the dynamical implications of the results of application to the real galaxy is indicated to be the residence of the observed particle samples in disjoint volumes of the galactic phase space . this result is used to suggest the serious risk borne in attempts at learning of gravitational mass density of galaxies , using particle data . , |
this paper is devoted to a study of families of equilibrium measures on the real line in the polynomial external fields .we consider these measures as functions of a parameter representing either the total mass , time or temperature .these families are regarded as models of many physical processes , which motivates their intense study in the context of the mathematical physics .but equilibrium problems for the logarithmic potential play an important role also in analysis and approximation theory ; in particular , they provide a general method used in the theory of orthogonal polynomials . in this sense ,the subject is essentially `` bilingual '' and has so many ramifications that we opted for writing an extended introduction , instead of a dull enumeration of our results .one of our main goals is a formal investigation of the dependence of the key parameters of the equilibrium measure from the parameter , and in particular , their singularities as functions of . in thermodynamical terms , these singularities are closely related to phase transitions , and we start the introduction with the discussion of this problem in the context of statistical mechanics and thermodynamics . it does not mean that we actually intend to interpret our results in any nonstandard way .we want , on the contrary , to make a short review of a few existing standard interpretations , which could be helpful during `` translations '' in our bilingual area . then we formally introduce basic facts related to the equilibrium measure and mention briefly some applications in analysis and approximation theory , returning at the end to the topic of the coulomb gas and random matrices . in the course of these discussions we describe in general terms the main results of the paper and its structure .determinantal random point processes ( or self - avoiding point fields ) are pervasive in statistical mechanics , probability theory , combinatorics , and many other branches of mathematics . among the examples of such processes we can mention 2-d fermionic systems or coulomb gases , plancherel measures on partitions , two - dimensional random growth models , random non - intersecting paths , totally - asymetric exclusion processes ( tasep ) , quantum hall models , and random matrix models , to mention a few .many of these models lead to the so - called _ orthogonal polynomial ensembles _ , an important subclass of determinantal random point processes .the large scale behavior of such ensembles is described in terms of the asymptotics of the underlying family of orthogonal polynomials , and in the last instance , by the related equilibrium measure solving an extremal problem from the potential theory .one of the earliest and probably best known examples of orthogonal polynomial ensembles is the joint distribution of the eigenvalues of a random hermitian matrix drawn from a unitary ensemble ( see ) .more precisely , we endow the set of hermitian matrices with the joint probability distribution where is a given function with enough increase at to guarantee the convergence of the integral in the definition of the normalizing constant then induces a ( joint ) probability distribution on the eigenvalues of these matrices , with the density and with the corresponding _ partition function _ this result can be traced back to a classical theorem by h. weyl , see and also . 
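a quick numerical illustration of the statement above in the gaussian (quadratic) case: sampled eigenvalues of a random hermitian matrix pile up on the semicircle law, the corresponding equilibrium measure. the scaling below is the standard gue convention and need not coincide with the normalisation adopted in this paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 800
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2                        # gaussian hermitian (gue-type) matrix
eigs = np.linalg.eigvalsh(H) / np.sqrt(N)       # rescaled spectrum, filling [-2, 2]

hist, edges = np.histogram(eigs, bins=40, range=(-2.0, 2.0), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(np.clip(4.0 - centers**2, 0.0, None)) / (2 * np.pi)
print("largest deviation from the semicircle density:", np.abs(hist - semicircle).max())
```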
notice that is invariant under unitary transformations of the hermitian matrices , while measure defines the equilibrium statistical mechanics of a dyson gas of particles with positions on in the potential , where they interact by the repulsive electrostatic potential of the plane ( the logarithmic potential ) . on the other hand , is also the joint probability density function of an -point orthogonal polynomial ensemble with weight ( see e.g. ) .free energy _ of this matrix model is defined as regardless its interpretation , a particularly important problem is to analyze its asymptotic behavior in the thermodynamic limit , i.e. as . a rather straightforward fact , the existence of the limit ( infinite volume free energy ) has been established under very general conditions on , see e.g. and section [ subsect : equilibrium_analytic ] below . an important property of the infinite volume free energy is its analyticity with respect to the parameters of the problem .the values of the parameters at which the free energy is not analytic are the _ critical points _ ; curves of discontinuity of some derivatives of the free energy connecting the critical points divide the parameter space into different phases of the model . thus , critical points are points of phase transition , and they are going to be a center of our case study .the observation that the distribution can be regarded as the gibbs ensemble on the weyl chamber with hamiltonian allows to foretell the fundamental fact that the value of is given by the solution of a minimization problem for the weighted logarithmic energy .the corresponding minimizer is the _ equilibrium measure _ associated to the problem . ] .this measure , which is a one - dimensional distribution on , is also a model for the limit distribution of the eigenvalues in .indeed , the multidimensional probability distribution is concentrated for large near a single point , which is actually the minimizer for the corresponding discrete energy of an -point distribution . in other words , for large the measure is close to .as , the discrete equilibrium measure converges to the continuous one , so that the thermodynamic limits are essentially described by the ( continuous ) equilibrium measures ( see section [ sec : equilibriuminanalysis ] ) .when is real - analytic , the support of such a measure ( or the asymptotic spectrum of the corresponding unitary ensemble ) is comprised of a finite number of disjoint intervals ( or `` cuts '' ) , and the number of these intervals is a fundamental parameter .for instance , the free energy has a full asymptotic expansion in powers of ( `` topological large expansion '' ) if and only if this support is a single interval ; otherwise , oscillatory terms are present ( this was observed in , and studied systematically in ) .the particularly interesting phenomena occur precisely in the neighborhood of the values of the parameters at which the number of the connected components of the support of the equilibrium measure changes .any change in the number of cuts is a phase transition in the sense specified above , but there are also phase transitions of other kinds ( not related to a change in the number of cuts ) : see sections [ subsec : classification ] and [ sec : phasetrans ] for details .we specialize our analysis to the polynomial potential .it is convenient to write in the form where is a polynomial of an even degree and positive leading coefficient .this case is of great interest , see e.g. 
, to cite a few references .a common situation is when in such a way that although the coefficients of ( `` coupling constants '' of the model ) play a role of the variables in the problem , the parameter stands clearly out , as it was already mentioned above .recall that it can be regarded either as a temperature ( from the point of view of statistical mechanics ) or time ( from the perspective of a dynamical system ) , and will correspond to the total mass of the equilibrium measure in the external field on .one of the goals of this paper is the description of the evolution of the limiting spectrum of the unitary ensemble for as time ( temperature ) grows from zero to infinity , paying special attention to the mechanisms underlying the increase ( `` birth of a cut '' ) and decrease ( `` fusion of two cuts '' or `` closure of a gap '' ) in the number of its connected components .equivalently , we study one - parametric families of equilibrium measures on in a polynomial external field with the total mass of the measure as the parameter .this problem has been addressed in several publications before , here we only mention a few of them .for instance , studied the fusion of two cuts for the quartic potential , proving the phase transition of the third order for the free energy ( in other words , that ) .this work found continuation in , where in particular the birth of the cut in the same model was observed ( but not proved rigorously ) .it turns out actually that the third order phase transition of the infinite volume free energy is inherent to all possible transitions existing in this model .the evolution of the support of the equilibrium measure in terms of the parameters of the polynomial potential has been studied also in , where a connection with some pde s and integrable systems have been exploited . however , despite of this intense activity , the picture is not complete ; even the monographic chapter contains imprecisions .open questions exist actually for the first non - trivial case of a quartic field , and this paper adds a number of new details to this particular picture ; some of them seem to be significant .for instance , we present a simple characterization of the case when the equilibrium measure has one cut _ for all _ values of the parameter , or when a singularity of type iii ( see the definition in section [ sec : revisited ] ) can occur. we also study in more detail the system of differential equations governing the dynamics of the endpoints of the support of the equilibrium measure .in particular , we analyze the behavior of the infinite volume free energy and of other magnitudes of the system near the singular points , revealing some interesting universal properties . 
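the order of such a transition can be read off numerically from the free energy alone: the sketch below takes a synthetic free-energy-like curve with a built-in third-order singularity (a placeholder, not the quartic-model free energy) and checks which finite-difference derivative develops a jump at the critical point.

```python
import numpy as np

t_c = 1.0
t = np.linspace(0.5, 1.5, 2001)
dt = t[1] - t[0]
# placeholder curve: continuous first and second derivatives at t_c, jump of 6 in the third
F = 0.3 * t**2 + np.where(t > t_c, (t - t_c) ** 3, 0.0)

d = F
for order in (1, 2, 3):
    d = np.gradient(d, dt)
    left = d[t < t_c - 5 * dt][-1]
    right = d[t > t_c + 5 * dt][0]
    print(f"derivative {order}: jump across t_c ~ {right - left:+.4f}")
```

only the third derivative shows an appreciable jump, which is the numerical signature of a third-order phase transition.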
finally , but not less important, we show that all the known and new results related to the outlined problem can be systematically derived from two basic facts in the potential theory .one of them is a representation for the cauchy transform of the equilibrium measure .this representation is a corollary of the fact that on the real line any equilibrium measure in an analytic field is a critical measure , which allows us to apply a variational technique presented in , and systematically in .another fact is a buyarov rakhmanov differentiation formula for the equilibrium measure with respect to its total mass and some of its immediate consequences , which we complement with a unified treatment of the differentiation formulas with respect to any coupling constant , a result which to our knowledge is new .we provide further details in section [ sec : critical ] . in this sectionwe introduce basic notation and mention a number of fundamental facts on the equilibrium measures on the real line , necessary for the rest of the exposition . for more details the reader can consult the original papers and the monograph . for a finite borel measure with compact support on the plane we can define its _ logarithmic potential _ and its _ logarithmic energy _ , = -\iint \log |z - x|\,d\sigma ( x)d\sigma ( z).\ ] ]suppose further that a real - valued function , called the _ external field _, is defined on .then we introduce the _ total _ ( or `` chemical '' ) _ potential _, ( defined at least where is ) and the _ total energy _ , = i [ \sigma ] + 2\int \varphi(z ) \ , dz,\ ] ] respectively .this definition makes sense for a very wide class of functions , although for the purpose of this paper it is sufficient to consider basically real - analytic , actually polynomial , external fields on the real axis . as usual ,we assume also that condition that is automatically satisfied for any real non - constant polynomial of even degree and positive leading coefficient . then for each there exists a unique measure with compact suport , minimizing the total energy = \min_{\sigma \in { { \mathcal m}}_t } i_{\varphi } [ \sigma]\ ] ] in the class of positive borel measures compactly supported on and with total mass ( that is , ) .moreover , is completely determined by the equilibrium condition satisfied by the total potential where the _ equilibrium _ or _ extremal constant _ can be written as -\int \varphi \ , d\lambda_t \right).\ ] ] when is a compact set on and the external field is the corresponding energy minimizer in the class of all probability measures , denoted by , is known as the _ robin measure _ of , its energy is the _ robin constant _ of , ,\ ] ] and the logarithmic capacity of is given by .measure is characterized by the the fact that together with the equilibrium condition for a general external field the main technical problem when finding the equilibrium measure is that its support is not known a priori and has to be found from the equilibrium conditions .once the support is determined , the measure itself can be recovered from the equations presented by the equality part in , which is an integral equation with the logarithmic kernel .after differentiation it is reduced to a singular integral equation with a cauchy kernel whose solution ( on reasonable sets ) has an explicit representation .hence , solving the support problem is the key , and it is essentially more difficult . 
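since the support is not known beforehand, one crude but direct route is to discretise the weighted energy on a grid and minimise it over non-negative node masses with the total mass fixed, reading the support off from the nodes that end up carrying mass. the quartic field, the grid and the thresholds below are placeholders; this is only an illustration of the variational problem, not of the machinery developed in this paper.

```python
import numpy as np
from scipy.optimize import minimize

phi = lambda x: x**4 - 2 * x**2                 # placeholder polynomial external field
t_mass = 0.5
x = np.linspace(-2.5, 2.5, 120)
K = -np.log(np.abs(x[:, None] - x[None, :]) + np.eye(x.size))   # log kernel, zero diagonal

energy = lambda w: w @ K @ w + 2 * w @ phi(x)
res = minimize(energy, np.full(x.size, t_mass / x.size),
               jac=lambda w: 2 * K @ w + 2 * phi(x),
               bounds=[(0, None)] * x.size,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - t_mass}])

idx = np.flatnonzero(res.x > 1e-4 * res.x.max())
for cut in np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1):
    print(f"cut from {x[cut[0]]:+.2f} to {x[cut[-1]]:+.2f}")
```

for a double-well field and small total mass the printed support typically consists of two short cuts around the minima, which is the multi-cut situation the following sections treat analytically.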
in the one cut case ,its endpoints are determined by relatively simple equations , but in general , straightforward solutions are not available . for a polynomial ( actually , real analytic ) fieldthe support is a union of a finite number of intervals .equations may be written for the endpoints of those intervals ( and we better know the number of intervals in advance ) , but these equations are not easy to deal with .for instance , in the case of the polynomial field they may be interpreted as systems of equations on periods of an abelian differential on a hyperelliptic riemann surface ( see e.g. remark [ remark : riemann ] ) , but such systems are usually far from being simple . in this context, our approach is based on the dynamics of the family .as we have mentioned above , one of our goals is to describe the dynamics of the family of supports as the mass ( which is also `` time '' , or according to , the `` temperature '' ) changes from to .the detailed discussion starts in sections [ sec : critical ] and [ sec : polyn ] below .observe finally that a study of the family with total mass as a parameter in a fixed external field may be reduced by a simple connecting formula to the family of unit equilibrium measures with respect to family of fields .we have in some cases such a reduction is reasonable , but more often it brings more problems than benefits .in particular , we believe that the total mass as a parameter has particular advantages in the problem under consideration .the significant progress in the theory of rational approximations of analytic functions ( pad - type approximants ) and in the related theory of orthogonal polynomials in the 1980 s is due in part to the logarithmic potential method ; see the original papers and the monographs , although the list is not complete . a novel important ingredient of the techniques developed during that period was precisely the introduction of the weighted equilibrium measure ( among other types of equilibria ) .for instance , a typical application of the notion of equilibrium measures for orthogonal polynomials is the following fundamental result from ( see also ) , presented here in a simplified form : let be a continuous external field , , and let polynomial be defined by this polynomial is also characterized by the extremal property now , if in such a way that , then where is the weak- convergence . from the point of view of the spectral theory ,zeros of are eigenvalues of the truncated jacobi matrices associated with the weight , and shows that can be naturally interpreted as the limit spectrum of the infinite jacobi matrix , associated with the scaling . for details related to applications of equilibrium measures in spectral theory of discrete sturm - liouville operatorssee . another large circle of applications of ( continuous ) equilibrium measures is related to the discrete analogue of this notion .let denote the set of all point mass measures on of total mass .the corresponding discrete energies are defined by := \sum_{i\neq j } \log\frac{1}{|\zeta_i-\zeta_j| } , \quad e_\varphi[\mu]:= e[\mu]+2\int \varphi\ , d\mu\ ] ] ( cf . ) .let be a minimizer ( not necessarily unique ) for ] ; this readily yields the bound . naturally , both polynomials and , as well as the endpoints of the support , are functions of the parameter , fact that we usually omit from notation for the sake of brevity . 
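a quick check of the zero-distribution statement above in the simplest case: the zeros of the hermite polynomials (extremal for the gaussian weight), once rescaled, fill out a semicircle, which is the equilibrium measure of the quadratic field in the appropriate normalisation. the scaling factor below is the standard one for the weight e^{-x^2} and may differ from the normalisation of this section.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

n = 400
zeros, _ = hermgauss(n)              # gauss-hermite nodes, i.e. zeros of the hermite polynomial H_n
scaled = zeros / np.sqrt(2 * n)      # rescaled zeros accumulate on [-1, 1]

hist, edges = np.histogram(scaled, bins=25, range=(-1, 1), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = (2 / np.pi) * np.sqrt(1 - centers**2)
print("largest deviation of the zero counting measure from the semicircle:",
      np.abs(hist - semicircle).max())
```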
in what followswe understand by and the holomorphic branches of these functions in determined by the condition and for , as well as denotes the boundary value of on from the upper half - plane .by , the factorization plays a fundamental role in the sequel .in particular , the -representation of the the equilibrium measure in may be equivalently written in terms of the following -representation : in a one - cut case ( ) the equalities and render a system of algebraic equations on the zeros of and , matching the number of unknowns .taking into account additionally that for equation is algebraic in s , this shows that in this ( and basically , only in this ) case all parameters of the equilibrium measure are algebraic functions of the coefficients of the external field .we conclude this section with two trivial but useful identities : and ( see ) .the -representation introduced above has several advantages : it allows us to recast the discussion on the evolution of the support and of the phase transitions for the free energy in terms of the zeros of and .this will be carried out in the next two subsections .a more general analysis of the variation of as a function of all coupling constants is done in subsection [ subsec : variations ] .our first task is to rewrite the differentiation formulas of theorem [ lem : buyarov / rakhmanov:99 ] in terms of the zeros of and , which can be regarded as phase coordinates of a dynamical system . recall that if has the form , then its robin measure ( see the definition in section [ subsect : equilibrium_analytic ] ) is while for the complex green function with pole at infinity we have here is a real monic polynomial of degree ; for , , while for it is uniquely determined by conditions it follows in particular that for , , \quad j=1 , \dots , p-1.\ ] ] in order to denote the differentiation with respect to the total mass we will use either or the dot over the function , indistinctly . recall that are the zeros of , and denote by the zeros of .[ thm : dynamical system - t ] if and have no common zeros then if additionally all zeros of are simple , then the second set of equations in simplifies to taking the square root in both sides of and differentiating the resulting formula with respect to ( recall that does not depend on ) , we get by theorem [ lem : buyarov / rakhmanov:99 ] and , and further , with the help of , assertions of the theorem follow by equating residues of the rational functions in the last equation above .a subset of equations from corresponding to has been obtained in several places before , in particular , in the relevant work of bleher and eynard , although using a different approach . 
that paper is ,also probably , one of the first works where some analytic properties of phase transitions in dyson gases were rigorously studied .it contains several noteworthy results , such as a discussion of the string equations and the double scaling limit of the correlation functions when simultaneously the volume goes to infinity and the parameter approaches its critical value with an appropriate speed .equations are similar in form to a system of odes studied by dubrovin in for the dynamics of the korteweg - de vries equation in the class of finite - zone or finite - band potentials .curiously , dubrovin s equations govern the evolution of the `` spurious '' poles of the diagonal pad approximants to rational modifications of a markov function ( cauchy transform ) , and are equivalent to the equations obtained by one of the authors in in terms of the harmonic measure of the support of .we will see in the sequel that conditions of theorem [ thm : dynamical system - t ] are satisfied for any except for a finite number of values .these values of may be called critical ; they correspond to some of the phase transitions but not to all of them .there are other significant values , also presenting phase transitions , which are not critical in the above mentioned sense . as it follows from theorems [ lem : buyarov / rakhmanov:99 ] and [ thm : generalparameter ] ( see also and ( * ? ?* theorem 1.3 .( iii ) ) ) , the endpoints of the support of the equilibrium measure , its density function , and the corresponding equilibrium energy ( or the infinite volume free energy ) are analytic functions of the coefficients of the external field , except for a finite number of values where the analyticity breaks down . these critical points divide the parameter space into different phases of the model and represent phase transitions .some of them ( but not all ) correspond also to the change of topology of the support of the equilibrium measure .recall the classification of the singularities of the equilibrium measure we summarized in subsection [ sec : revisited ] . from the point of view of the evolution in and with the - ( or - )representation at hand these singularities are now generically classified as follows .* * singularity of type i * is a bifurcation , representing a birth of the cut , is the event at a critical time when a simple zero of ceases to exist and at its place two new zeros of are born .formally , at the inequality , is no longer strict in , and the equality is attained at some point , where we will have . at this pointa bifurcation of the zero occurs .+ a significant property of this situation is that the phase transition occurs by saturation of the inequality ; the moment of the bifurcation is not defined by the dynamical system , i.e. it is not its singular point .all the phase parameters may be analytically continued through .the solution of the system for would give us a critical , but not an equilibrium , measure . 
* * singularity of type ii * is the opposite event , or fusion of two cuts , consisting in the collision and subsequent disappearance of two zeros of ( note that a zero of the complex green function , `` trapped '' in the closing gap , disappears simultaneously ) , and an appearance of a ( double ) zero of , followed by the splitting of this real double zero into two complex simple zeros .the collision of two zeros of is a critical point of the dynamical system .+ we note that as a rare event ( event of a higher co - dimension ) it may happen that a number of other cuts were present in the vanishing gap immediately before the collision ; they all disappear at the moment of collision of and .this will be accompanied by an appearance for a moment of a zero of of an even multiplicity higher than two .+ thus , a double zero ( or of an even higher multiplicity ) of is present at the moment of collision inside . according to, the density of vanishes at these points with an integer even order . * * singularity of type iii * are the endpoints of where has a multiple zero ; they correspond to the case when and in have a common real zero .additionally , a special situation is created when two complex - conjugate simple zeros of collide on , and either bounce back to the complex plane or continue their evolution as two real simple roots of ( at this moment , two new local extrema of the total potential in are born ) . at these values of the parametersthe free energy is still analytic , so this is not a phase transition in the sense we agreed to use in this study , although the colliding zeros of lose analyticity with respect to . in a certain sense , the singularity of type iii is a limiting case of this phenomenon , when it occurs at an endpoint of .the reader should be aware of a certain freedom in our terminology regarding the singular points : we refer to singularities meaning both the value of the parameter at which a bifurcation occurs , and the point on where the actual bifurcation takes place . we hope that the correct meaning in each case is clear from the context and will not lead the reader to confusion .we return to the analysis of the local behavior at the singularities of the system in terms of in section [ sec : phasetrans ] .so far we have been regarding the equilibrium measure and its - and -representations as functions of the total mass , assuming that is fixed .now we discuss a more general problem : the dependence of and , correspondingly , of , , and from all the coefficients of the external field .it is a remarkable fact that the differentiation formulas with respect to the coupling constants , , have the same form as the differentiation formula with respect to , and that they all can be obtained in a unified way .we start with a simple technical observation : [ existence_h ] given points and the corresponding monic polynomial , there exist polynomials , , , uniquely determined by the following conditions : and clearly , , where is the numerator of introduced in subsection [ subsec : dynamical ] , see .furthermore , for each , means that the coefficients corresponding to powers of the laurent expansion of at infinity vanish , which renders linear equations , additional to linear equations on the coefficients of .the corresponding homogeneous linear equations are obtained by setting along with . 
in particular , every is of degree at most , and according to , has a zero in each interval , , which yields only the trivial solution for this system .since we are going to write all differentiation formulas in a unified way , we prefer to use here the notation with .[ thm : generaldiff ] let the polynomial external field be given by .let also denote the corresponding equilibrium measure of mass , and let polynomials and be the -representation of this equilibrium measure , see .then where polynomials are given in lemma [ existence_h ] .moreover , in consequence , and for , formulas and are just a restatement of theorem [ thm : dynamical system - t ] .furthermore , from theorem [ thm : generalparameter ] it follows that for , where is a signed measure on satisfying it follows from that the multivalued analytic function has a single - valued real part , which is continuous in and satisfies taking into account we conclude that since for , identity of follows for all remaining s .finally , shows that is a homogeneous function of degree 1 of the vector of coupling constants , so that is just euler s theorem for such a function .formula is obtained by replacing and correspondingly in the left and right hand sides of .evaluating at the zeros of we obtain a set of algebraic identities , called _ hodograph equations _ in . solving them we could find the main parameters of the equilibrium measure and of its support .however , their explicit character is misleading : in the multi - cut case the dependence of the coefficients of from the coupling constants s is highly transcendental , and as the authors of point out , except for the simplest examples , equations are extremely difficult to solve , `` even by numerical methods '' . [remark : riemann ] alternatively , following the general methodology put forward in , we can derive the identities on the endpoints of the connected components of considering the hyperelliptic riemann surface of and its deformations depending on the set of coupling constants , imposing the condition that the partial derivatives with respect to the parameters of the corresponding normalized abelian differentials of this surface are given by a meromorphic differential on .this is equivalent to the set of the so - called _ whitham equations _ on . 
actually , polynomials defined in lemma [ existence_h ] , appear in the explicit representation of these normalized abelian differentials of the third ( ) and second kind ( ) on .the key connection with the equilibrium problem is provided by identity , which shows that the differential , with given by , can be extended as this meromorphic differential on .this approach was used in to obtain in particular an analogue of , and developed further in .again , a direct consequence of theorem [ thm : generaldiff ] is the possibility to rewrite the differentiation formulas in terms of the zeros of and of : [ thm : odebis ] if under assumptions of theorem [ thm : generaldiff ] , and have no common zeros then for , if additionally all zeros of are simple , then the second set of equations in simplifies to it is a consequence of that observe that analogous formula is valid for .hence , both the left and the right hand sides in are rational functions in , with possible poles only at the zeros of and .the necessary identities are established by comparing the corresponding residues at each pole .for instance , with the assumption that the zeros of and are disjoint , the residue of the left hand side of at is equal to , which yields the first set of equations in ( recall that by construction , all zeros of are simple ) .the analysis of the residues at gives us the remaining identities .the proof shows how the statement can be modified in the case of coincidence of some zeros of and ( in other words , in the case of roots of of degree higher than 2 ) .moreover , the evolution of is such that as long as the right - hand sides in remain bounded , all zeros of , and hence , itself , are in .we can rewrite the equations on s in in a weaker form : which are the _whitham equations in hydrodynamic form _ ( see ( * ? ? ?* eq . ( 77 ) ) ) .the simplest case to consider is when , so that consists of a single interval ] becomes part of ( _ birth of a cut _ ) ; we assume that for in a neighborhood of , is a simple zero of ; * * singularity of type ii : * at a time two simple zeros and of ( simple zeros of ) collide ( _ fusion of two cuts _ ) . * * singularity of type iii : * at a time a pair of complex conjugate zeros and of ( double zeros of ) collide with a simple zero of , so that as . additionally , in subsection [ subs : iv ] we analyze the scenario when at a time a pair of complex conjugate zeros and of ( double zeros of ) collide at and either bounce back to the complex plane or become two simple real zeros of ( these real zeros are new local extrema of the total potential on ) . obviously , for a general polynomial potential some of these phase transitions can occur simultaneously : for a given value two or more cuts could merge , while a new cut is open elsewhere , together with a type iii singularity at some endpoints of .still , the basic `` building blocks '' of all phase transitions are precisely the four cases described above , which we proceed to study .it was established in that in the second case , when at two components of merge into a single cut in such a way that is regular for , the equilibrium energy can be analytically continued through from both sides .for the case of a quartic potential it was shown in that the energy and its first two derivatives are continuous at , but the third derivative has a finite jump .this is a third order phase transition , observed also in a circular ensemble of random matrices . 
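for definiteness ( a standard formulation written in generic notation , not a quotation of the elided formulas ) , if $F(t)$ denotes the infinite volume free energy and $t=T$ the critical value , a third order phase transition of this kind means
\[
F\in C^{2}\ \text{near } t=T ,\qquad \lim_{t\to T^{-}}F'''(t)\neq\lim_{t\to T^{+}}F'''(t),
\]
with both one - sided limits finite ; equivalently , $F(t)$ admits the local form $F_{\mathrm{reg}}(t)+c_{\pm}\,(t-T)^{3}+o\bigl(|t-T|^{3}\bigr)$ as $t\to T^{\pm}$ with $c_{+}\neq c_{-}$ . the type iii singularity discussed next is also of third order , but there , as shown later in this section , one of the one - sided limits of the third derivative is infinite .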
with respect to the singularity of type iii ,it is mentioned in that in this case the free energy is expected to have an algebraic singularity at , `` but this problem has not been studied yet in details '' . in this sectionwe extend this result to a general multi - cut case and show that for the three types of singularities the energy and its first two derivatives ( but in general not the third one ) are continuous at the critical value .the character of the discontinuities is also analyzed , and we can summarize our findings as follows : in all cases there is a parameter , that we call , expressing geometrically the `` distance '' to the singularity . in the case of a birth of a new cut , this is the size of this new component of ; for a fusion of two cuts , this is the size of the vanishing gap , while for the singularity of type iii we can take it as the distance between two colliding zeros of . in the three cases the first two derivatives of the equilibrium energy ] becomes part of .first of all we want to estimate the size of the new cut as a function of .we use the notation introduced above , indicating explicitly the dependence from the parameter .let us remind the reader in particular that polynomial is the numerator of the derivative of the complex green function , see .our assumptions can be written as follows : for a small , there exist polynomials , and , such that ,\\ ( x - a_-)(x - a_+)\bm a(x;t ) , & t\in ( t , t+\varepsilon),\\ \end{cases } \\ b(x;t)&=\begin{cases } ( x - b ) \bm b(x;t ) , & t\in ( t-\varepsilon ,t],\\ \bm b(x;t ) , & t\in ( t , t+\varepsilon),\\ \end{cases } \\ h(x;t)&=\begin{cases } \bm h(x;t ) , & t\in ( t-\varepsilon , t],\\ ( x-\zeta)\bm h(x;t ) , & t\in ( t , t+\varepsilon),\\ \end{cases}\end{aligned}\ ] ] where , and are real - valued continuous functions of such that we denote this common value by ; this is the place where the new cut is born at .remember that it is imposed only by the saturation of the inequality constraint in outside of .polynomials , and are continuous with respect to the parameter , but represent , generally speaking , different real - analytic functions of for and .let us denote , omitting from the notation when possible the explicit dependence on . from it follows that , \\ \label{locala } \dot{a_\pm } & = \pm \frac{2 h ( a_\pm ) } { ( a_+-a_-)\bm q(a_\pm ) } = \pm \frac{2 ( a_\pm -\zeta ) \bm h ( a_\pm ) } { ( a_+-a_-)\bm q(a_\pm ) } , \quad t\in ( t , t+\varepsilon).\end{aligned}\ ] ] it is convenient to introduce two new variables for : adding both equations in we get which shows in particular that and we conclude that the function defined piecewise as is also in .analogously , subtracting equations in yields that for , or equivalently , let us denote by the largest zero of satisfying .then condition reads as clearly , on the other hand , from and it follows now that if we denote then notice that the solution of the ode , with , satisfies so that implies that this is consistent with the scaling used e.g. 
in .we turn next to the asymptotic behavior of the robin constant of the support , which according to theorem [ lem : buyarov / rakhmanov:99 ] is the second derivative of the infinite volume free energy .let us study first the following model situation , from which the general conclusion is readily derived .assume that is a union of a finite number of disjoint real intervals , and ] , while for .polynomials , and are continuous with respect to the parameter , but represent , generally speaking , different real - analytic functions of for and .we denote , omitting from the notation when possible the explicit dependence on . from itfollows that adding the first two equations in and using the notation introduced in we get so that analogously , from , and we conclude that the function defined piecewise as is in .furthermore , by , condition reads as with the change of variables in the integrand we obtain that where so that uniformly for ] .consequently , combining and we see that using it in we conclude that this formula shows that two cuts can come together only at zeros of for which .furthermore , we have now we turn to the simplified model problem .assume that , where ] , for ; notice that .zeros do depend on , so we will write when we want to make it explicit .observe also that and , , are the zeros of the derivative of the complex green s function for .[ lemma : merger ] with the notation above , moreover , with consider first the vector - valued function , assigning each value of to the corresponding vector , defined by equations .it is clearly differentiable for any .if we denote then by the implicit function theorem , consider the -th row of the matrix in the right - hand side of for .we have since only the numerator of the integrand depends on , it is easy to see that on the other hand , the minor of the matrix in the right - hand side of obtained after eliminating the -th row and column is clearly invertible for : it corresponds to the system for , and the endpoints of are free ends where no phase transition occurs .hence , expanding the matrix in along its -th row we conclude that it is invertible for small values of , and we can write observe that for , while for , this proves .let us be more precise about the asymptotics of . by , , defining and making in the integral above the appropriate change of variables we obtain an expression for : where since , and defining , , we get that so that where we claim that in the asymptotic expression above we can replace by . this is a direct consequence of and the fact that in , only the numerator depends on .this proves the lemma . as in the case of the birth of a cut, we study the asymptotics of the robin constant as . observe that for and , , with the term uniform in .recall that for all , is a monic polynomial of degree , and by , uniformly in the endpoints of .this motivates us to define the existence of this limit will be established next .meanwhile , it is clear that is a polynomial of degree at most ( for , function in is a constant , so that by , in this case ) .assume that ; equations for yield : dividing it through by , using and considering limit when we obtain the following equations on : where is defined in .this renders a system of linear equations with unknowns ( the coefficients of ) , which has a unique solution ( as the consideration of the corresponding homogeneous system clearly shows ) , and in particular , establishes the existence of the limit in . 
fromwe have for , so that and the integral is convergent for every sufficiently small .taking into account , and , we conclude that . hence , dividing the identity above through by and using the definition of we get we summarize this in the following lemma : [ lem : robinasympt ] under the assumptions above , where the constant is given by the right hand side of , and the polynomial is uniquely defined by the equations for , or for .now we go back to the phase transition when two cuts merge .using formula we see that observe that this expression involves explicitly the point where two cuts merged ; clearly , this is not the case if we consider the limit of as .we could conclude from here that at has finite but , in general , different values from the left and from the right .in particular , is not differentiable at .[ example : merge2 ] as an illustration , let us consider the case when has only two cuts that merge into a single interval at .using the notation above , this means that , with for , , and .thus , by , since ] .for the quartic external field it takes place if and only if it is symmetric , i.e. attains its global minimum at two distinct points .this conclusion is a straightforward consequence of the formulas in . indeed ,if and , then the first formula in gives us that since the situation is invariant by translation in , we can conclude that if is the midpoint of the interval ] in a small neighborhood of , so that with , from we obtain that using that as and expressions and , we get that and taking into account we conclude that in other words , in the case of a singularity of type iii , we have again a third order phase transition with an infinite algebraic jump of the third derivative of the free energy at the critical time , with the exponent .we finally turn our attention to the situation created when a pair of complex conjugate zeros and of collide at and become two new simple zeros , of .it was mentioned that all s are analytic through , and thus by , and are also analytic functions of the parameter . using the notation introduced above , our assumptions can be written as follows : for a small , there exist polynomials , and , continuous with respect to the parameter , but represent , generally speaking , different real - analytic functions of for and , such that where and are continuous functions of such that notice that for ; without loss of generality , . a priori, we do not assume that are real - valued for , so the two possibilities are either or .we denote , omitting from the notation when possible the explicit dependence on . from itfollows that subtracting / adding both equations in we easily get that it follows in particular that observe that these formulas show that the collision of and on can occur only at a position where moreover , assuming that for the new zeros are complex conjugate , the same formulas apply .this yields the partial conclusion : _ the scenario when the complex zeros of collide at and bounce back to the complex plane can occur only when is either a zero or a pole of ._ this is the situation , for instance , when coincides with one of the endpoints of , and in this case we get a type iii phase transition .thus , let us assume that this is always the case , for instance , if is a single interval .then , for . 
denoting again , we obtain in the same fashion that finally , we have mentioned that a singularity of type iii is a limit case of the situation analyzed here , when coincides with one of the s .furthermore , under assumption , a collision of a pair of complex conjugate zeros of at is followed by a type i phase transition ( birth of a new cut ) . however , these two phenomena can not occur simultaneously : if , conditions ( see ) are incompatible .in this section we consider in detail a particularly important case of a quartic potential , i.e. when in the representation . observe that this is the first non - trivial situation , since for ( quadratic polynomial ) all calculations are rather straightforward . according to section [ sec : polyn ] , for , has the form with either ( one - cut regime " or one - cut case " ) or ( two - cut case " ) . additionally to the description of all possible scenarios for the evolution of as travels the positive semi axis , we characterize here the quartic potentials for which is connected _ for all values of _ , as well as those for which the singularity of type iii ( higher order vanishing of the density of the equilibrium measure ) or the birth of new local extrema occur .roughly speaking , the evolution of can be described qualitatively as follows : for the quartic potential there exists a two - sided infinite sector on the plane , centered at a global minimum of and symmetric with respect to the horizontal line passing through this minimum and with the slope , such that is a single interval for all values of if and only if the other critical points of lie outside of this sector .otherwise , the positive -semi axis splits into two finite subintervals and an infinite ray .the finite subinterval containing ( which may degenerate to a single point ) corresponds to the one - cut situation , and for the neighboring finite interval the support has two connected components .finally , the infinite ray corresponds again to the one - cut case .recall ( see sections [ subsec : classification ] and [ sec : phasetrans ] ) that the transition from one to two cuts occurs always by saturation of the inequality in , while the transition from two to one cut occurs by collision of some zeros of the right hand side of .let us give the rigorous statements .for any quartic real polynomial with positive leading coefficient we define the value as follows : let denote a point where attains its global minimum on ( which can be unique or not ) , and , any other critical point of ( zero of ) . then geometrically , is the square of the slope of the straight line joining and .notice that a real cubic polynomial has either 3 real zeros , or one real and two complex conjugate zeros , so that there is no ambiguity in the definition of .next , we define the critical slope : let denote the only positive root of the equation explicitly , {5 \left(3072 \sqrt{6}-3107\right)}>0.\ ] ] alternatively , where is the only real solution of the equation [ thm : characterization ] let the quartic potential be given , and denote the critical value as described above. then , case : : : is a single interval for all values of , no singularities occur . moreover , non - real zeros of in move monotonically away from the real line if andonly if . 
case : : : is a single interval for all values of , but there exists a ( unique ) value of for which a type iii singularity occurs ( and this is the unique phase transition ) .case : : : evolves from one cut to two cuts , and then back to one cut , presenting once the birth of new local extrema , a singularity of type i and a singularity of type ii , in this order . no other singularities occur .case : : : if attains its global minimum at a single point , then evolves from one cut to two cuts , and then back to one cut , presenting once a singularity of type i and a singularity of type ii , in this order . no other singularities occur . + if attains its global minimum at two different points , then evolves from two cuts to one cut , and only a singularity of type ii is present .we obviously consider ; the value is not regarded as a singularity .observe that for with more than one local extrema on there are no type iii phase transitions .we can also easily characterize the quartic external fields for which singularity of type iii occurs ( i.e. such that the zeros of lie on the critical line ) directly in terms of their coefficients . indeed ,if , then so , without loss of generality , we may assume that let be a real zero of ; according to theorem [ thm : characterization ] , will have a type iii singularity for a certain value of if and only if the other two zeros of are of the form , , and the value is a root of the polynomial . in particular , comparing and we conclude that eliminating and from we find that the resultant of the polynomials in the left hand side of and is an integer multiple of .since both polynomials share a common root , , the resultant must vanish , and we conclude the following : for an external field with derivative of the form the equilibrium measure develops a type iii singularity if and only if using the substitution we can easily extend this result to the general case : _ for an external field such that the equilibrium measure develops a type iii singularity if and only if _ for instance , direct substitution shows that satisfies this condition . according to , in this case we will have a singularity of type iii for a finite value of , where the density of the equilibrium measure vanishes with the exponent . the value of can be found by the procedure described at the end of this section .theorem [ thm : characterization ] is a consequence of theorems [ thm : mainresult ] and [ thm : mainresult2 ] below , where some additional finer results on the dynamics of the equilibrium measure as a function of are established .since the problem is basically invariant under homotopy , horizontal and vertical shifts in the potential , as well as mirror transformation of the variable , in the rest of this section without loss of generality we assume that with both and in the closed right half plane .we have so that we may suppose that one of the following two generic situations takes place : case 1 : : : has three real roots , and ; case 2 : : : has one real root , at , is in the first quadrant , and . condition in case 1 is equivalent to saying that has on two local minima , at and , and a maximum at , in such a way that ( obviously , any general situation when has three real roots can be reduced to case 1 by an affine change of variables and by adding a constant to ) . since the situation ( or ) is equivalent to an even external field , in what follows we assume that the inequalities are strict : . case 2 means that has only one local extremum on the real line . 
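the dichotomy between case 1 and case 2 is easy to detect numerically . the sketch below is an illustrative script ( with hypothetical coefficient values , not code accompanying the paper ) : it parametrizes the quartic field as $V(x)=x^{4}/4+c_{3}x^{3}+c_{2}x^{2}+c_{1}x$ , finds the critical points as the roots of $V'$ , and reports which case applies ; the finer comparison with the critical slope of theorem [ thm : characterization ] is not reproduced here , because the equation defining that critical value is not displayed above .

import numpy as np

def classify_quartic(c3, c2, c1):
    """case 1: V' has three real zeros (two local minima, one local maximum);
    case 2: V' has a single real zero.  V(x) = x^4/4 + c3 x^3 + c2 x^2 + c1 x.
    degenerate (multiple-root) configurations are not handled."""
    crit = np.roots([1.0, 3.0 * c3, 2.0 * c2, c1])        # zeros of V'
    real = np.sort(crit[np.abs(crit.imag) < 1e-8].real)
    V = lambda x: 0.25 * x**4 + c3 * x**3 + c2 * x**2 + c1 * x
    if real.size == 3:
        # the outer critical points are the minima; report the deeper (global) one
        deeper = real[0] if V(real[0]) <= V(real[2]) else real[2]
        return "case 1", real, deeper
    return "case 2", real, real[0]

print(classify_quartic(0.0, -1.0, 0.1))   # double-well field      -> case 1
print(classify_quartic(0.0, +1.0, 0.0))   # single local extremum  -> case 2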
in this way ,the only case excluded from the analysis is , for which and the situation is trivial .for the polynomial in the right hand side of has degree 6 , and the identity takes the form for certain constants .the following technical result will be useful in what follows : [ lemma : equations ] assume that for in , or equivalently , .then if is given by , the following identities hold : this is a straightforward consequence of replacing in and equating the coefficients in both sides .[ remk1 ] it is important to observe that the identities remain valid under a homothetic transformation for .in other words , a linear scaling in space yields a quartic scaling in time ( or temperature ) .let us consider first case 1 ( with strict inequalities ) .the theorem below is the quantitative description of the following evolution : for small values of temperature a single cut is born in a neighborhood of the origin .the other two ( double ) zeros of are real and close to and , moving in opposite directions . at the first critical temperature a bifurcation of typei occurs : the rightmost double zero splits into two simple real zeros , giving birth to a second cut in the spectrum .this configuration is preserved until both cuts merge at a quartic point at a temperature ( phase transition of type ii ) .after that , two complex conjugate double roots of drift away to infinity in the complex plane , so that for the remaining situation we are back in the one - cut case . for the sake of convenience , herewe use all introduced notations interchangeably , [ thm : mainresult ] let be an external field given by with . then there exist two critical values such that : * * ( phase 1 ) : * for , and in have the form with , and ] ( two - cut case ) .+ moreover , these values satisfy the system of differential equations parameters and are continuous at , while the value of is determined by in particular , the critical temperature is determined by the collision condition . * * ( 2nd transition , phase transition of type ii ) : * for , and in have the form where so that , and ] ( one - cut case ) .+ moreover , these values satisfy the system of differential equations of the form ( setting now ) ; in particular , grows monotonically with , and if , then .the evolution is as described , except for ( phase 1 is missing ) . using and representationwe conclude that there exists such that for we are in the one - cut case , and formulas hold , with both and close to and , respectively . in this situation , , so that is a particularization of , and the inequalities are just straightforward consequences of .observe also that implies that furthermore , for any finite initial positions , , with , the solution of the system of differential equations exhibits collision ( of and ) in finite time . indeed , by , for ( and before the collision ) , since , solving the corresponding ode we conclude that analogously , replacing these bounds in we get or thus , a collision will occur by time if since , as , the integral in the left - hand side diverges as , which proves that there will always be a collision in a finite time .observe that function in is well - defined in the whole interval , and that the integrand in is , up to a constant , the analytic continuation of the density of , see .using we conclude that however , at the collision time , which shows that there is a unique time for which . 
from the positivity of the measure and expressionit is easy to conclude that all roots of , which are not endpoints of the support , need to be double .hence , for , the rightmost double root splits into a pair of simple real roots and , giving rise to formula .we apply again theorem [ thm : dynamical system - t ] with , which yields .observe that it is not straightforward to deduce the sign of from taking into account the initial values we see that for a small and , is close to , so that .this implies that immediately after the birth of a cut ( ) , , so that point still moves to the left `` by inertia '' . in that range of time , and ( and in consequence , also and ) are in the collision course , and collision occurs in finite time , when all these four points merge _ simultaneously_. this critical time can be characterized by the appearance of a quadruple root of inside , so that the system is valid . taking into account the monotonicity of , we see that for the quadruple root of splits into two complex - conjugate roots and , and formulas hold . adding the equations for s ( ) we obtain that is an equation of an `` east - west opening '' rectangular ( or equilateral ) hyperbola with its vertices at and , and corresponds to the connected component of its complement in containing the segment joining and .figure1 ( 17,46) ( 77,46) ( 3,79) ( 33,69) hence , we conclude from that moreover , the monotonicity of the support ( or equivalently , the fact that ) implies that clearly , for , , so that in this range of , grows monotonically , and we have a one - cut case for all . finally ,since by assumptions , for all , using the representation we conclude that regarding the second limit in , observe that from equations , taking , we get an immediate consequence of these identities is that the centers of masses of the zeros of and of the zeros of are always in a collision course . a comparison of the coefficients at and in both sides of yields the system an assumption that ( collision ) in the last two equations in implies that , which is possible only if .this is the symmetric case not considered here .hence , we conclude that in our situation there exist the limits by using this in the first identity in we conclude the proof of . next , we turn to case 2 .[ thm : mainresult2 ] consider the external field given by with , and with in the first quadrant . then : 1 .if where , and is the only positive root of the equation , then for all , polynomials and in have the form with , and consists of a single interval ] . * for ,polynomials and in have the form , with , and ] .its discriminant ( easily found with the help of a computer algebra system ) , is positive for ] , find the root of lying in ,\ ] ] and take . then the endpoints of the support are obtained by replacing in , and taking ; the value of is finally computed from the fourth equation in . in this case , as we have seen , is a singular point of type ii ( zero of the density of the equilibrium measure ) .let us consider the case when is in the first quadrant , and . denoting and dividing by , we arrive at the polynomial equation whose discriminant , as natural , is given by the left hand side of with replaced by .thus , for , with defined in , polynomial has only one real root , which yields non - real values of and in . in this case, no quadruple critical point appears . if on the contrary , , then has three positive roots ; only two of them give real values of and in . 
summarizing , in the case when , the recipe for finding two critical points is as follows : find the real roots of , take and replace it in .if , then , and ; the value of is computed from the fourth equation in . if on the contrary , , then , and ; the value of is computed again from the fourth equation in .the first and the second authors have been supported in part by the research projects mtm2011 - 28952-c02 - 01 ( a.m .- f . ) and mtm2011 - 28781 ( r.o . ) from the ministry of science and innovation of spain and the european regional development fund ( erdf ) . additionally , the first author was supported by the excellence grant p09-fqm-4643 and the research group fqm-229 from junta de andaluca .a.m .- f . and e.a.r .also thank aim workshop `` vector equilibrium problems and their applications to random matrix models '' and useful discussions in that research - stimulating environment .we gratefully acknowledge also several constructive remarks from razvan teodorescu ( university of south florida , usa ) , arno kuijlaars ( university of leuven , belgium ) and tamara grava ( sissa , italy ) .a. i. aptekarev and w. van assche .asymptotics of discrete orthogonal polynomials and the continuum limit of the toda lattice . , 34(48):1062710637 , 2001 .symmetries and integrability of difference equations ( tokyo , 2000 ) .p. deift , t. kriecherbauer , k. t .-mclaughlin , s. venakides , and x. zhou. uniform asymptotics for polynomials orthogonal with respect to varying exponential weights and applications to universality questions in random matrix theory . , 52(11):13351425 , 1999 .a. a. gonchar and e. a. rakhmanov . on the convergence of simultaneous pad approximants for systems of functions of markov type ., 157:3148 , 234 , 1981 . number theory , mathematical analysis and their applications .a. a. gonchar and e. a. rakhmanov .equilibrium measure and the distribution of zeros of extremal polynomials ., 125(2):117127 , 1984 .translation from mat .134(176 ) , no.3(11 ) , 306 - 352 ( 1987 ) .a. a. gonchar and e. a. rakhmanov .equilibrium distributions and degree of rational approximation of analytic functions . , 62(2):305348 , 1987 .translation from mat .134(176 ) , no.3(11 ) , 306 - 352 ( 1987 ) . a. b. j. kuijlaars and k. t .-generic behavior of the density of states in random matrix theory and equilibrium problems in the presence of real analytic external fields ., 53(6):736785 , 2000 .g. lpez and e. a. rakhmanov .rational approximations , orthogonal polynomials and equilibrium distributions . in _orthogonal polynomials and their applications ( segovia , 1986 ) _ , volume 1329 of _ lecture notes in math ._ , pages 125157 .springer , berlin , 1988 .a. martnez - finkelshtein and e. a. rakhmanov . on asymptotic behavior of heine - stieltjes and van vleck polynomials . in _ recent trends in orthogonal polynomials andapproximation theory _ ,volume 507 of _ contemp ._ , pages 209232 .soc . , providence , ri , 2010 .n. i. muskhelishvili . .dover publications inc ., new york , 1992 . translated from the second ( 1946 ) russian edition and with a preface by j. r. m. radok . corrected reprint of the 1953 english translation .e. a. rakhmanov .strong asymptotics for orthogonal polynomials . in _ methods of approximation theory in complex analysis and mathematical physics ( leningrad , 1991 ) _ , volume 1550 of _ lecture notes in math ._ , pages 7197 .springer , berlin , 1993 .h. stahl .orthogonal polynomials of complex - valued measures and the convergence of pad approximants . 
in _ fourier analysis and approximation theory ( proc . budapest , 1976 ) , vol . _ , volume 19 of _ colloq . math . soc . jános bolyai _ , pages 771 - 788 . north - holland , amsterdam , 1978 . a. martínez - finkelshtein ( andrei.es ) , department of mathematics , university of almería , spain , and instituto carlos i de física teórica y computacional , granada university , spain ; e. a. rakhmanov ( rakhmano.usf.edu ) , department of mathematics , university of south florida , usa ; r. orive ( rorive.es ) , department of mathematical analysis , university of la laguna , tenerife , canary islands , spain | the paper is devoted to a study of phase transitions in the hermitian random matrix models with a polynomial potential . in an alternative equivalent language , we study families of equilibrium measures on the real line in a polynomial external field . the total mass of the measure is considered as the main parameter , which may be interpreted also either as temperature or time . our main tools are differentiation formulas with respect to the parameters of the problem , and a representation of the equilibrium potential in terms of a hyperelliptic integral . using this combination we introduce and investigate a dynamical system ( system of ode s ) describing the evolution of families of equilibrium measures . on this basis we are able to systematically derive a number of new results on phase transitions , such as the local behavior of the system at all kinds of phase transitions , as well as to review a number of known ones .
a heterogeneous material ( medium ) is one that composed of domains of different materials or phases ( e.g. , a composite ) or the same material in different states ( e.g. , a polycrystal ) .such materials are ubiquitous ; examples include sandstones , granular media , animal and plant tissue , gels , foams and concrete .the microstructures of heterogeneous materials can be only characterized statistically via various types of -point correlation functions .the effective transport , mechanical , and electromagnetic properties of heterogeneous materials are known to be dependent on an infinite set of correlation functions that statistically characterize the microstructure ._ reconstruction _ of heterogeneous materials from a knowledge of limited microstructural information ( a set of lower - order correlation functions ) is an intriguing inverse problem .an effective reconstruction procedure enables one to generate accurate structures and subsequent analysis can be performed on the image to obtain macroscopic properties of the materials ; see , e.g. , ref .this provides a nondestructive means of estimating the macroscopic properties : a problem of important technological relevance .another useful application is reconstruction of a three - dimensional structure of the heterogeneous material using information extracted from two - dimensional plane cuts through the material .such reconstructions are of great value in a wide variety of fields , including petroleum engineering , biology and medicine , because in many cases one only has two - dimensional information such as a micrograph or image . generating realizations of heterogeneous materials from a set of hypothetical correlation functions is often referred to as a _ construction _ problem .a successful means of construction enables one to identify and categorize materials based on their correlation functions .one can also determine how much information is contained in the correlation functions and test realizability of various types of hypothetical correlation functions .furthermore , an effective ( re)construction procedure can be employed to investigate any physical phenomena where the understanding of spatiotemporal patterns is fundamental , such as in turbulence . a popular ( re)construction procedure is based on the use of gaussian random fields : successively passing a normalized uncorrelated random gaussian field through a linear and then a nonlinear filter to yield the discrete values representing the phases of the structure .the mathematical background used in the statistical topography of gaussian random fields was originally established in the work of rice .many variations of this method have been developed and applied since then .the gaussian - field approach assumes that the spatial statistics of a two - phase random medium can be completely described by specifying only the volume fraction and standard two - point correlation function , which gives the probability of finding two points separated by vector distance in one of the phases .however , to reproduce gaussian statistics it is not enough to impose conditions on the first two cumulants only , but also to simultaneously ensure that higher - order cumulants vanish .in addition , the method is not suitable for extension to non - gaussian statistics , and hence is model dependent .recently , torquato and coworkers have introduced another stochastic ( re)construction technique . 
in this method ,one starts with a given , arbitrarily chosen , initial configuration of random medium and a set of target functions .the medium can be a dispersion of particle - like building blocks or , more generally , a digitized image .the target functions describe the desirable statistical properties of the medium of interest , which can be various correlation functions taken either from experiments or theoretical considerations .the method proceeds to find a realization ( configuration ) in which calculated correlation functions best match the target functions .this is achieved by minimizing the sum of squared differences between the calculated and target functions via stochastic optimization techniques , such as the simulated annealing method .this method is applicable to multidimensional and multiphase media , and is highly flexible to include any type and number of correlation functions as microstructural information . it is both a generalization and simplification of the aforementioned gaussian - field ( re)construction technique .moreover , it does not depend on any particular statistics .there are many different types of statistical descriptors that can be chosen as target functions ; the most basic one is the aforementioned two - point correlation function , which is obtainable from small - angle x - ray scattering .however , not every hypothetical two - point correlation function corresponds to a realizable two - phase medium .therefore , it is of great fundamental and practical importance to determine the necessary conditions that realizable two - point correlation functions must possess .shepp showed that convex combinations and products of two scaled autocovariance functions of one - dimensional media ( equivalent to two - point correlation functions ; see definition below ) satisfy all known necessary conditions for a realizable scaled autocovariance function .more generally , we will see that a hypothetical function obtained by a particular combination of a set of realizable scaled autocovariance functions corresponding to -dimensional media is also realizable . in this paper , we generalize shepp s work and argue that given a complete two - point correlation function space , of any statistically homogeneous materialcan be expressed through a map on a selected set of bases of the function space .we collect all known necessary conditions of realizable two - point correlation functions and formulate a new conjecture .we also provide new examples of realizable two - point correlation functions and suggest a set of analytical basis functions .we further discuss an exact mathematical formulation of the ( re)construction problem and show that can not completely specify a two - phase heterogeneous material alone , apart from the issue of chirality .moreover , we devise an efficient and isotropy - preserving construction algorithm to generate realizations of materials from their two - point correlation functions . subsequent analysis can be performed on the generated images to estimate desired macroscopic properties that depend on , including both linear and nonlinear behavior .these developments are integrated here into a general scheme that enables one to model and categorize heterogeneous materials via two - point correlation functions .although the general scheme is applicable in any space dimension , we will mainly focus on two - dimensional media here . 
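in symbols , the optimization just described drives to zero a fictitious `` energy '' of the form ( written in generic notation , since the displayed equations of the original references are not reproduced here )
\[
E=\sum_{\alpha}\sum_{\mathbf r}\bigl[f_{\alpha}(\mathbf r)-\hat f_{\alpha}(\mathbf r)\bigr]^{2},
\]
where the $\hat f_{\alpha}$ are the prescribed target correlation functions , the $f_{\alpha}$ are the same descriptors sampled from the current trial configuration , $\alpha$ runs over the chosen set of correlation functions and $\mathbf r$ over the sampled separations ; trial pixel moves are accepted with the metropolis probability $\min\{1,\exp(-\Delta E / T)\}$ while the fictitious temperature $T$ is slowly lowered .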
in the second part of this series of two papers , we will provide algorithmic details and applications of our general scheme .the rest of this paper is organized as follows : in sec .ii , we briefly introduce the basic quantities used in the description of two - phase random media . in sec .iii , we gather all the known necessary conditions for realizable two - point correlation functions and make a conjecture on a new possible necessary condition based on simulation results . in sec .iv , we propose a general form through which the scaled autocovariance functions can be expressed by a set of chosen basis functions and discuss the choice of basis functions . in sec .v , we formulate the ( re)construction problem using rigorous mathematics and show that alone can not completely specify a two - phase random medium .thus , it is natural to solve the problem by stochastic optimization method ( i.e. , simulated annealing ) .the optimization procedure and the lattice - point algorithm are also discussed . in sec .vi , we provide several illustrative examples . in sec .vii , we make concluding remarks .the ensuing discussion leading to the definitions of the -point correlation functions follows closely the one given by torquato .consider a realization of a two - phase random heterogeneous material within -dimensional euclidean space . to characterize this binary system ,in which each phase has volume fraction ( ) , it is customary to introduce the indicator function defined as where is the region occupied by phase and is the region occupied by the other phase .the statistical characterization of the spatial variations of the binary system involves the calculation of -point correlation functions : where the angular brackets denote ensemble averaging over independent realizations of the random medium . for _ statistically homogeneous _ media ,the -point correlation function depends not on the absolute positions but on their relative displacements , i.e. , for all , where .thus , there is no preferred origin in the system , which in eq .( [ eq1002 ] ) we have chosen to be the point . in particular , the one - point correlation function is a constant everywhere , namely , the volume fraction of phase , i.e. , and it is the probability that a randomly chosen point in the medium belongs to phase . for_ statistically isotropic _ media , the -point correlation function is invariant under rigid - body rotation of the spatial coordinates . for ,this implies that depends only on the distances ( ) . for , it is generally necessary to retain vector variables because of chirality of the medium .the two - point correlation function defined as is one of the most important statistical descriptors of random media .it also can be interpreted as the probability that two randomly chosen points and both lie in phase . for _ statistical homogeneous _ and _ isotropic _ media , only depends on scalar distances , i.e. 
, where .global information about the surface of the phase may be obtained by ensemble averaging the gradient of .since is different from zero only on the interfaces of the phase , the corresponding specific surface defined as the total area of the interfaces divided by the volume of the medium is given by note that there are other higher - order surface correlation functions which are discussed in detail by torquato .the calculation of higher - order correlation functions encounters both analytical and numerical difficulties , and very few experimental results needed for comparison purposes are available so far . however , their importance in the description of collective phenomena is indisputable .a possible pragmatic approach is to study more complex lower - order correlation functions ; for instance , the two - point cluster function defined as the probability that two randomly chosen points and belong to the same cluster of phase ; or the lineal - path function defined as the probability that the entire line segment between points and lies in phase . and of the reconstructed mediaare sometimes computed to study the non - uniqueness issue of the reconstruction .the task of determining the necessary and sufficient conditions that must possess is very complex . in the context of stochastic processes in time ( one - dimensional processes ), it has been shown that the autocovariance functions must not only meet all the necessary conditions we will present in this section but another condition on `` corner - positive '' matrices . since little is known about corner - positive matrices , this theorem is very difficult to apply in practice .thus , when determining whether a hypothetical function is realizable or not , we will first check all the necessary conditions collected here and then use the construction technique to generate realizations of the random medium associated with the hypothetical function as further verification . herewe collect all of the known necessary conditions on . for a two - phase statistically homogeneous medium , the two - point correlation function for phase 2is simply related to the corresponding function for phase 1 via the expression and the _ autocovariance _ function for phase 1 is equal to that for phase 2 .generally , for , and in the absence of any _ long - range _ order , an important necessary condition of realizable for a two - phase statistically homogeneous medium with dimensions is that the -dimensional fourier transform of the autocovariance function , denoted by must be non - negative for all wave vectors , i.e. , for all this non - negativity result is sometimes called the wiener - khintchine condition , which physically results since is proportional to the scattered radiation intensity .the two - point correlation function must satisfy the following bounds for all and the corresponding bounds on the autocovariance function are given by a corollary of eq .( [ eq113 ] ) recently derived by torquato states that the infimum of any two - point correlation function of a statistically homogeneous medium must satisfy the inequalities \le \phi_i^2.\ ] ] another necessary condition on in the case of statistically homogeneous and isotropic media , i.e. , when is dependent only the distance , is that its derivative at is strictly negative for all : this is a consequence of the fact that slope at is proportional to the negative of the specific surface .taking that it is axiomatic that is an even function , i.e. 
, , then it is non - analytic at the origin .a lesser - known necessary condition for statistically homogeneous media is the so - called `` triangular inequality '' that was first derived by shepp and later rediscovered by matheron : where .note that if the autocovariance of a statistically homogeneous and isotropic medium is monotonically decreasing , nonnegative and convex ( i.e. , ) , then it satisfies the triangular inequality eq .( [ eq115 ] ) .the triangular inequality implies several point - wise conditions on the two - point correlation function .for example , for statistically homogeneous and isotropic media , the triangular inequality implies the condition given by eq .( [ eq114 ] ) , the fact that the steepest descent of the two - point correlation function occurs at the origin : and the fact that must be convex at the origin : torquato showed that the triangular inequality is actually a special case of the more general condition : where ( and is odd ) . note that by choosing ; , , eq . ( [ eq115 ] ) can be rediscovered . if ; are chosen instead , another `` triangular inequality '' can be obtained , i.e. , where .equation ( [ eq1151 ] ) was first derived by quintanilla .equation ( [ eq118 ] ) is a much stronger necessary condition that implies that there are other necessary conditions beyond those identified thus far .however , eq . ( [ eq118 ] ) is difficult to check in practice , because it does not have a simple spectral analog .one possible method is to randomly generate a set of points and compute the value of . among these values of ,select the largest ones and set their coefficients equal to .thus , we have equations for s. then we can substitute the solved s into eq .( [ eq118 ] ) and check the inequality . if the inequality holds , then we can generate several different sets of random points and test the inequality in the same way .{hypo_s2.eps}\\ \mbox{\bf ( a ) } \\\\\\\includegraphics[width=5cm , keepaspectratio]{hypo_config.eps}\\ \mbox{\bf ( b ) } \end{array} ]. there could be different choices of the basis functions ( like different basis choices of a hilbert space ) , and we would like the basis functions to have nice mathematical properties , such as simple analytical forms .let denotes our choice of the basis functions .thus , the media can be represented merely by different maps s .note that a hypothetical two - point correlation function corresponds to a hypothetical map and effective construction algorithms can be used to test the realizability of .a systematic way of determining the basis functions is not available yet .here we take the first step to determine the bases by considering certain known realizable analytical two - point correlation functions and the corresponding scaled autocovariance functions . for convenience , we categorize these functions into three families : ( ) _ monotonically decreasing _ functions ; ( ) _ damped - oscillating _ functions ; and ( ) functions of _ known constructions_. 
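as a concrete companion to the three families just listed , the sketch below ( illustrative code with assumed functional forms and parameter values ; it is not taken from the paper ) implements an exponential ( debye - type ) basis function and a commonly used damped - oscillating form , builds a convex combination of the two ( such combinations of scaled autocovariances at equal volume fractions are the kind of shepp - type compositions mentioned in the introduction ) , and numerically spot - checks several necessary conditions from this section : the value one at the origin , the bounds on a scaled autocovariance , a negative slope at the origin , and non - negativity of the spectral density on a periodic two - dimensional grid , a discrete stand - in for the wiener - khintchine condition . the third family , based on known constructions , is taken up next .

import numpy as np

def f_debye(r, a=5.0):
    # exponentially decaying (debye-type) scaled autocovariance, f(0) = 1
    return np.exp(-np.asarray(r, float) / a)

def f_damped(r, a=8.0, q=2.0 * np.pi / 12.0):
    # damped-oscillating form e^{-r/a} cos(q r); whether it is realizable in a
    # given dimension depends on (a, q), so it is only a trial candidate here
    r = np.asarray(r, float)
    return np.exp(-r / a) * np.cos(q * r)

def f_mix(r, alpha=0.6):
    # convex combination of the two basis functions (shepp-type combination)
    return alpha * f_debye(r) + (1.0 - alpha) * f_damped(r)

def check_candidate(f, phi1=0.5, N=256):
    """numerical spot-checks of necessary conditions for a scaled autocovariance
    of a two-dimensional medium with phase fractions phi1 and 1 - phi1."""
    phi2 = 1.0 - phi1
    idx = np.arange(N)
    d = np.minimum(idx, N - idx)                      # periodic (minimum-image) offsets
    R = np.hypot(*np.meshgrid(d, d, indexing="ij"))
    F = f(R)
    ok_origin = abs(float(f(0.0)) - 1.0) < 1e-12      # f(0) = 1
    lower = -min(phi1 / phi2, phi2 / phi1)
    ok_bounds = F.max() <= 1.0 + 1e-12 and F.min() >= lower - 1e-12
    ok_slope = float(f(1e-3)) < float(f(0.0))         # crude proxy for f'(0) < 0
    spec = np.fft.fft2(F).real                        # spectral density up to positive factors
    ok_spectrum = spec.min() >= -1e-6 * spec.max()    # wiener-khintchine check in d = 2
    return dict(origin=ok_origin, bounds=ok_bounds, slope=ok_slope, spectrum=ok_spectrum)

print(check_candidate(f_debye))
print(check_candidate(f_mix))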
the family of monotonically decreasing functions includes the simple exponentially decreasing function introduced by debye and polynomial functions .the former is given by where is a correlation length , corresponding to structures in which one phase consists of `` random shapes and sizes '' ( shown in fig .[ fig2 ] ) .it is now known that certain types of space tessellations have autocovariance functions given by eq .( [ eq129 ] ) .we have referred to this class of structures as _ debye random media _ .{debye_obj2.eps}\\ \mbox{\bf ( a)}\\\\\\ \includegraphics[width=5cm , keepaspectratio]{debye_config.eps}\\ \mbox{\bf ( b ) } \end{array} ] the family of functions of known constructions includes scaled autocovariance functions of -dimensional identical overlapping spheres and symmetric - cell materials . for overlapping spheres of radius , the scaled autocovariance function for the particle phase ( spheres ) is given by -\phi_1 ^ 2}{\phi_1\phi_2},\ ] ] where and are volume fractions of the spheres and matrix respectively , is the number density of spheres , and is the union volume of two spheres of radius whose centers are separated by . for the first three space dimensions , the latter is respectively given by \theta(2r - r),\ ] ] \theta(2r - r),\ ] ] where is the heaviside function , and is the volume of a -dimensional sphere of radius given by where is the gamma function . for and , and , respectively .{cb_obj2.eps}\\ \mbox{\bf ( a)}\\\\\\ \includegraphics[width=5cm , keepaspectratio]{config14.eps}\\ \mbox{\bf ( b ) } \end{array} ] without loss of generality , we choose the black phase to be the phase of interest and assume _ periodic boundary condition _ is applied , which is commonly used in computer simulations .the two - point correlation function of the black phase can be calculated based on its probabilistic nature , i.e. , the probability of finding two points separated by the vector distance in the same phase .the value of two - point correlation function for a particular is given by where are entries of defined in eq .( [ eq201 ] ) , and are integers satisfying ] need to be considered .this quantity is given by ^ 2 - 8[n/2 ] + 6 ] first , we consider specific two - dimensional and two - phase structure composed of a square array of nonoverlapping disks , as shown in fig .this morphology may be viewed as a cross section of two - phase materials containing rod- or fiber - like inclusions .various transport properties of these materials have been well explored because of their practical and theoretical importance in materials science .the regular structure is discretized by introducing an square lattice .the volume fractions of black and white phases are , , respectively .the target two - point correlation of the digitized medium is sampled using both the orthogonal and the lattice - point algorithm for comparison purpose .the simulations start from random initial configurations ( i.e. , random collections of black and white pixels ) , at some initial temperature , with fixed volume fractions . 
at each monte - carlo( mc ) step , when an attempt to exchange two randomly chosen pixels with different colors ( or to randomly displace a chosen black pixel ) is made , is efficiently recomputed by using the orthogonal - sampling algorithm ( or the lattice - point algorithm ) .the set of constants specifies the annealing schedule : at each temperature , the system is thermalized until either mc moves are accepted or the total number of attempts to change the original configurations reaches the value .subsequently , the system temperature is decreased by the reduction factor , i.e. , .{lattice_lp.eps } & \includegraphics[width=4cm , keepaspectratio]{lattice_ot.eps } \\ \mbox{\bf ( a ) } & \mbox{\bf ( b ) } \end{array} ] for comparison purposes , both the orthogonal - sampling algorithm and the lattice - point algorithm are used in the construction , the results are shown in fig .[ fig8 ] . at a lower density of the black phase , is manifested as a characteristic repulsion among different elements with diameter of order .the repulsion vanishes beyond the length scale . at a higher density ,both length scales and are clearly noticeable in the distribution of the black and white phases .note that the structures generated by the orthogonal - sampling algorithm exhibit some anisotropy features , i.e. , containing stripes along degree directions , which implies that the orthogonal - sampling algorithm should be used with care in the case where the medium has long - range correlations .{dampsine_0.2.eps } \end{array} ] {cb0.25_0.1.eps } & \includegraphics[height=3cm , keepaspectratio]{cb0.25_0.3.eps } & \includegraphics[height=3cm , keepaspectratio]{cb0.25_0.5.eps } \\ \mbox{\bf ( a ) } & \mbox{\bf ( b ) } & \mbox{\bf ( c ) } \end{array} ] {cb0.75_0.1.eps } & \includegraphics[height=3cm , keepaspectratio]{cb0.75_0.3.eps } & \includegraphics[height=3cm , keepaspectratio]{cb0.75_0.5.eps } \\ \mbox{\bf ( a ) } & \mbox{\bf ( b ) } & \mbox{\bf ( c ) } \end{array}$ ] the results imply that even a simple combination of two basis functions enables one to obtain scaled autocovariance functions with properties of interest and to generate a variety of structures with controllable morphological features , e.g. , local `` particle '' shape and cluster size .in this paper , we have provided a general rigorous scheme to model and categorize two - phase statistically homogeneous and isotropic media . 
in particular , given a set of basis functions , we have shown that the medium can be modeled by a map composed of convex combination and product operations .the basis functions should be realizable but , if they are not , they should at least satisfy all the known necessary conditions for a realizable autocovariance function .we have gathered all the known necessary conditions and made a conjecture on a possible new condition based on simulation results .a systematic way of determining basis functions is not available yet .we proposed a set of basis functions with simple analytical forms that capture salient microstructural features of two - phase random media .we give for the first time a rigorous mathematical formulation of the ( re)construction problem and showed that the two - point correlation function alone can not completely specify a two - phase heterogeneous material .moreover , we devised an efficient and isotropy - preserving ( re)construction algorithm , namely , the lattice - point algorithm to generate realizations of materials based on the yeong - torquato technique .we also provided an example of non - realizable yet non - trivial two - point correlation function and showed that our algorithm can be used to test realizability of hypothetical functions .an example of generating hypothetical random media with combined realizable correlation functions was given as an application of our general scheme .we showed that even a simple combination of two basis functions enables one to produce media with a variety of microstructures of interest and therefore a means of categorizing microstructures .we are investigating applications of our general scheme in order to model real materials .we are also developing more efficient ( re)construction algorithms .there is a need for a theoretical and numerical analysis of the _ energy threshold _ of the algorithm , which is the aforementioned `` acceptable tolerance '' .this quantity provides an indication of the extent to which the algorithms have reproduced the target structure and it is directly related to the non - uniqueness issue of reconstructions .moreover , additional realizable basis functions are needed to construct a complete basis set .such work will be reported in our future publications . where and is the effective correlation length .it is easy to check that satisfies the known necessary conditions collected in this paper except for eq .( [ eq118 ] ) , which can only be checked for a finite number of cases .realizations of the random medium associated with have been constructed using the yeong - torquato technique with very high numerical precision .thus , we believe to be a valid candidate for a realizable scaled autocovariance function . | heterogeneous materials abound in nature and man - made situations . examples include porous media , biological materials , and composite materials . diverse and interesting properties exhibited by these materials result from their complex microstructures , which also make it difficult to model the materials . yeong and torquato [ phys . rev . e * 57 * , 495 ( 1998 ) ] introduced a stochastic optimization technique that enables one to generate realizations of heterogeneous materials from a prescribed set of correlation functions . in this first part of a series of two papers , we collect the known necessary conditions on the standard two - point correlation function and formulate a new conjecture . 
in particular , we argue that given a complete two - point correlation function space , of any statistically homogeneous material can be expressed through a map on a selected set of bases of the function space . we provide new examples of realizable two - point correlation functions and suggest a set of analytical basis functions . we also discuss an exact mathematical formulation of the ( re)construction problem and prove that can not completely specify a two - phase heterogeneous material alone . moreover , we devise an efficient and isotropy - preserving construction algorithm , namely , the lattice - point algorithm to generate realizations of materials from their two - point correlation functions based on the yeong - torquato technique . subsequent analysis can be performed on the generated images to obtain desired macroscopic properties . these developments are integrated here into a general scheme that enables one to model and categorize heterogeneous materials via two - point correlation functions . we will mainly focus on the basic principles in this paper . the algorithmic details and applications of the general scheme are given in the second part of this series of two papers . |
recent technological advancements have enabled increasing use of infrastructure - free wireless communications .for example , smartphone users can exchange information with each other by exploiting local wi - fi and bluetooth connections , or using the fifth - generation ( 5 g ) cellular device - to - device communications ; and even unmanned aerial vehicles ( uavs ) can directly communicate with nearby ground stations and send back photos and videos in real time .although these infrastructure - free communication links bring great convenience to our daily lives , they can also be used by malicious users to launch various security attacks .for instance , terrorists can use peer - to - peer wi - fi connections to communicate and facilitate terror attacks , and criminals can control uavs to spy and collect private information from rightful users . as such malicious attacks are launched via infrastructure - free wireless communications , they are difficult to be monitored by solely using existing information surveillance methods that intercept the communication data at the cellular or internet infrastructures . in response to such new threats on public security , authorized parties such as government agencies should develop new approaches to legitimately surveil these suspicious wireless communication links over the air ( e.g. , via eavesdropping ) to detect malicious attacks , and then intervene in them ( e.g. , via jamming and spoofing ) to quickly defend and disable these attacks .there have been several recent studies in the literature that investigate the surveillance of wireless communications , where authorized parties efficiently intercept suspicious wireless communication links , extract their exchanged data contents , and help identify the malicious wireless communication links to intervene in .conventionally , the methods for wireless communications surveillance include wiretapping of wireless operators infrastructures and installation of monitoring software in smartphones .recently , over - the - air eavesdropping has emerged as a new wireless communications surveillance method . among others ,passive eavesdropping ( see , e.g. , ) and proactive eavesdropping are two approaches implemented at the physical layer , in which authorized parties can deploy dedicated wireless monitors to overhear the targeted wireless communications , especially the infrastructure - free ones .efficient surveillance can help detect and identify malicious users and their communications .after that , authorized parties need to quickly respond and defend them via wireless communication intervention .for example , the security agency may need to disrupt , disable , or spoof ongoing terrorists communications to prevent terror attacks at the planning stage , and it is also desirable to change the control signal of a malicious uav to land it in a targeted location and catch it . in the literature ,physical - layer jamming ( see , e.g. , ) is one existing approach that can be employed to intervene in malicious communications , though it was originally proposed for military instead of public security applications . in the physical - layer jamming , the jammer sends artificially generated gaussian noise ( so - called `` uncorrelated jamming '' ) or a processed version of the malicious signal ( so - called `` correlated jamming '' ) to disrupt or disable the targeted malicious wireless communications . 
however , jamming the targeted communications at the physical layer is easy to be detected , and may not be sufficient to successfully intervene in malicious activities .this is due to the fact that when the targeted communication continuously fails due to the jamming attack , the malicious users may take counter - measures by changing their communication frequency bands or switching to another way of communications .thus , we are motivated to study a new wireless communication intervention via spoofing at the physical layer , which can keep the malicious communication but change the communicated information to intervene in .= 1 we investigate the new physical - layer spoofing by considering a fundamental three - node system over additive white gaussian noise ( awgn ) channels . as shown in fig .[ fig:1 ] , an intermediary legitimate spoofer aims to spoof a malicious communication link from alice to bob , such that the received message at bob is changed from alice s originally sent message to the one desired by the spoofer . under this setup, we propose a new symbol - level spoofing approach , in which the spoofer designs the spoofing signals via exploiting the symbol - level relationship between each original constellation point of alice and the desirable one of the spoofer , so as to optimize the spoofing performance .in particular , we consider two cases when alice employs the widely - used binary phase - shift keying ( bpsk ) and quadrature phase - shift keying ( qpsk ) modulations , respectively .-ary quadrature amplitude modulation ( -qam ) and -ary phase shift keying ( -psk ) with . nevertheless , under these modulation techniques , how to design spoofing signals to optimally solve the average sser minimization problem is generally a more difficult task , since the corresponding sser functions will become very complicated .] the objective of the spoofer is to minimize the average spoofing - symbol - error - rate ( sser ) , i.e. , the average probability that the symbols decoded by bob fail to be changed as the desirable ones of the spoofer .the main results of this paper are summarized as follows .* in the bpsk case ( with the constellation points being ) , the spoofing signals are designed by classifying the symbols into two types . in each of type - i symbols ( see fig . [fig : bpsk]-(a ) ) , where the original constellation point of alice and the desirable one of the spoofer are identical ( both are or ) , the spoofing signal is designed to _ constructively _ combine with the original signal of alice at bob to help improve the decoding reliability against gaussian noise . in each of type - ii symbols ( see fig . 
[fig : bpsk]-(b ) ) , where the original constellation point of alice and the desirable one of the spoofer are opposite ( one is ( or ) but the other is ( or ) ) , the spoofing signal is designed to _destructively _ combine with the original signal of alice at bob , thus moving the constellation point towards the desirable opposite direction .we minimize the average sser by optimizing the spoofing signals and their power allocations over type - i and type - ii symbols at the spoofer , subject to its average transmit power constraint .although this problem is non - convex , we derive its optimal solution .it is shown that when the transmit power at alice is low or the spoofing power at the spoofer is high , the spoofer should allocate its transmit power to both type - i and type - ii symbols .otherwise , when the transmit power at alice is high and the spoofing power at the spoofer is low , the spoofer should allocate almost all its transmit power over a certain percentage of type - ii symbols with an `` on - off ''power control . * in the qpsk case with the constellation points being with , the symbols are further classified into three types , where in type - i , type - ii , and type - iii symbols , the original constellation points of alice and the desirable ones of the spoofer are identical , opposite , and neighboring , respectively , as shown in fig .[ fig : qpsk ] . for type - i and type - ii symbols ,the spoofing signals are designed to have equal strengths for the real and imaginary components , such that at the receiver of bob they can be be constructively and destructively combined with the original constellation points by alice , respectively . for type - iii symbols ,the spoofing signals are designed to have independent real and imaginary components .under such a design , we formulate the average sser minimization problem by optimizing the spoofing power allocations over symbols , subject to the average transmit power constraint .though this problem is non - convex and generally difficult , we obtain its optimal solution , motivated by that in the bpsk case .* numerical results show that for both bpsk and qpsk cases , the symbol - level spoofing scheme with optimized transmission achieves a much better spoofing performance ( in terms of a lower average sser ) , as compared to the block - level spoofing benchmark where the spoofer does not exploit the symbol information of alice , and a heuristically designed symbol - level spoofing scheme .it is worth noting that in the existing literature there is another type of higher - layer spoofing attack , which can also be utilized for wireless communication intervention ( see , e.g. , ) .for example , in the medium access control ( mac ) spoofing and internet protocol ( ip ) spoofing , a network attacker can hide its true identity and impersonate another user , so as to access the targeted wireless networks .nevertheless , for these higher - layer spoofing , the network attacker needs to establish new wireless communication links to access the network .in contrast , our proposed symbol - level spoofing is implemented at the physical layer , which can change the communicated information of _ ongoing _ malicious wireless communications , thus leading to a quicker response and intervention that is also more likely to be covert .it is also worth comparing our proposed symbol - level spoofing versus the symbol - level precoding ( not for security ) in downlink multiuser multi - antenna systems . 
in the symbol - level precoding ,the transmitter designs its precoding vectors by exploiting the symbol - level relationships among the messages to different receivers , such that the constructive part of the inter - channel interference is preserved and exploited and only the destructive part is eliminated . although the symbol - level spoofing and precoding are based on a similar design principle of exploiting the symbol - level relationship among co - channel signals , they focus on different application scenarios for different purposes , thus requiring different design methods .the remainder of this paper is organized as follows .section [ sec:2 ] introduces the system model and formulates the average sser minimization problem .sections [ sec:3 ] and [ sec : qpsk ] propose the symbol - level spoofing approach and design the spoofing signals and their power allocations for the cases of bpsk and qpsk modulations , respectively . section [ sec:4 ] presents numerical results to evaluate the performance of the proposed symbol - level spoofing design as compared to other benchmark schemes .finally , section [ sec:5 ] concludes the paper .as shown in fig . [ fig:1 ] , we consider a fundamental three - node system over awgn channels , where an intermediary legitimate spoofer aims to spoof a malicious wireless communication link from alice to bob by changing the communicated data at the bob side . we consider that the malicious communication employs the bpsk or qpsk modulation techniques , which are most commonly used in existing wireless communication systems . in the symbol of this block , we denote the transmitted signal by alice as , where is the transmit power per symbol at alice , and denotes the message that alice wants to deliver to bob . here , is equally likely chosen from the set of constellation points , where and for the bpsk and qpsk cases , respectively .therefore , we have .first , we introduce the receiver model of bob by considering the case without spoofing .accordingly , the received signal by bob in the symbol is expressed as where denotes the noise at the receiver of bob , which is an independent and identically distributed ( i.i.d . )circularly symmetric complex gaussian ( cscg ) random variable with zero mean and unit variance . based on the maximum likelihood ( ml ) detection , the decoded message by bobis expressed as next , we consider the spoofing strategy employed by the spoofer . it is assumed that the spoofer perfectly knows the transmitted symbol information s of alice . here , can be practically obtained by the spoofer via efficient eavesdropping or wiretapping beforehand .for example , if alice is an intermediary node of a multi - hop communication link , then the spoofer can obtain s via eavesdropping the previous hops ; if alice gets its transmitted data from the backhaul or infrastructure - based networks , then the spoofer can acquire them via using wiretapping devices to overhear the backhaul communications ; and furthermore , the spoofer can even secretly install an interceptor software ( e.g. , flexispy ) in the alice s device to get s . note that the assumption about the perfect symbol information at the spoofer has been made in the existing correlated jamming literature ( see , e.g., ) to improve the jamming performance .we make a similar assumption here for the purpose of characterizing the spoofing performance upper bound , and leave the details about the symbol information acquisition for future work . 
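To make the received-signal model and ML detection above concrete, the following Monte-Carlo sketch simulates the three-node BPSK setup with a spoofer that knows Alice's symbols and simply transmits its own desired symbol at a fixed power — a uniform-power baseline, not the optimized allocation developed in the sequel. The closed-form comparison follows from the unit-variance CSCG noise model; its exact constants are assumptions of this sketch rather than quotations from the text, and scipy is assumed to be available for erfc.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)

def spoofing_success_rate(P, Q, n_sym=200_000):
    """Monte-Carlo sketch: Alice sends BPSK symbols with power P, a spoofer that
    knows Alice's symbols transmits sqrt(Q) times its own desired BPSK symbol,
    and Bob applies ML (nearest-point) detection.  Returns the fraction of
    symbols Bob decodes as the spoofer's desired symbols."""
    s_alice = rng.choice([-1.0, 1.0], size=n_sym)
    s_spoof = rng.choice([-1.0, 1.0], size=n_sym)
    noise = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2.0)
    y = np.sqrt(P) * s_alice + np.sqrt(Q) * s_spoof + noise
    decoded = np.where(y.real >= 0.0, 1.0, -1.0)      # ML detection for BPSK
    return np.mean(decoded == s_spoof)

P = 1.0
for Q in (0.5, 1.0, 2.0, 4.0):
    mc = spoofing_success_rate(P, Q)
    # Closed-form check under the same assumptions: identical and opposite
    # symbol pairs occur with probability 1/2 each, and unit-variance CSCG
    # noise gives a per-dimension standard deviation of 1/sqrt(2).
    analytic = 1.0 - 0.5 * (0.5 * erfc(np.sqrt(P) + np.sqrt(Q))
                            + 0.5 * erfc(np.sqrt(Q) - np.sqrt(P)))
    print("Q = %.1f   simulated %.4f   closed form %.4f" % (Q, mc, analytic))
```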
based on the information of s ,the spoofer designs the spoofing signal as in the symbol ( the design details will be provided in the next section ) .then , the received signal at bob is expressed as with the ml detection , the decoded message by bob is expressed as the spoofer aims to maximize the opportunity of changing the messages of alice to be the desirable ones by itself .let denote the desirable constellation point for the symbol , which is equally likely chosen from and is independent from the message sent by alice .nevertheless , due to the limited spoofing power and receiver noise , it is difficult for the spoofer to ensure that all symbols s are successfully changed to be the desirable s . in this case , we define the probability of unsuccessful spoofing in any symbol as the sser , denoted by .then , the objective of the spoofer is to minimize the average sser , i.e. , , where denotes the statistical expectation over all possible symbols .suppose that the spoofer is constrained by a maximum average transmit power denoted by , i.e. , .as a result , the optimization problem of our interest is in the following two sections , we will solve problem ( [ eqn:5 ] ) by considering the bpsp and qpsk modulations , respectively .= 1 of alice , and the blue circular denotes the desirable constellation point of the spoofer . ( a ) an example of type - i symbols , where and are identical with ; ( b ) an example of type - ii symbols , where and are opposite with and .,title="fig:",width=302 ]in this section , we consider the case with bpsk signaling , i.e. , . in the following ,we first propose the symbol - level spoofing signals design and then optimally solve the average sser minimization problem ( [ eqn:5 ] ) in this case . to facilitate the description , as shown in the examples in fig .[ fig : bpsk ] , we classify the symbols over each block into two types as follows based on the relationship between the original constellation point of alice and the desirable one of the spoofer in each symbol . *_ type - i symbol _ : the symbol is called a type - i symbol if and are identical ( or ) .we denote the set of all type - i symbols as . * _ type - ii symbol _ : the symbol is called a type - ii symbol if and are opposite ( and , or and ) .we denote the set of all type - ii symbols as . in the following two propositions , we present the optimal symbol - level spoofing signal design , and obtain the corresponding sser functions . [proposition : typei ] given any type - i symbol , it is optimal to minimize the conditional sser by designing aligning with , where denotes the spoofing power for this symbol .accordingly , is given as where is the error function defined as see appendix [ proof : typei ] .[ proposition : typeii ] given any type - ii symbol , it is optimal to minimize the conditional sser by designing opposite to , where denotes the spoofing power for this symbol .accordingly , is given as this proposition can be proved by following a similar procedure as for proposition [ proposition : typei ] .therefore , the details are omitted for brevity .propositions [ proposition : typei ] and [ proposition : typeii ] are intuitive . in each type - i symbol , proposition [ proposition : typei ] shows that the spoofing signal should be designed such that at the receiver of bob it is _ constructively _ combined with the original signal from alice , thus increasing the received power of the desirable constellation point against gaussian noise . 
in each type - ii symbol , proposition [ proposition : typeii ] shows that at the receiver of bob the spoofing signal should be _ destructively _ combined with the original signal from alice , so as to move the constellation point towards the desirable opposite direction . based on these two propositions ,the average sser minimization problem ( [ eqn:5 ] ) is specified as follows by jointly optimizing the spoofing power s over type - i symbols and s over type - ii symbols . where the term follows from the fact that each of the two symbol sets and on average occupies a half of all symbols over each block .the spoofing power allocation problem ( [ problem : bpsk ] ) is generally non - convex , since the sser function in the objective is non - convex over ( as will be shown next ) .therefore , this problem is difficult to solve . in the following ,we first show some useful properties of the sser functions and , and then present the optimal solution to problem ( [ problem : bpsk ] ) .first , we have the following lemma for the sser function . [ proposition:1 ] is monotonically decreasing and convex over .it is easy to show that over , the first- and second - order derivatives of satisfy that and , respectively .therefore , this lemma follows .next , we study the sser function .[ proposition:2 ] is monotonically decreasing over .the convexity of is given as follows depending on alice s transmit power .* _ alice s low transmit power regime ( i.e. , ) _ : is convex over . * _ alice s high transmit power regime ( i.e. , ) _ : is first convex over ] and , and when .therefore , it follows that is convex over ], it is evident that , i.e. , the slope of the line passing through and is between the values of and .then , we proceed as follows . * in the first step , we decrease the value of to find a new such that .note that is convex over ] and .since the function is convex over this regime , and , it is evident that over such two regimes , the points are above the line passing through and . then , consider the regime with ] we have , i.e. , is convex .next , note that when , it follows that .also , when , we have . by combining the above two facts , when , it holds that .accordingly , and is convex .y. zou , x. wang , and l. hanzo , `` a survey on wireless security : technical challenges , recent advances and future trends , '' to appear in _ proc .ieee_. [ online ] available : http://arxiv.org/abs/1505.07919 .y. zeng and r. zhang , `` wireless information surveillance via proactive eavesdropping with spoofing relay , '' to appear in _ieee j. sel .topics signal process._. [ online ] available : https://arxiv.org/abs/1606.03851 .m. alodeh , s. chatzinotas , and b. ottersten , `` constructive multiuser interference in symbol level precoding for the miso downlink channel , '' _ ieee trans .signal process .9 , pp . 2239 - 2252 , may 2015 . | with recent developments of wireless communication technologies , malicious users can use them to commit crimes or launch terror attacks , thus imposing new threats on the public security . to quickly respond to defend these attacks , authorized parities ( e.g. , the national security agency of the usa ) need to intervene in the malicious communication links over the air . this paper investigates this emerging wireless communication intervention problem at the physical layer . unlike prior studies using jamming to disrupt or disable the targeted wireless communications , we propose a new physical - layer spoofing approach to change their communicated information . 
consider a fundamental three - node system over additive white gaussian noise ( awgn ) channels , in which an intermediary legitimate spoofer aims to spoof a malicious communication link from alice to bob , such that the received message at bob is changed from alice s originally sent message to the one desired by the spoofer . we propose a new symbol - level spoofing scheme , where the spoofer designs the spoofing signal via exploiting the symbol - level relationship between each original constellation point of alice and the desirable one of the spoofer . in particular , the spoofer aims to minimize the average spoofing - symbol - error - rate ( sser ) , which is defined as the average probability that the symbols decoded by bob fail to be changed or spoofed , by designing its spoofing signals over symbols subject to the average transmit power constraint . by considering two cases when alice employs the widely - used binary phase - shift keying ( bpsk ) and quadrature phase - shift keying ( qpsk ) modulations , we obtain the respective optimal solutions to the two average sser minimization problems . numerical results show that the symbol - level spoofing scheme with optimized transmission achieves a much lower average sser , as compared to other benchmark schemes . wireless communication surveillance and intervention , symbol - level spoofing , spoofing - symbol - error - rate ( sser ) minimization , power control . [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] [ section ] |
the problem of fading and the ways to combat it through spatial diversity techniques have been an active area of research .multiple - input multiple - output ( mimo ) techniques have become popular in realizing spatial diversity and high data rates through the use of multiple transmit antennas . for such co - located multiple transmit antenna systems low maximum - likelihood ( ml ) decoding complexity space - time block codes ( stbcs )have been studied by several researchers - which include the well known complex orthogonal designs ( cods ) and their generalizations .recent research has shown that the advantages of spatial diversity could be realized in single - antenna user nodes through user cooperation , via relaying .a simple wireless relay network of nodes consists of a single source - destination pair with relays . for such relay channels , use of cods, has been studied in .cods are attractive for cooperative communications for the following reasons : they offer full diversity gain and coding gain , they are ` scale free ' in the sense that deleting some rows does not affect the orthogonality , entries are linear combination of the information symbols and their conjugates which means only linear processing is required at the relays , and they admit very fast ml decoding ( single - symbol decoding ( ssd ) ) .however , it should be noted that the last property applies only to the decode - and - forward ( df ) policy at the relay node . in a scenario where the relays amplify and forward ( af ) the signal, it is known that the orthogonality is lost , and hence the destination has to use a complex multi - symbol ml decoding or sphere decoding , .it should be noted that the af policy is attractive for two reasons : the complexity at the relay is greatly reduced , and the restrictions on the rate because the relay has to decode is avoided . in order to avoid the complex ml decoding at the destination , in ,the authors propose an alternative code design strategy and propose a ssd code for 2 and 4 relays .for arbitrary number of relays , recently in , distributed orthogonal stbcs ( dostbcs ) have been studied and it is shown that if the destination has the complete channel state information ( csi ) of all the source - to - relay channels and the relay - to - destination channels , then the maximum possible rate is upper bounded by complex symbols per channel use for relays . towards improving the rate of transmission and achieving simultaneously both full - diversity as well as ssd at the destination , in this paper , we study relay channels with the assumption that the relays have the phase information of the source - to - relay channels and the destination has the csi of all the channels .coding for partially - coherent relay channel ( pcrc , section [ pcrc_sec ] ) has been studied in , where a sufficient condition for ssd has been presented .the contributions of this paper can be summarized as follows : * first , a new set of necessary and sufficient conditions for a stbc to be ssd for co - located multiple antenna communication is obtained .the known set of necessary and sufficient conditions in is in terms of the dispersion matrices ( weight matrices ) of the code , whereas our new set of conditions is in terms of the column vector representation matrices of the code and is a generalization of the conditions given in in terms of column vector representation matrices for cods . 
* a set of necessary and sufficient conditions for a distributed stbc ( dstbc ) to be ssd for a pcrc is obtained by identifying the additional conditions . using this , several ssd dstbcs for pcrcare identified among the known classes of stbcs for co - located multiple antenna system . *it is proved that even if a ssd stbc for a co - located mimo channel does not satisfy the additional conditions for the code to be ssd for a pcrc , single - symbol decoding of it in a pcrc gives full - diversity and only coding gain is lost . *it is shown that when a dstbc is ssd for a pcrc , then arbitrary coordinate interleaving of the in - phase and quadrature - phase components of the variables does not disturb its ssd property for pcrc . *it is shown that the possibility of _ channel phase compensation _ operation at the relay nodes using partial csi at the relays increases the possible rate of ssd dstbcs from when the relays do not have csi to which is independent of . *extensive simulation results are presented to illustrate the above contributions .the remaining part of the paper is organized as follows : in section [ sec2 ] , the signal model for a pcrc is developed . using this model , in section [ sec3 ] , a new set of necessary and sufficient conditions for a stbc to be ssd in a co -located mimo is presented .several classes of ssd codes are discussed and conditions for full - diversity of a subclass of ssd codes is obtained .then , in section [ sec4 ] , ssd dstbcs for pcrc are characterized by identifying a set of necessary and sufficient conditions .it is shown that the ssd property is invariant under coordinate interleaving operations which leads to a class of ssd dstbcs for pcrc . the class of rate half cods obtained from rate one real orthogonal designs ( rods ) by stacking construction is shown to be ssd for pcrc .also , it is shown that ssd codes for co - located mimo , under suboptimal ssd decoder for pcrc offer full diversity .simulation results and discussion constitute section [ sec5 ] .conclusions and scope for further work are presented in section [ sec6 ] .consider a wireless network with nodes consisting of a source , a destination , and relays , as shown in fig .all nodes are half - duplex nodes , i.e. , a node can either transmit or receive at a time on a specific frequency .we consider amplify - and - forward ( af ) transmission at the relays .transmission from the source to the destination is carried out in two phases . in the first phase ,the source transmits information symbols in time slots .all the relays receive these symbols .this phase is called the _broadcast phase_. in the second phase , all the relays relays participate in the cooperative transmission .it is also possible that some relays do not participate in the transmission based on whether the channel is in outage or not .we do not consider such a partial participation scenario here .] 
perform distributed space - time block encoding on their received vectors and transmit the resulting encoded vectors in time slots .that is , each relay will transmit a column ( with entries ) of a distributed stbc matrix of size .the destination receives a faded and noise added version of this matrix .this phase is called the _ relay phase_.we assume that the source - to - relay channels remain static over time slots , and the relay - to - destination channels remain static over time slots .the received signal at the relay , , in the time slot , , denoted by , can be written as and denote transpose and conjugate transpose operations , respectively and denotes matrix conjugation operation . ] where is the complex channel gain from the source to the relay , is additive white gaussian noise at relay with zero mean and unit variance , is the transmit energy per symbol in the broadcast phase , and = 1 ] . substituting ( [ stackx ] ) in ( [ rx1_no_csi ] ) ,we can write in this subsection , we obtain a signal model for the case of partial csi at the relays , where we assume that each relay has the knowledge of the channel phase on the link between the source and itself in the broadcast phase . that is , defining the channel gain from source to relay as , we assume that relay has perfect knowledge of only and does not have the knowledge of in the proposed scheme , we perform a phase compensation operation on the amplified received signals at the relays , and space - time encoding is done on these phase - compensated signals .that is , we multiply in ( [ no_comp ] ) by before space - time encoding .note that multiplication by does not change the statistics of .therefore , with this phase compensation , the vector in ( [ vhat2x ] ) becomes consequently , the vector generated by relay is given by where is the equivalent weight matrix with phase compensation .now , we can write the received vector as figure [ fig2 ] shows the processing at the relay in the proposed phase compensation scheme . such systems will be referred as _ partially - coherent relay channels _ ( pcrc ) . a distributed stbc which is ssd for a pcrcwill be referred as ssd - dstbc - pcrc .the class of ssd codes , including the well known cods , for co - located mimo has been studied in , where a set of necessary and sufficient conditions for an arbitrary linear stbc to be ssd has been obtained in terms of the dispersion matrices , also known as weight matrices . in this section , a new set of necessary and sufficient conditions in terms of the column vector representation matrices of the codeis obtained that are amenable for extension to pcrc .this is a generalization of the conditions given in in terms of column vector representation matrices for cods . 
towards this end , the received vector in a co -located mimo setup can be written as for co - located mimo with transmit antennas , the linear stbc as given in ( [ rx_colocate ] ) is ssd _ iff _ where , where and are real matrices , and and are block diagonal matrices of the form } _ { { \bf d}_{ij,1}^{(k ) } } & { \bf 0 } & \cdots & { \bf 0 } \\ { \bf 0 } & \underbrace { \left [ \begin{array}{cc } a_{ij,2}^{(k ) } & b_{ij,2}^{(k ) } \\b_{ij,2}^{(k ) } & c_{ij,2}^{(k ) } \end{array } \right ] } _ { { \bf d}_{ij,2}^{(k ) } } & \cdots & { \bf 0 } \\ \vdots & \vdots & \ddots & \vdots \\ { \bf 0 } & \cdots & \cdots & \underbrace { \left [ \begin{array}{cc } a_{ij , t_1}^{(k ) } & b_{ij , t_1}^{(k ) } \\ b_{ij ,t_1}^{(k ) } & c_{ij , t_1}^{(k ) } \end{array}\right ] } _ { { \bf d}_{ij , t_1}^{(k ) } } \end{array}\right ] , \end{aligned}\ ] ] where it is understood that whenever the superscript is ( 1 ) as in then _ proof : _ in ( [ rx2_no_csi ] ) , let .then the ml optimal detection of is given by latexmath:[\ ] ]we summarize the conclusions in this paper and future work as follows .amplify - and - forward ( af ) schemes in cooperative communications are attractive because of their simplicity .full diversity ( fd ) , linear - complexity single symbol decoding ( ssd ) , and high rates of dstbcs are three important attributes to work towards af cooperative communications . earlier work in has shown that , without assuming phase knowledge at the relays , fd and ssd can be achieved in af distributed orthogonal stbc schemes ; however , the rate achieved decreases linearly with the number of relays .our work in this paper established that if phase knowledge is exploited at the relays in the way we have proposed , then fd , ssd , and high rate can be achieved simultaneously ; in particular , the rate achieved in our scheme can be , which is independent of the number of relays .we proved the ssd for our scheme in theorem 2 .fd was proved in theorem 6 .rate-1/2 construction for any was presented in theorem 5 .in addition to these results , we also established other results regarding invariance of ssd under coordinate interleaving ( theorem 4 ) , and retention of fd even with single - symbol non - ml decoding .simulation results confirming the claims were presented .all these important results have not been shown in the literature so far .these results offer useful insights and knowledge for the designers of future cooperative communication based systems ( e.g. , cooperative communication ideas are being considered in future evolution of standards like ieee 802.16 ) . in this work ,we have assumed only phase knowledge at the relays .of course , one can assume that both amplitude as well as the phase of source - to - relay are known at the relay .a natural question that can arise then is ` what can amplitude knowledge at the relay ( in addition to phase knowledge ) buy ? ' since we have shown that phase knowledge alone is adequate to achieve fd , some extra coding gain may be possible with amplitude knowledge .this aspect of the problem is beyond the scope of this paper ; but it is a valid topic for future work .v. tarokh , h. jafarkhani , and a. r. calderbank , `` space - time block codes from orthogonal designs , '' _ ieee trans .theory , _ vol .1456 - 1467 , july 1999 . o. tirkkonen and a. hottinen , `` square matrix embeddable stbc for complex signal constellations space - time block codes from orthogonal design , '' _ ieee trans .theory , _ vol .384 - 395 , february 2002 .w. 
su and x .- g .xia , `` signal constellations for quasi - orthogonal space - time block codes with full diversity , '' _ ieee trans .theory , _ vol .2331 - 2347 , october 2004 . c. yuen , y. l. guan , and t. t. tjhung , a class of four - group quasi - orthogonal space - time block code achieving full rate and full diversity for any number of antennas , " _ proc .ieee pimrc2005 , _ vol .1 , pp . 92 - 96 , september 2005 .d. n. dao , c. yuen , c. tellambura , y. l. guan , and t. t. tjhung , `` four - group decodable space - time block codes , '' available on line in arxiv:0707.3959v1 [ cs.it ] , 26 july 2007 . also to appear in_ ieee trans . on signal processing . | space - time block codes ( stbcs ) that are single - symbol decodable ( ssd ) in a co - located multiple antenna setting need not be ssd in a distributed cooperative communication setting . a relay network with relays and a single source - destination pair is called a partially - coherent relay channel ( pcrc ) if the destination has perfect channel state information ( csi ) of all the channels and the relays have only the phase information of the source - to - relay channels . in this paper , first , a new set of necessary and sufficient conditions for a stbc to be ssd for co - located multiple antenna communication is obtained . then , this is extended to a set of necessary and sufficient conditions for a distributed stbc ( dstbc ) to be ssd for a pcrc , by identifying the additional conditions . using this , several ssd dstbcs for pcrc are identified among the known classes of stbcs . it is proved that even if a ssd stbc for a co - located mimo channel does not satisfy the additional conditions for the code to be ssd for a pcrc , single - symbol decoding of it in a pcrc gives full - diversity and only coding gain is lost . it is shown that when a dstbc is ssd for a pcrc , then arbitrary coordinate interleaving of the in - phase and quadrature - phase components of the variables does not disturb its ssd property for pcrc . finally , it is shown that the possibility of _ channel phase compensation _ operation at the relay nodes using partial csi at the relays increases the possible rate of ssd dstbcs from when the relays do not have csi to , which is independent of . 0.25 in 2.00pc 1.85pc |
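As a concrete illustration of the relay-side processing in the partially coherent relay channel described above, the sketch below amplifies the broadcast-phase reception, rotates it to compensate the phase of the source-to-relay channel (the only CSI assumed at the relay), and applies a pair of linear dispersion matrices to the result and its conjugate to form that relay's transmitted column. The Alamouti-style matrices, unit amplification factor, and QPSK symbols are illustrative choices, not the specific code construction or power normalization of the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def relay_process(r_recv, theta, amp, A, B):
    """Per-relay processing sketch for the partially coherent relay channel:
    amplify the received vector, rotate it by exp(-1j*theta) to compensate the
    source-to-relay channel phase, then apply the relay's dispersion matrices
    (A, B) to the result and its conjugate."""
    v = amp * np.exp(-1j * theta) * r_recv
    return A @ v + B @ np.conj(v)

# Two-relay example with Alamouti-style dispersion matrices (an illustrative
# choice, together with a unit amplification factor).
A1, B1 = np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, -1]])
A2, B2 = np.array([[0, 1], [0, 0]]), np.array([[0, 0], [1, 0]])

E1 = 1.0                                                    # source energy per symbol
s = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)   # QPSK symbols
g = rng.normal(size=2) + 1j * rng.normal(size=2)            # source-to-relay channels
for k, (A, B) in enumerate([(A1, B1), (A2, B2)]):
    n = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
    r_recv = np.sqrt(E1) * g[k] * s + n                     # broadcast-phase reception
    t_k = relay_process(r_recv, np.angle(g[k]), amp=1.0, A=A, B=B)
    print("relay", k + 1, "transmits", np.round(t_k, 3))
```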
compressive sensing ( cs ) is a recently proposed concept that enables sampling below nyquist rate , without ( or with little ) sacrificing reconstruction quality . based on exploiting the signal sparsity in typical domains ,cs methods can be used in the sensing devices , such as mr imaging and ad conversion , where the devices have a high cost of acquiring each additional sample or a high requirement on time .therefore , as the sparsity level is often not known a priori , it can be very challenging to use cs in practical sensing hardware .sequential compressive sensing can effectively deal with the above problems .sequential cs considers a scenario where the observations can be obtained in sequence , and computations with observations are performed to decide whether these samples are enough .consequently , it is allowed to recover the signal either exactly or to a given tolerance from the smallest possible number of observations .there have been several recovery algorithms for sequential cs .asif solved the problem by homotopy method .garrigues discussed the lasso problem with sequential observations .this work extends a recent proposed zero - point attracting projection ( zap ) algorithm to the scenario of sequential cs .zap employs an approximate norm as the sparsity constraint and updates in the solution space . comparing with the existing algorithms , it needs fewer measurements and lower computation complexity .therefore the new algorithm can provide a much more appropriate solution for practical sensing devices , which is validated by numerical simulations .suppose is an unknown sparse signal , which is -length but has only nonzero entries , where .in their ice - breaking contributions , candes et al suggested to measure with under - determined observations , i.e. , where consists of random entries and has much fewer rows than columns .they also proved that norm or norm constraint optimization can successfully recover the unknown signal with overwhelming probability , there are many methods proposed to solve ( [ problem ] ) , of which concerned in this work is zap .zap iteratively searches the sparsest result in solution space .the recursion starts from the least square optimal solution , , where denotes the pseudo - inverse matrix of . in the iteration ,the solution is first updated along the negative gradient direction of a sparse penalty , where denotes a sparse constraint function and denotes the step size . in the reference , an approximate norm is employed and the corresponding entry of is where is a controlling parameter and it is readily recognized that the penalty tends to norm as approaches to infinity . then is projected back to the solution space to satisfy the observation constraint , where is defined as projection matrix and .equation ( [ za ] ) appears that an attractor locates at the zero - point is pulling the iterative solution to be sparser , as explains the first part of the algorithm s name .the last part comes from ( [ projection ] ) , which means that is projected back to the solution space . imagining a scenario that the samples are measured in realtime . 
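Before turning to the sequential setting just introduced, the batch ZAP recursion described above can be sketched as follows. The smooth l0 surrogate F(x) = sum_i (1 - exp(-alpha |x_i|)) and the fixed step size are assumptions of this sketch — the original algorithm's exact penalty and step-size schedule may differ — and the problem sizes are illustrative.

```python
import numpy as np

def zap(A, y, alpha=5.0, kappa=0.05, n_iter=500):
    """Zero-point attracting projection (ZAP) sketch: each iteration takes a
    gradient step on a smooth l0 surrogate (the zero-point attraction) and
    projects back onto the solution space {x : A x = y}."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                                   # least-squares initialisation
    for _ in range(n_iter):
        grad = alpha * np.sign(x) * np.exp(-alpha * np.abs(x))
        x_tilde = x - kappa * grad                   # zero-point attraction step
        x = x_tilde - A_pinv @ (A @ x_tilde - y)     # projection to the solution space
    return x

# Small demonstration on a synthetic sparse recovery problem.
rng = np.random.default_rng(3)
N, M, K = 100, 40, 5
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(size=K)
A = rng.normal(size=(M, N))
y = A @ x_true
x_hat = zap(A, y)
print("reconstruction MSE:", np.mean((x_hat - x_true) ** 2))
```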
at time , an -length measurement vector is collected and utilized to solve the sparsest solution by ( [ problem ] ) .if the available measurements are not enough to recover the original sparse signal , a new sample is generated at time , where denotes the sampling weight vector .thus the problem becomes solving , where ,\qquad { \bf a}_{m+1}=\left[{\bf a}_m \atop { \bf a}_{m+1}^{\rm t}\right].\ ] ] obviously , it is a waste of resources if the recovery algorithm is re - initialized without the utilization of earlier estimate , i.e. the available result at time .consequently , the basic aim of sequential compressive signal reconstruction is to find an effective method of refining based on the information of .for conciseness , the detailed iteration procedure of online zap is provided in tab.[tab1 ] .it can be seen that online zap has two recursions .the inner iteration is to update the solution by zap with the given measurements .the outer iteration is for sequential input . in order to improve the performance ,several techniques are used in online zap and they are discussed in the following subsections ..the procedure of online zap [ cols= " < " , ] zap works in an iterative way to produce a sparse solution via recursion in the solution space . in the online scenario , the previous estimate can be used to initialize the incoming iteration , i.e. , where denotes the maximum iteration number at time .the pseudo - inverse matrix plays an important role in the recursion of ( [ projection ] ) . considering the high computational cost of matrix inverse operation , is generally prepared before iterations .however , in the online scenario , becomes time - dependent and need to be recalculated in each time instant . in order to reduce the complexity ,the pseudo - inverse matrix is updated iteratively .define , which is already available after time .consequently , as the new sample is arriving , using basic algebra one has the recursion \left[{\bf a}_m^{\rm t } \ ; { \bf a}_{m+1}\right]\right]^{-1 } = \left[\begin{matrix}{\bf\gamma}_{m}^{-1 } & { \boldsymbol\alpha}_m \\ { \boldsymbol\alpha}_m^{\rm t } & \beta_m\end{matrix}\right]^{-1}\nonumber\\ & = \left[\begin{matrix}{\bf\gamma}_m+\theta_m{\bf\gamma}_m { \boldsymbol\alpha}_m{\boldsymbol\alpha}_m^{\rm t}{\bf\gamma}_m & -\theta_m{\bf\gamma}_m{\boldsymbol\alpha}_m\\ -\theta_m{\boldsymbol\alpha}_m^{\rm t}{\bf\gamma}_m & \theta_m\end{matrix}\right],\label{updategamma}\end{aligned}\ ] ] where as the step size in gradient descent iterations , the parameter controls a tradeoff between the speed of convergence and the accuracy of the solution . in order to improve the performances of the proposed algorithm ,the idea of variable step size is taken into consideration .the control scheme is rather direct : is initialized to be a large value after new sample arrived , and reduced by a factor as long as the iteration is convergent .the reduction is repeated several times until is sufficiently small .since the algorithm has two recursions , we employ and to denote the decreasing speed of outer and inner iteration , respectively . 
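The recursive update of the Gram-matrix inverse used to refresh the pseudo-inverse when a new measurement row arrives can be sketched directly with the block matrix inversion lemma; the refreshed pseudo-inverse then follows by multiplying the transposed measurement matrix with the updated inverse. The dimensions and random test matrices below are illustrative.

```python
import numpy as np

def update_gram_inverse(Gamma, A_m, a_new):
    """Update (A_{m+1} A_{m+1}^T)^{-1} from Gamma = (A_m A_m^T)^{-1} when a new
    measurement row a_new is appended, via the block matrix inversion lemma,
    so that no full inverse has to be recomputed."""
    alpha = A_m @ a_new                          # cross term A_m a_{m+1}
    beta = float(a_new @ a_new)                  # squared norm of the new row
    g = Gamma @ alpha
    theta = 1.0 / (beta - alpha @ g)             # reciprocal of the Schur complement
    top_left = Gamma + theta * np.outer(g, g)
    return np.block([[top_left, -theta * g[:, None]],
                     [-theta * g[None, :], np.array([[theta]])]])

rng = np.random.default_rng(2)
N, M = 50, 10
A_m = rng.normal(size=(M, N))
Gamma = np.linalg.inv(A_m @ A_m.T)
a_new = rng.normal(size=N)
Gamma_seq = update_gram_inverse(Gamma, A_m, a_new)
Gamma_direct = np.linalg.inv(np.vstack([A_m, a_new]) @ np.vstack([A_m, a_new]).T)
print("max deviation from direct inverse:", np.max(np.abs(Gamma_seq - Gamma_direct)))
```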
in addition, is no longer decreased when the step size is rather small .there are two kinds of recursions requiring stop rules in the online zap algorithm .firstly , after the sample arrived , iterates with to produce the best estimate based on the measurements .the inner iteration should stop after the algorithm reaches steady state , which means the sparsity penalty starts increasing .consequently , the inner recursion stops ( a ) when the number of reductions of reaches one- of its initial value or ( b ) when the number of iterations reaches the bound . secondly , as soon as the sparse signal is successfully reconstructed , the following samples are no longer necessary and the sensing procedure stops .therefore , the outer recursion stops when the estimate error is below a particular value .computer simulation is presented in this section to verify the performance of the proposed algorithm compared with typical sequential cs reconstruction algorithm for solving bpdn problem , whose matlab code can be downloaded from the website . in the following experiment ,the entries of each row of are independently generated from normal distribution .the locations of nonzero coefficients of sparse signal are randomly chosen with uniform distribution $ ] .the corresponding nonzero coefficients are gaussian with mean zero and unit variance .the system parameters are and .the number of measurements increases form to .the parameters for bpdn are set as the recommended values by the author .the parameters for online zap are , , , , , .the simulation is repeated ten times , then mean square derivation ( msd ) between the original signal and reconstruction signal as well as the average running time calculated .figure [ varm ] shows msd curve according to . as can be seen ,the performance of zap is better than that of bpdn .when the sparse signal is recovered successfully , the number of measurements bpdn needs is larger than , while the number zap algorithm needs is less than 80 .figure [ time ] demonstrates the cpu running time as increases .again , zap has the better performance .the cpu time of bpdn is twice than that of zap for successful recovery ( according to fig.[varm ] , here is chosen as for comparison ) .we have introduced in this paper a new online signal reconstruction algorithm for sequential compressive sensing .the proposed algorithm extends zap to sequential scenario . andin order to improve the performance , some methods , including the warm start and variable step size , are adopted .the final experiment indicates that the proposed algorithm needs less measurements and less cpu time than the reference algorithm .d. l. donoho , `` compressed sensing , '' _ ieee trans . on information theory _ , 52(4 ) , pp.1289 - 1306 , april 2006 .e. candes , j. romberg , and t. tao , `` robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , '' _ ieee trans .inform . theory _ , 52:489 - 509 , 2006 . m. lustig , d. donoho , and j. pauly , `` sparse mri : the application of compressed sensing for rapid mr imaging , '' _ magnetic resonance in medicine _ , 58(6 ) pp .1182 - 1195 , december 2007 . m. mishali , y. c. eldar , and j. a. tropp , `` efficient sampling of sparse wideband analog signals , '' _ ccit report # 705 _ , october 2008 .d. malioutov , s. sanghavi , and a. willsky , `` compressed sensing with sequential observations , '' _ icassp _ , pp .3357 - 3360 , april 2008 .j. jin , y. gu , and s. 
mei , `` a stochastic gradient approach on compressive sensing signal reconstruction based on adaptive filtering framework , '' _ ieee journal of selected topics in signal processing _ , 4(2 ) , pp.409 - 420 , 2010 . | sequential compressive sensing , which may be widely used in sensing devices , is a popular topic of recent research . this paper proposes an online recovery algorithm for sparse approximation of sequential compressive sensing . several techniques including warm start , fast iteration , and variable step size are adopted in the proposed algorithm to improve its online performance . finally , numerical simulations demonstrate its better performance than the relative art . * keywords : * compressive sensing , sparse signal recovery , sequential , online algorithm , zero - point attracting projection |
a large amount of research has been devoted to the task of defining and identifying communities in social and information networks , _i.e. _ , in graphs in which the nodes represent underlying social entities and the edges represent interactions between pairs of nodes .most recent papers on the subject of community detection in large networks begin by noting that it is a matter of common experience that communities exist in such networks .these papers then note that , although there is no agreed - upon definition for a community , a community should be thought of as a set of nodes that has more and/or better connections between its members than between its members and the remainder of the network .these papers then apply a range of algorithmic techniques and intuitions to extract subsets of nodes and then interpret these subsets as meaningful communities corresponding to some underlying `` true '' real - world communities . in this paper , we explore from a novel perspective several questions related to identifying meaningful communities in large sparse networks , and we come to several striking conclusions that have implications for community detection and graph partitioning in such networks .we emphasize that , in contrast to most of the previous work on this subject , we look at very large networks of up to millions of nodes , and we observe very different phenomena than is seen in small commonly - analyzed networks . at the risk of oversimplifying the large and often intricate body of work on community detection in complex networks ,the following five - part story describes the general methodology : 1 .data are modeled by an `` interaction graph . '' in particular , part of the world gets mapped to a graph in which nodes represent entities and edges represent some type of interaction between pairs of those entities .for example , in a social network , nodes may represent individual people and edges may represent friendships , interactions or communication between pairs of those people .the hypothesis is made that the world contains groups of entities that interact more strongly amongst themselves than with the outside world , and hence the interaction graph should contain sets of nodes , _i.e. _ , communities , that have more and/or better - connected `` internal edges '' connecting members of the set than `` cut edges '' connecting the set to the rest of the world .3 . a objective function or metric is chosen to formalize this idea of groups with more intra - group than inter - group connectivity . 4 . an algorithm is then selected to find sets of nodes that exactly or approximately optimize this or some other related metric .sets of nodes that the algorithm finds are then called `` clusters , '' `` communities , '' `` groups , '' `` classes , '' or `` modules '' . 5 .the clusters or communities or modules are evaluated in some way .for example , one may map the sets of nodes back to the real world to see whether they appear to make intuitive sense as a plausible `` real '' community .alternatively , one may attempt to acquire some form of `` ground truth , '' in which case the set of nodes output by the algorithm may be compared with it . with respect to points ( 1)(4 ) , we follow the usual path . in particular , we adopt points ( 1 ) and ( 2 ) , and we then explore the consequence of making such a choice , _i.e. _ , of making such an hypothesis and modeling assumption . 
for point( 3 ) , we choose a natural and widely - adopted notion of community goodness ( community quality score ) called _ conductance _ , which is also known as the normalized cut metric .informally , the conductance of a set of nodes ( defined and discussed in more detail in section [ sxn : related : lowcondalgs ] ) is the ratio of the number of `` cut '' edges between that set and its complement divided by the number of `` internal '' edges inside that set .thus , to be a good community , a set of nodes should have small conductance , _i.e. _ , it should have many internal edges and few edges pointing to the rest of the network .conductance is widely used to capture the intuition of a good community ; it is a fundamental combinatorial quantity ; and it has a very natural interpretation in terms of random walks on the interaction graph .moreover , since there exist a rich suite of both theoretical and practical algorithms , we can for point ( 4 ) compare and contrast several methods to approximately optimize it .to illustrate conductance , note that of the three -node sets , , and illustrated in the graph in figure [ fig : conduct2 ] , has the best ( the lowest ) conductance and is thus the most community - like .-nodes sets that have been marked , has the best ( _ i.e. _ , the lowest ) conductance , as it has the lowest ratio between the number of edges cut and the number of edges inside .so , set is the best -node community or the most community - like set of nodes in this particular network.,scaledwidth=50.0%,height=226 ] however , it is in point ( 5 ) that we deviate from previous work . instead of focusing on individual groups of nodes and trying to interpret them as `` real '' communities , we investigate statistical properties of a large number of communities over a wide range of size scales in over large sparse real - world social and information networks .we take a step back and ask questions such as : how well do real graphs split into communities ? what is a good way to measure and characterize presence or absence of community structure in networks ?what are typical community sizes and typical community qualities ? to address these and related questions, we introduce the concept of a _ network community profile ( ncp ) plot _ that we define and describe in more detail in section [ sxn : ncpp : def ] .intuitively , the network community profile plot measures the score of `` best '' community as a function of community size in a network .formally , we define it as the conductance value of the minimum conductance set of cardinality in the network , as a function of .as defined , the ncp plot will be np - hard to compute exactly , so operationally we will use several natural approximation algorithms for solving the minimum conductance cut problem in order to compute different approximations to it . by comparing and contrasting these plots for a large number of networks , and by computing other related structural properties ,we obtain results that suggest a significantly more refined picture of the community structure in large real - world networks than has been appreciated previously .we have gone to a great deal of effort to be confident that we are computing quantities fundamental to the networks we are considering , rather than artifacts of the approximation algorithms we employ . 
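to make the preceding definitions concrete , the sketch below is a minimal illustration ( python , assuming networkx and scipy are available ; it is not the machinery used in this paper , which relies on metis+mqi and a local spectral method as discussed in section [ algo - notes - section ] ) . it computes the conductance of a candidate community and traces a crude approximation to the ncp plot by sweeping prefixes of the ordering induced by the fiedler vector of the graph laplacian , recording one approximate conductance value per prefix size .

```python
import networkx as nx
import numpy as np

def conductance(G, S):
    # phi(S) = (# edges leaving S) / min(vol(S), vol(complement)), where vol = sum of degrees
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_S = sum(d for _, d in G.degree(S))
    vol_rest = 2 * G.number_of_edges() - vol_S
    denom = min(vol_S, vol_rest)
    return cut / denom if denom > 0 else 1.0

def ncp_by_spectral_sweep(G):
    # order nodes by the Fiedler vector and sweep growing prefixes, up to half the nodes;
    # this recomputes conductance from scratch at each step (O(n*m)), so it is only
    # meant for small illustrative graphs, not the million-node networks studied here.
    nodes = list(G.nodes())
    order = np.argsort(nx.fiedler_vector(G, normalized=True))
    profile = {}
    prefix = set()
    for idx in order[: len(nodes) // 2]:
        prefix.add(nodes[idx])
        profile[len(prefix)] = conductance(G, prefix)
    return profile                      # k -> (approximate) conductance of the size-k prefix

# toy usage on a small graph with planted groups
G = nx.planted_partition_graph(4, 50, 0.25, 0.01, seed=0)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # fiedler_vector needs connectivity
profile = ncp_by_spectral_sweep(G)
print(min(profile.items(), key=lambda kv: kv[1]))                 # size and score of the best prefix found
```

the approximation algorithms actually used in this study are considerably more careful than this single - sweep sketch .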
in particular : * we use several classes of graph partitioning algorithms to probe the networks for sets of nodes that could plausibly be interpreted as communities .these algorithms , including flow - based methods , spectral methods , and hierarchical methods , have complementary strengths and weaknesses that are well understood both in theory and in practice .for example , flow - based methods are known to have difficulties with expanders , and flow - based post - processing of other methods are known in practice to yield cuts with extremely good conductance values . on the other hand ,spectral methods are known to have difficulties when they confuse long paths with deep cuts , a consequence of which is that they may be viewed as computing a `` regularized '' approximation to the network community profile plot .( see section [ algo - notes - section ] for a more detailed discussion of these and related issues . ) * we compute spectral - based lower bounds and also semidefinite - programming - based lower bounds for the conductance of our network datasets .* we compute a wide range of other structural properties of the networks , _ e.g. _ , sizes , degree distributions , maximum and average diameters of the purported communities , internal versus external conductance values of the purported communities , etc .* we recompute statistics on versions of the networks that have been modified in well - understood ways , _ e.g. _ , by removing small barely - connected sets of nodes or by randomizing the edges .* we compare our results across not only over large social and information networks , but also numerous commonly - studied small social networks , expanders , and low - dimensional manifold - like objects , and we compare our results on each network with what is known from the field from which the network is drawn .to our knowledge , this makes ours the most extensive such analysis of the community structure in large real - world social and information networks .* we compare results with analytical and/or simulational results on a wide range of commonly and not - so - commonly used network generation models . * main empirical findings : * taken as a whole ,the results we present in this paper suggest a rather detailed and somewhat counterintuitive picture of the community structure in large social and information networks .several qualitative properties of community structure , as revealed by the network community profile plot , are nearly universal : * up to a size scale , which empirically is roughly nodes , there not only exist cuts with relatively good conductance , _ i.e. _ , good communities , but also the slope of the network community profile plot is generally sloping downward .this latter point suggests that smaller communities can be combined into meaningful larger communities , a phenomenon that we empirically observe in many cases .* at the size scale of roughly nodes , we often observe the global minimum of the network community profile plot ; these are the `` best '' communities , according to the conductance measure , in the entire graph .these are , however , rather interestingly connected to the rest of the network ; for example , in most cases , we observe empirically that they are a small set of nodes barely connected to the remainder of the network by just a _ single _ edge .* above the size scale of roughly nodes , the network community profile plot gradually increases , and thus there is a nearly inverse relationship between community size and community quality . 
as a function of increasing size ,the best possible communities become more and more `` blended into '' the remainder of the network .intuitively , communities blend in with one another and gradually disappear as they grow larger .in particular , in many cases , larger communities can be broken into smaller and smaller pieces , often recursively , each of which is more community - like than the original supposed community . *even up to the largest size scales , we observe significantly more structure than would be seen , for example , in an expander - like random graph on the same degree sequence .a schematic picture of a typical network community profile plot is illustrated in figure [ fig : intro_ncpp ] . inred ( labeled as `` original network '' ) , we plot community size vs. community quality score for the sets of nodes extracted from the original network . in black( rewired network ) , we plot the scores of communities extracted from a random network conditioned on the same degree distribution as the original network .this illustrates not only tight communities at very small scales , but also that at larger and larger size scales ( the precise cutoff point for which is difficult to specify precisely ) the best possible communities gradually `` blend in '' more and more with the rest of the network and thus gradually become less and less community - like .eventually , even the existence of large well - defined communities is quite questionable if one models the world with an interaction graph , as in point ( 1 ) above , and if one also defines good communities as densely linked clusters that are weakly - connected to the outside , as in hypothesis ( 2 ) above . finally , in blue ( bag of whiskers ), we also plot the scores of communities that are composed of disconnected pieces ( found according to a procedure we describe in section [ sxn : obs_struct ] ) .this blue curve shows , perhaps somewhat surprisingly , that one can often obtain better community quality scores by combining unrelated disconnected pieces . to understand the properties of generative models sufficient to reproduce the phenomena we have observed , we have examined in detail the structure of our social and information networks .although nearly every network is an exception to any simple rule , we have observed that an `` octopus '' or `` jellyfish '' model provides a rough first approximation to structure of many of the networks we have examined .that is , most networks may be viewed as having a `` core , '' with no obvious underlying geometry and which contains a constant fraction of the nodes , and then there is a periphery consisting of a large number of relatively small `` whiskers '' that are only tenuously connected to the core .figure [ fig : intro_graph ] presents a caricature of this network structure . 
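both ingredients of this picture can be probed directly . the sketch below is a simplified illustration ( python / networkx , under the assumption that a whisker may be approximated as a piece that detaches when bridge edges are removed ; the procedure actually used in the paper is the bag - of - whiskers heuristic described in section [ sxn : obs_struct ] ) . it builds a degree - preserving rewired version of a graph , extracts candidate whiskers from both , and compares their sizes , mirroring the original - versus - rewired comparison in figure [ fig : intro_ncpp ] ; a real dataset would be loaded in place of the sparse random stand - in used here .

```python
import networkx as nx

def rewired_copy(G, seed=0):
    # degree-preserving null model: random double-edge swaps on a copy of G
    R = G.copy()
    m = R.number_of_edges()
    nx.double_edge_swap(R, nswap=10 * m, max_tries=100 * m, seed=seed)
    return R

def candidate_whiskers(G):
    # assumes a connected graph: remove all bridge edges; everything except the largest
    # remaining piece is a candidate "whisker" (attached to the core through a single edge)
    H = G.copy()
    H.remove_edges_from(list(nx.bridges(G)))
    pieces = sorted(nx.connected_components(H), key=len, reverse=True)
    return pieces[1:]

# a very sparse random graph (average degree ~2.4) as a stand-in for a real dataset
G = nx.gnm_random_graph(5000, 6000, seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
R = rewired_copy(G)

for name, graph in [("original", G), ("rewired", R)]:
    cc = graph.subgraph(max(nx.connected_components(graph), key=len)).copy()
    sizes = sorted((len(c) for c in candidate_whiskers(cc)), reverse=True)
    print(name, "whiskers:", len(sizes), "largest three:", sizes[:3])
```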
of course, our network datasets are far from random in numerous ways_e.g ._ , they have higher edge density in the core ; the small barely - connected whisker - like pieces are generally larger , denser , and more common than in corresponding random graphs ; they have higher local clustering coefficients ; and this local clustering information gets propagated globally into larger clusters or communities in a subtle and location - specific manner .more interestingly , as shown in figure [ fig : phicore ] in section [ sxn : obs_struct : remove_whiskers ] , the core itself consists of a nested core - periphery structure .* main modeling results : * the behavior that we observe is not reproduced , at even a qualitative level , by any of the commonly - used network generation models we have examined , including but not limited to preferential attachment models , copying models , small - world models , and hierarchical network models . moreover , this behavior is qualitatively different than what is observed in networks with an underlying mesh - like or manifold - like geometry ( which may not be surprising , but is significant insofar as these structures are often used as a scaffolding upon which to build other models ) , in networks that are good expanders ( which may be surprising , since it is often observed that large social networks are expander - like ) , and in small social networks such as those used as testbeds for community detection algorithms ( which may have implications for the applicability of these methods to detect large community - like structures in these networks ) . for the commonly - used network generation models , as well as for expander - like , low - dimensional , and small social networks , the network community profile plots are generally downward sloping or relatively flat .although it is well understood at a qualitative level that nodes that are `` far apart '' or `` less alike '' ( in some sense ) should be less likely to be connected in a generative model , understanding this point quantitatively so as to reproduce the empirically - observed relationship between small - scale and large - scale community structure turns out to be rather subtle .we can make the following observations : * very sparse random graph models with no underlying geometry have relatively deep cuts at small size scales , the best cuts at large size scales are very shallow , and there is a relatively abrupt transition in between .( this is shown pictorially in figure [ fig : intro_ncpp ] for a randomly rewired version of the original network . )this is a consequence of the extreme sparsity of the data : sufficiently dense random graphs do not have these small deep cuts ; and the relatively deep cuts in sparse graphs are due to small tree - like pieces that are connected by a single edge to a core which is an extremely good expander . *a forest fire generative model , in which edges are added in a manner that imitates a fire - spreading process , reproduces not only the deep cuts at small size scales and the absence of deep cuts at large size scales but other properties as well : the small barely connected pieces are significantly larger and denser than random ; and for appropriate parameter settings the network community profile plot increases relatively gradually as the size of the communities increases . 
* the details of the `` forest fire '' burning mechanism are crucial for reproducing how local clustering information gets propagated to larger size scales in the network , and those details shed light on the failures of commonly - used network generation models . in the forest fire model ,a new node selects a `` seed '' node and links to it . then with some probability it `` burns '' or adds an edge to the each of the seed s neighbors , and so on , recursively .although there is a `` preferential attachment '' and also a `` copying '' flavor to this mechanism , two factors are particularly important : first is the local ( in a graph sense , as there is no underlying geometry in the model ) manner in which the edges are added ; and second is that the number of edges that a new node can add can vary widely , depending on the local structure around the seed node .depending on the neighborhood structure around the seed , small fires will keep the community well - separated from the network , but occasional large fires will connect the community to the rest of the network and make it blend into the network core .thus , intuitively , the structure of the whiskers ( components connected to the rest of the graph via a single edge ) are responsible for the downward part of the network community profile plot , while the core of the network and the manner in which the whiskers root themselves to the core helps to determine the upward part of the network community profile plot . due to local clustering effects , whiskers in real networksare larger and give deeper cuts than whiskers in corresponding randomized graphs , fluctuations in the core are larger and deeper than in corresponding randomized graphs , and thus the network community profile plot increases more gradually and levels off to a conductance value well below the value for a corresponding rewired network . * main methodological contributions :* to obtain these and other conclusions , we have employed approximation algorithms for graph partitioning to investigate structural properties of our network datasets . briefly, we have done the following : * we have used metis+mqi , which consists of using the popular graph partitioning package metis followed by a flow - based mqi post - processing . with this procedure , we obtain sets of nodes that have very good conductance scores . at very small size scales, these sets of nodes could plausibly be interpreted as good communities , but at larger size scales , we often obtain tenuously - connected ( and in some cases unions of disconnected ) pieces , which perhaps do not correspond to intuitive communities . *thus , we have also used the local spectral method of anderson , chung , and lang to obtain sets of nodes with good conductance value that that are `` compact '' or more `` regularized '' than those pieces returned by metis+mqi .since spectral methods confuse long paths with deep cuts , empirically we obtain sets of nodes that have worse conductance scores than sets returned by metis+mqi , but which are `` tighter '' and more `` community - like . '' for example , at small size scales the sets of nodes returned by the local spectral algorithm agrees with the output of metis+mqi , but at larger scales this algorithm returns sets of nodes with substantially smaller diameter and average diameter , which seem plausibly more community - like . 
we have also used what we call the bag - of - whiskers heuristic to identify small barely connected sets of nodes that exert a surprisingly large influence on the network community profile plot . both metis+mqi and the local spectral algorithm scale well and thus either may be used to obtain sets of nodes from very large graphs . for many of the small to medium - sized networks, we have checked our results by applying one or more other spectral , flow - based , or heuristic algorithms , although these do not scale as well to very large graphs . finally , for some of our smaller network datasets, we have computed spectral - based and semidefinite - programming - based lower bounds , and the results are consistent with the conclusions we have drawn. * broader implications : * our observation that , independently of the network size , compact communities exist only up to a size scale of around nodes agrees well with the `` dunbar number '' , which predicts that roughly individuals is the upper limit on the size of a well - functioning human community .moreover , we should emphasize that our results do not disagree with the literature at small sizes scales .one reason for the difference in our findings is that previous studies mainly focused on small networks , which are simply not large enough for the clusters to gradually blend into one another as one looks at larger size scales . in order to make our observations, one needs to look at large number ( due to the complex noise properties of real graphs ) of large networks .it is only when dunbar s limit is exceeded by several orders of magnitude that it is relatively easy to observe large communities blurring together and eventually vanishing .a second reason for the difference is that previous work did not measure and examine the _ network community profile _ of cluster size vs. cluster quality .finally , we should note that our explanation also aligns well with the _ common bond _ vs. _ common identity _ theory of group attachment from social psychology , where it has been noted that bond communities tend to be smaller and more cohesive , as they are based on interpersonal ties , while identity communities are focused around common theme or interest .we discuss these implications and connections further in section [ sxn : discussion ] .the rest of the paper is organized as follows . in section[ sxn : related ] we describe some useful background , including a brief description of the network datasets we have analyzed .then , in section [ sxn : ncpp ] we present our main results on the properties of the network community profile plot for our network datasets .we place an emphasis on how the phenomena we observe in large social and information networks are qualitatively different than what one would expect based on intuition from and experience with expander - like graphs , low - dimensional networks , and commonly - studied small social networks .then , in sections [ sxn : obs_struct ] and [ algo - notes - section ] , we summarize the results of additional empirical evaluations. 
in particular , in section [ sxn : obs_struct ] , we describe some of the observations we have made in an effort to understand what structural properties of these large networks are responsible for the phenomena we observe ; and in section [ algo - notes - section ] , we describe some of the results of probing the networks with different approximation algorithms in an effort to be confident that the phenomena we observed really are properties of the networks we study , rather than artifactual properties of the algorithms we chose to use to study those networks . we follow this in section [ sxn : models ] with a discussion of complex network generation models .we observe that the commonly - used network generation models fail to reproduce the counterintuitive phenomena we observe .we also notice that very sparse random networks reproduce certain aspects of the phenomena , and that a generative model based on an iterative `` forest fire '' burning mechanism reproduces very well the qualitative properties of the phenomena we observe .finally , in section [ sxn : discussion ] we provide a discussion of our results in a broader context , and in section [ sxn : conclusion ] we present a brief conclusion .in this section , we will provide background on our data and methods . we start in section [ sxn : related : networkdata ] with a description of the network datasets we will analyze .then , in section [ sxn : related : clusters ] , we review related community detection and graph clustering ideas . finally , in section [ sxn : related : lowcondalgs ] , we provide a brief description of approximation algorithms that we will use .there exist a large number of reviews on topics related to those discussed in this paper .for example , see the reviews on community identification , data clustering , graph and spectral clustering , graph and heavy - tailed data analysis , surveys on various aspects of complex networks , the monographs on spectral graph theory and complex networks , and the book on social network analysis . see section [ sxn : discussion ] for a more detailed discussion of the relationship of our work with some of this prior work .we have examined a large number of real - world complex networks .see tables [ tab : data_statsdesc_1 ] , [ tab : data_statsdesc_2 ] , and [ tab : data_statsdesc_3 ] for a summary . for convenience, we have organized the networks into the following categories : social networks ; information / citation networks ; collaboration networks ; web graphs ; internet networks ; bipartite affiliation networks ; biological networks ; low - dimensional networks ; imdb networks ; and amazon networks .we have also examined numerous small social networks that have been used as a testbed for community detection algorithms ( _ e.g. _ , zachary s karate club , interactions between dolphins , interactions between monks , newman s network science network , etc . ) , numerous simple network models in which by design there is an underlying geometry ( _ e.g. _ , power grid and road networks , simple meshes , low - dimensional manifolds including graphs corresponding to the well - studied `` swiss roll '' data set , a geometric preferential attachment model , etc . ) , several networks that are very good expanders , and many simulated networks generated by commonly - used network generation models(_e.g ._ , preferential attachment models , copying models , hierarchical models , etc . 
) .network & & & & & & & & & & description + + delicious & 147,567 & 301,921 & 0.40 & 0.65 & 4.09 & 48.44 & 0.30 & 24 & 6.28 & del.icio.us collaborative tagging social network + epinions & 75,877 & 405,739 & 0.48 & 0.90 & 10.69 & 183.88 & 0.26 & 15 & 4.27 & who - trusts - whom network from epinions.com + flickr & 404,733 & 2,110,078 & 0.33 & 0.86 & 10.43 & 442.75 & 0.40 & 18 & 5.42 & flickr photo sharing social network + linkedin & 6,946,668 & 30,507,070 & 0.47 & 0.88 & 8.78 & 351.66 & 0.23 & 23 & 5.43 & social network of professional contacts + livejournal01 & 3,766,521 & 30,629,297 & 0.78 & 0.97 & 16.26 & 111.24 & 0.36 & 23 & 5.55 & friendship network of a blogging community + livejournal11 & 4,145,160 & 34,469,135 & 0.77 & 0.97 & 16.63 & 122.44 & 0.36 & 23 & 5.61 & friendship network of a blogging community + livejournal12 & 4,843,953 & 42,845,684 & 0.76 & 0.97 & 17.69 & 170.66 & 0.35 & 20 & 5.53 & friendship network of a blogging community + messenger & 1,878,736 & 4,079,161 & 0.53 & 0.78 & 4.34 & 15.40 & 0.09 & 26 & 7.42 & instant messenger social network + email - all & 234,352 & 383,111 & 0.18 & 0.50 & 3.27 & 576.87 & 0.50 & 14 & 4.07 & research organization email network ( all addresses ) + email - inout & 37,803 & 114,199 & 0.47 & 0.82 & 6.04 & 165.73 & 0.58 & 8 & 3.74 & ( all addresses but email has to be sent both ways ) + email - inside & 986 & 16,064 & 0.90 & 0.99 & 32.58 & 74.66 & 0.45 & 7 & 2.60 & ( only emails inside the research organization ) + email - enron & 33,696 & 180,811 & 0.61 & 0.90 & 10.73 & 142.36 & 0.71 & 13 & 3.99 & enron email dataset + answers & 488,484 & 1,240,189 & 0.45 & 0.78 & 5.08 & 251.78 & 0.11 & 22 & 5.72 & yahoo answers social network + answers-1 & 26,971 & 91,812 & 0.56 & 0.87 & 6.81 & 59.17 & 0.08 & 16 & 4.49 & cluster 1 from yahoo answers + answers-2 & 25,431 & 65,551 & 0.48 & 0.80 & 5.16 & 56.57 & 0.10 & 15 & 4.76 & cluster 2 from yahoo answers + answers-3 & 45,122 & 165,648 & 0.53 & 0.87 & 7.34 & 417.83 & 0.21 & 15 & 3.94 & cluster 3 from yahoo answers + answers-4 & 93,971 & 266,199 & 0.49 & 0.82 & 5.67 & 94.48 & 0.08 & 16 & 4.91 & cluster 4 from yahoo answers + answers-5 & 5,313 & 11,528 & 0.41 & 0.73 & 4.34 & 29.55 & 0.12 & 14 & 4.75 & cluster 5 from yahoo answers + answers-6 & 290,351 & 613,237 & 0.40 & 0.71 & 4.22 & 57.16 & 0.09 & 22 & 5.92 & cluster 6 from yahoo answers + + cit - patents & 3,764,105 & 16,511,682 & 0.82 & 0.96 & 8.77 & 21.34 & 0.09 & 26 & 8.15 & citation network of all us patents + cit - hep - ph & 34,401 & 420,784 & 0.96 & 1.00 & 24.46 & 63.50 & 0.30 & 14 & 4.33 & citations between physics ( arxiv hep - th ) papers + cit - hep - th & 27,400 & 352,021 & 0.94 & 0.99 & 25.69 & 106.40 & 0.33 & 15 & 4.20 & citations between physics ( arxiv hep - ph ) papers + blog - nat05 - 6 m & 29,150 & 182,212 & 0.74 & 0.96 & 12.50 & 342.51 & 0.24 & 10 & 3.40 & blog citation network ( 6 months of data ) + blog - nat06all & 32,384 & 315,713 & 0.87 & 0.99 & 19.50 & 153.08 & 0.20 & 18 & 3.94 & blog citation network ( 1 year of data ) + post - nat05 - 6 m & 238,305 & 297,338 & 0.21 & 0.34 & 2.50 & 39.51 & 0.13 & 45 & 10.34 & blog post citation network ( 6 months ) + post - nat06all & 437,305 & 565,072 & 0.22 & 0.38 & 2.58 & 35.54 & 0.11 & 54 & 10.48 & blog post citation network ( 1 year ) + + ata - imdb & 883,963 & 27,473,042 & 0.87 & 0.99 & 62.16 & 517.40 & 0.79 & 15 & 3.48 & imdb actor collaboration network from dec 2007 + ca - astro - ph & 17,903 & 196,972 & 0.89 & 0.98 & 22.00 & 65.70 & 0.67 & 14 & 4.21 & co - authorship in astro - ph of 
arxiv.org + ca - cond - mat & 21,363 & 91,286 & 0.81 & 0.93 & 8.55 & 22.47 & 0.70 & 15 & 5.36 & co - authorship in cond - mat category + ca - gr - qc & 4,158 & 13,422 & 0.64 & 0.78 & 6.46 & 17.98 & 0.66 & 17 & 6.10 & co - authorship in gr - qc category + ca - hep - ph & 11,204 & 117,619 & 0.81 & 0.97 & 21.00 & 130.88 & 0.69 & 13 & 4.71 & co - authorship in hep - ph category + ca - hep - th & 8,638 & 24,806 & 0.68 & 0.85 & 5.74 & 12.99 & 0.58 & 18 & 5.96 & co - authorship in hep - th category + ca - dblp & 317,080 & 1,049,866 & 0.67 & 0.84 & 6.62 & 21.75 & 0.73 & 23 & 6.75 & dblp co - authorship network + l|r|r|r|r|r|r|r|r|r|l network & & & & & & & & & & description + + web - berkstan & 319,717 & 1,542,940 & 0.57 & 0.88 & 9.65 & 1,067.55 & 0.32 & 35 & 5.66 & web graph of stanford and uc berkeley + web - google & 855,802 & 4,291,352 & 0.75 & 0.92 & 10.03 & 170.35 & 0.62 & 24 & 6.27 & web graph google released in 2002 + web - notredame & 325,729 & 1,090,108 & 0.41 & 0.76 & 6.69 & 280.68 & 0.47 & 46 & 7.22 & web graph of university of notre dame + web - trec & 1,458,316 & 6,225,033 & 0.59 & 0.78 & 8.54 & 682.89 & 0.68 & 112 & 8.58 & web graph of trec wt10 g web corpus + + as - routeviews & 6,474 & 12,572 & 0.62 & 0.80 & 3.88 & 164.81 & 0.40 & 9 & 3.72 & as from oregon exchange bgp route view + as - caida & 26,389 & 52,861 & 0.61 & 0.81 & 4.01 & 281.93 & 0.33 & 17 & 3.86 & caida as relationships dataset + as - skitter & 1,719,037 & 12,814,089 & 0.99 & 1.00 & 14.91 & 9,934.01 & 0.17 & 5 & 3.44 & as from traceroutes run daily in 2005 by skitter + as - newman & 22,963 & 48,436 & 0.65 & 0.83 & 4.22 & 261.46 & 0.35 & 11 & 3.83 & as graph from newman + as - oregon & 13,579 & 37,448 & 0.72 & 0.90 & 5.52 & 235.97 & 0.46 & 9 & 3.58 & autonomous systems + gnutella-25 & 22,663 & 54,693 & 0.59 & 0.83 & 4.83 & 10.75 & 0.01 & 11 & 5.57 & gnutella network on march 25 2000 + gnutella-30 & 36,646 & 88,303 & 0.55 & 0.81 & 4.82 & 11.46 & 0.01 & 11 & 5.75 & gnutella p2p network on march 30 2000 + gnutella-31 & 62,561 & 147,878 & 0.54 & 0.81 & 4.73 & 11.60 & 0.01 & 11 & 5.94 & gnutella network on march 31 2000 + edonkey & 5,792,297 & 147,829,887 & 0.93 & 1.00 & 51.04 & 6,139.99 & 0.08 & 5 & 3.66 & p2p edonkey graph for a period of 47 hours in 2004 + + iptraffic & 2,250,498 & 21,643,497 & 1.00 & 1.00 & 19.23 & 94,889.05 & 0.00 & 5 & 2.53 & ip traffic graph a single router for 24 hours + atp - astro - ph & 54,498 & 131,123 & 0.70 & 0.87 & 4.81 & 16.67 & 0.00 & 28 & 7.78 & authors - to - papers network of astro - ph + atp - cond - mat & 57,552 & 104,179 & 0.65 & 0.79 & 3.62 & 10.54 & 0.00 & 31 & 9.96 & authors - to - papers network of cond - mat + atp - gr - qc & 14,832 & 22,266 & 0.47 & 0.60 & 3.00 & 9.72 & 0.00 & 35 & 11.08 & authors - to - papers network of gr - qc + atp - hep - ph & 47,832 & 86,434 & 0.60 & 0.76 & 3.61 & 16.80 & 0.00 & 27 & 8.55 & authors - to - papers network of hep - ph + atp - hep - th & 39,986 & 64,154 & 0.53 & 0.68 & 3.21 & 13.07 & 0.00 & 36 & 10.74 & authors - to - papers network of hep - th + atp - dblp & 615,678 & 944,456 & 0.49 & 0.64 & 3.07 & 13.61 & 0.00 & 48 & 12.69 & dblp authors - to - papers bipartite network + spending & 1,831,540 & 2,918,920 & 0.34 & 0.58 & 3.19 & 1,536.35 & 0.00 & 26 & 5.62 & users - to - keywords they bid + hw7 & 653,260 & 2,278,448 & 0.99 & 0.99 & 6.98 & 346.85 & 0.00 & 24 & 6.26 & downsampled advertiser - query bid graph + netflix & 497,959 & 100,480,507 & 1.00 & 1.00 & 403.57 & 28,432.89 & 0.00 & 5 & 2.31 & users - to - movies they rated . 
from netflix prize + queryterms & 13,805,808 & 17,498,668 & 0.28 & 0.41 & 2.53 & 14.92 & 0.00 & 86 & 19.81 & users - to - queries they submit to a search engine + clickstream & 199,308 & 951,649 & 0.39 & 0.87 & 9.55 & 430.74 & 0.00 & 7 & 3.83 & users - to - urls they visited + + bio - proteins & 4,626 & 14,801 & 0.72 & 0.91 & 6.40 & 24.25 & 0.12 & 12 & 4.24 & yeast protein interaction network + bio - yeast & 1,458 & 1,948 & 0.37 & 0.51 & 2.67 & 7.13 & 0.14 & 19 & 6.89 & yeast protein interaction network data + bio - yeastp0.001 & 353 & 1,517 & 0.73 & 0.93 & 8.59 & 20.18 & 0.57 & 11 & 4.33 & yeast protein - protein interaction map + bio - yeastp0.01 & 1,266 & 8,511 & 0.79 & 0.97 & 13.45 & 47.73 & 0.44 & 12 & 3.87 & yeast protein - protein interaction map + l|r|r|r|r|r|r|r|r|r|l network & & & & & & & & & & description + + road - ca & 1,957,027 & 2,760,388 & 0.80 & 0.85 & 2.82 & 3.17 & 0.06 & 865 & 310.97 & california road network + road - usa & 126,146 & 161,950 & 0.97 & 0.98 & 2.57 & 2.81 & 0.03 & 617 & 218.55 & usa road network ( only main roads ) + road - pa & 1,087,562 & 1,541,514 & 0.79 & 0.85 & 2.83 & 3.20 & 0.06 & 794 & 306.89 & pennsylvania road network + road - tx & 1,351,137 & 1,879,201 & 0.78 & 0.84 & 2.78 & 3.15 & 0.06 & 1,064 & 418.73 & texas road network + powergrid & 4,941 & 6,594 & 0.62 & 0.69 & 2.67 & 3.87 & 0.11 & 46 & 19.07 & power grid of western states power grid + mani - faces7k & 696 & 6,979 & 0.98 & 0.99 & 20.05 & 37.99 & 0.56 & 16 & 5.52 & faces ( 64x64 grayscale images ) ( connect 7k closest pairs ) + mani - faces4k & 663 & 3,465 & 0.90 & 0.97 & 10.45 & 20.20 & 0.56 & 29 & 8.96 & faces ( connect 4k closest pairs ) + mani - faces2k & 551 & 1,981 & 0.84 & 0.94 & 7.19 & 12.77 & 0.54 & 32 & 11.07 & faces ( connect 2k closest pairs ) + mani - facesk10 & 698 & 6,935 & 1.00 & 1.00 & 19.87 & 25.32 & 0.51 & 6 & 3.25 & faces ( connect every to 10 nearest neighbors ) + mani - facesk3 & 698 & 2,091 & 1.00 & 1.00 & 5.99 & 7.98 & 0.45 & 9 & 4.89 & faces ( connect every to 5 nearest neighbors ) + mani - facesk5 & 698 & 3,480 & 1.00 & 1.00 & 9.97 & 12.91 & 0.48 & 7 & 4.03 & faces ( connect every to 3 nearest neighbors ) + mani - swiss200k & 20,000 & 200,000 & 1.00 & 1.00 & 20.00 & 21.08 & 0.59 & 103 & 37.21 & swiss - roll ( connect 200k nearest pairs of nodes ) + mani - swiss100k & 19,990 & 99,979 & 1.00 & 1.00 & 10.00 & 11.02 & 0.59 & 162 & 58.32 & swiss - roll ( connect 100k nearest pairs of nodes ) + mani - swiss60k & 19,042 & 57,747 & 0.93 & 0.96 & 6.07 & 7.03 & 0.59 & 243 & 89.15 & swiss - roll ( connect 60k nearest pairs of nodes ) + mani - swissk10 & 20,000 & 199,955 & 1.00 & 1.00 & 20.00 & 25.38 & 0.56 & 10 & 5.47 & swiss - roll ( every node connects to 10 nearest neighbors ) + mani - swissk5 & 20,000 & 99,990 & 1.00 & 1.00 & 10.00 & 12.89 & 0.54 & 13 & 8.34 & swiss - roll ( every node connects to 5 nearest neighbors ) + mani - swissk3 & 20,000 & 59,997 & 1.00 & 1.00 & 6.00 & 7.88 & 0.50 & 17 & 6.89 & swiss - roll ( every node connects to 3 nearest neighbors ) + + atm - imdb & 2,076,978 & 5,847,693 & 0.49 & 0.82 & 5.63 & 65.41 & 0.00 & 32 & 6.82 & actors - to - movies graph from imdb ( imdb.com ) + imdb - top30 & 198,430 & 566,756 & 0.99 & 1.00 & 5.71 & 18.19 & 0.00 & 26 & 8.32 & actors - to - movies graph heavily preprocessed + imdb - raw07 & 601,481 & 1,320,616 & 0.54 & 0.79 & 4.39 & 20.94 & 0.00 & 32 & 8.55 & country clusters were extracted from this graph + imdb - france & 35,827 & 74,201 & 0.51 & 0.76 & 4.14 & 14.62 & 0.00 & 20 & 6.57 & cluster of french movies + imdb 
- germany & 21,258 & 42,197 & 0.56 & 0.78 & 3.97 & 13.69 & 0.00 & 34 & 7.47 & german movies ( to actors that played in them ) + imdb - india & 12,999 & 25,836 & 0.57 & 0.78 & 3.98 & 31.55 & 0.00 & 19 & 6.00 & indian movies + imdb - italy & 19,189 & 37,534 & 0.55 & 0.77 & 3.91 & 11.66 & 0.00 & 30 & 6.91 & italian movies + imdb - japan & 15,042 & 34,131 & 0.60 & 0.82 & 4.54 & 16.98 & 0.00 & 19 & 6.81 & japanese movies + imdb - mexico & 13,783 & 36,986 & 0.64 & 0.86 & 5.37 & 24.15 & 0.00 & 19 & 5.43 & mexican movies + imdb - spain & 15,494 & 31,313 & 0.51 & 0.76 & 4.04 & 14.22 & 0.00 & 28 & 6.44 & spanish movies + imdb - uk & 42,133 & 82,915 & 0.52 & 0.76 & 3.94 & 15.14 & 0.00 & 23 & 7.04 & uk movies + imdb - usa & 241,360 & 530,494 & 0.51 & 0.78 & 4.40 & 25.25 & 0.00 & 30 & 7.63 & usa movies + imdb - wgermany & 12,120 & 24,117 & 0.56 & 0.78 & 3.98 & 11.73 & 0.00 & 22 & 6.26 & west german movies + + amazon0302 & 262,111 & 899,792 & 0.95 & 0.97 & 6.87 & 11.14 & 0.43 & 38 & 8.85 & amazon products from 2003 03 02 + amazon0312 & 400,727 & 2,349,869 & 0.94 & 0.99 & 11.73 & 30.33 & 0.42 & 20 & 6.46 & amazon products from 2003 03 12 + amazon0505 & 410,236 & 2,439,437 & 0.94 & 0.99 & 11.89 & 30.93 & 0.43 & 22 & 6.48 & amazon products from 2003 05 05 + amazon0601 & 403,364 & 2,443,311 & 0.96 & 0.99 & 12.11 & 30.55 & 0.43 & 25 & 6.42 & amazon products from 2003 06 01 + amazonall & 473,315 & 3,505,519 & 0.94 & 0.99 & 14.81 & 52.70 & 0.41 & 19 & 5.66 & amazon products ( all 4 graphs merged ) + amazonallprod & 524,371 & 1,491,793 & 0.80 & 0.91 & 5.69 & 11.75 & 0.35 & 42 & 11.18 & products ( all products , source+target ) + amazonsrcprod & 334,863 & 925,872 & 0.84 & 0.91 & 5.53 & 11.53 & 0.43 & 47 & 12.11 & products ( only source products ) + * social networks : * the class of social networks in table [ tab : data_statsdesc_1 ] is particularly diverse and interesting .it includes several large on - line social networks : a network of professional contacts from linkedin ( linkedin ) ; a friendship network of a livejournal blogging community ( livejournal01 ) ; and a who - trusts - whom network of epinions ( epinions ) .it also includes an email network from enron ( email - enron ) and from a large european research organization . for the latter we generated three networks: email - inside uses only the communication inside organization ; email - inout also adds external email addresses where email has been sent both way ; and email - all adds all communication inside the organization and to the outside world . also included in the class of social networks are networks that are not the central focus of the websites from which they come , but which instead serve as a tool for people to share information more easily .for example , we have : the networks of a social bookmarking site delicious ( delicious ) ; a flickr photo sharing website ( flickr ) ; and a network from yahoo ! answers question answering website ( answers ) . in all these networks , a node refers to an individual and an edge is used to indicate that means that one person has some sort of interaction with another person , _e.g. 
_ , one person subscribes to their neighbor s bookmarks or photos , or answers their questions .* information and citation networks : * the class of information / citation networks contains several different citation networks .it contains two citation networks of physics papers on arxiv.org , ( cit - hep - th and cit - hep - ph ) , and a network of citations of us patents ( cit - patents ) .( these paper - to - paper citation networks are to be distinguished from scientific collaboration networks and author - to - paper bipartite networks , as described below . )it also contains two types of blog citation networks . in the so - called post networks , nodes areposts and edges represent hyperlinks between blog posts ( post - nat05 - 6 m and post - nat06all ) . on the other hand ,the so - called blog network is the blog - level - aggregation of the same data , _i.e. _ , there is a link between two blogs if there is a post in first that links the post in a second blog ( blog - nat05 - 6 m and blog - nat06all ) .* collaboration networks : * the class of collaboration networks contain academic collaboration ( _ i.e. _ , co - authorship ) networks between physicists from various categories in arxiv.org ( ca - astro - ph , etc . ) and between authors in computer science ( ca - dblp ) .it also contains a network of collaborations between pairs of actors in imdb ( ata - imdb ) , _ i.e. _ , there is an edge connecting a pair of actors if they appeared in the same movie .( again , this should be distinguished from actor - to - movie bipartite networks , as described below . ) * web graphs : * the class of web graph networks includes four different web - graphs in which nodes represent web - pages and edges represent hyperlinks between those pages .networks were obtained from google ( web - google ) , the university of notre dame ( web - notredame ) , trec ( web - trec ) , and stanford university ( web - berkstan ) .the class of internet networks consists of various autonomous systems networks obtained at different sources , as well as a gnutella and edonkey peer - to - peer file sharing networks .* bipartite networks : * the class of bipartite networks is particularly diverse and includes : authors - to - papers graphs from both computer science ( atp - dblp ) and physics ( atp - astro - ph , etc . ) ; a network representing users and the urls they visited ( clickstream ) ; a network representing users and the movies they rated ( netflix ) ; and a users - to - queries network representing query terms that users typed into a search engine ( queryterms ) .( we also have analyzed several bipartite actors - to - movies networks extracted from the imdb database , which we have listed separately below . )* biological networks : * the class of biological networks include protein - protein interaction networks of yeast obtained from various sources .* low dimensional grid - like networks : * the class of low - dimensional networks consists of graphs constructed from road ( road - ca , etc . ) or power grid ( powergrid ) connections and as such might be expected to `` live '' on a two - dimensional surface in a way that all of the other networks do not .we also added a `` swiss roll '' network , a -dimensional manifold embedded in -dimensions , and a `` faces '' dataset where each point is an by gray - scale image of a face ( embedded in dimensional space ) and where we connected the faces that were most similar ( using the euclidean distance ) .* imdb , yahoo ! 
answers and amazon networks : * finally , we have networks from imdb , amazon , and yahoo !answers , and for each of these we have separately analyzed subnetworks .the imdb networks consist of actor - to - movie links , and we include the full network as well as subnetworks associated with individual countries based on the country of production . for the amazon networks , recall that amazon sells a variety of products , and for each item one may compile the list the up to ten other items most frequently purchased by buyers of .this information can be presented as a directed network in which vertices represent items and there is a edge from item to another item if was frequently purchased by buyers of .we consider the network as undirected .we use five networks from a study of clauset _ et al . _ , and two networks from the viral marketing study from leskovec _ et al ._ . finally , for the yahoo !answers networks , we observe several deep cuts at large size scales , and so in addition the full network , we analyze the top six most well - connected subnetworks . in addition to providing a brief description of the network , tables [ tab : data_statsdesc_1 ] , [ tab : data_statsdesc_2 ] and [ tab : data_statsdesc_3 ] show the number of nodes and edges in each network , as well as other statistics which will be described in section [ sxn : obs_struct : stats ] .( in all cases , we consider the network as undirected , and we extract and analyze the largest connected component . ) the sizes of these networks range from about nodes up to nearly million nodes , and from about edges up to more than million edges .all of the networks are quite sparse their densities range from an average degree of about for the blog post network , up to an average degree of about in the network of movie ratings from netflix , and most of the other networks , including the purely social networks , have average degree around ( median average degree of ) . in many cases, we examined several versions of a given network .for example , we considered the entire imdb actor - to - movie network , as well as sub - pieces of it corresponding to different language and country groups .detailed statistics for all these networks are presented in tables [ tab : data_statsdesc_1 ] , [ tab : data_statsdesc_2 ] and [ tab : data_statsdesc_3 ] and are described in section [ sxn : obs_struct ] . in total , we have examined over large real - world social and information networks , making this , to our knowledge , the largest and most comprehensive study of such networks .hierarchical clustering is a common approach to community identification in the social sciences , but it has also found application more generally . in this procedure , one first defines a distance metric between pairs of nodes and then produces a tree ( in either a bottom - up or a top - down manner ) describing how nodes group into communities and how these group further into super - communities . a quite different approach that has received a great deal of attention ( and that will be central to our analysis )is based on ideas from _ graph partitioning _ . 
in this case , the network is modeled as a simple undirected graph , where nodes and edges have no attributes , and a partition of the graph is determined by optimizing a merit function . the graph partitioning problem is to find some number $k$ of groups of nodes , generally of roughly equal size , such that the number of edges between the groups , perhaps normalized in some way , is minimized . let $G=(V,E)$ denote a graph ; then the _ conductance _ of a set $S$ of nodes ( where $S$ is assumed to contain no more than half of all the nodes ) is defined as follows . let $a(S)$ be the sum of degrees of nodes in $S$ , and let $c(S)$ be the number of edges with one endpoint in $S$ and one endpoint in $\bar{S}$ , where $\bar{S}$ denotes the complement of $S$ . then , the conductance of $S$ is $\phi(S) = c(S)/a(S)$ , or equivalently $\phi(S) = c(S)/\left( 2 e(S) + c(S) \right)$ , where $e(S)$ is the number of edges with both endpoints in $S$ . more formally : given a graph $G$ with adjacency matrix $A = \{ A_{ij} \}$ , the _ conductance of a set _ $S$ of nodes is defined as :
$$ \phi(S) = \frac{ \sum_{i \in S , j \notin S} A_{ij} }{ \min \{ A(S) , A(\bar{S}) \} } , \qquad \mbox{where } A(S) = \sum_{i \in S} \sum_{j \in V} A_{ij} , $$
or equivalently $A(S) = \sum_{i \in S} d_i$ , where $d_i$ is the degree of node $i$ . moreover , in this case , the _ conductance of the graph _ is :
$$ \phi(G) = \min_{S \subseteq V} \phi(S) . \qquad \mbox{[ def : conductance ]} $$
thus , the conductance of a set provides a measure for the quality of the cut $( S , \bar{S} )$ , or relatedly the goodness of a community $S$ . indeed , it is often noted that communities should be thought of as sets of nodes with more and/or better intra - connections than inter - connections ; see figure [ fig : conductance ] for an illustration . when interested in detecting communities and evaluating their quality , we prefer sets with small conductance , _ i.e. _ , sets that are densely linked inside and sparsely linked to the outside . although numerous measures have been proposed for how community - like a set of nodes is , it is commonly noted , _ e.g. _ , by shi and malik and by kannan , vempala , and vetta , that conductance captures the `` gestalt '' notion of clustering , and as such it has been widely used for graph clustering and community detection . there are many other density - based measures that have been used to partition a graph into a set of communities . one that deserves particular mention is modularity . for a given partition of a network into a set of communities , modularity measures the number of within - community edges , relative to a null model that is usually taken to be a random graph with the same degree distribution . thus , modularity was originally introduced , and is typically used , to measure the strength or quality of a particular partition of a network . we , however , are interested in a quite different question than those that motivated the introduction of modularity . rather than seeking the `` best '' possible partition of a graph into communities , we would like to know how good a particular element of that partition is , _ i.e. _ , how community - like are the best possible communities that modularity or any other merit function can hope to find , in particular as a function of the size of that partition . in addition to capturing very well our intuitive notion of what it means for a set of nodes to be a good community , the use of conductance as an objective function has an added benefit : there exists an extensive theoretical and practical literature on methods for approximately optimizing it . ( finding cuts with exactly minimal conductance is np - hard .
) in particular , the theory literature contains several algorithms with provable approximation performance guarantees .first , there is the spectral method , which uses an eigenvector of the graph s laplacian matrix to find a cut whose conductance is no bigger than if the graph actually contains a cut with conductance .the spectral method also produces lower bounds which can show that the solution for a given graph is closer to optimal than promised by the worst - case guarantee .second , there is an algorithm that uses multi - commodity flow to find a cut whose conductance is within an factor of optimal .spectral and multi - commodity flow based methods are complementary in that the worst - case approximation factor is obtained for flow - based methods on expander graphs , a class of graphs which does not cause problems for spectral methods , whereas spectral methods can confuse long path with deep cuts , a difference that does not cause problems for flow - based methods .third , and very recently , there exists an algorithm that uses semidefinite programming to find a solution that is within of optimal .this paper sparked a flurry of theoretical research on a family of closely related algorithms including , all of which can be informally described as combinations of spectral and flow - based techniques which exploit their complementary strengths . however , none of those algorithms are currently practical enough to use in our study . of the above three theoretical algorithms , the spectral method is by far the most practical .also very common are recursive bisection heuristics : recursively divide the graph into two groups , and then further subdivide the new groups until the desired number of clusters groups is achieved .this may be combined with local improvement methods like the kernighan - lin and fiduccia - mattheyses procedures , which are fast and can climb out of some local minima .the latter was combined with a multi - resolution framework to create metis , a very fast program intended to split mesh - like graphs into equal sized pieces .the authors of metis later created cluto , which is better tuned for clustering - type tasks .finally we mention graclus , which uses multi - resolution techniques and kernel -means to optimize a metric that is closely related to conductance .while the preceding were all approximate algorithms for finding the lowest conductance cut in a whole graph , we now mention mqi , an _ exact _ algorithm for the slightly different problem of finding the lowest conductance cut in _ half _ of a graph .this algorithm can be combined with a good method for initially splitting the graph into two pieces ( such as metis or the spectral method ) to obtain a surprisingly strong heuristic method for finding low conductance cuts in the whole graph .the exactness of the second optimization step frequently results in cuts with extremely low conductance scores , as will be visible in many of our plots .mqi can be implemented by solving single parametric max flow problems , or sequences of ordinary max flow problems .parametric max flow ( with mqi described as one of the applications ) was introduced by , and recent empirical work is described in , but currently there is no publicly available code that scales to the sizes we need .ordinary max flow is a very thoroughly studied problem . 
currently , the best theoretical time bounds are , the most practical algorithm is , while the best implementation is hi_pr by .since metis+mqi using the hi_pr code is very fast and scalable , while the method empirically seems to usually find the lowest or nearly lowest conductance cuts in a wide variety of graphs , we have used it extensively in this study. we will also extensively use local spectral algorithm of andersen , chung , and lang to find node sets of low conductance , _i.e. _ , good communities , around a seed node .this algorithm is also very fast , and it can be successfully applied to very large graphs to obtain more `` well - rounded '' , `` compact , '' or `` evenly - connected '' communities than those returned by meits+mqi .the latter observation ( described in more detail in section [ algo - notes - section ] ) is since local spectral methods also confuse long paths ( which tend to occur in our very sparse network datasets ) with deep cuts .this algorithm takes as input two parameters the seed node and a parameter that intuitively controls the locality of the computation and it outputs a set of nodes .local spectral methods were introduced by spielman and teng , and they have roughly the same kind of quadratic approximation guarantees as the global spectral method , but they have computational cost is proportional to the size of the obtained piece .in this section , we discuss the _ network community profile plot _ ( ncp plot ) , which measures the quality of network communities at different size scales .we start in section [ sxn : ncpp : def ] by introducing it .then , in section [ sxn : ncpp : low_small ] , we present the ncp plot for several examples of networks which inform peoples intuition and for which the ncp plot behaves in a characteristic manner. then , in sections [ sxn : ncpp : large_sparse ] and [ sxn : ncpp : large_sparse_more ] we present the ncp plot for a wide range of large real world social and information networks .we will see that in such networks the ncp plot behaves in a qualitatively different manner . in order to more finely resolve community structure in large networks, we introduce the _ network community profile plot _ ( ncp plot ) . intuitively , the ncp plot measures the quality of the best possible community in a large network , as a function of the community size . formally, we may define it as the conductance value of the best conductance set of cardinality in the entire network , as a function of .given a graph with adjacency matrix , the _ network community profile plot ( ncp plot ) _plots as a function of , where where denotes the cardinality of the set , and where the conductance of is given by equation ( [ eqn : conductance_set ] ) .since this quantity is intractable to compute , we will employ well - studied approximation algorithms for the minimum conductance cut problem to approximate it .in particular , operationally we will use several natural heuristics based on approximation algorithms to do graph partitioning in order to compute different approximations to the ncp plot .although other procedures will be described in section [ algo - notes - section ] , we will primarily employ two procedures .first , metis+mqi , _ i.e. 
_ , the graph partitioning package metis followed by the flow - based post - processing procedure mqi ; this procedure returns sets that have very good conductance values . second , the local spectral algorithm of andersen , chung , and lang ; this procedure returns sets that are somewhat more `` compact '' or `` smoothed '' or `` regularized , '' but that often have somewhat worse conductance values . just as the conductance of a set of nodes provides a quality measure of that set as a community , the shape of the ncp plot provides insight into the community structure of a graph as a whole . for example , the magnitude of the conductance tells us how well clusters of different sizes are separated from the rest of the network . one might hope to obtain some sort of `` smoothed '' measure of the notion of the best community of a given size ( _ e.g. _ , by considering an average of the conductance value over all sets of that size , or by considering a smoothed extremal statistic such as a given percentile ) rather than the conductance of the best set of that size . we have not defined such a measure since there is no obvious way to average over all subsets of a given size and obtain a meaningful approximation to the minimum . on the other hand , our approximation algorithm methodology implicitly incorporates such an effect . although metis+mqi finds sets of nodes with extremely good conductance values , empirically we observe that they often have little or no internal structure ; they can even be disconnected . on the other hand , since spectral methods in general tend to confuse long paths with deep cuts , the local spectral algorithm finds sets that are `` tighter '' and more `` well - rounded '' and thus in many ways more community - like . ( see sections [ sxn : related : lowcondalgs ] and [ algo - notes - section ] for details on these algorithmic issues and interpretations . ) the ncp plot behaves in a characteristic manner for graphs that are `` well - embeddable '' into an underlying low - dimensional geometric structure . to illustrate this , consider figure [ fig : ncpp_lowdim ] . in figure [ fig : ncpp_lowdim : toy ] , we show the results for a one - dimensional chain , a two - dimensional grid , and a three - dimensional cube . in each case , the ncp plot is steadily downward sloping as a function of the number of nodes in the smaller cluster . moreover , the curves are straight lines with a slope equal to $-1/d$ , where $d$ is the dimensionality of the underlying grid . in particular , as the underlying dimension increases , the slope of the ncp plot gets less steep . thus , we observe : if the network under consideration corresponds to a $d$-dimensional grid , then the ncp plot shows that $\phi(k) \sim k^{-1/d}$ . [ eqn : slope_dim ] this is simply a manifestation of the isoperimetric ( _ i.e. _ , surface area to volume ) phenomenon : for a grid , the `` best '' cut of $k$ nodes is obtained by cutting out a set of adjacent nodes , in which case the surface area ( number of edges cut ) increases as $k^{(d-1)/d}$ , while the volume ( number of vertices / edges inside the cluster ) increases as $k$ . this qualitative phenomenon of a steadily downward sloping ncp plot is quite robust for networks that `` live '' in a low - dimensional structure , _e.g
_ , on a manifold or the surface of the earth .for example , figure [ fig : ncpp_lowdim : power ] shows the ncp plot for a power grid network of western states power grid , and figure [ fig : ncpp_lowdim : road ] shows the ncp plot for a road network of california .these two networks have very different sizes the power grid network has nodes and edges , and the road network has nodes and edges and they arise in very different application domains . in both cases , however , we see predominantly downward sloping ncp plot , very much similar to the profile of a simple -dimensional grid . indeed ,the `` best - fit '' line for power grid gives the slope of , which by ( [ eqn : slope_dim ] ) suggests that , which is not far from the `` true '' dimensionality of .moreover , empirically we observe that minima in the ncp plot correspond to community - like sets , which are occasionally nested .this corresponds to hierarchical community organization .for example , the nodes giving the dip at are included in the nodes giving the dip at , while dips at and are both included in the dip at . in a similar manner ,figure [ fig : phimodels : manifold ] shows the profile plot for a graph generated from a `` swiss roll '' dataset which is commonly examined in the manifold and machine learning literature . in this case, we still observe a downward sloping ncp plot that corresponds to internal dimensionally of the manifold ( 2 in this case ) .finally , figures [ fig : phimodels : expander1 ] and [ fig : phimodels : expander2 ] show ncp plots for two graphs that are very good expanders .the first is a graph with nodes and a number of edges such that the average degree is , , and .the second is a constant degree expander : to make one with degree , we take the union of disjoint but otherwise random complete matchings , and we have plotted the results for . in both of these cases ,the ncp plot is roughly flat , which we also observed in figure [ fig : ncpp_lowdim : toy ] for a clique , which is to be expected since the minimum conductance cut in the entire graph can not be too small for a good expander .somewhat surprisingly ( especially when compared with large networks in section [ sxn : ncpp : large_sparse ] ) , a steadily decreasing downward ncp plot is seen for small social networks that have been extensively studied in validating community detection algorithms .several examples are shown in figures [ fig : ncpp_small ] .for these networks , the interpretation is similar to that for the low - dimensional networks : the downward slope indicates that as potential communities get larger and larger , there are relatively more intra - edges than inter - edges ; and empirically we observe that local minima in the ncp plot correspond to sets of nodes that are plausible communities .consider , _e.g. 
_, zachary's karate club network (zacharykarate), an extensively-analyzed social network. the network has nodes, each of which represents a member of a karate club, and edges, each of which represents a friendship tie between two members. figure [fig:ncpp_small:karate_graph] depicts the karate club network, and figure [fig:ncpp_small:karate_plot] shows its ncp plot. there are two local minima in the plot: the first dip at corresponds to the cut , and the second dip at corresponds to cut . note that cut , which separates the graph roughly in half, has a better conductance value than cut . this corresponds with the intuition about the ncp plot derived from studying low-dimensional graphs. note also that the karate network corresponds well with the intuitive notion of a community, in which nodes of the community are densely linked among themselves and there are few edges between nodes of different communities. (a short python sketch illustrating how such conductance values can be computed is given just below.) thus, intuitively, one can think of small well-separated communities those below the size scale as consisting of subsets of the small trees starting to grow, and as they pass the size scale and become bigger and bigger, they blend in more and more with the central part of the network, which (since it exhibits certain expander-like properties) does not have particularly well-defined communities. note (more generally) that if there are nodes in the small tree at the top of the graph, then the dip in the ncp plot in figure [fig:models_clique2] is of depth . in particular, if then the depth of this cut is . intuitively, the ncp plot increases since the ``cost'' per edge for every additional edge inside a cluster increases with the size of the cluster. for example, in cut in figure [fig:models_clique1], the ``price'' for having internal edges is to cut edges, i.e., edges cut per edge inside. to expand the cluster by just a single edge, one has to move one level down in the tree (toward the cut), where now the price for a single edge is edges, and so on. the question now arises as to whether we can find a simple generative model that can explain both the existence of small well-separated whisker-like clusters and also an expander-like core whose best clusters get gradually worse as the purported communities increase in size. intuitively, a satisfactory network generation model must successfully take into account the following two mechanisms: * the model should produce a relatively large number of relatively small but still large when compared to random graphs well-connected and distinct whisker-like communities. (this should reproduce the downward part of the community profile plot and the minimum at small size scales.) * the model should produce a large expander-like core, which may be thought of as consisting of intermingled communities, perhaps growing out from the whisker-like communities, whose boundaries get less and less well-defined as the communities get larger and larger and as they gradually blend in with the rest of the network. (this should reproduce the gradual upward-sloping part of the community profile plot.) the so-called _ forest fire model _ captures exactly these two competing phenomena. the forest fire model is a model of graph generation (that generates directed graphs, an effect we will ignore) in which new edges are added via a recursive ``burning'' mechanism in an epidemic-like fashion.
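before describing the burning mechanism in detail, it is worth making the conductance measurements used throughout concrete. the following sketch is only illustrative: it applies networkx to zachary's karate club graph and sweeps a fiedler-vector ordering of the nodes, which is a crude stand-in for the metis+mqi and local spectral procedures actually used in our experiments; the function names (ncp_sweep) and the sweep heuristic are choices of this sketch, so the values it reports are indicative upper bounds on the true ncp values, not the numbers quoted in the text.

# illustrative conductance sweep (not the metis+mqi or local spectral
# algorithms used for the experiments in the text)
import networkx as nx
import numpy as np

def ncp_sweep(G):
    """conductance of the first k nodes of a fiedler-vector ordering,
    for k = 1 .. n/2; a crude upper bound on the ncp value at each size k."""
    order = np.argsort(nx.fiedler_vector(G))   # spectral ordering of the nodes
    nodes = list(G.nodes())
    ncp, S = {}, set()
    for k, idx in enumerate(order[: G.number_of_nodes() // 2], start=1):
        S.add(nodes[idx])
        ncp[k] = nx.conductance(G, S)          # cut(S) / min(vol(S), vol(complement))
    return ncp

karate = nx.karate_club_graph()                # zachary's karate club network
ncp = ncp_sweep(karate)
k_best = min(ncp, key=ncp.get)
print("best sweep cut: %d nodes, conductance %.3f" % (k_best, ncp[k_best]))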
since the details of the recursive burning process are critical for the model's success, we explain it in some detail. to describe the forest fire model, let us fix two parameters, a _ forward burning probability _ and a _ backward burning probability _ . we start the entire process with a single node, and at each time step we consider a new node that joins the graph constructed thus far. the new node forms out-links to existing nodes as follows: * the new node first chooses a node, which we will refer to as a ``seed'' node or an ``ambassador'' node, uniformly at random, and forms a link to it. * the new node then selects some of the out-links and in-links of the ambassador that have not yet been visited. (the numbers of out-links and in-links selected are two geometrically distributed random numbers, whose means are set by the forward and backward burning probabilities, respectively; if not enough in-links or out-links are available, the new node selects as many as possible.) * the new node forms out-links to the nodes at the other ends of these selected links, and then applies step (ii) recursively to each of them, except that nodes cannot be visited a second time during the process. thus, burning of links in the forest fire model begins at the ambassador, spreads to its selected neighbors, and proceeds recursively until the process dies out. one can view such a process intuitively as corresponding to a model in which a person comes to a party and first meets an ambassador who then introduces him or her around. if the person creates a small number of friendships, these will likely be within the ambassador's ``community,'' but if the person happens to create many friendships, then these will likely go outside the ambassador's circle of friends. this way, the ambassador's community might gradually get intermingled with the rest of the network. [figure [fig:forestfire] : left: a new node joins the network and selects a seed node. middle: the new node then attaches itself by recursively linking to the seed's neighbors, the seed's neighbor-neighbors, and so on, according to the ``forest fire'' burning mechanism described in the text. right: another new node joins the network, selects a seed, and recursively adds links using the same ``forest fire'' burning mechanism. notice that if a node causes a large ``fire,'' it links to a large number of existing nodes; in this way, as potential communities around a node grow, the ncp plot is initially decreasing, but then larger communities gradually blend in with the rest of the network, which leads the ncp plot to increase.]
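the burning process just described is easy to prototype. the sketch below treats the graph as undirected (as we do in our measurements), so the out-link/in-link distinction and the backward burning probability are dropped, and each burning number is drawn geometrically with mean p/(1-p); these are simplifying assumptions of the sketch, not the exact reference implementation, and the function name forest_fire and the default parameter values are ours.

import numpy as np
import networkx as nx

def forest_fire(n, p_fwd=0.37, seed=0):
    """minimal undirected forest-fire generator (a sketch, not the exact model).
    assumption: the number of neighbours burned at each step is geometric
    with mean p_fwd / (1 - p_fwd)."""
    rng = np.random.default_rng(seed)
    G = nx.Graph()
    G.add_node(0)
    for v in range(1, n):
        w = int(rng.integers(v))                  # ambassador chosen uniformly at random
        G.add_edge(v, w)                          # also adds node v
        visited, frontier = {v, w}, [w]
        while frontier:
            u = frontier.pop(0)
            burn = rng.geometric(1.0 - p_fwd) - 1   # how many neighbours of u to burn
            targets = [x for x in G.neighbors(u) if x not in visited]
            rng.shuffle(targets)
            for x in targets[:burn]:
                G.add_edge(v, x)                  # the new node links to every burned node
                visited.add(x)
                frontier.append(x)
    return G

G = forest_fire(10000, p_fwd=0.37)
print(G.number_of_nodes(), G.number_of_edges())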
two properties of this model are particularly significant. first, although many nodes might form only one or a small number of links, certain nodes can produce large conflagrations, burning many edges and thus forming a large number of out-links before the process ends. such nodes will help generate a skewed out-degree distribution, and they will also serve as ``bridges'' that connect formerly disparate parts of the network. this will help make the ncp plot gradually increase. second, there is a locality structure in that, as each new node arrives over time, it is assigned a ``center of gravity'' in some part of the network, i.e., at the ambassador node, and the manner in which new links are added depends sensitively on the local graph structure around that node. not only does the probability of linking to other nodes decrease rapidly with distance from the current ambassador, but, because of the recursive process, regions with a higher density of links tend to attract new links. figure [fig:forestfire] illustrates this. initially, there is a small community around one node. then, a new node joins and, using the forest fire mechanism, locally attaches to nodes in the neighborhood of its seed node. the growth of the community around that seed corresponds to the downward part of the ncp plot. however, if a node then joins and causes a large fire, this has the effect of larger and larger communities around the seed blending into and merging with the rest of the network. [figure [fig:phiforestfire] : community profile plots for the forest fire model at various parameter settings. the backward burning probability is fixed, and the forward burning probability increases (left to right, top to bottom). the largest and smallest values of the forward burning probability lead to less realistic community profile plots, as discussed in the text.]
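the qualitative dependence on the burning probability summarised in figure [fig:phiforestfire] can be explored by combining the two sketches above; the loop below is only a rough stand-in for the actual experiments, and the listed probabilities and graph size are example values, not the settings used for the figure.

for p in (0.05, 0.25, 0.35, 0.40, 0.45, 0.50):
    G = forest_fire(5000, p_fwd=p, seed=1)
    ncp = ncp_sweep(G)                 # fiedler-sweep ncp from the earlier sketch
    k_best = min(ncp, key=ncp.get)
    print("p_fwd=%.2f: n=%d, m=%d, best cut %d nodes (phi=%.3f)"
          % (p, G.number_of_nodes(), G.number_of_edges(), k_best, ncp[k_best]))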
not surprisingly, however, the forest fire model is sensitive to the choice of the burning probabilities. we have experimented with a wide range of network sizes and values for these parameters, and in figure [fig:phiforestfire] we show the community profile plots of several forest fire networks generated with a fixed backward burning probability and several different values of the forward burning probability. the first thing to note is that, since we vary the forward burning probability across the six plots in figure [fig:phiforestfire], we are viewing networks with very different densities. next, notice that for intermediate values of the forward burning probability we observe a very natural behavior: the conductance nicely decreases, reaches its minimum somewhere between and nodes, and then slowly but not too smoothly increases. not surprisingly, it is in this parameter region that the forest fire model has been shown to exhibit realistic time-evolving graph properties such as densification and shrinking diameters. next, also notice that if the forward burning probability is too low or too high, then we obtain qualitatively different results. for example, if it is very small, then the community profile plot gradually decreases for nearly the entire plot. for this choice of parameters the forest fire does not spread well, since the forward burning probability is too small; the network is extremely sparse and tree-like with just a few extra edges, and so we get large, well-separated ``communities'' that get better as they get larger. on the other hand, when the burning probability is too high, the ncp plot has a minimum and then rises extremely rapidly. for this choice of parameters, if a node which initially attached to a whisker successfully burns into the core, then it quickly establishes many successful connections to other nodes in the core. thus, the network has relatively large whiskers that failed to establish such a connection and a very expander-like core, with no intermediate region, and the increase in the community profile plot is quite abrupt. we have examined numerous other properties of the graphs generated by the forest fire model and have found them to be broadly consistent with the social and information networks we have examined. one property, however, that is of particular interest is what the whiskers look like. figure [fig:whiskff] shows an example of several whiskers generated by the forest fire model for a particular choice of the burning probabilities. they are larger and better structured than the tree-like whiskers from the random graph model of section [sxn:models:sparse_gw]. also notice that they all look plausibly community-like, with a core of nodes densely linked among themselves and a single bridge edge connecting the whisker to the rest of the network.
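whiskers of the kind shown in figure [fig:whiskff] can be extracted mechanically: remove every bridge edge and keep the resulting pieces other than the largest one, which we take here as a proxy for the network core. the sketch below, reusing the forest_fire sketch above, is only an approximation of this idea — a whisker that itself contains internal bridges will be returned in fragments — and the function name whiskers is ours.

import networkx as nx

def whiskers(G):
    """pieces left after deleting all bridge edges of a connected graph,
    excluding the largest piece (used here as a stand-in for the core)."""
    H = G.copy()
    H.remove_edges_from(list(nx.bridges(G)))
    pieces = sorted(nx.connected_components(H), key=len, reverse=True)
    return pieces[1:]

W = whiskers(forest_fire(5000, p_fwd=0.37, seed=2))
sizes = sorted((len(w) for w in W), reverse=True)
print("whiskers:", len(W), " largest sizes:", sizes[:5])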
[figure [fig:whiskff] : whiskers of the forest fire network. in each panel, the green square node belongs to the network core, and by cutting the edge connecting it with the red circular node we separate the community of circles from the rest of the network (depicted as a green square).] we conclude by noting that there has also been interest in developing hierarchical graph generation models, i.e., models in which a hierarchy is given and the linkage probability between pairs of nodes decreases as a function of their distance in the hierarchy. the motivation for this comes largely from the intuition that nodes in social networks are joined into small, relatively tight groups that are then further joined into larger groups, and so on. as figures [fig:phimodels:bh] and [fig:phimodels:cga] make clear, however, such models do not immediately lead to community structure similar to what we have observed and to what has been reproduced by the forest fire model. on the other hand, although there are significant differences between hierarchical models and the forest fire model, it has been noted that there are similarities. in particular, in the forest fire model a new node is assigned an ambassador as an entry point to the network. this is analogous to a child having a parent in the hierarchy, which helps to determine how that node links to the remainder of the network. similarly, many hierarchical models have a connection probability that decreases exponentially in the hierarchical tree distance. in the forest fire model, the probability that a node will burn along a particular path to another node is exponentially small in the path length, although the analogy is not perfect since there may exist many possible paths. in this section, we discuss several aspects of our main results in a broader context.
in particular , in section [ sxn : discussion : ground_truth ] , we compare to several data setsin which there is some notion of `` ground truth '' community and we also describe several broader non - technical implications of our results .then , in section [ sxn : discussion : community ] , we describe recent work on community detection and identification . finally , in section [ sxn : discussion : technical ], we discuss several technical and algorithmic issues and questions raised by our work . in this subsection, we examine the relationship between network communities of the sort we have been discussing so far and some notion of `` ground truth . ''when considering a real network , one hopes that the output of a community finding algorithms will be `` real '' communities that exist in some meaningful sense in the real world . for example , in the karate club network in figure [ fig : ncpp_small : karate_graph ] , the cut found by the algorithm corresponds in some sense to a true community , in that it splits the nodes almost precisely as they split into two newly formed karate clubs . in this section ,we take a different approach : we take networks in which there are explicitly defined communities , and we examine how well these communities are separated from the rest of the network . in particular , we examine a minimum conductance profile of several network datasets , where we can associate with each node one or more community labels which are exogenously specified .note that we are overloading the term `` community '' here , as in this context the term might mean one of two things : first , it can refer to groups of nodes with good conductance properties ; and second , it can refer to groups of nodes that belong to the same self - defined or exogenously - specified group .we consider the following five datasets : * livejournal12 : livejournal is an on - line blogging site where users can create friendship links to other users .in addition , users can create groups which other users can then join . in livejournal , there are such groups , and a node belongs to groups on the average .thus , in addition to the information in the interaction graph , we have labels specifying those groups with which a user is associated , and thus we may view each such group as determining a `` ground truth '' community . * ca - dblp : we considered a co - authorship network in which nodes are authors and there is an edge if authors co - authored at least one paper . here , publication venues ( _ e.g. _ , journals and conferences ) can play the role of `` ground truth '' communities .that is , an author is a member of a particular group or community if he or she published at a particular conference or in a particular journal . in our dblp network , there are such groups , with a node belonging to on the average . * amazonallprod : this is a network of products that are commonly purchased together at amazon.com .( intuitively one might expect that , _e.g. _ , gardening books are frequently purchased together , so the network structure might reflect a well - connected cluster of gardening books . ) here , each item belongs to one or more hierarchically organized categories ( book , movie genres , product types , etc . ) , and products from the same category define a group which we will view as a `` ground truth '' community .items can belong to different groups , and each item belongs to groups on the average . 
* atm - imdb : this network is a bipartite actors - to - movies network composed from imdb data , and an actor is connected to movie if appeared in . for each moviewe also know the language and the country where it was produced . countries and languages may be taken as `` ground truth '' communities or groups , where every movie belongs to exactly one group and actors belongs to all groups to which movies that they appeared in belong . in our dataset , we have language groups and country groups . * email - inside and email - inout : this is an email communication network from a large european research organization conducting research in natural sciences : physics , chemistry , biology and computer science. each of members of the organization belongs to exactly one of departments , and we use the department memberships to define `` ground truth '' communities .although none of these notions of `` ground truth '' is perfect , many community finding algorithms use precisely this form of anecdotal evaluation : a network is taken , network communities are found , and then the correspondence of network communities to `` ground truth '' communities is evaluated .note , in contrast , we are evaluating how `` ground truth '' communities behave at different size scales with respect to our methodology , rather than examining how the groups we find relate to `` ground truth '' communities .furthermore , note that the notions of `` ground truth '' are not all the same we might expect that people publish papers across several different venues in a very different way than actors appear in movies from different countries .more detailed statistics for each of these networks may be found in tables [ tab : data_statsdesc_1 ] , [ tab : data_statsdesc_2 ] and [ tab : data_statsdesc_3 ] . to examine the quality of `` ground truth '' communities in the these network datasets , we take all groups and measure the conductance of the cut that separates that group from the rest of the network .thus , we generated ncp plots in the following way .for every `` ground truth '' community , we measure the conductance of the cut separating it from the rest of the graph , from which we obtain a scatter plot of community size versus conductance . then , we take the lower - envelope of this plot , _i.e. _ , for every integer we find the conductance value of the community of size that has the lowest conductance .figure [ fig : explicitcmty ] shows the results for these network datasets ; the figure also shows the ncp plot obtained from using the local spectral algorithm on both the original network ( plotted in red ) and on the rewired network ( plotted in black ) .several observations can be made : * the conductance of `` ground truth '' communities follows that for the network communities up to until size - nodes , _i.e. _ , larger communities get successively more community - like . 
as `` ground truth '' communities get larger , their conductance values tend to get worse and worse , in agreement with network communities discovered with graph partitioning approximation algorithms .thus , the qualitative trend we observed in nearly every large sparse real - world network ( of the best communities blending in with the rest of the network as they grow in size ) is seen to hold for small `` ground truth '' communities .* one might expect that the ncp plot for the `` ground truth '' communities ( the green curves ) will be somewhere between the ncp plot of the original network ( red curves ) and that for the rewired network ( black curves ) , and this is seen to be the case in general .the ncp plot for network communities goes much deeper and rises more gradually than for `` ground truth '' communities .this is also consistent with our general observation that only small communities tend to be dense and well separated , and to separate large groups one has to cut disproportionately many edges .* for the two social networks we studied ( livejournal12 and ca - dblp ) , larger `` ground truth '' communities have conductance scores that get quite `` random '' , _i.e. _ , they are as well separated as they would be in a randomly rewired network ( green and black curves overlap ) .this is likely associated with the relatively weak and overlapping notion of `` ground truth ''we associated with those two network datasets .on the other hand , for amazonallprod and atm - imdb networks , the general trend still remains but large `` ground truth '' communities have conductance scores that lie well below the rewired network curve .our email network illustrates a somewhat different point . the ncp plot for email - insideshould be compared with that for email - inout , which is displayed in figure [ fig : phidatasets1 ] .the email - inside email network is rather small , and so it has a decreasing community profile plot , in agreement with the results for small social networks . since communication is mainly focused between the members of the same department , both network and `` ground truth '' communitiesare well expressed .next , compare the ncp plot of email - inside with that of email - inout ( figure [ fig : phidatasets1 ] ) .we see that the ncp plot of email - inside slopes downwards ( as we consider only the communication inside the organization ) , but as soon as we consider the communication inside the organization and to the outside world ( email - inout , or alternatively , see email - enron ) then we see a completely different and more familiar picture the ncp plot drops and then slowly increases .this suggests that the organizational structure , ( _ e.g. _ , departments ) manifest themselves in the internal communication network , but as soon as we put the organization into the broader context ( _ i.e. 
_ , how it communicates to the rest of the world ) then the internal department structure seems to disappear .in contrast to numerous studies of community structure , we find that there is a natural size scale to communities .communities are relatively small , with sizes only up to about nodes .we also find that above size of about , the `` quality '' of communities gets worse and worse and communities more and more `` blend into '' the network .eventually , even the existence of communities ( at least when viewed as sets with stronger internal than external connectivity ) is rather questionable .we show that large social and information networks can be decomposed into a large number of small communities and a large dense and intermingled network `` core''we empirically establish that the `` core '' contains on average of the nodes and of all edges .but , as demonstrated by figure [ fig : phicore ] , the `` core '' itself has a nontrivial structure in particular , it has a core - whisker structure that is analogous to the original complete network . * the dunbar number : * our observation on the limit of community size agrees with dunbar who predicted that roughly is the upper limit on the size of a well - functioning human community .moreover , allen gives evidence that on - line communities have around members , and on - line discussion forums start to break down at about active contributors .church congregations , military companies , divisions of corporations , all are close to the sum of .we are thus led to ask : why , above this size , is community quality inversely proportional to its size ? and why are ncp plots of small and large networks so different ?previous studies mainly focused on small networks ( _ e.g. _ , see ) , which are simply not large enough for the clusters to gradually blend into one another as one looks at larger size scales .our results do not disagree with literature at small sizes .but it seems that in order to make our observations one needs to look at large networks .it is only when dunbar s limit is passed that we find large communities blurring and eventually vanishing .a second reason is that previous work did not measure and examine the _ network community profile _ of cluster size vs. cluster quality .* common bond vs. common identity communities : * dunbar s explanation aligns well with the common bond vs. common identity theory of group attachment from social psychology .common identity theory makes predictions about people s attachment to the group as a whole , while common bond theory predicts people s attachment to individual group members .the distinction between the two refers to people s different reasons for being in a group . because they like the group as a whole we get identity - based attachment , or because they like individuals in the group we get bond - based attachment .anecdotally , bond - based groups are based on social interaction with others , personal knowledge of them , and interpersonal attraction to them . on the other hand ,identity - based groups are based on common identity of its members , _e.g. _ , liking to play a particular on - line game , contributing to wikipedia , etc .it has been noted that bond communities tend to be smaller and more cohesive , as they are based on interpersonal ties , while identity communities are focused around common theme or interest .see for a very good review of the topic . translating this to our context , the bond vs. 
identity communities mean that small , cohesive and well - separated communities are probably based on common bonds , while bigger groups may be based on common identity , and it is hard to expect such big communities to be well - separated or well - expressed in a network sense .this further means the transition between common bond ( _ i.e. _ , maintaining close personal ties ) and common identity ( _ i.e. _ , sharing a common interest or theme ) occurs at around one hundred nodes .it seems that at this size the cost of maintaining bond ties becomes too large and the group either dies or transitions into a common identity community .it would be very interesting as a future research topic to explore differences in community network structure as the community grows and transitions from common bond to common identity community .* edge semantics : * another explanation could be that in small , carefully collected networks , the semantics of edges is very precise while in large networks we know much less about each particular edge , _e.g. _ , especially when online people have very different criteria for calling someone a friend .traditionally social scientists through questionnaires `` normalized '' the links by making sure each link has the same semantics / strength . * evidence in previous work : * there has also been some evidence that hints towards the findings we make here . for example , clauset _ et al . _ analyzed community structure of a graph related to the amazonallprod , and they found that around of the nodes belonged to the largest `` miscellaneous '' community .this agrees with the typical size of the network core , and one could conclude that the largest community they found likely corresponds to the intermingled core of the network , and most of the rest of the communities are whisker - like .in addition , recently there have been several works hinting that the network communities subject is more complex than it seems at the first sight . for example, it has been found that even random graphs can have good modularity scores . intuitively , random graphs have no community structure , but there can still exist sets of nodes with good community scores , at least as measured by modularity ( due to random fluctuations about the mean ) .moreover , very recently a study of robustness of community structure showed that the canonical example of presence of community structure in networks may have no significant community structure . * more general thoughts : * our work also raises an important question of what is a natural community size and whether larger communities ( in a network sense ) even exist .it seems that when community size surpasses some threshold , the community becomes so diverse that it stops existing as a traditionally understood `` network community . '' instead , it blends in with the network , and intuitions based on connectivity and cuts seem to fail to identify it .approaches that consider both the network structure and node attribute data might help to detect communities in these cases . 
also , conductance seems like a very reasonable measure that satisfies intuition about community quality , but we have seen that if one only worries about conductance , then bags of whiskers and other internally disconnected sets have the best scores .this raises interesting questions about cluster compactness , regularization , and smoothness : what is a good definition of compactness , what is the best way to regularize these noisy networks , and how should this be connected to the notion of community separability ?a common assumption is that each node belongs to exactly one community .our approach does not make such an assumption . instead ,for each given size , we independently find best set of nodes , and `` communities '' of different sizes often overlap .as long there is a boundary between communities ( even if boundaries overlap ) , cut- and edge - density- based techniques ( like modularity and conductance ) may have the opportunity to find those communities .however , it is the absence of clear community boundaries that makes the ncp plot go upwards .a great deal of work has been devoted to finding communities in large networks , and much of this has been devoted to formalizing the intuition that a community is a set of nodes that has more and/or better intra - linkages between its members than inter - linkages with the remainder of the network . very relevant to our workis that of kannan , vempala , and vetta , who analyze spectral algorithms and describe a community concept in terms of a bicriterion depending on the conductance of the communities and the relative weight of inter - community edges .flake , tarjan , and tsioutsiouliklis introduce a similar bicriterion that is based on network flow ideas , and flake _ et al . _ defined a community as a set of nodes that has more intra - edges than inter - edges .similar edge - counting ideas were used by radicchi _et al . _ to define and apply the notions of a strong community and a weak community . within the `` complex networks '' community , girvan and newman proposed an algorithm that used `` centrality '' indices to find community boundaries . following this ,newman and girvan introduced _modularity _ as _ a posteriori _ measure of the strength of community structure .modularity measures inter- ( and not intra- ) connectivity , but it does so with reference to a randomized null model .modularity has been very influential in the recent community detection literature , and one can use spectral techniques to approximate it .on the other hand , guimer , sales - pardo , and amaral and fortunato and barthlemy showed that random graphs have high - modularity subsets and that there exists a size scale below which communities can not be identified . in part as a response to this , some recent work has had a more statistical flavor . 
in light of our results , this work seems promising , both due to potential `` overfitting '' issues arising from the extreme sparsity of the networks , and also due to the empirically - promising regularization properties exhibited by local spectral methods .we have made extensive use of the local spectral algorithm of andersen , chung , and lang .similar results were originally proven by spielman and teng , who analyzed local random walks on a graph ; see chung for an exposition of the relationship between these methods .andersen and lang showed that these techniques can find ( in a scalable manner ) medium - sized communities in very large social graphs in which there exist reasonably well - defined communities . in light of our results ,such methods seem promising more generally . other recent work that has focused on developing local and/or near - linear time heuristics for community detectioninclude .in addition to this work we have cited , there exists work which views communities from a very different perspective .for example , kumar _et al . _ view communities as a dense bipartite subgraph of the web ; gibson , kleinberg , and raghavan view communities as consisting of a core of central authoritative pages linked together by hub pages ; hopcroft _ et al . _ are interested in the temporal evolution of communities that are robust when the input data to clustering algorithms that identify them are moderately perturbed ; and palla _ et al . _ view communities as a chain of adjacent cliques and focus on the extent to which they are nested and overlap. the implications of our results for this body of work remain to be explored . in this subsection, we describe the relationship between our work and recent work with similar flavor in graph partitioning , algorithms , and graph theory .recent work has focused on the expansion properties of power law graphs and the real - world networks they model .for example , mihail , papadimitriou , and saberi , as well as gkantsidis , mihail , and saberi , studied internet routing at the level of autonomous systems ( as ) , and showed that the preferential attachment model and a random graph model with power law degree distributions each have good expansion properties if the minimum degree is greater than or , respectively .this is consistent with the empirical results , but as we have seen the as graphs are quite unusual , when compared with nearly every other social and information network we have studied . on the other hand , estrada has made the observation that although certain communication , information , and biological networks have good expansion properties , social networks do not .this is interpreted as evidence that such social networks have good small highly - cohesive groups , a property which is not attributed to the biological networks that were considered . from the perspective of our analysis , these results are interesting since it is likely that these small highly - cohesive groups correspond to sets near the global minimum of the network community profile plot .reproducing deep cuts was also a motivation for the development of the geometric preferential attachment models of flaxman , frieze , and vera .note , however , that the deep cuts they obtain arise from the underlying geometry of the model and thus are nearly bisections .consider also recent results on the structural and spectral properties of very sparse random graphs . 
recall that the random graph model consists of those graphs on nodes , in which there is an edge between every pair vertices with a probability , independently .recall also that if , then a typical graph in has a giant component , _i.e. _ , connected subgraph consisting of a constant fraction of the nodes , but the graph is not fully connected .( if , the a typical graph is disconnected and there does not exist a giant component , while if , then a typical graph is fully connected . ) as noted , _e.g. _ , by feige and ofek , this latter regime is particularly difficult to analyze since with fairly high probability there exist vertices with degrees that are much larger than their expected degree . as reviewed in section [ sxn : models : sparse_gw ] , however , this regime is not unlike that in a power law random graph in which the power law exponent .chakrabarti _ et al . _ defined the `` min - cut '' plot which has similarities with our ncp plot .they used a different approach in which a network was recursively bisected and then the quality of the obtained clusters was plotted against as a function of size ; and the `` min - cut '' plots were only used as yet - another statistic to test when assessing how realistic are synthetically generated graphs .note , however , that the `` min - cut '' plots have qualitatively similar behavior to our ncp plots , _i.e. _ , they initially decrease , reach a minimum , and then increase . of particular interest to usare recent results on the mixing time of random walks in this regime of the ( and the related ) random graph model .benjamini , kozma , and wormald and fountoulakis and reed have established rapid mixing results by proving structural results about these very sparse graphs . in particular, they proved that these graphs may be viewed as a `` core '' expander subgraph , whose deletion leaves a large number of `` decorations , '' _i.e. _ , small components such that a bounded number are attached to any vertex in the core .the particular constructions in their proofs is complicated , but they have a similar flavor to the core - and - whiskers structure we have empirically observed .similar results were observed by fernholz and ramachandran , whose analysis separately considered the -core of these graphs and then the residual pieces .they show that a typical longest shortest path between two vertices and consists of a path of length from to the -core , then a path of length across the -core , and finally a path of length from the -core to . again , this is reminiscent of the core - and - whiskers properties we have observed . in all these cases , the structure is very different than traditional expanders , which we also empirically observe .eigenvalues of power law graphs have also been studied by mihail and papadimitriou , chung , lu , vu , and flaxman , frieze , and fenner .we investigated statistical properties of community - like sets of nodes in large real - world social and information networks .we discovered that community structure in these networks is very different than what we expected from the experience with small networks and from what commonly - used models would suggest . in particular , we defined a _ network community profile plot ( ncp plot ) _ , and we observed that good network communities exist only up to a size scale of nodes .this agrees well with the observations of dunbar . 
for size scales above nodes ,the ncp plot slopes upwards as the conductance score of the best possible set of nodes gets gradually worse and worse as those sets increase in size .thus , if the world is modeled by a sparse `` interaction graph '' and if a density - based notion such as conductance is an appropriate measure of community quality , then the `` best '' possible `` communities '' in nearly every real - world network we examined gradually gets less and less community - like and instead gradually `` blends in '' with the rest of the network , as the purported communities steadily grow in size .although this suggests that large networks have a _ core - periphery _ or _ jellyfish _ type of structure , where small `` whiskers '' connect themselves into a large dense intermingled network `` core , '' we also observed that the `` core '' itself has an analogous core - periphery structure .none of the commonly - used network generation models , including preferential - attachment , copying , and hierarchical models , generates networks that even qualitatively reproduce this community structure property .we found , however , that a model in which edges are added recursively , via an iterative `` forest fire '' burning mechanism , produces remarkably good results .our work opens several new questions about the structure of large social and information networks in general , and it has implications for the use of graph partitioning algorithms on real - world networks and for detecting communities in them .we thank reid andersen , christos faloutsos and jon kleinberg for discussions , lars backstrom for data , and arpita ghosh for assistance with the proof of theorem [ thm : maingw ] .r. andersen , f.r.k .chung , and k. lang .local graph partitioning using pagerank vectors . in _focs 06 : proceedings of the 47th annual ieee symposium on foundations of computer science _ , pages 475486 , 2006 .s. arora , e. hazan , and s. kale . approximation to sparsest cut in time . in _focs 04 : proceedings of the 45th annual symposium on foundations of computer science _ , pages 238247 , 2004 .m. babenko , j. derryberry , a. goldberg , r. tarjan , and y. zhou .experimental evaluation of parametric max - flow algorithms . in _wea 07 : proceedings of the 6th workshop on experimental algorithms _ , pages 256269 , 2007 .l. backstrom , d. huttenlocher , j. kleinberg , and x. lan .group formation in large social networks : membership , growth , and evolution . in _kdd 06 : proceedings of the 12th acm sigkdd international conference on knowledge discovery and data mining _ , pages 4454 , 2006 .a. broder , r. kumar , f. maghoul , p. raghavan , s. rajagopalan , r. stata , a. tomkins , and j. wiener .graph structure in the web . in _www 00 : proceedings of the 9th international conference on world wide web _ , pages 309320 , 2000 .cherkassky and a.v .goldberg . on implementing push - relabel method for the maximum flow problem . in _ipco 95 : proceedings of the 4th international ipco conference on integer programming and combinatorial optimization _ , pages 157171 , 1995 .a. fabrikant , e. koutsoupias , and c.h .heuristically optimized trade - offs : a new paradigm for power laws in the internet . in _icalp 02 : proceedings of the 29th international colloquium on automata , languages and programming _ , pages 110122 , 2002 .m. faloutsos , p. faloutsos , and c. faloutsos . on power - law relationships of the internet topology . 
in _sigcomm 99 : proceedings of the conference on applications , technologies , architectures , and protocols for computer communication _ , pages 251262 , 1999 .flake , s. lawrence , and c.l .efficient identification of web communities . in _kdd 00 : proceedings of the 6th acm sigkdd international conference on knowledge discovery and data mining _ ,pages 150160 , 2000 .flaxman , a.m. frieze , and j. vera . a geometric preferential attachment model of networks .in _ waw 04 : proceedings of the 3rd workshop on algorithms and models for the web - graph _ , pages 4455 , 2004 .flaxman , a.m. frieze , and j. vera . a geometric preferential attachment model of networks ii . in _ waw 07 : proceedings of the 5th workshop on algorithms and models for the web - graph _ ,pages 4155 , 2007 . c. gkantsidis , m. mihail , and a. saberi .conductance and congestion in power law graphs . in _ proceedings of the 2003 acm sigmetrics international conference on measurement and modeling of computer systems _ , pages 148159 , 2003 .j. hopcroft , o. khan , b. kulis , and b. selman .natural communities in large linked networks . in _kdd 03 : proceedings of the 9th acm sigkdd international conference on knowledge discovery and data mining _ , pages 541546 , 2003 .r. kumar , j. novak , and a. tomkins .structure and evolution of online social networks . in _kdd 06 : proceedings of the 12th acm sigkdd international conference on knowledge discovery and data mining _ , pages 611617 , 2006 .r. kumar , p. raghavan , s. rajagopalan , d. sivakumar , a. tomkins , and e. upfal .stochastic models for the web graph . in _focs 00 : proceedings of the 41st annual symposium on foundations of computer science _ , pages 5765 , 2000 .k. lang and s. rao . a flow - based method for improving the expansion or conductance of graph cuts . in _ipco 04 : proceedings of the 10th international ipco conference on integer programming and combinatorial optimization _ , pages 325337 , 2004 .t. leighton and s. rao .an approximate max - flow min - cut theorem for uniform multicommodity flow problems with applications to approximation algorithms . in _focs 88 : proceedings of the 28th annual symposium on foundations of computer science _ ,pages 422431 , 1988 .j. leskovec , d. chakrabarti , j. kleinberg , and c. faloutsos .realistic , mathematically tractable graph generation and evolution , using kronecker multiplication . in _ecml / pkdd 05 : proceedings of the european international conference on principles and practice of knowledge discovery in databases _ , pages 133145 , 2005 .j. leskovec , j. kleinberg , and c. faloutsos .graphs over time : densification laws , shrinking diameters and possible explanations . in _kdd 05 : proceeding of the 11th acm sigkdd international conference on knowledge discovery in data mining _ , pages 177187 , 2005 .j. leskovec , k.j .lang , a. dasgupta , and m.w .statistical properties of community structure in large social and information networks . in _ www 08 : proceedings of the 17th international conference on world wide web _ , pages 695704 , 2008 .d. lusseau , k. schneider , o.j .boisseau , p. haase , e. slooten , and s.m .the bottlenose dolphin community of doubtful sound features a large proportion of long - lasting associations ., 54:396405 , 2003 .spielman and s .- h . teng .spectral partitioning works : planar graphs and finite element meshes . 
in _ focs 96 : proceedings of the 37th annual ieee symposium on foundations of computer science _ , pages 96107 , 1996 .spielman and s .- h .nearly - linear time algorithms for graph partitioning , graph sparsification , and solving linear systems . in _stoc 04 : proceedings of the 36th annual acm symposium on theory of computing _ ,pages 8190 , 2004 . | a large body of work has been devoted to defining and identifying clusters or communities in social and information networks , _ i.e. _ , in graphs in which the nodes represent underlying social entities and the edges represent some sort of interaction between pairs of nodes . most such research begins with the premise that a community or a cluster should be thought of as a set of nodes that has more and/or better connections between its members than to the remainder of the network . in this paper , we explore from a novel perspective several questions related to identifying meaningful communities in large social and information networks , and we come to several striking conclusions . rather than defining a procedure to extract sets of nodes from a graph and then attempt to interpret these sets as a `` real '' communities , we employ approximation algorithms for the graph partitioning problem to characterize as a function of size the statistical and structural properties of partitions of graphs that could plausibly be interpreted as communities . in particular , we define the _ network community profile plot _ , which characterizes the `` best '' possible community according to the conductance measure over a wide range of size scales . we study over large real - world networks , ranging from traditional and on - line social networks , to technological and information networks and web graphs , and ranging in size from thousands up to tens of millions of nodes . our results suggest a significantly more refined picture of community structure in large networks than has been appreciated previously . our observations agree with previous work on small networks , but we show that large networks have a very different structure . in particular , we observe tight communities that are barely connected to the rest of the network at very small size scales ( up to nodes ) ; and communities of size scale beyond nodes gradually `` blend into '' the expander - like core of the network and thus become less `` community - like , '' with a roughly inverse relationship between community size and optimal community quality . this observation agrees well with the so - called dunbar number which gives a limit to the size of a well - functioning community . however , this behavior is not explained , even at a qualitative level , by any of the commonly - used network generation models . moreover , it is exactly the opposite of what one would expect based on intuition from expander graphs , low - dimensional or manifold - like graphs , and from small social networks that have served as testbeds of community detection algorithms . the relatively gradual increase of the network community profile plot as a function of increasing community size depends in a subtle manner on the way in which local clustering information is propagated from smaller to larger size scales in the network . we have found that a generative graph model , in which new edges are added via an iterative `` forest fire '' burning process , is able to produce graphs exhibiting a network community profile plot similar to what we observe in our network datasets . ' '' '' ' '' '' |
graphical analysis ( ga ) has been routinely used for quantification of positron emission tomography ( pet ) radioligand measurements . the first ga method for measuring primarily tracer uptakes for irreversible kineticswas introduced by patlak , , and extended for measuring tracer distribution ( accumulation ) in reversible systems by logan , .these techniques have been utilized both with input data acquired from plasma measurements and using the time activity curve from a reference brain region .they have been used for calculation of tracer uptake rates , absolute distribution volumes ( dv ) and dv ratios ( dvr ) , or , equivalently , for absolute and relative binding potentials ( bp ) .they are widely used because of their inherent simplicity and general applicability regardless of the specific compartmental model .the well - known bias , particularly for reversible kinetics , in parameters estimated by ga is commonly attributed to noise in the data , , and therefore techniques to reduce the bias have concentrated on limiting the impact of the noise .these include ( i ) rearrangement of the underlying system of linear equations so as to reduce the impact of noise yielding the so - called _ multi - linear _ method ( ma1 ) , , and a second multi - linear approach ( ma2 ) , , ( ii ) preprocessing using the method of generalized linear least squares ( glls ) , , yielding a hybrid glls - ga method , , ( iii ) use of the method of perpendicular least squares , , also known as total least squares ( tls ) , , ( iv ) likelihood estimation , , ( v ) tikhonov regularization , ( vi ) principal component analysis , , and ( vii ) reformulating the method of logan so as to reduce the noise in the denominator , . here , we turn our attention to another important source of the bias : the model error which is implicit in ga approaches .the bias associated with ga approaches has , we believe , three possible sources .the bias arising due to random noise is most often discussed , but errors may also be attributed to the use of numerical quadrature and an approximation of the underlying compartmental model .it is demonstrated in section [ sec : theory ] that not only is bias an intrinsic property of the linear model for limited scan durations , which is exaggerated by noise , but also that it may be dominated by the effects of the model error .indeed , numerical simulations , presented in section [ sec : validatetheory ] , demonstrate that large bias can result even in the noise - free case .conditions for over- or under - estimation of the dv due to model error and the extent of bias of the logan plot are quantified analytically .these lead to the design of a bias correction method , section [ sec : method ] , which still maintains the elegant simplicity of ga approaches .this bias reduction is achieved by the introduction of a simple nonlinear term in the model . while this approach adds some moderate computational expense , simulations reported in section [ sec : resultssc0 ] for the fibrillar amyloid radioligand [ 11c ] benzothiazole - aniline ( pittsburgh compound - b [ pib ] ) , , illustrate that it greatly reduces bias .relevant observations are discussed in section [ sec : diss ] and conclusions presented in section [ sec : conc ] .for the measurement of dv , existing linear quantification methods for reversible radiotracers with a known input function , i.e. 
the unmetabolized tracer concentration in plasma, are based on the following linear approximation of the true kinetics developed by logan, $$\int_0^t c_{\mathrm{t}}(\tau)\,d\tau \;\approx\; \mathrm{dv}\int_0^t c_{\mathrm{p}}(\tau)\,d\tau \;+\; b\,c_{\mathrm{t}}(t).$$ here $c_{\mathrm{t}}(t)$ is the measured _ tissue time activity curve _ (ttac), $c_{\mathrm{p}}(t)$ is the _ input function _ , dv represents the _ distribution volume _ and the quantity $b$ is a constant. with $c_{\mathrm{t}}$ and $c_{\mathrm{p}}$ known, one can solve for dv and $b$ by the method of linear least squares. this model, which we denote by ma0 to distinguish it from ma1 and ma2 introduced in , approximately describes tracer behavior at equilibrium. dividing through by $c_{\mathrm{t}}(t)$, so that dv becomes the linear slope and $b$ the intercept, yields the original logan graphical analysis model, denoted here by logan-ga, $$\frac{\int_0^t c_{\mathrm{t}}(\tau)\,d\tau}{c_{\mathrm{t}}(t)} \;\approx\; \mathrm{dv}\,\frac{\int_0^t c_{\mathrm{p}}(\tau)\,d\tau}{c_{\mathrm{t}}(t)} \;+\; b,$$ in which the dv and intercept are obtained by using linear least squares (ls) for the sampled version of ([eq:logan]). although it is well known that this model often leads to under-estimation of the dv, it is still widely used in pet studies. an alternative formulation based on ([eq:ma0]) is the so-called ma1, $$c_{\mathrm{t}}(t) \;\approx\; -\frac{\mathrm{dv}}{b}\int_0^t c_{\mathrm{p}}(\tau)\,d\tau \;+\; \frac{1}{b}\int_0^t c_{\mathrm{t}}(\tau)\,d\tau,$$ for which the dv can again be obtained using ls. recently another formulation, obtained by dividing ([eq:ma0]) by $\int_0^t c_{\mathrm{p}}(\tau)\,d\tau$ instead of $c_{\mathrm{t}}(t)$, has been developed by zhou _ et al _ . but, as noted by varga _ et al _ in , the noise appears in both the independent and dependent variables in ([eq:logan]), and thus tls may be a more appropriate model than ls for obtaining the dv. whereas it has been concluded through numerical experiments for the tracers [ ]fcway and [ ]mdl 100,907 that ma1 ([eq:multi-lin1]) performs better than other linear methods, including logan-ga ([eq:logan]), tls and ma2, none of these techniques explicitly deals with the inherent error due to the assumption of model ma0 ([eq:ma0]). the focus here is thus an examination of the model error specifically for logan-ga and ma1, from which a new method for reduction of the model error is designed. the general three-tissue compartmental model for the reversible radioligand binding kinetics of a given brain region or a voxel can be illustrated as follows: [figure [fig:3t] : schematic of the three-tissue compartmental model, in which the plasma compartment (the input function) exchanges with a free-tracer tissue compartment, and the free compartment in turn exchanges with a nonspecifically bound compartment and a specifically bound compartment, each exchange governed by a pair of rate constants.] here (kbq/ml) is the input function, i.e. the unmetabolized radiotracer concentration in plasma, , and (kbq/g) are the free, nonspecifically bound and specifically bound tracer concentrations, resp., and (ml/min/g) and (1/min), , are rate constants. the dv is related to the rate constants as follows, . the numerical implementation for estimating the unknown rate constants of the differential system illustrated in figure [fig:3t] is difficult because three exponentials are involved in the solution of this system . specifically, without the inclusion of additional prior knowledge, the rate constants may be unidentifiable . fortunately, for most tracers it can safely be assumed that the free and nonspecifically bound compartments reach equilibrium rapidly for specific binding regions. then it is appropriate to use a two-tissue four-parameter (2t-4k) model, obtained by binning the free and nonspecifically bound concentrations into one compartment. this is equivalent to taking , and hence .
on the other hand , for regions without specific binding activity , we know which is equivalent to taking , and it is again appropriate for most radioligands to bin and . the one - tissue compartmental model is then appropriate for regions without specific binding activity .for some tracers , however , for example the modeling of pib in the cerebellar reference region , the best data fitting is obtained by using the 2t-4k model without binning and , . assuming the latter , the dv is given by , and , for regions with and without specific binding activity , resp . ignoring the notational differences between the two models , for regions with and without specific binding activity , they are both described by the same abstract mathematical 2t-4k model equations . here , without loss of generality , we present the 2t-4k model equations for specific binding regions , to obtain the equations appropriate for regions without specific binding activity , is replaced by and and are interpreted as the association and dissociation parameters of regions without specific binding activity . to simplify the explanation , and used throughout for both regions with and without specific binding activity , with the assumption that , and should automatically be replaced by , and respectively , when relevant . the solution of the linear differential system ( [ eq:2t4kc1])-([eq:2t4kcs ] )is given by where represents the convolution operation , the overall concentration of radioactivity is integrating ( [ eq:2t4kc1])-([eq:2t4kcs ] ) and rearranging yields this is model ( [ eq : ma0 ] ) when is linearly proportional to for a time window within the scan duration of minutes .the accuracy of linear methods based on ( [ eq : ma0 ] ) is thus dependent on the validity of the assumption that and are approximately linearly proportional to over a time window within ] , bias in the estimated uptake rate or dv will be introduced , as shown later in section [ sec : resultssc0 ] , due to the intrinsic model error of a ga method . indeed , in section[ subsec : equilibrium ] we show that , for the pib radioligand on some regions with small , there is no window within a minutes scan duration where and are linearly proportional .this is despite the apparent good linearity , visually , of the logan plot of against . waiting for equilibrium , which may take several hours , is impractical in terms of patient comfort , cost and measurement of radioactivities .the limitation of the constant approximation can be analysed theoretically .because and is very small for large time the convolution is relatively small .we can safely assume that the ratio of to is roughly for .then , see equation ( [ sol : cs ] ) , is approximately proportional to for . in our tests with pib , the neglected component is less than for min .. 
on the other hand , this is not the case for , see equation ( [ sol : c_1 ] ) , because and need not be of the same scale .for example , if we know from ( [ symb : a1 ] ) , thus .specifically , may not be small in relation to .thus , it is not appropriate , as is assumed for the logan - ga ( [ eq : logan ] ) and other linear methods derived from ma0 , to approximate as constant for ] is a non - constant decreasing ( increasing ) function , and * the dv is exact if , ] by }{\mathbf { x}}(t)-\displaystyle\min_{t \in [ t_l , t_n ] } { \mathbf { x}}(t)|.\ ] ] then the bias in calculated by logan - ga is bounded by where .this theorem is an immediate result of lemma [ lemma : ma1_ana ] and corollary [ cor : logan - ga ] in the _ appendix _ for the vectors obtained from the sampling of the functions at discrete time points the relevant vectors are defined by , , , where the division corresponds to componentwise division .it is easy to check that all these vectors are positive vectors , , , and are non - constant increasing vectors and is decreasing .thus all conditions for lemma [ lemma : ma1_ana ] and corollary [ cor : logan - ga ] are satisfied .note that in the denominator of ( [ logandv_bound ] ) the simplification is used . in the latter discussion we may use the variation ( increasing or decreasing ) of instead of that of because it is not surprising that the properties of logan - ga and ma0 are similar . indeed ,ma0 is none other than weighted logan - ga with weights , which changes the noise structure in the variables .in contrast to the conventional under - estimation observations , it is suprising that the dv may be over - estimated .however , the over - estimation is indeed observed in the tests presented in section [ sec : overest ] and [ sec : resultssc0 ] .inequality ( [ logandv_bound ] ) indicates that logan - type linear methods will work well for data for which is flat .unfortunately , may become flat only for a late time interval .thus our interest , in section [ sec : method ] , is to better estimate the dv using a reasonable ( practical ) time window , which may include the window over which is still increasing .our initial focus is on the modification of logan - type methods .then , in section [ sec : validatetheory ] we present numerical simulations using noise - free data which illustrate the difficulties with logan - ga and ma1 , and support the results of theorem [ thm : logan_bias ] .in the previous discussion we have seen the theoretical limitations of the logan - ga and ma1 methods . herewe present a new model and associated algorithm which assists with reducing the bias in the estimation of the dv .observe that , , implies that , where can be ignored for .therefore , for ( [ eq : root ] ) can be approximated by a new model as follows where and .this suggests new algorithms should be developed for estimation of parameters dv , , and . here ,a new approach , based on the basis function method ( bfm ) in , in which is discretized , is given by the following algorithm .[ alg : biascor_guo]given and for and , the dv is estimated by performing the following steps . 1 .calculate and intercept , using logan - ga .2 . set and if otherwise .3 . form discretization , for , with equal spacing logarithmically between and .for each solve the linear ls problem , i.e. cast it as a multiple linear regression problem with as the dependent variable . with data at , , to give values , and . 
: 5 .determine for which residual is minimum over all .set , and to be , and , resp .* remarks : * + 1 .[ remark : alpha1 ] the interval for is determined as follows : first the lower bound for is suitable for most tracers , but could be reduced appropriately .this lower bound is not the same as that on used in bfm , in which is required to be greater than the decay constant of the isotope , .second by point ( [ item : est_b ] ) of corollary [ cor : logan - ga ] in the _ appendix a _ , should be positive and near the average value of , where , by ( [ ineq : bounds_s ] ) , .on the other hand , if is small relative to .thus , is linked with through .this is used to give the estimate of the upper bound on . practically , it is possible that the logan - ga may yield an intercept , then we set .2 . numerically , because is much larger than both and for , the estimate of dv is much more robust to noise in the formulation , including both model and random noise effects , than are the estimates of and . therefore , while and may not be good estimates of and , resp . for noisy data ,the estimate of dv will still be acceptable .consequently , it is possible that logan - ga and ma0 will produce reasonable estimates for dv , even when the model error is non negligible .3 . the algorithm can be accelerated by employing a coarse - to - fine multigrid strategy .the coarser level grid provides bounds for the fine level grid .the grid resolution can be gradually refined until the required accuracy is satisfied .we present a series of simulations which first validate the theoretical analysis of section [ sec : theory ] for noise - free data , and then numerical experiments which contrast the performance of algorithm [ alg : biascor_guo ] with logan - ga , ma1 and nonlinear kinetic analysis ( ka ) algorithms for noisy data .we assume the radioligand binding system is well modeled by the 2t-4k compartmental model and focus the analysis on the bias in the estimated dv which can be attributed to the simplification of the 2t-4k model . for the simulationwe use representative kinetic parameters for brain studies with the pib tracer .these kinetic parameters , detailed in table [ tab : ki ] , are adopted from published clinical data , .the simulated regions include the posterior cingulate ( pcg ) , cerebellum ( cere ) and a combination of cortical regions ( cort ) .the kinetic parameters of each roi are also associated with the subject medical condition , namely normal controls ( nc ) and alzheimer s disease ( ad ) diagnosed subjects .the kinetic parameters for the first seven rois are from while the last four are from .rate constants for rois * 5 * to * 11 * are directly adopted from the published literature , while those for rois * 1 * to * 4 * are rebuilt from information provided in .the values for rois * 1 * to * 4 * and * 8 * to * 11 * represent average values for each group , while those for rois * 5 * and * 6 * are derived from one ad subject and those for roi * 7 * from another ad subject . .rate constants for eleven rois , including pcg , cere , and cort , for ad and nc adopted from . for rois * 6 * , * 7 * , * 10 * and * 11 * no specific binding activity is assumed , i.e. , ; while for rois * 1 * to * 5 * , * 8 * and * 9 * we assume that the free and nonspecific compartments rapidly reach equilibrium , i.e. , . 
coefficients and are defined in ( [ symb : a1 ] ) .the values for rois * 1 * to * 4 * and * 8 * to * 11 * represent average values for each group , while those for rois * 5 * and * 6 * are derived from one ad subject and those for roi * 7 * from another ad subject . [ tab : ki ] [ cols="^,^,^,^,^,^,^,^,^,^",options="header " , ] in contrasting the results with respect to only the bias in the calculation of the dv it is clear that algorithm [ alg : biascor_guo ] leads to significantly more robust solutions than logan - ga1 and ma1 for noise - free data . on the other hand, the ka approach can lead to very good solutions , comparable and perhaps marginally better than algorithm [ alg : biascor_guo ] . for roi * 6 * , for which the ka solution is significantly better , we recall that the solution depends on the initial values of the parameters . changing the initial to , the resulting bias in the dv of roi * 6 * calculated by ka is increased to . on the other hand , algorithm [ alg : biascor_guo ] is not dependent on specifying initial values , and is thus more computationally robust . while the results with noise - free data support the use of algorithm [ alg : biascor_guo ] , it is more critical to assess its performance for noise - contaminated simulations . the experimental evaluation for noisy data is based on the noise - free input and noise - free output , one output ttac for each of the eleven parameter sets given in table [ tab : ki ] .noise contamination of the input function and these ttacs is obtained as follows . for a given noise - free decay - corrected concentration ttac , , gaussian ( ) noise at each time point is modeled using the approach in . the standard deviation in the noise at each time point , depends on the frame time interval in seconds , the tracer decay constant ( for ) and a scale factor the resulting coefficients of variation ( ratio to ) , for scale factors and , are illustrated in figure [ fig : cv ] .the noise in the input function can be attributed to two sources , system and random noise .although the random -ray emission follows a poisson distribution , we use the limiting result that a large mean poisson distribution is approximately gaussian to model this randomness as gaussian .thus both sources are modeled as gaussian but with different variance . consider first the following model for determining the randomness of the emissions .suppose a ml blood sample is placed in a -ray well counter which has efficiency and the measured counts over seconds are . then the measured decay corrected concentration ( kbq / ml ) is where is a normalization factor to convert the counts to `` kilo '' counts .then , assuming that the mean of ( or its true value ) is as given in ( [ inpf.eq ] ) , the standard deviation in the measurement of due to random effects is .the coefficient of variation , c , which results from this random noise is shown in figure [ fig : cv ] .it is assumed in the experiments that each blood sample has volume , the count duration is seconds and the well counter efficiency is .then , denoting the coefficient of variation due to system noise by c , the noise - contaminated input is given by where is selected from a standard normal distribution ( g ) , and in the simulations we use c , see figure [ fig : inpf ] .two hundred random noise realizations are generated for each input - ttac pair , and for each noise level ( , ) .the distribution volume is calculated for each experimental pair using logan - ga , ma1 , ka and algorithm [ alg : biascor_guo ] . 
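for concreteness , a minimal sketch of the logan - ga computation used in these comparisons is given below : the cumulative integrals of the tissue and plasma curves are formed by trapezoidal quadrature , and the dv is the slope of an ordinary least squares fit of the transformed variables over the assumed linear window . the variable names , the choice of the window start time and the use of numpy are illustrative assumptions ; this is the textbook logan plot rather than the exact code used for the reported experiments .

```python
import numpy as np

def logan_ga_dv(t, ct, cp, t_star):
    """classic logan graphical-analysis estimate of the dv from sampled data.

    t      : mid-frame times (min), increasing
    ct     : tissue time activity curve values c_t(t_i)
    cp     : metabolite-corrected plasma input values c_p(t_i)
    t_star : assumed start of the linear window
    returns (dv, intercept) from the least squares fit of
        int_0^t c_t / c_t(t)  =  dv * int_0^t c_p / c_t(t)  +  b .
    """
    # cumulative integrals by the trapezoidal rule; the quadrature error is
    # one of the three bias sources discussed in the text
    int_ct = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (ct[1:] + ct[:-1]))))
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))
    mask = (t >= t_star) & (ct > 0)
    x = int_cp[mask] / ct[mask]
    y = int_ct[mask] / ct[mask]
    dv, b = np.polyfit(x, y, 1)   # slope = dv, intercept = b
    return dv, b
```

ma1 rearranges the same integrated quantities into a different linear regression , while algorithm [ alg : biascor_guo ] augments the linear model with the additional nonlinear term introduced in section [ sec : method ] .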
in each casetwo scan durations are considered , and minutes respectively , and minutes .unlike the noise - free case , the numerical quadrature for uses only the samples at scan points .we present histograms for the percentage relative error of the bias in order to provide a comprehensive contrast of the methods . figure[ fig : dv - histall ] shows the histograms for all eleven rois , with the range of the error for each method indicated in the legend .the figures ( a)-(b ) are for scan windows of minutes , for noise scale factors and while ( c)-(d ) are for scan windows of minutes .figure [ fig : dv - hist3 ] provides equivalent information for a representative cortical region roi * 3*. it is clear that the distributions of the relative errors for ka and ma1 are far from normal ; ka has a significant positive tail while logan - ga has strong negative bias .ma1 has unacceptably long tails except for the case of low noise with long scan duration , i.e. with minutes scan duration . on the other hand ,the histogram for algorithm [ alg : biascor_guo ] is close to a gaussian random distribution ; the mean is near zero and the distribution is approximately symmetric .moreover , algorithm [ alg : biascor_guo ] performs well , and is only outperformed marginally by ma1 for the lower noise and longer time window case . on the other hand , there are some situations , particularly for ma1 , in which the relative error is less than ; in other words , the calculated dvs are negative .such _ unsuccessful _ results occur only for the higher noise level ( ) . while there was only one such occurrence for the logan - ga ( min . with roi * 9 * ) , there were such occurrences for ma1 , for the shorter time interval of minutes ( rois * 1 * , * 3 * , * 4 * , * 5 * , * 6 * , * 8 * and * 9 * ) and for the longer interval of minutes , ( rois * 1 * and * 6 * ) .the reason for the negative dv for ma1 is discussed in section [ subsec : negdv ] .from the results for the higher noise we conclude that algorithm [ alg : biascor_guo ] using the shorter minutes scan duration outperforms the other algorithms , even in comparison to their results for the longer scan duration .obviously algorithm [ alg : biascor_guo ] is more expensive computationally than logan - ga and ma1 . in the simulations , the average cpu time , in seconds , per ttac was , , and , for logan - ga , ma1 , ka and algorithm [ alg : biascor_guo ] , respectively . the high cost of the ka results from the requirement to use a nonlinear algorithm .because the ka requires a good initial estimate for the parameters the cost is variable for each ttac ; it is dependent on whether the supplied initial value is a good initial estimate .indeed the ka results take from to seconds , while the costs using the other methods are virtually ttac independent .the graphical analysis methods of logan - type rely on the assumption that the ratio to is approximately constant within a chosen window ] ; then 1 .the estimated solution and exact solution are related by * if is a non - constant decreasing vector , * if is a non - constant increasing vector , * if is a constant vector ; 2 .the following inequality is true without any monotonicity assumptions : 3 .[ lemma_bsign ] the sign of the intercept is determined as follows : * if is a non - constant decreasing vector , * if is a non - constant increasing vector , * if is a constant vector ; 4 .given , the ls solution of for is ; 5 . 
given , the ls solution of for and the true solution have the same relationship as stated in the first conclusion of this theorem .it is easy to verify that the ls solution of is the proof then follows as outlined below : 1 .replace in the expression for with .then and the results immediately follow from lemma [ lemma : baseineq ] ( [ lemma2_1])-([lemma2_3 ] ) and the fact when is not linear proportional to .because and similarly we have using the fact and ( [ eq : hatx ] ) , we conclude the inequality is true .again we replace with , then the expression for becomes the results immediately follow from lemma [ lemma : baseineq ] ( [ lemma2_4])-([lemma2_6 ] ) and the fact when and do not have the same direction .4 . this result is easily verified .5 . given ,the ls solution of for is the results now follow from lemma [ lemma : baseineq ] .we now transform the exact equation to and rewrite the results using vectors , and .correspondingly , we find the ls solution of for , .[ cor : logan - ga ] if , and are positive , of which is strictly increasing , , , and satisfy ; and =\mathrm{argmin}\|\bar{{{\mathbf p } } } x - b{{\mathbf e}}-\bar{{{\mathbf r}}}\|_2 ^ 2 ] .the two columns of the system matrix are denoted by and , i.e. ] , has the exact solution ] has the following statistical properties 1 . and , and 2 . .we assume matrix has the following singular value decomposition =u s v^t = u \left ( \begin{array}{cc } s_1 & 0\\ 0 & s_2\\ \vdots & \dots\\ 0&0 \end{array } \right ) \left ( \begin{array}{cc } \cos\theta & \sin\theta\\ -\sin\theta&\cos\theta \end{array } \right ) , \ ] ] in which .then where because is an unitary matrix and we immediately derive the the following inequality from equation ( [ eq : svd ] ) : this inequality is equivalent to which implies , i.e. and , and .if we denote the two rows of matrix by and than because and we conclude if we let and be the two rows of matrix then and because is unitary .thus let it is clear and because the means of are zero , and and resp .. therefore .because we conclude and this result is illustrated by the following simple example : the first column is much larger than the second column .if we add noise to the right hand side , i.e. and , and perform simulation with realizations the distribution of the resulted and are illustrated in figure [ fig : diffsens ] .these results are consistent with the conclusions in theorem [ thm : var ] .[ fig : diffsens ]integrating ( [ eq:2t4kc1 ] ) and ( [ eq:2t4kcs ] ) from to we obtain taking the sum of equations ( [ eq : int2t4kc1 ] ) and ( [ eq : int2t4kcs ] ) yields : and canceling from ( [ eq : int2t4kcs1 ] ) using ( [ eq : ctsum ] ) gives : this can be transformed to ( [ eq : root ] ) immediately by using .j. logan , j. s. fowler , n. d. volkow , a. p. wolf , s. l. dewey , d. j. schlyer , r. r. macgregor , r. hitzemann , b. bendriem , s. j. gatley , graphical analysis of reversible radioligand binding from time - activity measurements applied to [ -11-methyl]-(-)-cocaine studies in human subjects , j. cereb .blood flow metab . 10( 1990 ) 740747 .r. buchert , f. wilke , j. van den hoff , j. mester , improved statistical power of the multilinear reference tissue approach to the quantification of neuroreceptor ligand binding by regularization , j. cereb .blood flow metab .23 ( 5 ) ( 2003 ) 612620 .y. zhou , w. ye , j. r. brai , a. h. crabb , j. hilton , d. f. 
wong , a consistent and efficient graphical analysis method to improve the quantification of reversible tracer binding in radioligand receptor dynamic studies , neuroimage , 44 ( 3 ) ( 2009 ) 661670 .c. a. mathis , y. wang , d. p. holt , g. f. huang , m. l. debnath , w. e. klunk , synthesis and evaluation of 11-labeled 6-substituted 2-aryl benzothiazoles as amyloid imaging agents , j. med .46 ( 2003 ) 27402755 .j. j. frost , k. h. douglass , h. s. mayberg , r. f. dannals , j. m. links , a. a. wilson , h. t. ravert , w. c. crozier , h. n. j. wagner , multicompartmental analysis of [ 11]-carfentanil binding to opiate receptors in humans measured by positron emission tomography , j. cereb .blood flow metab . 9( 1989 ) 398409 .j. c. price , w. e. klunk , b. j. lopresti , x. lu , j. a. hoge , s. k. ziolko , d. p. holt , c. c. meltzer , s. t. dekosky , c. a. mathis , kinetic modeling of amyloid binding in humans using imaging and ittsburgh compound- , j. cereb .blood flow metab .. 25 ( 11 ) ( 2005 ) 15281547 .s. g. mueller , m. w. weiner , l. j. thal , r. c. petersen , c. r. jack , w. jagust , j. q. trojanowski , a. w. toga , l. beckett , ways toward an early diagnosis in lzheimer s disease : the lzheimer s disease neuroimaging initiative ( ) , alzheimer s dement . 1 ( 1 ) ( 2005 ) 5566 . | graphical analysis methods are widely used in positron emission tomography quantification because of their simplicity and model independence . but they may , particularly for reversible kinetics , lead to bias in the estimated parameters . the source of the bias is commonly attributed to noise in the data . assuming a two - tissue compartmental model , we investigate the bias that originates from model error . this bias is an intrinsic property of the simplified linear models used for limited scan durations , and it is exaggerated by random noise and numerical quadrature error . conditions are derived under which logan s graphical method either over- or under - estimates the distribution volume in the noise - free case . the bias caused by model error is quantified analytically . the presented analysis shows that the bias of graphical methods is inversely proportional to the dissociation rate . furthermore , visual examination of the linearity of the logan plot is not sufficient for guaranteeing that equilibrium has been reached . a new model which retains the elegant properties of graphical analysis methods is presented , along with a numerical algorithm for its solution . we perform simulations with the fibrillar amyloid radioligand [ 11c ] benzothiazole - aniline using published data from the university of pittsburgh and rotterdam groups . the results show that the proposed method significantly reduces the bias due to model error . moreover , the results for data acquired over a minutes scan duration are at least as good as those obtained using existing methods for data acquired over a minutes scan duration . bias ; graphical analysis ; logan plot ; pet quantification ; pib ; alzheimer s disease ; distribution volume . |
algorithmic information theory ( ait , for short ) is a framework for applying information - theoretic and probabilistic ideas to recursive function theory .one of the primary concepts of ait is the _ program - size complexity _ ( or _ kolmogorov complexity _ ) of a finite binary string , which is defined as the length of the shortest binary program for an optimal computer to output . herean optimal computer is a universal decoding algorithm . by the definition , is thought to represent the amount of randomness contained in a finite binary string , which can not be captured in a computational manner . in particular , the notion of program - size complexity plays a crucial role in characterizing the _ randomness _ of an infinite binary string , or equivalently , a real . in we introduced and developed a statistical mechanical interpretation of ait .we there introduced the notion of _ thermodynamic quantities at temperature _ , such as partition function , free energy , energy , and statistical mechanical entropy , into ait .these quantities are real functions of a real argument .we then proved that if the temperature is a computable real with then , for each of these thermodynamic quantities , the partial randomness of its value equals to , where the notion of _ partial randomness _ is a stronger representation of the compression rate by means of program - size complexity .thus , the temperature plays a role as the partial randomness of all the thermodynamic quantities in the statistical mechanical interpretation of ait . in further showed that the temperature plays a role as the partial randomness of the temperature itself , which is a thermodynamic quantity of itself .namely , we proved _ the fixed point theorem on partial randomness _ , which states that , for every , if the value of partition function at temperature is a computable real , then the partial randomness of equals to , and therefore the compression rate of equals to , i.e. , , where is the first bits of the base - two expansion of . in our second work on this interpretation, we showed that a fixed point theorem of the same form as for holds also for each of free energy , energy , and statistical mechanical entropy .moreover , based on the statistical mechanical relation , we showed that the computability of gives completely different fixed points from the computability of . in this paper, we develop the statistical mechanical interpretation of ait further and pursue its formal correspondence to normal statistical mechanics . as a result, we unlock the properties of the sufficient conditions further .the thermodynamic quantities in ait are defined based on the halting set of an optimal computer . in this paper, we show in theorem [ main ] below that there are infinitely many optimal computers which give completely different sufficient conditions in each of the thermodynamic quantities in ait .we do this by introducing the notion of composition of computers into ait , which corresponds to the notion of composition of systems in normal statistical mechanics .we start with some notation about numbers and strings which will be used in this paper . is the set of natural numbers , and is the set of positive integers . is the set of rationals , and is the set of reals . let with .we say that is _ increasing _( resp . , _ decreasing _ ) if ( resp ., ) for all with .normally , denotes any function such that . is the set of finite binary strings , where denotes the _empty string_. 
for any , is the _ length _ of .a subset of is called _ prefix - free _ if no string in is a prefix of another string in . for any partial function , the domain of definition of is denoted by .let be an arbitrary real .we denote by the first bits of the base - two expansion of with infinitely many zeros , where is the greatest integer less than or equal to .for example , in the case of , .we say that a real is _ computable _ if there exists a total recursive function such that for all .see e.g. weihrauch for the detail of the treatment of the computability of reals . in the following we concisely review some definitions and results of algorithmic information theory .computer _ is a partial recursive function such that is a nonempty prefix - free set . for each computer and each , is defined by ( may be ) .a computer is said to be _ optimal _ if for each computer there exists , which depends on , with the following property ; for every there exists for which and .it is easy to see that there exists an optimal computer .we choose a particular optimal computer as the standard one for use , and define as , which is referred to as the _ program - size complexity _ of or the _ kolmogorov complexity _ of .it follows that for every computer there exists such that , for every , . for any , we say that is _ weakly chaitin random _ if there exists such that for all .on the other hand , for any , we say that is _ chaitin random _ if . obviously , for every , if is chaitin random , then is weakly chaitin random .we can show that the converse also hold .thus , for every , is weakly chaitin random if and only if is chaitin random ( see chaitin for the proof and historical detail ) . in the works ,we generalized the notion of the randomness of a real so that _ the degree of the randomness _ , which is often referred to as _ the partial randomness _recently , can be characterized by a real with as follows .let with .for any , we say that is _ weakly chaitin -random _ if there exists such that for all .let with .for any , we say that is _-compressible _ if , which is equivalent to . in the case of , the weak chaitin-randomness results in the weak chaitin randomness . for every ] and every , if is chaitin -random , then is weakly chaitin -random .however , in 2005 reimann and stephan showed that , in the case of , the converse does not necessarily hold .this contrasts with the equivalence between the weak chaitin randomness and the chaitin randomness , each of which corresponds to the case of .in this section , we review some results of the statistical mechanical interpretation of ait , developed by our former works . we first introduce the notion of thermodynamic quantities into ait in the following manner . in statistical mechanics ,the partition function , free energy , energy , and entropy at temperature are given as follows : where is a complete set of energy eigenstates of a quantum system and is the energy of an energy eigenstate .the constant is called the boltzmann constant , and the denotes the natural logarithm .let be an arbitrary computer .we introduce the notion of thermodynamic quantities into ait by performing replacements [ cs06 ] below for the thermodynamic quantities in statistical mechanics .[ cs06 ] 1 .replace the complete set of energy eigenstates by the set of all programs for .2 . replace the energy of an energy eigenstate by the length of a program .3 . 
set the boltzmann constant to .thus , motivated by the formulae and taking into account replacements [ cs06 ] , we introduce the notion of thermodynamic quantities into ait as follows . [ tdqait ] let be any computer , and let be any real with .first consider the case where is an infinite set . in this case, we choose a particular enumeration of the countably infinite set .can be chosen quite arbitrarily , and the results of this paper are independent of the choice of .this is because the sum and in definition [ tdqait ] are positive term series and converge as for every . ]the _ partition function _ at temperature is defined as where 2 . the _ free energy _ at temperature is defined as where 3 .the _ energy _ at temperature is defined as where 4 .the _ statistical mechanical entropy _ at temperature is defined as where in the case where is a nonempty finite set , the quantities , , , and are just defined as , , , and , respectively , where is an enumeration of the finite set .note that , for every optimal computer , is precisely a chaitin number introduced by chaitin .theorems [ cprpffe ] and [ cpreesh ] below hold for these thermodynamic quantities in ait . [ cprpffe ]let be an optimal computer , and let . 1 .if and is computable , then each of and converges and is weakly chaitin -random and -compressible .2 . if , then and diverge to and , respectively .[ cpreesh ] let be an optimal computer , and let . 1 .if and is computable , then each of and converges and is chaitin -random and -compressible .2 . if , then both and diverge to .the above two theorems show that if is a computable real with then the temperature equals to the partial randomness ( and therefore the compression rate ) of the values of all the thermodynamic quantities in definition [ tdqait ] for an optimal computer .these theorems also show that the values of all the thermodynamic quantities diverge when the temperature exceeds .this phenomenon might be regarded as some sort of phase transition in statistical mechanics .note here that the weak chaitin -randomness in theorem [ cprpffe ] is replaced by the chaitin -randomness in theorem [ cpreesh ] in exchange for the divergence at . in statistical mechanics or thermodynamics , among all thermodynamic quantitiesone of the most typical thermodynamic quantities is temperature itself .theorem [ fptpr ] below shows that the partial randomness of the temperature can equal to the temperature itself in the statistical mechanical interpretation of ait .we denote by the set of all real such that is weakly chaitin -random and -compressible , and denote by the set of all real such that is chaitin -random and -compressible . obviously , .each element of is a _fixed point on partial randomness _, i.e. , satisfies the property that the partial randomness of equals to itself , and therefore satisfies that .let be a computer .we define the sets by in the same manner , we define the sets , , and based on the computability of , , and , respectively .we can then show the following .[ fptpr ] let be an optimal computer. 
then and .theorem [ fptpr ] is just a fixed point theorem on partial randomness , where the computability of each of the values , , , and gives a sufficient condition for a real to be a fixed point on partial randomness .thus , by theorem [ fptpr ] , the above observation that the temperature equals to the partial randomness of the values of the thermodynamic quantities in the statistical mechanical interpretation of ait is further confirmed .in this paper , we investigate the properties of the sufficient conditions for to be a fixed point on partial randomness in theorem [ fptpr ] . using the monotonicity and continuity of the functions and on temperature and using the statistical mechanical relation , which holds from definition [ tdqait ], we can show the following theorem for the sufficient conditions in theorem [ fptpr ] .[ sczf ] let be an optimal computer . then each of the sets and is dense in while .thus , for every optimal computer , the computability of gives completely different fixed points from the computability of .this implies also that and .the aim of this paper is to investigate the structure of and in greater detail .namely , we show in theorem [ main ] below that there are infinitely many optimal computers which give completely different sufficient conditions in each of the thermodynamic quantities in ait .we say that an infinite sequence of computers is _ recursive _ if there exists a partial recursive function such that for each the following two hold : ( i ) if and only if , and ( ii ) for every . then the main result of this paper is given as follows .[ main ] there exists a recursive infinite sequence of optimal computers which satisfies the following conditions : 1 . for all with .2 . and .3 . and . in the subsequent sections we prove the above theorems by introducing the notion of _ composition _ of computers into ait , which corresponds to the notion of composition of systems in normal statistical mechanics .+ let be computers .the composition of , , , and is defined as the computer such that ( i ) , and ( ii ) for every , , , and . [ ocmpo ]let be computers .if is optimal then is also optimal .we first choose particular strings , , , with , , , and .let be an arbitrary computer .then , by the definition of the optimality of , there exists with the following property ; for every there exists for which and .it follows from the definition of the composition that for every there exists for which and .thus is an optimal computer . in the same manner as in normal statistical mechanics, we can prove theorem [ quc ] below for the thermodynamic quantities in ait .in particular , the equations , , and correspond to the fact that free energy , energy , and entropy are extensive parameters in thermodynamics , respectively .[ quc ] let be computers .then the following hold for every . for any computer and any , the computer is denoted by .in order to prove the main result , theorem [ main ] , we also introduce the notion of _ physically reasonable computer_. for any computer , we say that is physically reasonable if there exist such that . then we can prove theorem [ inc - dec ] below in a similar manner to the proof of theorem 7 of .[ inc - dec ] let be a physically reasonable computer . then each of the mapping , the mapping , and the mapping is an increasing real function . 
on the other hand , the mapping is a decreasing real function .in order to prove the main result , it is also convenient to use the notion of _ computable measure machine _ , which was introduced by downey and griffiths in 2004 for the purpose of characterizing the notion of schnorr randomness of a real in terms of program - size complexity . a computer is called a computable measure machine if ( i.e. ) is computable . for the thermodynamic quantities in ait , we can prove theorem [ comp - prcmm ] below using theorem [ inc - dec ] .[ comp - prcmm ] let be a physically reasonable , computable measure machine .then , for every , the following conditions are equivalent : 1 . is computable .2 . one of , , , and is computable .all of , , , and are computable .[ ex - prc ] the following two computers are examples of physically reasonable , computable measure machines . * ( i ) two level system : * let be a particular computer for which .then we see that , for every , * ( ii ) one dimensional harmonic oscillator : * let be a particular computer for which .then we see that , for every , since and , we see that and are physically reasonable , computable measure machines .note that theorems [ inc - dec ] and [ comp - prcmm ] certainly hold for each of the particular physically reasonable computers and .based on theorems [ quc ] and [ comp - prcmm ] , the main result is proved as follows .we first choose any optimal computer and any physically reasonable , computable measure machine .then , for each , we denote the computer by . by theorem [ ocmpo ], we see that is optimal for every .furthermore , it is easy to see that the infinite sequence of computers is recursive . on the other hand , it follows from theorem [ quc ] that , for every and every , let and be arbitrary two positive integers with .then it follows from the equations that for every . in what follows , using we show that . in a similar manner , using , , and we can show that as well . now , let us assume contrarily that . then there exists such that both and are computable .it follows from that thus , is also computable .since is a physically reasonable , computable measure machine , it follows from theorem [ comp - prcmm ] that is also computable .therefore , since is optimal , it follows from theorem [ cprpffe ] ( i ) that is weakly chaitin -random . however , this contradicts the fact that is computable .thus we have .this completes the proof of theorem [ main ] ( i ) .theorem [ main ] ( ii ) and theorem [ main ] ( iii ) follow immediately from theorem [ fptpr ] and the fact that is optimal for all .this work was supported by kakenhi , grant - in - aid for scientific research ( c ) ( 20540134 ) , by scope of the ministry of internal affairs and communications of japan , and by crest of the japan science and technology agency .k. tadaki , algorithmic information theory and fractal sets .proceedings of 1999 workshop on information - based induction sciences ( ibis99 ) , pp. 105110 , august 26 - 27 , 1999 , syuzenji , shizuoka , japan . in japanese .k. tadaki , a statistical mechanical interpretation of algorithmic information theory .local proceedings of computability in europe 2008 ( cie 2008 ) , pp . 425434 , june 15 - 20 , 2008 , university of athens , greece .extended and electronic version available : http://arxiv.org/abs/0801.4194v1 k. 
tadaki , fixed point theorems on partial randomness .proceedings of the symposium on logical foundations of computer science 2009 ( lfcs09 ) , lecture notes in computer science , springer - verlag , vol . 5407 , pp .422440 , 2009 . | the statistical mechanical interpretation of algorithmic information theory ( ait , for short ) was introduced and developed by our former works [ k. tadaki , local proceedings of cie 2008 , pp . 425434 , 2008 ] and [ k. tadaki , proceedings of lfcs09 , springer s lncs , vol . 5407 , pp . 422440 , 2009 ] , where we introduced the notion of thermodynamic quantities , such as partition function , free energy , energy , and statistical mechanical entropy , into ait . we then discovered that , in the interpretation , the temperature equals to the partial randomness of the values of all these thermodynamic quantities , where the notion of partial randomness is a stronger representation of the compression rate by means of program - size complexity . furthermore , we showed that this situation holds for the temperature itself as a thermodynamic quantity , namely , for each of all the thermodynamic quantities above , the computability of its value at temperature gives a sufficient condition for to be a fixed point on partial randomness . in this paper , we develop the statistical mechanical interpretation of ait further and pursue its formal correspondence to normal statistical mechanics . the thermodynamic quantities in ait are defined based on the halting set of an optimal computer , which is a universal decoding algorithm used to define the notion of program - size complexity . we show that there are infinitely many optimal computers which give completely different sufficient conditions in each of the thermodynamic quantities in ait . we do this by introducing the notion of composition of computers into ait , which corresponds to the notion of composition of systems in normal statistical mechanics . |
dimension reduction is ubiquitous in many areas ranging from pattern recognition , clustering , classification , to fast numerical simulation of complicated physical phenomena .the fundamental question to address is how to approximate a -dimensional space by a -dimensional one with .specifically , we are given a set of high - dimensional data \in { \mathbb r}^{m \times n},\ ] ] and the goal is to find its low - dimensional approximation \in { \mathbb r}^{m \times d}\ ] ] with reasonable accuracy .there are two types of dimension reduction methods .the first category consists of `` projective '' ones .these are the linear methods that are _ global _ in nature , and that explicitly transform the data matrix into a low - dimensional one by .the leading examples are the principal component analysis ( pca ) and its variants .the methods in the second category act locally and are inherently nonlinear . for each sample in the high - dimensional space( e.g. each column of ) , they directly find their low - dimensional approximations by preserving certain locality or affinity between nearby points . in this paper , inspired by the reduced basis method ( rbm ), we propose a linear method called `` reduced basis decomposition ( rbd ) '' .it is much faster than pca / svd - based techniques .moreover , its low - dimensional vectors are equipped with error estimator indicating how close they are approximating the high - dimensional data .rbm is a relative recent approach to speed up the numerical simulation of parametric partial differential equations ( pdes ) .it utilizes an offline online computational decomposition strategy to produce surrogate solution ( of dimension ) in a time that is of orders of magnitude shorter than what is needed by the underlying numerical solver of dimension ( called _ truth _ solver hereafter ) .the rbm relies on a projection onto a low dimensional space spanned by truth approximations at an optimally sampled set of parameter values .this low - dimensional manifold is generated by a greedy algorithm making use of a rigorous _ a posteriori _ error bounds for the field variable and associated functional outputs of interest which also guarantees the fidelity of the surrogate solution in approximating the truth approximation .the rbd method acts in a similar fashion .given the data matrix as in , it iteratively builds up whose column space approximates that of .it starts with a randomly selected column of ( or a user input if existent ) . at each step where we have vectors , the next vector is found by scanning the columns of and locating the one whose error of projection into the current space is the largest .this process is continued until the maximum projection / compression error is small enough or until the limit on the size of the reduced space is reached .an important feature is an offline - online decomposition that allows the computation of the compression error , and thus the cost of locating , to be independent of ( the potentially large ) .this paper is organized as follows . 
in section[ sec : background ] , we review the background material , mainly the rbm .section [ sec : rbc ] describes the reduced basis decomposition algorithm and discuss its properties .numerical validations are presented in section [ sec : numerical ] , and finally some concluding remarks are offered in section [ sec : conclusion ] .the reduced basis method was developed for use with finite element methods to numerically solve pdes .we assume , for simplicity , that the problems ( usually parametric partial differential equations ( pde ) ) to simulate are written in the weak form : find in an hilbert space such that where is an input parameter .these simulations need to be performed for many values of chosen in a given parameter set . in this problem and are bilinear and linear forms , respectively , associated to the pde ( with and denoting their numerical counterparts ) .we assume that there is a numerical method to solve this problem and the solution , called the `` truth approximation '' or `` snapshot '' , is accurate enough for all .the fundamental observation utilized by rbm is that the parameter dependent solution is not simply an arbitrary member of the infinite - dimensional space associated with the pde . instead, the solution manifold can typically be well approximated by a low - dimensional vector space .the idea is then to propose an approximation of by where , are pre - computed truth approximations corresponding to the parameters judiciously selected according to a sampling strategy . for a given , we now solve in for the reduced solution .the online computation is -independent , thanks to the assumption that the ( bi)linear forms are affine and the fact that they can be approximated by affine ( bi)linear forms when they are nonaffine . hence , the online part is very efficient . in order to be able to `` optimally ''find the parameters and to assure the fidelity of the reduced basis solution to approximate the truth solution , we need an _ a posteriori _ error estimator which involves the residual and stability information of the bilinear form . with this estimator , we can describe briefly the classical * greedy algorithm * used to find the parameters and the space .we first randomly select one parameter value and compute the associated truth approximation .next , we scan the entire ( discrete ) parameter space and for each parameter in this space compute its rb approximation and the error estimator .the next parameter value we select , , is the one corresponding to the largest error estimator .we then compute the truth approximation and thus have a new basis set consisting of two elements .this process is repeated until the maximum of the error estimators is sufficiently small .the reduced basis method typically has exponential convergence with respect to the number of pre - computed solutions .this means that the number of pre - computed solutions can be small , thus the computational cost reduced significantly , for the reduced basis solution to approximate the finite element solution reasonably well .the author and his collaborators showed that it works well even for a complicated geometric electromagnetic scattering problem that efficiently reveals a very sensitive angle dependence ( the object being stealthy with a particular configuration ) .in this section , we detail our proposed methodology by stating the algorithm , studying the error evaluation , and pinpointing the computational cost .set , , and a random integer between and . 
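a minimal sketch of the greedy procedure of algorithm [ alg : c_greedy ] , in the simplest setting of the identity weight matrix ( plain euclidean norm ) , reads as follows ; the function and variable names are illustrative , and the sketch recomputes the full compression error at every step instead of using the offline - online decomposition described in the next subsection .

```python
import numpy as np

def reduced_basis_decomposition(X, d_max, tol, rng=None):
    """greedy rbd sketch for the euclidean norm (identity weight matrix).

    X : (m, n) data matrix.  returns Y (m, d) with orthonormal columns and
    T (d, n) such that X is approximated by Y @ T, with d <= d_max.
    """
    rng = np.random.default_rng() if rng is None else rng
    m, n = X.shape
    Y = np.zeros((m, 0))
    T = np.zeros((0, n))
    i = int(rng.integers(n))                  # start from a randomly selected column
    for _ in range(d_max):
        v = X[:, i].astype(float).copy()
        for k in range(Y.shape[1]):           # modified gram-schmidt against the current basis
            v -= (Y[:, k] @ v) * Y[:, k]
        nrm = np.linalg.norm(v)
        if nrm < 1e-14:                       # the chosen column adds nothing new
            break
        Y = np.column_stack([Y, v / nrm])
        T = Y.T @ X                           # compress every column into the current space
        errors = np.linalg.norm(X - Y @ T, axis=0)
        i = int(np.argmax(errors))            # greedy choice: the worst-approximated column
        if errors[i] < tol:                   # maximum compression error small enough
            break
    return Y, T
```

with the identity weight matrix , an out - of - sample vector y is compressed as Y.T @ y and ( approximately ) reconstructed as Y @ ( Y.T @ y ) , exactly as in the compress / uncompress / out - of - sample steps described in the discussion that follows .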
at the heart of the method stated in algorithm [ alg : c_greedy ] is a greedy algorithm similar to that used by rbm .it builds the reduced space dimension - by - dimension . at each step , the _ greedy _ decision for the best next dimension to pursue in the space corresponding to the data is made by examining an error indicator quantifying the discrepancy between the uncompressed data and the one compressed into the current ( reduced ) space . in the context of the rbm, we view each column ( or row if we are compressing the row space ) of the matrix as the fine solution of certain ( virtual ) parametric pde with the ( imaginary ) parameter taking a particular value . since this solution is _ explicitly _ given already by the data , the fact that the pde and the parameter are absent does not matter .once this _ common mechanism _satisfied by each column ( or row ) is identified , the greedy algorithm still relies on an accurate and efficient estimate quantifying the error between the original data and the compressed one .this will be the topic of the next subsection . to state the algorithm, we assume that we are given a data matrix , the largest dimension that the practitioner wants to retain , and a tolerance capping the discrepancy between the original and the compressed data .the output is the set of bases for the compressed data ( a low - dimensional approximation of the original data ) and the transformation matrix . here, is the actual dimension of the compressed data . with this output, we can * compress . * : : we represent any data entry , the column of , by the column of , , with usually . * uncompress . * : : an _ approximation _ of the data is reconstructed by * evaluate the compression of out - of - sample data . * : : given any that is not equal to any column of , its compressed representation in is a critical part to facilitate the greedy algorithm and make the algorithm realistic is an efficient mechanism measuring ( or estimating ) the error under certain norm , , in step of the algorithm . in this work , we are using the defined as follows . for a given symmetric and positive definite matrix ,the of a vector is defined by for being any column of the data matrix and its low - dimensional approximation , it is easy to see that the choice of reflects the criteria of the data compression .typical examples are : * * identity : * equal weights are assigned to each component of the data entry .this makes the quality of compression uniform . in this case , the evaluation of is greatly simplified and the algorithm is the fastest as shown below by the numerical results . * * general diagonal matrix : * this setting can be used if part of each data entry needs to be preserved better and other parts can afford less fidelity . * * general spd matrix : * this most general case can be helpful if the goal is to preserve data across different entries anisotropiclly .the goal is then to evaluate the error through as efficiently as possible for any given .this is achieved by employing an offline - online decomposition strategy where the -independent parts are evaluated beforehand ( offline ) enabling a quick turnaround time for any given encountered online .the specifics are given in the next subsection .the offline - online decomposition of the computations and their complexities are as follows . 
here, we use to denote the number of nonzero entries of a sparse matrix .* offline * : : the total cost is of order + * offline mgs * ; ; every basis needs to orthogonalized against the current set of bases .the total cost is of order .* offline calculation of errors * ; ; the next basis is located by comparing each column with its compressed version into the current space . to enable that, we encounter the following computational cost : + * pre - computation * : : of ( for in ) and ( for in ) .the cost is of order . * expansion * : : of and .the former takes time of order , and the latter of order . * offline searching * ; ; after these calculations , the comparison between the original and compressed data is then only dependent on the size of ( which is also the number of columns for ) .the complexity is of order .it will be repeated for up to times in the searching process of step 2.3 of the algorithm for each of the up to basis elements .the total cost is at the level of . *online * : : given any ( possibly out - of - sample ) data , its coefficients in the compressed space is obtained by evaluating .the cost is of order .the decoding ( ) can be done with the same cost .the online computation has complexity of order we remark that , if the actual practice does not requires forming ( e.g. clustering and classification etc ) and so we only work with the coordinates of in the compressed space , then the online cost will be independent of and thus much smaller .in this section , we test the reduced basis decomposition on image compression , and data compression .lastly , we devise a simple face recognition algorithm based on rbd and test it on a database of images while comparing rbd with other face recognition algorithms. the computation is done , and thus the speedup numbers reported herein should be understood as , in matlab 2014a on a 2011 imac with a ghz intel core i7 processor . [sec : numerical ] we first test it on compressing two standard images lena and mandrill in figure [ fig : original ] .they both have an original resolution of .we take and test the algorithm . for each component of every image, we run the algorithm with which implies a compression ratio of , , and respectively .the resulting images ( formed by multiplying the corresponding and together ) are shown on the and row of figures [ fig : lenaresult ] . as a comparison ,we run svd and obtain the reconstructed matrices with the first singular values accordingly .the resulting images are on the second and last row .clearly , svd provides the best quality pictures among all possible algorithms ( and thus better than what rbd provides ) .however , we see that the rbd pictures are only slightly blurrier .moreover , it takes much less time .in fact , we show the comparison in time between svd and rbd in table [ tab : lenatime ] .we see that , when , rbd is three times faster than svd and seven times faster when . herethe svd time is the shorter between those taken by ` vd } and { \verb ` vds commands in matlab . from left to right .the first and third row are from reduced basis decomposition , and the second and fourth are from singular value decomposition.,title="fig:",scaledwidth=32.0% ] from left to right . 
the first and third rows are from reduced basis decomposition , and the second and fourth are from singular value decomposition .
.relative computational time for image compression .
we use the umist database that is publicly available on roweis web page . table [ tab : datasetinfo ] summarizes its characteristics : it contains people under different poses . the number of different views per subject varies from to . we use the cropped version whose snapshot is shown in figure [ fig : umist ] . as in , we randomly choose views from each class to form a training set . the rest of the samples ( of them ) are used as testing images . we show the average classification error rates in figure [ fig : fr_result ] left . these averages are computed over random formations of the training and test sets . shown in the middle are the results of six traditional dimension reduction techniques taken from . clearly , our method has similar performance as the pca method , and outperforms three of the other five methods .
however , rbd is much faster than pca and other methods since they all involves solving eigenproblems .a speedup factor as a function of the number of bases is plotted in figure [ fig : fr_result ] right which demonstrates a speedup factor of larger than two for this particular test when we reach the asymptotic region ( around when the number of basis vectors is ) .this paper presents and tests an extremely efficient dimension reduction algorithm for data processing .it is multiple times faster than the svd / pca - based algorithms .what makes this possible is a greedy algorithm that iteratively builds up the reduced space of basis vectors .each time , the next dimension is located by exploring the errors of compression into the current space for all data entries .thanks to an offline - online decomposition mechanism , this searching is independent of the size of each entry . numerical results including one concerning a real world face recognition problem confirm these findings .m. barrault , n. c. nguyen , y. maday , and a. t. patera , _ an `` empirical interpolation '' method : application to efficient reduced - basis discretization of partial differential equations _ , c. r. acad .paris , srie i * 339 * ( 2004 ) , 667672 .a. buffa , y. maday , a. t. patera , c. prudhomme , and g. turinici , _ a priori convergence of the greedy algorithm for the parametrized reduced basis _ , esaim - math . model .( 2011 ) , special issue in honor of david gottlieb .j. p. fink and w. c. rheinboldt , _ on the error behavior of the reduced basis technique for nonlinear finite element approximations _ , z. angew .* 63 * ( 1983 ) , no . 1 , 2128 .mr mr701832 ( 85e:73047 ) d. b graham and n. m allinson , _ characterizing virtual eigensignatures for general purpose face recognition _ , face recognition : from theory to applications ( h. wechsler , p. j. phillips , v. bruce , f. fogelman - soulie , and t. s. huang , eds . ) , nato asi series f , computer and systems sciences , vol .163 , 1998 , pp . 446456 .m. a. grepl , y. maday , n. c. nguyen , and a. t. patera , _ efficient reduced - basis treatment of nonaffine and nonlinear partial differential equations _, mathematical modelling and numerical analysis * 41 * ( 2007 ) , no . 3 , 575605 .l. machiels , y. maday , i. b. oliveira , a. t. patera , and d. v. rovas , _ output bounds for reduced - basis approximations of symmetric positive definite eigenvalue problems _ , c. r. acad .* 331 * ( 2000 ) , no . 2 , 153158. mr mr1781533 ( 2001d:65148 ) y. maday , _ reduced basis method for the rapid and reliable solution of partial differential equations _ , international congress of mathematicians . vol .iii , eur .soc . , zrich , 2006 , pp .mr 2275727 ( 2007m:65099 ) y. maday , a. t. patera , and d. v. rovas , _ a blackbox reduced - basis output bound method for noncoercive linear problems _ , nonlinear partial differential equations and their applications .collge de france seminar , vol .xiv ( paris , 1997/1998 ) , stud .31 , north - holland , amsterdam , 2002 , pp .mr mr1936009 ( 2003j:65120 ) y. maday , a. t. patera , and g. turinici , _ a priori convergence theory for reduced - basis approximations of single - parameter elliptic partial differential equations _* 17 * ( 2002 ) , 437446 .nguyen , k. veroy , and a. t. patera , _ certified real - time solution of parametrized partial differential equations _, handbook of materials modeling ( sidney yip , ed . ) , springer netherlands , 2005 , pp .15291564 ( english ) . c. prudhomme , d. rovas , k. veroy , y. 
maday , a. t. patera , and g. turinici , _ reliable real - time solution of parametrized partial differential equations : reduced - basis output bound methods _ , journal of fluids engineering * 124 * ( 2002 ) , no . 1 , 7080 . g. rozza , d.b.p .huynh , and a.t .patera , _ reduced basis approximation and a posteriori error estimation for affinely parametrized elliptic coercive partial differential equations : application to transport and continuum mechanics _ , arch comput methods eng * 15 * ( 2008 ) , no . 3 , 229275 .s. sen , k. veroy , d.b.p .huynh , s. deparis , n.c .nguyen , and a.t .`` natural norm '' a posteriori error estimators for reduced basis approximations _ , j. comput .* 217 * ( 2006 ) , no . 1 , 37 62 . | dimension reduction is often needed in the area of data mining . the goal of these methods is to map the given high - dimensional data into a low - dimensional space preserving certain properties of the initial data . there are two kinds of techniques for this purpose . the first , projective methods , builds an explicit linear projection from the high - dimensional space to the low - dimensional one . on the other hand , the nonlinear methods utilizes nonlinear and implicit mapping between the two spaces . in both cases , the methods considered in literature have usually relied on computationally very intensive matrix factorizations , frequently the singular value decomposition ( svd ) . the computational burden of svd quickly renders these dimension reduction methods infeasible thanks to the ever - increasing sizes of the practical datasets . in this paper , we present a new decomposition strategy , reduced basis decomposition ( rbd ) , which is inspired by the reduced basis method ( rbm ) . given the high - dimensional data , the method approximates it by with being the low - dimensional surrogate and the transformation matrix . is obtained through a greedy algorithm thus extremely efficient . in fact , it is significantly faster than svd with comparable accuracy . can be computed on the fly . moreover , unlike many compression algorithms , it easily finds the mapping for an arbitrary `` out - of - sample '' vector and it comes with an `` error indicator '' certifying the accuracy of the compression . numerical results are shown validating these claims . data mining , lossy compression , reduced basis method , singular value decomposition , greedy algorithm |
we are given a random variable with outcomes in a set of of source words with associated probabilities with , and a code word subset of , the set of finite binary strings .let denote the _ length _ ( number of bits ) in .let .the _ optimal source coding problem _ is to find a 1:1 mapping , satisfying is not a proper prefix of for every pair , such that is minimal among all such mappings .codes satisfying the prefix condition are called _ prefix codes _ or _instantaneous codes_. this problem is solved theoretically up to 1 bit by shannon s noiseless coding theorem , and exactly and practically by a well - known greedy algorithm due to huffman , which for source words runs in steps , or steps if the s are not sorted in advance .if achieves the desired minimum , then denote .we study the far more general question of length restrictions on the individual code words , possibly different for each code word .this problem has not been considered before .the primary problem in this setting is the problem with equality lengths restrictions , where we want to find the minimal expected code - word length under the restriction of individually prescribed code - word lengths for a subset of the code words .apart from being a natural question it is practically motivated by the desire to save some part of the code tree for future code words , or restrict the lengths of the code words for certain source words to particular values .for example , in micro - processor design we may want to reserve code - word lengths for future extensions of the instruction set .no polynomial time algorithm was known for this problem .initially , we suspected it to be np - hard . here , we show an dynamic programming algorithm .this method allows us to solve an integer programming problem that may be of independent interest .the key idea is that among the optimal solutions , some necessarily exhibit structure that makes the problem tractable .this enables us to develop an algorithm that finds those solutions among the many possible solutions that otherwise exhibit no such structure .formally , we are given _ length restrictions _ , where the s are positive integer values , or the _ dummy _ , and we require that the coding mapping satisfies for every with .for example the length restrictions mean that we have to set and , say and .then , for the remaining s the coding mapping can use only code words that start with .we assume that the length restrictions satisfy below , the kraft s inequality , where we take for , since otherwise there does not exist a prefix code as required .* related work : * in a variant of this question is studied by bounding the maximal code - word length , which results in a certain redundancy ( non - optimality ) of the resulting codes . in boththe maximal code - word length and minimal code - word length are prescribed .shannon s noiseless coding theorem states that if is the entropy of the source , then .the standard proof exhibits the shannon - fano code achieving this optimum by encoding by a code word of length ( ) . 
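For reference, the classical unconstrained greedy construction due to Huffman mentioned above can be sketched as follows. It is only the baseline: it does not handle the equality length restrictions introduced in this paper, which is what motivates the dynamic programming algorithm developed below. The probabilities are illustrative.

```python
import heapq

def huffman_code_lengths(probs):
    """Classical greedy Huffman construction; returns the code-word length of each source word."""
    heap = [(p, i, (i,)) for i, p in enumerate(probs)]   # (weight, tie-breaker, covered leaves)
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, l1 = heapq.heappop(heap)
        p2, _, l2 = heapq.heappop(heap)
        for leaf in l1 + l2:                              # every covered leaf moves one level down
            lengths[leaf] += 1
        heapq.heappush(heap, (p1 + p2, counter, l1 + l2))
        counter += 1
    return lengths

probs = [0.35, 0.25, 0.2, 0.1, 0.1]                       # illustrative source-word probabilities
L = huffman_code_lengths(probs)
print(L, sum(p * l for p, l in zip(probs, L)))            # [2, 2, 2, 3, 3] and 2.2 bits
```

For these illustrative probabilities the expected code-word length is 2.2 bits, within one bit of the entropy, as the noiseless coding theorem guarantees.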
ignoring the upper rounding to integer values for the moment, we see that for a code that codes by a code word of length .this suggests the following approach .suppose we are given length restrictions .let be the set of equality length restrictions , and let be the minimal expected code - word length under these restrictions given the probabilities .similar to shannon s noiseless coding theorem , we aim to bound the minimal expected code - word length under equality restrictions below by an entropy equivalent where corresponds to the best possible coding with real - valued code - word lengths. define if we define s also for the s with such that , then altogether we obtain a new probability assignment for every ( ) , which has a corresponding shannon - fano code with code lengths for the s .moreover , with respect to the probabilities induced by the original random variable , and simultaneously respecting the length restrictions , the minimum expected code word length of such a -based shannon - fano code is obtained by a partition of into s ( ) such that is minimized .clearly , the part can not be improved .thus we need to minimize over all partitions of into s .the partition that reaches the minimum does not change by linear scaling of the s .hence we can argue as follows .consider such that is the entropy of the set of probabilities .denote and define then , both and with the ( ) reaches their minimum for this partition of .[ lem.1 ] assume the above notation with with determined as above .the minimal expected prefix code length under given length restrictions is achieved by encoding with code length for all .let us compare the optimal expected code length under length constraints with the unconstrained case .the difference in code length is the kulback - leibler divergence between the -distribution and the -distribution .the kl - divergence is always nonnegative , and is 0 only if for all . 
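A small numerical illustration of this lower bound and its redundancy can be given under one explicit assumption about the construction above: a source word with prescribed code-word length l is assigned q = 2^{-l}, and the remaining probability mass is spread over the unrestricted words proportionally to their p-values (the proportional choice is the one that minimizes the expected real-valued code length over the free words). The probabilities and restrictions below are illustrative and are not those of the example discussed next.

```python
import math

def constrained_lower_bound(p, restricted):
    """Real-valued lower bound on the expected code length under equality length
    restrictions, together with its redundancy (a Kullback-Leibler divergence)."""
    q_fixed = {i: 2.0 ** (-l) for i, l in restricted.items()}
    assert sum(q_fixed.values()) <= 1.0                  # Kraft inequality for the prescribed lengths
    free = [i for i in range(len(p)) if i not in restricted]
    mass_p = sum(p[i] for i in free)
    mass_q = 1.0 - sum(q_fixed.values())
    q = [q_fixed[i] if i in restricted else p[i] * mass_q / mass_p for i in range(len(p))]
    entropy = -sum(pi * math.log2(pi) for pi in p)
    bound = -sum(p[i] * math.log2(q[i]) for i in range(len(p)))
    redundancy = sum(p[i] * math.log2(p[i] / q[i]) for i in range(len(p)))   # KL(p || q) >= 0
    return entropy, bound, redundancy

p = [0.4, 0.3, 0.15, 0.1, 0.05]
print(constrained_lower_bound(p, {0: 3, 3: 2}))           # ~2.01, ~2.39, ~0.38 bits
```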
for the optimum -distribution determined in lemma [ lem.1 ] for the index set we can compute it explicitly : [ lem.2 ] given a random source with probabilities ( ) , length restrictions and with determined as above .then , the minimum expected constrained code length is which equals the minimal expected unconstrained code word length only when for all .thus , the redundancy induced by the equality length restrictions is note that , just like in the unconstrained case we can find a prefix code with code word lengths , showing that the minimal expected integer prefix - code word length is in between the entropy and , the same holds for the constrained case .there , we constructed a new set of probabilities with entropy , and for this set of probabilities the minimal expected integer prefix - code word length is in between the entropy and by the usual argument .let us look at an example with probabilities and length restrictions .the entropy bits , and the , non - unique , huffman code , without the length restrictions , is , which shows the minimal integer code - word length of bits , which is bits above the noninteger lower bound .the redundancy excess induced by the equality length restrictions is bits , which shows that the integer minimal average code - word length is in between bits and bits .the actual optimal equality restricted code , given by algorithm a below , is with , which is bits above the noninteger lower bound .above we have ignored the fact that real shannon - fano codes have code - word length rather than .this is the reason that in the unconstrained case , leaving a slack of 1 bit for the minimal expected code word length .the huffman code is an on - line method to obtain a code achieving .gallager has proved an upper bound on the redundancy of a huffman code , of ] to be the _ minimal _ expected code - word length of the leaves of a tree ] ( ) , inducing subtrees ] , we obtain a total expected code - word length for the overall tree of .\ ] ] let us now consider the expected code word length of a tree which consists of tree with a subset of subtrees removed and the corresponding probabilities from the overall probability set . 
removing subtree is equivalent to removing the corresponding free stub , and turning it into a length restriction .let , and be defined as above .let has minimal total code - word length for then the total code - word length of every as above can not be improved by another partition of the probabilities involved among its subtrees .( _ if _ ) if we could improve the total code word length of by a redistribution of probabilities among the subtrees attached to the free stubs then some would not have minimal total code - word length before this redistribution .( _ only if _ ) if we could improve the total code - word length of any by redistribution of the probabilities among its subtrees attached to the free stubs involved , then we could also do this in the overall tree and improve its overall total code - word length , contradicting minimality .this suggests a way to construct an optimal by examining every -partition corresponding to a candidate set of subtrees ( for ) , of every initial segment of the probability sequence ( for ) .the minimal expected code - word length tree for the partition element is attached to the free stub .the crucial observation is that by corollary [ cor.seqopt1 ] the minimal total code word length for probabilities using free stub levels is reached for a binary split in the ordered probabilities and free stubs involved , consisting of the minimal total code - word length solution for probabilities using stub levels and probabilities using free stub level . computing the optimal minimum code - word lengths of initial probability segments and initial free stub level segments in increasing order , this way we find each successive optimum by using previously computed optima .this type of computation of a global optimum is called dynamic programming . the following algorithma gives the precise computation . at termination of the algorithm ,the array ] using the largest ( ) probabilities , optimally divided into subtrees attached to the least level ( ) free stubs .thus , ] ( , ) each such subtree with as the root ( ) .thus , on termination ] and ] = as in , = for all and ( , ) .+ set : = h[1,j,1] ] \} ] , with the least achieving the minimum + the complexity of computing the and is .step 1 of the algorithm takes steps .first , compute for every the quantities : = \sum_{i = r}^j l(p_r) ] .there are such quantities and each computation takes steps .second , for every compute = l[i , j]+h_k p[i , j]$ ] .there are such quantities and each computation takes steps .step 2 of the algorithm takes steps .step 3 of the algorithm involves a outer loop of length , an inner loop of length , and inside the nesting the determining of the minimum of possibilities ; overall steps .the running time of the algorithm is therefore steps . since this shows the stated running time . | we study the new problem of huffman - like codes subject to individual restrictions on the code - word lengths of a subset of the source words . these are prefix codes with minimal expected code - word length for a random source where additionally the code - word lengths of a subset of the source words is prescribed , possibly differently for every such source word . based on a structural analysis of properties of optimal solutions , we construct an efficient dynamic programming algorithm for this problem , and for an integer programming problem that may be of independent interest . |
before addressing the convergence theory , we would like to discuss stochastic noise modeling and its intrinsic conflict with the deterministic model . here and throughout the rest of the paper ,assume to be a complete probability space with a set of outcomes of the stochastic event , the corresponding -algebra and a probability measure , ] , i.e. , it is uniformly distributed on the interval ] we have and , we find that and are compatible realizations of and . with this one can show under the parameter choice .from the given examples it is evident that the convergence speed is heavily influenced by the conditions d ) and e ) in theorem [ thm : tikh_nonlin_stoch ] .therefore , although the general formula for the convergence rate may suggest that the convergence rate is close to the deterministic one , it may be significantly slower due to the additional stochastic properties . as before we seek the solution of a nonlinear ill - posed problem given noisy data according to where the stochastic distribution of the noise is assumed to be known .landweber s method can be seen as a descent algorithm for and is defined via the iteration where is an appropriately chosen stepsize and an initial guess .landweber s method constitutes a regularization method if it is stopped early enough . in the deterministic theory ,i.e. when is the noisy data with , we have the following theorem from for convergence rates of the landweber method .[ thm : lw_nonlin_det ] let be convex , such that and denote the -minimum norm solution of. assume has a solution in .furthermore let the following conditions hold on .* is frechet - differentiable with and * where the bounded linear operators satisfy * satisfies the source condition for some and .let be sufficiently small . then, if the regularization parameter is stopped according to the discrepancy principle , i.e. , at the unique index for which for the first time with , we obtain we can obtain a stochastic version of theorem [ thm : lw_nonlin_det ] in the same way and with the same techniques used to show that theorem [ thm : tikh_nonlin_stoch ] followed from theorem [ thm : tikh_nonlin_det ] .[ thm : lw_nonlin_stoch ] let be convex , be given with and let denote the -minimum norm solution of .assume has a solution in for almost all .furthermore let the following conditions hold on .* where for almost all the set describes a family of bounded linear operators with * satisfies the source condition for some and . * * then , if the regularization parameter is stopped according to the discrepancy principle , i.e. , at the unique index for which for the first time with , we obtain for sufficiently small the rate where the constant depends on only . in the fully stochasticsetting , the source condition b ) from theorem [ thm : lw_nonlin_stoch ] need not hold with constant exponent for all .there are at least two situations which lead to the power being a stochastic quantity as well , i.e. , it holds with . in the first case all solutions come from some initial element , with small -norm .some randomly smoothing operator is acting on this element and generates .( one could for instance think of some kind of evolution process , e.g. 
, a diffusion process that is applied to some initial value ) .the smoothness of is therefore random .secondly , may be a deterministic element satisfying a certain smoothness condition .the data is generated by applying a forward operator with random smoothness properties .if the realization of is strongly smoothing , this corresponds to a source condition with small , if is weakly smoothing we have the source condition with larger .the following proposition shows the convergence rate that results from the source condition for the case that is uniformly distributed on the interval ] , i.e. , then the approximations obtained by landweber s method satisfy the convergence rate where denotes the lambert w - function , defined by , see .as can be seen from the proof of theorem 3.1 in , the requirement `` sufficiently small '' , becomes stronger , the larger is .supposing that in is sufficiently small for the case , implies therefore that also the convergence conditions for are satisfied .secondly we observe that the convergence rate in theorem [ thm : lw_nonlin_stoch ] contains a constant that depends on .although it is difficult to state an explicit formula for , investigation of shows , that attains its maximum value when .after these observations we start with the actual derivation of the convergence rate . for the sake of simplicitywe assume that all appearing constants are just equal to 1 .furthermore we may assume that and both vanish .asymptotically , for given we therefore have the estimate measuring the distance in the ky fan metric we must , since we assumed that is as in , solve the equation for .we first consider the simplified equation which is solved by in the following we show that the above approximate solution is sufficiently accurate .therefore we construct a better estimate via the ansatz .the original equation then contains the term . neglecting the quadratic part, we can replace this term with , and obtain an equation that mathematica can solve for .the solution for the correction term is given as and tends to zero approximately linearly in .thus this correction becomes small rather quickly , and we can consider the asymptotic bound in as sufficiently accurate due to the asymptotics of the lambert w - function .we will now apply the theory developed in the previous section to selected deterministic regularization methods .let be a linear compact operator between hilbert spaces and with singular system , see e.g. .then , for , the generalized inverse to is given by since for compact operators the singular values approach zero , their inverse blows up and the generalized inverse yields a meaningless solution to for noisy data .a popular class of regularization methods is based on the filtering of the generalized inverse . introducing an appropriate filter function depending on the regularization parameter that controls the growth of ,the regularized solutions are defined by examples for filter based methods are for example the classical tikhonov regularization , truncated singular value decomposition or landwebers method .the regularization properties are fully determined by the filter functions . in the deterministic setting ,the conditions can be found in , e.g. , ( * ? ? ?* theorem 3.3.3 . 
) . convergence rates can be obtained for a priori and a posteriori parameter choice rules under stricter conditions on the filter functions. we will only comment on an a priori choice here in order to keep this section short. using the smoothness condition , the following theorem can be obtained ( * ? ? ? * theorem 3.4.3 ). [ thm : louis_filter_optimality ] let and . assume that and hold for , where and are constants independent of . then with the a priori parameter choice the method induced by the filter is order optimal for all , i.e. , for some constant independent of and . now we use theorem [ thm : lifting_rates_kyfan ] and obtain convergence rates in the ky fan metric. let and be known. assume that and hold for , and that and hold. then with the a priori parameter choice the method induced by the filter fulfills for some constant independent of and . more about filter methods in the stochastic setting, including numerical examples, can be found in . we consider the autoconvolution equation [f(x)](s)=\int_0^s x(s-t)\,x(t)\,dt,\qquad 0\leq s\leq 1, between hilbert spaces and , where . such an equation is of great interest in, for example, stochastics or spectroscopy, and has been analyzed in detail in . recently, a more complicated autoconvolution problem has emerged from a novel method to characterize ultra-short laser pulses. here, we want to show the transition from the deterministic setting to the stochastic setting in a numerical example. we base our results on the deterministic paper . using the haar-wavelet basis, the authors of reformulate as an equation from to by switching to the space of coefficients in the haar basis. in order to stabilize the inversion, an penalty term is used, such that the task is to minimize the functional . the regularization parameter in is chosen according to the discrepancy principle. in , the following formulation is used: for , choose such that holds. the authors show that this leads to convergence of the regularized solutions to a solution of with minimal -norm of its coefficients. it was also shown that the regularization parameter fulfills . by courtesy of stephan anzengruber, we were allowed to use the original code for the numerical simulation in . we only changed the parts directly connected to the data noise. namely, we replaced the deterministic error with i.i.d. gaussian noise , . the discretization is due to the truncation of the expansion of the functions in the haar basis after elements. the parameter choice was realized with replaced by , in accordance with theorem [ thm : lifting_convergence_kyfan ]. instead of the correct expectation , we used the upper bound , since, as shown in this paper, the expectation has to be ``blown up'' anyway. in a first experiment we let . in this case, the numerical results suggest that the regularization parameter decreases too fast, i.e. , it does not converge to zero as the requirement in states; see figure [ fig : conv_nonlin ]. for comparison, in a second run we chose , where is the number of data points. this way, . now converges to zero as it should according to ( [ eq : alpha_props ] ); see figure [ fig : conv_nonlin ].
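The experiment above relies on Anzengruber's ℓ1-penalized Tikhonov code, which is not reproduced here. As a self-contained stand-in, the following sketch applies the nonlinear Landweber iteration from the theorems above, stopped by the discrepancy principle, to a plain grid discretization of the autoconvolution operator; it deliberately swaps the paper's Haar-basis Tikhonov approach for Landweber's method. The noise level, stepsize and stopping constant are illustrative assumptions, and the Ky Fan machinery is replaced by a simple estimate of the expected noise norm.

```python
import numpy as np

def autoconv(x, h):
    """Discrete autoconvolution: F(x)(s_i) ~ h * sum_{j<=i} x(s_i - t_j) x(t_j)."""
    return h * np.array([np.dot(x[:i + 1][::-1], x[:i + 1]) for i in range(x.size)])

def conv_matrix(x, h):
    """Matrix of v -> h * int_0^s x(s-t) v(t) dt; the Frechet derivative is 2*conv_matrix."""
    n = x.size
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = h * x[:i + 1][::-1]
    return A

n = 100
h = 1.0 / n
s = (np.arange(n) + 0.5) * h
x_true = 1.0 + 0.5 * np.sin(2.0 * np.pi * s)
rng = np.random.default_rng(0)
sigma = 1e-2                                        # standard deviation of the pointwise noise
y_delta = autoconv(x_true, h) + sigma * rng.standard_normal(n)
delta = sigma * np.sqrt(n)                          # crude stand-in for the stochastic noise level
tau, omega = 1.5, 0.1

x = np.ones(n)                                      # initial guess
for k in range(20000):
    residual = autoconv(x, h) - y_delta
    if np.linalg.norm(residual) <= tau * delta:     # discrepancy principle: stop here
        break
    J = 2.0 * conv_matrix(x, h)                     # Frechet derivative of the autoconvolution
    x = x - omega * (J.T @ residual)                # Landweber step
print(k, np.linalg.norm(x - x_true) * np.sqrt(h))   # stopping index and L2 reconstruction error
```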
at this point we would like to mention that the discrepancy principle using the ky fan distance and the deterministic one are not completely equivalent, since a different way of measuring the noise is used. typically the stochastic noise level will be smaller (it need not bound 100% of the possible realizations) and the iteration will be stopped later than in the deterministic setup.
[figure [ fig : conv_nonlin ]: left: a constant value of in the discrepancy principle with the expectation of the noise leads to the regularization parameter decreasing too fast; right: increasing appropriately with decreasing variance resolves this issue.]
in the lifting strategy was used in a slightly different way. in particular, the ky fan metric was used to obtain a novel parameter choice rule. the convergence rates obtained there, however, can also be viewed in the framework of this work. the scope of that paper was to transfer the deterministic convergence results from into the stochastic setting. the seminal paper initiated the investigation of sparsity-promoting regularization for inverse problems. looking for the solution of the linear ill-posed problem between hilbert spaces and with given noisy data , the regularization strategy was to obtain an approximation to via , where is an appropriate index set, , a dictionary (typically an orthonormal basis or frame) in and . choosing a sufficiently smooth wavelet basis for and setting with , the penalty term in corresponds to a norm in the besov space . formulating the problem of determining from noisy data , , in the bayesian setting with the distributions and and using the maximum a-posteriori solution leads to the formulation , where is the variance of the noise and can roughly be described as the inverse variance of the prior. the product gives the actual regularization parameter. in direct application of theorem [ thm : lifting_convergence_kyfan ], the deterministic condition with replaced by from translates to the conditions leading to convergence of to the unique (in case the operator is assumed to be injective) solution of minimal norm in the ky fan metric. the proof of convergence rates is based on two assumptions: where and for some . combining proposition 4.5, proposition 4.6 and proposition 4.7 from , it is where with , and we know that the deterministic rate is an upper bound to the reconstruction error whenever and . hence, it is where and where the besov-space functions were truncated after the first basis functions. by definition of the ky fan metric, it follows immediately from that . since is a free parameter, we can balance the terms in , i.e. solve the nonlinear equation for . with this parameter choice rule one obtains by construction . we can also apply the theory developed in this work to this problem. in the deterministic setting, see , it was proposed to choose the regularization parameter . combining ( * ? ? ? * proposition 4.5 ) and ( * ? ? ? * proposition 4.7 ) then yields the rate with from and some .
theorem [ thm : lifting_rates_kyfan ] then yields in the stochastic setting the parameter choice and in the notation of it is for gaussian noise see proposition .since , compare and , the convergence rate in is slightly slower than the one in , but they share the same order of convergence .our goal was to demonstrate how convergence results for inverse problems in the deterministic setting can be carried over into the stochastic setting . using the ky fan metric , we have shown that , when only the noise is assumed to be stochastic whereas the other quantities are deterministic , this is is possible in a straight - forward way .namely , assuming the knowledge of an estimate of , the convergence results and parameter choice follows from the deterministic setting by replacing , which originates from the basic deterministic assumption , with . we have shown that , under some slight modifications , it is possible to use the expectation as measure for the magnitude of the noise . in a fully stochastic situation , where additionally to the noise other objects might be of stochastic nature , the lifting of deterministic convergence results is possible as well .however , careful analysis is necessary in order to carry the deterministic conditions over into the stochastic setting . | both for the theoretical and practical treatment of inverse problems , the modeling of the noise is a crucial part . one either models the measurement via a deterministic worst - case error assumption or assumes a certain stochastic behavior of the noise . although some connections between both models are known , the communities develop rather independently . in this paper we seek to bridge the gap between the deterministic and the stochastic approach and show convergence and convergence rates for inverse problems with stochastic noise by lifting the theory established in the deterministic setting into the stochastic one . this opens the wide field of deterministic regularization methods for stochastic problems without having to do an individual stochastic analysis for each problem . in inverse problems , the model of the inevitable data noise is of utmost importance . in most cases , an additive noise model is assumed . in , is the true data of the unknown under the action of the ( in general ) nonlinear operator , and in corresponds to the noise . the spaces are assumed to be banach- or hilbert spaces . when speaking of inverse problems , we assume that is ill - posed . in particular this means that solving for with noisy data is unstable in the sense that `` small '' errors in the data may lead to arbitrarily large errors in the solution . hence , is not a sufficient description of the noise . more information is needed in order to compute solutions from the data in a stable way . in the _ deterministic _ setting , one assumes for some where is an appropriate distance functional . typically , is induced by a norm such that reads . here and further on we use the superscript to indicate the deterministic setting . solutions of under the assumption , are often computed via a tikhonov - type variational approach where again is a distance function and is the penalty term used to stabilize the problem and to incorporate a - priori knowledge into the solution . the regularization parameter is used to balance between data misfit and the penalty and has to be chosen appropriately . the literature in the deterministic setting is rich , at this point we only refer to the monographs for an overview . 
the deterministic worst - case error stands in contrast to _ stochastic _ noise models where a certain distribution of the noise in is assumed . we shall indicate the stochastic setting by the superscript . in this paper , will be the parameter controlling the variance of the noise . depending on the actual distribution of , may be arbitrarily large , but with low probability . a very popular approach to find a solution of is the bayesian method . for more detailed information , we refer to . in the bayesian setting , the solution of the inverse problem is given as a distribution of the random variable of interest , the _ posterior distribution _ , determined by bayes formula that is , roughly spoken , all values are assigned a probability of being a solution to given the noisy data . in , the _ likelihood function _ represents the model for the measurement noise whereas the _ prior distribution _ represents a - priori information about the unknown . the data distribution as well as the normalization constants are usually neglected since they only influence the normalization of the posterior distribution . in practice one is often more interested in finding a single representation as solution instead of the distribution itself . popular point estimates are the _ conditional expectation _ ( conditional mean , cm ) and the _ maximum a - posteriori ( map ) _ solution i.e. , the most likely value for under the prior distribution given the data . both point estimators are widely used . the computation of the cm - solution is often slow since it requires repeated sampling of stochastic quantities and the evaluation of high - dimensional integrals . the map - solution , however , essentially leads to a tikhonov - type problem . namely , assuming and , one has analogously to . also non - bayesian approaches for inverse problems often seek to minimize a functional , see e.g. or use techniques known from deterministic theory such as filter methods . finally , inverse problems appear in the context of statistics . hence , the statistics community has developed methods to solve , partly again based on the minimization of . we refer to for an overview . in summary , tikhonov - type functionals and other deterministic methods frequently appear also in the stochastic setting . from a practical point of view , one would expect to be able to use deterministic regularization methods for even when the noise is stochastic . indeed , the main question for the actual computation of the solution , given a particular sample of noisy data , is the choice of the regularization parameter . a second question , mostly coming from the deterministic point of view , is the one of convergence of the solutions when the noise approaches zero . in the stochastic setting these questions are answered often by a full stochastic analysis of the problem . in this paper we present a framework that allows to find appropriate regularization parameters , prove convergence of regularization methods and find convergence rates for inverse problems with a stochastic noise model by directly using existing results from the deterministic theory . the paper takes several ideas from the dissertation , which is only publicly available as book and not published elsewhere . it is organized as follows . in section [ ssec : noisemodel ] we discuss an issue occurring in the transition from deterministic to stochastic noise for infinite dimensional problems . 
the ky fan metric , which will be the main ingredient of our analysis , and its relation to the expectation will be introduced in section [ ssec : kyfan ] . we present our framework to lift convergence results from the deterministic setting into the stochastic setting in section [ ssec : conv_stoch ] . examples for the lifting strategy are given in section [ sec : examples ] . |
we consider the incompressible navier stokes equations in a bounded domain ( ) with a smooth boundary subject to homogeneous dirichlet boundary conditions on . in ( [ onetwo ] ) , is the velocity field , the pressure , and a given force field . for simplicity in the exposition we assume , as in , , , , , that the fluid density and viscosity have been normalized by an adequate change of scale in space and time .let and be the semi - discrete ( in space ) mixed finite element ( mfe ) approximations to the velocity and pressure , respectively , solution of ( [ onetwo ] ) corresponding to a given initial condition we study the a posteriori error estimation of these approximations in the and norm for the velocity and in the norm for the pressure . to do this for a given time , we consider the solution ( , ) of the stokes problem we prove that and are approximations to and whose errors decay by a factor of faster than those of and ( being the mesh size ) . as a consequence ,the quantities and , are asymptotically exact indicators of the errors and in the navier - stokes problem ( [ onetwo])([ic ] ) .furthermore , the key observation in the present paper is that ( ) is also the mfe approximation to the solution of the stokes problem ( [ eq : stokes ] ) .consequently , any available procedure to a posteriori estimate the errors in a stokes problem can be used to estimate the errors and which , as mentioned above , coincide asymptotically with the errors and in the evolutionary ns equations .many references address the question of estimating the error in a stokes problem , see for example , , , , , , and the references therein . in this paperwe prove that any efficient or asymptotically exact estimator of the error in the mfe approximation to the solution of the _ steady _ stokes problem ( [ eq : stokes ] ) is also an efficient or asymptotically exact estimator , respectively , of the error in the mfe approximation to the solution of the _ evolutionary _ navier - stokes equations ( [ onetwo])([ic ] ) . for the analysis in the present paper we do not assume to have more than second - order spatial derivatives bounded in up to initial time , since demanding further regularity requires the data to satisfy nonlocal compatibility conditions unlikely to be fulfilled in practical situations , .the analysis of the errors and follows closely where mfe approximations to the stokes problem ( [ eq : stokes ] ) ( the so - called postprocessed approximations ) are considered with the aim of getting improved approximations to the solution of ( [ onetwo])([ic ] ) at any fixed time . 
in this paperwe will also refer to ( , ) as postprocessed approximations although they are of course not computable in practice and they are only considered for the analysis of a posteriori error estimators .the postprocessed approximations to the navier - stokes equations were first developed for spectral methods in , , , and also developed for mfe methods for the navier - stokes equations in , , .for the sake of completeness , in the present paper we also analyze the use of the computable postprocessed approximations of for a posteriori error estimation .the use of this kind of postprocessing technique to get a posteriori error estimations has been studied in , and for nonlinear parabolic equations excluding the navier - stokes equations .we refer also to where the so - called stokes reconstruction is used to a posteriori estimate the errors of the semi - discrete in space approximations to a linear time - dependent stokes problem .we remark that the stokes reconstruction of is exactly the postprocessing approximation ( ) in the particular case of a linear model . in the second part of the paper we consider a posteriori error estimations for the fully discrete mfe approximations and , ( for ) obtained by integrating in time with either the backward euler method or the two - step backward differentiation formula ( bdf ) . for this purpose, we define a stokes problem similar to ( [ eq : stokes ] ) but with the right - hand - side depending now on the fully discrete mfe approximation ( problem ( [ posth0n])([posth1n ] ) in section [ sec:4 ] below ) . we will call time - discrete postprocessed approximation to the solution of this new stokes problem . as before , is not computable in practice and it is only considered for the analysis of a posteriori error estimation .observe that in the fully discrete case ( which is the case in actual computations ) the task of estimating the the error of the mfe approximation becomes more difficult due to the presence of time discretization errors , which are added to the spatial discretization errors .however we show in section [ sec:4 ] that if temporal and spatial errors are not very different in size , the quantity correctly esimates the spatial error because the leading terms of the temporal errors in and get canceled out when subtracting , leaving only the spatial component of the error .this is a very convenient property that allows to use independent procedures for the tasks of estimating the errors of the spatial and temporal discretizations .we remark that the temporal error can be routinely controlled by resorting to well - known ordinary differential equations techniques .analogous results were obtained in for fully discrete finite element approximations to evolutionary convection - reaction - diffusion equations using the backward euler method . as in the semidiscrete case , a key point in our results is again the fact that the fully discrete mfe approximation to the navier - stokes problem ( [ onetwo])([ic ] ) is also the mfe approximation to the solution of the stokes problem ( [ posth0n])([posth1n ] ) . 
as a consequence , we can use again any available error estimator for the stokes problem to estimate the spatial error of the fully discrete mfe approximations to the navier - stokes problem ( [ onetwo])([ic ] ) .computable mixed finite element approximations to , the so - called fully discrete postprocessed approximations , were studied and analyzed in where we proved that the fully discrete postprocessed approximations maintain the increased spatial accuracy of the semi - discrete approximations . the analysis in the second part of the present paper borrows in part from .also , we propose a computable error estimator based on the fully discrete postprocessed approximation of and show that it also has the excellent property of separating spatial and temporal errors .the rest of the paper is as follows . in section 2we introduce some preliminaries and notation . in section 3we study the a posteriori error estimation of semi - discrete in space mfe approximations . in section 4we study a posteriori error estimates for fully discrete approximations .finally , some numerical experiments are shown in section 5 .we will assume that is a bounded domain in , of class , for .when dealing with linear elements ( below ) may also be a convex polygonal or polyhedral domain .we consider the hilbert spaces endowed with the inner product of and , respectively . for integer and , we consider the standard spaces , , of functions with derivatives up to order in , and .we will denote by the norm in , and will represent the norm of its dual space .we consider also the quotient spaces with norm .we recall the following sobolev s imbeddings : for , there exists a constant such that for , ( [ sob1 ] ) holds with .the following inf - sup condition is satisfied ( see ) , there exists a constant such that where , here and in the sequel , denotes the standard inner product in or in . let be the projector onto .we denote by the stokes operator on : applying leray s projector to ( [ onetwo ] ) , the equations can be written in the form where for , in . we shall use the trilinear form defined by it is straightforward to verify that enjoys skew - symmetry : let us observe that for .let us consider for and the operators and , which are defined by means of the spectral properties of ( see , e.g. , , ) .notice that is a positive self - adjoint operator with compact resolvent in .an easy calculation shows that where , here and in what follows , when applied to an operator denotes the associated operator norm .we shall assume that the solution of ( [ onetwo])-([ic ] ) satisfies for some constants and .we shall also assume that there exists a constant such that finally , we shall assume that for some so that , according to theorems 2.4 and 2.5 in , there exist positive constants and such that the following bounds hold : where and for some .observe that for , we can take and . for simplicity, we will take these values of and .let , be a family of partitions of suitable domains , where is the maximum diameter of the elements , and are the mappings of the reference simplex onto .let , we consider the finite - element spaces where denotes the space of polynomials of degree at most on . as it is customary in the analysis of finite - element methods for the navier - stokes equations( see e. g. , , , , , ) we restrict ourselves to quasiuniform and regular meshes , so that as a consequence of ( * ? ? 
?* theorem 3.2.6 ) , the following inverse inequality holds for each where , .we shall denote by the so - called hood taylor element , when , where and the so - called mini - element when , where , and . here, is spanned by the bubble functions , , defined by , if and 0 elsewhere , where denote the barycentric coordinates of . for these elementsa uniform inf - sup condition is satisfied ( see ) , that is , there exists a constant independent of the mesh grid size such that we remark that our analysis can also be applied to other pairs of lbb - stable mixed finite elements ( see ( * ? ? ?* remark 2.1 ) ) .the approximate velocity belongs to the discrete divergence - free space which is not a subspace of .we shall frequently write instead of whenever the value of plays no particular role . let be the discrete leray s projection defined by we will use the following well - known bounds we will denote by the discrete stokes operator defined by let be the solution of a stokes problem with right - hand side , we will denote by the so - called stokes projection ( see ) defined as the velocity component of solution of the following stokes problem : find such that the following bound holds for : the proof of ( [ stokespro ] ) for can be found in .for the general case , must be such that the value of satisfies .this can be achieved if , for example , is piecewise of class , and superparametric approximation at the boundary is used . under the same conditions ,the bound for the pressure is where the constant depends on the constant in the inf - sup condition ( [ lbbh ] ) .we will assume that the domain is of class , with so that standard bounds for the stokes problem , imply that for a domain of class we also have the bound ( see ) in what follows we will apply the above estimates to the particular case in which is the solution of the navier stokes problem ( [ onetwo])([ic ] ) . in that case is the discrete velocity in problem ( [ stokesnew])([stokesnew2 ] ) with .note that the temporal variable appears here merely as a parameter , and then , taking the time derivative , the error bound ( [ stokespro ] ) can also be applied to the time derivative of changing , by , .since we are assuming that is of class and , from ( [ stokespro ] ) and standard bounds for the stokes problem , we deduce that we consider the semi - discrete finite - element approximation to , solution of ( [ onetwo])([ic ] ) .that is , given , we compute and , ] to obtained by solving ( [ ten])([ten2 ] ) .we consider the postprocessed approximation in which is the solution of the following stokes problem written in weak form for all and .we remark that the mfe approximation to is also the mfe approximation to the solution of the stokes problem ( [ posth0])([posth1 ] ) . in theorems [ semi_disve ] and [ semi_dispre ] belowwe prove that the postprocessed approximation is an improved approximation to the solution of the evolutionary navier - stokes equations ( [ onetwo])([ic ] ) at time .although , as it is obvious , is not computable in practice , it is however a useful tool to provide a posteriori error estimates for the mfe approximation at any desired time . in theorem [ semi_disve ]we obtain the error bounds for the velocity and in theorem [ semi_dispre ] the bounds for the pressure .the improvement is achieved in the norm when using the mini - element ( ) and in both the and norms in the cases . 
in the sequelwe will use that for a forcing term satisfying ( [ tildem2 ] ) there exists a constant , depending only on , and , such that the following bound hold for : the following inequalities hold for all and , see ( * ? ? ? * ( 3.7 ) ) : the proof of theorem [ semi_disve ] requires some previous results which we now state and prove .we will use the fact that for , from where it follows that where the constant is independent of .let be the solution of ( [ onetwo])([ic ] ) and fix . then there exists a positive constant such that for satisfying the threshold condition the following inequalities hold for : due to the equivalence ( [ a-1equiv ] ) and and since for it is sufficient to prove for or .we follow the proof ( * ? ? ?* lemma 3.1 ) where a different threshold assumption is assumed .we do this for , since the case is similar but yet simpler .we write where .we first observe that where , in the last inequality , we have used that thanks to sobolev s inequality ( [ sob1 ] ) we have .similarly , the proof of the case in ( [ eq : aux_bat ] ) is finished if we show that for , both and are bounded in terms of and the value in the threshold assumption ( [ threshold ] ) . to do this, we will use the inverse inequality ( [ inv ] ) and the fact that the stokes projection satisfies that for some constant ( see for example the proof of lemma 3.1 in ) .we have where in the last inequality we have applied ( [ inv ] ) , and , similarly , where we also have used that for . now the threshold assumption ( [ threshold ] ) and ( [ stokespro ] ) show the boundedness of and .finally , the proof of the case in ( [ eq : aux_bat ] ) is , with obvious changes , that of the equivalent result in ( * ? ? ?* lemma 3.1 ) . in the sequel we consider the auxiliary function \rightarrow v_h ] .we now propose a simple procedure to estimate the error which is based on computing a mfe approximation to the solution of ( [ posth0])-([posth1 ] ) on a mfe space with better approximation capabilities than in which the galerkin approximation is defined .this procedure was applied to the -version of the finite - element method for evolutionary convection - reaction - diffusion equations in .the main idea here is to use a second approximation of different accuracy than that of the galerkin approximation of and whose computational cost hardly adds to that of the galerkin approximation itself .let us fix any time ] : \(i ) if the postprocessing element is , then \(ii ) if the postprocessing element is , then for only the case in ( [ postcuad1 ] ) and ( [ postcuad2 ] ) holds . in ( [ postcuad1])([ulti2 ] ) , when and otherwise .the cases have been proven in theorems 5.2 and 5.3 in . following the same arguments, we now prove the results corresponding to and , the case being similar , yet easier .we decompose the error , where is the solution of that is , is the stokes projection of onto .since in view of ( [ stokespro])([stokespre ] ) we have we only have to estimate and .to do this , we subtract ( [ posth0dis ] ) from ( [ july-24th11 ] ) , and take inner product with to get now applying lemma [ le : z_t ] , ( [ eq : aux_bat ] ) and ( [ eq : err_vel(t ) ] ) the proof of ( [ postcuad1 ] ) is finished . to prove ( [ ulti1 ] ) , again we subtract ( [ posth0dis ] ) from ( [ july-24th11 ] ) , rearrange terms and apply the inf - sup condition ( [ inf - sup ] ) to get and the proof is finished with the same arguments used to prove ( [ postcuad1 ] ) . 
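To make the procedure just described concrete, here is a drastically simplified one-dimensional analogue in NumPy/SciPy: a heat equation discretized by finite differences plays the role of the Navier-Stokes equations discretized by mixed finite elements. The mechanics are the same — recover the time derivative from the semidiscrete equation, solve a steady problem with that right-hand side on a richer (finer) space, and compare with the original discrete solution — but none of the constants, norms or rates of the theorems above carry over literally; the sketch only illustrates how the estimator of the next section is assembled.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def lap1d(n):
    """Grid and finite-difference Dirichlet Laplacian on n interior nodes of (0,1)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h ** 2
    return x, L

# toy problem u_t - u_xx = f on (0,1) with exact solution u = exp(-t) sin(pi x)
f = lambda x, t: (np.pi ** 2 - 1.0) * np.exp(-t) * np.sin(np.pi * x)
u_exact = lambda x, t: np.exp(-t) * np.sin(np.pi * x)

# "galerkin" solution on a coarse grid, integrated with a tiny time step so that
# only the spatial error is visible at the final time
xc, Lc = lap1d(15)
dt, T = 1e-4, 0.5
u_h = u_exact(xc, 0.0)
A = np.eye(xc.size) - dt * Lc
for k in range(int(round(T / dt))):
    u_h = np.linalg.solve(A, u_h + dt * f(xc, (k + 1) * dt))

# postprocessing: recover u_h,t from the semidiscrete equation and solve the steady
# problem -w'' = f(T) - u_h,t on a richer (finer) grid
uh_t = Lc @ u_h + f(xc, T)
xf, Lf = lap1d(255)
rhs = f(xf, T) - CubicSpline(np.r_[0.0, xc, 1.0], np.r_[0.0, uh_t, 0.0])(xf)
u_tilde = np.linalg.solve(-Lf, rhs)

u_h_fine = CubicSpline(np.r_[0.0, xc, 1.0], np.r_[0.0, u_h, 0.0])(xf)
estimator = np.max(np.abs(u_tilde - u_h_fine))       # difference postprocessed minus "galerkin"
true_error = np.max(np.abs(u_exact(xc, T) - u_h))
print(estimator, true_error)                          # the two numbers are of comparable size
```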
to estimate the error in we propose to take the difference between the postprocessed and the galerkin approximations : in the following theorem we prove that this error estimator is efficient and asymptotically exact both in the and norms and it has the advantage of providing an improved approximation when added to the galerkin mfe approximation .[ th_pos_esti ] let be the solution of ( [ onetwo])-([ic ] ) and fix any positive time .assume that condition ( [ saturacion ] ) is satisfied .then , there exist positive constants , , and , , and such that , for and , the error estimators satisfy the following bounds when and : furthermore , if , with , , or then for the mini element , the case in ( [ efi_posdis ] ) and ( [ asin_posdis ] ) must be excluded. we will prove the estimates for the velocity in the case , since the estimates for the pressure and the case are obtained by similar arguments but with obvious changes .let us observe that for on the other hand using ( [ saturacion ] ) we get taking and and sufficiently small , the bound ( [ efi_posdis ] ) is readily obtained .the proof of ( [ asin_posdis ] ) follows straightforwardly from ( [ july-24th ] ) , since in the case when with , , the term when tends to zero , and in the case when the term containing the parameter is not present .in practice , it is not possible to compute the mfe approximation exactly , and , instead , some time - stepping procedure must be used to approximate the solution of ( [ ten])-([ten2 ] ) .hence , for some time levels , approximations and are obtained . in this sectionwe assume that the approximations are obtained with the backward euler method or the two - step bdf which we now describe . for simplicity , we consider only constant stepsizes , that is , for integer , we fix , and we denote , . for a sequence we denote given , a sequence of approximations to , , is obtained by means of the following recurrence relation : where in the case of the backward euler method and for the two - step bdf . in this last case , a second starting value is needed . here, we will always assume that is obtained by one step of the backward euler method .also , for both the backward euler and the two - step bdf , we assume that , which is usually the case in practical situations .we now define the time - discrete postprocessed approximation . given an approximation to , the time - discrete postprocessed velocity and pressure are defined as the solution of the following stokes problem : for reasons already analyzed in and we define as an adequate approximation to the time derivative .for the analysis of the errors and we follow , where the mfe approximations to the stokes problem ( [ posth0n])([posth1n ] ) are analyzed .we start by decomposing the errors and as follows , where and are the temporal errors of the time - discrete postprocessed velocity and pressure .the first terms on the right - hand sides of ( [ eq : decomtilde])([eq : decomtildep ] ) are the errors of the postprocessed approximation that were studied in the previous section .let us denote by , the temporal error of the mfe approximation to the velocity , and by , the temporal error of the mfe approximation to the pressure . in the present section we bound and in terms of .the error bounds in the following lemma are similar to those of ( * ? ? ?* proposition 3.1 ) where error estimates for mfe approximations of the stokes problem ( [ posth0n])([posth1n ] ) are obtained .[ lema_need]there exists a positive constant such that let us denote by where . 
subtracting ( [ posth0n])([posth1n ] ) from ( [ posth0])([posth1 ] ) we have that the temporal errors of the time - discrete postprocessed velocity and pressure are the solution of the following stokes problem on the other hand , subtracting ( [ tend])-([tend2 ] ) from ( [ ten])-([ten2 ] ) and taking into account that , thanks to definition ( [ dtu ] ) , we get that the temporal errors of the fully discrete mfe approximation satisfy and thus is the mfe approximation to the solution of ( [ otro_stokes])([otro_stokes2 ] ) . usingthen ( [ stokespro+1 ] ) we get for the pressure we apply ( [ stokespre ] ) and ( [ prehr ] ) to obtain then , to conclude , it only remains to bound . from the definition of is easy to see that so that now , by writing as and using ( [ eq : adelanto0])([eq : adelanto1 ] ) we get from which we finally conclude ( [ error_post_fully ] ) and ( [ error_post_fully_pre ] ) .let us consider the quantities and as a posteriori indicators of the error in the fully discrete approximations to the velocity and pressure respectively .then , we obtain the following result : [ th_esti_time_d ] let be the solution of ( [ onetwo])([ic ] ) and let ( [ tildem2 ] ) hold .assume that the fully discrete mfe approximations , are obtained by the backward euler method or the two - step bdf ( [ tend])([tend2 ] ) , and let be the solution of ( [ posth0n])([posth1n ] ) .then , for , where is the constant in ( [ pri_fully])([pri_fully_pres ] ) , for the backward euler method and for the two - step bdf . in ( * ? ? ?* theorems 5.4 and 5.7 ) we prove that if ( [ tildem2 ] ) and the case in ( [ stokespro ] ) hold , the errors of these two time integration procedures satisfy for small enough that for a certain constants and , where for the backward euler method and for the two - step bdf .since , and then , from ( [ eq : orden1 ] ) and ( [ error_post_fully])-([error_post_fully_pre ] ) we finally reach that for small enough where is a positive constant .let us decompose the estimators as follows : which implies thus , in view of ( [ pri_fully])([pri_fully_pres ] ) we obtain ( [ decom_prin2 ] ) and ( [ decom_prin2p ] ) let us comment on the practical implications of this theorem .observe that from ( [ decom_prin ] ) and ( [ decom_prinp ] ) the fully discrete estimators and can be both decomposed as the sum of two terms .the first one is the semi - discrete a posteriori error estimator we have studied in the previous section ( see remark [ remark31 ] ) and which we showed it is an asymptotically exact estimator of the spatial error of and respectively . on the other hand , as shown in ( [ pri_fully])([pri_fully_pres ] ) , the size of the second term is in asymptotically smaller than the temporal error of and respectively .we conclude that , as long as the spatial an temporal errors are not too unbalanced ( i.e. , they are not of very different sizes ) , the first term in ( [ decom_prin ] ) and ( [ decom_prinp ] ) is dominant and then the quantities and are a posteriori error estimators of the spatial error of the fully discrete approximations to the velocity and pressure respectively . 
the control of the temporal error can be then accomplished by standard and well - stablished techniques in the field of numerical integration of ordinary differential equations .now , we remark that are obviously not computable .however , we observe that the fully discrete approximation of the evolutionary navier - stokes equation is also the approximation to the stokes problem ( [ posth0n])-([posth1n ] ) whose solution is .then , one can use any of the available error estimators for a steady stokes problem to estimate the quantities and , which , as we have already proved , are error indicators of the spatial errors of the fully discrete approximations to the velocity and pressure , respectively . to conclude , we show a procedure to get computable estimates of the error in the fully discrete approximations .we define the fully discrete postprocessed approximation as the solution of the following stokes problem ( see ) : where is as in ( [ posth0dis])-([posth1dis ] ) .let us denote by and the temporal errors of the fully discrete postprocessed approximation ( observe that the semi - discrete postprocessed approximation is defined in ( [ posth0dis])-([posth1dis ] ) ) . let us denote , as before , by the temporal error of the mfe approximation to the velocity .then , we have the following bounds .[ prop : err_post_fully ] there exists a positive constant such that for the following bounds hold the bound ( [ error_post_fully_vel ] ) is proved in ( * ? ? ?* proposition 3.1 ) . to prove ( [ duda2pp ] )we decompose the second term above is bounded in ( [ error_post_fully_pre ] ) of lemma [ lema_need ] . for the first we observe that is the mfe approximation in to the pressure in ( [ otro_stokes])-([otro_stokes2 ] ) so that the same reasoning used in the proof of lemma [ lema_need ] allow us to obtain using ( [ eq : orden1 ] ) as before , we get the analogous to ( [ pri_fully ] ) and ( [ pri_fully_pres ] ) , i.e. , for small enough the following bound holds where is a positive constant . similarly to ( [ decom_prin])([decom_prinp ] ) we write and , so that in view of ( [ pri_fully_dis])([pri_fully_dis_pre ] ) we have the following result .[ th_esti_fully_d ] let be the solution of ( [ onetwo])([ic ] ) and let ( [ tildem2 ] ) hold .assume that the fully discrete mfe approximations , are obtained by the backward euler method or the two - step bdf ( [ tend])([tend2 ] ) , and let be the solution of ( [ posth0ndis])([posth1ndis ] ) .then , for , where is the constant in ( [ pri_fully_dis])([pri_fully_dis_pre ] ) , for the backward euler method and for the two - step bdf .the practical implications of this result are similar to those of theorem [ th_esti_time_d ] , that is , the first term on the right - hand side of ( [ decom_prin2_dis ] ) is an error indicator of the spatial error ( see theorem [ th_pos_esti ] ) while the second one is asymptotically smaller than the temporal error . as a consequence ,the quantity is a computable estimator of the spatial error of the fully discrete velocity whenever the temporal and spatial errors of are more or less of the same size .as before , similar arguments apply for the pressure .we remark that having balanced spatial and temporal errors in the fully discrete approximation is the more common case in practical computations since one usually looks for a final solution with small total error . 
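to make the procedure concrete , the following sketch outlines how the computable estimator described above could be assembled around an existing time - stepping code . it is only a schematic illustration : the routines advance_mfe_step , solve_postprocessed_stokes and mixed_norm are hypothetical placeholders for the user s own mixed finite element solver ( they do not belong to any particular library ) , and the two - step bdf branch assumes that the first step is taken with backward euler , as in the text .

# schematic sketch (not a library implementation): the helper routines below are
# hypothetical placeholders for an existing mixed finite element (mfe) code.

def estimate_spatial_error(u0_h, times, dt, scheme="bdf2"):
    """advance the fully discrete galerkin mfe solution in time and, at the final
    level, build the computable a posteriori estimator by postprocessing."""
    u_prev2, u_prev, p_new = None, u0_h, None
    for n, t in enumerate(times[1:], start=1):
        if scheme == "euler" or n == 1:
            # backward euler step (also used to start the two-step bdf)
            u_new, p_new = advance_mfe_step(u_prev, dt, t, scheme="euler")
        else:
            # the two-step bdf uses the two previous velocity levels
            u_new, p_new = advance_mfe_step((u_prev, u_prev2), dt, t, scheme="bdf2")
        u_prev2, u_prev = u_prev, u_new

    # postprocessing: a single steady stokes solve in an enhanced space
    # (finer mesh or higher-order elements) at the final time only
    u_post, p_post = solve_postprocessed_stokes(u_prev, p_new, t=times[-1])

    # computable estimators: difference between postprocessed and galerkin solutions
    eta_u = mixed_norm(u_post - u_prev)   # spatial error indicator for the velocity
    eta_p = mixed_norm(p_post - p_new)    # spatial error indicator for the pressure
    return eta_u, eta_p

the point of the sketch is that a single additional steady stokes solve , performed at the time of interest on a richer discretization , is all that is needed on top of the galerkin time - stepping loop , and that the resulting differences serve as spatial error indicators for the velocity and the pressure .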
as in the semi - discrete case ,the advantage of these error estimators is that they produce enhanced ( in space ) approximations when they are added to the galerkin mfe approximations .we consider the equations in the domain \times[0,1] ] .the values in the norm for the velocity lie on the interval ] . to conclude , we show a numerical experiment to check the behavior of the estimators in the fully discrete case .we choose the forcing term such that the solution of ( [ onetwo_nu ] ) is ( [ solu_fix ] ) with .the value of and the final time are the same as before .( asterisks ) and ( circles ) for . on the left : euler ; on the right : two - step bdf for to .,title="fig:",width=226 ] ( asterisks ) and ( circles ) for . on the left : euler ; on the right : two - step bdf for to .,title="fig:",width=226 ] in figure [ ulti ] , on the left , we have represented the errors obtained using the implicit euler method as a time integrator for different values of the fixed time step ranging from to halving each time the value of . for the spacial discretizationwe use the mini - element with always the same value of .we use solid lines for the errors in the galerkin method and dashed lines for the estimations , as before .the norm errors are marked with asterisks while the norm errors are marked with circles .we estimate the errors using the postprocessed method computed with the same mini - element over a refined mesh of size . we observe that the galerkin errors decrease as decreases until a value that corresponds to the spatial error of the approximation . on the contrary ,the error estimations lie on an almost horizontal line , both for the velocity in the and norms and for the pressure .this means , as we stated in section 4.2 , that the error estimations we propose are a measure of the spatial errors , even when the errors in the galerkin method are polluted by errors coming from the temporal discretization . in this experimentthe error estimations are very accurate for the spatial errors of the velocity in the norm and for the errors in the pressure .as commented above , the fact that postprocessing linear elements does not increase the convergence rate in the norm is reflected in the precision of the error estimations in the norm .on the right of figure [ ulti ] we have represented the errors obtained when we integrate in time with the two - step bdf and fixed time step .the only remarkable difference is that , as we expected from the second order rate of convergence of the method in time , the temporal errors are smaller for the same values of the fixed time step .again , the estimations lie on a horizontal line being essentially the same as in the experiment on the left ., _ gamma function and related functions in handbook of mathematical functions with formulas , graphs , and mathematical tables _ , edited by milton abramowitz and irene a. stegun .reprint of the 1972 edition .dover publications , inc . , new york , 1992 .j. g. heywood and r. rannacher , _ finite element approximation of the nonstationary navier stokes problem_. i. _ regularity of solutions and second - order error estimates for spatial discretization _ ,siam j. numer ., 19 ( 1982 ) , pp . 275311 ., _ finite element approximation of the nonstationary navier stokes problem ._ iii : _ smoothing property and higher order error estimates for spatial discretization _ , siam j. numer ., 25 ( 1988 ) , pp . 489512 . | a posteriori estimates for mixed finite element discretizations of the navier - stokes equations are derived . 
we show that the task of estimating the error in the evolutionary navier - stokes equations can be reduced to the estimation of the error in a steady stokes problem . as a consequence , any available procedure to estimate the error in a stokes problem can be used to estimate the error in the nonlinear evolutionary problem . a practical procedure to estimate the error based on the so - called postprocessed approximation is also considered . both the semidiscrete ( in space ) and the fully discrete cases are analyzed . some numerical experiments are provided . |
a large number of neurons is involved in any computation , and the presence of non - trivial correlations makes understanding the mechanisms of computation in the brain a difficult challenge . the simplest model for describing multi - neuron spike statistics is the pairwise ising model . the inference of the couplings of an ising model from data , the _ inverse ising problem _ , has recently attracted attention , see where data from a simulated cortical network were considered ; the general idea is to find an ising model with the same means and pairwise correlations as the data . several approximate methods can be used , like that by sessak and monasson or the inversion of tap equations : equal - time correlations from data are used in those methods . the flow of information between spins is related , instead , to correlations at different times between variables : it can be expected that , by measuring the information flow between spins , one may improve the estimate of couplings from data . two major approaches are commonly used to estimate the information flow between variables , transfer entropy and granger causality . recently it has been shown that for gaussian variables granger causality and transfer entropy are entirely equivalent , as the following relation holds : _ granger causality = 2 transfer entropy_. this result provides a bridge between autoregressive and information - theoretic approaches to causal inference from data . the purpose of this work is to explore the use of granger causality to learn ising models from data . the _ inverse ising problem _ is here seen as belonging to the more general frame of the inference of dynamical networks from data , a topic which has been studied in recent papers : its relevance is due to the fact that dynamical networks model physical and biological behavior in many applications . we show that for weak couplings , the linear granger causality of each link is two times the corresponding transfer entropy , also for ising models : this justifies the use of autoregressive approaches to the inverse ising problem . in the same limit , for each link , the following relation exists between the coupling ( ) and the causality ( ) : . in the case of limited samples , granger causality gives poor results : almost all the connections fail to be assessed as significant when the number of samples is low . in these cases , we propose the use of the regularized least squares method , a penalized autoregressive approach tailored to embody the sparsity assumption , to recover the non - vanishing connections of a sparse ising model ; as expected , this approach outperforms granger causality in this case . finally , we show that nonlinear granger causality is related to multi - spin interactions . the paper is organized as follows . in the next section we briefly recall the notions of granger causality and transfer entropy , and we also describe the ising models that we use for simulations .
in section iiiwe describe our results on fully connected models , sparse ising models and models with higher order spin interactions .some conclusions are drawn in section iv .in this section we review the notions of granger causality analysis and transfer entropy .we also discuss the application of these methods to binary time series arising from ising models .granger causality has become the method of choice to determine whether and how two time series exert causal influences on each other .it is based on prediction : if the prediction error of the first time series is reduced by including measurements from the second one in the linear regression model , then the second time series is said to have a causal influence on the first one .the estimation of linear granger causality from fourier and wavelet transforms of time series data has been recently addressed .the nonlinear generalization of granger causality has been developed by a kernel algorithm which embeds data into a hilbert space , and searches for linear granger causality in that space ; the embedding is performed implicitly , by specifying the inner product between pairs of points , and a statistical procedure is used to avoid over - fitting .quantitatively , let us consider time series ; the lagged state vectors are denoted being the window length .let be the mean squared error prediction of on the basis of all the vectors ( corresponding to the kernel approach described in ) : is equal to , where , the predicted values of using , is the projection of on a suitable space .the prediction of on the basis of all the variables but , , corresponds instead to the projection on a space with . represents the information that one gains from the knowledge of .the multivariate granger causality index is defined as the ( normalized ) variation of the error in the two conditions , i.e. note that the numerator , in the equation above , coincides with the projection of on : as described in , one may write where are suitable pearson s correlations . by summing , in equation ( [ somma ] ) , only over significative correlations ,a _ filtered _ linear granger causality index is obtained which measures the causality without further statistical test . in it has been shown that not all the kernels are suitable to estimate causality .two important classes of kernels which can be used to construct nonlinear causality measures are the _ inhomogeneous polynomial kernel _ ( whose features are all the monomials , in the input variables , up to the degree ; corresponds to linear granger causality ) and the _ gaussian kernel_. note that in a different index of causality is adopted : };\ ] ] and coincide at small .the choice of the window length is usually done using the standard cross - validation scheme ; as in this work we know how data are generated , here we use .the formalism of granger causality is constructed under the hypothesis that time series assume continuous values . in recent papersthe application of granger causality to data in the form of phases has been considered .even though there is not theoretical justification in the case of binary variables , here we apply the formalism of granger casuality to binary time series , by substituting and in this work we justify the application of granger causality to binary time series in terms of its relation with transfer entropy . using the same notation as in the previous subsection , the transfer entropy index is given by and measures the flow of information from to . 
for gaussian variablesit has been shown that causality is determined by the transfer entropy and ; hence for gaussian variables .the probabilities s , in ( [ tau ] ) , must be estimated from data using techniques of nonlinear time series analysis . in the case of binary variables , the number of configurations is finite and the integrals in ( [ tau ] ) become sums over configurations ; the probabilities can be estimated as frequencies in the data - set at hand .therefore where is the fraction of times that the configuration is observed in the data set , and similar definitions hold for the other probabilities .we remark that the number of configurations increases exponentially as the number of spins grows , hence the direct evaluation of ( [ te ] ) is feasible only for systems of small size .the binary time series analyzed in this work are generated by parallel updating of ising variables : where the local fields are given by with couplings .starting from a random initial configuration of the spin , equations ( [ update ] ) are iterated and , after discarding the initial transient regime , consecutive samples of the system are stored for further analysis .in order to generalize granger causality to discrete variables , we consider the regression function of ( [ update ] ) . for weak couplings the conditional expectation ( which coincides with the regression function ) can be written and is a linear function of the couplings .therefore the linear causality is at the lowest order in s .analogously , expanding eq.([te ] ) at the lowest order , the transfer entropy reads .this means that , for low couplings , the value of , for any given link , determines both the transfer entropy and the linear granger causality for that link , and the two quantities differ only by the factor .the same relation , proved in the gaussian case , holds also for ising models at weak couplings .being related to the transfer entropy , granger causality thus measures the information flow for these systems , and this justifies the use of granger causality methods for ising systems .synaptic couplings are directed , so is not in general equal to ( the equilibrium ising model requires symmetric couplings ) .therefore we consider an asymmetric system of spins with couplings chosen at random from a normal distribution with zero mean and standard deviation ; no self interactions are assumed ( ) . in figure ( [ f1 ] )we report the plot of the numerical estimates of linear causality and transfer entropy , as a function of , for several realizations of the couplings and for some values of ; the simulations confirm that for low couplings a one - one correspondence exists between causality and transfer entropy . in figure ( [ f2 ] )we depict , as a function of the coupling , both the linear causality and the transfer entropy in a typical asymmetric model of six spins with and samples , and . in figure ( [ f3 ] )we depict , as a function of the number of samples , the difference between the values of transfer entropy and causality ( as estimated on samples ) and their estimates based on samples ( the difference is averaged over 1000 runs of the ising system ) : at low the estimates of causalities are more reliable than those of transfer entropy .moreover , the computational complexity of the estimation of transfer entropy is much higher than those corresponding to the evaluation of granger causality .another interesting situation is all the couplings being equal to a positive quantity ( still , without self - interactions ) . 
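for readers who want to reproduce this kind of experiment , a minimal simulation sketch is given below . it assumes the standard parallel glauber ( heat - bath ) update for the kinetic ising model , in which spin i takes the value +1 at time t+1 with probability 1/(1+exp(-2 h_i(t ) ) ) , h_i(t ) being the local field ; this rule and the parameter values used here ( number of spins , coupling scale , number of samples ) are illustrative assumptions and are not prescribed by the text . the transfer entropy on a link is then estimated by plugging the empirical frequencies of the binary configurations into its defining sum , as described above .

import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# kinetic ising model with asymmetric gaussian couplings (illustrative sizes)
n_spins, n_samples, coupling_std, n_transient = 6, 100_000, 0.1, 1_000
J = rng.normal(0.0, coupling_std, size=(n_spins, n_spins))
np.fill_diagonal(J, 0.0)                        # no self-interactions

s = rng.choice([-1, 1], size=n_spins)
samples = np.empty((n_samples, n_spins), dtype=int)
for t in range(n_samples + n_transient):        # the first sweeps are discarded as transient
    h = J @ s                                   # local fields
    p_up = 1.0 / (1.0 + np.exp(-2.0 * h))       # parallel glauber (heat-bath) rule (assumed)
    s = np.where(rng.random(n_spins) < p_up, 1, -1)
    if t >= n_transient:
        samples[t - n_transient] = s

def transfer_entropy(samples, i, j):
    """estimate the transfer entropy s_j -> s_i (window length 1) from empirical frequencies."""
    x_next, x_prev, y_prev = samples[1:, i], samples[:-1, i], samples[:-1, j]
    n = len(x_next)
    p_xyz = Counter(zip(x_next, x_prev, y_prev))
    p_xy = Counter(zip(x_next, x_prev))
    p_yz = Counter(zip(x_prev, y_prev))
    p_x = Counter(x_prev)
    te = 0.0
    for (a, b, c), cnt in p_xyz.items():
        p_abc = cnt / n
        # ratio p(x_next, x_prev, y_prev) * p(x_prev) / ( p(x_next, x_prev) * p(x_prev, y_prev) )
        te += p_abc * np.log(p_abc * (p_x[b] / n) / ((p_xy[(a, b)] / n) * (p_yz[(b, c)] / n)))
    return te

print(transfer_entropy(samples, i=0, j=1))

with couplings this weak one would expect , in line with the relation discussed above , the estimated transfer entropy on each link to be roughly half of the corresponding linear granger causality .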
in figure ( [ f4 ] )we depict the granger causality and the transfer entropy as a function of ( these quantities are the same for all the links , due to symmetry ) . for small values of quantities coincide with the values that correspond to in the asymmetric model , and the relation holds . on the other hand ,as increases , this relation is more and more violated .the departure of the ratio from 2 is connected to the emergence of feedback effects in the system , see e.g. the dependency , on , of the auto - correlation time of the magnetization in fig.([f5 ] ) . in many applications one may hypothesize that the connections among variables are sparse .the main goal , in those cases , is to infer the couplings which are not vanishing , independently of their strengths , in particular when the number of samples is low .moreover , in the case of limited data , granger causality gives poor results ; indeed almost all connections would not be assessed as significative ( for a given amount a data , only couplings stronger than a critical value can be recognized by granger causality ) .a major approach to sparse signal reconstruction is the regularized least squares method .although it has been developed to handle continuous variables , we will apply this method to the configurations of ising models . for each targetspin , the vector of couplings , with , is sought for as the minimizer of where is a regularization parameter and is the norm of the vector of couplings .as is increased , the number of vanishing couplings in the minimizers increases : controls the sparsity of the solution .the strategy to fix the value of we use here is 10-fold cross - validation : the original sample is randomly partitioned into 10 subsamples and , out of the 10 subsamples , a single subsample is retained as the validation data .the remaining 9 subsamples are used in ( [ elleuno ] ) to determine the couplings ; the quality of this solution is evaluated as the average number of errors on the validation data .the cross - validation process is then repeated 10 times ( the folds ) , with each of the 10 subsamples used exactly once as the validation data , and the error on the validation data is averaged over the 10 folds .the whole procedure is then repeated as is varied .the optimal value of is chosen as the one leading to the smallest average error on the validation data .as an example , we simulate a system made of 30 spins constituted by ten modules of three spins each . the non - vanishing couplings of the ising model are given by : for . after evaluating the couplings , using the algorithm described in , we calculate the sensitivity ( fraction of non - vanishing connections leading to non - vanishing couplings ) and the specificity ( fraction of vanishing connections leading to vanishing couplings ) as a function of . 
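a concrete way of carrying out the regularized least squares step is sketched below , using the cross - validated lasso from scikit - learn as one possible off - the - shelf implementation ; this is a substitute illustration , not the solver used in the original work , and it assumes that the penalty is the l1 norm of the coupling vector ( the natural choice for promoting sparsity , though the norm is left implicit in the text above ) and that the cross - validation score is the mean squared prediction error rather than the number of misclassified spins . each target spin is regressed on the full configuration at the previous time step , and the couplings pointing into that spin are read off from the fitted coefficients .

import numpy as np
from sklearn.linear_model import LassoCV

def fit_sparse_couplings(samples, cv=10):
    """recover a sparse coupling matrix from binary time series.

    samples : array of shape (T, N) with entries +/-1.
    returns an (N, N) array J_est, with J_est[i, j] the estimated coupling j -> i.
    the regularization parameter is selected by cv-fold cross-validation on the
    mean squared prediction error (a simplification of the criterion in the text).
    """
    X, N = samples[:-1], samples.shape[1]
    J_est = np.zeros((N, N))
    for i in range(N):
        y = samples[1:, i]                  # target spin at the next time step
        J_est[i] = LassoCV(cv=cv).fit(X, y).coef_
    return J_est

# usage: `samples` is any (T, N) array of recorded spin configurations, for instance
# one produced by the simulation sketch given earlier; small coefficients are
# thresholded to read off the inferred network
J_est = fit_sparse_couplings(samples)
adjacency = np.abs(J_est) > 1e-3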
the roc curves ( as is varied , the roc curve is sensitivity as a function of 1-specificity ) we obtain , in correspondence to three values of the number of samples n ( 100 , 250 and 500 ) , are depicted in fig .( [ f6 ] ) .the stars on the curves represent the points corresponding to the value of found by ten - fold cross validation ; these points correspond to a good compromise between specificity and sensitivity .the empty symbols , instead , represent the values of sensitivity and specificity obtained using granger causality in the three cases ; the specificity by granger causality is nearly one in all cases , while the sensitivity is strongly dependent on the number of samples and goes to zero as decreases . to conclude this subsection ,we have shown that in the case of low number of samples and sparse connections the regularized least squares method can be used to infer the connections in ising models and outperforms granger causality in these situations .we remark that direct evaluation of the transfer entropy in these cases is unfeasible .the case of higher order spin interactions requires use of nonlinear granger causality : in the presence of interactions , the kernel approach with the polynomial kernel of at least degree is needed . as an example, we consider a system of three spins with local fields given by : in figure ( [ f7 ] ) we depict the causalities and , as a function of , using the linear kernel and for the approach with the polynomial kernel .note that , due to symmetry , and ; all the other causalities are vanishing . the linear approach is not able to detect the three spins interaction , while using the nonlinear approach the interaction is correctly inferred .we stress that the presence of multispin interactions is connected to the presence of synergetic variables , see for a discussion about the notions of redundancy and synergy in the frame of causality .it is interesting to show the performance by transfer entropy on the same problem , see fig.([f8 ] ) : it correctly detects all the interactions , and the value of the transfer entropy is again very close to be half of those from nonlinear granger causality .we stress that transfer entropy can be applied without prior assumptions about the order of the spins interactions .a major problem in the inference of dynamical networks is the selection of an appropriate model ; in the case of transfer entropy this issue does not arise , although this advantage may be offset by problems associated with reliable estimation of entropies in sample .we have proposed the use of autoregressive methods to learn ising models from data .commonly , the formulation of the inverse ising problem assumes symmetric interactions and is solved by exploiting the relations that exist , at equilibrium , between the pairwise correlations ( at equal times ) and the matrix of couplings . in the general case of asymmetric couplings , no equilibrium is reached and also time delayed correlations among spins should be used to infer the connections .we have shown that autoregressive approaches can solve the inverse ising problem for weak couplings : for each link , whilst the sign of coincides with the sign of the linear correlation between and .for weak couplings , granger causality is proportional to the transfer entropy and requires less samples , than transfer entropy , to provide a reliable estimate of the information flow . 
for sparse connections and low number of samples ,the regularized least squares method is preferable to granger causality ; nonlinear granger causality is related to multispin interactions .99 f. rieke , d. warland , r.r .de ruyter van steveninck , and w. bialek , _ spikes : exploring the neural code _( mit press , cambrige , ma , 1997 ) .e. schneidman , m.j .berry , r. segev , w. bialek , nature * 440 * , 1007 ( 2006 ) .j. shlens , g.d .field , j.l .gauthier , m.i .grivich , d. petrusca , a. sher , a.m. litke , e.j .chichilnisky , j. neurosci .* 28 * 505 ( 2008 ) .y. roudi , j. tyrcha , j. hertz , physical review e * 79 * , 051915 ( 2009 ) .v. sessak and r. monasson , j. phys .a * 42 * , 055001 ( 2009 )kappen and f.b .rodriguez , neural computation * 10 * , 1137 ( 1998 ) .y. roudi , e. aurell and j.a .hertz , front .. neurosci . * 3 * , 22 ( 2009 ) .t. schreiber , phys .lett . * 85 * , 461 ( 2000 ) c.w.j .granger , econometrica * 37 * , 424 ( 1969 ) .l. barnett , a.b .barrett , and a.k .seth , phys .* 103 * , 238701 ( 2009 ) .d. yu , m. righero , l. kocarev , phys.rev.lett . * 97 * , 188701 ( 2006 ) .d. napoletani , t. sauer , phys . rev . * e 77 * , 26103 ( 2008 ) . d. marinazzo , m. pellicoro and s. stramaglia , phys . rev .e * 77 * , 056215 ( 2008 ) .d. materassi , g. innocenti , physica a * 388 * , 3866 ( 2009 ) .barabasi , _ linked : the new science of networks_. ( perseus publishing , cambridge mass . , 2002 ) .s. boccaletti , v. latora , y. moreno , m. chavez and d .- u .hwang , phys .rep . * 424 * , 175 ( 2006 ) .r. tibshirani , j. roy .* b 58 * , 267 ( 1996 ) . k. hlavackova - schindler , m. palus , m. vejmelka , j. bhattacharya , physics reports * 441 * , 1 ( 2007 ) . y. chen , g. rangarajan , j. feng , and m. ding , phys . lett . * a 324 * , 26 ( 2004 ) . m. dhamala , g. rangarajan , m. ding , phys.rev.lett . * 100 * , 18701 ( 2008 ) .d. marinazzo , m. pellicoro , s. stramaglia , phys .lett . * 100 * , 144103 ( 2008 ) .j. shawe - taylor and n. cristianini , _ kernel methods for pattern analysis_. ( cambridge university press , london , 2004 ) . after a linear transformation, we may assume all the time series to have zero mean and unit variance .m. stone , j. roy .* b 36 * , 111 ( 1974 ) .n. ancona and s. stramaglia , neural comput . * 18 * , 749 ( 2006 ) .l. angelini , m. pellicoro and s. stramaglia , phys . lett . * a 373 * , 2467 ( 2009 ) .d. smirnov and b. bezruchko , phys .e 79 * , 046204 ( 2009 ) h. kantz , t. schreiber , _ nonlinear time series analysis _( cambridge university press , cambridge , 1997 ) .kadanoff , _ statistical physics _ ( world scientific , singapore , 2000 ) .a. papoulis , _ probability , random variables and stochastic processes mcgraw hill , new york , 1965_. s.j .kim , k. koh , m. lustig , s. boyd , d. gorinevsky , ieee j. sel .top . in sig. process .* 1 * , 606 ( 2007 ) .m.h . zweig and g. campbell ,clin . chem .* 39 * , 561 ( 1993 ). l. angelini et al . , phys .e 81 * , 037201 ( 2010 ) . 
( figure captions . ) fig . f1 : the points , corresponding to 15 realizations of the couplings of six - spin ising systems , are displayed ( the two quantities are estimated over samples of length ) . the curves are the quadratic expansions at weak coupling : ( dashed - dotted line ) and ( continuous line ) . ( bottom ) the same points are displayed in the plane , showing that at weak coupling . fig . f2 : the linear granger causality ( right ) and the transfer entropy ( left ) are depicted for each link , as a function of its coupling , for ( top ) , ( middle ) and ( bottom ) samples . the continuous curves represent the _ true _ values ( obtained by fitting the points in fig . 1 ) . fig . f3 : for the true value of the transfer entropy and its estimate based on samples , averaged over 1000 runs of the ising system , we define . the quantity e , thus obtained , is here plotted versus n ( empty circles ) . a similar quantity e , concerning granger causality , is also plotted ( stars ) . fig . f4 : a system with uniform couplings is considered . as a function of , the linear granger causality ( stars ) and the transfer entropy ( empty circles ) are depicted . both quantities are the same for all links , due to symmetry . the two curves are the relations between the coupling and transfer entropy ( and between coupling and causality ) which hold for the asymmetric ising model ( obtained by fitting the points of fig . 1 ) . fig . f6 : roc curves for , 100 ( dashed line ) , 250 ( dotted line ) and 500 ( continuous line ) samples . the stars on these curves represent the points found by ten - fold cross validation . the other three symbols are the performances by granger causality on ( empty diamond ) , ( empty circle ) , ( empty square ) . fig . f7 : the causalities and are depicted as a function of for the three - spin system described in the text . causalities are estimated using the linear kernel ( top ) and the polynomial kernel ( bottom ) . | the inference of the couplings of an ising model with given means and correlations is called _ inverse ising problem_. this approach has received a lot of attention as a tool to analyze neural data . we show that autoregressive methods may be used to learn the couplings of an ising model , also in the case of asymmetric connections and for multi - spin interactions . we find that , for each link , the linear granger causality is two times the corresponding transfer entropy ( i.e. the information flow on that link ) in the weak coupling limit . for sparse connections and a low number of samples , the regularized least squares method is used to detect the interacting pairs of spins . nonlinear granger causality is related to multispin interactions . |
for any , fix any ^s ] and for each .then , the distribution of conditioned on is dirichlet with parameters .let be distributed , ( \alpha^\top { \bf 1})^{-1}\right) ] 3 . for any , .4 . for and = 0 ] .it also establishes that for _ any _ convex loss that is less `` spread out '' than in the sense ) ] \le { \mathds{e}}[l(y-{\mathds{e}}[y])] ] and there a crossing point such that : we write for this relationship .single crossing dominance is actually a stronger condition than ssd , as we show in proposition [ prop : relate scd ssd ] .in general the reverse implication is not true , as we demonstrate in example [ ex : ssd not sc ] . [prop : relate scd ssd ] let and be real - valued random variables with finite expectation then suppose with single crossing point .let .by we know for all and that is decreasing for all .now we consider the limit - { \mathds{e}}[y ] \ge 0 ] and independent of . by proposition[ prop : ssd equiv ] , however is not single crossing dominant for .we display the cdfs of these variables in figure [ fig : double cross ] , they are not single crossing . in particularthe ordering of switches at least three points .the main technical result in this paper comes in theorem [ thm : dir_norm ] , which we prove in section [ sec : proof thm ] .[ thm : dir_norm ] let for ^s ] and . at first glance , theorem [ thm : dir_norm ] may seem quite arcane , it provides an ordering between two paired families of gaussian and dirichlet distributions in terms of ssd .the reason this result is so useful is that , given matched prior distributions , the resultant posteriors for the gaussian and dirichlet models will remain ordered in this way _ for any _ observation data .the condition is technical but does not pose significant difficulties so long as at the posterior is updated with at least two observations .we present this result as corollary [ cor : dir_norm ] .[ cor : dir_norm ] let for ^s ] and = \frac{k_2}{k_1 + k_2 } ( \gamma_1+\gamma_2). 
] and so .let , with independent , and let , so that let and so that define independent random variables and so that take and to be independent , and couple these variables with so that note that and .let and , so that and couple these variables so that and we can now say & = & { \mathds{e}}[(1- \tilde{p } ) v_1 + \tilde{p } v_d | x ] = { \mathds{e}}\left[\frac{v_1 \overline{\gamma}^0}{\overline{\gamma } } + \frac{v_d \overline{\gamma}^1}{\overline{\gamma } } \big| x\right ] \\ & = & { \mathds{e}}\left[{\mathds{e}}\left[\frac{v_1 \overline{\gamma}^0 + v_d \overline{\gamma}^1}{\overline{\gamma } } \big| \gamma , x\right ] \big| x \right ] = { \mathds{e}}\left[\frac{v_1 { \mathds{e}}[\overline{\gamma}^0 | \gamma ] + v_d { \mathds{e}}[\overline{\gamma}^1 | \gamma]}{\overline{\gamma } } \big| x \right ] \\ & = & { \mathds{e}}\left[\frac{v_1 \sum_{i=1}^d { \mathds{e}}[\gamma^0_i | \gamma_i ] + v_d \sum_{i=1}^dxp[\gamma^1_i | \gamma_i]}{\overline{\gamma } } \big| x \right ] \\ & \stackrel{\text{(a)}}{= } & { \mathds{e}}\left[\frac{v_1 \sum_{i=1}^d \gamma_i \alpha_i^0 / \alpha_i + v_d \sum_{i=1}^d \gamma_i \alpha_i^1/\alpha_i}{\overline{\gamma } } \big| x \right ] \\ & = & { \mathds{e}}\left[\frac{v_1 \sum_{i=1}^d \gamma_i ( v_i - v_1 ) + v_d \sum_{i=1}^d \gamma_i ( v_d - v_i)}{\overline{\gamma } ( v_d - v_1 ) } \big| x \right ] \\ & = & { \mathds{e}}\left[\frac{\sum_{i=1}^d \gamma_i v_i}{\overline{\gamma } } \big| x \right ] = { \mathds{e}}\left[\sum_{i=1}^d p_i v_i \big| x \right ] = x,\end{aligned}\ ] ] where ( a ) follows from lemma [ le : gamma ] .therefore , is a mean - preserving spread of and so by proposition [ prop : ssd equiv ] , .we complete the proof of theorem [ thm : dir_norm ] by showing that this auxilliary beta random variable defined in lemma [ lem : dir beta ] is second order stochastic dominant for the gaussian posterior .[ lem : gauss beta ] let for any and .then , ( and by proposition [ prop : relate scd ssd ] this implies ) whenever .we want to prove that the cdfs cross at most once on . by the mean value theorem ,it is sufficient to prove that the pdfs cross at most twice on the same interval .we lament that the proof as it stands is so laborious , but our attempts at a more elegant solution has so far been unsuccessful .the remainder of this appendix is devoted to proving this `` double - crossing '' property via manipulation of the pdfs for different values of .we write for the density of the normal and for the density of the beta respectively . we know that at the boundary and where the represents the left and right limits respectively .as these densities are positive over the interval , we can consider the log pdfs the function is injective and increasing ; if we can show that has at most two solutions on the interval we will be done .instead we will attempt to prove an even stronger condition , that has at most one solution in the interval .this sufficient condition may be easier to deal with since we can ignore the distributional normalizing constants . finally we consider an even stronger condition , if has no solution then must be monotone over the region and so it can have at most one root . 
with these definitions now let us define : our goal now is to show that does not have any solutions for ] then .we define the gap of the beta over the maximum of the normal log likelihood , if we can show the gap is positive then it must mean there are no crossings over the region ] .consider any ; we know from the ordering of the tails of the cdf that if there is more than one root in this segment then there must be at least three crossings .if there are three crossings , then the second derivative of their difference must have at least one root on this region .however we know that is convex , so if we can show that this can not be possible .we use a similar argument for $ ] and complete this proof via laborious calculus .we remind the reader of the definition in , .for ease of notation we will write .we note that : and we solve for .this means that and clearly .now , if we can show that , for all possible values of in this region , our proof will be complete . to make the dependence on more clear we write below = 1mu = 1mu = 1mu will demonstrate that for all of the values in our region . similarly , therefore , for any this means that .therefore this expression is maximized over for .we can evaluate this expression explicitly : this provides a monotonicity result which states that both are minimized at at the largest possible for any given over our region .we will now write . if we can show that for all and we will be done with our proof .we will perform a similar argument to show that is monotone increasing for all . note that the function is increasing in for .we can conservatively bound from below noting in our region . we can use calculus to say that : this expression is monotone decreasing in and with a limit .therefore for all . we can explicitly evaluate this numerically and so we are done . the final piece of this proof involves a similar argument for .= 1mu = 1mu = 1mu once again we can see that is monotone increasing we complete the argument by noting .this concludes our proof of the pdf double crossing in region .the results of sections [ sec : r1 ] , [ sec : r2 ] and [ sec : r3 ] together prove lemma [ lem : gauss beta ] . by proposition [ prop : ssd equiv ] ,lemmas [ lem : dir beta ] and [ lem : gauss beta ] together complete the proof of theorem [ thm : dir_norm ] .this work was generously supported by a research grant from boeing , a marketing research award from adobe , and stanford graduate fellowships , courtesy of paccar . | we consider the problem of sequential learning from categorical observations bounded in $ ] . we establish an ordering between the dirichlet posterior over categorical outcomes and a gaussian posterior under observations with noise . we establish that , conditioned upon identical data with at least two observations , the posterior mean of the categorical distribution will always second - order stochastically dominate the posterior mean of the gaussian distribution . these results provide a useful tool for the analysis of sequential learning under categorical outcomes . |
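although the argument above is entirely analytic , the second order stochastic dominance relation it establishes is easy to probe numerically . the sketch below implements the standard expected - shortfall characterization of ssd ( equivalent to comparing integrated cdfs ) : x dominates y exactly when E[(t - X)^+] <= E[(t - Y)^+] for every threshold t . the particular beta and gaussian parameters chosen here are arbitrary stand - ins and do not reproduce the matched priors and noise model required by the theorem .

import numpy as np

def empirically_ssd(x_samples, y_samples, n_grid=200, tol=1e-9):
    """return True if x appears to second-order stochastically dominate y,
    using the characterization  E[(t - X)^+] <= E[(t - Y)^+]  for all t."""
    lo = min(x_samples.min(), y_samples.min())
    hi = max(x_samples.max(), y_samples.max())
    for t in np.linspace(lo, hi, n_grid):
        if np.mean(np.maximum(t - x_samples, 0.0)) > np.mean(np.maximum(t - y_samples, 0.0)) + tol:
            return False
    return True

rng = np.random.default_rng(1)
# illustrative stand-ins for the two posterior-mean distributions (arbitrary parameters)
x = rng.beta(3.0, 3.0, size=200_000)               # a beta / dirichlet-style posterior mean
y = 0.5 + 0.25 * rng.standard_normal(200_000)      # a gaussian-model posterior mean
print(empirically_ssd(x, y))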
the usual task in quantum tomography is to determine an unknown quantum state from measurement outcome statistics .there are two obvious ways to vary this setting .first , our task need not be the determination of any possible input state but only some states belonging to a restricted subset of all states .second , we typically have some prior information , or premise , which tells us that the input state belongs to some subset of states .it is clear that with this additional information and restricted task , this problem should be easier than the problem of determing an unknown quantum state without any prior information . as an example , consider the usual optical homodyne tomography of a single mode electromagnetic field .if the state is completely unknown , then , in principle , one needs to measure infinitely many rotated field quadratures . however , as soon as one knows that the state can be represented as a finite matrix in the photon number basis , then already finitely many quadratures are enough , the exact number depending on the size of the matrix .it should be emphasized that the premise is not merely a mathematical assumption but carries also physical meaning .indeed , it simply means that the probability of detecting energies above a certain bound is zero . since one might expect that also in general a given task and premise leads to the requirement of less or worse resources ,an immediate question is the characterization of these resources .the task and the premise can be described as subsets of the set of all states , hence this modified setting is specified by two subsets ( task ) and ( premise ) of all states .clearly , we must have to make the formulation meaningful .smaller means less demanding determination task and smaller means better prior knowledge . in this workwe study the previously explained question from the point of view of quantum observables , mathematically described as positive operator valued measures ( povms ) . a quantum observableis called _ informationally complete _ if the measurement outcome probabilities uniquely determine each state , and this clearly relates to the usual task in quantum tomography . the previously described generalized setting leads to the concept of _ -informational completeness_. we present a general formulation of this property , and then concentrate on some interesting special cases .our main results are related to situations when the premise tells that the rank of the input state is bounded by some number , and the task is then to determine all states with rank or less .we show , in particular , that if there is no premise and the task is to determine all states with rank less than or equal to , where is the dimension of the hilbert space of the quantum system , then we actually need an informationally complete observable .perhaps the most important informationally complete observables are covariant phase space observables .these are widely used in both finite and infinite dimensional quantum mechanics .however , not all covariant phase space observables are informationally complete , and for instance noise can easily destroy this desired property .we will show that even if a covariant phase space observable fails to be informationally complete , it can be -informationally complete for some meaningful sets and ._ notation ._ we denote by the set of natural numbers ( containing ) and .we use the conventions for all nonzero . for every ,we denote by the largest integer not greater than , and we define . 
if not specified , is a finite dimensional or separable infinite dimensional complex hilbert space .we denote .we denote by the complex banach space of bounded linear operators on endowed with the uniform norm , and by the real banach subspace of selfadjoint operators .if is a complex linear space such that whenever , we denote by the selfadjoint part or , and regard it as a real linear space . then ; in particular , .we write for the complex banach space of the trace class operators on endowed with the trace class norm , and . clearly , if , then as linear spaces .we denote by } = 1\} ] for all .when is a finite or denumerable set , we will take , the set of all subsets of , and denote and for all for short .a _ weak*-closed real operator system _ on is a weak*-closed real linear subspace such that .( note that is a real operator system if and only if is an operator system in the standard sense of operator theory , and then we have ) .if is a weak*-closed real operator system on , then its _ annihilator _ is the following closed subspace of }=0 \\forall a\in{\mathcal{r}}\ } \ , .\ ] ] since , we have } = 0 ] for all .then for some observable .a. if , then this is proved in ( * ? ? ?* prop . 1 ) ( note that the proof is not affected if ) . for , we use the following slight modification of the proof of ( * ? ? ?* theorem 2.2 ) .we define the set then , is weak*-compact and metrizable , being a weak*-closed subset of the unit ball of . in particular , it is separable .let be a weak*-dense subset of , and define an observable by the series converges in norm , thus also in the weak*-topology .since is weak*-closed , we have .each can be written in the form since , we conclude that . by this fact and weak*-density of the set in , it follows that .b. let } = 0 \ \forall t\in\mathcal{x } \} ] .[ prop : prem ] let . for an observable ,the following conditions are equivalent : a. is -informationally complete . b. . c. .let and . then , using , } = { { \rm tr}\left[\varrho_2 { \mathsf{m}}(x)\right ] } \\ & \leftrightarrow \quad \forall x\in{\mathcal{a } } : { { \rm tr}\left[(\varrho_1-\varrho_2 ) { \mathsf{m}}(x)\right ] } = 0 \\ & \leftrightarrow \quad \varrho_1-\varrho_2 \in { \mathcal{r}}({\mathsf{m}})^\perp \ , .\end{aligned}\ ] ] thus, is -informationally complete if and only if .since is a linear space , the equivalence of ( ii ) and ( iii ) follows .a well known mathematical characterization of informationally complete observables is that . as an application of prop .[ prop : prem ] we give a short derivation of this fact . [ prop : well - known ] an observable is informationally complete if and only if .each nonzero element decomposes as , where are the positive and negative parts of . since }= 0 ] , and as otherwise . setting , we have .thus , , and the claim follows by prop .[ prop : prem ] .the rank of an operator is the dimension of its range : . for each that , we denote by the set of all states satisfying . clearly , and .moreover , if , we denote by the set of states with finite rank . in this sectionwe investigate observables that are ( ,)-informationally complete when and for some , with , or and , or . by the spectral theorem each has a spectral decomposition ; there exists an orthonormal basis of such that where and } ] .the following lemma will be used later several times .[ prop : pos - neg ] let be a nonzero operator with }=0 ] , it must have both strictly positive and strictly negative eigenvalues , i.e. 
, .the third inequality in now follows from , and the remaining inequalities are clear .b. let be the spectral decomposition of .we denote by and the positive and negative parts of respectively .we define and .since }=0 ] for all with .b. }=0 \mbox { and } { { \rm rank}\,}_\uparrow(t ) < \infty\} ] .a. if , then } = 0,\ , { { \rm rank}\,}_+ ( t ) \leq m \mbox { and } { { \rm rank}\,}_- ( t ) \leq n\}\ ] ] as an immediate consequence of lemma [ prop : pos - neg]b , c . since the claim follows .b. we have , hence the claim follows from ( a ) . c. similarly , , which by ( a ) and triviality of the condition implies the claim .the following theorem characterizes -informational completeness and -informational completeness for all values of and in both cases and .[ prop : structure ] let be an observable and with . a. the following conditions are equivalent : a. is -informationally complete .b. every nonzero has . b. if with , then the following conditions are equivalent : a. is -informationally complete .b. every nonzero has or .these are all immediate consequences of prop .[ prop : prem ] and lemma [ prop : lemma2]a . for ( a ) , note that , if , then the condition in lemma [ prop : lemma2]a is trivial . if , then we can also consider -informational completeness and -informational completeness .the following theorem characterizes these two properties .[ prop : structure - fin ] let be an observable and . a. the following conditions are equivalent : a. is -informationally complete .b. every nonzero has .b. the following conditions are equivalent : a. is -informationally complete .b. every nonzero has .these are all immediate consequences of prop . [prop : prem ] and lemma [ prop : lemma2]b , c . with certain choices of and the conditions in theorem [ prop : structure ] become simpler . in the following we list some special cases . since every nonzero with }=0 ] , we conclude from prop .[ prop : exm ] that there exists an observable such that . as and for every , it follows from theorem [ prop : structure]b that is -informationally complete , but not -informationally complete by theorem [ prop : structure]b ( if ) or theorem [ prop : structure]a ( if ) .( _ dimension .)[ex:3 ] let . using prop .[ prop : equivalence - p ] and prop .[ prop : equivalence - t ] we see that the property of -informational completeness is equivalent to informational completeness for five choices of : , , , and .the remaining property , namely -informational completeness , is not equivalent to -informational completeness ( and hence not to any other ) by prop .[ prop : inequivalence - p ] .[ prop : inequivalence - t ] ( inequivalence of different tasks . )let .let and such that and .the following properties are not equivalent : a. -informational completeness .b. -informational completeness .fix an orthonormal basis and define an operator by it follows from that , hence and , as we are also assuming , this definition makes sense . since and}=0 ] , and . by prop .[ prop : exm ] , there exists two observables and such that . as in the proofs of props .[ prop : inequivalence - p ] and [ prop : inequivalence - t ] , we have and for all nonzero , and for all nonzero .thus , by theorem [ prop : structure - fin ] the observable is -informationally complete but not -informationally complete .similarly , by theorem [ prop : structure - fin ] and cor . 
[ prop : well - known ] the observable is -informationally complete but not informationally complete .the content of prop .[ prop : sf ] is , essentially , that knowing that the unknown state has finite rank is useful information for state determination .in this section we assume that and . by a _minimal -informationally complete observable _ we mean a -informationally complete observable with minimal number of outcomes .more precisely , an observable with an outcome space is minimal -informationally complete if any other -informationally complete observable with an outcome space satisfies . since is finite dimensional , the real vector spaces and are the same and we then see that a -informationally complete observable with outcomes exists if and only if there is a -dimensional subspace satisfying 1 . } = 0 ] and ( cor .[ prop : spsp ] ) or ( theorem [ prop : structure]a ) , respectively . to find a good upper bound for the minimal number of outcomes , we need to find as large as possible .a useful method for constructing these kind of subspaces was presented in . using this method the following upper bounds ( a ) and ( b ) were proved in and , respectively .[ prop : upper ] let . there exists a. -informationally complete observable with outcomes .b. -informationally complete observable with outcomes . in the case of -informationally complete observables, it is possible to obtain lower bounds from the known non - embedding results for grassmannian manifolds . in some cases the obtained lower bounds agree or are very close with the upper bounds written in prop .[ prop : upper]a .in particular , it was proved in that in the case of -informational completeness , the minimal number of outcomes is not a linear function of but differs from the upper bound at most . also a slightly better upper bound was derived , and these results give the exact answer for many .for instance , for the dimensions between and , the results of give the exact minimal number in cases .in the case of -informational completeness , the upper bound for the minimal number of outcomes is . obviously , the known lower bound for minimal -informational completeness is also a lower bound for minimal -informational completeness .we are not aware of any better lower bound . in the following subsectionwe prove that the minimal number of outcomes for is .this means that is generally just an upper bound for the minimal number of outcomes , not the exact answer .our result for also implies that , as in the case of -informational completeness , the minimal number is not a linear function of .in this subsection we concentrate on minimal observables in dimension . a minimal informationally complete observablehas outcomes .further , it was shown in that a minimal -informationally complete observable has outcomes . in prop . [prop : d=4/2 ] below we give the minimal numbers for the remaining three inequivalent properties ( see example [ ex:4 ] ) .these results are summarized in fig .[ fig : equivalence-4 ] . before deriving the minimal numbers we characterize these properties in convenient forms . .each represents the property of -informational completeness , and equivalent properties are in the same box .as explained in example [ ex:4 ] , there are five inequivalent properties .the big numbers give the minimal number of outcomes that an -informationally complete observable must have . ][ prop : d=4/1 ] let .an observable is a. -informationally complete if and only if every nonzero satisfies }>0 ] . c. 
-informationally complete if and only if every nonzero satisfies }\neq 0 ] is the product of eigenvalues ,we conclude that every nonzero satisfies a. }>0 ] if and only if every nonzero satisfies . c. }\neq 0 ] .we need to prove that the sign of } ] and }>0 ] for some by the intermediate value theorem .[ prop : d=4/2 ] let .a. a minimal -informationally complete observable has outcomes .b. a minimal -informationally complete observable has outcomes . c. a minimal -informationally complete observable has outcomes .\(a ) for all , denote by the complex linear space of complex matrices , and by the real space of selfadjoint elements in . by prop .[ prop : d=4/1 ] we need to look for subspaces such that 1 . } = 0 ] for every nonzero . indeed , if has maximal dimension among all subspaces of satisfying these two conditions , then any observable with and outcomes ( which exists by prop .[ prop : exm ] and ) is minimal -informationally complete .it was shown in that the maximal dimension of a real subspace satisfying ( 2 ) is .we show that if the additional requirement ( 1 ) is added , this maximal dimension remains the same , and thus the minimal number of outcomes is .to do this , we introduce four matrices and define the following linear map note that next , we define five selfadjoint matrices and the following linear map clearly , } = 0 ] if and only if and .thus , has the required properties .\(b ) suppose is a real subspace of such that }<0 ] .thus , the counter assumption is false .we still need to prove that there exists a -dimensional subspace of such that }<0 ] for all nonzero .fix an orthonormal basis of , and set then , is a -dimensional subspace of such that }<0 ] for all nonzero .thus , there exists an observable with outcomes such that , and such observable is minimal -informationally complete by props .[ prop : exm ] and [ prop : d=4/1 ] .\(c ) this follows from prop .[ prop : d=4/1 ] combined with items ( a ) and ( b ) above .we now turn our attention to covariant phase space observables . after introducing these observables in the general case of a phase space defined by an abelian group, we treat the finite and infinite dimensional cases separately . as an application we study the effect of noise on the observable s ability to perform the required state determination tasks .let be a locally compact and second countable abelian group with the dual group .the composition laws of and will be denoted by addition , and the canonical pairing of and will be denoted by .we fix haar measures and on and , respectively . if is any bounded measure on , the _ symplectic fourier transform _ of is the bounded continuous function on given by this definition clearly extends to any integrable function : if , we define for the rest of this section , we will assume that the haar measures and are normalized so that whenever also .let .we define the following two unitary representations and of and on ( y ) = \psi(y - x ) \ , , \qquad [ v(\xi ) \psi ] ( y ) = { \left\langle\,\xi , y\,\right\rangle } \psi(y ) \ , .\ ] ] note that so that the following _ weyl map_ is a projective square integrable representation of the direct product group on ( for square integrability of , see e.g. in the case , and ( * ? ? ?* theorem 6.2.1 ) and for the general case ) .the weyl map has the useful properties and for any set , we denote if is a symmetric set ,i.e. 
, then from it follows that , and we can thus consider the selfadjoint part of .if in addition , then is a weak * closed real operator system on .let be the borel -algebra of the locally compact and second countable space .for any state , a _covariant phase space observable _ with the _ fiducial state _ is the following observable on ( see or for the case of , and ( * ? ? ?2.1 , p. 166 ) and ( * ? ? ?* theorem 3.4.2 ) for the general form of covariant phase space observables ) .the integral in the definition of is understood in the weak*-sense ,i.e. , for all , } = \int_x { { \rm tr}\left[sw(x,\xi)\tau w(x,\xi)^*\right ] } { \,{\rm d}}x { \,{\rm d}}\xi \qquad \forall x\in{\mathcal{b}({\mathcal{g}}\times{\widehat{\mathcal{g } } } ) } \ , .\ ] ] more generally , the map } ] , then , since } ] for all . in this case , using and , we obtain that the symplectic fourier transform } { \,{\rm d}}y { \,{\rm d}}\zeta = \widehat{s}(x,\xi)\overline{\widehat{\tau}(x,\xi ) } \equiv 0\ ] ] so that , but since is closed , this implies that . on the contrary , if is such that , then by the injectivity of the symplectic fourier transform we have }=0 ] .finally , define and , so that at least one of , is nonzero and for all .hence , and , as , the proof is complete . for any nonzero ,we denote by the cyclic group with elements , and let . then , , the pairing of and being . moreover , the haar measures of and are just the respective counting measures .let be a -dimensional hilbert space , and choose an orthonormal basis of .the weyl map is then given by since } = d \delta_{x , y } \delta_{\xi,\zeta} ] .[ prop : covariant_nassela ] now reduces to so that the real operator system is completely characterized by the zero set .therefore the essential question is the characterization of possible zero sets .this is done in the next proposition .[ prop : finite_zero_set ] for any state , we have and .conversely , if is such that and , then there exists a state such that .we have already observed that and }>0 ] by ( * ? ? ?5 ) . in particular , . in order to complete the proof we only need to show that this is not possibleby and linear independence of the set , we have .but since is symmetric , and the dimension is odd , must contain an even number of points , hence is even .as we have noted before , by increasing the size of the zero set the observable becomes less capable of performing state determination tasks .the next proposition shows that already in the simplest case of an informationally _ incomplete _ observable , namely , one having a zero set consisting of a single point , certain tasks become impossible .[ prop : z=1 ] suppose .the condition can hold for some fiducial state only if is even .if is a fiducial state with , then the observable is a. -informationally complete for all .b. not -informationally complete for any .let with . since is symmetric , we have , and this implies that is even and or . in particular , .we fix a square root of and denote it by . then the operator is selfadjoint ( by ) and generates ( by ) . since and } = 0 ] implies . in order to prove the converse statement ,suppose first that .choosing and defining , it is easy to check that . now let be a closed nonempty set such that and .by , there exists a probability measure ] .then a similar argument as before shows that , hence is not -informationally complete by prop .[ prop : prem ] . for the second example, we refer to ( * ? ? ?* prop . 
As we have noted before, by increasing the size of the zero set the observable becomes less capable of performing state determination tasks. The next proposition shows that already in the simplest case of an informationally _incomplete_ observable, namely one having a zero set consisting of a single point, certain tasks become impossible.

[prop:z=1] Suppose . The condition can hold for some fiducial state only if is even. If is a fiducial state with , then the observable is
a. -informationally complete for all .
b. not -informationally complete for any .

Let with . Since is symmetric, we have , and this implies that is even and or . In particular, . We fix a square root of and denote it by . Then the operator is selfadjoint (by ) and generates (by ). Since and = 0, implies . In order to prove the converse statement, suppose first that . Choosing and defining , it is easy to check that . Now let be a closed nonempty set such that and . By , there exists a probability measure . Then a similar argument as before shows that , hence is not -informationally complete by Prop. [prop:prem]. For the second example, we refer to [?, Prop. 9], where the authors constructed a state such that is nowhere dense but of infinite Lebesgue measure. In other words, is informationally complete but neither nor is compact.

In any realistic measurement one needs to take into account the effect of noise originating from various imperfections in the measurement setup. This typically results in a smearing of the measurement outcome distribution, which appears in the form of a convolution: if is the probability distribution corresponding to the ideal measurement of , the actually measured distribution is for some probability measure modelling the noise. The convolution does not affect the covariance properties of the observable, and hence the general structure of the observable remains the same. That is, the actually measured observable is a covariant phase space observable with the smeared fiducial state . The inverse Weyl transform of now gives . In particular, we have , where we have defined analogously.

Consider next the special case where , so that is informationally complete. For instance, one may think of the measurement of the Husimi Q-function of a state, in which case and is the vacuum, i.e., the ground state of the harmonic oscillator. Now the overall observable's ability to perform any state determination task is completely determined by the support of . In the specific example with the Q-function we immediately see, e.g., that any Gaussian noise has no effect on the success of the task at hand. However, from Prop. [prop:infinite_dimension] we know that any with but with compact results in an observable which is not informationally complete, but which still allows one to determine any finite-rank state under the premise that the rank is bounded by some arbitrarily high finite number. Finally, if is compact, then even the simplest task of determining pure states among pure states fails.
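The remark about Gaussian noise can also be checked numerically. In the sketch below, our illustration rather than the paper's, we use the reading suggested above that smearing multiplies the transform of the fiducial state by the characteristic function of the noise, so new zeros can only appear where that characteristic function vanishes; a Gaussian characteristic function is nowhere zero, whereas, for contrast, the characteristic function of uniform noise does vanish. The grid, the noise widths, and the NumPy implementation are assumptions made for the demonstration.

import numpy as np

k = np.linspace(-40.0, 40.0, 4001)     # frequency grid; range and resolution are arbitrary

sigma = 0.3
phi_gauss = np.exp(-0.5 * (sigma * k) ** 2)   # characteristic function of N(0, sigma^2)
phi_unif = np.sinc(k / np.pi)                  # characteristic function of uniform noise on [-1, 1]

print("min |phi| for Gaussian noise:", np.abs(phi_gauss).min())   # strictly positive everywhere
print("min |phi| for uniform  noise:", np.abs(phi_unif).min())    # vanishes near k = n*pi

On this picture, Gaussian smearing leaves the set of vanishing points unchanged, while noise whose characteristic function has zeros or compactly supported transform can enlarge it, in line with the compactness statements quoted above.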
T.H. acknowledges financial support from the Academy of Finland (grant no. ). J.S. and A.T. acknowledge financial support of the Italian Ministry of Education, University and Research (FIRB project RBFR10COAQ).

D. T. Smithey, M. Beck, M. G. Raymer, and A. Faridani. Measurement of the Wigner distribution and the density matrix of a light mode using optical homodyne tomography: application to squeezed states and the vacuum. Phys. Rev. Lett. 70:1244-1247, 1993.

H. Rubin and T. M. Sellke. Zeroes of infinitely differentiable characteristic functions. In _A Festschrift for Herman Rubin_, volume 45 of _IMS Lecture Notes Monogr._, pages 164-170. Inst. Math. Statist., Beachwood, OH, 2004.

| The purpose of quantum tomography is to determine an unknown quantum state from measurement outcome statistics. There are two obvious ways to generalize this setting. First, our task need not be the determination of any possible input state but only of some input states, for instance pure states. Second, we may have some prior information, or premise, which guarantees that the input state belongs to some subset of states, for instance the set of states with rank less than half of the dimension of the Hilbert space. We investigate state determination under these two supplemental features, concentrating on the cases where the task and the premise are statements about the rank of the unknown state. We characterize the structure of quantum observables (POVMs) that are capable of fulfilling these types of determination tasks. After the general treatment we focus on the class of covariant phase space observables, thus providing physically relevant examples of observables both capable and incapable of performing these tasks. In this context, the effect of noise is discussed. |