Since the introduction of the Burgers equation by Bateman, many authors have applied a variety of numerical methods to it. Various forms of the finite element method have been constructed to compute the Burgers equation numerically [ad, ad1, nu2, sku, ad5, ofem3, bspgal, qq, zi], and spline collocation procedures have also been presented [rt, er, ras, cbspcol, irk, ram, qbspcol2, qbspcol, dag, quincol]. Spline functions are a natural companion to numerical methods for differential equations: the resulting matrix system is banded and can be solved easily, approximate solutions accurate to one degree less than the degree of the splines can be set up, and highly continuous, differentiable approximate solutions can be produced for differential equations of higher order. Numerical procedures for nonlinear evolution equations based on the B-spline collocation method have been applied increasingly in various fields of science. However, applications of the cubic trigonometric B-spline (CTB) collocation method to nonlinear evolution problems are few in comparison with collocation based on polynomial B-spline functions. Numerical methods for a class of ordinary differential equations using quadratic and cubic CTB were given by A. Nikolis [g1, g2]. Linear two-point boundary value problems of order two have been solved using a cubic CTB interpolation method, and another numerical method employing the cubic CTB was set up to solve a class of linear two-point singular boundary value problems. Very recently, a collocation finite difference scheme based on the cubic CTB was developed for the numerical solution of a one-dimensional hyperbolic (wave) equation with a non-local conservation condition, and a new two-time-level implicit technique based on the cubic CTB was proposed for the approximate solution of the nonclassical diffusion problem with a nonlocal boundary condition. Several types of B-spline finite element approaches have been established for solving the Burgers equation but, as far as we know, none with the CTB. In this paper, the CTB functions are used to establish a collocation method, and the suggested numerical method is then applied to find numerical solutions of the Burgers equation. The equation arises in many branches of science, and the development of numerical methods for it has attracted much attention because of its steep-front solutions: low viscosity in the Burgers equation causes the appearance of steep fronts and shock-wave solutions, which makes the equation difficult to solve numerically, and many authors have therefore developed various numerical schemes that compute the equation effectively for small values of the viscosity. We consider the Burgers equation
$$u_t + u u_x = \nu u_{xx}, \qquad a \leq x \leq b,\ t > 0,$$
with appropriate initial conditions and boundary conditions, where the subscripts $t$ and $x$ denote differentiation, $\nu = 1/R > 0$ and $R$ is the Reynolds number characterizing the strength of the viscosity, $a$ and $b$ are constants, $u(x,t)$ is a sufficiently differentiable unknown function, and the initial data is a bounded function.
The implementation of the proposed scheme is given in the second section, where two classical test problems are dealt with to show the robustness of the scheme. Consider a uniform partition $a = x_0 < x_1 < \cdots < x_N = b$ of the problem domain with mesh size $h = (b-a)/N$, and let $TB_i(x)$ denote the twice continuously differentiable piecewise trigonometric cubic B-splines on the interval $[a,b]$; each $TB_i$ is supported on four subintervals, and it and its first and second derivatives vanish at all knots outside that support. The functions $TB_{-1}, TB_0, \ldots, TB_{N+1}$ form a basis for the trigonometric spline space over the partition. An approximate solution $U$ to the unknown $u$ is written in terms of the expansion of the CTB as
$$U(x,t) = \sum_{i=-1}^{N+1} \delta_i(t)\, TB_i(x), \tag{r4}$$
where the $\delta_i$ are time-dependent parameters to be determined from the collocation points together with the boundary and initial conditions. The nodal values of $U$ and of its first and second derivatives at the knots follow from ([r4]). Approximating the time derivative by the standard forward finite difference formula and the space derivatives by the Crank-Nicolson scheme gives the time-integrated Burgers equation
$$\frac{U^{n+1}-U^{n}}{\Delta t} + \frac{(UU_x)^{n+1}+(UU_x)^{n}}{2} = \nu\, \frac{U_{xx}^{n+1}+U_{xx}^{n}}{2}, \tag{r6}$$
where $U^{n}$ is the solution at the $n$th time level, $t^{n+1} = t^{n} + \Delta t$, $\Delta t$ is the time step, and superscripts denote the time level. The nonlinear term $(UU_x)^{n+1}$ in ([r6]) is linearized, and the linearized time-integrated Burgers equation takes the form ([r8]). Substituting ([r4]) into ([r8]) and evaluating the resulting equation at the knots leads to the fully discretized equations: a system of $N+1$ linear equations in $N+3$ unknown parameters. The system is made solvable by eliminating the parameters $\delta_{-1}$ and $\delta_{N+1}$ with the help of the boundary conditions, and a variant of the Thomas algorithm is used to solve the resulting system. Initial parameters must be found to start the iteration process. To do so, the initial condition and the boundary values of the derivative of the initial condition give the following equations:

1. $U(x_i, 0) = u(x_i, 0)$ at the knots $x_i$, $i = 0, \ldots, N$;
2. $U_x(a, 0) = u_x(a, 0)$;
3. $U_x(b, 0) = u_x(b, 0)$.

The above yields an $(N+3)\times(N+3)$ matrix system, which can be solved by use of the Thomas algorithm. Once the initial parameters have been obtained from the initial and boundary conditions, the recurrence relation gives the time evolution of the parameter vector, and the time evolution of the approximate solution can be computed via ([r6]).

*(a)* The analytical solution of the Burgers equation with the sine-wave initial condition and the accompanying boundary conditions can be expressed as an infinite series whose coefficients involve the modified Bessel functions $I_n$. This problem describes the decay of a sinusoidal disturbance. The convergence of the series solution ([r11]) is slow for the small values of the viscosity at which numerical solutions of the Burgers equation are sought. Using suitable parameters, graphical solutions at different times are depicted in Figs. 1-4: the amplitude of the solution decays as time passes, as seen clearly in Figs. 1-2, and sharpness develops near the right boundary when smaller viscosities are used. The same behaviour is reported in earlier studies. The results of the proposed numerical method are compared with the cubic B-spline collocation and cubic B-spline Galerkin methods; the Galerkin procedures are seen to produce nearly the same results as the CTB collocation method. Our advantage is that the cost of the CTB procedure is less than that of the Galerkin methods, as shown in Tables 1-3.
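To make the time discretization concrete, the following sketch implements Crank-Nicolson stepping with a lagged-coefficient linearization of the convection term. It uses plain second-order finite differences in space instead of the CTB collocation described above, so it is an illustration of the time-stepping structure only, not the authors' scheme; the resulting system has the same tridiagonal shape that the Thomas algorithm exploits, though a dense solver is used here for brevity.

```python
import numpy as np

# Crank-Nicolson time stepping for u_t + u u_x = nu * u_xx with the
# convection coefficient lagged at the previous time level (an assumed
# linearization, for illustration). Homogeneous Dirichlet boundaries, as
# in the sine-wave test problem.

def burgers_cn(u0, nu, h, dt, n_steps):
    u = u0.copy()
    N = len(u)
    for _ in range(n_steps):
        main = np.ones(N)
        lower = np.zeros(N - 1)
        upper = np.zeros(N - 1)
        rhs = u.copy()
        for i in range(1, N - 1):
            c = u[i] * dt / (4 * h)       # linearized convection coefficient
            d = nu * dt / (2 * h**2)      # diffusion coefficient
            lower[i - 1] = -c - d
            main[i] = 1 + 2 * d
            upper[i] = c - d
            rhs[i] = u[i] - c * (u[i+1] - u[i-1]) \
                     + d * (u[i+1] - 2*u[i] + u[i-1])
        A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
        u = np.linalg.solve(A, rhs)       # tridiagonal; Thomas algorithm applies
        u[0] = u[-1] = 0.0
    return u

# Example: decay of a sinusoidal disturbance on [0, 1].
N, nu, dt = 41, 0.01, 0.001
x = np.linspace(0.0, 1.0, N)
u = burgers_cn(np.sin(np.pi * x), nu, x[1] - x[0], dt, 400)
```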
*(b)* Another well-known solution of the Burgers equation is the travelling wave, in which the arbitrary constants fix the speed and location of the front; the initial condition is obtained from this analytical solution at $t = 0$. This solution represents the propagation of a wave front to the right, and the viscosity determines the sharpness of the front. The program is run for several parameter sets. Solutions at selected space-time points are presented in Table 5 and compared with those obtained in earlier studies using cubic B-spline collocation and quadratic/cubic B-spline Galerkin methods. Solution behaviours are illustrated in Figs. 5-6 at several times: with smaller viscosity a sharper front is formed, and it propagates to the right during the run of the program. A graphical presentation of the absolute errors at a fixed time is drawn in Figs. 7-8.

Figure 6: solutions at different times.

The collocation method with trigonometric B-spline functions has been set up to find solutions of the Burgers equation. We have shown that the method is capable of producing accurate solutions of the Burgers equation, and it can be used as an alternative to methods built on polynomial B-spline functions.

N. N. Abd Hamid, A. Abd. Majid, and A. I. Md. Ismail, Cubic trigonometric B-spline applied to linear two-point boundary value problems of order two, World Academy of Science, Engineering and Technology 70, 798-803, 2010.
I. Dag, A. Canivar and A. Sahin, Taylor-Galerkin and Taylor-collocation methods for the numerical solutions of Burgers' equation using B-splines, Communications in Nonlinear Science and Numerical Simulation, vol. 16, pp. 2696-2708, 2011.
M. Abbas, A. A. Majid, A. I. M. Ismail and A. Rashid, The application of the cubic trigonometric B-spline to the numerical solution of the hyperbolic problems, Applied Mathematics and Computation, vol. 239, pp. 74-88, 2014.
The cubic trigonometric B-spline (CTB) functions are used to set up a collocation method for finding solutions of the Burgers equation. The effect of the CTB in the collocation method is assessed on two test problems. The Burgers equation is fully discretized using the Crank-Nicolson method for the time discretization and the CTB functions for the discretization of the spatial variable. Numerical examples are performed to show the suitability of the method for solving the Burgers equation. *Keywords:* collocation methods, cubic trigonometric B-spline, Burgers equation.
Consider the following stochastic differential equation describing the kinetics of the proportion $x(t)$ of one of two possible forms of certain proteins, where $\lambda$ is the interaction coefficient of the two proteins, $\sigma$ is the amplitude of the random Gaussian perturbation, and $W(t)$ is the standard Brownian motion. The small circle $\circ$ before $dW(t)$ denotes the stochastic integral in the Stratonovich sense. There is no explicit solution to this equation, so numerical computations simulating the propagation of $x(t)$ are needed. One of the best-known numerical methods for solving stochastic differential equations (SDEs) is the Euler-Maruyama method, which is, however, only consistent for SDEs of Ito type. For SDEs of Stratonovich type, the consistent method is the midpoint rule, which, when applied to (1.1) with drift $f$ and diffusion $g$, takes the form
$$x_{n+1} = x_n + f\!\Big(\frac{x_n + x_{n+1}}{2}\Big)\, h + g\!\Big(\frac{x_n + x_{n+1}}{2}\Big)\, \Delta W_n,$$
where $h$ is the uniform time step size and the increments $\Delta W_n$ obey the Gaussian distribution $N(0,h)$ and are independent for different $n$. The mean-square convergence of this method can be proved. The deterministic midpoint rule is an A-stable method suitable for dealing with stiff equations. For example, for the deterministic version (1.3) of the equation (1.1), i.e., the equation (1.1) with $\sigma = 0$, $x = 0$ is an asymptotically stable solution, and the linearized equation of (1.3) at $x = 0$ can be very stiff when $|\lambda|$ is large. In that case the original nonlinear equation (1.3) is also stiff, meaning that its numerical simulation may need the choice of a very small step size. To illustrate this, we use the Euler method and the midpoint rule to solve (1.3) numerically, and observe the effect in Fig. [f1]. In Fig. [f1.1], with a coarse step size, we see that both the Euler method (blue dotted) and the midpoint rule (red dash-dotted) produce significant oscillating simulation error, while in Fig. [f1.2], with a somewhat smaller step size, the midpoint rule creates a reasonable numerical solution remaining close to the true solution. Note that since there is no explicit true solution of (1.3), we use the midpoint rule with a tiny step size to approximate the true solution (blue solid). If the step size is small enough, as in Fig. [f1.3], both methods give very accurate simulations which coincide visually with the true solution. In fact, as $|\lambda|$ gets larger and larger, the step size needed for effective computation becomes smaller and smaller. This illustrates that the simulation of the equation (1.3) is sensitive to the time step size, due to the stiffness of the equation; meanwhile, it can be seen that the midpoint rule is more stable than the Euler method. For the stochastic protein kinetic equation (1.1), $x = 0$ is also a stochastically asymptotically stable solution under the same parameter condition. This can be seen from its linearized stochastic differential equation (1.7) at $x = 0$, whose solution is explicit, so that the Lyapunov exponent of (1.7) is negative with probability 1. At the same time, similar to its deterministic counterpart (1.4), the equation (1.7) can be very sensitive to the simulation time step size when $|\lambda|$ is large. In other words, we say that (1.7) is stiff for large $|\lambda|$.
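For concreteness, the following Python sketch implements the implicit Stratonovich midpoint rule with a fixed-point iteration at each step. The drift and diffusion below are placeholders: the paper's exact protein-kinetic coefficients were lost in extraction, so a representative form $f(x) = -x - \lambda x(1-x)$, $g(x) = \sigma x(1-x)$, for which $0$ is a stable equilibrium that becomes stiff for large $\lambda$, is assumed purely for illustration.

```python
import numpy as np

# Implicit stochastic midpoint rule for a scalar Stratonovich SDE
#   dx = f(x) dt + g(x) o dW,
# with the implicit equation solved by fixed-point iteration.

def midpoint_step(x, f, g, h, dW, iters=10):
    y = x                                  # initial guess for x_{n+1}
    for _ in range(iters):
        mid = 0.5 * (x + y)
        y = x + f(mid) * h + g(mid) * dW
    return y

def simulate(x0, f, g, h, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))   # Brownian increment ~ N(0, h)
        x[n + 1] = midpoint_step(x[n], f, g, h, dW)
    return x

# Hypothetical coefficients, for illustration only (not the paper's exact model).
lam, sigma = 50.0, 0.5
f = lambda x: -x - lam * x * (1.0 - x)     # assumed drift; f'(0) = -(1 + lam), stiff
g = lambda x: sigma * x * (1.0 - x)        # assumed diffusion, vanishing at 0 and 1

path = simulate(0.1, f, g, h=1e-3, n_steps=5000, rng=np.random.default_rng(0))
```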
For large $|\lambda|$, then, the original nonlinear equation (1.1) is also stiff. This can be seen in the numerical experiments of Section 3, where we observe that even though the stochastic midpoint rule (1.2) is applied, a cautiously small time step size is still needed, which increases computational cost and accumulates round-off errors, especially over large time intervals. It is therefore meaningful to investigate effective measures for reducing the stiffness of stochastic differential equations such as (1.1), so that they can be simulated efficiently. For this purpose, we employ the technique of variable transformation.

[lem2.1] Under the parameter condition on $\lambda$ stated above, the linearized stochastic differential equation (1.7) is equivalent, via the variable transformation (2.2), to the linear SDE
$$dz = -z\,dt - \frac{\sigma}{1+\lambda}\,z \circ dW(t). \tag{2.1}$$
*Proof.* Since the ordinary differential chain rule holds for SDEs of Stratonovich type, a direct calculation starting from (1.7) and (2.2) yields (2.1). $\square$

Note that, for large $|\lambda|$, the equation (1.7) is much more stiff than its equivalent equation (2.1), since the Lyapunov exponent of (2.1) is $-1$, obtained in the same way as in (1.8). We have thus found a way of reducing the stiffness of (1.7) via the variable transformation (2.2). The parameter condition on $\lambda$ guarantees both that $x = 0$ is the stochastically asymptotically stable solution of (1.7) and that the transformation (2.2) is meaningful at $0$. Applying the same transformation to the original nonlinear stochastic protein kinetic equation (1.1), we obtain the following result.

[prop2.2] Under the same condition on $\lambda$, the stochastic protein kinetic equation (1.1) is equivalent, via the variable transformation (2.5), to the SDE
$$dz = \Big(-z + \frac{\lambda}{1+\lambda}\,z^{2+\lambda}\Big)dt - \Big(\frac{\sigma}{1+\lambda}\,z - \frac{\sigma}{1+\lambda}\,z^{2+\lambda}\Big)\circ dW(t). \tag{2.4}$$
The linearized equation of (2.4) at its stochastically asymptotically stable solution $z = 0$ is the equation (2.1).
*Proof.* Using the differential chain rule together with the transformation (2.5) and its inverse (2.6), a direct calculation gives (2.4). The condition on $\lambda$ ensures that the transformation (2.5) and its inverse (2.6) are meaningful at $0$, and it also makes $z = 0$ a stochastically asymptotically stable solution of (2.4). Linearizing (2.4) at $z = 0$ gives exactly the equation (2.1). $\square$

From Proposition [prop2.2] it is clear that the equation (2.4) is much less stiff than the original equation (1.1), since the linearized equation (2.1) of (2.4) is much less stiff than the linearized equation (1.7) of (1.1) for large $|\lambda|$. Thus, to simulate the stochastic protein kinetic equation, we can first apply the stochastic midpoint rule to the transformed equation (2.4), and then use the inverse transform (2.6) to get back to the original variable.
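This transform-simulate-invert workflow can be sketched as follows, reusing `midpoint_step` and `simulate` from the previous sketch. The transformed drift and diffusion are read off from (2.4); the power-law form of the transformation ($z = x^{1/(1+\lambda)}$, inverse $x = z^{1+\lambda}$) is an assumption made here for illustration, consistent with the linearizations quoted above but not verified against the lost equations (2.5)-(2.6).

```python
import numpy as np

# Simulate the transformed equation (2.4) with the midpoint rule, then map
# back with the ASSUMED inverse transform x = z**(1 + lam).

lam, sigma = 50.0, 0.5
fz = lambda z: -z + (lam / (1.0 + lam)) * z**(2.0 + lam)        # drift of (2.4)
gz = lambda z: -(sigma / (1.0 + lam)) * (z - z**(2.0 + lam))    # diffusion of (2.4)

x0 = 0.1
z0 = x0 ** (1.0 / (1.0 + lam))     # assumed transform of the initial value
z = simulate(z0, fz, gz, h=1e-2, n_steps=500, rng=np.random.default_rng(1))
x = z ** (1.0 + lam)               # back to the original variable
```

Note that the linearization of (2.4) has drift coefficient $-1$, so the step size $h = 10^{-2}$ used here is far coarser than what the original equation would tolerate for large $\lambda$.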
In the following section, we perform numerical tests to illustrate the difference the transformation (2.5) makes, that is, the different degrees of stiffness of (1.1) and (2.4), as reflected in how strongly their numerical simulations with the stochastic midpoint rule depend on the time step size. In this section we compare the numerical simulation of the stochastic protein kinetic equation based on the original equation (1.1) and on its transformed equation (2.4), respectively. We apply the stochastic midpoint rule to both equations, with varying time step sizes and parameter values. The results are shown in Fig. [f2]; the number of fixed-point iterations in each time step for the realization of the implicit stochastic midpoint rule is 10.

In Figs. [f2.1] and [f2.2] the parameters are moderate. At the coarser step size, the midpoint rule applied to the equation before the transform, i.e., the original equation (1.1), loses accuracy severely, as shown by the blue dotted line in Fig. [f2.1], while applied to the equation after the transform, i.e., the equation (2.4) together with the inverse transformation (2.6), it produces a much better simulation, as illustrated by the red dash-dotted line in the same figure. When the step size is small enough, the midpoint rule applied to both equations gives good numerical results, as seen in Fig. [f2.2]. As the stiffness increases with the absolute value of $\lambda$, we see in Fig. [f2.3] that the same time step size loses its effectiveness if the numerical simulation is performed on the original equation (1.1) without transformation, while it remains valid if the simulation is performed on the transformed equation (2.4). For a much smaller time step size, both simulations work fairly well again, at the cost of many more computations, as shown by Fig. [f2.4]. Note that there is no explicit true solution of (1.1) or (2.4); the reference solution lines (blue solid) are simulated by the midpoint rule applied to the original equation (1.1) with a tiny step size. To conclude, the numerical results show the superiority of reducing the stiffness of the SDE (1.1) by variable transformation in the numerical computation of the equation: it permits larger time step sizes, and therefore reduces computational cost and increases computational robustness. More general stochastic protein kinetic equations involve a varying parameter in the drift part, in which case $x = 0$ is no longer a stochastic stationary solution. If we still employ the variable transformation (2.5), obtaining a transformed equation in place of (2.4), we can still observe the stiffness-reduction effect in the numerical tests, as shown by Fig. [f3]. When the stiffness parameter is small, however, the transformation is not recommended. In fact, this method of reducing stiffness can be generalized to a more general class of SDEs
of the form $dx = f(x)\,dt + g(x)\circ dW(t)$, where the functions $f$ and $g$ are regular enough for the following discussion. This class of equations covers many stochastic differential equation models in biology, chemistry, physics and elsewhere. Assume that there exists a constant $\kappa$ with $f'(0) = -\kappa$, and that $x = 0$ is a stochastically stationary solution of (4.1). The linearized equation of (4.1) at $x = 0$ is (4.2); if $\kappa$ is large, then the equation (4.2) is stiff, and so is the original equation (4.1). We search for a variable transformation $y = \varphi(x)$ that normalizes the drift coefficient of the linearization to $-1$. The defining condition is an ordinary differential equation for $\varphi$, and for the purpose of having an inverse transformation we take one branch of its solution for $x \geq 0$ and the other for $x < 0$. In both cases we obtain the transformed equation (4.9) of (4.2), which is less stiff than (4.2) for large $\kappa$. Based on this, we perform the corresponding transformation on the equation (4.1), for $x \geq 0$ and for $x < 0$, to get its transformed equations (4.12) and (4.13). Note that if $x(t)$ switches sign in the time interval of observation, then the transformation has no global inverse, which is a limitation of this method. To show that (4.12) or (4.13) is less stiff than the original equation (4.1), we need their linearized equations at $x = 0$, which are just the equation (4.9) for both (4.12) and (4.13). The lesser stiffness of (4.9) compared with (4.2) then implies the lesser stiffness of (4.12) and (4.13) compared with (4.1). Note that the standing condition also guarantees that the drift parts of both (4.12) and (4.13) have the appropriate limit at $0$, and that $x = 0$ is a stochastically asymptotically stable solution of (4.1).

The author is supported by the NNSFC (No. 11071251, No. 91130003, No. 11471310) and the 2013 Headmaster Funds of UCAS.

Kloeden P.E., Platen E., Numerical Solution of Stochastic Differential Equations, Springer-Verlag Berlin Heidelberg (1992).
Mao X., Stochastic Differential Equations and Their Applications, Horwood, Chichester (1997).
Milstein G.N., Numerical Integration of Stochastic Differential Equations, Kluwer Academic Publishers (1995).
The stochastic protein kinetic equations can be stiff for certain parameters, which makes their numerical simulation rely on very small time step sizes, resulting in large computational cost and accumulated round-off errors. For such situations, we provide a method of reducing the stiffness of the stochastic protein kinetic equation by means of a variable transformation. Theoretical and numerical analysis show the effectiveness of this method, and its generalization to a more general class of stochastic differential equation models is also discussed. *AMS subject classification:* 65C20, 65C30. *Key words:* numerical methods for SDEs; stochastic protein kinetic equation; stiffness; midpoint rule.
Computations on large matrices are a cornerstone of computational linear algebra. With a few exceptions, such as the approximate Lanczos and power methods, most algorithms used by practitioners aim to optimize the speed of computation under the constraint of obtaining an exact result. Recently, spurred by the seminal paper of Frieze et al., there has been greater interest in finding algorithms which sacrifice the precision of the result for a gain in the speed of execution. Consider the low-rank approximation problem, i.e., finding a matrix of rank at most $k$ which approximates a given matrix $A$. The best such matrix, best in the sense that it minimizes the approximation error for any unitarily invariant norm (e.g., spectral or Frobenius norm), can be obtained by computing the singular value decomposition (SVD) of $A$. (Throughout this paper we adopt the Frobenius norm; we use $A_k$ to denote the best rank-$k$ approximation to $A$ and $\widetilde A_k$ to denote an approximation to it; it will be easy to avoid confusion with $a_i$, which is used to denote the $i$th column of $A$.) But in some instances evaluating the SVD, whose cost grows with the cube of the largest dimension of $A$, may be too costly. Frieze et al. showed that $A_k$ can be reasonably well approximated by computing the SVD of a subset of the columns of $A$ only, where the columns are sampled according to their relative powers, i.e., in proportion to their squared norms, with the expected error of using the approximation instead of $A_k$ taking an additive form. In subsequent papers it has been argued that this additive error term may be large, and thus other sampling techniques have been introduced to obtain relative approximation error bounds (see, e.g., the references). In this paper, we address the sparse representation of linear operators for the approximation of matrix products. An important object that will appear in our study is the so-called Nystrom method (see Section [sec:background]) for finding a low-rank approximation to a positive kernel. This method, familiar to numerical analysts, has nowadays found applications beyond its original field, most notably in machine learning. In previous work, we proposed an approach for low-rank approximation in such applications; here we show that the task of finding an optimal sparse representation of linear operators, in order to evaluate their product, is related to the Nystrom extension of a certain positive definite kernel. We then use this connection to derive and bound the error of two new algorithms, which our simulations indicate perform well in practice. Related to our work is that of Drineas et al. on matrix products, which showed that a randomized algorithm sampling columns of $A$ and rows of $B$ in proportion to their relative powers yields an expected error controlled by $\|A\|_F \|B\|_F$. Notice that this bound does not involve a low-rank approximation of $A$ or $B$; in contrast, we obtain a randomized algorithm bound in which the approximating rank of a kernel related to $A$ and $B$ appears explicitly. The methods mentioned above are all *adaptive*, in the sense that they require some knowledge about $A$ and $B$. A very simple non-adaptive method is given by an application of the Johnson-Lindenstrauss lemma: if $S$ is a $k \times n$ matrix with independent unit normal entries, then, letting $a_i^T$ denote the $i$th row of $A$ and $b_j$ the $j$th column of $B$, the quantities $a_i^T S^T S b_j / k$ concentrate around $a_i^T b_j$ element-wise, and thus approximating $AB$ by $(AS^T)(SB)/k$ yields a good result with high probability. Later we compare this method to the algorithms described below.
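As a concrete illustration, the following sketch instantiates this non-adaptive estimator in NumPy; the matrix sizes are arbitrary, and since $E[S^T S]/k = I$, the estimate is unbiased.

```python
import numpy as np

# Non-adaptive Johnson-Lindenstrauss approximation of a matrix product:
# AB is approximated by (A S^T)(S B) / k, with S a k x n standard Gaussian
# matrix. Sizes below are illustrative.

rng = np.random.default_rng(0)
m, n, p, k = 200, 500, 150, 60

A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

S = rng.standard_normal((k, n))
AB_hat = (A @ S.T) @ (S @ B) / k       # O(k(mn + np + mp)) work instead of O(mnp)

rel_err = np.linalg.norm(A @ B - AB_hat) / np.linalg.norm(A @ B)
```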
The remainder of this paper is organized as follows. In Section [sec:background], we briefly review the Nystrom method used to approximate positive definite matrices. In Section [sec:mat_approx], we introduce the problem of approximating a matrix product and highlight two key aspects: the issue of best subset selection and the issue of optimal rescaling. We then solve the optimal rescaling problem and analyze a randomized and a deterministic algorithm for subset selection; we conclude with simulations and a brief discussion of algorithmic complexity.

To provide context for our results, we first introduce the so-called Nystrom method to approximate the eigenvectors of a symmetric positive semi-definite (SPSD) matrix. The Nystrom method, familiar in the context of finite element methods, has found many applications in machine learning and computer vision in recent years (see, e.g., the references therein). In brief: given a positive kernel $k : [0,1]\times[0,1] \rightarrow \mathbb{R}$ and points evenly spaced along the interval $[0,1]$, the integral eigenproblem of the kernel is discretized at those points, and the eigenvectors of the resulting kernel matrix are extended to approximate the eigenfunctions of the kernel. Applied to an SPSD matrix $Q$ and an index subset $J$, the same construction yields the Nystrom approximation $\widetilde Q = Q_{\cdot J}\, Q_J^{-1}\, Q_{J \cdot}$, where $Q_J$ is the principal submatrix of $Q$ indexed by $J$.

Turning to matrix products, write $AB$ as a sum of rank-one terms, $AB = \sum_i a_i b_i^T$, where $a_i$ is the $i$th column of $A$ and $b_i^T$ the $i$th row of $B$. A sparse representation retains a subset $J$ of $k$ of these terms, rescaled by weights to be determined, and the optimal rescaling problem is to choose those weights so as to minimize the Frobenius error of the resulting product approximation. Let $Q$ be the positive definite matrix with entries $q_{ij} = (a_i^T a_j)(b_i^T b_j)$, and let $[\mathbf{1}]$ be the vector whose entries are all one. Then at the optimum the error is
$$\|AB\|^2 - [\mathbf{1}]^T \widetilde Q\, [\mathbf{1}],$$
where $\widetilde Q$ is the Nystrom approximation of $Q$ as described in Section [sec:background]. Using Lemma [lem:normhad] below, we therefore have, at the optimum,
$$[\mathbf{1}]^T (Q - \widetilde Q)\, [\mathbf{1}],$$
which finishes the proof of the theorem. The proof of the second part of Theorem [th:maintheo] is based on the identity proven below.

[lem:normhad] Let $A$ and $B$ be real matrices of compatible dimensions. The following identity holds:
$$\|AB\|_F^2 = [\mathbf{1}]^T \big( (A^T A) \circ (B B^T) \big)\, [\mathbf{1}].$$
Recall that we can write the product as a sum of rank-one terms, $AB = \sum_i a_i b_i^T$. By definition of the Frobenius norm and the invariance of the trace under cyclic permutations,
$$\|AB\|_F^2 = \sum_{i,j} \operatorname{tr}\big(b_i a_i^T a_j b_j^T\big) = \sum_{i,j} (a_i^T a_j)(b_i^T b_j),$$
and the relation is proved.

Having shown a solution to the optimal reweighting problem in Theorem [th:maintheo], we now turn our attention to the companion problem of optimal subset selection. In order to minimize the approximation error, we have to find the subset whose associated Schur complement has the lowest possible power along the one-dimensional subspace spanned by the vector $[\mathbf{1}]$. Determining the eigenvectors and eigenvalues of this Schur complement, and relating them to $A$ and $B$, is not an easy task. Here we present two approximations: one based on a random choice of subsets, and an alternative "greedy" approach which yields a worst-case error bound. We first discuss a random oracle which outputs a subset with a probability defined below. Recall our earlier definition of the matrix $Q$ in Theorem [th:maintheo]; this approach is motivated by the expression of the resultant squared error, conditioned upon having chosen a subset $J$, as $[\mathbf{1}]^T (Q - \widetilde Q)[\mathbf{1}]$. Since $Q - \widetilde Q$ is positive, this error is bounded by the largest eigenvalue of $Q - \widetilde Q$ times the squared norm of $[\mathbf{1}]$; equality is obtained when $[\mathbf{1}]$ is the corresponding eigenvector, and hence this bound is tight. We have investigated elsewhere an algorithm to minimize this quantity, which has been shown to be effective in the context of low-rank covariance matrix approximation. Returning to our random oracle, note that both $A^T A$ and $B B^T$ are positive definite, and thus, by the Schur product theorem, $Q$ is also positive definite. From this, we conclude:
1. there exists a matrix $R$ such that $Q = R^T R$;
2. all the principal minors of $Q$ are positive.

Consequently, we assume here that an oracle returns a subset $J$ with probability proportional to the corresponding principal minor of $Q$, up to a normalizing constant; the second fact above ensures that this probability distribution is well defined. We may then adapt the following result from the proof of Theorem 1 in the reference:

[th:randomorac] Let $Q$ be a positive quadratic form with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots$. If $J$ is chosen with the probability above, then the expected error is controlled by the tail eigenvalues $\sum_{i > k} \lambda_i$.

Combining this with Jensen's inequality leads to an upper bound on the average error of this approach to random subset selection, expressed in terms of $Q_k$, the optimal rank-$k$ approximation to $Q$ obtained by truncating its singular value decomposition. Despite the multiplicative factor appearing in this bound, it serves to relate the resultant approximation quality to the ranks of $A$ and $B$, a feature reinforcing the well-foundedness of the accompanying algorithm we present below. In particular, if $\operatorname{rank}(Q) \leq k$, then the approximation error is zero, as expected. For practical reasons, we may also wish to relate this error to the singular values of $A$ and $B$. To this end, a standard majorization relation bounds the sorted singular values of a product of two matrices by products of the individual sorted singular values; applying it to $A^T A$ and $B B^T$, whose singular values are the squares of the singular values of $A$ and $B$ respectively, we may conclude a corresponding bound on the eigenvalues of $Q$. Although the approach presented above relies on an oracle to sample in proportion to the principal minors, we will subsequently outline a realizable algorithm based on these results. Recall that Theorem [th:maintheo] indicates we should ensure that the diagonal terms of the Schur complement are kept as small as possible. Hence, as a *deterministic* approximation to the optimal subset selection procedure, we may take $J$ to contain the indices of the $k$ largest diagonal entries $q_{ii} = \|a_i\|^2\,\|b_i\|^2$. While yielding only a worst-case error bound, this approach has the advantage of being easily implementable (as it does not require sampling according to principal minors); it also appears to perform well in practice. This greedy algorithm proceeds as follows: given matrices $A$ and $B$ and a positive integer $k$,

1. set $q_i = \|a_i\|^2 \|b_i\|^2$ for each index $i$, and take $J$ to be the indices of the $k$ largest elements of $(q_i)$;
2. form the principal submatrix $Q_J$ and the restriction $(Q[\mathbf{1}])_J$ of the row sums of $Q$ to $J$;
3. solve the weights $w$ from $Q_J\, w = (Q[\mathbf{1}])_J$;
4. return $\sum_{i \in J} w_i\, a_i b_i^T$ as an approximation to $AB$.

A minimal sketch of this procedure in code is given below.
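In the following NumPy sketch, the function name and sizes are illustrative; the weight system is the least-squares form implied by the optimal rescaling of Theorem [th:maintheo] as reconstructed above.

```python
import numpy as np

# Greedy subset selection with optimal rescaling. Writing AB = sum_i a_i b_i^T,
# the induced quadratic form is Q = (A^T A) o (B B^T) (entrywise product), so
# ||AB||_F^2 = 1^T Q 1. J collects the k largest diagonal entries
# q_ii = ||a_i||^2 ||b_i||^2, and the weights solve Q_J w = (Q 1)_J.

def greedy_product_approx(A, B, k):
    Q = (A.T @ A) * (B @ B.T)              # induced quadratic form
    J = np.argsort(np.diag(Q))[-k:]        # indices of the k largest q_ii
    w = np.linalg.solve(Q[np.ix_(J, J)], Q[J, :].sum(axis=1))
    return (A[:, J] * w) @ B[J, :], J, w   # weighted sum of rank-one terms

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
B = rng.standard_normal((400, 80))
AB_hat, J, w = greedy_product_approx(A, B, k=120)
rel_err = np.linalg.norm(A @ B - AB_hat) / np.linalg.norm(A @ B)
```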
Since the error term is the sum of all the entries of the Schur complement, we can look to bound its largest element. To this end, we have the following result:

[prop:greedy] The largest entry of the Schur complement of $Q_J$ in $Q$ is smaller than the largest diagonal element of $Q$ outside $J$.

This lemma confirms that a good error-minimization strategy is to make sure that the diagonal terms of the Schur complement are as small as possible, or equivalently to take $J$ such that it contains the indices of the largest $q_{ii}$, as per Algorithm [alg:pseudocodemult].

Figure: matrix product approximation error using the power rescaling applied to each of the four subset selection algorithms described in Sec. [sec:results].

The proof of Lemma [prop:greedy] is based on the following set of simple results:

[lem:lardiag] If $Q$ is a positive definite matrix, then its largest entry is positive and lies on the diagonal of $Q$. Indeed, since $Q$ is positive definite, there exists a matrix $R$ such that $Q = R^T R$. By the Cauchy-Schwarz inequality, $|q_{ij}| = |r_i^T r_j| \leq \|r_i\|\,\|r_j\|$, from which we deduce that at least one of the inequalities $|q_{ij}| \leq q_{ii}$ or $|q_{ij}| \leq q_{jj}$ has to be satisfied. If we now suppose that the largest entry is not a diagonal element, these relations yield a contradiction, and hence the largest entry of $Q$ is on its main diagonal.

The entries of the Schur complement of $Q_J$ in $Q$ can be characterized explicitly according to the following formula:

[lem:crabtree] Let $Q_J$ be a nonsingular leading principal submatrix of $Q$ obtained by keeping the rows and columns with indices in $J$. Then the Schur complement of $Q_J$ in $Q$ is given element-wise by the quotient of minors
$$\big(Q/Q_J\big)_{ij} = \frac{\det Q_{J\cup\{i\},\,J\cup\{j\}}}{\det Q_J},$$
where $Q_{J\cup\{i\},\,J\cup\{j\}}$ is the submatrix of $Q$ with rows indexed by $J\cup\{i\}$ and columns by $J\cup\{j\}$.

Furthermore, it is possible to bound the diagonal entries of the Schur complement as follows:

[lem:fisch] If $Q$ is a positive definite matrix, then the diagonal entries of the Schur complement satisfy $(Q/Q_J)_{ii} \leq q_{ii}$.

We are now ready to give the proof of Lemma [prop:greedy]: the preceding two lemmas tell us that the diagonal entries of the Schur complement are bounded by the corresponding diagonal elements of $Q$ (i.e., by the largest diagonal element of $Q$ outside $J$, according to the partition of $Q$), and, using Lemma [lem:lardiag], every entry of the Schur complement is bounded by these diagonal entries. Lemma [prop:greedy] can be further refined to give a worst-case error bound for deterministic matrix product approximation, conditioned on a choice of subset and the corresponding optimal reweighting procedure: appealing to the inequality of arithmetic and geometric means to further bound the elements of the Schur complement, the results of Theorem [th:maintheo] and Lemmas [lem:lardiag]-[lem:fisch] yield the desired bound.

Figure: matrix product approximation error using the optimal rescaling of Theorem [th:maintheo] applied to the subset selection algorithms described in Sec. [sec:results].

We now present preliminary experimental results and discuss the computational complexity of the algorithms under consideration. Three sets of experiments were performed, in which we compared the performance of four subset selection methods: a baseline uniform sampling on $k$-subsets; sampling according to the row/column powers; sampling in proportion to the principal minors of $Q$; and selecting greedily according to step 1 of Algorithm [alg:pseudocodemult]. We also compared the choice of reweighting following subset selection, in one case applying the optimal reweighting of Theorem [th:maintheo] and in the other simply reweighting according to the row/column powers. To test these various experimental conditions, we drew 200 random matrix pairs $A$ and $B$ in total, each having independent unit normal entries. We then averaged the error of the randomized algorithms over 20 trials per matrix product, and report the relative error in dB for each test condition.
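Schematically, the experimental loop can be sketched as follows. The dB convention (20 log10 of the relative Frobenius error) and the matrix sizes are assumptions made for illustration; the Metropolis-Hastings and principal-minor samplers are omitted for brevity.

```python
import numpy as np

# Compare greedy and uniform subset selection (both with the optimal
# rescaling of Theorem [th:maintheo]) against the Johnson-Lindenstrauss
# projection, on Gaussian matrix pairs.

def trial(rng, m=100, n=400, p=80, k=120):
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((n, p))
    AB = A @ B
    Q = (A.T @ A) * (B @ B.T)

    def weighted(J):                       # optimally rescaled subset approximation
        w = np.linalg.solve(Q[np.ix_(J, J)], Q[J, :].sum(axis=1))
        return (A[:, J] * w) @ B[J, :]

    S = rng.standard_normal((k, n))
    approx = {
        "greedy": weighted(np.argsort(np.diag(Q))[-k:]),
        "uniform": weighted(rng.choice(n, size=k, replace=False)),
        "jl": (A @ S.T) @ (S @ B) / k,
    }
    ref = np.linalg.norm(AB)
    return {name: 20 * np.log10(np.linalg.norm(AB - M) / ref)
            for name, M in approx.items()}

rng = np.random.default_rng(0)
results = [trial(rng) for _ in range(20)]  # average per-method errors over trials
```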
In the first set of experiments, shown in Figure [fig:fig1], we compare the four different algorithms for subset selection described above, applied in conjunction with a reweighting according to row/column powers. The highest-error method in this case corresponds to choosing the subset uniformly at random, and should thus be understood as a baseline measure of performance as a function of approximant rank. It can also be seen that sampling according to the relative powers of the rows/columns of $A$ and $B$, and sampling via a Metropolis-Hastings algorithm (with independent proposal distributions taken in proportion to row/column powers), yield similar results, with both improving upon the baseline performance. The best results in this case are obtained by the greedy subset selection method indicated by step 1 of Algorithm [alg:pseudocodemult]. In a second set of experiments, we followed the same procedure as above to compare subset selection procedures, but applied the optimal reweighting of Theorem [th:maintheo] rather than a rescaling according to row/column powers. Performance in this case is (as expected) better overall, but the ordering of the methods is unchanged. As our final experiment, we compare the method of Algorithm [alg:pseudocodemult] (greedy subset selection followed by optimal rescaling) to two non-adaptive methods: choosing rows/columns of $A$ and $B$ uniformly at random and rescaling according to row/column powers, and the simple Johnson-Lindenstrauss random projection approach outlined in Section [sec:intro]. These non-adaptive methods yield significantly worse performance than Algorithm [alg:pseudocodemult], suggesting its potential as a practical method of selecting sparse representations of linear operators that yield low approximation errors for the resultant matrix products.

We conclude with a brief discussion of the algorithmic complexity of Algorithm [alg:pseudocodemult]. First, assume without loss of generality that $m, p \leq n$, and recall that straightforward matrix multiplication requires $O(mnp)$ operations, though the best algorithm known so far (the Coppersmith-Winograd algorithm) can perform the square case in $O(n^{2.376})$. Evaluating the diagonal of $Q$ in Algorithm [alg:pseudocodemult] requires the computation of $n$ inner products of $m$- or $p$-dimensional vectors. Extracting the $k$ largest elements of a set of size $n$, as is necessary to construct $J$, can be done efficiently using a variation on the quicksort algorithm in expected linear time. The matrix $Q_J$ is symmetric and its diagonal is a restriction of the diagonal of $Q$; hence it requires the computation of $O(k^2)$ additional inner products. Evaluating the restricted row sums $(Q[\mathbf{1}])_J$ requires $O(nk)$ further inner products, taking into account the fact that some terms of the sum also appear in $Q_J$. Finally, evaluating the weights can be done using Gaussian elimination in $O(k^3)$ operations. Hence the overall complexity is of order $nk(m+p) + k^3$, plus $O(mkp)$ if the approximate product is formed explicitly.

Figure: matrix product approximation error using non-adaptive random projections (Johnson-Lindenstrauss), non-adaptive subset selection (uniform), and adaptive subset selection (Algorithm [alg:pseudocodemult]).

C. K. I. Williams and M. Seeger, "Using the Nystrom method to speed up kernel machines," in Neural Information Processing Systems, T. G. Dietterich, S. Becker, and Z. Ghahramani, Eds. MIT Press, 2001.
Thus far, sparse representations have been exploited largely in the context of robustly estimating functions in a noisy environment from a few measurements. In this context, the existence of a basis in which the signal class under consideration is sparse is used to decrease the number of necessary measurements while controlling the approximation error. In this paper, we instead focus on applications in numerical analysis, by way of sparse representations of linear operators, with the objective of minimizing the number of operations needed to perform basic operations (here, multiplication) on these operators. We represent a linear operator by a sum of rank-one operators, and show how a sparse representation that guarantees a low approximation error for the product can be obtained from analyzing an induced quadratic form. This construction in turn yields new algorithms for computing approximate matrix products.
A typical object of interest in many fields is the sample covariance matrix of a $p \times n$ data matrix $X = (X_{it})$. The matrix $X$ can be seen as a sample of size $n$ of $p$-dimensional data vectors. For fixed $p$ one can show, as $n$ tends to infinity, that under certain assumptions the eigenvalues of the sample covariance matrix converge to the eigenvalues of the true underlying covariance matrix. However, this assumption may not be justified if one has to deal with high-dimensional data sets, so that it is often more suitable to assume that the dimension $p$ is of the same order as the sample size $n$, that is, $p/n \to y > 0$. For a symmetric $p \times p$ matrix $M$ with eigenvalues $\lambda_1, \ldots, \lambda_p$, we denote by
$$F^{M} = \frac{1}{p} \sum_{i=1}^{p} \delta_{\lambda_i}$$
the spectral distribution of $M$, where $\delta_\lambda$ denotes the Dirac measure located at $\lambda$; this means that $p\,F^{M}(E)$ is equal to the number of eigenvalues of $M$ that lie in the set $E$. From now on we will call $S = \frac{1}{n} X X^{T}$ the sample covariance matrix; due to [npy], a change of normalization can be reversed by a simple transformation of the limiting spectral distribution. For notational convenience we suppress the explicit dependence of the occurring matrices on $n$ and $p$ where this does not cause ambiguity. The distribution of Gaussian sample covariance matrices of fixed size was first computed by Wishart. Several years later, it was Marchenko and Pastur who considered the case where the entries are more general i.i.d. random variables with finite second moments, and the number of variables is of the same order as the sample size. They showed that the empirical spectral distribution (ESD) of $S$ converges, as $n \to \infty$, to a nonrandom distribution, called the limiting spectral distribution (LSD); for unit-variance entries this distribution has density
$$f(x) = \frac{1}{2\pi y x}\sqrt{(x_+ - x)(x - x_-)}, \qquad x_- \leq x \leq x_+, \quad x_\pm = (1 \pm \sqrt{y})^2,$$
and an additional point mass of $1 - 1/y$ at the origin if $y > 1$. Here and in the following, convergence of the ESD means almost sure convergence as a random element of the space of probability measures on $\mathbb{R}$ equipped with the weak topology. In particular, the eigenvalues of the sample covariance matrix of a matrix with independent entries do not converge to the eigenvalues of the true covariance matrix, which is the identity matrix and therefore only has the eigenvalue one. This leads to the failure of statistics that rely on the eigenvalues of $S$ and were derived under the assumption of fixed $p$, and random matrix theory is a tool to correct these statistics. In the case where the true covariance matrix is not the identity matrix, the LSD can in general only be given in terms of a nonlinear equation for its Stieltjes transform, which is defined by
$$m_{F}(z) = \int \frac{1}{\lambda - z}\, {\mathrm{d}}F(\lambda), \qquad z \in \mathbb{C}^{+}.$$
Conversely, the distribution can be obtained from its Stieltjes transform via the Stieltjes-Perron inversion formula (Theorem B.8), which states that
$$F([a,b]) = \frac{1}{\pi} \lim_{\epsilon \to 0^{+}} \int_{a}^{b} \Im\, m_{F}(x + {\mathrm{i}}\epsilon)\, {\mathrm{d}}x$$
for all continuity points $a < b$ of $F$.
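As a quick numerical check of the Marchenko-Pastur law just described, the following sketch compares the eigenvalues of a sample covariance matrix with i.i.d. unit-variance entries against the density above; the dimensions are illustrative and $y < 1$ is assumed, so there is no point mass at the origin.

```python
import numpy as np

# Eigenvalues of S = X X^T / n for i.i.d. standard normal entries, together
# with the Marchenko-Pastur density for ratio y = p/n (unit variance).

rng = np.random.default_rng(0)
p, n = 400, 1600
y = p / n

X = rng.standard_normal((p, n))
S = X @ X.T / n                        # sample covariance matrix
eigs = np.linalg.eigvalsh(S)

lo, hi = (1 - np.sqrt(y))**2, (1 + np.sqrt(y))**2
x = np.linspace(lo, hi, 400)
mp_density = np.sqrt((hi - x) * (x - lo)) / (2 * np.pi * y * x)

# e.g., compare a histogram of `eigs` with `mp_density`:
hist, edges = np.histogram(eigs, bins=40, density=True)
```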
For a comprehensive account of random matrix theory we refer the reader to the standard monographs and the references therein. Our aim in this paper is to obtain a Marchenko-Pastur type result in the case where there is dependence within the rows of $X$. More precisely, for $i = 1, \ldots, p$, the $i$th row of $X$ is given by a linear process of the form
$$X_{it} = \sum_{j=0}^{\infty} c_j Z_{i,t-j}, \qquad t = 1, \ldots, n.$$
Here, $(Z_{it})$ is an array of independent random variables with mean zero and unit variance, satisfying in addition a Lindeberg-type condition requiring that, for each $\epsilon > 0$, the truncated second moments of the $Z_{it}$ vanish asymptotically; clearly, this condition is satisfied if all $Z_{it}$ are identically distributed. The novelty of our result is that we allow for dependence within the rows, and that the equation for the limiting Stieltjes transform is given in terms of the spectral density
$$f(\omega) = \sum_{h \in \mathbb{Z}} \gamma(h)\, e^{-{\mathrm{i}} h \omega}, \qquad \omega \in [0, 2\pi],$$
of the linear processes only, which is the Fourier transform of the autocovariance function $\gamma(h) = \sum_{j} c_j c_{j+|h|}$. Potential applications arise whenever data are not independent in time, so that the Marchenko-Pastur law is not a good approximation; this includes, e.g., wireless communications and mathematical finance. Note that a similar question is also discussed elsewhere; however, that proof relies on a moment condition to be verified, and it is assumed there that the random variables are identically distributed, so that the processes within the rows are independent copies of each other. More importantly, those results do not yield concrete formulas except in the AR(1) case and are therefore not directly applicable. In the context of free probability theory, the limiting spectral distribution of large sample covariance matrices of Gaussian ARMA processes has also been investigated.

Before we present the main result of this article, we explain the notation used. The symbols $\mathbb{Z}$, $\mathbb{N}$, $\mathbb{R}$, and $\mathbb{C}$ denote the sets of integer, natural, real, and complex numbers, respectively. For a matrix $M$, we write $M^T$ for its transpose and $\operatorname{tr} M$ for its trace. The indicator of an expression $E$ is denoted by $\mathbf{1}_{E}$ and defined to be one if $E$ is true, and zero otherwise; for a set $B$, we also write $\mathbf{1}_B$ instead of $\mathbf{1}_{\{\cdot \in B\}}$.

[maintheorem] For each $i$, let $(X_{it})_t$ be a linear stochastic process with continuously differentiable spectral density $f$. Assume that

a. [maintheoremconditionsz] the array $(Z_{it})$ satisfies the moment and Lindeberg-type conditions above;
b. [maintheoremconditionsc] there exist positive constants $C$ and $\delta$ such that $|c_j| \leq C (1+j)^{-1-\delta}$ for all $j$;
c. [maintheoremconditionsflevelsets] for almost all $\lambda$, the level set $\{\omega \in [0,2\pi] : f(\omega) = \lambda\}$ contains at most finitely many points.

If conditions a. to c. hold, then the empirical spectral distribution of $S$ converges, as $n \to \infty$ with $p/n \to y$, almost surely to a nonrandom probability distribution with bounded support. Moreover, the Stieltjes transform of the limit is the unique mapping that satisfies the fixed-point equation [eq-stieltjes], which is expressed in terms of $y$, the spectral density $f$, and the Lebesgue measure of the level sets of $f$. In particular, if the entries of $X$ are i.i.d. with unit variance, one recovers the limiting spectral distribution of the Marchenko-Pastur law. In applications one often considers processes with a nonzero mean $\mu$. If we denote by $X_{\cdot t}$ the $t$th column of the matrix $X$ and define the empirical mean by $\bar X = \frac{1}{n}\sum_t X_{\cdot t}$, then the sample covariance matrix is given by the mean-corrected expression instead of $S$. However, by (Theorem A.44), the subtraction of the empirical mean does not change the LSD, and thus [maintheorem, maintheorempiecewise] remain valid if the underlying linear process has a nonzero mean. The proof of [maintheorem, maintheorempiecewise] can easily be generalized to cover noncausal linear processes, which are defined as $X_{it} = \sum_{j \in \mathbb{Z}} c_j Z_{i,t-j}$.
For this case one obtains the same result, except that the autocovariance function is now given by $\gamma(h) = \sum_{j \in \mathbb{Z}} c_j c_{j+|h|}$. If one considers a matrix which has independent linear processes in its columns instead of its rows, one obtains the same formulas as in [maintheorem, maintheorempiecewise] except that $y$ is replaced by $1/y$; this is due to the fact that $X X^T$ and $X^T X$ have the same nontrivial eigenvalues. In [proofs] we proceed with the proofs of [maintheorem, maintheorempiecewise]; thereafter we present some interesting examples in [examples].

In this section we present our proofs of [maintheorem, maintheorempiecewise]. Dealing with infinite-order moving average processes directly is difficult, and we therefore first prove a variant of these theorems for the truncated processes
$$\widetilde X_{it} = \sum_{j=0}^{n} c_j Z_{i,t-j},$$
collected into the matrix $\widetilde X = (\widetilde X_{it})$, $i = 1, \ldots, p$, $t = 1, \ldots, n$.

[maintheoremmod] Under the assumptions of [maintheorem] ([maintheorempiecewise]), the empirical spectral distribution of the sample covariance matrix of the truncated process converges, as $n$ tends to infinity, to a deterministic distribution with bounded support. Its Stieltjes transform is uniquely determined by [eq-stieltjes] ([eq-stieltjespiecewise]).

The proof starts from the observation that one can write $\widetilde X = Z H$, where $Z$ is an array of the underlying noise variables and $H$ is a deterministic coefficient matrix built from $c_0, \ldots, c_n$. In order to prove convergence of the empirical spectral distribution and to obtain a characterization of the limiting distribution, it suffices, by (Theorem 1 of the cited work), to prove that the spectral distribution of $H H^T$ converges to a nontrivial limiting distribution. This will be done in [lemma-lsdhh], where the LSD of $H H^T$ is shown to be $\Gamma$; the distribution $\Gamma$ is computed in [lemma-densitygamma] if we impose the assumptions of [maintheorem], respectively in [lemma-densitygammapiecewise] if we impose the assumptions of [maintheorempiecewise]. Inserting this expression for $\Gamma$ into equation (1.2) of the cited work shows that the ESD converges, as $n \to \infty$, almost surely to a deterministic distribution, which is determined by the requirement that its Stieltjes transform satisfies [eq-stieltjescharacgamma]. Using the explicit formulas for $\Gamma$ computed in [lemma-densitygamma, lemma-densitygammapiecewise], one obtains [eq-stieltjes, eq-stieltjespiecewise]. Uniqueness of a mapping solving [eq-stieltjescharacgamma] was shown in the cited reference. We complete the proof by arguing that the LSD of $H H^T$ has bounded support; for this it is enough, by (Theorem 6.3), to show that the spectral norm of $H H^T$ is bounded in $n$, which is also done in [lemma-lsdhh].

[lemma-lsdhh] Let $H$ be the matrix appearing in [eq-defh], and assume that there exist positive constants $C, \delta$ such that $|c_j| \leq C(1+j)^{-1-\delta}$ (assumption b. of [maintheorem]). Then the spectral norm of the matrix $H H^T$ is bounded in $n$. If, moreover, the spectral distribution of the Toeplitz matrix $\Gamma_n = (\gamma(i-j))_{i,j}$ converges weakly to some limiting distribution $\Gamma$, then the spectral distribution of $H H^T$ converges weakly, as $n \to \infty$, to $\Gamma$.

We first introduce the difference $\Delta = H H^T - \Gamma_n$ together with its block decomposition [eq-deltahh], and we will consider each of the two blocks in turn. From the definition of $H$ it follows that the $(i,j)$ entry of the upper left block involves the sums $\sum_k c_{i+k-1} c_{j+k-1}$. The trace of the square of the upper left block of $\Delta$ therefore satisfies
$$\begin{aligned}
\sum_{i,j=1}^{n}\Big[\sum_{k} c_{i+k-1} c_{j+k-1}\Big]^2 &\leqslant \sum_{i,j,k,l=1}^{n}{|c_{i+k-1}||c_{j+k-1}||c_{i+l-1}||c_{j+l-1}|}\\
&\leqslant C^4 \sum_{i,j,k,l=2}^{n+1}{i^{-1-\delta}j^{-1-\delta}l^{-1-\delta}k^{-1-\delta}}
< \left[C\zeta(1+\delta)\right]^4 < \infty,
\end{aligned}$$
where $\zeta$ denotes the Riemann zeta function.
As a consequence, the limit of $\frac1n$ times this trace, as $n$ tends to infinity, is zero. Similarly, we obtain for the trace of the square of the off-diagonal block of $\Delta$ the bound
$$\begin{aligned}
\sum_{i=1}^n\sum_{j=1}^{n}\sum_{k=j}^{n-i+1}\sum_{l=j}^{n-i+1}{c_{i+k-1}c_{k-j}c_{i+l-1}c_{l-j}}
&\leqslant \sum_{i=1}^n\sum_{j=1}^n\sum_{r=0}^n\sum_{s=0}^n{|c_{i+r+j-1}||c_r||c_{s+j-1}||c_s|}\\
&\leqslant C^4 \sum_{i,j,r,s=1}^{n+1}{i^{-1-\delta} r^{-1-\delta} s^{-1-\delta} j^{-1-\delta}}
< \left[C\zeta(1+\delta)\right]^4 < \infty,
\end{aligned}$$
which shows that the corresponding normalized limit is zero as well. It follows that the quantity defined in [eq-deltahh] converges to zero as $n$ goes to infinity, and therefore that the LSDs of $HH^T$ and of the Toeplitz matrix built from the truncated autocovariances coincide. We show next that the latter agrees with the LSD of $\Gamma_n$, which is clearly given by $\Gamma$. As before it suffices to show, by (Corollary A.41), that the normalized trace of the squared difference converges to zero as $n$ tends to infinity. It follows from the definitions that this difference can be estimated as
$$\begin{aligned}
\sum_{i,j=1}^n\left[\sum_{k=\max{(i,j)}}^n{c_{k-i}c_{k-j}}-\sum_{k=\max{(i,j)}}^\infty{c_{k-i}c_{k-j}}\right]^2
&= \sum_{i,j=1}^n\sum_{k,l=1}^\infty{c_{k+i-1}c_{k+j-1}c_{l+i-1}c_{l+j-1}}\\
&\leqslant C^4\sum_{i,j=2}^{n+1}\sum_{k,l=2}^\infty{i^{-1-\delta}j^{-1-\delta}k^{-1-\delta}l^{-1-\delta}}
< \left[C\zeta(1+\delta)\right]^4 < \infty.
\end{aligned}$$
Consequently, the normalized trace converges to zero as $n$ goes to infinity, and it follows that the LSD of $HH^T$ is $\Gamma$. In order to show that the spectral norm of $HH^T$ is bounded in $n$, we use Gershgorin's circle theorem (Theorem 2), which states that every eigenvalue of $HH^T$ lies in at least one of the balls with centre $(HH^T)_{ii}$ and radius $R_i = \sum_{j \neq i}|(HH^T)_{ij}|$. We first note that the centres satisfy $(HH^T)_{ii} \leqslant \left[C\zeta(1+\delta)\right]^2 < \infty$. To obtain a uniform bound for the radii, we first assume $i < j$ and bound the corresponding off-diagonal sums in the same way; similarly we find that, for $i > j$, the analogous sums are bounded, which completes the proof.

In the following two lemmas, we argue that the distribution $\Gamma$ exists and we prove explicit formulas for it, in the case that the assumptions of [maintheorem] or of [maintheorempiecewise] are satisfied.

[lemma-densitygamma] Let $(c_j)$ be a sequence of real numbers satisfying assumption b. Under the assumptions of [maintheorem], the spectral distribution of $\Gamma_n$ converges weakly, as $n \to \infty$, to an absolutely continuous distribution $\Gamma$ with bounded support and density
$$g(\lambda) = \frac{1}{2\pi}\sum_{\omega \in f^{-1}(\lambda)}\frac{1}{\left|f'(\omega)\right|}.$$
We first note that under assumption b. of [maintheorem] the autocovariance function is absolutely summable, because $\sum_h |\gamma(h)| \leqslant \left[C\zeta(1+\delta)\right]^2 < \infty$. Szego's first convergence theorem (Corollary 4.1) then implies that $\Gamma$ exists, and that the cumulative distribution function of the eigenvalues of the Toeplitz matrix associated with the sequence $(\gamma(h))$ is given by
$$G(\lambda) = \frac{1}{2\pi}\,\mu\big(\{\omega \in [0,2\pi] : f(\omega) \leqslant \lambda\}\big) \tag{eq-cdfgamma}$$
for all $\lambda$ such that the level set $\{\omega \in [0,2\pi] : f(\omega) = \lambda\}$ is a Lebesgue null set; the set of $\lambda$ for which the level set has positive measure is itself a Lebesgue null set, hence it is enough to consider only $\lambda$ for which this is not the case. Let $f^{-1}(\lambda)$ be the preimage of $\lambda$, which is a finite set by assumption. The implicit function theorem then asserts that, for every $\omega \in f^{-1}(\lambda)$, there exists an open interval around $\omega$ on which $f$ is invertible. It is no restriction to assume that these intervals are disjoint.
By choosing $\delta\lambda$ sufficiently small it can be ensured that the interval $(\lambda, \lambda+\delta\lambda]$ is contained in the image of each of these intervals. Observing that the Lebesgue measure of an interval is given by its length, and that the derivative of each local inverse $f_\omega^{-1}$ is the reciprocal of the derivative of $f$, it follows that
$$\begin{aligned}
\frac{{\mathrm{d}}G}{{\mathrm{d}}\lambda}(\lambda) &= \frac{1}{2\pi}\sum_{\omega\in f^{-1}(\lambda)}\lim_{\delta\lambda\to 0}\frac{1}{\delta\lambda}\left|f_\omega^{-1}(\lambda+\delta\lambda)-f_\omega^{-1}(\lambda)\right|\\
&= \frac{1}{2\pi}\sum_{\omega\in f^{-1}(\lambda)}\left|\frac{{\mathrm{d}}}{{\mathrm{d}}\lambda}f_\omega^{-1}(\lambda)\right|
= \frac{1}{2\pi}\sum_{\omega\in f^{-1}(\lambda)}\frac{1}{\left|f'(\omega)\right|}.
\end{aligned}$$
This shows that $G$ is differentiable almost everywhere with the stated derivative. It remains to argue that the support of $\Gamma$ is bounded: the absolute summability of $(\gamma(h))$ implies boundedness of its Fourier transform $f$, and the claim then follows from [eq-cdfgamma], which shows that the support of $\Gamma$ is equal to the range of $f$.

[lemma-densitygammapiecewise] Let $f$ be the piecewise constant spectral density of the linear process, and denote the corresponding autocovariance function by $\gamma$. Under the assumptions of [maintheorempiecewise], the spectral distribution of $\Gamma_n$ converges weakly, as $n \to \infty$, to the distribution $\Gamma$. Without loss of generality we may assume that the jump points $\alpha_1, \ldots, \alpha_k$ are ordered. As in the proof of [lemma-densitygamma] one sees that $\Gamma$ exists, and that its cumulative distribution function is given by
$$G(\lambda) = \frac{1}{2\pi}\,\mu\big(\{\omega \in [0,2\pi] : f(\omega) \leqslant \lambda\}\big), \qquad \forall\, \lambda \in [0,2\pi]\backslash \bigcup_{j=1}^{k}\{\alpha_j\}.$$
The special structure of $f$ thus implies that $G$ is a step function determined by the largest index $j$ such that $\alpha_j \leqslant \lambda$. Since $G$ must be right-continuous, this formula holds for all $\lambda$ in the relevant interval, and it is easy to see that $G$ is the cumulative distribution function of a discrete measure, which completes the proof.

Proof of Theorems [maintheorem] and [maintheorempiecewise]: it is only left to show that the truncation performed in [maintheoremmod] does not alter the LSD, i.e., that the difference of the ESDs of the sample covariance matrices of the truncated and the untruncated processes converges to zero almost surely. By (Corollary A.42), this means that we have to show that the corresponding normalized trace quantity converges to zero. To this end we show that the first factor has a limit, and that the second factor converges to zero, both almost surely. We shall prove that the variances of the relevant traces are summable. For this purpose we need two estimates, [eq-fubini], which are implied by the Cauchy-Schwarz inequality, the assumption that the fourth moments of the $Z_{it}$ are finite, and the assumed absolute summability of the coefficients. Therefore we can, by Fubini's theorem, interchange expectation and summation to bound the variance of the trace. Considering separately the terms where the noise indices coincide and where they differ, we can write the expectation accordingly: for the expectation in the first sum not to be zero, the indices must pair up, in which case its value is unity, while the expectation in the second term can always be bounded by a constant. Due to [npy] and the assumed polynomial decay of the coefficients, the resulting bound is summable in $n$, which implies, by the first Borel-Cantelli lemma, that the trace converges to a constant almost surely. In order to show that this constant is zero, it suffices to show that the expectation converges to zero.
Since the expectation factorizes and the $Z_{it}$ are independent, one sees, using [eq-fubini1] and again Fubini's theorem, that the expectation converges to zero because the $(c_j)$ are square-summable. We now consider the second factor of the expression and define the corresponding truncation error term. Then, because of [eq-fubini] and similarly as before, we have
$$\underbrace{\sum_{i=1}^p \sum_{t=1}^n \sum_{k=n+1}^\infty \sum_{m=n+1}^\infty c_k c_m Z_{i,t-k} Z_{i,t-m}}_{=\,\mathrm{II}\ \to\ 0\ \text{a.s.}}
\;+\; 2\sum_{i=1}^p \sum_{t=1}^n \sum_{k=n+1}^\infty \sum_{m=1}^n c_k c_m Z_{i,t-k} Z_{i,t-m}. \tag{eq-trdelta}$$
[eq-fubini] allows us to apply Fubini's theorem to compute the variance of the second term in the previous display, which is, by the same reasoning as before, bounded by a summable sequence times a positive constant. Having, by [eq-fubini1], expected value zero, the second term of [eq-trdelta], and therefore the whole expression, converges to zero almost surely. Thus we only have to look at the contribution of the remaining factor in the expression. From [maintheoremmod] we know that the ESD of the truncated sample covariance matrix converges almost surely weakly to some nonrandom distribution with bounded support; hence, denoting by $\lambda_i$ its eigenvalues, their normalized sum converges almost surely. It follows that, in [eq-iandii], the first factor is bounded and the second factor converges to zero, and so the proof of [maintheorem, maintheorempiecewise] is complete.

For several classes of widely employed linear processes, [maintheorem] can be used to obtain an explicit description of the limiting spectral distribution. In this section we consider the class of autoregressive moving average (ARMA) processes as well as fractionally integrated ARMA models. The distributions we obtain in the cases of AR(1) and MA(1) processes can be interpreted as one-parameter deformations of the classical Marchenko-Pastur law. Given polynomials $\varphi$ and $\vartheta$, an ARMA(p,q) process with autoregressive polynomial $\varphi$ and moving average polynomial $\vartheta$ is defined as the stationary solution to the stochastic difference equation
$$\varphi(B) X_t = \vartheta(B) Z_t,$$
where $B$ denotes the backshift operator. If the zeros of $\varphi$ lie outside the closed unit disk, it is well known that $X$ has an infinite-order moving average representation $X_t = \sum_{j \geq 0} c_j Z_{t-j}$, where the $c_j$ are the coefficients in the power series expansion of $\vartheta/\varphi$ around zero. It is also known that the $c_j$ decay geometrically, so that assumption b. of [maintheorem] is satisfied. While the autocovariance function of a general ARMA process does not in general have a simple closed form, its Fourier transform is given by
$$f(\omega) = \left|\frac{\vartheta(e^{-{\mathrm{i}}\omega})}{\varphi(e^{-{\mathrm{i}}\omega})}\right|^{2}, \qquad \omega \in [0, 2\pi]. \tag{eq-specdensarma}$$
Since $f$ is rational in $e^{-{\mathrm{i}}\omega}$, the remaining assumptions of [maintheorem] are satisfied as well. In order to compute the LSD of the sample covariance matrix, it is necessary, by [lemma-densitygamma], to find the roots of a trigonometric polynomial of possibly high degree, which can be done numerically. We now consider the special case of the ARMA(1,1) process, $X_t - \varphi X_{t-1} = Z_t + \vartheta Z_{t-1}$, for which one can obtain explicit results; a quick simulation sketch of the AR(1) sub-case is given below, before the explicit formulas are derived.
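The sketch builds a data matrix with independent AR(1) rows and computes the eigenvalues of the sample covariance matrix, which a histogram (cf. [fig-histoarma11]) can then compare against the LSD of [maintheorem]; the parameter values are illustrative.

```python
import numpy as np

# AR(1) rows: X_t = phi * X_{t-1} + Z_t, with i.i.d. standard normal noise.
# Eigenvalues of S = X X^T / n can be compared with the limiting spectral
# distribution derived in the text.

rng = np.random.default_rng(0)
p, n, phi = 300, 1200, 0.5
burn = 200                              # discard transient so rows are ~stationary

Z = rng.standard_normal((p, n + burn))
X = np.zeros_like(Z)
for t in range(1, n + burn):
    X[:, t] = phi * X[:, t - 1] + Z[:, t]
X = X[:, burn:]

S = X @ X.T / n
eigs = np.linalg.eigvalsh(S)            # histogram these against the LSD
```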
by [ eq - specdensarma ], the spectral density of x is given by .\ ] ] implies that the lsd of the autocovariance matrix has a density , which is given by :f(\omega)=\lambda } \frac{1}{\left|f'(\omega)\right|}\\ = & \frac{1}{\pi(\vartheta+\varphi\lambda)\sqrt{\left[(1+\vartheta)^2-\lambda(1-\varphi)^2\right]\left[\lambda(1+\varphi)^2-(1-\vartheta)^2\right]}}{\mathbf{1}}_{(\lambda_-,\lambda_+)}(\lambda),\end{aligned}\ ] ] where by [ maintheorem ] , the stieltjes transform of the limiting spectral distribution of is the unique mapping that satisfies the equation \left[(1+\varphi)^2+m_z(1-\vartheta)^2\right]}}.\notag\end{aligned}\ ] ] this is a quartic equation in which can be solved explicitly .an application of the stieltjes inversion formula then yields the limiting spectral distribution of .if one sets , one obtains an ma(1 ) process ; plots of the densities obtained in this case for different values of and are displayed in [ fig - densitiesma1 ] .similarly , the case corresponds to an ar(1 ) process ; see [ fig - densitiesar1 ] for a graphical representation of the densities one obtains for different values of and in this case .for the special case , , [ fig - histoarma11 ] compares the histogram of the eigenvalues of with the limiting spectral distribution obtained from [ maintheorem ] for different values of . for the stieltjes transform of the limiting spectral distribution of the sample covariance matrix of an arma(1,1 ) processshould be compared to ( * ? ? ?* eq . ( 2.10 ) ) , where the analogous result is obtained for an autoregressive process of order one .they use the notation and consider the spectral distribution of instead of . if one observes that this difference in the normalization amounts to a linear transformation of the corresponding stieltjes transform , one obtains their result as a special case of [ eq - stieltjesarma11 ] . in many practical situations , data exhibit longrange dependence , which can be modelled by longmemory processes .denote by the backshift operator and define , for , the ( fractional ) difference operator by a process is called a fractionally integrated arma(p , d , q ) processes with and if is an arma(p , q ) process .these processes have a polynomially decaying autocorrelation function and therefore exhibit longrangedependence , cf .* theorem 13.2.2 ) and .we assume that , and that the zeros of the autoregressive polynomial of lie outside the closed unit disk .then it follows that has an infiniteorder moving average representation , where the have , in contrast to our previous examples , not an exponential decay , but satisfy , for some .therefore , if , one can apply [ maintheorem ] to obtain the lsd of the sample covariance matrix , using that the spectral density of is given by .\ ] ] both authors gratefully acknowledge financial support from technische universitt mnchen - institute for advanced study funded by the german excellence initiative , and from the international graduate school of science and engineering .
we derive the limiting distribution of the eigenvalues of a large sample covariance matrix when the data are dependent in time . more precisely , the dependence of each variable is modelled as a linear process whose innovations are assumed to be independent random variables with finite fourth moments . if the sample size and the number of variables both converge to infinity such that their ratio converges to a positive constant , then the empirical spectral distribution of the sample covariance matrix converges to a nonrandom distribution which depends only on this ratio and on the spectral density of the linear process . in particular , our results apply to ( fractionally integrated ) arma processes , which we illustrate with several examples .
feature extraction and model selection are important for sparse high dimensional data analysis in many research areas such as genomics , genetics and machine learning . motivated by the need of robust and efficient high dimensional model selection method , we introduce a new penalized quasi - likelihood estimation for linear model with high dimensionality of parameter space .consider the estimation of the unknown parameter in the linear regression model where is an -vector of response , is an matrix of independent variables with being its -th row , is a -vector of unknown parameters and is an -vector of i.i.d .random errors with mean zero , independent of .when the dimension is high it is commonly assumed that only a small number of predictors actually contribute to the response vector , which leads to the sparsity pattern in the unknown parameters and thus makes variable selection crucial . in many applications such as genetic association studies and disease classifications using high - throughput data such as microarrays with gene - gene interactions ,the number of variables can be much larger than the sample size .we will refer to such problem as ultrahigh - dimensional problem and model it by assuming for some .following , we will refer to as a non - polynomial order or np - dimensionality for short .popular approaches such as lasso , scad , adaptive lasso and elastic - net use penalized least - square regression : where is a specific penalty function .the quadratic loss is popular for its mathematical beauty but is not robust to non - normal errors and presence of outliers .robust regressions such as the least absolute deviation and quantile regressions have recently been used in variable selection techniques when is finite .other possible choices of robust loss functions include huber s loss , tukey s bisquare , hampel s psi , among others .each of these loss functions performs well under a certain class of error distributions : quadratic loss is suitable for normal distributions , least absolute deviation is suitable for heavy - tail distributions and is the most efficient for double exponential distributions , huber s loss performs well for contaminated normal distributions .however , none of them is universally better than all others . how to construct an adaptive loss function that is applicable to a large collection of error distributions ?we propose a simple and yet effective quasi - likelihood function , which replaces the quadratic loss by a weighted linear combination of convex loss functions : where are convex loss functions and are positive constants chosen to minimize the asymptotic variance of the resulting estimator . from the point of view of nonparametric statistics , the functions can be viewed as a set of basis functions , not necessarily orthogonal , used to approximate the unknown log - likelihood function of the error distribution .when the set of loss functions is large , the quasi - likelihood function can well approximate the log - likelihood function and therefore yield a nearly efficient method .this kind of ideas appeared already in traditional statistical inference with finite dimensionality .we will extend it to the sparse statistical inference with np - dimensionality .the quasi - likelihood function can be directly used together with any penalty function such as -penalty with , lasso i.e. 
-penalty , scad , hierarchical penalty , resulting in the penalized composite quasi - likelihood problem : instead of using folded - concave penalty functions , we use the weighted - penalty of the form for some function and initial estimator , to ameliorate the bias in -penalization and to maintain the convexity of the problem .this leads to the following convex optimization problem : when , the derivative of the penalty function , ( [ eq5 ] ) can be regarded as the local linear approximation to problem ( [ eq4 ] ) . in particular , lasso corresponds to , scad reduces to and adaptive lasso takes where .there is a rich literature in establishing the oracle property for penalized regression methods , mostly for large but fixed .one of the early papers on diverging is the work by under conditions of .more recent works of the similar kind include , , , which assume that the number of non - sparse elements is finite .when the dimensionality is of polynomial order , recently gave the conditions under which the scad estimator is an oracle estimator .we would like to further address this problem when with and for , that is when the dimensionality is of exponential order .the paper is organized as follows .section [ sec2 ] introduces an easy to implement two - step computation procedure .section [ sec3 ] proves the strong oracle property of the weighted -penalized quasi - likelihood approach with discussion on the choice of weights and corrections for convexity .section [ sec4 ] defines two specific instances of the proposed approach and compares their asymptotic efficiencies .section [ sec5 ] provides a comprehensive simulation study as well as a real data example of the snp selection for the down syndrome .section [ sec6 ] is devoted to the discussion . to facilitate the readability , all the proofsare relegated to the appendices a , b & c.we would like to describe the proposed two - step adaptive computation procedure and defer the justification of the appropriate choice of the weight vector to section [ sec3 ] . in the first step, one will get the initial estimate using the lasso procedure , i.e : and estimate the residual vector ( for justification see discussion following condition [ cond2 ] ) .the matrix and vector are calculated as follows : where is a choice of the subgradient of , is the -th component of , and should be considered as a consistent estimator of , which is the derivative of at .for example , when , then and .the optimal weight is then determined as in the second step , one calculates the quasi maximum likelihood estimator ( qmle ) using weights as * remark 1 * : note that zero is not an absorbing state in the minimization problem . 
those elements that are estimated as zero in the initial estimate are given a chance to escape from zero , whereas those nonvanishing elements can be estimated as zero in . * remark 2 * : the number of loss functions is typically small or moderate in practice . problem can be easily solved using a quadratic programming algorithm . the resulting vector can have vanishing components , automatically eliminating inefficient loss functions in the second step ( [ eq8 ] ) and hence learning the best approximation of the unknown log - likelihood function . this can lead to considerable computational gains . see section [ sec4 ] for additional details . * remark 3 * : problem ( [ eq8 ] ) is a convex optimization problem when the s are all convex and , with and being and sub - matrices of , respectively . the following theorem shows that is the unique minimizer of on the whole space with an overwhelming probability . as a consequence , becomes the biased oracle . we establish the following theorem under conditions on the non - stochastic vector ( see condition [ cond2 ] ) . it is also applicable to stochastic penalty weights as in ; see the remark following condition [ cond2 ] . [ thm1 ] under conditions [ cond1]-[cond4 ] , the estimators and exist and are unique on a set with probability tending to one . furthermore , for a positive constant . for the previous theorem to be nontrivial , we need to impose the dimensionality restriction , where controls the rate of growth of the correlation coefficients between the matrices and , the important predictors and unimportant predictors ( see condition [ cond5 ] ) , and is a non - negative constant related to the maximum absolute value of the design matrix [ see condition [ cond4 ] ] . it can be taken as zero and is introduced to deal with the situation where is small or zero so that the result is trivial . the larger is , the more stringent the restriction imposed on the choice of . when the above conditions hold , the penalized composite quasi - likelihood estimator is equal to the biased oracle estimator with probability tending to one exponentially fast . * remark 4 : * the result of theorem [ thm1 ] is stronger than the oracle property defined in , once the properties of are established ( see theorem [ thm2 ] ) . it was formulated by for the scad estimator with polynomial dimensionality . it implies not only the model selection consistency but also the sign consistency : in this way , the result of theorem [ thm1 ] nicely unifies the two approaches to discussing the oracle property in high dimensional spaces . let and be the first components and the remaining components of , respectively . according to theorem [ thm1 ] , we have with probability tending to one .
hence, we only need to establish the properties of .[ thm2 ] under conditions [ cond1]-[cond5 ] , the asymptotic bias of non - vanishing component is controlled by with furthermore , when , possesses asymptotic normality : where is a unit vector in and }{\left(\sum_{k=1}^kw_ke[\partial \psi_k(\varepsilon)]\right)^2}.\ ] ] since the dimensionality depends on , the asymptotic normality of is not well defined in the conventional probability sense .the arbitrary linear combination is used to overcome the technical difficulty .in particular , any finite component of is asymptotically normal .the result in theorem [ thm2 ] is also equivalent to the asymptotic normality of the linear combination stated in , where is a matrix , for any given finite number .this theorem relates to the results of in classical setting ( corresponding to ) where he established asymptotic normality of -estimators when the dimensionality is not higher than .the asymptotic normality ( [ eq10 ] ) allows us to do statistical inference for non - vanishing components .this requires an estimate of the asymptotic covariance matrix of .let be the residual and be its -th component .a simple substitution estimator of is see also the remark proceeding .consequently , by , the asymptotic variance - covariance matrix of is given by another possible estimator of the variance and covariance matrix is to apply the standard sandwich formula . in section [ sec5 ] , through simulation studies , we show that this formula has good properties for both smaller and larger than ( see tables 3 and 4 and comments at the end of section [ sec5.1 ] ) .note that only the factor in equation depends on the choice of and it is invariant to the scaling of .thus , the optimal choice of weights for maximizing efficiency of the estimator is where and are defined in section [ sec2 ] using an initial estimator , independent of the weighting scheme .* remark 5 : * the quadratic optimization problem does not have a closed form solution , but can easily be solved numerically for a moderate . 
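to make remark 5 concrete , the weight selection can be carried out with an off - the - shelf solver . the sketch below minimizes the sample analogue of the ratio e[ psi_w(eps)^2 ] / ( sum_k w_k e[ partial psi_k(eps ) ] )^2 appearing in the asymptotic variance , over non - negative weights summing to one ; the two component losses and all variable names are illustrative stand - ins , not the paper's implementation .

import numpy as np
from scipy.optimize import minimize

def psi_l1(r):                         # subgradient of the l1 loss |r|
    return np.sign(r)

def psi_l2(r):                         # derivative of the l2 loss r^2 / 2
    return r

def optimal_weights(resid, psis=(psi_l1, psi_l2), eps=1e-3):
    # m[j, k] ~ e[psi_j(eps) psi_k(eps)] and a[k] ~ e[d/dr psi_k(eps)],
    # both estimated from the residuals of the initial lasso fit
    P = np.column_stack([psi(resid) for psi in psis])
    M = P.T @ P / len(resid)
    a = np.array([np.mean((psi(resid + eps) - psi(resid - eps)) / (2 * eps))
                  for psi in psis])
    obj = lambda w: (w @ M @ w) / (a @ w) ** 2
    res = minimize(obj, x0=np.full(len(psis), 1.0 / len(psis)),
                   bounds=[(0.0, None)] * len(psis),
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x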
the above efficiency gain , over the least - squares , could be better understood from the likelihood point of view .let denote the unknown error density .the most efficient loss function is the unknown log - likelihood function , .but since we have no knowledge of it , the set , consisting of convex combinations of given in , could be viewed as a collection of basis functions used to approximate it .the broader the set is , the better it can approximate the log - likelihood function and the more efficient the estimator in becomes .therefore , we refer to as the quasi - likelihood function .the restriction of guarantees the convexity of so that the problem becomes a convex optimization problem .however , this restriction may cause substantial loss of efficiency in estimating ( see table 1 ) .we propose a one - step penalized estimator to overcome this drawback while avoiding non - convex optimization .let be the estimator based on the convex combination of loss functions and be its nonvanishing components .the one - step estimator is defined as ^{-1 } \phi_{n,{\mathbf{w}}}(\hat{\mbox{\boldmath }}_1 ) , \\hat{{\mbox{\boldmath }}}^{\footnotesize{\mbox{os}}}_{\mathbf{w}2 } = \mathbf{0},\ ] ] where [ thm3 ] under conditions [ cond1]-[cond5 ] , if , then the one - step estimator enjoys the asymptotic normality : provided that , is lipchitz continous , and , where denote the maximum eigenvalue of a matrix and are defined as in theorem [ thm2 ] .the one - step estimator overcomes the convexity restriction and is always well defined , whereas is not uniquely defined when convexity of is ruined. note that if we remove the constraint of ( ) , the optimal weight vector in is equal to this can be significantly smaller than the optimal variance obtained with convexity constraint , especially for multi - modal distributions ( see table 1 ) .the above discussion prompts a further improvement of the penalized adaptive composite quasi - likelihood in section [ sec2 ] .use to compute the new residuals and new matrix and vector .compute the optimal unconstrained weight and the one - step estimator .in this section , we discuss two specific examples of penalized quasi - likelihood regression .the proposed methods are complementary , in the sense that the first one is computationally easy but loses some general flexibility while the second one is computationally intensive but efficient in a broader class of error distributions . first , we consider the combination of and loss functions , that is , and .the nuisance parameter is the median of the error distribution .let denote the corresponding penalized estimator as the solution to the minimization problem : if the error distribution is symmetric , then and the minimization problem ( [ eq17 ] ) can be recast as a penalized weighted least square regression which can be efficiently solved by pathwise coordinate optimization or least angle regression .if , the penalized least - squares problem ( [ eq17 ] ) is somewhat different from ( [ eq5 ] ) since we have an additional parameter . using the same arguments , and treating as an additional parameter for which we solve in , we can show that the conclusions of theorems [ thm2 ] and [ thm3 ] hold with the asymptotic variance equal to where ] , for some . for sufficiently small , ] . by the cauchy - schwarz inequality , note that for all , we have and which is of order by condition [ cond4 ] . hence , uniformly in . finally , we deal with .let .by lemma 3.1 of , we have for a positive constant . 
combining all of the above results , we have with probability tending to one that where the right hand side is larger than 0 when for a sufficiently large .since the objective function is strictly convex , there exists a unique minimizer such that [ lem6.3 ] under the conditions of theorem [ thm2 ] , ^{-1/2}\sum_{i=1}^n \psi_{\mathbf{w}}(\varepsilon_i ) { \mbox{\bf b}}^t{\mbox{\bf s}}_i \stackrel{\mathcal{d}}{\to } \mathcal{n}(0,1)\ ] ] where .* proof of lemma [ lem6.3 ] : * by condition [ cond1 ] , since is independent of , we have , and ^{-1/2 } \sum_{i=1}^n \psi_{\mathbf{w}}(\varepsilon_i ) \mathbf{b}^t { \mbox{\bf s}}_i \right ] = 1.\end{aligned}\ ] ] to complete proof of the lemma , we only need to check the lyapounov condition . by condition [ cond1 ] , . furthermore , condition [ cond4 ] implies for a positive constant . using these together with the cauchy - schwartz inequality, we have ^{-1/2 } \psi_{\mathbf{w}}(\varepsilon_i ) \mathbf{b}^t { \mbox{\bf s}}_i \right|^{2 + \xi}\\ & = & o(1 ) \sum_{i=1}^n \left| n^{-1/2 } { \mbox{\bf b}}^t { \mbox{\bf s}}_i \right|^{2 + \xi}. \\ & = & o(1 ) \sum_{i=1}^n \left| n^{-1/2 } \|{\mbox{\bf s}}_i\| \right|^{2 + \xi},\end{aligned}\ ] ] which tends to zero by condition [ cond4 ] .this completes the proof .the following bernstein s inequality can be found in lemma 2.2.11 of + .[ lem6.4 ] let be independent random variables with zero mean such that , for every ( and all ) and some constants and .then for .then the following inequality is a consequence of previous bernstein s inequality .let satisfy the condition of lemma [ lem6.4 ] with .for a given sequence , . a direct application of lemma [ lem6.4 ] yields * proof of theorem [ thm1 ] : * we only need to show that is the unique minimizer of in on a set which has a probability tending to one . since already satisfies , we only need to check .we now define the set .let and consider the event with being chosen later . then , by condition [ cond1 ] and bernstein s inequality , it follows directly from that where is the -th column of .taking , we have for some positive constant , by condition [ cond4 ] .thus , by using the union bound , we conclude that we now check whether holds on the set .let be the -dimensional vector with the -th element .then , by condition [ cond2 ] \right \|_\infty \nonumber \\ & = & o\left ( n^{1/2 } u_n + \left\| { \mbox{\bf q}}^t \mbox{diag } ( \partial { \mbox{\boldmath }}_{\mathbf{w}}({\mbox{\bf v } } ) ) { \mbox{\bf s}}(\hat { \mbox{\boldmath }}_{1}^o - { \mbox{\boldmath }}_1^ * ) \right\|_{\infty } \right ) \end{aligned}\ ] ] where lies between and . by condition[ cond5 ] , the second term in is bounded by where the equality follows from lemma [ lem6.2 ] . by the choice of parameters , by taking .hence , by lemma [ lem6.1 ] , is the unique global minimizer . by using , we have or equivalently , note that .we have for any vector , consequently , for any unit vector , by using conditions [ cond4 ] and [ cond5 ] .this shows that the second term in ( [ eqc4 ] ) , when multiplied by the vector is of order by condition [ cond2 ] .therefore , we need to establish the asymptotic normality of the first term in .this term is identical to the situation dealt by . 
using his result , the second conclusion of theorem [ thm2 ] follows . this completes the proof . * proof of theorem [ thm3 ] : * first of all , by taylor expansion , where lies between and . consequently , by the definition of the one - step estimator and , we have where . we first deal with the remainder term . note that and where . by the lipschitz continuity , we have where is the lipschitz coefficient of . let be the identity matrix of order and . by ( [ eqc8 ] ) , we have . hence , all of the eigenvalues of the matrix are no larger than . similarly , by ( [ eqc8 ] ) , , and all of its eigenvalues are at least . consequently , by condition [ cond5 ] and the assumption of , it follows from ( [ eqc7 ] ) that . thus , for any unit vector , the main term in can be handled by using lemma [ lem6.3 ] and the same method as . this completes the proof .
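for reference , the statement of lemma [ lem6.4 ] above is the standard form of bernstein 's inequality ( lemma 2.2.11 in van der vaart and wellner 's book ) ; the constants below play the roles of the elided ones up to notation :

\begin{lemma}
Let $Y_1,\dots,Y_n$ be independent random variables with zero mean such that
$\mathbb{E}|Y_i|^m \le \frac{m!}{2}\, M^{m-2} v_i$ for every $m \ge 2$ (and all $i$)
and some constants $M$ and $v_i$. Then, for $v \ge v_1 + \dots + v_n$,
\[
  \mathbb{P}\big( |Y_1 + \dots + Y_n| > x \big)
  \;\le\; 2 \exp\!\Big( -\frac{x^2}{2(v + Mx)} \Big).
\]
\end{lemma}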
in high - dimensional model selection problems , penalized least - squares approaches have been extensively used . this paper addresses the question of both robustness and efficiency of penalized model selection methods and proposes a data - driven weighted linear combination of convex loss functions , together with a weighted -penalty . the procedure is completely data - adaptive and does not require prior knowledge of the error distribution . the weighted -penalty is used both to ensure the convexity of the penalty term and to ameliorate the bias caused by the -penalty . in the setting with dimensionality much larger than the sample size , we establish a strong oracle property of the proposed method , which possesses both model selection consistency and estimation efficiency for the true non - zero coefficients . as specific examples , we introduce a robust composite l1 - l2 method and an optimal composite quantile method , and evaluate their performance on both simulated and real data . _ key words _ : composite qmle , lasso , model selection , np dimensionality , oracle property , robust statistics , scad
impact factor was first described in 1972 , and the isi journal impact factor is today the leading indicator of journal influence , despite its weaknesses . the impact factor of a journal in a given year measures the average number of citations to recent articles from articles published in the given year . the conventional two - year impact factor is calculated from citation data over a three - year period . for example , the impact factor of a journal in 2014 is the ratio between the number of citations from all considered journals in the source year 2014 to articles published in that journal in the target years 2012 - 2013 , and the number of articles published in the journal in the target years . a five - year impact factor is calculated in a similar way with a five - year target window . the advantage of the widely used impact factor is that it is easy to calculate and explain , once the selection of journals is made . even though impact factor is not usually seen as a flow - based metric , it is in fact an example of a zero - order markov model of flow between journals : the simple count of citations to journals corresponds to measuring the visit frequency of a random walker that visits journals proportional to their citation counts . while the measure is widely used , a major drawback is the underlying assumption that all citations carry equal weight , irrespective of origin . several rankings have been suggested to overcome the problem with uniform citation weights . the eigenfactor score and its per - article normalized article influence score , for example , build on the pagerank algorithm and take advantage of the entire network of citations . generally speaking , the eigenfactor score measures the relative journal visit rate of a random walker that navigates between journals by following random citations . therefore , the eigenfactor score of a journal can be interpreted as a proxy for how often a researcher who randomly navigates the citation landscape accesses content from the journal . in this way , the eigenfactor score corresponds to a first - order markov model for evaluating journal influence . in a recursive fashion , important journals are those that are highly cited by important journals . in practice , a citation from an influential journal will be worth more than a citation from a less significant journal , because its importance is inherited from the citing journal . however , the inherited importance is aggregated across a journal and pushed further no matter where it came from . as a result , the actual inheritance structure of the article - level citation network is lost , with the strongest effect on multidisciplinary journals . while the main difference between impact factor and eigenfactor is that they correspond to a zero- and a first - order markov model of flow between journals , respectively , they differ in two other ways as well . first , while the conventional impact factor uses a two - year citation target window , eigenfactor uses a five - year target window by default . the extended time window was introduced because , in many fields , articles are not frequently cited until several years after publication . moreover , the eigenfactor score considers inheritance of importance _ between _ journals and therefore ignores self - citations . as a result , the incentive to boost the ranking of a journal with self - citations vanishes .
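as a toy illustration of the two - year impact factor defined above ( the numbers are invented ) :

# two-year impact factor for source year 2014: citations in 2014 to articles
# published in 2012-2013, divided by the number of articles published in
# 2012-2013
citations_2014_to_targets = 1200
articles_2012_2013 = 400
impact_factor_2014 = citations_2014_to_targets / articles_2012_2013
print(impact_factor_2014)  # 3.0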
in this paper, we focus on the general effects of markov order rather than specific implementations .therefore , we exclusively study rankings with five - year target windows and exclude all self - citations .to model citation flows between journals , we first aggregate article - level citation data in journals and then model the network flow with a random walk process .we construct citation flows with different amounts of memory by aggregating the citation data in networks that correspond to zero- , first- , and second - order markov models .below we in turn describe how we aggregate the data and model the flow .we use article - level citation data from thomson reuters web of science 1980 - 2013 .the data include almost one billion citations between more than 30 million articles published in about 20,000 journals . in this study, we focus on articles published in the years 20072012 and their citations to articles published in 2002 - 2007 . specifically , we are interested in articles published in 2007 , their citations to articles published in 20022006 , and citations to the articles published in 2007 from articles published in 20082012 .we need the two overlapping time windows to construct the second - order markov model .figure [ fig : schematic ] illustrates how we construct journal citation networks with different amount of memory from article - level citation data . in fig .[ fig : schematic]a , we show a schematic citation network with articles published in 11 different journals .the articles were published in three different time periods , the early target years 20022007 , the early source year 2007 , which also is the target year of the late source years 20082012 . for the zero- and first - order markov models, we used the early target and source years 20022007 , and for the second - order markov model we also included the late source years 20082012 .we excluded proceedings , but included all journals that received citations during the target period .for the zero - order markov model , we counted the number of citations to articles published in the early target years 2002 - 2006 from articles published in the early source year 2007 . to construct the journal network, we aggregated these citations in the journals of the cited articles .that is , each citation between an article published in journal in the early source year to an article published in journal in the early target years , adds a weight of one to the cited journal , .this procedure is exemplified in figs .[ fig : schematic]a and b , with articles published in the early target years in green and articles published in the early source year in blue .figure [ fig : schematic]a shows how one article published in journal 1 receives three citations , how four articles published in journal 3 receive eight citations , and how one article published in journal 4 receives two citations .for this zero - order markov network shown in fig .[ fig : schematic]b , journals are connected to other journals with weights proportional to the number of incoming citations , independent of citation source .that is , a random walk process on a zero - order markov network is memoryless such that the next step does not depend on the currently visited journal . 
for the first - order markov model, we aggregated the citations described above in pairs of citing and cited journals .that is , each citation between an article published in journal in the early source year to an article published in journal in the early target years adds a link weight of one between the citing and the cited journals , .figure [ fig : schematic]c illustrates how the 13 incoming links in the zero - order markov model have specific sources of the citing journals in the first - order markov model .accordingly , a random walk process on a first - order markov network has a one - step memory such that the next step depends on the currently visited journal . for the second - order markov model, we also included citations from articles published in the late source years .we used citation chains , trigrams of articles published in journal in the late source years that cite articles in journal in the early source year that in turn cite articles in journal in the early target years , as illustrated in fig .[ fig : schematic]a . to construct the second - order markov network, we aggregated the trigrams in memory nodes , such that each citation chain adds a link weight of one between memory nodes and , .that is , each journal has memory nodes , one for each other journal that cites it . constructed in this way , a random walk process on a second - order markov network has a two - step memory such that the next step depends not only on the currently visited journal , but also on the previously visited journal .the procedure to construct a second - order markov network above assumes that each article in the early source years only is cited by one journal in the later source years .for each citation from an article in journal in the early source year to an article in journal in the early target years , we identify all articles published in any journal in the late source years that cite the article in the early source year , and add a fractional link weight of between memory nodes and . moreover , if we can not identify a trigram , because the article in the early source year was never cited by an article in the late source years , we add a fractional link weight between memory nodes and for all memory nodes of journal . in this way , we obtain the first - order markov network if we aggregate the memory nodes in their respective journals .we use a random walk process on the networks with different amounts of memory to obtain the journal rankings .the random walk processes can be seen as proxies for how researchers navigate scholarly literature , as they read articles and follow citations in their search for information . in the zero - order markov model , a researcher would pick any citation and follow it to the cited journal irrespective of where the currently read article is published ( fig . [fig : schematicwalk]a ) . in the first - order markov model, a researcher would pick a citation from any article published in the same journal as the currently read article and follow it to the cited journal ( fig .[ fig : schematicwalk]b ) . in the second - order markov model ,a researcher would pick a citation from an article published in the same journal as the currently read article that is also cited by an article published in the previously visited journal and follow it to the cited journal ( fig .[ fig : schematicwalk]c ) . 
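the aggregation steps just described amount to simple bookkeeping over citation pairs and trigrams . a schematic sketch follows , where the data layout ( a list of journal pairs for the early source year , and a list of trigrams with the fractional weights already attached ) is assumed for illustration :

from collections import defaultdict

def build_networks(pairs, weighted_trigrams):
    # pairs: (citing journal i, cited journal j) for the early source year
    # weighted_trigrams: ((k, i, j), w) chains from the late source years
    # through the early source year to the early target years, with w the
    # fractional weight described in the text
    zero = defaultdict(float)     # j -> citation count (zero-order)
    first = defaultdict(float)    # (i, j) -> link weight (first-order)
    second = defaultdict(float)   # ((k, i), (i, j)) -> memory-node link weight
    for i, j in pairs:
        zero[j] += 1.0
        first[(i, j)] += 1.0
    for (k, i, j), w in weighted_trigrams:
        second[((k, i), (i, j))] += w
    return zero, first, second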
in this way, the random walk processes correspond to researchers with zero- , one- , and two - step memory .the first- , second- , and third order markov models are obtained with the same random walk process on the three networks with zero- , one- , and two - step memory .formally , we represent the journal visited at time by the random variable .the random walk process generates a sequence of visited journals .in general , the journal visited at time depends on the full history of the dynamic process , but for the processes we consider here the memory is limited .for the zero - order markov model illustrated in fig .[ fig : schematicwalk]a , the probability to step to journal next is simply given by the relative number of citations to that journal irrespective of the currently visited journal , which therefore also is the stationary solution of the zero - order markov model , for the first - order markov model illustrated in fig .[ fig : schematicwalk]b , the probability to step to journal next from journal is given by the relative number of citations to from , accordingly , the probability that the random walker visits node in step is in principle however , to ensure a unique solution independent of where the random walker is initiated , at a low rate the random walker instead moves according to the zero - order markov model , with stationary solution given by the zero - order markov step corresponds to random teleportation to journals proportional to their number of incoming citations .this link - weighted teleportation gives results that are more robust to changes in the teleportation rate .we use teleportation rate in all analyses . notethat this teleportation scheme is slightly different from the one used in eigenfactor .however , unrecorded teleportation to a journal proportional to the number of articles it publishes followed by a recorded first - order markov step , as used in eigenfactor , is approximately the same as a single recorded zero - order markov step .for example , they would be identical if all articles cited the same number of articles .for the second - order markov model illustrated in fig .[ fig : schematicwalk]c , the random walker moves from memory node to memory node proportional to the link weights between the memory nodes . for example , the probability to visit memory node after visiting memory node is accordingly , the probability that the random walker visits memory node in step is in principle but to ensure a unique solution we include teleportation steps also in this process , here is given by the relative number of links to memory node , which is equivalent to the relative number of links between node and , consequently , the stationary solution is given by this teleportation scheme gives unbiased comparisons because journals receive the same amount of teleported flow as in the first - order markov model , and proportional to the stationary solution of the zero - order markov model in eq .( [ eq : zerosolution ] ) .we obtain the nontrivial stationary solutions of eq .( [ eq : firststationary ] ) and ( [ eq : secondstationary ] ) with the power - iteration method . for per - article rankings , analogous to the impact factor and the article influence score, we simply divide the stationary solution of a journal by the number of articles published by that journal in the early target years . for easy comparison between the rankings of the different markov models ,we normalize with respect to the average journal . 
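a minimal power - iteration sketch of the first - order model with the recorded , link - weighted teleportation described above , followed by the per - article normalisation ; the row - stochastic transition matrix P , the zero - order solution p0 , the article counts , and the teleportation rate alpha are all assumed to be given :

import numpy as np

def stationary(P, p0, alpha, tol=1e-12):
    # p <- (1 - alpha) * P^T p + alpha * p0, iterated to the fixed point
    p = p0.copy()
    while True:
        p_next = (1.0 - alpha) * (P.T @ p) + alpha * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

def per_article_score(p, articles):
    # divide by the number of articles published in the early target years
    # and normalise so that the average journal has score one
    s = p / articles
    return s / s.mean()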
in this way , a ranking score of a journal larger than one tells how many times higher the per - article stationary distribution is compared with the average journal . the common framework for the three ranking models makes it easy to study the effects of the markov order alone . however , the common framework also means that the models studied here are not identical to the established impact factor and eigenfactor , and conclusions should be treated with care even if the differences are small . in summary , unlike impact factor , we disregard all self - links , and unlike eigenfactor , we use recorded teleportation to journals proportional to their citation counts . in this section , we show the results of comparisons between ranking scores obtained with zero- , first- , and second - order markov models . we first compare the top journals in explicit ranking lists , and then show quantitative results for ranking scores and robustness . figure [ fig : ranking ] shows the rankings of the top 20 journals obtained with the three different markov models . the ranking scores are given by the per - article stationary distribution of random walkers , normalized such that the average journal has score 1 , as described above . as with impact factor , review journals with few highly cited reviews have the highest rankings in all three models . they are followed by high - impact multidisciplinary journals . journals that lose from the zero- to the first - order flow model also tend to lose from the first- to the second - order model , and , vice versa , journals that gain from the zero- to the first - order flow model also tend to gain from the first- to the second - order model . however , the multidisciplinary journals gain only marginally from the first- to the second - order model . for the similar ranking analysis with the less complete and more biased citation data from jstor reported in ref . , the effect on multidisciplinary journals was even stronger , because leaking flow between fields did not cancel to the same degree . in any case , and as schematically illustrated in fig . [ fig : schematicwalk]c , the relative rankings show the largest change from the zero- to the first - order model .

( figure [ fig : ranking ] : flows of the top 20 journal rankings between the three panels zero - order markov , first - order markov , and second - order markov . )

the absolute ranking scores of the top journals in fig . [ fig : ranking ] show a similar increase from zero- to first- and from first- to second - order markov dynamics . in the zero - order markov model , the five top ranking scores are about 30 times higher than the average article , in the first - order markov model they are about 40 times higher , and in the second - order markov model they are about 45 times higher . moreover , the higher - order markov models tend to give a wider range of scores . this effect can be explained by their non - uniform citation values ; citations from top - ranked journals are worth more than citations from average - ranked journals . in the second - order markov model , with more detailed structural information and more specific re - distribution of flow value , the range of scores is even wider . figure [ fig : distribution ] shows the cumulative journal frequency and ranking scores . the cumulative ranking scores show that the top 100 journals in the zero - order markov model share 15.9% of all flow , whereas the top 100 journals in the second - order markov model share 22.7% of all flow . the first - order model is in between the other models with 21.2% of all flow . overall , the higher - order markov models show a wide range of scores from the lowest to the highest values . a method that is good in theory is of little use if the results are not robust in practice . for journal rankings , the most crucial factor is how robust the results are to the particular selection of journals included in the analysis . for journals indexed by thomson reuters web of science , the citation data are more or less complete for the indexed journals . however , only a fraction of journals are indexed , and the rankings inevitably depend on the selection . therefore , we examined the robustness of the different models by performing the analysis on random sub - samples of the set of all journals . we generated sub - samples that contained 90% , 80% , ... , 10% of all journals by randomly including the journals .
since highly ranked journals are more likely to be included in practice , we complemented this uniform sampling with a proportional sampling in which we included journals proportional their citation counts . for each sub - sample size, we generated 10 samples and measured the ranking similarity between all pairs of rankings for each model .we used the normalized mutual information for rankings to measure the similarity .this measure quantifies between 0 and 1 how much information one ranking provides about the other for journals common to both rankings .results close to 1 mean that few journal pairs swap ranking order between rankings and indicate that the results are robust to the selection of journals .figure [ fig : samplerobustness ] shows that the ranking robustness to journal selection tends to increase with markov order .all models become less robust with decreasing sample sizes , but the median ranking similarities among all journals are generally higher for the first- and second - order markov models ( fig . [fig : samplerobustness]a ) , and among top 100 journals highest for the second - order markov model ( fig .[ fig : samplerobustness]b ) .for example , for the ranking similarity among top 100 ranked journals with 90% proportionally sampled journals reported in fig .[ fig : samplerobustness]b , the second - order model is more robust than the zero- and first - order models in 89% and 84% of the comparisons , respectively .at least three factors influence the robustness : the weight of citations , how local the model is , and the range of ranking scores . in the zero - order model ,all citations carry equal weight and perturbations of low - ranked journals will have as high impact as perturbations of high - impact journals . in the higher - order models ,the weight of citations from low - ranked journals is reduced and has less impact on the ranking .the zero - order and also the second - order model are more local in the sense that perturbations do not propagate across the network .for the zero - order model , it is simply because only directly cited journals are affected . for the second - order model , it is because the model can capture more constrained dynamics as illustrated in fig .[ fig : schematicwalk ] and demonstrated in ref . . 
finally , the range of scores can influence the robustness in two ways .first , simply because any two journals tend to be farther apart in units of ranking score .second , because in the normalized mutual information between rankings we make pairwise comparisons with weights proportional to the ranking scores of the journals .since the top journals have more extreme ranking values in the higher - order models , effectively fewer journal pairs will dominate the similarity measure .all these effects together make the higher - order markov models more robust to the selection of journals included in the analysis .finally , to validate that the data are sufficient for analysis with higher - order markov models , we conducted a cross - validation test .if the data were not sufficient , the models would overfit the data and this could lead to false conclusions .for example , the ranking scores of the higher - order markov models could be different from the lower - order markov models simply because of noise in sparse data .we performed the cross - validation test by predicting movements of the random walker in the second - order markov model in eq .( [ eq : secondordernextstep ] ) with the zero- , first- , and second - order models .first , we divided all articles in 2007 into 10 random sets and generated 10 corresponding sets of trigrams , such that aggregating them all would give the complete set of trigrams .then , in each of 10 folds , we aggregated nine sets into a training set and used the last set for validation . for each fold and each order model , we measured the cross - entropy of the probability distributions of the validation set and of the training set for the zero- , first- and second - order models , that is , we measured the cost in bits of predicting the next journal in a random walk on the validation set with transition rates obtained from the training set .we found that navigation on the validation set costs 10.1(1 ) bits with the zero - order , 9.1(1 ) bits with the first - order , and 9.2(1 ) bits with the second - order markov model fitted on the training set .thus , the two higher - order models have a clear advantage over the zero - order model .while the two higher - order models perform similarly averaged over all journals , a journal - by - journal comparison highlights their differences .the second - order model can better predict pathways through high - impact multidisciplinary journals ( see fig .[ fig : schematicwalk]c ) , and therefore gives a higher robustness for top 100 journals ( fig .[ fig : samplerobustness]b ) , at an increased risk of overfitting pathways through field - specific journals with fewer citations . to quantify this effect , we derived the ratio of the posterior probabilities of the second- to the first - order model from the cross - entropy with bayes theorem . with uniform prior on the models and , the ratio between the posterior probabilities of the two models is table [ tab : diff ] shows that this model probability ratio is particularly high for multidisciplinary journals such as science and nature . 
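the cross - validation quantities above reduce to an average of log - probabilities and , via bayes theorem with uniform priors , to a likelihood ratio . a sketch ( smoothing of zero - probability transitions and all other bookkeeping are left out , and the per - step likelihood factorisation is an assumption ) :

import numpy as np

def cross_entropy_bits(probs):
    # cost in bits per step of predicting the held-out transitions, where
    # probs[i] is the training-model probability of the i-th observed step
    probs = np.asarray(probs, dtype=float)
    return -np.mean(np.log2(probs))

def posterior_ratio_second_vs_first(h1_bits, h2_bits, n_steps):
    # with uniform priors the posterior ratio equals the likelihood ratio
    # 2^{n (h1 - h2)}; n_steps is the number of held-out transitions
    return 2.0 ** (n_steps * (h1_bits - h2_bits))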
overall , the zero - order model underfits the data , the first - order model underfits multidisciplinary journals , and the second - order model has a tendency to overfit movements in field - specific journals with fewer citations , but succeeds in capturing movements in multidisciplinary journals . this result suggests that the best model is a combination of the first- and second - order markov models . we have shown that the robustness of flow - based rankings to the selection of included journals tends to increase with increasing markov order . lower - order rankings , of which impact factor is an example , depend more on the particular selection of journals , because all citations carry equal weight and because the range between the lowest and highest ranked journals is smaller than for higher - order models . since the decision about which journals to include is difficult to make objectively and rarely made transparently , the robustness of a ranking scheme is important . whereas our analysis indicates that higher - order models perform better , the performance gain for the second - order markov model comes at the cost of requiring more citation data over a longer time period . while rankings can have many different objectives and be subject to various constraints that would favour other ranking schemes , if the sole objective of the ranking is to accurately capture likely pathways of researchers navigating between journals , model selection shows that using the more complex models pays off . however , the first - order markov model underfits multidisciplinary journals , and the second - order markov model shows a tendency to overfit journals with limited data . the results suggest that an adaptive method that combines first- , second- , and even higher - order dynamics for multidisciplinary journals could further improve the ranking . we thank s. karlsson and c. wiklander for providing the journal citation data . m.r . was supported by the swedish research council grant 2012 - 3729 .
as the number of scientific journals has multiplied , journal rankings have become increasingly important for scientific decisions . from submissions and subscriptions to grants and hirings , researchers , policy makers , and funding agencies make important decisions with influence from journal rankings such as the isi journal impact factor . typically , the rankings are derived from the citation network between a selection of journals and unavoidably depend on this selection . however , little is known about how robust rankings are to the selection of included journals . here we compare the robustness of three journal rankings based on network flows induced on citation networks . they model pathways of researchers navigating scholarly literature , stepping between journals and remembering their previous steps to different degrees : zero - step memory as impact factor , one - step memory as eigenfactor , and two - step memory , corresponding to zero- , first- , and second - order markov models of citation flow between journals . we conclude that higher - order markov models perform better and are more robust to the selection of journals . whereas our analysis indicates that higher - order models perform better , the performance gain for the second - order markov model comes at the cost of requiring more citation data over a longer time period . science builds on previous science in a recursive quest for new knowledge . researchers put great effort into finding the best work by other researchers and into achieving maximum visibility of their own work . therefore , they both search for good work and seek to publish in prominent journals . inevitably , where researchers publish becomes a proxy for how good their work is , which in turn influences decisions regarding hiring , promotion , and tenure , as well as university rankings and academic funding . as a consequence , researchers depend on the perceived importance of the journals they publish in . while actually reading the work published in a journal is the only way to qualitatively evaluate the scientific content , different metrics are nevertheless used to quantitatively assess the importance of scientific journals . in different ways , the metrics extract information from the network of citations between articles published in the journals . in this paper , we analyze three flow - based journal rankings that at different orders of approximation seek to capture the pathways of researchers navigating scholarly literature . specifically , the metrics measure the journal visit frequency of random walk processes that correspond to zero- , first- , and second - order markov models . that is , given a citation network between journals and a random walker following the citations , movements in a zero - order model are independent of the currently visited journal , movements in a first - order model depend only on the currently visited journal , and movements in a second - order model depend both on the currently visited journal and the previously visited journal . evaluating ranking methods inevitably becomes subjective , as their objectives often differ . which method is best , the most transparent , the most difficult to game , or the one with the highest predictive power ? irrespective of the specific objective , perhaps the most important criterion is nevertheless the robustness of the method .
because journal rankings depend on the selection of journals included in the analysis , we compare the robustness of rankings obtained with zero- , first- , and second - order markov models with random resampling techniques . we first describe the commonly used metrics impact factor and eigenfactor , which correspond to specific implementations of zero- and first - order markov models , respectively . then we put them in the same mathematical framework and show how a second - order markov model can be devised in a similar way . we use data from thomson reuters web of science and compare the methods both qualitatively and quantitatively in terms of ranking order , ranking score distributions , and robustness .
there has been a lot of recent interest in the chaotic dynamics of nonlinear systems that possess invariant subspaces ; a number of quite subtle dynamical effects come to light in examining the interaction of attractors with the invariant subspaces .moreover a good understanding of these effects is essential in interpreting and predicting dynamics of simulations and experiments where the presence of discrete spatial symmetries implies the existence of invariant subspaces .a fundamental bifurcation in such a setting is the blowout bifurcation where a chaotic attractor within an invariant subspace loses stability to perturbations transverse to the invariant subspace . in doing so , can either create a nearby ` branch ' of chaotic attractors or lose stability altogether .in contrast with this , the presence of invariant subspaces can lead to the existence of what have been called robust heteroclinic cycles between equilibria , that is , heteroclinic cycles that are persistent under small perturbations that preserve the symmetry .these cycles may or may not be attracting .recently it has been recognised that cycles to more complicated invariant sets can also occur robustly in symmetric systems , in particular to chaotic invariant sets ; this behaviour was named ` cycling chaos ' in a recent paper by dellnitz __ and has been further investigated by field and ashwin . in this paperwe find there is a connection between these dynamical properties ; we show a scenario where a blowout bifurcation creates an attracting ` cycling chaotic ' state in a bifurcation that is analogous to a saddle - node homoclinic bifurcation with equilibria replaced by chaotic invariant sets .we also investigate how the attracting cycling chaotic state that is created in the blowout bifurcation loses stability at a resonance of liapunov exponents .( a _ resonance bifurcation _ in its simplest form occurs when a homoclinic cycle to an equilibrium loses attractiveness ; this occurs when the real parts of eigenvalues of the linearisation become equal in magnitude . ) in spite of the system being neither a skew product nor being a homoclinic cycle to a chaotic set as in we see similar behaviour and can predict the loss of stability by looking at a rational combination of liapunov exponents .we find this scenario of a blowout bifurcation to cycling chaos is a mechanism for transition from stable two - dimensional to fully three - dimensional magnetoconvection .the model we study is a galerkin truncation for magnetoconvection in a region with square plan , subject to periodic boundary conditions on vertical boundaries and idealised boundary conditions on horizontal boundaries .phenomenologically speaking we see a change from a chaotically varying two - dimensional flow ( with trivial dependence on the third coordinate , and which comes arbitrarily close to a trivial conduction state ) to an attracting state where trajectories spend increasingly long times near one of two symmetrically related two - dimensional flows interspersed with short transients .we explain and investigate this transition in terms of a blowout bifurcation of a chaotic attractor in an invariant subspace . 
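the loss of transverse stability at a blowout of this kind is governed by a transverse liapunov exponent , which is a single time average along a trajectory confined to the invariant subspace . a generic numerical sketch , with the model - specific pieces ( the in - subspace time step and the local transverse growth rate ) left as placeholders rather than taken from the equations studied below :

def transverse_lyapunov(x0, step, transverse_rate, n_steps, dt):
    # average the instantaneous logarithmic growth rate of a transverse
    # perturbation along an in-subspace trajectory; a positive average
    # signals that the attractor has blown out
    x, total = x0, 0.0
    for _ in range(n_steps):
        x = step(x, dt)
        total += transverse_rate(x) * dt
    return total / (n_steps * dt)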
In the paper of Ott and Sommerer that coined the phrase 'blowout bifurcation', two scenarios are identified. Either the blowout is _supercritical_, in which case it leads to an _on-off intermittent_ state, or it is _subcritical_ and there is no nearby attractor after the bifurcation. We find an additional robust possibility for bifurcation at blowout. Near this transition the three-dimensional flow patterns show characteristics of intermittent cycling between two symmetrically related 'laminar' states corresponding to two-dimensional flows, but the time spent near the laminar states is, on average, infinite. This suggests that the blowout is supercritical, but in a weaker sense which we make precise. Namely, we say a blowout is _set supercritical_ if there is a branch of chaotic attractors after the blowout whose limit contains the attractor in the invariant subspaces before the blowout. In particular, there may be other invariant sets contained in this limit, and so any natural measures on the bifurcating branch of attractors (if they exist) need not limit to the natural measure of the system on the invariant subspace. We also show that the attractors corresponding to two-dimensional flows are not Liapunov stable but are Milnor attractors near the transition to three dimensions, and so in particular we expect the presence of noise to destabilise two-dimensional attractors near blowout by a _bubbling_ type of mechanism. We find in our example that the state of cycling chaos is attracting once it has been created: trajectories cycle between neighbourhoods of the chaotic sets within the invariant subspaces, and the time between switches from one neighbourhood to the next increases geometrically as trajectories get closer and closer to the invariant subspaces. By estimating the rate of increase of the switching times, we are able to show that cycling chaos ceases to be attracting in a resonance bifurcation. One remarkable aspect of this study is that we are able to predict the parameter values at which the blowout bifurcation and the resonance occur, requiring only a single numerical average over the chaotic set within the invariant subspace. The paper is organised as follows: in section [secmodel] we introduce the ODE model for magnetoconvection and discuss its symmetries and corresponding invariant subspaces. This is followed by a description of the creation, persistence and loss of stability of the cycling chaos on varying a parameter in numerical simulations in section [secnumerics]. Section [secmap] shows how one can, under certain assumptions, derive a map model of the dynamics of the ODE that has the same dynamical behaviour. Section [secblowout] is a theoretical analysis of the blowout bifurcation that creates the cycling chaotic attractor, and is followed in section [secresonance] by a theoretical analysis of its loss of stability. Finally, section [secdiscuss] discusses some of the implications of this work for the chaotic dynamics of symmetric systems. The model we study is the ODE ([eqode]) on a nine-dimensional phase space; these equations have been derived as an asymptotic limit of a model of three-dimensional incompressible convection in a plane layer with an imposed vertical magnetic field. For further details, and details of the derivation, see the references.
In the context of this model, and represent the amplitudes of convective rolls with their axes aligned in the and (horizontal) directions respectively, and represent modes that cause the rolls to tilt, and and represent shear across the layer in the and directions. The modes and represent the horizontal component of the magnetic field in the and directions, and represents the horizontally averaged temperature. The model has five primary parameters: is proportional to the imposed temperature difference across the layer, with at the initial bifurcation to convection; is related to the horizontal spatial periodicity length, but is an arbitrary small parameter in the model of ; and are dimensionless viscous and magnetic diffusion coefficients; and is proportional to the square of the imposed magnetic field. Note that , and are scaled by factors of , and from their usage in . Two secondary parameters that we use are and . In the parameter regime of interest, all parameters are non-negative. Consider acting on the plane with unit cell in the usual way, with the torus acting by translations on the plane, and by reflections in the axes and rotation through . We define the following group elements; note that , and any reflection, can be used to generate the group. Table [tabiso] lists selected fixed point subspaces of the action, together with name, representative point and dimension; there are others (e.g. ) but these are not important for the dynamics we discuss here. (Figure [figlattice], the isotropy lattice diagram, appears here.) We consider the subgroup . Since contains the subgroup, generated by and , of translations, it follows that is isomorphic to a semidirect product ( ). The ODE ([eqode]) is equivariant under the group of symmetries acting on by this action, which gives rise to a number of isotropy types, shown in table [tabiso]. Figure [figlattice] gives a partial isotropy lattice for this group action. Dynamics in always decays to the trivial equilibrium point, corresponding to the absence of convection. We refer to dynamics in and as two-dimensional, since these correspond to two-dimensional convection in the original problem (though its dimension is 5). Dynamics in and corresponds to mirror-symmetric two-dimensional rolls with their axes aligned along the and directions; we refer to equilibrium points in these subspaces as -rolls and -rolls respectively. In and , convection is two-dimensional but not mirror symmetric, and is referred to as tilted rolls. Dynamics in and corresponds to three-dimensional convection that is still invariant under one mirror reflection (tilted rolls with a cross-roll component). Otherwise we say the dynamics is fully three-dimensional.
In a slight break from convention we speak of the fixed point subspaces as having isotropy subgroup , rather than considering the isotropy subgroups as the fundamental objects. We present numerical simulations of the ODEs that demonstrate the two aspects of cycling chaos that we seek to explain: how cycling chaos can be created in a blowout bifurcation, and how cycling chaos can cease to be attracting. We concentrate on parameter values that are known to have Lorenz-like chaotic dynamics within and : (and hence and ), although we note that qualitatively similar attractors are found for a large proportion of nearby parameter values. These parameter values correspond to those in figures 15(c) and 20(a) in . The numerical method used was a Bulirsch-Stoer adaptive integrator, with a tolerance for the relative error set to for each step. We use the parameter as a normal parameter (see ) for the dynamics in and ; that is, it controls instabilities transverse to and in the directions and , without altering the dynamics inside or . In figure [figphaseportraits] we show a typical example of a time series when there is attracting cycling chaos (with ). The system starts with oscillating chaotically while is quiescent, switches to a state where oscillates chaotically while is quiescent, and so on. A more careful examination reveals that after a switch the trajectory remains close to a fixed point in or for an increasing length of time. Physically, this corresponds to chaotic two-dimensional convection that switches between rolls aligned in the and directions. Figure [figphaseportraits](c) shows the chaotic trajectories projected onto the plane, while (d) illustrates switching between (the 'horizontal' plane) and (the 'vertical' plane). Note how the chaotic behaviour in figure [figphaseportraits](a) and (b) repeats: trajectories spend longer and longer near the unstable manifolds of the -roll and -roll equilibrium points and take longer and longer between each switch. This is illustrated further in figure [figslowingdown], which shows intersections of a trajectory with the Poincaré surface close to the trivial solution. There are two phases evident in the cycle: the order-one chaotic behaviour of near (while grows exponentially), and the exponential growth of as the trajectory moves away from (while behaves chaotically). The time between switches increases monotonically, and the rate of increase varies with , the normal parameter. In these numerical simulations, the switching time saturates when certain components of the solution come close to the machine accuracy of the computations (about for double precision). We argue that this is evidence for attracting cycling chaos: trajectories approach a structurally stable heteroclinic cycle between chaotic sets.
In figure [figcycle] we show a schematic picture of the heteroclinic cycle. We recall that the fixed point spaces and are 2-dimensional, is 3-dimensional, and are 5-dimensional, and and are 6-dimensional invariant subspaces in the 9-dimensional phase space. The system starts near the -roll equilibrium point in , which is unstable to shear ( ); the parameters are such that the unstable manifold of -rolls comes close to the trivial solution and returns to a neighbourhood of -rolls. This global near-connection within is the source of the chaotic behaviour. We refer to the chaotic sets in and as and respectively; these contain the relevant roll equilibrium points, and they both contain the trivial solution, so there are structurally stable connections from the trivial solution to the roll equilibrium points and from those to and . Within , is unstable to cross-rolls ( ), since the trivial solution is equally unstable in the and directions. Eventually grows large enough that there is a switch to the -roll equilibrium point in , at which point starts to grow. Thus the cycle connects invariant sets in the following fixed point subspaces: between the equilibrium points in and , and and within and , with the structurally stable connections needed to complete the cycle from and to the -roll and -roll equilibrium points lying within and . Note that our scenario is certainly a simplification of the full set of heteroclinic connections; there are other fixed points contained in the closure of the smallest attracting invariant set, and in particular the origin is contained within the cycle. We turn now to the question of how the cycling chaos is created. With we find attracting two-dimensional chaos (figure [figblowout]a, b), which loses stability around (c) in a blowout bifurcation; for (d), there is exponential growth away from into . Within , the -roll equilibrium points are sinks, establishing the structurally stable connection from the chaos in to the equilibrium point in , and hence the creation of cycling chaos. As illustrated in figure [figslowingdown], as decreases towards , trajectories spend progressively longer in each visit to and before switching to the conjugate chaotic invariant set. Eventually, trajectories come arbitrarily close to the invariant subspaces and (limited only by machine accuracy in the numerical simulations). Figure [figresonance] shows how the time interval between switches between and increases as the system approaches this heteroclinic cycle, and how the rate of approach to the heteroclinic cycle decreases as approaches . The intervals between switches grow by factors of about , and per switch for , and respectively, and and for and . By , the heteroclinic cycle is no longer attracting, and for and , the system settles down to periodic behaviour that is bounded away from and , though the periodic orbits are actually quite close to these invariant subspaces. For these calculations, we imposed a cut-off of : the calculation ceased once any variable became smaller than this.
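As a concrete illustration of the diagnostic used above, the following sketch estimates the geometric factor by which inter-switch intervals grow; the switch times here are synthetic stand-ins for those read off a simulation of the ODEs.

```python
import numpy as np

# Synthetic switch times whose intervals grow by a factor of 1.05 per switch;
# in practice these would be measured from a simulated trajectory.
switch_times = np.cumsum(10.0 * 1.05 ** np.arange(20))
intervals = np.diff(switch_times)
factors = intervals[1:] / intervals[:-1]
print("mean growth factor per switch:", factors.mean())
# factor > 1: trajectories are attracted to the cycling chaotic state;
# factor < 1: the cycle is repelling and trajectories escape.
```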
In the next section, we derive a map that allows us to compute longer trajectories more accurately, and we demonstrate cycling chaos, its creation in a blowout bifurcation and its loss of attractiveness using this map. We analyse the blowout bifurcation in section [secblowout], and argue in section [secresonance] that the cycling chaos created in that bifurcation ceases to be attracting at a resonance. In this section, we discuss a map that models the behaviour of the ODEs in the parameter regime described above, and that helps clarify what happens near the blowout bifurcation and the resonance of the cycling chaos. We rely on a map derived for the two-dimensional dynamics by breaking up the flow into pieces near and between equilibrium points. Within the subspace , the leading stable eigendirection of the origin (that is, the eigendirection with negative eigenvalue closest to zero) is in the plane, and the unstable direction of the origin is along the axis. All trajectories leaving the origin in that direction follow the structurally stable connection within to one or other of the -roll equilibrium points. The one-dimensional unstable manifold of -rolls lies within , and the chaotic attractor is associated with a global bifurcation at which that unstable manifold collides with the origin. Near this global bifurcation, the dynamics within is modelled by an augmented Lorenz map, defined as a map from the surface of section to itself. Details of the derivation are given in , but briefly: is a parameter related to in the ODEs, with at the global bifurcation; is a small positive constant that we scale to one; if and if ; depends on the ratio of various stable and unstable eigenvalues at the origin and the -roll equilibrium points (with ); and is a (negative) constant. It is a straightforward matter to include the effect of a small perturbation in the and directions. Near the subspace , and will grow linearly at a rate that depends on . This means that we get a mapping of the form ([eqnearpx]), where and are constants and the exponents and again depend on ratios of eigenvalues, with and . The exponent is negative since a small value of implies that the trajectory spends longer near the origin, so has a longer time to grow. Similarly, near we get the mapping ([eqnearpy]), as long as is much smaller than . These maps are valid provided that the trajectory remains close to the or subspaces. We model the switch from behaviour near to near by assuming that ([eqnearpx]) holds provided that ; otherwise, the trajectory leaves the neighbourhood of the origin along the -axis, visits a -roll equilibrium point and returns to the surface of section near the origin, according to ([eqswitchpxpy]), where, as above, , and are constants and , and are ratios of eigenvalues. Similarly, ([eqnearpy]) holds if ; otherwise the trajectory switches from to according to ([eqswitchpypx]). Then, in the full map, as long as , the trajectory behaves chaotically under ([eqnearpx]) (near the subspace ), with obeying a Lorenz map and growing or decaying according to the value of . If grows sufficiently that , there is one iterate of map ([eqswitchpxpy]), which makes small as the system switches from to , followed by many iterates of ([eqnearpy]), and one iterate of ([eqswitchpypx]) as the system switches back to .
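The composite map just described can be iterated directly; because the transverse variable ranges over hundreds of orders of magnitude, it must be tracked in logarithmic coordinates. The following minimal sketch shows the chaotic phase near one subspace; the Lorenz-map parameters and the transverse exponent E are placeholders, not the fitted constants of the paper.

```python
import numpy as np

MU, KAPPA = 0.05, 0.7   # assumed Lorenz-map parameters (not the fitted values)
E = -0.3                # assumed transverse exponent (negative, as in the text)

def lorenz_map(x):
    # one-dimensional Lorenz-like map x -> sign(x) * (-MU + |x|**KAPPA)
    return np.sign(x) * (-MU + abs(x) ** KAPPA)

x, log_y = 0.3, -50.0   # start very close to the invariant subspace (y = e^-50)
for _ in range(2000):
    log_y += E * np.log(abs(x))   # transverse growth per iterate depends on |x|
    x = lorenz_map(x)
print("log10 |y| after 2000 iterates:", log_y / np.log(10))
```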
The exponents can be determined from the eigenvalues of the trivial solution and of -rolls. The relevant eigenvalues of the origin are (in the notation of ) the growth rate of and , and the slowest decay rate of , determined by the eigenvalue closest to zero. The relevant eigenvalues of -rolls are the growth rate and the slowest decay rate of , determined by the eigenvalues of the corresponding linearisation. The other important eigenvalues at the -roll equilibrium point are the decay rate of ( ) and the decay rate of ( ). From fitting the map to trajectories of the ODEs with (see figure [figblowout]b), we find values for the constants. The eigenvalues that do not depend on are as follows. With these eigenvalues, we have , , and , while (negative) and depend on . We seek to reproduce in the map what we have observed in the ODEs: the blowout bifurcation that creates cycling chaos, and the loss of attractiveness of cycling chaos. The first of these (figure [figmapone]) is straightforward: typical Lorenz-like chaos is shown in figure [figmapone](a), while the change from decay towards to growth away from at is shown in (b-d). Cycling chaos is found after the blowout bifurcation (figure [figmaptwo]a): the system switches between chaos in and chaos in , getting closer and closer to the invariant subspaces after each switch and spending longer between switches (figure [figmaptwo]b). The rate of increase of the intervals between switches depends on , and is about a factor of 1.01 per switch for . Decreasing to (figure [figmaptwo]c, d) results in growth away from cycling chaos and saturation to a periodic orbit. These calculations required formulating the map ([eqnearpx])-([eqswitchpypx]) in terms of the logarithms of the variables in order to cope with the large ( ) dynamic range. The one place in which accuracy is inevitably lost is in the switch from to (and back), using ([eqswitchpxpy]): the term is swamped by the order-one term. As a result of this, the chaotic trajectories start in exactly the same way each time the system switches from one invariant subspace to the other. In terms of the ODEs, the trajectory entering a neighbourhood of close to -rolls shadows the unstable manifold of that equilibrium point (lying inside and leading to the chaotic set ) until the switch to . This is in agreement with the ODE behaviour shown in figure [figphaseportraits]. We briefly review some definitions. If is a compact flow-invariant subset, then we define the unstable set and the stable set (or _basin of attraction_), where (resp. ) is the limit set of a trajectory of the ODE passing through in the limit (resp. ). We say a compact invariant set is an _attractor_ in the sense of Milnor if , where is Lebesgue measure on . It is said to be a _minimal Milnor attractor_ if there are no proper compact invariant subsets with . As shown in , Milnor attractors can occur in a robust manner if the attractor lies within an invariant subspace. Suppose is an invariant subspace and is a compact invariant set in such that is a minimal Milnor attractor for the flow restricted to , and such that has a natural ergodic invariant measure for the flow restricted to . It is possible to show (under certain additional hypotheses) that is an attractor in the full system if and only if , where is the most positive Liapunov exponent for in a direction transverse to (see for a detailed discussion).
If we have access to a normal parameter , such that we can vary the normal dynamics without changing the dynamics in , we can vary through zero and observe loss of attractiveness of at what Ott and Sommerer have termed a _blowout bifurcation_. Ott and Sommerer identify two possible scenarios at blowout. At a _supercritical blowout_ the attractor bifurcates to a family of attractors displaying on-off intermittency, with trajectories that come arbitrarily close to and thus linger near for long times (but with a well-defined mean length of lingering, or 'laminar phase'). At a _subcritical blowout_ there are no nearby attractors after the loss of stability of . (The question of how to distinguish these cases is discussed in the reference.) However, what we see in this paper is that there is at least one other possibility at blowout bifurcation that is also robust to normal perturbations, namely a bifurcation to a cycling state, or a robust heteroclinic cycle between chaotic invariant sets. This can be seen to be set supercritical but not supercritical, in the following sense. By reparametrising if necessary, we can assume that we have a normal parameter such that undergoes a blowout bifurcation at . By the argument above, is an attractor only if . If there is a family of minimal Milnor attractors such that for all , then we say that the blowout is _set supercritical_. Note that, as discussed in and implied by the very word 'blowout', we cannot typically expect the limit of the attractors to just be the set . If in addition supports a family of natural ergodic invariant measures ( ) and as , then we say the blowout is _measure supercritical_, or just _supercritical_. (Convergence is understood in the topology on probability measures.) This definition of criticality was used in . If the blowout is not set supercritical, then we say the blowout is _subcritical_. It seems that one will often get bifurcations in symmetric systems that are set supercritical but not supercritical. For example, one can generically get a blowout from a group orbit of attractors (under a finite group) which yields a single attractor that limits onto the whole group orbit. Moreover, in the cycling chaos studied here, the attractor after blowout includes not only the chaotic invariant sets, but also fixed points involved in the heteroclinic cycle. As it is a cycle, it does not possess a natural ergodic invariant measure, and averages of observables from the system in this state will typically not converge but continue to oscillate more and more slowly. The blowout scenario described above holds only for variation of normal parameters; in general the variation of a parameter will affect both the normal dynamics and the dynamics within the invariant subspace. If the dynamics in the invariant subspace is chaotic, we can expect to see a large number of bifurcations happening within the invariant subspace, and these will cause the blowout to be spread over an interval of parameter values; there is no reason why should vary continuously with a parameter that varies in a very discontinuous manner. Nevertheless, for the numerical results presented in section [secnumerics] the dynamics in the invariant subspace vary in a quite regular way. This is because in our system the parameter is a normal parameter for the attractor; for more discussion of normal parameters, see .
By mechanisms described in , the dynamics within and can become chaotic by means of a symmetric global bifurcation that generates Lorenz-like attractors approaching the equilibrium solution with full symmetry and the -roll or -roll equilibrium solutions in or . Now suppose there exist chaotic attractors and contained in and (on average they have more symmetry); from here on we will mostly discuss , but note that as is a conjugate attractor, the same will hold for . These attractors contain a saddle equilibrium in (the trivial solution), and so in particular they cannot be Liapunov stable attractors. This is because must be a non-trivial manifold (otherwise would be an attractor); thus is also non-trivial, as is conjugate to . Therefore , and so cannot be Liapunov stable. However, it is possible that is an attractor in the sense of Milnor; this will imply that the basin will be locally riddled in the sense of . We assume that and are minimal Milnor attractors containing . We also assume that they have natural ergodic invariant measures and supported on them. We now concentrate on . Whether is an attractor depends on the spectrum of its normal Liapunov exponents. Note that the zero Liapunov exponent corresponding to time translation always corresponds to a perturbation tangential to the invariant subspace. If , where is the most positive normal Liapunov exponent for the measure , it is possible to show that satisfies ([eqmattr]), and hence that is a Milnor attractor. It is comparatively easy to compute in this case, as the largest transverse Liapunov exponent of in the parameter regime discussed corresponds to perturbations in the direction, where linearised perturbations are described by the linearisation of the ODE. Thus we can see that , where . From the equation for it is possible to show that along bounded trajectories of the ODE. For the given parameter values it is possible to approximate , and so , implying that the blowout bifurcation occurs at approximately , which is in good agreement with the simulations (figure [figblowout]b). Note that whenever is hyperbolic, within the attractor there exists at least one ergodic invariant measure (in particular the one supported on ) such that . In particular, this means that the basins of the attractors and are riddled for all parameter values in our problem! Likewise, in the map ([eqnearpx]) near , the logarithm of obeys , where is a function of . The most positive normal Liapunov exponent in this case is then , where the average is taken over the Lorenz attractor.
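A hedged sketch of the single numerical average involved: the transverse exponent is a time average of an instantaneous growth rate along a trajectory confined to the invariant subspace. The base flow f and the growth-rate function below are stand-ins (the classical Lorenz equations and an assumed linear dependence on the normal parameter), since the model's exact expressions are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):                     # stand-in chaotic flow inside the subspace
    x, y, z = u                  # (classical Lorenz equations, for illustration)
    return [10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 * z / 3.0]

def growth_rate(u, lam):         # assumed linearised transverse growth rate,
    return lam - u[2] / 10.0     # depending on the normal parameter lam

sol = solve_ivp(f, (0.0, 200.0), [1.0, 1.0, 1.0], max_step=0.01)
t, traj = sol.t, sol.y.T

def lambda_T(lam):               # transverse exponent as a single time average
    rates = np.array([growth_rate(u, lam) for u in traj])
    return np.trapz(rates, t) / (t[-1] - t[0])

print(lambda_T(2.0), lambda_T(3.0))  # blowout where lambda_T changes sign
```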
Averaging over iterates of ([eqlorenz]) yields , and, solving ([eqlambdae]) for , we obtain at the blowout bifurcation, in agreement with the data in figure [figmapone](b). As passes through , loses stability and becomes a chaotic saddle, and in doing so it creates a continuum of connections from to the fixed point (-rolls) in . These connections are robust to -equivariant perturbations, as -rolls are sinks within , and so there is a robust cycle alternating between the equilibrium points in and and the chaotic sets in and . We observed in sections [secnumerics] and [secmap] that this cycling chaos is attracting once it is created; we turn to the stability of cycling chaos in the next section. It is clear from figures [figslowingdown], [figresonance] and [figmaptwo] that the key to understanding the loss of stability of cycling chaos lies in obtaining the rate at which trajectories approach that state. It is possible to estimate this rate for the map, and we use information gained in this calculation to carry out the same estimate in the ODEs; we are thus able to obtain the values of at which the loss of stability occurs, in the map and in the ODEs. What is remarkable is that, once a single average over has been computed numerically, the value of at the bifurcation point can be obtained analytically. We suppose that (in the map) the system arrives near with given values of , iterates using ([eqnearpx]) times ( ), ending up in a state with . There follows one iterate of ([eqswitchpxpy]), leaving the system near in a state (we are ignoring changes of sign of the variables). We need to establish an estimate of , a measure of the distance from , given that was small when the system started close to . Properly, the value of after iterates will depend on the values of over those iterates, but if is large, we approximate the detailed history of by its average and obtain , where are Liapunov exponents in the expanding and contracting directions around ; is precisely the Liapunov exponent that went through zero at the blowout bifurcation in ([eqlambdae]). Note that we have effectively approximated the chaotic set by an equilibrium point. The trajectory escapes from the neighbourhood of once ; since is typically of order one (compared to the tiny initial value of ), we assume that the escape takes place when , obtaining ([eqnumberits]), with and both of order one. One iterate of ([eqswitchpxpy]) now yields expressions that will be dominated by once the trajectory is very close to the heteroclinic cycle, so, neglecting terms of order one, we obtain ([eqlinearisedmap]), a linear map acting on $\left(\log|y_0|,\ \log|y_2|\right)^{T}$ up to $\mathcal{O}(1)$ terms, for large negative values of and . A conjugate map will describe the return from to . One eigenvalue of the matrix in ([eqlinearisedmap]) is zero because of the way we approximated the behaviour near the chaotic set; the other eigenvalue is , which we refer to as the _stability index_. The zero eigenvalue forces , so after one iterate of the composite map ([eqlinearisedmap]), the dynamics will obey the corresponding one-dimensional relation. Clearly, if , then and will tend to and the trajectory will asymptote to attracting cycling chaos. Conversely, if , cycling chaos is unstable and trajectories will leave the domain of validity of the approximations we have made. We can also deduce from ([eqnumberits]) that the number of iterates between each switch will increase by a factor of per switch.
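To make the resonance computation concrete, the following sketch locates the parameter value at which the stability index passes through one by bisection; the functional forms of lambda_T and rho below are assumed placeholders, not the paper's exact expressions.

```python
from scipy.optimize import brentq

def lambda_T(lam):                # transverse exponent (placeholder linear form)
    return 0.4 * (lam - 2.5)

def rho(lam):                     # assumed stability index: contraction / expansion
    delta, sigma = 1.2, 0.8       # stand-ins for the cycle's decay-rate ratios
    return (delta + sigma) / (sigma + max(lambda_T(lam), 1e-12))

lam_res = brentq(lambda lam: rho(lam) - 1.0, 2.51, 10.0)
print("resonance (rho = 1) at lambda ~", lam_res)
```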
At the point at which cycling chaos is created (as increases through zero), we see that is much greater than one, provided that is negative and is positive, both of which are true in the examples we have discussed. We deduce that cycling chaos is attracting near to its creation at the blowout bifurcation. From the condition that at the loss of stability of the chaotic cycle, we determine that this bifurcation occurs in the map at , in agreement with the data in figure [figmaptwo]. Returning to the ODEs, we observe that in ([eqrhomap]) and are ratios of eigenvalues of -rolls (proportional to decay rates of and ), while and are the growth rate of and the decay rate of near the chaotic set . In the ODEs, the linearisation of ([eqode]) about yields the decay rate of , while the growth rate of is given by ([eqlambdae]). Hence we have the stability index for the ODEs, where is given by ([eqlambdae]); note that and are both functions of . The condition is readily solved for , and has solution . At , the ODEs are still approaching cycling chaos, with switching times increasing by a factor just above 1.02 per switch (see figure [figresonance]); however, the ODEs have not yet reached their asymptotic rate of slowing down, which resolves this small discrepancy. On decreasing below , the stability index increases above unity and the cycling chaos is no longer attracting. This loss of stability can broadly be classed as a resonance of Liapunov exponents. Observe that the resonance will be located at different for different invariant measures supported on , and so we expect the presence of riddling and the associated phenomena found in at a resonance of a simpler model displaying cycling chaos. We observe that for below the resonance bifurcation, the system exhibits behaviour that is numerically indistinguishable from periodic: there appear to be a large number of coexisting, apparently periodic orbits. We hypothesize that the resonance creates a branch of 'approximately periodic attractors', i.e., attractors that have a well-defined finite mean period of passage around the cycle, going to infinity at the resonance. These might lock onto long periodic orbits for progressively smaller , as found in the numerical simulations. For this example, the approximately periodic attractors branch set-supercritically from the cycling chaos; however, one presumes that subcritical branching is also possible. Research is presently in progress on understanding the more detailed branching behaviour at this bifurcation. This study has shown that one possible, apparently generic scenario for loss of stability of a chaotic attractor in an invariant subspace on varying a normal parameter is as follows: there is a blowout bifurcation that creates an attracting, robust heteroclinic cycle between chaotic invariant sets (cycling chaos). The bifurcation is set supercritical but not supercritical, i.e.,
the bifurcated attractors contain the attractor for the system in the invariant subspace, but unlike in a supercritical bifurcation (to an on-off intermittent state), the length of the laminar phases increases unboundedly along a single trajectory, even at a finite distance from the blowout. This cycling chaotic state can be modelled well by the network shown in figure [figcycle], although in reality the network is complicated by the facts that (a) there are other fixed points contained in the closure of the unstable manifolds, and (b) the fixed points in and are actually contained in the chaotic sets and rather than being isolated. We suspect this may have the consequence that there is no Poincaré section to the flow, and so the cycle is 'dirty' in the terminology of . Nevertheless, the normal Liapunov spectrum of the invariant chaotic set seems to determine whether or not the cycle is attracting. The attracting cycling chaos is observed to lose stability via a mechanism that resembles a resonance of eigenvalues in an orbit heteroclinic to equilibria. Such a resonance has been seen to occur in special classes of systems with skew product structure, in analogy to the branching of periodic orbits at a homoclinic resonance investigated by . Throughout this investigation, it has been necessary to monitor carefully several numerical effects. In particular, for trajectories that display the asymptotic slowing down characteristic of cycling chaos, there will be a point at which rounding errors cause the dynamics either to transfer to the invariant subspace, or to keep the dynamics a finite distance from the invariant subspace. In the context of physical systems there will always be imperfections in the system and noise that will destroy the invariant subspaces. Nevertheless, the perfect-symmetry model will be very useful in describing what one expects to see in such imperfect situations. It still remains to prove rigorously that the observed scenario is generic and so of interest to other, less specific models, and in particular to PDE models of which this is a truncation. We would like to emphasise that the behaviour we see occurs for a reasonably large region of physically relevant parameters in the ODE model; moreover, we are unaware of any other classification which explains and predicts the observed dynamics to the degree that we have done here. In principle, cycling chaos can be seen in ODE models down to dimension 4, though not smaller than this; thus this dynamics should be seen as something that will not be created at a generic bifurcation from a trivial state, but rather in a more complicated dynamical regime far from primary bifurcation. We have discussed a possible route to cycling chaos through a blowout bifurcation, and how cycling chaos might cease to be attracting, both in general terms and in the context of a specific model. Our general results ought to be applicable to a variety of other problems. In particular, behaviour that might be understood in terms of cycling chaos has been seen by Knobloch et al. in an ODE model of the dipole-quadrupole interaction in the solar dynamo.
In their model, there is a weakly broken symmetry between the dipole and quadrupole subspaces, and the system switches between the two subspaces, favouring one over the other since they are not equivalent. Finally, one comment that deserves to be made is that the choice of as the parameter allows an important simplification, because this parameter is normal for the dynamics within and . One assumes that similar behaviour will be observed for non-normal parameters, with the exception that the chaos in the invariant subspace will be fragile and destroyed by many arbitrarily small perturbations; see . We acknowledge very interesting conversations with Mike Field, Marty Golubitsky and Edgar Knobloch concerning this work. The research of PA was partially supported by a Nuffield 'Newly Appointed Science Lecturer' award and EPSRC grant GR/K77365. AMR is grateful for support from the Royal Astronomical Society.
We examine a model system where attractors may consist of a heteroclinic cycle between chaotic sets; this 'cycling chaos' manifests itself as trajectories that spend increasingly long periods lingering near chaotic invariant sets, interspersed with short transitions between neighbourhoods of these sets. This behaviour can be robust (i.e., structurally stable) for systems with symmetries and provides robust examples of non-ergodic attractors in such systems; we examine bifurcations of this state. We discuss a scenario where an attracting cycling chaotic state is created at a blowout bifurcation of a chaotic attractor in an invariant subspace. This is a novel scenario for the blowout bifurcation and causes us to introduce the idea of _set supercriticality_ to recognise such bifurcations. The robust cycling chaotic state can be followed to a point where it loses stability at a resonance bifurcation and creates a series of large-period attractors. The model we consider is a ninth-order truncated ordinary differential equation (ODE) model of three-dimensional incompressible convection in a plane layer of conducting fluid subjected to a vertical magnetic field and a vertical temperature gradient. Symmetries of the model lead to the existence of invariant subspaces for the dynamics; in particular, there are invariant subspaces that correspond to regimes of two-dimensional flows. Stable two-dimensional chaotic flow can go unstable to three-dimensional flow via the cross-roll instability. We show how the bifurcations mentioned above can be located by examination of various transverse Liapunov exponents. We also consider a reduction of the ODE to a map and demonstrate that the same behaviour can be found in the corresponding map. This allows us to describe and predict a number of observed transitions in these models. PACS numbers: 05.45+b, 47.20.ky, 47.65. Keywords: heteroclinic cycle, symmetry, chaotic dynamics, magnetoconvection.
Let be an i.i.d. sequence of random variables with distribution , and denote by the distribution of a real-valued root under . In statistics and econometrics, it is often of interest to estimate certain quantiles of . Two commonly used methods for this purpose are subsampling and the bootstrap. This paper provides conditions under which these estimators behave well uniformly over . More precisely, we provide conditions under which subsampling and the bootstrap may be used to construct estimators of the quantiles of and of the quantiles of satisfying ([eqtwosided]). Here, is understood to be , and is understood to be . For the construction of two-sided confidence intervals of nominal level for a real-valued parameter, we typically would consider , while for a one-sided confidence interval of nominal level we would consider either and , or and . In many cases, it is possible to replace the and in ([eqtwosided]) with and , respectively. These results differ from those usually stated in the literature in that they require the convergence to hold uniformly over instead of just pointwise over . The importance of this stronger notion of convergence when applying these results is discussed further below. As we will see, the result ([eqtwosided]) may hold with and , but it may fail if and , or the other way round. This phenomenon arises when it is not possible to estimate uniformly well with respect to a suitable metric, but, in a sense to be made precise by our results, it is possible to estimate it sufficiently well to ensure that ([eqtwosided]) still holds for certain choices of and . Note that metrics compatible with the weak topology are not sufficient for our purposes. In particular, closeness of distributions with respect to such a metric does not ensure closeness of quantiles; see remark [remarklevy] for further discussion of this point. In fact, closeness of distributions with respect to even stronger metrics, such as the Kolmogorov metric, does not ensure closeness of quantiles either. For this reason, our results rely heavily on lemma [lemmaquant], which relates closeness of distributions with respect to a suitable metric to coverage statements. In contrast, the usual arguments for the pointwise asymptotic validity of subsampling and the bootstrap rely on showing for each that tends in probability under to the quantile of the limiting distribution of under .
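As a concrete illustration of the last point, the following snippet constructs two discrete distributions whose cdfs differ by only 0.01 in the Kolmogorov metric but whose 0.95 quantiles are far apart, because one cdf is nearly flat at that level; the construction is ours and purely illustrative.

```python
import numpy as np

def quantile(xs, ps, alpha):          # left-continuous inverse of a discrete cdf
    return xs[np.searchsorted(np.cumsum(ps), alpha)]

xs = np.array([0.0, 100.0])
P = np.array([0.95, 0.05])            # mass 0.95 at 0 and 0.05 at 100
Q = np.array([0.94, 0.06])            # Kolmogorov distance from P is only 0.01
print(quantile(xs, P, 0.95), quantile(xs, Q, 0.95))   # prints 0.0 and 100.0
```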
Because our results are uniform in , we must consider the behavior of and under arbitrary sequences , under which the quantile estimators need not even settle down. Thus, the results are not trivial extensions of the usual pointwise asymptotic arguments. The construction of satisfying ([eqtwosided]) is useful for constructing confidence regions that behave well uniformly over . More precisely, our results provide conditions under which subsampling and the bootstrap can be used to construct confidence regions of level for a parameter that are uniformly consistent in level in the sense of ([equnifcr]). Our results are also useful for constructing tests of level for a null hypothesis against the alternative that are uniformly consistent in level in the sense of ([equniftest]), i.e., that the limiting supremum of the null rejection probability is $\leq \alpha$. In some cases, it is possible to replace the and in ([equnifcr]), or the and in ([equniftest]), with and , respectively. Confidence regions satisfying ([equnifcr]) are desirable because they ensure that for every there is an such that for we have that is no less than for all . In contrast, confidence regions that are only pointwise consistent in level, in the sense that for each fixed , have the feature that there exists some and such that is less than infinitely often. Likewise, tests satisfying ([equniftest]) are desirable for analogous reasons. For this reason, inferences based on confidence regions or tests that fail to satisfy ([equnifcr]) or ([equniftest]) may be very misleading in finite samples. Of course, as pointed out by , there may be no nontrivial confidence region or test satisfying ([equnifcr]) or ([equniftest]) when is sufficiently rich. For this reason, we will have to restrict appropriately in our examples. In the case of confidence regions for, or tests about, the mean, for instance, we will have to impose a very weak uniform integrability condition. See also , , and Leeb and Pötscher ( ) for related results in more complicated settings, including post-model selection, shrinkage estimators and ill-posed problems. Some of our results on subsampling are closely related to results in , which were developed independently and at about the same time as our results; see the discussion on page 431 of . Our results show that the question of whether subsampling can be used to construct estimators satisfying ([eqtwosided]) reduces to a single, succinct requirement on the asymptotic relationship between the distribution of and , where is the subsample size, whereas the results of require the verification of a larger number of conditions. Moreover, we also provide a converse, showing that this requirement on the asymptotic relationship between the distribution of and is also necessary in the sense that, if the requirement fails, then for some nominal coverage level the uniform coverage statements fail. Thus our results are stated under essentially the weakest possible conditions, yet are verifiable in a large class of examples. On the other hand, the results of further provide a means of calculating the limiting value of in the case where it may not satisfy ([eqtwosided]).
To the best of our knowledge, our results on the bootstrap are the first to be stated at this level of generality. An important antecedent is , who studies the uniform asymptotic behavior of confidence regions for a univariate cumulative distribution function. See also , who analyzes the uniform asymptotic behavior of some tests that arise in the context of an autoregressive model. The remainder of the paper is organized as follows. In section [sectiongeneral], we present the conditions under which satisfying ([eqtwosided]) may be constructed using subsampling or the bootstrap. We then provide in section [sectionappl] several applications of our general results. These applications include the multivariate mean, testing moment inequalities, multiple testing, the empirical process and U-statistics. The discussion of U-statistics is especially noteworthy because it highlights the fact that the assumptions required for the uniform asymptotic validity of subsampling and the bootstrap may differ. In particular, subsampling may be uniformly asymptotically valid under conditions where, as noted by , the bootstrap fails even to be pointwise asymptotically valid. The application to multiple testing is also noteworthy because, despite the enormous recent literature in this area, our results appear to be the first that provide uniformly asymptotically valid inference. Proofs of the main results (theorems [theoremsubsample] and [theoremboot]) can be found in the appendix; proofs of all other results can be found in , which contains supplementary material. Many of the intermediate results may be of independent interest, including uniform weak laws of large numbers for U-statistics and V-statistics [lemmas S.17.3 and S.17.4 in , resp.] as well as the aforementioned lemma [lemmaquant]. Let be an i.i.d. sequence of random variables with distribution . Denote by the distribution of a real-valued root under . The goal is to construct procedures which are valid uniformly in . In order to describe the subsampling approach to approximating , let be a sequence of positive integers tending to infinity, but satisfying , and define . For , denote by the subset of the data of size . Below, we present results for two subsampling-based estimators of . We first consider the estimator given by ([equationsubdist]). More generally, we will also consider feasible estimators in which is replaced by some estimator . Typically, , where is the empirical distribution, but this is not assumed below. Even though the estimator of defined in ([equationsubdist]) is infeasible because of its dependence on , which is unknown, it is useful both as an intermediate step toward establishing some results for the feasible estimator of and, as explained in remarks [remarkinvert] and [remarktesting], on its own in the construction of some feasible tests and confidence regions. [theoremsubsample] Let be a sequence of positive integers tending to infinity, but satisfying , and define as in ([equationsubdist]). Then, the following statements are true: (i) if , then ([eqtwosidedsub]) holds for and any ; (ii) if , then ([eqtwosidedsub]) holds for and any ; (iii) if , then ([eqtwosidedsub]) holds for any and satisfying .
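A hedged sketch of the feasible subsampling estimator for the simplest root, the centered and scaled mean: enumerating all subsets of size b is infeasible for large n, so, as is common in practice, we average over randomly drawn subsets instead. The root, subsample size and level are illustrative choices.

```python
import numpy as np

def subsampling_quantile(x, root, theta_hat, b, alpha, n_sub=1000, seed=0):
    """Empirical alpha-quantile of the root recomputed on subsamples of size b."""
    rng = np.random.default_rng(seed)
    n = len(x)
    vals = np.empty(n_sub)
    for i in range(n_sub):
        idx = rng.choice(n, size=b, replace=False)  # one subsample of size b
        vals[i] = root(x[idx], theta_hat)           # feasible version: plug in theta_hat
    return np.quantile(vals, alpha)

root = lambda data, theta: np.sqrt(len(data)) * (np.mean(data) - theta)
x = np.random.default_rng(1).standard_normal(2000)
c = subsampling_quantile(x, root, np.mean(x), b=100, alpha=0.95)
print("lower confidence bound:", np.mean(x) - c / np.sqrt(len(x)))
```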
[remarkexactsub] It is typically easy to deduce from the conclusions of theorem [theoremsubsample] stronger results in which the and in ([eqtwosidedsub]) are replaced by and , respectively. For example, in order to assert that ([eqtwosidedsub]) holds with and replaced by and , respectively, all that is required is that for some . This can be verified using the usual arguments for the pointwise asymptotic validity of subsampling. Indeed, it suffices to show for some that tends in distribution to a limiting distribution that is continuous at the appropriate quantiles; see for details. [remarkinvert] As mentioned earlier, defined in ([equationsubdist]) is infeasible because it still depends on , which is unknown, through . Even so, theorem [theoremsubsample] may be used without modification to construct feasible confidence regions for a parameter of interest provided that , and therefore , depends on only through . If this is the case, then one may simply invert tests of the null hypotheses for all to construct a confidence region for . More concretely, suppose and . Whenever we may apply part (i) of theorem [theoremsubsample], we have that satisfies ([equnifcr]). Similar conclusions follow from parts (ii) and (iii) of theorem [theoremsubsample]. [remarktesting] It is worth emphasizing that even though theorem [theoremsubsample] is stated for roots, it is, of course, applicable in the special case where . This is especially useful in the context of hypothesis testing; see example [exmomineqsub] for one such instance. Next, we provide some results for feasible estimators of . The first result, corollary [corrsubsample], handles the case of the most basic root, while theorem [theoremsubsample2] applies to the more general roots needed for many of our applications. [corrsubsample] Suppose , where is a sequence of normalizing constants, is a real-valued parameter of interest and is an estimator of . Let be a sequence of positive integers tending to infinity, but satisfying , and define the corresponding subsampling estimator. Then the statements of theorem [theoremsubsample] hold when is replaced by . [theoremsubsample2] Let be a sequence of positive integers tending to infinity, but satisfying . Define as in ([equationsubdist]) and as in ([equationfeassubdist]). Suppose for all that ([eqfeasclose]) holds. Then the statements of theorem [theoremsubsample] hold when is replaced by . As a special case, theorem [theoremsubsample2] can be applied to studentized roots. [corrsubsample2] Suppose , where is a sequence of normalizing constants, is a real-valued parameter of interest, is an estimator of , and is an estimator of some parameter . Suppose further that the family of distributions is tight, and that any subsequential limiting distribution is continuous. For any , let be a sequence of positive integers tending to infinity, but satisfying and . Define the corresponding subsampling estimator. Then the statements of theorem [theoremsubsample] hold when is replaced by . [remarkdec6] One can take in corollary [corrsubsample2]. Since effectively cancels out from both sides of the inequality in the event , such a root actually leads to a computationally feasible construction. However, corollary [corrsubsample2] still applies and shows that we can obtain a positive result without the correction factor present in corollary [corrsubsample], provided the conditions of corollary [corrsubsample2] hold. For example, if for some we have that is asymptotically standard normal under any sequence , then the conditions hold.
In corollaries [corrsubsample] and [corrsubsample2], it is assumed that the rate of convergence is known. This assumption may be relaxed using techniques described in . We conclude this section with a result that establishes a converse to theorems [theoremsubsample] and [theoremsubsample2]. [theoremconverse] Let be a sequence of positive integers tending to infinity, but satisfying , and define as in ([equationsubdist]) and as in ([equationfeassubdist]). Then the following statements are true: (i) if , then ([eqtwosidedsub]) fails for and some ; (ii) if , then ([eqtwosidedsub]) fails for and some ; (iii) if , then ([eqtwosidedsub]) fails for some and satisfying . If, in addition, ([eqfeasclose]) holds for any , then the statements above hold when is replaced by . As before, let be an i.i.d. sequence of random variables with distribution , and denote by the distribution of a real-valued root under . The goal remains to construct procedures which are valid uniformly in . The bootstrap approach is to approximate by for some estimator of . Typically, is the empirical distribution, but this is not assumed in theorem [theoremboot] below. Because need not a priori even lie in , it is necessary to introduce a family in which lies (at least with high probability). In order for the bootstrap to succeed, we will require that be small for some function (perhaps a metric) defined on . For any given problem in which the theorem is applied, , and must be specified. [theoremboot] Let be a function on , and let be a (random) sequence of distributions. Then the following statements are true: (i) suppose for any sequences and satisfying ; if ([eqihatethishere]) holds for any sequence , then ([eqtwosidedboot]) holds for and any . (ii) Suppose for any sequences and satisfying ; if ([eqihatethishere]) holds for any sequence , then ([eqtwosidedboot]) holds for and any . (iii) Suppose for any sequences and satisfying ; if ([eqihatethishere]) holds for any sequence , then ([eqtwosidedboot]) holds for any and satisfying .
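For comparison, a minimal sketch of the bootstrap analogue: the distribution of the root under the empirical distribution is approximated by recomputing the root on resamples drawn with replacement. The symmetric root and the nominal level are illustrative choices.

```python
import numpy as np

def bootstrap_quantile(x, root, alpha, n_boot=2000, seed=0):
    """alpha-quantile of the root recomputed on samples drawn from P_hat_n."""
    rng = np.random.default_rng(seed)
    n, theta_hat = len(x), np.mean(x)
    vals = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)  # a draw from the empirical law
        vals[b] = root(xb, theta_hat)             # root centered at the estimate
    return np.quantile(vals, alpha)

root = lambda data, theta: np.sqrt(len(data)) * abs(np.mean(data) - theta)
x = np.random.default_rng(2).standard_normal(500)
c = bootstrap_quantile(x, root, 0.95)
n = len(x)
print("symmetric interval:", np.mean(x) - c / np.sqrt(n), np.mean(x) + c / np.sqrt(n))
```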
In other words, such conclusions do not follow from uniform approximations of the distribution of interest if the quality of the approximation is measured in terms of metrics metrizing weak convergence. To see this, consider the following simple example. Let be an i.i.d. sequence of random variables with distribution . Denote by the distribution of the root under , where . Let be the empirical distribution of or, equivalently, . Lemma S.1.1 in implies for any that ([eqlevyconv]) holds whenever is a metric compatible with the weak topology. Nevertheless, it follows from the argument on page 78 of that the coverage statements in theorem [theoremboot] fail to hold provided that both and do not equal zero. Indeed, consider part (i) of theorem [theoremboot]. Suppose and . For a given and , let . Under , the event has probability . Moreover, whenever such an event occurs, . Therefore, . Since the choice of was arbitrary, it follows that ([eqlevy]) holds. A similar argument establishes the result for parts (ii) and (iii) of theorem [theoremboot]. On the other hand, when is the Kolmogorov metric, ([eqlevy]) holds when the supremum over is replaced with a supremum over for some . Moreover, when is restricted to such an interval, the coverage statements in theorem [theoremboot] hold as well. Before proceeding, it is useful to introduce some notation that will be used frequently throughout many of the examples below. For a distribution on , denote by the mean of , by the covariance matrix of , and by the correlation matrix of . For , denote by the th component of and by the th diagonal element of . In all of our examples, will be an i.i.d. sequence of random variables with distribution , and will denote the empirical distribution of . As usual, we will denote by the usual sample mean, by the usual sample covariance matrix and by the usual sample correlation matrix. For , denote by the th component of and by the th diagonal element of . Finally, we say that a family of distributions on the real line satisfies the standardized uniform integrability condition if ([equi]) holds; in that expression, denotes a random variable with distribution . The use of the term 'standardized' to describe ([equi]) reflects the fact that the variable is centered around its mean and normalized by its standard deviation. [examplesubmean] Let be an i.i.d. sequence of random variables with distribution on . Suppose one wishes to construct a rectangular confidence region for . For this purpose, a natural choice of root is the max-type statistic ([eqmaxroot]). In this setup, we have the following theorem. [theoremsubmean] Denote by the set of distributions formed from the marginal distributions of the distributions in . Suppose is such that ([equi]) is satisfied with for all . Let be the distribution of the root ([eqmaxroot]). Let be a sequence of positive integers tending to infinity, but satisfying , and define by ([equationsubdist]). Then the coverage probability in ([eqsubmean]) converges to $1 - \alpha_1 - \alpha_2$ for any and such that . Furthermore, ([eqsubmean]) remains true if is replaced by , where is defined by ([equationfeassubdist]) with .
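A hedged sketch of the resulting rectangular confidence region: the critical value is the subsampling quantile of the max-type root, and the region is a product of coordinate-wise intervals. Whether the root ([eqmaxroot]) is studentized coordinate-wise is not visible here, so the studentization below is an assumption; dimensions and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, b, B = 1000, 5, 80, 1000          # sample size, dimension, subsample size
x = rng.standard_normal((n, k))
mu_hat, s = x.mean(axis=0), x.std(axis=0, ddof=1)

def max_root(sample, center):           # sup-norm root over the k coordinates
    m = sample.mean(axis=0)
    sd = sample.std(axis=0, ddof=1)
    return np.sqrt(len(sample)) * np.max(np.abs(m - center) / sd)

vals = np.array([max_root(x[rng.choice(n, b, replace=False)], mu_hat)
                 for _ in range(B)])
c = np.quantile(vals, 0.95)             # subsampling critical value
lower = mu_hat - c * s / np.sqrt(n)     # coordinate-wise faces of the rectangle
upper = mu_hat + c * s / np.sqrt(n)
```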
Under suitable restrictions, theorem [theoremsubmean] generalizes to the case where the root is given by ([eqgeneralf]), where is a continuous, real-valued function. In particular, we have the following theorem. [theoremsubgenmean] Let be defined as in theorem [theoremsubmean]. Let be the distribution of the root ([eqgeneralf]), where is continuous. Suppose further for all that ([eqgensubmeanconv1]) and ([eqgensubmeanconv2]) hold for any sequence such that under and , where . Then the coverage probability in ([eqgensubmeancover]) is asymptotically at least $1 - \alpha_1 - \alpha_2$ for any and such that . Suppose further that if for some satisfying for all , then is continuously distributed. Then ([eqgensubmeancover]) remains true if is replaced by , where is defined by ([equationfeassubdist]) with ; moreover, the and may be replaced by and , respectively. In order to verify ([eqgensubmeanconv1]) and ([eqgensubmeanconv2]) in theorem [theoremsubgenmean], it suffices to assume that is continuously distributed. Under the assumptions of the theorem, however, need not be continuously distributed. In this case, ([eqgensubmeanconv1]) and ([eqgensubmeanconv2]) hold immediately for any at which is continuous, but require a further argument for at which is discontinuous. See, for example, the proof of theorem [theorembootmomineq], which relies on theorem [theorembootgenmean], where the same requirement appears. [remarkonesided] The following example has been considered elsewhere. Let be an i.i.d. sequence of random variables with distribution on . Suppose it is known that for all and one wishes to construct a confidence interval for . A natural choice of root is available in this case. This root differs from the one considered in theorem [theoremsubmean] and the ones discussed in theorem [theoremsubgenmean] in the sense that, under weak assumptions on , ([eqonesidedunif]) holds but ([eqothersideunif]) fails to hold. To see this, suppose ([equi]) holds with . Note that , where and . Since for any , is bounded from above by . It now follows from the uniform central limit theorem established by lemma 3.3.1 of and theorem 2.11 of that ([eqonesidedunif]) holds. It therefore follows from theorem [theoremsubsample] that ([eqtwosidedsub]) holds with and any . To see that ([eqothersideunif]) fails, suppose further that , where for some . For , the left-hand side of ([eqothersideunif]) is therefore greater than or equal to for any . In particular, if , then the second term is zero for large enough , and so the limiting value is . It therefore follows from theorem [theoremconverse] that ([eqtwosidedsub]) fails for and some . On the other hand, ([eqtwosidedsub]) holds with and any . To see this, consider any sequence and the event . For the root in this example, this event is scale invariant, so, in calculating the probability of this event, we may without loss of generality assume . Since , we have for any that , and similarly for . Using the usual subsampling arguments, it is thus possible to show for that the relevant convergence holds; the desired conclusion therefore follows from Slutsky's theorem. Arguing as in the proof of corollary [corrsubsample2] and remark [remarkdec6], it can be shown that the same results hold when is replaced by , where is defined as is, but with replaced by . [exmomineqsub] The generality of theorem [theoremsubsample] illustrated in example [remarkonesided] is also useful when testing multisided hypotheses about the mean.
to see this , let be an i.i.d . sequence of random variables with distribution on . define and . consider testing the null hypothesis that versus the alternative hypothesis that at level . such hypothesis testing problems have recently received considerable attention in the " moment inequality " literature in econometrics . see , for example , , , , , and romano and shaikh ( ) . theorem [ theoremsubsample ] may be used to construct tests that are uniformly consistent in level in the sense that ( [ equniftest ] ) holds under weak assumptions on . formally , we have the following theorem : [ theoremsubmomineq ] let be defined as in theorem [ theoremsubmean ] . let be the distribution of . let be a sequence of positive integers tending to infinity , but satisfying , and define by the right - hand side of ( [ equationsubdist ] ) with . then , the test defined by satisfies ( [ equniftest ] ) for any . the argument used to establish theorem [ theoremsubmomineq ] is essentially the same as the one presented in for , though lemma s.6.1 in is needed for establishing ( [ eqonesidedunif ] ) here because of studentization . related results are obtained by . [ exmultsub ] we now illustrate the use of theorem [ theoremsubsample ] to construct tests of multiple hypotheses that behave well uniformly over a large class of distributions . let be an i.i.d . sequence of random variables with distribution on , and consider testing the family of null hypotheses versus the alternative hypotheses in a way that controls the familywise error rate at level in the sense that , where . for , define according to the right - hand side of ( [ equationsubdist ] ) with and consider the following stepwise multiple testing procedure : [ algstepsub ] step 1 : set . if then stop . otherwise , reject any with and continue to step with . step : if then stop . otherwise , reject any with and continue to step with . we have the following theorem : [ theoremsubmultiple ] let be defined as in theorem [ theoremsubmean ] . let be a sequence of positive integers tending to infinity , but satisfying . then , algorithm [ algstepsub ] satisfies for any . it is , of course , possible to extend the analysis in a straightforward way to two - sided testing . see also for related results about a multiple testing problem involving an infinite number of null hypotheses . [ exsubemp ] let be an i.i.d . sequence of random variables with distribution on . suppose one wishes to construct a confidence region for the cumulative distribution function associated with , that is , $p\bigl\{(-\infty , t]\bigr\}$ . as described in example [ exsubemp ] , a natural choice of root in this case is given by ( [ eqks ] ) .
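the step - down logic of algorithm [ algstepsub ] can be sketched in a few lines . the fragment below is a hedged sketch , not the paper 's exact procedure : critical values are recomputed by subsampling a centered max - statistic over the currently active index set , and all names are ours .

```python
import numpy as np

def stepdown_subsample(X, b, alpha, n_sub=1000, rng=None):
    """Step-down multiple testing of H_j: mu_j <= 0, j = 1..k, in the spirit
    of algorithm [algstepsub]: at each step the critical value is the
    (1 - alpha)-quantile of the subsampled max t-statistic over the
    hypotheses not yet rejected."""
    rng = np.random.default_rng(rng)
    n, k = X.shape
    t_full = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)
    active, rejected = list(range(k)), []
    while active:
        stats = np.empty(n_sub)
        for s in range(n_sub):
            idx = rng.choice(n, size=b, replace=False)
            sub = X[idx][:, active]
            # centered, studentized subsample max over the active set
            stats[s] = np.max(np.sqrt(b)
                              * (sub.mean(axis=0) - X[:, active].mean(axis=0))
                              / sub.std(axis=0, ddof=1))
        c = np.quantile(stats, 1 - alpha)
        newly = [j for j in active if t_full[j] > c]
        if not newly:          # no further rejections: stop
            break
        rejected += newly
        active = [j for j in active if j not in newly]
    return rejected
```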
in this setting , we have the following theorem , which is a bootstrap counterpart to theorem [ theoremsubemp ] : [ theorembootemp ] fix any , and let be defined as in theorem [ theoremsubemp ] . let be the distribution of the root ( [ eqks ] ) . denote by the empirical distribution of . then $\lim_{n \to \infty } \inf_{p \in \mathbf{p } } p \bigl\ { j_n^{-1}(\alpha_1 , \hat p_n ) \leq \sup_{t \in \mathbf{r } } \sqrt n \, \bigl| \hat p_n \bigl\{(-\infty , t]\bigr\ } - p\bigl\{(-\infty , t]\bigr\}\bigr| \leq j_n^{-1}(1-\alpha_2 , \hat p_n ) \bigr\ } = 1 - \alpha_1 - \alpha_2$ for any and such that . some of the conclusions of theorem [ theorembootemp ] can be found in , though the method of proof given in is quite different . [ exbootustat ] let be an i.i.d . sequence of random variables with distribution on and let be a symmetric kernel of degree . suppose one wishes to construct a confidence region for given by ( [ eqthetaustat ] ) . as described in example [ exsubustat ] , a natural choice of root in this case is given by ( [ equstatroot ] ) . before proceeding , it is useful to introduce the following notation . for an arbitrary kernel , and , denote by the set of all distributions on such that $\cdots \leq b$ . similarly , for an arbitrary kernel and , denote by the set of all distributions on such that , where is defined as in ( [ eqsigmap ] ) . finally , for an arbitrary kernel , and , let be the set of distributions on such that $\cdots \leq b$ , whenever for all . using this notation , we have the following theorem : [ theorembootustat ] define the kernel of degree according to the rule ( [ eqhprime ] ) : $\cdots - h(x_1,\ldots , x_m)\,h(x_{m+1},\ldots , x_{2m})$ . suppose for some and . let be the distribution of the root defined by ( [ equstatroot ] ) . then for any and such that . note that the kernel defined in ( [ eqhprime ] ) arises in the analysis of the estimated variance of the -statistic . note further that the conditions on in theorem [ theorembootustat ] are stronger than the conditions on in theorem [ theoremsubustat ] . while it may be possible to weaken the restrictions on in theorem [ theorembootustat ] somewhat , it is not possible to establish the conclusions of theorem [ theorembootustat ] under the conditions on in theorem [ theoremsubustat ] . indeed , as shown by , the bootstrap based on the root defined by ( [ equstatroot ] ) need not be even pointwise asymptotically valid under the conditions on in theorem [ theoremsubustat ] . to see ( i ) , first note that implies that for all . thus , , from which it follows that . similarly , to prove ( ii ) , first note that implies that for all , so . therefore , . to prove ( iii ) , note that because , it follows from ( i ) that . hence , . using the same reasoning , ( iv ) follows from ( ii ) and the assumption that . to see ( v ) , note that , where the first inequality follows from the bonferroni inequality , and the second inequality follows from ( iii ) and ( iv ) . to prove ( vi ) , note that , where the second inequality follows from ( i ) . a similar argument using ( ii ) establishes ( vii ) . finally , ( viii ) follows from ( vi ) and ( vii ) by an argument analogous to the one used to establish ( v ) . [ lemmavcreal ] let be an i.i.d . sequence of random variables with distribution . denote by the distribution of a real - valued root under . let , and define according to ( [ equationsubdist ] ) . then , for any , we have that . let be given and define by . denote by the symmetric group with elements .
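theorem [ theorembootemp ] suggests an elementary bootstrap computation of the critical value for the kolmogorov - smirnov root ( [ eqks ] ) . a minimal sketch follows ; it evaluates the supremum exactly at the jump points of the two empirical distribution functions , which suffices because both are step functions jumping only at sample values .

```python
import numpy as np

def bootstrap_ks_quantile(x, alpha, n_boot=2000, rng=None):
    """(1 - alpha)-quantile of the bootstrap distribution of
    sup_t sqrt(n)|F*_n(t) - F_hat_n(t)|, where F_hat_n is the sample ECDF
    and F*_n the ECDF of a nonparametric bootstrap resample."""
    rng = np.random.default_rng(rng)
    x = np.sort(np.asarray(x, float))
    n = len(x)
    f_hat = np.arange(1, n + 1) / n           # ECDF at its own order statistics
    stats = np.empty(n_boot)
    for i in range(n_boot):
        xs = np.sort(rng.choice(x, size=n, replace=True))
        f_star = np.searchsorted(xs, x, side='right') / n  # bootstrap ECDF at x
        stats[i] = np.sqrt(n) * np.max(np.abs(f_star - f_hat))
    return np.quantile(stats, 1 - alpha)

# uniform band for the CDF (illustration): F_hat(t) +/- q / sqrt(n)
```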
note that , using this notation , we may rewrite as . note further that , which is a sum of identically distributed random variables . let be given . it follows that . using markov 's inequality , ( [ equationdumb ] ) can be bounded by $\frac{1}{\varepsilon } \int_0 ^ 1 p \bigl\ { \sup_{x \in \mathbf{r } } \bigl|s_n(x , p ; x_1,\ldots , x_n)\bigr| > u \bigr\ } \, du$ . we may use the dvoretzky - kiefer - wolfowitz inequality to bound the right - hand side of ( [ equationintreal ] ) strictly by $\frac{1}{\varepsilon}\sqrt{\frac{2 \pi}{k_n}}$ , which establishes ( [ equationbound2 ] ) . [ lemmafinite ] let be an i.i.d . sequence of random variables with distribution . denote by the distribution of a real - valued root under . let and define according to ( [ equationsubdist ] ) . let . then , for any and , we have that : . let and be given . note that , where the final inequality follows from lemma [ lemmavcreal ] . assertion ( i ) thus follows from the definition of and part ( vi ) of lemma [ lemmaquant ] . assertions ( ii ) and ( iii ) are established similarly . proof of theorem [ theoremsubsample ] . to prove ( i ) , note that by part ( i ) of lemma [ lemmafinite ] , we have for any and that , where . by the assumption on , we have that for every . thus , there exists a sequence tending to so that . the desired claim now follows from applying part ( i ) of lemma [ lemmafinite ] to this sequence . assertions ( ii ) and ( iii ) follow in exactly the same way . we prove only ( i ) . similar arguments can be used to establish ( ii ) and ( iii ) . let , and be given . choose so that whenever for and . for sufficiently large , we have that . for such , we therefore have that . it follows from part ( vi ) of lemma [ lemmaquant ] that for such . since the choice of was arbitrary , the desired result follows .
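the final bound in the proof of lemma [ lemmavcreal ] can be recovered from the dvoretzky - kiefer - wolfowitz inequality . the following is a hedged reconstruction of the chain of estimates , under the assumption that the supremum in ( [ equationintreal ] ) obeys the two - sided dkw bound $p\{\sup_x |s_n| > u\} \le 2e^{-2 k_n u^2}$ :

$$
\frac{1}{\varepsilon}\int_0^1 p\Bigl\{\sup_{x\in\mathbf{r}}\bigl|s_n(x,p;x_1,\ldots,x_n)\bigr| > u\Bigr\}\,du
\;\le\; \frac{1}{\varepsilon}\int_0^\infty 2e^{-2 k_n u^2}\,du
\;=\; \frac{1}{\varepsilon}\sqrt{\frac{\pi}{2 k_n}}
\;<\; \frac{1}{\varepsilon}\sqrt{\frac{2\pi}{k_n}} .
$$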
this paper provides conditions under which subsampling and the bootstrap can be used to construct estimators of the quantiles of the distribution of a root that behave well uniformly over a large class of distributions . these results are then applied ( i ) to construct confidence regions that behave well uniformly over in the sense that the coverage probability tends to at least the nominal level uniformly over and ( ii ) to construct tests that behave well uniformly over in the sense that the size tends to no greater than the nominal level uniformly over . without these stronger notions of convergence , the asymptotic approximations to the coverage probability or size may be poor , even in very large samples . specific applications include the multivariate mean , testing moment inequalities , multiple testing , the empirical process and u - statistics .
in this paper , the symmetric obrechkoff methods for solving a special class of initial value problems associated with second order ordinary differential equations of the type in which the first order derivatives do not occur explicitly are discussed . the numerical integration methods for ( [ dif ] ) can be divided into two distinct classes : 1 . problems for which the solution period is known ( even approximately ) in advance . 2 . problems for which the period is not known . for several decades , there has been strong interest in searching for better numerical methods to integrate first - order and second - order initial value problems , because these problems are usually encountered in celestial mechanics , quantum mechanical scattering theory , theoretical physics and chemistry , and electronics . generally , the solution of is periodic , so it is expected that the result produced by some numerical methods preserves the analogous periodicity of the analytic solution [ 9 - 22 ] . computational methods involving a parameter proposed by gautschi , jain et al . , sommeijer et al . and stiefel and bettis yield numerical solutions of problems of class ( 1 ) . chawla et al . , ananthakrishnaiah , shokri et al . , dahlquist , franco , lambert and watson [ 9 ] , tsitouras and simos , simos et al . , hairer , wang et al . , saldanha and achar , and van daele and vanden berghe have developed methods to solve problems of class ( 2 ) . consider the obrechkoff method of the form for the numerical integration of the problem ( [ dif ] ) . the method ( [ ob ] ) is symmetric when , , , and it is of order if the truncation error associated with the linear difference operator is given as , where is a constant dependent on . when the method ( [ ob ] ) is applied to the test problem , we get the characteristic equation , where and . the method ( [ ob ] ) is said to have an interval of periodicity if for all the roots of eq . ( [ ro ] ) are complex and at least two of them lie on the unit circle and the others lie inside the unit circle . the method ( [ ob ] ) is said to be p - stable if its interval of periodicity is . for any symmetric multistep method , the phase - lag ( frequency distortion ) of order is given by , where is the phase - lag constant and is the phase - lag order . the characteristic equation of the method ( [ ob ] ) is given by , where contains polynomial functions together with trigonometric polynomials . the resulting methods are then based on a hybrid set of polynomials and trigonometric functions . if is limited to , we call it a method with zero phase - lag . we present here the trigonometric versions of the set . in case is purely imaginary one obtains the hyperbolic description of this set . this set is characterized by two integer parameters and . the set in which there is no polynomial part is identified by while the set in which there is no trigonometric polynomial component is identified by .
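the periodicity condition above is easy to verify numerically . the following python sketch applies a symmetric two - step method to the test problem $y'' = -\omega^2 y$ and checks that the roots of $\rho^2 - 2 r(h^2)\rho + 1 = 0$ , $h = \omega h$ , lie on the unit circle ; it uses the classical fourth - order member of the family ( numerov ) as a concrete , verifiable instance , since the twelfth - order coefficients are not reproduced here .

```python
import numpy as np

def characteristic_roots(H2, A, B):
    """Roots of rho^2 - 2 R(H^2) rho + 1 = 0 with R = A(H^2)/B(H^2);
    for a symmetric two-step method both roots lie on the unit circle
    (complex conjugates) exactly when |R| <= 1."""
    R = A(H2) / B(H2)
    return np.roots([1.0, -2.0 * R, 1.0])

# Numerov applied to y'' = -w^2 y gives R(H^2) = (1 - 5H^2/12)/(1 + H^2/12),
# so its interval of periodicity is (0, 6).
numerov = lambda H2: characteristic_roots(
    H2, lambda s: 1.0 - 5.0 * s / 12.0, lambda s: 1.0 + s / 12.0)

for H2 in (1.0, 5.0, 6.0, 7.0):
    print(H2, np.abs(numerov(H2)))  # |roots| = 1 for H^2 < 6, not beyond
```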
for each problem one has , where is the maximum exponent present in the full polynomial basis for the same problem . from the form and without loss of generality we assume , , and we can write . when we get $y_{n+1 } - 2y_n + y_{n-1 } = h^2\left[\beta_{10}(y_{n+1}''+y_{n-1}'')+\beta_{11}y_n''\right ] + h^4\left[\beta_{20}(y_{n+1}^{(4)}+y_{n-1}^{(4)})+\beta_{21}y_n^{(4)}\right ] + h^6\left[\beta_{30}(y_{n+1}^{(6)}+y_{n-1}^{(6)})+\beta_{31}y_n^{(6)}\right]$ . for method ( [ ob3 ] ) is 11 , so that if , we obtain the classic method and the coefficients of this method are . its phase - lag is given by and its local truncation error is given by . if , we obtain the method with zero phase - lag ( pl ) , and the coefficients of this case are given in . if , , so we call the method pl , we have , where and . for small values of the above formulae are subject to heavy cancellations . in this case the following taylor series expansion must be used : . the phase - lag and the local truncation error for the pl method are given by , hence and , where , is the frequency and is the step length . as , the lte of the method with derived coefficients tends to , which agrees with the lte of the methods due to wang , simos , daele and achar , as . the behavior of the coefficients of the pl method is shown in figures 2.1 to 2.6 . if , , so we call the method pl , we have , where and . for small values of the above formulae are subject to heavy cancellations . in this case the following taylor series expansion must be used : . the phase - lag and the local truncation error for the pl method are given by and , where , is the frequency and is the step length . the behavior of the coefficients of the pl method is shown in figures 4 , 5 , 6 . the characteristic equation has complex roots of unit magnitude when , i.e. when . substituting for and for these two - step methods , the intervals of periodicity of the classical obrechkoff method , the pl and the pl methods when are obtained as [ 0 , 25.2004 ] , [ 0 , 408.04 ] and [ 0 , 1428.84 ] , respectively . in this section , we present some numerical results obtained by our new two - step trigonometrically - fitted obrechkoff methods and compare them with those from the following multistep methods : achar : the 12th order obrechkoff method of achar ; daele : the 12th order obrechkoff method of van daele ; neta : the p - stable 8th - order super - implicit method of neta ; simos : the 12th order obrechkoff method of simos ; wang : the 12th order obrechkoff method of wang . we consider the nonlinear _ undamped duffing equation _ , where , and . we use the following exact solution for from , where . in order to integrate this equation by an obrechkoff method , one needs the values of , which occur in calculating . these higher order derivatives can all be expressed in terms of and through , i.e. . the absolute errors at , for the new method , in comparison with the methods of simos , daele , achar and wang , are given in table 3.1 and the cpu times are listed in table 3.2 . also the absolute errors at , with , for the new method pl , in comparison with the method of neta , are given in table 3.3 . consider the initial value problem with the exact solution . this equation has been solved numerically for using exact starting values . in the numerical experiment , we take the step lengths , , , , and . in table 3.4 , we present the absolute errors at the end - point and the cpu times are listed in table 3.5 .
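a runnable sketch of one step of a method of family ( [ ob3 ] ) applied to the duffing problem is given below . it is a sketch under explicit assumptions : the twelfth - order coefficients are not reproduced here , so the classical fourth - order member ( numerov , $\beta_{10}=1/12 , \beta_{11}=10/12$ , higher $\beta$ 's zero ) stands in , the implicit stage is resolved by fixed - point iteration , and the duffing parameters b , w are illustrative values , not taken from the paper 's tables .

```python
import numpy as np

def numerov_step(t, h, y_nm1, y_n, f, tol=1e-12, itmax=50):
    """One step of y_{n+1} - 2y_n + y_{n-1} = h^2/12 (f_{n+1} + 10 f_n + f_{n-1}),
    solved for y_{n+1} by fixed-point iteration; the higher-derivative terms
    of the 12th-order Obrechkoff methods would enter the same recurrence."""
    f_nm1, f_n = f(t - h, y_nm1), f(t, y_n)
    y_np1 = 2 * y_n - y_nm1 + h * h * f_n          # explicit predictor
    for _ in range(itmax):
        rhs = 2 * y_n - y_nm1 + h * h / 12 * (f(t + h, y_np1) + 10 * f_n + f_nm1)
        if abs(rhs - y_np1) < tol:
            break
        y_np1 = rhs
    return y_np1

# hypothetical Duffing right-hand side y'' = -y - y^3 + b cos(w t)
b, w = 0.002, 1.01
f = lambda t, y: -y - y**3 + b * np.cos(w * t)
# the recurrence is started from two exact values y(0), y(h), as in the text
```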
consider the initial value problem with the exact solution . the theoretical solution of this problem is . the absolute errors at for the new method , in comparison with the methods of wang , simos , daele and achar , are given in table 3.6 . the relative cpu times of computation of the new method in comparison with the other four referred methods are given in table 3.7 . in this paper , we have presented the new trigonometrically - fitted two - step symmetric obrechkoff methods of order 12 . the details of the procedure adapted for the applications have been given in section 2 . with trigonometric fitting , we have improved the local truncation error , phase - lag error , interval of periodicity and cpu time for the classes of two - step obrechkoff methods . the numerical results obtained by the new method for some problems show its superiority in efficiency , accuracy and stability . the authors wish to thank professor theodore e. simos and the anonymous referees for their careful reading of the manuscript and their fruitful comments and suggestions . chawla , m. m. , rao , p. s. , a numerov - type method with minimal phase - lag for the integration of second order periodic initial value problems . ii : explicit method , j. comput . appl . math . ( 1986 ) , 329 - 337 . neta , b. , and fukushima , t. , obrechkoff versus super - implicit methods for the integration of keplerian orbits , proc . aiaa / aas astrodynamics specialist conference , denver , co , august 14 - 17 , 2000 , paper number aiaa 2000 - 4029 . neta , b. , and fukushima , t. , obrechkoff versus super - implicit methods for the solution of first and second order initial value problems , j. comput . applic . , special issue on numerical methods in physics , chemistry and engineering , t. e. simos and g. avdelas ( guest editors ) , 45 , ( 2003 ) , 383 - 390 . psihoyios , g. , simos , t. e. , effective numerical approximation of schrodinger type equations through multiderivative exponentially - fitted schemes , appl . comput . math . 1 ( 1 ) ( 2004 ) , 205 - 215 . sakas , d. p. , simos , t. e. , multiderivative methods of eighth algebraic order with minimal phase - lag for the numerical solution of the radial schrodinger equation , j. comput . appl . math . ( 2005 ) , 161 - 172 . saldanha , g. , achar , s. d. , symmetric multistep obrechkoff methods with zero phase - lag for periodic initial value problems of second order differential equations , appl . math . comput . 218 ( 2011 ) , 2237 - 2248 . shokri , a. , rahimi ardabili , m. y. , shahmorad , s. , and hojjati , g. , a new two - step p - stable hybrid obrechkoff method for the numerical integration of second - order ivps , j. comput . appl . math . , 235 ( 2011 ) , 1706 - 1712 . wang , z. , zhao , d. , dai , y. and song , x. , a new high efficient and high accurate obrechkoff four - step method for the periodic non - linear undamped duffing 's equation , comput . phys . commun . 165 ( 2005 ) , 110 - 126 .
in this paper , we present the two - step trigonometrically fitted symmetric obrechkoff methods with algebraic order twelve . the method is based on the symmetric two - step obrechkoff method , has algebraic order 12 and high phase - lag order , and is constructed to solve ivps with periodic solutions such as orbital problems . we compare the new method to some recently constructed optimized methods from the literature . the numerical results obtained by the new method for some problems show its superiority in efficiency , accuracy and stability .
" one can conceive of information theory in the broad sense as covering the theory of gaining , transferring , and storing information , where the first is usually called statistics " . the interaction of shannon information theory and mathematical statistics has proved to be effective . this interplay is mutually fruitful : in some works results of probability theory and statistics were obtained with application of information - theoretical methods , and there are studies where statistical results provide ground for new findings in information theory , , , , , , , . this paper can serve as an illustration of application of information - theoretical methods in statistics : on one hand this is analogy in problem formulation and on the other hand this is employment of technical tools of proof , specifically of the method of types , . it is often necessary in statistical research to make decisions regarding the nature and parameters of a stochastic model , in particular , the probability distribution of the object . decisions can be made on the base of results of observations of the object . the vector of results is called a sample . the correspondence between samples and hypotheses can be designed based on some selected criterion . the procedure of statistical hypotheses detection is called a test . the classical problem of statistical hypothesis testing refers to two hypotheses . based on data samples a statistician makes a decision on which of the two proposed hypotheses must be accepted . many mathematical investigations , some of which have also applied significance , were carried out in this direction . the need for testing of more than two hypotheses in many scientific and applied fields has essentially increased recently . as an instance , microarray analysis could be mentioned . the decisions can be erroneous due to randomness of the sample . the test is considered good if the probabilities of the errors in given conditions are as small as possible . frequently the problem was solved for the case of a sequence of tests , where the probabilities of error decrease exponentially as , when the number of observations tends to infinity . we call the exponent of error probability the _ reliability _ . in the case of two hypotheses both reliabilities corresponding to the two possible error probabilities can not be increased simultaneously , so it is an accepted way to fix the value of one of the reliabilities and try to make the tests sequence attain the greatest value of the remaining reliability . such a test is called _ logarithmically asymptotically optimal _ ( lao ) . such optimal tests were first considered by hoeffding , examined later by csiszar and longo , tusnady ( who called such test series _ exponentially rate optimal _ ( ero ) ) , and longo and sgarro . the term lao for testing of two hypotheses was proposed by birge . amongst papers on testing associated with information theory , we can also note works of natarajan , gutman , anantharam , han and of many others . some objectives in this direction were first suggested in the original introductory article by dobrushin , pinsker and shiryaev . the achievable region of error probability exponents was examined by tuncel . the problem has common features with the issue studied in information theory on the interrelation between the rate of the code and the exponent of the error probability . in information theory the relation is called , according to shannon , the _ reliability function _ , while is named the _ -capacity _ , or the _ reliability - rate function _ , as it was introduced by haroutunian , , .
the simple but natural concept of not only separate but also simultaneous investigation of some number of objects of the same type , evidently , was first formulated by ahlswede and haroutunian for reliable testing of distributions of multiple items . but simultaneous examination of properties of many similar objects may be attractive and effective in plenty of other statistical situations . the organization of this paper is as follows . we start with the definitions and notations in the next section . in section 3 we introduce the problem of multihypotheses testing concerning one object . in section 4 we consider the reliability approach to multihypotheses testing for many independent and dependent objects . section 5 is dedicated to the problem of statistical identification under the condition of optimality . section 6 is devoted to the description of characteristics of lao hypotheses testing with permission of rejection of decision for the model consisting of one and of more independent objects . we denote finite sets by script capitals . the cardinality of a set is denoted as . random variables ( rvs ) , which take values in finite sets , are denoted by , . probability distributions ( pds ) are denoted by , , , , , . let the pd of rv , characterizing an object , be , and the conditional pd of rv for given value of state of the object be . the shannon entropy of rv with pd is : . the conditional entropy of rv for given rv with corresponding pds is : . the divergence ( kullback - leibler information , or " distance " ) of pds and on is : , and the conditional divergence of the pd and pd is : . for our investigations we use the method of types , one of the important technical tools in shannon theory . the type of a vector is a pd ( the empirical distribution ) , where is the number of repetitions of symbol in vector . the joint type of vectors and is the pd , where is the number of occurrences of the symbols pair in the pair of vectors . the conditional type of for given is the conditional pd defined by the relation for all . we denote by the set of all types of vectors in for given , by the set of all types of vectors in and by the set of all possible conditional types of vectors in for given . the set of vectors of type is denoted by and the family of vectors of conditional type for given of type by . the set of all possible pds on and pds on is denoted , correspondingly , by and . we need the following frequently used inequalities : for any type , and for any type , . the problem of optimal testing of _ multiple _ hypotheses was proposed by dobrushin , and was investigated in . fu and shen explored the case of two hypotheses when side information is absent . the problem concerning arbitrarily varying sources solved in was induced by the ideas of the paper of ahlswede . the case of two hypotheses with side information about states was considered in . in the same way as in , from the result on lao testing , the rate - reliability and the reliability - rate functions for an arbitrarily varying source with side information were obtained in . the problem of multiple hypotheses lao testing for a discrete stationary markov source of observations was solved by haroutunian . in haroutunian and grigoryan generalized results from , for multihypotheses lao testing by a non - informed statistician for an arbitrarily varying markov source . here for clarity we expose the results on multiple hypotheses lao testing for the case of the most simple invariant object . let be a finite set of values of random variable ( rv ) .
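the two quantities that drive everything below , the empirical type of a sample and the divergence between distributions , are straightforward to compute . a minimal python sketch ( names are ours ) :

```python
import numpy as np
from collections import Counter

def empirical_type(x, alphabet):
    """The type (empirical distribution) Q of a vector x: Q(a) = N(a|x)/n,
    where N(a|x) is the number of repetitions of symbol a in x."""
    n = len(x)
    counts = Counter(x)
    return np.array([counts.get(a, 0) / n for a in alphabet])

def divergence(q, p):
    """Kullback-Leibler divergence D(Q||P) = sum_a Q(a) log(Q(a)/P(a)), in
    nats; terms with Q(a) = 0 contribute 0, and D = +inf if Q puts mass
    where P has none."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0
    if np.any(p[mask] == 0):
        return np.inf
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))
```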
possible pds , , of the rv characterizing the object are known . the statistician must detect one among alternative hypotheses , using a sample of results of independent observations of the object . the procedure of decision making is a non - randomized test ; it can be defined by division of the sample space on disjoint subsets , . the set consists of all samples for which the hypothesis must be adopted . we study the probabilities of the erroneous acceptance of hypothesis provided that is true . the probability to reject hypothesis , when it is true , is also considered . the square matrix of error probabilities is the _ power _ of the tests . _ error probability exponents _ of the infinite sequence of tests , which we call _ reliabilities _ , are defined as follows : . we see from ( [ 5 ] ) and ( [ 6 ] ) that the matrix , called the _ reliabilities matrix _ of the tests sequence , is the object of our investigation . we say that a sequence of tests is lao if for given positive values of the diagonal elements of the matrix the procedure provides maximal values for all other elements of it . now we form the lao test by constructing the decision sets noted . given strictly positive numbers , , we define the following regions : and the corresponding values : . * theorem 3.1 : * _ if for the described model all conditional pds , , are different in the sense that , , , and the positive numbers , are such that the following inequalities , called compatibility conditions , hold for $m=\overline { 2 , m-1}$ , then there exists a sequence of tests , the reliabilities matrix of which is defined in and all elements of it are positive . _ when one of the inequalities is violated , then at least one element of the matrix is equal to . the proof of theorem 3.1 is postponed to the appendix . it is worth formulating the following useful property of the reliabilities matrix of the lao test . * remark 3.1 : * _ the diagonal elements of the reliabilities matrix of the lao test in each row are equal only to the element of the last column : that is , the elements of the last column are equal to the diagonal elements of the same row and due to are minimal in this row . consequently the first elements of the last column also can play a part as given parameters for the construction of a lao test _ . in ahlswede and haroutunian proposed a new aspect of statistical theory : the investigation of models with many objects . this work developed the ideas of papers on information theory , , of papers on many hypotheses testing - and of the book , devoted to research of sequential procedures solving decision problems such as ranking and identification . the problem of hypotheses testing for the model consisting of two independent and of two strictly dependent objects ( when they can not admit the same distribution ) with two possible hypothetical distributions was solved in . in specific characteristics of the model consisting of objects each independently of others following one of given probability distributions were explored . in a model composed of stochastically related objects was investigated . the result concerning two independent markov chains is presented in . in this section we expose these results . let us now consider the model with three independent similar objects .
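the decision rule behind theorem 3.1 admits a compact numerical sketch : the first regions are divergence balls around the hypothetical distributions , and the last hypothesis absorbs the rest . the fragment below , reusing empirical_type and divergence from the previous sketch , is a hedged rendering of the construction in ( [ 8 ] )( [ 10 ] ) ; tie - breaking and the exact strict / non - strict inequalities follow the spirit , not the letter , of the stripped formulas .

```python
def lao_decide(x, alphabet, G, E):
    """LAO test sketch for M hypotheses G = [G_1,...,G_M]: accept the first
    hypothesis m < M whose divergence ball D(Q||G_m) <= E_m contains the
    sample type Q; otherwise accept hypothesis M.  E holds the M-1 given
    diagonal reliabilities (the preliminarily fixed parameters)."""
    q = empirical_type(x, alphabet)
    for m, (g, e) in enumerate(zip(G[:-1], E)):
        if divergence(q, g) <= e:
            return m            # accept hypothesis m (0-based index)
    return len(G) - 1           # region of the last hypothesis
```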
for brevity we solve the problem for three objects ; the generalization of the problem for independent objects will be discussed later along the text . let , and be independent rvs taking values in the same finite set , each of them with one of hypothetical pds . these rvs are the characteristics of the objects . the random vector assumes values . let , , be a vector of results of independent observations of the family . the test has to determine the unknown pds of the objects on the base of observed data . the detection for each object should be made from the same set of hypotheses : , . we call this procedure the _ compound test _ for three objects and denote it by ; it can be composed of three individual tests , , for each of the three objects . the test , , is a division of the space into disjoint subsets , . the set , , contains all vectors for which the hypothesis is adopted . hence test is realised by division of the space into subsets , , . we denote the infinite sequence of compound tests by . when we have independent objects the test is composed of tests , , ... , . the probability of the false acceptance of the triple of hypotheses by the test provided that the triple of hypotheses is true , where , , , is : . the probability to reject a true triple of hypotheses by analogy with ( [ 5 ] ) is defined as follows : . we study the corresponding reliabilities of the sequence of tests . definitions ( [ 17 ] ) and ( [ 18 ] ) imply ( compare with ( [ 7 ] ) ) that . our aim is to analyze the reliabilities matrix of the lao test sequence for three objects . we call the test sequence lao for the model with many objects if for given positive values of a certain part of the elements of the reliabilities matrix the procedure provides maximal values for all other elements of it . let us denote by the reliabilities matrices of the sequences of tests , . the following lemma is a generalization of the lemma from . * lemma 4.1 : * _ if elements , , are strictly positive , then the following equalities hold for , , : $e_{l_1,l_2,l_3|m_1,m_2,m_3}(\phi ) = \sum\limits_{i\in [ [ 1,2,3]-k ] } e_{l_i|m_i}(\varphi^i )$ , for $m_{k}=l_{k}$ , $m_i\not = l_i$ , $k=\overline { 1,3}$ , ( 21 ) and $e_{l_1,l_2,l_3|m_1,m_2,m_3}(\phi ) = e_{l_i|m_i}(\varphi^i )$ , for $i=\overline { 1,3}$ , $m_{k}=l_{k}$ , $m_i\not = l_i$ , $k\in [ [ 1,2,3]-i]$ . ( 22 ) _ equalities are valid also if for several pairs and several . the proof of lemma 4.1 is exposed in the appendix . now we shall show how we can construct the lao test from the set of compound tests when strictly positive elements of the reliabilities matrix , and , , are preliminarily given . the following subset of tests : is distinguished by the property that when the elements , and , , of the reliabilities matrix are strictly positive . really , because , , , then in view of ( [ 7 ] ) are also strictly positive .
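since the compound test is composed of individual tests and the objects are independent , its implementation reduces to applying the single - object rule coordinatewise ; per lemma 4.1 the compound reliabilities are then the corresponding individual reliabilities or their sums . a short hedged sketch , reusing lao_decide from the previous sketch :

```python
def compound_decide(x1, x2, x3, alphabet, G, E_list):
    """Compound test Phi = (phi^1, phi^2, phi^3) for three independent
    objects: apply the single-object LAO rule to each observation vector;
    E_list holds the given diagonal reliability parameters per object."""
    return tuple(lao_decide(x, alphabet, G, E)
                 for x, E in zip((x1, x2, x3), E_list))
```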
from equalities ( [ 20 ] )( [ 22 ] ) we obtain that the noted elements are strictly positive for and . for given positive elements define the following family of decision sets of pds : . define also the elements of the reliability matrix of the compound lao test for three objects : $e^{*}_{m_1,m_2,m_3|l_1,l_2,l_3 } \stackrel{\bigtriangleup}{= } \inf\limits_{q\in r_{l_i}^{(i)}}d(q||g_{m_i})$ , for $m_i\not = l_i$ , $m_k = l_k$ , $k\in [ [ 1,2,3]-i]$ , ( 25 ) $e^{*}_{m_1,m_2,m_3|l_1,l_2,m_3 } \stackrel{\bigtriangleup}{= } \sum\limits_{i\not= k}\inf\limits_{q\in r_{l_i}^{(i)}}d(q||g_{m_i})$ , for $m_{k}=l_{k}$ , $m_i\not = l_i$ , $k=\overline { 1,3}$ , $i\in [ [ 1,2,3]-k]$ , ( 26 ) and $e^{*}_{l_1,l_2,l_3|m_1,m_2,m_3 } \stackrel{\bigtriangleup}{= } \sum\limits_{i= 1}^3\inf\limits_{q\in r_{l_i}^{(i)}}d(q||g_{m_i})$ , for $m_i\not = l_i$ , $i=\overline { 1,3}$ . ( 27 ) the following theorem is a generalization and improvement of the corresponding theorem proved in for the case . * theorem 4.1 : * _ for the considered model with three objects , if all distributions , , are different , ( and equivalently , , ) , then the following statements are valid : _ a ) when the given strictly positive elements , and , , meet the conditions ( [ 28 ] )( [ 31 ] ) , of which the last two are $e_{m , m , m|m , m , m } < \min\left [ \min\limits_{l= \overline { 1,m-1 } } e_{m , l , m|m , m , m}^{*},\,\min\limits_{l= \overline { m+1,m } } d(g_l||g_m)\right ]$ , ( 30 ) and $e_{m , m , m|m , m , m } < \min\left [ \min\limits_{l= \overline { 1,m-1 } } e_{m , m , l|m , m , m}^{*},\,\min\limits_{l= \overline { m+1,m } } d(g_l||g_m)\right ]$ , ( 31 ) then there exists a lao test sequence , the reliability matrix of which is defined in and all elements of it are positive ; b ) if even one of the inequalities is violated , then there exists at least one element of the matrix equal to . for the proof of theorem 4.1 see the appendix . when we consider the model with independent objects the generalization of lemma 4.1 takes the following form . * lemma 4.2 : * _ if elements , , are strictly positive , then the following equalities hold for _ : . for given strictly positive elements , , , , for independent objects we can find the lao test in a way similar to the case of three independent objects . * comment 4.1 : * the idea to renumber the -distributions from to and consider them as pds of one complex object offers an alternative way of testing for models with objects . we can give the diagonal elements of the corresponding large matrix and apply theorem 3.2 concerning one composite object . in this direct algorithm the number of the preliminarily given elements of the matrix would be greater ( because , ) but the procedure of calculations would be longer than in our algorithm presented above in this section . our approach to the problem gives also the possibility to define the lao tests for each of the separate objects , but the approach with renumbering of -vectors of hypotheses does not have this opportunity . at the same time , in the case of the direct algorithm there is an opportunity for the investigator to define preliminarily a greater number of elements of the matrix . in applications one of the two approaches may be used in conformity with the preferences of the investigator . some illustrations of the exposed results are given in an example concerning two objects . the set contains two elements and the following pds are given on : . as it follows from relations ( [ 24 ] )( [ 27 ] ) , several elements of the reliability matrix are functions of one of the given elements ; there are also elements which are functions of two , or three given elements . for example , in fig .
1 and fig . 2 the results of calculations of the functions and are presented . for these distributions we have and . we see that , when the inequalities ( [ 28 ] ) or ( [ 31 ] ) are violated , and . we consider characteristics of procedures of lao testing of probability distributions of two related ( _ stochastically _ , _ statistically _ and _ strictly dependent _ ) objects . we use these terms for different kinds of dependence of two objects . let and be rvs taking values in the same finite set and be the space of all possible distributions on . let be a sequence of results of independent observations of the pair of objects . first we consider the model which consists of two _ stochastically _ related objects . we use this name for the following , more general , dependence . there are given pds . the first object is characterized by rv which has one of these possible pds , and the second object is dependent on the first and is characterized by rv which can have one of the conditional pds . the joint pd of the pair of objects is , where the probability of the -vector is the following product : with and . in the somewhat particular case , when and are related _ statistically _ , , the second object depends on the index of the pd of the first object but does not depend on the value taken by the first object . the second object is characterized by rv which can have one of the conditional pds . in the third case of _ strict _ dependence , the objects and can have only different distributions from the same given family of pds , , ... , . the _ direct approach _ discussed in comment 4.1 for lao testing of pds of two related objects , consisting in consideration of the pair of objects as one composite object and then use of theorem 3.1 , is applicable for the first two cases . but now we consider also another approach . let us remark that the test can be composed of a pair of tests and for the separate objects : . denote by , and the infinite sequences of tests for the first , for the second and for the pair of objects , respectively . let and be related _ stochastically _ . for the object characterized by the non - randomized test can be determined by partition of the sample space on disjoint subsets , i.e. the set consists of vectors for which the pd is adopted . the probability of the erroneous acceptance of pd provided that is true is defined by the set . we define the probability to reject , when it is true , as follows . the corresponding error probability exponents are : . it follows from ( [ 32 ] ) and ( [ 33 ] ) that . for the construction of the lao test we assume given strictly positive numbers , and we define the regions , as in ( [ 8 ] )( [ 9 ] ) .
for the second object characterized by rv depending on , the non - randomized test , based on vectors and on the index of the hypothesis adopted for , can be given for each and by division of the sample space on disjoint subsets . let the probabilities of the erroneous acceptance for be . the corresponding reliabilities are denoted and are defined as in ( [ 18 ] ) . we can upper estimate the probabilities of the erroneous acceptance for . these upper estimates of for each we denote by . consequently we can deduce that for the new parameters are lower estimates for the reliabilities . we can introduce for and also consider the corresponding estimates of the reliabilities of test , which are the following . it is clear from ( [ 34 ] ) and ( [ 35 ] ) that for every . for given positive numbers , for and for each pair let us define the following regions and values : . we denote by the matrix of lower estimates for the elements of matrix . * theorem 4.2 : * _ if for given all conditional pds , , are different in the sense that , for all , , when the strictly positive numbers , , ... , are such that the following compatibility conditions hold , then there exists a sequence of tests , such that the lower estimates are defined in ( 4.29 )( 4.32 ) and are strictly positive . _ the inequalities , are necessary for the existence of a test sequence with a matrix of positive lower estimates having given elements on the diagonal . let us define the following subsets of for given strictly positive elements , , : . assume also that . * theorem 4.3 : * _ if all pds , , are different , that is , , , and all conditional pds , , are also different for all in the sense that , , then the following statements are valid . _ when the given strictly positive elements and , , meet the following conditions , for $m_1=\overline { 2 , m_1 - 1}$ , and $f_{l_1,m_2|l_1,m_2 } < \min\bigl [ \min\limits_{l_2= \overline { 1,m_2 - 1 } } f_{l_1,l_2|l_1,m_2}^{*},\,\min\limits_{l_2= \overline { m_2 + 1,m_2 } } \inf\limits_{q\in { \cal r}_{l_1}}d(g_{l_2|l_1}||g_{m_2|m_1}|q)\bigr]$ , for $m_2=\overline { 2 , m_2 - 1}$ , then there exists a test sequence , the matrix of lower estimates of which is defined in ( [ 31.a ] )( [ 31.e ] ) and all elements of it are positive . when even one of the inequalities ( [ 32.a ] )( [ 32.d ] ) is violated , then at least one element of the lower estimate matrix is equal to . when and are related statistically , we will have instead of ( [ 34 ] ) , ( [ 35 ] ) , and . in that case we have error probabilities for the second object .
the conditional probabilities of the erroneous acceptance of pd provided that is true , for , , are the following . the probability to reject , when it is true , is denoted as follows . thus in the conditions and in the results of theorems 4.2 and 4.3 , instead of conditional divergences , we will have just divergences , and in place of , , , , will be , , , . in that case the regions defined in , will be changed as follows : . in the case of two statistically dependent objects the corresponding regions will be . so in this case the matrix of reliabilities , will have the following elements : * theorem 4.4 : * _ if all pds , , are different , that is , , , and all conditional pds , , are also different for all in the sense that , , then the following statements are valid . _ when the given strictly positive elements and , , , meet the following compatibility conditions , for $m_1=\overline { 2 , m_1 - 1}$ , and $e_{l_1,m_2|l_1,m_2 } < \min\left [ \min\limits_{l_2= \overline { 1,m_2 - 1 } } e_{l_1,l_2|l_1,m_2}^{*},\,\min\limits_{l_2= \overline { m_2 + 1,m_2 } } d(g_{l_2|l_1}||g_{m_2|m_1})\right ]$ , for $m_2=\overline { 2 , m_2 - 1}$ , then there exists a test sequence , the matrix of which is stated above and all elements of it are positive . when even one of the compatibility conditions is violated , then at least one element of the matrix is equal to . in bechhofer , kiefer , and sobel presented investigations on sequential multiple - decision procedures . this book is concerned principally with a particular class of problems referred to as ranking problems . chapter 10 of the book by ahlswede and wegener is devoted to statistical identification and ranking . problems of distribution identification and distributions ranking for one object applying the concept of optimality developed in , , were solved in . in papers , , and identification problems for models composed of two independent , or strictly dependent , objects were investigated . in , , and , variations of the models considered in and , inspired by the pioneering paper by ahlswede and dueck and applying the concept of optimality developed in , , , were studied . first we formulate the concept of identification for one object , which was considered in . there are known possible pds , related with the object in consideration . identification gives the answer to the question whether the -th pd occurred , or not . this answer can be given on the base of a sample and by a test . more precisely , identification can be considered as an answer to the question : is the result of the testing algorithm equal to ( that is ) , or not equal ( that is ) . there are two types of error probabilities of identification for each : the probability to accept different from , when is in reality , and the probability that is accepted by the test , when is not correct . the probability coincides with the error probability of testing ( see ( 6 ) ) which is equal to . the corresponding reliability is equal to , which satisfies the equality ( [ 7 ] ) . and what is the reliability approach to identification ? it is necessary to determine the dependence of the optimal reliability upon given , which can be assigned a value satisfying conditions analogous to ( [ 15 ] ) . the result from paper is : * theorem 5.1 : * _ in the case of distinct hypothetical pds , for a given sample we define its type , and when ( see ( [ 8 ] )( [ 10 ] ) ) we accept the hypothesis .
under the condition that the a priori probabilities of all hypotheses are positive , the reliability of such identification for given is the following : _ we can accept the supposition of positivity of the a priori probabilities of all hypotheses without loss of generality , because a pd which is known to have probability , that is being impossible , must not be included in the studied family . now let us consider the model consisting of two independent objects . let the hypothetical characteristics of objects and be independent rvs taking values in the same finite set with one of pds . identification means that the statistician has to answer the question whether the pair of distributions occurred or not . now the procedure of testing for two objects can be used . let us study two types of error probabilities for each pair , . we denote by the probability that the pair is true , but it is rejected . note that this probability is equal to . let be the probability that is identified , when it is not correct . the corresponding reliabilities are and . our aim is to determine the dependence of on given . let us define for each , , the following expression : . * theorem 5.2 : * _ for the model consisting of two independent objects , if the distributions , , are different and the given strictly positive number satisfies the condition , then the reliability is defined as follows : , where and are determined by . now we will present the lower estimates of the reliabilities for lao identification for the _ dependent _ object , which can then be applied for deducing the lower estimates of the reliabilities for lao identification of two _ related _ objects . there exist two error probabilities for each : the probability to accept different from , when is in reality , and the probability that is accepted , when it is not correct . the upper estimate for is already known ; it coincides with , which is equal to . the corresponding is equal to , which satisfies the equality ( [ 19 ] ) . we determine the optimal dependence of upon given . * theorem 5.3 : * _ in the case of distinct pds , under the condition that the a priori probabilities of all hypotheses are strictly positive , for each the estimate of for given is the following : _ the result of the reliability approach to the problem of identification of the probability distributions for two _ related _ objects is the following . * theorem 5.4 : * _ if the distributions and , , , are different and the given strictly positive number satisfies the condition , , then the lower estimate of the reliability can be calculated as follows : , where and are determined respectively by ( [ 12 ] ) and ( [ 40 ] ) . _ the particular case , when and are related _ statistically _ , was studied in , . this section is devoted to the description of characteristics of lao hypotheses testing with permission of decision rejection for the model consisting of one or more objects . the multiple hypotheses testing problem with possibility of rejection of decision for an arbitrarily varying object with side information and for the model of two or more independent objects was examined by haroutunian , hakobyan and yessayan , . these works were induced by the paper of nikulin concerning two hypotheses testing with refusal to take decision .
an asymptotically optimal classification , in particular a hypotheses testing problem with rejection of decision , was considered by gutman . in this section we consider multiple statistical hypotheses testing with the possibility of rejecting to make a choice between hypotheses concerning the distribution of a discrete arbitrarily varying object . the arbitrarily varying object is a generalized model of the discrete memoryless one . let be a finite set of values of rv , and be an alphabet of states of the object . possible conditional pds of the characteristic of the object depending on values of states , are given : but it is not known which of these alternative hypotheses , , is the real pd of the object . the statistician must select one among hypotheses , or he can withdraw any judgement . it is possible for instance when it is supposed that the real pd is not in the family of given pds . an answer must be given using the vector of results of independent experiments and the vector of states of the object , , . the procedure of decision making is a non - randomized test ; it can be defined by division of the sample space for each on disjoint subsets , . the set , , consists of vectors for which the hypothesis is adopted , and includes vectors for which the statistician refuses to take a certain answer . we study the probabilities of the erroneous acceptance of hypothesis provided that is true . when the decision is declined , but hypothesis is true , we consider the following probability of error : . if the hypothesis is true , but it is not accepted , or equivalently while the statistician accepted one of the hypotheses , , , or refused to make a decision , then the probability of error is the following : . corresponding reliabilities are defined similarly to ( [ 5 ] ) : . it also follows that for every test the matrix is the reliabilities matrix of the tests sequence for the described model . we call the test lao for this model if for given positive values of certain elements of the matrix the procedure provides maximal values for the other elements of it . for the construction of the lao test positive elements , , are supposed to be given preliminarily . the optimal dependence of error exponents was determined in . this result can be easily generalized for the case of an arbitrarily varying markov source . for brevity we consider the problem for two objects ; the generalization of the problem for independent objects will be discussed along the text . let and be independent rvs taking values in the same finite set with one of pds , . these rvs are the characteristics of the corresponding independent objects . the random vector assumes values . let , , be a vector of results of independent observations of the pair of rvs . on the base of the observed data the test has to determine the unknown pds of the objects or withdraw any judgement . the selection for each object should be made from the same set of hypotheses : , . we call this procedure the compound test for two objects and denote it by ; it can be composed of two individual tests , for the corresponding objects .
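the decision rule with a rejection option admits a simple sketch . the fragment below is a hedged rendering for the memoryless case ( the state alphabet of the arbitrarily varying object is suppressed ) , reusing empirical_type and divergence from the earlier sketch : a hypothesis is accepted only if the sample type falls in exactly one divergence ball , and judgement is withdrawn otherwise .

```python
def lao_decide_with_rejection(x, alphabet, G, E):
    """Variant with refusal to decide: accept hypothesis m only if the
    sample type lies in exactly one ball D(Q||G_m) <= E_m; if no ball (or,
    in this simplified sketch, several) contains it, withdraw judgement
    (return None).  In the paper the regions are built to be disjoint."""
    q = empirical_type(x, alphabet)
    hits = [m for m, (g, e) in enumerate(zip(G, E)) if divergence(q, g) <= e]
    return hits[0] if len(hits) == 1 else None
```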
the test , , can be defined by division of the space into disjoint subsets , . the set , contains all vectors for which the hypothesis is adopted , and includes all vectors for which the test refuses to take a certain answer . hence is a division of the space into subsets , . we again denote the infinite sequences of tests by , , . let be the probability of the erroneous acceptance of the pair of hypotheses by the test provided that the pair of hypotheses is true , where , , , : . when the pair of hypotheses , is true , but we decline the decision , the corresponding probabilities of errors are : or or . if the pair of hypotheses is true , but it is not accepted , or equivalently while the statistician accepted one of the hypotheses , or refused to make a decision , then the probability of error is the following : . we study reliabilities of the sequence of tests . definitions ( [ 60 ] ) and ( [ 61 ] ) imply that . we can construct the lao test from the set of compound tests when strictly positive elements of the reliability matrix and , , are preliminarily given ( see ) . * remark 6.1 : * it is necessary to note that the problem of reliabilities investigation for lao testing of many hypotheses with possibility of rejection of decision for the model consisting of two or more independent objects can not be solved by the direct method of renumbering . " a broad class of statistical problems arises in the framework of hypothesis testing in the spirit of identification for different kinds of sources , with complete or partial side information or without it . paper is a start " . in this paper , we exposed solutions of a part of possible problems concerning algorithms of optimal testing of distributions for certain classes of one , or multiple , objects . for the same models optimal pd identification is discussed again in the spirit of the optimal dependence of error probability exponents . but these investigations can be continued in plenty of directions . some problems formulated in and , particularly , concerning the remote statistical inference formulated by berger , examined in part by ahlswede and csiszar and han and amari , still remain open . all our results concern discrete distributions ; it is necessary to study many objects with general distributions as in . for multiple objects multistage and sequential testing can also be considered . problems for many objects are present in statistics with fuzzy data , bayesian detection of multiple hypotheses and geometric interpretations of tests . _ proof of theorem 3.1 : _ the probability for can be presented as follows ( 70 ) . let us consider the sequence of tests defined by the sets . each is in one and only one of , that is . really , for , , for each let us consider arbitrary . it follows from ( [ 8 ] ) and ( [ 10 ] ) that there exists a type such that and . from ( [ 15 ] ) and ( [ 12 ] ) we have . from the definition of we see that . definitions ( [ 13 ] ) , ( [ 15 ] ) and ( [ 10 ] ) show also that . now , let us remark that for , using ( [ 1 ] ) , ( [ 2 ] ) , ( [ 4 ] )( [ 6 ] ) and ( [ 70 ] ) we can estimate as follows : . for , , , using ( [ 1 ] ) , ( [ 2 ] ) , ( [ 4 ] )( [ 6 ] ) and ( [ 70 ] ) , we can obtain similar estimates : . now let us prove the inverse inequality : according to the definition ( [ 6 ] ) , taking into account ( [ 72 ] ) , ( [ 73 ] ) and the continuity of the functional we obtain that exists and in correspondence with ( [ 12 ] ) equals . thus , , , . similarly we can obtain upper and lower bounds for , .
applying the same reasoning we get that the reliability . by the definition ( [ 7 ] ) . the proof of the first part of the theorem will be accomplished if we demonstrate that the sequence of tests is lao , that is for given and every sequence of tests for all . these conditions are equivalent for large enough to the inequalities . let us examine the sets , . this intersection can not be empty , because in that case . let us show that , , . if there exists such that and , then . when , we also obtain that . thus it follows that , which in turn according to ( [ 7 ] ) provides that . from condition ( [ 15 ] ) it follows that , for all , hence for all , which contradicts ( [ m ] ) . hence we obtain that for . the intersection is empty too , because otherwise , which contradicts ( [ 74 ] ) ; hence , . the proof of the second part of theorem 3.1 is simple . if one of the conditions ( [ 15 ] ) is violated , then from ( [ 12 ] )( [ 14 ] ) it follows that at least one of the elements is equal to 0 . for example , let in ( [ 15 ] ) the -th condition be violated . it means that , then there exists such that . from the latter and ( [ 12 ] ) we obtain that . _ proof of lemma 4.1 : _ it follows from the independence of the objects that the error probability factorizes into terms of the form $\alpha_{l_i|m_i}(\varphi_{n}^i)$ , $i\in [ [ 1,2,3]-k]$ , and $\left ( 1-\alpha_{l_k|m_k}(\varphi_{n}^k)\right)$ , $k\in [ [ 1,2,3]-i]$ . remark that here we consider also the probabilities of right ( not erroneous ) decisions . because are strictly positive , the error probability tends to zero when . according to this fact we have . from definitions ( [ 18 ] ) , equalities ( [ 13.a ] )( [ 13.c ] ) , applying ( [ 14.a ] ) we obtain relations ( [ 20 ] )( [ 22 ] ) . _ proof of theorem 4.1 : _ the test , where , are lao tests of the objects , belongs to the set . our aim is to prove that such is a compound lao test . conditions ( [ 28 ] )( [ 31 ] ) imply that inequalities analogous to ( [ 15 ] ) hold simultaneously for the tests for the three separate objects . according to remark 3.1 for the lao test , , we obtain that ( [ 123 ] ) meets conditions ( [ 15 ] ) of theorem 3.1 for each test , , , hence it follows from ( [ 6 ] ) that are also strictly positive . thus for a test the conditions of lemma 4.1 are fulfilled and the elements of the reliability matrix coincide with elements of matrix , , or sums of them . then from the definition of the lao test it follows that , then . consequently is a lao test and verifies . b ) when even one of the inequalities is violated , then at least one of the inequalities ( [ 123 ] ) is violated . then from theorem 3.2 one of the elements is equal to zero . suppose , then the elements . ahlswede , r. ( 1979 , 1980 ) . coloring hypergraphs : a new approach to multi - user source coding . _ journal of combinatorics , information and system sciences _ . i , vol . 4 , pp . 76 - 115 , ii , vol . 5 , no . 3 , pp . 220 - 268 . ahlswede , r. , aloyan , e. and haroutunian , e. ( 2006 ) . on logarithmically asymptotically optimal hypothesis testing for arbitrarily varying source with side information . _ lecture notes in computer science _ , vol . 4123 , " general theory of information transfer and combinatorics " , springer verlag , pp . 457 - 461 . ahlswede , r. and haroutunian , e. ( 2006 ) . on logarithmically asymptotically optimal testing of hypotheses and identification . _ lecture notes in computer science _ , vol . 4123 , " general theory of information transfer and combinatorics " , springer verlag , pp . 462 - 478 . csiszar , i. ( 1998 ) . information - theoretic methods in probability and statistics .
Ahlswede, R. (1979, 1980). Coloring hypergraphs: a new approach to multi-user source coding. Journal of Combinatorics, Information and System Sciences, part I: vol. 4, no. 1, pp. 76-115; part II: vol. 5, no. 3, pp. 220-268.
Ahlswede, R., Aloyan, E. and Haroutunian, E. (2006). On logarithmically asymptotically optimal hypothesis testing for arbitrarily varying source with side information. Lecture Notes in Computer Science, vol. 4123, "General Theory of Information Transfer and Combinatorics", Springer Verlag, pp. 457-461.
Ahlswede, R. and Haroutunian, E. (2006). On logarithmically asymptotically optimal testing of hypotheses and identification. Lecture Notes in Computer Science, vol. 4123, "General Theory of Information Transfer and Combinatorics", Springer Verlag, pp. 462-478.
Csiszár, I. (1998). Information theoretic methods in probability and statistics. Shannon Lecture at ISIT, Ulm, Germany, 1997; IEEE Information Theory Society Newsletter, no. 1, pp. 1 and 21-30.
Dobrushin, R., Pinsker, M. and Shiryaev, A. (1963). Application of the notion of entropy in the problems of detecting a signal in noise (in Russian). Lithuanian Mathematical Transactions, no. 1, pp. 107-122.
Haroutunian, E. (1967). Upper estimate of transmission rate for memoryless channel with countable number of output signals under given error probability exponent (in Russian). 3rd All-Union Conference on Theory of Information Transmission and Coding, Uzhgorod; Publishing House of the Uzbek Academy of Sciences, Tashkent, pp. 83-86.
Haroutunian, E. (1988). Many statistical hypotheses: interdependence of optimal tests' error probability exponents (in Russian). Abstract of the report on the 3rd All-Union School-Seminar "Program-Algorithmical Software for Applied Multi-Variate Statistical Analysis", Tsakhkadzor, part 2, pp. 177-178.
Haroutunian, E. (1989). Asymptotically optimal testing of many statistical hypotheses concerning Markov chain (in Russian). 5th International Vilnius Conference on Probability Theory and Mathematical Statistics, vol. 1 (A-L), pp. 202-203.
Haroutunian, E. and Grigoryan, N. (2007). On reliability approach for testing of many distributions for pair of Markov chains. Transactions of IIAP of NAS of RA, Mathematical Problems of Computer Science, pp. 96-99.
Haroutunian, E. and Grigoryan, N. (2009). On arbitrarily varying Markov source coding and hypothesis LAO testing by non-informed statistician. Proceedings of the IEEE International Symposium on Information Theory, Seoul, South Korea, pp. 981-985.
Haroutunian, E. and Hakobyan, P. (2004). On multiple hypotheses testing by informed statistician for arbitrarily varying object and application to source coding. Transactions of IIAP of NAS of RA and of YSU, Mathematical Problems of Computer Science, pp. 36-46.
Haroutunian, E. and Hakobyan, P. (2010). Remarks about reliable identification of probability distributions of two independent objects. Transactions of IIAP of NAS of RA, Mathematical Problems of Computer Science, vol. 33, pp. 91-94.
Haroutunian, E., Hakobyan, P. and Yessayan, A. (2011). Many hypotheses LAO testing with rejection of decision for arbitrarily varying object. Transactions of IIAP of NAS of RA, Mathematical Problems of Computer Science, pp. 77-85.
Haroutunian, E., Hakobyan, P. and Yessayan, A. (2011). On multiple hypotheses LAO testing with rejection of decision for many independent objects. Proceedings of the International Conference CSIT 2011, pp. 141-144.
Haroutunian, E., Haroutunian, M. and Harutyunyan, A. (2008). Reliability criteria in information theory and in statistical hypothesis testing. Foundations and Trends in Communications and Information Theory, vol. 4, no. 2-3.
Haroutunian, E. and Yessayan, A. (2007). On logarithmically asymptotically optimal hypothesis testing for pair of statistically dependent objects. Transactions of IIAP of NAS of RA, Mathematical Problems of Computer Science, pp. 97-103.
Haroutunian, E. and Yessayan, A. (2008). On optimal hypotheses testing for pair of stochastically coupled objects. Transactions of IIAP of NAS of RA, Mathematical Problems of Computer Science, pp. 49-59.
Haroutunian, E., Yessayan, A. and Hakobyan, P. (2010). On reliability approach to multiple hypotheses testing and identification of probability distributions of two stochastically coupled objects. International Journal "Information Theories and Applications", pp. 259-288.
Haroutunian, E. and Yessayan, A. (2011). On reliability approach to multiple hypotheses testing and to identification of probability distributions of two stochastically related objects. IEEE International Symposium on Information Theory, Proceedings, pp. 2671-2675.
Longo, G. and Sgarro, A. (1980). The error exponent for the testing of simple statistical hypotheses: a combinatorial approach. Journal of Combinatorics, Information and System Sciences, vol. 5, no. 1, pp. 58-67.
Nikulin, M. (1986). On one result of L. N. Bolshev from the theory of statistical hypothesis testing (in Russian). Studies on Mathematical Statistics, Notes of Scientific Seminars of the Saint-Petersburg Branch of the Mathematical Institute, pp. 129-137.
Yessayan, A. (2009). On reliability approach to identification of probability distributions of two statistically dependent objects. Transactions of IIAP of NAS of RA, Mathematical Problems of Computer Science, vol. 32, pp. 65-69.
We survey a series of investigations of optimal testing of multiple hypotheses concerning various multiobject models. These studies are a striking instance of the application of methods and techniques developed in Shannon information theory to the solution of typical statistical problems.
A basic question in random matrix theory is to ask for the probability distribution of the number of real eigenvalues for an ensemble of random matrices with real entries. With the ensemble made up of standard Gaussian random matrices, i.e. in the circumstance that each element is independently chosen as a real standard Gaussian, Edelman was the first to obtain results on this problem. The approach taken centred on knowledge of the explicit functional form of the probability density function (pdf) for the event that there are k real eigenvalues, denoted {λ_l}_{l=1}^{k}, and (n−k)/2 complex conjugate pairs of eigenvalues, denoted {x_j ± i y_j} with y_j > 0 (the fact that the complex eigenvalues occur in complex conjugate pairs implies that k must have the same parity as n). Thus it was shown that this pdf is equal to
\[
\frac{1}{2^{n(n+1)/4}\prod_{l=1}^{n}\Gamma(l/2)}\,
\big|\Delta\big(\{\lambda_l\}_{l=1}^{k}\cup\{x_j\pm iy_j\}_{j=1}^{(n-k)/2}\big)\big|\,
\prod_{j=1}^{k}e^{-\lambda_j^{2}/2}\prod_{j=1}^{(n-k)/2}2\,\mathrm{erfc}(\sqrt{2}\,y_j)\,e^{y_j^{2}-x_j^{2}}, \tag{3.1}
\]
where Δ denotes the Vandermonde determinant, Δ({z_i}) = ∏_{j<l}(z_l − z_j) (see also the works cited). Integrating ([3.1]) over the eigenvalues gives the probability p_{n,k} that there are exactly k real eigenvalues. The simplest case to compute is k = n, when all eigenvalues are real, for which the probability was found to equal 2^{-n(n-1)/4}.

Questions relating to the probability that all eigenvalues are real for random matrices with real entries occur in applications. Consider first a p × n × 2 tensor structure, represented as a pair of real matrices. As reviewed in the literature, it is of interest to find factor matrices A ∈ ℝ^{p×r}, B ∈ ℝ^{n×r} and C ∈ ℝ^{2×r} of minimal inner dimension r realising a decomposition of the tensor; the typical-rank question turns out to be governed by the probability that all eigenvalues of an associated real Gaussian matrix are real. The first of our main results concerns the generating function of the probabilities p_{n,k}.

Theorem t1. Let [ζ^k] denote the coefficient of ζ^k in the power series expansion of a polynomial in ζ. For n even,
\[
p_{n,k}\;=\;[\zeta^{k}]\,\det\big[b_{j,l}(\zeta)\big]_{j,l=1,\dots,n/2}, \tag{a1}
\]
where, up to the overall normalisation fixed by Σ_k p_{n,k} = 1, the entries b_{j,l}(ζ) = ζ α_{2j-1,2l} + β_{2j-1,2l} are affine in ζ, with α and β the quantities ([aa]) and ([be1]) specified in Section [sec:skew]; the odd case is expressed similarly in terms of a bordered determinant of size (n+1)/2 ([14a]).

A formula closely related to Theorem t1 in the case m = 1 was derived by Akemann and Kanzieper, and this working was soon after refined to obtain a formula equivalent to ([a1]). Also for this case, Forrester and Nagao gave a result more general than ([a1]), applying to a real random matrix formed from a general linear combination of Gaussian symmetric and antisymmetric matrices.

We now turn our attention to the other primary aim of our work. This relates to the statistical state formed by the eigenvalues of the product P_m = X_m ⋯ X_1 ([yx]). In the complex case, it has been remarked that the statistical state is a determinantal point process. In the real case, it is known from the work of Ipsen and Kieburg that the eigenvalue correlations form instead a Pfaffian point process. Thus, considering for definiteness the real eigenvalues, one now has
\[
\rho_{(k)}^{\rm rr}(x_1,\dots,x_k)={\rm Pf}\,\big[\mathbf K^{\rm rr}(x_j,x_l)\big]_{j,l=1,\dots,k},\qquad
\mathbf K^{\rm rr}(x,y)=\begin{bmatrix} D(x,y) & S(x,y)\\ -S(y,x) & \tilde I(x,y)\end{bmatrix}, \tag{k0}
\]
where D(x,y) and \tilde I(x,y) are antisymmetric functions of x and y. A concern of the present paper is to compute the explicit form of the correlation kernel in ([k0]) in the case of the real eigenvalues of ([yx]) for real standard Gaussian matrices, and also for the case of the complex eigenvalues.
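All of the probabilities discussed here can be estimated by direct simulation. The following minimal Monte Carlo sketch (matrix size, product length and trial count are arbitrary choices) tallies the number of real eigenvalues of the product; the test on the imaginary parts relies on real LAPACK eigensolvers returning real eigenvalues with zero imaginary part, with a small relative tolerance as a safeguard.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_eig_distribution(n, m, trials=20000, tol=1e-9):
    """Empirical distribution of the number k of real eigenvalues of a
    product of m independent n x n standard real Gaussian matrices."""
    counts = np.zeros(n + 1)
    for _ in range(trials):
        P = np.eye(n)
        for _ in range(m):
            P = P @ rng.standard_normal((n, n))
        lam = np.linalg.eigvals(P)
        k = int(np.sum(np.abs(lam.imag) <= tol * np.abs(lam)))
        counts[k] += 1
    return counts / trials

# For n = m = 2, the entry at k = 2 estimates p_{2,2}; the tabulated
# exact value (Table [table:k-real] below) is pi/4 ~ 0.7854.
print(real_eig_distribution(2, 2))
```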
In this paper we will see that these correlation kernels possess many similarities with other results for products of random matrices. For example, the kernel for the Pfaffian point process specifying the scaled statistical state about the origin of the real eigenvalues of products of real Ginibre matrices is given in terms of Meijer G-functions. In the simplest case of the one-point function, the resulting functional form is very succinct.

Theorem t2. Define
\[
w_r(\lambda)=\int_{-\infty}^{\infty}\!\cdots\!\int_{-\infty}^{\infty}\Big[\prod_{i=1}^{m}\frac{e^{-(\lambda^{(i)})^{2}/2}}{\sqrt{2\pi}}\,d\lambda^{(i)}\Big]\,\delta\big(\lambda-\lambda^{(1)}\cdots\lambda^{(m)}\big).
\]
We have that the limiting density of the real eigenvalues in the neighbourhood of the origin is given by an explicit expression in terms of w_r ([r-local]).

For singular values of products of complex Ginibre matrices, it is similarly the case that the kernel for the scaled determinantal point process in the neighbourhood of the origin can be expressed in terms of Meijer G-functions; see also the recent review. Moreover, for fixed n, knowledge of the real-to-real eigenvalue correlations gives information about the moments of the distribution of the number of real eigenvalues. In particular, integration of the spectral density (one-point function) gives the expected number of real eigenvalues.

The rest of this paper is organised as follows. In Section [sec:jpdf] we find the joint eigenvalue pdf for a Gaussian product matrix with a given number of real eigenvalues. In Section [sec:skew] we introduce the generalised partition function and find the skew-orthogonal polynomials; we combine these results with the joint eigenvalue pdf to prove Theorem t1. Section [sec:corr] focuses on the real-to-real and the complex-to-complex eigenvalue correlations. In particular, we study local and global scaling limits for the spectral densities, and use the real spectral density to compute the expected number of real eigenvalues. The final section briefly sketches how all these results may be extended to products of rectangular matrices.
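As a numerical aside before the derivations, the weight w_r of Theorem t2 can be evaluated directly. The sketch below assumes (as the m = 1 reduction later in the text suggests) that with the present normalisation w_r is the density of a product of m independent standard normal variables, expressible as \( w_r(\lambda) = (2\pi)^{-m/2}\,G^{m,0}_{0,m}\big(\lambda^2/2^m\,\big|\,0,\dots,0\big) \); this identification is our assumption, to be checked against ([c1a]).

```python
from mpmath import mp, meijerg, quad, pi

m = 2  # number of factors in the product

def w_r(lam):
    """Density of a product of m independent standard normals,
    written as a Meijer G-function (assumed normalisation)."""
    z = lam**2 / 2**m
    return meijerg([[], []], [[0] * m, []], z) / (2 * pi) ** (mp.mpf(m) / 2)

# Sanity check: w_r should integrate to one over the real line
# (the integrable logarithmic singularity at 0 is handled by the split).
print(quad(w_r, [-mp.inf, 0, mp.inf]))   # ~ 1.0
```

For m = 1 the same expression reduces to the standard Gaussian density, consistent with the classical result ([3.1]).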
Our first task is to find the explicit functional form for the eigenvalue pdf of the random matrix product ([yx]) in the case that each X_i is an independent standard real Gaussian matrix. With this specification the joint probability measure for (X_1, ..., X_m) is equal to
\[
\prod_{i=1}^{m}\frac{e^{-\operatorname{Tr}(X_iX_i^{T})/2}}{(2\pi)^{n^{2}/2}}\,(dX_i). \tag{z1}
\]
Actually this task, extended to general bi-orthogonally invariant ensembles, has already been addressed by Ipsen and Kieburg. However, the workings therein are not sufficient for all our purposes. In particular, proportionality constants are ignored, meaning that it is not possible to proceed to derive the formulas of Theorem t1 for the probabilities p_{n,k}. These normalisation constants were included in the thesis, but the pdf was given there in terms of 2 × 2 matrices, which is impractical for our purpose. Furthermore, that working, which generalises the strategies used for the cases m = 1 and m = 2 respectively, treats the real and complex eigenvalues on an equal footing, whereas we prefer to proceed in the way used for m = 1, which distinguishes the real and complex eigenvalues from the outset. Below we give a more practical formulation of the joint eigenvalue pdf.

Theorem t3. Let w_r(λ) be as in Theorem t2, referred to as the real (or one-point) weight function, and let w_c(x, y) be the complex (or two-point) weight function, obtained by evaluating
\[
h(G)=\int\Big[\prod_{i=1}^{m}\frac{e^{-\operatorname{Tr}(G^{(i)}(G^{(i)})^{T})/2}}{(2\pi)^{2}}\,(dG^{(i)})\Big]\,\delta\big(G-G^{(1)}\cdots G^{(m)}\big)
\]
on 2 × 2 matrices with eigenvalues x ± iy (see ([l3a])). Consider the product P_m. Given that there are k real eigenvalues (k of the same parity as the matrix dimension n), the joint eigenvalue pdf is
\[
\frac{1}{C}\,\big|\Delta\big(\{\lambda_l\}_{l=1}^{k}\cup\{x_j\pm iy_j\}_{j=1}^{(n-k)/2}\big)\big|\,\prod_{j=1}^{k}w_r(\lambda_j)\prod_{j=k+1}^{(n+k)/2}w_c(x_j,y_j), \tag{3.1a}
\]
with Δ as above and a normalisation C independent of the eigenvalues (made explicit in Section [sec:skew]).

The starting point is to use a generalised real Schur decomposition to triangularise the matrices which appear in the product. Assuming that the product matrix has k real eigenvalues, the decomposition states that for invertible matrices (Gaussian matrices are invertible almost surely) we may write, as in Appendix A.26 of the thesis,
\[
X_i = Q_i\,(\Lambda_i+T_i)\,Q_{i+1}^{-1},\qquad i=1,\dots,m,\qquad Q_{m+1}=Q_1.
\]
Here each Q_i is a real orthogonal matrix lying in the set of matrices in O(n) with the first entry in each column positive. Each Λ_i is a (block) diagonal matrix with the first k diagonal entries scalars and the next (n−k)/2 block entries 2 × 2 matrices, while each T_i is a strictly upper triangular matrix consisting of independent Gaussian random variables. The generalised Schur decomposition may be verified by applying an ordinary Schur decomposition on the product matrix itself and then using (partial) QR decompositions on the factors recursively (see the appendix cited above for details). We stress that while it is possible to choose m − 1 of the matrices Λ_i to be strictly diagonal rather than block diagonal (due to the QR decompositions), we do not do so, as it would complicate the derivation of the Jacobian.

For the following it will be convenient to introduce the product Λ = Λ_1 ⋯ Λ_m, which again is a block diagonal matrix. The first k diagonal entries are scalars, λ_1, ..., λ_k, while the latter entries are 2 × 2 matrices. With this notation, the Jacobian for the above change of variables reads as a product over pairs of blocks of differences of the corresponding eigenvalues, where for the j-th entry of Λ this means the scalar λ_j for j = 1, ..., k, and the two complex eigenvalues of the 2 × 2 block for j > k. Thus, using the notation x_j ± i y_j for the complex eigenvalues (the same notation as used by Edelman, Eq. (6) there), more compactly we may write the Jacobian as |Δ({λ_l}_{l=1}^{k} ∪ {x_j ± i y_j}_{j=1}^{(n−k)/2})|, with the Vandermonde determinant defined as in ([3.1]). For the weight function in ([z1]) we have a factorisation over the Λ_i, T_i and Q_i dependence; we can integrate out the dependence on the T_i and the Q_i, the latter contributing a constant. Using all the above results, it follows that for a given k the joint probability measure for the eigenvalues is equal to |Δ| times m-fold integrals over the blocks of the Λ_i, the scalar blocks producing the factors w_r(λ_j) and the 2 × 2 blocks a matrix-valued analogue of w_c. We have, at this point, not yet explicitly introduced the constraint that the eigenvalues of each 2 × 2 block are not real and thus are consequently a complex conjugate pair; for this reason our expression so far resembles Eq. (4.26) of the thesis.
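The role of the real Schur form can be seen concretely by Schur-decomposing the product itself: the real Schur form is quasi-upper-triangular, with 1 × 1 diagonal blocks carrying the real eigenvalues and 2 × 2 blocks the complex conjugate pairs. A small sketch follows; the sizes are arbitrary.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
n, m = 4, 3
P = np.linalg.multi_dot([rng.standard_normal((n, n)) for _ in range(m)])

# T is quasi-upper-triangular: zero subdiagonal entries delimit the
# 1x1 blocks (real eigenvalues) from the 2x2 blocks (complex pairs).
T, Q = schur(P, output='real')
sub = np.diag(T, k=-1)
real_eigs = [T[i, i] for i in range(n)
             if (i == 0 or sub[i - 1] == 0) and (i == n - 1 or sub[i] == 0)]
print(sorted(real_eigs))
print(sorted(z.real for z in np.linalg.eigvals(P) if z.imag == 0))
```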
In order to explicitly impose our constraint that the product matrix has exactly k real eigenvalues, we suppose an orthogonal similarity transformation has been used to bring each 2 × 2 block into the form
\[
\begin{bmatrix} x_j & b_j\\ -c_j & x_j \end{bmatrix},\qquad b_j,\,c_j>0.
\]
The eigenvalues are then x_j ± i y_j with y_j = (b_j c_j)^{1/2}, and we know too (see e.g. the proofs of Props. 15.10.1 and 15.10.2 of the text cited) that changing variables from the elements of the block to (x_j, b_j, c_j) together with the angle parametrising the orthogonal similarity transformation introduces the Jacobian computed there. The fact that the integrand in ([l3]) is invariant under real orthogonal transformations allows us to simplify further. Firstly, we may integrate out the angle, which contributes an extra constant factor. Secondly, we may replace the matrix by the diagonal matrix of its singular values, δ_j and γ_j say. In terms of the variables (x_j, b_j, c_j) it is straightforward to compute that the singular values are given by ([l3a]). Combining these results completes the proof.

Due to the relatively involved expression for the two-point weight, it might be beneficial to briefly expand on the simplest cases, m = 1 and m = 2, where explicit expressions are known. For m = 1 the joint pdf ([3.1a]) must, of course, reduce to the classical result ([3.1]). Inspection of the definitions shows that the integrations therein are immediate for m = 1 due to the delta functions. In the real case we then read off that w_r(λ) = e^{−λ²/2}/(2π)^{1/2}. In the complex case, substituting in ([l3a]) gives, up to normalisation,
\[
w_c(x,y)\;\propto\;e^{y^{2}-x^{2}}\operatorname{erfc}\big(\sqrt2\,y\big),
\]
where the second equality first appeared in the literature, albeit out by a factor of 2 as remarked elsewhere. Substituting these evaluations indeed reproduces ([3.1]).

Returning now to the case m = 2, the Meijer G-function ([c1a]) is a modified Bessel function; to simplify ([l3a]) requires simplifying ([l3]). For this purpose, and without yet restricting m, we introduce the real 2 × 2 matrices M^{(i)} = G^{(1)} ⋯ G^{(i)}, with M^{(0)} equal to the identity matrix. A Gaussian semigroup property allows the integration over G^{(m)} to be carried out in ([l3]) using the delta function, replacing it by a conditional Gaussian factor
\[
\frac{e^{-\frac12\operatorname{Tr}\big((M^{(m-1)}(M^{(m-1)})^{T})^{-1}GG^{T}\big)}}{\sqrt{2\pi^{3}}}
\]
(this is the two-by-two matrix version of Eq. (2.20) of the reference). A further change of variables A^{(i)} = M^{(i)}(M^{(i)})^{T} for each i shows that the same quantity can be written as an integral against
\[
\frac{e^{-\frac12\operatorname{Tr}\big((A^{(m-1)})^{-1}GG^{T}\big)}}{\sqrt{2\pi^{3}}},
\]
where the integration is over positive-definite real symmetric 2 × 2 matrices A^{(1)}, ..., A^{(m-1)}.

In the case m = 2 we can also express the integral in terms of modified Bessel functions.

Lemma le1. We have that the m = 2 evaluation of the above integral equals
\[
8\int_{1}^{\infty}\frac{s}{(s^{2}-1)^{1/2}}\,K_{0}(s\mu_{+})\,K_{0}(s\mu_{-})\,ds.
\]
Proof sketch: write A for the positive definite matrix A^{(1)}; the constraint is then equivalent to a condition on its determinant, and expressing this as a delta function constraint allows the integral to be written in a form where the working is elementary. We first integrate over the off-diagonal entry, change variables in the diagonal entries, and integrate over the remaining angle; the last step is the change of variables leading to the variable s.

Alternative expressions for the m = 2 two-point weight are known. One, which involves not the K Bessel function but rather the I Bessel function, is based on changing variables to the eigenvalues and eigenvectors of A, and on using the matrix integration formula for the integral over Haar measure of the orthogonal group restricted to matrices with the first entry of each column positive. Another, based on working similar to that used in the proof of Lemma le1 but starting from ([l3b]) rather than ([l3c]), shows that there is some advantage in the form ([2.17]), due to its functional dependence on μ_+ and μ_-, which according to ([l3a]) are given in terms of x and y. Recalling the definition of the determinant variable in ([l3c]), substituting in ([l3a]) and using the integral in ([wc1]) to integrate over the angle, we obtain the explicit m = 2 two-point weight. In the following we will see that it is possible to calculate the probability of finding exactly k real eigenvalues without such explicit knowledge of the two-point weight function; we note these forms here to make contact with the existing literature, and as a reference for a comment in Section [sec:c-eigenvalues]. Finally, we note that an important difference compared to the result presented in Eq. (4.26) of the thesis is the shift from the two-by-two matrix weight function to the scalar weight w_c(x, y), which will be essential in the remaining sections.
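The integral of Lemma le1 is straightforward to evaluate numerically; the substitution s = cosh(t) removes the inverse square-root singularity at s = 1. In the sketch below the arguments μ± are arbitrary test values.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

def lemma_le1(mu_plus, mu_minus):
    """8 * int_1^inf s (s^2 - 1)^(-1/2) K_0(s mu_+) K_0(s mu_-) ds,
    evaluated after the substitution s = cosh(t)."""
    f = lambda t: np.cosh(t) * k0(mu_plus * np.cosh(t)) * k0(mu_minus * np.cosh(t))
    # integrand decays like exp(-(mu_+ + mu_-) cosh t); 30 is ample
    val, _ = quad(f, 0.0, 30.0)
    return 8.0 * val

print(lemma_le1(1.0, 0.5))
```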
Let us denote the joint pdf ([3.1a]) by Q(P_m), and define the generalised partition function for k real and (n−k)/2 complex conjugate pairs of eigenvalues by
\[
Z_{k,(n-k)/2}[u,v]=\prod_{j=1}^{k}\int_{\mathbb R}d\lambda_j\,u(\lambda_j)\prod_{l=1}^{(n-k)/2}\int_{\mathbb R\times\mathbb R_{+}}dx_l\,dy_l\,v(x_l,y_l)\ \mathcal Q(P_m).
\]
We have that with u = v = 1 the generalised partition function is the probability p_{n,k} of finding k real eigenvalues and (n−k)/2 complex conjugate pairs of eigenvalues. Functional differentiation of
\[
Z_n[u,v]:=\sum_{k=0}^{n}Z_{k,(n-k)/2}[u,v],
\]
where the sum is restricted to k of the same parity as n, allows the correlation functions to be computed; see e.g. the cited references.

Independent of the specific functional form of w_r and w_c in ([3.1a]), an observation of Sinclair tells us that, due to the product of differences, the method of integration over alternate variables implies that Z_n[u,v] admits a Pfaffian evaluation, with entries α_{j,l} and β_{j,l} defined as in Theorem t1 and given by ([aa]) and ([be1]). The matrix entries involve a family of monic polynomials {q_j} which may be chosen at our convenience; the efficient choice is that of skew-orthogonal polynomials. From the definition of a determinant we have an expansion as a sum over permutations σ, weighted by the parity sgn(σ) and with Kronecker deltas selecting matching indices. Averaging over the Gaussian entries using ([f2]) shows that the only non-zero term comes from the identity permutation, and furthermore that this average is equal to unity. This establishes the even-degree case of the skew-orthogonal polynomials in Proposition prop:poly.
For the odd polynomials, multiplication by λ shows that a non-zero value will appear after averaging when there is a single monomial in the relevant expansion. We see from ([f1b]) that this is only possible in the case of the identity permutation, and that we require the coefficient of the subleading monomial therein. The final equality then follows by noting that the expansion consists of a sum of terms which are monomials in the elements of the Gaussian matrices, and that due to ([f2]) the only terms that survive this averaging after squaring are the perfect squares, which contribute unity. Substituting this result establishes the odd-degree case in Proposition prop:poly.

It remains to establish the normalisation ([2m]). On this point, we first note that from the meaning of Z_{k,(n−k)/2}[u,v]|_{u=v=1} as a probability, summing over k gives unity, with the constant specified by ([l3d1]). On the other hand, it follows from ([3.33]) that for n even
\[
\sum_{k=0}^{n}Z_{k,(n-k)/2}[1,1]=\Big(\frac{1}{2^{n(n+1)/4}\prod_{l=1}^{n}\Gamma(l/2)}\Big)^{m}\,{\rm Pf}\,[\alpha_{j,l}+\beta_{j,l}]_{j,l=1,\dots,n}.
\]
Setting u = v = 1 and using the skew-orthogonal polynomials, the right-hand side can be evaluated, and ([2m]) follows.

Remark mario-remark. Examination of the above proof shows that it relies only on the invariance of a single matrix entry under reflection, which already implies the stated skew-orthogonality relations.

It has already been remarked below the definition of the generalised partition function that with u = v = 1 this quantity can be interpreted as the probability that, for the ensemble of matrices specified by ([yx]) with each factor an n × n real standard Gaussian, there are exactly k real eigenvalues. This assumes n and k have the same parity; if not, the probability is zero. According to Proposition u1, these probabilities can be written as Pfaffians. Let us suppose the polynomials therein are furthermore even (odd) when their degree is even (odd). We then know, by the symmetry of the integrands, that each α_{j,l} vanishes unless the parity of j and l is opposite. Furthermore, making use of the fact that the resulting matrix is antisymmetric allows the Pfaffian to be written as a determinant of half the size, telling us that for n even
\[
\sum_{k}\zeta^{k}p_{n,k}\;\propto\;\det\big[(\zeta\,\alpha_{2j-1,2l}+\beta_{2j-1,2l})\big|_{u=v=1}\big]_{j,l=1,\dots,n/2}, \tag{s5}
\]
and for n odd
\[
\sum_{k}\zeta^{k}p_{n,k}\;\propto\;\det\Big[\big[(\zeta\,\alpha_{2j-1,2l}+\beta_{2j-1,2l})\big|_{u=v=1}\big]\ \ \big[\mu_{2j-1}\big]\Big]_{\substack{j=1,\dots,(n+1)/2\\ l=1,\dots,(n-1)/2}}. \tag{s6}
\]

We are now well placed to establish ([a1]) and ([14a]). We choose the polynomials as the skew-orthogonal polynomials ([1m]), with the explicit value of the normalisation given by ([2m]). Thus we have been able to eliminate the dependence on β_{j,l}, which from the definition ([be1]) involves the weight w_c, a quantity which from ([l3a]) is not in general known in terms of explicit special functions. The remaining quantity α_{j,l} is specified by ([aa]), and the weight therein is given as a Meijer G-function according to ([c1a]). In fact this very same quantity, up to a proportionality, has appeared in the earlier study (Proposition 3 there), and we read off its evaluation ([s1]), where we use the definition ([aa]) with the lowest-order odd skew-orthogonal polynomial, which is a monomial. Substituting ([s1]) in ([s0]), and substituting the result in turn in ([s5]), we obtain after minor manipulation the formula ([a1]). To deduce ([14a]) we require an additional evaluation, also contained in Proposition 3 of the same study, and similarly substitute in ([s6]).

For m = 1 the probabilities p_{n,k} have been known since the late nineties, and they are all of the form a + b√2, where a and b are rational numbers. Tabulations can be found in Table 5 and Table 2 of the references cited.
Recently, an evaluation of the Meijer G-function in ([aa]) as a summation over a linear combination of elementary terms has been given by Kumar, and this was used to obtain closed forms which allow us to get explicit expressions for the probabilities p_{n,k} in the case m = 2. Note in particular that the all-real probability is then of the form of a power of π times a rational number, a feature which was conjectured earlier. Substituting in ([a1]) and ([14a]) in the case m = 2 makes the structure of the probabilities explicit: these are all polynomials in π with rational coefficients; probabilities for low values of n are tabulated in Table [table:k-real]. It is worth noting that similar probabilities for the real spherical and the truncated orthogonal ensembles are also given as polynomials in such arithmetic constants; see the references therein for an extensive summary.

Beyond the cases m = 1 and m = 2, evaluation formulas for the Meijer G-function in ([aa]) are challenging. In addition to the contour integral representation, we may also write the Meijer G-function as an m-fold integral on the real line whose innermost factor is \( x_1^{k}/(1+x_1)^{j+k-1/2} \), which may be checked to agree with ([aa]) for m = 1. Such m-fold integral representations give a relation to products of random scalars. However, explicit expressions in terms of elementary functions remain unknown for m ≥ 3.

With an explicit method for calculating the probability of finding k real eigenvalues, it seems natural to ask for different types of number statistics. A prime example would be the expected number of real eigenvalues. Albeit such expectation values may be calculated using Theorem t1, we will see in Section [sec:r-eigenvalues] that the spectral density of the real eigenvalues can be used to obtain a more efficient formula.

The interest in real eigenvalue statistics of course extends beyond the expected number of real eigenvalues. Another common question is to ask for extreme value statistics, i.e. the probability that there are abnormally many (or few) real eigenvalues. As mentioned in the introduction, the probability that all eigenvalues are real has already been studied, which led to the remarkable conclusion that this probability tends to unity as the number of factors m tends to infinity. It is more challenging to ask for the probability of finding only a few real eigenvalues in the large-n limit, say the probability p_{n,0} that an even-dimensional product matrix has no real eigenvalues. A step in this direction was taken previously where, using the relation to the Brownian annihilation process, the first two terms of the large-n asymptotics of the probability that there are no real eigenvalues in an interval near the origin were computed for the real Ginibre ensemble (m = 1). It was realised by Kanzieper et al. that, heuristically at least, this result implies an exponential decay of p_{n,0} for large n, with rate expressed in terms of the Riemann zeta function ζ(3/2); moreover, these authors gave a rigorous proof of the leading term. It is not known how to generalise those workings, which are based on Theorem t1, beyond m = 1. However, our Theorem t1 at least allows us to establish numerical estimates: e.g. fitting an exponential decay in n for m = 2 suggests a decay rate, for n even, larger than in the m = 1 case. We note that this is in agreement with the expectation that p_{n,0} decreases with increasing m.
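In the same spirit as the numerical estimates just mentioned, the decay of p_{n,0} can be probed by simulation. The sketch below assumes a pure exponential decay in n when extracting a rate, which is only the conjectured leading behaviour; the sizes and trial counts are arbitrary, and for larger n the estimates become noisy as the event grows rare.

```python
import numpy as np

rng = np.random.default_rng(11)

def p_no_real(n, m, trials=20000):
    """Monte Carlo estimate of p_{n,0}, the probability that a product
    of m real Gaussian matrices has no real eigenvalue (n even)."""
    hits = 0
    for _ in range(trials):
        P = np.eye(n)
        for _ in range(m):
            P = P @ rng.standard_normal((n, n))
        if not np.any(np.linalg.eigvals(P).imag == 0):
            hits += 1
    return hits / trials

ns = np.array([2, 4, 6, 8])
ps = np.array([p_no_real(n, m=2) for n in ns])
theta = -np.polyfit(ns, np.log(ps), 1)[0]  # assumed: log p_{n,0} ~ -theta * n
print(ps, theta)
```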
The Pfaffian formulae of Proposition u1 for the generalised partition function, combined with the simplification inherent in the use of skew-orthogonal polynomials, allow the k-point correlations to be expressed in the form ([k0]), with entries given in terms of the skew-orthogonal polynomials. While ([k0]) refers to the real-to-real eigenvalue correlations, the same structure remains true for the general correlation functions. In fact, the entries of the correlation kernel also have the same structure; see e.g. Sections 4.5 and 4.6 of the reference. Let ℂ_+ denote the upper complex half-plane, and specify w_r and w_c according to ([c1a]) and ([l3a]). Define
\[
\begin{aligned}
\tau_j(\mu)&=\begin{cases}\displaystyle-\frac12\int_{-\infty}^{\infty}\operatorname{sgn}(x-y)\,q_j(y)\,dy, & \mu=x\in\mathbb R,\\[.7em]
i\,q_j(x+iy), & \mu=x+iy\in\mathbb C_{+},\end{cases}\\[.7em]
\epsilon(\mu,\eta)&=\begin{cases}\tfrac12\operatorname{sgn}(\mu-\eta), & \mu,\eta\in\mathbb R,\\[.2em]
0, & \text{otherwise}.\end{cases}
\end{aligned}
\]
In this notation, the entries of the correlation kernel ([k0]), both for the correlation between real eigenvalues only and for the correlation between complex conjugate pairs of eigenvalues, are given by ([ab]) and ([ab1]). For n odd these expressions require modification; see e.g. the cited works. For efficiency of presentation we will restrict attention to the even case.

Our main interest in Sections [sec:c-eigenvalues] and [sec:r-eigenvalues] will be the spectral densities (one-point correlation functions) and quantities derivable from them. For this reason we focus on the complex-to-complex and the real-to-real eigenvalue correlations, but real-to-complex correlations can be treated in a similar manner.

We see from ([ab]) and ([ab1]) that, in the case of the correlation between complex eigenvalues, up to factors involving the weight all the quantities are polynomials, and the kernel entries are related to a single pre-kernel. Thus it suffices to consider the pre-kernel with both arguments in ℂ_+. For this, ([ab]) and ([ab1]) give a double-sum formula which, upon use of the skew-orthogonal polynomials of Proposition prop:poly, simplifies to a single sum.

We are typically interested either in a global scaling regime (where the eigenvalues are concentrated within a region with compact support) or in local scaling regimes (where the eigenvalue interspacing is of order unity). For simplicity, let us focus on the one-point function (i.e. the spectral density), which for complex eigenvalues is given by the diagonal of the kernel. The global scaling regime for the spectral density is known from free probability:
\[
\rho^{\rm global}(z)=\frac{1}{m\pi}\,|z|^{2/m-2}\quad\text{for }|z|\le1,
\]
and zero otherwise. This holds because the full spectral density (i.e. including complex as well as real eigenvalues) is dominated by the complex spectrum in the global scaling regime. We note that there also exists a global scaling regime for the real spectrum, albeit sub-dominant; we will return to this limit in Section [sec:r-eigenvalues]. On the local scale, the region near the origin is of greatest interest, since it gives rise to new types of scalings (i.e. different from the ordinary Ginibre case). The local density near the origin is given by a scaling limit of the finite-n pre-kernel. We recall from Section [sec:jpdf] that the weight function has an explicit and concise expression for m ≤ 2 but not for larger m; for m = 2 the Meijer G-function in ([c1a]) evaluates as a modified Bessel function. Combining this with the weight functions from Section [sec:jpdf] reproduces known formulae for the density (the m = 2 case having been given previously).

In this section we focus on the part of the spectrum which is located on the real axis.
Similarly to the complex spectrum described above, all correlations may be expressed in terms of a pre-kernel. We see from ([ab]) and ([ab1]) that the real-to-real kernel entries follow from a single pre-kernel S(x, y), which produces the correlation functions by insertion in ([k0]). We note that the relations between the pre-kernel and the remaining entries are more complicated for the real-to-real correlations than for the complex-to-complex correlations, where the entries are related by the simple identities above. On the other hand, the weight functions are simpler in the real case than in the complex case. Using ([ab]) and ([ab1]) and the skew-orthogonal polynomials (Proposition prop:poly), we write the pre-kernel as a single sum against the weight w_r ([sxy]).

For m = 1 (i.e. the ordinary Ginibre ensemble), the sum may be rewritten as an incomplete gamma function times an exponential, and the integral over y can be performed, which yields a closed form. This formulation of the pre-kernel is extremely useful in the study of large-n asymptotics. Unfortunately there is no direct generalisation of this result to m ≥ 2, which makes asymptotic analysis more challenging. However, it is possible to perform the integral over y in ([sxy]) for arbitrary m. To do so, we rewrite ([sxy]) term by term; standard identities for the Meijer G-function then give an evaluation of the required partial integrals, equal for j even to
\[
y^{1+j}\;G^{\,m,1}_{\,1,\,m+1}\Big(\begin{smallmatrix}-(j-1)/2\\[2pt] 0,\dots,0,\,-(j+1)/2\end{smallmatrix}\,\Big|\;\frac{y^{2}}{2^{m}}\Big),
\]
with a companion evaluation for j odd. This form of the pre-kernel is useful if we are interested in the expected number of real eigenvalues. We recall that the expected number of real eigenvalues, E_n, can be found by integration over the real spectral density. It follows that E_n is expressed through a quantity which is precisely the same as that appearing in the earlier study, and which evaluates in terms of the Meijer G-function ([aa]); in the case where the first index is even and the second odd, we use the antisymmetry property. This gives a formula for E_n in which the ceiling and floor functions appear. We recall that the formulae above assume n even (for n odd the expression is altered by the addition of unity).

As already mentioned, an evaluation of the Meijer G-function ([aa]) in terms of arithmetic constants is only known for m ≤ 2; consequently the same holds for E_n. The m = 1 case has been known since the mid-nineties, while the m = 2 case is evaluated using the result of Kumar; the results for small n are tabulated in Table [table:e(real)]. As anticipated, the table reveals that the expected number of real eigenvalues is consistently larger for m = 2 than for m = 1. For m ≥ 3, a computation of the expectation value requires numerical evaluation of the Meijer G-functions. The expected number of real eigenvalues can, of course, also be obtained from the probabilities given by Theorem t1; in fact, for small n it follows immediately from Table [table:k-real]: e.g. for n = 2 that table gives p_{2,2} = π/4, so the expected number is 2 · π/4 = π/2, which agrees with Table [table:e(real)].

Table [table:e(real)]: expected number of real eigenvalues for m = 2.
\[
\begin{array}{l|cccccc}
m=2 & n=2 & n=3 & n=4 & n=5 & n=6 & n=7\\ \hline\hline
\text{exact} & \frac12\pi & 1+\frac{5}{16}\pi & \frac{755}{1024}\pi & 1+\frac{4185}{8192}\pi & \frac{3821355}{4194304}\pi & 1+\frac{22392747}{33554432}\pi\\[2pt]
\text{approx.} & 1.5708 & 1.9817 & 2.3163 & 2.6049 & 2.8622 & 3.0966\\
\text{simul.} & 1.5704 & 1.9813 & 2.3168 & 2.6030 & 2.8607 & 3.0948\\
(\mathbb E[\#\text{reals}]_{m=1}) & (1.4142) & (1.7071) & (1.9445) & (2.1490) & (2.3312) & (2.4971)
\end{array}
\]
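The "simul." row of the table can be reproduced with a few lines; for m = 2, n = 2 the exact value π/2 ≈ 1.5708 provides the check. The trial count below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

def mean_real_eigs(n, m=2, trials=50000):
    """Monte Carlo estimate of the expected number of real eigenvalues
    of a product of m independent n x n real Gaussian matrices."""
    total = 0
    for _ in range(trials):
        P = np.eye(n)
        for _ in range(m):
            P = P @ rng.standard_normal((n, n))
        total += int(np.sum(np.linalg.eigvals(P).imag == 0))
    return total / trials

print(mean_real_eigs(2))  # exact value for m = 2, n = 2 is pi/2 ~ 1.5708
```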
Let us return to the pre-kernel and consider large-n asymptotics for the real spectral density. Similarly to Section [sec:c-eigenvalues], we focus on the local density near the origin and on the global density. Using the above, it is immediately seen that the local scaling regime near the origin gives ([r-local]), announced in Theorem t2. Compared to the same result for the complex density, the real density has the advantage that the weight function has a known expression as a Meijer G-function for all m, while the complex weight does not. We note that for m = 1 the Meijer G-functions are all simple exponentials; this allows the integration over y and confirms that the local spectral density is constant for m = 1. Moreover, for m = 1 the corresponding k-point correlation takes on the explicit form ([k1]), as obtained previously. We remark that it has been argued by Beenakker and co-workers that the statistical state implied by ([k1]) is realised by the level crossings of so-called Majorana zero modes for a disordered semiconducting wire at a Josephson junction, in a weak magnetic field. And this same correlation kernel appears in the seemingly unrelated problem of the annihilation process in the large-time limit.

A study of the global scaling regime for the real spectrum is more challenging: unlike for the complex spectral density (Section [sec:c-eigenvalues]), we have no help from free probability. A qualified guess for this spectral density might be obtained by looking at the m-th power of a real Ginibre matrix rather than at the product of m independent matrices. It is immediate that the m-th power and the m-fold product share the same complex macroscopic spectral density; thus, assuming that this extends to the real spectrum, we expect the scaled density of real eigenvalues to take the power-law form ([r-global]), proportional to |x|^{1/m−1} on (−1, 1). For m = 1 the density is well known, and a verification follows by using known asymptotics of the incomplete gamma functions. Moreover, we see that the real spectrum develops a non-integrable singularity at the origin as m tends to infinity, similarly to the complex case, as we would expect. For m ≥ 2 we have no rigorous derivation of this form, but it is supported by (i) a heuristic saddle point analysis and (ii) numerical data.

Let us first look at the saddle point analysis, which takes ([sxy]) as its starting point. The first step is to introduce an approximation for the sum therein: we know from the appendix of the earlier work that the summand is well approximated in the relevant range while being exponentially suppressed in n outside it. An approximation for the weight function is known from the literature on special functions. We insert these approximations and want to evaluate the integral over y using a saddle point approximation. Note that there are two maxima of the integrand, symmetrically distributed around a central point at which the integrand vanishes; these two maxima tend to this point from the left and from the right, respectively, as n tends to infinity. Thus we will use an ansatz for our saddle points in which the correction term is sub-dominant in n.
With this ansatz, and expanding to lowest order, the saddle points are found explicitly. Evaluation at either of these saddle points yields the conjectured form up to a normalisation.

Finally, let us compare the conjectured density with a simulation of the random matrix product. Figure [fig:histogram] shows the visual similarity between the density for m = 2 and numerical data stemming from a simulation of matrix products. It should be noted that convergence is expected to be exponentially fast in the bulk but considerably slower near the edges. Similar numerical tests have been performed for larger m, and it has been verified that the difference between the analytic formula and the numerical data decreases with increasing n. Furthermore, we expect that the real global density is universal, in the sense that the Gaussian entries may be replaced by other independent variables under suitable assumptions on their moments. This type of universality is known to hold for the complex spectra, and the expectation that such results extend to the real spectra is strengthened by numerical comparison with data generated from random sign (±1) matrices. Although it seems a very natural problem, this type of universality for the real global spectrum has received little attention in the literature; this is true even for the classical Ginibre ensemble (m = 1).

[Figure fig:histogram: the conjectured global density of real eigenvalues for m = 2, compared with a histogram of the real eigenvalues from realisations of a product of two real asymmetric Gaussian matrices.]
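The comparison in Figure fig:histogram can be reproduced along the following lines. The conjectured profile is taken here as \( |x|^{1/m-1}/(2m) \) on (−1, 1) after scaling the eigenvalues by \( n^{m/2} \); this normalisation is our reading of the conjectured form, and the bin counts, sizes and trial numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, trials = 50, 2, 400
reals = []
for _ in range(trials):
    P = np.eye(n)
    for _ in range(m):
        P = P @ rng.standard_normal((n, n))
    lam = np.linalg.eigvals(P)
    reals.extend(lam[lam.imag == 0].real / n ** (m / 2))  # global scaling

hist, edges = np.histogram(reals, bins=40, range=(-1, 1), density=True)
centers = (edges[:-1] + edges[1:]) / 2
conjectured = np.abs(centers) ** (1 / m - 1) / (2 * m)
print(np.c_[centers, hist, conjectured][:5])  # empirical vs conjectured
```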
A generalisation to the case of rectangular matrices is also available, and we briefly treat it here. The main idea when dealing with a product of rectangular random matrices is to reformulate the problem as a product of square random matrices with the same eigenvalue properties; this is possible due to a general reduction procedure presented previously (see also Section 2.4 there). After this reformulation the approach is similar to that of the previous sections, because Proposition u1 as well as the formulae ([s5]) and ([s6]) are completely general. Due to this similarity, we will only sketch the main ideas here.

We consider a product matrix P_m = X_m ⋯ X_1, where each matrix X_i has dimensions (n + ν_i) × (n + ν_{i−1}), with the ν_i denoting non-negative integers such that ν_0 = ν_m = 0. Here the constraint is introduced to ensure that the product matrix is square and has non-trivial eigenvalues. We note that if the outer indices are equal but positive, while the smallest matrix dimension is still n, then there will be eigenvalues which are trivially equal to zero (and therefore real), but the joint pdf otherwise remains the same except for an obvious change in normalisation; consequently, all formulae given below may effortlessly be extended to that case if desired.

The generalisation of the probabilities p^ν_{n,n} for a purely real spectrum has already appeared in the thesis (Prop. 4.29 there). They are given by
\[
p^{\nu}_{n,n}=\begin{cases}\det\big[[\,a^{\nu}_{j,k}\,]^{\,j=1,\dots,n/2}_{\,k=1,\dots,n/2}\big], & n\ \text{even},\\[.7em]
\det\big[[\,a^{\nu}_{j,k}\,]^{\,j=1,\dots,(n+1)/2}_{\,k=1,\dots,(n-1)/2}\ \ \ [\,\tilde a^{\nu}_{j}\,]_{\,j=1,\dots,(n+1)/2}\big], & n\ \text{odd},\end{cases}
\]
with the entries a^ν_{j,k} and ã^ν_j given by Meijer G-function integrals analogous to ([aa]). These formulae allow us to make some straightforward generalisations of the exact expressions presented by Kumar in the case m = 2. Following that approach, the next step would be to rewrite gamma functions with non-integer argument using the Gauss duplication formula. The right-hand side then evaluates, for n even, as a rational constant times a power of π, and for n odd as a simple rational constant (depending on both n and ν). This difference in the power of π for even and odd n has a remarkable consequence: for n even the probabilities p^ν_{n,n} are given as a rational number times a power of π, but for n odd these constants are simple rational numbers (i.e. there are no powers of π). The probabilities of a purely real spectrum are tabulated in Table [table:rect-1] for small values of n and ν.
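The Gauss duplication step referred to above is the classical Legendre duplication formula, \( \Gamma(z)\,\Gamma(z+\tfrac12)=2^{1-2z}\sqrt{\pi}\,\Gamma(2z) \); a one-line numerical confirmation:

```python
from mpmath import mp, gamma, sqrt, pi

z = mp.mpf('0.75')  # any z with non-integer 2z illustrates the point
lhs = gamma(z) * gamma(z + mp.mpf('0.5'))
rhs = 2**(1 - 2*z) * sqrt(pi) * gamma(2*z)
print(lhs, rhs)  # agree to working precision
```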
As we have seen in the previous sections, to extend the probabilities for a purely real spectrum to the probabilities p^ν_{n,k}, we need a formula for the joint pdf of the eigenvalues and a formula for the skew-orthogonal polynomials, i.e. generalisations of Theorem t3 and Proposition prop:skew. Given such generalisations, the rest of the results presented in the previous sections may be extended as well, due to the generality of Proposition u1.

Proposition prop:jpdf-rect. Given a Gaussian product matrix of the above rectangular type with k real eigenvalues λ_1, ..., λ_k and (n−k)/2 complex conjugate pairs x_j ± i y_j, the joint pdf for these eigenvalues is given by
\[
\frac{1}{C^{\nu}}\,\big|\Delta\big(\{\lambda_l\}_{l=1}^{k}\cup\{x_j\pm iy_j\}_{j=1}^{(n-k)/2}\big)\big|\,\prod_{j=1}^{k}w^{\nu}_{r}(\lambda_j)\prod_{j=k+1}^{(n+k)/2}w^{\nu}_{c}(x_j,y_j),
\]
where w^ν_r is defined by an m-fold integral with the constraint δ(λ − λ^{(1)} ⋯ λ^{(m)}) and ν-dependent weight factors, w^ν_c is defined analogously through a matrix integral with the constraint δ(G − G^{(1)} ⋯ G^{(m)}), and the normalisation C^ν is given explicitly. The proof follows the same lines as the proof of Theorem t3: we use a generalised real Schur decomposition to obtain an expression for the joint pdf in terms of real eigenvalues and 2 × 2 matrices, and finally change variables from the general 2 × 2 matrix to the standard form by an orthogonal similarity transformation, introducing the singular values.

Proposition prop:poly (rectangular case). For the skew-product defined in accordance with the joint pdf given by Proposition prop:jpdf-rect, the monomial even polynomials and the corresponding odd polynomials form a skew-orthogonal set, with an explicit normalisation. For a product of square matrices we found the skew-orthogonal polynomials by exploiting the fact that entries taken from different rows and columns are uncorrelated. This property is still true for rectangular matrices; thus the skew-orthogonal polynomials are obtained by following exactly the same steps. Likewise for the normalisation, where we evaluate the generalised partition function at u = v = 1.

With these two propositions established, it is straightforward to extend the rest of our results from square to rectangular matrices. In particular, we have that the probability of finding exactly k real eigenvalues is given, for n even, by
\[
p^{\nu}_{n,k}=[\zeta^{k}]\,\det\big[b^{\nu}_{j,l}(\zeta)\big]_{j,l=1,\dots,n/2},
\]
while for n odd
\[
p^{\nu}_{n,k}=[\zeta^{k}]\,\det\Big[\big[b^{\nu}_{j,l}(\zeta)\big]^{\,j=1,\dots,(n+1)/2}_{\,l=1,\dots,(n-1)/2}\ \ \big[\tilde a^{\nu}_{j}\big]_{\,j=1,\dots,(n+1)/2}\Big].
\]
Here we have defined b^ν_{j,l}(ζ) in terms of ζ and the quantities a^ν_{j,l} and ã^ν_j given above. The similarity with Theorem t1 is immediate. Moreover, the local densities at the origin are given by Meijer G-function expressions both for the complex eigenvalues and for the real eigenvalues, generalising the square-matrix formulas. The generalised formulae follow from the derivations in Sections [sec:c-eigenvalues] and [sec:r-eigenvalues], now using the weights and polynomials from Propositions prop:jpdf-rect and prop:poly. The global densities remain unaltered as long as the ν_i are kept fixed in the large-n limit.

We would like to thank Mario Kieburg and Oleg Zaboronski for comments on this manuscript. Remark mario-remark was given to us by Mario Kieburg. The work of PJF was supported by the Australian Research Council grant DP140102613, and that of JRI by the ARC Centre of Excellence for Mathematical and Statistical Frontiers.

C. W. J. Beenakker, J. M. Edge, J. P. Dahlhaus, D. I. Pikulin, Shuo Mi, and M. Wimmer, Wigner-Poisson statistics of topological transitions in a Josephson junction, Phys. Rev. Lett. 111 (2013), 037001 [arXiv:1305.2924].
Random matrices formed from i.i.d. standard real Gaussian entries have the feature that the expected number of real eigenvalues is non-zero. This property persists for products of such matrices, independently chosen, and moreover it is known that as the number of matrices in the product tends to infinity, the probability that all eigenvalues are real tends to unity. We quantify the distribution of the number of real eigenvalues for products of finite-size real Gaussian matrices by giving an explicit Pfaffian formula for the probability that there are exactly k real eigenvalues, as a determinant with entries involving particular Meijer G-functions. We also compute the explicit form of the Pfaffian correlation kernel for the correlation between real eigenvalues, and for the correlation between complex eigenvalues. The simplest example of these, the eigenvalue density of the real eigenvalues, gives by integration the expected number of real eigenvalues. Our ability to perform these calculations relies on the construction of certain skew-orthogonal polynomials in the complex plane, the computation of which is carried out using their relationship to particular random matrix averages.

Department of Mathematics and Statistics, ARC Centre of Excellence for Mathematical and Statistical Frontiers, The University of Melbourne, Victoria 3010, Australia
The Aharonov-Bergmann-Lebowitz (ABL) rule was proposed in 1964 to compute measurement result probabilities for systems that are both preselected and postselected. It was meant to provide a time-reversal invariant formulation of nonrelativistic quantum mechanics. As long as a measurement is actually carried out between pre- and postselection, the ABL rule is a straightforward consequence of the Born probability rule and of the usual assumptions on the quantum state following a measurement. Attempts have been made, however, to interpret the ABL rule counterfactually, that is, to apply it to cases where no intermediate measurement is made. The resulting controversy is, I believe, partly due to the fact that proposed definitions of counterfactuals in the present quantum-mechanical context do not adequately capture the intuitive meaning of the notion.

In this paper I first review the derivation of the ABL rule and some of its properties, in particular contextuality. Next I analyze alleged proofs of the impossibility of interpreting the rule counterfactually, as well as various arguments supporting or opposing them. I attempt to clarify the meaning of a counterfactual interpretation, and point out circumstances in which the counterfactual assertion of the ABL rule is or is not correct.

Consider a quantum system S and three observables A, B and C pertaining to it. For simplicity, we shall first assume that these observables are discrete and nondegenerate. Suppose that at time t_1, S is prepared in an eigenstate |a⟩ of A. At t_2, the observable C is measured. According to the Born rule, the probability of obtaining result c_j (where {c_j} is the set of eigenvalues of C) is given by |⟨c_j|a⟩|². Suppose the result c_j is indeed obtained upon measurement at t_2. We assume the measurement interaction is such that immediately after t_2, the system is in state |c_j⟩. At a later time t_3, the observable B is measured. The probability of obtaining result b_k is then given by |⟨b_k|c_j⟩|². Hence the probability of result c_j and result b_k, conditional on preparation in |a⟩, is equal to
\[
P(c_j\wedge b_k|a)=|\langle b_k|c_j\rangle|^{2}\,|\langle c_j|a\rangle|^{2}. \tag{joint}
\]
We are interested here in the probability of obtaining c_j at t_2, on the condition that S is prepared in |a⟩ at t_1 and found in |b_k⟩ at t_3. This we shall denote by P(c_j|a, b_k). From the definition of conditional probability, we have (for P(b_k|a) ≠ 0)
\[
P(c_j|a,b_k)=\frac{P(c_j\wedge b_k|a)}{P(b_k|a)},
\]
where P(b_k|a) is the total probability of b_k (given the preparation), equal to the sum of P(c_l ∧ b_k|a) over all possible results c_l. From ([joint]) we obtain
\[
P(c_j|a,b_k)=\frac{{\rm Tr}\,(P_{b_k}P_{c_j}P_{a}P_{c_j})}{\sum_{l}{\rm Tr}\,(P_{b_k}P_{c_l}P_{a}P_{c_l})}. \tag{abl}
\]
This is the ABL rule. On the right-hand side, P_a and P_{b_k} are projectors on the states |a⟩ and |b_k⟩. The operator P_{c_j} projects on |c_j⟩ or, if the eigenvalue c_j is degenerate, on the associated subspace, the sum in the denominator then being carried out over these subspaces.
The ABL rule (expressed in terms of projection operators) also holds in the degenerate case, provided the intermediate measurement obeys the Lüders rule |ψ⟩ → P_{c_j}|ψ⟩/‖P_{c_j}|ψ⟩‖, the denominator being a normalization constant. The ABL rule can also be written for multiple intermediate measurements, or for selection by means of density matrices rather than pure states, but we won't need these generalizations here.

A much discussed property of the ABL rule is its contextuality. Let |1⟩, |2⟩ and |3⟩ be an orthonormal basis in a three-dimensional Hilbert space. Each of these kets can represent a state wherein a particle is in one of three disjoint boxes. Take the initial and final states as
\[
|a\rangle=\frac{1}{\sqrt3}\big(|1\rangle+|2\rangle+|3\rangle\big), \tag{ar}
\]
\[
|b\rangle=\frac{1}{\sqrt3}\big(|1\rangle+|2\rangle-|3\rangle\big). \tag{br}
\]
Let the intermediate observable C be so chosen as to have |1⟩ as one of its eigenvectors, corresponding to a nondegenerate eigenvalue γ_1. The ABL rule then yields a probability P(γ_1|a, b) whose denominator runs over the remaining spectral projectors of C. It turns out that the right-hand side of this equation is not well defined unless C itself is well defined. Let γ_1, γ_2 and γ_3 be three different real numbers and define
\[
C=\gamma_1|1\rangle\langle1|+\gamma_2|2\rangle\langle2|+\gamma_3|3\rangle\langle3|. \tag{obsc}
\]
Elementary algebra shows that P(γ_1|a, b) = 1/3. But if instead γ_2 = γ_3, as in
\[
\tilde C=\gamma_1|1\rangle\langle1|+\gamma_2\big(|2\rangle\langle2|+|3\rangle\langle3|\big), \tag{obsc1}
\]
then P(γ_1|a, b) = 1. Hence the ABL probability of γ_1 depends not only on the eigenspace associated with that eigenvalue, but also on the structure of the observable in the orthogonal eigenspace. This is contextuality.

In terms of boxes, this means the following. If an observable distinguishes the first box from the other two taken together, then the ABL probability of being in the first box is 1. If another observable distinguishes the three boxes one from another, the ABL probability of being in the first box is 1/3. It is easy to check that the ABL probability of being in the first box is 1 if the intermediate observable singles out box 1 ([pa]), and likewise that the probability of being in the second box is 1 if it singles out box 2 ([pb]). From ([pa]) and ([pb]), Albert, Aharonov and D'Amato have argued that noncommuting observables must be simultaneously well defined at t_2. This conclusion hinges on a counterfactual use of the ABL rule; it was indeed intended to apply specifically to the case where neither observable is measured between pre- and postselection.
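The contextuality just described is easy to confirm numerically. A minimal sketch with the states ([ar]) and ([br]) follows; the function and variable names are of course our own.

```python
import numpy as np

e = np.eye(3)                                   # box states |1>, |2>, |3>
a = (e[0] + e[1] + e[2]) / np.sqrt(3)           # preselected state |a>
b = (e[0] + e[1] - e[2]) / np.sqrt(3)           # postselected state |b>

def abl(projectors, a, b):
    """ABL probabilities for an intermediate measurement whose
    outcomes correspond to the given orthogonal projectors."""
    amps = np.array([abs(b @ P @ a)**2 for P in projectors])
    return amps / amps.sum()

fine   = [np.outer(e[i], e[i]) for i in range(3)]        # gamma_2 != gamma_3
coarse = [np.outer(e[0], e[0]),
          np.outer(e[1], e[1]) + np.outer(e[2], e[2])]   # gamma_2 == gamma_3

print(abl(fine, a, b))    # [1/3, 1/3, 1/3]
print(abl(coarse, a, b))  # [1., 0.] -> box 1 with certainty
```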
Can the ABL rule be interpreted counterfactually? That question was answered in the negative by Sharp and Shanks, Cohen, and Miller, and much debated afterwards. The crux of the Sharp and Shanks argument (as well as of others) can be stated rather succinctly. Let b_1 and b_2 be the possible final states of a two-state system. Suppose C is not measured, and assume that the ABL rule correctly gives the probability of the nondegenerate result c_1, conditional on pre- and postselection, had C been measured. The total probability of c_1 should then be given as a weighted sum over the possible final states, that is,
\[
P(c_1|a)=\sum_{j}P(b_j|a)\,P(c_1|a,b_j). \tag{ss}
\]
Here P(b_j|a) is the probability of the final state b_j when no intermediate measurement is made, and P(c_1|a, b_j) is the conditional probability of c_1 based on the counterfactual interpretation of the ABL rule. But according to standard quantum mechanics, P(b_j|a) is given by |⟨b_j|a⟩|². Hence we should have
\[
|\langle c_1|a\rangle|^{2}=\sum_{j}|\langle b_j|a\rangle|^{2}\,P(c_1|a,b_j).
\]
Since this is not true in general (counterexamples are easily found), Sharp and Shanks conclude that the counterfactual use is invalid.

The validity of the counterfactual use and the relevance of the Sharp and Shanks argument were debated between Vaidman and Kastner. Vaidman's objection to the proof consists in pointing out that the weight in ([ss]) is the probability of b_j if no intermediate measurement occurs. Since we are asking for the total probability of result c_1, we must use the expression for the probability of b_j if C is measured at t_2. That probability is given by Σ_l Tr(P_{b_j}P_{c_l}P_aP_{c_l}). But then Eq. ([ss]) becomes, writing b and b̄ for the two final states,
\[
\begin{aligned}
P(c_1|a)&=\Big[\sum_{j}{\rm Tr}\,(P_{b}P_{c_j}P_{a}P_{c_j})\Big]P(c_1|a,b)+\Big[\sum_{j}{\rm Tr}\,(P_{\bar b}P_{c_j}P_{a}P_{c_j})\Big]P(c_1|a,\bar b)\\
&={\rm Tr}\,(P_{b}P_{c_1}P_{a}P_{c_1})+{\rm Tr}\,(P_{\bar b}P_{c_1}P_{a}P_{c_1})\\
&={\rm Tr}\,(P_{c_1}P_{a}P_{c_1})\\
&=|\langle c_1|a\rangle|^{2},
\end{aligned} \tag{ss1}
\]
in accordance with standard quantum mechanics. The second equality follows from the ABL rule ([abl]), and the third from P_b + P_{b̄} = 1.

The significance of this calculation of P(c_1|a) is best brought out by quoting Vaidman's definition of the counterfactual meaning of the ABL rule: if a measurement of an observable C were performed at time t_2, then the probability for c_j would equal P(c_j|a, b_k), provided that the results of measurements performed on the system at times t_1 and t_3 are fixed.

This is a statement about the statistical distribution of results of unperformed experiments, made on the basis of a law derived from a large number of performed experiments identical to the former in all relevant respects. It is like saying, on the basis of numerous tosses of a die yielding essentially uniform outcomes, that had the die been tossed one additional time, the probability of obtaining "3" would have been 1/6. Both this statement and Vaidman's definition express the regularity of nature. Both assert, on the basis of a general rule drawn from numerous experimental trials, that if the experiment were done over again in the same relevant conditions, the results would fall under the same general rule. Hence Vaidman's statement is (presumably) true, genuinely counterfactual, but not particularly illuminating, at least as far as specificities of quantum mechanics are concerned.

In contrast with Vaidman, Kastner defended the Sharp and Shanks proof and argued that a nontrivial counterfactual assertion of the ABL rule (i.e. one that provides information about specific systems) is false. Kastner's analysis draws from both Goodman's and Lewis's theories of counterfactuals. Her argument can be summarized as follows.
A counterfactual (read "if it were that P, it would be that Q") is true if P is not true and the conjunction of P with laws of nature L and suitable background conditions B implies Q, that is,
\[
P\wedge L\wedge B\;\Rightarrow\;Q. \tag{pls}
\]
Now clearly, there must be restrictions on B, for if B includes the statement "not P", implication ([pls]) will hold trivially. These restrictions usually state that B should be "cotenable" with P, a notion not so easy to define but essentially meaning that B is independent of the truth of P. Suppose that in the real world, preparation in |a⟩ is followed by postselection in |b_k⟩, with no intermediate measurement. Then had there been an intermediate measurement, the intermediate state would have changed, and the final measurement result b_k could no longer be expected to obtain. In other words, result b_k is not cotenable with measuring C.

Does it follow from this argument that the nontrivial counterfactual assertion of the ABL rule is false? The answer is affirmative if the background conditions must include the result b_k. But this is not necessarily the case. The background conditions can be construed as encompassing everything that characterizes the state of the system at t_2, including whatever might induce it to yield result b_k upon measurement of B at t_3. With such a definition, the result itself is not part of the background conditions. And although cotenability might be analyzed in that context, I shall use a different approach.

I submit that the proper way of investigating the counterfactual validity of the ABL rule is to enquire about the "true value" of C at t_2. Strict empiricists will no doubt quit reading right here, since it is absolutely impossible, in a situation where C has not been measured between t_1 and t_3, to empirically ascertain what the true value of C was at the intermediate time. But that doesn't prevent different theories or interpretations from making claims about what the true value is, as we will presently see.

It is instructive to analyze the Sharp and Shanks argument in terms of true values. If, following standard quantum mechanics, P(c_1|a, b_j) is given by the ABL expression, then the first equality in ([ss1]) holds identically. But Sharp and Shanks claim that the counterfactual meaning of the ABL rule is encapsulated in Eq. ([ss]). It is easy to see that ([ss]) and ([ss1]) will coincide if
\[
|\langle b|a\rangle|^{2}=\sum_{l}{\rm Tr}\,(P_{b}P_{c_l}P_{a}P_{c_l}), \tag{bcac}
\]
together with a similar equation with b replaced by b̄. In quantum mechanics, Eq. ([bcac]) does not hold in general. The left-hand side of ([bcac]) is the uncontroversial probability of b when no intermediate measurement is performed, while the right-hand side is the uncontroversial probability of b when C is measured at the intermediate time. That the equality is false means that whichever true value C does or does not have at the intermediate time, it cannot be one that is simply revealed in, and unaffected by, an eventual measurement.

It was pointed out that ([bcac]) holds if the histories involved make up a consistent family. Indeed, in that case one can maintain that C has a well-defined value (equal to one of its eigenvalues) without running into contradictions. But of course one is not compelled to do so. Associating or not associating true values to observables belonging to a consistent family of histories contributes to defining the interpretation of the theory. It is instructive to recall the example of Sect. 2, with |a⟩ and |b⟩ defined as in Eqs.
([ar]) and ([br]). Let C and C̃ be defined as in ([obsc]) and ([obsc1]), and let h_1 and h_2 be the families of histories obtained by inserting at t_2 the projections associated with C̃ and with the analogous observable singling out box 2, respectively. It is easy to check that h_1 and h_2 make up distinct consistent families of histories. The ABL probability that the particle is in box 1 is one, and so is the ABL probability that it is in box 2. We won't run into contradictions if we maintain that the particle was surely in box 1 at t_2; likewise, we won't run into contradictions if we maintain that it was in box 2. Of course both statements cannot be held at once, since they are logically contradictory. Note that the family of histories more refined than either ([h1]) or ([h2]), obtained by inserting the projections on the three individual boxes, is not consistent.

The families built from the full set of eigenvalues of C, or of C̃, are always consistent. Hence it can be maintained that C has a true value at intermediate times, or that C̃ has a true value. We have assumed that ⟨b|a⟩ ≠ 0; therefore |a⟩ and |b⟩ are not orthogonal. So it is not a priori logically inconsistent to assume that A and B both have true values at intermediate times. (If P_{ab} denotes the projector on the subspace spanned by |a⟩ and |b⟩, then the corresponding family of histories is consistent.) It is not clear, however, whether a full-fledged interpretation can be implemented along these lines for arbitrary observables A and B.

The ABL rule is symmetric under permutation of preselection and postselection. This means that for the purpose of making probabilistic statements about intermediate measurement outcomes, the initial and final states |a⟩ and |b⟩ have exactly the same utility. This does not necessarily entail that they are equally useful for the purpose of making ontological statements. In von Neumann's measurement theory, for instance, a measurement is an interaction between a quantum system and an apparatus, followed by a collapse. From the time t_1 when the system is prepared in state |a⟩ to the time when an ontological statement is to be made, no physical action occurs on the system. Such is not the case, however, between t_2 and t_3. Indeed, a physical interaction of the quantum system with an apparatus has to occur sometime before t_3, for the system to collapse to |b⟩ at t_3. In that context, it may be more natural to hold that |a⟩, rather than |b⟩, is the correct state at t_2, and that a, rather than b, expresses a true value.

In some interpretations of quantum mechanics, true values of observables can be assigned outside the context of consistent families of histories. Their statistical distributions, however, will not obey the ABL rule. An example is Bohmian mechanics, where the true value of position is defined at any intermediate time, but where position histories do not in general make up a consistent family. In this context, the meaning of background conditions proposed at the end of the last section is particularly clear. Suppose that |a⟩ and |b⟩ correspond to one-dimensional wave functions ψ_a and ψ_b, with ψ_a a Gaussian and ψ_b a function uniform over some interval and zero elsewhere (i.e. the postselection postselects through a slit of finite width).
postselects through a slit of width ). all bohmian trajectories going through at have gone through some other interval at . hence the background conditions associated with preselection and postselection are the wave function at , together with the interval of true positions. of course, if position is measured at , the wave function will change accordingly, and so will the measurement result probabilities at . it was pointed out by vaidman that no hidden variable theory can reproduce the abl rule in all situations. indeed, take an ensemble of spin 1/2 particles prepared in the state . if there is no intermediate measurement, postselection in the state will in fact introduce no additional selection. hence if no backward causality is assumed, true values at must be the same whether postselection does or does not occur, irrespective of any hidden variables. but then the probability that a measurement at of the spin along yields must be equal to , which differs from the abl value. in the copenhagen interpretation of quantum mechanics (or at least in the most instrumentalistic versions of it), an observable has a value only when a measurement of indicates that value. this is also the case in mohrhoff's more recent interpretation, which incorporates the abl rule explicitly. the fact that the abl rule predicts a statistical distribution of values of implies, according to mohrhoff, an objective fuzziness of in the case where the experiment is not performed. i believe that statement is genuinely and nontrivially counterfactual. it asserts that no unmeasured observable of any individual system whatsoever has a true value in the interval between pre- and postselection. the noncounterfactual meaning of the abl rule is not controversial. the validity of the rule is then a straightforward consequence of standard quantum mechanics and usual hypotheses on the state of a quantum system immediately after measurement. the rule is also true counterfactually if it simply expresses the reproducibility of experiments. a counterfactual meaning more relevant to the specificities of quantum mechanics refers to the true value of an observable at an intermediate time, when no observable is actually measured between pre- and postselection. in general the abl rule is then counterfactually false. it can be true, however, when associated with a consistent family of histories or when asserting objective fuzziness or nonvaluedness of observables not being measured. although statements about true values between pre- and postselection may not have definite empirical meaning, they can fruitfully be viewed as contributing to define the interpretation of the quantum-mechanical theory. y. aharonov, p. g. bergmann and j. l. lebowitz, ``time symmetry in the quantum process of measurement,'' phys. rev. b 134, 1410-16 (1964). d. z. albert, y. aharonov and s. d'amato, ``curious new statistical predictions of quantum mechanics,'' phys. rev. lett. 54, 5-7 (1985). w. d. sharp and n. shanks, ``the rise and fall of time-symmetrized quantum mechanics,'' philos. sci. 60, 488-99 (1993). o. cohen, ``pre- and postselected quantum systems, counterfactual measurements, and consistent histories,'' phys. rev. a 51, 4373-80 (1995). d. j. miller, ``realism and time symmetry in quantum mechanics,'' phys. lett. a 222, 31-36 (1996). l. vaidman, ``time-symmetrized quantum theory,'' fortschr. phys. 46, 729-39 (1998).
l. vaidman, ``time-symmetrized counterfactuals in quantum theory,'' found. phys. 29, 755-65 (1999). l. vaidman, ``defending time-symmetrised quantum counterfactuals,'' stud. hist. philos. mod. phys. 30, 373-97 (1999). l. vaidman, ``the meaning of elements of reality and quantum counterfactuals: reply to kastner,'' found. phys. 29, 865-76 (1999). l. vaidman, ``discussion: time-symmetric quantum counterfactuals,'' `http://philsci-archive.pitt.edu/archive/00001108/` (2003). r. e. kastner, ``time-symmetrised quantum theory, counterfactuals and `advanced action','' stud. hist. philos. mod. phys. 30, 237-59 (1999). r. e. kastner, ``tsqt `elements of possibility'?'' stud. hist. philos. mod. phys. 30, 399-402 (1999). r. e. kastner, ``the three-box `paradox' and other reasons to reject the counterfactual usage of the abl rule,'' found. phys. 29, 851-63 (1999). r. e. kastner, ``the nature of the controversy over time-symmetric quantum counterfactuals,'' philos. sci. 70, 145-63 (2003). n. goodman, ``the problem of counterfactual conditionals,'' j. philos. 44, 113-28 (1947). d. lewis, _counterfactuals_ (harvard university press, cambridge, ma, 1973). p. horwich, _asymmetries in time_ (mit press, cambridge, ma, 1987), chap. l. marchildon, _quantum mechanics: from basic principles to numerical methods and applications_ (springer, berlin, 2002). j. von neumann, _mathematical foundations of quantum mechanics_ (princeton university press, princeton, 1955). y. aharonov and d. z. albert, ``the issue of retrodiction in bohm's theory,'' in _quantum implications_, b. j. hiley and f. d. peat, eds. (routledge and kegan paul, london, 1987). u. mohrhoff, ``what quantum mechanics is trying to tell us,'' am. j. phys. 68, 728-45 (2000). u. mohrhoff, ``objective probabilities, quantum counterfactuals, and the abl rule: a response to r. e. kastner,'' am. j. phys. 69, 864-73 (2001). r. e. kastner, ``comment on `what quantum mechanics is trying to tell us,' by ulrich mohrhoff,'' am. j. phys. 69, 860-63 (2001).
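the abl probabilities in the box example above can be checked numerically in a few lines. the sketch below is a minimal illustration assuming the standard three-box preselected and postselected states; these vectors are an assumption here, since eqs. ([ar]) and ([br]) are defined in an earlier section of the paper.

```python
# minimal numerical check of the abl rule for the three-box example.
# assumption: the standard three-box preselected and postselected states
# are used below; they are illustrative, since eqs. ([ar]) and ([br])
# are defined earlier in the paper and not reproduced here.
import numpy as np

def abl_probabilities(pre, post, projectors):
    """abl rule: p(a_j) = |<post|p_j|pre>|^2 / sum_k |<post|p_k|pre>|^2."""
    w = np.array([abs(post.conj() @ (p @ pre)) ** 2 for p in projectors])
    return w / w.sum()

pre = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)    # preselected state at t1
post = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)  # postselected state at t2

def in_box(i, dim=3):
    """two-outcome measurement: particle found in box i, or not."""
    p = np.zeros((dim, dim))
    p[i, i] = 1.0
    return [p, np.eye(dim) - p]

print(abl_probabilities(pre, post, in_box(0))[0])  # 1.0: surely in box 1
print(abl_probabilities(pre, post, in_box(1))[0])  # 1.0: surely in box 2
```

each two-outcome family separately yields probability one, while the refined family that asks which of the three boxes the particle occupies is not consistent, matching the discussion following ([h1]) and ([h2]).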
the aharonov-bergmann-lebowitz rule assigns probabilities to quantum measurement results at time on the condition that the system is prepared in a given way at and found in a given state at . the question of whether the rule can also be applied counterfactually, to the case where no measurement is performed at the intermediate time, has recently been the subject of controversy. i argue that the counterfactual meaning may be understood in terms of the true value of an observable at . such a value cannot be empirically determined, for, by stipulation, the measurement that would yield it is not performed. nevertheless, it may or may not be well-defined depending on one's proposed interpretation of quantum mechanics. various examples are discussed illustrating what can be asserted at the intermediate time without running into contradictions.
certain operations permitted in classical physics cannot be done perfectly in quantum mechanics. this is best exemplified by the celebrated no-cloning theorem, which forbids the creation of two perfect copies of an unknown quantum state. the no-cloning theorem is closely related to another no-go theorem stating that one cannot gain information on an unknown quantum state without disturbing it. namely, if this were possible, one would be able to prepare two approximate replicas of this state that would be better than the best ones allowed by quantum mechanics. therefore, in quantum mechanics any operation on an unknown quantum state giving some information on the state inevitably disturbs it and, in addition, the more information the operation extracts, the larger the disturbance. this fundamental property of quantum operations is reflected, in the plane of values of quantities quantifying the information gain and the state disturbance, by a certain optimal trade-off curve that cannot be overcome by any quantum operation. among all quantum operations, particularly interesting are those which lie on this curve, since they approximate an ideal disturbance-free measurement device to the best extent that quantum mechanics allows. these operations, conventionally denoted as minimal disturbance measurements (mdms), in general depend on the set of input states, their a priori distribution, and also on the quantities quantifying the information gain and the state disturbance. the most successful approach to finding the optimal trade-offs and the corresponding mdms proved to be the one based on quantifying the information gain by the mean estimation fidelity and the state disturbance by the mean output fidelity . using this approach it was possible to derive analytically the optimal trade-offs between and and to find the mdms for a single copy of a completely unknown pure state of a -level system, a completely unknown pure state of a -level system produced by independent phase shifts of some reference state, and a completely unknown maximally entangled state of two -level systems. besides, the first two above-mentioned mdms were also demonstrated experimentally for (qubit). the studies of mdms were not restricted to finite-dimensional quantum systems, and an mdm for a completely unknown coherent state was found and realized experimentally in . multi-copy mdms were introduced by banaszek and devetak, who considered a partial measurement on identical copies of a pure qubit state. they assumed mdms which output disturbed quantum copies of the state and a classical estimate, and they _numerically_ found the optimal fidelity trade-off for this scenario. mdms are not only of fundamental importance but can also be applied to increase the transmission fidelity of certain lossy and noisy channels. in this paper we further investigate the minimal disturbance measurement on several copies of the state. in contrast to ref. , we assume operations which output only a _single_ quantum copy of the input state. we derive _analytically_ the optimal trade-off between the mean estimation fidelity and the mean output fidelity for an ensemble of identical pure qubits, which is given by the formula . moreover, we also conjecture that the optimal fidelity trade-off for an ensemble of identical pure states of a -level system has the form . the paper is organized as follows. the general formalism allowing one to determine the mdm is presented in sec. [sec_1].
in sec .[ sec_qubits ] we find the optimal fidelity trade - off and the corresponding mdm for identical qubits . in sec .[ sec_2 ] we present a conjecture of the optimal fidelity trade - off for identical -level systems .finally , sec .[ sec_3 ] contains conclusions .let us investigate a general mdm for identical pure states of a -level system ( qudit ) .such states are represented by vectors in a -dimensional hilbert space with an orthonormal basis .the qudits form an orbit of the group of unitary matrices with determinant , , where , , is a unitary representation of on .we consider here quantum operations on identical qudits - and also yields a classical estimate of . without loss of generality , these estimates can be labeled by the elements of the group . note that the input hilbert space of the operation is the symmetric subspace of the hilbert space of qudits , and the output hilbert space is the space of a single qudit , .our task is to find an operation which exhibits the best possible performance in the following protocol . in each run of the protocol, the operation is applied on the quantum state ( [ nquditstate ] ) .we assume that is chosen randomly with uniform a priori distribution from the set of states .if the outcome is detected the operation produces a single qudit output state .this state is not normalized and its trace ] where .as shown in the optimal partial measurement can be assumed to be covariant which means that are generated from a single properly normalized operator , \,\chi_n^{(d ) } \,[u_d^{t \otimes n}(h ) \otimes u_d^\dagger(h ) ] .\label{chicovariant}\ ] ] the overall operation must be trace - preserving which imposes the constraint , d h = \openone_{\mathrm{in } } , \label{chinorm}\ ] ] where stands for the partial trace over the output single - qudit hilbert space and denotes the identity operator on the input space .the formula ( [ chinorm ] ) expresses the completeness of the measurement carried on the input state .the unitary representation of acts irreducibly on . for the covariant map ( [ chicovariant ] )the integral in eq .( [ chinorm ] ) can thus be evaluated with the help of schur s lemma and we get \openone_{\mathrm{in}} ] .the operator generating the optimal partial measurement is proportional to a rank - one projector and can be written as where is the eigenvector of a positive - semidefinite operator \ ] ] corresponding to its maximum eigenvalue . here\psi(g ) dg,\label{rf}\\ r^{(d)}_{g}&=&\mathrm{tr}_{\rm out}[r^{(d)}_f \openone_{\rm in } \otimes \psi(0)]\otimes\openone_{\rm out}\label{rg}.\end{aligned}\ ] ] using the map the fidelities and can be expressed as , \qquad g=\mathrm{tr}[\chi^{(d)}_{n}r^{(d)}_{g}].\ ] ] the operator can be easily evaluated using schur s lemma and after some algebra we arrive at where stands for the partial transposition with respect to the first qudits and is the projector onto the subspace . in what follows itis convenient to work with the occupation number basis that forms an orthonormal basis in the subspace . here is the symmetrization operator for qudits , the symbol stands for summation over all permutations of qudits and denotes the permutation operator of qudits ; the integers , , are the numbers of qudits in the states , that satisfy the constraint . 
making use of the occupation number basis the operator can be expressed as to find the desired mdm for identical qudits we have to diagonalize a large matrix .for a general this is a complex task which can be solved numerically .however , if we resort to the qubit case ( ) we can find the optimal fidelity trade - off and the mdm analytically .the obtained result then can be used to make at least a conjecture about the optimal fidelity trade - off for qudits .for qubits the operator ( [ pi ] ) reads as where is a completely symmetric state of qubits in which qubits are in the basis state and the remaining qubits are in the basis state .hence , making use of the formula and eq .( [ rfd ] ) one finds that .\end{aligned}\ ] ] further , substitution of the obtained expression into eq .( [ rg ] ) gives the operator in the form in order to determine the optimal we have to find the maximum eigenvalue and the corresponding eigenvector of the matrix .the matrix has a block diagonal structure with two one - dimensional blocks and two - dimensional blocks .the elements of the one - dimensional blocks are the eigenvalues and with the characteristic subspaces spanned by the basis vectors and , respectively .the two - dimensional blocks correspond to the invariant subspaces spanned by the basis vectors , and have the form , where the matrix possesses two eigenvalues from which one obtains the remaining eigenvalues of the matrix as .the larger eigenvalue is a decreasing function of attaining maximum for .obviously , . moreover , using in eq . ([ lambda ] ) the inequality following from the inequality one can show that also holds and therefore is the maximum eigenvalue of the matrix .the eigenvalue is non - degenerate and its eigenvector determining the optimal map reads as where and are nonnegative real numbers satisfying the condition . on inserting into eqs .( [ fg ] ) one arrives after some algebra at the optimal fidelities ,\label{qubitf}\\ g&=&\frac{1}{n+2}\left(n+\alpha^{2}\right)\label{qubitg}.\end{aligned}\ ] ] expressing now the parameters using eq .( [ qubitg ] ) and the normalization condition and substituting the obtained formulas into eq .( [ qubitf ] ) we finally obtain the optimal fidelity trade - off for identical qubits ( [ nqubit ] ) .the trade - off is depicted for several numbers of copies in fig .[ fig1 ] .the specific feature of the optimal map ( [ nqubitmap1 ] ) is that it can be rewritten as the following coherent superposition of two maps : where , and . 
the first map in the superposition is described by the vector and corresponds to the choice ( ) .since in this case the map apparently realizes optimal estimation of a qubit from identical copies .the second map is obtained by choosing ( ) and it is represented by the second vector on the right hand side of eq .( [ nqubitmap2 ] ) .it gives and which corresponds to optimal estimation of a qubit from copies while one copy is left intact by the map .interestingly , the fact that one can create a mdm as a coherent superposition of the two extreme maps is a general property of the mdms that is valid not only for the present case of qubits , but it holds also for a single phase covariant qudit , two maximally entangled qudits or a single completely unknown qudit when is the maximally entangled state of two qudits and satisfy the condition .thus although we are not able to solve analytically the above optimization task of finding the mdm for qudits for a general we can use the superposition principle together with eqs .( [ nqubitmap2 ] ) and ( [ quditmap ] ) to guess the desired optimal map to read where and in order to facilitate the following calculations we rearrange the terms on the right hand side of the map ( [ nquditmap1 ] ) and rewrite it in the form : where fulfill the condition and where we have used the short hand notation for a completely symmetric state of qudits containing qudits in the basis state and a single qudit in the basis state .the fidelities and for this map can be again calculated with the help of eq .( [ fg ] ) . substituting eq .( [ nquditmap2 ] ) into eq .( [ fg ] ) and taking into account the symmetry of the projector that implies the problem of finding and reduces to the calculation of the following scalar products where we have used the short hand notation for a completely symmetric state of qudits containing qudits in the basis state , a single qudit in the state and a single qudit in the state .the scalar products can be easily evaluated using eq .( [ occupation ] ) as , and .hence , one obtains ,\nonumber\\ g&=&\frac{1}{n+d}\left(n+\alpha^{2}\right).\end{aligned}\ ] ] eliminating now the parameters and from these equations using the same procedure as in the qubit case we arrive finally at the fidelity trade - off ( [ nqudit ] ) .although the found trade - off was not shown to be optimal here , there are several indications supporting our conjecture that it is really optimal .first , for we obtain using eqs .( [ quditfg ] ) and therefore these optimal fidelities satisfy our trade - off .second , by putting and one finds that and which means that also the second extreme case is fulfilled .finally , for the trade - off reduces to the optimal trade - off for identical qubits ( [ nqubit ] ) while for it boils down to the optimal trade - off for a single completely unknown qudit .the trade - off ( [ nqudit ] ) is depicted in fig .[ fig2 ] for and .in summary , in the present paper we have derived analytically the optimal trade - off between the mean estimation fidelity and the mean output fidelity for partial measurements on identical pure qubits .furthermore , based on the structure of the optimal map saturating the trade - off we have made a conjecture about the optimal fidelity trade - off for partial measurements on identical pure qudits .the obtained results provide an insight into the generic structure and properties of mdms .the optimal partial measurements saturate the fundamental bound on conversion of quantum information onto classical information and may 
thus find applications in quantum communication and information processing. the research has been supported by the research projects ``measurement and information in optics'' (msm 6198959213) and ``center of modern optics'' (lc06007) of the czech ministry of education. partial support by the secoqc project (ist-2002-506813) of the sixth framework programme of the eu is also acknowledged.
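as a quick numerical companion to the trade-off discussion, the two extreme operating points can be tabulated. the sketch below is a minimal illustration assuming the standard state-estimation result that the optimal mean estimation fidelity from n copies of a pure d-level state is (n + 1)/(n + d); that formula is quoted from the estimation literature, not derived in the text above, and the function names are ours.

```python
# extreme points of the fidelity trade-off for a partial measurement on
# n copies of a pure qudit. assumption: the optimal mean estimation
# fidelity from n copies of a pure d-level state is (n + 1)/(n + d),
# a standard state-estimation result quoted here, not derived above.

def f_estimate(n, d):
    """mean estimation fidelity of optimal estimation from n copies."""
    return (n + 1) / (n + d)

def extreme_points(n, d=2):
    # strategy 1: measure all n copies and reprepare the estimate,
    # so the mean output fidelity g equals the estimation fidelity f.
    measure_all = (f_estimate(n, d), f_estimate(n, d))
    # strategy 2: leave one copy untouched (g = 1) and estimate
    # from the remaining n - 1 copies.
    keep_one = (1.0, f_estimate(n - 1, d))
    return measure_all, keep_one  # each point is (g, f)

for n in (1, 2, 5, 10):
    print(n, extreme_points(n))
```

for n = 1 and d = 2 the second strategy returns (1.0, 0.5), i.e., the untouched copy with a random guess, which is the g = 1 endpoint of the trade-off curve.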
we derive an analytical formula for the optimal trade-off between the mean estimation fidelity and the mean fidelity of the qubit state after a partial measurement on identically prepared qubits. we also conjecture an analytical expression for the optimal fidelity trade-off in the case of a partial measurement on identical copies of a -level system.
currently deployed wired and wireless communication systems employ half duplex. namely, either frequency division duplex (fdd) or time division duplex (tdd) has been used to separate transmission and reception. in fdd and tdd, the transmitted signal does not interfere with the received signal due to the orthogonal use of time/frequency resources for transmission and reception. since two orthogonal channels are needed in half duplex systems, twice the time and/or frequency resources are required in half duplex compared to full duplex systems. it is clear that the capacity can be doubled by simultaneous transmission and reception (str) in the same frequency at the same time. str or full duplex systems not only improve the physical layer capacity but also provide other important benefits in layers beyond the physical layer. for example, str can reduce or completely eliminate the hidden node problem, which is a typical issue in csma networks such as wireless local area networks (wlan). when a node receives a packet designated to it and meanwhile has a packet to transmit, by having str capability it can transmit its packet while receiving the designated packet. this not only provides twice the throughput but also enables hidden nodes to better detect active nodes in their neighborhoods. on the other hand, when the node has no packet to send, it can transmit a dummy signal so that any hidden node can detect the activity in its vicinity and realize that the channel is in use. another benefit of str is a significant reduction in end-to-end delay in multi-hop networks. in half duplex systems, each node can start transmission of a packet to the next node only when it is fully received from the previous node in the network. therefore, the end-to-end delay is equal to the packet duration multiplied by the number of hops. however, when str is employed, a node can forward a packet while receiving it, and consequently the end-to-end delay in str systems can be just a bit longer than the packet duration. this will be a huge advantage over half duplex systems, especially as the number of hops grows. meanwhile, the packet forwarded to the next node can play the role of an implicit acknowledgement (ack) to the previous node as well. an interesting application of str is channel sensing in cognitive radio systems. in cognitive radios, active secondary users have to release the spectrum when primary users start their transmissions. without str capability, it is a challenge for secondary users to detect the activity of primary users while they are using the spectrum for their own communications.
however , str enabled secondary users can scan the activities of primary users frequently ( even as they transmit ) and stop their transmissions immediately once they detect primary users transmissions .likewise , str makes the device discovery easier in device - to - device ( d2d ) systems .this is due to the fact that in d2d systems , when user equipment ( ue ) has str capability , it can discover neighboring ues easily by monitoring ul signals from proximate ues without stopping its own ul transmission .it is interesting to note that str techniques can be used for interference cancellation in co - existence of multiple radios in the same device .multiple radios such as wlan , bluetooth , gps receiver , 2 g and 3 g cellular transceivers are put into the same device particularly a small handheld device type .although those radios operate at different rf carriers , due to proximity of transceivers in the same device , they can still interfere with each other .this interference can be treated as echo since the transmitted signal from a radio is arrived at other radios in the same device .hence , by using the proposed technique in this paper , the co - existence issues in the same device can be resolved .multiband support requires large number of switched duplexers in fdd , resulting in quite complicated rf architecture , increased cost and form factor , .multiple input multiple output ( mimo ) and carrier aggregation techniques aggravate the situation .however , by str techniques duplexer free systems are possible by cancelling transmitted signal appeared at receive band .despite all the advantages mentioned , implementation of str has its own challenges .the main difficulty in str system is echo cancellation at receive chain .the strongest echo is introduced to the system when transmitted signal is leaked into receive chain through circulator .this causes a large interference to the desired received signal as the echo power level is much higher than desired received signal .for example , with 46 dbm output power at power amplifier ( pa ) , assuming 25 db isolation at circulator , the echo power will be 21 dbm .in addition to the aforementioned echoes , echoes can be caused by impedance mismatch at antenna .besides , transmitted signal can bounce off objects such as buildings and mountains , and return to the antenna as echoes at receive chain . hence , without echo cancellation , the received signal can not be decoded .furthermore , in order to avoid saturation at lna and high resolution adc , echo cancellation should be performed before lna .therefore , it is imperative to cancel echoes in analogue domain in order for str systems to be commercially deployable .an off - the - shelf hardware is available for rf interference cancellation in analogue and can be used for echo cancellation in rf .however , it is a narrowband interference canceller and performs interference cancellation by adjusting phase and magnitude of interference .thus , the performance of echo cancellation in wideband signals is very limited . in 20 db echo suppression is reported for 5 mhz ieee 802.15.4 signals .in addition , this narrowband interference canceller can create higher sidelobe power than in - band interference power .an str implementation in analogue assuming separate transmit and receive antennas was reported in claiming 37 db suppression by the rf echo canceller itself .recently , presented an open - loop technique for full duplex mimo . a hardware based experiment shows 50 db of echo suppression . 
in , the echo cancellation using transmit beamforming is studied .this technique uses two transmit antennas and one receive antenna .the additional two transmit antennas create a null toward receive antenna , resulting in reduced echo power at receive chain .however , the transmitted signal is not omni - directionally transmitted due to the transmit beamforming .when the intended receiver is located in the null direction , signal - to - noise power ratio ( snr ) loss is inevitable .additionally , the transmit beamforming is narrowband beamforming .thus , it is not suitable for wideband signal .since this technique needs separate transmit and receive antenna set , increased form factor is unavoidable .nonetheless , the echo suppression of 20 30 db by this technique is not promising .however , with the aids from digital baseband cancellation and the narrowband noise canceller , the imperfect prototype achieves 1.84 times of throughput compared to half duplex system throughput .a scalable design to mimo using extra antennas with special placement antennas is proposed in . in , assuming variable delay line , one - tap echo canceller is suggested .it is shown that 45 db suppression of echo is achieved by using heuristic approach in adaptive parameter adjustment .the throughput measurements in show 111% gain in downlink .another prior work proposed to cancel echo before lna .firstly , to avoid saturation at lna , separate transmit and receive antennas with large displacement are used .various antenna orientations are investigated in .the receiver estimates the echo channel at every subcarrier using ofdm signal . using another transmit radio chain , an echo cancelling ofdm signalis generated based on the estimated channels of echo path and the additional radio , and added to the receive chain before lna .the measured echo cancellation is about 31 32 db in and 24 db in .this type of open - loop technique is directly sensitive to impairments as it relies on high accurate echo channel estimation . since the suppression in this case is not sufficient to realize the full duplex gain , other mechanisms such as digital domain cancellation and separate transmit / receive antenna have to be incorporated . without using additional antennas, we propose an adaptive echo cancellation technique that can be implemented in analogue domain for wideband signals .the proposed technique is a closed - loop technique which provides more robustness to impairments than open - loop techniques .furthermore , the proposed technique is scalable to any mimo system . as mentioned earlier , with perfect echo cancellation , str systemscan achieve the doubled capacity especially in isolated links such as point - to - point communications and wireless backhaul .however , in cellular systems the situation is different .in addition to the regular co - channel interference present in half duplex systems , namely base station ( bs ) to ue and ue to bs interferences , there are two unique and serious interferences caused by system operation in full duplex mode : one is _bs - bs _ and the other is _ ue - ue_. on one hand , due to str at bss , neighboring bss dl signals interfere with desired ul signal at home bs .this is called _ bs - bs interference _ and is extremely severe .firstly , unlike bs to ue channel ( downlink or uplink ) , bs - to - bs channel is closer to line - of - sight ( los ) with much smaller path loss . 
secondly , the transmit power and antenna gain at bs is much larger than those of ue .hence , the interferences from neighboring bss to home bs easily dominate desired weak ul signal .hence , without cancelling bs - bs interferences , ul communication is impossible . in this paperwe suggest a solution for bs - bs interferences and provide system level evaluations for str systems with and without bs - bs interference cancellation . on the other hand , in str systems , ul signal transmitted by a uecreates interference to dl signals received at other ues _i.e. _ , dl signal will be corrupted by proximate ul signals .this is called _ ue - ue interference _ and results in loss of dl capacity .unfortunately , with simple transceiver and omni - directional single antenna at ue , no fancy transmit and receive beamforming can be utilized in ue to handle ue - ue interference .therefore , one option to confront ue - ue interference is to coordinate between neighboring bss such that scheduler avoids scheduling proximate ue pairs that cause serious ue - ue interference to each other .however , in this paper we propose a non - cooperative technique which , despite its simplicity , can hugely reduce ue - ue interference and greatly improve dl capacity . unlike cellular systems ,the aforesaid bs - bs and ue - ue interferences are absent in csma network such as wlan systems even when multiple co - channel cells are deployed .the reason is that due to csma protocol , access points ( aps ) or terminals will hold their transmissions if they hear any transmission from other proximate aps or terminals .in fact owing to str , the hidden node issue can be easily solved .of course collisions still can happen between nodes due to other factors such as propagation delay and some delay in header decoding .however , this is common problem even without employing str . in this paperwe investigate a few protocols and demonstrate how str can reduce the hidden node problem .we show that the throughput is improved even with the assumptions of asynchronous packet arrival and some delay in decoding header or decision on channel activity .unlike where detail issues such as variable packet length , fairness and scheduler design are addressed , we study the hidden and exposed node issues in simplified model of csma which renders comparisons with theoretical throughput expression .the paper is organized as follows : in section ii analogue domain echo cancellation technique is described together with performance evaluations . in section iii , the application of str to csma network for reduction of hidden node problem is discussed and throughput improvements are demonstrated with multiple cell simulations .section iv presents the solutions for bs - bs and ue - ue interferences .finally , conclusions are provided in section v.echo cancellation is a well - known topic studied extensively in the literature and proved technology in the field . despite the comprehensive investigations ,the derivation of algorithms and implementations are performed in digital baseband . however , as described previously in section i , the echoes should be cancelled before lna _i.e. _ in analogue domain .implementing digital signal processing techniques in analogue domain is very challenging due to rf impairments and the fact that only limited mathematical operations are allowed in analogue domain . 
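to make the required suppression concrete before deriving the canceller, a back-of-envelope link budget is useful. the sketch below reuses the 46 dbm pa power and 25 db circulator isolation quoted in the introduction; the 10 mhz bandwidth and 5 db receiver noise figure are illustrative assumptions, and the thermal noise floor is the standard -174 dbm/hz figure.

```python
# rough echo link budget: suppression needed to push the circulator echo
# down to the receiver noise floor. pa power and circulator isolation are
# the figures quoted in the introduction; the bandwidth and noise figure
# are illustrative assumptions.
import math

pa_power_dbm = 46.0                # transmit power at the pa output
circulator_isolation_db = 25.0
echo_dbm = pa_power_dbm - circulator_isolation_db    # 21 dbm echo at rx

bandwidth_hz = 10e6                # assumed signal bandwidth
noise_figure_db = 5.0              # assumed receiver noise figure
noise_floor_dbm = -174 + 10 * math.log10(bandwidth_hz) + noise_figure_db

print(f"echo: {echo_dbm:.0f} dbm, noise floor: {noise_floor_dbm:.0f} dbm")
print(f"suppression to reach the floor: {echo_dbm - noise_floor_dbm:.0f} db")
```

pushing a 21 dbm echo down to the resulting -99 dbm noise floor takes roughly 120 db of suppression, which is why the analogue stage must do the heavy lifting before the lna.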
in this section ,we derive echo cancellation techniques in analogue domain and show that our method can cancel echoes very effectively even with rf impairments .the transmitted signal in passband can be written as where represents the real part , is the carrier frequency in radian / sec , and and are baseband in - phase and quadrature phase signals , respectively . without loss of generality , we assume one echo with unknown gain and delay . then , the received signal at receive chain can be modeled as where and are the unknown gain and delay of the echo , respectively , is the desired received signal and is additive white gaussian noise ( awgn ) . in this paperwe cancel the echo in passband by creating estimated echo signal using multiple replica of with different delays and subtracting it from the signal corrupted by echo .for instance , the echo can be estimated by a linear combination of and where is the delay of -th tap . in order to rotate the phases of and , we also need to use the hilbert transform of and .then , the estimated echo using two taps can be written as where and are in - phase and quadrature component of -th tap weight , and is the hilbert transform of .then , the echo canceller output can be expressed as in order to simplify derivation of the weights and , we transform the passband model to equivalent complex baseband model .the transmitted complex baseband signal is defined by then , from , the received signal in complex baseband is in the form of where and are complex baseband versions of desired received signal and awgn , respectively .likewise , we can develop the complex baseband model of the echo canceller output from ( [ e - t0 ] ) and ( [ cancelloutput0 ] ) as follows after some manipulations , it is not difficult to show that .for the concise representation , the echo canceller output can be written in vector form as where superscript represents transpose , ^t ] .although it is not possible to implement wiener solution in analogue domain , minimum mean squared error ( mmse ) solution provides valuable insights on the performance and design parameters . minimizing the power of the echo canceller output means that the echo is cancelledthis is due to the fact that unknown desired receive signal and awgn can not be cancelled by linear combination of as is uncorrelated with and noise .therefore , we define the cost function which minimizes the power of the echo canceller output as in appendix a the derivation of wiener solution in analogue domain is presented .echo cancellation performance is shown in fig .[ fig_mse ] where the suppression level is defined by the suppression is not a function of the carrier frequency and echo power as shown in appendix a. rather , it is a function of delay difference between taps and the number of them .1 ( a ) exhibits suppression as a function of tap delay difference normalized by signal bandwidth ( b ) assuming that the echo delay is in the middle of the first and second tap delay . as it can be observed from the figure ,smaller normalized tap delay difference and larger number of taps provide better echo suppression .even with two taps , the suppression level close to 90 db can be achieved with the normalized tap delay difference equal to 0.01 which means 1 nsec of delay difference between taps in 10 mhz bandwidth systems .this is equivalent to 100 times oversampling in digital signal processing .it is apparent that as the tap delay difference goes to zero the suppression level goes to infinity ( _ i.e. 
_ no residual echo ) .we emphasize that having a small tap delay difference is not a challenging issue .for example , assuming the speed of electromagnetic ( em ) wave in transmission line is equal to the speed of light , difference in length will yield pico second delay difference . in fig .[ fig_mse ] ( b ) the effect of echo delay on echo canceller performance is illustrated assuming that the normalized tap delay difference is 0.1 . in realitythe delay of echo is unknown .however , typical value or range of values of circulator delay can be measured .interestingly , when the tap delay coincides with the true delay of the echo , even with only two taps suppression beyond 150 db can be easily achieved .the reason that even when is equal to either or the suppression is not infinity is that is not approaching zero .it is simple to show that when the delay of either taps coincides with the echo delay , one tap echo canceller instead of two tap echo canceller has infinity suppression level . in two tap wiener filter , we study the impact of number of echoes under the condition of same total echo powers .assume one echo having delay located in the middle of neighboring two taps where . without loss of generality assume .let us add one more echo with the same power as the first echo but closer to the second tap with arbitrary phase and delay where and .then , the received echo signal becomes the power of is depending on the phase , the new echo can reduce total echo power .however , we consider the worst case _i.e. _ .note due to the assumption . for fair comparison ,we normalize such that maximum power of is equal to the power of where .then , and have the same power but has one echo in the middle of taps while contains two echoes : one in the middle of the taps and the other closer to either taps .the cross correlation vector can be written as where and are defined in appendix a and .\ ] ] it is not difficult to show the residual echo powers of and as following and , respectively where which is the residual echo power of the second echo when the first echo is not present . from fig .[ fig_mse2echoes ] it is clear that as the second echo moves toward the second tap , the residual echo power is reduced .notice that means that only one echo exists .hence , the assumption of one echo in the middle of neighboring taps is the worst simulation condition .this is from the fact that echo closer to either tap is easier to estimate as shown in fig .[ fig_mse ] ( b ) .the argument can be easily extended to arbitrary number of echoes with arbitrary echo powers located between neighboring taps . in the previous subsection ideal phase shifteris assumed for hilbert transform . however , due to phase imbalance in phase shifter , hilbert transform of will not be orthogonal to . to overcome the phase imbalance problem we use multiple phase shifters .as far as those phases are not identical , by a linear combination of the multiple phase shifter outputs we can rotate and scale even with large phase imbalances . assuming taps and phase shifters per tap , -th phase shifter output at -th tap can be written as where is phase shifter gain to represent amplitude imbalance , and , which can be modeled as , is phase shift including phase imbalance is clear that with , as far as the phase imbalance is less than , those three phases will not be identical . 
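the claim that non-identical phase shifts suffice can be checked directly: for two shifters, the real weights solve a 2x2 linear system whose determinant is proportional to the sine of the phase difference, so any complex tap coefficient is reachable unless the shifts coincide (modulo pi). the angle and gain values below are illustrative, and the sketch ignores the attenuator constraint on the weights.

```python
# numerical check that two phase-shifted copies of a tap signal with
# non-identical shifts span the complex plane under real-valued weights,
# even with amplitude/phase imbalance. the angles and gains below are
# illustrative; the attenuator constraint |w| < 1 is ignored here.
import numpy as np

theta = np.array([0.35, 1.90])    # imperfect, non-identical shifts (rad)
gain = np.array([0.95, 1.08])     # amplitude imbalance of the shifters
target = 0.7 * np.exp(1j * 2.4)   # desired complex tap coefficient

# solve sum_k w_k * gain_k * exp(j * theta_k) = target for real w
A = np.array([gain * np.cos(theta), gain * np.sin(theta)])
w = np.linalg.solve(A, [target.real, target.imag])

realized = np.sum(w * gain * np.exp(1j * theta))
print(w, abs(realized - target))  # residual ~ 1e-16
```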
by a linear combination of phase shifter outputs, echo can be estimated as follows where is real valued weight for -th tap and -th phase shifter .the complex baseband model of echo canceller output can be developed from ( [ cancelloutput0 ] ) and ( [ e - t ] ) as where is complex baseband version of .the real and imaginary parts of can be written as and respectively . since implementing wiener filter in analogue domainis quite difficult if not impossible , it is desirable to design techniques which are implementable in analogue domain . for this end, we apply well - known steepest - descent method to the cost function .this leads to where is step - size and superscript represents complex conjugate .notice that cross correlation can be served as echo channel estimation at -th tap delay of -th phase shifter output .since it is quite noisy estimation due to random signal , noise and impairments which will be discussed in next subsection , the noisy estimation is low - pass filtered by integrator with small step - size as shown in ( [ lms ] ) .together with , leads to the block diagram of the adaptive echo cancellation in analogue domain shown in fig .[ fig_blk ] .note that the multipliers in the outputs of phase shifters are in fact variable attenuators since the magnitudes of weights can be limited to less than 1 by design .also notice that as the step - size is much less than 1 , it can be implemented by a fixed attenuator .overall , it is quite clear that the proposed structure is implementable .unfortunately , however , and can not be obtained due to phase noise , phase / amplitude imbalances in downconverter . with unknown phase error and in in - phase and quadrature phaserespectively , and unknown gains of and in in - phase and quadrature phase respectively , following signal has to be used where represents imaginary part and is phase noise .similarly , we can define .in addition , although all downconverters are driven by the same oscillator , each downconverter may have different phase rotation .hence , effectively following will be performed where is the phase difference between downconverter for and downconverter for .it is desirable to have in order to avoid divergence or slow convergence .next , the effect of phase distortion in the variable attenuator is investigated . in wideband variableattenuator , different attenuation causes different phase rotation .thus the second term in the right - hand side of is changed to as in , the phase distortion can be modeled as a linear function of attenuation in log scale .for example , assuming and phase rotations at 3 db and 38 db attenuations respectively leads to the additional phase rotation due to the phase distortion in the attenuator can be absorbed in .the phase distortion is large when the attenuation is large .however , when the attenuation is large , its contribution to echo estimation is small .thus , the phase distortion at high attenuation can be ignored .the values of above mentioned impairments are unknown . hence , we do not take any attempt to compensate those impairments in our study .simply , the outputs of downconverters which include all those impairments are used to train the weights .although the second terms in the right hand side of and are quite different due to impairments , as far as the signs of the second terms are same , the adaptive technique can converge as explained in appendix b. 
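a minimal complex-baseband simulation of this adaptive loop is given below: a band-limited random signal stands in for the transmitted waveform, two taps straddle an unknown echo delay, and the weights follow the steepest-descent update of eq. ([lms]). complex tap weights stand in for the in-phase/quadrature weight pairs of the text, and all numerical values (delays, gain, step-size) are illustrative assumptions.

```python
# minimal complex-baseband simulation of the steepest-descent (lms) loop
# of eq. ([lms]): two taps straddle an unknown echo delay and the weights
# integrate the correlation between the canceller output and the tap
# signals. complex weights stand in for the in-phase/quadrature weight
# pairs; all numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, osf = 60000, 8                        # samples; oversampling factor
v = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
k = np.arange(-64, 65)
x = np.convolve(v, np.sinc(k / osf), mode="same")   # band-limited tx signal
x /= np.sqrt(np.mean(np.abs(x) ** 2))

g_echo, d_echo = 0.8 * np.exp(1j * 1.1), 40    # unknown echo gain and delay
taps = (39, 41)                                # tap delays straddle the echo
s = 0.003 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # desired

w, mu = np.zeros(2, complex), 1e-3
out = np.empty(n - 64, complex)
for i in range(64, n):
    xk = np.array([x[i - taps[0]], x[i - taps[1]]])
    e = g_echo * x[i - d_echo] + s[i] - np.dot(w, xk)  # canceller output
    w += mu * e * np.conj(xk)          # steepest descent on output power
    out[i - 64] = e

res = np.mean(np.abs(out[-5000:]) ** 2) - np.mean(np.abs(s) ** 2)
print(-10 * np.log10(max(res, 1e-12) / abs(g_echo) ** 2), "db")
```

for this tap spacing (normalized tap delay difference 0.25), the converged suppression settles near the corresponding wiener limit of roughly 30 db; shrinking the spacing raises the ceiling rapidly, in line with fig. [fig_mse](a).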
this sign-based convergence argument is why the proposed method exhibits immunity to impairments, unlike open-loop techniques. the fixed phase shifter, combiner, splitter and coupler are quite linear components. in , the input third-order intercept point (iip3) of the variable attenuator is 50 dbm, which means that with 0 dbm input power the non-linear distortion power can be kept below -100 dbm. hence, 0 dbm input power at the variable attenuator is maintained.

[tables v-vii: gains in average and 5% cell-edge capacity of str with null forming over the non-str baseline.]

however, one difficulty in solving this optimization problem is the lack of a well-known cost function that can capture gains in cell edge and cell average in both dl and ul. in general, different cost functions can be considered depending on the application and the required gain in each parameter. here, our objective is simply to maximize the sum of all gains, _i.e._, dl cell edge gain + ul cell edge gain + dl cell average gain + ul cell average gain. tables v-vii show gains in average and 5% cell edge capacity of str with null forming over non-str with 8 antennas without null forming. first, consider the large cell. as discussed earlier, in this case ue-ue interference is not a serious problem. it can be seen from table v that more than 100% gain in dl cell average capacity can be achieved by str with null forming with both 8 and 12 antennas, and a decent gain at the cell edge with 12 antennas. the gains beyond 100% in cell average come from the reduction of dl co-channel interference due to null forming. however, since ul is less interference limited, the gain is smaller. the cell edge capacity is not close to twice the non-str capacity, since signal power loss is unavoidable at the cell edge due to null forming. next, let us consider the case where ue-ue interference is more dominant than dl co-channel interference, _i.e._, a small cell with small . as can be observed from table vi, without addressing ue-ue interference, cell edge users experience a serious loss in dl capacity. nevertheless, by employing the proposed resource block structure and optimizing the above cost function, positive gains can be achieved with str and 12 antennas by sacrificing ul capacity, as shown in table vii. although we assumed that each ue uses the whole bandwidth for time division multiplexing (tdm), it is possible to allocate a fraction of the bandwidth to each ue for frequency division multiplexing (fdm). then, ue-ue interference will be larger unless the power spectral density is kept the same. in addition, due to proximate ues in the same sector, a higher resolution adc will be needed. due to path loss, saturation at the lna can be avoided. we have suggested an echo cancellation technique which can be implemented in the analogue domain and demonstrated sufficient suppression of the echo before the lna. the technique is robust to rf impairments, exhibiting outstanding performance without requiring additional antennas. str can be employed in csma networks. we showed that str can reduce the hidden node problem, and the suggested protocols improve throughput in both single and multiple cells. the application of str to cellular systems creates bs-bs and ue-ue interferences. we have provided a solution for the complete cancellation of bs-bs interference.
for ue - ue interference, we studied non - cooperative method .first , we define normalized correlation as \ } \nonumber\\ & = [ sinc(b(\tau-\tau_1))~sinc(b(\tau-\tau_2))]\end{aligned}\ ] ] where is the average power of .similarly , define normalized auto - correlation as \nonumber\\ = & \left [ { { \begin{array}{*{2}c } 1 & sinc(b(\tau_1-\tau_2 ) ) \\sinc(b(\tau_2-\tau_1 ) ) & 1\\ \end{array } } } \right].\end{aligned}\ ] ] then , the cross correlation vector can be written as where is a diagonal matrix defined as .\ ] ] likewise , the auto - correlation matrix can be expressed as the weight vector for the wiener solution is obtained by after manipulations , the residual echo power defined in the denominator of is in the form of finally , the suppression level of echo power is calculated from order to give some insight on how phase noise affects the performance , without loss of generality , the following simplified model is considered .we consider one echo and assume ideal knowledge of the delay . with one tap echo canceller and zero valued weight initially , we have however , the echo channel estimation ( [ errsig ] ) is far from unless it is divided by .in addition , the echo channel estimation will be corrupted by noise and various impairments .note also that random ofdm signal has large peak - to - average power ratio ( papr ) .hence , in adaptive echo canceller , the echo channel estimation is low - pass filtered by an integrator with very small step - size as the echo channel estimation is quite noisy .thus , the estimation error in echo channel estimation does not directly appear in the weight .as far as the sign of is equal to the sign of true echo channel , the weight will be gradually updated toward the true echo channel until the power of is small enough . due to this nature , even if the echo channel estimation is not perfect and corrupted by impairments and noise , the adaptive echo canceller can converge . \cos(\delta ) \nonumber \\ & & { -}\ ! \left [ \cos(\theta_h ) \left\ { \begin{array}{l } g_{x , i}g_{z , q}(\sin(\delta_{i - q}+\varepsilon_{x - z}(t))+\sin(2\theta_x(t)-\delta_{i+q}-\varepsilon_{x+z}(t)))+ \\ g_{x ,q}g_{z , i}(\sin(\delta_{q - i}+\varepsilon_{x - z}(t))-\sin(2\theta_x(t)-\delta_{q+i}-\varepsilon_{x+z}(t)))\end{array } \right \ } \right .\nonumber \\ & & { + } \ : \left . \left .\sin(\theta_h ) \left\ { \begin{array}{l } g_{x , i}g_{z , q}(\cos(\delta_{i - q}+\varepsilon_{x - z}(t))+\cos(2\theta_x(t)-\delta_{i+q}-\varepsilon_{x+z}(t)))+ \\g_{x , q}g_{z , i}(\cos(\delta_{q - i}+\varepsilon_{x - z}(t))-\cos(2\theta_x(t)-\delta_{q+i}-\varepsilon_{x+z}(t)))\end{array } \right \ } \right ] \sin(\delta ) \right\}. \nonumber \\\end{aligned}\ ] ] \cos(\delta ) \right .\nonumber \\ & & \left .- \left [ \cos(\theta_h ) \left\ { g_{x , i}g_{z , q}\sin(\delta_{i - q})+ g_{x , q}g_{z , i}\sin(\delta_{q - i } ) \right \ } + \sin(\theta_h ) \left\ { g_{x , i}g_{z , q}\cos(\delta_{i - q})+ g_{x , q}g_{z , i}\cos(\delta_{q - i } ) \right \ } \right ] \sin(\delta ) \right \}.\end{aligned}\ ] ] then , the echo channel estimation can be written as shown in ( [ errwpn ] ) where is the angle of assuming uniformly distributed over to and is the phase difference between downconverters for and .when in majority of samples the sign of is equal to the sign of , the weight will be updated to the right direction . 
due to random time varying phase from transmitted signal , phase noise and , the echo channel estimation will fluctuate .however , by the low pass filtering , those time varying noises will be filtered . since we can assume the phase noise and the phase of transmitted signal are stationary, the integration over time can effectively perform ensemble average .assuming gaussian phase noise , the ensemble average over , and leads to ( [ eerrwpn ] ) where is the variance of phase noise and is the average power of .as can be seen , the phase noise reduces the power of echo channel estimation .hence , the performance will be more sensitive to the noise .when all downconverters are driven by the same oscillator , the phase noises can be identical . in this scenario ,the phase noise difference will be zero .then , the scale factor disappears as if no phase noise is present .yang - seok choi ( s91,m01 ) received the b.s .degree from korea university , seoul , korea , in 1990 , the m.s.e.e .degree from korea advanced institute of science and technology ( kaist ) , taejon , korea , in 1992 , and the ph.d .degree from polytechnic university , brooklyn , ny , usa in 2000 , all in electrical engineering . from 1992 to 1996 , he was with samsung electronics , co. , ltd . , suwon , korea , where he developed various modems for hdtv and dbs . during 2000 summer he held a summer intern position at at&t labs - research shannon lab , florham park , nj , usa . in 2000 , he joined national semiconductor , east brunswick , nj , usa where he was involved in the development of w - cdma . during 2001 - 2002 , he was a senior technical staff member at at&t labs - research , middletown , nj , usa where he researched on mimo systems , ofdm systems and information theory . from 2002 to 2004 he had been with vivato , inc ., spokane , wa , usa working on mimo ofdm systems , smart antenna systems , and antenna / beam selection techniques . in 2004, he joined intel corporation , hillsboro , or , usa where he studied on broadband wireless communications systems and was a director of radio systems engineering leading standards and technology development . in 2013 , he joined intel labs where he researches future generation wireless systems .his research interests include various aspects of wireless communications systems .hooman shirani - mehr ( s03- m10 ) received the b.s .degree from sharif university of technology , tehran , iran , in 2001 and m.s . and ph.d .degrees from the university of southern california , los angeles , ca in 2006 and 2010 , respectively , all in electrical engineering .since 2010 , he has been with intel corporation where he is currently working on 3gpp lte and lte - advanced wireless systems .his research interests include communication theory , information theory and signal processing with applications in wireless communications .
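as a compact numerical check of the appendix a analysis, the wiener suppression level can be evaluated directly from the normalized sinc correlations: with p the cross-correlation vector and r the tap autocorrelation matrix, the residual echo power is 1 - p^t r^{-1} p. the sketch below assumes uniformly spaced taps with the echo midway between the first two, matching the worst-case placement discussed in section ii; the spacing values scanned are illustrative.

```python
# closed-form wiener suppression of a multi-tap canceller for one echo,
# built from the normalized sinc correlations of appendix a: residual
# echo power = 1 - p^t r^{-1} p. uniform tap spacing, with the echo
# midway between the first two taps, is an illustrative assumption.
import numpy as np

def suppression_db(b_dtau, n_taps):
    tau_taps = np.arange(n_taps) * b_dtau     # normalized tap delays
    tau_echo = 0.5 * b_dtau                   # echo midway between taps
    p = np.sinc(tau_echo - tau_taps)          # cross-correlation vector
    r = np.sinc(tau_taps[:, None] - tau_taps[None, :])  # autocorrelation
    residual = 1.0 - p @ np.linalg.solve(r, p)
    return -10.0 * np.log10(max(residual, 1e-300))

for b_dtau in (0.5, 0.1, 0.01):
    print(b_dtau, [round(suppression_db(b_dtau, k), 1) for k in (2, 3, 4)])
```

the output reproduces the qualitative behavior of fig. [fig_mse](a): suppression grows rapidly as the normalized tap spacing shrinks and as taps are added.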
full duplex or simultaneous transmission and reception (str) in the same frequency at the same time can potentially double the physical layer capacity. however, the high-power transmit signal will appear at the receive chain as echoes with powers much higher than that of the desired received signal. therefore, in order to achieve the potential gain, it is imperative to cancel these echoes. as these high-power echoes can saturate the low noise amplifier (lna), and digital domain echo cancellation would require an unrealistically high resolution analog-to-digital converter (adc), the echoes should be cancelled or sufficiently suppressed before the lna. in this paper we present a closed-loop echo cancellation technique which can be implemented purely in the analogue domain. the advantages of our method are multifold: it is robust to phase noise, does not require an additional set of antennas, can be applied to wideband signals, and its performance is insensitive to radio frequency (rf) impairments in the transmit chain. next, we study a few protocols for str systems in carrier sense multiple access (csma) networks and investigate mac level throughput with realistic assumptions in both single cell and multiple cells. we show that str can reduce the hidden node problem in csma networks and produce gains of up to 279% in maximum throughput in such networks. moreover, at high traffic load, the gain of the str system can be tremendously large, since the throughput of the non-str system is close to zero at heavy traffic due to severe collisions. finally, we investigate the application of str in cellular systems and study two new unique interferences introduced to the system due to str, namely _bs-bs interference_ and _ue-ue interference_. we show that these two new interferences will hugely degrade system performance if not treated appropriately. we propose novel methods to reduce both interferences and investigate the performance at the system level. we show that bs-bs interference can be suppressed sufficiently to be less than the thermal noise power, and with a favorable ue-ue channel model, capacities close to double are observed in both downlink (dl) and uplink (ul). when ue-ue interference is larger than dl co-channel interference, we propose a simple and ``non-cooperative'' technique to reduce ue-ue interference. full duplex, simultaneous transmission and reception (str), echo cancellation, csma, null forming, hidden node.
according to the prebiotic soup hypothesis, the early earth had the supply of organic compounds needed to jump-start polymerization reactions that, through gradual complexification, led to the first biochemical networks displaying some of the characteristics attributed to life, such as metabolic activity and replication. although the road is still obscure, the situation is not all bleak. in 1953, stanley miller simulated tentative conditions of the early earth in the laboratory to obtain amino acids from simple chemical compounds. that same year, frank proposed that auto-catalytic polymerization could explain the emergence of biomolecular homochirality, a clear signature of terrestrial and, possibly, all life: terrestrial amino acids belonging to proteins are overwhelmingly left-handed, while sugars are right-handed. if a bottom-up approach to the early development of life is adopted, the homochirality of life's biochemistry must have emerged dynamically, as reactions among the simplest molecular building blocks occurred with high enough yield. alternatively, one may assume that, somehow, only monomers of a single chirality were present in the prebiotic soup: they were made that way or brought here during the intense bombardment of earth's infancy, which lasted until about 3.8 gyr ago. we would, however, still need to understand how homochirality developed elsewhere in the cosmos and not here, and whether it developed with the same chiral bias in more than one place. here, we consider the homochirality of life as an emergent process that took place on the early earth and, possibly, other planetary platforms. as a starting point, we use the reaction-network model proposed by sandars, which includes enantiometric cross-inhibition. as shown in the interesting work of brandenburg and multamäki (bm), the sandars polymerization reaction network can be reduced to an effective spatio-temporal mean-field model, where the order parameter is the chiral asymmetry between left and right-handed polymers. to this, we add the effects of an external environment, showing that they can be crucial in the final determination of the net value of the enantiometric excess, if any. sandars proposed the following polymerization reactions: $l_n + l_1 \xrightarrow{2k_s} l_{n+1}$; $l_n + r_1 \xrightarrow{2k_i} l_n r_1$; $l_n r_1 + l_1 \xrightarrow{k_s} l_{n+1} r_1$; and $l_n r_1 + r_1 \xrightarrow{k_i} r_1 l_n r_1$, where $k_s$ ($k_i$) are the reaction rates for adding monomers of the same (opposite) chirality to a given chain.
the network is completed by adding the four opposite reactions, that is, by interchanging $L \leftrightarrow R$, and by adding a substrate $S$ from which both left- and right-handed monomers emerge, $S \rightarrow L_1$ and $S \rightarrow R_1$, at rates controlled by $Q_L$ and $Q_R$, which determine the enzymatic enhancement of left- and right-handed monomers and are not known. we follow sandars and choose [...]. as remarked in bm, it is possible to truncate the system to $N = 2$ and still maintain the essential aspects of the dynamics leading to homochiralization. this allows us to model the reaction network as a mean-field theory exhibiting spontaneous chiral symmetry breaking. our approach blends the work of bm with the pioneering work of kondepudi and nelson, where the reaction network was coupled to time-dependent external effects: chirality evolves spatio-temporally in contact with an environment. the equations can be simplified by assuming that the rate of change of [...] is much slower than that of [...]; the same for the substrate $[S]$. using the symmetric and asymmetric combinations of the left- and right-handed concentrations, with source terms proportional to $(pC_L + qC_R)$ and $(pC_R + qC_L)$ for monomers generated from the substrate, and rescaling concentrations by $C_0 = (2k_S/Q)^{-1/2}$, bm have shown that the polymerization equations reduce to [...], where $\lambda_0 = 2k_S C_0$, with dimension of inverse time. $\mathcal{S} = 1$ is a fixed point: the system will tend towards this value at time-scales of order $\lambda_0^{-1}$. with $\mathcal{S} = 1$, the equation for the chiral asymmetry $\mathcal{A}$ has fixed points at $\mathcal{A} = 0$ and $\mathcal{A} = \pm\sqrt{(2f-1)/f}$, as pointed out in bm. an enantiomeric excess is only possible if $f > 1/2$. we model the external environment via a stochastic spatio-temporal langevin equation, rewriting the equations above as [...], where $k$ is the diffusion constant and $w(x,t)$ represents white noise with zero mean and a two-point correlation function given by $\langle w(x,t)\,w(x',t')\rangle = a^2\,\delta(t-t')\,\delta^d(x-x')$, where $a^2$ measures the strength of the external influence. for example, in mean-field models of phase transitions, it is common to write $a^2 = 2\gamma k_B T$, where $k_B$ is boltzmann's constant, $T$ is the temperature, and $\gamma$ is a viscosity coefficient. the equations can be made dimensionless by introducing [...] and [...], which determine the typical spatio-temporal scales in the system. the noise amplitude scales as [...], where $d$ is the number of spatial dimensions. using as nominal values for the parameters [...], [...], and the diffusivity of water, we obtain [...] and [...]. as in bm [see also ...], the concentrations are spatially-dependent quantities. this implicitly assumes that it is possible to define an effective correlation volume within which the value of the chiral asymmetry is fairly homogeneous. (we take [...].) using well-known results from the mean-field theory of phase transitions, we can easily compute the correlation length. from the equation for $\mathcal{A}$, we obtain an effective potential
$$V(\mathcal{A}) \;=\; -\,(2f-1)\,\frac{\mathcal{A}^2}{2} \;+\; f\,\frac{\mathcal{A}^4}{4}. \qquad \text{[v]}$$
for $f > 1/2$, $V(\mathcal{A})$ has a typical double-well shape, with minima at the fixed points $\pm\sqrt{(2f-1)/f}$. the correlation length, $\xi$, is given by $\xi^{-2} = V''(\mathcal{A})\big|_{\mathcal{A}_{\min}}$, where $\mathcal{A}_{\min}$ denotes a minimum of the potential. for the fixed points $\pm\sqrt{(2f-1)/f}$, we get $\xi = [2(2f-1)]^{-1/2}$. at $f = 1/2$ the correlation length diverges, as it should for a critical point. however, the noise parameter also controls the behavior of the system. indeed, even if $f > 1/2$, an enantiomeric excess may not develop if $a^2$ is above a critical value $a_c^2$.
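the fixed-point structure quoted above can be checked on a toy version of the reduced dynamics. the sketch below integrates $\dot{\mathcal{A}} = \mathcal{A}\,(2f - 1 - f\mathcal{A}^2)$, a dimensionless form chosen by us only because it has the racemic fixed point $\mathcal{A} = 0$ and the chiral pair $\pm\sqrt{(2f-1)/f}$ for $f > 1/2$; the prefactors of the actual bm reduction are not reproduced here.

```python
import numpy as np

def evolve_asymmetry(f, A0=0.01, dt=1e-3, steps=200_000):
    """forward-euler integration of dA/dt = A (2f - 1 - f A^2)."""
    A = A0
    for _ in range(steps):
        A += dt * A * (2.0 * f - 1.0 - f * A * A)
    return A

for f in (0.4, 0.75, 1.0):
    A_final = evolve_asymmetry(f)
    A_star = np.sqrt((2 * f - 1) / f) if f > 0.5 else 0.0
    print(f"f = {f}: A -> {A_final:.4f} (expected fixed point {A_star:.4f})")
```

for $f = 0.4$ the small initial asymmetry decays to zero, while for $f = 0.75$ and $f = 1$ it grows to the corresponding chiral minimum.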
in analogy with ferromagnets, where above a critical temperature the net magnetization is zero, one may say that above $a_c^2$ the stochastic forcing due to the external environment overwhelms any local excess of one enantiomer over the other within a domain of correlation volume $V_\xi$: racemization is achieved at large scales and chiral symmetry is restored throughout space. salam suggested that there should be a critical temperature above which any net homochirality is destroyed. however, he conceded that calculating it would be quite challenging using the electroweak theory of particle physics. here, we chose a different route which, we believe, will allow us to explore the qualitative aspects of the problem more effectively: the noise amplitude may represent a sudden increase in temperature and/or pressure due to a meteoritic impact or a volcanic eruption, or, possibly, a source of circularly-polarized ultraviolet light. the equation dictating the evolution of the enantiomeric excess, eq. [aeq], was solved with a finite-difference method on a 2d grid and on a 3d grid, with spacings [...] and [...], and periodic boundary conditions. in 2d, this corresponds to simulating a shallow pool with linear dimensions of [...] cm. we prepared the system initially in a chirally pure phase, which we chose to be $\mathcal{A} = 1$. the equation is then solved for different values of the external noise amplitude $a^2$. as can be seen in figure [tcritfig], for $a^2 > a_c^2$, $\langle\mathcal{A}\rangle \rightarrow 0$, that is, the system becomes racemized. $\langle\mathcal{A}\rangle$ approaches a constant for large times, indicating that the reaction network reaches equilibrium with the environment. the results are ensemble averaged. for $a^2 < a_c^2$, [...]. we can describe the environmental impact on homochirality at the microscopic level by introducing the "ginzburg criterion" familiar from the theory of phase transitions. consider a correlation volume with $\mathcal{A} = 1$ (or $\mathcal{A} = -1$). what is the energy barrier to flip half the molecules in the volume so that $\mathcal{A} \rightarrow 0$? if [...] is the number of molecules in a correlation volume, [...], where [...] is the energy to flip one molecule. the ginzburg criterion says that this energy is [...], where [...]. comparing the two expressions we obtain [...]. from equation [v], [...]. now, [...]. (we set [...].) using [...] for the microscopic spatial scale, and that [...], we obtain [...]. [the energy has dimensions of (length)[...].] to complete the argument, we use that the critical "environmental" energy to restore the chiral symmetry was obtained numerically to be (cf. figure [tcritfig]) [...]. we thus obtain the ratio [...]. it is tempting to compare this result with possible sources of homochirality. for example, weak neutral currents are expected to induce an excess at room temperature of one part in $10^{17}$. in the language of the present work, they would induce a tilt in the potential proportional to [...]. thus, within the violent environment of the prebiotic earth, effects from such sources, even if cumulative, would be negligible: any accumulated excess could be easily wiped out by an external disturbance. further work along these lines is in progress.
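a minimal version of this numerical experiment can be sketched as follows, assuming the dimensionless langevin form $\partial_t\mathcal{A} = \nabla^2\mathcal{A} + \mathcal{A}(1 - \mathcal{A}^2) + w$ with $f = 1$; the grid size, time step, and sampled noise strengths are illustrative, not the values used in the paper. the discretized white noise carries the usual $1/\sqrt{\Delta t\,\Delta x^d}$ lattice scaling.

```python
import numpy as np

def mean_asymmetry(a2, n=64, dx=0.5, dt=0.01, steps=10_000, seed=0):
    """2d euler-maruyama run of the noisy asymmetry equation, periodic grid."""
    rng = np.random.default_rng(seed)
    A = np.ones((n, n))                        # chirally pure initial state
    amp = np.sqrt(a2 / (dt * dx**2))           # lattice white-noise amplitude
    for _ in range(steps):
        lap = (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
               np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4.0 * A) / dx**2
        A += dt * (lap + A - A**3) + dt * amp * rng.standard_normal(A.shape)
    return A.mean()

for a2 in (0.1, 2.0):   # weak vs strong environmental noise (illustrative)
    print(a2, mean_asymmetry(a2))
```

averaging the final $\langle\mathcal{A}\rangle$ over several seeds, as done for the ensemble averages above, distinguishes the sub- and super-critical regimes.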
the development of prebiotic homochirality on the early earth or on another planetary platform may be viewed as a critical phenomenon. it is shown, in the context of spatio-temporal polymerization reaction networks, that environmental effects, be they temperature surges or other external disruptions, may destroy any net chirality previously produced. in order to understand the emergence of prebiotic homochirality, it is important to model the coupling of polymerization reaction networks to different planetary environments.
graphical models of codes and the decoding algorithms associated with them are now a major focus area of research in coding theory . turbo codes , low - density parity - check ( ldpc ) codes , and expander codes are all examples of codes defined , in one way or another , on underlying graphs . a unified treatment of graphical models and the associated decoding algorithms began with the work of wiberg , loeliger and koetter , , and has since been abstracted and refined under the framework of the generalized distributive law , factor graphs , and normal realizations , .the particular case of graphical models in which the underlying graphs are cycle - free has a long and rich history of its own , starting with the study of trellis representations of codes ; see _ e.g. _ , and the references therein .briefly , a graphical model consists of a graph , an assignment of symbol variables to the vertices of the graph , an assignment of state variables to the edges of the graph , and a specification of local constraint codes at each vertex of the graph .the full behavior of the model is the set of all configurations of symbol and state variables that satisfy all the local constraints .such a model is called a realization of a code if the restriction of the full behavior to the set of symbol variables is precisely .the realization is said to be cycle - free if the underlying graph in the model has no cycles .a trellis representation of a code can be viewed as a cycle - free realization in which the underlying graph is a simple path .a linear code has a realization on a graph that is not connected if and only if can be expressed as the direct sum of the codes that are individually realized on the connected components of .thus , there is no loss of generality in just focusing , as we do , on the case of realizations on connected graphs . in this paper , we will be concerned with tree realizations cycle - free realizations in which the underlying cycle - free graph is connected , _i.e. _ , is a tree .it is by now well known that the sum - product algorithm on any tree realization provides an exact implementation of maximum - likelihood ( ml ) decoding ,,, . a good initial estimate of the computational complexity of such an implementation is given by the constraint complexity of the realization , which is the maximum dimension of any of the local constraint codes in the realization .now , distinct tree realizations of the same code have , in general , distinct constraint complexities .the treewidth of a code is defined to be the least constraint complexity of any of its tree realizations .thus , treewidth may be taken to be a measure of the ml decoding complexity of a code .since trellis realizations are instances of tree realizations , the treewidth of a code can be no larger than the minimum constraint complexity of any of its trellis realizations . in the abstract of his paper , forney claimed that `` the constraint complexity of a general cycle - free graph realization can be [ strictly ] less than that of any conventional trellis realization , but not by very much . '' while he substantiated the first part of his claim by means of an example , he left the `` not by very much '' part as a conjecture ( * ? ? ?* , conjecture 2 ) .but he also admitted that none of the arguments he gave in support of his conjecture `` is very persuasive , '' and that it is equally plausible that ( * ? ? 
?* conjecture 3 ) there exists no upper bound on the gap between the treewidth of a code and the minimum constraint complexity of any of its trellis realizations .one of the main contributions of this paper is an example that affirms the validity of forney s conjecture 3 .we present , in section [ complexity_section ] , a family of codes for which the difference between the minimum trellis constraint complexity and the treewidth grows logarithmically with codelength .we conjecture that this is in fact the maximal rate of growth of this difference .our construction of this example is based upon results from the graph theory and matroid theory literatures that connect the notions of treewidth and trellis complexity of a code to certain complexity measures defined for graphs .this paper makes two other contributions , both relating to minimal tree realizations .a mapping of the set of coordinates of a code to the vertices of a tree is called a tree decomposition .a tree decomposition may be viewed as an assignment of symbol variables to the vertices of the tree .it is known that given a code , among all tree realizations of that extend a given tree decomposition , there is one that minimizes the state space dimension at each vertex of the underlying tree .this minimal tree realization , an explicit construction of which was also given in , is unique up to isomorphism .we give two new constructions of minimal tree realizations .the first construction involves a generalization of the idea of state merging that can be used to construct minimal trellis realizations ( * ? ? ?* section 4 ) .we show that any tree realization of a code can be converted to a minimal realization by a sequence of state merging transformations .the state space and constraint code dimensions do not increase at any step of this process . from this , we obtain the fact that a minimal realization also minimizes the constraint code dimension at each vertex of the underlying tree .our second construction of minimal tree realizations uses extensions of the code decomposition techniques that were presented in .the main advantage of this construction is its recursive nature , which makes it suitable for mechanical implementation .also , it is relatively straightforward to estimate the computational complexity of this construction .we show that the complexity is polynomial in the length and dimension of the code , as well as in the size of the underlying tree , but is exponential in the state - complexity of the minimal realization , which is the maximum dimension of any state space in the realization .the paper is organized as follows . 
in section [ background_section ], we provide the necessary background on tree realizations of linear codes .the construction of minimal realizations by means of state merging is presented in section [ state_merging_section ] .code decomposition techniques are developed in section [ decomp_section ] , and used in section [ new_construct_section ] to derive a recursive construction of minimal tree realizations .proofs of some of the results from sections [ background_section][new_construct_section ] are deferred to appendices to preserve the flow of the exposition .treewidth and related complexity measures are defined in section [ complexity_section ] , which also establishes connections between these code complexity measures and certain complexity measures defined for graphs .these connections are used to derive the example of a code family for which the gap between minimum trellis constraint complexity and treewidth is arbitrarily large .we also touch upon the subject of codes of bounded complexity , observing that many hard coding - theoretic problems become polynomial - time solvable when restricted to code families whose treewidth is bounded .section [ conclusion ] contains a few concluding remarks .our treatment of the topic of tree realizations in this section is based on the exposition of forney , ; see also .we start by establishing some basic notation .we take to be an arbitrary finite field .given a finite index set , we have the vector space . for and , the notation will denote the _ projection _ .also , for , we will find it convenient to reserve the use of to denote the set difference .a _ linear code _ over , defined on the index set , is a subspace . we will only consider linear codes in this paper , so the terms `` code '' and `` linear code '' will be used interchangeably .the dimension , over , of will be denoted by .an ] code .the dual code of is denoted by , and is defined on the same index set as .let be a subset of the index set .the _ projection _ of onto is the code , which is a subspace of .we will use to denote the _ cross - section _ of consisting of all projections of codewords that satisfy . to be precise , . note that .also , since is isomorphic to the kernel of the projection map defined by , we have that .furthermore , projections and cross - sections are dual notions , in the sense that , and similarly , .if and are codes over defined on mutually disjoint index sets and , respectively , then their _ direct sum _ is the code defined on the index set , such that and .this definition naturally extends to multiple codes ( or subspaces ) , where is a code identifier that takes values in some set .again , it must be assumed that the codes are defined on mutually disjoint index sets .the direct sum in this situation is denoted by .a tree is a connected graph without cycles . given a tree , we will denote its vertex and edge sets by and , respectively , or simply by and if there is no ambiguity .vertices of degree one are called _ leaves _ , and all other vertices are called _ internal nodes_. given a ,the set of edges incident with will be denoted by .removal of an arbitrary edge from produces a disconnected graph , which is the disjoint union of two subtrees , which we will denote by and , of .note that and form a partition of .let be a code over , defined on the index set .to each , we associate a _ symbol variable _ , which is allowed to take values in .a _ tree decomposition _ of is a pair , where is a tree ( _ i.e. 
_ , a connected , cycle - free graph ) and is a mapping from to the vertex set of .pictorially , a tree decomposition is depicted as a tree with an additional feature : at each vertex such that is non - empty , we attach special `` half - edges '' , one for each index in ; see figure [ tree_decomp ] . at this point, we introduce some notation that we will consistently use in the rest of the paper . given a tree decomposition of an index set , and an edge , we define and . thus , and are the subsets of that get mapped by to vertices in and , respectively .clearly , and form a partition of .recall that , , denotes the set of edges incident with in .consider a tuple of the form , where * is a tree decomposition of ; * for each , is a vector space over called a _ state space _ ; * for each , is a subspace of , called a _ local constraint code _ , or simply , a _local constraint_. such a tuple will be called a _tree model_. the elements of any state space are called _states_. the index sets of the state spaces , , are taken to be mutually disjoint , and are also taken to be disjoint from the index set corresponding to the symbol variables . finally , to each , we associate a _ state variable _ that takes values in the corresponding state space .a _ global configuration _ of a tree model as above is an assignment of values to each of the symbol and state variables . in other words, it is a vector of the form .a global configuration is said to be _valid _ if it satisfies all the local constraints .thus , is a valid global configuration if for each , .the set of all valid global configurations of a tree model is called the _ full behavior _ of the model . notethat the full behavior is a subspace . as usual , denotes the projection of onto the index set .if , then the model is called a _ ( linear ) tree realization _ of . a tree realization of is said to _ extend _ ( or be an extension of ) the tree decomposition of the index set of .any tree decomposition of the index set of a code can always be extended to a tree realization of the code , as explained in the following example .let be a code defined on index set , and let be a tree decomposition of .pick an arbitrary , and define .now , consider the set , , of edges incident with .removal of any produces the two subtrees and .we specify to be the subtree that does _ not _ contain the vertex , and as usual , . for each ,the state space is taken to be a copy of .the remaining state spaces and local constraints are chosen so that , for each , the symbol variables indexed by simply get relayed ( unchanged ) to the state variable ; see figure [ trivial_ext ] .it should be clear that the resulting tree model is a tree realization of the code .this will be called a _trivial extension _ of .we will present constructions of non - trivial extensions of tree decompositions a little later .[ trivial_example ] a _ simple path _ is a tree with exactly two leaves ( the end - points of the path ) , in which all internal nodes have degree two ; see figure [ path_fig ] .let be a code defined on index set , and let be a tree decomposition of , in which is a simple path , and is a surjective map .any tree realization of that extends is called a _ trellis realization _ of .when is a bijection , then any trellis realization extending is called a _ conventional _ trellis realization . when is not a bijection ( but still a surjection ) , a trellis realization that extends called a _sectionalized _ trellis realization . 
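before moving on, the projection and cross-section operations defined earlier, and the dimension identity relating them, are easy to experiment with by brute-force enumeration on a small binary code. the sketch below is ours, not the paper's (the helper names are invented, and enumeration is exponential in the dimension, so it is only viable for toy sizes); it checks $\dim(C|_J) = \dim(C) - \dim(C_{:\bar{J}})$ on a $[7,4]$ hamming code.

```python
import itertools
import numpy as np

G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])     # generator of a [7,4] hamming code

def codewords(G):
    k = G.shape[0]
    return [tuple((np.array(m) @ G) % 2) for m in itertools.product((0, 1), repeat=k)]

def dim(words):
    return int(np.log2(len(set(words))))  # the words form a linear space

def projection(C, J):
    return [tuple(c[j] for j in J) for c in C]

def cross_section(C, J, n):
    Jbar = set(range(n)) - set(J)
    return [tuple(c[j] for j in J) for c in C if all(c[j] == 0 for j in Jbar)]

C = codewords(G)
J, Jbar = [0, 1, 2], [3, 4, 5, 6]
assert dim(projection(C, J)) == dim(C) - dim(cross_section(C, Jbar, 7))
print(dim(projection(C, J)), dim(cross_section(C, Jbar, 7)))   # 3 and 1
```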
in trellis terminology ,the local constraint codes in a trellis realization are called _branch spaces_. the theory of trellis realizations is well established ; we refer the reader to for an excellent survey of this theory .[ trellis_example ] let be the full behavior of a tree model . we will find it useful to define certain projections of , other than for .let be a global configuration in . at any given ,the _ local configuration _ of at is defined as the set of all local configurations of at is then defined as . by definition , .similarly , for , and as above , we define the projections and .clearly , is a subspace of .if consists of a single edge , then we simply denote the corresponding projections by and .the following elementary property of the projections will be useful later ; a proof for it is given in appendix [ sec2_lemmas_app ] .let be the full behavior of some tree realization of a code , defined on the index set , that extends the tree decomposition .suppose that and are such that .then , .[ b|e_lemma ] a tree model ( or realization ) , with full behavior , is said to be _ essential _ if for all .this definition actually implies something more . if the tree model , with full behavior , is essential , then for all .[ ess_lemma ] a proof of the lemma can be found in appendix [ sec2_lemmas_app ] .+ an arbitrary tree model can always be `` essentialized '' . to see this ,let be a tree model with full behavior .recall that is a subspace of , and is a subspace of .define the _ essentialization _ of to be the tree model .it is readily verified that has the same full behavior as .+ given a code and a tree decomposition of its index set , there exists an essential tree realization , , of with the following property , : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ if is a tree realization of that extends , then for all , . __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ __ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ this _ minimal _ tree realization , which we henceforth denote by , is unique up to isomorphism . more precisely , if is also a tree realization of with the above property ( except that is replaced by ) , then for each , and for each . we will not distinguish between isomorphic tree realizations .we outline a construction , due to forney , of . 
for any edge $e \in E$, the sets $J(e)$ and $\bar{J}(e)$ form a partition of the index set $I$. define
$$S_e^* \;=\; C \,/\, \big( C_{:J(e)} \oplus C_{:\bar{J}(e)} \big), \qquad \text{[se*_def]}$$
where $C_{:J}$ denotes the cross-section of $C$ on $J$, and let $\sigma_e^* : C \rightarrow C / ( C_{:J(e)} \oplus C_{:\bar{J}(e)} )$ be the canonical projection map. in other words, for $c \in C$, $\sigma_e^*(c)$ is the coset $c + ( C_{:J(e)} \oplus C_{:\bar{J}(e)} )$. now, let $B^*$ be the vector space consisting of all global configurations $\big( c, (\sigma_e^*(c),\, e \in E) \big)$ corresponding to codewords $c$, where $c \in C$. it is worth noting that $B^*|_I = C$, and furthermore, [...], since [...] implies that [...]. we can now define, for each $v \in V$, the local constraint
$$C_v^* \;=\; B^*|_v \;=\; \big\{ \big( c|_{\omega^{-1}(v)},\, (\sigma_e^*(c),\, e \in E(v)) \big) \,:\, c \in C \big\}. \qquad \text{[cv*_def]}$$
the minimal realization $M(C; T, \omega)$ is the tuple $(T, \omega, \{S_e^*\}, \{C_v^*\})$. it may be verified that $B^*$ is the full behavior of $M(C; T, \omega)$, so that $M(C; T, \omega)$ is indeed an essential tree realization of $C$. from the definition of $S_e^*$ in ([se*_def]), it is clear that for each $e \in E$,
$$\dim(S_e^*) \;=\; \dim(C) - \dim(C_{:J(e)}) - \dim(C_{:\bar{J}(e)}). \qquad \text{[dimse*]}$$
it is useful to point out that $\dim(S_e^*)$ may also be expressed as
$$\dim(S_e^*) \;=\; \dim(C|_{J(e)}) + \dim(C|_{\bar{J}(e)}) - \dim(C), \qquad \text{[dimse*_alt]}$$
a consequence of the fact that for any $J \subseteq I$, $\dim(C_{:J}) = \dim(C) - \dim(C|_{\bar{J}})$. thus, by the uniqueness of minimal tree realizations, if $\Gamma$ is a tree realization of $C$ with the property that, for all $e \in E$, $\dim(S_e)$ equals one of the expressions in ([dimse*]) or ([dimse*_alt]), then $\Gamma$ is in fact $M(C; T, \omega)$. forney also derived an expression for the dimension of the local constraints. consider any $v \in V$. for each $e \in E(v)$, we specify $J_v(e)$ to be the set of indices mapped into the component of $T - e$ that does _not_ contain $v$; as usual, $\bar{J}_v(e) = I \setminus J_v(e)$. then (forney, theorem 1),
$$\dim(C_v^*) \;=\; \dim(C) - \sum_{e \in E(v)} \dim(C_{:J_v(e)}). \qquad \text{[dimcv*]}$$
forney gave the following bound for $\dim(C_v^*)$ (forney, theorem 5): for any $v \in V$, [...], where $n$ denotes the length of the code. the upper bound can be improved slightly. in the minimal tree realization $M(C; T, \omega)$, we have, for $v \in V$ and $e' \in E(v)$, [...] [dimcv*_bnd]. the upper bound may be proved as follows. since $\dim(C_{:J}) = \dim(C) - \dim(C|_{\bar{J}})$ for any $J \subseteq I$, we may write ([dimcv*]) as
$$\dim(C_v^*) \;=\; \big(1 - |E(v)|\big)\dim(C) + \sum_{e \in E(v)} \dim(C|_{\bar{J}_v(e)}).$$
now, let $e' \in E(v)$ be fixed. we have [...]. however, as can be seen from figure [vpersp_fig], $\bar{J}_v(e')$ is the disjoint union of $\omega^{-1}(v)$ and the sets $J_v(e)$, $e \in E(v) \setminus \{e'\}$.
therefore, [...], and hence, [...]; the lemma now follows from ([dimse*_alt]). as already mentioned, among all tree realizations of $C$ extending $(T, \omega)$, the minimal realization $M(C; T, \omega)$ minimizes the state space dimension at each edge of the tree. it is natural to ask whether $M(C; T, \omega)$ also minimizes the local constraint code dimension at each vertex of $T$. we will show in the next section that $M(C; T, \omega)$ does in fact have the following property:

_if $(T, \omega, \{S_e\}, \{C_v\})$ is a tree realization of $C$ that extends $(T, \omega)$, then for all $v \in V$, $\dim(C_v) \geq \dim(C_v^*)$._

we will deduce this fact from an alternative construction of $M(C; T, \omega)$ that we present next. the construction we describe in this section takes an arbitrary tree realization $\Gamma$ that extends the tree decomposition $(T, \omega)$ (for example, the trivial extension given in example [trivial_example]) and, via a sequence of transformations, converts $\Gamma$ to $M(C; T, \omega)$. these transformations constitute a natural generalization of the state-merging process used in the context of minimal trellis realizations; see, for example, (section 4 of [...]). it would be useful to keep this special case in mind while going through the details of the description that follows. let $\Gamma$ be an essential tree realization of a code $C$ with index set $I$ (the assumption of essentiality entails no loss of generality; see theorem [merge_theorem]), and let $B$ be the full behavior of $\Gamma$. as $\Gamma$ is essential, we have that $S_e = B|_e$ for all $e \in E$ (by definition), and $C_v = B|_v$ for all $v \in V$ (by lemma [ess_lemma]). pick an arbitrary edge $e \in E$, and for ease of notation, set $J = J(e)$ and $\bar{J} = \bar{J}(e)$. let $B_0$ be the subspace of $S_e$ defined by [...]. we will define a new tree model $\bar{\Gamma}$, such that states in the same coset of $B_0$ in $S_e$ are represented by a single "merged" state in $\bar{\Gamma}$. let $\mu$ be the mapping defined by [...]; define $\bar{S}_e = \mu(S_e)$. it is clear from the definitions that [...], and that [...]. consider now the tree model $\bar{\Gamma} = (T, \omega, \{\bar{S}_{e'}\}, \{\bar{C}_v\})$, where [...] for each $e' \in E$, and [...] for each $v \in V$. note that [...], and for $e' \neq e$, we have $\bar{S}_{e'} = S_{e'}$. all states in $S_e$ belonging to the same coset of $B_0$, say $s + B_0$, are mapped to (or merged into) the single state $\mu(s)$ in $\bar{S}_e$. further note that if $v$ is not one of the two vertices incident with $e$, then $\bar{C}_v = C_v$. at the two vertices that are incident with $e$, the local constraints are appropriately modified to take into account the state-merging at edge $e$. in any case, we have
$$\dim(\bar{S}_{e'}) \leq \dim(S_{e'}), \quad e' \in E, \qquad \text{[dimse_ineq]}$$
and
$$\dim(\bar{C}_v) \leq \dim(C_v), \quad v \in V. \qquad \text{[dimcv_ineq]}$$
we claim that $\bar{\Gamma}$ is an essential tree realization of $C$. to prove this claim, we must show that $\bar{S}_{e'} = \bar{B}|_{e'}$ for all $e' \in E$, and that $\bar{B}|_I = C$, where $\bar{B}$ denotes the full behavior of $\bar{\Gamma}$. note that we do _not_ claim that [...]; indeed, this may not be true. it is easy to see that the inclusion [...] holds. indeed, since [...], it is evident that any [...] satisfies all the local constraints of $\bar{\Gamma}$, and hence is in $\bar{B}$.
therefore , , and in particular , .the reverse inclusion , , follows from part ( a ) of the lemma below .let be a global configuration in .then , * ; and * if and only if . [ gbar_lemma1 ] we defer the proof of the lemma to appendix [ gbar_lemmas_app ] . lemma [ gbar_lemma1](a ) shows that , thus proving that is a tree realization of .it remains to show that is essential , _i.e. _ , that for all .this is shown by the following simple argument .we have already seen that , and hence , for all . on the other hand , at any , is , by definition , a subspace of .hence , is essential , thus proving our original claim , which we record in the following proposition .the tree model is an essential tree realization of .[ gbar_prop ] let us call the process described above of obtaining from as the _ state - merging process at edge . we use the notation to denote this transformation .our goal now is to show that , starting from an essential tree realization , if we apply the state - merging process at each edge of the underlying tree , then we always end up with a minimal realization .a proof of this assertion requires the following technical lemma , whose proof we also defer to appendix [ gbar_lemmas_app ] .suppose that there exists such that the full behavior , , of satisfies the following property : for , we have if and only if .then , for any , we also have if and only if .[ gbar_lemma2 ] we are now in a position to prove the main result of this section , which provides a construction of via state merging .let be a tree realization of that extends the tree decomposition .let be a listing of the edges of .set , and for , set .then , is the minimal tree realization .[ merge_theorem ]let denote the full behavior of ( and hence , also of ) , and for , let denote the full behavior of . by proposition [ gbar_prop ] , each is an essential tree realization of . by lemma [ gbar_lemma1](b ) , for any , we have if and only if .furthermore , by lemma [ gbar_lemma2 ] , for any , satisfies the following property : _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ for any , we have if and only if . 
_for any [...], we have [...] if and only if [...]._

in particular, the final realization $\Gamma_{|E|}$ in the sequence satisfies the following property for every $e \in E$:

_for any [...], we have [...] if and only if [...]._

let us call the above property (p). property (p) has two important consequences. firstly, it implies that if [...], then [...] for all [...]. this means that the projection [...] defined by [...] is in fact an isomorphism. for the second consequence of (p), consider, for any $e \in E$, the homomorphism [...] defined by [...]. this map is well-defined since [...] is an isomorphism. property (p) is equivalent to the assertion that, for any $e \in E$, the kernel of [...] is precisely [...]. therefore, [...]. thus, for each $e \in E$, the state space of $\Gamma_{|E|}$ at $e$ is isomorphic to $S_e^*$ defined in ([se*_def]), and the associated map is the canonical projection map $\sigma_e^*$ given by ([se*_def]). it easily follows that for each $v \in V$, the local constraint of $\Gamma_{|E|}$ at $v$ is isomorphic to $C_v^*$ defined in ([cv*_def]). hence, $\Gamma_{|E|}$ is the minimal realization $M(C; T, \omega)$. observe that at each step of the procedure outlined in theorem [merge_theorem], the dimensions of the state spaces and the local constraints do not increase. to make this precise, given tree models $\Gamma$ and $\Gamma'$, let us say that $\Gamma' \preceq \Gamma$ if $\dim(S'_e) \leq \dim(S_e)$ for all $e \in E$, and $\dim(C'_v) \leq \dim(C_v)$ for all $v \in V$. then, for $\Gamma_0, \Gamma_1, \ldots, \Gamma_{|E|}$ as in the statement of theorem [merge_theorem], we have, by virtue of ([dimse_ineq]) and ([dimcv_ineq]), $\Gamma_{|E|} \preceq \cdots \preceq \Gamma_1 \preceq \Gamma_0$. thus, we have that if $\Gamma$ is any tree realization of $C$ that extends the tree decomposition $(T, \omega)$, then $M(C; T, \omega) \preceq \Gamma$. we record this strong property of minimal realizations as a corollary to theorem [merge_theorem]. let $(T, \omega)$ be a tree decomposition of the index set of a code $C$, and let $M(C; T, \omega)$ be the corresponding minimal tree realization of $C$. then, for any tree realization, $\Gamma$, of $C$ that extends $(T, \omega)$, we have $\dim(S_e^*) \leq \dim(S_e)$ for all $e \in E$, and $\dim(C_v^*) \leq \dim(C_v)$ for all $v \in V$. [dimcv_cor] the procedure outlined in theorem [merge_theorem] does not translate to an efficient algorithm for the construction of $M(C; T, \omega)$. this is because the state-merging procedure that creates each realization in the sequence from its predecessor requires knowledge of the full behavior of the predecessor, which may not be easily determined. so, as a practical method for constructing $M(C; T, \omega)$, given $C$ and $(T, \omega)$, we propose a novel construction that relies upon the code decomposition techniques of the next section. in previous work, it was demonstrated that techniques from the decomposition theory of matroids could be put to good use in a coding-theoretic setting. the decomposition theory in that work was presented in the context of binary linear codes.
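expression ([dimse*_alt]) makes the minimal state-space profile of $M(C; T, \omega)$ directly computable from generator-matrix ranks, since $\dim(C|_J)$ is the rank over gf(2) of the columns indexed by $J$. the sketch below is our illustration (the code and coordinate order are arbitrary choices); it prints $\dim(S_e^*)$ across the cuts of a simple path, i.e. the state profile of the minimal conventional trellis for that order.

```python
import numpy as np

def gf2_rank(M):
    """rank of a 0/1 matrix over gf(2), by elementary row operations."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] + M[r]) % 2
        r += 1
    return r

G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])     # the [7,4] hamming code again
n, k = G.shape[1], gf2_rank(G)
for cut in range(1, n):                   # edge between positions cut-1 and cut
    J, Jbar = list(range(cut)), list(range(cut, n))
    s = gf2_rank(G[:, J]) + gf2_rank(G[:, Jbar]) - k      # (dimse*_alt)
    print(f"cut {cut}: dim S_e* = {s}")   # profile 1, 2, 3, 3, 2, 1
```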
as we will now show, the basic elements of that theory can be easily extended to cover the case of nonbinary codes as well .the object of this exercise is not just to create a more general code decomposition theory , but as we will see in the next section , this decomposition theory ties in very nicely with the theory of tree realizations .let and be linear codes over the finite field , defined on the index sets and , respectively .let denote the symmetric difference , , of the index sets .we will construct a code with as its index set . for and , let be defined by setting , we see that has as its index set .we take to be the cross - section .note that when , we have . for ,let and denote the projection and the cross - section , respectively .the codes and , for , all have as their index set .the dimension of can be expressed in terms of the codes , and , , as stated in the following lemma . for codes , we have where .[ dim_sum_prop ] for a code , and a subset of its index set , the kernel of the projection map is isomorphic to , and hence , .thus , taking , and , we find that since .so , we must show that .let be a copy of defined on an index set that is disjoint from .for each , denote by its copy in .consider the homomorphism defined by .note that iff and .equivalently , iff , , and .it follows that the kernel of is isomorphic to which is simply .hence , , as desired .we will restrict our attention to a particular instance of the construction , in which we require that the codes and , , take on a specific form .we need to introduce some notation first . for each positive integer ,set , and fix an matrix , which we denote by , over , with the property that each pair of columns of is linearly independent over .note that is a parity - check matrix for an ] code over _ generated _ by .the code is sometimes referred to as a _ simplex code_. we take a moment to record an important property of the matrix that we will use later .the column vectors of form a maximal subset of with the property that each pair of vectors from the subset is linearly independent over .this is due to the fact that the number of distinct one - dimensional subspaces of is precisely .therefore , any ( column ) vector in is a scalar multiple of some column of . given an , suppose that the codes and , defined on the index sets and , respectively , are such that , and for , we have and . in such a case , is called the _ -sum _ of and , and is denoted by .it is convenient to extend this definition to the case of as well : when , the -sum is defined to be the direct sum .consider the case of codes defined over the binary field .note that .suppose that , and that the coordinates of are are ordered so that the index common to and corresponds to the last coordinate of and the first coordinate of .the conditions necessary for the 1-sum to be defined can then be stated as * is not a codeword of , and the last coordinate of is not identically zero ; * is not a codeword of , and the first coordinate of is not identically zero .the composite code resulting from that satisfy ( p1 ) , ( p2 ) above was studied in , where it was actually called a `` 2-sum '' .we would also like to point out that the specialization of our -sum operation to the case was called `` -sum '' in .- sum operations defined in imposed additional conditions on the lengths of the codes involved in the sum , which we have dropped here . 
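for $q = 2$, a matrix with the required column property is easy to exhibit: take the columns of $h_r$ to be all $2^r - 1$ nonzero binary $r$-tuples, which gives a parity-check matrix of the $[2^r - 1,\, 2^r - 1 - r]$ hamming code (and a generator matrix of the corresponding simplex code). the sketch below (the function name is ours) builds it and verifies pairwise linear independence, which over gf(2) amounts to the columns being nonzero and distinct.

```python
import itertools
import numpy as np

def simplex_check_matrix(r):
    """r x (2^r - 1) matrix whose columns are all nonzero binary r-tuples."""
    cols = [c for c in itertools.product((0, 1), repeat=r) if any(c)]
    return np.array(cols, dtype=int).T

H3 = simplex_check_matrix(3)          # 3 x 7: parity checks of the [7,4] code
for i, j in itertools.combinations(range(H3.shape[1]), 2):
    # over gf(2), two nonzero columns are dependent iff they are equal
    assert not np.array_equal(H3[:, i], H3[:, j])
print(H3)
```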
] to add to the confusion , there was in fact an operation called `` 3-sum '' defined in , but that , in a certain sense , dualizes the 2-sum operation we have given in this paper . for , note that if and ( ) are in the form needed to define an -sum , then , and .therefore , as a corollary to proposition [ dim_sum_prop ] , we have the following result ( which also applies trivially to the case ) . for ,if , are such that can be defined , then [ dim_rsum_cor ] an elementary property of direct sums ( _ i.e. _ , 0-sums ) is that a code is expressible as a direct sum of smaller codes if and only if there exists a partition of the index set of such that .this property extends beautifully to -sums in general .let be a linear code over , defined on the index set , and let be a positive integer .then , the following statements are equivalent .* for some codes , .* there exists a partition of , with , such that [ rsum_theorem ] : see appendix [ rsum_app ] .+ : we give here a complete proof of this direction of the theorem , as it gives an explicit construction of codes , such that , given a partition as in ( b ) .the proof generalizes ideas from similar constructions presented in .let be a partition of such that .set and , and let be a generator matrix for . without loss of generality , we may assume that the columns of are ordered so that the first columns are indexed by the elements of , and the rest by the elements of .in the following exposition , we will often permute the columns of to bring the matrix into some desired form .whenever this is the case , it will be tacitly assumed that column indices migrate with the columns .let and denote the restrictions of to the columns indexed by the elements of and , respectively ; thus , ] must have rank equal to implies that must have rank .hence , has linearly independent rows , call them , which form a basis of the row - space of . permuting the first rows of if necessary, we may assume that constitute the first rows of .( permuting these rows of will also permute the rows of the matrix , but the effects of this can be negated by appropriately permuting the first columns of . )any row of is uniquely expressible as a linear combination ( over ) of . in particular , for , the row of can be uniquely expressed as for some .let us denote by , the rows of the generator matrix , , of the code .let be the matrix such that for , the row of equals , where the s are such that the row of is .thus , the row - space of is the span of , _i.e. _ , it is the code . to the columns of , we assign indices from some set disjoint from .now , define the matrix g_1 = , [ g1_def ] allowing the submatrix ] to retain its column indices from .thus , the index set of the columns of is , while that of the columns of is .finally , for , let denote the code over generated by .the following facts about and may be verified : * , .* can be defined , so that by corollary [ dim_rsum_cor ] , . 
*all rows of are in .since generates the same code as ( recall that column indices get permuted along with columns ) , we see that contains all the codewords of .we leave the details of the routine verification of the above facts to the reader .it only remains to point out that facts ( ii ) and ( iii ) above show that , thus completing the proof of the implication ( b ) ( a ) .the procedure described in the above proof can be formalized into an algorithm that takes as input a generator matrix ( over ) for , and a partition of the index set of , and produces as output generator matrices of two codes and ( and their associated index sets ) such that , where .the run - time complexity of this procedure is determined by the following : * an rref computation to find as in ( [ rref_eq ] ) ; this can be carried out in time , which is the run - time complexity of bringing a matrix to reduced row - echelon form via elementary row operations ; * the computations required to identify a basis ( ) of the row - space of the matrix , and correspondingly the coefficients ; this could be done by computing the rref of , which would also take time ; * the computations needed to determine the matrix ; each row of the matrix requires computations , and there are rows , so the computation of takes time .therefore , the entire procedure can be carried out in time .it is worth noting that the run - time complexity of the procedure is polynomial in , and , but exponential in .the above procedure for determining an -rum decomposition of a given code forms the basis of a new construction of minimal tree realizations that we present in this section .the key observation behind this construction is that if a code has a partition of its index set such that , then has an essential tree realization of the form depicted in figure [ rsum_fig ] .the tree in the figure consists of a single edge , the state space is the code , and the local constraint codes at the two vertices are the codes and such that . in fact, this is the minimal realization , for the tree consisting of the single edge , and the index map such that and .this is simply because , so by virtue of ( [ dimse*_alt ] ) , has the same dimension as the state space in the minimal realization .so , by the uniqueness of minimal tree realizations , the tree realization depicted in figure [ rsum_fig ] is . to summarize ,if is a code defined on the index set , and is a tree decomposition of such that consists of the single edge , then we may construct as follows .set and , and compute . assign an index set that is disjoint from to the code .use the procedure in the proof of theorem [ rsum_theorem ] to determine codes and , defined on the respective index sets and , such that . for , assign to be the local constraint code at vertex , and assign to be the state space at edge .the resulting tree model is the minimal tree realization . before describing how the construction may be extended to the case of trees with more than one edge, we deal with the trivial case of trees without any edges .if is a tree consisting of a single vertex , and no edges , then given any code defined on some index set , there is only one way of realizing on .this is the realization , where is the unique mapping , and is the code itself .of course , this is also the minimal realization . 
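the workhorse of the decomposition procedure is the rref computation; everything else is bookkeeping on the resulting pivot structure. a minimal gf(2) version is sketched below (the restriction to $q = 2$ and the function name are our choices); row reduction of a $k \times n$ matrix by elementary row operations costs $O(nk^2)$ field operations for $k \leq n$, consistent with the estimate used above.

```python
import numpy as np

def gf2_rref(M):
    """reduced row-echelon form over gf(2); returns (rref, pivot columns)."""
    R = M.copy() % 2
    pivots, r = [], 0
    for c in range(R.shape[1]):
        piv = next((i for i in range(r, R.shape[0]) if R[i, c]), None)
        if piv is None:
            continue                      # no pivot in this column
        R[[r, piv]] = R[[piv, r]]         # move the pivot row into place
        for i in range(R.shape[0]):
            if i != r and R[i, c]:
                R[i] = (R[i] + R[r]) % 2  # clear the rest of the column
        pivots.append(c)
        r += 1
    return R, pivots

G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
R, piv = gf2_rref(G)
print(R)
print("pivot columns:", piv)
```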
at this point , we know how to construct , for any code , and any tree decomposition such that has at most one edge .from this , we can recursively construct for any and any , as we now describe .suppose that we know how to construct for any , and any such that has at most edges , for some integer .let be a code defined on the index set , and let be a tree decomposition such that . pick any , and as usual , let and be the two components of .we will assume that and .let and .compute r = ( |_j ( ) ) + ( | _ ( ) ) - ( ) , [ r_def ] which determines the code .assign an index set that is disjoint from . use the procedure in the proof of theorem [ rsum_theorem ] to determine codes and , defined on the respective index sets and , such that .now , define the index maps and as follows ( see figure [ rec_constr_fig ] ) : thus , and are tree decompositions of the index sets of and , respectively . as neither nor contains the edge , we have and .therefore , by our assumption , we know how to construct and .let finally , set , where _ e = _ e^(1 ) , & e e(t _ ) + _ r , & e = + _ e^(2 ) , & e e ( _ ) , [ se_def ] and c_v = c_v^(1 ) , & v v(t _ ) + c_v^(2 ) , & v v ( _ ) .[ cv_def ] figure [ gamma_star ] contains a depiction of .it is easy to see that is a tree realization of .indeed , is a realization of , and is a realization of , and hence ( as should be clear from figure [ gamma_star ] ) , is a realization of .it is not immediately obvious that is actually , but this is in fact true , as stated in the following proposition , a proof of which is given in appendix [ rec_constr_app ] . is the minimal tree realization .[ rec_constr_prop ] in summary , we have the following recursive procedure for constructing , given a code and a tree decomposition .+ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ * procedure * ` min_realzn`( ) + _ input _ : a generator matrix for a code , and a tree decomposition of the index set of .+ _ output _ : a specification of the state spaces and the local constraints in the minimal realization . +* step m1*. if consists of a single vertex , then return . +* step m2*. if contains at least one edge , then choose an .let be the vertex of incident with , and let be the vertex of incident with .+ ( m2.1 ) compute .+ ( m2.2 ) determine , and assign it an index set disjoint from .+ ( m2.3 ) determine codes and , with index sets and , respectively , such that .+ ( m2.4 ) determine the index maps and as in ( [ omega1_def ] ) and ( [ omega2_def ] ) .+ * step m3*. determine by calling ` min_realzn`( ) ; determine by calling ` min_realzn`( ) .we may assume that and are in the form given in ( [ minc1_form ] ) and ( [ minc2_form ] ) .+ * step m4*. 
return $(T, \omega, \{S_e\}, \{C_v\})$, where $\{S_e\}$ and $\{C_v\}$ are as defined in ([se_def]) and ([cv_def]).

a simplified version of the above procedure may be obtained by choosing, in step m2, the edge to be an edge incident with a leaf of $T$. then, one of the two components of the tree minus that edge consists of a single vertex, so that one of the two recursive calls to `min_realzn` may be avoided, as it would simply return the single-vertex realization. we will use this modification of the procedure to give an estimate of its run-time complexity. let $n$ denote the length of $C$, let $k = \dim(C)$, and let [...]. also, define
$$r_{\max} \;=\; \max_{e \in E}\, \big\{ \dim(C|_{J(e)}) + \dim(C|_{\bar{J}(e)}) - \dim(C) \big\}. \qquad \text{[rmax_eq]}$$
observe that, as a result of the modification suggested above, in the determination of $M(C; T, \omega)$, the procedure `min_realzn` gets called $|E|$ times, once for each edge. the run-time complexity of any particular run of `min_realzn` is determined by the computations in step m2. in the $i$th run, the procedure acts upon some code of length $n_i$ and dimension $k_i$, and in step m2, it computes an $r_i$, a code [...] with index set [...], and a code [...]. via lemma [appd_lemma], we have that [...]. we bound $n_i$ and $k_i$ as follows. note that [...] is the code that the $i$th run of the procedure takes as input. thus, we have [...], and [...]. now, by the estimate given in section [decomp_section] of the run-time complexity of the $r$-sum decomposition procedure, we see that the $i$th run of step m2 of `min_realzn` takes [...] time. hence the overall run-time complexity of `min_realzn` may be estimated to be [...]. this expression can be simplified by observing that it reduces to
$$\left(k^2 + r_{\max}\, q^{r_{\max}}\right)\left[\, n|E| \;+\; \tfrac{1}{2}\,|E|\,(|E|-1)\,q^{r_{\max}} \right].$$
it follows that `min_realzn` runs in time $O\!\big( (k^2 + r_{\max}\, q^{r_{\max}})\,(n|E| + |E|^2\, q^{r_{\max}}) \big)$. note that this is polynomial in $n$, $k$, and $|E|$, but exponential in $r_{\max}$. as observed in [...], any graphical realization of a code specifies an associated decoding algorithm, namely, the sum-product algorithm. the sum-product algorithm specified by a tree realization, $\Gamma$, of a code $C$ provides an exact implementation of ml decoding for $C$.
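as a concrete sanity check of the quantities entering the estimate, $r_{\max}$ from ([rmax_eq]) can be computed for the hamming example used earlier; the snippet below reuses `gf2_rank` and the generator `G` from the state-profile sketch above (so it is not self-contained), and evaluates the bracketed expression from the complexity bound.

```python
# reuses gf2_rank and the [7,4] hamming generator G from the earlier sketch
n, k, q = 7, 4, 2
cuts = [(list(range(c)), list(range(c, n))) for c in range(1, n)]
r_max = max(gf2_rank(G[:, J]) + gf2_rank(G[:, Jb]) - k for J, Jb in cuts)
E = len(cuts)                            # a path on n vertices has n - 1 edges
estimate = (k**2 + r_max * q**r_max) * (n * E + 0.5 * E * (E - 1) * q**r_max)
print(r_max, estimate)                   # r_max = 3 for this coordinate order
```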
a reasonable initial estimate of the computational complexity of the sum - product algorithm on is provided by the _ constraint complexity _ of , which is defined as .as implied by corollary [ dimcv_cor ] , given a tree decomposition of the index set of , the minimal realization has the least constraint complexity among all tree realizations of that extend .let denote the constraint complexity of .note that , by ( [ dimcv * ] ) , ( ; t , ) = _ v v ( ( ) - _ e e(v ) ( _ j(e ) ) ) .[ k_ct_omega ] thus , is a measure of the complexity of implementing ml decoding for as a sum - product algorithm on .let us now define the _ treewidth _ of the code to be ( ) = _ ( t , ) ( ; t , ) , [ treewidth_def_0 ] where the minimum is taken over all tree decompositions of the index set of .the treewidth of a code is an indicator of how small the computational complexity of an ml decoding algorithm for can be .the notion of treewidth ( _ i.e. , _ minimal constraint complexity ) of a code was first introduced by forney . a related notion , called minimal tree complexity ,was defined and studied by halford and chugg .treewidth , as defined in ( [ treewidth_def_0 ] ) , is an upper bound on the minimal tree complexity measure of halford and chugg .a tree is called _ cubic _ if all its internal nodes have degree 3 .forney showed that the minimum in ( [ treewidth_def_0 ] ) is always achieved by a tree decomposition in which is a cubic tree , and is a bijection may be taken to be such that is a cubic tree and is a surjective map onto the leaves of .however , the symbol - splitting argument in section v.f of his paper actually implies that in the minimizing tree decomposition may be taken to be one - to - one as well . ] between the index set of and the set of leaves of .let denote the set of all tree decompositions in which is cubic and maps the index set of bijectively onto the set of leaves of .we may then re - write ( [ treewidth_def_0 ] ) as ( ) = _ ( t , ) ( ) ( ; t , ) .[ treewidth_def ] an alternate measure of code complexity may be obtained from the notion of _ state complexity _ of a tree realization , which is the largest dimension of a state space in .thus , by virtue of ( [ dimse * ] ) and ( [ dimse*_alt ] ) , the state complexity of a minimal realization is given by we then define , in analogy with ( [ treewidth_def ] ) , ( ) = _ ( t , ) ( ) ( ; t , ) .[ branchwidth_def ] note that the minimum in the above definition is taken over tree decompositions in only .it must be emphasized that , as defined in ( [ branchwidth_def ] ) , need _ not _ be the same as the least over all tree decompositions of the index set of .a notion analogous to is known as branchwidth in the matroid theory literature ; see _ e.g. _ , . in keeping with that nomenclature ,we will call the _ branchwidth _ of the code .branchwidth and treewidth are very closely related , as shown by the following result , which can be obtained in a straightforward manner from the bounds in lemma [ dimcv*_bnd ] . given a code , if , then hence , .[ bw_tw_bnds ] the notions of state and constraint complexity have been studied extensively in the context of conventional trellis realizations of a code ; see _ e.g. _ , .recall from example [ trellis_example ] that a conventional trellis realization of a code is a tree realization that extends a tree decomposition in which is a simple path and is a bijection between the index set of and the vertices of .this special case of a tree decomposition is referred to as a _ path decomposition_. 
specifically , a path decomposition of a code defined on the index set is a pair , where is a simple path on vertices , and is a bijection .let denote the set of all path decompositions of .we then define _( ) = _ ( t , ) ( ) ( ; t , ) [ k_trellis_def ] and _ ( ) = _ ( t , ) ( ) ( ; t , ) . [ s_trellis_def ] it is well - known , and indeed readily follows from lemma [ dimcv*_bnd ] , that .it is clear from ( [ treewidth_def_0 ] ) and ( [ k_trellis_def ] ) that .forney asked the question of whether could be significantly smaller than .he conjectured that either for all codes , or is unbounded .we show here that it is in fact the latter that is true .to do so , we need to introduce some new concepts .in their fundamental work on graph minors , robertson and seymour introduced two notions of complexity of graphs , namely , treewidth and pathwidth .these notions have proved to be invaluable tools with many applications in graph theory and theoretical computer science .an overview of such applications can be found , for example , in .we will define the notions of treewidth and pathwidth of a graph in this subsection , and subsequently , relate them to the complexity measures and defined above for codes .let be a graph with vertex set and edge set .the graph may contain self - loops and parallel edges .a _ tree decomposition _ of is a pair , where is a tree , and is a mapping that satisfies the following : * ; * for each pair of adjacent vertices , we have for some ; and * for each pair of vertices , if is any vertex on the unique path between and , then .it may be helpful to point out that ( t3 ) above is equivalent to the following : * for each , the subgraph of induced by is a ( connected ) subtree of . a reader familiar with the notion of `` junction trees '' ( see _ e.g. _ , ) will recognize a tree decomposition of to be a junction tree .the _ width _ of a tree decomposition as above is defined to be .the _ treewidth _ of , which we denote by , is the minimum among the widths of all its tree decompositions .note that if has at least one edge , then , because of ( t2 ) , any tree decomposition of must have width at least one .thus , for any graph with , we have . for any tree with at least two vertices , we have .this can be seen as follows .fix a vertex .define a mapping as follows : , and for , , where is the first edge on the unique path from to .it is easily verified that is a tree decomposition of .since this tree decomposition has width one , it follows that .[ treewidth_example1 ] if is a tree decomposition in which is a simple path , then is called a _path decomposition_. 
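for toy lengths, the minima in ([k_trellis_def]) and ([s_trellis_def]) can be evaluated by exhaustive search over coordinate orders, with each cut dimension obtained from ([dimse*_alt]). the sketch below is ours (it again reuses `gf2_rank` and `G`, and exhaustive search is only feasible for very short codes); it computes the minimal trellis state complexity of the $[7,4]$ hamming code, well known to equal 3.

```python
import itertools
# reuses gf2_rank and the hamming generator G from the state-profile sketch

def max_cut_dim(G, order, k, n):
    """largest minimal state dimension over the cuts of one coordinate order."""
    return max(gf2_rank(G[:, list(order[:c])]) +
               gf2_rank(G[:, list(order[c:])]) - k
               for c in range(1, n))

n, k = 7, 4
s_trellis = min(max_cut_dim(G, p, k, n)
                for p in itertools.permutations(range(n)))
print("s_trellis =", s_trellis)          # 3 for the [7,4] hamming code
```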
the minimum among the widths of all the path decompositions of is called the _ pathwidth _ of , which we denote by .it is evident that .analogous to the situation of example [ treewidth_example1 ] , a simple path has pathwidth one .however , trees may have arbitrarily large pathwidth .the following example is due to robertson and seymour .let be the complete bipartite graph .for , we inductively define by taking a copy of , and to each leaf of this graph , adding two new vertices adjacent to .figure [ y_trees ] shows the trees , and .the pathwidth of , , is .[ treewidth_example2 ] thus , for trees , the difference can be arbitrarily large .we will use this fact to construct codes for which is arbitrarily large .we remark that the problem of determining the treewidth or pathwidth of a graph is known to be np - hard , .as we will see a little later , this implies that the problem of determining the treewidth of a code , or its trellis counterpart , is also np - hard .let be an arbitrary finite field . to any given graph , we will associate a code ] is defined to be the linear code over generated by the matrix . when is the binary field , the code ] is due to hlinn and whittle .if is a graph with at least one edge , then ) ] to the branchwidth of the graph , the latter being a notion we have not defined in this paper .unfortunately , it is not true that ) ] over the binary field , for an arbitrary tree .it is not hard to see that = \{0,1\}^{|e(t)|} ] .but as we have already noted , trees can have arbitrarily large pathwidth .we get around this problem by means of a suitable transformation of graphs . given a graph ,let be a graph defined on the same vertex set as , having the following properties ( see figure [ gbar ] ) : * is loopless ; * a pair of distinct vertices is adjacent in iff it is adjacent in ; and * in , there are exactly two edges between each pair of adjacent vertices .define to be the graph obtained by adding an extra vertex , , to , along with a pair of parallel edges from to each ( see figure [ gbar ] ) .it is easy to see that is constructible directly from in time .the following result was used in to show that the problem of determining for an arbitrary code ( over any fixed finite field ) is np - hard .if is the graph constructed from a given graph as described above , then ) = \k_\path(\cg ) + 1 ] , where refers to the graph obtained from by the transformation depicted in figure [ gbar ] .note that , since , as shown in example [ treewidth_example1 ] .thus , on the one hand , from theorem [ treewidth_theorem ] , we have . and on the other hand , from ( [ k_trellis_bnd ] ) and example [ treewidth_example2 ], we have . using standard facts known about the incidence matrix for a graph ( see , for example , ( * ? ? ?* chapter 5 ) ) , it may be verified that the codes , , constructed in the above proof are ] code , and a such that , the minimal realization may be constructed in time . note that appears in the exponent of the constant implicit in the -notation .this leaves us with the problem of finding , for a given code , a tree decomposition such that .unfortunately , there appears to be no efficient algorithm known for solving this problem .however , reasonably good algorithms do exist for solving a closely related problem : given a code , find a tree decomposition such that .several polynomial - time algorithms for solving this problem are given in , the most efficient of these being an algorithm that runs in time - notation depends exponentially on .] 
Now, by Proposition [bw_tw_bnds], any such decomposition is also a tree decomposition in the sense required here, and furthermore its width exceeds the treewidth by at most a constant factor. Therefore, these algorithms find, for a given code, a tree decomposition whose width is within a constant factor of optimal. This is sufficient for our purposes, as the computational complexity of the sum-product algorithm on the resulting realization would still be O(n), except that the constant in the O-notation would now be proportional to the larger width.

While code families of bounded treewidth have the desirable property of having linear decoding complexity, it is very likely that they do not have good error-correcting properties. We give an argument to support the plausibility of this statement. Recall from coding theory that a code family is called asymptotically good if there exists a sequence of [n_i, k_i, d_i] codes in the family, with n_i → ∞, for which both the rate k_i/n_i and the relative distance d_i/n_i are bounded away from zero. For any [n, k, d] code, s_trellis(C) is bounded below by a quantity proportional to (d − 1). [s_trellis_bnd] Since s_trellis(C) ≤ κ_trellis(C), the above is also a lower bound on κ_trellis(C). Now, suppose that C_i, i ≥ 1, is a sequence of [n_i, k_i, d_i] codes of bounded treewidth. We conjecture that the limits of k_i/n_i and d_i/n_i cannot then both be strictly positive; in particular, for any fixed bound on the treewidth, the code family is not asymptotically good. [conj2]

We wrap up our discussion on complexity measures for codes by elaborating on a comment we made at the beginning of this subsection, in which we implied that hard coding-theoretic problems often become polynomial-time solvable when restricted to codes of bounded complexity. We saw earlier several examples of algorithms that, given a code, solve some problem in time polynomial in the length of the code. In each of these cases, the computational complexity of the algorithm displayed an exponential dependence on the complexity parameter. But since that parameter was a fixed constant, this exponential dependence could be absorbed into the constant hidden in the "big-O" estimate of the complexity. Thus, fixing the parameter allowed a potentially intractable coding-theoretic problem to become tractable. Problems that may be hard in general, but which become solvable in polynomial time when one of the parameters of the problem is fixed, are called fixed-parameter tractable. We noted previously that the problems of computing the treewidth and branchwidth of a code are NP-hard. It should come as no surprise that these problems are in fact fixed-parameter tractable. Hliněný gives an algorithm that, for a fixed integer w, determines in time polynomial in n whether or not a given length-n code has branchwidth at most w. From this, one can also prove the existence of an algorithm for deciding whether a given length-n code has treewidth at most w.

Perhaps the most significant problem that remains open in the context of minimal tree realizations of codes is the resolution of Conjecture [conj2], which proposes that codes of bounded treewidth cannot be asymptotically good. It may be possible to resolve this by deriving a lower bound on treewidth along the lines of the bound in ([s_trellis_bnd]). Another possibility is an inductive approach using code decompositions. However, an open problem of far greater significance is the development of a general theory of minimal realizations of codes on graphs with cycles. At present, such a theory only exists for the case of realizations of codes on graphs consisting of a single cycle, i.e., tail-biting trellis realizations. This simplest case of graphs with cycles is already more difficult to study than the cycle-free case: for example, there can be several non-equivalent definitions of minimality in the context of tail-biting trellis realizations. The challenge posed by graphs with more complex cycle structures can only be greater.

_Proof of Lemma [b|e_lemma]_.
consider an arbitrary .an arbitrary global configuration may be written in the form . now , suppose that is such that , _i.e. _ , .observe that the global configurations also satisfy all local constraints ( since for each ) , and hence are in .therefore , , and so by definition of , we have . similarly , , so that . hence , .+ _ proof of lemma [ ess_lemma]_. for any tree model ( essential or not ) , we have , by definition , for all .so we need only show the reverse inclusion in the case when is an essential tree model .pick an arbitrary .let be the edges of incident with .for , let denote the component of that does not include .set , and .we will write an arbitrary configuration as consider any , where , and for .as the tree model is essential , we have for all .in particular , , so that there exists such that .as is in , its `` sub - configuration '' satisfies the local constraints of at all vertices in .hence , satisfies the local constraints of at all vertices in .now , is the only vertex of that is not in .but , by construction , , and so , also satisfies the local constraint at .thus , satisfies all local constraints of , so that .hence , is in , which proves the lemma ._ proof of lemma [ gbar_lemma1]_. for simplicity of notation , let denote the edge set of the subtree , and let denote that of the subtree . note that . throughout this proof, we will write an arbitrary global configuration , belonging to or , in the form . consider any .let and be the two vertices incident with the edge in .we assume that and , as depicted in figure [ lemma1_pf_fig ] .we write the local configuration as , and as .suppose first that ; note that the zero element of ( ) is . by definition of , .hence , there exists , for some .now , ( being a `` sub - configuration '' of ) satisfies the local constraints of at all vertices in .but these local constraints are of the form which , for , is identical to .therefore , the sub - configuration satisfies the local constraints of at all vertices in .it follows that satisfies the local constraints of at all vertices in , including . by a similar argument, there exists a such that satisfies the local constraints of at all vertices in .now , by definition of , there exist and in , such that , , and .note , in particular , that the sub - configuration of satisfies the local constraints of at all vertices in .therefore , the global configuration satisfies the local constraints of at all vertices in , and hence is in the full behavior , , of .a similar argument shows that is also in .as is a vector space , it must also contain and since is a tree realization of , we have . in particular , , and similarly , . hence , and .however , and were chosen so that and . thus , we also have and .this finally yields , thus proving one direction of part ( b ) of the lemma. we will next show that if , then but .this will prove both part ( a ) and the reverse direction of part ( b ) .so , suppose that .thus , is some coset of in , but is not itself .pick some . as , there exists some such that .observe that , but since , .define , so that .furthermore , , and ( ) is in but not in .we have already noted ( prior to the statement of lemma [ gbar_lemma1 ] ) that .therefore , , and since is a vector space , .however , , and as we showed above , this implies that . since is in but not in , we find that , but .the proof of the lemma is now complete .+ _ proof of lemma [ gbar_lemma2]_. 
as is a tree realization of , lemma [ b|e_lemma ] shows that for any , we have only if .thus , we need only prove the converse .suppose that is such that , but .now , , the last equality being a consequence of the fact that . therefore , there exists a such that .note that , by the hypothesis of the lemma , .set , so that .observe that , and since , we also have .thus , and .but now , we have , with , and .this contradiction of lemma [ b|e_lemma ] proves that there exists no such that , but ._ proof of _ ( a ) _ _ ( b ) _ in theorem [ rsum_theorem]_. let for codes and defined on the index sets and , respectively . by definition , .set and , so that forms a partition of . in what follows, words defined on the index set will be written in the form ; words defined on the index set will be written in the form ; words defined on the index set will be written as ; and finally , words on the index set will be written as .we begin by proving that .this is accomplished by a two - step argument : we first show that , and then we show that . if , then there exists some , such that . however , as lies outside .hence , .conversely , suppose that . then , there exists such that .since , there exists .now , , and hence .thus , , which completes the proof of the fact that . now , to show that , let us consider the projection map defined by .this map is a homomorphism , with kernel isomorphic to , which is by definition .hence , is in fact an isomorphism , which proves that .we have thus shown that .a similar argument yields the fact that .hence , by corollary [ dim_rsum_cor ] .it remains to show that .note that since , and , we must have .therefore , . by a similar argument, we also have .the proof of proposition [ rec_constr_prop ] requires the following lemma , which presents a property of the codes and obtained via the -sum decomposition procedure of section [ decomp_section ] .let be a code defined on the index set , and let be a partition of , with .suppose that and are the codes , defined on the respective index sets and , that are obtained by the procedure described in the proof of theorem [ rsum_theorem ] .then , for any , and any , we have [ appd_lemma ] we use notation from the proof of the ( b ) ( a ) direction of theorem [ rsum_theorem ] .thus , , and are generated by the matrices , and given by ( [ rref_eq ] ) , ( [ g1_def ] ) and ( [ g2_def ] ) , respectively , which we reproduce here for the sake of convenience . , \\g_1 & = & \left[\begin{array}{ccc } i_{k_1 } & a & x \\ \end{array } \right ] , \\ g_2 & = & \left[\begin{array}{ccc } x & \o & b \\\o & i_{k - k_1 } & c \end{array}\right].\end{aligned}\ ] ] for any matrix , given a subset of the column indices of , we will denote by the restriction of to the columns indexed by .thus , ,\ \ \ \ { g_1|}_j = \left [ \begin{array}{cc } i_{k_1 } & a \end{array } \right],\ ] ] .\ ] ] our proof of the lemma uses only elementary linear algebra . we prove ( [ appd_lemma_eq1 ] ) first .consider any .it is clear that ] .now , , and hence , \right).\ ] ] thus , we have to show that \right ) = \rank\left(\left[\ { g_1|}_{j - j_1 } \ \ \ x\ \right]\right ) . 
] , and ( [ appd_lemma_eq1 ] ) follows .+ to show ( [ appd_lemma_eq2 ] ) , consider any .arguments similar to the ones above establish that ( |_j_2 ) = ( _ 2|_j_2 ) ( |_i - j_2 ) = k_1 + ( |_- j_2 ) .[ appd_eq1 ] now , consider .noting that , we see that the matrix has the form ,\ ] ] with , for some .since the columns of are contained in the column - space of , we can perform column operations on to bring it into the form .\ ] ] hence , ) \notag \\ & = & ( k_1+k_2-k ) + \rank\left({\left[i_{k - k_1}\ \ \ c\right]\bigm|}_{\oj - j_2}\right ) .\label{appd_eq2}\end{aligned}\ ] ] some trivial manipulations of ( [ appd_eq1 ] ) and ( [ appd_eq2 ] ) yield ( [ appd_lemma_eq2 ] ) , which proves the lemma ._ proof of proposition [ rec_constr_prop]_. recall that , where and are as defined in ( [ se_def ] ) and ( [ cv_def ] ) . to show that is the minimal realization , it is enough to show that for all , equals the expression in ( [ dimse*_alt ] ) , _ i.e. _ , ( _ e ) = ( |_j(e ) ) + ( |_|j(e ) ) - ( ) .[ appd_eq3 ] note that this is true when , since , and from ( [ r_def ] ) , we have .we must therefore show that ( [ appd_eq3 ] ) holds for .we will prove this for ; the proof for is similar .so , consider any .one of the two components , and , of is contained in .without loss of generality , we may assume that it is that is a subtree of , as depicted in figure [ subtree ] .hence , .now , by ( [ se_def ] ) , , the latter being the state space associated with in .therefore , by ( [ dimse*_alt ] ) , but , by lemma [ appd_lemma ] , the above expression is equal to the expression on the right - hand side of ( [ appd_eq3 ] ) .hence , ( [ appd_eq3 ] ) holds for any , and the proposition follows .bodlaender , `` dynamic programming on graphs of bounded treewidth , '' _ proc .15th international colloquium on automata , languages and programming _ , vol .317 , lecture notes in computer science , springer - verlag , pp .105118 , 1988 .
A tree decomposition of the coordinates of a code is a mapping from the coordinate set to the set of vertices of a tree. A tree decomposition can be extended to a tree realization, i.e., a cycle-free realization of the code on the underlying tree, by specifying a state space at each edge of the tree and a local constraint code at each vertex of the tree. The constraint complexity of a tree realization is the maximum dimension of any of its local constraint codes. A measure of the complexity of maximum-likelihood decoding for a code is its treewidth, which is the least constraint complexity of any of its tree realizations. It is known that among all tree realizations of a code that extend a given tree decomposition, there exists a unique minimal realization that minimizes the state space dimension at each vertex of the underlying tree. In this paper, we give two new constructions of these minimal realizations. As a by-product of the first construction, a generalization of the state-merging procedure for trellis realizations, we obtain the fact that the minimal tree realization also minimizes the local constraint code dimension at each vertex of the underlying tree. The second construction relies on certain code decomposition techniques that we develop. We further observe that the treewidth of a code is related to a measure of graph complexity, also called treewidth. We exploit this connection to resolve a conjecture of Forney's regarding the gap between the minimum trellis constraint complexity and the treewidth of a code. We present a family of codes for which this gap can be arbitrarily large.
Ordinal categorical data appear frequently in the biomedical research literature, for example in the analysis of independent binary random variables related to an increasing ordered categorical variable. It is well known that for such data it is not possible to use the classical test statistics, such as the chi-square or the likelihood ratio with chi-squared asymptotic distribution, but there exist appropriate order-restricted test statistics with chi-bar-squared asymptotic distribution. To illustrate this problem, a modification of an example given in Silvapulle and Sen (2005) is considered in this introductory section.

[Table tt1: numbers of congenital sex-organ malformations by category of maternal alcohol consumption.]

Table [tt1] contains a subset of data from a prospective study of maternal drinking and congenital malformations. Women completed a questionnaire, early in their pregnancy, concerning alcohol use in the first trimester; complete data and details are available elsewhere (Graubard and Korn, 1987). Specifically, women were asked what amount of alcohol they took during the first three months of their pregnancy, and four categories of drink doses are considered: no alcohol consumption; an average number of alcoholic drinks per day less than one but greater than zero; one or more and less than three alcoholic drinks per day; and three or more alcoholic drinks per day. In terms of a binary random variable, with the individuals in each category behaving independently with respect to having congenital malformations, the individuals not having congenital malformations are considered unsuccessful and the others successful. Let p_i be the probability of a success associated with the i-th alcohol dose. Let us consider some statistical inference questions that may arise in this example and in similar ones with binomial probabilities.

1. Is there any evidence of maternal alcohol consumption being related to malformation of sex organs? To answer this question, the null and alternative hypotheses may be formulated as H0: p_1 = ... = p_4 against H1: not all p_i equal, respectively. However, this formulation is unlikely to be appropriate, because the main issue of interest is the possible increase in the probability of malformation as alcohol consumption increases.

2. Is there any evidence that an increase in maternal alcohol consumption is associated with an increase in the probability of malformation? This question, as it stands, is too broad to give a precise formulation of the null and the alternative hypotheses. One possibility is to formulate the problem as testing H0: p_1 = ... = p_4 against the ordered alternative H1: p_1 ≤ ... ≤ p_4 with at least one strict inequality.

Consider an experiment with k increasing ordinal categories for a variable. Suppose that a prefixed number n_i of individuals is assigned to the i-th category, i = 1, ..., k. The individuals are followed over time for the development of an event of interest, and the events related to the individuals are independent. Let X_i be the random variable that represents the number of individuals related to successful events out of the total assigned to the i-th category; if we denote by p_i the probability of a success associated with the i-th category, then X_i is a binomial random variable with parameters n_i and p_i. Let Y_i denote the number of unsuccessful events associated with the i-th category, i.e., Y_i = n_i − X_i; the contingency table of a realization of (X_1, ..., X_k) then has one row per category, with the observed successes X_i and failures Y_i in its last two columns. Under the ordered alternative, the maximum-likelihood estimate of the proportions can be sketched as follows.
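Under the ordered alternative p_1 ≤ ... ≤ p_k, the maximum-likelihood estimate of the success probabilities is the isotonic regression of the sample proportions with weights n_i, computable by the pool-adjacent-violators algorithm. The sketch below is our own illustration of this standard step, not code from the paper, and the count vectors are placeholders rather than the actual entries of Table [tt1].

```python
import numpy as np

def pava_increasing(p_hat, weights):
    """Weighted pool-adjacent-violators: the non-decreasing vector
    closest to p_hat in weighted least squares, which is the MLE of
    (p_1, ..., p_k) under the restriction p_1 <= ... <= p_k."""
    blocks = []                                  # [mean, weight, size]
    for p, w in zip(map(float, p_hat), map(float, weights)):
        blocks.append([p, w, 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            p2, w2, c2 = blocks.pop()
            p1, w1, c1 = blocks.pop()
            w12 = w1 + w2
            blocks.append([(w1 * p1 + w2 * p2) / w12, w12, c1 + c2])
    fit = []
    for p, w, c in blocks:
        fit.extend([p] * c)
    return np.array(fit)

# placeholder counts, NOT the Table [tt1] values
n = np.array([17000, 14000, 800, 130])   # individuals per dose group
x = np.array([48, 38, 5, 1])             # malformations per group
p_tilde = pava_increasing(x / n, n)      # ordered MLE of (p_1, ..., p_4)
```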
Our purpose in this paper is to propose new order-restricted test statistics for testing the hypotheses ([j1]): Wald-type and phi-divergence-based test statistics; the classical likelihood ratio test statistic will appear as a particular case of the phi-divergence-based test statistics. A log-linear formulation of ([j1]) is proposed in Section [sec1b], which is fundamental for defining the Wald-type test statistics. In Section [sec2] the families of phi-divergence test statistics are presented. Section [sec3] solves the problem presented in Section [sec1] for an illustrative example. An extensive simulation study is carried out in Section [sec4].

By reparametrizing the initial problem through log-linear modelling, the formulation of the null hypothesis is strongly simplified, since all the interaction parameters are zero under the null hypothesis; this is especially appealing for constructing Wald-type test statistics. Let p(θ) be the probability vector of the saturated log-linear model with the identifiability constraints, θ the unknown parameter vector, and with the redundant parameters obtained by taking the constraints into account. In terms of the log-linear formulation, ([j1a]) is equivalent to the vanishing of the interaction parameters, as in ([j1b]). Notice that the intercept is a nuisance parameter, since it does not interfere with ([j1b]); in particular, under the null hypothesis its value is determined by the common success probability.

[Table: observed values of the statistics T_λ and S_λ, with the corresponding p-values, for λ = −1.5, −1, −0.5, 0, 2/3, together with the Wald-type statistics, for the illustrative example.]

For testing ([j1]) with independent binomial random variables, the following scenarios are considered in the simulation study. Scenarios A, B and C concern small/big proportions, and Scenarios D, E and F intermediate proportions; within each, the sub-scenario ending in 0 (A-0, ..., F-0) has equal success probabilities, so that the null hypothesis is true, while the sub-scenarios ending in 1, 2 or 3 have increasingly ordered probabilities, so that the null hypothesis is false. The simulation experiment is performed with R replications and, in each of them, apart from the Wald-type test statistics, all the power-divergence test statistics T_λ and S_λ associated with a grid of λ values are considered. From the p-values it is possible to calculate the proportion of replications rejected at the nominal size, i.e., α̂ = (1/R) Σ_r I(p_r ≤ α), where I(·) is the indicator function. A sketch of the test statistics and of this Monte Carlo computation is given below.
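The power-divergence family below contains the classical statistics as special cases (λ = 1: Pearson chi-square; λ → 0: likelihood ratio; λ = 2/3: Cressie-Read), and the rejection proportion of eq. [prop] is a simple Monte Carlo count. This is our own simplified illustration: the paper's statistics T_λ and S_λ are built on the order-restricted fit, whereas here, for brevity, the expected counts come from the equal-probability fit under the null, and the critical value must be supplied from the appropriate chi-bar-squared distribution.

```python
import numpy as np

def power_divergence(obs, exp, lam):
    """Cressie-Read power divergence between observed and expected
    counts (lam = 0 is handled as the likelihood-ratio limit; the
    isolated singular value lam = -1 also needs its limit form)."""
    obs, exp = np.asarray(obs, float), np.asarray(exp, float)
    if abs(lam) < 1e-12:
        m = obs > 0
        return 2.0 * np.sum(obs[m] * np.log(obs[m] / exp[m]))
    return 2.0 / (lam * (lam + 1.0)) * np.sum(obs * ((obs / exp) ** lam - 1.0))

def rejection_proportion(n, p, lam, crit, reps=10000, seed=0):
    """Monte Carlo estimate of the proportion of replications whose
    statistic exceeds crit, as in eq. [prop]; p holds the true success
    probabilities of the k independent binomials."""
    rng = np.random.default_rng(seed)
    n, p = np.asarray(n), np.asarray(p, float)
    count = 0
    for _ in range(reps):
        y = rng.binomial(n, p)
        p0 = np.clip(y.sum() / n.sum(), 1e-9, 1 - 1e-9)  # H0 fit
        obs = np.concatenate([y, n - y])
        exp = np.concatenate([n * p0, n * (1 - p0)])
        count += power_divergence(obs, exp, lam) > crit
    return count / reps
```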
The scenarios ending in 0, for which the null hypothesis is true, are used to obtain the simulated significance levels of the different test statistics for different sample sizes. The scenarios ending in 1, 2 or 3, for which the null hypothesis is false, are used to obtain the simulated powers for different alternatives, sample sizes and types of test statistics. For calculating both, ([prop]) is applied, each in the corresponding scenario.

In Figures [fig1] and [fig2], simulated significance levels and powers for all the aforementioned test statistics are plotted in the different scenarios. The curves represent the T_λ or S_λ power-divergence test statistics, located respectively on the left or the right of the panel of plots; the asterisk, square and circle symbols represent the Wald-type statistics, and all of them are repeated on the left as well as on the right to ease comparison. The black lines and symbols, representing the simulated significance levels, are useful for selecting the test statistics close to the nominal level according to the criterion given by Dale (1986): a reasonable exact significance level should satisfy |logit(1 − α̂) − logit(1 − α)| ≤ e, exact significance levels being considered close to the nominal level under the stricter of two prescribed bounds on e and fairly close under the looser one. In this study only the test statistics satisfying the stricter condition are considered, and the corresponding upper and lower bounds appear plotted as two horizontal lines, with the line associated with the nominal level in the middle.

Among the test statistics with simulated significance levels close to the nominal level, the test statistics with higher powers should be selected; but since, in general, high powers correspond to high significance levels, this choice is not straightforward. For this reason, efficiencies relative to the likelihood ratio test or to Bartholomew's test are considered as baselines. In Table [table1] the two relative efficiencies are compared, and the plots in Figures [fig3] and [fig4] focus, in each scenario, on the more demanding of the two baselines, ([ro]) or ([ro*]).

[Table 1: relative efficiencies for sub-scenarios 1-3 across Scenarios A-F.]

In view of the plots, it is possible to propose test statistics with better performance in comparison with the likelihood ratio and Bartholomew baselines. From Figures [fig1] and [fig3], the so-called Cressie-Read test statistic (λ = 2/3) can be recommended for small/big proportions, for either small or moderate sample sizes. On the other hand, the Wald-type statistics and the test statistic based on the Hellinger distance can be recommended for intermediate proportions and moderate sample sizes; for small sample sizes, however, the likelihood ratio test statistic still remains the best one.

Silvapulle, M. J. and Sen, P. K. (2005). Constrained Statistical Inference: Inequality, Order, and Shape Restrictions. Wiley Series in Probability and Statistics. Wiley-Interscience (John Wiley & Sons).
In this paper, new test statistics are introduced and studied for the important problem of testing hypotheses that involve inequality constraints on proportions when the sample comes from independent binomial random variables: Wald-type and phi-divergence-based test statistics. As a particular case of the phi-divergence-based test statistics, the classical likelihood ratio test is considered. An illustrative example is given, and the performance of all of them for small and moderate sample sizes is analyzed in an extensive simulation study.

Keywords and phrases: Wald-type statistics, phi-divergence statistics, inequality constraints, loglinear model, logistic regression.
Simulating the data that a space instrument like CoRoT will provide might look presumptuous; indeed, it certainly is when compared with previous comparable instruments like IPHIR or GOLF. These two examples show that the nominal behaviour of an instrument is not always reached; nevertheless, despite some technical problems, IPHIR and GOLF yielded a wealth of scientific results. What, then, is the interest of simulating CoRoT data, and how close to reality will these simulations get? The latter question might not be the most important one, as the preparation of these simulations will help us to prepare the analysis of real data and to be ready in case of unexpected technical behaviour of the instrument perturbing the data, or unexpected physical behaviour of the targets of the instrument. A consequence is that the simulation tool must include both technical and physical aspects, making the task even more difficult. These aspects cover: photon noise, p-mode excitation, granulation signal, stellar activity signal, orbital perturbations, stellar rotation, and so on. The software presented here is freely available at: www.lesia.obspm.fr//simulightcurve.html

The photon noise is certainly the easiest component of the noise to simulate, as far as it has the expected behaviour: a true white noise with a variance depending on the photon counts. The CoRoT specifications impose a prescribed level of photon noise (in ppm) in the amplitude spectrum of a star of given magnitude for an observing duration of 5 days. The stellar flux F for a given magnitude m satisfies F ∝ 10^{−0.4 m} and, knowing that the level of noise varies as the square root of the flux, the level of noise B is related to the star magnitude by B(m) = B_0 · 10^{0.2 (m − m_0)}, where B_0 is the specified noise level at the reference magnitude m_0. As indicated above, this simple relation can model the photon noise, but not some other photon-counting perturbations, such as slow trends in the photon counts (with periods of hours or more), which might contribute to the low-frequency part of the noise spectrum.

Solar-like oscillations are stochastically excited and are simulated here following the recipe recalled below. Each solar-like oscillation is a superposition of a large number of excited and damped proper modes, and can be decomposed as

x(t) = Σ_j a_j exp[i ω_0 (t − t_j)] exp[−η (t − t_j)] H(t − t_j) + c.c.   [simu:eq1]

where a_j is the amplitude at which the mode with proper frequency ω_0 is excited by turbulent convection, t_j the time at which it is excited, η the (linear) mode damping rate, H the Heaviside function, and "c.c." means complex conjugate. The Fourier transform of Eq. [simu:eq1] yields the spectrum

x̂(ω) = B / (η + i (ω − ω_0))   [simu:eq2]

with B a complex number proportional to the sum over the random excitation amplitudes and times. As the excitations are random, B is random and hence has a random phase. As the excitations are very numerous, according to the central limit theorem the real and imaginary parts of the complex number B are distributed according to normal statistics. Hence the spectrum of the oscillations can be simulated by generating the real and imaginary parts of B according to normal statistics; an inverse Fourier transform is next applied in order to simulate the oscillations in the time domain. This is the principle of the recipe. The question is now what the constraints on the mean and variance of B are.
From Eq. [simu:eq2] we deduce the power spectrum of the oscillation,

|x̂(ω)|² = |B|² / (η² + (ω − ω_0)²).   [simu:eq3]

The mean mode profile is obtained by averaging Eq. [simu:eq3] over a large number of realizations:

⟨|x̂(ω)|²⟩ = σ_B² / (η² + (ω − ω_0)²)   [simu:eq4]

where σ_B² = ⟨|B|²⟩, which is by definition the variance of the complex number B since ⟨B⟩ = 0. We see from Eq. [simu:eq4] that the mean profile has a Lorentzian shape, as is, to first approximation, observed. According to the Parseval-Plancherel relation, σ_B² is related to the mean square of the mode intrinsic amplitude in terms of luminosity fluctuations, ⟨(δL/L)²⟩, through the mode line width and a visibility factor V_ℓ, which depends on the degree ℓ (Eq. [simu:eq5]). Once the mode intrinsic amplitude, the mode line width and the visibility are given, we thus have a constraint on the variance of B. Note that the derivation of Eqs. [simu:eq2]-[simu:eq5] assumes that the mode lifetime (1/η) is much shorter than the duration of the observation, so that many independent excitations occur during the run; for lifetimes much longer than the observation, different expressions are derived, but the principle of the simulation remains the same.

For the sake of simplicity, we use the adiabatic assumption to deduce the maximum of δL/L from the maximum of the root-mean-square mode velocity v according to

(δL/L)_max = (δL/L)_⊙ (v / v_⊙) (T_eff,⊙ / T_eff)^{1/2}

where T_eff is the effective temperature and the symbol ⊙ refers to quantities related to the Sun. We take for the Sun the rms luminosity amplitude (in ppm) quoted in the literature and v_⊙ = 27 cm/s according to seismic observations. In turn, the root mean square of the mode intrinsic velocity v is related to the rate P at which energy is injected into the mode by turbulent convection and to the mode damping rate η as

v² = P / (2 η M)   [simu:eq6]

where M is the mode mass, given by the mode inertia I, the mode radial displacement ξ_r and the height h above the photosphere where the oscillations are measured.

Theoretical mode damping rates are obtained from published tables; these calculations rely on a non-local and time-dependent formulation of convection. The computation of the excitation rates is performed according to a model of stochastic excitation whose calculations assume a Lorentzian function for modelling the convective eddy time-correlations. Furthermore, the characteristic wavenumber involved in the theory is assumed to be constant; its value is related to the value inferred from a 3D simulation of the Sun, scaled by the mixing length evaluated at the layer where the convective velocity is maximum. The solar model we consider is calculated with the CESAM code and appropriate input physics, described in detail elsewhere. In particular, convection is modelled according to the classical mixing-length theory (hereafter MLT) with a mixing length ℓ = α H_p, where H_p is the pressure scale height and α is the mixing-length parameter. A sketch of the spectrum-synthesis recipe described above is given below.
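The recipe above can be sketched as follows: draw the real and imaginary parts of the Fourier coefficients as independent zero-mean Gaussians whose variance follows the Lorentzian mean profile of Eq. [simu:eq4], then inverse-transform to the time domain. This is a minimal illustration in our own conventions; the frequencies, amplitudes and widths are placeholders, and the absolute normalization should be calibrated against Eq. [simu:eq5].

```python
import numpy as np

def photon_noise_ppm(mag, b0=0.6, m0=5.7):
    """White photon-noise level in the amplitude spectrum; it scales as
    10**(0.2 m) since the noise varies as the square root of the flux.
    The anchors b0 [ppm] and m0 are illustrative placeholders."""
    return b0 * 10 ** (0.2 * (mag - m0))

def simulate_modes(duration, dt, freqs, amps, widths, seed=None):
    """Stochastically excited modes: Gaussian Fourier coefficients with
    a Lorentzian variance profile, inverse-transformed to a time series
    (rescaled here to unit rms; multiply by the target rms amplitude)."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    nu = np.fft.rfftfreq(n, dt)                  # frequency grid [Hz]
    power = np.zeros_like(nu)
    for nu0, a, gam in zip(freqs, amps, widths):
        # height chosen so that the profile integrates to a**2
        # (Parseval); gam is the full width at half maximum
        power += (a ** 2 / (np.pi * gam / 2)) / (
            1.0 + ((nu - nu0) / (gam / 2)) ** 2)
    sigma = np.sqrt(power / 2)                   # per real/imaginary part
    b = rng.normal(0.0, sigma) + 1j * rng.normal(0.0, sigma)
    x = np.fft.irfft(b, n)
    return x / x.std()

# two fictitious modes near 3 mHz with 1 muHz line width, 10-day run
lc = simulate_modes(10 * 86400, 32.0, [3.00e-3, 3.13e-3],
                    [1.0, 1.0], [1e-6, 1e-6], seed=1)
```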
In contrast with some earlier calculations, the atmosphere is restored from the classical Eddington gray atmosphere, and microscopic diffusion is included according to a simplified formalism. The calibration of the solar model, in luminosity and radius at the solar age, fixes the initial helium content, the metallicity and the MLT parameter α. The oscillation eigenfunctions, and hence the mode masses M in Eq. [simu:eq6], are calculated with the adiabatic ADIPLS pulsation code.

Granulation can be considered as noise if the aim of the observation is the stellar oscillations, but it carries information about the physics of the star, and very valuable information at that, as convection remains a poorly described phenomenon in stellar modelling. Thus, we call it a signal and not a noise. However, granulation can be described only with a statistical approach. Moreover, its contribution in the Fourier domain is not independent of frequency: it contributes more at low frequency. A common description is to consider that signal as a random signal with some memory; its autocorrelation function is then

C(τ) = σ² exp(−|τ| / τ_g)

where σ is the amplitude of the signal (related to its variance) and τ_g a characteristic time. Knowing that the Fourier transform of the autocorrelation function is the squared modulus of the Fourier transform of the signal, one finds that the Fourier spectrum of the granulation is a Lorentzian which, when summed over all frequencies, yields the variance of the signal, and hence its relation with σ. So, knowing the intensity standard deviation due to granulation and its characteristic time, it is possible to model its Fourier spectrum (and, considering its phase as random, to model the corresponding time series by an inverse Fourier transform). This is in fact the approach developed to model the solar noise spectrum, including granulation, mesogranulation and supergranulation, and it can be used to describe any noise with memory (some electronic noise, for example).

In the case of granulation, the required parameters are estimated from the theory of convection. The eddy size is assumed to be equal to the mixing length ℓ = α H_p, where α is the mixing-length parameter and H_p the pressure scale height. The eddy overturn time, related to the eddy size and the eddy convective velocity, is considered as the characteristic time of granulation. The number of eddies seen at the star surface scales as the squared ratio of the star radius to the eddy size; this relation of course ignores that the medium is highly anisotropic. A sketch of this Lorentzian granulation model is given below.
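The Lorentzian granulation spectrum, and a time series with that spectral shape, can be sketched as below. This is our own minimal implementation; rescaling the synthesized series to the target rms is a pragmatic normalization choice, and the values of sigma and tau would come from the mixing-length estimates described above.

```python
import numpy as np

def lorentzian_psd(nu, sigma, tau):
    """One-sided Lorentzian spectrum of an exponentially correlated
    signal with variance sigma**2 and memory time tau; it integrates
    to sigma**2 over 0 <= nu < infinity."""
    return 4 * sigma ** 2 * tau / (1 + (2 * np.pi * nu * tau) ** 2)

def colored_series(duration, dt, psd, seed=None):
    """Random series whose spectrum follows the shape psd(nu); random
    phases, rescaled a posteriori to unit standard deviation."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    nu = np.fft.rfftfreq(n, dt)
    z = np.sqrt(psd(nu)) * (rng.normal(size=nu.size)
                            + 1j * rng.normal(size=nu.size))
    x = np.fft.irfft(z, n)
    return x / x.std()

# illustrative solar-like granulation: rms ~ 40 ppm, memory ~ 400 s
gran = 40e-6 * colored_series(20 * 86400, 32.0,
                              lambda f: lorentzian_psd(f, 1.0, 400.0))
```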
According to the mixing-length theory, the eddy contrast (border versus center of the granule) can be related to the difference between the temperature gradient of the eddy and that of the surrounding medium ([dl_l_eddy]). In turn, Eq. [dl_l_eddy] can be reduced to an expression involving the convection efficiency and the ratio between the convective flux and the total flux of energy. This relation is finally calibrated in order to match the solar constraints, and the intensity standard deviation for the whole observed disc follows. All the quantities are obtained from 1D stellar models computed as explained in Sect. [sect:solar-like-exc].

The stellar magnetic activity induces intensity variations in time, mainly due to the presence of starspots crossing the observed disc because of rotation; some other sources of intensity variation are expected as well (flares, for example). A first approach is to describe the intensity variations in the same way as granulation: knowing the standard deviation and the characteristic time of the intensity variations allows one to build their Fourier spectrum as a Lorentzian. This approach has been used before in the literature. The difficulty in the case of magnetic activity is that there is no theoretical description of the phenomenon; the parameters describing it are therefore derived empirically. The characteristic time is taken as the rotation period of the star or, if the rotation is slow, the intrinsic lifetime of a spot; the latter is arbitrarily chosen equal to the solar value. The rotation period is computed from an empirical law involving the age and the colour index of the star, described in the same reference. The standard deviation of the intensity variations due to magnetic activity is also derived from an empirical law. This law involves the Rossby number (the ratio of the rotation period to the convective overturn time at the base of the convection zone); this number can be derived empirically, but in the present simulations we derive it from models and then use empirical laws to estimate the standard deviation. Another approach is to simulate in the intensity time series the influence of individual spots, estimating their number and contrast; the expected result in the Fourier spectrum should be similar to the first approach, but such a detailed simulation should also allow rotation measurements. This approach will be included in a further version of our simulation software. A sketch of the Lorentzian activity component, reusing the granulation machinery above, is given below.
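Since the activity component is modelled exactly like granulation, but with the rotation period (or the spot lifetime) as the memory time, it can reuse the colored_series helper from the previous sketch; all numbers below are placeholders, not values from the empirical laws.

```python
day = 86400.0
p_rot, sigma_act = 25.0 * day, 1.0e-3    # placeholder period and rms

act = sigma_act * colored_series(150 * day, 32.0,
                                 lambda f: lorentzian_psd(f, 1.0, p_rot))

# a complete light curve adds the components from the earlier sketches:
# oscillations + granulation + activity + white photon noise
```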
for a star with a mass and a magnitude , showing also p modes , granulation and activity signatures , width=359 ]this simulator software will continue to evolve with time . as indicated above ,intensity modulation due to starspots will be included , as well as other stellar or instrumental signals , as for example instrumental perturbations due to orbital vraiations .moreover , this effort of simulation will not end with the delivery of first data but will be continued after that .the comparison with real data will allow to check for the validity of physical hypothesis used to simulate the different signals of astrophysics origin in the data .this shoud bring a great amount of information on our knowmedge of these often not well known phenomena , which stellar simulation is often derived from the solar case . in parallel , the simulation of instrumental components of the signal will be improved to help the interpretation of real data .all these reasons justify in our opinion the need for the simulation tool presented here . ,c. , de ridder , j. , mazumdar , a. , et al .2004 , in proceedings of the soho 14 / gong 2004 workshop ( esa sp-559 ) .`` helio- and asteroseismology : towards a golden future '' .12 - 16 july , 2004 .new haven , connecticut , usa . editor : d. danesy .
In order to prepare the analysis of the CoRoT data, it has been decided to build a simple tool to simulate the expected light curves. This simulation tool takes into account both instrumental constraints and astrophysical inputs for the CoRoT targets. For example, granulation and magnetic activity signatures are simulated, as well as p modes, with the expected photon noise. However, the simulations sometimes rely on a simplified treatment of these phenomena, as the main goal of this tool is to prepare the analysis of the CoRoT data, not to perform the most realistic simulations of the different phenomena.
positive and negative feedback loops are frequently - occurring motifs in gene transcription regulatory networks and signaling pathways .the components of a feedback loop are genes , proteins and other molecules which are connected by regulatory interactions .depending on the components and their interactions , feedback loops have distinct roles in diverse regulatory systems .a regulatory interaction is positive ( negative ) if an increase in the amount or activity of one component increases ( decreases ) the amount or activity of its interaction partner .a feedback loop is positive ( negative ) if the number of repressing interactions is zero or even ( odd ) .a large number of experiments and theoretical studies elucidate the major functional characteristics of feedback loops with simple structure .positive feedback in a gene transcription regulatory network ( gtrn ) tends to enhance protein levels whereas negative feedback favours homeostasis , i.e. , maintenance of proteins at a desired level .the simplest feedback loop has only one component which is thus self - regulating .for such a motif in a gtrn , a protein promotes / represses its own production via autoactivation / autorepression of the expression of its gene .a positive feedback loop with two components and two regulatory interactions is of two types : double negative and double positive . again , considering a gtrn , the protein products of the two genes in a double negative feedback loop repress each other s synthesis .the construction of a synthetic circuit , the genetic toggle , is based on this motif .the double positive feedback loop is defined by two genes , the protein products of which promote each other s synthesis .there are several examples of two - component positive feedback loops in natural cellular networks , a prominent example being the cell division cycle , the regulatory network of which contains both double positive and double negative feedback loops . in this case, the loops control enzymatic activity .the double negative feedback loop , because of its more common occurrence , has been extensively studied in contrast to the double positive feedback loop .the next stage of complexity in feedback loops involves linked positive and negative feedback loops .the key variables in the dynamics of feedback loops are the concentrations of the component molecules . in the case of a gtrn, these may be the protein concentrations . in a deterministic description ,the time evolution of the concentrations is determined by solving a set of coupled differential equations , the number of equations being equal to the number of variables . 
In reality, the biochemical events associated with gene expression are probabilistic in nature, and this is reflected in the presence of fluctuations (termed noise) around mean protein levels. A stochastic description of the time evolution is thus more appropriate. A single positive feedback loop has a tendency to amplify noise; also, the time taken to reach the steady-state protein level is longer than in the case of an unregulated gene. Interlinking of two positive feedback loops with slow and fast dynamics results in a switch with rapid activation and slow deactivation times and a marked resistance to noise in the upstream signaling pathway. Addition of a single negative feedback loop leads to rapid deactivation in the absence of the signal which activates the switch. The combination of positive and negative feedback loops may give rise to excitability, with transient activation of protein levels; recent experiments suggest that competence development in B. subtilis is achieved via excitability.

In this paper, we study the functional characteristics of a two-gene double positive feedback loop coupled with autorepression of the expression of one of the genes. The major motivation for studying this specific motif is its presence in the GTRN controlling the pancreatic beta-cell function. The hormone insulin is a small protein that is synthesized in the beta-cells and secreted when an increase in the blood glucose level is sensed. Glucose metabolism releases the energy needed by cells to do useful work; insulin is necessary to metabolize glucose and thereby control its level in the blood. Diabetes occurs due to an excessive accumulation of glucose in the blood, brought about by an insufficient production of, or reduced sensitivity to, insulin. The core of the beta-cell transcriptional network consists of a double positive feedback loop in which the transcription factors HNF-4alpha and HNF-1alpha, belonging to the hepatocyte nuclear factor family, activate each other's synthesis; there is also some evidence that one of them autorepresses its own synthesis. Mutations in these transcription factors give rise to a type of diabetes known as maturity-onset diabetes of the young (MODY), which has an early onset, usually at an age of less than twenty-five years. There are six different forms of MODY, of which MODY 1 and MODY 3 are caused by mutations in the genes encoding HNF-4alpha and HNF-1alpha, respectively. The structure of the regulatory network of which the two genes are integral components is not fully known; a partial structure of the complex network is shown in the figure. The two genes collectively control the transcription of a number of important genes involved in glucose metabolism in the beta-cell; these include the glucose transporter gene, the glucokinase gene encoding the glycolytic enzyme glucokinase, which acts as a glucose sensor, and also the insulin gene. Odom et al.
used chromatin immunoprecipitation assays with promoter microarrays to gain insight into the regulatory circuits formed by these factors; both proteins were found to control the activity of a large number of target genes in the beta-cell. This recent finding, as well as earlier experiments, indicates that the two genes play a prominent role in pancreatic function: mutations in them give rise to MODY, resulting in the impairment of glucose-stimulated insulin secretion. Several experiments provide clues on the possible molecular origins of MODY. The cross-regulatory interactions between the two genes are switched on as pancreatic beta-cells receive the signals to differentiate. The double positive feedback loop has the potential for bistability, i.e., two stable steady states: a basal state in which the two genes have low activity and an activated state which corresponds to high protein levels. The states are analogous to the off and on states of a switch. Normal functioning of the pancreatic beta-cells requires the two-gene feedback loop to be in the on state. The circuit operation is, however, vulnerable to decreased gene dosage caused by mutations (in a diploid organism each gene has two identical copies). Genetic disorders, termed haploinsufficiency, are known to occur due to reduced gene dosage resulting in decreased protein levels. Gene expression noise increases the probability that a protein level falls below a threshold value, so that the protein amount is insufficient for meaningful activity; the loss of vital protein functions is responsible for the occurrence of genetic disorders. MODY brought about by reduced gene dosage is thus an example of haploinsufficiency. We construct a mathematical model to study the dynamics of the core circuit consisting of a double positive feedback loop coupled with autorepression of one gene. We use both deterministic and stochastic approaches to identify the functional features of the motif and discuss their possible relevance to the occurrence of MODY.

The circuit diagram of the motif to be studied is shown in Figure 1. The two genes, denoted g_p and g_s, represent the two members of the feedback loop; the arrow sign denotes activation by the appropriate protein product, and the hammerhead sign denotes repression. The chemical kinetic schemes corresponding to the expression of the genes g_p and g_s are shown in Figures 2(a) and 2(b). The protein products of g_p and g_s are denoted by P and S. We assume that the regulation of gene expression is mediated by the protein dimers P_2 and S_2, with dimerization binding constants that we label K_1 and K_2 for definiteness. For each gene there are two rates of protein synthesis: a basal rate and an activated rate. In the second case, protein synthesis occurs in the activated state of the gene, attained via the binding of the protein dimers S_2 and P_2 to the genes g_p and g_s, respectively; the associated binding constants are K_3 and K_4. Protein degradation occurs with first-order rate constants; dimer degradation is not taken into account, as its rate is a few fold lower than the degradation rate of the protein monomers. For the gene g_s, there is an extra biochemical event representing autorepression: the dimers P_2 and S_2 bind the promoter region of the gene g_s competitively, i.e., the binding of one type of dimer excludes the binding of the other type. When the S_2 dimer binds at g_s, there is complete repression; the corresponding binding constant is denoted by K_5. A sketch of the resulting promoter-occupancy function is given below.
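Under the fast-binding assumption above, the fraction of g_s promoters in the activated state takes a standard competitive-binding form, sketched below. This is our own illustration: the labels K_1, K_2, K_4, K_5 follow the conventions introduced in the text, but the algebraic expression is the generic one, not copied from the paper.

```python
def dimer(conc, K):
    """Equilibrium dimer concentration for monomer concentration conc
    and dimerization binding constant K (fast-equilibrium assumption)."""
    return K * conc ** 2

def activated_fraction_gs(u, v, K1, K2, K4, K5):
    """Fraction of g_s genes bound by the activating dimer P_2 when the
    repressing dimer S_2 competes for the same promoter region."""
    p2, s2 = dimer(u, K1), dimer(v, K2)
    return K4 * p2 / (1.0 + K4 * p2 + K5 * s2)
```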
The protein concentrations u and v of P and S are the dynamical variables in the system. The time scale of binding events is, in general, much faster than that of protein synthesis and degradation; the bound complexes thus reach steady state at an earlier time point. Taking this into account, the differential rate equations describing the time evolution of the protein concentrations are equations (1) and (2), supplemented by two conservation equations for the total concentrations of the genes g_p and g_s. After an appropriate change of variables, the differential rate equations (1) and (2) are transformed into the rescaled equations (8) and (9). The rescaled repression-strength variable is dimensionless, whereas the variables u and v have the dimensions of concentration, expressed in units of nM; some of the rescaled parameters are dimensionless while others carry units of inverse concentration squared, with one gene copy corresponding to a concentration of order 1 nM. From now on, the units will not be explicitly mentioned. Table 1 displays all the parameters and rescaled parameters of the two-gene model, as well as their meanings and defining formulae. A numerical sketch of a representative version of the rescaled model is given below.

We use the software package XPPAUT to probe the dynamics of the double positive feedback loop and the effect of autorepression of the S proteins on the dynamics. We focus on how the steady-state value of u (the rescaled concentration of P proteins) changes as a function of the different parameters in equations (8) and (9); in the steady state, the rates of change du/dt and dv/dt are zero. Figure 3(a) shows a plot of the steady-state u versus a synthesis parameter when the autorepression strength, given by the parameter k, is zero, with the other parameters held fixed at reference values. The plot shows that a region of bistability separates two regions of monostability; the two stable states in the bistable region correspond to low and high values of u. In this region, at a specific parameter value, the choice between the stable steady states is history-dependent, i.e., depends on the initial conditions. If the value of u is initially low, the system ends up in the low state. As the parameter increases, the system enters the region of bistability but continues to be in the low expression state until a bifurcation point is reached; at this point a discontinuous jump to the high state occurs and the system becomes monostable. Bistability is accompanied by hysteresis, i.e., the parameter value at which the switch from the low to the high expression state occurs is greater than the value (the lower bifurcation point) at which the reverse transition takes place. The two stable branches are separated by a branch of unstable steady states (dash-dotted line) which are not experimentally accessible. There are now several known systems in which bistability and hysteresis have been observed experimentally. Figure 3(b) shows the same plot for the same parameter values as in Figure 3(a), except that the copy number of the g_s gene is reduced from two to one.
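Because the exact rescaled equations (8) and (9) did not survive extraction, the sketch below integrates a representative version of the model: basal plus activated synthesis for each gene, with competitive autorepression at g_s as in the occupancy function above, and linear degradation. All parameter values are placeholders; the forward/backward parameter scan mirrors the hysteresis of Figure 3 qualitatively, not quantitatively.

```python
import numpy as np
from scipy.integrate import odeint

def rhs(y, t, a, b, K1, K2, K3, K4, K5):
    """Representative rescaled dynamics: u (P protein) is activated by
    the S dimer; v (S protein) is activated by the P dimer and
    competitively autorepressed by the S dimer (cf. Figure 1)."""
    u, v = y
    p2, s2 = K1 * u ** 2, K2 * v ** 2
    du = a + b * K3 * s2 / (1 + K3 * s2) - u
    dv = a + b * K4 * p2 / (1 + K4 * p2 + K5 * s2) - v
    return [du, dv]

def steady(b, y0, pars):
    """Long-time state reached from y0 for synthesis parameter b."""
    t = np.linspace(0.0, 200.0, 2000)
    a, K1, K2, K3, K4, K5 = pars
    return odeint(rhs, y0, t, args=(a, b, K1, K2, K3, K4, K5))[-1]

pars = (0.05, 1.0, 1.0, 1.0, 1.0, 0.5)     # placeholder values
bs = np.linspace(0.5, 6.0, 60)
up, down, y = [], [], np.array([0.0, 0.0])
for b in bs:                                # forward branch, low start
    y = steady(b, y, pars)
    up.append(y[0])
y = np.array([10.0, 10.0])
for b in bs[::-1]:                          # backward branch, high start
    y = steady(b, y, pars)
    down.append(y[0])
# a gap between `up` and `down[::-1]` over an interval of b signals
# bistability with hysteresis, as in Figure 3(a)
```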
A comparison of Figures 3(a) and 3(b) shows that with reduced copy number the extent of the region of bistability is considerably increased. The same conclusion is reached when the steady-state values of u are plotted versus the other synthesis parameter. The region of bistability is lower in extent when the parameter k, a measure of the autorepression strength, is increased from zero; the value of k is changed by modifying the value of the repressor binding constant K_5 (equation (10)).

Figure 4 shows the phase portrait corresponding to equations (8) and (9) for parameter values at which the system is bistable. The nullclines, obtained by setting du/dt = dv/dt = 0, intersect at three points, the fixed points of the dynamics. The lower and upper fixed points are stable, whereas the intermediate fixed point is unstable, in fact a saddle node. The stable manifold of the saddle node divides the space into two basins of attraction; trajectories starting in the lower (upper) basin of attraction end up at the lower (upper) stable fixed point, as shown in Figure 4. A trajectory initiated on the stable manifold stays on it and ends at the saddle node; a typical trajectory asymptotically approaches the unstable manifold as time increases. A trajectory is obtained by plotting the values of u and v at different time points, determined by solving equations (8) and (9); the arrow direction on a trajectory denotes increasing time.

Table 1. Parameters and rescaled parameters of the two-gene model, with their meanings:
- rate constants for basal protein synthesis;
- rate constants for activated protein synthesis;
- binding constants of the protein dimers P_2 and S_2;
- binding constants for the binding of the dimers S_2 and P_2 at the genes g_p and g_s;
- rate constants for protein degradation;
- binding constant K_5 for repressor dimer binding at the gene g_s, which sets the strength of repression;
- the ratios of activated to basal synthesis rate constants for the P and S proteins;
- the rescaled repression-strength parameter k, which, with the other constants kept fixed, can be varied by changing K_5;
- the abbreviations defined in equations (3) and (4).

Figure 5(a) shows the regions of monostability and bistability in the plane of the synthesis parameters. The regions of bistability, enclosed within the red and black curves, correspond to gene copy numbers two and one, respectively; the difference in the locations of the two loops in the logarithmic plots clearly shows that the bistable region is of greater extent when the gene copy number is reduced from two to one. The region of bistability is decreased in extent when autorepression is taken into account (Figure 5(b)). Figure 6 shows the corresponding plot with the regions of bistability again falling within the red and black curves; the value of k is changed by varying K_5 (equation (10)).

A major advantage of combining a double positive feedback loop operating between two genes with autorepression of the expression of one of the genes lies in dosage compensation. This relates to the fact that the fall in steady-state protein levels, brought about by a reduction in the gene copy number, is less when autorepression is included, compared with the case when there is no autorepression. A measure of dosage compensation is provided by the quantity G, termed the percentage gain, defined as the percentage increase, relative to the case without autorepression, in the steady-state concentration of P proteins when the copy number of the g_s gene is one. The parameter k is, as before, the measure of the repression strength.
G is calculated by keeping the mean level of P proteins the same in the two cases k = 0 and k > 0; this is achieved by adjusting the binding constant contained in the rescaled parameter in equation (10), with the other parameter values held fixed. Figure 7 shows the plot of G versus k for the parameter values mentioned. As k increases from zero, there is initially a sharp increase in the value of G, followed by a slower growth which ultimately leads to a near-saturation of the G values.

The results obtained in the deterministic approach provide insight into the advantages of autorepression for the non-occurrence of the genetic disorder MODY. The normal functioning of pancreatic beta-cells requires the protein levels to be high, i.e., the two-gene system should be in the on state. The genesis of MODY lies in a substantial fraction of the beta-cells being in the off state, brought about by mutations in the two genes giving rise to a fall in the steady-state protein levels. In terms of the two-gene model studied by us, the monostable high state, in which the levels of the P and S proteins are both high, represents the on state of normal beta-cells. The system may enter a region of bistability, in which both the on and off states are possible, due to the loss of a gene copy brought about by mutations. We will show in the next section that fluctuations in the protein levels are responsible for transitions between the on and off states. In the deterministic scenario, the major advantages of the autorepressive feedback loop appear to be dosage compensation (Figure 7) as well as a lesser possibility of the system being in the bistable region after a reduction in gene copy number; the continuance of the system in the monostable high state ensures the normal functioning of the cells. Similar conclusions are reached if the copy number of the other gene is reduced from two to one. There is, however, an asymmetry in the P and S protein levels, as the expression of the g_p gene is not autorepressed.

Consider the two-gene network to be originally in the monostable high state. In the deterministic formalism, the system continues to be in the high, i.e., on, state even if it enters the region of bistability due to the loss of a gene copy. This is due to history dependence: since the system is initially in the on state, it continues to be in the on state in the bistable region, though the protein levels corresponding to the on state are lower in magnitude there. In the pancreatic beta-cells, the occurrence of MODY is possible only when a sizable fraction of cells is in the off state; the on-off transitions can be understood only when stochasticity in gene expression is taken into account. We now give a simple physical picture of the origin of stochastic transitions. In the case under consideration, the dynamical variables are the protein concentrations u and v. In the case of deterministic time evolution, trajectories starting in individual basins of attraction stay confined to the specific basins, with no possibility of a trajectory crossing from one basin to another. In the stochastic approach, the trajectories are no longer deterministic, as the dynamical variables u and v fluctuate; given the initial state, the deterministic trajectory is fixed in the (u, v) phase space, whereas in the stochastic case different trajectories are generated in repeated trials.
a transient fluctuation, if sufficiently strong, switches the system dynamics from one basin to the other, brought about by the excursion of the trajectory across the boundary separating the two basins of attraction. in terms of the pancreatic β-cells, the switch to the off state hampers the normal functioning of the cells. for proper regulatory functions as transcription factors, the p and s protein levels are high with an optimal value, as excessive protein amounts are known to be harmful rather than beneficial. in this context, it is pertinent to undertake a comparison of the functional characteristics of two-gene network models with and without the autorepressive loop and with the mean protein levels kept at the same high values in the two cases. the last condition ensures the normal functioning of the cells in both the cases. in section 2, we have identified certain advantages of the autorepressive loop as regards the system dynamics in a deterministic framework. our goal is now to identify the desirable features of the model incorporating both a double positive feedback loop and an autorepressive loop, taking the stochastic aspects of the dynamics into consideration. this is done with the help of a detailed computer simulation based on the gillespie algorithm. the algorithm enables one to keep track of the stochastic time evolution of the system. the different biochemical reactions considered in the simulation are depicted in figures 2(a) and 2(b). the reactions are sixteen in number and are given by . the different symbols are as explained in section 2. the stochastic rate constants, associated with the reactions, are c(i), i=1,...,16, in the appropriate units. the results of the simulation are shown in figures 8-9. figures 8(a)-(d) show the distribution of protein levels, , in an ensemble of 4500 cells for repressor strengths , , and respectively, after a simulation time of time units. the gene copy numbers are and so that the system is in the region of bistability. the values of the stochastic rate constants are , , , and . the value of is changed by varying the stochastic rate constant . the value of the rate constant is changed to keep the mean protein levels the same when for all values of . the value implies that only the double positive feedback loop contributes to the dynamics. the distribution is found to be bimodal, i.e., has two distinct peaks corresponding to the off and on states. in all the cases, the cells are in the on state at time . one finds that the fraction of cells in the off state decreases as the value of increases. in fact, when , the number of cells which are in the off state is larger than that in the on state. since initially all the cells are in the on state, a large number of on → off transitions occur during the simulation time.
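the paper's full sixteen-reaction scheme and its c(i) values are not reproduced in this text. the sketch below therefore collapses the scheme to four birth-death reactions with the same nonlinear rates as the deterministic stand-in above; it is a minimal gillespie simulation for illustration, not the simulation actually reported, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_p = n_s = 2                          # gene copy numbers (assumed)
b, a, K, g = 0.01, 1.0, 25.0, 0.2      # same stand-in rates as the sketch above

def propensities(x, y, lam):
    return np.array([
        n_p * (b + a * y**2 / (K + y**2)) / (1.0 + lam * x**2),  # make x
        g * x,                                                   # degrade x
        n_s * (b + a * x**2 / (K + x**2)),                       # make y
        g * y,                                                   # degrade y
    ])

change = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def gillespie(x, y, lam, t_end):
    """exact stochastic simulation: exponential waiting times, one reaction
    per event, chosen with probability proportional to its propensity."""
    t = 0.0
    while True:
        props = propensities(x, y, lam)
        t += rng.exponential(1.0 / props.sum())
        if t > t_end:
            return x, y
        j = rng.choice(4, p=props / props.sum())
        x, y = x + change[j, 0], y + change[j, 1]

# ensemble started near the on state; count cells that switched off
cells = [gillespie(6, 6, lam=0.0, t_end=500.0) for _ in range(200)]
off = sum(1 for x, y in cells if x < 2)
print("cells in the off state:", off, "of", len(cells))
# raising lam suppresses the on -> off switching, mirroring figures 8 and 9
```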
the number of reverse transitions is, however, much rarer. the role of the autorepressive loop thus appears to be to reduce the number of stochastic transitions from the on to the off state. this makes the occurrence of mody, brought about by a sizeable fraction of the cell population existing in the off state, less probable. there are two distinct time scales over which protein fluctuations occur. the probability distribution versus has a two-peaked structure. fluctuations on a short time scale confine the values to lie predominantly within individual peaks. the long time scale corresponds to the time at which large fluctuations occur, bringing about transitions between states belonging to different peaks. the ``escape time'' is often very large and a quantitative measure is provided by the mean first passage time. in the present case, the values of and are quite large for different values of . the maximum simulation time is time units for all values of . for , is around time units whereas is even larger. as increases, increases whereas decreases. for , is as large as . because of large escape times, the probability distribution versus is metastable on a large time scale. over shorter periods of time, the shape of the distribution remains more or less invariant. the plots in figure 9 are obtained for an ensemble of 4500 cells. for gene copy numbers and , the mean protein levels are adjusted to be the same irrespective of the values of . the parameter values are so chosen that the system is in the monostable high region. on reduction of to (one copy of the gs gene), the system enters the region of bistability and is in the on state at time . after a period of time units of stochastic time evolution, the percentage of cells in the off state is determined. the red curve shows this percentage as a function of the repression strength . the drop in the percentage of cells in the off state is found to be exponential. the black curve shows the percentage of cells in the on state after time units, with all the cells being initially in the off state. one finds that with increasing , the fraction of cells in the on state becomes larger. the autorepressive loop has the effect of making the on state more stable and the off state more unstable. this feature enhances the probability of the nonoccurrence of mody as there are infrequent transitions from the on to the off state. on the other hand, the system has a lesser probability of remaining stuck in the off state compared to the case when there is no autorepressive loop. in this paper, we have studied the functional characteristics of a motif consisting of a double positive feedback loop operating between two genes and a negative feedback loop in which the protein product of one gene represses its own synthesis. the motif appears in the gene regulatory network controlling the pancreatic β-cell function. loss of a gene copy due to mutations has been shown to be responsible for abnormal β-cell function resulting in mody. we have studied the effect of reduced gene copy number on the dynamics of the model describing the two-gene motif.
in a deterministic formalism based on differential rate equations, we identified regions of bistability in appropriate parameter regions. the stable steady states, designated as the off and on states, correspond respectively to low and high protein levels. the normal β-cells are expected to be in the monostable on state. the occurrence of mody is brought about by a fraction of β-cells being in the off state. the on ↔ off switch can occur only in the bistable region. negative feedback reduces the extent of the bistable region, making it less likely that the cellular state falls in this part of the phase diagram. the region of bistability, however, increases in size on reduction of the gene copy number, making the on → off transitions more probable. negative feedback also produces a mechanism of dosage compensation (figure 7). the results hold true for a wide range of parameter values. since switching to the off state is detrimental, one would have thought, from an evolutionary point of view, that two genes which are constitutively on would be more appropriate. in reality, the genes and form a positive feedback loop. cross-regulation between the two genes is established when the pancreatic cells receive signals to differentiate. the positive feedback loop provides a stable mechanism of gene expression since the two genes reset each other's activity to the functional state under physiological perturbations. this serves to self-perpetuate the activity of the two genes and their targets in the pancreatic β-cells. normal functioning of these cells requires both the protein levels to be high. a system of two genes that are constitutively on is less robust under physiological perturbations, since there is no resetting mechanism by which both the genes are in the functional on state. the theoretical suggestions of bistability due to the existence of a positive feedback loop, backed up by the results of our mathematical model, should be tested in actual experiments. the on → off switch is brought about by protein fluctuations, the origin of which lies in stochastic gene expression. our major finding is that negative feedback makes the on → off transitions less probable and the off → on transitions more probable. thus the function of the negative feedback appears to be to protect the normal β-cell function, since the cell is more likely to be in the on state in this case. the asymmetric response to fluctuations prevents switching off and facilitates switching on of the high expression state. in the deterministic scenario, one finds that the difference between the on state and the unstable steady state protein level increases as the autorepression strength is increased, whereas the difference between the unstable steady state and off state protein levels decreases on increasing the autorepression strength. this may explain the asymmetry in the on → off and off → on switches when stochasticity is taken into account. for moderate strengths of autorepression, the system is locked in the on state for extremely long times.
in our simulations, we did not encounter on → off switches for very long trajectories ( seconds) with . this translates into lifetimes measured in years and explains the delayed onset of the diabetic phenotype. the phenotype generally appears after several years, indicating that the activation of the on → off switch is rare. the average age at which mody is manifest could thus be dictated by the probability that a sufficient number of β-cells is locked in the off state. we have considered the simplest form of negative autoregulation in our two-gene model. there are recent suggestions that negative autoregulation of the hnf-4 gene in the pancreatic β-cells may be more complex. also, the number of transcription factor binding sites of the two genes is not known with certainty. cooperative binding at multiple sites is expected to promote the stability of the gene expression states. our two-gene motif constitutes a minimal model which seeks to explain the desirable features of combining a double positive feedback loop with an autorepressive loop vis-à-vis the normal functional activity of β-cells. the insight gained from the model study is expected to provide a basis for the investigation of more complex cases.

the two-gene network model. the protein products of the and genes activate each other's synthesis. there is also an autorepressive loop in which the proteins of the gene repress their own synthesis.
fig3. versus curves showing bistability and hysteresis. the solid (dashed) lines represent stable (unstable) steady states for gene copy numbers (a) , and (b) , . the parameter , a measure of the autorepression strength, is zero.
fig4. phase portrait described by equations (8) and (9). the dark and light solid lines represent the nullclines intersecting at three fixed points. the stable and unstable fixed points are denoted by solid and empty circles respectively. the stable manifold divides the phase space into two basins of attraction. some typical trajectories are shown with arrow directions denoting increasing time.
fig5. plots of versus showing regions of monostability and bistability when the parameter , a measure of the autorepression strength, is zero (a) and 0.005 (b). the regions of bistability are enclosed within the red and black curves with gene copy numbers , and , respectively.
fig8. distribution of steady state protein levels, , in an ensemble of cells for repressor strengths (a) , (b) , (c) and (d) respectively.
fig9. for gene copy numbers and , after a time interval of time units of stochastic time evolution, the percentage of cells in an ensemble of cells in the off state (red curve) versus the repression strength , with all the cells being in the on state at time . the black curve shows the percentage of cells in the on state versus , with all the cells being in the off state at .
we study the functional characteristics of a two-gene motif consisting of a double positive feedback loop and an autoregulatory negative feedback loop. the motif appears in the gene regulatory network controlling the functional activity of pancreatic β-cells. the model exhibits bistability and hysteresis in appropriate parameter regions. the two stable steady states correspond to low (off state) and high (on state) protein levels respectively. using a deterministic approach, we show that the region of bistability increases in extent when the copy number of one of the genes is reduced from two to one. the negative feedback loop has the effect of reducing the size of the bistable region. loss of a gene copy, brought about by mutations, hampers the normal functioning of the β-cells, giving rise to the genetic disorder maturity-onset diabetes of the young (mody). the diabetic phenotype makes its appearance when a sizable fraction of the β-cells is in the off state. using stochastic simulation techniques, we show that, on reduction of the gene copy number, there is a transition from the monostable on state to the on state in the bistable region of the parameter space. fluctuations in the protein levels, arising due to the stochastic nature of gene expression, can give rise to transitions between the on and off states. we show that as the strength of autorepression increases, the on → off transitions become less probable whereas the reverse transitions are more probable. the implications of the results in the context of the occurrence of mody are pointed out. p.a.c.s. nos.: 87.18cf, 87.18tt, 87.18vf. department of physics, bose institute, 93/1, a. p. c road, kolkata-700009, india
over the last two decades, the striking optical properties of noble metal nanoparticles have raised considerable interest. among the variety of morphologies obtainable by chemical synthesis, ``nanorods'' (elongated cylindrical or, more commonly, spherocylindrical particles) are some of the most studied, due to their strong longitudinal resonance, which can be tuned by varying their anisotropy. silver and gold nanorods were thoroughly studied by uv-vis-ir spectroscopy. although numerical simulation techniques are nowadays quite accessible, see e.g. refs. , these particles are often modeled as ellipsoids, since this shape has the great advantage of being amenable to analytical treatment, as briefly discussed in the following; see for a detailed and up-to-date discussion. the scattering problem was solved exactly for a sphere and for an arbitrary spheroid, but the expressions are cumbersome. in the rayleigh (or electrostatic) limit, analytical approximations can be derived for ellipsoids. they are valid for particles much smaller than the wavelength, a condition often fulfilled in practice. more elaborate but still tractable approximations exist, for spheres and spheroids beyond the rayleigh regime. unfortunately, the currently prevailing opinion is that the ellipsoid model does not accurately describe more realistic shapes, in particular with respect to the longitudinal plasmon resonance (that associated to the longest dimension). the authors reach this conclusion by comparing the position of the longitudinal plasmon peak (lpp) for cylinders, spherocylinders, and prolate spheroids with the same length and transverse diameter, and hence the same aspect ratio. however, the literature provides no justification for using as the comparison parameter. we consider the relevant criterion for identifying the ellipsoid corresponding to a given particle. in contrast with the established procedure discussed above, we argue that the relevant quantities are the various moments of the mass distribution, leading to an effective aspect ratio . the spectra of particles with different shapes (cylinder, spherocylinder and ellipsoid) and the same agree much better than those for an equal aspect ratio. the most general description of a mass distribution (for a body or a system of particles) is in terms of its various moments. the first of these are, in increasing tensorial order, (the total mass of the system), (with the center-of-mass vector) and the inertia tensor: \[ \mathbf{i} = \int_v \rho(\mathbf{r}) \left[ (\mathbf{r} \cdot \mathbf{r})\, \mathbb{1} - \mathbf{r} \otimes \mathbf{r} \right] \mathrm{d}v \] where is the volume of the body, is the local mass density, is the unit tensor and denotes the outer product. is widely used in mechanics, but is also relevant for the interaction of particles with radiation: for instance, the gyration radius defined as is extensively used in small-angle scattering techniques. this strategy is also very similar to the traditional way of describing a charge distribution by its total charge, dipole, quadrupole and higher multipolar moments. in this framework, the ellipsoid equivalent to a given body is the one with the same distribution moments. geometrically, an ellipsoid is completely defined by three parameters (e.g.
the semi-axes) chosen such that the three eigenvalues of are the same as those of the initial body. the particle position does not influence the absorbance; as to the mass, we will return to it in appendix [sec:appmass].

three nanoparticles (from top to bottom: cylinder, spherocylinder and prolate ellipsoid) with the same total length and diameter, and thus the same aspect ratio. for the top two shapes, the superimposed contour represents the equivalent ellipsoid (see text).

we studied three types of particles: cylinders, spherocylinders and prolate ellipsoids, described by their aspect ratio (exemplified in figure [fig:geom] for ) and considered homogeneous: . for ellipsoids, one can also write \( \eta = a/b \) (the ratio of the two semi-axes). all these shapes have azimuthal symmetry around their length, taken as parallel to , so that only and need to be determined. writing \( \eta \) for the aspect ratio and \( r \) for the radius, \[ i_x = \frac{m_c}{12}\left(3 r^2 + \ell^2\right), \qquad i_z = \frac{m_c\, r^2}{2} \qquad \text{(cylinder, length } \ell = 2 r \eta\text{)} \] \[ i_x = \frac{m_e}{5}\left(a^2 + b^2\right), \qquad i_z = \frac{2\, m_e\, b^2}{5} \qquad \text{(prolate ellipsoid, semi-axes } a \geq b\text{)} \] \[ i_x = \rho \pi r^5 \left\{ (\eta - 1)\left[\frac{1}{2} + \frac{2}{3}(\eta - 1)^2\right] + \frac{4}{3}\left[\frac{83}{320} + \left(\eta - 1 + \frac{3}{8}\right)^2\right] \right\}, \qquad i_z = \rho \pi r^5 \left[(\eta - 1) + \frac{8}{15}\right] \qquad \text{(spherocylinder)} \] where \( m_c \) and \( m_e \) are the masses of the cylinder and ellipsoid, respectively. for the spherocylinder, the moments of inertia are not easily expressed as a function of the total mass of the object and are therefore given in terms of the geometrical parameters and of the mass density \( \rho \). for a given cylinder or spherocylinder we determine and as a function of and from these relations and identify them with those of the ellipsoid, yielding the semi-axes of the equivalent ellipsoid and or, equivalently, and . it is easily checked that the effective aspect ratio does not depend on the radius: \( \eta_{\mathrm{eff}} = \sqrt{2\, i_x / i_z - 1} \), where the ratio \( i_x / i_z \) depends only on the aspect ratio. we performed numerical simulations of the extinction cross-section of anisotropic gold nanoparticles using the discrete dipole approximation (dda) code ddscat 7.3 with the filtered coupled dipole method. the refractive index of bulk gold is that given by johnson & christy, with no corrections for boundary dissipation. the ambient medium is water, described by a constant refractive index. the particles are discretized using 90 dipoles along the diameter and along the length, where goes from 1 to 8. the particle radius is 5 nm. the electric field is parallel to the long axis of the particles (along ) in order to probe the lpp. in the transverse configuration (electric field perpendicular to ), the spectrum changes very little with the aspect ratio. for randomly oriented particles, e.g. in colloidal solution, the overall spectrum is a superposition of the longitudinal and transverse components, so that a satisfactory description of the longitudinal spectrum ensures that the total spectrum is also correctly described. to extend the results to larger particle sizes (10 and 20 nm radii) we performed additional simulations using the boundary element method (bem) as implemented by the matlab package mnpbem. the two methods yield very similar results, as shown in appendix [sec:ddabem]. we fitted the extinction spectrum with the sum of a lorentzian peak and a cubic background to determine the position of the lpp, plotted in figure [fig:simul] for the three shapes, as a function of (left panel) and (right panel). as reference, we also added the lpp position for ellipsoids in the rayleigh limit, using the gans formula (ref. , eq. (5.32)). as increases, so does the difference in lpp position between different shapes.
when plotted against , however, the lpp values are much closer together and the variation is non-monotonic. the tendency also holds for larger particles, as shown in figure [fig:r10r20] for radii of 10 and 20 nm. for clarity, only the differences with respect to the rayleigh limit are shown, corresponding to the bottom of figure [fig:simul]. the peak positions for the spherocylinder and ellipsoid are remarkably close together, while those of the cylinder exhibit a red shift which is increasingly pronounced with the aspect ratio and the radius. this is probably due to field concentration at the sharp edges of the cylindrical particles, a localized effect that cannot be captured by our simplified model. we also checked our model against the experimental data measured for gold nanorods with varying aspect ratio in ref. . the mean aspect ratio and its standard deviation are determined via tem and are shown in figure [fig:ar] as grey bars versus the sample code. the uv-vis-ir spectroscopy curves are fitted with an ellipsoid model, yielding the distribution of _effective_ aspect ratio. its mean and standard deviation are shown as solid diamonds with error bars. if our model is correct, the values (open dots) obtained from the experimental distribution by inverting the middle relation in eq. are the true geometrical ones and should coincide with the tem results. for most samples this is indeed the case, confirming the improvement. one should however keep in mind that tem only samples a very small fraction of the particles (those deposited on grids and, among these, only those visible in the images), in contrast with uv-vis-ir spectroscopy, which averages over all particles contained in a few milliliters of solution. the latter technique should therefore be much more representative of the complete particle distribution.

figure [fig:ar]: mean aspect ratio determined by transmission electron microscopy (tem) (grey bars) and effective aspect ratio measured by uv-vis-ir spectroscopy (solid diamonds). the geometric aspect ratio for spherocylinders (open dots) is then extracted from by inverting eq. . the error bars (uv-vis-ir) and bar height (tem) indicate the mean ± standard deviation: or . based on figure 5 of ref. .

for ellipsoids in the rayleigh approximation, the lpp is completely described by a geometrical factor . one can then define an equivalent ellipsoid of a particle by fitting its simulated spectrum with the mie-gans formula and finding the corresponding . our approach here is completely different: we find the equivalent ellipsoid of the particle by applying very general principles to its mass distribution and without any consideration of its electrostatic or optical properties. we conclude that the lpp positions of particles with the same moments of inertia (defined in ) are significantly closer than for particles with the same aspect ratio. in conclusion, when the mass distribution is properly taken into account, the effective ellipsoid approximation describes fairly well the optical properties of elongated gold nanoparticles, in particular those of spherocylinders. we therefore expect the results can be generalized to other rounded particles (e.g. dumbbells), but probably not to those exhibiting edges or tips (e.g. prisms), where the field concentration is significant. it would also be interesting to study the influence of faceting, observed in certain nanorods, on their optical spectra.
combining this correction with an approximate analytical relation for the response of ellipsoids and accounting for the polydispersity in aspect ratio should yield a quantitatively accurate model for the experimental extinction spectra. the three moments of inertia and completely define the equivalent ellipsoid for a given particle, including its volume and hence its mass. in figure [fig:volume] we show the ratio of to the volume of the target particle, for the cylinder and spherocylinder and for aspect ratios from 1 to 8. for reference, we also show the same ratio for ellipsoids having the same aspect ratio as the target particle.

figure [fig:volume]: ratio of the volume of the equivalent ellipsoid to the volume of the target particle (cylinder or spherocylinder), based on identifying the moments of inertia (dotted line) or by using the same aspect ratio (solid line), compared to the exact value of 1 (dashed line).

clearly, the volumes are much closer when the corresponding ellipsoid is chosen based on the moments of inertia (the discrepancy is below 3%) than based on the aspect ratio (where the discrepancy can reach 33%). this result is noteworthy on two counts: first, because it supports our choice of the moments of inertia as relevant parameters, and second because the volume must be correctly described for practical applications, e.g. when estimating the particle concentration in solution from the extinction spectrum. the dda method is more time-consuming, so we only used it for the smaller particles (5 nm in radius) and employed the bem technique for the larger objects. to make sure that the results are compatible, we also ran the bem simulations for the small particles, with the results shown in figure [fig:ddabem].
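the route from moments to the equivalent ellipsoid can be checked numerically. the sketch below uses the spherocylinder moments reconstructed earlier (the 83/320 and 8/15 coefficients survive in the source; the remaining terms follow from the parallel-axis theorem) and inverts the ellipsoid relations \( i_z = \tfrac{8}{15}\rho\pi a b^4 \) and \( i_x = \tfrac{4}{15}\rho\pi a b^2 (a^2 + b^2) \) for the semi-axes. function names are mine, not the paper's.

```python
import numpy as np

def spherocylinder_moments(eta, r=1.0, rho=1.0):
    """transverse (i_x) and axial (i_z) moments of a solid spherocylinder."""
    c = rho * np.pi * r**5
    i_x = c * ((eta - 1) * (0.5 + (2 / 3) * (eta - 1)**2)
               + (4 / 3) * (83 / 320 + (eta - 1 + 3 / 8)**2))
    i_z = c * ((eta - 1) + 8 / 15)
    return i_x, i_z

def cylinder_moments(eta, r=1.0, rho=1.0):
    ell = 2.0 * r * eta
    m = rho * np.pi * r**2 * ell
    return m * (3 * r**2 + ell**2) / 12.0, m * r**2 / 2.0

def equivalent_ellipsoid(i_x, i_z, rho=1.0):
    """semi-axes (a, b) of the prolate ellipsoid with the same i_x, i_z, rho."""
    eta_eff = np.sqrt(2.0 * i_x / i_z - 1.0)    # = a / b, radius-independent
    b = (15.0 * i_z / (8.0 * rho * np.pi * eta_eff)) ** 0.2
    return eta_eff * b, b

for eta in (1, 2, 4, 8):
    i_x, i_z = spherocylinder_moments(eta)
    a, b = equivalent_ellipsoid(i_x, i_z)
    v_target = np.pi * 2 * (eta - 1) + (4 / 3) * np.pi   # spherocylinder, r = 1
    v_eq = (4 / 3) * np.pi * a * b**2
    print(eta, round(a / b, 3), round(v_eq / v_target, 3))
# eta = 1 recovers a sphere (a/b = 1, volume ratio 1), and the volume ratio
# stays within a few percent, consistent with the sub-3% discrepancy quoted
```

for the cylinder the same recipe gives the closed form \( \eta_{\mathrm{eff}} = 2\eta/\sqrt{3} \), which can be verified with `cylinder_moments`.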
we compare the light extinction spectra of elongated gold nanoparticles with different shapes ( cylinder , spherocylinder and ellipsoid ) and sizes of 10 to 100 nm . we argue that the equivalence of the various moments of mass distribution is the natural comparison criterion rather than the length - to - diameter ( aspect ) ratio generally used in the literature and that it leads to better spectral correspondence between the various shapes .
matrix completion amounts to estimating all the entries in a matrix from a partial and potentially noisy observation , where is a binary matrix with 1/0 entries masking a portion of through the element-wise multiplication, and an additive noise. with matrix rows, columns, and entry values assigned to various attributes, matrix completion may have numerous applications. for example, when the rows and the columns of are attributed to users and items such as movies and books, and an entry at position records a score given by the -th user to the -th item, matrix completion predicts users' scores on the items they have not yet rated, based on the available scores recorded in , so that a personalized item recommendation system becomes possible. this is a classical problem in collaborative filtering. to solve an ill-posed matrix completion problem, one must rely on some prior information, or in other words, some data model. the most popular family of approaches in the literature assumes that the matrix follows approximately a low-rank model, and calculates the matrix completion with a matrix factorization. theoretical results regarding the completion of low-rank matrices have been recently obtained as well, e.g. , and references therein. more elaborate probabilistic models and some refinements have been further studied on top of matrix factorization, leading to state-of-the-art results. in image processing, assuming that local image patches follow gaussian mixture models (gmm), yu, sapiro and mallat have recently reported excellent results in a number of inverse problems. in particular, for inpainting, which is an analogue to matrix completion where the data is an image, the maximum a posteriori expectation-maximization (map-em) algorithm for solving the gmm leads to state-of-the-art results, with a computational complexity considerably reduced with respect to the previous state-of-the-art approaches based on sparse models (which are analogous to the low-rank model assumption for matrices). in this paper, we investigate gaussian modeling (a particular case of gmm with only a single gaussian distribution) for matrix completion. subparts of the matrix, typically rows or columns, are regarded as a collection of signals that are assumed to follow a gaussian distribution. an efficient map-em algorithm is introduced to estimate both the gaussian parameters (mean and covariance) and the signals. we show through numerical experiments that the fast map-em algorithm, based on the gaussian model, which is the simplest probabilistic model one can imagine, leads to results in the same ballpark as the state-of-the-art in movie rating prediction, at a significantly lower computational cost. recent theoretical results further support the consideration of gaussian models for the recovery of missing data. section [sec:model:algo] introduces the gaussian model and the map-em algorithm. after presenting the numerical experiments in section [sec:numeric], section [sec:conclusion] concludes with some discussions. similar to the local patch decomposition often applied in image processing, let us consider each subpart of the matrix, the -th row for example, as a signal. let denote the -th row in the binary matrix , and let be the number of non-zero entries in . let denote a masking operator which maps from to , extracting the entries of corresponding to the non-zero entries of , i.e.
, all but the -th entries in the -th row of are zero, with the index of the -th non-zero entry in . let and be respectively the _sub-vector_ of the -th row of and , where the entries of are non-zero. with this notation, we can rewrite in a more general linear model for all the signals, . (working in the reduced dimension leads to a calculation in dimension instead of , which is considerably faster if .) note that can also be columns, or 2d sub-matrices of rendered in 1d. the gaussian model assumes that each signal is drawn from a gaussian distribution, with a probability density function , where and are the unknown covariance and mean parameters of the gaussian distribution. the noise is assumed to follow a gaussian distribution with zero mean and covariance , here assumed to be known (or calibrated). estimating the matrix from the partial observation can thus be cast as the following problem: * estimate the gaussian parameters , from the observation . * estimate from , , using the gaussian distribution. since this is a non-convex problem, we present an efficient maximum a posteriori expectation-maximization (map-em) algorithm that calculates a local-minimum solution. following a simple initialization, addressed in section [subsec:initialization], the map-em algorithm is an iterative procedure that alternates between two steps: 1. e-step: signal estimation. assuming the estimates are known (following the previous m-step), for each one computes the maximum a posteriori (map) estimate of . 2. m-step: model estimation. assuming the signal estimates , , are known (following the previous e-step), one estimates (updates) the gaussian model parameters . it is well known that under the gaussian models assumed in section [subsec:gaussian:model], the map estimate that maximizes the log a-posteriori probability is a linear estimator and is optimal in the sense that it minimizes the mean square error (mse), i.e., \( e_{\mathbf{f}_i,\mathbf{w}_i}\big[\|\mathbf{f}_i - \hat{\mathbf{f}}_i\|_2^2\big] = \min_{g} e_{\mathbf{f}_i,\mathbf{w}_i}\big[\|\mathbf{f}_i - g(\mathbf{y}_i)\|_2^2\big] \), where is any mapping from . the second equality of follows from the bayes rule, the third follows from the gaussian models and , and the last is obtained by differentiating the third line with respect to . the closed-form map estimate can be calculated fast. observe that is a sparse extraction matrix, each row containing only one non-zero entry with value 1, whose index corresponds to the non-zero entry in . therefore, the multiplication operations that involve or can be realized by extracting the appropriate rows or columns at zero computational cost. the complexity of is therefore dominated by the matrix inversion. as is positive-definite, it can be implemented with flops through a cholesky factorization. in a typical case where is the -th row of the matrix, , the total complexity of the e-step needed to estimate is therefore dominated by flops.
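as a concrete illustration, the following sketch implements the gaussian conditional map estimate for one row; since it is the posterior mean, it is standard gaussian conditioning, but the variable names and the calling convention are mine, not the paper's.

```python
import numpy as np

def map_estimate(y_obs, obs_idx, mu, sigma, noise_var=0.0):
    """map (= posterior mean) of one length-n gaussian signal given the
    p_i observed entries y_obs at column indices obs_idx, under the model
    parameters (mu, sigma) and i.i.d. noise of variance noise_var."""
    p = len(obs_idx)
    s_oo = sigma[np.ix_(obs_idx, obs_idx)] + noise_var * np.eye(p)
    # p x p cholesky solve: the cubic cost in p_i quoted in the text
    L = np.linalg.cholesky(s_oo)
    w = np.linalg.solve(L.T, np.linalg.solve(L, y_obs - mu[obs_idx]))
    return mu + sigma[:, obs_idx] @ w
```

for example, a user who rated movies 0 and 2 out of n movies would be completed with `map_estimate(ratings, np.array([0, 2]), mu, sigma)`; with zero noise the estimate reproduces the observed entries exactly.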
for typical rating prediction datasets that are highly sparse, among a large number of items, most users have rated only a small number of items, where is large. the total complexity of the e-step is thus dominated by flops. the parameters of the model are estimated/updated with the maximum likelihood estimate . with the gaussian model, it is well known that the resulting estimate is the empirical one . the empirical covariance estimate may be improved through regularization when there is a lack of data (take the example of standard rating prediction: with items and users, the dimension of the covariance matrix is ). a simple and standard eigenvalue-based regularization is used here, , where is a small constant. the regularization also guarantees that the estimate of the covariance matrix is full-rank, so that is always well defined. to estimate , the computational complexity of the m-step is dominated by the calculation of the empirical covariance estimate, requiring flops, which is negligible with respect to the e-step. as the map-em algorithm iterates, the map probability of the observed signals increases. this can be observed by interpreting the e- and m-steps as a coordinate descent optimization. in the experiments, the algorithm converges within 10 iterations. the map-em algorithm is initialized with an initial guess of , . the experiments show that the result is insensitive to the initialization for movie rating prediction. in the numerical experiments, all the unknown entries are initialized to 3 for datasets containing ratings ranging from 1 to 5 or 6. in a typical case where the matrix row decomposition in is considered, the overall computational complexity of the map-em algorithm to estimate an matrix is dominated by , with the number of iterations (typically ) and the available data ratio, with typically large ( for the standard movie ratings data). the algorithm is thus very fast. as each row is treated as a signal and the signals can be estimated in sequence, the memory requirement is dominated by (to store the covariance matrix).
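putting the two steps together gives the full loop. this is a minimal sketch of the alternation described above, assuming rows as signals, the unknown entries initialized to a constant, and an eigenvalue regularization `eps`; the exact constants used in the paper are not reproduced here.

```python
import numpy as np

def map_em(Y, mask, n_iter=10, eps=0.25, init=3.0):
    """single-gaussian map-em matrix completion; rows are the signals.

    Y:    m x n ratings matrix (values where mask == 0 are ignored)
    mask: m x n binary matrix, 1 where Y is observed
    """
    m, n = Y.shape
    F = np.where(mask == 1, Y, init).astype(float)
    for _ in range(n_iter):
        mu = F.mean(axis=0)                                  # m-step
        sigma = np.cov(F, rowvar=False) + eps * np.eye(n)
        for i in range(m):                                   # e-step
            obs = np.flatnonzero(mask[i])
            if obs.size == 0 or obs.size == n:
                continue
            w = np.linalg.solve(sigma[np.ix_(obs, obs)], Y[i, obs] - mu[obs])
            F[i] = mu + sigma[:, obs] @ w
            F[i, obs] = Y[i, obs]        # noiseless case: keep observed data
    return F
```

following the post-processing described above, the returned matrix would then be rounded and clipped to the rating range before scoring.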
following the standard procedure, users with fewer than 20 ratings and movies with fewer than 2 ratings are removed. this leaves us 36,656 users, 1,621 movies, and 2.5 million ratings (available data ratio ). we randomly select 30,000 users for the weak generalization, and 5,000 users for the strong generalization. the 1m movielens dataset contains 1 million ratings in the range for 3,900 movies (columns) and 6,040 users (rows). the same filtering leaves us 6,040 users, 3,592 movies, and 1 million ratings (available data ratio ). we randomly select 5,000 users for the weak generalization, and 1,000 users for the strong generalization. each experiment is run 3 times and the average is reported. the performance of the method is measured by the standard normalized mean absolute error (nmae), computed by normalizing the mean absolute error by a factor for which random guessing produces a score of 1. the factor is 1.944 for eachmovie, and 1.6 for movielens. in contrast to most existing algorithms in the literature, the proposed method, thanks to its simplicity, enjoys the advantage of having very few intuitive parameters. the covariance regularization parameter in is set equal to (whose square root is one order of magnitude smaller than the maximum rating), the results being insensitive to this value as shown by the experiments. the number of iterations of the map-em algorithm is fixed at , beyond which the convergence of the algorithm is always observed. the noise in is neglected, i.e., is set to , as the movie rating datasets mainly involve missing data, the noise being implicit and assumed negligible. the experiments show that considering rows of the matrix as signals leads to slightly better results than taking columns or 2d sub-matrices. this means that each user is a signal, whose dimension is the number of movies. as in previous works, a post-processing that projects the estimated rating to an integer within the rating range is performed. a matlab code of the proposed algorithm is available at http://www.cmap.polytechnique.fr/~yu/research/mc/demo.html. the results of the proposed method are compared with the best published ones, including user rating profile (urp), attitude, maximum margin matrix factorization (mmmf), ensemble of mmmf (e-mmmf), item proximity based collaborative filtering (ipcf), gaussian process latent variable models (gplvm), mixed membership matrix factorization (m ), and nonparametric bayesian matrix completion (nbmc). for each of these methods, several results produced with different configurations are often reported, among which we systematically cite the best one. all these methods are significantly more complex than the one proposed here. tables [tab:each:movie] and [tab:1m:movielens] present the results of various methods for both weak and strong generalizations on the two datasets. nbmc most often generates the best results, followed closely by the proposed method, referred to as gm (gaussian model), and gplvm, all of them outperforming the other methods. the results produced by the proposed gm, with a by far simpler model and faster algorithm, are in the same ballpark as those of nbmc and gplvm, the difference in nmae being smaller than about 0.005, marginal in a rating range that goes from 1 to 5 or 6. .nmaes generated by different methods for eachmovie database. the smallest nmae is in boldface.
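for reference, the nmae used in these comparisons is just the mean absolute error divided by the random-guessing factor quoted above; a one-line sketch:

```python
import numpy as np

def nmae(pred, truth, factor):
    """normalized mean absolute error; factor = 1.944 for eachmovie
    and 1.6 for 1m movielens, so random guessing scores 1."""
    return np.abs(np.asarray(pred) - np.asarray(truth)).mean() / factor
```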
[cols="^,^,^",options="header " , ]we have shown that a gaussian model and a map - em algorithm provide a simple and computational efficient solution for matrix completion , leading to results in the same ballpark as state - of - the - art ones for movie rating prediction .future work may go in several directions .on the one hand , the proposed conceptually simple and computationally efficient method may provide a good baseline for further refinement , for example by incorporating user and item bias or meta information . on the other hand ,gaussian mixture models ( gmm ) that have been shown to bring dramatic improvements over single gaussian models in image inpainting , are expected to better capture different characteristics of various categories of movies ( comedy , action , etc . ) and classes of users ( age , gender , etc . ) .however , no significant improvement by gmm over gaussian model has yet been observed for movie rating prediction .this needs to be further investigated , and such improvement might come from proper grouping and initialization .* acknowledgments : * work partially supported by nsf , onr , nga , aro , and nsseff . we thank s. mallat for co - developing the proposed framework , and m. zhou and l. carin for their comments and help with the data .
a general framework based on gaussian models and a map-em algorithm is introduced in this paper for solving matrix/table completion problems. the numerical experiments with the standard and challenging movie ratings data show that the proposed approach, based on probably one of the simplest probabilistic models, leads to results in the same ballpark as the state-of-the-art, at a lower computational cost. keywords: matrix completion, inverse problems, collaborative filtering, gaussian mixture models, map estimation, em algorithm
exchanges in networks have been studied for a long time in both sociology and economics . in sociology , they appear under the name of _ network exchange theory _ , a field which studies the behaviour of agents who interact across a network to form bilateral relationships of mutual benefit .the goal is to determine how an agent s location in the network influences its ability to negotiate for resources . in economics , they are known as _ cooperative games _ and have been used for studying the distribution of resources across a network , for example in the case of two - sided markets . from a theoretical perspectivethe most commonly used framework for studying such exchanges is that of _ network bargaining games_. the model consists of an undirected graph with edge weights and vertex capacities .the vertices represent the agents , and the edges represent possible pairwise contracts that the agents can form .the weight of each edge represents the value of the corresponding contract .if a contract is formed between two vertices , its value is divided between them , whereas if the contract is not formed neither vertex receives any profit from this specific contract .the capacity of each agent limits the number of contracts it can form .this constraint , together with an agent s position in the network determine its bargaining power .a _ solution _ for the network bargaining model specifies the set of contracts which are formed , and how each contract is divided . specifically , a solution consists of a pair , where is a -matching of the underlying graph , and is a vector which assigns each edge two values corresponding to the profit that agent , respectively agent , earn from the contract . to be a valid solution , the two values and must add up to the value of the contract whenever the edge belongs to the -matching , and must be zero otherwise .solutions to network bargaining games are classified according to two main concepts : _ stability _ and _ balance_. a solution is stable if the profit an agent earns from any formed contract is at least as much as its outside option .an agent s _ outside option _ , in this context , refers to the maximum profit that the agent can rationally receive by forming a new contract with one of its neighbours , under the condition that the newly formed contract would benefit both parties .the notion of balance , first introduced in , , is a generalization of the nash bargaining solution to the network setting .specifically , in a balanced solution the value of each contract is split according to the following rule : both endpoints must earn their outside options , and any surplus is to be divided equally among them .balanced solutions have been shown to agree with experimental evidence , even to the point of picking up on subtle differences in bargaining power among agents .this is an affirmation of the fact that these solutions are natural and represent an important area of study .there is a close connection between network bargaining games and _ cooperative games_. specifically given a solution to the network bargaining game we can define a corresponding payoff vector , where is just the total profit earned by vertex from all its contracts in the solution .then this vector can be seen as a solution to a corresponding cooperative game defined as follows : we let denote the set of players , and for every subset of players , we define its _ value _ as the weight of the maximum weight -matching of } } w(m ) . 
\notag\end{aligned}\ ] ] then the pair denotes an instance of the _ matching game _ of shapley and shubik .we will refer to this as the matching game associated with the instance .given and two vertices we define the _ power _ of vertex over vertex with respect to the vector as where .we write instead of whenever the context is clear .the _ core _ of the game is defined as the set the _ prekernel _ of the game is the set first attempt towards computing balanced solutions for the network bargaining game with general capacities is to use the connection to cooperative games presented in . for the special class of unit capacity andconstrained bipartite games , bateni et al .show that the set of stable solutions corresponds to the core , and the set of balanced solutions corresponds to the intersection of the core and prekernel of the associated matching game .this implies that efficient algorithms , such as the one of , can be used to compute points in the intersection of the core and prekernel from which a balanced solution can be uniquely obtained .the first question of interest is whether this equivalence between balanced solutions and the intersection of the core and prekernel extends to network bargaining games with arbitrary capacities .the following lemma proves that this is not always the case .[ lem : 1 ] there exists an instance of the network bargaining game and a vector such that there exists no balanced solution satisfying . consider the following graph where every vertex has capacity 2 and the edge weights are given above each edge consider the vector defined as for all .we now show that the vector is in the intersection of the core and prekernel and there exists no balanced solution corresponding to .let denote the outer cycle on vertices and let and denote the inner cycles on vertices and respectively .the unique optimal -matching is the cycle with weight . since any stable , and therefore balanced, solution must occur on a maximum weight -matching , any balanced solution will have .it can be easily checked that . to check that we compute the powers for all pairs of vertices . for the pair we have : similarly for the pair we have : and for the pair : the pairs and satisfy the prekernel condition . by symmetryso do and .note that for any pair of non - adjacent vertices , one of the two cycles or will contain but not , and viceversa .therefore for all non - adjacent pairs .this proves that is in the prekernel .we now show that there is no vector such that is a balanced solution corresponding to the vector .first note that vertices and have an outside option of zero in any solution , since there are no edges in incident with these vertices .hence the contracts and have to be split evenly in any balanced solution .since each vertex must have a total profit of from its two contracts in , this uniquely determines all values of the vector , which are shown in the figure below the minimum contract of both and is and therefore . however , the edge ( and also the edge by symmetry ) violates the balance condition since while .note that this instance does possess a balanced solution as shown in the figure below here the outside option of both and is and all edges in the matching satisfy the balance condition .we also remark that the allocation associated with this balanced solution is also in the intersection of the core and prekernel . 
in view of lemma [lem:1], we cannot hope to extend the correspondence between balanced solutions and allocations in the intersection of the core and prekernel to all network bargaining games. however, we can generalize the results of by characterizing a larger class of network bargaining games, including unit capacity and constrained bipartite games, for which this correspondence holds. we achieve this by defining a certain gadget whose absence, together with the fact that the -matching is acyclic, will be sufficient for the correspondence to hold. let be an instance of the network bargaining game and a solution. consider a vertex with and let be a neighbour of in . let be vertex 's best outside option and, if is saturated in , let be its weakest contract. using these definitions we have . we say that is a bad vertex in the solution if at least one of the following two conditions holds: 1. there is a path in , 2. there is a path in that does not pass through vertex . we refer to such or paths as gadgets of the solution. the following figure depicts these two types of gadgets; solid lines denote edges in and dashed lines denote edges in . we can now state our main theorem of this section. [thm:1] let be an instance of the network bargaining game. let and be a corresponding stable solution so that for all . if the following two conditions are satisfied: 1. is acyclic, 2. there are no bad vertices in the solution, then the following statement holds: . fix . note that it suffices to show , since this would imply that if and only if . our strategy is to first show that is upper bounded, after which it will be sufficient to find a set for which achieves this upper bound. we start with the following lemma. [lem:2] . let such that and . let be a maximum weight -matching in . define the sets of ordered pairs . note that all these sets are pairwise disjoint, and for all , the mapping is defined case by case, taking the value otherwise. it follows from the definition of the sets and the choice of the mapping that the sets are well defined, are pairwise disjoint, and are all subsets of . let . then from equation and the fact that each ordered pair of belongs to at most one set we obtain . now it follows from stability that for all we have . to see this, consider . if for some and . let and be obtained using the construction given in section [const]. note that and have the same number of edges, and each edge is mapped to the unique edge , where and . this allows us to go back and forth between solutions on and by dividing the weight of each edge in the same way as its corresponding pair. note that if and only if . the following lemma shows that the mapping given by the function and its inverse defines a bijection between the and . let and . we show that . take . suppose and . then from the construction of and it follows that . we have . furthermore, if , then from the definition of we have . now let and . we show that . take .
from the construction of and there must exist an edge such that and . we have . furthermore, if is uncovered in , then by definition . [lem:key] let be an instance of the network bargaining game and a -matching on . suppose the auxiliary instance and the matching were obtained using the construction given in section [const]. let be a solution to and a solution to such that . then for any and any we have . we first show that . we may assume that , since otherwise there is nothing to show. let be vertex 's best outside option in . that is, , where the weakest contract is \( \min_{vw \in m} z_{vw} \). 1. is not saturated in . then the vertex is in and it is not covered by . since , we have . 2. is saturated in . let . suppose that . then is covered in and . since , we have . we now show that . we may assume that . let be vertex 's best outside option in . that is, such that and . since , we must have . again, we have two cases: 1. is not covered in . then the vertex is not saturated in and . 2. is covered in . then there exists such that and . we have . using lemma [lem:key] we can now prove that stability and balance are preserved when mapping between solutions of the network bargaining game and the corresponding unit capacity game of the auxiliary instance. [thm:main] let be an instance of the network bargaining game and a -matching on . suppose the auxiliary instance and the matching were obtained using the construction given in section [const]. let be a solution to and a solution to such that . then: . let . suppose that . then , and using lemma [lem:key] we have . it remains to show that if is stable then for any unsaturated vertices of . suppose is such a vertex. then the vertex is not covered in and therefore . if is stable, then , and by lemma [lem:key] we have , as desired. this completes the proof of the first statement. to prove the second statement, let and suppose that and . then , and by lemma [lem:key] we have: . this completes the proof. 1. find a maximum -matching in . 2. obtain the unit capacity game with matching using the construction from section [const]. 3. find a balanced solution on the matching in . 4. set and return . we note that step 3 of the algorithm can be implemented using the existing polynomial time algorithm of kleinberg and tardos. given any instance of a network bargaining game with unit capacities together with a maximum weight matching, their algorithm returns a balanced solution on the given matching, whenever one exists. using theorem [thm:main] we know that stable solutions of the original problem map to stable solutions of the matching problem and vice versa. since any stable solution must occur on a -matching, respectively matching, of maximum weight, we have the following corollary: let be an instance of the network bargaining game and a -matching on . suppose the auxiliary instance and the matching were obtained using the given construction. then 1. is a maximum weight -matching for if and only if is a maximum weight matching for . 2.
there exists a balanced solution for on the -matching if and only if there exists a balanced solution for on the matching . it was previously shown in that a unit capacity game possesses a balanced solution if and only if it has a stable solution, which in turn happens if and only if the linear program for the maximum weight matching of the underlying graph has an integral optimal solution. for the case of the network bargaining game with general capacities, have shown that a stable solution exists if and only if the linear program for the maximum weight -matching of the underlying graph has an integral optimal solution. in terms of existence of balanced solutions, they only obtain a partial characterization by proving that if this integral optimum is unique, then a balanced solution is guaranteed to exist. our results imply the following full characterization for the existence of balanced solutions, thus extending the results of .
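the vertex-splitting step behind the reduction can be sketched in a few lines. the details of section [const] are largely missing from this text, so the wiring of the copies below is only a plausible reading, not the paper's verbatim construction; all names are mine.

```python
from itertools import product

def auxiliary_instance(w, cap, M):
    """split every vertex v into cap[v] unit-capacity copies (v, 0..cap[v]-1).

    w:   dict mapping each undirected edge (u, v), listed once, to its weight
    M:   the b-matching, as a set of (u, v) tuples keyed like w
    each matched edge is routed through a dedicated pair of copies, so M maps
    to a matching M2 of the auxiliary graph; every unmatched edge is
    duplicated between all pairs of copies."""
    vertices = {x for e in w for x in e}
    copies = {v: [(v, i) for i in range(cap[v])] for v in vertices}
    used = {v: 0 for v in vertices}      # next free copy of each vertex
    E2, M2 = {}, set()
    for (u, v) in M:                     # matched edges: dedicated copies
        cu, cv = (u, used[u]), (v, used[v])
        used[u] += 1
        used[v] += 1
        E2[(cu, cv)] = w[(u, v)]
        M2.add((cu, cv))
    for (u, v), weight in w.items():     # unmatched edges: all copy pairs
        if (u, v) in M:
            continue
        for cu, cv in product(copies[u], copies[v]):
            E2[(cu, cv)] = weight
    return E2, M2
```

a solution of the unit-capacity game on the auxiliary graph is then folded back by giving each original edge the split of its corresponding copy pair, as described above.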
we study _ balanced solutions _ for _ network bargaining games _ with general capacities , where agents can participate in a fixed but arbitrary number of contracts . we provide the first polynomial time algorithm for computing balanced solutions for these games . in addition , we prove that an instance has a balanced solution if and only if it has a stable one . our methods use a new idea of reducing an instance with general capacities to a network bargaining game with unit capacities defined on an auxiliary graph . this represents a departure from previous approaches , which rely on computing an allocation in the intersection of the _ core _ and _ prekernel _ of a corresponding _ cooperative game _ , and then proving that the solution corresponding to this allocation is balanced . in fact , we show that such cooperative game methods do not extend to general capacity games , since contrary to the case of unit capacities , there exist allocations in the intersection of the core and prekernel with no corresponding balanced solution . finally , we identify two sufficient conditions under which the set of balanced solutions corresponds to the intersection of the _ core _ and _ prekernel _ , thereby extending the class of games for which this result was previously known .
``_follow me_'' (fm) is probably the most natural strategy to solve the guidance problem, that is, to guide something or somebody along a given trajectory or towards a specific target, whether physically along geographical paths (streets, roads, buildings), conceptually (language learning, politics) or even spiritually (religion, social networks). the success of the fm strategy is based on the effect of the attraction that the guide (the leader, the driver) exerts on the guided (followers, driven). also successful is the ``_do as i do_'' strategy, based on imitation (of behavior) or alignment (of opinion), and used in the above mentioned fields, among others (social learning). less expected is the effectiveness of the opposite strategy, namely ``_move away from me_'' (ma), based on repulsive interactions. that repulsion can serve to guide something or somebody is shown by nature itself, not only by means of gradient fields (electromagnetic, temperature or chemical fields), where attraction towards high densities can be viewed as repulsion from low densities, but also by specifically repelling targets or agents. in neural development, axonogenesis takes place by combining attractive and repulsive guidance, so that the axon growth follows guidance cues presented by chemoattractant and chemorepellent molecules located in the environment of the cell. in animal herding, sheepdogs are used to guide sheep flocks through a repulsive force that dogs exert on sheep. guidance cues can also be magnetic, as in drug targeting, thermotactic, as in sperm guidance, cognitive, as in crowd motion and traffic flow or in opinion formation, acoustic signals, as in animal alarm calls or instrumental conditioning, food trail pheromones in ants (chemical at a scale larger than the cell), etc. attractive and alignment guidance problems have been studied for a long time by means of agent based models, with attention to guidance by leadership, and optimal strategies to minimize guidance time (optimal evacuation times) or the distance travelled by the agents have been found for several systems. very recently, in 2015, the optimal strategy for a flocking model to reach a target point or to follow a given trajectory through attractive and alignment guidance has been presented. in this model, individuals from the flock interact through attractive-repulsive and alignment forces with the rest of individuals. interactions are symmetric, except for one specific individual, the leader, which exerts on each other individual an extra attractive force. the result is that the leader is followed by the flock, so that, by controlling the behavior of the leader, an fm strategy can be used to make the flock reach a given target or move along a given trajectory. repulsive forces in attractive-repulsive models have mostly been considered for collision avoidance with obstacles or with other agents. guidance by repulsion has received much less attention, which is reduced, to our knowledge, to the above mentioned axonogenesis and animal herding.
also very recently, in 2015, an agent-based model has been introduced to describe a so-called defender-intruder interaction, where repulsion is used by a defender to expel an intruder as far as possible from a protected target. the authors in find an optimal ma strategy, which does not always consist in approaching the intruder as close as possible, but in simply driving the intruder away beyond a short security distance. repulsion in the intruder-defender interaction is qualitatively different than in collision avoidance or in interception, where the attractive and/or alignment forces determine (most of) the behavior. defender-intruder problems fall into the category of ``conflicting interactions'', which are well described by the classical pursuit-evasion (pe) framework. the simplest scenario for a pe interaction consists of a single pursuer that follows and tries to capture a single evader that tries to escape to infinity from the pursuer. although our interest does not focus on conflicting interactions, we adopt here the pe framework. the guidance by repulsion can indeed be described with the simple two-agent pe framework, provided two considerations are taken into account: first, the guide is not exactly a pursuer, as it often deviates from the direction towards the evader and cannot be arbitrarily close to the evader, and second, the agent to be guided is not exactly an evader, as it does not necessarily try to escape to infinity but simply moves away a short distance from the repelling guide. we present here a guidance by repulsion model based on the two-agent pe framework. we will refer to the guiding agent as _the driver_, which tries to drive the _evader_. the driver thus follows the evader but cannot be arbitrarily close to it. this is especially interesting if the driver cannot approach the evader or contact between agents should be avoided (because of chemical reactions, animal conflict, etc). the evader moves away from the driver but does not try to escape beyond a not so large distance. the driver is of course faster than the evader. at a critical short distance, the driver can display a circumvention maneuver around the evader that forces the evader to change the direction of its motion. thus, by adjusting the onset and offset of the circumvention maneuver, the evader can be driven towards a desired target or along a given trajectory. our goal is to find optimal strategies to drive the evader in the most efficient way. we use an inertial model where interactions between agents take place through asymmetric newtonian forces. the asymmetry consists in that one agent is attracted and repulsed, while the other is simply repulsed. this kind of interaction has been coined ``anti-newtonian'' by sprott and others. thus, velocities are not constant (they depend on the state of the system), and no alignment forces are considered. we denote by and (and indexes and ) the driver and the evader agents respectively. both agents obey newton's second law, that is, and , where , , and denote the position vector, the velocity vector and the mass of agent , and the resultant force to which agent is subject, respectively, for . the force acting on the evader has only one component, which is in the direction of escape from the pursuer, .
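a minimal numerical sketch of this two-agent setting may help fix ideas. since the precise force expressions (the equations referenced below as ([eq-vp-d]) and ([eq-ve-d])) are not reproduced here, the force functions in the following python fragment are illustrative stand-ins: a pure radial repulsion on the evader and, for the driver whose force is described next, an attraction plus a switchable lateral term; all gains, decay exponents and the friction terms are assumptions.

```python
import numpy as np

def evader_force(u_d, u_e, c_rep=1.0):
    # repulsion only: the evader is pushed along the unit vector pointing
    # from the driver to the evader; the 1/d^2 decay is an assumed choice
    r = u_e - u_d
    d = np.linalg.norm(r)
    return c_rep * r / d**3

def driver_force(u_d, u_e, kappa, c_attr=1.0, c_perp=0.5):
    # attraction towards the evader plus, when the circumvention mode is
    # active (kappa = +1 or -1), a perpendicular component that makes the
    # driver turn around the evader; kappa = 0 gives pure pursuit
    r = u_e - u_d
    d = np.linalg.norm(r)
    t_perp = np.array([-r[1], r[0]]) / d   # unit vector perpendicular to r
    return c_attr * r / d + kappa * c_perp * t_perp

def euler_step(state, kappa, dt=1e-2, m_d=1.0, m_e=1.0, nu=0.5):
    # one explicit euler step of newton's second law for both agents,
    # with an assumed linear friction -nu*v added to each resultant force
    u_d, v_d, u_e, v_e = state
    a_d = (driver_force(u_d, u_e, kappa) - nu * v_d) / m_d
    a_e = (evader_force(u_d, u_e) - nu * v_e) / m_e
    return (u_d + dt * v_d, v_d + dt * a_d,
            u_e + dt * v_e, v_e + dt * a_e)
```

switching kappa between 0 and +1/-1 along the trajectory reproduces, in this toy setting, the pursuit and circumvention behaviors discussed in the following.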
the force acting on the pursuer has a component collinear to and a lateral perpendicular component which allows the pursuer to surround the evader, therefore forcing the evader to change the direction of escape. here . the perpendicular component of the force acting on the pursuer can be switched on and off by means of the control parameter , which takes values in . the control parameter is the key ingredient of the model, as it determines the behavior of the pursuer, which in turn determines the behavior of the evader. the resulting dynamical system can be considered as a driver-evader system with two operating modes controlled by a single parameter. contents of the paper and sketch of the results: section titles are self-explanatory. in sec. 2, we introduce the model equations and parameters, describing in detail the interactions between agents and with the environment. we show that the driver-evader system can be viewed as having two operating modes controlled by a single parameter, so that the system can be moved from one state to another in order to make the driver guide the evader from any point to any other point. we study the controllability of the system, and prove that, although the system is not fully controllable, agents' positions remain asymptotically close to each other (_i.e._, an agent cannot go to infinity). in sec. 3, we consider the optimal strategies which allow the driver to guide the evader to a desired target. activating a system has a cost, as well as keeping it in the active mode. as an illustrative example, the driver can be viewed as a spacecraft with two lateral propellers whose ignition process and fuel consumption are very expensive with respect to the consumption of the back propeller. our interest is in reducing the cost by minimizing 1) the number of activations and 2) the time of use of the lateral propellers. to do that, we formulate an optimal control problem for a cost functional including these two costs. we then find, in sec. 3.1, the (unique) optimal strategy allowing to reach the target in an initially active system (, and the (unique) optimal strategy which reduces the length of the time interval ], the system tends to a periodic configuration where both agents are separated by a constant distance (not shown in the figure) and have the same angular velocity of norm (the period of the oscillations is ; the bottom-right panel of fig. [modes] shows that ). when the driver is sufficiently close to the evader, the circumvention mode is effective and triggers the circular behavior of the agents. see the right panels of fig. [modes], where the driver initial position at is so far from the initial position of the evader that, although the control is set to from to , the first part of the trajectory is almost a straight line. until , both agents are almost still on the horizontal axis: , . at , and , the driver is close enough to the evader and the circular behavior becomes perceptible. see the oscillations of and , of period , in the right-bottom panel of fig. [modes]. [remark1] *1:* when the driver is far from the evader (_i.e._, ), the term between brackets in eq. ([eq-vp-d]) is such that , so the first term of this equation (which has coefficient ) is of order , while the term with is of order . then, eq.
([eq-vp-d]) can be reduced to [remark2] *2:* in particular, it can be observed that, when , then , so that the term deviating the driver from the pure pursuit trajectory is negligible with respect to the term corresponding to the attracting force exerted by the evader. therefore, when the driver is sufficiently far from the evader, the value of has no influence on the behavior of the driver, meaning that can be set to zero. the circumvention mode can be viewed as the _active_ state of the system, where the control is set to on, while the pursuit mode is the _rest_ state of the system, where is set to off. with the appropriate combination of both modes, the driver is able to make the evader reach any given target point or move along any (relatively smooth) given trajectory. the resulting behavior of such a combination is what we call a _driving_ behavior. see fig. [tray]. this strongly suggests that the system can be controlled with a single control parameter, . in control theory, a system is said to be _fully controllable_ when, starting from any arbitrary initial state, every possible state of the system can be reached by appropriately adjusting the control parameters (see, _e.g._, ). in this sense, our numerical simulations have shown that the agents cannot be separated by an arbitrarily large distance, so that the driver-evader system cannot be labelled as fully controllable. in fact, we prove analytically in appendix a that, for any initial state, the separation between agents is bounded from above and tends to a distance (of order 1), showing that the evader cannot escape from the driver to infinity, and the driver cannot move away from the evader to infinity. moreover, numerical simulations show that agents' velocities are also bounded and that, when remains unchanged for sufficiently long time intervals, both velocities tend asymptotically to the same constant value ( or ). finally, agents are prevented from occupying the same place at the same time (_i.e._, ) by the strong short-range repulsion force that the driver exerts on the evader, as our numerical simulations confirm. we will say instead that the driver-evader system is _partially controllable_, in the sense that each agent can be controlled separately: the driver can force the evader to reach any point in the plane (this is shown in the successive sections), and vice versa, a series of targets for the evader can be selected so that the driver is driven to reach any point in the plane. the question arises now as to how the driver-evader system behavior can be optimized to minimize a given cost functional accounting for the use of the lateral propellers. denote by a ball of radius centered at , assume that the evader is initially far from the driver (_e.g._, ), and assume also that , and the ball are not aligned, that is, . consider now the objective of driving the evader into the target ball at a final time by controlling the driver with an appropriate strategy. as , and are not aligned, the circumvention mode has to be activated and/or has to remain active a suitable time to modify the trajectory of the evader and guide it towards the target ball. such an objective must appraise the cost of 1) forcing the system to leave its resting state, and/or 2) keeping the system in an active state. this cost is given by the functional where is the number of times that the system is forced to leave its resting state in the time interval ] (_i.e.
_, the time spent with a lateral propeller in active mode, where has a nonzero value), and are nonnegative weights fixed to balance the contribution of each partial cost. an optimal control problem can then be formulated as minimizing over admissible controls \([0,t_f] \to \{-1,0,1\}\) such that \(\vec{u}_e(t_f) \in b_\rho(t)\), where is the set of admissible controls. here we solve for and in sections 3.1 and 3.2 respectively, finding the corresponding optimal open-loop controls. our results show that the system is highly sensitive to small variations of the conditions of the problem, so that a general open-loop control for would not be of practical interest. instead, we provide in section 3.3 a feedback control law for that substantially reduces the cost of the open-loop controls for , and keeps the number of ignitions at a relatively low value ( ; see later). solving for consists in minimizing for , with . indeed, if , then , because and are initially not aligned with the target ball and no ignition process can be used to modify the trajectory of the evader, so for all . similarly, solving for consists in minimizing for of the form in a time interval ], where is the initial value at time : . then, if is sufficiently large, there exists an interval \(\subset (t_0,t_f)\) ], there exists a time for which the evader is in the interior of the ball of radius centered at the target. that is: , \(\exists\, t \in (t_0,t_f)\) such that \(\|\vec{u}_e(t) - \vec{u}_t\| < r\). moreover, if , then the interval ], so that the optimal strategy is . for a sufficiently small value of (we used in fig. [shoot-1change]), the interval ] minimizes . fig. [shoot-double] shows the cost function as a function of : . when is sufficiently large, the cost tends to a constant value which is the time it takes the driver to make the evader turn back towards the target. see fig. [turnbacks], where two examples with different ignition times yield (approximately) the same cost. this value constitutes a substantial reduction of the optimal cost found for . in fact, fig. [shoot-double] shows that there is a wide range of values of yielding a similar value of the cost, so that, for a tolerance larger than , the optimal control would not be unique. this would allow the system to accept other criteria or another restriction to determine the optimal strategy, which could be for instance to minimize the total time or the total distance travelled by the agents (and which would be equivalent to considering the cost of back propellers). however, if variations of of order are relevant, a closer analysis shows that the first linear decreasing range in fig. [shoot-double], which lasts until , is due to the fact that during this interval of time ]. in turn, feedback laws have to pay the cost of continuously monitoring the position and velocity of the agents. we present here a feedback control law based on the following observations: 1. in real situations, the orientation of the vector used in the shooting method can be difficult to observe with the accuracy required by the high sensitivity of the system.
instead, the alignment of the driver and the evader with the target point is easier to observe and is a good approximation of . 2. when the driver is sufficiently far from the evader, can be set to zero (remarks 1 & 2). the instantaneous information about the state of the system is processed in real time to determine the distance separating both agents and the alignment. the alignment can be characterized by the following scalar product (time dependence is omitted to lighten notation): the sign of reveals in which half-plane the target is with respect to the line , and can be used to determine the sign of . moreover, is an instantaneous measure of how urgently the control must be set to on. let us consider a maximal tolerance of deviation . the feedback control law is based on the idea that when is smaller than , it is possible to consider that is practically on the line , so that can be set to off, thus saving cost, and when , the deviation is excessive and the control must be set to on. the tolerance of deviation is an effective bound for both the angle and the intensities of the velocities ( ), so it also restricts the velocities of the agents: a slightly deviated evader at a high speed can miss the target as well as a largely deviated evader at a lower speed. note also that when , the control can be switched off provided the evader and the target are on the same side with respect to the driver, in order to prevent the driver from driving the evader away from the target; that is, can be set to zero only if the scalar product is negative. finally, remark 2 is introduced into the feedback law by means of the characteristic function , which serves to switch off the control when the driver is far ( ) from the evader. the feedback control law can then be written as follows: we have solved the system ([eq-up])([condinis]) numerically using the condition to have in expression ([fidbaclaw]). we have considered an alignment tolerance . the rest of the values are as in previous sections. let us refer to this case as case (a). the result is that the feedback law substantially reduces the cost obtained with the open-loop control in sec. 3.1: , an improvement of 60% with respect to , with however a slight increase in the number of ignition processes, from to . see figs. [feedbk-1], [feedbk-2-3] and [feedbacks](a1),(a2). fig. [feedbk-1] shows the trajectories of the agents (the whole trajectories in the left panel, and a zoom of ], with and . the solution found in fig. [feedbk-1] with the feedback law ([fidbaclaw]) can indeed be considered a good solution of the control problem.
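a schematic implementation of the switching rule just described may clarify its logic. the signed alignment measure below, a normalized cross product, stands in for the scalar product of the text (whose exact expression is not reproduced here), and the tolerance, the far-distance cutoff and the sign conventions are assumptions.

```python
import numpy as np

def feedback_control(u_d, u_e, u_t, theta_c=0.1, d_far=5.0):
    """return the control value in {-1, 0, +1}; u_d, u_e, u_t are the
    positions of driver, evader and target (2d numpy arrays)."""
    r = u_e - u_d                      # driver-to-evader direction
    g = u_t - u_e                      # evader-to-target direction
    if np.linalg.norm(r) > d_far:
        return 0                       # driver too far: lateral term useless
    # signed, normalized alignment measure: its sign tells on which side of
    # the line (driver, evader) the target lies, its size how urgent it is
    b = (r[0] * g[1] - r[1] * g[0]) / (np.linalg.norm(r) * np.linalg.norm(g))
    if abs(b) < theta_c and np.dot(r, g) > 0:
        return 0                       # aligned and target ahead: rest mode
    return 1 if b > 0 else -1          # activate circumvention on that side
```

in a simulation loop this function would be evaluated at every step and its output fed to the driver dynamics, which is what makes the law a closed-loop (feedback) control rather than a precomputed open-loop schedule.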
however, this is not a general situation, as shown by the wide range of cases analysed in our numerical simulations, because of the high sensitivity exhibited by the system. let us illustrate this observation here by showing the results for two slightly different external conditions; see cases (b) and (c) in figs. [feedbacks] and [feedbk-2-3]. cases (b) and (c) use an alignment tolerance . fig. [feedbk-2-3] shows that the evader follows almost the same trajectory as in case (a) (depicted in the figure to facilitate the comparison) and reaches the target with a more accurate orientation. however, such a small deviation requires an enormous increase of the use of the lateral propellers, as shown in fig. [feedbacks](b2). not only is the cost of having the control set to 1 larger, (and larger than with the open-loop controls), but a much greater number of ignition processes is required ( ), moreover involving both the right and the left propellers alternately (_i.e._, ). on the other hand, case (c) shows that small variations of the switching times of the feedback control can produce huge qualitative differences in the behavior of the agents. we have introduced a small perturbation of the first interval of activation ], the driver can guide the evader to any desired target or along any (relatively smooth) path. we have then formulated an optimal control problem to find the optimal guidance strategy minimizing the cost in terms of the number of times the system is activated from its resting state and the time the system spends in the active mode. by means of (numerical) shooting methods, we have obtained the optimal open-loop strategies for the case where the number of activations is equal to or smaller than one, finding that the system is highly sensitive to small variations of the activation/deactivation times. these results show that open-loop controls would not be of practical interest in real problems due to the presence of external perturbations, thus suggesting the use of a feedback law. taking advantage of the information provided by the study of the behavior of the system under open-loop controls for , we have designed a feedback law for the case where , which makes it possible to correct in real time for deviations from the desired trajectory. we have found that the feedback law is also highly sensitive to small variations of the conditions of the problem, in this case, of the accuracy with which the target is reached (_i.e._, the radius of the target ball).
moreover, our results show that the feedback law, and therefore the resulting behavior of the system, are highly sensitive to possible delays in the switching times of the control. this means that, in systems or devices where the manipulation of the control cannot be carried out at arbitrarily close instants of time, the behavior of the driver can exhibit large oscillations that can produce an increase of the cost. this may happen in situations where time delays exist in collecting and interpreting the data about the state of the system or in the reaction time of the system once the control is changed, especially when two consecutive changes are very close in time. the main direction for the immediate future work consists in taking into account the cost for the driver to get close to the evader, both in time and travelled distance. this would correspond to adding the cost of back propellers to the cost functional: and finding a feedback control law for the two controls and . the interest of guidance by repulsion could also be extended to the case where the evader's behavior has a stochastic component and when multiple agents (evaders and/or drivers) are considered. the feedback law will be especially relevant when noise is considered in both the behavior of the agents and in the manipulation of the data. we thank the anonymous reviewers for their interesting suggestions and for pointing out the interest of the open problem which includes the cost of back propellers in the cost functional. this material is based upon work that has received funding from the european union's horizon 2020 research and innovation programme under the marie skłodowska-curie grant agreement no 655235, entitled ``smartmass'', the advanced grant numeri-waves/fp7-246775 of the european research council executive agency, the berc 2014-2017 program and the pre-doctoral grant pre-2014-1-461 of the basque government, the fa9550-15-1-0027 of afosr, the mtm2014-52347 and mtm2011-29306 grants and the severo ochoa program sev-2013-0323 of the mineco, and a humboldt award at the university of erlangen-nürnberg. numerical simulations show that when continuously for a sufficiently large time, both agents' velocities converge asymptotically to the same constant velocity. in that state, and, from eqs. ([eq-vp-d])([eq-ve-d]), we have \(= -\frac{c^d_e}{\nu_e}\,\frac{\vec{u}_d - \vec{u}_e}{\delta_{\rm as}^2} = \vec{v}_{\rm as}\). thus, comparing norms, we obtain , and extracting , we obtain , provided , as is the case for the values we are considering. note also that a necessary condition to have an effective short-range repulsion acting on the driver is that the factor between large parentheses in eq. ([eq-vp-d]) is positive, so and the radicand in expression ([radis]) is positive. we show here that driver and evader are prevented from separating infinitely from each other and tend asymptotically to be separated by a distance of order one. the proof follows the idea of ``free agents'' used in (see also for a more similar model). a driver agent is said to be a _free agent_ at time if its distance to the evader is greater than an arbitrarily large positive constant; that is, . if the driver is a free agent, then the system ([eq-up])([eq-vs]) can be reduced as follows: using remark 1. if , then , , [lema1] let and : be the following potential functional: then, for free agents (_i.e.
_, ) and under the hypotheses of lemma [lema1], is positive. if the driver is a free agent, then is bounded from below and is negative along the agents' trajectories defined by the system ([eq-up])([eq-vs]). consequently, converges in time to a minimum which is reached when the distance between both agents is . [teolyapunov] the time derivative of along the agents' trajectories is given by: then: which, under the conditions of lemma [lema1], is negative. then decreases and is bounded from below, so has a minimum, which is reached when and (which is the minimum value of for a free agent). thus, agents are prevented from separating infinitely from each other because, as soon as , the driver becomes a free agent and is forced to move back towards the evader, provided the balance between the mass and the friction of the agents verifies lemma [lema1]. once the evader is turning back towards the target, that is, , we check the direction of the velocity vector of the evader against the line described by the evader and the target. then, if for some time the vector points towards a point located below , the tentative value of must be reduced; if instead, at the final time , the vector points towards a point located above , then the tentative value of must be augmented. 1. solve the system ([eq-up])([condinis]) with for and record the value of , which is the first time such that . if no such time is reached, this means that , so take a larger value of and shoot again (goto 0). 2. for each time , evaluate the instantaneous alignment of the velocity vector of the evader with respect to the target point: then, if , take a smaller value of and shoot again (goto 0). 3. if at time the velocity vector of the evader is still pointing above the target, that is, , then take a larger value of and shoot again (goto 0). the new value of for the next shoot can be selected with a simple method (_e.g._, bisection). stop when , for a small value of the tolerance ; the value of has been found, proceed to the next value of . coppinger rp, feinstein m. _how dogs work._ university of chicago press (2015). see also coppinger rp, coppinger l. _dogs: a startling new understanding of canine origin, behavior and evolution._ scribner, new york (2001). garnier s, combe m, jost c, theraulaz g. _do ants need to estimate the geometrical properties of trail bifurcations to find an efficient route? a swarm robotics test bed._ plos comput biol 9 (2013) e1002903. gazi v, passino km. _stability analysis of swarms._ ieee trans on automatic control 48 (2003) 692-697. gazi v, passino km. _a class of attraction/repulsion functions for stable swarm aggregations._ int j control 77 (2004) 1567-1579. gazi v, passino km. _swarm stability and optimization._ springer science & business media (2011). escobedo r, muro c, spector l, coppinger rp. _group size, individual role differentiation and effectiveness of cooperation in a homogeneous group of hunters._ j roy soc interface 11 (2014) 20140204.
we present a _guidance by repulsion_ model based on a driver-evader interaction where the driver, assumed to be faster than the evader, follows the evader but cannot be arbitrarily close to it, and the evader tries to move away from the driver beyond a short distance. the key ingredient allowing the driver to guide the evader is that the driver is able to display a circumvention maneuver around the evader, in such a way that the trajectory of the evader is modified in the direction of the repulsion that the driver exerts on the evader. the evader can thus be driven towards any given target or along a sufficiently smooth path by controlling a single discrete parameter acting on the driver's behavior. the control parameter serves both to activate/deactivate the circumvention mode and to select the clockwise/counterclockwise direction of the circumvention maneuver. assuming that the circumvention mode is more expensive than the pursuit mode, and that the activation of the circumvention mode has a high cost, we formulate an optimal control problem for the optimal strategy to drive the evader to a given target. by means of numerical shooting methods, we find the optimal open-loop control which reduces the number of activations of the circumvention mode to one and which minimizes the time spent in the active mode. our numerical simulations show that the system is highly sensitive to small variations of the control function, and that the cost function has a nonlinear regime which contributes to the complexity of the behavior of the system, so that a general open-loop control would not be of practical interest. we then propose a feedback control law that corrects deviations while preventing an excessive use of the circumvention mode, finding numerically that the feedback law significantly reduces the cost obtained with the open-loop control. *keywords:* guidance by repulsion, driver-evader agents, optimal strategies, feedback control law, nonlinear dynamics, numerical simulations
models of quasistatic elasto-plasticity date back to prandtl's model of an elastic-ideal plastic element, which can be thought of as a cascade connection of an ideal hooke's spring and a coulomb dry friction element. this simple model accounts for two important effects, saturation of stress with increasing deformation (strain) and hysteresis in the stress-strain relationship. hysteresis is a manifestation of the fact that stress at a moment is not a single-valued function of the concurrent deformation value, but rather a function of the state of the elasto-plastic material, which depends on the history of variations of the deformation prior to the instant . two parameters of prandtl's model are the stiffness of the spring and the maximal spring force (which equals the friction force in the sliding regime for quasistatic deformations). in order to account for the complex relationship between deformation and stress in real materials, prandtl proposed to model the constitutive law of the material with a parallel connection of elastic-ideal plastic elements. a similar idea was developed by ishlinskii, who modeled individual fibers of wire ropes by prandtl's elements. in the prandtl-ishlinskii phenomenological model, a finite or infinite set of prandtl's elements (characterized by different values of the parameters of stiffness and maximal stress) are all subject to the same deformation, and the total force (or stress) is proportional to the sum of all spring forces. the operator that maps the time series of the deformation (input) to the time series of stress (output), given a set of initial stresses of all the springs (initial state), is known as the prandtl-ishlinskii (pi) operator in one-dimensional elasto-plasticity. thanks to the set of good mathematical properties of this operator (see, for example, ), its equivalent counterparts have been used in several other disciplines including tribology (the maxwell-slip friction model), damage counting and fatigue estimation (the rainflow counting method), and, more recently, modeling constitutive laws of smart materials such as piezo-electric and magnetostrictive materials and shape memory alloys. one useful property, called the composition rule, is that a composition of pi operators is also a pi operator and, as a consequence, the inverse operator of a pi operator is another pi operator. furthermore, a pi operator and its inverse admit an efficient analytic and numerical implementation. this property, in particular, underpins the design of compensation-based control schemes for microactuators and sensors, which use smart materials for energy conversion. another fact that facilitates modeling various constitutive laws with the pi operator, and is also central to this paper, is stated by the representation theorem, which allows one to determine whether a set of input-output data can be modeled by a pi operator and, moreover, equips one with an algorithm for identifying the parameters of a pi operator from a simple measurement procedure. the representation theorem states that if (a) the input-output relationship between deformation and stress is _rate-independent_, that is, the operator that maps the time series of deformation to the time series of stress commutes with any increasing transformation of time, .
] ; ( b ) hysteresis loops corresponding to periodic inputs are _ closed _ ; and , ( c ) every hysteresis loop is _ centrally symmetric _ , then , where is a pi operator .furthermore , if properties ( a)(c ) are satisfied , then starting from the initial state in which all the springs are relaxed , applying an increasing input ( deformation ) , and measuring the corresponding increasing output , one obtains the so called _ loading curve _ , which completely defines the pi operator on the class of all continuously varying inputs \to\mathbb{r} ] .slanted lines have the slope ; horizontal lines are . ]-10truemm define the vector of deformations ( where ) of all the springs and the vector of their corresponding stresses .similarly , denotes the vector of elastic deformations , and we will also use the vector of _ plastic deformations _ .the model of moreau defines dynamics in the configuration space , where is the total number of springs . in what follows, the set of geometric constraints with the additional affine moving constraint will be expressed as the inclusion with an appropriate choice of the subspace of the configuration space and a vector .we remark that geometric constraints in parametric form can be equivalently expressed in terms of the cycles of the graph with the added edge .namely , the sum of deformations along the edges of every cycle must be zero : the main assumption of moreau model is that deformations are quasistatic .therefore , the balance of forces at each node is zero at any moment : where the summation is over . the balance of forces at each of the nodes and connected by the moving constraint includes the ( unknown ) reaction of this constraint : relations follow from .let us briefly summarize the results of moreau for the particular case of a system of springs with one control that we are considering .for a closed convex set , we denote by the external normal cone to the set at a point : where the standard scalar product is used .let us introduce the parallelepiped of admissible stresses defined by . as was shown by j. j. moreau , system ,is equivalent to the system with time dependent variables .here is the diagonal positive matrix with the entries such that is equivalent to ; recall that , hence is nothing else as the relationship , which is equivalent to the set of geometric constraints with the additional moving affine constraing ; the inclusion , where denotes the orthogonal complement of in the space , is equivalent to equations that state the balance of forces ; and , the differential inclusion , where dot denotes differentiation with respect to time and , is equivalent to the variational inequality that expresses the constitutive law of prandtl s spring .following j. j. moreau , it is convenient to use the rescaled variables where the diagonal matrix is the positive square root of . 
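before continuing with the rescaled formulation, it may help to note that each prandtl element maps deformation to stress through what is written below as a one-dimensional stop operator, and that this map has a very simple time-discrete form. a minimal sketch in python, assuming the input is piecewise linear between samples and taking unit stiffnesses in the pi sum (both assumptions):

```python
import numpy as np

def stop_operator(e, k=1.0, r=1.0, s0=0.0):
    """discrete one-dimensional stop operator: stress of a single prandtl
    element with stiffness k and yield threshold r driven by the input
    samples e (assumed piecewise linear between samples)."""
    e = np.asarray(e, dtype=float)
    s = np.empty_like(e)
    s_prev = s0
    for n, de in enumerate(np.diff(e, prepend=e[0])):
        s_prev = min(max(s_prev + k * de, -r), r)  # elastic step, clipped at yield
        s[n] = s_prev
    return s

def prandtl_ishlinskii(e, weights, thresholds):
    """pi operator as a weighted parallel sum of stop operators."""
    return sum(a * stop_operator(e, 1.0, r)
               for a, r in zip(weights, thresholds))
```

the pi output is then just the weighted sum of these clipped elastic responses, which is precisely the parallel-connection picture of prandtl's elements described above.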
introducing the orthogonal subspaces and the scaled parallelepiped and noticing that we see that equations are equivalent to the system now , introducing the orthogonal projection of the vector on the subspace , and using the new variables one can rewrite this system as finally , combining inclusion with and using the identity , we arrive at the differential inclusion which is known as the _ moreau sweeping process _ with the characteristic set ( input ) .a few remarks are in order .first , is a centrally symmetric convex polytope .its central symmetry is important for the following discussion .second , the characteristic set of the moreau process is obtained as time - dependent shift of this polytope .the moreau process with a characteristic set of the form , where is a single - valued function and is a convex set , is known as the multi - dimensional play operator with input . in our case ,the input has a fixed direction , _i.e. _ we consider the moreau process of the play type , which in effect has a one - dimensional input .in general , for a system with multiple inputs ( moving constraints and external forces ) , the shape of the characteristic set may change with time , _i.e. _ is a more general regular set - valued convex - valued function .is equivalent to the differential inclusion known as the multi - dimensional stop operator with the input .this differential inclusion coupled with the initial condition has a unique solution for any input ; regularity properties of the solution operator that maps to in spaces , , and are well understood .furthermore , system coupled with the initial conditions ( we also assume ) has a solution .while forces of the springs ( the component ) are defined uniquely for a given input , simple examples show that the solution may be non - unique , that is deformations are not necessarily uniquely defined .it should be noted that examples with multiple solutions are non - generic .however , the author is not aware of results that would establish uniqueness of deformations under genericity assumptions .consider the system of springs shown in fig .the configuration space of this system consists of vectors ( where prime denotes the transposition ) , which satisfy the following set of geometric constraints : and ( cf . 
) .we assume that the system is subject to the additional moving affine constraint .combining all the constraints , we obtain where this equation is equivalent to relation with the balance of forces at the nodes 2 , 3 , 4 reads or , equivalently , with .the matrix has the form in this example , the set is is a 6-dimensional centrally symmetric convex polytope .the prandtl - ishlinskii operator has scalar - valued inputs and outputs and is obtained as a weighted sum of a finite number of one - dimensional stop operators : :=\sum_{n=1}^k \bar a_{n } \mathcal{s}_{\bar\rho_n}[g],\ ] ] where and are continuous input and output , respectively ; is the young s modulus of the -th spring in the elastic domain ; is the maximal stress of the spring \,{\rm d\rho} ] of each spring is related to the controlled distance between the nodes and by another stop operator ] .the general linear connection is considered in section 4 .-10truemm an important characterization of prandtl - ishlinskii operator is the so - called _ loading curve _ defined as the response to the linear input : \ \quad { \rm with } \\quad g_{id}(t):=t , \ \t\ge 0\ ] ] ( where , in accordance with our agreement , we assume that for all the stop operators ] and the reaction force is given by .consider a graph with edges and the corresponding system of connected prandtl s springs with parameters ( stiffness ) and ( maximal stress ) , see sections [ 1.1 ] , [ 1.2 ] .assume that the system is driven by one moving affine constraint and the set of all the constraints is described by inclusion .denote by \to \pi \cap v ] by preimages of the points : 2truemm * theorem 1 . *_ suppose that and suppose that the parallelepiped belongs to the polytope .assume that the map \to\gamma ] . _2truemm as we establish below , under the conditions of this theorem , the moreau sweeping process behaves as an analog of the prandtl - ishlinskii operator with vector - valued outputs and the vector - valued \to \pi\cap v ] by setting .\ ] ] formulas , immediately follow from and the relationship .given an input ;\mathbb{r}) ] and the inclusion for almost every ] and so is the prandtl - ishlinskii operator in the space ;\mathbb{r}) ] .furthermore , given such an input , it suffices to establish that if the inclusion is valid for a.e . from an interval \subset [ 0,t) ] . in order to establish is a solution of on an initial small interval ] and the trajectory , which satisfies , remains in the interior of the polytope for all ] for a decreasing too . assuming that satisfies for ] .the cases when increases ( ) and decreases ( ) on will be considered separately .let us introduce some notation .suppose that the polytope is the intersection of subspaces where each hyperplane contains one facet of and is the unit outward normal vector to this facet .condition ensures that the face of belongs to the intersection of hyperplanes , hence we can number these hyperplanes in such a way that recall that the interior of the face in the affine subspace ( with the agreement that ) . with this notation , the outward normal cone to on coincides with the positive linear span of the vectors : in particular , since relations ensure that for , the inclusion for is equivalent to with . combining these relations with , we obtain one can also see that the inclusion for is equivalent to the equality where is the orthogonal projection of the vector onto the subspace , which is parallel to in . 
note that since , now , suppose that holds at the moment and for .if is sufficiently small , then according to the definition of the main extremum values of , one has with \ ] ] ( cf . ) . therefore , comparing formula at the moments and ( note that the sum in contains nonzero terms for at the moment and for at the moment ) , we obtain and using , and , we arrive at the relation ,\ ] ] which agrees with .finally , suppose that for and , again , relations are valid at the moment . in this case, assuming that is sufficiently small , with .\ ] ] therefore , implies hence now , note that there is a , , such that and since , one also has for a sufficiently small . combining these relations with and, we see that also , relations and imply where for and .hence , for , becomes ( note that all terms with in the sum in are zero ) and implies since , it follows that and hence combining this relation with and , we conclude that with holds for .this completes the proof .a few remarks are in order .* 1 . * under the assumptions theorem 1 , the complex connection of springs defined by the graph responds to arbitrary variations of the given moving affine constraint in the same way as a simple parallel connection of springs described by an effective prandtl - ishlinskii opearator .indeed , the stress of each spring of the complex system is related to via the prandtl - ishlinskii operator and the total force applied to the system at node 1 relates to the displacement via the prandtl - ishlinskii operator ( cf . )(t).\ ] ] 2truemm * 2 . *according to formula , under the conditions of theorem 1 , hysteresis loops of the moreau model and the loops of the prandtl - ishlinskii model have similar properties and can be constructed by the same simple manipulations with the graph of the loading curve , see for details .the only difference is that this graph is two - dimensional for the prandtl - ishlinskii model , while the loading curve for the moreau model in theorem 1 is multi - dimensional .2truemm * 3 . *[ fig6 ] illustrates the role of conditions and of theorem 1 .[ fig6](a ) violates condition because . in fig .6(b ) , , hence conditions are violated . in both cases , we see that the trajectory of the moreau process does not have the shape prescribed by formula , which generalizes the prandtl - ishlinskii operator . .the vector used in the definition of the moreau process points in the direction of the vector .( a ) the prism does not belong to .the polyline that contains the the polyline represents the trajectory of a solution to for the input where increases from zero to a maximum value and then decreases to the minimum value . formula defines a different polyline for the same input .( b ) conditions are violated because .the polylines and the trajectory of the inclusion and curve prescribed by formula in response to the same input as in panel ( a ) ., title="fig : " ] . the vector used in the definition of the moreau process points in the direction of the vector .( a ) the prism does not belong to .the polyline that contains the the polyline represents the trajectory of a solution to for the input where increases from zero to a maximum value and then decreases to the minimum value .formula defines a different polyline for the same input .( b ) conditions are violated because .the polylines and the trajectory of the inclusion and curve prescribed by formula in response to the same input as in panel ( a ) ., title="fig : " ] -8truemm 1truecm ( a ) 5.5truecm ( b ) 2truemm * 4 . 
* the assumption that the map \to\gamma$ ] is invertible is made for simplicity .it is straightforward to extend the theorem to the case when it is not satisfied .2truemm * 5 .* examples in which the system of springs shown in fig . [ fig1 ] can not be reduced to a prandtl - ishlinskii operator can be found in . of a general _ linear _ connection of springs ( thick lines ) .thin dashed lines represent possible additional edges as introduced in theorem 2 . ]-11truemm * 6 .* examples with complex topology that satisfy the conditions of theorem 1 can be created by a perturbation of simple systems considered in section [ sect ] . in particular , let us consider a _ linear _ connection of springs with the corresponding graph shown in fig .[ fig7 ] ( the graph shown in fig .[ fig5 ] is an example of such a graph ) . here and the list of all edges with is as follows : node is connected with nodes ; node is connected with nodes ; and , each node with is connected with node for .the total number of edges equals .denote with ( see example in section 2.6 ) and set 2truemm * theorem 2 .* _ consider a linear connection of springs ( see fig .[ fig7 ] ) .assume that for .let us extend this system with any set of additional connections of nodes by springs with thresholds and young s moduli .then , given any there is a such that if for all the added connections and simultaneously , i.e. the young s moduli of the added springs are sufficiently small , then the extended system of springs satisfies all the conditions of theorem 1 . _2truemm 2truemm _ proof of theorem 2 ._ first , we consider the linear connection of springs . for this connection ,the set of geometric and moving affine constraints has the form therefore , the subspace of is defined by the relations its orthogonal complement is given by and the components of the vector in have the form therefore , the orthogonal subspaces and ( see ) are defined by the systems with , respectively . from these relations, it follows that the components of vector are given by and the polytope is an -dimensional parallelepiped defined by relations and where , are defined in , . using , , without loss of generality , we can assume that and for each , i.e. ( because the edges can always be labeled so as to ensure these relationships ) .now , from , and , it is easy to derive explicit expressions for the components of vectors and coordinates of points : with . these explicit formulas ensure that : * the point belongs to the open face of the parallelepiped , i.e. where denotes the facet of for .* and conditions , are satisfied . * the subspace and the hyperplanes which are parallel to the facets of , are in general linear position in the sense that this implies in the notation of the previous section . * for ( where ) .the next step is to show that the same properties ( i)-(iv ) are satisfied for the extended system of springs . to this end, we first consider a simple embedding of the geometric picture considered above into the space by identifying each vector with components , , with the vector with the components ( recall that is a subgraph of , hence ) . with this identification, becomes a subspace and we denote by its orthogonal complement with respect to the scalar product ( cf . ) , i.e. 
.also , define the sets and the diagonal matrix by then , it is easy to see that all statements ( i1)-(i4 ) , in which we replace the space with , the subspaces with , the parallelepipeds , and their faces , with the polyhedra , and their faces , , and the matrix with the matrix , remain valid .that is , * for . * the faces of satisfy conditions , . * the subspace and the hyperplanes , are in general linear position . * the vectors satisfy for . finally , consider the moreaux process corresponding to the system of springs , in which the springs with have small young s moduli . for this system , we denote by , and the counterparts of the vector , the subspaces and the matrix ( cf. section 2.4 ) .furthermore , we denote by , the parallelepiped and its facets note that the subspace is a small perturbation of the subspace in the sense that the intersections of and with any ball centered at the origin can be made arbitrarily close by making the coefficients , , sufficiently small . similarly , the sets , are small perturbations of , , respectively . also , the matrix and hence the vector are small perturbations of the matrix and the vector , respectively , because for .therefore , properties ( j3 ) , ( j4 ) imply that the subspace and the hyperplanes , are in general linear position ; each subspace is a small perturbation of the subspace ; and , each vector is a small perturbation of the vector , hence .now , since is the intersection point of the ray with the parallelepiped for every , property ( j1 ) implies that the relation uniquely defines a point whenever a point is sufficiently close to and the vector is sufficiently close to .therefore , for sufficiently small , , the points satisfy with , and the polyline is a small perturbation of the polyline . since for all , we also see that the vertex of belongs to the open face of the polytope and the faces satisfy conditions , .it remains to show that the set ( cf .) belongs to . to this end , denote by the standard basis of vectors in : for , , consider the estimate with .here , hence for and therefore for and since is a small perturbation of the vector , combining this estimate with the second relation in results in the relation for , hence is equivalent to where the last equality follows from and .now , consider , i.e. for some , .formulas , imply that for all , and for .therefore , taking into account that and arguing exactly in the same way as above , we see that the unperturbed polyline satisfies furthermore , formulas , and imply hence where , due to , therefore , implies since is a small perturbation of , a similar estimate is true for : , is small . 
combining , and , we obtain . this completes the proof of theorem 2. we considered a set of prandtl's elastic-ideal plastic elements, which are arranged into a network and deform quasistatically when the distance between two nodes is varied according to a given law (input). as in the general setting of the model of moreau, no _a priori_ constraint was imposed on the topology of the network. we defined the loading curve for the corresponding sweeping process as a graph of the solution corresponding to the zero initial condition and an increasing input. it was shown that if this curve satisfies simple geometric conditions, then the structure of hysteresis loops of the model in the space of stresses, extended by one dimension representing the input, is similar to the structure of hysteresis loops of the pi model. furthermore, the relationship between the input and the varying stress of each prandtl's element in the network is given by a pi operator, and the loading curve of this operator is the corresponding projection of the loading curve of the sweeping process. the question of how the geometric conditions of the main theorem can be related to the topology and parameters of the network in general remains open. however, we showed that these conditions are satisfied for any moreau network obtained by a small perturbation of a pi model. in other words, they are satisfied for any network with sufficiently small coupling. this is in line with the results from , obtained by a different approach based on the composition property of pi operators. in this work, we considered networks of springs aligned along a straight line and used the distance between two nodes a and b as the input. however, the results can be easily generalized to networks of prandtl's elements connecting nodes in a two- or three-dimensional space. it would be interesting to consider other types of scalar-valued inputs, for example, the force applied at the node a. in this case, the input of the sweeping process is a set with changing size and shape and hence is more complicated than the inputs we considered. this will be the subject of future work. the author thanks ivan gudoshnikov for multiple stimulating discussions of the moreau model. p. krejčí, j. p. o'kane, a. pokrovskii, d. rachinskii, properties of solutions to a class of differential models incorporating preisach hysteresis operator, physica d: nonlinear phenomena 241 (22), 2012, 2010-2028.
the sweeping process was proposed by j. j. moreau as a general mathematical formalism for quasistatic processes in elastoplastic bodies. this formalism deals with connected prandtl's elastic-ideal plastic springs, which can form a system with an arbitrarily complex topology. the model describes the complex relationship between stresses and elongations of the springs. on the other hand, the prandtl-ishlinskii model assumes a very simple connection of springs. this model results in an input-output operator which has many good mathematical properties. it turns out that the sweeping process can be reduced to the prandtl-ishlinskii operator even if the topology of the system of springs is complex. in this work, we analyze conditions for such reducibility. *on geometric conditions for reduction of the moreau sweeping process to the prandtl-ishlinskii operator* d. rachinskii
the understanding of plasma equilibria is one of the most important problems in magnetohydrodynamics and arises in several fields including solar physics and thermonuclear fusion. such equilibria are often governed by the well-known steady hydromagnetic equations, which describe the balance of the lorentz force by pressure. here and are respectively the magnetic field and the pressure. notice that system ([eqn:ms])+([eqn:div1]) is quite similar to the steady inviscid fluid equations; this analogy is due to the vectorial identity . system ([eqn:ms])+([eqn:div1]) must be completed with some boundary conditions on and . physical considerations suggest prescribing the boundary normal field component: , where satisfies the compatibility condition due to the equation . defining the inflow boundary as , one can also prescribe the normal component of the current density and the pressure on . one can notice that if the pressure is neglected, equations ([eqn:ms])+([eqn:div1]) become . equation ([eqn:fff]) means that the magnetic field and its curl, which represents the current density, are everywhere aligned. the magnetic field is said to be _beltrami_ or _force-free_ (ff). a usual way to tackle the problem ([eqn:fff])+([eqn:div2]) consists in rewriting equation ([eqn:fff]) in the form , where is a scalar function which can be a constant function or can depend on . in the former case, the field is said to be linear ff; in the latter, it is said to be non-linear. some partial results concerning the existence of 3d solutions of equations ([eqn:ms])+([eqn:div1]) in bounded domains are given in and . linear force-free fields were studied in . for the existence of non-linear ones the reader can refer to , . the numerical solving of equations ([eqn:ms])+([eqn:div1]) and equations ([eqn:fff])+([eqn:div2]) is of importance in magnetohydrodynamics studies and in solar physics. as is known, the reconstruction of the coronal magnetic field is of great utility in observational and theoretical studies of the magnetic structures in the solar atmosphere. in this paper, we propose an iterative process for solving these equations (section [sec:2]). a finite element method is proposed for solving each one of the arising problems. our objective here is to expose an iterative method for solving the non-linear equations ([eqn:ms])+([eqn:div1]) in a bounded and simply-connected domain. the starting idea of the method consists in splitting the current density into the sum , where the vector field is collinear to , while is perpendicular to . the problem is decomposed formally into a curl-div system on and two first-order hyperbolic equations on and . more precisely, writing , where is a scalar function, and taking the divergence of ([omega123]) gives . notice that the pressure satisfies a similar equation since equation ([eqn:ms]) becomes , which means that if . in consideration of these remarks, we are now going to propose an iterative process to solve the non-linear systems ([eqn:ms])+([eqn:div1]). in this process the transport equation ([eqn:mu]) is perturbed by adding an artificial reaction term. namely, we construct a sequence as follows: * the starting guess is chosen as the irrotational field associated with , defined by . this is a usual problem which can be reduced to a scalar neumann problem since the domain is simply-connected. * for all , is the solution of the system where is a small parameter and .
* for all , and , where satisfies here and is a small parameter. * for all , , with the solution of where , while is the solution of the laplace problem . notice that the appearance of the correction term is due to the fact that is not zero in general. the convergence of this iterative process is not an easy matter. we conjecture that it converges if is sufficiently small and in for some constant . nevertheless, in the case of linear force-free fields (in that case the algorithm is simplified since at each iteration , and is a fixed real) boulmezaoud and amari proved that this process is super-convergent. the proof of convergence in the general case is not given and remains an open question. notice that the same algorithm can be used for computing linear or non-linear force-free fields which are solutions of ([eqn:div2])+([eqn:fff2]), provided that the computation of the pressure and the vector field is dropped. here we give a short description of the finite element methods we use for solving the problems arising in the iterative process exposed above. observe first that at each iteration of the algorithm one should solve two problems: (a) a reaction-convection problem of the form: _find solution of_ ; (b) a vector potential problem: _find the pair satisfying_ . we begin with the approximation of ([eqn:hyperbolic]). it is well known that the direct application of a galerkin finite element method to the singularly perturbed problem ([eqn:hyperbolic]) may lead to the appearance of spurious oscillations and instabilities. in the last two decades, several methods were proposed to remove this drawback (especially in the two-dimensional case). among these methods, one can recall the _streamline diffusion_ method (see brooks and hughes, see also, e.g., johnson et al.), the _discontinuous_ galerkin method (see lesaint) and _bubble function_ methods (see, e.g., brezzi et al.). here we shall use the method of streamline diffusion. thus, let us consider a family of regular triangulations of . the discrete problem we consider is where stands for the finite element space where , for each , denotes the space of polynomials of degree less than or equal to . one can prove that the problem has a unique solution when . moreover, if for some constant and if and for some , then where . now we deal with the approximation of the curl-div system ([eqn:systemb]), which can be split into two problems: a variational problem ( ) in terms of and the fictitious unknown , and the laplace equation ([eqn:lap_cor]) in terms of . we only deal with the approximation of , since we shall see that the computation of is useless. denote by the space equipped with its usual norm. the statement of problem ([eqn:systemb]) suggests the use of an approximation . define the space , let , be two finite-dimensional subspaces and set . we make the following assumptions . the discrete version of problem ([eqn:systemb]) writes . according to amrouche et al., the problem has one and only one solution with , and . a simple manner of constructing the spaces and is to use the conforming elements of nédélec (see amrouche et al.). in that case, the following estimate holds . an important feature of the discrete system is that only the discrete vector field is really unknown. actually, we know that . this property can be exploited from a practical viewpoint to reduce the discrete system to a smaller one by eliminating .
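the reduced system announced here, whose matrix form is written just below, is symmetric positive definite (a positive semidefinite part plus a positive definite one, as made precise in the lemma stated next), so a plain conjugate gradient iteration is a natural solver for it. a generic sketch, where the assembled matrix and the right-hand side are assumed to be provided by the finite element code:

```python
import numpy as np

def solve_reduced(K, f, tol=1e-10, maxit=500):
    """conjugate gradient for a symmetric positive definite system K x = f;
    K stands for the reduced matrix of the eliminated discrete system
    (its assembly is left to the surrounding finite element code)."""
    x = np.zeros_like(f)
    res = f - K @ x                 # initial residual
    p = res.copy()
    rs = res @ res
    for _ in range(maxit):
        Kp = K @ p
        alpha = rs / (p @ Kp)       # optimal step along the search direction
        x += alpha * p
        res -= alpha * Kp
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs) * p  # conjugate update of the direction
        rs = rs_new
    return x
```

the point of the elimination is precisely that this smaller definite system replaces the larger indefinite saddle-point-like one, so that standard iterative solvers apply.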
In terms of matrices, the system reads as a saddle-point system in which the leading block is a symmetric and positive square matrix (it is neither definite nor invertible). We can state the following.

[matrix_lemma] Let $K$ be a square positive, definite and symmetric matrix having the same size as the multiplier block. Then the pair is a solution of ([eqn:syst]) if and only if the multiplier vanishes and the field is a solution of the augmented system ([eqn:dec_syst]).

In lemma [matrix_lemma], the matrix and the right-hand side are not arbitrary. Indeed, if $C$ denotes the matrix of the constraint operator, then necessarily $C^{T}M = 0$ and $C^{T}b = 0$; these identities are the discrete counterparts of the continuous relations. A serious advantage of the new system ([eqn:dec_syst]) compared with ([eqn:syst]) is that the number of unknowns is reduced.

In this last section, we expose some computational results obtained with a 3D code. This code uses the iterative method and the finite element discretization exposed above to solve problem ([eqn:ms])+([eqn:div1]) and problem ([eqn:fff2])+([eqn:div2]). We compare the exact solution and the numerical solution, and we show the behavior of the errors in terms of the mesh size. Two exact solutions are used for the tests.

* **Test 1 (a nonlinear force-free field).** Let cylindrical coordinates be taken with respect to a point. The exact pair is a nonlinear force-free field. Table [tab:1] shows the behavior of the residue and of the product versus the iteration number. This example illustrates the superconvergence of the algorithm.

iteration | residue | product
0 | 0.09912 | 6.740e-15
1 | 0.00566 | 0.06781
2 | 0.00036 | 0.01939
3 | 2.644e-05 | 0.01910

* **Test 2 (Bennett pinch).** The exact pair is given in closed form. In table [tab:2], the relative errors on the field and on the pressure after convergence of the algorithm are shown. These errors decrease as the mesh is refined, which confirms the high accuracy of the method.

h | error on the field | error on the pressure
0.69282 | 0.03837 | 0.08648
0.23094 | 0.00492 | 0.01102
0.13856 | 0.00191 | 0.00396
0.09897 | 0.00108 | 0.00201

Alber, H. D.: Existence of three-dimensional, steady, inviscid, incompressible flows with non-vanishing vorticity. Math. Ann. 292, 493–528 (1992).
Amrouche, C., Bernardi, C., Dauge, M., Girault, V.: Vector potentials in three-dimensional non-smooth domains. Math. Methods Appl. Sci. 9, 823–864 (1998).
Boulmezaoud, T. Z., Amari, T.: On the existence of non-linear force-free fields in three-dimensional multiply-connected domains. To appear in Zeitschrift für Angewandte Mathematik und Physik (ZAMP).
Brooks, A., Hughes, T. J. R.: Streamline upwind/Petrov–Galerkin formulations for convection dominated flows with particular emphasis on the incompressible Navier–Stokes equations. FENOMECH '81, Part I (Stuttgart, 1981). Comput. Methods Appl. Mech. Engrg. 32, 199–259 (1982).
We propose an iterative finite element method for solving the nonlinear hydromagnetic and steady Euler equations. Some three-dimensional computational tests are given to confirm the convergence and the high efficiency of the method.
"When a finger points at the moon, one must not mistake the finger for the moon." — old Zen Buddhist advice.

"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk." — attributed to J. von Neumann.

Some thoughts and some equations, transcending their greatness, also transcend their purpose. During the last quarter of the XIX century Ludwig Boltzmann (ch. 9) derived an equation which predicts the proportion of particles, in an ensemble of non-interacting particles, that are in a state with a given particle energy; the normalizing sum is called the _particle partition function_ or, when dealing with molecules, the _molecular partition function_. When only two energy levels are dealt with, the ratio of the number of particles in one level to the number in the other predicted by equation ([e:boltzmann]) is a simple exponential of the energy difference. In an extension of eq. ([e:boltzmann]) for _degenerate_ systems (when more than one state has the same energy), a statistical weight, equal to the number of superimposed levels, is included; the result is the Boltzmann distribution law in its most general form. The average kinetic energy follows as in eq. ([e:meanenerg]), whose middle term includes the _statistical weights_ that account for the so-called _degenerate_ levels.

The molecular partition function is useful only when the system of interest can be considered to be made up of noninteracting particles, i.e., molecules with no appreciable intermolecular forces. Only then can we define and enumerate the states of the system in terms of either the quantum mechanical energy states of individual molecules, or the classical positions and momenta of individual molecules. When interactions between molecules occur, the description of the states of the system must include potential energy terms which are functions of intermolecular distances. Equation ([e:boltzmann]) may be rewritten for ensembles of interacting particles as eq. ([e:maxwboltz]), in terms of the particle mass, the velocity and the potential energy; equation ([e:maxwboltz]) is the _Maxwell–Boltzmann_ distribution function. If there are only two possible states in the system, eq. ([e:maxwboltz]) simplifies accordingly.

When dealing with cell membranes, a Boltzmann equation is expressed with the free energy in voltage units (the electrical potential difference existing across cell membranes). In general the form used looks like eq. ([e:boltzvolt]), where $V$ is used to specify more clearly that we deal with electrical potential differences, and $\kappa$ is customarily referred to as the slope factor (see, for example, ). Since these situations deal with ensembles of particles, the slope factor is usually parametrized as $\kappa = RT/zF$, where $z$ is the ionic valence (at room temperature and $z = 1$, $\kappa$ is about 25 mV), $F$ is the Faraday constant, $R$ is the gas constant and $T$ is the absolute temperature. In electrophysiology, an equation of the form ([e:boltzvolt]) was introduced for the first time by Hodgkin and Huxley (H&H for brevity), and was used to describe the distribution, inside or outside axons, of hypothetical particles associated with Na^+^ and K^+^ currents crossing the nerve membrane. Since that work was seminal for electrophysiology, a plethora of papers have used Boltzmann functions in connection with electrical properties of cells and isolated ionic channels.
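As a small illustration, a Python sketch of an eq. ([e:boltzvolt])-type curve with the slope factor parametrized as $\kappa = RT/zF$; the temperature and parameter choices below are arbitrary illustrations, and the sign convention (function increasing with depolarization) is one common choice:

```python
import numpy as np

R = 8.314     # gas constant, J/(mol K)
F = 96485.0   # Faraday constant, C/mol
T = 295.0     # an assumed "room temperature", K

def slope_factor_mv(z=1):
    """kappa = RT/(zF) expressed in millivolts, for ionic valence z."""
    return 1e3 * R * T / (z * F)

def boltzmann(v_mv, v_half_mv=-40.0, kappa_mv=6.0):
    """Two-state Boltzmann function of membrane potential, eq. ([e:boltzvolt]) form."""
    return 1.0 / (1.0 + np.exp((v_half_mv - v_mv) / kappa_mv))

print(round(slope_factor_mv(1), 1))           # ~25.4 mV for z = 1
v = np.array([-100.0, -60.0, -40.0, -20.0, 20.0])
print(boltzmann(v))                           # equals 0.5 exactly at v = v_half
```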
In electrophysiology, however, equation ([e:boltzvolt]) is modified: it represents the _cell transmembrane potential_ (_membrane potential_ for short), expressed plainly in voltage units, and since eq. ([e:boltzvolt]) takes values between 0 and 1, $V_{1/2}$ is taken as the membrane potential where the function equals one half. Eq. ([e:boltzvolt]) thus becomes eq. ([e:boltzelect]). When eq. ([e:boltzelect]) is used in the original fashion of H&H, to represent the transmembrane distribution of some charged particle, $V_{1/2}$ is the potential at which 50% of the particles are on one side of the membrane and 50% on the other. Eq. ([e:boltzelect]) thus reduces to a situation where a dependent variable may be fitted by some nonlinear optimization procedure to an independent variable (usually expressed in mV); the optimization procedure enables one to estimate the parameters $V_{1/2}$ and $\kappa$. In the H&H work (pg. 501, eq. 1), which is presented here with post-H&H membrane potential sign conventions, $V_{1/2}$ was estimated to be close to the resting membrane potential.

H&H also used a Boltzmann function to estimate properties of hypothetical particles gating or triggering the mechanism controlling Na^+^ conductance in nerve (part I, pp. 503–504). The form of the Boltzmann function in this case was eq. ([e:gating1]), where $V$ is the transmembrane potential, $w$ is the work done in moving the molecule from the inside to the outside of the membrane, $z$ is the valence of the molecule or the number of positive charges on it, and $e$ is the absolute charge of an electron. In the appropriate limit, eq. ([e:gating1]) was used in the form ([e:gatinglim]) to estimate the charge of the hypothetical particles; in eq. ([e:gatinglim]), the prefactor is a constant. Quoting (part I, pp. 503–504):

"... whose distribution changes must bear six negative electronic charges, or, if a similar theory is developed in terms of the orientation of a long molecule with a dipole moment, it must have at least three negative charges on one end and three positive charges on the other. A different but related approach is to suppose that sodium movement depends on the presence of six singly charged molecules at a particular site near the inside of the membrane. [...] Details of the mechanism will probably not be settled for some time, but it seems difficult to escape the conclusion that the changes in ionic permeability depend on the movement of some component of the membrane which behaves as though it had a large charge or dipole moment. If such components exist it is necessary to suppose that their density is relatively low and that a number of sodium ions cross the membrane at a single active patch."

In modern terms, this expresses the _availability_ of sodium channels to be activated when the membrane depolarizes. Yet the term _channel_ was never used by H&H in their famous series of papers, which also never mentioned the term _probability_ in a statistical sense.
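A hedged numerical sketch of the limiting-slope idea behind eq. ([e:gatinglim]): far on one side of $V_{1/2}$, $\ln B(V)$ becomes linear in $V$ with slope $zF/RT$, so an apparent gating charge can be read off a semilog fit. The temperature, voltage range and true charge below are illustrative assumptions chosen to echo the H&H "six charges" estimate, not their data:

```python
import numpy as np

R, F = 8.314, 96485.0
T = 279.0  # ~6 degrees C, roughly squid-axon conditions; value illustrative

def boltzmann(v_mv, v_half_mv, z):
    kappa = 1e3 * R * T / (z * F)  # slope factor in mV
    return 1.0 / (1.0 + np.exp((v_half_mv - v_mv) / kappa))

# Far below v_half, B(V) ~ exp((V - v_half)/kappa): ln B is linear in V with
# slope zF/RT, so the apparent gating charge z can be recovered from the slope.
v = np.linspace(-120.0, -90.0, 10)             # strongly hyperpolarized range, mV
ln_b = np.log(boltzmann(v, -40.0, z=6))        # synthetic "data", true z = 6
slope_per_mv = np.polyfit(v, ln_b, 1)[0]
z_est = slope_per_mv * 1e3 * R * T / F
print(round(z_est, 2))                         # ~6, echoing the H&H estimate
```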
When the H&H work was carried out, the existence of cellular plasma membranes had not been proven, and nothing was known about the nature of the macromolecules associated with nerve excitation. The model is an outstanding example of model fitting to data based on bright intuition; it took close to 20 years for evidence to appear proving the existence of gating currents due to charge movements preceding channel activation, as predicted by the H&H intuition. Equations of the form of eq. ([e:boltzelect]) have become extremely popular to describe diverse biological situations, where they are referred to as a Boltzmann function or a Boltzmann equation. Part of the popularity is the easy availability of software which fits Boltzmann functions to data, without much knowledge of the fitting procedure or of the statistical properties of the parameters derived from it. The purpose of this paper is to explore the plasticity of the Boltzmann function to fit data, some aspects of the optimization procedure used to fit the function to data, and how to use this plastic function to differentiate the effect of a treatment (anything that may change the system) on data, when the effect is marred by uncertainty.

To test the goodness of fitting curves to data, random data with known statistical properties were generated using Monte Carlo simulation. For this purpose, sets of pairs were generated (in the fashion of, ch. 7) with the dependent variable defined as in eq. ([e:expboltz]), so that the random normal variate contains, as in eq. ([e:ranvar]), the experimental error. The uniform pseudo-random variables in the closed interval [0,1] required by the algorithm were produced using the Mersenne twister with improved initialization (www.math.sci.hiroshima-u.ac.jp/m-mat/mt/mt2002/emt19937ar.html), coded in GNU C++ (g++ v. 4.9.2); the seed required to initialize the algorithm was obtained from the Ubuntu Linux 15.04 entropy gathering device /dev/random. A copy of the program source is provided in a digital annex to this manuscript. For the purpose of the simulations, initially, $\kappa$ was set to 6 mV and $V_{1/2}$ to −40 mV; the choices are arbitrary and irrelevant. They approximate, with modern conventions, the values for nerve determined by H&H and may be taken as a small tribute to their monumental work after its 60^th^ anniversary.
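A minimal Python analogue of the simulation just described. The generator, voltage range, sample size and seed are arbitrary stand-ins (numpy's MT19937 is used to echo the paper's Mersenne twister), and scipy's Nelder–Mead routine plays the role of the simplex minimizer:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.Generator(np.random.MT19937(42))  # Mersenne twister
v_half, kappa, sigma = -40.0, 6.0, 0.1            # mV, mV, s.d. of additive noise

def boltzmann(v):
    return 1.0 / (1.0 + np.exp((v_half - v) / kappa))

# Simulated "experiment": 30 potentials, Boltzmann response + N(0, 0.1) error,
# then a simplex (Nelder-Mead) fit recovering (v_half, kappa).
v = rng.uniform(-100.0, 20.0, size=30)
y = boltzmann(v) + rng.normal(0.0, sigma, size=30)
sse = lambda p: np.sum((y - 1.0 / (1.0 + np.exp((p[0] - v) / p[1]))) ** 2)
fit = optimize.minimize(sse, x0=[-30.0, 10.0], method="Nelder-Mead")
print(fit.x)  # close to (-40, 6)

# The Boltzmann function is the CDF of a logistic law, so B(X) of samples X
# drawn from that law should be standard uniform (cf. theorem [t:cdf_unif] below).
x = rng.logistic(loc=v_half, scale=kappa, size=30)
print(stats.kstest(boltzmann(x), "uniform"))

# Two-sample Smirnov comparison: a "treatment" that shifts v_half by -10 mV.
x2 = rng.logistic(loc=v_half - 10.0, scale=kappa, size=30)
print(stats.ks_2samp(x, x2))
```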
The values chosen may easily be replaced by any others as long as the modeled variance is kept. Curves were adjusted to data using a simplex minimization. The statistical significance analysis of the differences between Boltzmann curves was done using the statistics described in section [s:compboltz]. Results are presented as medians and their 95% confidence intervals (CI) determined with the Hodges–Lehmann method. Comparisons of two samples were done with the nonparametric Mann–Whitney (Wilcoxon) test; for all nonparametric statistical procedures please refer to . Statistical differences between samples were considered significant when the probability that they stem from chance was below the chosen level. All statistical procedures were carried out using the program _est_, which is freely available in C++ source form, compiled for Linux, Windows and Macintosh OS X.

A test set of normal variates N(0, 0.1) was calculated with the genrand_res53() function of the Mersenne twister. Some statistical properties of this set are shown in table [tab:samplepar]. The discrepancies between the estimated sample parameters and the theoretical values in the table are very small, and most of them are related to the numerical approximations needed in computer simulation and to the precision used to store the test file, set to 15 significant digits here; the discrepancies are reduced when long double functions are used instead of double functions in the C++ program, for example. Table [tab:samplepar] does not stress enough that Boltzmann processes _per se_ are highly stochastic, independently of the inherent uncertainty of any data-collecting empirical procedure; the variability depends only on $\kappa$, as indicated by the 2^nd^ moment in equation set ([e:fourmom]).

Figure [fig:figure3] simulates an experiment where 30 values obeying eq. ([e:boltzelect]) were recorded and an empirical CDF (_edf_) was constructed (stepped line) for the set. The edf is a step function that jumps up at each of the data points [eq. ([e:edf])]; it estimates the cdf underlying the points in the sample and converges to it with probability 1 according to the Glivenko–Cantelli theorem. As indicated in the figure, the edf for 30 points which have no measurement uncertainty deviates significantly from the theoretical cdf, _due only to the stochasticity inherent to the Boltzmann pdf_. The parameters for the simulations in figure [fig:figure3] were chosen to mimic values commonly observed in electrophysiology (pg. 501, eq. 1). The maximum difference between the two curves in figure [fig:figure3] is 0.203 (arrow in the figure); when the Smirnov test is used to compare the two curves, the probability that the differences between the two cdfs stem from chance is estimated, and rejecting the null hypothesis here would be a statistical error of type I, the incorrect rejection of a true null hypothesis. Under experimental conditions, usually 15 to 20 points are recorded in a single experiment, which is replicated in some 3 to 5 subjects (or preparations), under conditions where preparation deterioration, recording noise or instrumental limitations add uncertainty to the variable studied.

[Figure [fig:figure3] caption: the stepped line is the empirical cdf calculated via Monte Carlo simulation for a sample of 30 points.]

If a random variable is transformed by its own cdf, the result is uniformly distributed in the closed interval [0,1] — the so-called _standard uniform distribution_ — in which any value occurs with equal probability.
This is demonstrated by the following theorem (p. 156, theo. 7.1.1).

[t:cdf_unif] If $X$ is a random variable having a cdf $F$, then the random variable $Y = F(X)$ has the rectangular distribution. This follows at once from the fact that the cdf of $Y$ is
$$F_Y(y) = \begin{cases} 1, & y > 1 \\ y, & 0 < y \leqslant 1 \\ 0, & y \leqslant 0, \end{cases}$$
which is the cdf of the rectangular distribution.

Theorem [t:cdf_unif] is obviously true for Boltzmann functions such as eqs. ([e:maxwboltz]) through ([e:gating1]). Thus the Boltzmann function is a cdf, with pdf
$$b(v) = \frac{\mathrm{e}^{-(v - v_{1/2})/\kappa}}{\kappa\left[1 + \mathrm{e}^{-(v - v_{1/2})/\kappa}\right]^{2}} = \frac{1}{2\kappa\left[1 + \cosh\left(\frac{v - v_{1/2}}{\kappa}\right)\right]}\,,$$
subject to the necessary condition that $v$ ranges over $\mathbb{R}$, the set of real numbers; then eq. ([e:boltz_pdf]) holds. The function expressed by eq. ([e:boltz_pdf]) is the Boltzmann pdf.

Two pdfs are equal if they have the same moment generating function (_mgf_). The mgf of the Boltzmann pdf may be obtained directly; since the $k$-th derivative of the mgf at the origin is the $k$-th _central moment_, the first four central moments follow as in equation set ([e:fourmom]). As may be appreciated in fig. [fig:figure2]a, the Boltzmann pdf described by eq. ([e:boltz_pdf]) is symmetric about $v_{1/2}$, so its skewness (see p. 265 for details on skewness) is zero. Although the plot in fig. [fig:figure2]a evokes the Gaussian bell, the distributions are different: the Boltzmann pdf has a kurtosis (see p. 265 for details on kurtosis) exactly equal to 21/5 = 4.2, higher than the Gauss pdf kurtosis, which is exactly 3.

The _Hessian_ matrix of a function is the matrix of its second partial derivatives. If its eigenvectors are all linearly independent, then it is similar to the diagonal matrix of its _eigenvalues_. The Hessian matrix of eq. ([e:boltzelect]) is the non-diagonal matrix
$$\left[\begin{matrix} \;\cdot\; & \kappa\left[\kappa - w\tanh(\xi)\right] \\ \kappa\left[\kappa - w\tanh(\xi)\right] & w\left[2\kappa w\tanh(\xi)\right] \end{matrix}\right],$$
where $\xi$ is a shorthand for the scaled potential; it is symmetric and non-singular, which indicates that the parameters are not linearly independent. Also, at the critical point the matrix reduces to
$$\dfrac{1}{4}\left[\begin{matrix} -1 & 0 \\ 0 & 1 \end{matrix}\right],$$
which, evaluating the matrix as a determinant (called the _discriminant_), has a negative value, with eigenvalues {−1, 1}. It is a negative definite matrix, meaning that no matter what the parameter values are, the surface containing all the solutions has a maximum, which is a _critical point_.

If $x_{(1)} \leqslant x_{(2)} \leqslant \dots \leqslant x_{(n)}$ is a set of random variables in increasing order, then the step function built on them, called the edf, has the properties listed in ([e:edf]). To measure how close a distribution function is from the edf, one defines the supremum of their absolute difference, where the bars indicate absolute values and $x$ is the point where the supremum occurs. By the Glivenko–Cantelli theorem, if the sample comes from the distribution in question, this supremum converges to 0 almost surely; Kolmogorov strengthened this result by effectively providing the rate of this convergence. The following two theorems are described inspired by theo. (1) and (2) in .

[t:kolmogorov] For every fixed argument, the probability that the scaled supremum does not exceed that argument converges to a limiting cumulative distribution function, which is given by either of two equivalent relations. The proof of theo. [t:kolmogorov] demands that the transformed variable is a uniform random variable on [0,1]; equation ([e:cumulprob]) characterizes the cdf of any continuous pdf. Another theorem, due to Smirnov, states:

[t:smirnov] Let there be sets of mutually independent random variables, ordered so that the corresponding edfs may be defined.
Then, defining the scaled supremum of the difference of the two edfs, and supposing that the sample sizes grow with their ratio tending to a constant, for every fixed argument the same limiting distribution is obtained. Theorem [t:smirnov] is a generalization of theo. [t:kolmogorov] that allows comparing two empirical distribution functions, and it is the basis of the so-called Kolmogorov–Smirnov test.

If points are drawn to produce a sample from a process described by a Boltzmann cdf and are ordered as above, where all points comply with the conditions in equation set ([e:edf]), then all points have the form of eq. ([e:expboltz]), in which the error term is a random variable distributed as some pdf with given expectation and variance. The subindex introduced in equation ([e:expboltz]) allows for different experimental Boltzmann systems, or for the same Boltzmann system under different experimental conditions, often called _treatments_. Then, for two experimental samples we may define the set of differences ([e:deltaminf]), which has a _supremum_; the supremum of a subset of a totally or partially ordered set is the least element that is greater than or equal to all elements of the subset, and if it exists it is unique. The supremum expressed by eq. ([e:d]) may easily be evaluated numerically. In practice, eqs. ([e:deltaminf]) and ([e:d]) are defined only at the sampled points, and thus the supremum is in practice a maximum over a finite set. According to Kolmogorov–Smirnov statistics, if the statistic exceeds the critical value, the null hypothesis may be rejected at the corresponding confidence level. The critical value may be calculated with the algorithm of , is tabulated by , and may also be expressed through the coefficient tabulated by (p. 122 and table 55).

This manuscript was written in LaTeX using _TeXstudio_ for Linux (also available for Apple OS X and MS Windows, http://www.texstudio.org), an open-source free TeX editor.

Peigneur, S., Sevcik, C., Tytgat, J., Castillo, C., D'Suze, G.: Subtype specificity interaction of bactridines with mammalian, insect and bacterial sodium channels under voltage clamp conditions. FEBS J., 2012. In press, doi:10.1111/j.1742-4658.2012.08808.x.
Koegel, H., Kaesler, S., Burgstahler, R., Werner, S., Alzheimer, C.: Unexpected down-regulation of the hIK1 Ca^2+^-activated K^+^ channel by its opener 1-ethyl-2-benzimidazolinone in HaCaT keratinocytes. Inverse effects on cell growth and proliferation. J. Biol. Chem., 278: 3323–3330, 2003.
Schuster, P.: Boltzmann and evolution: some basic questions of biology seen with atomistic glasses. In Gavallotti, G., Yngvasson, J., Reiter, W. L. (eds.), _Boltzmann's Legacy_, Lectures in Mathematics and Physics, pages 217–241. European Mathematical Society Publishing House, Zürich, Switzerland, 2008.
Cambien, B., Rezzonico, R., Vitale, S., Rouzaire-Dubois, B., Dubois, J.-M., Barthel, R., Soilihi, B. K., Mograbi, B., Schmid-Alliana, A., Schmid-Antomarchi, H.: Silencing of hSlo potassium channels in human osteosarcoma cells promotes tumorigenesis. Int. J. Cancer, 123: 365–371, 2008.
Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC. arXiv:1207.7235v2, January 2013. European Organization for Nuclear Research (CERN), CERN-PH-EP/2012-220, 2013/01/29, CMS-HIG-12-028.
Ariëns, E. J., Simonis, A. M., van Rossum, J. M.: _Molecular Pharmacology. The Mode of Action of Biologically Active Compounds_, volume 1 of _Medicinal Chemistry_, chapter "Drug–receptor interaction: interaction of one or more drugs with one receptor system", pages 119–286. Academic Press, New York, 1964.
Sigmoid semilogarithmic functions with the shape of Boltzmann equations have become extremely popular to describe diverse biological situations. Part of the popularity is due to the easy availability of software which fits Boltzmann functions to data, without much knowledge of the fitting procedure or of the statistical properties of the parameters derived from the procedure. The purpose of this paper is to explore the plasticity of the Boltzmann function to fit data, some aspects of the optimization procedure used to fit the function to data, and how to use this plastic function to differentiate the effect of a treatment on data and to attest the statistical significance of the treatment effect. _Keywords: Boltzmann, energy states, probability, ion channels_
The problem of _variable-length_ lossy source coding is one of the fundamental research topics in Shannon theory. For this problem, several studies have adopted the _excess distortion probability_ as a distortion criterion (e.g., , , , ). The excess distortion probability is defined as the probability that the distortion between a source sequence and its reproduction is greater than a certain threshold. For the problem of variable-length lossy source coding under the excess distortion probability, there are mainly two criteria on codeword length: the _mean codeword length_ and the _overflow probability_. Kostina et al. have considered the mean codeword length and shown a non-asymptotic characterization of the optimal mean codeword length; they have also performed an asymptotic analysis of the optimal mean codeword length for i.i.d. sources. On the other hand, Yagi and Nomura and Nomura and Yagi have considered the overflow probability. In , they have treated the case where either the overflow probability or the excess distortion probability is less than or equal to a positive constant asymptotically for general sources. In , they have dealt with the case where the probability of the union of the events that an overflow occurs and that an excess distortion occurs is less than or equal to a positive constant asymptotically for general sources.

This paper considers the excess distortion probability and the overflow probability, as in and . However, the primary differences are that 1) we address the case where both the excess distortion probability and the overflow probability may be positive, and 2) we analyze both non-asymptotic and asymptotic cases. The contribution of this paper is a non-asymptotic (one-shot) and asymptotic characterization of the minimum threshold of the overflow probability by means of a new smooth max entropy-based quantity. In the non-asymptotic regime, coding theorems are shown for both stochastic and deterministic encoders. To show the achievability results, we give an explicit code construction instead of using the random coding argument; it turns out that the constructed code satisfies the properties of the optimal code shown in . Further, using the results obtained in the non-asymptotic regime, we establish an asymptotic coding theorem for general sources.

Let the source alphabet and the reproduction alphabet both be finite sets. Let $X$ be a random variable taking values in the source alphabet and $x$ a realization of $X$; the probability distribution of $X$ is denoted $P_X$. A distortion measure is defined on pairs of source and reproduction symbols, and the pair of an encoder and a decoder is defined as follows. An encoder maps source symbols into $\{0,1\}^*$, the set of all binary strings including the empty string; an encoder is possibly _stochastic_ and produces a non-prefix code, and the codeword length is denoted accordingly. A decoder maps binary strings back to reproduction symbols. Variable-length source coding _without_ the prefix condition is discussed, for example, in and .

The performance criteria considered in this paper are the excess distortion and overflow probabilities. Given a distortion level, the excess distortion probability for a code is defined accordingly; given a length threshold, the overflow probability for a code is defined as the probability that the codeword length exceeds the threshold. Using these criteria, we define a code: given the two tolerances, a code satisfying both constraints is called a code. The fundamental limits are the minimum thresholds for the given tolerances, which we define accordingly. Consider the special case in which the overflow probability is required to vanish.
From such a code, we can construct a fixed-length code achieving the corresponding rate and excess distortion probability. Thus, the minimum threshold in this special case represents the minimum rate in fixed-length lossy source coding under the excess distortion criterion.

The _smooth max entropy_, which is also called the smooth Rényi entropy of order zero, was first introduced by Renner and Wolf. Later, Uyematsu showed that the smooth max entropy can be defined in the following form: given $\delta$, it is the logarithm of the cardinality of the smallest set whose probability is at least $1 - \delta$. One of the useful properties of the smooth max entropy, used in the proof of the achievability result of our main theorem, is Schur concavity. To state the definition of a Schur concave function, we first review the notion of _majorization_. Consider vectors in the $n$-th Cartesian product of the set of non-negative real numbers, where $n$ is a positive integer, arranged in non-increasing order. If, for every prefix, the partial sums of the first vector dominate those of the second, with equality for the total sum, we say that the first vector _majorizes_ the second. Schur concave functions are defined as follows: a function is _Schur concave_ if its value at a majorized vector is at least its value at any vector that majorizes it. From the definitions, it is easy to see that the smooth max entropy is a Schur concave function (related Rényi entropies are Schur concave for some parameter ranges and Schur convex for others).

Next, using the smooth max entropy, we introduce a new quantity which plays an important role in producing our main results. Given the tolerances, it is defined through an infimum over conditional probability distributions, where $W$ denotes a conditional probability distribution of the reproduction given the source. For a given source and distortion level, suppose condition ([inf]) holds; then there are no codes whose excess distortion probability is less than or equal to the tolerance. Conversely, if such codes do not exist, ([inf]) holds. In this case, we define the corresponding thresholds to be infinite; further, if ([inf]) holds, we also define the new quantity to be infinite, because there is no conditional probability distribution satisfying the constraint.

The next lemma shows the achievability result for a code. Assume ([inf]) does not hold; then for any tolerances there exists a code satisfying the stated bound (see section IV-A). To prove the achievability result, we do not use the random coding argument but give an explicit code construction; the constructed code satisfies the properties of the optimal code shown in . The next lemma shows the converse bound: for any tolerances, any code satisfies the stated inequality (see section IV-B). Combining lemmas 1 and 2, we immediately obtain the following result, theorem 1. By theorem 1, the minimum threshold can be specified within one bit, regardless of the values of the tolerances and the distortion level. This result is mainly due to the explicit construction of good codes, rather than the random coding argument, given in section IV-A.

The next lemma shows the achievability result for a deterministic code: for any tolerances, there exists a deterministic code satisfying the stated bound (see section IV-C). From lemma 3 and the converse, the following result, theorem 2, is obtained. By theorem 2, the minimum threshold for deterministic codes can be specified within four bits, which is slightly weaker than the result for stochastic codes.

Let the $n$-th Cartesian products of the source and reproduction alphabets be given; let $X^n$ be a random variable taking values in the source product alphabet and $x^n$ a realization of $X^n$, with distribution denoted $P_{X^n}$. In this section, the coding problem for general sources is considered.
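Before proceeding, a small Python sketch of the smooth max entropy in the set-cardinality form of the definition above (the unconditional quantity only; the distortion-ball variant built on top of it is not reproduced here):

```python
import math

def smooth_max_entropy(p, delta):
    """log2 of the cardinality of the smallest set with probability >= 1 - delta."""
    assert 0.0 <= delta < 1.0
    mass, size = 0.0, 0
    for q in sorted(p, reverse=True):   # greedily keep the most probable symbols
        if mass >= 1.0 - delta:
            break
        mass += q
        size += 1
    return math.log2(size)

p = [0.5, 0.2, 0.15, 0.1, 0.05]
print(smooth_max_entropy(p, 0.0))   # log2 5: every symbol is needed
print(smooth_max_entropy(p, 0.1))   # log2 4: the 0.05 symbol can be dropped
```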
A distortion measure on $n$-blocks is defined accordingly. An encoder is possibly stochastic and produces a non-prefix code; a decoder is deterministic. We define a code as follows: given the tolerances, a code satisfying the constraints for all $n$ greater than some $n_0$ is called a code, and the fundamental limit is the corresponding minimum threshold. Given the tolerances, the next theorem, theorem 3, characterizes this limit by the smooth max entropy-based quantity, with the limiting quantity defined accordingly (see section IV-D).

As shown in theorem 3, we characterize the minimum threshold on the overflow probability by a quantity related to the _entropy_, whereas previous studies such as and have characterized it by a quantity related to the _mutual information_. In the non-asymptotic (one-shot) regime, the results on the minimum threshold differ for stochastic and deterministic encoders, as shown in theorems 1 and 2; in the asymptotic regime, however, the restriction to deterministic encoders does not change the minimum threshold. Instead of ([cd1]) and ([cd2]), as a generalization of the problem in , we can consider the conditions ([cd1d]) and ([cd2d]) and define the minimum threshold under them; then, by almost the same proof as that of theorem 3, the analogous characterization holds for any tolerances.

First, some notation is defined before the construction of the encoder and decoder is described.

* For the given tolerances, an auxiliary set is defined.
* We define a reordered distribution by the following procedure (the alphabets are finite sets; however, countably infinite alphabets can be allowed if this operation is admitted for them): the first value is the largest mass and, recursively, each subsequent value is the largest remaining mass.
* From the definition, the reordered masses are non-increasing.
* If the tolerance is positive, let $k_1$ be the integer satisfying ([ks1]); otherwise we define it accordingly.
* Let $k_2$ be the integer satisfying ([ks2]); from this definition, $k_1 \leqslant k_2$.
* Two auxiliary probabilities are defined.
* Let $s(i)$ be the $i$-th binary string in $\{0,1\}^*$ in increasing order of length, with ties broken arbitrarily; for example, $s(1)$ is the empty string, $s(2) = 0$, $s(3) = 1$, $s(4) = 00$, and so on.

We construct the following encoder and decoder.

**[Encoder]**
* For the first group of symbols, set the codeword to the corresponding short string.
* For the boundary symbol, set the codeword using ([ks1]) (the encoder may randomize here).
* For the remaining symbols, set a common codeword.

**[Decoder]**
* Set the reproduction of each codeword accordingly.

Now we evaluate the excess distortion probability. The distortion constraint is met for the first groups by construction, and is violated with probability at most the leftover mass for the rest; the stated bound follows. Next, we evaluate the overflow probability. From the construction of the encoder, it is easily verified that the codeword lengths are non-decreasing along the ordering. Hence, setting the threshold appropriately, the required bound follows, where the last inequality is due to the definition of $k_1$ and ([ih1]), and the last equality is due to the definition of the auxiliary probability. Therefore, the constructed code is a code with the claimed parameters. To complete the proof of the theorem, we shall show the matching entropy identity. Notice the chain of relations ([notice]), where the first two steps follow from the definition of the encoder and decoder and the third is due to ([hc]). (If the boundary randomization occurs, the equality in ([notice]) does not hold; however, the inequality remains true.) This implies one direction; hence, if ([hg]) is shown, the desired equation is obtained. To show ([hg]), the following lemma is useful.

[Lemma 4] If the distribution induced by a conditional distribution satisfying the constraint majorizes any other distribution induced by a conditional distribution satisfying the constraint, then the stated inequality holds. The lemma follows from the fact that the smooth max entropy is a Schur concave function, together with the definition of the quantity.
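Stepping back from the proof for a moment, a toy Python sketch of the codeword-assignment idea used in the construction above: the most probable symbols receive the shortest of the non-prefix-free binary strings. Distortion balls and the stochastic boundary handling of the actual proof are omitted:

```python
from itertools import count, product

def binary_strings():
    """Yield '', '0', '1', '00', '01', ... in increasing length, as s(1), s(2), ..."""
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def assign_codewords(p):
    """The i-th most probable symbol gets the i-th shortest (non-prefix-free) string."""
    order = sorted(range(len(p)), key=lambda i: -p[i])
    gen = binary_strings()
    return {i: next(gen) for i in order}

p = [0.05, 0.5, 0.2, 0.15, 0.1]
code = assign_codewords(p)
print(code)  # the prob-0.5 symbol gets '', then '0', '1', '00', ...
print(sum(p[i] * len(w) for i, w in code.items()))  # expected codeword length
```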
In view of lemma 4, we shall show that the constructed distribution majorizes any distribution induced by a conditional distribution satisfying the constraint. To show this fact, suppose the following condition: (*) _there exists a distribution satisfying the constraint but not majorized by the constructed one._ Assuming (*), we shall derive a contradiction. Order the competing distribution so that its first element carries the largest mass, its second the largest remaining mass, and so on; that is, its reordered masses are non-increasing. Considering the fact that the support of the constructed distribution is limited, together with assumption (*), we can say that there exists an index at which the partial sums cross, as in ([y11]) and ([y12]). On the other hand, the excess distortion probability under the competing distribution is evaluated as in ([ed]), where the indicator function appears and the last inequality holds for all indices. For the second term in ([ed]), the chain ([st]) holds, where the first step follows from the definition of the reordering, the second follows from ([a1]) and ([a2]), and the third follows from ([y11]) and ([y12]). Plugging ([st]) into ([ed]) gives a bound whose last inequality is due to ([mu]); this contradicts the fact that the competing distribution satisfies the excess distortion constraint.

For any code, set the relevant events; the definition of a code gives the two inequalities ([npc]). The first inequality in ([npc]) is rewritten as ([npc3]); hence ([npc4]) follows, where the superscript "c" represents the complement. From ([npc3]) and the definition of the smooth max entropy, a lower bound follows; on the other hand, since the lengths are bounded on the complementary event, combining ([npc4]) and ([npc5]) yields the claim. Thus, from the second inequality in ([npc]), we obtain the converse bound.

First, some notation is defined.
* Let $k_1$ be the integer satisfying ([ks1]) and ([ks2]).
* Define the auxiliary probability accordingly; then the stated bound holds.
* Let $j_1$ be the integer satisfying ([js1]).

We construct the following _deterministic_ encoder and decoder.

**[Encoder]**
* For the first group of symbols, set the codeword to the corresponding short string.
* For the remaining symbols, set a common codeword.

**[Decoder]**
* Set the reproduction of each codeword as before.

Now we evaluate the excess distortion probability; from the definition of the encoder and decoder, the required bound follows. Next, we evaluate the overflow probability: from the definition of the encoder, setting the threshold appropriately, the bound follows, where the last inequality is due to ([js1]). Therefore, the code is a deterministic code with the claimed parameters.

Let $k_1$ be the integer satisfying ([ih1]) and ([ih2]). Then, from the proof of lemma 1, the earlier identity holds, and it is easily verified that the two integers are comparable. If they coincide, the claim obviously holds; assuming they differ, we shall show the inequality ([th4]), whose rightmost step is due to Taylor's expansion. The first step is the chain ([jsks]), where the first inequality follows from ([ih1]) and ([js2]); inequality ([jsks]) is equivalent to the desired comparison, and thus the claim follows.

(Direct part.) From lemma 3, there exists a code satisfying ([t30]) and ([t31]). Fix a positive margin arbitrarily; then ([t31]) gives the bound ([t33]) for all sufficiently large $n$. Thus, from ([t30]) and ([t33]), the code is a code for all such $n$. Since the margin is arbitrary, this establishes achievability. (Converse part.) For any code, lemma 2 gives the bound; therefore the converse holds for any code. Hence, the theorem follows.

**Acknowledgment.** This work was supported in part by JSPS KAKENHI Grant Numbers 26289119, 16K00195, 16K00417, and 16K06340.

Saito, S., Matsushima, T.: Threshold of overflow probability using smooth max-entropy in lossless fixed-to-variable length source coding for general sources. IEICE Trans. Fundamentals, vol. E99-A, no. 12, pp. 2286–2290, Dec. 2016.
Weissman, T., Merhav, N.: Tradeoffs between the excess-code-length exponent and the excess-distortion exponent in lossy source coding. IEEE Trans. Inf. Theory, no. 2, pp. 396–415, Feb. 2002.
This paper investigates the problem of variable-length lossy source coding. We deal with the case where both the excess distortion probability and the overflow probability of codeword length are less than or equal to positive constants. The infimum of the thresholds on the overflow probability is characterized by a smooth max entropy-based quantity. Both non-asymptotic and asymptotic cases are analyzed. To show the achievability results, we do not utilize the random coding argument but give an explicit code construction.
Radiation conditions onboard LEO spacecraft are determined, in particular, by the charged particles penetrating from outside the magnetosphere. The measure of such a process at any given near-Earth space location is the local geomagnetic cutoff rigidity value. In this work we assume that the phrase "cutoff rigidity" denotes the effective vertical cutoff rigidity, whose value is obtained from the calculated discrete penumbra structure by the ordinary technique using white spectra (see, for example, ). It is well known that the magnitude of the cutoff rigidity and the position of the penetration boundaries depend on local time and on different parameters characterizing the magnetospheric conditions. Cutoff rigidity variations can be measured in experiments or obtained by computations using a model of the Earth's magnetosphere; however, the usual method of calculation is based on the resource-consuming trajectory computation technique. In this work we offer a method to calculate the cutoff rigidity, accounting for local time and for the main empirical parameters that characterize the magnetospheric conditions, at any point of near-Earth space. For applications where the cutoff calculation is needed frequently (e.g., transmissions for LEOs), our method provides a huge speedup in comparison with direct trajectory computations. The only inputs needed for the method are the IGRF rigidity at the point being explored and the dependence of the rigidity attenuation quotient on the empirical parameters. The first can be obtained by interpolation in tabulated data (see, for example, ), which is regularly updated for coming epochs; the second can be obtained only once, by particle trajectory computations in a corresponding numerical magnetospheric model. In our case the dynamic paraboloid model of the Earth's magnetosphere was chosen for the solution of the problem.

We have adopted in our work the cutoff rigidity attenuation formalism, which has already been successfully applied to transmission calculations using the Tsyganenko-89 model. The rigidity attenuation quotient in our case is calculated as a ratio in which the reference value corresponds to the set of model parameters for the non-disturbed magnetosphere condition, and the other corresponds to some set of parameters varied with respect to the quiet ones. As was tacitly postulated in , the dependence of the quotient on the point's geographical position need not be taken into account, because it is relatively small. Generally, this seems to be true, at least down to rigidities of about 1 GV, with good enough accuracy (the error is of the order of percents even for the lowest rigidities in this range). Hence, the approach of calculating a world-wide grid can be essentially simplified.

The algorithm we propose for obtaining the cutoff rigidity at a given point contains three stages:

* calculating the IGRF rigidity for the given point; for practical needs, interpolation in a pre-calculated grid for some altitude is often used, which is especially convenient when a large amount of results is needed quickly, and a scaling formula can be applied to transform the value to a new altitude;
* calculating the model-dependent cutoff rigidity attenuation quotient by interpolation in a pre-calculated database (or by extrapolation if the value is outside the applied basic array, see appendix);
* computing the resulting value as the product of the two.

The table of IGRF rigidity values for a given altitude needs to be renewed every five years as each new IGRF epoch comes. The attenuation quotient values were obtained by trajectory computations using the dynamic paraboloid model with the quiet parameter set (the sequence of variables is according to table [t1] below).
To compute the database intended to provide the dependence of the attenuation quotient on the model parameters and local time, we varied all model parameters in wide enough ranges, comprising extreme, quiet and "anti-extreme" parameter sets. The multidimensional grid applied for this calculation is presented in table [t1].

[Table [t1]. The applied parameter grid.]

The values at the edges of this grid were obtained (approximately) from . All calculations were performed for local time values of 0, 6, 12 and 18 hours. Technical details of the calculations are summarized in the appendix.

Let us now demonstrate some results on the dependence of the cutoff rigidity on local time and model parameters, computed using the presented technique. Fig. [f1] gives an opportunity to weigh the contribution of every model parameter to the resulting value for all points of the set noted above. These partial dependencies are obtained by fixing all parameters at their quiet values except the given one. Note that the role of one parameter as a depression factor wanes with decreasing rigidity, while that of another rises rapidly. The weight of two of the parameters seems to be unimportant for the values, at least in the presented case of quiet basic values, because they result in changes comparable with the applied rigidity step size (see appendix). On the contrary, a decrease of another parameter results in a huge growth (see also fig. [f2]); the effect of the remaining parameters here is clear but of moderate magnitude.

Fig. [f2] shows a family of daily variations for three points having normal cutoff values of different orders. For the first picture there are some hours where the cutoff is negligibly small, so such a point can be accessible to particles of any rigidity. The middle picture demonstrates the effect of the "anti-extreme" setting, leading not only to very high mean values (for this case), but also to a shift of the curve's maximum in the positive direction of the time axis. The upper curve in the rightmost part of fig. [f2] exhibits the insufficiency of the applied step size for the penumbra calculation (see appendix), leading to an underestimation of the value, which causes ambiguous interpolation results. However, this effect is rare and too weak to affect the resulting transmission, for whose calculation the presented method was created. A "gnarliness" of some presented curves is due to the effect of the sparse grid, the currently imperfect interpolation procedure, and the scale. It does not significantly affect the results of applications where many values are to be calculated, because the interpolation uses many values and their errors (which are of the order of the step size) are evened out. Extrapolating parameter values outside the grid (but not very far) is also possible and seems to be correct enough to give reliable results.

Magnetospheric transmission functions are directly connected with such tasks as evaluating LEO radiation conditions and interpreting the results of orbital experiments. Because of their importance, it is necessary to be able to account for more empirical factors affecting transmission. Here we'll give some examples of calculated magnetospheric transmission functions for several (quiet, disturbed and very extreme) model parameter sets for an ISS-like orbit, obtained using the presented technique.
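Before those examples, a schematic Python sketch of the three-stage computation described above. Everything numerical below — the IGRF table, the α database, the dipole-like 1/r² altitude rescaling, and the grid ranges — is a synthetic placeholder standing in for the real pre-computed data, not a reproduction of it:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Stage 1: interpolate a tabulated IGRF vertical cutoff rigidity on a (lat, lon)
# grid. The table below is a synthetic dipole-like placeholder, NOT real data.
lat = np.linspace(-90.0, 90.0, 37)
lon = np.linspace(0.0, 360.0, 73)
table = 15.0 * np.cos(np.radians(lat))[:, None] ** 4 * np.ones((37, 73))
r_igrf = RegularGridInterpolator((lat, lon), table)

def to_altitude(r, h_km, h0_km=450.0, re_km=6371.0):
    """Dipole-like 1/r^2 rescaling, standing in for the paper's altitude formula."""
    return r * ((re_km + h0_km) / (re_km + h_km)) ** 2

# Stage 2: attenuation quotient alpha on an (R_igrf, local time) grid; a real
# database would also span the magnetospheric parameters of table [t1].
r_grid = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
lt_grid = np.arange(0.0, 25.0, 3.0)
alpha_tab = 1.0 - 0.15 * np.exp(-r_grid[:, None] / 4.0) * (
    1.0 + 0.3 * np.cos(np.pi * (lt_grid - 3.0) / 12.0))
alpha = RegularGridInterpolator((r_grid, lt_grid), alpha_tab)

def effective_cutoff(lat_deg, lon_deg, h_km, local_time_h):
    r0 = to_altitude(float(r_igrf([[lat_deg, lon_deg]])[0]), h_km)
    a = float(alpha([[np.clip(r0, 0.5, 16.0), local_time_h]])[0])
    return a * r0  # stage 3: R_eff = alpha * R_igrf

print(effective_cutoff(45.0, 30.0, 400.0, 12.0))
```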
The essential note is that, unlike the Tsyganenko models, the paraboloid one is designed from the outset to describe even very extreme conditions, as it is not internally limited by experimental data arrays. Fig. [f3] presents some modeled transmissions for a circular orbit under different conditions and a long mission duration, where all localities are averaged and the transmission curve becomes smoothed. In comparison with the additionally plotted transmission obtained with the Tsyganenko-89 (T89) model, it is obvious that the paraboloid model gives results unattainable for T89 in modeling the most extreme conditions.

The method we have been developing is intended for applications requiring intensive geomagnetic cutoff calculations, first of all for express magnetospheric transmission calculation for given LEO missions, even under magnetospheric conditions changing during the flight. A test and transmission calculation based on the presented technique is available online at http://dec1.sinp.msu.ru/vovka/riho (a simplified version, where transmission calculation is allowed only for static conditions and circular orbits). The current version of the method uses the IGRF rigidity values table as the basic one (in step 1 of the scheme), although it seems that the best choice would be to apply some other basic world grid, for example a table directly calculated using the paraboloid model. Nevertheless, this does not undermine the attenuation quotient methodology itself, because its drawback in our case corresponds only to relatively small systematic inaccuracies between the conditional "quiet" values and the IGRF ones.

The author thanks Dr. Nymmik for fruitful discussions on the topic of the paper, and Dr. Kalegaev for the provided data and cluster computing facilities.

Here we summarize the technical details related to the performed trajectory calculations using the paraboloid model. Vertically directed protons were the test particles for the reverse trajectory calculations of the penumbra. A fourth-order integration scheme was used, accounting for the change of time (hence, of the magnetic field) during the particle's modeled flight. The geomagnetic field was the superposition of the IGRF epoch 2005 field with the dynamic paraboloid model (version 2004), with a uniform field vector outside the magnetopause, whose position is natively given by the code realizing the paraboloid model. The maximal flying distance was fixed; particles that covered it during their motion were considered allowed to penetrate. The Earth was represented by the WGS-82 ellipsoid with an atmosphere layer at a fixed height above its surface; particles which fell below it were considered reentrant. The rigidity step size for the penumbra calculation was fixed. All calculations were performed for a fixed altitude above the mean Earth radius for six selected points in the northern geographic hemisphere with basic IGRF rigidities spanning the range of interest.

Shea, M. A., Smart, D. F., McCracken, K. G.: JGR, 70:4117, 1965.
Stone, E. C.: JGR, 69(17):3577, 1964.
Smart, D. F., Shea, M. A., Gall, R.: JGR, 74(19):4731, 1969.
Danilova, O. A., Tyasto, M. I.: Proc. 24th ICRC, 5:1066–1069, 1995.
Ivanova, T. A., et al.: 25(1):7–12, 1985.
Flückiger, E. O., et al.: JGR, 91(A7):7925–7930, 1986.
Smart, D. F.: Magnetospheric models and trajectory computations. Space Sci. Rev., 93:305–333, 2000.
Smart, D. F.: ASR, 37:1206–1217, 2006.
Smart, D. F., et al.: Proc. 26th ICRC, 7 (SH 3.6.28), 1999a.
Smart, D. F., et al.: Proc. 26th ICRC, 7 (SH 3.6.29), 1999b.
Smart, D. F., Shea, M. A.: Proc. 25th ICRC, 2:397–400, 1997.
Nymmik, R. A.: Diurnal variations of geomagnetic cutoff boundaries and the penetration function. Cosmic Research, 29(3):491–493, 1991.
Nymmik, R. A., et al.: Proc. 30th ICRC, 1:701–704, 2007.
Tsyganenko, N. A.: Planet. Space Sci., 37(1):5–20, 1989.
Tylka, A. J., et al.: CREME96: a revision of the Cosmic Ray Effects on Micro-Electronics code. IEEE Transactions on Nuclear Science, 44, 2150–2160 (1997); and references therein.
Alexeev, I. I., et al.: Magnetic storms and magnetotail currents. JGR, 101(4):7737–7748, 1996.
Alexeev, I. I., et al.: The model description of magnetospheric magnetic field in the course of magnetic storm on January 9–12, 1997. JGR, 106(A11):25683–25694, 2001.
Alexeev, I. I., et al.: Modelling of the electromagnetic field in the interplanetary space and in the Earth's magnetosphere. Space Sci. Rev., 107(1/2):7–26, 2003.
Kalegaev, V. V.: Paraboloid model input for two large events in 2003–2004. Private communication, 2006.
http://www.magnetosphere.ru/iso
Nymmik, R. A.: Doctoral dissertation. SINP MSU, Moscow, 1998.
Using the dynamic paraboloid model of the Earth's magnetosphere, a large set of particle trajectory computations was performed. Based on its results, a numerical algorithm for calculating the dependence of the effective cutoff rigidity on empirical parameters has been developed, for further use in magnetospheric transmission calculations. _Keywords: cutoff rigidity, dynamic paraboloid model, magnetospheric transmission, radiation conditions on LEO_
Symmetric matrix inversion is one of the most important problems for many practical tasks, e.g., the analysis of electrical circuits with inductance elements, the synthesis of Kalman or Wiener filters, and the use of the finite element method. Existing symmetric matrix inversion methods are the Cholesky decomposition, the LDL decomposition, the bordering method, and the most efficient, the Krishnamoorthy–Menon method (based on the Cholesky decomposition and requiring operations with square root computations). The aim of this paper is to propose a symmetric matrix inversion method which reduces machine time consumption compared to the Krishnamoorthy–Menon method by avoiding square root computations. Moreover, this fact allows us to use the proposed method for efficient inversion of symmetric matrices not only with strict diagonal dominance, but without diagonal dominance as well.

This section shows the modified Gaussian elimination on which the proposed method is based –. Let there be a system of linear equations ([sle]). During modified Gaussian elimination an auxiliary matrix changes instead of the original matrix, but additional memory is not necessary in this case. Let the vector of unknowns consist of two types of variables: required, which should be found during elimination, and unrequired, which are of no interest to the researcher. Re-solution of ([sle]) after some changes of the data can then be done with a reduced number of multiplications and divisions, using the formulae from –. Starting from the identity matrix as the auxiliary matrix, the solution of ([sle]) can be obtained with the formulae ([modification]) below:

* the first update rule applies when the pivot corresponds to an unrequired variable;
* if the pivot vanishes, two rows should be permuted; such a permutation is always possible for a non-singular matrix (see –);
* further rules apply for the other required variables;
* a final rule applies for any required variable.

It can easily be checked that the claimed identities hold for any required variable. Further, if all elements of the vector are required variables, then the auxiliary matrix becomes the inverse. Matrix inversion using modified Gaussian elimination requires a known number of multiplications and divisions; if only the last element is a required variable, the count is smaller. Letting the number of required variables be given, the number of multiplications and divisions for Gaussian elimination can be determined with the formula from .

This section presents a method for efficient symmetric matrix inversion based on modified Gaussian elimination. The first variant of the proposed method consists of two different stages. In the first stage we use formulae (2) with only the last variable required (indeed, the proposition is valid more generally as well). In the second stage we use the additional formulae described below.

Let us introduce some notation for the matrix entries and for the lower triangular submatrices involved. It is easily shown that after the first stage the auxiliary matrix is lower triangular; after the second stage it becomes the matrix that would be obtained if we used (2) for all variables. For symmetric matrix inversion using modified Gaussian elimination it is enough to use only the lower triangular matrix. To prove this statement, we need lemma [lemma_1].

[lemma_1] Let the auxiliary matrix be as in ([modification]). Then the stated symmetry relation holds. Using ([modification]), let us consider four cases.
Case 1. The relation follows directly.
Case 2. The relation follows directly.
Case 3. It can easily be checked.
Case 4. It can easily be checked.

From lemma [lemma_1] it follows that the matrix in question is symmetric; this statement is needed for the second stage of the method. Let the matrix after the change be expressed through the lower triangular matrix obtained after the first stage. Then the second stage is described by the formulae below:

* an update rule for the diagonal entries;
* an update rule for the off-diagonal entries.

It is easy to prove that the result equals the inverse up to a diagonal matrix. The number of multiplications and divisions for the first stage of the method is given by the formula noted earlier; it is easily shown that the number for the second stage is given by an analogous formula. Both stages of symmetric matrix inversion using the first variant of the proposed method require fewer multiplications and divisions than the Cholesky decomposition or the LDL decomposition (see table 1). Let us remark that the proposed method avoids square root computations; this considerably reduces machine time consumption and makes it possible to use the proposed method not only for positive definite, but for any invertible symmetric matrices as well.

The second variant of the proposed method consists of only one stage. Suppose that all elements of a given row of the matrix are zeros except as stated. For the explanation of the following formulae we need lemma [lemma_2].

[lemma_2] Let the auxiliary matrix be as in ([modification]). Then the relation ([second_variant]) follows. Using ([modification]), the two cases are verified directly.

Combining lemma [lemma_1] with lemma [lemma_2], we get the formulae ([second_a]). The formulae ([second_a]) describe the idea of the method in detail, but for practical tasks it is better to use the different formulae ([second_b]); it can easily be shown that ([second_a]) and ([second_b]) are equivalent. The number of multiplications and divisions for the second variant of the proposed method is given by the corresponding formula. It is less than the requirements of the Cholesky decomposition, the LDL decomposition, or the Krishnamoorthy–Menon method. It equals the requirements of the bordering method, but it should be noted that the bordering method cannot be used for the inversion of a matrix with a vanishing leading minor, whereas the proposed method can be used in that case provided the matrix itself is non-singular. Let us remark that the second variant of the proposed method avoids square root computations as well.

In order to demonstrate the advantages of the proposed algorithms, we use MATLAB-based simulation on different CPUs. We give results for an Intel Core i5-3230M 2.60 GHz below (an Intel Pentium Dual Core T2390 1.86 GHz and an Intel Atom N450 1.67 GHz give similar results). We compare the proposed algorithms with the most efficient known algorithms for symmetric matrix inversion: the Cholesky decomposition, the LDL decomposition, and the Krishnamoorthy–Menon method , , . We generate table 1 with full equations describing the number of multiplications, divisions and square root computations for every noted method.
The first row of Table 1 describes Cholesky decomposition and the solving of the systems of linear equations and . The second row describes LDL decomposition and the solving of the SLEs , , and . The third row describes matrix inversion using the Krishnamoorthy-Menon method, based on Cholesky decomposition. The fourth row describes the first variant, and the fifth row the second variant, of the proposed method.

Experiment 1: inversion of a real symmetric matrix with strict diagonal dominance. Let be the number of multiplications and divisions, and the number of square root computations, determined with the formulae from Table 1. Let and be the numbers of operations determined with counter variables in the MATLAB scripts. The results of the experiment are given in Tables 2 and 3.

Experiment 2: inversion of real symmetric matrices of order with strict diagonal dominance. Let be the time for matrix inversion; let be defined via , where is the matrix inverted by one of the described methods, and is the matrix inverted by the built-in MATLAB function. The results of the experiment are given in Tables 4 and 5.

Experiment 3: inversion of real symmetric matrices of order without diagonal dominance. The results of the experiment are given in Table 6.

From Tables 2 and 3 we can conclude that the formulae from Table 1 are correct. From Tables 4 and 5 we can conclude that both variants of the proposed method provide a notable reduction of machine time and have good accuracy. From Table 6 we can conclude that the advantages of both variants of the proposed method increase for matrices without diagonal dominance. Let us remark that this is especially important for inductance matrix inversion.

We have proposed a new method for symmetric matrix inversion based on modified Gaussian elimination that avoids square root computations. The proposed method can be useful for any scientific or technical problem involving symmetric matrix inversion, especially if the matrix does not have diagonal dominance.

Chua, L. O.; Lin, P.-M.: "Machine Analysis of Electronic Circuits", in Russian, The Energy, Moscow, 1980.
Happonen, A.; Burian, A.; Hemming, E.: "A reconfigurable processing elements implementation for matrix inversion using Cholesky decomposition", World Academy of Science, Engineering and Technology, International Journal of Computer, Information, Systems and Control Engineering, 2007, vol. 1, no. 3.
Strang, G.; Fix, G.: "An Analysis of the Finite Element Method", in Russian, The World, 1977.
Golub, G. H.; Van Loan, C. F.: "Matrix Computations", in Russian, The World, Moscow, 1999.
Krylov, V. I.; Bobkov, V. V.; Monastyrskii, P. I.: "Numerical Methods in Applied Mathematics", in Russian, The Higher School, 1972.
Krishnamoorthy, A.; Menon, D.: "Matrix inversion using Cholesky decomposition", arXiv: http://arxiv.org/abs/1111.4144.
Krishnamoorthy, A.; Menon, D.: "Matrix inversion using Cholesky decomposition", IEEE Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA) Conference, 26-28 September 2013, Poznan, Poland.
Savelov, N. S.: "Transient analysis for preliminarily ordered electrical circuits", in Russian, Russian Electromechanic, 1985, no. 4, pp. 85-92.
Savelov, N. S.: "Forming of state equation for changing electrical circuits", in Russian, Russian Electromechanic, 1987, no. 12, pp. 13-18.
Savelov, N. S.; Kochubei, P. M.: "Fast electrical circuit analysis with multiprocessor system", in Russian, Russian Electromechanic, 2012, no. 5.
Savelov, N. S.; Khlyntsev, S. G.: "Efficient method for parametric synthesis of electrical circuits", in Russian, Russian Electromechanic, 2013, no. 5, p. 9.
Savelov, N. S.; Kochnev, A. V.: "Fast electrical circuit analysis with reduced machine time and memory spending", in Russian, Russian Electromechanic, 2013, no. 4, p. 3.
Samarskii, A. A.; Gulin, A. V.: "Numerical Methods", in Russian, The Science, Moscow, 1989.
MATLAB Central, 1994-2014 The MathWorks, Inc.: http://www.mathworks.com/matlabcentral/fileexchange/41957-matrix-inversion-using-cholesky-decomposition
In this paper we present two variants of a method for symmetric matrix inversion based on modified Gaussian elimination. Both variants avoid the computation of square roots and reduce machine time. Further, both of them can be used efficiently not only for positive (semi-)definite, but for any non-singular symmetric matrix inversion. We use simulation to verify the results presented in this paper.
In this study, we consider the generalized -component reaction diffusion system : in , where is an open bounded domain of class in with boundary . The diffusion matrix is assumed to be of the form
\[
\left( \begin{array}{cccccc}
\alpha_{1} & \gamma_{1} & 0 & \cdots & \cdots & 0\\
\beta_{1} & \alpha_{2} & \gamma_{2} & \ddots & & \vdots\\
0 & \beta_{2} & \alpha_{1} & \gamma_{1} & \ddots & \vdots\\
\vdots & \ddots & \beta_{1} & \alpha_{2} & \gamma_{2} & 0\\
\vdots & & \ddots & \beta_{2} & \ddots & \ddots\\
0 & \cdots & \cdots & 0 & \ddots & \ddots
\end{array} \right)_{m\times m}, \label{trimat1}
\]
with being positive real numbers representing the self- and cross-diffusion constants and satisfying the inequality . The Laplacian operator acts in spatial dimension , and is a polynomially growing functional representing the reaction terms of the system. The boundary conditions and initial data for the proposed system are assumed to satisfy or , and , respectively. For generality, we consider three types of boundary conditions in this paper:

1. nonhomogeneous Robin boundary conditions, corresponding to ;
2. homogeneous Neumann boundary conditions, corresponding to ;
3. homogeneous Dirichlet boundary conditions, corresponding to .

Note that denotes the outward normal derivative on , and the vectors , , and are defined as . The initial data is assumed to lie in the region given by , subject to .

The study at hand builds on numerous previous works in the literature. Among the most relevant is that of Abdelmalek, who considered an -component tridiagonal matrix of the form
\[
\left( \begin{array}{ccccc}
\alpha & \gamma & 0 & \cdots & 0\\
\beta & \alpha & \gamma & \ddots & \vdots\\
0 & \beta & \ddots & \ddots & 0\\
\vdots & \ddots & \ddots & \ddots & \gamma\\
0 & \cdots & 0 & \beta & \alpha
\end{array} \right)_{m\times m},
\]
and proved the global existence of solutions subject to the parabolicity condition , which can easily be shown to fall under the general condition in ([para1]) with , , and . Another important study is that of Kouachi and Rebiai, where the authors established the global existence of solutions for a tridiagonal matrix of the form
\[
\left( \begin{array}{ccc}
\alpha_{1} & \gamma_{1} & 0\\
\beta_{1} & \alpha_{2} & \gamma_{2}\\
0 & \beta_{2} & \alpha_{1}
\end{array} \right),
\]
subject to the parabolicity condition . Note that this condition is weaker than the one obtained from ([para1]) for . Although that work is important to us here, it is necessary to note that the authors did not identify all the invariant regions of the proposed system and settled for only of them. This paper builds on these two studies by assuming the diffusion matrix to be tridiagonal and determining all the possible invariant regions for the system. A Lyapunov functional is then used to establish the global existence of solutions in these regions.

The remainder of this paper is organized as follows. Section [seceigen] uses the three-point Chebyshev recurrence relationship of polynomials to derive the eigenvalues and eigenvectors of the transposed diffusion matrix for the odd and even dimension cases, respectively. Section [secpara] derives the parabolicity conditions for the proposed system, which are essential for the diagonalization process that follows in Section [secmain]. Section [secmain] shows how the invariant regions of the equivalent diagonalized system can be identified and proves the local and global existence of solutions. The last section of this paper will
present a confirmation and validation of the findings through the use of numerical examples solved by means of the finite difference approximation method.

For reasons that will become apparent in the following section, we first derive the eigenvalues and eigenvectors of the matrix , with being the proposed tridiagonal diffusion matrix. We refer to the work of Gover, where the characteristic polynomial of a tridiagonal matrix was shown to be closely connected to polynomials that satisfy the three-point Chebyshev recurrence relationship. First, we have
\[
\left( \begin{array}{cccccc}
\alpha_{1} & \beta_{1} & 0 & \cdots & \cdots & 0\\
\gamma_{1} & \alpha_{2} & \beta_{2} & \ddots & & \vdots\\
0 & \gamma_{2} & \alpha_{1} & \beta_{1} & \ddots & \vdots\\
\vdots & \ddots & \gamma_{1} & \alpha_{2} & \beta_{2} & 0\\
\vdots & & \ddots & \gamma_{2} & \ddots & \ddots\\
0 & \cdots & \cdots & 0 & \ddots & \ddots
\end{array} \right)_{m\times m}. \label{2.0}
\]
The exact shape and characteristics of differ for odd and even values of the dimension ; hence, we consider the two cases separately. Before presenting the main findings of , let us define the constants . We also define the polynomials
\[
\left\{ \begin{array}{l}
q_{0}\left( \mu\right) =1,\quad q_{1}\left( \mu\right) =\mu+\beta\\
q_{n+1}\left( \mu\right) =\mu q_{n}\left( \mu\right) -q_{n-1}\left( \mu\right),
\end{array} \right. \label{2.1}
\]
and
\[
\left\{ \begin{array}{l}
p_{0}\left( \mu\right) =1,\quad p_{1}\left( \mu\right) =\mu\\
p_{n+1}\left( \mu\right) =\mu p_{n}\left( \mu\right) -p_{n-1}\left( \mu\right),
\end{array} \right. \label{2.2}
\]
whose zeros are denoted by and , respectively, for . We note that is (up to rescaling) a Chebyshev polynomial, whereas is not. As shown in , the zeros of can be given explicitly by , whereas for no explicit form was found. Let us now summarize the eigenvalues and eigenvectors for the odd and even cases separately.

First, for , we obtain the following results.

[oddeigval] The eigenvalues of the matrix of order given in ([2.0]) are , along with the solutions of the quadratic equations for . Note that for every there exist two eigenvalues of the matrix , which along with yields eigenvalues. For notational purposes, let us define a duplicated set of zeros given by for .

[oddeigvec] The eigenvector of the matrix of order given in ([2.0]) associated with the eigenvalue , for , is given by , where
\[
\left\{ \begin{array}{ll}
s^{\frac{\ell-1}{2}}q_{\frac{\ell-1}{2}}\left( p_{r}^{\prime}\right), & \ell\text{ is odd}\\
-\frac{1}{\beta_{1}}s^{\frac{\ell}{2}-1}\left( \alpha_{1}-\lambda_{r}\right) p_{\frac{\ell}{2}-1}\left( p_{r}^{\prime}\right), & \ell\text{ is even,}
\end{array} \right. \label{2.5.1}
\]
for . The eigenvector associated with the eigenvalue is , with
\[
\left\{ \begin{array}{ll}
\left( -\frac{\gamma_{1}}{\beta_{2}}\right)^{\frac{\ell-1}{2}}, & \ell\text{ is odd}\\
0, & \ell\text{ is even,}
\end{array} \right. \label{2.5.3}
\]
for .

The second case is where the matrix ([2.0]) has an even dimension . The following holds.

[eveneigval] The eigenvalues of the matrix of order given in ([2.0]), denoted by , are the solutions of the quadratic equations for , where are the zeros of . Similarly to the odd case, there exist two eigenvalues of the matrix associated with every value of , which yields eigenvalues.
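Since the explicit form of the zeros was lost above, it may help to note, as an observation consistent with the recurrence ([2.2]), that p_n(mu) = U_n(mu/2), a rescaled Chebyshev polynomial of the second kind, so its zeros are mu_r = 2 cos(r*pi/(n+1)), r = 1, ..., n; the shifted seed in ([2.1]) is what prevents a similar closed form for q_n. A short numerical check of this claim:

```python
import numpy as np

def p(n, mu):
    """Evaluate p_n(mu) from the three-term recurrence
    p_0 = 1, p_1 = mu, p_{n+1} = mu * p_n - p_{n-1}."""
    p_prev, p_curr = 1.0, mu
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        p_prev, p_curr = p_curr, mu * p_curr - p_prev
    return p_curr

# zeros of the rescaled Chebyshev polynomial of the second kind
n = 7
zeros = [2.0 * np.cos(r * np.pi / (n + 1)) for r in range(1, n + 1)]
assert all(abs(p(n, mu)) < 1e-9 for mu in zeros)
```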
In order to simplify the notation, we define the duplicated set of zeros given by for .

[eveneigvec] The eigenvector of the matrix of order given in ([2.0]) associated with the eigenvalue is given by , with
\[
\left\{ \begin{array}{ll}
s^{\frac{\ell-1}{2}}q_{\frac{\ell-1}{2}}\left( q_{r}^{\prime}\right), & \ell\text{ is odd}\\
-\frac{1}{\beta_{1}}s^{\frac{\ell}{2}-1}\left( \alpha_{1}-\lambda_{r}\right) p_{\frac{\ell}{2}-1}\left( q_{r}^{\prime}\right), & \ell\text{ is even,}
\end{array} \right. \label{2.7.1}
\]
for .

In this section, we derive the parabolicity condition for the proposed system. Parabolicity is crucial to the diagonalization process, which will be discussed later in Section [secmain]. In order to ensure the parabolicity of the system, we examine the positive definiteness of the proposed diffusion matrix. Generally speaking, a symmetric matrix is positive definite if and only if its leading (top-left) principal minors are all positive. To this end, Andelic and da Fonseca, among others, examined the positive definiteness of a tridiagonal symmetric matrix. The following theorem holds.

[andelicprop] Let be the tridiagonal matrix defined as
\[
\left( \begin{array}{ccccc}
a_{1} & b_{1} & 0 & \cdots & 0\\
b_{1} & a_{2} & b_{2} & & \vdots\\
0 & b_{2} & \ddots & \ddots & \vdots\\
\vdots & & \ddots & \ddots & b_{m-1}\\
0 & \cdots & \cdots & b_{m-1} & a_{m}
\end{array} \right)
\]
with positive diagonal entries. If for , then is positive definite.

Since the diffusion matrix considered here is not symmetric, Proposition [andelicprop] does not apply to it directly. However, we know that if a matrix is not symmetric, its quadratic form , with an arbitrary column vector, is positive definite if and only if the top-left principal minors of are all positive. In order to derive sufficient conditions for the matrix in ([trimat1]), we apply Proposition [andelicprop] to produce the following theorem.
Let be the tridiagonal matrix defined in ([trimat1]). The quadratic form of is positive definite iff condition ([para1]) is satisfied. It follows that, subject to ([para1]), the reaction diffusion system ([sys1]) satisfies the parabolicity condition.

Condition ([andeliccond]) can be rearranged into the form . The symmetric counterpart of as defined in ([trimat1]) is given by
\[
\left( \begin{array}{cccccc}
\alpha_{1} & \frac{\beta_{1}+\gamma_{1}}{2} & 0 & \cdots & \cdots & 0\\
\frac{\beta_{1}+\gamma_{1}}{2} & \alpha_{2} & \frac{\beta_{2}+\gamma_{2}}{2} & \ddots & & \vdots\\
0 & \frac{\beta_{2}+\gamma_{2}}{2} & \alpha_{1} & \frac{\beta_{1}+\gamma_{1}}{2} & \ddots & \vdots\\
\vdots & \ddots & \frac{\beta_{1}+\gamma_{1}}{2} & \alpha_{2} & \frac{\beta_{2}+\gamma_{2}}{2} & 0\\
\vdots & & \ddots & \frac{\beta_{2}+\gamma_{2}}{2} & \ddots & \ddots\\
0 & \cdots & \cdots & 0 & \ddots & \ddots
\end{array} \right). \label{para3}
\]
Now, substituting ([para3]) in ([para2]) yields the set of conditions
\[
\left\{ \begin{array}{ll}
\text{for } i=1: & \sqrt{\alpha_{1}\alpha_{2}}>\left( \beta_{1}+\gamma_{1}\right) \cos\left( \frac{\pi}{m+1}\right)\\
\text{for } i=2: & \sqrt{\alpha_{1}\alpha_{2}}>\left( \beta_{2}+\gamma_{2}\right) \cos\left( \frac{\pi}{m+1}\right)\\
\text{for } i=3: & \sqrt{\alpha_{1}\alpha_{2}}>\left( \beta_{1}+\gamma_{1}\right) \cos\left( \frac{\pi}{m+1}\right)\\
\multicolumn{1}{c}{\vdots} & \multicolumn{1}{c}{\vdots}\\
\text{for } i=m-1: & \left\{ \begin{array}{ll}
\sqrt{\alpha_{1}\alpha_{2}}>\left( \beta_{2}+\gamma_{2}\right) \cos\left( \frac{\pi}{m+1}\right), & \text{if } m\text{ is odd}\\
\sqrt{\alpha_{1}\alpha_{2}}>\left( \beta_{1}+\gamma_{1}\right) \cos\left( \frac{\pi}{m+1}\right), & \text{if } m\text{ is even.}
\end{array} \right.
\end{array} \right.
\]
However, we notice that these conditions reduce to only two, which can be combined to form condition ([para1]).

This section shows how the proposed system can be diagonalized using the eigenvectors derived in Section [seceigen] above. We start by examining the invariant regions of the system and then move on to diagonalize the system and establish the local and global existence of solutions given that the initial data lies within the invariant regions. Let us denote the positive and descendingly ordered eigenvalues of the matrix by , with , and the corresponding eigenvectors by , where . Assuming the proposed system satisfies the parabolicity condition ([para1]), the matrix is guaranteed to have strictly positive eigenvalues and is thus diagonalizable. Generally, the diagonalizing matrix can be formed by taking as its columns the normalized eigenvectors of . Recalling that for every eigenvalue there exist two eigenvectors with unit norm and opposite directions, we can define the diagonalizing matrix as , where each power is either equal to or .
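Both the parabolicity condition and the diagonalizability it guarantees are easy to check numerically. The sketch below builds the tridiagonal matrix of ([trimat1]) for illustrative coefficients (the same values as the numerical example in the final section, extended to m = 7), tests positive definiteness of the symmetric part against the closed-form condition sqrt(alpha1*alpha2) > max(beta1+gamma1, beta2+gamma2) * cos(pi/(m+1)), and verifies that the eigenvectors of the transpose diagonalize it:

```python
import numpy as np

def tridiag(m, a1, a2, b1, b2, g1, g2):
    """Tridiagonal matrix of ([trimat1]) with 2-periodic entries."""
    d = np.zeros((m, m))
    for i in range(m):
        d[i, i] = a1 if i % 2 == 0 else a2
        if i + 1 < m:
            d[i, i + 1] = g1 if i % 2 == 0 else g2   # super-diagonal
            d[i + 1, i] = b1 if i % 2 == 0 else b2   # sub-diagonal
    return d

m, a1, a2, b1, b2, g1, g2 = 7, 1.0, 1.5, 0.3, 0.25, 0.5, 0.7
d = tridiag(m, a1, a2, b1, b2, g1, g2)

# parabolicity: positive definiteness of the symmetric part (D + D^T)/2,
# compared against the closed-form condition of this section
sym_pd = np.all(np.linalg.eigvalsh((d + d.T) / 2) > 0)
closed_form = np.sqrt(a1 * a2) > max(b1 + g1, b2 + g2) * np.cos(np.pi / (m + 1))
assert closed_form and sym_pd

# diagonalization: the unit-norm eigenvectors of D^T form a valid P
lam, p = np.linalg.eig(d.T)
assert np.allclose(np.linalg.inv(p) @ d.T @ p, np.diag(lam))
```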
in order to simplify the notation ,let us consider the two disjoint sets and which satisfy the properties each permutation of and satisfying ( [ 3.3 ] ) yields a valid diagonalizing matrix .the total number of possible permutations is thus , which is also the number of invariant regions for the proposed system .these regions may be written as subject to for simplicity , we will only consider one of the invariant regions which corresponds to the sets and and is defined in ( [ invreg1 ] ) and ( [ invreg2 ] ) .this yields the diagonalizing matrix note that the work carried out in the following subsections can be trivially extended to the remaining regions . in order to establish the local existence of solutions for the proposed system ( [ sys1 ] ) , we start by diagonalizing the system by means of the diagonalizing matrix defined in ( [ 3.6 ] ) .we follow the same work performed in to obtain the equivalent diagonal system . first , let{ll}\left \langle v_{\ell},u\right \rangle , & \ell \in \mathfrak{l}\\ \left \langle \left ( -1\right ) v_{\ell},u\right \rangle , & \ell \in \mathfrak{z.}\end{array } \right.\end{aligned}\ ] ] let us also define the functional with each function fulfilling the following conditions : 1 . must be continuously differentiable on for all , satisfying , for all .2 . must be of polynomial growth ( see the work of hollis and morgan ) , which means that for all : 3 .must satisfy the inequality: where for all .all the constants satisfy where are sufficiently large positive constants .note that and are uniformly bounded positive functions defined on .finally , let now , by observing the similarity transformation we can propose the following : [ diagprop]diagonalizing system ( [ sys1 ] ) by means of yields with the boundary condition or and the initial data the proof of proposition [ diagprop ] is trivial and can be looked up in .the diagonal system in ( [ eqsys ] ) is equivalent to ( [ sys1 ] ) in the invariant region given in ( [ invreg1 ] ) and ( [ invreg2 ] ) . by considering the equivalent diagonal system in ( [ eqsys ] ), we can now establish the local existence and uniqueness of solutions for the original system ( [ sys1 ] ) with initial data in or , using the basic existence theory for abstract semilinear differential equations ( friedman , henry and pazy ) .it simply follows that the solutions are classical on , with denoting the eventual blow up time in .the local solution is continued globally by _ apriori _ estimates .the aim here is to establish the global existence of solutions for the equivalent system ( [ eqsys ] ) and consequently the original system ( [ sys1 ] ) subject to the parabolicity condition ( [ para1 ] ) through the use of an appropriate lyapunov functional .the results obtained here are similar to those of .hence , no detailed proofs will be given here .let us define ^{2},\text { } r=3, ... ,l\text { , } \label{mycond}\ ] ] where \right ) ^{2^{\left ( r - k-2\right ) } } , \text { } r=3, ... ,l-1\text{,}\]] , \ ] ] and .\ ] ] the term denotes the determinant of the square symmetric matrix obtained from by removing the rows and the columns . where , ... 
, \det \left [ m\right ] ] .[ corollary1]under the assumptions of theorem [ globaltheo ] , all solutions of ( [ eqsys ] ) with positive initial data in are in for some .[ proposition2]under the assumptions of theorem [ globaltheo ] and given that the condition ( [ para1 ] ) is satisfied , all solutions of ( [ eqsys ] ) with positive initial data in are global for some .in order to put the findings of this study to the test , let us consider the following -component system where the transposed diffusion matrix is given by{ccccc}1 & 0.5 & 0 & 0 & 0\\ 0.3 & 1.5 & 0.7 & 0 & 0\\ 0 & 0.25 & 1 & 0.5 & 0\\ 0 & 0 & 0.3 & 1.5 & 0.7\\ 0 & 0 & 0 & 0.25 & 1 \end{array } \right ) , \label{6.2}\ ] ] and the reaction functional is of the form{ccccc}f_{1 } & f_{2 } & f_{3 } & f_{4 } & f_{5}\end{array } \right ) ^{t},\ ] ] with for the purpose of this example , let be the symmetric matrices given by{ccccc}0.0146 & -0.0257 & 0.0073 & -0.0088 & 0\\ -0.0257 & 0.0202 & -0.004 & 0 & 0.0044\\ 0.0073 & -0.004 & 0.0005 & 0.0011 & -0.0015\\ -0.0088 & 0 & 0.0011 & -0.0043 & 0.0027\\ 0 & 0.0044 & -0.0015 & 0.0027 & -0.0007 \end{array } \right ) , \]]{ccccc}0.1142 & 0.228 & 0.0571 & 0.1293 & 0\\ 0.228 & -0.2281 & -0.0153 & 0 & -0.0646\\ 0.0571 & -0.0153 & 0.0041 & 0.0158 & -0.0122\\ 0.1293 & 0 & 0.0158 & 0.0489 & -0.0244\\ 0 & -0.0646 & -0.0122 & -0.0244 & -0.0061 \end{array } \right ) , \]]{ccccc}0.3702 & -0.1245 & 0.1851 & 0.0194 & 0\\ -0.1245 & 0.0371 & -0.0817 & 0 &-0.0097\\ 0.1851 & -0.0817 & 0.0132 & 0.0364 & -0.0397\\ 0.0194 & 0 & 0.0364 & -0.0079 & 0.0133\\ 0 & -0.0097 & -0.0397 & 0.0133 & -0.0198 \end{array } \right ) , \]]{ccccc}-0.1316 & -0.1013 & -0.0658 & -0.0743 & 0\\ -0.1013 & 0.1177 & 0.0236 & 0 & 0.0371\\ -0.0658 & 0.0236 & -0.0047 & -0.0154 & 0.0141\\ -0.0743 & 0 & -0.0154 & -0.0252 & 0.0108\\ 0 & 0.0371 & 0.0141 & 0.0108 & 0.0070 \end{array } \right ) , \ ] ] and{ccccc}-0.1651 & 0.5295 & -0.0825 & 0.2108 & 0\\ 0.5295 & -0.4429 & 0.0539 & 0 & -0.1054\\ -0.0825 & 0.0539 & -0.0059 & -0.0081 & 0.0177\\ 0.2108 & 0 & -0.0081 & 0.0949 & -0.0567\\ 0 & -0.1054 & 0.0177 & -0.0567 & 0.0088 \end{array } \right ) .\ ] ] also , suppose that{l}\sigma_{1}=\left ( \begin{array } [ c]{ccccc}0.0795 & 0.0303 & -0.0243 & -0.014 & 0.0059 \end{array } \right ) ^{t}\\ \sigma_{2}=\left ( \begin{array } [ c]{ccccc}-0.6466 & -0.6144 & 0.0798 & 0.2844 & 0.0572 \end{array } \right ) ^{t}\\ \sigma_{3}=\left ( \begin{array } [ c]{ccccc}0.4549 & -0.2791 & -0.2846 & 0.1292 & 0.1635 \end{array } \right ) ^{t}\\ \sigma_{4}=\left ( \begin{array } [ c]{ccccc}0.2682 & 0.3879 & 0.0097 & -0.1796 & -0.0618 \end{array } \right ) ^{t}\\ \sigma_{5}=\left ( \begin{array } [ c]{ccccc}-1.6033 & -0.8159 & 0.4251 & 0.3777 & -0.0608 \end{array } \right ) ^{t}. 
\end{array } \right.\ ] ] the system clearly satisfies the parabolicity condition ( [ para1 ] ) as we have from ( [ 2.3]) hence , we can form the polynomial as{l}p_{0}\left ( \mu \right ) = 1,\text { } p_{1}\left ( \mu \right ) = \mu \\ p_{2}\left ( \mu \right ) = \mu(\mu)-1=\mu^{2}-1 , \end{array } \right.\ ] ] with solutions now , the eigenvalues are along with the solutions of the following two equations derived from ( [ 2.4]){l}\frac{\left ( 1-\lambda \right ) \left ( 1.5-\lambda \right ) } { \sqrt { 0.5\times0.7\times0.3\times0.25}}-\frac{1}{1.0801}-1.0801=1\\ \frac{\left ( 1-\lambda \right ) \left ( 1.5-\lambda \right ) } { \sqrt { 0.5\times0.7\times0.3\times0.25}}-\frac{1}{1.0801}-1.0801=-1 , \end{array } \right.\ ] ] which can be simplified to{l}6.1721\left( \lambda-1\right ) \left ( \lambda-1.5\right ) -3.0059=0\\ 6.172\,1\left ( \lambda-1\right ) \left ( \lambda-1.5\right ) -1.0059=0 .\end{array } \right.\ ] ] solving the two quadratic equations in yields the four eigenvalues of , which in descending order can be given by{l}\lambda_{1}=1.9913\\\lambda_{2}=1.7248\\ \lambda_{3}=1\\ \lambda_{4}=0.77516\\ \lambda_{5}=0.50871 .\end{array } \right . \label{6.6}\ ] ] hence, similarly , formula ( [ 2.7])-([2.7.1 ] ) can be used to derive the eigenvectors of , which are arranged according to the corresponding eigenvalues to form the diagonalizing matrix{ccccc}0.3848 & -0.5265 & -0.5632 & -0.9063 & -0.8769\\ 0.7629 & -0.7633 & 0.5534 & 0.0000 & 0.3943\\ 0.3705 & -0.0195 & -0.5423 & 0.3884 & -0.0325\\ 0.3531 & 0.3534 & 0.2562 & 0.0000 & -0.1825\\ \underset{v_{1}}{\underbrace{0.0891 } } & \underset{v_{2}}{\underbrace{0.1219 } } & \underset{v_{3}}{\underbrace{-0.1303 } } & \underset{v_{4}}{\underbrace { -0.1665 } } & \underset{v_{5}}{\underbrace{0.2030}}\end{array } \right ) .\label{6.7}\ ] ] matrix is used to diagonalize the system yields the equivalent system{l}\frac{\partial w_{1}}{\partial t}-1.9913\delta w_{1}=-0.5w_{1}w_{5}+0.65w_{2}\\ \frac{\partial w_{2}}{\partial t}-1.7248\delta w_{2}=0.5w_{1}w_{5}-0.65w_{2}\\ \frac{\partial w_{3}}{\partial t}-\delta w_{3}=-0.32w_{3}w_{5}+0.41w_{4}\\ \frac{\partial w_{4}}{\partial t}-0.77516\delta w_{4}=0.32w_{3}w_{5}-0.41w_{4}\\ \frac{\partial w_{5}}{\partial t}-0.50871\delta w_{5}=-0.5w_{1}w_{5}+0.65w_{2}-0.32w_{3}w_{5}+0.41w_{4}. \end{array } \right .\label{ex2diag}\ ] ] note that for simplicity , we have neglected small terms and rounded the polynomial coefficients to four decimal points .the resulting reaction terms clearly satisfy conditions ( a1 ) through ( a3 ) as discussed in section [ subseclocal ] above .observe that the proposed system has invariant regions where the resulting is guaranteed to be positive .we consider one of these regions corresponding to and given by which yields five inequalities{l}0.3848u_{01}+0.7629u_{02}+0.3705u_{03}+0.3531u_{04}+0.0891u_{05}\geq0\\ -0.5265u_{01}-0.7633u_{02}-0.0195u_{03}+0.3534u_{04}+0.1219u_{05}\geq0\\ -0.5632u_{01}+0.5534u_{02}-0.5423u_{03}+0.2562u_{04}-0.1303u_{05}\geq0\\ -0.9063u_{01}+0.3884u_{03}-0.1665u_{05}\geq0\\ -0.8769u_{01}+0.3943u_{02}-0.0325u_{03}-0.1825u_{04}+0.2030u_{05}\geq0 , \end{array } \right. 
\label{6.8}\ ] ] with solving this system of inequalities yields the first region where the initial data is assumed to lie .we will consider for instance the initial data the equivalent diagonalized system ( [ ex2diag ] ) was solved numerically by means of the finite difference ( fd ) method .figures [ fig1 ] and [ fig2 ] show the solutions to the diagonalized system ( [ ex2diag ] ) and the original system ( [ ex2 ] ) , respectively , in the diffusion free case . in the one dimensional case, a sinusoidal perturbation is added to the initial data to introduce spatial diversity into the model .the solutions are shown in figures [ fig3 ] and [ fig4 ] .s. abdelmalek , _ existence of global solutions via invariant regions for a generalized reaction diffusion system with a tridiagonal toeplitz matrix of diffusion coefficients _ , accepted for publication in func .& apps .s. kouachi and b. rebiai , _ invariant regions and the global existence for reaction diffusion systems with a tridiagonal matrix of diffusion coefficients _ , memoirs on diff .eqs . and math .( 2010 ) : 93108 .
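As a cross-check of the numerical example above, the eigenvalue spectrum reported in eq. (6.6) can be reproduced directly from the matrix of eq. (6.2); a minimal sketch (the expected values are the rounded ones printed in (6.6)):

```python
import numpy as np

# transposed diffusion matrix of eq. (6.2)
dt = np.array([[1.0, 0.5,  0.0, 0.0,  0.0],
               [0.3, 1.5,  0.7, 0.0,  0.0],
               [0.0, 0.25, 1.0, 0.5,  0.0],
               [0.0, 0.0,  0.3, 1.5,  0.7],
               [0.0, 0.0,  0.0, 0.25, 1.0]])

lam = np.sort(np.linalg.eigvals(dt).real)[::-1]
expected = [1.9913, 1.7248, 1.0, 0.77516, 0.50871]   # eq. (6.6)
assert np.allclose(lam, expected, atol=1e-4)

# the parabolicity condition ([para1]) also holds for these coefficients:
# sqrt(1 * 1.5) ~ 1.2247 exceeds max(0.8, 0.95) * cos(pi/6) ~ 0.8227
```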
This paper is concerned with the local and global existence of solutions for a generalized -component reaction diffusion system with a tridiagonal diffusion matrix and polynomially growing reaction terms. We derive the eigenvalues and eigenvectors of the transposed diffusion matrix and determine the parabolicity conditions needed to diagonalize the proposed system. We then determine the invariant regions and utilize a Lyapunov functional to establish the global existence of solutions for the proposed system. A numerical example is used to illustrate and confirm the findings of the study. Keywords: reaction diffusion systems, invariant regions, diagonalization, global existence, Lyapunov functional.
We observed all the second-to-second trades and instant messages of all 66 stock day traders in a typical trading firm from 9/07 to 02/09 (see Materials and Methods). These day traders traded only stocks and made million trades; of their trades are live, non-computerized trades (computerized trades were omitted from the analysis). Day traders typically do not hold stocks for more than a day. They typically enter and fully exit all their positions daily, which creates a standardized measure of performance: whether the trader made or lost money at the end of each day. On average, these traders make money on just of their trades.

Despite sitting in the same firm, these traders generally trade independently rather than in teams. This is because they typically trade different stocks: one trader might trade high tech, one health care, another autos, and so on. Trading different stocks helps them diversify the firm's holdings, exploit their specialized sector knowledge, and avoid trading against each other. This means that traders have little incentive to simply mimic each other's trades or trading behavior. Nevertheless, despite trading different stocks, traders do process common market information. Common information includes Federal Reserve announcements, new job figures, housing market changes, speculations about bankruptcies, and other global socio-economic data that traders attempt to disambiguate by exchanging information with others as they endeavor to discover the right time to trade.

A key form of information exchange here, and increasingly in other human complex systems, is instant messaging. Instant messages among traders and their network are based on elective relationships. Each trader has the autonomy to communicate with persons of their choice; hence, these are not hierarchical ties in which orders are dictated from managers to subordinates. Within the communications, information is both professional and personal. Typical information includes interpretations of market news, expectations of where the market might be moving, rumors, and forms of personal information commonly exchanged among business friends. Our extensive field research and interviews with traders at the firm confirmed that the content of these traders' messages included information consistent with that found in other research. All the traders in the firm exchanged instant messages throughout the day with their network. Instant messages were sent and received from their terminals or mobile devices. By federal law, all instant messages tied to trading go through the firm's capture system. The importance of instant messages to these traders is instantiated by the intensity of their use. We analyzed the full population of million instant messages that our traders exchanged with their network of contacts in the industry.

To measure the synchronous trading of each trader with other traders, we defined a measure that quantifies the extent to which an individual's specific selection of times to trade matches the selection of other traders. To compare the synchronous trading among all traders across our observation period, we quantified the degree to which the number of traders trading within the same time windows as trader in day compares to the same value when randomizing just the trades of trader (Fig. 1).
Specifically, this randomization normalizes individual activity while keeping the trading structure and information heterogeneities of each specific trading day constant (e.g., number of traders, total number and timing of interactions, and number of interactions per trader). Mathematically, we defined synchronous trading as , where is the observed number of simultaneous traders and and are the average and standard deviation of simultaneous traders across an ensemble of random replicates within which the trades of trader were randomly shuffled. The greater a trader's daily , the greater her synchronous trading, and vice versa. Additionally, we defined advanced trading and delayed trading in a fashion similar to synchronous trading at zero time lag, but quantifying the number of traders trading one window late and one window early, respectively.

We examined multiple time windows and report the full analysis for 1-second windows. This window size was chosen for the main results for several reasons. First, the 1-second level of resolution comports well with the frenetic information environment and fast reaction-time dynamics of modern markets. The time scale on which information heterogeneities exist has increasingly shortened with the growth of computerized trading, which now accounts for between and of the trading volume on financial exchanges. In computerized trading, preset algorithms trade very large volumes of shares in hundredths of a second, which means that traders must react to market opportunities that appear and disappear on a second-by-second scale. Also consistent with the view that information moves at high speed in modern markets and that traders react at that level, we found that the traders in this firm do display a propensity to trade on 1-second time scales. The average empirical interval between consecutive 1-second trades is , with a standard deviation of . The maximum interval was , and it occurred only twice in the data. Similarly, reaction-time research has found that human reactions occur on sub-second time frames. Second, we chose 1-second intervals for synchronous trading because it is the finest, most conservative time resolution in our data. Intervals larger than 1 second require a priori knowledge to find the appropriate balance between a window large enough to encapsulate changes in slow, non-computerized information heterogeneities and yet not so large that unrelated activities appear synchronized simply because they occur in the same large window.
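The shuffle-based measure can be sketched as follows. This is a schematic reconstruction, not the authors' code: the per-window counting convention is simplified, and the optional `allowed` mask anticipates the IM-constrained randomization used later to quantify the IM-trade coupling.

```python
import numpy as np

rng = np.random.default_rng(0)

def synchronous_trading(trader_windows, others_active, n_rep=1000, allowed=None):
    """z-score of a trader's co-trading intensity for one day.

    trader_windows : integer array of 1-s windows in which the trader traded
    others_active  : array of per-window counts of *other* active traders
    allowed        : optional boolean mask restricting where shuffled trades
                     may land (e.g., seconds with no IM activity); default all
    """
    candidates = np.arange(len(others_active))
    if allowed is not None:
        candidates = candidates[allowed]
    observed = others_active[trader_windows].sum()
    null = np.empty(n_rep)
    for i in range(n_rep):
        fake = rng.choice(candidates, size=len(trader_windows), replace=False)
        null[i] = others_active[fake].sum()
    return (observed - null.mean()) / null.std()
```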
Working empirically to estimate this balance, we tested intervals larger than 1 second. We found that our results hold for intervals up to 15 seconds. This window size seems to be a realistic limit for slower-moving types of information and suggests that synchronicity is associated with individual thresholds that range across different information heterogeneities in this complex system.

Our examination of the existence of synchronous trading revealed three interesting findings. First, Figure 2a pools all of our data and shows the probability density of synchronous trading at zero time lag, advanced trading, and delayed trading. We observed that synchronous trading is significantly different (using the Kolmogorov-Smirnov test) from advanced and delayed trading. Values greater than comprise of the entire data set and are due to two traders, who could have better access to information or better reaction times. We conservatively omitted these outliers from our statistical analyses and found that our results did not change, confirming that synchronous trading is a special characteristic of collective behavior. This also suggests that timing is a key factor driving traders' decisions, and reminds us of the high-frequency changes in the market. Second, Figure 2b shows that the average synchronous trading increases with the market's daily uncertainty (using Markov randomizations), as given by the standard market volatility index, the VIX. This finding supports the idea that collective behavior is associated with uncertainty, as in the case of biological systems: the greater the level of uncertainty faced by individuals, the more likely a collective behavior such as schooling or flocking is to arise. These findings suggest a parallel in human systems. As the level of uncertainty in the market increases, synchronous behavior becomes more likely to occur. On high-uncertainty days, the average synchronous trading can increase to , i.e., the average synchronous trading of all traders is almost two standard deviations higher than expected by chance. Third, we found evidence that synchronous trading does not appear to be due to coordination. Unlike coordinated behavior, where pairs or sets of actors consistently align their behavior, synchronous activity commonly displays the opposite pattern. We found that of all pairwise correlations between the activity patterns of two different traders are non-significant. This is probably because no two individuals consistently follow the same strategic behavior, and the same two actors are not always correct in their assessments of the market. Similarly, if synchronous trading were driven by coordination, we should observe simultaneous trades of predominantly the same stock. However, we find that 96% of the simultaneous trades are of different stocks. Moreover, the trades are of different types: 60% of the simultaneous trades involve both buying and selling activities.

Individual daily performance can be assessed by whether the trader loses or makes money at the end of the day.
Moreover, because the amount of money made by a trader at day's end depends on various factors, such as market volatility, number of stocks traded, and size of trades, a simple binary outcome variable appropriately standardizes performance by considering whether the trader lost () or made money (), coded as and , respectively. We quantified the relationship between a trader's synchronous trading and performance with a logistic regression of the form (see Materials and Methods). We found that synchronous trading was significantly () and positively associated with a trader's performance (Fig. 3). Using the same logistic analysis, we compared advanced and delayed trading with end-of-day performance. The results indicate that both advanced and delayed trading are statistically unrelated () to end-of-day performance. This reveals that synchronous trading, though arising without apparent coordination, indicates a uniquely beneficial time to trade that neither advanced nor delayed trading can reveal.

An important proposed contribution of our work is to identify the factors associated with the level of collective human behavior. In biological systems, local communication channels have been identified as a correlate of the rise of synchronous behavior. Following this line of reasoning, we found distinctive associations between traders' instant messaging patterns and synchronous trading. First, we found that instant messaging volume is associated with trading volume throughout the day, suggesting a close connection between the two. Pooling all traders' IM and trade activity over our observation period, Figure 4a shows that IMs have a significant correlation () with trades over the day. On average, IM and trade activity rise rapidly after the 9:30 am opening bell, peak at 10 am, decline at lunch time, uptick again from 1-3 pm, and finally decline precipitously at the 4 pm closing bell. Second, research has shown that collective synchronous activity can arise when a coupling mechanism delays or pauses the timing of individual activities. For example, cicadas have been found to have an internal clock that stimulates chirping. This clock would induce a cicada to chirp regularly whether or not it was exposed to the chirps of other cicadas. Synchronous chirping arises because the internal, individual chirp activity is delayed by the chirp of another cicada, coupling the timing of the internal chirp with the collective chirps of other cicadas. To determine whether instant messages can play a coupling role, we observed whether they were associated with the rise of synchronous trading. As noted above, instant messages play the important function of transferring information that helps traders disambiguate market information, and, because a trader cannot trade and IM at the same time, instant messaging necessarily can delay his or her trades. This suggests that if we were to compare the observed intensity of a trader's synchronous trading relative to the synchronous trading expected by chance, as we did above, but this time randomized the trader's trades across all 1-second time windows of the day except those windows in which there was an instant message, we would expect the intensity of synchronous trading to increase in the presence of non-random instant messages.
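A compact sketch of the two analyses just described, using the synchronous_trading function defined earlier, synthetic placeholder data (the real records are proprietary), and statsmodels for the logistic fit; the sign convention for the coupling difference is an assumption:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# synthetic trader-days: phi from the shuffle measure, win = 1 if the
# trader ended the day in the black (placeholder data-generating process)
phi = rng.normal(size=500)
win = (rng.random(500) < 1 / (1 + np.exp(-(0.1 + 0.4 * phi)))).astype(int)

fit = sm.Logit(win, sm.add_constant(phi)).fit(disp=0)
print(fit.params[1], fit.pvalues[1])   # slope on phi and its p-value

# IM-trade coupling (next paragraph): re-run the shuffle restricted to
# IM-free seconds and compare with the unrestricted z-score, e.g.
#   phi_prime = synchronous_trading(tw, oa, allowed=~im_seconds)
#   coupling  = phi_prime - phi        # assumed sign convention
```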
to quantify the im - trade coupling , we compared the degree to which synchronous trading changes when randomizing a single trader s trades in any second in which an i m was not sent or received .methodologically , we quantified the im - trade coupling by , where is the synchronous trading calculated by randomly shuffling over the seconds with no i m activity .consistent with our conjectures , we found a positive and significant association ( using markov randomizations ) between the im - trade coupling and the synchronous trading of each trader ( fig .4b , materials and methods ) .this reveals that the instant messaging patterns of traders is associated to their trades such that the observed level of synchronous trading increases as the communication pattern is increasingly different from what would be expected by chance .the more non - random the instant messaging pattern , the greater the synchronous trading .this suggests that the local communication patterns of individuals have an important association with the rise of simultaneous activity , which in turn is associated to their performance .if one assumes that ims are used to corroborate the meaning of the market throughout the day , then our findings suggest that the increasingly structured communication strategically aims to help each individual trader make a decision about when to trade .synchronicity is a pervasive and mysterious drive in nature . in animal ,biological , and physical systems synchronicity reduces uncertainty , as when school of anchovies evade predation , neurons co - fire to process complex information , or perturbations reduce noise in physical systems .synchronicity also apparently arises from local interactions without the aid of centralized leadership .this suggests that while more research needs to be done on synchronicity s functional role in complex human systems , it may furnish a functional alternative to leadership in rich informational environments .we examined the association between synchronicity and performance in a complex system where performance increases with uncertainty reduction . examining a typical proprietary trading firm whereinthe traders individually race to be the first to disambiguate a constant stream of uncertain market information in an effort to make profitable trades , we found that when a stock trader in that firm trades at the same time as other traders in the firm , his or her financial performance is significantly increased .we also found a coupling mechanism ; we found that traders instant message communication patterns are positively associated with the rise of sync .building on synchronicity principles found in other complex systems , we speculate that the mechanisms underlying these empirical patterns involve rapid information aggregation through instant messaging networks . because separate traders in the firm have different instant message contacts in the market , each trader samples , separate local inferences about the eventual meaning of market information .when these diverse points of view converge , the traders trade in synchronicity such that the synchronous timing of trades reflects a point of crowd wisdom despite no conscious intention to do so on the part of any individual trader .these mechanisms suggest that synchronicity in human systems reflect some of the same principles found in animal systems , namely that synchronicity appears to arise with attention to local information rather than centralized leadership . 
in the human systemwe examine , and in human systems where quick response behavior is likely a mix of being reactive and thoughtful about the information presented , we also purport that the rapid aggregation of local information from diverse points of view plays a role in the performance benefits of the synchronicity we observe .if one assumes that each of the traders have their own expertise , training , and assumptions that go into deriving inferences from the market information they process , it suggests that actions consistent with the corroboration of diverse viewpoints are likely to be a better approximation of the true meaning of information than singular or myopic points of view .the powerful information processing capacities of humans in complex systems may furnish unique opportunities to apply the ideas developed here about human synchronicity to other contexts .we would speculate that in many increasingly rich information environments there are benefits to understanding synchronicity .for example , currently in the domain of intelligence and national security , many security officers face a frenetic pace of information not unlike the traders we studied .they too receive constant feeds of information videos , text , voice , blogs , rss newsfeeds , and tweets and are in constant communication with their own instant message network throughout the day .moreover , like traders they are also racing to disambiguate news .disambiguating information quickly means a potential pre - empt of an attack whereas advanced or delayed disambiguation can mean jumping the gun " or waiting too long respectively .disease control agencies around the world all monitor large amounts of time sensitive data in attempts to identify possible outbreaks . in both situations , and more generally , in situations where information overloads might overwhelm individual decision making ability and informationis time sensitive , creating systems that can capture moments of synchronicity may help identify whether an action is functional or not . using observational data to describe this phenomenon provided a rich mix of real data but we are unable to completely test these mechanisms .future work might devise experiments , perhaps in one of the mock trading labs now in existence in universities , by manipulating the content and rate of change of market news , providing access vs. no access to instant message communication networks , and changing the information sampled from the instant message networks from myopic to diverse .future research might also begin to examine the potential dysfunctions of synchronicity in human complex systems . 
Under what conditions does collective genius turn into mob madness? Another direction for future research is to explore the differences between synchronicity and other collective behavior mechanisms.

We observed all 66 day traders at an anonymous trading firm from 9/26/2007 to 02/20/2009. Day traders keep short-term positions and do not hold inventories of stocks; they enter and exit positions each day, normally between 9:30 am and 4:00 pm. Our traders are "point-and-clickers": they make their trades in real time 98% of the time (the 1.2% of trades done algorithmically were omitted, which did not affect the results). 40-70% of the trading on the NYSE is point-and-click. We observed these traders trading approximately 4500 different stocks over various exchanges, which suggests that they sample a large part of the market. As in most trading firms, traders do not trade every day of every week, for various reasons; in this firm, no more than 22 traders were at their desks on any one day. We analyzed all the million intraday stock trades of these day traders and their million instant messages exchanged across their networks. The performance data was calculated by the firm using standard industry metrics. Traders cannot trade via IMs.

To check the robustness of the association between synchronous trading and individual performance to other potential influences, such as the number of trades and daily uncertainty, we performed the same analysis with terms for the number of trades, an interaction term for the number of trades and , and a control for market volatility (i.e., the VIX). The extended model has the form . Additionally, to control for unobservable factors of each particular trader, we used fixed effects (dummy variables for each trader) in the logistic regression. Under all circumstances, synchronous trading was significantly and positively associated with individual performance.

To check the robustness of the association between IM-trade coupling and synchronous trading to other potential influences, such as the number of IMs and daily uncertainty, we performed a linear regression with terms for the number of IMs, an interaction term for the number of IMs and , and a control for market volatility (i.e., the VIX). The extended model has the form . Additionally, to control for unobservable factors of each particular trader, we used fixed effects (dummy variables for each trader) in the regression.
Under all circumstances, IM-trade coupling was significantly and positively associated with synchronous trading.

We would like to thank Alex Arenas, Jordi Bascompte, Jordi Duch, William Kath, Tae-Hyun Kim, Eduardo López, Hani Mahmassani, Dean Malmgren, Alejandro Morales Gallardo, Mason Porter, Mark Rivera, Daniel Stouffer, Felix Reed-Tsochas, Marta Sales-Pardo, Uri Wilensky, and the members of the NICO weekly seminar series for useful discussions that led to the improvement of this work. We also thank the Kellogg School of Management, Northwestern University, and the Northwestern Institute on Complex Systems (NICO) for financial support. Research was also sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-09-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation hereon.

Stevens, R.; Galloway, T.; Berka, C.; Behneman, A. (2010) A neurophysiologic approach for studying team cognition. Proceedings of the Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC): 1-8.
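The robustness specifications described in Materials and Methods above can be sketched with the formula interface of statsmodels; the table below is built from synthetic placeholder data, and all column names are hypothetical stand-ins for the proprietary trader-day records:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600                                    # synthetic trader-days
df = pd.DataFrame({
    "trader": rng.integers(0, 20, n),      # hypothetical trader ids
    "phi": rng.normal(size=n),             # synchronous trading
    "coupling": rng.normal(size=n),        # IM-trade coupling
    "n_trades": rng.poisson(80, n),
    "n_ims": rng.poisson(120, n),
    "vix": rng.uniform(15, 60, n),
})
df["win"] = (rng.random(n) < 1 / (1 + np.exp(-0.4 * df["phi"]))).astype(int)

# extended logistic model with trade counts, VIX, and trader fixed effects
perf = smf.logit("win ~ phi * n_trades + vix + C(trader)", data=df).fit(disp=0)
# extended linear model for synchronous trading on IM-trade coupling
coup = smf.ols("phi ~ coupling * n_ims + vix + C(trader)", data=df).fit()
print(perf.params["phi"], coup.params["coupling"])
```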
Successful animal systems often manage risk through synchronous behavior that spontaneously arises without leadership. In critical human systems facing risk, such as financial markets or military operations, our understanding of the benefits associated with synchronicity is nascent but promising. Building on previous work illuminating commonalities between ecological and human systems, we compare the activity patterns of individual financial traders with the simultaneous activity of other traders, an individual and spontaneous characteristic we call synchronous trading. Additionally, we examine the association of synchronous trading with individual performance and communication patterns. Analyzing empirical data on day traders' second-to-second trading and instant messaging, we find that the higher a trader's synchronous trading, the less likely they are to lose money at the end of the day. We also find that the daily instant messaging patterns of traders are closely associated with their level of synchronous trading. This suggests that synchronicity and vanguard technology may help cope with risky decisions in complex systems and furnish new prospects for achieving collective and individual goals.

Synchronous behavior has been found to enhance individual and group performance across a variety of domains, even though the individuals might make no conscious effort to coordinate their behavior. Similarly, in systems of collaboration and competition, synchronous behavior can elude simple associations with individual benefits. Cicadas that chirp simultaneously with others find the best balance between risk and reward: cicadas that chirp in advance or in delay of the full chorus relish the best chances of finding a mate but may suffer the greatest risk of being spotted by a predator. Congruently timed human actions have been surmised to provide potential benefits revealed by the meaningful coincidence of synchronous behavior. For instance, it has been found that simultaneous discoveries, the times when multiple individuals arrive at a similar conclusion simultaneously, are collective evidence that a solution is valid.

In this paper, we studied the association between individual performance and the simultaneous activity patterns followed by independent decision makers under risk. These conditions exist in many high-frequency decision contexts but are uniquely well documented in financial systems, where continuous change in information creates recurring uncertainty about when to trade, and where the second-to-second actions of financial traders are recorded. Reducing the risk of losing money is the essence of trading. Over time the risks of trading can decrease as information is disambiguated. However, as this happens, the increasing certainty of information is incorporated into low-return prices. Thus, racing to be the first to discover the right time to trade is the critical problem to be solved. By analogy, this optimal timing may represent the mating sweet spot observed for cicadas: chirp in the sweet spot and the chance of mating (returns) is relatively high while the chance of predation (losses) is relatively low. This suggests that as separate traders disambiguate their local view of news, they can spontaneously and simultaneously react as a group, without any intention to coordinate, producing a synchronous behavior that might reveal the right time to trade in the market.
Here, we tested whether traders' performance is relatively better when trading simultaneously with other traders, an individual and spontaneous characteristic we call synchronous trading. Additionally, traders need to assess whether information is positive or negative for a stock, the potential magnitude of the information's impact, and the degree to which the information is already reflected in the price. Social corroboration is key to making these assessments: it reduces cognitive overload and ambiguity when diverse views converge on a common interpretation, and it typically takes place among persons tied through social network relations. We tested whether the daily instant messaging patterns of traders are associated with the rise of synchronous trading. We believe our results can have broad implications for understanding fast collective-action solutions to decision making under uncertainty.
Ultrasonic (US) backscattered echoes from blood contain frequency-dependent information that can be used to obtain quantitative parameters reflecting the aggregation state of red blood cells (RBCs). Recently, two parameters describing RBC aggregation, the packing factor and the mean fractal aggregate diameter, were extracted from the structure factor size estimator (SFSE). The SFSE is a second-order data reduction model based on the structure factor and adapted to a dense medium such as blood. This approach is based on the analysis of the backscattered power spectrum, which contains information about the size, spatial organization, concentration, and mechanical properties of the scatterers (i.e., RBCs). The difficulty in applying the SFSE _in vivo_ is that the spectral content of backscattered echoes is also affected by the attenuation caused by intervening tissue layers between the probe and the blood flow. More generally, ultrasound scatterer size estimation techniques for tissue characterization (such as liver, kidney, prostate, or breast) face similar challenges. To evaluate microstructural parameters correctly, it is thus of major interest to take tissue attenuation effects into account. A few groups have developed measurement techniques to evaluate the frequency-dependent attenuation in order to compensate the backscattered power spectrum _a posteriori_. The goal of this letter is to further develop this strategy for _in vivo_ measurements of RBC scatterer sizes. We propose to determine blood structural parameters and total attenuation simultaneously by using an optimization method, termed the structure factor size and attenuation estimator (SFSAE). This method consists in fitting the spectrum of the backscattered radio-frequency (RF) echoes from blood to an estimated spectrum given by a modified SFSE model. This approach is similar to that presented by Bigelow et al., who estimated the effective radius of tissue microstructure and total attenuation from simulated backscattered signals. Herein, an _in vitro_ experimental evaluation of the SFSAE is performed. Porcine RBCs were sheared in a Couette flow system, and ultrasonic RF echoes were obtained using a 25 MHz center-frequency transducer.
Since skin is one of the most attenuating tissue layers encountered during _in vivo_ scanning, three skin-mimicking phantoms with different attenuation coefficients were successively introduced between the transducer and the blood flow. This study shows the ability of the SFSAE to evaluate three parameters: the packing factor, the mean fractal aggregate diameter, and the total attenuation.

Blood can be seen as a very dense suspension of red cells. These RBCs cannot be treated as independent scatterers, since particle interactions (collision, attraction, deformation, flow-dependent motions) are strong. The theoretical model of ultrasound backscattering by blood that we developed is based on the particle approach, which consists of summing contributions from individual RBCs and modeling the RBC interaction by a particle pair-correlation function. Assuming that the Born approximation is valid (weak scattering), the model proposed in Ref. 1 can be modified to predict the theoretical backscatter coefficient from blood: , where is the wave vector, is the number density of RBCs in blood estimated by measuring the hematocrit by microcentrifugation (, where is the volume of an RBC), is the backscattering cross-section of a single RBC, is the structure factor describing the spatial organization of RBCs, and is the frequency-dependent attenuation function. The backscattering cross-section of a weak scattering particle small compared with the wavelength (Rayleigh scatterer) can be determined analytically as , where is the variation of impedance between the RBC and its suspending medium (i.e., the plasma). The structure factor is by definition the Fourier transform of the pair-correlation function and can be approximated by its second-order Taylor expansion in as . In this expression, represents the probability of finding two particles separated by a distance , is the low-frequency limit of the structure factor () called the packing factor, and is the radius of 3D RBC aggregates assumed to be isotropic. We introduce as the isotropic diameter of an aggregate (expressed in number of RBCs), with the radius of one RBC sphere-shaped model of volume . The attenuation function is given by , where is the frequency and is the attenuation coefficient (in dB/MHz) defined by , where and are respectively the intervening tissue layer attenuations (in dB/cm/MHz) and thicknesses. According to the above equation, we thus assume, as a first approximation, that the attenuation increases linearly with the frequency.

The measured backscatter coefficient reported in this study was computed as in Eq. (3), where the mean backscattered power spectrum was obtained by averaging the power spectra of 400 backscattered echoes from blood. The mean power spectrum was also obtained, from a reference sample of non-aggregated RBCs at a low hematocrit of 6% (i.e., Rayleigh scatterers), by averaging 400 echoes. The backscatter coefficient of this reference sample was estimated using the "Rayleigh estimation" approach used by Yu and Cloutier, whose theoretical value is given by Eq. (13) in Ref. 8
8 ( three - dimensional percus - yevick packing factor for cylinders ) . this reference sample was used to compensate the backscattered power spectrum for the electromechanical system response , and for the depth - dependent diffraction and focusing effects caused by the us beam . the packing factor , aggregate diameter and total attenuation along the propagation path were determined by matching the measured $bsc_{meas}$ given by eq . ( [ eq : bscmeas ] ) with the theoretical $bsc_{theor}$ given by eq . ( [ eq : bscwiths ] ) . for this purpose , we searched values of $w$ , $d$ and $\alpha_0$ minimizing the cost function $f ( w , d , \alpha_0 ) = \| bsc_{meas} - bsc_{theor} \|^2$ . in all studied cases , the cost function seemed to have a unique global minimum , as was observed by plotting the cost function surface with varying values of the parameters . an example is given in fig . 1 . us measurements were performed in a couette flow system to produce a linear blood velocity gradient at a given shear rate ( see figure 1 in ref . ) . the system consisted of a rotating inner cylinder with a diameter of 160 mm surrounded by a fixed concentric cylinder of diameter 164 mm . a 60 ml blood sample was sheared in the 2 mm annular space between the two coaxial cylinders . the us scanner ( vevo 770 , visualsonics , toronto , canada ) equipped with the rmv 710 probe was used in m - mode for this study . the single - element focused circular transducer had a center frequency of 25 mhz , a diameter of 7.1 mm and a focal depth of 15 mm . we operated at a sampling frequency of 250 mhz with 8 bit resolution ( gagescope , model 8500cs , montreal , canada ) . the probe was mounted in the side wall of the fixed outer cylinder and was positioned to have its focal zone at the center of both cylinders . to ensure ultrasonic coupling , the hole within the outer stationary cylinder ( containing the probe ) was filled with a liquid agar gel based mixture . when solidified , this gel was cut to match the curvature of the cylinder to avoid any flow disturbance . the gel was a mixture of distilled water , 3% ( w / w ) agar powder ( a9799 , sigma chemical , saint - louis , mo ) , 8% ( w / w ) glycerol and a specific concentration of 50 $\mu$m cellulose scattering particles ( s5504 sigmacell , sigma chemical , saint - louis , mo ) that determined the attenuation coefficient . four experiments were performed with four mixtures having sigmacell ( sc ) concentrations varying from 0% to 0.75% ( w / w ) . the 0% concentration constituted the non - attenuating gel and the three other mixtures mimicked skin attenuations . the attenuation coefficients of the reference ( 0% ) and three skin - mimicking phantoms were determined by using a standard substitution method . two transducers with center frequencies of 25 mhz ( vevo 770 , visualsonics , toronto , canada ) and 20 mhz ( v317 - sm panametrics , waltham , ma ) were aligned coaxially facing each other for transmission measurements . transmitted signals were recorded both with and without the agar gel sample in the acoustic path . the attenuation coefficient was then estimated using a log spectral difference technique .
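the log spectral difference technique just mentioned admits a compact sketch . the following is a minimal illustration , not the authors ' implementation : the fft - based spectra and the usable band tied to the 20/25 mhz probes are assumptions , and in practice the spectra would be averaged over the scanned regions and both sample thicknesses .

```python
import numpy as np

def attenuation_slope(rf_ref, rf_sample, fs_hz, thickness_cm,
                      band_mhz=(15.0, 30.0)):
    """log spectral difference sketch: attenuation slope (db/cm/mhz)
    from through-transmitted signals recorded without (rf_ref) and
    with (rf_sample) the sample in the acoustic path."""
    n = len(rf_ref)
    f_mhz = np.fft.rfftfreq(n, d=1.0 / fs_hz) / 1e6
    p_ref = np.abs(np.fft.rfft(rf_ref)) ** 2
    p_sam = np.abs(np.fft.rfft(rf_sample)) ** 2
    loss_db = 10.0 * np.log10(p_ref / p_sam) / thickness_cm  # db/cm at each f
    sel = (f_mhz > band_mhz[0]) & (f_mhz < band_mhz[1])      # assumed band
    slope, _intercept = np.polyfit(f_mhz[sel], loss_db[sel], 1)
    return slope  # db/cm/mhz
```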
for a given concentration of sc , measurements were obtained from two different sample thicknesses , and for each , four regions were scanned for averaging purposes . values obtained were 0.007 ± 0.002 , 0.101 ± 0.028 , 0.208 ± 0.029 and 0.317 ± 0.039 db / cm / mhz for sc concentrations of 0% , 0.25% , 0.50% and 0.75% , respectively . the thickness of the skin - mimicking phantoms being fixed to 1 cm , their attenuation coefficients were thus in the same range as the human dermis ( 0.21 db / mhz at 14 - 50 mhz considering a 1 mm dermis thickness ) . fresh porcine whole blood was obtained from a local slaughter house , centrifuged , and the plasma and buffy coat were removed . two blood samples were then prepared : ( i ) a h6 reference sample , which was a 6% hematocrit suspension of non - aggregating rbcs in saline solution ; and ( ii ) a 40% hematocrit t40 test sample , which consisted of rbcs resuspended in plasma to promote aggregation . the h6 sample was sheared at 50 s$^{-1}$ and coupled with the 0% sc concentration agar gel . echoes were selected with a rectangular window of length 0.8 mm at four depths every 0.2 mm ( i.e. with 75% overlap between windows ) . for each depth , the power spectra of the backscattered echoes were averaged over 400 acquisitions to provide $\overline{p}_{ref}$ . then , the h6 sample was removed and the t40 blood was introduced in the couette device . in the first 30 s , a shear rate of 500 s$^{-1}$ was applied to disrupt rbc aggregates . the shear rate was then reduced to residual values of 2 , 5 , 10 , 20 and 30 s$^{-1}$ for 90 s. after that , for each shear rate , acquisitions of 400 rf lines were performed for 80 s. echoes were windowed as for the h6 sample at the same depths and their power spectra were averaged to obtain $\overline{p}_{meas}$ . this protocol was repeated four times with the four agar - based phantoms . the experiment with the 0% sc phantom was realized in order to have reference results on packing factors and aggregate diameters obtained from the classical sfse . these parameters were assumed to be the true values of packing factors and aggregate diameters for all shear rates , and will be compared in the next section with packing factors and diameters estimated by the sfsae and by the sfse when skin - mimicking phantoms are used . it is important to emphasize the fact that the h6 reference sample was also measured with the 0% sc phantom . the phantom attenuation , although small with no sc , therefore affected equivalently both spectra $\overline{p}_{meas}$ and $\overline{p}_{ref}$ in eq . ( 3 ) . the resulting measured backscatter coefficient was thus not biased by attenuation . the terminology `` no attenuation '' was used for this experiment in the following . figure 2a reports results on $w$ and $d$ for the sfse in the case of no attenuation . typical results of the sfsae minimization procedure for the different agar phantoms at a shear rate of 5 s$^{-1}$ are given in fig . 2b . all results on $w$ , $d$ and $\alpha_0$ from the sfsae are summarized in fig . 3 for all residual shear rates . in this figure , the relative errors for each parameter correspond to : $( w - w_{ref} ) / w_{ref}$ , $( d - d_{ref} ) / d_{ref}$ and $( \alpha_0 - \alpha_{ref} ) / \alpha_{ref}$ , with $\alpha_{ref}$ measured in transmission . more specifically , $\alpha_{ref}$ corresponds to the sum of the skin - mimicking phantom attenuation estimated in transmission and of the blood attenuation , taken equal to 0.022 db / mhz for all shear rates . to underline the necessity of taking the attenuation into account , parameters $w$ and $d$ were evaluated with the sfse without attenuation - compensation when skin - mimicking phantoms were used . because of the frequency - dependent distortion produced by the attenuating medium , large relative errors can be seen in fig . 4a for both parameters .
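the sfsae minimization used to produce these estimates can be sketched in code . this is a minimal illustration under stated assumptions , not the authors ' implementation : the sound speed , rbc radius , number density and the lumped rayleigh prefactor are hypothetical placeholder values , and scipy 's l - bfgs - b routine simply stands in for whatever optimizer was actually used on the cost function $f ( w , d , \alpha_0 )$ .

```python
import numpy as np
from scipy.optimize import minimize

# illustrative constants (assumptions, not values from the letter)
C0 = 1540.0       # sound speed in blood (m/s)
A_RBC = 2.75e-6   # radius of the sphere-shaped rbc model (m)
M = 4.0e16        # number density of rbcs (1/m^3)
K_SIG = 1.0e-30   # lumped prefactor of the rayleigh cross section

def bsc_model(f, w, d, alpha0):
    """modified sfse model bsc(k) = m * sigma_b(k) * s(k) * a(f);
    w: packing factor, d: aggregate diameter (in rbcs),
    alpha0: total attenuation (db/mhz)."""
    k = 2.0 * np.pi * f / C0
    sigma_b = K_SIG * k**4                    # rayleigh scatterer ~ k^4
    rg = 0.5 * d * A_RBC                      # crude gyration radius
    s = w - (12.0 / 5.0) * (k * rg) ** 2      # 2nd-order structure factor
    alpha_np = (alpha0 / 8.686) * (f / 1e6)   # db -> nepers, linear in f
    return M * sigma_b * s * np.exp(-4.0 * alpha_np)

def sfsae_fit(f, bsc_meas):
    """search (w, d, alpha0) minimizing the quadratic cost function."""
    cost = lambda p: float(np.sum((bsc_meas - bsc_model(f, *p)) ** 2))
    res = minimize(cost, x0=[1.0, 5.0, 0.1], method="L-BFGS-B",
                   bounds=[(0.0, None), (0.0, None), (0.0, None)])
    return res.x  # estimated (w, d, alpha0)
```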
however , by compensating the backscatter coefficients in the sfse with the value measured in transmission ( section 3.2 ) , the relative errors in fig . 4b are largely reduced to values comparable to those estimated with the sfsae ( see fig . 3b ) . the sfsae ( fig . 3 ) gave quantitatively satisfactory estimates of $w$ , $d$ and $\alpha_0$ with relative errors below 22% for shear rates between 5 and 20 s$^{-1}$ . the sfse with attenuation - compensation ( fig . 4b ) gave estimates of $w$ and $d$ with relative errors below 12% for shear rates between 2 and 10 s$^{-1}$ , and below 28% for the shear rate of 20 s$^{-1}$ . however , for the sfsae , the average estimates for the shear rate of 2 s$^{-1}$ were less accurate ( relative errors below 57% for $w$ and below 30% for $d$ ) . the estimation of $\alpha_0$ was satisfactory at that shear rate ( relative errors below 14% ) . the worst results for $w$ , $d$ and $\alpha_0$ were obtained at 30 s$^{-1}$ for the highest attenuation . the apparent limit of applicability of the sfsae method for shear rates of 2 and 30 s$^{-1}$ may be explained by considering the following . at 2 s$^{-1}$ , for the frequency bandwidth considered ( 9 - 30 mhz ) , the sfse and consequently the sfsae seem to reach their limit of applicability for large aggregate sizes ( typically $d$ = 17.5 in fig . 2a , i.e. $k r_g$ = 4.8 ) . this limit is illustrated by the bad fit of the sfse model in fig . 2a at 2 s$^{-1}$ . the bad estimations of the sfsae at 30 s$^{-1}$ are explained by the fact that the aggregate diameters were estimated to zero and the attenuations were overestimated . at this high shear rate , rbc aggregation is partially inhibited and the signal - to - noise ratio ( snr ) of our measurements was reduced ( -4 db between 20 and 30 s$^{-1}$ for all phantoms ) . the accuracy of the estimates was thus degraded with increasing attenuations , as can be seen from the large relative errors at the highest attenuation with the sfsae but also with the sfse with attenuation - compensation . to conclude , the sfsae performed well for $k r_g \leq$ 2.7 ( i.e.
$d \leq$ 10 at 5 s$^{-1}$ ) and under the condition that the snr is sufficiently good ( snr $\geq$ 28 db , corresponding to the snr at 30 s$^{-1}$ for the 0.25% sc phantom ) . although the sfsae gave less accurate estimates for 2 and 30 s$^{-1}$ , the estimated parameter values presented in fig . 3a show that the sfsae gave qualitatively satisfactory estimates for the three sc skin - mimicking phantoms at all shear rates , since the estimates of $w$ and $d$ versus shear rates had the same behaviors as $w_{ref}$ and $d_{ref}$ . the performance of the new sfsae was assessed with experimental measurements on blood in a couette flow device . the accuracy of the estimates obtained with the sfsae was not as satisfactory as that obtained with the sfse with attenuation - compensation ( i.e. when _ a priori _ data about the attenuation are known ) . nevertheless , the sfsae has the major advantage of being easily applicable _ in vivo _ because of the simultaneous estimation of the blood structural properties and total attenuation ( contrary to the sfse attenuation - compensation method , which needs the attenuation and thickness of the intervening tissue layers to be known ) . this work thus confirms the _ in vivo _ applicability of rbc aggregate size and structure estimations . complementary studies are nevertheless required to determine the validity domain of the sfsae according to $k r_g$ and attenuation . this work was supported by the canadian institutes of health research ( grants -84358 and cmi-72323 ) , by the heart and stroke foundation of canada ( grant -05 - 0313 ) , and by the national institutes of health of usa ( grant ) . dr cloutier is recipient of a national scientist award of the fonds de la recherche en santé du québec . we are also thankful to dr f. destrempes for his helpful discussion on the optimization tool . * f. t. h. yu and g. cloutier , `` experimental ultrasound characterization of red blood cell aggregation using the structure factor size estimator '' , j. acoust . soc . am . * 122 * , 645 - 656 ( 2007 ) . * v. roberjot , s. l. bridal , p. laugier , and g. berger , `` absolute backscatter coefficient over a wide range of frequencies in a tissue - mimicking phantom containing two populations of scatterers '' , ieee trans . ultrason . , ferroelect . , freq . contr . * 43 * , 970 - 978 ( 1996 ) . * t. a. bigelow , m. l. oelze , and w. d. o'brien , `` estimation of total attenuation and scatterer size from backscatter ultrasound waveforms '' , j. acoust . soc . am . * 117 * , 1431 - 1439 ( 2005 ) . * p. he and j. f. greenleaf , `` application of stochastic analysis to ultrasonic echoes - estimation of attenuation and tissue heterogeneity from peaks of echo envelope '' , j. acoust . soc . am . * 79 * , 526 - 534 ( 1986 ) . * b. j. oosterveld , j. m. thijssen , p. c. hartman , r. l. romijn , and g. j. e. rosenbusch , `` ultrasound attenuation and texture analysis of diffuse liver disease : methods and preliminary results '' , phys . med . biol . * 36 * , 1039 - 1064 ( 1991 ) . * y. l. mo and r. s. c. cobbold , `` theoretical models of ultrasonic scattering in blood '' , in _ ultrasonic scattering in biological tissues _ , edited by k. k. shung and g. a. thieme ( crc , boca raton , fl , 1993 ) , chap . 5 , pp . 125 - 170 . * v. twersky , `` low - frequency scattering by correlated distributions of randomly oriented particles '' , j. acoust . soc . am . * 81 * , 1609 - 1618 ( 1987 ) . * k. k. shung , `` on the ultrasound scattering from blood as a function of hematocrit '' , ieee trans . ultrason . , ferroelect . , freq . contr . * su-26 * , 327 - 331 ( 1982 ) . * s. h. wang and k. k. shung , `` an approach for measuring ultrasonic backscattering from biological tissues with focused transducers '' , ieee trans .
biomed . eng . * 44 * , 549 - 554 ( 1997 ) . * r. kuc and m. schwartz , `` estimating the acoustic attenuation coefficient slope for liver from reflected ultrasound signals '' , ieee trans . sonics ultrasonics * su-26 * , 353 - 362 ( 1979 ) . * l .- c . nguyen , f. yu , and g. cloutier , `` _ in vitro _ study of frequency - dependent blood echogenicity under pulsatile flow '' , proc . ieee ultrason . symp . 2007 , in press . * b. i. raju and m. a. srinivasan , `` high - frequency ultrasonic attenuation and backscatter coefficients of _ in vivo _ normal human dermis and subcutaneous fat '' , ultrasound med . biol . * 27 * , 1543 - 1556 ( 2001 ) . * figure 1 . ( color online ) ( a ) typical aspect of the logarithm of the cost function $f ( w , d )$ for a fixed value of $\alpha_0$ . the logarithm is shown here in order to enhance the visual contrast . this cost function has one minimum , denoted ( $w_{min}$ , $d_{min}$ ) , that depends on $\alpha_0$ . ( b ) typical aspect of the function $f ( w_{min} , d_{min} , \alpha_0 )$ for varying values of $\alpha_0$ ( $w_{min}$ and $d_{min}$ being calculated for each $\alpha_0$ ) . this cost function has a single minimum . * figure 2 . ( color online ) ( a ) backscatter coefficients for blood sheared at different residual shear rates and measured with the 0% sc concentration phantom ( no attenuation ) , and the corresponding fits of the classical sfse with no compensation for attenuation . ( b ) backscatter coefficients for blood sheared at 5 s$^{-1}$ and measured with each of the four phantoms . the corresponding fitted models are the sfse for the 0% sc phantom , and the sfsae for the three other skin - mimicking phantoms ( 0.25% , 0.5% and 0.75% sc ) . * figure 3 . ( color online ) ( a ) values of $w$ , $d$ and $\alpha_0$ ( in db / mhz ) for different residual shear rates estimated by the classical sfse for the 0% sc concentration and by the sfsae for the three skin - mimicking phantoms . ( b ) corresponding relative errors . * figure 4 . ( color online ) relative errors of the packing factor and aggregate diameter for the three skin - mimicking phantoms obtained with the sfse ( a ) with no compensation for attenuation , and ( b ) with attenuation - compensation using the attenuation values estimated in transmission . the estimated parameters are compared with $w_{ref}$ and $d_{ref}$ , respectively .
the analysis of the ultrasonic frequency - dependent backscatter coefficient of aggregating red blood cells reveals information about blood structural properties . the difficulty in applying this technique _ in vivo _ is due to the frequency - dependent attenuation caused by intervening tissue layers , which distorts the spectral content of backscattering properties from blood microstructures . an optimization method is proposed to simultaneously estimate tissue attenuation and the blood structure factor . in an _ in vitro _ experiment , the method gave satisfactory estimates with relative errors below 22% for attenuations between 0.101 and 0.317 db / cm / mhz , signal - to - noise ratios $\geq$ 28 db and $k r \leq$ 2.7 ( $k$ being the wave number and $r$ the aggregate radius ) . pacs numbers : 43.80.qf , 43.80.cs , 43.35.bf , 43.35.yb
cardiac interbeat intervals normally fluctuate in a complex manner . recent studies reveal that under normal conditions , beat - to - beat fluctuations in the heart rate may display extended correlations of the type typically exhibited by dynamical systems far from equilibrium . it has been argued , for example , that the various stages of sleep may be characterized by long - range correlations of heart rates separated by a large number of beats . the interbeat fluctuations in the heart rate belong to a much broader class of many natural , as well as man - made , phenomena that are characterized by a degree of stochasticity . turbulent flows , fluctuations in stock market prices , seismic recordings , the internet traffic , and pressure fluctuations in packed - bed chemical reactors are examples of time - dependent stochastic phenomena , while the surface roughness of many materials is an example of such phenomena that are length scale - dependent . the focus of the present paper is on the intriguing statistical properties of interbeat interval sequences , the analysis of which has attracted the attention of researchers from different disciplines . analysis of heartbeat fluctuations focused initially on short - time oscillations associated with breathing , blood pressure and neuroautonomic control . studies of longer heartbeat records , however , revealed $1/f$ - like behavior . recent analysis of very long time series indicates that under healthy conditions , interbeat intervals may exhibit power - law anticorrelations , follow universal scaling in their distributions , and are characterized by a broad multifractal spectrum . such scaling features change with disease and advanced age . the possible existence of scale - invariant properties in the seemingly noisy heartbeat fluctuations may be attributed to highly complex , nonlinear mechanisms of physiological control , as it is known that circadian rhythms are associated with periodic changes in key physiological processes . in figure 1 , samples of interbeat fluctuations of healthy subjects and those with congestive heart failure ( chf ) are shown . recently , friedrich and peinke were able to derive a fokker - planck ( fp ) equation for describing the evolution of the probability distribution function of stochastic properties of turbulent free jets , in terms of the relevant length scale . they pointed out that the conditional probability density of the _ increments _ of a stochastic field ( for example , the increments in the velocity field in turbulent flow ) satisfies the chapman - kolmogorov ( ck ) equation , even though the velocity field itself contains long - range , nondecaying correlations . as is well - known , satisfying the ck equation is a necessary condition for any fluctuating data to be a markovian process over the relevant length ( or time ) scales . hence , one has a way of analyzing stochastic phenomena in terms of the corresponding fp and ck equations . in this paper the method proposed by friedrich and peinke is used to compute the kramers - moyal ( km ) coefficients for the _ increments _ of interbeat interval fluctuations .
here , $\delta x_\tau ( t )$ denotes the interbeat increment which , for all the samples , is defined as $\delta x_\tau ( t ) = [ x ( t + \tau ) - x ( t ) ] / \sigma$ , where $\sigma$ is the standard deviation of the increments in the interbeat data . it is shown that the first and second km coefficients , representing , respectively , the drift and diffusion coefficients in the fp equation , have well - defined values , while the third - and fourth - order km coefficients are small . therefore , a fp evolution equation is developed for the probability density function ( pdf ) which , in turn , is used to gain information on how the shape of the pdf changes as a function of the time scale ( see also ref . [ 37 ] for another interesting and carefully - analyzed example of the application of the ck equation to stochastic phenomena ) . the plan of this paper is as follows . in section 2 we describe the friedrich - peinke method in terms of a km expansion and the fp equation . we then apply the method in section 3 to the analysis of the increments in the interbeat fluctuations . a complete characterization of the statistical properties of the interbeat fluctuations requires evaluation of the joint pdfs , $p_n ( \delta x_1 , \tau_1 ; \ldots ; \delta x_n , \tau_n )$ , for an arbitrary number $n$ of data points . if the phenomenon is a markov process , an important simplification arises in that the $n$ - point joint pdf is generated by the product of the conditional probabilities $p ( \delta x_{i+1} , \tau_{i+1} | \delta x_i , \tau_i )$ , for $i = 1 , \ldots , n - 1$ . thus , as the first step of analyzing a stochastic time series , we check whether the increments in the data follow a markov chain . as mentioned above , a necessary condition for a stochastic phenomenon to be a markov process is that the ck equation , $$p ( \delta x_3 , \tau_3 | \delta x_1 , \tau_1 ) = \int d ( \delta x_2 ) \, p ( \delta x_3 , \tau_3 | \delta x_2 , \tau_2 ) \, p ( \delta x_2 , \tau_2 | \delta x_1 , \tau_1 ) , \qquad ( 1 )$$ should hold for any value of $\tau_2$ in the interval $\tau_1 < \tau_2 < \tau_3$ . therefore , we check the validity of the ck equation for describing the data using many values of the triplets $( \tau_1 , \tau_2 , \tau_3 )$ , by comparing the directly - evaluated conditional probability distributions with those calculated according to the right - hand side of eq . ( 1 ) . in fig . 2 , the directly - computed pdf is compared with the one obtained from eq . ( 1 ) . allowing for a statistical error of the order of the square root of the number of events in each bin , we find that the pdfs are statistically identical . it is well - known that the ck equation yields an evolution equation for the distribution function across the scales . the ck equation , when formulated in differential form , yields a master equation , which takes on the form of a fp equation : $$\frac{ \partial }{ \partial \tau } p ( \delta x , \tau ) = \left[ - \frac{ \partial }{ \partial ( \delta x ) } d^{(1)} ( \delta x , \tau ) + \frac{ \partial^2 }{ \partial ( \delta x )^2 } d^{(2)} ( \delta x , \tau ) \right] p ( \delta x , \tau ) .$$ the drift and diffusion coefficients , $d^{(1)}$ and $d^{(2)}$ , are estimated directly from the data and the moments of the conditional probability distributions : $$d^{(k)} ( \delta x , \tau ) = \lim_{ \delta \tau \to 0 } \frac{1}{ k ! \, \delta \tau } \int d ( \delta x' ) \, ( \delta x' - \delta x )^k \, p ( \delta x' , \tau + \delta \tau | \delta x , \tau ) .$$ the coefficients $d^{(k)}$ are known as the kramers - moyal ( km ) coefficients . as an application of the method , we analyzed both daytime ( 12:00 pm to 18:00 pm ) and nighttime ( 12:00 am to 6:00 am ) heartbeat time series of healthy subjects , and the daytime records of patients with chf . our data base includes 10 healthy subjects ( 7 females and 3 males with ages between 20 and 50 , and an average age of 34.3 years ) , and 12 subjects with chf ( 3 females and 9 males with ages between 22 and 71 , and an average age of 60.8 years ) . the resulting drift and diffusion coefficients , $d^{(1)}$ and $d^{(2)}$ , are displayed in figures 3 and 4 . it turns out that the drift coefficient is a linear function of $\delta x$ , whereas the diffusivity is quadratic in $\delta x$ . estimates of these coefficients are less accurate for large values of $\delta x$ and , thus , the uncertainties increase .
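a direct histogram - based estimator of the km coefficients from a discrete series of interbeat intervals can be sketched as follows . the bin count , minimum occupancy and normalization are illustrative choices rather than the authors ' exact processing , and the finite - difference approximation of the $\delta \tau \to 0$ limit is schematic .

```python
import numpy as np
from math import factorial

def increments(x, tau):
    """normalized interbeat increments at scale tau (assumed
    normalization: the std of the unit-scale increments)."""
    x = np.asarray(x, dtype=float)
    return (x[tau:] - x[:-tau]) / np.std(np.diff(x))

def km_coefficients(x, tau, dtau=1, nbins=50, order=4):
    """estimate d^(k)(dx, tau), k = 1..order, from the conditional
    moments of the jump between scales tau and tau + dtau."""
    a = increments(x, tau)            # increments at scale tau
    b = increments(x, tau + dtau)     # increments at scale tau + dtau
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]               # aligned on the same starting beat
    edges = np.linspace(a.min(), a.max(), nbins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    which = np.clip(np.digitize(a, edges) - 1, 0, nbins - 1)
    coeffs = np.full((order, nbins), np.nan)
    for i in range(nbins):
        sel = which == i
        if sel.sum() < 25:            # skip poorly populated bins
            continue
        jump = b[sel] - a[sel]
        for k in range(1, order + 1):
            coeffs[k - 1, i] = np.mean(jump**k) / (factorial(k) * dtau)
    return centers, coeffs            # rows: d1 (drift), d2 (diffusion), ...
```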
using the data set for the healthy subjects we find the fitted forms of $d^{(1)}$ and $d^{(2)}$ given in eqs . ( 5 ) , whereas for the patients with chf we obtain those given in eqs . ( 6 ) . we also computed the _ average _ of the coefficients $d^{(1)}$ and $d^{(2)}$ for the entire set of the healthy subjects , as well as those with chf . according to pawula's theorem , the km expansion is truncated after the second term , provided that the fourth - order coefficient vanishes . for the data that we analyze , the fourth - order coefficient is negligibly small , both for the healthy subjects and for those with chf . equations ( 5 ) and ( 6 ) state that the drift coefficients for the healthy subjects and those with chf have the same order of magnitude , whereas the diffusion coefficients for given $\delta x$ and $\tau$ differ by about one order of magnitude . this points to a relatively simple way of distinguishing the two classes of subjects . moreover , the $\delta x$ - dependence of the diffusion coefficient for the healthy subjects is stronger than that of those with chf ( in the sense that the numerical coefficient of the quadratic term is larger for the healthy subjects ) . these are shown in figures 3 and 4 . the strong $\delta x$ - dependence of the diffusion coefficient for the healthy subjects indicates that the pdf of their increments for given $\tau$ , i.e. $p ( \delta x , \tau )$ , is intermittent , and that its shape should change strongly with $\tau$ . however , for the subjects with chf the pdf is not so sensitive to the change of the time scale , hence indicating that the increment fluctuations for the subjects with chf are _ not _ intermittent . these results are completely compatible with the recent discoveries that the interbeat fluctuations for healthy subjects and those with chf have multifractal and monofractal properties , respectively . in summary , we have shown that the probability density of the interbeat interval increments satisfies a fokker - planck equation , which encodes the markovian nature of the increment fluctuations . we have been able to compute reliably the first two kramers - moyal coefficients for the stochastic process - the drift and diffusion coefficients in the fp representation - and , using a polynomial ansatz , obtain simple expressions for them in terms of $\delta x$ and the time scale $\tau$ . we have shown that the drift and diffusion coefficients of the increments in the interbeat fluctuations of healthy subjects and patients with chf have different behavior , when analyzed by the method we use in this paper . hence , they help one to distinguish the two groups of subjects . moreover , one can obtain the form of the path probability functional of the increments in the interbeat intervals in the time scale $\tau$ , which naturally encodes the scale dependence of the probability density . this , in turn , provides a clear physical picture of the intermittent nature of interbeat interval fluctuations . let us emphasize that previous analyses of the data that we consider in this paper indicated that there may be long - range correlations in the data which might be characterized by self - affine fractal distributions , such as the fractional brownian motion or other types of stochastic processes that give rise to such correlations .
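given fitted drift and diffusion coefficients , the langevin equation associated with the fp equation can be integrated across the scale variable to reproduce the deformation of the pdf . the following euler scheme is schematic : the coefficient functions are hypothetical placeholders shaped like the fitted forms ( linear drift , quadratic diffusion ) , not the values estimated from the subjects , and the direction of integration across scales is simplified .

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_increments(d1, d2, tau_steps, n_paths=20000, dtau=1.0):
    """euler integration of d(dx) = d1(dx) dtau + sqrt(2 d2(dx)) dW,
    the langevin counterpart of the fokker-planck equation in tau."""
    dx = np.zeros(n_paths)
    for _ in range(tau_steps):
        noise = rng.standard_normal(n_paths) * np.sqrt(dtau)
        dx = dx + d1(dx) * dtau \
             + np.sqrt(2.0 * np.maximum(d2(dx), 0.0)) * noise
    return dx

# placeholder coefficients: a stronger quadratic diffusion term
# (healthy-like) produces heavier, intermittent pdf tails
healthy_like = simulate_increments(lambda u: -0.2 * u,
                                   lambda u: 0.01 + 0.05 * u**2, 10)
chf_like = simulate_increments(lambda u: -0.2 * u,
                               lambda u: 0.01 + 0.005 * u**2, 10)
```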
in that earlier type of analysis one distinguishes healthy subjects from those with chf in terms of the _ type _ of the correlations that might exist in the data . for example , if the data follow a fractional brownian motion , then the corresponding hurst exponent $h$ is used to distinguish the two classes of subjects , as $h < 1/2$ ( $h > 1/2$ ) indicates negative ( positive ) correlations in the data , while $h = 1/2$ indicates that the increments in the data follow brownian motion . the method proposed in the present paper is different from such analyses in that the _ increments _ in the data are analyzed in terms of markov processes . this is _ not _ in contradiction with the previous analyses . our analysis does indicate the existence of correlations in the increments , but , as is well - known in the theory of markov processes , such correlations , though extended , eventually decay . we distinguish the healthy subjects from those with chf in terms of the _ differences _ between the drift and diffusion coefficients of the fokker - planck equation that we construct for the incremental data , which , in our view , provides a clearer and more physical way of understanding the differences between the two groups of subjects than the previous method . l. a. lipsitz , j. mietus , g. b. moody , and a. l. goldberger , _ circulation _ * 81 * , 1803 ( 1990 ) ; d. t. kaplan _ et al . _ , _ biophys . j. _ * 59 * , 945 ( 1991 ) ; n. iyengar _ et al . _ , _ am . j. physiol . _ * 271 * , r1078 ( 1996 ) . c .- k . peng , s. v. buldyrev , s. havlin , m. simons , h. e. stanley , and a. l. goldberger , _ phys . rev . e _ * 49 * , 1685 ( 1994 ) ; c .- k . peng , j. m. hausdorff , and a. l. goldberger , in _ nonlinear dynamics , self - organization , and biomedicine _ , edited by j. walleczek ( cambridge university press , cambridge , 1999 ) .
statistical properties of the interbeat interval cascade are evaluated by considering the joint probability distribution for two interbeat increments of different time scales . we present evidence that the conditional probability distribution may obey a chapman - kolmogorov equation . the corresponding kramers - moyal ( km ) coefficients are evaluated . it is shown that while the first and second km coefficients , i.e. , the drift and diffusion coefficients , take on well - defined and significant values , the higher - order coefficients in the km expansion are very small . as a result , the joint probability distributions of the increments in the interbeat intervals obey a fokker - planck equation . the method provides a novel technique for distinguishing the two classes of subjects , namely healthy subjects and those with congestive heart failure , in terms of the drift and diffusion coefficients , which behave differently for the two classes .
nowadays various estimates of the lyapunov dimension of attractors of generalized lorenz systems are actively developed . in the work the following result on the estimate of the lyapunov dimension of the generalized lorenz system is formulated . [ errortheorem ] let be an invariant compact set of the generalized lorenz system . suppose that , , , , and . then , if one of the following conditions is satisfied : 1 . , 2 . , 3 . $d = 0$ ; and also one of the conditions : 1 . , 2 . , then the estimate is valid . however , in the proof of this result there is an inaccuracy , to the consideration of which the present work is devoted . let us introduce the notion of lyapunov dimension . consider , for this purpose , the linearization of the dynamical system along a solution , where is the jacobian matrix evaluated along the trajectory of the system . the fundamental matrix of the linearized system is defined by the variational equation . let be the singular values of the fundamental matrix ( the square roots of the eigenvalues of the corresponding matrix , ordered for each ) . [ def : le ] the _ lyapunov exponents ( les ) _ at the point are the numbers ( or symbols ) defined via these singular values . following , we introduce the notion of lyapunov dimension . the local lyapunov dimension of a point in the phase space of the dynamical system , generated by the equation , is the number $$d_{\mathrm{ky}} = j + \frac{ \mathrm{le}_1 + \cdots + \mathrm{le}_j }{ | \mathrm{le}_{j+1} | } ,$$ where $\mathrm{le}_1 \geq \cdots \geq \mathrm{le}_n$ are the lyapunov exponents and $j$ is the largest integer such that $\mathrm{le}_1 + \cdots + \mathrm{le}_j \geq 0$ . suppose that there is a continuously differentiable scalar function and a nonsingular matrix such that the corresponding inequality holds ; then the dimension estimate follows . here , the derivative of the function is taken with respect to the system . [ theorem : th2 ] assume that there is a continuously differentiable scalar function and a nonsingular matrix such that the corresponding condition holds . then any solution of the system , bounded on , tends to an equilibrium as . thus , if this condition holds , then the global attractor of the system coincides with its stationary set . for proving theorem [ errortheorem ] , the above - described approach is used with the matrix , where , and the function , where the parameters are chosen in dependence on the system parameters in such a way that the conditions of theorem [ theorem : th1 ] or [ theorem : th2 ] are satisfied . in proving theorem [ errortheorem ] in the work , an inaccuracy is committed , which does not permit one to obtain a condition for estimating the attractor dimension in the case considered . namely , in the proof there is an inaccurate assertion that in this case the first inequality yields the second one . let us show that this assertion may not be true . consider specific values of the system parameters and running parameters for which the assertion is not satisfied . take , , , for which , and , . then one has . the first condition is satisfied . however , the relation is impossible , i.e. the second condition is not satisfied under such parameters . correcting the above - mentioned inaccuracy , we obtain the following result . [ our ] denote by the bounded invariant set of the system , involving the point . suppose that , , , , and there exists such that the inequality is satisfied . let , where is defined as in . in this case : 1 . if , then any solution of the system bounded on tends to a certain equilibrium as ; 2 . if , then the dimension estimate holds . for proving theorem [ our ] we use the matrix , defined according to , and a lyapunov - like function of square - root type . the exact value of the dimension is obtained here from the comparison of the dimension estimate and the value of the lyapunov dimension at the zero point . the obtained value corresponds to the dimension of the global attractor or a ( local ) b - attractor , involving the zero equilibrium . in 2012 g.a.
leonov suggested considering for the chen and lu systems a change of variables involving a parameter . under this transformation the chen and lu systems are transformed to the generalized lorenz system with the corresponding parameters . note that for the limit value of the parameter the chen and lu systems become linear and their dynamics are of minor interest . thus , without loss of generality , one can assume the parameter nonzero . the transformation does not change the direction of time for the positive chaotic parameters considered in the works . remark that applying transformations with time inversion ( see the corresponding discussion of the lorenz , chen and lu systems ) is not suitable for the study of the lyapunov dimension of invariant sets since the absolute values of the lyapunov exponents and the corresponding local lyapunov dimension in direct and backward time may be different ( see , e.g. , ) . below there are represented domains in the obtained two - dimensional parameter space for which the condition of theorem [ our ] is satisfied . on the plane there are also shown the points corresponding to standard parameters of chaotic attractors . for the lu system such parameters are , and , after the change ; for the chen system , and . as shown in figs . [ figleft ] - [ figright ] , neither the lu system nor the chen system standard parameters are situated in the obtained domain . while the lyapunov dimension of the classical lorenz attractor has been obtained analytically , for the chen and lu attractors it is still an open problem . also , by numerical simulation we did not find any chaotic attractor of the chen and lu systems with the parameters from the obtained domains . applying the same approach , we extend the results obtained in the case of theorem [ errortheorem ] . [ yangtheorem ] suppose that , and the inequality is satisfied . let the estimate be valid , where is the greater root of the corresponding equation and is given according to . in this case : 1 . if , then any solution of the system bounded on tends to a certain equilibrium as ; 2 . if , then the dimension estimate holds . consider , for example , the parameters for which there exists a chaotic attractor . for them the first inequality is satisfied , but not the second . thus , for the above - mentioned parameters theorem [ errortheorem ] can not be applied . but theorem [ yangtheorem ] can be applied , since for them the greater root of the equation can be computed . similarly it is possible to consider the generalized lorenz system not only with positive parameters , but also with negative ones . [ main ] suppose that is a bounded invariant set of the system involving the point . suppose that the inequality is valid , and there exist numbers such that the corresponding relations hold . in this case : 1 . if , then any solution of the system bounded on tends to a certain equilibrium as ; 2 . if , then the dimension estimate holds . g. a. leonov , n. v. kuznetsov , on differences and similarities in the analysis of lorenz , chen , and lu systems , applied mathematics and computation 256 ( 2015 ) 334 - 343 . http://dx.doi.org/10.1016/j.amc.2014.12.132 . y. chen , q. yang , the nonequivalence and dimension formula for attractors of lorenz - type systems , international journal of bifurcation and chaos 23 ( 12 ) ( 2013 ) 1350200 . http://dx.doi.org/10.1142/s0218127413502003 . n. v. kuznetsov , t. a. alexeeva , g. a. leonov , invariance of lyapunov characteristic exponents , lyapunov exponents , and lyapunov dimension for regular and non - regular linearizations , arxiv e - prints http://arxiv.org/abs/1410.2016 .
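for reference , the local lyapunov ( kaplan - yorke ) dimension used throughout the theorems above can be computed from an ordered spectrum of lyapunov exponents as follows . this is a minimal sketch of the standard formula ; the usage line applies it to the widely quoted numerical exponents of the classical lorenz attractor , not to the generalized systems discussed here .

```python
import numpy as np

def kaplan_yorke_dimension(exponents):
    """kaplan-yorke (lyapunov) dimension: d = j + (le_1+...+le_j)/|le_{j+1}|,
    with j the largest index keeping the partial sum non-negative."""
    le = np.sort(np.asarray(exponents, dtype=float))[::-1]  # descending order
    s = np.cumsum(le)
    if s[0] < 0:
        return 0.0               # no expansion at all
    if s[-1] >= 0:
        return float(len(le))    # volume does not contract
    j = int(np.max(np.where(s >= 0)[0]))
    return (j + 1) + s[j] / abs(le[j + 1])

# classical lorenz attractor exponents (approximate) give d ~ 2.06
print(kaplan_yorke_dimension([0.9056, 0.0, -14.5723]))
```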
nowadays various estimates of the lyapunov dimension of the attractors of lorenz - like systems are actively developed . within the frame of this study the question arises whether it is possible to obtain the corresponding estimates of dimension for the chen and lu systems using their reduction to the generalized lorenz system . in the work , leonov 's method was applied for the estimation of the lyapunov dimension , and as a consequence the lyapunov dimension of the attractors of the chen and lu systems with the classical parameters was estimated . in the present work an inaccuracy in that proof is corrected and it is shown that the revised domain of parameters , where the estimate of the lyapunov dimension is valid , does not involve the classical parameters of the chen and lu systems . lorenz - like system , lorenz system , chen system , lu system , kaplan - yorke dimension , lyapunov dimension , lyapunov exponents .
many neural systems composed of a number of interacting neurons exhibit self - sustained oscillatory behavior leading to the formation of various space - time patterns . such patterns are believed to play a key role in the signal and information processing functions of the brain . one of the fundamental problems is the understanding of possible dynamic mechanisms for such patterns to appear and to evolve in time and space . there are two basic mechanisms for the emergence of oscillations ( regular or chaotic ) in neuron assemblies . the first one is obvious and deals with the presence of local intra - cellular oscillations . being coupled , such units are capable of various oscillatory patterns , clustering and synchronization . the second one , found in recent theoretical and experimental studies , concerns the possibility of oscillations in assemblies of non - oscillatory cells . the oscillations may also appear in coupled non - identical cells for sufficiently strong coupling . the assembly is characterized by an oscillatory `` average '' cell dynamics which makes the non - oscillatory cells oscillate . other studies have reported that coupling even identical excitable cells can modify the dynamics to form oscillatory attractors co - existing with a stable fixed point . such attractors are characterized by anti - phase spiking . the effects of electrical coupling between neural cells also include the appearance of bursting in two coupled pacemaker cells , modification of the burst period in coupled bursters , synchronization and chaos . a model approach , in order to display the dynamical origin of neural oscillations , is to use a simplified behavior - based description of the system . for this purpose nonlinear maps could be helpful as they can provide an appropriate qualitative description of complex dynamic processes , including chaotic behavior , in lower dimensional systems . in this paper we study a system of two coupled piece - wise linear one - dimensional maps . the single map is derived from the dynamics of an isolated neural cell modelled by the fitzhugh - nagumo excitable system . the fitzhugh - nagumo neuron model can be derived from the hodgkin - huxley conductance - based model , for some parameter values , when we take into account the difference of kinetics between the _ potential dependent gating variables _ . the first variable describes the evolution of the neuron membrane potential , the second mimics the dynamics of outward ionic currents . then , the model can describe the salient features of neuron dynamics including action potential generation , excitability and the excitation threshold . the map has one globally stable fixed point and a discontinuity corresponding to the excitation threshold of the cell . we shall show how linear diffusive coupling between the two maps leads to the appearance of chaotic oscillations with anti - phase spiking . a number of studies of coupled chaotic maps have shown that anti - phase chaotic oscillations may appear when the synchronization manifold loses transverse stability and the system evolves to off - diagonal attractors . the paper is organized as follows . in sect . ii we show how the dynamics of the excitable fitzhugh - nagumo model can be described by a piece - wise continuous point map , and introduce the two - dimensional map modeling a pair of coupled cells .
in sect . iii we analyze the dynamics of the map . we numerically show the emergence of a strange attractor in an invariant domain defined by the invariant manifolds of a saddle fixed point and a saddle periodic orbit . sect . iv describes the statistical characteristics of the attractor set and the emergent chaotic oscillations with anti - phase spiking . section v contains a brief discussion of the results . to replicate the excitable dynamics of an isolated neural cell one can use the fitzhugh - nagumo - like model . it can be taken in the usual two - variable form , where the first variable describes the evolution of the membrane potential of the neuron and the second describes the dynamics of the outward ionic currents ( the recovery variable ) . the nonlinear function is taken piece - wise linear . two parameters control the shape and the location of the corresponding nullcline , hence the dynamics of the recovery term ; another parameter defines the time scale of the excitation pulse , and a final parameter is a constant stimulus . the excitable behavior of eqs . ( [ fhn ] ) is illustrated in fig . 1 , which also shows the transversal half - line used below . appropriate values of the parameters provide the existence of three fixed points . two of them are a stable and an unstable focus , respectively , and the third is a saddle with the incoming separatrix defining the excitation threshold . then , if a perturbation of the rest state is large enough , i.e. lies beyond the separatrix , the system responds with an excitation pulse , otherwise it decays to the stable rest point ( fig . 1 ) . in order to describe the dynamics of the unit ( [ fhn ] ) by a point map , we introduce the transversal half - line ( fig . 1 ) . it is found that the return map of the flow given by eqs . ( [ fhn ] ) at this section defines a map for all points excluding the one corresponding to the intersection of the incoming saddle separatrix with the half - line . this point never returns to the cross - section . consequently , we must also exclude all the pre - images of this point . then , the poincaré map can be written in terms of the coordinates of the points at the poincaré section . the map is invertible and defined on the interval excluding those points . the shape of the curve calculated numerically is shown in fig . 2 . it is given by a piece - wise continuous curve with one stable fixed point . then , the dynamics of the map is trivial : all its trajectories are sequences of points monotonically approaching the fixed point . the discontinuity point plays the role of the excitation threshold , i.e. the neuron exhibits spiking if the map evolves above this point . note that such a map obtained from ( [ fhn ] ) gives an `` average '' description of the cell dynamics . [ figure 2 : the map obtained from eqs . ( [ fhn ] ) at the section half - line . ] let us approximate the piece - wise continuous curve by a piece - wise linear function and define the map for all points excluding the points above . then , for two cells we introduce a difference coupling term ( electrical coupling ) and consider the following two - dimensional map , where the two variables refer to the dynamics of the two cells , the coupling coefficient enters linearly , and the function is taken in the piece - wise linear form specified above . * a.
general properties of the map * since the map is given by a piece - wise linear function , the differential of the map is a constant matrix with fixed eigenvalues . hence , the lyapunov exponents are defined by these eigenvalues for an arbitrary trajectory of the map , except for a set of initial conditions of zero lebesgue measure . let us consider the system ( [ 2d ] ) in the corresponding parameter region ; for convenience , we shall fix one of the parameters . this region is chosen in order to have a positive lyapunov exponent , so that any trajectory avoiding the set of discontinuity points of the map depends heavily on the initial conditions , and to have the modulus of the jacobian of the map less than one . all these features allow us to speak of chaotic behavior . we shall study the trajectories of the map missing the discontinuity set . for convenience , let us change variables according to the eigenvectors of the differential . then the discontinuity lines divide the phase plane of the map into four regions ; in each region the map is continuous and affine . the map has one hyperbolic fixed point . in the corresponding region the invariant curves coincide with the coordinate axes . analyzing the map in the other regions , we find that it has a hyperbolic periodic orbit of period two , provided the parameters satisfy the inequality given in ( [ 9 ] ) . the stable and unstable invariant manifolds of the orbit follow from the same relations . the periodic orbit appears from infinity in the phase plane for parameters belonging to a curve in the parameter plane . note that on this curve one of the multipliers hits the bifurcation value . the fixed point then changes its stability , becoming of saddle type . * b. invariant region and chaotic attractor of the map * let us consider the location of the invariant curves of the fixed point and of the periodic orbit in the phase plane with respect to the discontinuity lines . the unstable invariant curve ( the separatrix ) intersects the discontinuity line at one point , and the other invariant curve at a second point . figure 3 illustrates the location of the invariant curves of the fixed point and of the orbit in the phase plane . let us introduce the region in the phase plane bounded by these curves , and find the conditions for it to be an invariant region of the map . since the system ( [ y1y2 ] ) is symmetric with respect to one axis , it is sufficient to consider only one half of the region . let us obtain the conditions on the parameter values under which the image of this half is contained in the region . it follows from ( [ 13 ] ) that this holds if the corresponding inequalities are satisfied . using the symmetry of the region , we can restrict the above condition to its right part ; the inequalities ( [ 14 ] ) are satisfied if the first inequalities in ( [ 15 ] ) impose the corresponding condition on the parameters . thus , under condition ( [ 16 ] ) , the image of the first half is contained in the region , and similarly the image of the other half is also included in it for the parameters considered . [ figure 3 : the invariant curves of the fixed point and the periodic orbit , and the discontinuity lines , on the phase plane of the map . ] finally , this set is an invariant region of the map in the indicated parameter region . [ figure 4 : the admissible region in the parameter plane . ] figure 4 shows this region in the parameter plane .
in this plane the boundary of the region consists of three components : the first line corresponds to the appearance of the periodic orbit and the change of stability of the fixed point ; the second line is the boundary of monostability of the uncoupled maps ( [ 2d ] ) ; the points of the third curve correspond to the `` tangency '' of the separatrices for these parameter values . therefore , if the parameter values of system ( [ y1y2 ] ) belong to this region , then the invariant ( absorbing ) region exists in the phase plane . consequently , it contains a strange ( chaotic ) attractor of the map . figure 5 illustrates a possible structure of the attractor in the phase plane . [ figure 5 : the attractor on the phase plane . ] to characterize the complexity of the chaotic attractor we calculated numerically its fractal dimension . it appears that the dimension , indeed , takes non - integer values greater than one ( fig . 6 ) . note that the box dimension of the set increases with the increase of the coupling coefficient . the corresponding estimate of the dimension using lyapunov exponents is shown by the dashed curve . [ figure 6 : the fractal dimension of the chaotic attractor calculated by the box counting method versus the coupling coefficient . ] [ figure 7 : ( a ) time evolution of the two variables , shown by solid and dashed curves , respectively ; ( b ) spike - burst behavior of the binary variables . ] * c. chaotic oscillations and attractor . * figure 7 ( a ) illustrates the time evolution of the variables corresponding to the chaotic attractor . let us characterize the oscillations in terms of the original model ( [ fhn ] ) describing neuron excitability . note that the neuron excitation threshold accounted for by the saddle separatrix ( fig . 1 ) corresponds to the discontinuity lines in the map description . thus , if a map trajectory jumps above these lines we can refer to this event as a neuron spike ; if it evolves below , the neuron is not excited . in such a way the map oscillations can be described with binary variables , whose evolution is shown in fig . 7 ( b ) ; a minimal simulation sketch is given below . it appears that the map oscillations represent a sequence of bursts containing anti - phase spiking . note that the characteristic time scales of the oscillations ( burst duration and inter - burst period ) depend on the coupling . for smaller coupling one finds a longer lasting subthreshold period and a shorter burst period . then , the map describes a kind of chaotic spike - burst behavior typical for many neural systems . the global behavior of the chaotic dynamics is described by the probability of finding the trajectory in any given region of the attractor . it can be visualized by the distribution of a cloud of points , each moving under the deterministic mapping . a stationary distribution may be reached by the system after long term evolution .
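the spike - burst mechanism just described can be illustrated with a toy simulation . the slopes , threshold and reset branch of the piece - wise linear map below are hypothetical stand - ins ( the paper 's actual coefficients are fixed by the fitzhugh - nagumo return map ) ; they are chosen so that one eigenvalue of the constant jacobian , $a - 2d$ , is expanding while the product of the eigenvalues stays contracting , mimicking the regime discussed above . other parameter choices may simply converge or diverge .

```python
import numpy as np

A, KICK, C = 0.7, -0.5, 0.25   # slope, spike-branch offset, threshold
                               # (illustrative values, not the paper's)

def f(x):
    """hypothetical piece-wise linear monostable map: a contracting
    branch below the discontinuity c and a reset branch above it."""
    return A * x if x <= C else A * x + KICK

def run(n, d=0.86, x0=(0.3, -0.2)):
    """iterate the diffusively coupled pair and record the binary
    spike variables (1 when a unit jumps above the threshold)."""
    xs = np.empty((n, 2))
    x1, x2 = x0
    for t in range(n):
        x1, x2 = (f(x1) + d * (x2 - x1),
                  f(x2) + d * (x1 - x2))
        xs[t] = (x1, x2)
    spikes = (xs > C).astype(int)   # binary spike-burst description
    return xs, spikes

xs, spikes = run(2000)
```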
to obtain this stationary distribution , we start from a histogram of clouds of points in each small `` box '' of the phase space . these points are mapped by the map $f$ and a new cloud is thus obtained . thus to the initial probability density $\rho_0$ corresponds a new density $\rho_1$ . the operator $l$ which maps $\rho_0$ to $\rho_1$ is called the perron - frobenius operator , defined by $\int_a ( l \rho ) ( y ) \, dy = \int_{ f^{-1} ( a ) } \rho ( y ) \, dy$ for any region $a$ of the phase space . the stationary density is an eigenfunction of $l$ corresponding to the eigenvalue 1 . the existence and uniqueness of the stationary distribution has been the object of many works . it is nevertheless difficult to obtain an exact analytical expression for it , apart from the restricted case of markov maps . we shall use an approximating algorithm inspired from the method of ding and zhou . the phase space of the coupled system is divided into $n$ identical rectangles $\delta_i$ . we consider an initial probability density that is constant on each rectangle : $\rho_0 ( y ) = \sum_i \frac{ a_i ( 0 ) }{ m ( \delta_i ) } \chi_{ \delta_i } ( y )$ , where $m ( \delta_i )$ is the lebesgue measure of $\delta_i$ , $a_i ( 0 )$ is the probability of finding a phase point in $\delta_i$ , and $\chi_{ \delta_i }$ is the characteristic function of $\delta_i$ . it is clear that $l \rho_0$ has no reason to be of the `` coarse - grained '' form ( [ 18 ] ) . smoothing this density by integrating it on each $\delta_j$ we obtain , by using the definition ( [ 17 ] ) : $$a_j ( 1 ) = \int_{ \delta_j } ( l \rho_0 ) ( y ) \, dy = \sum_i \frac{ a_i ( 0 ) }{ m ( \delta_i ) } \, m \big( f^{-1} ( \delta_j ) \cap \delta_i \big) .$$ thus , the transition probability from $\delta_i$ to $\delta_j$ is given by the stochastic matrix : $$p_{ij} = \frac{ m \big( f^{-1} ( \delta_j ) \cap \delta_i \big) }{ m ( \delta_i ) } . \qquad ( 20 )$$ the matrix $p$ is an approximate value of the perron - frobenius operator . it can be proved that it converges to $l$ as $n \to \infty$ . eq . ( [ 19 ] ) can be written in the matrix form $a ( 1 ) = a ( 0 ) \, p$ , where $a$ is the row vector with components $a_i$ . thus , an approximate stationary probability is the row eigenvector $a$ such that $a = a \, p$ . we first calculate the matrix elements of $p$ and then we compute the eigenvector . to calculate $p_{ij}$ we use ( 20 ) : in each rectangle $\delta_i$ we put points uniformly distributed , then we compute the number of points $y$ such that $f ( y ) \in \delta_j$ , and we divide this number by the number of points in $\delta_i$ . this provides $p_{ij}$ . we used this method to approximate the stationary distribution for the chosen parameter values ( fig . 8) , showing a density distributed over some region , which is to be compared with fig . 5 , where we have used only one trajectory . it is to be noted that the latter figure shows the attractor generated by one trajectory whereas fig . 8 shows the approximate density of the attractor . the above method allows one to obtain the statistical distribution of synchronized spikes . recall that a unit is spiking if and only if its coordinate lies above the corresponding discontinuity line , and that the region where both units spike simultaneously lies outside the invariant region . moreover , because the corresponding regions are disjoint , the two units can not be simultaneously spiking . thus we shall consider a partition of the invariant domain corresponding to one of the three possible states of spiking ( denoted here $e_1$ , $e_2$ and $e_0$ ) : $e_1$ , where the only spiking unit is the first one ; $e_2$ , where the only spiking unit is the second one ; and $e_0$ , where no one is spiking . this partition induces a symbolic dynamics , which we shall study . we calculate the stochastic matrix of transitions from any one of these states to any other , for two values of the coupling parameter ( see fig . 9 ) . [ figure 9 : stochastic matrices of transition probabilities for three values of the coupling d. ] this table shows that only a few transitions are probable in one step . less probable transitions exist as well , but these transition probabilities depend monotonically on the coupling . thus the transitions $e_1 \to e_2$ and $e_2 \to e_1$ ( i.e.
successive spikes ) decrease slightly with coupling . so , strengthening the coupling in this range decreases the bursting probability . we can understand these occurrences by considering the intersections of the image of each region under the map with these same regions ( see fig . 10 ) . the most probable transitions correspond to the largest of these intersections , and the less probable transitions to the smallest ones . the positive lebesgue measure of these intersected regions gives the transition probability of each of these occurrences . some of the transition probabilities given by the calculation of the lebesgue measures of the intersected regions are shown in fig . 11 , with their dependence on the coupling constant , which can be compared to those obtained numerically in the table of fig . 9 . all the calculations are made in appendix a , where we also show that , when the parameters are in the admissible region , a neuron can not fire twice in a row . [ figure 10 : images of the partitions of the invariant region . ] [ figure 11 : transition probabilities versus coupling . ] we can now check the memory effects by studying the conditional probability of a state given the past . if we denote by $s_t$ the value of the state at time $t$ , we would like to know whether the conditional probability $p ( s_t | s_{t-1} , s_{t-2} )$ is equal to $p ( s_t | s_{t-1} )$ or not . in the first case there is no memory in the process . in order to compute the above conditional probability we have to consider all triplets of states , which implies many calculations . but , neglecting rare events of small probability , we have only a few significant transitions . the first one is the probability of having a spike if it is preceded by a given sequence of states ; for the others the definition is the same . thus we obtain the following table ( fig . 12 ) . the fact that the two conditional probabilities are not equal means that the system has acquired a memory : the chain is not a markov chain . the memory of a chain is given by the smallest positive integer such that conditioning on a longer past does not change the conditional probabilities . it is also remarkable that the probability of having a spike preceded by the corresponding two - state sequence is double the probability of having it preceded by the one - state past only . this explains the frequent appearance of sequences of spikes . estimating the memory of the process depends on the full calculation of all conditional probabilities for all pasts . another very interesting aspect is the dependence of this memory on the coupling coefficient , as shown in fig . 13 . it is clear that the memory effects increase with coupling ( the distance between the two curves increases with coupling ) .
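the transition and conditional probabilities discussed above can be estimated directly from a simulated symbol sequence . a minimal sketch , reusing the binary spike variables from the simulation sketch of the previous section ; the state labels $e_0$ , $e_1$ , $e_2$ follow the partition introduced above .

```python
import numpy as np
from collections import Counter

def symbolize(spikes):
    """0 = no spike, 1 = only unit 1 spikes, 2 = only unit 2 spikes
    (simultaneous spiking lies outside the invariant region)."""
    s1 = spikes[:, 0].astype(bool)
    s2 = spikes[:, 1].astype(bool)
    return np.where(s1 & ~s2, 1, np.where(s2 & ~s1, 2, 0))

def transition_matrix(sym, n_states=3):
    """one-step stochastic matrix p[a, b] = p(next = b | current = a)."""
    p = np.zeros((n_states, n_states))
    for a, b in zip(sym[:-1], sym[1:]):
        p[a, b] += 1.0
    return p / np.maximum(p.sum(axis=1, keepdims=True), 1.0)

def conditional(sym, past):
    """p(next | given tuple of past states); comparing, e.g.,
    conditional(sym, (0,)) with conditional(sym, (2, 0)) probes
    memory beyond a first-order markov chain."""
    k = len(past)
    hits, total = Counter(), 0
    for t in range(k, len(sym)):
        if tuple(sym[t - k:t]) == past:
            hits[int(sym[t])] += 1
            total += 1
    return {s: c / total for s, c in hits.items()} if total else {}
```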
using a probabilistic description we have found the probabilities of finding a spike under different conditions . it has also been found that the system acquires a memory ( in contrast with markov processes ) . this research has been supported by the russian foundation for basic research ( grant 03 - 02 - 17135 ) and by a grant of the president of the russian federation ( mk 4586.2004.2 ) . v.b.k . acknowledges the russian science support foundation for financial support . v.i.n . acknowledges the financial support from university paris vii . in order to understand the oscillatory behavior it is interesting to look at the image of the invariant region after one iteration ( see fig . 10 ) . first of all , we have to determine the images of the regions of the invariant region defined by the discontinuities . each of these regions is a polygon whose vertices can be written down explicitly ( under a condition which is always satisfied for the parameters considered ) . let us compute the images of all these vertices , according to the definition of the map in each region . we have to keep in mind that some of these points belong to the lines of discontinuity ; in this case , the computation is taken in the sense of a limit starting from the interior of each region . we first shall see that the image of each spiking region is completely included in the non - spiking region and excluded from the spiking ones , which implies that one neuron can not spike twice in a row . for the simplicity of the proof , we shall use indirect calculations , establishing four inequalities . the first three inequalities allow us to say that the image point lies under and on the left of the relevant discontinuity lines ; since it lies on the corresponding segment , the whole image is under and on the left of them . the last inequality shows that the image ( and by consequence the whole region ) is not in the spiking region . johnson s. w. , seutin v. and north r. a. , science * 258 * , 665 ( 1992 ) . e. r. kandel , j. h. schwartz , t. m. jessell ( eds . ) , _ principles of neural science , third edition _ ( prentice - hall intern . , 1991 ) . a. scott , neuroscience : a mathematical primer ( oxford univ . press , oxford , 1998 ) . j. d. murray , mathematical biology ( springer - verlag , berlin , 1989 ) . y. manor , j. rinzel , i. segev , and y. yarom , j. neurophysiol . * 77 * , 2736 ( 1997 ) . y. loewenstein , y. yarom , and h. sompolinsky , proc . natl . acad . sci . usa * 98 * , 8095 ( 2001 ) . a. sherman and j. rinzel , proc . natl . acad . sci . usa * 89 * , 2471 ( 1992 ) . x .- j . wang , d. golomb and j. rinzel , proc . natl . acad . sci . usa * 92 * , 5577 ( 1995 ) . n. f. rulkov , phys . rev . lett . * 86 * , 183 ( 2001 ) ; phys . rev . e * 65 * , 041922 ( 2002 ) . g. de vries , phys . rev . e * 64 * , 051914 ( 2001 ) . b. cazelles , m. courbage and m. rabinovich , europhys . lett . * 56 * ( 4 ) , 504 ( 2001 ) . c. c. chow and n. kopell , neural computation * 12 * , 1643 ( 2000 ) . r. d. pinto _ et al . _ , phys . rev . e * 62 * , 2644 ( 2000 ) . n. f. rulkov _ et al . _ , phys . rev . e * 64 * , 016217 ( 2001 ) . e. m. izhikevich , int . j. bifurc . chaos * 10 * ( 6 ) , 1171 ( 2000 ) ; s. p. dawson , m. v. dangelo and j. e. pearson , phys . lett . a * 265 * , 346 ( 2000 ) . v. b. kazantsev , phys . rev . e * 64 * , 056210 ( 2001 ) .
the dynamics of two coupled piece - wise linear one - dimensional monostable maps is investigated . the single map is associated with a poincaré section of the fitzhugh - nagumo neuron model . it is found that a diffusive coupling leads to the appearance of a chaotic attractor . the attractor exists in an invariant region of phase space bounded by the manifolds of the saddle fixed point and the saddle periodic point . the oscillations from the chaotic attractor have a spike - burst shape with anti - phase synchronized spiking . * spiking - bursting activity is a common feature of the temporal organization in many neural firing patterns . bursting activity means that clusters of spikes ( of action potentials ) occur more or less rhythmically , separated by phases of quiescence . spiking - bursting dynamics can be regular or chaotic depending on the concentration of neuromodulators , currents and other control parameters . for example , many of the thalamocortical neurons from central pattern generators generate chaotic spiking - bursting dynamics . in order to understand the emergence of chaotic oscillations in a network of neurons , we use a two - variable fitzhugh - nagumo model of the membrane potential of an isolated neural cell . for appropriate values of the parameters , the system possesses a stable fixed point , a focus , surrounded by a separatrix such that for large enough perturbations the system responds by an excitation pulse which also decays to the stable focus . hence , introducing a section by a half - line , we obtain a one - dimensional piece - wise smooth map with one discontinuity point representing the excitation threshold . then we consider a difference coupling between two such maps . in the linear approximation of the maps and for suitable values of the parameters , this two - dimensional system is locally hyperbolic , having discontinuity lines and a strange attractor . the interesting fact is that these lines define regions in the phase space corresponding to the thresholds of excitability of each one of these neurons , or none of them . the symbolic dynamics then represents the bursting activity of the neurons . it is then possible to apply the approximation tools of statistical analysis to time series generated by the system , namely through the perron - frobenius operator , in order to study and interpret some important features of neural networks , like synchronization and time correlations . *
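the spike statistics and the memory test discussed above are easy to reproduce in a small numerical experiment . the following python sketch is a minimal illustration , not the model of the paper : the piecewise branches , the parameter values and the small stochastic kicks ( a stand - in for the chaotic drive of the full map ) are all assumptions , chosen only so that two diffusively coupled excitable maps produce a symbolic spike sequence whose conditional probabilities can be compared .

```python
import numpy as np

rng = np.random.default_rng(0)

D = 0.5      # discontinuity point, i.e. the excitation threshold (assumed)
A = 0.7      # sub-threshold contraction rate toward the rest state (assumed)
EPS = 0.15   # diffusive coupling strength (assumed)
SIGMA = 0.2  # amplitude of stochastic kicks standing in for chaos (assumed)

def f(x):
    # assumed branches: contraction to the rest state x = 0 below the
    # threshold, re-injection near the rest state after an excursion
    return A * x if x < D else 0.1 * x

def simulate(n_steps=100_000):
    x1 = x2 = 0.0
    spikes = np.empty(n_steps, dtype=int)
    for n in range(n_steps):
        # linear "diffusive" coupling mimicking an electrical current
        y1 = f(x1) + EPS * (x2 - x1) + SIGMA * rng.random()
        y2 = f(x2) + EPS * (x1 - x2) + SIGMA * rng.random()
        x1, x2 = y1, y2
        spikes[n] = int(x1 >= D)   # symbol of unit 1: 1 = spike, 0 = rest
    return spikes

def cond_prob(s, past):
    # empirical estimate of p( s_n = 1 | (s_{n-k}, ..., s_{n-1}) = past )
    k = len(past)
    matches = hits = 0
    for n in range(k, len(s)):
        if tuple(s[n - k:n]) == past:
            matches += 1
            hits += s[n]
    return hits / matches if matches else float("nan")

s = simulate()
# for a first-order markov chain all three estimates would coincide
print("p(1 | 1)   =", cond_prob(s, (1,)))
print("p(1 | 0,1) =", cond_prob(s, (0, 1)))
print("p(1 | 1,1) =", cond_prob(s, (1, 1)))
```

comparing the last two estimates is exactly the memory test described in the text : a difference between them signals that the symbolic process is not a markov chain of order one .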
in their pioneering work , gupta and kumar posed and studied the limits of communication in ad hoc wireless networks . assuming nodes are uniformly distributed in a plane and grouped into source - destination ( s - d ) pairs at random , they showed that one can achieve a sum throughput of . this is achieved using a multihop transmission scheme in which nodes transmit to one of the nodes in their neighboring cells , requiring full connectivity with at least one node per cell . a trade - off between throughput and delay of fully - connected networks was studied in and was extended in to trade - offs between throughput , delay , and energy . the work in has studied relay networks in which a single source transmits its data to the intended destination using the other nodes as relays . using percolation theory , they showed that a constant rate is achievable for a single s - d pair if we allow a small fraction of nodes to be disconnected . this result can be applied to ad hoc networks having multiple s - d pairs , and the work in proposed an indirect multihop routing protocol based on such partial connectivity , that is , all s - d pairs perform multihop transmissions based on this partially - connected sub - network . they showed that the indirect multihop routing improves the achievable sum throughput as . information - theoretic outer bounds on throughput scaling laws of ad hoc wireless networks were derived in . these bounds showed that the multihop routing using neighbor nodes is order - optimal in the power - limited and high attenuation regime . recently , a hierarchical cooperation scheme was proposed in and was shown to achieve better throughput scaling than the multihop strategy in the interference - limited or low attenuation regime , achieving a scaling very close to their new outer bound . a more general hierarchical cooperation was proposed in , which works for an arbitrary node distribution in which a minimum separation between nodes is guaranteed . recently , _ hybrid network _ models have been studied as well . hybrid networks are ad hoc networks in which the nodes' communication is aided by additional infrastructures such as base stations ( bss ) . these are generally assumed to have high bandwidth connections to each other . in , connectivity of hybrid networks has been analyzed . in , throughput scaling of hybrid networks has been studied . in order for a hybrid network's throughput scaling to outperform that of a strictly ad hoc network , it was determined that the number of bss should be greater than a certain threshold . the existing literature has focused on the throughput scaling of a _ single _ network . however , the necessity of extending and expanding results to capture _ multiple _ overlapping networks is becoming apparent . recent measurements have shown that despite increasing demands for bandwidth , much of the currently licensed spectrum remains unused a surprisingly large portion of the time . in the us , this has led the federal communications commission ( fcc ) to consider easing the regulations towards _ secondary spectrum sharing _ through their _ secondary markets initiative _ .
the essence of secondary spectrum sharing involves having primary license holders allow secondary license holders to access the spectrum . different types of spectrum sharing exist , but most agree that the primary users have a higher priority access to the spectrum , while secondary users _ opportunistically _ use it . these secondary users often require greater sensing abilities and more flexible and diverse communication abilities than legacy primary users . secondary users are often assumed to be _ cognitive radios _ , or wireless devices which are able to transmit and receive according to a variety of protocols and are also able to sense and independently adapt to their environment . these features allow them to behave in a more `` intelligent '' manner than current wireless devices . in this paper , we consider _ cognitive networks _ , which consist of secondary , or cognitive , users who wish to transmit over the spectrum licensed to the primary users . the single - user case , in which a single primary and a single cognitive s - d pair share the spectrum , has been considered in the literature , see for example and the references therein . in , the primary and cognitive s - d pairs are modeled as an interference channel with asymmetric side - information . in , the communication opportunities are modeled as a two - switch channel . recently , a single - hop cognitive network was considered in , where multiple secondary s - d pairs transmit in the presence of a single primary s - d pair . it was shown that a linear scaling law of the single - hop secondary network is obtained when its operation is constrained to guarantee a particular outage constraint for the primary s - d pair . we study a more general environment in which a _ primary ad hoc network _ and a _ cognitive ad hoc network _ both share the same space , time and frequency dimensions . two types of primary networks are considered in this paper : an ad hoc primary network and an infrastructure - supported primary network . for the ad hoc primary model , the primary network consists of nodes randomly distributed and grouped into s - d pairs at random . for the infrastructure - supported primary model , additional bss are regularly deployed and used to support the primary transmissions . in both cases , the cognitive network consists of secondary nodes distributed randomly , and s - d pairs are again chosen randomly . our main assumptions are that ( 1 ) the primary network continues to operate as if no secondary network were present , ( 2 ) the secondary nodes know the locations of the primary nodes and ( 3 ) the secondary network is denser than the primary network . under these assumptions , we will illustrate routing protocols for the primary and secondary networks that result in the _ same throughput scaling _ as if each were a single network . note that the constraint that the primary network does _ not _ alter its protocol because of the secondary network is what makes the problem non - trivial . indeed , if the primary network were to change its protocol when the secondary network is present , a simple time - sharing scheme is able to achieve the throughput scaling of homogeneous networks for both primary and secondary networks . for the ad hoc primary model , we use a routing protocol that is a simple modification of the nearest neighbor multihop schemes in .
for the infrastructure - supported primary model , we use a bs - based transmission similar to the scheme in . we propose novel routing protocols for the secondary network under each primary network model . our proposed protocols use multihop routing , in which the secondary routes _ avoid _ passing too close to the primary nodes , reducing the interference to them . we show that the proposed protocols achieve the throughput scalings of homogeneous networks _ simultaneously _ . this implies that when a denser `` intelligent '' network is layered on top of a sparser oblivious one , then both may achieve the same throughput scalings as if each were a single network . this result may be extended to more than two networks , provided each layered network obeys the same three main assumptions as in the two network case . this paper is structured as follows . in section [ sec : model ] we outline the system model : we first look at the network geometry , co - existing primary and secondary ad hoc networks , then turn to the information theoretic achievable rates before stating our assumptions on the primary and secondary network behaviors . in section [ sec : adhoc ] we outline the protocols used for the ad hoc primary model and prove that the claimed single network throughput scalings may be achieved . we also prove the claimed single network throughput scalings for the infrastructure - supported primary model in section [ sec : infra ] . we conclude in section [ sec : conclusion ] and refer the proofs of the lemmas to the appendix . in order to study throughput scaling laws of ad hoc cognitive networks , we must define the underlying network models . we first explain the two geometric models that will be considered in sections [ sec : adhoc ] and [ sec : infra ] . we then look at the transmission schemes , resulting achievable rates , and assumptions made about the primary and secondary networks . throughout this paper , we use to denote the probability of an event , and we will be dealing with events which take place with high probability ( w.h.p . ) , i.e. , with probability 1 as the node density tends to infinity . we consider a planar area in which a network of primary nodes and a network of secondary nodes co - exist . that is , the two networks share the same space , time , code , and frequency dimensions . two types of networks are considered as the primary network : an _ ad hoc network _ and an _ infrastructure - supported network _ , while the secondary network is always ad hoc . the two geometric models are illustrated in fig . [ fig:2models ] . as shown in fig . [ fig:2models ] ( a ) , in the ad hoc primary model , nodes are distributed according to a poisson point process ( p.p.p . ) of density over a unit square , which are randomly grouped into primary s - d pairs . for the secondary network , nodes are distributed according to a p.p.p . of density over the same unit square and are also randomly grouped into secondary s - d pairs . our second model is the infrastructure - supported primary model , shown in fig . [ fig:2models ] ( b ) . there , primary nodes are still randomly distributed over the square according to a p.p.p .
of density , but these nodes are supported by additional regularly spaced bss ( the number of bss is equal to , which is also the density of bss ) . the bss' sole purpose is to relay data for the primary network ; they are neither sources nor destinations . we assume that the bss are connected to each other through wired lines of capacity large enough such that the bs - bs communication is not the limiting factor in the throughput scaling laws . secondary nodes again form an ad hoc network with random s - d pairs , distributed according to a p.p.p . of density . the densities of the primary nodes , secondary nodes , and bss are related according to , where and . we focus on the case where the density of the secondary nodes is higher than that of the primary nodes . we also assume that the densities of both the primary nodes and secondary nodes are higher than that of the bss , which is reasonable from a practical point of view . the wireless propagation channel typically includes path loss with distance , shadowing and fading effects . however , in this work we assume the channel gain depends only on the distance between a transmitter and its receiver , and ignore shadowing and fading . thus , the channel power gain , normalized by a constant , is given by , where denotes the distance between a transmitter ( tx ) and its receiver ( rx ) and denotes the path - loss exponent . each network operates based on slotted transmissions . we assume the duration of each slot , and the coding scheme employed , are such that one can achieve the additive white gaussian noise ( awgn ) channel capacity . for a given signal to interference and noise ratio ( sinr ) , this capacity is given by the well known formula bps / hz , assuming the additive interference is also white , gaussian , and independent from the noise and signal . we assume that primary slots and secondary slots have the same duration and are synchronized with each other . we further assume all the primary , secondary , and bs nodes are subject to a transmit power constraint . we now characterize the rates achieved by the primary and secondary transmit pairs . suppose that primary pairs and secondary pairs communicate simultaneously . before proceeding with a detailed description , let us define the notations used in the paper , given by table [ table : simbols ] . then , the -th primary pair can communicate at a rate of , where denotes the euclidean norm of a vector . and are given by and . similarly , the -th secondary pair can communicate at a rate of , where and are given by and . throughout the paper , the achievable per - node throughput of the primary and secondary networks are defined as follows . a throughput of per primary node is said to be achievable w.h.p . if all primary sources can transmit at a rate of ( bps / hz ) to their primary destinations w.h.p . in the presence of the secondary network . let denote an outage probability of the secondary network , which may vary as a function of . a throughput of per secondary node is said to be -achievable w.h.p . if at least fraction of secondary sources can transmit at a rate of ( bps / hz ) to their secondary destinations w.h.p . in the presence of the primary network . for both ad hoc and infrastructure - supported primary models , we will propose secondary routing schemes that make as ( this is equivalent to since ) .
thus , although we allow a fraction of secondary s - d pairs to be in outage , for sufficiently large , almost all secondary s - d pairs will be served at a rate of . let us define as the sum throughput of the primary network , or times the number of primary s - d pairs ( since the nodes are thrown at random according to a p.p.p . of density , the actual number of nodes in the network will vary in a particular realization ) . similarly , we define as the sum throughput of the secondary network , or times the number of _ served _ secondary s - d pairs at a rate of . while and represent the per - node and sum throughputs of the primary network _ in the presence of the secondary network _ , we use the notations and to denote the per - node and sum throughputs of the primary network _ in the absence of the secondary network _ , respectively . as primary and secondary nodes must share the spectrum , the rules or assumptions made about this co - existence are of critical importance to the resulting achievable throughputs and scaling laws . primary networks may be thought of as existing communication systems that operate in licensed bands . these primary users are the license holders , and thus have higher priority access to the spectrum than secondary users . thus , our first key assumption is that _ the primary network does not have to change its protocol due to the secondary network . _ in other words , all primary s - d pairs communicate with each other as intended , regardless of the secondary network . the secondary network , which is opportunistic in nature , is responsible for reducing its interference to the primary network to an `` acceptable level '' , while maximizing its own throughput . this acceptable level may be defined to be one that does not degrade the throughput scaling of the primary network . more strictly , the secondary network should satisfy w.h.p . during its transmission , where is the maximum allowable fraction of throughput loss for the primary network . notice that the above condition guarantees . the secondary network may ensure ( [ eq : fraction_loss ] ) by adjusting its protocol based on information about the primary network . thus , our second key assumption is that _ the secondary network knows the locations of all primary nodes . _ since the secondary network is denser than the primary network , each secondary node can measure the interference power from its adjacent primary node and send it to a coordinator node . based on these measured values , the secondary network can establish the locations of the primary nodes . we first consider the throughput scaling laws when both the primary and secondary networks are ad hoc in nature . since the primary network need not change its transmission scheme due to the presence of the secondary network , we assume it transmits according to the direct multihop routing similar to those in and . we also consider the indirect multihop routing proposed in as a primary protocol . of greater interest is how the secondary nodes will transmit such that the primary network remains unaffected in terms of throughput scaling . the main results of this section describe achievable throughput scaling laws of the primary and secondary networks . we simply state these results here and derive them in the remainder of this section . _ suppose the ad hoc primary model . for any , the primary network can achieve the following per - node and sum throughputs w.h.p . : where and . the following per - node and sum throughputs are -achievable w.h.p .
for the secondary network : where , which converges to zero as . _ this result is of particular interest as it shows that not only can the primary network operate at the same scaling law as when the secondary network does not exist , but the secondary network can also achieve , with an arbitrarily small fraction of outage , the exact same scaling law obtained by the direct multihop routing as when the primary network does not exist . thus almost all secondary s - d pairs can communicate at a rate of in the limit of large . in essence , whether the indirect multihop or the direct multihop is adopted as a primary protocol , the secondary network can achieve the sum throughput of w.h.p . while preserving fraction of the primary network's stand - alone throughput . in the remainder of this section , we first outline the operation of the primary network and then focus on the design of a secondary network protocol under the given primary protocol . we analyze achievable throughputs of the primary and secondary networks , which will determine the throughput scaling of both co - existing networks . throughout this work , we place the proofs of more technical lemmas and theorems in the appendix and outline the main proofs in the text . we assume the primary network communicates according to the direct multihop routing protocol . the indirect multihop routing will be explained in section [ subsec : indirect_routing ] , which can be extended from the results of the direct routing . the challenge is thus to prove that the secondary nodes can exchange information in such a way that satisfies w.h.p .. we first outline a primary network protocol , and then design a secondary network protocol which operates in the presence of the primary network . we assume that the primary network delivers data using the direct multihop routing , in a manner similar to and . the basic multihop protocol is as follows : * divide the unit area into square cells of area . * a -time division multiple access ( tdma ) scheme is used , in which each cell is activated during one out of slots . * define the horizontal data path ( hdp ) and the vertical data path ( vdp ) of an s - d pair as the horizontal line and the vertical line connecting a source to its destination , respectively . each source transmits data to its destination by first hopping to the adjacent cells on its hdp and then on its vdp . * when a cell becomes active , it delivers its traffic . specifically , a tx node in the active cell transmits a packet to a node in an adjacent cell ( or in the same cell ) . a simple round - robin scheme is used for all tx nodes in the same cell . * at each transmission , a tx node transmits with power , where denotes the distance between the tx and its rx . this protocol requires full connectivity , meaning that each cell should have at least one node . let denote the area of a primary cell . the following lemma indicates how to determine satisfying this requirement . [ thm : node_number ] the following facts hold . ( a ) the number of primary nodes in a unit area is within w.h.p . , where is an arbitrarily small constant . ( b ) suppose . then , each primary cell has at least one primary node w.h.p .. the proof is in the appendix . based on lemma [ thm : node_number ] , we set . under the given primary protocol , and are achievable w.h.p .
when the secondary network is absent or silent . results similar to lemma [ thm : node_number ] can be found in and , where their proposed schemes also achieve the same and . note that the gupta - kumar model assumes uniformly distributed nodes in the network and a constant rate between tx and rx if the sinr is higher than a certain level . although we assume that the network is constructed according to a p.p.p . ( rather than uniform ) and that the information - theoretic rate is achievable ( rather than a constant rate ) , the above primary network protocol provides the same throughput scaling as that under the gupta - kumar model . since the secondary nodes know the primary nodes' locations , an intuitive idea is to have the secondary network operate in a multihop fashion in which the secondary nodes circumvent each primary node in order to reduce the effect of secondary transmissions on the primary nodes . in , a network with holes is considered and geographic forwarding algorithms that establish routing paths around holes are proposed . around each primary node we define its _ preservation region : _ a square containing secondary cells , with the primary node at the center cell . the secondary nodes , when determining their routing paths , need to avoid these preservation regions : our protocol for the secondary ad hoc network is the same as the basic multihop protocol except that * the secondary cell size is . * at each transmission a secondary node transmits its packet _ three _ times repeatedly ( rather than once ) using three slots . * the secondary paths avoid the preservation regions ( see fig . [ fig : secodary_routing_adhoc ] ) . that is , if the hdp or vdp of a secondary s - d pair is blocked by a preservation region , this data path circumvents the preservation region by using its adjacent cells . if a secondary source ( or its destination ) belongs to preservation regions or its data path is disconnected by preservation regions , the corresponding s - d pair is not served . * at each transmission , a tx node transmits with power , where denotes the distance between the tx and its rx and . since converges to zero as , there exists such that the power constraint is satisfied for any if . we will show in lemma [ thm : primary_rate_adhoc1 ] that adjusting induces a trade - off between the rates of the primary and secondary networks while the scaling laws of both networks are unchanged , which allows the condition ( [ eq : fraction_loss ] ) to be met . unlike the primary protocol , each secondary cell transmits a secondary packet three times repeatedly when it is activated . as we will show later , the repeated secondary transmissions can guarantee the secondary receivers a certain minimum distance from all primary interferers for at least one packet , thus guaranteeing the secondary network a non - trivial rate . therefore , the duration of the secondary -tdma scheme is three times longer than that of the primary -tdma . the main difference between this scheme and previous multihop routing schemes is that the secondary multihop paths must circumvent the preservation regions and that a portion of secondary s - d pairs is not served . but this portion will be negligible as . by re - routing the secondary nodes' transmissions around the primary nodes' preservation regions , we can guarantee the primary nodes a non - trivial rate . similar to lemma [ thm : node_number ] , we can also prove that the total number of secondary nodes is within w.h.p . and that each secondary cell has at least one secondary node w.h.p ..
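the detour rule just described can be illustrated with a short python sketch on a grid of secondary cells , where each preservation region is taken as a 3 - by - 3 block of cells centered at a primary node's cell ( the analysis below notes that a preservation region equals 9 secondary cells ) . the breadth - first search is an assumed simplification : the protocol above detours hdp / vdp segments locally around each region rather than computing a global shortest path , but the outage rule ( a blocked source , destination , or disconnected path is not served ) is the same .

```python
from collections import deque

def preservation(center):
    # 3-by-3 block of secondary cells around a primary node's cell
    cx, cy = center
    return {(cx + i, cy + j) for i in (-1, 0, 1) for j in (-1, 0, 1)}

def route(src, dst, blocked, n):
    # bfs on the n-by-n grid of secondary cells, avoiding preservation
    # cells; a stand-in for the local hdp/vdp detours of the protocol
    if src in blocked or dst in blocked:
        return None                     # s-d pair is not served (outage)
    prev = {src: None}
    queue = deque([src])
    while queue:
        cell = queue.popleft()
        if cell == dst:
            path = []
            while cell is not None:     # walk the predecessor chain back
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < n and 0 <= nxt[1] < n
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cell
                queue.append(nxt)
    return None                         # path disconnected: outage

n = 20                                  # assumed grid size
blocked = preservation((10, 10)) | preservation((5, 14))
path = route((0, 0), (n - 1, n - 1), blocked, n)
print("hops:", len(path) - 1 if path else "outage")
```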
in this subsection , we analyze the per - node and sum throughputs of each network under the given protocols and derive throughput scaling laws with respect to the node densities . let us consider the primary network in the presence of the secondary network . we first show that each primary cell can sustain a constant aggregate rate ( lemma [ thm : primary_rate_adhoc1 ] ) , which may be used in conjunction with the number of data paths each primary cell must transmit ( lemma [ thm : primary_traffic_adhoc ] ) to obtain the per - node and sum throughputs in theorem [ thm : primary_throughput_adhoc ] . let and denote the achievable aggregate rate of each primary cell in the presence and in the absence of the secondary network , respectively . we define , having a finite value for , which will be used to derive an upper bound on the interference power of the ad hoc primary and secondary networks . then the following lemma holds . [ thm : primary_rate_adhoc1 ] suppose the ad hoc primary model . if , then , where and is given by ( [ eq : def_i ] ) . moreover , is lower bounded by , where is a constant independent of . the proof is in the appendix . the essence of the proof of lemma [ thm : primary_rate_adhoc1 ] lies in showing that the secondary nodes , even as , do not cause the aggregate rate of each primary cell to decay with . this is done by introducing the preservation regions , which guarantee the minimum distance of from all secondary txs to the primary rxs . this lemma will be used to show that ( [ eq : fraction_loss ] ) can be satisfied w.h.p . if , in theorem [ thm : primary_throughput_adhoc ] . the next lemma determines the number of data paths that each cell should carry . to obtain an upper bound , we extend each hdp to the entire horizontal line , and all cells through which this horizontal line passes should deliver the corresponding data of the hdp ( see fig . [ fig : routing_analysis_adhoc ] ) . similarly , we extend each vdp to the entire vertical line . we define this entire horizontal and vertical line as an _ extended _ hdp and an _ extended _ vdp , respectively . throughout the rest of the paper , our analysis deals with extended hdps and vdps instead of original hdps and vdps . since we are adding hops to our routing scheme , the extended traffic gives us a lower bound on the achievable throughput . [ thm : primary_traffic_adhoc ] under the ad hoc primary model , each primary cell needs to carry at most data paths w.h.p .. the proof is in the appendix . lemma [ thm : primary_traffic_adhoc ] shows how the number of data paths varies with the node density . lemmas [ thm : node_number]-[thm : primary_traffic_adhoc ] may be used to prove the main theorem , stated next . [ thm : primary_throughput_adhoc ] suppose the ad hoc primary model . for any , by setting , the primary network can achieve and w.h.p . , where and the definitions of and are given in lemma [ thm : primary_rate_adhoc1 ] . first consider the stand - alone throughput of the primary network . since each primary cell can sustain a rate of ( lemma [ thm : primary_rate_adhoc1 ] ) , each primary s - d pair can achieve a rate of at least divided by the maximum number of data paths per primary cell . the number of data paths is upper bounded by w.h.p . ( lemma [ thm : primary_traffic_adhoc ] ) . therefore , is lower bounded by w.h.p .. now the whole network contains at least primary s - d pairs w.h.p . ( lemma [ thm : node_number ] ) . therefore , is lower bounded by w.h.p ..
finally , lemma [ thm : primary_rate_adhoc1 ] shows that , for any , if we set , then is achievable in the limit of large . since the bounds on the number of primary data paths carried by each primary cell and on the total number of primary s - d pairs in the network hold regardless of the existence of the secondary network , and are also achievable w.h.p . , which completes the proof . let us now consider the per - node throughput of the secondary network in the presence of the primary network . the main difference between the primary and secondary transmission schemes arises from the presence of the preservation regions . recall that the secondary nodes wish to transmit according to a multihop protocol , but their path may be blocked by a preservation region . in this case , they must circumvent the preservation region , or possibly the _ cluster _ of primary preservation regions . however , as we will see later , circumventing these preservation regions ( clusters ) does not degrade the secondary network's throughput scaling , due to the relative primary and secondary node densities : the secondary nodes increase at the rate and . thus , intuitively , as the density of the primary nodes increases , the area of each preservation region ( which equals 9 secondary cells ) decreases faster than the increase rate of the primary node density ( and thus the number of preservation regions ) . these clusters of preservation regions remain bounded in size , although their number diverges as . this can be obtained using percolation theory . let us introduce a poisson boolean model on . the points are distributed according to a p.p.p . of density , and each point is the center of a closed ball with radius . notice that the s are random variables independent of each other and independent of , whose distributions are identical to that of . the _ occupied _ region is the region that is covered by at least one ball and the _ vacant _ region is the complement of the occupied region . note that the occupied ( or vacant ) region may consist of several occupied ( vacant ) components that are disjoint from each other . then the following theorem holds . [ thm : percolation ] for a poisson boolean model on , for , if , then there exists such that for all , . we refer readers to the proof of theorem 3.3 in . by scaling the size of the above poisson boolean model and setting as a deterministic value , we apply theorem [ thm : percolation ] to our network model . [ thm : clusters_of_preservation_regions ] any cluster of preservation regions has at most preservation regions w.h.p . , where is an integer independent of . a numerical illustration of this boolean model is sketched at the end of the appendix . let us consider a poisson boolean model on . all balls in this model have deterministic radii of , and the density of the underlying p.p.p . is a function of decreasing to zero as . thus , , and there exists such that for all . as a consequence , ( [ eq : percolation ] ) holds for all . since this result holds on , the same result still holds if we focus on the area of ^ 2 ] and on ^ 2 ] , the number of balls in any occupied component is upper bounded by w.h.p . , where is an integer independent of . in the case of on ^ 2 ] , where . then is given by , where . note that is half of the side length of an avoidance region , while is half of the length of the strips which are free of avoidance regions . similarly , let denote the region in which none of the vdps are blocked .
we can shift a blocked vdp in to using the analogous mapping to the horizontal case . if an hdp is shifted , it requires a series of short vertical hops to reach the shifted hdp , where we denote these vertical hops as a short vdp . it also requires short horizontal hops to reach a destination after the vdp if that vdp is shifted , where we denote these horizontal hops as a short hdp . let us consider the secondary protocol when the primary network is in the uplink phase . we can also define an avoidance region at each tx ( primary node ) of the primary network . due to the irregular placement of primary nodes , however , it is hard to construct a re - routing protocol when each data path is blocked by an avoidance region . more importantly , we can not set the area of each avoidance region as large as in the downlink case , since the density of primary nodes is higher than that of bss , leading to a smaller throughput than in the downlink case . note that if we operate the secondary network during the uplink and downlink phases separately , then the throughput scalings of the secondary network follow the maximum of the uplink and downlink throughputs . therefore , the overall throughput scalings follow those of the downlink phase . in this subsection , we analyze the per - node and sum throughputs of each network under the given protocols and derive the corresponding scaling laws . let us consider the per - node throughput of the primary network in the presence of the secondary network . we first show that all primary cells may sustain a constant , non - trivial rate in lemma [ thm : primary_rate_infra ] . we then determine the number of uplink and downlink transmissions each of these cells must support in lemma [ thm : primary_traffic_infra ] . using these results , we obtain the primary per - node and sum throughputs in theorem [ thm : primary_throughput_infra ] . let and denote the achievable aggregate rate of each primary cell in the presence and in the absence of the secondary network , respectively . we define , having a finite value for , which will be used to derive an upper bound on the interference power of the infrastructure - supported primary network . then the following lemma holds . [ thm : primary_rate_infra ] suppose the infrastructure - supported model . if , then , where and is given by ( [ eq : def_i ] ) . moreover , is lower bounded by , where is a constant independent of . the proof is in the appendix . [ thm : primary_traffic_infra ] under the infrastructure - supported model , each primary cell needs to carry at most downlink and uplink transmissions w.h.p .. the proof is in the appendix . [ thm : primary_throughput_infra ] suppose the infrastructure - supported model . for any , by setting , the primary network can achieve and w.h.p . , where and the definitions of and are given in lemma [ thm : primary_rate_infra ] . first consider the stand - alone throughput of the primary network . let and denote the per - node throughput during downlink and uplink , respectively . then , , where arises from the fact that a source delivers a packet to its destination using one downlink and one uplink transmission . since each primary cell can sustain a constant rate of ( lemma [ thm : primary_rate_infra ] ) , is lower bounded by divided by the maximum number of downlink transmissions in each primary cell . this number of downlink transmissions is upper bounded by w.h.p . ( lemma [ thm : primary_traffic_infra ] ) . therefore , is lower bounded by w.h.p ..
since the same lower bound can be obtained for the case of , is lower bounded by w.h.p .. from the fact that there are at least primary s - d pairs ( lemma [ thm : node_number ] ) , is lower bounded by w.h.p .. the remaining proof about and w.h.p . is the same as in theorem [ thm : primary_throughput_adhoc ] , which completes the proof . let us now consider the throughput scalings of the secondary network in the presence of the primary network . we first show that the fraction of the unserved s - d pairs due to the preservation regions will be negligible w.h.p . in lemma [ thm : number_of_served_sd_infra ] . unlike the ad hoc primary model , the overall multihop transmission of each s - d pair is divided into phases 1 and 2 depending on each tx's location . hence the per - node throughput scales as the minimum of the rate scalings related to phases 1 and 2 . we will show that although the aggregate rate of each secondary cell in the avoidance regions decreases as ( lemma [ thm : secondary_rate_infra ] ) , the number of data paths delivered by this cell is much less than that of each secondary cell outside the avoidance regions ( lemmas [ thm : secondary_traffic_infra ] and [ thm : secondary_traffic_infra2 ] ) . thus the cells in the avoidance regions provide a higher rate per hop transmission than the cells outside the avoidance regions w.h.p . and , as a result , and are determined by the transmissions outside the avoidance regions , which is phase . [ thm : number_of_served_sd_infra ] under the infrastructure - supported primary model , the fraction of unserved secondary s - d pairs is upper bounded by w.h.p . , which converges to zero as . the proof is in the appendix . [ thm : secondary_rate_infra ] under the infrastructure - supported primary model , each secondary cell in the avoidance regions and each secondary cell outside the avoidance regions can sustain a rate of and , respectively , where , which tends to zero as , and is a constant independent of . the definitions of and are given by ( [ eq : def_i ] ) and ( [ eq : def_i ] ) , respectively . the proof is in the appendix . as in the ad hoc primary model , we define the secondary cells which border the preservation regions as the _ loaded _ cells and the other cells as _ regular _ cells . then , the following lemmas hold . [ thm : secondary_traffic_infra ] suppose the infrastructure - supported primary model . each regular secondary cell and each loaded secondary cell outside the avoidance regions need to carry at most and data paths w.h.p . , respectively , where is given in corollary [ thm : clusters_of_preservation_regions ] . the proof is in the appendix . [ thm : secondary_traffic_infra2 ] suppose the infrastructure - supported primary model . each regular secondary cell and each loaded secondary cell in the avoidance regions need to carry at most and data paths w.h.p . , respectively , where is given in corollary [ thm : clusters_of_preservation_regions ] . the proof is in the appendix . we can now use the previous corollaries and lemmas to obtain the per - node and sum throughputs of the secondary network in the following theorem . [ thm : secondary_throughput_infra ] suppose the infrastructure - supported primary model . for any , by setting within , the following per - node and sum throughputs are -achievable for the secondary network w.h.p .
: and , where , which converges to zero as . the definitions of , , and are given in lemma [ thm : primary_rate_infra ] , lemma [ thm : secondary_rate_infra ] , and corollary [ thm : clusters_of_preservation_regions ] , respectively . note that by setting , the secondary network satisfies ( [ eq : fraction_loss ] ) during its transmission . let us first consider . let ( similarly , ) denote the number of secondary s - d pairs whose original ( including shifted ) or re - routed hdps are in ( ) and pass through loaded cells . similarly , we can define and for extended vdps . to obtain an upper bound on , we consider extended hdps , which is the same as in lemma [ thm : secondary_traffic_infra ] , and study the geometric scenario that requires re - routing the largest number of data paths to the loaded cells . this worst - case scenario is obtained when the projections of all preservation regions on the -axis are separated by a distance of at least and all preservation regions are in the avoidance - region free zone . thus , all nodes located in the area of pass through loaded cells , where arises from the shifted hdps along with the original hdps . therefore , an upper bound on follows . similarly , an upper bound on follows , where we assume that all preservation regions are in for this case . the vertical worst - case scenario may be similarly derived . using the same analysis from ( [ eq : prob_zero ] ) to ( [ eq : upper_m_ah_av ] ) , we obtain w.h.p . , where . if we treat the s - d pairs passing through the loaded cells and the s - d pairs not served as outage , w.h.p . , where we use the result of lemma [ thm : number_of_served_sd_infra ] . then the achievable per - node throughput is determined by the rate of s - d pairs passing through only the regular cells . let us consider the regular cells in the avoidance regions , which perform transmissions during phase . for this case , since each cell sustains a rate of w.h.p . ( lemma [ thm : secondary_rate_infra ] ) , and based on lemma [ thm : secondary_traffic_infra2 ] , the rate per hop transmission provided by these cells is lower bounded by w.h.p .. if we consider the regular cells outside the avoidance regions , from lemmas [ thm : secondary_rate_infra ] and [ thm : secondary_traffic_infra ] , the rate per hop transmission is lower bounded by w.h.p .. since , for sufficiently large , the rate provided by the cells in the avoidance regions is greater than that provided by the cells outside the avoidance regions , is lower bounded by w.h.p . if . let us now consider . again , we obtain a lower bound on by considering the most heavily loaded scenario in which all served s - d pairs pass through loaded cells . then . similarly , we can derive the rate per hop transmission related to phases and from the results in lemmas [ thm : secondary_rate_infra ] to [ thm : secondary_traffic_infra2 ] . as a result , is lower bounded by w.h.p . if . finally , is lower bounded by w.h.p .
, which completes the proof . in this paper , we studied two co - existing ad hoc networks with different priorities ( a primary and a secondary network ) and analyzed their simultaneous throughput scalings . it was shown that each network can achieve the same throughput scaling as when the other network is absent . although we allow outage for the secondary s - d pairs , the fraction of pairs in outage converges to zero as the node densities increase . furthermore , these scalings may be achieved by adjusting the secondary protocol while keeping that of the primary network unchanged . in essence , the primary network is unaware of the presence of the secondary network . to achieve this result , the secondary nodes need knowledge of the locations of the primary nodes , and the secondary nodes need to be denser than the primary . for ( the primary network denser than the secondary network ) , on the other hand , it seems to be more challenging to achieve similar throughput scaling results while keeping the primary unchanged , as there are many primary nodes around each secondary node . as mentioned before , if we allow the primary protocol to adapt to the presence of the secondary network , we can achieve the throughput scalings of two homogeneous networks by employing tdma between the two networks . our result may be extended to more than two networks , provided each layered network obeys the same three main assumptions as in the two network case . let denote the number of primary nodes in a unit area . for part ( a ) , we wish to show that as . noting that is a poisson random variable with mean and standard deviation , we use chebyshev's inequality to see that . clearly , as tends to infinity we can make this quantity arbitrarily small . for part ( b ) , let denote the number of primary nodes in a primary cell . then is given by . therefore , the probability that there is at least one cell having no node is upper bounded by , where the union bound and the fact that there are at most primary cells are used . since as , ( b ) holds w.h.p . , which completes the proof . suppose that at a given moment , there are active primary cells and active secondary cells , including the -th active primary cell . then , the rate of the -th active primary cell is given by , where indicates the loss in rate due to the -tdma transmission of primary cells . the rate of the -th active primary cell in the absence of the secondary network is given by , by setting . fig . [ fig : i_p_adhoc ] illustrates the worst case interference from the secondary interferers to the rx of the -th active primary cell , where the dotted region denotes the preservation region around the primary rx and the shaded cells denote the active secondary cells based on the -tdma . because of the preservation region , the minimum distance of can be guaranteed from all secondary transmitting interferers to the primary rx . thus , there exist secondary interferers at a distance of at least , and secondary interferers at a distance of at least , and so on . then , is upper bounded by , where we use the fact that . then notice that is the value of such that the right - hand side of ( [ eq : fraction_lower ] ) is equal to . thus , if we set , then .
because the above inequality holds for any , we obtain . similarly , there exist primary interferers at a distance of at least , and primary interferers at a distance of at least , and so on . then , where we use the fact that . thus , . therefore , lemma [ thm : primary_rate_adhoc1 ] holds . let denote the number of extended hdps that should be delivered by a primary cell . similarly , denotes the number of extended vdps that should be delivered by a primary cell . when hdps are extended , the extended hdps of all primary sources located in the area of should be handled by the primary cell . by assuming that all primary nodes are sources , the resulting upper bound on follows . using lemma [ thm : upper_bound_poission ] , we obtain . similarly , the extended hdps of all primary destinations located in the area of should also be handled by the primary cell . by assuming that all primary nodes are destinations , we obtain . from ( [ eq : upper_n_ph ] ) and ( [ eq : upper_n_pv ] ) , we obtain , where the last inequality comes from the union bound . therefore , the probability that there is at least one primary cell supporting more than extended data paths is upper bounded by , where the union bound and the fact that there are at most primary cells are used . since as , each primary cell should deliver the corresponding data of at most extended data paths w.h.p . , where . note that the above bounds also hold for the original data paths , which completes the proof . let denote the area of all preservation regions , denote the area of all disjoint regions due to the preservation regions except the biggest region , and . define as the number of secondary nodes in the area of , which follows . the number of secondary s - d pairs not served is clearly upper bounded by . from lemma [ thm : upper_bound_poission ] , we obtain . an upper bound on is obtained if we assume none of the regions overlap . thus , as each preservation region has an area of and there are at most such regions w.h.p . , we obtain w.h.p .. to derive an upper bound on , we assume all preservation regions form clusters having preservation regions each ( corollary [ thm : clusters_of_preservation_regions ] ) , shown in fig . [ fig : fraction ] ( a ) , where the shaded regions denote . then the maximum disjoint area generated by a cluster of preservation regions is given in fig . [ fig : fraction ] ( b ) , as a circle maximizes the area of a region for a given perimeter . because each preservation region contributes a length of at most to the circumference of this circle , the radius is upper bounded by . thus , is upper bounded w.h.p . by , where we use the fact that the total number of clusters having preservation regions in each cluster is upper bounded by w.h.p .. from ( [ eq : upper_a_p_1 ] ) and ( [ eq : upper_a_p_2 ] ) , is upper bounded by w.h.p .. by substituting for its upper bound in ( ) , we obtain . thus , we obtain w.h.p . , where . since the total number of secondary s - d pairs is lower bounded by w.h.p . , the fraction of unserved s - d pairs is upper bounded by w.h.p .
, which completes the proof . since the same secondary packet is transmitted three times , the minimum distance of from all primary interferers to the secondary rx can be guaranteed for one out of three transmissions . then the interference from primary interferers for that packet is upper bounded by , where we use the same technique as in lemma [ thm : primary_rate_adhoc1 ] . similarly , is lower bounded by . thus , the rate of each secondary cell is lower bounded by , where indicates the rate loss due to the -tdma and the repeated ( three times ) transmissions of the same secondary packet . therefore , lemma [ thm : secondary_rate_adhoc ] holds . let and denote the number of extended hdps , including re - routed paths , that should be delivered by a secondary regular cell and by a secondary loaded cell , respectively . similarly , we can define and for extended vdps . let us first consider a regular cell . this regular cell delivers the corresponding data of the extended hdps passing through it . then all extended hdps of the secondary sources located in the area of should be handled by the regular cell , where we ignore the effect of s - d pairs not served ( which yields an upper bound on the total number of hdps ) . by assuming that all secondary nodes are sources , the resulting upper bound on follows . from lemma [ thm : upper_bound_poission ] , we obtain . we obtain the same bound for by assuming that all secondary nodes are destinations , and then , from the union bound and the fact that there are at most secondary cells , each regular cell should deliver the corresponding data of at most extended data paths w.h.p . , where we use the fact that as . let us now consider a loaded cell . unlike a primary data path , which has no obstacles , a secondary data path should circumvent any preservation regions which lie on its path . therefore , the loaded cells should deliver more data paths than the regular cells w.h.p .. suppose a cluster of preservation regions is located on the boundary of the network as in fig . [ fig : cluster_preservation_region ] , whose projection on the -axis has a length of . then all extended hdps of the secondary sources located in the area of are re - routed through the dotted cells , where we ignore the effect of s - d pairs not served ( which yields an upper bound on the total number of extended hdps ) . the other loaded cells will deliver fewer hdps than the dotted cells w.h.p .. recall that w.h.p . ( corollary [ thm : clusters_of_preservation_regions ] ) and the dotted cells need to deliver the re - routed paths of at most two such clusters . therefore , by assuming that all secondary nodes are sources , the resulting upper bound on follows . note that the upper bound on is the same as the upper bound on except for a constant factor of , where comes from the re - routed hdps of two adjacent clusters and comes from the original hdps . therefore , we can apply the same analysis used in the regular case . in conclusion , each loaded cell should deliver the corresponding data of at most extended data paths w.h.p ..
since the above bounds also hold for the original data paths , lemma [ thm : secondary_traffic_adhoc ] holds . the overall procedure of the proof is similar to that of lemma [ thm : primary_rate_adhoc1 ] . let us first consider downlink transmissions , where all primary cells are activated simultaneously at a given moment . let and denote the interference from all primary interferers and all secondary interferers during downlink , respectively . let and denote the downlink rates of a primary cell in the presence of the secondary network and in the absence of the secondary network , respectively . then , if . from the same bounds as in ( [ eq : upper_i_sp ] ) and ( [ eq : fraction_lower ] ) , we obtain for . the same bound can be derived for the uplink . thus , ( [ eq : r_p_r_alone ] ) holds . now consider the bound on . since there exist primary interferers at a distance of at least , and primary interferers at a distance of at least , and so on ( see fig . [ fig : i_p_infra ] ) , we obtain , where we use the fact that the transmit power of each bs is upper bounded by . then , in a similar manner , the rate of each primary cell during uplink is also lower bounded by . therefore , we can guarantee a constant rate of for each primary cell during both downlink and uplink , which completes the proof . let denote the number of primary nodes in a primary cell , which follows . from lemma [ thm : upper_bound_poission ] , we obtain . from the union bound , each primary cell has at most primary nodes w.h.p . , where we use the fact that as . if we assume that all primary nodes are destinations ( or sources ) , the number of downlink transmissions ( or the number of uplink transmissions ) per primary cell is upper bounded by w.h.p .. therefore , the lemma holds . let denote the area of all preservation regions around bss and denote the number of secondary nodes in the area of . then , from lemma [ thm : upper_bound_poission ] , . since each preservation region around a bs has an area of and there are such regions , which do not overlap with each other , . thus , we know w.h.p . , where . combining this with the result of lemma [ thm : number_of_served_sd_adhoc ] , we obtain w.h.p .. since the number of s - d pairs not served is clearly upper bounded by , the fraction of unserved s - d pairs is upper bounded by w.h.p . , which completes the proof . first consider the rate of a secondary cell in the avoidance regions ( but not in the preservation regions ) . due to the preservation regions around bss , the minimum distance of can be guaranteed from all primary interferers . thus , . similarly , . then the rate of each secondary cell in the avoidance regions is lower bounded by , where arises from -tdma , the time fraction of phase 1 , and the time fraction of downlink . in the case of a secondary cell outside the avoidance regions , the minimum distance of can be guaranteed from all primary interferers .
then the rate of each secondary cell outside the avoidance regions is lower bounded by , where arises from -tdma , the time fraction of phase 2 , and the time fraction of downlink . therefore , lemma [ thm : secondary_rate_infra ] holds . consider phase , in which the secondary cells outside the avoidance regions are activated . let and denote the number of extended hdps that should be delivered by a secondary regular cell and by a secondary loaded cell , respectively . we can define and analogously for vdps . let us first consider a regular cell in . there are two types of hdps in : the first type is an original ( or a shifted ) hdp , and the second type consists of the short horizontal hops needed to reach each destination . note that a short hdp only occurs if its original vdp is blocked by an avoidance region . we assume that a short hdp always occurs regardless of its vdp and extend it to the entire horizontal line including the short hdp . fig . [ fig : routing_analysis_infra ] illustrates examples of original ( or shifted ) hdps ( left ) and their extended hdps ( right ) in . note that the -axis of an extended hdp from an original ( or shifted ) hdp originates from a source node . similarly , the -axis of an extended hdp from a short hdp originates from a destination node . as a result , under this extended traffic , all secondary nodes generate extended hdps on , because each node is a source or a destination , where we ignore the effects of the s - d pairs not served and the s - d pairs that do not generate traffic on . since a regular cell in delivers the corresponding data of all extended hdps passing through it , all extended hdps of the secondary nodes located in the area of should be delivered by the regular cell . additionally , it should deliver the corresponding data of all nodes in the area of , because these extended hdps are shifted to . therefore , the resulting upper bound on follows , where . from lemma [ thm : upper_bound_poission ] , we obtain . the same bound can be obtained for . from the fact that the number of data paths that should be delivered by a regular cell in is given by , we obtain . by the union bound and the fact that there are at most secondary cells , each regular cell in should deliver at most extended data paths w.h.p . , where we use the fact that as . unlike the previous case , all s - d pairs that generate hdps in are not vertically blocked , such that only original hdps exist in . then , is upper bounded by w.h.p . in this case . therefore the regular cells in , , and deliver w.h.p . fewer data paths than the regular cells in . in conclusion , each regular cell should deliver the corresponding data of at most extended data paths w.h.p .. to obtain an upper bound on , consider again the cluster of preservation regions located on the boundary of the network in fig . [ fig : cluster_preservation_region ] ( or the boundary of an avoidance region in this case ) . then all nodes located in the area of generate extended hdps passing through the dotted cells in . additionally , all nodes located in the area of , belonging to , generate extended hdps passing through the dotted cells , since they are shifted to . thus , from the fact that w.h.p . , w.h.p .. by applying the same bound on , we conclude that each loaded cell should deliver the corresponding data of at most data paths w.h.p ..
Note that the loaded cells in , , and deliver w.h.p. fewer data paths than the loaded cells in . Thus, Lemma [thm:secondary_traffic_infra] holds. Consider the phase in which the secondary cells in the avoidance regions are activated. Since the avoidance regions are in , there exists no shifted data path. The overall procedure is similar to the proof of Lemma [thm:secondary_traffic_infra]. Let us first consider the secondary regular cells. If we extend each HDP to the line having the length of , which is the length of half an avoidance region side, all nodes in the area of generate extended HDPs passing through a regular cell. Thus, the number of extended HDPs delivered by each regular cell is upper bounded by w.h.p. By the same analysis for VDPs, each regular cell should deliver the corresponding data of at most extended data paths w.h.p. Similarly, each secondary loaded cell should deliver the corresponding data of at most extended data paths w.h.p., which completes the proof.
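The Poisson concentration step used repeatedly in these proofs is easy to probe numerically. The following toy sketch is an illustration only, not the paper's construction; the intensity, cell size, and random seed are placeholder choices. It drops a Poisson number of nodes uniformly in a unit square, partitions the square into roughly n/log n cells, and checks that the fullest cell holds O(log n) nodes, in line with the w.h.p. bound obtained from the union bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_cell_occupancy(n):
    # Poisson point process with intensity n in the unit square.
    n_pts = rng.poisson(n)
    pts = rng.random((n_pts, 2))
    m = int(np.sqrt(n / np.log(n)))             # about n/log(n) cells in total
    idx = (pts * m).astype(int).clip(0, m - 1)  # cell index of each point
    flat = idx[:, 0] * m + idx[:, 1]
    return np.bincount(flat, minlength=m * m).max()

for n in [10**3, 10**4, 10**5]:
    # The ratio max-occupancy / log(n) should stay bounded as n grows.
    print(n, max_cell_occupancy(n) / np.log(n))
```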
We study two distinct, but overlapping, networks that operate at the same time, space, and frequency. The first network consists of randomly distributed _primary users_, which form either an ad hoc network or an infrastructure-supported ad hoc network with additional base stations. The second network consists of randomly distributed, ad hoc _secondary users_ or _cognitive users_. The primary users have priority access to the spectrum and do not need to change their communication protocol in the presence of secondary users. The secondary users, however, need to adjust their protocol based on knowledge about the locations of the primary nodes, so as to cause little loss to the primary network's throughput. By introducing preservation regions around primary receivers and avoidance regions around primary base stations, we propose two modified multihop routing protocols for the cognitive users. Based on percolation theory, we show that when the secondary network is denser than the primary network, _both_ networks can _simultaneously_ achieve the same throughput scaling law as a stand-alone network. Furthermore, the primary network throughput is subject to only a vanishingly small fractional loss. Specifically, for the ad hoc and the infrastructure-supported primary models, the primary network achieves sum throughputs of order and , respectively. For both primary network models, for any , the secondary network can achieve a sum throughput of order with an arbitrarily small fraction of outage. Thus, almost all secondary source-destination pairs can communicate at a rate of order . Cognitive radio, scaling law, heterogeneous networks, interference management, routing algorithm.
_Slow-fast_ vector fields have the form , with the fast variable , the slow variable , and a small parameter that represents the ratio of time scales. The pair will be denoted by , and the vector field will be written . Simulation of these systems with explicit numerical integration algorithms is limited to time steps that are , due to numerical instabilities. However, invariant _slow manifolds_, on which the motion of the system has speed that is , are a common feature of slow-fast systems. Indeed, trajectories often spend most of their time following stable slow manifolds. Implicit "stiff" integration methods compute trajectories along the stable slow manifolds, taking time steps that are while avoiding the numerical instabilities of explicit methods. However, no initial value solver will compute forward trajectories that evolve on unstable slow manifolds, because the geometric instability of these trajectories is such that nearby initial conditions diverge from one another at exponential rates commensurate with the fast time scale. Even an exact initial value solver in the presence of round-off errors of magnitude will amplify this round-off error to unit magnitude in a time that is . Trajectories on slow manifolds that expand in all normal directions can be computed by reversing time, but different strategies are needed to compute trajectories that lie on slow manifolds of saddle type. This paper presents the first algorithms that directly compute accurate trajectories on slow manifolds of saddle type. The existence of _normally hyperbolic_ slow manifolds is established by _Fenichel theory_. The singular limit of the system is a differential-algebraic equation with trajectories confined to the _critical manifold_ defined by . At points of where is a regular matrix, the implicit function theorem implies that is locally the graph of a function . This equation yields the vector field for the _slow flow_ on . The geometry is more complicated at _fold points_ of , where is singular. It is often possible to extend the slow flow to the fold points after a rescaling of the vector field. Where all eigenvalues of have nonzero real parts, Fenichel proved the existence of invariant slow manifolds for small . These _normally hyperbolic_ slow manifolds are within an distance from the critical manifold, and the flow on converges to the slow flow on as . Fenichel theory is usually developed in the context of _overflowing_ slow manifolds with boundaries, through which trajectories may leave. In this setting, slow manifolds are not unique, but the distance between a pair of slow manifolds is "exponentially small", i.e.,
of order for a suitable positive constant , independent of . This section describes a collocation method for computing slow manifolds of saddle type in slow-fast systems that we call the SMST algorithm. The numerical analysis employed in the algorithm is straightforward; the subtlety of the problem appears in the formulation of discrete systems of equations with well-conditioned Jacobians. The crucial part of the geometry is to specify boundary conditions for trajectory segments on a slow manifold that yield well-conditioned discretizations. The data consist of a trajectory segment $[a,b] \to \mathbb{R}^{m+n}$ of the slow flow on the critical manifold, together with two submanifolds and , of dimensions and , that pass close to the initial and final points and of . The manifold is assumed to be transverse to the stable manifold of , and the manifold is assumed to be transverse to the unstable manifold of . See figure [bv_su]. [Figure bv_su: A trajectory that approaches the slow manifold along a strong stable direction and departs along a strong unstable direction is drawn blue. The initial point of this trajectory lies in a two-dimensional manifold transverse to the stable manifold of , and the final point lies in a one-dimensional manifold transverse to an unstable manifold of .] Denote the mesh points in the discretization of by . The algorithm described here is based on a set of equations for that yield an approximate solution to the boundary value problem that is a trajectory of the system connecting to . (If is allowed to vary, then the number of boundary conditions is increased by one.) The discretized equations are based upon interpolation with cubic Hermite splines, though higher-degree splines can be used in conjunction with automatic differentiation methods that compute Taylor polynomials of the vector field at mesh points. From the points , a cubic spline is constructed with the as knot points and tangent vectors at these points. Each mesh interval contributes equations to the system by requiring that . The values of and in these equations can be expressed as implicit equations for ; the boundary value manifolds constitute the remaining equations in . The system is solved with Newton's method starting with the data in . The Jacobian of the system can be computed using the derivatives of the equations with respect to . Two types of error estimates are of interest for this algorithm. On each mesh interval, there is a local error estimate for how much the spline differs from a trajectory of the vector field. The spline satisfies at the collocation points and . If is the trajectory of the vector field through one of these points, this implies that . Since this classical estimate is based upon the assumption that the norm of the vector field is , it is only likely to hold for intervals that are short on the fast time scale. Globally, the trajectories of the flow display a strong separation due to the normal hyperbolicity. In _Fenichel coordinates_, stable coordinates converge rapidly to the slow manifold while unstable coordinates diverge rapidly from the slow manifold.
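To make the structure of the discretized system concrete, here is a minimal sketch in the spirit of the algorithm, not the authors' implementation: cubic-Hermite collocation in Hermite-Simpson form, with knot tangents taken from the vector field, applied to the linear test field eps*x' = x, eps*y' = -y, z' = 1, whose slow manifold is the line x = y = 0. The boundary conditions pin the stable coordinate y and the slow coordinate z at the left endpoint and the unstable coordinate x at the right endpoint; eps, the mesh, and the boundary values are placeholder choices.

```python
import numpy as np
from scipy.optimize import fsolve

eps = 0.01
def f(u):                      # linear slow-fast saddle test field
    x, y, z = u
    return np.array([x / eps, -y / eps, 1.0])

T, N = 1.0, 40                 # time interval and number of mesh intervals
t = np.linspace(0.0, T, N + 1)
h = t[1] - t[0]

def residuals(flat):
    U = flat.reshape(N + 1, 3)
    F = np.array([f(u) for u in U])
    res = []
    for k in range(N):         # Hermite-Simpson collocation on each interval
        umid = 0.5 * (U[k] + U[k + 1]) + (h / 8.0) * (F[k] - F[k + 1])
        res.append(U[k + 1] - U[k] - (h / 6.0) * (F[k] + 4.0 * f(umid) + F[k + 1]))
    # Boundary conditions: stable and slow coordinates at t=0, unstable at t=T.
    res.append([U[0, 1] - 1.0, U[0, 2] - 0.0, U[-1, 0] - 1.0])
    return np.concatenate(res)

U0 = np.zeros((N + 1, 3)); U0[:, 2] = t      # initialize on the slow manifold
sol = fsolve(residuals, U0.ravel()).reshape(N + 1, 3)
print(sol[N // 2])             # interior points hug the slow manifold x = y = 0
```

Note that the interior mesh points come out exponentially close to x = y = 0 even though forward or backward integration of the same field would blow up; this is the point of formulating the computation as a boundary value problem.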
In the case of a one-dimensional slow manifold, _shadowing_ implies that any _pseudo-trajectory_ pieced together from local approximations to the flow will lie close to a unique trajectory of the flow. Moreover, in this case, different choices of boundary conditions that lie in the same strong stable manifold at and the same strong unstable manifold at yield trajectories that are exponentially close to each other and to the slow manifold outside of small subintervals near the ends of the time interval . These equations yield a solution that is a discretization of an exact solution of the differential equation. Convergence occurs in a single step. Assume now that , and set to be the difference between the coordinate of a point and a point of the slow manifold. The boundary value equations become , with . Note that these equations are satisfied when the vanish, so discretizations of exact solutions along the slow manifold satisfy the boundary value equations. Solving the equation for in terms of yields . Like the solutions of the differential equation, the values decrease exponentially as a function of time. The ratio is a function of whose Taylor expansion agrees with that of through terms of degree , and its value always lies in the interval . Thus the solutions of the boundary value equation converge geometrically toward the slow manifold along its stable manifold with increasing time. If the mesh intervals have length , then the relative error of the decrease satisfies . For large values of , the solution is no longer accurate near if the boundary conditions do not satisfy . A similar, but simpler, argument establishes that the solution of the discretized problem converges to the slow manifold at an exponential rate with decreasing time from . Thus, the boundary value solver is stable and yields solutions that qualitatively resemble the exact solution for all meshes when applied to this linear problem.
In particular, the solution of the discretized problem is exponentially close to the slow manifold away from the ends of the time interval . As the mesh size decreases to zero, the algorithm has fourth-order convergence to the exact solution. Action potentials are a primary means for communicating information within the nervous system. Neurons are said to burst when they fire several consecutive action potentials between "silent" periods free of bursts. There is no universally accepted definition of bursts, but computational models are widely used to predict, in terms of membrane channel properties, when a neuron will burst. Rinzel introduced a singular perturbation perspective to the investigation of bursting in model neurons, viewing the phenomenon as a relaxation oscillation in which a system makes fast time scale transitions between slowly varying equilibrium and periodic attractors. Several classifications of bursting distinguish qualitatively different dynamics. For example, Izhikevich classifies bursts in terms of the bifurcations that mark the transitions between bursts and silent intervals. Terman studied changes in the number of spikes per burst that occur as system parameters are varied. He gave numerical examples in a version of the Morris-Lecar model first analyzed by Rinzel and Ermentrout: this system has periodic bursting solutions with different numbers of spikes per burst as the parameters and vary. Figure [terman_traj] illustrates that there are narrow parameter ranges with two stable periodic orbits having different spike numbers. Terman described the dynamics of the transition from periodic solutions with spikes to those with , relying upon numerical simulations of trajectories in his analysis. Flow along a slow manifold of saddle type is a central aspect of this transition, but the trajectory simulations are incapable of following trajectories that remain close to this slow manifold for more than a short distance. The boundary value solver introduced in this paper is used to compute trajectories that contain segments which follow the slow manifold of saddle type. Geometric structures involved with the transition from to spikes are visualized, and the analysis of the transition from to spikes is carried further. The fast subsystem of is the Morris-Lecar model for action potentials of barnacle muscle. The Morris-Lecar model itself has a rich dynamical structure. There is an interval of values of in which the system has three equilibrium points. Saddle-node bifurcations occur at the endpoints of this interval. The equilibrium points of the Morris-Lecar model constitute the critical manifold of , and its saddle-node bifurcations (with varying) are the folds of the critical manifold. There is also a family of periodic orbits that collapses at a subcritical Hopf bifurcation and terminates at a homoclinic bifurcation.
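The folded structure of this critical manifold can be traced with a few lines of code. The sketch below uses the textbook Morris-Lecar rate functions with a common parameter set from the Rinzel-Ermentrout literature as a stand-in; the excerpt does not reproduce Terman's exact parameters, so these values are placeholders. It treats the slow quantity simply as an applied current I and locates the fast-subsystem equilibria for each I.

```python
import numpy as np

# Textbook Morris-Lecar rates; the parameter values below are placeholders.
vL, vCa, vK = -60.0, 120.0, -84.0
gL, gCa, gK = 2.0, 4.0, 8.0
v1, v2, v3, v4 = -1.2, 18.0, 12.0, 17.4

minf = lambda v: 0.5 * (1.0 + np.tanh((v - v1) / v2))
winf = lambda v: 0.5 * (1.0 + np.tanh((v - v3) / v4))

def equilibria(I, v=np.linspace(-80.0, 60.0, 200001)):
    # At a fast-subsystem equilibrium w = winf(v); equilibrium voltages are
    # the zeros of the net membrane current for the given drive I.
    g = I - gL*(v - vL) - gCa*minf(v)*(v - vCa) - gK*winf(v)*(v - vK)
    roots = np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0]
    return v[roots]

for I in [20.0, 35.0, 50.0, 80.0]:
    # The number of roots changes at the saddle-node bifurcations,
    # i.e. at the folds of the critical manifold.
    print(I, equilibria(I))
```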
This family of periodic orbits is folded, i.e., there is a saddle-node of limit cycles bifurcation within the family. The periodic orbits of the family between the fold and homoclinic bifurcations are stable. The bursting orbits of the vector field follow a branch of the critical manifold of equilibrium points to one of its folds, jump to the family of stable periodic orbits, follow this family to its homoclinic bifurcation, and then jump back to the branch of stable equilibria. These bursting orbits occur when the value of the parameter is chosen so that increases slowly during the quiescent part of the cycle and decreases slowly during the active spiking portion of the cycle. See figure [terman_traj]. The homoclinic orbit of the singular limit is a transversal intersection of the stable and unstable manifolds of the branch of saddle equilibria of . The branch of equilibria becomes a slow manifold of saddle type when , and the homoclinic orbit persists as an intersection of the stable and unstable manifolds of . The transition between and spikes per burst occurs when the periodic bursting cycle encounters the intersection of and . The final spike of a periodic orbit with spikes follows the intersection of and of back to before jumping to the stable slow manifold. Figure [terman_mflds] visualizes and , and shows two trajectories that bracket the intersection of and . Because the system is smooth and does not have an equilibrium point near the intersection of and , the transition from to spikes consists of trajectories that undergo a continuous evolution. These trajectories contain saddle canards, segments that follow for varying lengths of time before leaving along one of its strong unstable manifolds. Trajectories lying close enough to turn and flow along when they approach it. The distance that they travel along before leaving along its unstable manifold depends logarithmically on the initial distance of the trajectory to . If close enough, the trajectory will follow all the way to its end near a fold of the critical manifold before making a fast excursion to the stable slow manifold. Accurate computation of is essential to understanding the details of the transition from bursts with spikes to bursts with spikes. The critical manifold of the vector field is given by explicit formulas when parametrized by . Uniform meshes of values of were used to generate starting values for the boundary value computation of the slow manifold. To compute , the vector field was rescaled so that . With this rescaling, the value of remains constant during the Newton iteration to find the solutions. Typical meshes that do not come close to the fold points result in convergence of Newton's method within three or four steps. Figure [terman_test] illustrates the accuracy of the computations of and the behavior of numerical simulations of trajectories that start near . A point on is chosen, and the Jacobian of the fast subsystem at this point is computed to obtain approximations for the directions of its strong stable and unstable manifolds. If does lie on the slow manifold, then trajectories with initial conditions on opposite sides of on its strong unstable manifold will flow along but then jump in opposite directions. Similarly, backward trajectories with initial conditions on opposite sides of on its strong stable manifold will flow along but then jump in opposite directions.
If is displaced from , its distance to can be estimated by finding the closest pairs of bracketing trajectories that do jump from in opposite directions. Figure [terman_test] displays the results of such a test. Eight pairs of trajectories displaced along the strong unstable manifold at distances are plotted in blue and green, and eight pairs of backward trajectories displaced along the strong stable manifold at distances are plotted in red and magenta. Pairs of trajectories displaced by distance (not drawn in figure [terman_test]) fail the test, jumping in the same direction. This suggests that the distance from to the slow manifold is smaller than . Note also that the increments in the distance that each successive pair of bracketing trajectories flows along are similar, consistent with the exponential separation of trajectories within the strong stable and unstable manifolds. Extrapolating these increments yields the estimate that a numerically simulated trajectory starting on the slow manifold near will only be able to remain close to for a time of approximately . This estimate is based on round-off error of the order of and the observation that the times at which trajectories displaced from by distances and appear to jump from are approximately and . These crude estimates explain why initial value solvers are unable to follow the continuous evolution of trajectories in the transition from to spikes per burst. The value of at is approximately , and the jump from of numerically simulated trajectories seems to occur before increases to , but the fold of the critical manifold occurs when is approximately . The exponential instability of in both forward and backward directions precludes initial value solvers from computing trajectories that flow along from the intersection of and to the fold of . [Figure terman_test: Trajectories along the strong stable and unstable manifolds at the point on the slow manifold, which is drawn as a filled black circle. The trajectories are computed for a time interval and color coded so that the trajectories along the two branches of the strong unstable manifold are drawn blue and green, while the trajectories along the strong stable manifold are drawn red and magenta. Parameters are , and the objects are projected into the plane.] Computation of periodic orbits with long canard segments near the slow manifold of saddle type appears to be challenging, even with continuation methods. Computation of the slow manifold with the boundary value solver introduced here can be coupled with the analysis of Terman and of Lee and Terman to solve this problem. Computations of the slow manifold are augmented with numerical forward and backward simulations of trajectories that terminate at a cross-section along the family of periodic orbits. This extends the approach introduced by Guckenheimer and LaMar to efficiently compute periodic orbits containing canards.
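The logarithmic dependence of the tracking time on the initial displacement is easy to demonstrate on a toy problem. The sketch below illustrates the bracketing test only, not the Morris-Lecar computation: it uses eps*x' = x - y, y' = 1, whose exact repelling slow manifold is x = y + eps. Trajectories released on opposite sides jump in opposite directions, and the escape time grows like eps*log(1/delta).

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2
# eps * x' = x - y, y' = 1: the exact repelling slow manifold is x = y + eps.
rhs = lambda t, u: [(u[0] - u[1]) / eps, 1.0]
leave = lambda t, u: abs(u[0] - u[1] - eps) - 1.0   # unit distance from the manifold
leave.terminal, leave.direction = True, 1

for delta in [1e-3, 1e-6, 1e-9]:
    for sign in (+1, -1):
        sol = solve_ivp(rhs, [0.0, 10.0], [sign * delta + eps, 0.0],
                        events=leave, rtol=1e-12, atol=1e-14)
        t_esc = sol.t_events[0][0]
        side = np.sign(sol.y[0, -1] - sol.y[1, -1] - eps)
        # t_esc ~ eps*log(1/delta); side matches the sign of the displacement
        print(delta, sign, round(t_esc, 4), side)
```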
[Figure terman_umfld: The unstable manifold of the slow manifold of saddle type in the system is swept out by trajectories starting close to the slow manifold. The blue and green trajectories are followed until they intersect the cross-section with increasing . Red trajectories on the stable manifold of are followed backward until they intersect this cross-section. The trajectories that are drawn reach the cross-section close to the points on the unstable manifold of . Parameters are .] Figure [terman_umfld] visualizes the invariant manifold as a collection of trajectories, for parameter values , in the system. The heavy black curve is a segment of the slow manifold of saddle type, and the dotted black curve is the critical manifold. At twenty initial points along , trajectories have been computed with initial conditions displaced from along its strong unstable manifolds by a distance . The trajectories starting on one side of are drawn blue and the trajectories starting on the other side of are drawn green. The blue trajectories make a loop around the unstable branch of the slow manifold and then flow past to the stable branch of the slow manifold. The green branches flow to the stable branch of the slow manifold with decreasing . Both sets of branches then turn and flow along the stable branch of the slow manifold. When they reach the fold of the slow manifold, they jump to the family of rapid oscillations. As trajectories follow these oscillations, decreases. The displayed trajectories are terminated when they reach the plane with decreasing . The red curves displayed in figure [terman_umfld] are four backward trajectories that begin at distance from along its stable manifold and end on the cross-section . These trajectories were chosen on a short section of so that they reach the cross-section near the ends of the blue and green trajectories. Figure [terman_sect](a) shows the ends of the blue, green, and red trajectories on the cross-section. Figures [terman_sect](b) and (c) show similar plots for the system with parameter values and . As varies, these plots demonstrate that the trajectories in the unstable manifold of sweep across the stable manifold of . Figure [terman_umfld] supports the following procedure for finding periodic orbits containing canards. Fix a short segment transverse to . With varying, trajectories with initial conditions on sweep out a three-dimensional manifold in space. The exchange lemma implies that if intersects transversally in space, then part of will stretch along the length of and depart from it along .
In particular, will intersect , giving a unique value of for which there is a periodic orbit intersecting . Figure [terman_sect] gives numerical evidence that does intersect , and it indicates that the value of will be almost constant along the family of periodic orbits containing canards. Computing trajectories with initial conditions on with an initial value solver will not produce these periodic orbits. Figure [terman_retmap](a) shows a return map, giving initial and final values of the variable , with 300 initial points chosen on a linear approximation to the intersection of with shown in figure [terman_sect](a). This return map has two apparent jumps. The trajectories beginning between the two jumps make three spikes before returning to , while the other trajectories make two spikes before returning. None of the trajectories flows along with decreasing to a value smaller than . Figure [terman_retmap](b) plots five of the trajectories from the return map: four that bracket the jumps and one from the local maximum of the return map in figure [terman_retmap](a). The approximations of obtained with the boundary value solver can be used in approximating periodic orbits with canards. The strategy we propose is illustrated by figures [terman_umfld] and [terman_sect]. The periodic orbit will be calculated in three segments, illustrated as black, red, and blue/green curves in figure [terman_umfld]. The canard segment of a periodic orbit is exponentially close to except at its arrival and departure points. When the periodic orbit departs from , it will follow a blue or green trajectory starting at a point exponentially close to that is numerically indistinguishable from points on . Similarly, the orbit segment that arrives at does so at a point that is exponentially close to , which is numerically approximated by backward integration beginning at a point of . As (or another parameter) is varied, the forward trajectories along and the backward trajectories along sweep out a curve and a surface of intersection with a cross-section in space (here ). A root solver can be used to locate a parameter value for which a trajectory of and one on arrive at the same point of the cross-section. The periodic orbit will then be approximated by the union of the two trajectories and a curve that flows along from the chosen arrival point to the chosen departure point. Normal hyperbolicity implies that there is a unique trajectory that connects these two points. As shown in figure [terman_sect], the intersections occur for for all arrival and departure points. Analysis of the bifurcations and attracting limit sets of the vector field requires additional information. Numerically, it is necessary to "fill in" the jumps in the return map shown in figure [terman_retmap], describing more carefully how the trajectories with canards return and determining the stability of trajectories containing canard segments. Figure [terman_uv] plots the final values of for trajectories on versus their initial values of .
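Return maps like the one in figure [terman_retmap] are conveniently computed with event detection. A generic sketch follows; the van der Pol field and the section {x = 0} are placeholders standing in for Terman's system and its cross-section. Each trajectory is integrated from just off the section to its next crossing with the prescribed orientation, and the initial and final section coordinates give one point of the map.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 2.0   # placeholder flow: van der Pol, standing in for the bursting model
rhs = lambda t, u: [u[1], mu * (1.0 - u[0] ** 2) * u[1] - u[0]]
section = lambda t, u: u[0]                 # the cross-section {x = 0}
section.terminal, section.direction = True, 1

def return_map(y0):
    # Start just off the section so the initial crossing is not re-detected.
    sol = solve_ivp(rhs, [0.0, 100.0], [1e-6, y0], events=section,
                    rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][1]            # y-coordinate at the next return

for y0 in np.linspace(0.5, 3.0, 6):
    print(y0, return_map(y0))               # iterates settle onto the limit cycle
```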
It is apparent that a large portion of contracts enough when it flows along the stable branch of the slow manifold that its intersection with the cross-section is very small. The image appears to lie inside a disk centered at of radius . The minimum return value of in the points plotted in figure [terman_retmap] is approximately . Thus it appears that the local minima of the return map are only a distance of about below the lowest points plotted in this figure. The variational equations of the system can be used to estimate how much expansion takes place along canard segments of trajectories and how much contraction takes place along the stable branch of the slow manifold. At points of the critical manifold with the same value of , the strong unstable eigenvalue on the middle branch has larger magnitude than the weaker stable eigenvalue on the stable branch. If a canard segment is long enough, then the accumulated expansion will dominate the subsequent contraction that takes place on the stable branch of the slow manifold. This suggests that the return map of the system will have an expanding direction for canards that are sufficiently long, consistent with figure [terman_retmap]. As the maximal canards of the return map move across the diagonal with changing parameters, the return map is likely to have chaotic invariant sets similar to those found in the Henon map. The numerical computations reported here are insufficient to adequately determine the details of these invariant sets, because the slow manifold is not computed close enough to the fold that the maximal canards are determined with good accuracy. Terman suggests that the system has trajectories with three different spike numbers in its bursts. The calculations here cast doubt on whether this is possible for trajectories that lie in the forward limit set of the system. For the parameters , the decrease in the value of between spikes along the surface of oscillations appears to be approximately in the region between the arrival of trajectories jumping from the fold of the stable branch of the slow manifold and the intersection of and . On the other hand, the trajectories that flow along the stable branch of the slow manifold appear to pass by the fold in a set that has a diameter at least an order of magnitude smaller than the observed separation between spikes. Our analysis of canards makes it clear that the trajectories with long canard segments all flow through a tiny region as they pass the fold. Lee and Terman give asymptotic estimates of the size of these regions in terms of that also suggest it is unlikely that the limit set of this system reaches the surface of oscillations in a set that is large enough to contain trajectories with three different spike numbers. The FitzHugh-Nagumo equation is a model for the electric potential of a nerve axon interacting with an auxiliary variable (see , ): , where and and are parameters. Assuming a travelling wave solution with for ([eq:fhn_original]), we get : a homoclinic orbit of ([eq:fhn_temp]) corresponds to a travelling pulse solution of ([eq:fhn_original]). An analysis of ([eq:fhn_temp]) using numerical continuation has been carried out by Champneys et al. They fixed the parameters , , and investigated bifurcations in -parameter space. We shall fix the same values and hence write .
To bring ([eq:fhn_temp]) into the standard form ([sfs]), set , , and change to the slow time scale : . We refer to ([eq:fhn]) as "the" FitzHugh-Nagumo equation. Our goal is to use the fast-slow structure of ([eq:fhn]) and the SMST algorithm to compute its homoclinic orbits. The critical manifold of the FitzHugh-Nagumo equation is the cubic curve : . The two local non-degenerate extrema of yield the fold points of . Denote the local minimum by and the local maximum by . The critical manifold has three normally hyperbolic components: . Fenichel's theorem provides associated slow manifolds , and outside neighbourhoods of the fold points. The manifolds and are of saddle type for sufficiently small . The middle branch is completely unstable in the fast directions. Denote the unique equilibrium point of ([eq:fhn]) by . The location of depends on the parameter and moves along the cubic . For the analysis of homoclinic orbits we shall assume that . In this case, the unstable manifold is one-dimensional and the stable manifold is two-dimensional. This also covers the case , by a symmetry of the FitzHugh-Nagumo equation, and avoids the region where is completely unstable, . Homoclinic orbits exist if . We focus first on the case of relatively large wave speeds ("fast waves"). The existence proof of these homoclinic orbits constructs them as perturbations of a singular trajectory consisting of four segments: a fast subsystem heteroclinic connection from to at , a slow segment on , a fast subsystem heteroclinic from to at for some constant , and a slow segment on connecting back to . We aim to compute homoclinic orbits by a similar procedure for a given small in several steps:

1. Find parameter values such that a homoclinic orbit exists very close to or exactly at . This can be achieved by a splitting algorithm without computing the homoclinic orbit, even for very small values of . Carry out all the following computations for .
2. [step1] Compute the slow manifolds and using the SMST algorithm.
3. [step2] Compute the unstable manifold of the equilibrium by forward integration.
4. Define a section , where the constant is chosen between and , e.g. . Compute the transversal intersection of and on ; call the intersection point (see figure [fig:hom_intersect_truncated]). Integrate forward and backward starting at to obtain trajectories and .
5. [step4] To compute the homoclinic orbit, use the objects computed so far as approximants in different regions. Compute the closest points in and to and concatenate and with at these points. Proceed similarly with and . Remove all parts of the slow manifolds not lying between the concatenation points and past the equilibrium.

[Figure fig:hom_intersect_truncated: (green) and (magenta). The manifolds are truncated at the yellow section , and the trajectory started on at the transversal intersection point is shown in red.] Note that all figures for the fast wave case have been computed for , and . This is a classical case for which the existence of homoclinic orbits is known. In figure [fig:hom_umfld_smfld] we show the result from the SMST algorithm and the unstable manifold of the equilibrium, i.e. the output of steps [step1] and [step2].
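For the cubic critical manifold, the fold points can be computed explicitly. A small sketch follows; the normalization c(x) = x(1-x)(x-a) with a = 0.1 is an assumed placeholder consistent with the fast-wave literature, not a value taken from this excerpt. It finds the two non-degenerate extrema that separate the three normally hyperbolic branches.

```python
import numpy as np

a = 0.1                                   # assumed placeholder normalization
c = lambda x: x * (1.0 - x) * (x - a)     # cubic whose graph is the critical manifold

# c(x) = -x^3 + (1+a)x^2 - a*x, so c'(x) = -3x^2 + 2(1+a)x - a has two real
# roots, the fold points x- < x+ separating the left, middle, and right branches.
x_minus, x_plus = np.sort(np.roots([-3.0, 2.0 * (1.0 + a), -a]).real)
print("folds at x =", x_minus, x_plus)
print("fold values y =", c(x_minus), c(x_plus))
```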
Due to the exponential separation along , the trajectory obtained from numerical integration cannot track the slow manifold for an O(1) distance and escapes after following the slow manifold for a very short time. This happens despite the fact that we have computed parameter values with maximal accuracy in double precision arithmetic, at which we expect to follow almost up to the fold point. This observation is relevant to figure [fig:hom_orbit_fast_pieces], where the result of step [step4] is shown. All the fast segments (red) had to be truncated almost immediately after they entered a neighbourhood of a slow manifold. The final output of the algorithm, after interpolation near the truncation points, is shown in figure [fig:hom_orbit_fast]. Now we consider the case of "slow waves" and work with smaller wave speeds . Homoclinic orbits representing slow waves should be thought of as perturbations of singular limit orbits for the FitzHugh-Nagumo equation ([eq:fhn]) with . In this case the fast subsystem is Hamiltonian. Singular homoclinic orbits exist in a single fast subsystem with the y-coordinate of the equilibrium. A direct application of Fenichel theory implies that a perturbed singular "slow" homoclinic orbit persists for . Again it is possible to compute parameter values at which homoclinic orbits for exist. To compute the orbits themselves, a similar approach as described above can be used. We have to track when enters a small neighbourhood of , respectively of . Figure [fig:slow_waves] shows two computed homoclinic orbits for and . The orbits spiral around the middle branch and do not enter the vicinity of . This is expected, as the middle branch of the critical manifold consists of unstable spiral equilibria for the fast subsystems. The Hamiltonian analysis for the case shows that the singular slow homoclinic orbits are not close to an equilibrium on for values of approximately between and (see ). In figure [fig:slow_wave1] a homoclinic orbit enters the vicinity of the slow manifold and returns directly to . Figure [fig:slow_wave2] shows a homoclinic orbit that makes one additional large excursion around after it was close to and then returns to ; hence we refer to the orbit in [fig:slow_wave2] as a double pulse homoclinic orbit. The same double pulse phenomenon exists for fast waves as well. In this case the double pulse orbit has no additional interaction with the middle branch, and therefore it is difficult to distinguish between different pulse types for fast waves numerically and graphically, as the second loop follows the first one very closely. This example demonstrates the use of our algorithm to compute trajectories in saddle-type slow manifolds of systems with two slow variables. The model is a caricature of a pair of neurons that are coupled with _reciprocal inhibition_. The vector field is : in this model, and are interpreted as the membrane potentials of two neurons that are coupled synaptically through the terms involving . The variables and represent the gating of membrane channels in the neurons. The model is a caricature in that it does not incorporate the fast membrane currents which give rise to action potentials. Still more reduced models have been used to study reciprocal inhibition of a pair of neurons.
Reciprocal inhibition between a pair of identical neurons has long been viewed as a mechanism for generating repetitive alternating activity in motor systems. Guckenheimer, Hoffman, and Weckesser investigated the properties of this model when the two neurons have different parameters and therefore are not identical. They observed that canards of several kinds were encountered while continuing periodic orbits with AUTO calculations. The bifurcation mechanisms encountered in these continuation studies still have not been identified, despite intensive efforts by Lust to compute the multipliers of periodic orbits accurately. Our algorithm for computing invariant slow manifolds of saddle type provides a promising new tool for investigating the bifurcations that take place in this system with two slow and two fast variables. Here we illustrate that the algorithm is indeed capable of computing trajectories that lie on these manifolds, without pursuing bifurcation analysis of the system. The periodic trajectory discussed in section 5.2 of has three different canard segments. Here we focus on the segment labelled in figure 6(c) of . The segment is a _fold-initiated_ canard that begins as a fast trajectory, flows near a fold of the critical manifold, and then moves along a saddle-type sheet of the slow manifold. Both neurons in the model have parameter values , while and . One of the points on the segment has coordinates . We projected onto the critical manifold along the directions, retaining the coordinates of , and computed a trajectory of the slow flow on the critical manifold with this initial condition. While the slow flow is an algebraic-differential equation, the critical manifold of is easily written as the graph of a function, and the slow flow equations can be written as a vector field in . The trajectory was taken as input for our algorithm. Boundary conditions were selected so that the initial point of the trajectory retains the same coordinates as . Figure [rifig] displays the trajectory obtained from our algorithm in black, together with trajectories of its strong stable and unstable manifolds. The distance of the initial conditions for the trajectories on the strong stable and unstable manifolds from is . Note that the first trajectories of the strong stable manifold at the bottom of the figure both flow down and to the right, reflecting that the initial points of these trajectories do not straddle the slow manifold in the strong stable direction. Similar behavior occurs at the final point of in the strong unstable direction. This behavior is to be expected, because the boundary conditions constrain the strong stable coordinate of the first point of to have a value close to that on the critical manifold rather than the invariant slow manifold. At the final point of , the strong unstable coordinate is determined by the critical manifold. The behavior of is what we expect from our algorithm: the computed trajectory approaches the slow manifold of saddle type along a strong stable direction at its beginning, flows along the slow manifold to a high degree of accuracy until near its end, and then leaves the slow manifold along a strong unstable direction. The length of is much longer than the segment shown in figure 6(c) of .
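The reduction used in this computation, projecting to the critical manifold and integrating the slow flow, takes a simple generic form whenever the critical manifold is the graph of a function. The sketch below is schematic: the functions h and g are placeholders, not the reciprocal-inhibition model, whose equations this excerpt does not reproduce. For a system eps*x' = f(x, y), y' = g(x, y) with critical manifold x = h(y), the slow flow is y' = g(h(y), y).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder critical-manifold graph and slow dynamics for a system
# eps*x' = f(x, y), y' = g(x, y) with two slow variables y = (y1, y2).
h = lambda y: np.tanh(y[0] - y[1])          # assumed solution of f(h(y), y) = 0
g = lambda x, y: np.array([y[1], -y[0] - 0.5 * y[1] + x])

slow_rhs = lambda t, y: g(h(y), y)          # slow flow restricted to the graph

sol = solve_ivp(slow_rhs, [0.0, 20.0], [1.0, 0.0], rtol=1e-9, atol=1e-12)
y_end = sol.y[:, -1]
print(y_end, h(y_end))   # endpoint on the critical manifold, lifted via x = h(y)
# Such a slow-flow trajectory, lifted to (h(y), y), is exactly the kind of
# starting data fed to the boundary-value solver described above.
```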
[Figure rifig: A trajectory on a saddle-type slow manifold of the system. The trajectory is drawn black. Trajectories with initial conditions that are displaced by from along the direction of its strong stable manifolds are drawn red, while trajectories with initial conditions that are displaced by from along the direction of its strong unstable manifolds are drawn green.]

A. L. Gorman and M. V. Thomas (1978), Changes in the intracellular concentration of free calcium ions in a pace-maker neurone, measured with the metallochromic indicator dye arsenazo III, J. Physiol. 275:357-76.

J. Guckenheimer and D. LaMar, Periodic orbit continuation in multiple time scale systems, Numerical Continuation Methods for Dynamical Systems, 253-267, Underst. Complex Syst., Springer, Dordrecht, 2007.

E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, second edition, Springer Series in Computational Mathematics 14, Springer-Verlag, Berlin, 1996.
Slow manifolds are important geometric structures in the state spaces of dynamical systems with multiple time scales. This paper introduces an algorithm for computing trajectories on slow manifolds that are normally hyperbolic with both stable and unstable fast manifolds. We present two examples of bifurcation problems where these manifolds play a key role, and a third example in which saddle-type slow manifolds are part of a traveling wave profile of a partial differential equation. Initial value solvers are incapable of computing trajectories on saddle-type slow manifolds, so the slow manifold of saddle type (SMST) algorithm presented here is formulated as a boundary value method. We take an empirical approach here to assessing the accuracy and effectiveness of the algorithm.
We consider billiard dynamics on a rectangular domain, i.e., point-shaped "balls" moving with linear motion and specular reflections at the boundary, and similarly for rectangular box-shaped domains in three dimensions. We wish to determine the distribution of free path lengths of ensembles of trajectories defined by selecting a starting point and direction at random. The question seems quite natural and interesting on its own, but we mention that it originated from the study of electromagnetic fields in "reverberation chambers" under the assumption of highly directional antennas. Briefly, the connection is as follows (we refer to the forthcoming paper for more details): given an ideal highly directional antenna and a highly transient signal, the wave pulse dynamics is essentially the same as a point-shaped billiard ball traveling inside a chamber, with specular reflection at the boundary. Signal loss is dominated by (linear) "spreading" of the electromagnetic field and by absorption occurring at each interaction ("bounce") with the walls. The first simple model we use in this paper neglects absorption effects, and models signal loss from spreading by simply terminating the motion of the ball after it has travelled a certain large distance. The second model only takes into account signal loss from absorption, and completely neglects spreading; here the motion is terminated after the ball has bounced a certain number of times. We remark that the distribution of free path lengths is very well studied in the context of the Lorentz gas: here a point particle interacts with hard spherical obstacles, placed either randomly or regularly on Euclidean lattices; recently quasicrystal configurations have also been studied (cf. ). Let be large and let a rectangular -dimensional box be given, where . We send off a large number of particles, each with a random initial position chosen with respect to a given probability measure on , and each with a uniformly random initial direction , , for a total distance each. Each particle travels along straight lines, changing direction precisely when it hits the boundary of the box, where it reflects specularly. We record the distance travelled between each pair of consecutive bounces for each particle. (Note in particular that we obtain more bounce lengths from some particles than from others.) Let be the uniformly distributed random variable on this finite set of bounce lengths of all the particles. More precisely, a random sample of is obtained as follows: first take a random i.i.d. sample of points (with respect to the measure ), and a random sample of directions (with respect to the uniform measure). Each pair then defines a trajectory of length , and each such trajectory gives rise to a finite multiset of lengths between consecutive bounces. Finally, with denoting the (multiset) union of the bounce length multisets, we select an element of with the uniform distribution. (That is, with denoting the integer-valued set indicator function for , we select the element with probability .) We are interested in the distribution of for large and , and this turns out to be closely related to a model arising from integral geometry. Namely, let denote the unique (up to a constant) translation- and rotation-invariant measure on the set of directed lines in , and consider the restriction of this measure to the set of directed lines intersecting , normalized such that it becomes a probability measure.
Denote by the random variable , where is chosen at random using this measure. [thm_n] For any dimension , and for any distribution on the starting points, the random variable converges in distribution to the random variable as we take followed by taking , or vice versa. The mean free path length has a quite simple geometric interpretation. We have
\[
\mathbb{E}[\Lambda] = 2\pi \frac{|\mathbb{S}^{n-1}|}{|\mathbb{S}^{n}|}\frac{\operatorname{vol}(K)}{\operatorname{area}(K)} = 2\sqrt{\pi}\cdot\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})}\frac{\operatorname{vol}(K)}{\operatorname{area}(K)} \label{santalo-formula}
\]
where is the -dimensional surface area of the box , is the volume of the box, is the Gamma function, and is the -dimensional surface area of the sphere . The formula in has been proven in a more general setting earlier (see e.g. formula (2.4) in ); for further details, see section [sec:discussion]. For the convenience of the reader we give a short proof of the formula in our setting in section [sec:integral-geometry-mean-value]. Throughout the paper, we will write and for the probability density function and the cumulative distribution function of , respectively, for random variables . We next give explicit formulas for the probability density function of in dimensions two and three. [thm_2] For a box of dimension with side lengths , the probability density function of is given by for . We note that the probability density function in theorem [thm_2] is analytic on all open subintervals of not containing or . Moreover, it is constant on the interval and has singularities of type and just to the right of and , respectively. See figure [fig:easy-rectangle] for more details. For an explanation of these singularities, see remark [rem:singularity-explained]. [Figure fig:easy-rectangle: for . (The simulation used particles, each starting at the origin with a uniformly random direction, going for a total distance each.) The plot is cut off at , since tends to infinity as and .] [thm_3] For a box of dimension with side lengths , the probability density function of is given by , where is the piecewise-defined function given by for , by for , and by for . [Figure fig:easy_3d: for . (The simulation used particles, each starting at the origin with a uniformly random direction, going for a total distance each.) The fact that is not smooth at is barely noticeable.] We note that the probability density function in theorem [thm_3] is analytic on all open subintervals of not containing any of the points . Moreover, it is linear on the interval and has positive jump discontinuities at the points . At the points it is continuous and differentiable. Note that the probability distribution gives a larger "weight" to some particles than others, since some particles get more bounces than others for the same distance. One could also consider a similar problem where we send off each particle for a certain number of bounces, and then consider the limit as followed by the limit , where is the number of particles. This would give each particle the same "weight". Denote the finite version of this distribution by , and its limit distribution as and then by . With regard to the previous discussion about signal loss, we call the limit distribution of the *spreading model* and the limit distribution of the *absorption model*.
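As a sanity check on theorem [thm_n] and formula (santalo-formula), the ensemble can be sampled directly via unfolding: the bounce lengths of a trajectory are the gaps between consecutive crossings of the grid lines by the corresponding straight line. The sketch below is a minimal 2-d Monte Carlo; the box sides a = 1, b = 2, the trajectory length, and the sample count are placeholder choices, and by symmetry the directions are restricted to the first quadrant. The empirical mean of the pooled gaps should approach the n = 2 value of the formula, pi*a*b/(2*(a+b)).

```python
import numpy as np
from math import gamma, pi, sqrt

rng = np.random.default_rng(1)
a, b = 1.0, 2.0            # box side lengths (placeholder values)
R, M = 200.0, 2000         # distance per particle, number of particles

def bounce_lengths(x0, y0, phi):
    # Unfold the billiard path to a straight ray; bounce lengths are the
    # gaps between consecutive crossings of the lines x = k*a and y = m*b.
    c, s = np.cos(phi), np.sin(phi)
    tx = (np.arange(1, int(R * c / a) + 2) * a - x0 % a) / c
    ty = (np.arange(1, int(R * s / b) + 2) * b - y0 % b) / s
    t = np.sort(np.concatenate([tx, ty]))
    return np.diff(t[t <= R])

gaps = np.concatenate([
    bounce_lengths(rng.uniform(0, a), rng.uniform(0, b), rng.uniform(0, np.pi / 2))
    for _ in range(M)])

# Mean free path of a 2-d box: 2*sqrt(pi)*Gamma(3/2)/Gamma(1) * vol/area.
mean_exact = 2 * sqrt(pi) * gamma(1.5) / gamma(1.0) * (a * b) / (2 * (a + b))
print(gaps.mean(), mean_exact)   # both should be close to pi*a*b/(2*(a+b))
```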
Determining the probability density function of the absorption model appears to be the more difficult problem, and we give a formula only in dimension two: [thm_hard] For a box of dimension with side lengths , the random variable converges in distribution to the random variable as we take followed by , where the probability density function is given by for , by for , and by for . [Figure bounce_vs_distance: (red line) from theorem [thm_2] vs the absorption model (black dashed line) from theorem [thm_hard], for .] See figure [bounce_vs_distance] for a comparison between the probability density functions of the two different models in dimension . It is not a priori obvious that the two limit distributions should differ, and it is natural to ask how much, if at all, they differ. We start by remarking that the expression for does not simplify into the expression for ; indeed, for we have , but on the interval . For very skew boxes, with and , it is straightforward to show that as . Given a closed convex subset with nonempty interior, it is possible to define a natural probability measure on the set of lines in that have nonempty intersection with . The expected length of the intersection of a random line is then, up to a constant that only depends on , given by ; this is known as Santalo's formula in the integral geometry and geometric probability literature (cf. , Ch. 3). A billiard flow on a manifold with boundary gives rise to a billiard map (roughly speaking, the phase space is then the collection of inward-facing unit vectors at each point). Given , we define the associated free path as the distance the billiard particle, starting at in the direction , covers before colliding with again. As the billiard map carries a natural probability measure, we can view the free path as a random variable, and the mean free path is then just its expected value. Remarkably, the mean free path (again up to a constant that only depends on the dimension) is given by even for non-convex billiards. This was deduced in the seventies at the Moscow seminar on dynamical systems directed by Sinai and Alekseev but was never published, and it was hence rederived by a number of researchers. For further details and an interesting historical survey, see Section 2 of Chernov's paper. In spirit our methods are closely related to the ones used by Barra and Gaspard in their study of the level spacing distribution for quantum graphs, and this turns out to be given by the distribution of return times to a hypersurface of section of a linear flow on a torus. In particular, for graphs with a finite number of disconnected bonds of incommensurable lengths, the hypersurface of section is the "walls" of the torus, and the level spacing of the quantum graph is exactly the same as the free path length distribution in our setting when all particles have the _same_ starting velocity. (In particular, compare the numerator in for fixed with equation (49) of Barra and Gaspard.)
In , Marklof and Strömbergsson used the results of Barra and Gaspard to determine the gap distribution of the sequence of fractional parts of . The gap distribution depends on whether is transcendental, rational, or algebraic; quite remarkably, the density function for these gaps shares a number of qualitative features with the density function for free paths in our setting. Namely, the density functions both have compact support and are smooth apart from a finite number of jump discontinuities. Further, in some cases the density function is constant for small ; compare figure [fig:easy-rectangle] (here ) with figure 4 of Marklof and Strömbergsson (there ). However, there are some important differences: for , left and right limits exist at the jump discontinuities, whereas for , the right limit of is at the jumps (cf. figure [fig:easy-rectangle]). Further, despite appearances, is not linear near (cf. figure 1 of , corresponding to ), whereas for , is indeed linear near (cf. figure [fig:easy_3d]). We would like to thank Z. Rudnick for some very helpful discussions, especially for suggesting the connection with integral geometry. We also thank J. Marklof for bringing the references to our attention. was partially supported by a grant from the Swedish Research Council (621-2011-5498). was partially supported by grants from the Göran Gustafsson Foundation for Research in Natural Sciences and Medicine, and the Swedish Research Council (621-2011-5498). In this section, we prove theorem [thm_n]. For notational simplicity, we give the proof in dimension three; the general proof for dimensions is analogous. Given a particle with initial position and initial direction , let be the number of bounce lengths we get from that particle as it travels a total distance , and let be the number of such bounce lengths of length at most . The uniform probability distribution on the set of bounce lengths of particles with initial positions and initial directions has the cumulative distribution function . (Note that the denominator is uniformly bounded from below, which follows from equation below.) By the strong law of large numbers, the function converges almost surely to as , where is the probability measure with which we choose the starting points, and is the surface area measure on the sphere . By symmetry, we may restrict the inner integrals to . We now look at the limit of as , and we note that since the integrands are uniformly bounded, we may move the limit inside the integrals by the Lebesgue dominated convergence theorem. Fix one of the integrands, and denote it by . We will show that its limit exists for all and all directions . Moreover, if and denote random variables corresponding to an initial position and an initial direction, respectively, as above, then is a random variable with finite variance (and similarly for the terms in the denominator of ; in particular, recall that it is uniformly bounded from below), and thus the strong law of large numbers gives that the limit of as , and then , almost surely equals . This shows that exists almost surely and is equal to . Consider a particle with initial position and initial direction .
By "unfolding" its motion with specular reflections on the walls of the box to motion along a straight line in (see figure [unfold_rgb] for a 2d illustration), we see that the particle's set of bounce lengths is identical to the set of path lengths between consecutive intersections of the straight line segment with any of the planes , . Thus we see that for large , and therefore as . Now project the line to the torus , where , and let us identify the torus with the box ; see figure [unfold_rgb]. Each bounce length corresponds to a line segment which starts in one of the three planes , or and runs in the direction to one of the three planes , or . There are line segments which start from the plane , and thus the probability that a line segment starts from the plane is as . By the ergodicity of the linear flow on tori (for almost all directions), the starting points of these line segments become uniformly distributed on the rectangle $[0,a]\times[0,b]\times\{0\}$ . The line segments which satisfy this are precisely those outside the rectangle $[0,a-tv_x]\times[0,b-tv_y]$. The area of that region is if , and otherwise it is . Since the starting points are uniformly distributed in the rectangle $[0,a]\times[0,b]$ , and the set of lines with and direction has area , as in , and its projection onto the plane has area , by symmetry it follows that the area of the set of directed lines with and direction , projected down to , is
\[
u(v,t) := v_x\bigl[bc-(b-tv_y)(c-tv_z)\,\chi\bigr] + v_y\bigl[ac-(a-tv_x)(c-tv_z)\,\chi\bigr] + v_z\bigl[ab-(a-tv_x)(b-tv_y)\,\chi\bigr],
\]
where $\chi := \chi(a\ge tv_x,\, b\ge tv_y,\, c\ge tv_z)$, and it follows that
\[
\mathbb{P}[\Lambda \le t] = \frac{1}{C_{a,b,c}}\int_{\lambda\le t} d\ell = \frac{8}{C_{a,b,c}}\int_{\mathbb{S}^2_+} u(v,t)\,dS(v),
\]
which we see is identical to , and we have thus proved that converges in distribution to as we take and then . This concludes the proof of theorem [thm_n].
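The face-sampling description of the torus segments can be checked by direct Monte Carlo; the sketch below uses placeholder box sides, and the expected value 2abc/(ab+ac+bc) it compares against is computed in the next section. A direction is sampled on the positive octant, the starting face is chosen with probability proportional to the flux v_i times the face area, the start is placed uniformly on that face, and the distance to the first plane crossed is recorded.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = 1.0, 1.0, 2.0          # box sides (placeholder values)
N = 10 ** 6
sides = np.array([a, b, c])

# Directions uniform on the positive octant of the unit sphere.
v = np.abs(rng.normal(size=(N, 3)))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Choose the starting face: orthogonal to axis i with probability ~ v_i * area_i.
w = v * np.array([b * c, a * c, a * b])
u = rng.random(N)[:, None] * w.sum(axis=1, keepdims=True)
face = (u < np.cumsum(w, axis=1)).argmax(axis=1)

# Uniform starting point on the chosen face, then the distance to the first
# of the planes x = a, y = b, z = c hit by the ray.
p = rng.random((N, 3)) * sides
p[np.arange(N), face] = 0.0
lam = ((sides - p) / v).min(axis=1)
print(lam.mean(), 2 * a * b * c / (a * b + a * c + b * c))
```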
for fixed , denote by the set of such that the directed line parametrized by intersects .we note that is a volume element of the box for any fixed , and thus integrating over all yields the volume of the box .hence the mean value is }= \dfrac8{c_{{a},{b},{c } } } \int_{\mathbb s^2_+}\int_{q(v)}x{\mathop{da}}(q){\mathop{ds}}(v ) = \dfrac{8{a}{b}{c } } { c_{{a},{b},{c } } } \int_{\mathbb s^2_+ } { \mathop{ds}}(v ) = \dfrac{2{a}{b}{c } } { { a}{b}+{a}{c}+{b}{c}}.\end{gathered}\ ] ] in dimensions we get a normalizing factor , so with the aid of the lemma [ integral_of_vn ] in the appendix , it follows that the mean value in dimensions is } = \dfrac1{2^n \dfrac1{\pi}\dfrac{|\mathbb s^n|}{2^{n } } \frac{\operatorname{area}(k)}2 } 2^n\operatorname{vol}(k)\dfrac{|\mathbb s^{n-1}|}{2^n } = 2\pi \dfrac{|\mathbb s^{n-1}|}{|\mathbb s^n|}\dfrac{\operatorname{vol}(k)}{\operatorname{area}(k)}\end{gathered}\ ] ] where is the -dimensional surface area of the box , and is the volume of the box .using formula in dimension , we get we use polar coordinates so that .then the above becomes _ { \cos{^{-1}}(\min(a / t , 1 ) ) } ^ { \sin{^{-1}}(\min(b / t , 1 ) ) } .\label{two_easy } \end{gathered}\ ] ] the numerator of the second term may be written which can be simplified to inserting this into and differentiating yields theorem [ thm_2 ] .we will evaluate the cumulative distribution function and then differentiate .the denominator of the second term of is as may be evaluated by switching to spherical coordinates .define by symmetry , we have and thus we can write the numerator in the second term of as exploiting the symmetries , it suffices to evaluate and ( note the order of the arguments to ) .we will evaluate these integrals by switching to spherical coordinates , but first we need to parametrize the part of the sphere inside the box .[ param_lemma ] fix .we have for any integrable function , where , where and where we have used the shorthand .we will parametrize the set of points on the sphere such that switch to spherical coordinates .the non - negativity conditions of are equivalent to the condition . for such angles ,the condition is equivalent to and the conditions are equivalent to the interval is non - empty for precisely those such that since thus we may restrict to the interval given by the inequalities note that we have for all since we conclude that we can write for , note that is defined precisely when and that is defined precisely when .we have if and only if , and we have if and only if .moreover we note that we always have ] .but since we know that we will get a function symmetric with respect to the values , it suffices to keep only those terms with and , say , and then the other terms may be evaluated by just switching the order of . upon inserting the limits and differentiating ,one obtains ( after tedious calculations ) that where .\end{align } \end{gathered}\ ] ] rewriting as a piecewise function , we get theorem .consider the distribution of the random variable . 
since we record the _ same _ number of bounces for each choice of angle may replace the -particle system with a one particle system as follows : randomly select , with uniform distribution , the angle and generate bounce lengths and randomly select one of these bounce lengths ( with uniform distribution ) ; by the strong law of large numbers , converges in distribution to as .we now determine the limit distribution of .as before , we first unfold the motion , and replace motion in a box with specular reflections on the walls with motion in ; see figure [ unfold_rgb ] .the path lengths between bounces is then the same as the lengths between the intersections with horizontal or vertical grid lines . to understand the spatial distribution , we project the dynamics to the torus where is the lattice and we may identify the torus with the rectangle \times[0,{b}] ] .( note that gives a probability measure on these angles . )if the particle travels a large distance , the number of intersections with horizontal , respectively vertical , grid lines is , respectively .thus , in the limit , the probability of a line segment beginning at a horizontal ( respectively vertical ) grid line is given by , respectively ( here we suppress the dependence on ) where the unfolded flow on the torus is ergodic for almost all , and thus the starting points of the line segments becomes uniformly distributed as for almost all .let since , we obtain that let denote the angle of the diagonal in the box , and assume that .we then observe the following regarding the line segment lengths .first , if the segment begins at a horizontal line , it must end at a vertical line , and the possible lengths of these segment lie between and .we find that these lengths are uniformly distributed in ] , whereas the latter happens with probability in which case the segment is always of length .now , ] , and noting that we find that the probability of observing a line segment of length is the sum of a `` singular part '' ( the segment begins and ends on vertical lines ; note that all such segments have the _ same _ lengths ) and a `` smooth part '' ( the segment does not begin and end on vertical lines ) .moreover , the smooth part contribution equals which , on inserting ( [ eq : vert - horiz - probs ] ) , equals on the other hand , the `` singular part contribution '' , provided , to the probability of a segment having length equals in case , a similar argument ( we simple reverse the roles of and ) shows that the smooth contribution equals and that the singular contribution ( if ) equals thus , if we let denote the `` singular contribution '' to the probability density function we find the following : if , then if ] , then [ rem : singularity - explained ] note that has a singularity of type just to the right of ( and similarly just to the right of ) . in a sensethis singularity arises from the singularity in the change of variables since .the reason for the singularities in the spreading model for is similar , as the spreading model can be obtained from the absorption model by a smooth change of the angular measure .similarly , the `` smooth part '' of the contribution is ( for $ ] ) given by hence the probability density function of the distribution of the segment length is given by we will now evaluate .an antiderivative of with respect to for is where for .( a quick calculation shows that whenever . 
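the singular-plus-smooth structure of the density can be observed empirically; below is a sketch of the spreading model (python/numpy, all parameters illustrative) that records a single bounce per particle and histograms the result. the histogram develops the blow-up just to the right of x = a described in the remark above.

import numpy as np

rng = np.random.default_rng(2)
a, b = 1.0, 2.0
samples = []
for _ in range(20000):
    phi = rng.uniform(0.0, np.pi / 2)        # uniform angular measure (assumption)
    vx, vy = np.cos(phi), np.sin(phi)
    T = 100.0
    tx = a * np.arange(1, int(T * vx / a) + 1) / vx
    ty = b * np.arange(1, int(T * vy / b) + 1) / vy
    d = np.diff(np.sort(np.concatenate([tx, ty])))
    samples.append(rng.choice(d))            # one bounce per particle
hist, edges = np.histogram(samples, bins=120, range=(0.0, 3.0), density=True)
i = np.searchsorted(edges, a)
print(hist[i - 2:i + 4])   # the density jumps up just past x = a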
) we can rewrite as by lhpital s rule we have so the limit of as is the limit of as is thus , assuming , we can write as if , or as if or as if . adding to this , we get theorem [ thm_hard ] .[ integral_of_vn ] write for the -dimensional surface area of the sphere . then we have where is the part of the sphere with positive coordinates. we may parametrize with for .we have the spherical area element thus we get introducing an additional integration variable , we recognize the integrand as the spherical area element in dimensions , and thus the above is since .m. bckstrm , s. holmin , p. kurlberg , d. mnsson ._ randomized ray tracing for modeling uwb transients in a reverberation chamber . _ in preparation .n. chernov .entropy , lyapunov exponents , and mean free path for billiards . , 88(1 - 2):129 , 1997 .
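the appendix lemma reduces to elementary sphere integrals; as an independent sanity check, here is a monte carlo estimate (python/numpy) of the integral of v_x over the positive octant of the two-sphere, whose exact value is pi/4.

import numpy as np

rng = np.random.default_rng(3)
v = rng.normal(size=(10**6, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)    # uniform points on S^2
inside = np.all(v > 0.0, axis=1)
# total surface area of S^2 is 4*pi, so this estimates the octant integral
estimate = 4 * np.pi * np.mean(np.where(inside, v[:, 0], 0.0))
print(estimate, np.pi / 4)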
we consider the distribution of free path lengths , or the distance between consecutive bounces of random particles , in an -dimensional rectangular box . if each particle travels a distance , then , as , the distribution of the free path lengths coincides with the distribution of the length of the intersection of a random line with the box ( for a natural ensemble of random lines ) , and we give an explicit formula ( piecewise real analytic ) for the probability density function in dimensions two and three . in dimension two we also consider a closely related model where each particle is allowed to bounce times , as , and give an explicit ( again piecewise real analytic ) formula for its probability density function . further , in both models we can recover the side lengths of the box from the locations of the discontinuities of the probability density functions .
the study of the solutions corresponding to the electrical impedance equation where represents the conductivity and denotes the electric potential , is fundamental for the proper understanding of a wide variety of boundary value problems , that posses special relevance in different branches of mathematical physics . among many important problems , it is particularly interesting the one corresponding to the electrical impedance tomography , because of its importance in applied physics and engineering . as a matter of fact , only taking into account the medical imaging applications , no doubts remain about its relevance . the study of this inverse problem , correctly posed in mathematical terms by a.p .calderon , is based upon iterative methods that employ solutions of the forward problem , attempting to fulfil a certain boundary condition ( usually upcoming from physiological measurements ) , by introducing variations in the conductivity function at every step , in order to reduce the difference between the theoretical and the measured one ( see e.g. , the classical work ) . yet, the mathematical complexity of ( [ int:00 ] ) posed so strong challenges , that the electrical impedance tomography remained only as an alternative medical imaging technique , seldom considered among the basic clinical applications .as a matter of fact , the electrical impedance tomography is still considered an ill posed problem . in this direction ,the mathematical foundations dedicated to this topic , perhaps received one of the most important contributions when v. kravchenko in 2005 , and independently k. astala and l. pivrinta in 2006 , discovered that the two - dimensional case of ( [ int:00 ] ) was completely equivalent to a special kind of vekua equation , which had been deeply studied in a variety of works , been the most important those published by l. bers and by i. vekua .the list of novel works arising after such discovering , is as long as it is interesting .still , it shall be pointed out that just some of them are dedicated to the engineering applications , since it is not clear how to model a wide variety of physical phenomena , in order to make them susceptible to be analysed by means the modern theory of pseudoanalytic functions ( see e.g. ) .this work intends to make a positive contribution in this direction .already in was posed a basic idea for interpolating values of conductivity within bounded domains , in order to obtain a certain class of analytic representations : two - dimensional separable - variables functions , one of the keys to fully applied the novel mathematical methods into engineering problems . in this work ,a more general methodology is posed , based upon the main conjecture arising from . 
starting with some examples for which the conductivity functions are known in exact form , the elements of the pseudoanalytic function theory are used , for the first time , to approach solutions of the forward boundary value problem for ( [ int:00 ] ) , considering conductivity distributions upcoming from geometrical figures .even the full set of examples is provided for a circular domain , the results can be extended to a wide variety of bounded domains .thus , the methods provided along these pages , could well be ready for studying images corresponding already to physical experimental models .following , let the complex - valued functions satisfy the condition where denotes the complex conjugation of : , and is the standard imaginary unit .thus , any complex - valued function can be expressed by means of the linear combination of and : where and are purely real functions .two complex functions that fulfil ( [ pre:00 ] ) shall be called a _ generating pair _ .bers introduced the -derivative of the function according to the expression : but this derivative will exist if and only if here notice that these operators are classically introduced with the factor , but it will result somehow more convenient to omit it int his work . introducing the functions the expression ( [ pre:01 ] ) of the -derivative will turn into as well the condition ( [ pre:02 ] ) can be written as the functions defined in ( [ pre:03 ] ) are called the _characteristic coefficients _ of the generating pair , and the functions satisfying the condition ( [ pre:05 ] ) are named -pseudoanalytic .indeed , the equation([pre:05 ] ) is know as the _ vekua equation _ , and in many senses is the foundation of the present work .the following sentences were originally presented in and , and they have been slightly adapted for the purposes of this paper .[ th:00 ] the elements of the generating pair are -pseudoanalytic : let be a non - vanishing function within a bounded domain .the functions constitute a generating pair , whose characteristic coefficients are let and be two generating pairs of the form ( [ pre:07 ] ) , and let their characteristic coefficients satisfy the relation thus the pair will be called a successor of the pair , whereas will be called a predecessor of .let be a set of generating pairs , and let every be a successor of .thus , the set is called a generating sequence . moreover , if there exist a number such that the generating sequence is said to be periodic , with period . finally ,if , the generating pair will be embedded into the generating sequence .[ th:01 ] let be a generating pair of the form ( [ pre:06 ] ) , and let be a separable - variables function : where .thus is embedded into a periodic generating sequence , with period , such that when is even , and when is odd . moreover , if in particular , it is easy to see that the generating sequence in which is embedded will be also periodic , but it will posses period .l. bers also introduced the concept of the -integral of a complex - valued function .the detailed conditions for its existence can be found in .let be a generating pair of the form .its adjoin generating pair is defined according to the formulas the -integral of a complex - valued function ( when it exists ) is defined as : where is a rectifiable curve within a domain .specifically , if we consider the -integral of the -derivative of , we will have : here , and is a fixed point in the complex plane . 
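the definitions above can be exercised symbolically; in the sketch below (python with sympy) the concrete pair F = p, G = i/p with p^2 a separable conductivity is an assumption borrowed from the pseudoanalytic-function literature, and, following the remark in the text, the derivative with respect to z-bar is taken without the classical factor 1/2.

import sympy as sp

x, y = sp.symbols('x y', real=True)
p = sp.exp((x + 2*y)/2)                  # p**2 = exp(x)*exp(2*y), separable
F, G = p, sp.I/p                         # candidate generating pair (assumption)

dzbar = lambda f: sp.diff(f, x) + sp.I*sp.diff(f, y)

# generating-pair condition: Im(conjugate(F)*G) must not vanish
print(sp.simplify(sp.im(sp.conjugate(F)*G)))          # -> 1

# characteristic coefficients a, b solve F_zbar = a F + b conj(F)
# and G_zbar = a G + b conj(G), a 2x2 linear system
aa, bb = sp.symbols('a b')
sol = sp.solve([sp.Eq(dzbar(F), aa*F + bb*sp.conjugate(F)),
                sp.Eq(dzbar(G), aa*G + bb*sp.conjugate(G))], [aa, bb])
print(sp.simplify(sol[aa]), sp.simplify(sol[bb]))     # -> 0 and 1/2 + i here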
but according to the theorem [ th:00 ] , the -derivative of and of vanish identically , thus the expression ( [ pre:09 ] ) can be considered as the -antiderivative of .[ def:00 ] the formal power belonging to the generating pair , with formal degree , complex constant coefficient , center at , and depending upon , is defined according to the expression : where and are complex constants that fulfil the condition : the formal powers with higher degrees are calculated according to the recursive formulas : where notice the integral operators in the right side of the last expression are -antiderivatives .the formal powers posses the following properties : 1 .every is an -pseudoanalytic function . 2 .let , where and are real constants .the following relation holds let be an -pseudoanalytic function .then it can be expressed by means of the so - called taylor series in formal powers : since any -pseudoanalytic function accepts this expansion , this is an analytical representation for the general solution of the vekua equation ( [ pre:07 ] ) .let us consider the two - dimensional case of the equation ( [ int:00 ] ) : as it has been shown in several previous works ( see e.g. and ) , if can be expressed by means of a separable - variables function introducing the notations the equation ( [ int:00 ] ) will turn into the vekua equation for which the functions conform a generating pair . from ( [ eie:00 ] )it is easy to see that this pair is embedded into a generating sequence with period , because is separable - variables , according to theorem [ th:01 ] .the possession of an explicit generating sequence , allows the construction of the formal powers ( [ pre:10 ] ) , so we can approach any solution for ( [ eie:01 ] ) , which will be closely related with the solutions of ( [ int:00 ] ) according to the relations ( [ eie:00 ] ) . as a matter of fact ,one special and very important relation between the solutions of ( [ int:00 ] ) and of ( [ eie:01 ] ) was elegantly posed in , and this idea will play a central role in the present work . let us consider the set of formal powers corresponding to the generating pair ( [ eie:02 ] ) , and let be a bounded domain such that , but .then the set of functions valued on : conform a complete system for approaching solutions of the dirichlet boundary value problem corresponding to ( [ int:00 ] ) .the last statement implies that any boundary condition , imposed for the solutions of ( [ int:00 ] ) , can be approached according to the expression : where and are real constants .summarizing , when a separable - variables conductivity function is given within a bounded domain , and a boundary condition is imposed for the solution of ( [ int:00 ] ) , it will be always possible to construct a finite set of functions , subset of ( [ eie:03 ] ) , such that ^{2}dl<\epsilon , \label{eie:04}\ ] ] where and .one of the main objectives of this work is to contribute into the construction of a new theory for the electrical impedance tomography problem .hence it is natural to search for the mathematical tools that will allow us to fully apply the modern pseudoanalytic function theory into the analysis , e.g. , of medical images .this means that it is necessary to introduce interpolation methods that , given a set of conductivity values defined into a bounded domain on the plane , can reach separable - variables functions .one of the first approaches in this direction was posed in , and it was properly analysed in . 
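in practice the completeness statement is used by fitting the imposed dirichlet data with a finite linear combination of boundary-valued basis functions, in the sense of (eie:04); below is a minimal least-squares sketch (python/numpy) in which a placeholder trigonometric basis stands in for the real and imaginary parts of the formal powers, which would be computed numerically in the actual method.

import numpy as np

M = 128                                    # boundary collocation points
t = np.linspace(0, 2*np.pi, M, endpoint=False)
basis = np.column_stack([np.ones(M)] +
                        [f(k*t) for k in range(1, 11) for f in (np.cos, np.sin)])
u_bdry = np.exp(np.cos(t))                 # some imposed dirichlet data (example)
coef, *_ = np.linalg.lstsq(basis, u_bdry, rcond=None)
print("boundary misfit:", np.linalg.norm(basis @ coef - u_bdry))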
indeed, the last reference is completely dedicated to prove the following assessment .[ con:00 ] let be a function defined within a bounded domain , possessing discontinuities only of the first kind .then it is possible to approach by means of a piecewise separable - variables function of the form : .\end{array } \right.\ ] ] this separable - variables function can be employed for numerically constructing a finite set of formal powers of the form ( [ eie:03 ] ) , in order to approach solutions for the dirichlet boundary value problem of the two - dimensional electrical impedance equation ( [ int:00 ] ) , in the sense of ( [ eie:04 ] ) , when a boundary condition is imposed .a simplified illustration of the procedure for the construction of such piecewise function , is provided along in figure 1 , and a brief explanation of the central ideas will be provided now . for simplicity , we will consider hereafter the domain as the unitary circle , noticing that all postulates can be extended to a wide set of bounded domains , that are of special interest in many branches of applied mathematics , physics , and engineering .the first step is to divide the domain into a number of subsections , as shown in the figure [ fig : eie00a ] . along a strait line crossing every subsection , parallel to the -axis , we will collect a number of conductivity values , in order to introduce an interpolating function depending only upon the spatial variable .we will posses then a set of interpolating functions , one for every subsection , as shown in figure [ fig : eie00b ] .finally , we state that the conductivity function in every subregion will have the separable - variables form where is the number of the subregion , is the common -coordinate of all points collected in the -subregion , and is a positive real constant such that .the postulate remains a conjecture because not any formal extension of the theorem posed in , about the completeness of the set ( [ eie:03 ] ) , is known for the case of piecewise separable - variables functions within bounded domains .yet , the full set of examples presented in shows that the representation ( [ eie:05 ] ) is useful for solving the dirichlet boundary value problem of ( [ int:00 ] ) , since the numerical calculations succeed to approach the boundary conditions provided by known exact solutions , for a variety of conductivities that were both separable - variables and non separable - variables by definition .this property will be now extended for analysing conductivities upcoming from geometrical distributions , precisely as those employed in physical measurements . 
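a toy version of the strip construction behind the conjecture is sketched below (python/numpy); the example conductivity, the number of strips and the sampling along one horizontal line per strip are all illustrative assumptions standing in for the procedure of figure 1.

import numpy as np

sigma = lambda x, y: 1 + 0.5*np.exp(-5*((x - 0.2)**2 + y**2))  # example sigma
K = 8                                       # number of vertical strips
edges = np.linspace(-1, 1, K + 1)

def sigma_piecewise(x, y):
    # locate the strip containing x; inside it the interpolant depends on x only,
    # sampled along the common horizontal line y = y_k (here y_k = 0)
    k = min(np.searchsorted(edges, x, side='right') - 1, K - 1)
    xs = np.linspace(edges[k], edges[k + 1], 16)
    return np.interp(x, xs, sigma(xs, 0.0))

print(sigma_piecewise(0.1, 0.4), sigma(0.1, 0.4))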
to achieve this, we will consider a limit case of the conjecture [ con:00 ] that will result specially useful for these classes of conductivities .in behalf of simplicity , will be considered as the unitary circle , but it is possible to verify that the next statements can be generalized for a wide variety of bounded domains .[ pro:00 ] every conductivity function , defined within a bounded domain , and possessing only discontinuities of the first kind , can be considered the limit case of a piecewise separable - variables conductivity function of the form ( [ eie:05 ] ) , when the number of subsections , and the number of collected values at every subsection , tend to infinity .this is moreover , since where is the number of the subsection , it follows from theorem [ th:01 ] , that the generating sequence corresponding to this limit case , employed for numerically approaching the formal powers , will be periodic with period .let us consider that has been sectioned into subdomains , by employing the set of equidistant -axis parallel lines since coincides with the unitary circle , it can be described by the union of subdomains , defined according to the expressions,\ x^{2}+y^{2}\leq 1\right\rbrace . \nonumber\end{aligned}\ ] ] let us consider also the set of lines it is clear that and in consequence , for ; we will have that it immediately follows that this implies that every will be confined into a segment of a parallel line to the -axis .indeed , when the , can be represented by the set of the line segments beside , since the number of conductivity values , collected along the line within every , also tends to infinite , not any interpolation method will be required for obtaining the functions .they will simply coincide with the original values of the function over the elements of the set ( [ eie:08 ] ) .finally , from ( [ eie:07 ] ) , it follows that which , according to the last sentence of the theorem [ th:01 ] , will provoke that the corresponding generating sequence , employed to numerically approach some of the formal powers ( [ eie:03 ] ) , will be periodic with period .we now analyse a selected set of analytic conductivity functions for which exact solutions are known , in order to examine the effectiveness of the method posed in proposition [ pro:00 ] .thereafter , we will consider conductivity distributions upcoming from geometrical distributions , whose analytical representation is , in general , unknown , imposing certain boundary conditions that will help us to appreciate the behaviour of the technique in this special and important cases .a detailed description of the numerical methods used to approach the solution , that fulfil the boundary condition of every example , can be found in .since we are considering the unitary circle , and taking into account the validity of the expression ( [ eie:04 ] ) , that as a matter of fact is a lebesgue integral - type operator , let us introduce an inner product for the elements of the finite set according to the formula where , and it follows that we can obtain a set of orthonormal functions , such that we can approach an imposed boundary condition according to the expression where are real constant coefficients .notice that the apparent lose of one base function is because , by virtue of the definition [ def:00 ] , we have thus .it should be also noticed that the orthonormalization procedure has been performed considering , first , the subset of functions [ ns:04 ] followed by the subset of functions this remark is important for 
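the orthonormalization and the coefficient computation can be sketched as follows (python/numpy): gram-schmidt with respect to the boundary inner product, approximated by uniform quadrature on the unit circle, again with a trigonometric stand-in for the boundary values of the formal powers.

import numpy as np

M = 400
t = np.linspace(0, 2*np.pi, M, endpoint=False)
w = 2*np.pi/M                                  # quadrature weight
raw = [np.cos(k*t) for k in range(6)] + [np.sin(k*t) for k in range(1, 6)]

ortho = []
for f in raw:
    g = f.copy()
    for e in ortho:
        g -= (w*np.sum(g*e))*e                 # subtract projections
    ortho.append(g/np.sqrt(w*np.sum(g*g)))

u0 = np.cos(2*t) + 0.3*np.sin(t)               # imposed boundary condition (example)
coeffs = [w*np.sum(u0*e) for e in ortho]
approx = sum(c*e for c, e in zip(coeffs, ortho))
print("L2 boundary error:", np.sqrt(w*np.sum((u0 - approx)**2)))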
adequately examining the illustrations where the absolute values of the coefficients are displayed .let then the function is a particular solution of the electrical impedance equation ( [ int:00 ] ) .this case was selected because of the variations of conductivity that take place within the unitary circle .unfortunately , for avoiding the indetermination of the tangent functions contained into the particular solution ( [ ns:02 ] ) , we could only consider the case .an illustration of this conductivity is given in the figure 2 .within the unitary circle.,title="fig : " ] [ fig : ns00 ] the number of points located around , when imposing the boundary condition , does not necessarily has to coincide with the number of orthonormal functions , as it is shown in , where the collocation method is employed . nevertheless , on behalf of simplicity , this work will consider equally distributed points around the perimeter of the unitary circle , and base functions .the boundary condition will be obtained by evaluating the solution , presented in ( [ ns:02 ] ) , over this set of points .the absolute error is defined as the classical lebesgue norm where represents the solution ( [ ns:02 ] ) valued on the boundary , and .the result of this integral will be approached using the standard trapezoidal method over equally distributed points on the segment $ ] . the figure [ fig : ns01a ] displays a logarithmic plot of the absolute magnitudes corresponding to every coefficient used for approaching the boundary condition. we shall remember that the first coefficients correspond to orthonormal system obtained from the set ( [ ns:04 ] ) , whereas the remaining correspond to the set ( [ ns:05 ] ) .the figure [ fig : ns01b ] displays a comparison between the imposed boundary condition ( [ ns:02 ] ) valued on , drew in blue , and the approached solution , in red .it is not possible to detect any difference at first sight in this case . beside , the obtained error ,thus it is possible to assess the approach is adequate .the table [ tab : ns00 ] show the numerical values of some of the most relevant coefficients used in the reconstruction .it is important to pay attention to the number selected , since , in general , they are not presented in consecutive order ..[tab : ns00]values of the coefficients corresponding to the boundary value problem with . [ cols="^,^,^,^,^,^,^,^",options="header " , ]on of the most important contributions of the present work is the full opening of the path for applying the modern theory of pseudoanalytic functions , into the analysis of wide class of conductivity functions upcoming from physical problems , by virtue of the proposition [ pro:00 ]. tested in a variety of examples , the numerical analysis based upon the cited proposition , succeed to approach the imposed boundary conditions with considerable accuracy , presenting the highest divergence only in a kind of problem that is well known for its complexity when analysing boundary value problems for the two - dimensional electrical impedance equation .an immediate implication of these results is the possibility of analysing most classes of images corresponding to classical applications of the electrical impedance tomography , as it is the medical imaging clinical monitoring . 
in this precise direction, it is possible to assure that the search for patterns of change in the boundary electric potentials , when changes in the conductivity within the domain of interest are taking place , is completely viable by applying the techniques posed before .it is important to remark that all presented results can be extended without mayor complications to a wide class of bounded domains , beside the unitary circle , that on behalf of simplicity was considered in these pages . indeed , most problems of mathematical physics , closely related with the modern pseudoanalytic function theory , as those elegantly studied in , as well as in , where a special case of the fokker - planck equation is posed , could well be susceptible for this kind of analysis .a. bucio r. , r. castillo - perez , m.p .ramirez t. ( 2011 ) , _ on the numerical construction of formal powers and their application to the electrical impedance equation _ , 8th international conference on electrical engineering , computing science and automatic control , ieee catalog number : cfp11827-art , isbn:978 - 1 - 4577 - 1013 - 1 , pp .769 - 774 .h. m. campos , r. castillo - perez , v. v. kravchenko ( 2011 ) , _ construction and application of bergman - type reproducing kernels for boundary and eigenvalue problems in the plane _ , complex variables and elliptic equations , 1 - 38 .r. castillo - perez . , v. kravchenko , r. resendiz v. ( 2011 ) , _ solution of boundary value and eigenvalue problems for second order elliptic operators in the plane using pseudoanalytic formal powers _, mathematical methods in the applied sciences , vol .34 , issue 4 .v. v. kravchenko ( 2005 ) , _ on the relation of pseudoanalytic function theory to the two - dimensional stationary schrdinger equation and taylor series in formal powers for its solutions _ , journal of physics a : mathematical and general , vol .3947 - 3964 .m. p. ramirez t. ( 2010 ) , _ on the electrical current distributions for the generalized ohm s law , applied mathematics and computation _ , elsevier ( submitted for publication ) , available in electronic at http://arxiv.org m. p. ramirez t. , r. a. hernandez - becerril , m. c. robles g. ( 2011 ) , _ first characterization of a new method for numerically solving the dirichlet problem of the two - dimensional electrical impedance equation _ , available in electronic at http://arxiv.org
using a conjecture that allows one to approximate conductivity functions by piecewise separable - variables ones , the elements of the modern pseudoanalytic function theory are used , for the first time , to numerically solve the dirichlet boundary value problem of the two - dimensional electrical impedance equation , when the conductivity function arises from geometrical figures located within bounded domains .
numerical computation of miscible displacement in porous media has attracted much attention in recent decades due to its applications in reservoir simulations and exploration of underground oil ; see .the model describes the motion of a miscible fluid of two ( or more ) components in porous media , where the velocity of the fluid is given by darcy s law in the last equation , denotes the pressure of the fluid mixture , denotes the permeability of the porous media , and is the viscosity of the fluid depending on the concentration of the first component .the incompressibility of the fluid is described by where and are given injection and production sources .the concentration is governed by a convection - diffusion equation where denotes the porosity of the media and denotes the diffusion - dispersion tensor , which is given by .\end{aligned}\ ] ] in this formula , is a function of the local molecular peclet number , where , , , and are positive constants related to the porous media .it is straightforward to verify that for some positive constants , and .existence of weak or semiclassical solutions for the miscible displacement equations was studied in , and numerical analysis of the model has been done by many authors . in particular , a galerkin fem was studied by ewing and wheeler , and a galerkin - mixed fem was analyzed by douglas et al , where the galerkin method was used to solve the parabolic concentration equation and a mixed fem was applied to solve the elliptic pressure equation . for both methods ,a linearized semi - implicit euler scheme was used for the time stepping and optimal error estimates were presented roughly under the restriction . in , a characteristic method was applied to the parabolic concentration equation and the mixed fem was used to solve the elliptic pressure equation .optimal error estimates were established under the same condition , i.e. .more recently , a galerkin method combined with a post - process technique was studied in , an euler lagrange localized approximation method was studied in and a modified method of characteristics combined with mixed fem was studied in . in all these works ,error estimates were derived with certain restrictions on the grid ratio . to remove these restrictive conditions ,a new approach was introduced in to decouple the discretization errors from the temporal and spatial directions , and optimal error estimates of a galerkin - mixed fem was established without restriction on the grid ratio .the methodology of was later successfully applied to other nonlinear parabolic equations , such as the nonlinear schrdinger equation , the thermistor equations and the navier - stokes equations . however , all the analyses presented in these works rely on the global regularity of the `` time - discrete solution '' ( the solution of the linearized pdes ) , which requires the permeability and porosity to be globally smooth in the miscible displacement model . in engineering computations , due to the existence of fault , filling - type karst caves or complex geological composition , the permeability and porosity are often discontinuous across some interfaces .it is desirable to solve the miscible displacement equations with discontinuous permeability and porosity by stable and accurate numerical methods . 
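since the displayed formula for the tensor is garbled in the source, the sketch below (python/numpy) uses the standard peaceman form from the literature as an assumption; it also illustrates the two properties stated next, namely the ellipticity bound and the unboundedness of the tensor in the velocity.

import numpy as np

def dispersion(u, phi=0.2, d_m=1e-3, a_l=1.0, a_t=0.1):
    # D(u) = phi*(d_m*I + |u|*(a_l*E(u) + a_t*(I - E(u)))) with
    # E(u) = u u^T/|u|^2 the projection onto the flow direction
    # (one common convention; some authors scale the dispersive part differently)
    I = np.eye(len(u))
    s = np.linalg.norm(u)
    if s == 0:
        return phi*d_m*I
    E = np.outer(u, u)/s**2
    return phi*(d_m*I + s*(a_l*E + a_t*(I - E)))

u = np.array([1.0, 0.5])
D = dispersion(u)
print(np.linalg.eigvalsh(D))   # eigenvalues grow linearly in |u|: D is unbounded in u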
for this purpose ,numerical methods for flow in porous media with discontinuous permeability have been studied by many authors based on linear models .for example , see on the approximation of the elliptic pressure equation and see on the approximation of a parabolic pressure equation from the compressible model .convergence of finite element methods for general linear elliptic and parabolic interface problems can also be found in . in this paper , we study stability and convergence of fully discrete fems for the full model of miscible displacement in porous media , where the permeability and porosity are discontinuous across some interfaces .mathematically , we assume that the domain is partitioned into separated by the interfaces , , as shown in figure [ fig01 ] , and we consider the nonlinear elliptic - parabolic interface problem \left[c\right]=0,\quad \left[d(\u , x)\nabla c\cdot{\bf n}\right]=0 & \mbox{on}~~\gamma_m,\\[10pt ] d(\u , x)\nabla c\cdot{\bf n}=0&\mbox{on}~~\partial\omega,\\[8pt ] c(x,0)=c_0(x)~~ & \mbox{for}~~x\in\omega , \end{array } \right.\label{e - fuel-1}\\[8pt ] & \left\ { \begin{array}{ll } \displaystyle \nabla\cdot\u = q_i - q_p & \mbox{in}~~\omega_m,\\[8pt ] \displaystyle\u=-\frac{k(x)}{\mu(c)}\nabla p & \mbox{in}~~\omega_m,\\[10pt ] \displaystyle \left [ p \right]=0,\quad \left[\u\cdot{\bf n}\right]=0 & \mbox{on}~~\gamma_m,\\[8pt ] \displaystyle \u\cdot{\bf n}=0 & \mbox{on}~~\partial\omega .\end{array } \right .\label{e - fuel-3}\end{aligned}\ ] ] in each subdomain , the pressure , the velocity and the concentration are governed by the partial differential equations , and jump conditions are specified across the interfaces .the permeability and porosity are assumed to be constant in each subdomain but are discontinuous across the interfaces .clearly , the diffusion - dispersion tensor is an unbounded function of . due to this strong nonlinearity and the coupling of equations ,previous error estimates presented for the linear interface problems can not be extended here .a direct application of the traditional error estimates requires undesired restrictions on the grid ratio to control the numerical velocity . in order to avoid any restrictive conditions on the grid ratio, one has to use the error - splitting technique introduced in .however , due to the discontinuity of the permeability and porosity across the interfaces , the solution of - is not globally smooth .instead , they are at most piecewise smooth , as assumed in this paper . in this case , the analysis for the galerkin - mixed fem presented in does not work . in this paper , we show that a decoupled and linearized mixed fem is stable for the nonlinear interface problem by proving that the time - discrete solution is piecewise smooth enough in each subdomain separated by the interfaces .optimal error estimates are established without restriction on the grid ratio .we believe that the methodology of this paper , together with lemma [ lemhk0]-[lemhkp1 ] introduced here , can also be applied to other nonlinear parabolic interface problems in engineering and physics .suppose that the smooth domain is partitioned into , where is enclosed by a smooth interface for , and . for any integer and a subdomain ,we let and denote the usual sobolev spaces of functions defined on the domain ; see .let denote the abbreviations of and define as the subspace of equipped with the norm therefore , the functions in are in for each subdomain , but may not be continuous in the whole domain . 
to simplify the notations ,we define , and for any banach space and a function , we define the norm \displaystyle{\rm ess}\!\!\sup_{t\in ( 0,t)}\|g(t)\|_x , & p=\infty. \end{array } \right.\ ] ] let be a uniform partition of the time interval ] on , .let denote the raviart thomas mixed finite element subspace of introduced in , which coincides with an element of in each triangle .moreover , we require that the functions satisfy the boundary condition , , on each boundary edge and the jump condition \ , \chi_h\d s=0 1\leq k\leq r ] on an interface is similar .firstly , we take in ( [ erre - fem-1 ] ) and get \big)_{\gamma_m}\\ & \leq c\|p^n\|_{\overline h^3}\| \u^{n}_h - q_h \u^{n}\|_{l^2}h^{2}\end{aligned}\ ] ] which implies that in particular , we have and so , by the inverse inequality , as a result , there exists a positive constant such that when we have secondly , we proceed with a mathematical induction on which is already proved for . in the following ,we assume that it holds for and try to prove that it also holds for . since , by applying and gronwall s inequality, there exists a positive constant such that when we have which , together with , gives from we further derive that which implies that and so , by the inverse inequality , in view of the last inequality , there exists a positive constant such that when we have the mathematical induction on is completed , and proposition [ bdunh ] is proved with and .in this section , we prove theorem [ mainthm ] based on the boundedness of the fully discrete solution proved in proposition [ bdunh ] . similar as the last section, for any fixed integer we introduce the ritz projection of as the finite element solution of ( d(\u^{n+1},x)^{-1}(\overline { \bf w}_h^{n+1}-{\bf w}^{n+1}),{\bf v}_h ) = ( \overline c^{n+1}_h - c^{n+1},\nabla\cdot { \bf v}_h),&\forall~ { \bf v}_h\in { \bf h}_h^r , \end{array } \right.\end{aligned}\ ] ] with for the uniqueness of solution .then there exists a positive constant such that when the following inequalities hold : & \max_{0\leq n\leq n}(\|c^{n}-\overline c_h^{n}\|_{l^2 } + \|{\bf u}^{n}-q_h { \bf u}^{n}\|_{l^2 } + \|{\bf w}^{n}-\overline { \bf w}_h^{n}\|_{l^2 } ) \leq ch^{r+1 } , \\ & \max_{0\leq n\leq n } ( \|{\bf w}^{n}\|_{l^\infty } + \|\overline { \bf w}_h^{n}\|_{l^\infty } ) \leq c , \label{p - app3}\\ & \bigg(\sum_{n=0}^{n-1}\tau \|d_\tau ( c^{n+1}-\overline c_h^{n+1})\|_{l^2}^2 \bigg)^{1/2}\leq ch^{r+1 } .\label{p - apap3}\end{aligned}\ ] ] firstly , by choosing in and in , we derive that in order to make use of , we define as the solution of =0,\quad [ \widetilde g^n\cdot{\bf n}]=0 & \mbox{on}~~\gamma_m,\\ \nabla \widetilde g^n\cdot{\bf n}=0 & \mbox{on}~~\partial\omega , \end{array } \right.\end{aligned}\ ] ] and substitute into. then we obtain which together with implies that when , by the inverse inequality we have then we choose in and in . with the boundedness of , we derive that applying gronwall s inequality , there exists a positive constant such that when and we have taking in ( [ erre - fem-13 ] ) , we get \big)_{\gamma_m},\\ & \leq c\|p^n\|_{\overline h^{r+1}}\| \u^{n}_h - q_h \u^{n}\|_{l^2}h^{r+1 } , \end{aligned}\ ] ] which implies that to estimate , we define as the solution of =0,\quad [ \nabla g^{n}\cdot{\bf n}]=0 & \mbox{on}~~\gamma_m,\\ \nabla g^{n}\cdot{\bf n}=0 & \mbox{on}~~\partial\omega , \end{array } \right.\end{aligned}\ ] ] and substitute into . 
since , it follows that which gives finally , when and we see that - give from - we see that theorem [ mainthm ] holds .in this section , we prove lemma [ lemhk0 ] and lemma [ lemhkp1 ] , which were used in section [ section3 ] to prove the uniform piecewise regularity of the solution of the linearized pdes .we shall use the notation , with .before we prove lemma [ lemhk0 ] , we need to introduce some lemmas below .[ lemapp ] _ let , , .let and .suppose that satisfies the strong ellipticity condition and is a solution of \displaystyle [ \phi]=0,\quad \big[a\nabla \phi\cdot{\bf n}\big]=g & \mbox{on}~~\gamma_{2r } .\end{array } \right.\end{aligned}\ ] ] then where for any and nonnegative integer . _ _ proof _ to simplify the notations , we relax the dependence on in the generic constant , and set , . differentiatingthe equation with respect to for some fixed and denote , we obtain that \displaystyle [ \phi_j]=0,\quad \big[a\nabla \phi_j\cdot{\bf n}\big ] = \partial_jg-\big[\partial_ja\nabla \phi\cdot{\bf n}\big ] & \mbox{on}~~\gamma_{2r } .\end{array } \right.\end{aligned}\ ] ] where denote the upward unit normal vector on .if we define as a smooth cut - off function satisfying , in and outside , then times gives which reduces to and from we see that the last two inequalities imply .the above lemma can be easily extended to the case that is a smooth surface defined by for some smooth function .[ lemm73 ] _ let , , , and .suppose that satisfies that and assume that is a solution of \displaystyle [ \phi]=0,\quad \big[a\nabla \phi\cdot{\bf n}\big]=g & \mbox{on}~~\gamma_{2r}(\varphi ) .\end{array } \right . \end{aligned}\ ] ] then & \|\phi\|_{\overline h^3(s_r(\varphi ) ) } \leq c_r\big(\|f\| _ { \overline h^1(s_{2r}(\varphi ) ) } + \| g\|_{h^{3/2}(\gamma_{2r}(\varphi ) ) } + \|\phi\|_{\overline w^{2,4}(s_{2r}(\varphi ) ) } \big ) , \label{trmmp13}\end{aligned}\ ] ] where for any and nonnegative integer ._ _ proof _ let denote the coordinates transformation and . under this coordinates transformation , the problem is converted to \displaystyle [ \widetilde\phi]=0,\quad \big[\widetilde a(y ) \nabla_y \widetilde\phi\cdot{\bf n}\big]=\widetilde g(y ) & \mbox{on}~~\gamma_{2r } , \end{array } \right .\end{aligned}\ ] ] where , , and . by applying lemma [ lemapp ] to the problem , we get transforming back to the -coordinates , the last two inequalities imply -. _ proof of lemma [ lemhk0 ] _ without loss of generality, we can assume that the functions , and are sufficiently smooth so that the problem has a piecewise solution .if we can prove with a constant which does not depend on the extra smoothness of , and , then a compactness argument gives for the nonsmooth , and under the condition of lemma [ lemhk0 ] .first , multiplying the equation by , it is easy to derive the basic estimate : secondly , by a `` partition of unity '' , there exist a finite number of cylinders , , such that covers , .moreover , each only intersects one interface and in each , up to a rotation , the interface can be expressed as for some smooth function .then , by applying lemma [ lemm73 ] , we derive that let .it is well - known that , by the interior estimates of elliptic equations , there hold the last four inequalities imply that where can be arbitrarily small . finally , by choosing small enough and using the basic estimate ,the last two inequalities imply . 
integrating against , it is easy to derive that then , by setting , integrating against gives which further reduces to + \big\|\phi^{-1/2}\nabla\cdot\big(a^{n+1}\nabla \phi^{n+1}\big)\big\|_{\overline l^2}^2\\ & \leq c\|f^{n+1}\|_{\overline l^2}^2+c\|\nabla\cdot {\bf g}^{n+1}\|_{\overline l^2}^2 + c\big(d_\tau a^{n+1}\nabla\phi^{n}\cdot\nabla \phi^{n } ) -\big(\nabla\phi^{n},d_\tau{\bf g}^{n+1}\big)\\ & \leq c\|f^{n+1}\|_{\overline l^2}^2+c\|{\bf g}^{n+1}\|_{\overline h^1}^2 + c\|d_\tau a^{n+1}\|_{\overline l^2}\|\nabla\phi^{n}\|_{\overline l^4}^2 + \|\nabla\phi^{n}\|_{\overline l^2}\|d_\tau{\bf g}^{n+1}\|_{\overline l^2}d_{n,0}\\ & \leq c\|f^{n+1}\|_{\overline l^2}^2+c\|{\bf g}^{n+1}\|_{\overline h^1}^2 + c_\epsilon\|\nabla\phi^{n}\|_{\overline l^2}^2 + \epsilon(\|\nabla\phi^{n}\|_{\overline h^1}^2 + \|d_\tau{\bf g}^{n+1}\|_{\overline l^2}^2d_{n,0 } ) .\end{aligned}\ ] ] from lemma [ lemhk0 ] we know that \|_{h^{1/2}(\gamma)}\\ & \leq c\big\|\phi^{-1/2}\nabla\cdot \big(a^{n+1}\nabla \phi^{n+1}\big)\big\|_{\overline l^2 } + c\|{\bf g}^{n+1}\|_{\overline h^1}\end{aligned}\ ] ] the last two inequalities imply that + c^{-1}\|\phi\|_{\overline h^2}^2\\ & \leq c\|f^{n+1}\|_{\overline l^2}^2+c\|{\bf g}^{n+1}\|_{\overline h^1}^2 + c_\epsilon\|\nabla\phi^{n}\|_{\overline l^2}^2 + \epsilon(\|\nabla\phi^{n}\|_{\overline h^1}^2 + \|d_\tau{\bf g}^{n+1}\|_{\overline l^2}^2d_{n,0}).\end{aligned}\ ] ] summing up the last inequality for , we obtain which further reduces to .the proof of lemma [ lemhkp1 ] is completed .in this section , we present numerical examples to support our theoretical error analysis . the computations are performed with the software freefem++ .we solve the problem \left[c\right]=0,\quad \left[d(\u , x)\nabla c\cdot{\bf n}\right]=0 & \mbox{on}~~\gamma , \\[10pt ] d(\u , x)\nabla c\cdot{\bf n}=0&\mbox{on}~~\partial\omega,\\[8pt ] c(x,0)=c_0(x)~~ & \mbox{for}~~x\in\omega , \end{array } \right.\\[8pt ] & \left\ { \begin{array}{ll } \displaystyle \nabla\cdot\u= g & \mbox{in}~~\omega_0\cup \omega_1,\\[8pt ] \displaystyle\u=-\frac{k(x)}{\mu(c)}\nabla p & \mbox{in}~~\omega_0\cup \omega_1,\\[10pt ] \displaystyle \left [ p \right]=0,\quad \left[\u\cdot{\bf n}\right]=0 & \mbox{on}~~\gamma , \\[8pt ] \displaystyle \u\cdot{\bf n}=0 & \mbox{on}~~\partial\omega , \end{array } \right.\end{aligned}\ ] ] in the unit ball which is separated by the interface into two subdomains for simplicity , we choose , and choose the permeability and porosity which are smooth in each subdomain but discontinuous across the interface .let so that . the functions , and the initial data are chosen corresponding to the exact solution 0 & \mbox{for}~~x\in \omega\backslash\omega_k,\\ \end{array}\right .\\[5pt ] & c(x , t)=0.5 + 50\ , p(x , t ) \cos(0.4\ , x_1)\sin(0.4 \ , x_2)\sin(4 t ) , \end{aligned}\ ] ] which satisfy the jump conditions on the interface and the boundary conditions on , while and are discontinuous across the interface . , and .] , and . ] , and . ]we partition the domain into quasi - uniform triangles with nodes on the boundary and nodes on the interface with , as shown in figure [ figure1 ] . for simplicity , we set and solve the system by the proposed method using the raviart thomas linear finite elements up to the time . to test the convergence rate of the proposed method, we solve the problem for different and , and present the errors of the numerical solutions in table [ extab1 ] , where the convergence rates of and are calculated by the formulas at the finest two meshes . 
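the convergence rates reported in table [extab1] are obtained from the errors on the two finest meshes via rate = log(e_1/e_2)/log(h_1/h_2); a one-line check (python, with hypothetical error values) is given below.

import math

h1, h2 = 1/16, 1/32
e1, e2 = 3.1e-3, 7.9e-4        # placeholder L2 errors, not from the table
print("observed order:", math.log(e1/e2)/math.log(h1/h2))   # ~2 for r = 1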
from table[ extab1 ] we see that the convergence rate of the numerical solution is about second order , which is consistent with our numerical analysis . 0.1 in -0.2 in 0.1 in [ extab1 ] .errors of the linearized mixed fem with . [ cols="<,<,^,^,^,^",options="header " , ]we have studied the convergence of a linearized mixed fem for a nonlinear elliptic - parabolic interface problem from the model of incompressible miscible flow in porous media .we showed that the solution of the linearized pdes is piecewise uniformly regular in each subdomain separated by the interfaces if the solution of the original problem is piecewise regular , and established optimal - order error estimates for the fully discrete solution without restriction on the grid ratio .the analysis presented in this paper , together with lemma [ lemhk0][lemhkp1 ] , may be extended to other nonlinear parabolic interface problems with other time - stepping schemes .b. amaziane and m. el ossmani , _ convergence analysis of an approximation to miscible fluid flows in porous media by combining mixed finite element and finite volume methods _ , numer .methods partial differential eq . , 24 ( 2008 ) , pp .799832 .h. chen , z. zhou and h. wang , _ an optimal - order error estimate for an -galerkin mixed method for a pressure equation in compressible porous medium flow _, int . j. numer .modeling , 9 ( 2012 ) , pp .132 - 148 .j. douglas , jr . , r. ewing and m.f .wheeler , _ a time - discretization procedure for a mixed finite element approximation of miscible displacement in porous media _ , rairo anal ., 17 ( 1983 ) , pp .249 - 265 .ewing and j. shen , _ superconvergent error estimates for a class of discontization methods for coupled first - order system with discontinuous coefficients _ , numer .methods partial diff .15 ( 1999 ) , pp .267 - 283 .raviart and j.m .thomas , _ a mixed finite element method for order elliptic problems _ , mathematical aspects of finite element methods , lecture notes in math .606 , springer - verlag , 1977 , pp .292 - 315 .b. rivire , m.f .wheeler and v. girault , _ improved energy estimates for interior penalty , constrained and discontinuous galerkin methods for elliptic problems .part i _ , computational geosciences , 3 ( 1999 ) , pp .337 - 360 .t. sun and y. yuan , _ an approximation of incompressible miscible displacement in porous media by mixed finite element method and characteristics - mixed finite element method _ , j. comput ., 228 ( 2009 ) , pp .391 - 411 .
in this paper , we study the stability and convergence of a decoupled and linearized mixed finite element method ( fem ) for incompressible miscible displacement in porous media whose permeability and porosity are discontinuous across some interfaces . we show that the proposed scheme has an optimal - order convergence rate unconditionally , without restriction on the grid ratio ( between the time - step size and the spatial mesh size ) . previous works all required certain restrictions on the grid ratio except for the problem with globally smooth permeability and porosity . our idea is to introduce an intermediate system of elliptic interface problems , whose solution is uniformly regular in each subdomain separated by the interfaces and whose finite element solution coincides with the fully discrete solution of the original problem . in order to prove the boundedness of the fully discrete solution , we study the finite element discretization of the intermediate system of elliptic interface problems .
reaction - diffusion equations with spatially distributed hysteresis were first introduced in to describe the growth of a colony of bacteria ( salmonella typhimurium ) and explain emerging spatial patterns of the bacteria density . in , numerical analysis of the problemwas carried out , however without rigorous justification .first analytical results were obtained in ( see also ) , where existence of solutions for multi - valued hysteresis was proved .formal asymptotic expansions of solutions were recently obtained in a special case in .questions about the uniqueness of solutions and their continuous dependence on initial data as well as a thorough analysis of pattern formation remained open . in this paper , we formulate sufficient conditions that guarantee existence , uniqueness , and continuous dependence of solutions on initial data for systems of reaction - diffusion equations with discontinuous spatially distributed hysteresis .analogous conditions for scalar equations have been considered by the authors in .denote , where .let and ( ) be closed sets .we assume throughout that , , .we consider the system of reaction - diffusion equations with the initial and boundary conditions here is a positive - definite diagonal matrix ; is a hysteresis operator which maps an initial configuration function ( ) and an input function to an output function . as a function of , takes values in a set ( ) .now we shall define this operator in detail .let be two disjoint smooth manifolds of codimension one without boundary ( hysteresis `` thresholds '' ) . for simplicity, we assume that they are given by and with and , respectively , where and are -smooth functions ( in the general situation , atlases can be used ) . denote , , .assume that and ( fig .[ fig1 ] ) .next , we introduce locally hlder continuous functions ( _ hysteresis `` branches '' _ ) we fix and denote by the space of functions continuous on the right in . for any ( _ initial configuration _ ) and ;\mathcal u) ] .then if , if , if ; for ] by the following rule ( cf . ) . for any initial configuration and input ;\mathcal u) ] ( _ output _ ) is given by where is the configuration function defined above .assume that the initial configuration and the input function depend on spatial variable .denote them by and , where ;\mathcal u) ] is the integer part of .introduce the anisotropic sobolev spaces with the norm and the space of -valued functions continuously differentiable on ] , we have or , equivalently , .3 . for ] respect to a spatial configuration if , for every fixed ] _ if there is such that , for ] .[ conddissipativity - f ] there is a parallelepiped and , for each sufficiently small , there is a parallelepiped and a locally lipschitz continuous function such that 1 . converges to uniformly on compact sets in as , 2 . at each point , , the vector points strictly inside .3 . at each point , , the vector points strictly inside for all . to formulate the assumption on , we fix satisfying condition [ conddissipativity - f ] and set [ conddissipativity - g ] for any , there is a compact such that and the cauchy problem has a solution satisfying whenever it follows from ( * ? ? ?* theorem 1 , p. 111 ) that system has a unique solution for a sufficiently small .condition [ conddissipativity - g ] additionally guarantees the absence of blow - up . in particular, the uniform boundedness of holds if , where and are bounded on compact sets ( see example [ exbacteria ] ) .however , if , one must additionally check that never leaves . 
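the simplest instance of the hysteresis operator defined above is a scalar non-ideal relay with two constant branches; a minimal sketch (python; the thresholds, branch labels and test input are illustrative) is given below.

import math

def relay(inputs, alpha=-1.0, beta=1.0, state=1):
    # the configuration stays on branch 1 until the input crosses the upper
    # threshold beta, then on branch 2 until it crosses the lower threshold alpha
    out = []
    for u in inputs:
        if state == 1 and u >= beta:
            state = 2
        elif state == 2 and u <= alpha:
            state = 1
        out.append(state)
    return out

u = [2*math.sin(0.1*k) for k in range(100)]
print(relay(u))   # branch switches lag behind the threshold crossings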
to fulfill condition [ conddissipativity - g ] , one could alternatively assume the existence of invariant parallelepiped for ( similarly to condition [ conddissipativity - f ] ) .[ exbacteria ] the hysteresis operator and the right - hand side in the present paper apply to a model describing a growth of a colony of bacteria ( salmonella typhimurium ) on a petri plate ( see ) .let and denote the concentrations of diffusing buffer ( ph level ) and histidine ( nutrient ) , respectively , while denote the density of nondiffusing bacteria .these three unknown functions satisfy the following equations : supplemented by initial and no - flux ( neumann ) boundary conditions . in , are given constants and is the hysteresis operator . in this example, we have , , .the hysteresis thresholds and are the curves on the plane given by and , respectively , where are some constants ( fig .[ fig1 ] ) ; the hysteresis `` branches '' are given by functions and .in what follows , we assume that conditions hold .[ thlocalexistence ] there is a number such that 1 .there is at least one solution of problem , in 2 .any solution in is transverse and preserves spatial topology .[ tcont ] let be a transverse topology preserving solution of problem , in for some .then it can be continued to an interval , where has the following properties .1 . for any , the pair is a transverse solution of problem , in .either , or and is a solution in , but is not transverse with respect to .[ tcontdepinitdata ] assume the following . 1 .there is a number such that problem , with initial functions and initial configuration defined by its discontinuity point admits a unique transverse topology preserving solution in for any .2 . let , , , be a sequence of other initial functions such that , as .3 . let , , be a sequence of other initial configurations defined by their discontinuity points similarly to and as .then , for all sufficiently large , problem , with initial data has at least one transverse topology preserving solution .each sequence of such solutions satisfies }\to 0,\\ \sup\limits_{t\in[0,t]}\left(\|v_n(\cdot , t)- v(\cdot , t)\|_{l_q((0,1);{{\mathcal v}})}+\|v_{nt}(\cdot , t)- v_t(\cdot , t)\|_{l_q((0,1);{{\mathcal v}})}\right)\to 0 \end{gathered}\ ] ] as where and are the respective discontinuity points of the configuration functions and .if one a priori knows that all are transverse on some interval \subset[0,t_{max}) ] even if is not topology preserving on ] and then satisfying condition [ conddissipativity - g ] .the following result is straightforward . [ lab ] 1 .let , ] .then ] 2 .if ] , , then }\le \|a_1-a_2\|_{c[0,t]}. ] such that ( ) and ( ] and assume that for this .then , using and omitting the arguments of the integrands , we have using the hlder continuity and the boundedness of for and integrating with respect to from to , we complete the proof .now we introduce sets that `` measure '' the spatial transversality .denote by , , the set of triples such that , , is of the form , and the following hold : 1 . ] , 3 . for ] and ] . 1. the equation for ] , ] , ] , conclusions ( 1 ) and ( 2 ) from lemma hold for , for the corresponding `` root '' function , for the configuration function of the hysteresis , for its discontinuity point , and for instead of .furthermore , .let us prove the first assertion . 1.1 .fix in lemma [ loneroot ] such that .fix from lemma [ ltnp0 ] .set , where is the embedding constant such that . 
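returning to the bacteria model of example [exbacteria], here is a crude explicit finite-difference sketch (python/numpy) of a scalar caricature of such a system: one diffusing variable coupled to a per-point relay. the right-hand side, thresholds and all coefficients are illustrative stand-ins, and the time step obeys the usual stability restriction dt <= dx^2/(2 d).

import numpy as np

n, d, dx, dt = 200, 1.0, 1.0/200, 1e-5
alpha, beta = 0.4, 0.6
u = np.linspace(0.0, 1.0, n)          # initial data transverse to the thresholds
w = np.where(u < beta, 1.0, -1.0)     # initial relay configuration (assumption)
for _ in range(2000):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]          # crude neumann conditions
    u = u + dt*(d*lap + w - u)                 # f(u, w) = w - u (illustrative)
    w = np.where(u >= beta, -1.0, np.where(u <= alpha, 1.0, w))  # relay update
print(u.min(), u.max(), int((w == 1.0).sum()))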
set , where are defined in lemmas [ loneroot ] , [ ltnp0 ]let be the set of functions such that , the set is a closed convex subset of the banach space endowed with the norm given by the left - hand side in .similarly , we define construct a map .take any and define and according to lemma [ loneroot ] . then define by and , using this , define by .finally apply lemma [ ltnp0 ] and obtain a solution of auxiliary problem .we now define .the operator is continuous .indeed , it is not difficult to check that the mapping is continuous from to ] .then or is not transverse with respect to , where is given by .\1 . it suffices to prove the theorem for a sufficiently small time interval . since , it is easy to show that there is such that for all .hence , by theorem [ thlocalexistence ] , there is ] and }a_n(s) ] , respectively .hence , we can choose subsequences of and ( which we denote and again ) such that }\to 0,\quad n\to\infty \label{eq1.1 - 9}\end{gathered}\ ] ] for some function with and some ] .due to , , and lemma [ lab ] , we have }\to 0,\quad n\to\infty , \\ \label{eqcontdepw } w(\xi_0(x),\hat u(x,\cdot))(t)=\begin{cases } w_1(\hat u(x , t ) ) , & 0\le x\le \hat b(t),\\ w_2(\hat u(x , t ) ) , & \hat b(t)<x\le 1 . \end{cases}\end{gathered}\ ] ] \3 .now we show that }\left(\|v_n(\cdot , t)-\hat v(\cdot , t)\|_{l_q((0,1);{{\mathcal v}})}+\|v_{nt}(\cdot , t)-\hat v_t(\cdot , t)\|_{l_q((0,1);{{\mathcal v}})}\right)\to 0,\quad n\to\infty,\ ] ] for some .take an arbitrary .it follows from the assumptions of the theorem , from , , , and from lemma [ lu0b0v0 ] that provided are large enough .estimates , the second equation in , and the local lipschitz continuity of yield where does not depend on .hence , by gronwall s inequality , where does not depend and ] ,\\ \label{lem4.0.5 } \gamma_{\beta}(u(x , t ) ) < 0 , \quad x\in[0,b(t)],\end{gathered}\ ] ] and the analogous inequalities hold for . \1 .denote , .the functions , satisfy the equations and the zero boundary and initial conditions , where obviously , .the function can be represented via the green function of the heat equation with the neumann boundary conditions : let us prove this inequality for the function , assuming that .( the cases of and are treated analogously . ) since is locally lipschitz , where and the constants below do not depend on ] .hence , the values and are separated from 0 . therefore , using condition [ conduniq ] , we obtain 2.2 .boundedness of and for and lemma [ lab ] imply }\le k_5 \|a-\hat a\|_{c[0,t]}.\ ] ] using , we obtain for any $ ] the inequalities where is a respective lipschitz constant for and hence does not depend on .inequalities and yield . a mathematical model for bacterial growth described by a hysteresis operator .recent advances in nonlinear analysis proceedings of the international conference on nonlinear analysis .hsinchu , taiwan , 2006 .green s matrices of boundary value problems for petrovski parabolic systems of general form , ii , mat ., * 114(156 ) * , no .4 , 523565 ( 1981 ) ; english transl . in math .ussr sbornik , * 42 * 461489 ( 1982 ) . : _ pavel gurevich _ , free university berlin , arnimallee 3 , berlin , 14195 , germany ; peoples friendship university , mikluho - maklaya str. 
6 , moscow , 117198 , russia ; e - mail : ` gurevichp.com ` ._ sergey tikhomirov _ , chebyshev laboratory , saint - petersburg state university , 14th line of vasilievsky island , 29b , saint - petersburg , 199178 , russia ; max planck institute for mathematics in the sciences , inselstrasse 22 , 04103 , leipzig , germany ; e - mail : ` sergey.tikhomirov.com ` .
we study systems of reaction - diffusion equations with discontinuous spatially distributed hysteresis in the right - hand side . the input of the hysteresis is given by a vector - valued function of space and time . such systems describe hysteretic interaction of non - diffusive ( bacteria , cells , etc . ) and diffusive ( nutrient , proteins , etc . ) substances , leading to the formation of spatial patterns . we provide sufficient conditions under which the problem is well posed in spite of the discontinuity of the hysteresis . these conditions are formulated in terms of the geometry of the manifolds defining the hysteresis thresholds and of the graph of the initial data .
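the switching rule of the spatially distributed relay used in the bacterial - growth model above is easy to make concrete in code . the sketch below is a minimal pointwise update of the configuration variable , assuming straight - line threshold curves u = a_i v + b_i and simple branch functions ; every constant and branch shape here is an illustrative assumption , not data from the paper .

```python
import numpy as np

# minimal sketch of the pointwise relay: xi in {1, 2} is the configuration,
# and W equals branch H1 or H2 according to xi. the threshold curves are
# assumed straight lines u = a_i*v + b_i; all constants below are hypothetical.
A1, B1 = 1.0, 0.2            # hypothetical curve gamma_alpha: u - A1*v - B1 = 0
A2, B2 = 1.0, 0.8            # hypothetical curve gamma_beta:  u - A2*v - B2 = 0
H1 = lambda u: 1.0           # hypothetical branch on configuration 1
H2 = lambda u: 0.1 * u       # hypothetical branch on configuration 2

def relay_update(xi, u, v):
    """one time step of the non-ideal relay at a single spatial point."""
    if xi == 1 and u - A2 * v - B2 >= 0:
        xi = 2                             # crossed gamma_beta: switch branch
    elif xi == 2 and u - A1 * v - B1 <= 0:
        xi = 1                             # crossed gamma_alpha: switch back
    return xi

def W(xi, u):
    return H1(u) if xi == 1 else H2(u)
```

tracking xi ( x , t ) pointwise in x in this way is what makes the hysteresis spatially distributed : neighbouring points may sit on different branches at the same time .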
a network , or graph , consists of a set of nodes , from which selected pairs are connected by edges .such mathematical constructions provide a useful description for systems of interacting objects .more specifically , network concepts are used in the analysis of chemical and metabolic systems as well as food webs and social networks . in recent years , there has been much progress in the analysis of the topology of these networks .the network topology can be characterized by features such as the number of nodes , , and the average degree ] , is obtained by averaging over the distance between all pairs of nodes in the network .the parameters defined above were evaluated for random graphs and their dependence on and was found .however , the analysis of realistic networks shows that they are very different from random graphs . in realistic networksit is common to find surprisingly low average path lengths , and relatively high clustering coefficients . in many casesthe degree distribution follows a power law form , rather than the poisson distribution which is the signature of random networks .these features were found to appear in social networks , the world wide web , ecological networks , and metabolic networks . while the topological properties of realistic networks have been elucidated , the implications on the functionality of these networks are not fully understood . the small average path length and the high clustering of many realistic networks ,render them as small world networks . at first glance ,the small world characteristics imply that realistic networks function as highly connected systems .indeed , one expects that if the distance between two nodes is small , the correlation between them will be strong .for instance , in the case of a metabolic network , the concentrations of interacting proteins will strongly depend on each other .a perturbation in the concentration of one protein is likely to affect the concentration of the other .this might lead to the conclusion that small world networks are highly susceptible to local perturbations , as almost all the nodes are just a short distance away .the problem with this topological analysis , is that it does not relate to the specific function of a given network or to the strength of the interactions between its nodes .consider , for instance , a metabolic network and an ecological network sharing the same topology . in what sensecan these two networks be regarded as similar networks ? even if the two have the same topological structure , the nature of their functional behavior is fundamentally different .the process of predation may lead to different behavior than the process of chemical reaction between proteins .even two metabolic networks may function differently if the interaction strengths in one network are higher than in the other . in this paper, we present a method for obtaining the correlation matrix of a given network .the elements of this matrix provide the magnitudes of the correlations between pairs of nodes in the network . 
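the topological quantities just listed ( number of nodes , average degree , clustering coefficient and average path length ) can be evaluated with standard library calls ; the sketch below uses networkx , with an erdos - renyi random graph as an assumed test case .

```python
import networkx as nx

# quick check of the listed topological quantities on a random graph;
# the erdos - renyi test case and its parameters are illustrative assumptions.
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=1)
n_nodes = G.number_of_nodes()
avg_degree = 2 * G.number_of_edges() / n_nodes
clustering = nx.average_clustering(G)
path_len = nx.average_shortest_path_length(G)   # requires a connected graph
print(n_nodes, avg_degree, clustering, path_len)
```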
in certain casesthe matrix can be used to characterize some of the global features of the network s functionality .for instance , it can be used to identify domains of high correlations versus domains of low correlations .another use of the correlation matrix is in quantifying the connectivity of a network in a way that accounts both for its topology and for the specific processes taking place between its nodes .this method , referred to as the network correlation function ( ncf ) method , enables us to determine whether a topological small world ( tsw ) network will also be a functional small world ( fsw ) network .a network will be regarded as an fsw network if the correlations between its nodes are typically high , and thus the state of one node is highly dependent on that of the others . herewe apply the method to metabolic networks with various topologies and different interaction strengths . in these networks ,each node represents a reactant , and is assigned a dynamical variable that accounts for the concentration of this reactant .the time dependence of these concentrations is described by a set of rate equations .the equations include terms that describe the interaction processes in the given network .they account both for the topology and for the functionality of the network . from the solution of the rate equations under steady state conditionsone can extract the correlation between each pair of nodes . in certain cases ,networks are found to have a typical correlation length .if the distance between two nodes is much higher than this length , the correlation between them is negligible . to quantify the connectivity of the network ,one compares the correlation length with the average path length . in case that the average path length is smaller than the typical correlation length, the network will be considered as an fsw network . in this case, local perturbations will have a global effect on the network .the fsw network will thus be regarded as strongly connected . on the other hand ,if the average path length is larger than the typical correlation length , the network will be considered as weakly connected .the paper is organized as follows . in sec .ii we present the methodology , and demonstrate its applicability to metabolic networks . in sec .iii we analyze some simple , analytically soluble networks , and in sec .iv we present a computational analysis of a set of more complex networks , culminating in an example of a scale free network .the results are summarized and discussed in sec .below we present the ncf method for evaluating the connectivity of interaction networks .for concreteness , we focus on the specific case of metabolic networks .it is straightforward to generalize the method to other types of networks .consider a metabolic network consisting of different molecular species , , .the generation rate of the molecules is ( s ) .once a molecule is formed it may undergo degradation at a rate ( s ) .certain pairs of molecules , and , may react to form a more complex molecule ( ) .in general , the product molecules may be reactive and represented by another node in the network . 
for simplicity , in the analysis below, we assume that the molecules are not reactive and thus do not play a further role in the network .we also limit the discussion to the case in which a molecular species does not react with itself , namely reactions of the form are excluded .the reaction rate between the and molecules is given by the _ reaction rate matrix _ .its matrix elements are ( s ) , where .note that for non - interacting pairs of molecules .the _ network topology matrix _, , is also a dimensional matrix , which is defined as follows : = 1 ] otherwise .let ] provides some information as to the connectivity of the network , but only in the topological sense . in order to account for the functionality of the network we consider the rate equations , which take the form where is the time dependent concentration of the molecule .the first term on the right hand side of eq . ( [ eq : metabolic_rate ] ) accounts for the generation of molecules .the second term accounts for the process of degradation , and the third term accounts for reactions between molecules .the steady state ( ss ) solution of the rate equations , , can be obtained by setting the left hand side of eq .( [ eq : metabolic_rate ] ) to zero .one obtains where is the effective degradation rate .our goal is to characterize the correlations between the different species around the steady state condition . roughly speaking, we are asking the following question : while at steady state , to what extent does a small perturbation in the concentration of the species affect the concentration of the species ? to this endwe define the _ first order correlation matrix _ as = \left .{ \frac{\partial n_i}{\partial n_j } } \right| _ { \rm ss } , \label{eq:2ni_2nj}\ ] ] which , using eq .( [ eq : metabolic_ss ] ) takes the form = - { \frac { a_{ij } g_i}{({w_i^{\rm eff}})^2}}. \label{eq : cij}\ ] ] note that the elements of the first order correlation matrix are non - zero only if the species and directly interact with each other .topologically , this means that the matrix element ] .indirect correlations between species that are connected via a third species are not accounted for ( hence the term first order correlation matrix ) . to account for indirect correlations, one has to compute the complete correlation matrix = \left .{ \frac{dn_i}{dn_j}}\right|_{\rm ss}. \label{eq : dni_dnj}\ ] ] clearly , the diagonal terms of this matrix must satisfy for . for the off - diagonal terms , ,one can write in matrix form , these equations become = 1 & & \\\mat[g]ij = \sum_{k=1}^{j}{\mat[c]ik \mat[g]kj } & \,\ , & ( i \ne j ) . \end{array } \right . \label{eq : gij}\end{aligned}\ ] ] eq .( [ eq : gij ] ) is a set of coupled linear equations .their solution provides the complete correlation matrix , ] , between them .the rate of this decay provides the correlation length . to obtain the correlation function we identify all pairs of species and that are separated by a distance from each other .we then average the magnitude of the correlations , | ] was used .this is because certain pairs of species and may be positively correlated , and others may be negatively correlated . in any case , the focus here is merely on the strength of their mutual correlations and not on the sign of these correlations . 
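the steady - state condition and the two correlation matrices above translate directly into a small linear - algebra routine . in the sketch below the sum in the equation for the complete matrix is taken over all k ( the displayed upper limit of the sum appears to be an extraction artifact ) , and the steady state is found by damped fixed - point iteration ; both choices are assumptions , consistent with the linear - network special case treated below .

```python
import numpy as np

def steady_state(g, w, A, iters=500):
    """damped fixed-point iteration for n_i = g_i / (w_i + sum_j a_ij n_j)."""
    n = g / w
    for _ in range(iters):
        n = 0.5 * n + 0.5 * g / (w + A @ n)
    return n

def correlation_matrices(g, w, A):
    """first-order C_ij = -a_ij g_i / (w_i^eff)^2, then the complete matrix
    G from G_ii = 1, G_ij = sum_k C_ik G_kj (i != j), solved column by column."""
    n = steady_state(g, w, A)
    w_eff = w + A @ n
    C = -A * (g / w_eff ** 2)[:, None]
    N = len(g)
    G = np.eye(N)
    for j in range(N):
        idx = [i for i in range(N) if i != j]
        M = np.eye(N - 1) - C[np.ix_(idx, idx)]
        G[idx, j] = np.linalg.solve(M, C[idx, j])   # rhs carries the k = j term C_ij
    return C, G
```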
to obtain the correlation length, one may fit the function to an exponent of the form .the distance is the correlation length .it approximates the distance within which strong correlations between different species are maintained .this distance is determined by the dynamical processes and by the characteristic rate constants of a specific network .it thus accounts not only for the topology of the system , but also for its functionality .finally , we define the _ connectivity _ of a network as }. \label{eq : eta}\ ] ] in the limit where is much greater than the average path length , most of the nodes are within the correlation length from one another , and the components of the network are highly correlated .the concentrations of different species are strongly dependent on each other , and the network is an fsw network . correspondingly , one obtains that . in casethat is much smaller than the average path length , the effect of a perturbation in the concentration of one species decays on average before it reaches most of the other species .perturbations are thus local , and the connectivity of the network is said to be low .while topologically , such a network might be considered a small world network , functionally it is a loosely connected network .to demonstrate the ncf method we now refer to a set of simple examples , which are analytically soluble . consider a linear metabolic network of species ( )the species , , reacts with its nearest neighbors , namely and .this network is shown in fig .[ fig1_linear_network ] . for simplicity, we take all the reacting species to have identical parameters , namely and for . also , in case that , and otherwise . taking the limitin which the number of species is very large , we can avoid the complexities related to the boundaries of the network . under these conditions , the steady state solution for all the species is the same , enabling us to omit the index from the steady state concentrations .the reaction rate matrix for this network is for a linear network , the average distance between pairs is = ( j+1)/3 ] scales linearly with .the clustering coefficient for this network is zero .thus , from the topological point of view , the linear network can not be considered a small world .the rate equation for the linear metabolic network is leading to the steady state solution the first order correlation matrix takes the form where [ eq . ( [ eq : cij ] ) ] . using eq .( [ eq : ss_linear ] ) , one obtains since the parameters , and are positive , it is easy to see that takes values only in the range .this fact will be used in the analysis below . to obtain the complete correlation matrix, one has to solve eq .( [ eq : gij ] ) . in the case of a linear metabolic networkit takes the form = 1 & & \\\mat[g]ij = q(\mat[g]{i+1}{,j } + \mat[g]{i-1}{,j } ) & \,\ , & ( i \ne j ) . \end{array } \right .\label{eq : gij_linear}\end{aligned}\ ] ] based on the symmetry of the problem , it is clear that for a given choice of the parameters , the correlation between the species and depends only on the distance between them . using this indexation , eq .( [ eq : gij_linear ] ) becomes & \,\ , & ( d \ge 1 ) , \end{array } \right .\label{eq : gd_linear}\end{aligned}\ ] ] where is the correlation matrix term for pairs of species and where . since the correlation is expected to decay exponentially as a function of the distance between the nodes , we search for a solution of the form . 
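a minimal sketch of this fitting step , assuming a connected graph whose nodes are labelled 0..n-1 in the same order as the rows of the correlation matrix , and distance - averaged correlations that stay strictly positive so the logarithm is defined :

```python
import numpy as np
import networkx as nx

def connectivity(G_corr, graph):
    """bin |G_ij| by shortest-path distance, fit g(d) ~ exp(-d / d0) on a
    log scale, and return eta = d0 / <d>. assumes a connected graph with
    nodes 0..N-1 ordered like the rows of G_corr."""
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    by_d = {}
    for i in graph.nodes():
        for j in graph.nodes():
            if i != j:
                by_d.setdefault(dist[i][j], []).append(abs(G_corr[i, j]))
    d = np.array(sorted(by_d))
    g_of_d = np.array([np.mean(by_d[k]) for k in d])
    slope, _ = np.polyfit(d, np.log(g_of_d), 1)    # log g = -d/d0 + const
    return (-1.0 / slope) / nx.average_shortest_path_length(graph)
```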
inserting this expression into eq .( [ eq : gd_linear ] ) we obtain two possible solutions of the form where . since the parameter is limited to the range , the parameter can take values only in the range .the physically relevant solution must satisfy the condition that the correlation between very distant species will vanish .this constraint requires that . to satisfy this condition for , one has to choose the solution where the square root is subtracted .the result is where .the correlation between species as a function of the distance between them is thus .\ ] ] the pre - factor of the exponent accounts for the fact that since , the correlations between directly interacting species are negative .thus , pairs of species which are next - nearest neighbors in the network tend to have positive correlations between them .the correlation function [ eq . ( [ eq : g_of_d ] ) ] is the absolute value of , which comes to be where is the correlation length of the network .it is interesting to examine the limit in which . in this limitthe correlations are weak and the typical correlation length converges to .the correlation function approaches . in this limit ,the correlation between a pair of species is dominated by the shortest path between them . for each step along that path ,the correlation is multiplied by a factor of .thus , the magnitude of the correlation between a pair of species at distance from each other is approximated by .one can identify two limits . in the limit where the correlations are strong , and . in this limit , the reaction process is dominant and long range correlations are observed . in the limit where , the correlations are weak , and . in this limitthe degradation process is dominant and the correlation length is small . in fig .[ fig2_do_metabolic_graph ] we present the correlation length , , as a function of the parameters , and for a linear metabolic network .the correlation length increases with and ( as the reaction process becomes dominant ) , and decreases with ( as the process of degradation becomes dominant ) . using eqs .( [ eq : eta ] ) and ( [ eq : av_d_linear ] ) , the connectivity can be expressed by .the linear network clearly demonstrates the difference between the concepts of tsw networks and fsw networks . in the topological sense it is as far as a networkcan be from a small world network , as the distance scales linearly with the network size , and the clustering coefficient is zero .however , in the functional sense the linear network can be a small world network , when the reaction terms are sufficiently dominant , enabling to become larger than . in order to examine the theoretical predictions of the method, we conducted a simulation of the long linear metabolic network described above .in this simulation we constructed a linear network of reacting species with periodic boundary conditions , namely , reacts with . at time we assigned to each reacting species its steady state concentration .then we forced the concentration to be slightly above its steady state value , namely , where .we then let the network relax to its new steady state .we denote the resulting change in the steady state concentration of the species by . 
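parts of the displayed solution were lost in extraction ; the sketch below assumes the recurrence reduces to the characteristic equation k^2 - k / q + 1 = 0 , whose root of modulus less than one gives the decay rate , and evaluates it for illustrative rate constants .

```python
import numpy as np

# numerical evaluation of the decay rate k and the correlation length d0
# on a uniform ring; rate constants are illustrative assumptions.
g, w, a = 1.0, 1.0, 0.5
n = g / w
for _ in range(500):                         # uniform steady state n = g/(w + 2 a n)
    n = g / (w + 2 * a * n)
q = -a * g / (w + 2 * a * n) ** 2            # q = C_{i, i+-1} for uniform rates
k = (1 - np.sqrt(1 - 4 * q ** 2)) / (2 * q)  # root with |k| < 1 (q is negative)
d0 = -1.0 / np.log(abs(k))                   # correlation length
print("q =", q, " k =", k, " d0 =", d0)
for d in range(1, 6):
    print(d, abs(k) ** d)                    # expected decay |G_d| = |k|^d
```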
in fig .[ fig3_perturbation ] we show the absolute value of as a function of , the distance of the node from the perturbed node , .these results , obtained from direct integration of the rate equations , are shown for different values of the reaction rate ( symbols ) .when increases the typical correlation length becomes higher , and the effect of the local perturbation of extends to more distant species .the results are in good agreement with the theoretically derived correlation function , [ eq . ( [ eq : fcor_linear ] ) ] ( solid lines ) .slight deviations appear for distant species .this is because in numerical simulations one must choose to be a finite perturbation .the resulting deviation in the rest of the species is thus affected by higher order terms in the taylor expansion which are not accounted for by our method . herethe generation rates and the degradation rates of all the species are and respectively .the network becomes an fsw network once , which is approximately the average path length for this network .this condition is satisfied for .hierarchical structures are common in realistic networks . for instance , ecological networks have in many cases distinct trophic levels .social organizations are also constructed in a tree - like framework .here we relate to a hierarchical metabolic network .consider a metabolic network of nodes where each node is assigned a level ( ) .the highest level consists of a single node , referred to as the _root_. each node at level is then connected to exactly one node at level ( the _ parent _ ) and nodes at level ( the _ siblings _ ) .the parameter is defined as the order of the tree .the degree of all the nodes ( except those at the levels zero and ) is thus ( fig .[ fig4_tree_network ] ) .since this network is hierarchical , the up and down directions are well defined . stepping from a node at level to a node at level will be considered going up the network , while stepping from level to level is going down the network .note that in a tree - like network it is not possible to go sideways , as there is no edge connecting two nodes at the same level .consider a species , which is at a distance from some other species .the path between them consists of steps up the network and steps down the network .the total distance satisfies , and the path between them can be noted by .for example , the path between the two shaded nodes in fig .[ fig4_tree_network ] is and the distance is .two species are said to be located in the same branch if in the path between them either or .the reaction rate matrix and the first order correlation matrix have non - zero values only for directly interacting species , namely , for pairs of species where either and , or and . in order to avoid the complexities related to the boundaries of the network , we consider the case in which . for simplicity, we take the generation and the degradation rates to be and for .the reaction rate is for each pair of nodes and that react with each other . under these conditions , the network is symmetrical and the rate equations are identical for all nodes : the steady state solution is thus and the non - zero elements in the first order correlation matrix [ eqs .( [ eq:2ni_2nj ] ) and ( [ eq : cij ] ) ] are two limits are observed . in the limit of strong interactions , where , the matrix elements approach . in the limit of weak interactions , where one obtains . 
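the perturbation experiment described above can be reproduced by clamping one species at its forced value and integrating the rate equations to the new steady state ; sizes and rates in this sketch are illustrative assumptions .

```python
import numpy as np
from scipy.integrate import solve_ivp

# perturbation experiment on a ring of N species with nearest-neighbour
# reactions and periodic boundary conditions.
N, g, w, a, delta = 100, 1.0, 1.0, 0.5, 1e-4
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = A[i, (i + 1) % N] = a

n_bar = 1.0
for _ in range(500):                       # uniform steady state n = g/(w + 2 a n)
    n_bar = g / (w + 2 * a * n_bar)
n_ss = np.full(N, n_bar)

def rhs(t, n):
    dn = g - w * n - n * (A @ n)
    dn[0] = 0.0                            # species 0 is held at its forced value
    return dn

n0 = n_ss.copy()
n0[0] *= 1 + delta                         # force n_0 slightly above steady state
sol = solve_ivp(rhs, (0, 500), n0, method="LSODA", rtol=1e-10, atol=1e-13)
dn = np.abs(sol.y[:, -1] - n_ss)
print(dn[1:8] / dn[1])                     # exponential decay with distance
```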
in any casethe values that can take are limited to .for an infinite perfect tree with uniform rate constants , the correlation between all pairs of species with that same values of and are the same .we denote this correlation by . in each line of the first order correlation matrix there are exactly non - zero terms .one term for s parent and terms corresponding to s siblings .the correlation between two species and is thus carried via the the parent of the species , for which the correlation with is , and via the siblings of the species , for which the correlation with is .( [ eq : gij ] ) thus takes the form & & { \rm for } & v>0 \\ g_{u ,v } = q(g_{u - 1,v } + m g_{u + 1,v } ) & & { \rm for } & u > 0 \end{array } \right .\label{eq : gij_tree}\end{aligned}\ ] ] the first equation states that the correlation of every species with itself is unity .the second equation accounts for the correlations between species at the same branch , measuring the effect of variation in the higher level node on a node at a lower level .the third equation accounts for all the correlations that are not included in the first two equations .more specifically , it includes the correlations between species from different branches .it also includes the correlations between pairs on the same branch , measuring the effect of variation in the lower level node on a node at a higher level .we seek a solution of the form , where satisfies the condition that correlations vanish between distant species . from the third equation oneobtains } , \label{eq : k1_tree}\ ] ] while from the second equation one obtains \right\ } } , \label{eq : k2_tree}\ ] ] where . in order to satisfy the conditions that does not diverge for while , one has to choose the solution with the plus sign for in eq .( [ eq : k1_tree ] ) . the same condition for requires one to choose the solution with the plus sign for in eq .( [ eq : k2_tree ] ) . after some algebraic manipulations it can be shown that .the correlation between any pair of species is thus where is the distance between the two species , and is the correlation length of the tree - like network .the correlation function is .note that for ( ) this solution coincides with the solution obtained for the linear network [ eq .( [ eq : k_linear2 ] ) ] . in the limit of weak interactions , where and the correlation function approaches . in this limit , due to the weak interactions ,the correlation between a pair of species is dominated by the shortest path between them . in the limit of strong interactions , where and , the correlation length satisfies .for the correlation length is always finite . since the average path length of a perfect tree - like network must scale in some form with the number of levels in the tree , one obtains that for a large enough tree network the connectivity will always be less than unity .thus a perfect tree - like network of order or more will never be an fsw . in fig .[ fig5_d0_vs_a ] we show the correlation length as obtained for a metabolic network with a perfect tree topology vs. the reaction rate ( symbols ) .the results are shown for trees of different orders .here , and is varied .to demonstrate the applicability of the ncf method , we now refer to the analysis of a set of more complex networks . 
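a rough numerical counterpart for the tree case can be obtained on a finite balanced tree ; boundary effects make this only a qualitative check of the infinite - tree formula above , and the sizes and rates below are illustrative assumptions .

```python
import numpy as np
import networkx as nx

# decay of one column of the complete correlation matrix on a balanced tree.
m, h = 3, 6
T = nx.balanced_tree(m, h)                 # branching factor m, height h
N = T.number_of_nodes()
g, w, a = 1.0, 1.0, 0.2
A = a * nx.to_numpy_array(T)
n = np.full(N, g / w)
for _ in range(500):
    n = 0.5 * n + 0.5 * g / (w + A @ n)    # damped fixed-point iteration
C = -A * (g / (w + A @ n) ** 2)[:, None]
src = 0                                    # the root node
idx = [i for i in range(N) if i != src]
M = np.eye(N - 1) - C[np.ix_(idx, idx)]
Gcol = np.zeros(N)
Gcol[src] = 1.0
Gcol[idx] = np.linalg.solve(M, C[idx, src])
dist = dict(nx.single_source_shortest_path_length(T, src))
for d in range(1, 6):
    vals = [abs(Gcol[i]) for i, dd in dist.items() if dd == d]
    print(d, np.mean(vals))                # averaged |G| at each distance
```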
hereanalytical solutions are not available , and the correlation matrix must be obtained numerically .we analyze three different topologies following the structural classification proposed by estrada .the first example represents a class of networks which are organized into highly connected modules with few connections between them .the second example will be of a network with a highly connected central core surrounded by a sparser periphery , and the last example will be of a scale - free network .consider a network constructed of three fully connected modules ( communities ) , with a single connection between each pair of communities .this network is displayed in fig .[ fig6_three_joint_clusters](a ) . here , each community consists of nodes , adding up to a total of nodes .to obtain , , , the steady state solution for the concentrations of the different reacting species we solve eq .( [ eq : metabolic_rate ] ) using a standard runge - kutta stepper .the parameters we use are and .the reaction rate between pairs of reacting species is also set to unity .we then construct the first order correlation matrix , ] , is obtained from eq .( [ eq : gij ] ) .it consists of a set of linear algebraic equations . solving these equations, one obtains the complete correlation matrix of the network . for this network , the main insight on the global functions of the network can be deduced from the complete correlation matrix , which is displayed in fig . [ fig6_three_joint_clusters](b ) .the diagonal terms , which are all unity , are omitted from the figure . as expected , strong correlations appear between species within the same community ( sub - matrices along the diagonal ) , and vanishingly small correlations appear between species from different communities .in fact , the correlation matrix is close to be a partitioned block matrix , except for a few coupling terms between the blocks . in this casethe correlation matrix reflects the topological structure of the network , which is almost fully partitioned into three isolated communities .we now consider a network , which features a highly connected central core surrounded by a sparser periphery .this network consists of nodes .the nodes , , are a fully connected cluster ( the core ) , while the additional nodes are connected to all the nodes in the core , but not to each other ( the periphery ) .this network is shown in fig . [fig7_core_and_periphary](a ) . following the same procedure described above oneobtains the correlation matrix for this network [ fig .[ fig7_core_and_periphary](b ) ] .the central square ( domain i ) shows the correlations between the nodes in the central core .domains ii show the correlations between peripheral nodes and central ones .the value of these correlations is high , expressing the strong dependence of the peripheral nodes on the nodes in the central core . 
on the other hand for the correlations between the central nodes and the peripheral ones ( domains iii )one obtains very low correlations .this is an expected result , as deviations in the population of a node from the periphery should have almost no effect on a node from the core .an interesting result appears in domains iv .these domains show the correlations between pairs of nodes that are both from the periphery .it turns out that the effect of these nodes on each other is stronger than the effect they have on their adjacent nodes from the core .this is even though the topological distance between peripheral nodes is , while the distance between them and the central nodes is . a small perturbation in a peripheral node results in a very minor effect on all the central nodes .however this minor change in the core results in a more dramatic effect on all the rest of the peripheral nodes .this non - trivial result exemplifies the importance of the functional methodology as a complimentary analysis to the common topological approach . in the two examples shown above , we focused on the insights provided by the complete correlation matrix .below we show an additional numerical example , where we continue the analysis to obtain the correlation length , , and the connectivity .one of the common characteristics of many realistic networks is their degree distribution that follows a power law , namely , where and are positive constants .ecological networks , social networks and metabolic networks are characterized by power - law degree distributions , and are referred to as scale - free networks .such networks include some nodes , called hubs , with a degree which is orders of magnitude higher than the average degree in the network .scale free networks are considered as highly connected , because due to these hubs the average path length between nodes is small .in fact , in metabolic networks the average path length was found to be as small as \simeq 3 ] . solving eq .( [ eq : metabolic_rate ] ) we obtain the steady state solution for the concentrations of all the reactive species .the parameters are and for .the reaction rate between pairs of reacting species is varied . in this case , obtaining the complete correlation matrix , $ ] , requires the solution of linear algebraic equations [ eq .( [ eq : gij ] ) ] .we solve these equations and then average over the correlations between equidistant species to obtain the correlation function [ eq .( [ eq : g_of_d ] ) ] . in fig .[ fig9_g_of_d_scale_free ] we show the resulting correlation function vs. for three different values of the reaction rate ( symbols ) . when the interaction is suppressed ( ) the correlations decay rapidly .when the interaction is dominant ( ) , correlations are maintained over long distances . by fitting the correlation functions to exponential functions ( solid lines )one obtains the typical correlation length , , and the connectivity , , of each of the networks .the results for vs. the reaction rate are shown in fig .[ fig10_eta_vs_a_scale_free ] .it is found that the connectivity increases logarithmically with .note that for a very wide range of values of the parameter , the connectivity remains lower than unity .this means that although the examined scale free network is a tsw , for a very wide range of parameters it is not an fsw . 
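a compact numerical check of the core - periphery observation discussed above , with illustrative sizes and unit rates ( all assumptions for demonstration ) :

```python
import numpy as np

# core - periphery network: a fully connected core of n_core nodes, plus
# n_per peripheral nodes each linked to all core nodes but not to each other.
n_core, n_per = 10, 40
N = n_core + n_per
A = np.zeros((N, N))
A[:n_core, :n_core] = 1.0                        # fully connected core
A[:n_core, n_core:] = A[n_core:, :n_core] = 1.0  # periphery - core links
np.fill_diagonal(A, 0.0)
g, w = np.ones(N), np.ones(N)
n = g / w
for _ in range(500):
    n = 0.5 * n + 0.5 * g / (w + A @ n)          # damped fixed-point iteration
C = -A * (g / (w + A @ n) ** 2)[:, None]
G = np.eye(N)
for j in range(N):
    idx = [i for i in range(N) if i != j]
    M = np.eye(N - 1) - C[np.ix_(idx, idx)]
    G[idx, j] = np.linalg.solve(M, C[idx, j])
p1, p2, c = n_core, n_core + 1, 0
print("effect of core on periphery :", abs(G[p1, c]))    # domain ii
print("effect of periphery on core :", abs(G[c, p1]))    # domain iii
print("periphery on periphery      :", abs(G[p1, p2]))   # domain iv
```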
only in the extreme cases of very strong interactionsfsw behavior might emerge .we have presented the ncf method for the analysis and evaluation of the connectivity of interaction networks .the method complements the topological analysis of networks , taking into account the functional nature of the interactions and their strengths .the method enables to obtain the correlation matrix , which provides the correlations between pairs of directly and indirectly interacting nodes . in certain cases ,one may gain insights on the network s functionality by writing down the complete correlation matrix .for instance , one can identify domains of high and low correlations .in other cases it is more insightful to extract the macroscopic characteristics of the network from the matrix .in particular , we have shown how to calculate the typical correlation length of the network .this correlation length , which has to do with the functionality of the network , can be compared to topological characteristics such as the average minimum path length of the network .the ratio between these two lengths provides the characteristic connectivity of the network .it was shown that the topological analysis alone is not sufficient in order to characterize the functionality of a network .for instance , networks with small world topology may display low connectivity , while networks that do not exhibit small world topology may display high connectivity .this is because in terms of the functionality of the network , when the correlation length is large , even distant species may be highly correlated .we demonstrated the method for metabolic networks with different topological structures , and identified the regimes of low connectivity and of high connectivity . as expected , these regimes depend on topological features , such as the number of species or the average minimum path length between pairs of species .however , they also depend on functional features such as the type of interactions in the network and the rate constants of the different processes .the ncf method was demonstrated for metabolic networks , but its applicability is much wider .in fact , the method could be applied to any reaction network that can be modeled by rate equations .such networks include metabolic networks , chemical networks , gene expression networks and ecological networks .it is common to use rate equations for the modeling of these types of networks . 
in certain models of social networks ,the flow of information as well as the spreading of viruses can also be described by rate equations .the method is not suitable for obtaining the correlations in ising type models , where the nodes are assigned discrete variables , which can not be modeled using continuous equations .the number of elements in the correlation matrix is equal to the number of pairs of nodes in the system .when applying the ncf method , one writes a single linear equation for each matrix element .thus , from a computational point of view , the scaling of the ncf method is quadratic in the number of reactive species .this enables the application of the method to networks which include even thousands of nodes .it is straightforward to extend the application of the method to the other types of interaction networks mentioned above .a few examples are addressed below .consider , for example , gene expression networks .these networks consist of genes and proteins that interact with each other .in addition to protein - protein interactions , already analyzed in the context of metabolic networks , genetic networks include transcriptional regulation processes , where some genes regulate the expression of other genes . in recent years, much information has been acquired about the topology of these networks , for certain organisms such as _ escherichia coli _ .the problem is that these networks are very elaborate , and may consist of thousands of nodes .this limits our ability to simulate their functionality , and thus , currently most of the theoretical and computational analysis of these networks is focused on small modules . in this analysis ,one performs simulations of small subnetworks consisting of only a few nodes .these subnetworks are expected to play specific roles in the functionality of the network as a whole .such approach is valid if an isolated module maintains its function when incorporated in a large network in which it interacts with many other genes .we expect the analysis presented here to provide some insight on this matter . by obtaining the complete correlation matrix, one can characterize the dependence of different proteins and genes on one another .the network may then be divided into subnetworks , grouping together nodes that are highly correlated , and excluding ones that are not .it is expected that these modules will not function significantly differently when analyzed in the context of the surrounding network nodes .in addition , the typical correlation length will provide us with an approximate radius beyond which correlations may be neglected . to simulate a module properly, one needs to include all the nodes which are within that radius from the module .other possible applications regard social networks .for instance , the process of viral spreading could be analyzed .many social networks are known to be small world networks .however , this does not mean that any contagious disease spreads rapidly .this is , possibly , because for certain diseases the correlation length is small .using the method presented here , one can obtain this correlation length , taking into account the specific rate constants of the viral flow .the recent applications of graph theory to many natural macroscopic systems was enabled by focusing on their topology .this approach has been very fruitful , as it uncovered the mutual structure of networks from many different fields . 
in particular , the ubiquity of the scale free degree distribution , and the small world topology was found .however , it still is not completely clear what functional meaning can be given to these topological properties in different contexts .a recently proposed approach derives the key aspects of the network functionality from its topological structure .other approaches use the ising hamiltonian to describe the interaction pattern between nodes on scale free and small world networks . functional characteristics such as phase transitions , and critical exponentsare then observed .the ncf method presented in this paper complements these approaches .it can be applied to a variety of different interaction processes , such as metabolic , ecological or social interactions , all of which can be described by rate equations .we believe that the approach presented here will lead to new insights on the behavior of networks and their functionality . , of the linear metabolic network versus the generation rate , ( a ) ; the degradation rate , ( b ) ; and the reaction rate , ( c ) .high connectivity is reached when the primary process is the reaction process ( proportional to and ) .the correlation length decreases with increasing ( as the degradation becomes dominant ) ., width=453 ] for the linear metabolic network as obtained from a numerical simulation for different values of the interaction rate ( symbols ) . to conduct the numerical test we integrate the equations for the linear network and bring them to the steady state condition. then we force a small perturbation on the concentration of the species .we evaluate the correlation function using . the correlations decay exponentially with the distance between species .the typical correlation length increases as the reaction rate is increased .the results are in agreement with the theoretical results of eq .( [ eq : fcor_linear ] ) ( solid lines ) .slight deviations appear due to the fact that in numerical simulations must be finite ., width=453 ] levels .each node is linked with exactly one node at the level above it ( father ) , and nodes at the level below ( siblings ) .the top node ( here at level ) is the root node .the order of the tree is , and the degree of the nodes is .the path between a pair of nodes is characterized by the number of upward steps followed by the number of downward steps to get from one node to the other . for the path between the two shaded nodes ., width=453 ] versus the reaction rate for a metabolic network with a perfect tree structure .the results are shown for trees of different order , ( symbols ) .for the results coincide with those obtained for the linear network . for higher ordersthe correlation length is bound from above by ( gray horizontal lines ) .thus , for a sufficiently large tree - like network , where the average path length is larger than , the connectivity is always less than unity .tree like networks are thus not expected to display fsw behavior ., width=453 ] for different values of the parameter ( symbols ) .the correlations decay rapidly for low values of , and more gradually for large values of . by fitting the correlation function to an exponential ( solid lines ) , the correlation length can be obtained ., width=453 ] vs. 
the reaction rate , as obtained for the scale free network shown in fig .[ fig8_scale_free_network ] . the connectivity increases logarithmically as a function of the reaction rate and remains below unity over a very broad range of its values . this implies that although scale - free networks are commonly tsw networks , in the functional sense they may not be fsw networks .
networks are useful for describing systems of interacting objects , where the nodes represent the objects and the edges represent the interactions between them . the applications include chemical and metabolic systems , food webs as well as social networks . lately , it was found that many of these networks display some common topological features , such as high clustering , small average path length ( small world networks ) and a power - law degree distribution ( scale free networks ) . the topological features of a network are commonly related to the network s functionality . however , the topology alone does not account for the nature of the interactions in the network and their strength . here we introduce a method for evaluating the correlations between pairs of nodes in the network . these correlations depend both on the topology and on the functionality of the network . a network with high connectivity displays strong correlations between its interacting nodes and thus features small - world functionality . we quantify the correlations between all pairs of nodes in the network , and express them as matrix elements in the correlation matrix . from this information one can plot the correlation function for the network and extract the correlation length . the connectivity of a network is then defined as the ratio between this correlation length and the average path length of the network . using this method we distinguish between a topological small world and a functional small world , where the latter is characterized by long range correlations and high connectivity . clearly , networks which share the same topology may have different connectivities , based on the nature and strength of their interactions . the method is demonstrated on metabolic networks , but can be readily generalized to other types of networks .
frequency division multiplexing ( ofdm ) systems with guard - time interval or cyclic prefix can prevent inter - symbol interference ( isi ) .ofdm has overlapping spectra and rectangular impulse responses .consequently , each ofdm sub - channel exhibits a sinc - shape frequency response .therefore , the time variations of the channel during one ofdm symbol duration destroy the orthogonality of different subcarriers , and result in power leakage among subcarriers , known as inter - carrier interference ( ici ) , which causes degradation in system performance . in order to overcome the above drawbacks of ofdm system , several pulse - shaping ofdm systems were proposed .it is shown that signal transmission through a rectangular lattice is suboptimal for doubly dispersive ( dd ) channel . by using results from sphere covering theory , the authors have demonstrated that hexagonal multi - carrier transmission ( hmct ) system obtains lower energy perturbation than ofdm system , hence providing better performance against isi / ici .there are mainly two types of hmct systems , ( 1 ) : hmct system with orthogonal prototype pulse , named as lattice ofdm ( lofdm ) system , which confines the transmission pulses to a set of orthogonal ones .( 2 ) : hmct system with nonorthogonal prototype pulse , named as hexagonal multicarrier transmission ( hmt ) system , which abandons the orthogonality condition of the modulated pulses and obtains the high robustness performance on combating the dispersion of dd propagation channel in both time and frequency domain . to optimally combat the impact of the dd propagation channels , the lattice parameters in the time - frequency ( t - f ) plane and the pulse shape of modulation waveform of hmct system are jointly optimized to adapt to the channel scattering function .it is shown in that the hmct system outperforms ofdm system from the robustness against channel dispersion point of view .the basic mathematical operation of the received signal performed by the demodulator is a projection onto an identically structured function set generated by the prototype pulse function , i.e. an optimal match filter . in hmct system , there is no cyclic prefix and data symbols of hmct signal are transmitted on hexagonal lattice points in the t - f plane .traditional timing and frequency synchronization schemes can not be applied to hmct system directly . in our previous work ,signal to interference and noise ratio ( sinr ) of hmct system over dd channel is analyzed and the max - sinr based timing synchronization scheme is proposed . in this paper ,a novel preamble structure and two carrier frequency offset estimation algorithms , named as phase differential algorithm and least squares algorithm , for hmct system are proposed .after detailed derivation , the close form performance lower bound of the proposed carrier frequency offset estimation algorithm is given .theoretical analyses and simulation results show that the proposed carrier frequency estimation approach for hmct system obtains an approximation to the lower bound mean square error ( mse ) performance over dd channel .in hmct systems , the transmitted baseband signal can be expressed as where and are the lattice parameters , which can be viewed as the symbol period and the subcarrier separation , respectively ; denotes the transmitted data symbol , which is assumed to be taken from a specific signal constellation and independent and identically distributed ( i.i.d . 
) with zero mean and average power ; and are the position indices in the t - f plane ; and denote the sets from which can be taken , with cardinalities and , respectively . and are the transmitted pulses generated by the prototype pulse . the ambiguity function of the prototype pulse is defined as baseband dd channel can be modeled as a random linear operator =\int_{0}^{\tau_{\textrm{max}}}\int^{f_{d}}_{-f_{d}}h(\tau,\upsilon)x(t-\tau)e^{j2\pi\upsilon t}d\tau d\upsilon\ ] ] where and are the maximum multipath delay spread and the maximum doppler frequency , respectively . is called the delay - doppler spread function , which is the fourier transform of the time - varying impulse response of the channel with respect to . in wide - sense stationary uncorrelated scattering ( wssus )assumption , the dd channel is characterized by the second - order statistics =s_{h}(\tau,\upsilon)\delta(\tau-\tau_{1})\delta(\upsilon-\upsilon_{1})\ ] ] where ] , and denotes the length of training sequence .we assume that the received signal experiences a carrier frequency offset and the timing offset is completely compensated .the demodulated training sequences can be expressed as and and denote the awgn in frequency domain .thanks to the central limit theorem , the last four terms in ( 14 ) and ( 15 ) can be denoted as and , respectively .we can rewrite equation ( 14 ) and ( 15 ) as and we assume that and define as ( 18 ) at the top of next page . denotes the phase rotation caused by frequency offset and is a constant .let denotes the normalized frequency offset , and denotes the sample period . is the additive noise with zero mean and average power .there are two fine frequency offset estimation schemes : phase differential estimation scheme and least squares estimation scheme . ' '' '' the phase differential carrier frequency offset estimation algorithm can be expressed as the phase differential estimation scheme is valid for =[-\rho/2,\rho/2] ] and ^t ] and ] .let ] denote the received signal vector with length , and , , denotes the objective projection subspace . ] . denotes the hadamard product and denotes the fast fourier transform . the output symbol of the time frequency subspace projector after projecting the received signal to the subspace can be expressed as after performing the ifft on the signal , the transformed signal can be expressed as where denotes the ceiling function . , , which can be denoted as .we can conclude from that the transmitted frequency domain training sequence can be obtained by the following two steps .the first step is to calculate the hadamard product of the received signal and the prototype pulse , that is .the second step is to superimpose vector with period and perform -point fft on the superimposed vector $ ] .hence , two hadamard product operations , two superimpose operators with period and two -point fft are needed to recover the transmitted frequency domain training sequence , which is a low complexity approach compared to the traditional projectors approach. we can conclude from ( 19 ) and ( 28 ) that the proposed phase differential and least squares carrier frequency offset estimators are functions of . 
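the stripped equations prevent an exact transcription of the estimator , but the description matches the classic repeated - block phase - differential form : a frequency offset rotates the second training block by a fixed phase relative to the first . the sketch below uses an assumed block length , spacing , offset and noise level .

```python
import numpy as np

# generic repeated - block phase - differential estimator; all parameter
# values here are illustrative assumptions.
rng = np.random.default_rng(0)
N, Ts = 64, 1e-6                 # block length and sample period (assumed)
eps = 150.0                      # true carrier frequency offset in hz (assumed)
s = rng.choice([-1.0, 1.0], N).astype(complex)   # training block
tx = np.concatenate([s, s])                      # two identical blocks
k = np.arange(2 * N)
rx = tx * np.exp(2j * np.pi * eps * k * Ts)
rx += 0.05 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

corr = np.sum(np.conj(rx[:N]) * rx[N:])          # block - wise correlation
eps_hat = np.angle(corr) / (2 * np.pi * N * Ts)  # unambiguous for |eps| < 1/(2 N Ts)
print(eps, eps_hat)
```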
in awgn channel can be expressed as the conditional probability density function can be expressed as differentiating the log likelihood function with respect to , we have and hence , the cramer - rao lower bound of the proposed carrier frequency offset can be expressed as ^{-1}\\ & = \frac{n}{2\pi^2m^2\textmd{snr } } \end{split}\ ] ]in this section , we test the proposed synchronization approach for mcm system with hexagonal t - f lattice via computer simulations based on the discrete signal model . in the following simulations , the number of subcarriers for hmct system is chosen as =40 , and the length of prototype pulse =600 .the center carrier frequency is =5ghz and the sampling interval is set to = .the system parameters of hmct system are =25khz , = and signaling efficiency is set to =0.8 .wssus channel is chosen as dd channel with exponential power delay profile and u - shape doppler spectrum . the mse performance of the proposed carrier frequency estimation algorithm over awgn channel is given in fig .2 . we can see from fig .2 that the proposed least squares carrier frequency estimation algorithm outperforms phase differential scheme at low snr , and both the proposed schemes can obtain an approximation to the cramer - rao lower bound mse performance . the mse performance of the proposed carrier frequency estimation algorithm over dd channel is given in fig .3 . as shown in fig .3 that the proposed least squares carrier frequency estimation algorithm outperforms phase differential scheme at low snr , but the phase differential scheme outperforms least squares algorithm at high snr . the dd propagation channel introduces energy perturbation among the transmitted symbols , hence there is a gap between the mse performance of the proposed two carrier frequency offset estimation algorithms and that of the cramer - rao lower bound .in this paper , the effects of carrier frequency offset on hmct system is analyzed . meanwhile , a novel carrier frequency offset estimation approach , including preamble structure , carrier frequency offset estimation algorithm , is proposed for hmct system .after detailed derivation , the close form performance lower bound of the proposed carrier frequency offset estimation alglrithm is given .theoretical analyses and simulation results show that the proposed carrier frequency offset estimation approach for hmct system obtains an approximation to the cramer - rao lower bound mse performance over the dd channel with exponential power delay profile and u - shape doppler spectrum .this work was supported by the jiangsu province national science foundation for young scholars under grant ( no .bk2012055 ) and the young scientists pre - research fund of plaust under grant ( no .kytyzlxy1211 ) , the national natural science foundation of china ( no . 60972050 ) and the jiangsu province national science foundation under grant ( no .bk2011002 ) . 00 kumbasar v. , kucur o. ici reduction in ofdm systems by using improved sinc power pulse . _ digital signal processing _ , vol.17 , no.6 , pp.997 - 1006 , nov .das s. , schniter p. max - sinr isi / ici - shaping multicarrier communication over the doubly dispersive channel ._ ieee transactions on signal processing _ , vol.55 , no.12 , pp.5782 - 5795 , dec . 2007 .abbas h.k ., waleed a. m. , nihad s. , the performance of multiwavelets based ofdm system under different channel conditions _ digital signal processing _ , vol.20 , no.2 , pp.472 - 482 , mar .gao x. , wang w. , xia x.g . , et al . 
cyclic prefixed oqam - ofdm and its application to single - carrier fdma ._ ieee transactions on communications _ , vol.59 , no.5 , pp.1467 - 1480 , may 2011 .p. jung , g. wunder , the wssus pulse design problem in multicarrier transmission , _ ieee transactions on communications _ ,vol.55 , no .10 , pp.1918 - 1928 , oct . 2007 .g. lin , l. lundheim , n. holte , optimal pulses robust to carrier frequency offset for ofdm / qam systems , _ ieee communications letters _ , vol .161 - 163 , mar .p. siohan , c. siclet , n. lacaille , analysis and design of ofdm / oqam systems based on filterbank theory , _ ieee transactions on signal processing _ ,5 , pp . 1170 - 1183 , may 2002 . m. ma , b. jiao , c. y. lee , a dual - window technique for enhancing robustness of ofdm against frequency offset , _ieee communications letters _ , vol .17 - 19 , jan . 2008 .strohmer t , beaver s. optimal ofdm design for time - frequency dispersive channels ._ ieee trans ._ vol.51 , no.7 , pp.1111 - 1122 , jul . 2003 .yuan z.g . , shen y.h . , a novel lofdm signal and its optimization over doubly - dispersion channels . in _ proc .3rd ieee conference on industrial electronics and applications , 2008 .iciea 2008 _ , pp.853 - 856 , jun .han f.m . , zhang x.d .hexagonal multicarrier modulation : a robust transmission scheme for time - frequency dispersive channels ._ ieee transactions on signal processing _ , vol.55 , no.5 , pp.1955 - 1961 , may 2007han f.m . , zhang x.d ., mlsd for hexagonal multicarrier transmission with time - frequency localized pulses , _ ieee transactions on vehicular technology _ , vol.58 , no.3 , pp.1598 - 1604 , marhan f.m . , zhang x.d ., asymptotic equivalence of two multicarrier transmission schemes in terms of robustness against time - frequency dispersive channels ._ ieee transactions on vehicular technology_,vol.59 , no.2 , pp.1598 - 1604 , feb . 2010 .xu k. , shen y. h. , effects of carrier frequency offset , timing offset , and channel spread factor on the performance of hexagonal multicarrier modulation systems ._ eurasip journal on wireless communications and networking _ ,vol.2009 , pp.1 - 8 , jan .xu k. , xu y. , zhang d. , sinr analysis of hexagonal multicarrier transmission systems in the presence of insufficient synchronization for doubly dispersive channel ._ frequenz _ , vol.65 , no.5 , pp.149 - 157 , aug . 2011 .xu k. , lv z. , xu y. , zhang d. , max - sinr based timing synchronization scheme in hexagonal multicarrier transmission ._ wireless personal communications _ ,doi : 10.1007/s11277 - 012 - 0550 - 5 , 2012 .xu k. , xu y. , xia x. , zhang d. , on max - sinr receiver for hexagonal multicarrier transmission over doubly dispersive channel . to appear in _ proc .ieee globecom 2012_. bello p. a.,characterization of randomly time - variant linear channels , _ ieee transactions on communication system _ ,vol.11 , no.4 , pp.360 - 393 , decl. cohen , _ time - frequency analysis_. englewood cliffs , nj : prentice - hall , 1995 .p. matthias , _ mobile fading channels_. west sussex , england : john wiley & sons , ltd , 2002 .j. h. conway and n. j. a. sloane , _ sphere packings , lattices and groups _ , 3rd ed .new york : springer - verlag , 1998 . j. proakis and m. salehi , _ digital communications _ , 5th edition . new york : mcgraw - hill , 2007 .
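a monte - carlo companion to the bound crlb = n / ( 2 pi^2 m^2 snr ) quoted in the derivation above ; identifying both n and m with the training block length is an assumed reading , since part of the notation was lost in extraction .

```python
import numpy as np

# empirical mse of the phase - differential estimator versus the quoted bound.
rng = np.random.default_rng(1)
N = 64
nu = 1e-3                                   # true normalized offset (cycles/sample)
k = np.arange(2 * N)
for snr_db in (0, 10, 20, 30):
    snr = 10.0 ** (snr_db / 10)
    sigma = np.sqrt(0.5 / snr)              # per - dimension noise std, unit symbols
    errs = []
    for _ in range(5000):
        s = rng.choice([-1.0, 1.0], N).astype(complex)
        rx = np.concatenate([s, s]) * np.exp(2j * np.pi * nu * k)
        rx += sigma * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))
        nu_hat = np.angle(np.sum(np.conj(rx[:N]) * rx[N:])) / (2 * np.pi * N)
        errs.append((nu_hat - nu) ** 2)
    crlb = N / (2 * np.pi ** 2 * N ** 2 * snr)
    print(snr_db, "mse:", np.mean(errs), "bound:", crlb)
```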
in this paper , a novel carrier frequency offset estimation approach , including a preamble structure and a carrier frequency offset estimation algorithm , is proposed for the hexagonal multi - carrier transmission ( hmct ) system . the closed - form cramer - rao lower bound of the proposed carrier frequency offset estimation scheme is given . theoretical analyses and simulation results show that the proposed preamble structure and carrier frequency offset estimation algorithm for the hmct system obtain an approximation to the cramer - rao lower bound mean square error ( mse ) performance over the doubly dispersive ( dd ) propagation channel . keywords : hexagonal multi - carrier transmission system ; preamble structure ; carrier frequency offset estimation ; cramer - rao lower bound .
asteroseismology is at present being developed as an efficient instrument in the study of stellar interiors and evolution .pulsational frequencies are the most important asteroseismic observational inputs .it is evident that a meaningful analysis of the observation , in terms of the basic physics of stellar interiors which is the ultimate target of the investigation , requires reliable computation of oscillation frequencies for specified physics .this is a two - step process , involving first the computation of stellar evolutionary models and secondly the computation of frequencies for the resulting models .lebreton et al .( this volume ) provide an overview of the tests of stellar model calculations . herewe consider the computation of the oscillation frequencies .an evident goal is that the computed frequencies , for a given model , should have errors well below the observational error , which in the case of the corot mission is expected to be below 0.1 . for the kepler mission ( e.g. , * ? ? ?* ) , with expected launch in early 2009 , selected stars may be observed continuously for several years and errors as low as may be reachable , particularly for modes excited by the heat - engine mechanism . since errors resulting from numerical problemsare typically highly systematic , they may affect the asteroseismic inferences even if they are substantially below the random errors in the observed frequencies .this must be kept in mind in the assessment of the estimates of the numerical errors . during the last decades a lot of codes obtaining numerical solutions of an adiabatic system of differential equations describing stellar oscillationshave been developed . in order to ascertain whether any possible difference in the description of the same observational data by different numerical codes is due to physical descriptions or to different numerical integration schemes ,the inter - comparison of these oscillation codes in a fixed and homogeneous framework is absolutely necessary .some effort has been already done in the past but only regarding pairs of codes .some codes have also developed a lot of internal precision tests .however , there is a lack of inter - comparison of a large enough set of codes .we aim in this study try to fix a set of minimum requirements for a code to be sure that any difference found is only due to a different physical assumption . ideally , for a given model there should be a set of ` true ' frequencies with which the results of the different codes could be compared .this ideal situation could probably be approximated by considering polytropic models for which it is relatively straightforward to calculate the equilibrium structure with essentially arbitrary accuracy ( see also * ? ? ?* ) . in practice ,the situation for realistic stellar models is more complex .owing to the complexity of the stellar evolution calculation the models are often available on a numerical mesh which is not obviously adequate for the pulsation calculation .the effect of this on the frequency computation depends on the detailed formulation of the equations in the pulsation codes .these formulations are equivalent only if the equilibrium model satisfies the ` dynamical ' equations of stellar structure , i.e. 
, the mass equation and the equation of hydrostatic support; this is obviously not exactly true for a model computed on a finite (possibly even relatively sparse) mesh. One might define a consistent set of frequencies for a given model by interpolating it onto a very dense mesh and resetting it to ensure that the relevant equations of stellar structure are satisfied. The model is fully characterized by the dependence on distance to the centre of the density ρ and the adiabatic exponent Γ1 = (∂ln p/∂ln ρ)_s, p being the pressure and the derivative being taken at constant specific entropy s. Thus one could interpolate ρ and Γ1 to a fine mesh, and recompute the mass distribution and the pressure by integrating the mass equation and the equation of hydrostatic equilibrium. Frequencies of this model should then be essentially independent of the formulation of the oscillation equations and would provide a suitable reference with which to compare other frequency calculations; such a test may be carried out at a later stage in the ESTA effort. In Task 2, for now, we aim at testing, comparing and optimizing the seismic codes used to calculate the oscillations of existing models of different types of stars. In order to do so we proceed in steps, addressing some of the most relevant items that must be compared regarding the seismic characterization of the models: * step 1: comparison of the frequencies from different seismic codes for the same model. * step 2: comparison of the frequencies from the same seismic code for different models of the same stellar case provided by different equilibrium codes. * step 3: comparison of the frequencies for specific pulsators. The work presented here is mostly focused on step 1. At this step three different equilibrium models have been used. Two of them have been computed using CESAM, with 902 and 2172 mesh points, and a third one, with 4042 mesh points, was provided by ASTEC. We present inter-comparisons using the two models with the larger numbers of mesh points. The same physics and physical constants (except the gravitational constant G) are used for all the oscillation codes, and frequencies in a prescribed range (in μHz) are compared. General characteristics of the models are collected in Table 1; the recoverable entries are, per row (the first value is the mass in solar units, the last two are the number of mesh points and the adopted G in 10^-8 cgs units; the remaining column labels did not survive): 1.5, 3.826, 1355, 0.4, 1.731, 4042, 6.6716823; and 1.5, 3.830, 1368, 0.4, 1.724, 2172, 6.67232. The second model was also provided by CESAM (referred to from now on as M2k). It uses a grid, with 2172 mesh points, more suitable for asteroseismic purposes. Its general characteristics, presented in Table 1, are typical of a γ Doradus star showing oscillations in the asymptotic g-mode regime, and also around the fundamental radial mode. A priori, solar-like pulsations cannot be excluded for this type of star, and it can therefore be a good candidate for a global study.
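To make the consistency reset described at the beginning of this section concrete, the following sketch re-integrates the mass and hydrostatic-equilibrium equations on a refined mesh. It is a minimal illustration, not part of the ESTA specification: the refined radius and density arrays (cgs units) are assumed to have been obtained by interpolation beforehand, and the function name and surface boundary value are our own choices.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

G = 6.6716823e-8  # cgs value quoted in Table 1 for the ASTEC model

def reset_structure(r, rho, p_surface):
    """Recompute m(r) and p(r) from an interpolated density profile so the
    model satisfies dm/dr = 4*pi*r**2*rho and dp/dr = -G*m*rho/r**2."""
    # Mass equation, integrated outward from the centre (r[0] == 0).
    m = cumulative_trapezoid(4.0 * np.pi * r**2 * rho, r, initial=0.0)
    # Hydrostatic equilibrium; m ~ r**3 near the centre, so dp/dr -> 0 there.
    dpdr = np.zeros_like(r)
    dpdr[1:] = -G * m[1:] * rho[1:] / r[1:]**2
    # Integrate inward from the surface so p matches p_surface at r = R.
    integral = cumulative_trapezoid(dpdr, r, initial=0.0)
    p = p_surface - (integral[-1] - integral)
    return m, p
```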
In Fig. [fig:1], a quantity directly related to the Brunt-Väisälä frequency N (through the local gravitational acceleration g) is depicted as a function of the relative radius r/R in a region of steeply varying hydrogen abundance, and hence mean molecular weight μ, just outside the convective core. The model is in a phase with a growing convective core. If diffusion and settling are neglected, this leads to a discontinuity in the hydrogen abundance and hence, formally, to a delta function in the plotted quantity; also, there is a region of 'semiconvection' at the edge of the core. In fact, the figure shows that three points in this model display an erratic variation just in the transition region between the convective and the radiative zone; in addition, the mesh resolution of this region of rapid variation seems inadequate. This is emphasized by Fig. [fig:2], which shows the distribution of the mesh points of this model along the stellar radius, indicating that there are not enough mesh points in this transition zone. As discussed below, these features of model M2k give rise to frequency differences in the comparison, particularly for those modes for which this inner part is critical to their physical description. The third and last model is a 4042-mesh-point model (from now on M4k) provided by ASTEC; its general characteristics are presented in Table 1. It has been computed to have overall characteristics similar to those of the previous CESAM model, in order to understand the differences better. However, as discussed by Christensen-Dalsgaard (this volume), particular care has been taken in the treatment of the μ-gradient region; the semiconvective region was replaced by a region with a steep gradient in the hydrogen abundance, defined such as to ensure neutral stability of the temperature gradient. As a result, the corresponding quantity for this model, also presented in Fig. [fig:1], shows a well-defined and reasonable behaviour. The distribution of the mesh points can be found in Fig. [fig:2]. In the central and outer parts of the stellar model the distribution is similar to that of the CESAM model M2k; it is in the inner zones, particularly in the boundary region between the convective core and the μ-gradient zone, that the models present different mesh-point distributions, with M4k providing a far superior resolution of this critical region. All oscillation codes involved in this task were asked to provide adiabatic frequencies in a prescribed range (in μHz), computed under the conditions listed below; the participating codes and their options are summarized in Table 2:

Code      Integration order   Richardson extrapolation   G (10^-8 cgs)
ADIPLS    2                   y, n                       6.67232
FILOU     2                   n                          6.67232
GraCo     2                   y, n                       6.6716823
LOSC      4                   n                          6.67232
NOSC      2                   y, n                       6.67259
OSCROX    4                   n                          6.6716823
POSC      2                   y, n                       6.6716823
PULSE     4                   n                          6.6716823
LNAWENR   2                   n                          6.67232

(GraCo and NOSC additionally offer both eigenfunction sets and both integration variables.) The conditions imposed were: * to use the mesh provided by the equilibrium model; no re-meshing is allowed. * to set the Lagrangian perturbation to the pressure to zero (δp = 0) as the outer mechanical boundary condition. * to use the physical constants prescribed in Task 1. * to use the linear adiabatic equations.
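For reference, since its explicit expression is not reproduced above, the Brunt-Väisälä frequency underlying the quantity plotted in Fig. [fig:1] is commonly written in the standard adiabatic form

\[
N^{2} = g\left(\frac{1}{\Gamma_{1}}\,\frac{\mathrm{d}\ln p}{\mathrm{d}r}
        - \frac{\mathrm{d}\ln\rho}{\mathrm{d}r}\right),
\]

where g is the local gravitational acceleration. In the μ-gradient zone the composition term makes N^2 large and sharply varying, which is precisely why the mesh resolution of that region matters so much.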
Nevertheless, some other choices in the numerical solution of the differential equations (from now on called, for simplicity, "degrees of freedom") remain open. Nine oscillation codes from different research groups in the field have been used in this inter-comparison exercise. A summary of the participating codes and the different "degrees of freedom" provided by each one is found in Table 2; they include the following. * Set of eigenfunctions: use of the Lagrangian or the Eulerian perturbation to the pressure (δp or p'). This obviously affects the form of the equations; in particular, when using δp the equations do not depend explicitly on the Brunt-Väisälä frequency. * Order of the integration scheme: most of the codes use a second-order scheme, but some of them have implemented a fourth-order scheme. * Richardson extrapolation: some of the codes using a second-order scheme have the possibility of using Richardson extrapolation to decrease the truncation error; combining a second-order scheme with Richardson extrapolation yields errors scaling as N^-4, N being the number of mesh points. * Integration variable: two integration variables are used: 1) the radius r, or 2) the ratio r/p; the latter variable may reduce the effect of rounding errors in the outer layers (see Sect. [sec:indep]). * Despite the requirement that the physical constants be fixed at the values for Task 1, the codes used slightly different values of the gravitational constant G, as listed in Table [tab:2]. Ideally the equilibrium model should have been computed with the prescribed Task 1 value, which should then have been used for the oscillation calculations; in practice model M4k was computed with G = 6.6716823 x 10^-8 cgs. Using different values of G in the oscillation equations clearly gives rise to inconsistencies, with potential effects on the frequencies, as discussed further in Sect. [sec:grav]. Note that most of the oscillation codes use the general non-radial differential equations to obtain the radial modes, except for LOSC, which uses the LAWE (linear adiabatic wave equation for radial modes), and GraCo, for which results will be shown for both sets of equations; it should be noted that the LAWE does not depend on the Brunt-Väisälä frequency. All the main characteristics and numerical schemes are presented in previous chapters of this volume. The Nice code (NOSC) has the options of using p' or δp as dependent variables, and r or r/p as independent variable; however, all NOSC results presented here use p' and r. The use of the mesh provided by the equilibrium model, rather than meshes optimized for the different kinds of modes, may result in inadequate resolution of the rapid variation of high-order p- and g-modes, and hence in larger truncation errors in the solution of the differential equations. In these cases, therefore, the second-order scheme may give unacceptable errors, and the use of higher-order integration schemes (a fourth-order scheme, or a second-order scheme followed by Richardson extrapolation) is expected to give better results.
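As a concrete illustration of the Richardson-extrapolation option, the standard recipe for a second-order scheme combines the eigenfrequency computed on the full mesh with that computed using every other mesh point; this cancels the leading O(N^-2) truncation error and leaves a residual scaling as N^-4. A minimal sketch (the function name is our own):

```python
def richardson_extrapolate(nu_full, nu_half):
    """Combine eigenfrequencies from a second-order scheme computed on the
    full mesh (nu_full) and on every other mesh point (nu_half).  The
    leading truncation errors scale as h**2 and (2h)**2 respectively, so
    the combination cancels them, leaving an error scaling as N**-4."""
    return (4.0 * nu_full - nu_half) / 3.0
```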
For low- and intermediate-order modes we expect little effect from the use of higher-order schemes. In this section the results of the direct frequency inter-comparison are presented. We have structured this study by splitting the frequency range into three parts for the non-radial case and only one for the radial case, and by comparing codes only with similar selections of "degrees of freedom". In addition, the influence of using different selections in the same code (GraCo in this case) is shown; for these tests the value of G in the oscillation calculations was the same as was used to compute the equilibrium model, and the results are shown in a single frequency range. In Fig. [fig:3] the results obtained using the model M2k are presented. The reference line for all the inter-comparisons is chosen to be GraCo; in this figure the reference frequencies have been obtained using p', second order, no Richardson extrapolation and r as independent variable (see Table 2). Two sets of codes can be identified in the figure: ADIPLS-NOSC-GraCo, with very small mutual differences and all with the same "degrees of freedom", and OSCROX-PULSE-LOSC-POSC, with somewhat larger differences at high frequencies relative to GraCo but small differences among themselves. This second set of codes differs from the first one in the use of a fourth-order numerical scheme instead of a second-order one. In the present figure, in Fig. [fig:5] and in Figs [fig:7] and [fig:8], showing inter-comparisons with model M2k, POSC has been chosen as representative of the codes using second order plus Richardson extrapolation (see Table [tab:2]); the use of this integration procedure provides results similar to the fourth-order solutions. These differences, small for codes using the same integration scheme and larger for codes using different schemes, exceed the expected precision of the coming observational data; therefore this effect can change any detailed physical description as interpreted by different oscillation codes. We also point out that the differences between the codes using fourth-order schemes and the GraCo results using a second-order scheme indicate that the model has an insufficient number of mesh points for asteroseismic studies. Figure [fig:4] shows the results obtained for the model M4k. In the top panel, codes using second order, no Richardson extrapolation and p' are depicted. All the differences are two orders of magnitude lower than those obtained for model M2k; therefore improving the mesh, including a doubling of the number of points, provides a very substantial improvement in the precision, making these values more acceptable for theoretical modelling. The middle panel of Fig. [fig:4] presents the differences obtained with codes providing fourth-order integration solutions or second order plus Richardson extrapolation. The global precision here is similar to the previous case or even slightly better. It is interesting to point out that we cannot directly distinguish a fourth-order integration solution (OSCROX) from second order plus Richardson extrapolation (the rest). However, it is noticeable that GraCo-OSCROX-POSC fall in one group and ADIPLS-NOSC in a second, with a slight difference in the latter case. These two groups are distinguished by the value of G (cf. Table [tab:2]), with ADIPLS and NOSC having similar but not identical values.
This pattern will be found in other cases as well; the LOSC behaviour is discussed in the next paragraph. Finally, in the bottom panel of Fig. [fig:4] a comparison between the different "degrees of freedom", using only the GraCo code, is presented. As reference we have used the solution with "degrees of freedom" (p', no Richardson extrapolation, r, second order); for each comparison we have changed only one of these "degrees of freedom" at a time, keeping the rest unchanged. The most prominent effect arises from the use of Richardson extrapolation, which changes the frequencies of the highest-order modes by substantially more than the expected observational accuracies. For model M2k (see Fig. [fig:3]) we similarly found a change, larger still, reflecting the smaller number of mesh points. This clearly shows that second-order schemes are inadequate, even for the mesh in M4k, for the computation of high-order acoustic modes; as expected, the effect decreases rapidly with decreasing mode order. The use of r/p as integration variable produces only small differences, of the order of those obtained in the top and middle panels. The use of the Lagrangian perturbation to the pressure as variable (δp), or the use of the LAWE differential equation, provides very similar, small differences, but with an oscillatory pattern. This pattern is very similar to that observed for LOSC, which also uses the LAWE to obtain the radial modes. As discussed in Sect. [sec:dep], this oscillatory pattern arises from an inconsistency in the thermodynamics of model M4k which affects the Brunt-Väisälä frequency; solutions of equations that do not depend on it (i.e., the LAWE, or the equations based on δp) are insensitive to this effect. Therefore, even for model M4k and radial modes, the use of different integration procedures can give different values for the oscillation eigenfrequencies that are non-physical in nature. These non-physical sources of differences are mainly inconsistencies in the equilibrium models (see Sect. [sec:dep]) and the lack of mesh points; but when the same numerical schemes are used, the different codes provide very similar frequencies. To illustrate the differences appearing in the case of non-radial modes, a single spherical degree has been chosen. We have divided the frequency spectrum into three regions: 1) the high-frequency region, 2) frequencies around the fundamental radial mode, and 3) the low-frequency region. In all cases a study similar to that developed in the radial case has been carried out.
For the sake of simplicity the high-frequency differences are not represented, since the results are very similar to those presented for the radial case; only a slightly higher precision is found in this case. The results of LOSC present the same pattern as in the radial case, owing to the use of δp as eigenfunction in that code (see Sect. [sec:dep]). Figure [fig:5] shows the results obtained for model M2k when comparing frequencies around the fundamental radial mode. The main differences are smaller than in the high-frequency region, corresponding to the low order of the modes and their consequent lesser sensitivity to the number of mesh points. The largest differences are found for two modes with nearby frequencies showing an avoided crossing; these modes have a mixed character, with fairly substantial amplitude in the μ-gradient zone. PULSE and LOSC present the largest differences for these two modes, POSC and OSCROX somewhat smaller ones, and the rest do not present significant differences. That is, the largest differences are found in the codes using a fourth-order integration scheme or second order plus Richardson extrapolation, and their values are larger than the expected precision of the satellite data to come. They clearly reflect the inadequate representation of the μ-gradient zone in M2k, with higher-order schemes being more sensitive to the resulting inconsistency in the model. In the top panel of Fig. [fig:6] the inter-comparisons of the frequencies for model M4k, using a second-order scheme without Richardson extrapolation, are depicted for the same frequency range as in Fig. [fig:5]. The main differences are two orders of magnitude lower than those obtained for model M2k, and of the same order of magnitude as those obtained for the high-frequency range with M4k. GraCo and POSC are extremely close, while FILOU and ADIPLS provide very similar results, with slightly larger differences for NOSC relative to these codes; thus, again, the differences are related directly to the different values of G. There remain small wiggles for the mixed modes, but reduced by more than two orders of magnitude relative to the largest differences for model M2k, reflecting the superior resolution of the critical region in model M4k. The middle panel of Fig. [fig:6] shows the same inter-comparison for codes using a fourth-order scheme or second order plus Richardson extrapolation. The precision is similar to the previous case, with differences of the same order of magnitude. We can distinguish two groups of codes providing very similar results: OSCROX-POSC-GraCo and LOSC-ADIPLS-NOSC. This distribution does not depend on the integration scheme, but again reflects the values of G. The wiggle of the mixed modes is also very similar to that obtained with a second-order scheme without Richardson extrapolation. Finally, an inter-comparison of the different "degrees of freedom", using only GraCo, is presented in order to test the differences obtained for the different choices; this comparison is depicted in the bottom panel of Fig. [fig:6]. The use of Richardson extrapolation is not very important, as expected for these modes of low order, with generally small effects, although a larger value is present for the mixed modes.
Using δp as variable gives differences larger than those among codes with the same "degrees of freedom", whereas the use of r/p as integration variable does not introduce significant differences. Compared with model M2k, we find a general reduction of the differences for M4k, the main effect being in the region of mixed modes, with an improvement reaching up to three orders of magnitude. This is obviously not a simple consequence of the doubling of the number of mesh points. The main reason is likely the inadequate resolution shown by model M2k in the description of the Brunt-Väisälä frequency in the region close to the boundary of the convective core, which is not present in model M4k. The avoided-crossing phenomenon and the behaviour of the mixed modes are very sensitive to the detailed treatment of this region, including the effects of semiconvection. Therefore, an accurate description of the Brunt-Väisälä frequency is critical for the oscillation codes in order to obtain correct eigenfrequencies for modes near an avoided crossing. The direct frequency inter-comparison ends with the study of the low-frequency region; as above, we concentrate on modes of a single degree. Fig. [fig:7] shows the results obtained for model M2k. In this case the differences are lower than those obtained at higher frequencies for this model (cf. Figs [fig:3] and [fig:5]). The most surprising behaviour is presented by PULSE and LOSC; POSC and OSCROX also present some differences in the same region. To understand the reason for these differences, and the region where they appear, Fig. [fig:8] shows the period separation between adjacent modes. First-order asymptotic g-mode theory predicts a constant separation of the periods in this regime for a given degree. However, when the equations are solved numerically, this period spacing presents several minima, and these minima are directly linked with mode trapping. Fig. [fig:8] shows that the position of one of these minima coincides with the position of the largest differences. The modes in this region have somewhat enhanced amplitudes in the region just outside the convective core; in PULSE and LOSC this apparently happens for modes somewhat different from those of the remaining codes, which is the origin of the frequency differences. As the mode trapping in this region for these stellar models is related to the Brunt-Väisälä frequency in the μ-gradient zone, the previously mentioned inadequate treatment of this region in model M2k is likely the reason for these differences. The top panel of Fig. [fig:9] presents inter-comparisons for frequencies calculated with a second-order scheme without Richardson extrapolation and for model M4k, for the same frequency range as in Fig. [fig:7]. POSC is again extremely close to GraCo. FILOU and ADIPLS present very similar results, with NOSC showing slightly larger differences, from 0.0005 upward, relative to this group. The differences decrease globally as the frequency decreases, down to a small magnitude for the lowest frequency studied here. The middle panel of Fig. [fig:9] shows the same comparison for codes using a fourth-order scheme or second order plus Richardson extrapolation. Once again the precision resembles the results in the previous panel. Here POSC and OSCROX provide results very similar to GraCo; ADIPLS-LOSC-NOSC present small differences among themselves, with NOSC having a small difference, increasing with frequency, with respect to the other two.
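Before continuing with the panel-by-panel comparison, we note for reference the constant asymptotic period spacing invoked in the discussion of Fig. [fig:8] above. To first order in the usual asymptotic analysis it takes the standard form

\[
\Delta P_{\ell} \simeq \frac{2\pi^{2}}{\sqrt{\ell(\ell+1)}}
\left(\int_{r_{1}}^{r_{2}} N\,\frac{\mathrm{d}r}{r}\right)^{-1},
\]

where the integral of the Brunt-Väisälä frequency N is taken over the g-mode cavity; mode trapping in the μ-gradient zone then appears as periodic minima superposed on this otherwise constant spacing.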
Again the general pattern here, and in the top panel, largely reflects the differences in G. In this case we can distinguish the codes using a fourth-order integration scheme by their apparently noisy profiles, as compared with a solution using second-order integration plus Richardson extrapolation. In the bottom panel of Fig. [fig:9], an inter-comparison of the different "degrees of freedom", using only GraCo, is presented; as in earlier corresponding plots, the same value of G is used as in the computation of the equilibrium model. As expected, Richardson extrapolation has a growing influence as the frequency decreases and the mode order increases, with quite substantial differences, compared with the differences between different codes, for the lowest-frequency modes. Thus, with the mesh provided by the evolution calculation, the second-order schemes have inadequate numerical precision. In this case the differences introduced by the use of r/p as integration variable are negligible, while the use of the Lagrangian perturbation to the pressure (δp) gives rise to frequency differences exceeding those obtained between the different codes at the lowest frequencies; we note, however, that a corresponding comparison between ADIPLS and LOSC does not show this effect, which may therefore be particular to the GraCo implementation. Finally, we want to point out that, in the case of model M4k, the large differences appearing in the mode-trapping region are not found. Fig. [fig:10] presents the same period separation as Fig. [fig:8], but for this model; all the codes give quite similar results. Two mode-trapping regions appear, in the same frequency domain as in Fig. [fig:8]. As this model does not present any numerical imprecision in the Brunt-Väisälä frequency pattern, the obvious conclusion is that, as for the mixed modes in avoided crossing, any numerical imprecision in the description of the Brunt-Väisälä frequency coming from the equilibrium model can give rise to large differences in the frequencies calculated by different oscillation codes for the g-modes trapped in the μ-gradient zone. This section is devoted to the asymptotic behaviour of p-modes through the use of the so-called "large separations" (LS), that is, the difference between two consecutive modes with the same spherical degree (Δν = ν_{n+1,ℓ} - ν_{n,ℓ}, n being the radial order of the mode). The structure of the section is similar to that of the previous one, and we use the same definitions to study all the frequency ranges. From now on, results for the M2k model will not be discussed, since no additional information is found from the further inter-comparisons. Figure [fig:12] shows the results obtained for model M4k. In the top panel, differences for the different codes using second order, no Richardson extrapolation and p' are depicted. POSC and GraCo are again extremely close. The rest of the codes (ADIPLS-NOSC-FILOU-GraCo) present small differences, generally sharing an oscillatory pattern, particularly at relatively low frequency; we have no explanation for this behaviour, but the effect is evidently small. In the middle panel of Fig. [fig:12] the LS differences for the codes using a fourth-order integration scheme or second order plus Richardson extrapolation are presented.
With the exception of LOSC, the global behaviour is similar to that obtained without Richardson extrapolation, the NOSC-ADIPLS-OSCROX-GraCo differences being always small. The pattern of the LOSC differences, an order of magnitude larger than those of the rest of the codes, is clearly related to the corresponding oscillatory pattern found in Fig. [fig:4]; as before, it is explained by the use of the LAWE differential equation. The bottom panel of Fig. [fig:12] shows the LS differences for the radial modes obtained with GraCo and model M4k when different "degrees of freedom" are used. Richardson extrapolation introduces differences increasing with frequency and mode order, as expected, giving the largest differences; this again emphasizes the inadequacy of the second-order schemes for the highest-order modes on the M4k mesh. The integration variable r/p gives differences much smaller than those found for different codes using the same numerical techniques, as depicted in the previous panels. The use of the Lagrangian perturbation to the pressure and of the LAWE differential equation shows the same oscillating behaviour and values as those previously observed for the LOSC results; as discussed above, this is related to the inconsistency affecting the Brunt-Väisälä frequency in model M4k (see Sect. [sec:dep]). To illustrate the differences appearing in the case of non-radial modes, a single spherical degree has been arbitrarily chosen. We have divided the frequency spectrum into three regions, as in the direct frequency inter-comparison: 1) the high-frequency region, 2) frequencies around the fundamental radial mode, and 3) the low-frequency region. In all cases a study similar to that developed in the radial case has been carried out; in the low-frequency region the more physical period separation is studied, instead of the frequency separation relevant for acoustic modes. In the first region the results are very similar to those obtained for the radial case, and the plots are therefore not presented here. As in the direct frequency inter-comparison, LOSC presents an oscillating pattern, owing to the use of δp as eigenfunction (see Sect. [sec:dep]). On the other hand, the only noticeable difference, when compared with the radial case in this region, is that the precision among codes using the same integration procedures is slightly higher. The results obtained with the codes using a second-order scheme, for the modes around the fundamental radial mode, are depicted in the top panel of Fig. [fig:14]. POSC remains very close to GraCo, with the other set of codes (FILOU-NOSC-ADIPLS) extending the oscillatory pattern of the top panel of Fig. [fig:12], the largest difference occurring for the mixed modes; this set of codes agrees to within small differences (slightly larger for NOSC). The middle panel of Fig. [fig:14] depicts the differences for a fourth-order integration scheme or second order plus Richardson extrapolation. The precision of the different codes is similar to that given by the second-order scheme without Richardson extrapolation; most of the codes present quite small differences, and once again the largest differences are obtained for the mixed modes. No distinct behaviours depending on the integration scheme are found.
The bottom panel of Fig. [fig:14] is devoted to the differences obtained with GraCo when different options for the solution of the differential equations are chosen. In this frequency region the replacement of r by r/p as integration variable introduces the smallest differences, one order of magnitude lower than those obtained using the codes with the same "degrees of freedom". Using the Lagrangian perturbation to the pressure (δp) as eigenfunction causes some difference, indicating sensitivity to whether or not the Brunt-Väisälä frequency enters the equations. Richardson extrapolation introduces differences lower than those found for the different codes, except in the avoided-crossing zone of mixed modes, where changes comparable to the largest ones are found. Given the rather substantial variations in the bottom panel, and the fact that most codes show the same variation in the top two panels, one might suspect that the dominant source of this variation is in fact in the GraCo results used as reference. The low-frequency region is studied through the period separation (in seconds), illustrated in Fig. [fig:16]. The top panel of this figure shows the differences found using only codes with a second-order integration scheme. ADIPLS and NOSC show similar shifts, of around 0.1 s, relative to GraCo. FILOU is similar at the higher frequencies but shows a small oscillating behaviour in the mode-trapping regions. Finally, POSC presents a quite noisy pattern, varying around zero. Again, the overall grouping of the differences (ADIPLS-NOSC-FILOU and GraCo-POSC) reflects the different values of G. Results obtained using a fourth-order integration scheme or second order plus Richardson extrapolation are compared in the middle panel of Fig. [fig:16]. The values of the differences found in this case are of the same order as in the previous inter-comparison. Here we can distinguish the codes using a fourth-order scheme by the apparently random pattern they present, with differences one order of magnitude larger than the mean values: the differences obtained with the OSCROX and LOSC results are those presenting a noisy behaviour when compared with a second-order-plus-Richardson-extrapolation solution (such as GraCo) as the reference line. POSC also presents some differences in the mode-trapping regions when compared with other codes using the same integration scheme. The bottom panel of Fig. [fig:16] shows the differences obtained with the same code (GraCo) and different choices of the "degrees of freedom". The use of Richardson extrapolation introduces substantial differences, of the order of seconds, with noticeable wiggles in the two mode-trapping regions; as noted previously, this reflects the inadequacy of second-order schemes for high-order modes. The use of the Lagrangian perturbation to the pressure as variable (δp) also introduces substantial differences, particularly around the trapped modes, related to the frequency differences found with GraCo in this region when δp is used (cf. Fig. [fig:9]). Using r/p as integration variable gives a small difference when compared with r, one order of magnitude lower than the differences found between the codes using the same numerical integration schemes. In this section we study the inter-comparisons for the small separations (SS). Two sets of inter-comparisons can be done, one for the ℓ = 0 and 2 modes and another for the ℓ = 1 and 3 modes.
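For reference, the small separations studied below are conventionally built from the degree pairs indicated above (we quote the usual definitions, since the explicit expressions are not reproduced in the text):

\[
\delta\nu_{02}(n) = \nu_{n,0} - \nu_{n-1,2}, \qquad
\delta\nu_{13}(n) = \nu_{n,1} - \nu_{n-1,3}.
\]

These combinations are small compared with the large separation and are mostly sensitive to the stellar core, which is why they carry physical meaning only in the high-frequency (asymptotic p-mode) regime considered next.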
In this case only the high-frequency region is studied, since this is where this quantity has physical meaning; as in the previous section, we concentrate on results for model M4k. The results obtained for the small separation are presented in Fig. [fig:18]. The top panel of this figure shows the differences obtained with the codes solving the set of differential equations with a second-order integration scheme. In the avoided-crossing region some wiggles occur, related to the corresponding frequency variations, but these wiggles are of the order of magnitude of the differences obtained for the high-frequency region. ADIPLS-POSC-GraCo present similar values at high frequencies, and NOSC-FILOU present slightly larger, but still small, differences; these differences are much lower than the expected observational errors for CoRoT. The middle panel shows the same inter-comparison using a fourth-order integration scheme or second order plus Richardson extrapolation. Once again we cannot distinguish the integration scheme used; only LOSC shows an oscillatory pattern at high frequencies, owing to the use of the LAWE or δp (see Sect. [sec:dep]). The order of magnitude of the main differences is the same as that obtained using only a second-order scheme. The wiggles in the avoided-crossing regions are still present, and at high frequencies the differences for model M4k all lie in a narrow range, larger than those obtained when different numerical integration schemes are used. Differences between the values of G actually used by the different codes, although less extreme, account for much of the difference between the computed frequencies. Therefore, for a proper pulsational study, we require that the number of mesh points and their distribution be such as to yield an equilibrium model that satisfies the dynamical equations with sufficient accuracy also in the regions of the star where the physical quantities present rapid variations (e.g.
, the outer layers and μ-gradient zones). In addition, the mesh used in the pulsation calculation must properly resolve the eigenfunctions of the highest-order modes considered. In the present case model M4k, with 4042 points, appears to satisfy these conditions, although the use of a fourth-order integration scheme, or a second-order scheme plus Richardson extrapolation, is still needed in the oscillation calculation; these higher-order schemes give significant improvements compared with the use of a simple second-order scheme. The use of second-order integration plus Richardson extrapolation is not distinguishable, in accuracy and precision, from the use of a fourth-order integration scheme. A correct physical and numerical description of the Brunt-Väisälä frequency is essential when p' is used as eigenfunction; in particular, inconsistencies in the equation of state can have serious effects on the frequencies. Inconsistency between the value of G used in the oscillation calculation and the value used to compute the equilibrium model, within the range of the different values of G found in the literature, may lead to substantial errors in the computed frequencies. We note that the situation is somewhat different if consistent values of G are used in the evolution and oscillation calculations: the effect on the frequencies is then approximately given, according to homology arguments, as a scaling by G^{1/2}. However, since the product GM is known extremely precisely from planetary motion in the solar system, any variation in G should be reflected in a corresponding change in the assumed value of the stellar mass M. If this is the case, and if the model is characterized by a given value of GM (as is typically the case), the effects on the frequencies of changes in G are very small. In further tests more care is required to secure the full consistency of the models: a consistent equation of state should be used, and the value of G should obviously be the same in the equilibrium-model and pulsation calculations; indeed, this strongly argues for including the value of G as one of the parameters in the model file. The main conclusion of this extensive investigation, however, is positive: with a properly resolved equilibrium model, the broad range of oscillation codes likely to be involved in the asteroseismic analysis of data from CoRoT and other major upcoming projects generally give consistent results, well within the expected errors of the observations. Thus, although the remaining problems in the calculation evidently require attention, we can be reasonably confident in our ability to compute frequencies of given models, and hence in the inferences concerning stellar structure drawn from comparing the computed frequencies with the observations. This work was supported by the Spanish PNE under project number ESP 2004-03855-C03-C01, and by the European Helio- and Asteroseismology Network (HELAS), a major international collaboration funded by the European Commission's Sixth Framework Programme. A.M. and J.M. acknowledge financial support from the Belgian Science Policy Office (BELSPO) in the frame of the ESA PRODEX 8 programme (contract C90199). MJPFGM is supported in part by FCT and FEDER (POCI2010) through projects POCI/CTE-AST/57610/2004 and POCI/V.5/B0094/2005. References: Christensen-Dalsgaard, J. & Berthomieu, G.: Theory of solar oscillations. In Solar Interior and Atmosphere, (eds) Cox, A. N., Livingston, W. C. & Matthews, M.
, Space Science Series, University of Arizona Press, p. 401-478 (1991). Christensen-Dalsgaard, J., Arentoft, T., Brown, T. M., Gilliland, R. L., Kjeldsen, H., Borucki, W. J. & Koch, D.: Asteroseismology with the Kepler mission. In Proc. Vienna Workshop on the Future of Asteroseismology, (eds) G. Handler & G. Houdek, Comm. Asteroseism. 150, 350-356 (2007). Gough, D. O.: Comments on helioseismic inference. In Progress of Seismology of the Sun and Stars, (eds) Osaki, Y. & Shibahashi, H., Springer, Berlin, Lecture Notes in Physics, vol. 367, 283-318 (1990).
In order to make asteroseismology a powerful tool for exploring stellar interiors, different numerical codes should give the same oscillation frequencies for the same input physics. Any difference found when comparing the numerical values of the eigenfrequencies will be an important piece of information regarding the numerical structure of the codes. The ESTA group was created to analyse the non-physical sources of these differences. The work presented in this report is part of Task 2 of the ESTA group; basically, it is devoted to testing, comparing and, if needed, optimizing the seismic codes used to calculate the eigenfrequencies that are finally compared with observations. The first step in this comparison is presented here. The oscillation codes of nine research groups in the field have been used in this study. The same physics has been imposed for all the codes in order to isolate the non-physical dependence of any possible difference. Two equilibrium models with different grids, of 2172 and 4042 mesh points, have been used, the latter model including an explicit modelling of semiconvection just outside the convective core. Comparing the results for these two models illustrates the effect of the number of mesh points and their distribution in particularly critical parts of the model, such as the steep composition gradient outside the convective core. A comprehensive study of the frequency differences found for the different codes is given as well. These differences are mainly due to the use of different numerical integration schemes, and the number of mesh points and their distribution are crucial for interpreting the results. The use of a second-order integration scheme plus Richardson extrapolation provides results similar to those of a fourth-order integration scheme. The proper numerical description of the Brunt-Väisälä frequency in the equilibrium model is also critical for some modes; this influence depends on the set of eigenfunctions used for the solution of the differential equations. An unexpected result of this study is the high sensitivity of the frequency differences to the inconsistent use, in the oscillation codes, of values of the gravitational constant (G) lying within the range of the experimentally determined ones but differing from the value used to compute the equilibrium model. This effect can produce, for a given equilibrium model, differences substantially larger than those resulting from the use of different codes or numerical techniques; the actual differences between the values of G used by the different codes account for much of the frequency differences found here.
Neural network models have become widely adopted as a means to implement many machine learning algorithms, and represent the state of the art for many image and speech recognition applications. As the application space for such networks evolves beyond workstations and data centres towards low-power mobile and embedded platforms, so too must their design methodologies. Mobile voice recognition systems, such as Apple's Siri, currently remain too computationally demanding to execute locally on a handset; instead, such applications are processed remotely and, depending on network conditions, are subject to variations in performance and delay. Neural networks are also finding application in other emerging areas, such as autonomous vehicle localization and control, where meeting power and cost requirements is paramount. Neural networks, which replace manually engineered computer algorithms, must be trained instead of programmed. Training involves an optimization process in which network weights are adjusted with the objective of minimizing the output error; these adjustments often involve a variation of the back-propagation method. While these training methods have been automated, much of the design process and choice of network hyper-parameters (e.g., number of hidden layers, nodes per layer, or choice of activation functions) has historically been relegated to manual optimization. This relies on human intuition and expert knowledge of the target application, in conjunction with extensive trial and error. The process is difficult, considering the vast network hyper-parameter space, which includes the number of convolutional or hidden layers, the quantity of nodes in each layer, the type of nonlinear activation functions, and many other parameters that depend on the system at hand. In addition, the problem with manual hyper-parameter tuning is that there is no guarantee that the process will result in optimal configurations. Not only does the diversity of possible hyper-parameters create extremely large design spaces, but time-intensive training phases on comprehensive data sets must also be performed prior to evaluating candidate solutions. This significant computational overhead renders exhaustive searches intractable, and necessitates the use of automated tools to intelligently explore the solution space while limiting the number of candidate models that must be trained and evaluated. With the proliferation of machine learning on embedded and mobile devices, application designers must now deal with stringent power and cost requirements. These added constraints transform hyper-parameter design into a multi-objective optimization problem in which no single optimal solution exists.
Instead, the set of points which are not dominated by any other solution forms a Pareto-optimal front. Simply put, this set includes all solutions for which no other is objectively superior in all criteria. Formally, with two objectives f1 and f2 to be minimized, a solution a is said to dominate another solution b if f1(a) <= f1(b) and f2(a) < f2(b), or f1(a) < f1(b) and f2(a) <= f2(b); the set of points which are not dominated by any other solution constitutes the Pareto-optimal front. This paper presents an automated method that effectively trains a neural network to design other neural networks, optimizing for both performance and implementation cost. This meta-heuristic exploits machine learning to predict the performance of candidate solutions (modelling the response surface); the algorithm learns which points to explore and avoids the lengthy computations involved in evaluating solutions which are predicted to be unfit. This use of machine learning dramatically reduces the run-time of the proposed algorithm, which can ultimately reduce product design time, application time-to-market, and overall costs. While there are many different neural network models, the multi-layer perceptron (MLP) is a well-known form, which rose in popularity with the advent of the back-propagation training algorithm. Characterized by a series of cascaded hidden layers, MLPs have a single feed-forward path from the input to the output layer. Layers are also fully connected: each node has a connection to each and every node in the adjacent layers. When illustrated as a directed graph (shown in Figure [fig:mlp]), graph connections represent signals being multiplied by corresponding weights, and graph nodes the summation of all inputs followed by nonlinear activation functions. The evaluation of such activation elements, commonly hyperbolic tangent or sigmoid functions, is comparatively simple; multiply-accumulate operations and memory accesses therefore remain the dominant tasks in terms of cost. While structurally simple, determining an optimal MLP configuration is a difficult task due to the large number of parameters inherent to even the simplest designs.
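The dominance test and the resulting front extraction translate directly into code. A minimal sketch for two objectives to be minimized (classification error and implementation cost); the function names are our own choices:

```python
def dominates(a, b):
    """True if solution a = (error, cost) Pareto-dominates b: a is no worse
    than b in both objectives and strictly better in at least one."""
    return ((a[0] <= b[0] and a[1] < b[1]) or
            (a[0] < b[0] and a[1] <= b[1]))

def pareto_front(points):
    """Subset of (error, cost) points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```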
For example, given the simple MLP shown in Figure [fig:mlp1], with N_in inputs, a single hidden layer with N_hid nodes, and N_out nodes in the output layer, N_in*N_hid + N_hid*N_out multiply-accumulate operations must be evaluated in each inference pass, and an equal number of weights must be accessed in memory. Starting from this initial configuration, a designer may then choose to include a second hidden layer, as illustrated in Figure [fig:mlp2]; the first layer will then act as a feature extractor, and the newly added layer will process only the limited number of features generated by the first. This alteration allows a reduction in the dimension of the first hidden layer, which reduces the total number of connections to the input layer. However, it also increases the network depth and results in a cost penalty, in terms of the additional memory accesses and arithmetic operations associated with the newly added layer. The key problem demonstrated is that, even for these two simple configurations, there is no systematic way to determine which design yields superior performance without having trained and evaluated both. Even when the designer has a priori knowledge of the application, determining the optimal hyper-parameters is non-intuitive, especially for deep networks. A concrete example of the described problem would be the design of an embedded system to recognize handwritten numerical characters (such as those contained in the MNIST dataset). If the implementation goal is throughput of categorized digits, and a penalty is incurred when a character is misclassified (perhaps requiring manual intervention), then a smaller network that requires fewer clock cycles to evaluate may still deliver an overall throughput exceeding that of more accurate alternatives. In such a scenario, the engineer would require knowledge of the cost and performance of all Pareto-optimal solutions in order to meet all requirements at the lowest implementation cost. Even more complex structures are those of convolutional neural networks (CNNs), which have demonstrated state-of-the-art results in image recognition. The use of convolutional layers further complicates the design process, as they introduce a separation between the memory and processing requirements. As illustrated in Figure [fig:cnn_graph], convolutional layers are composed of trainable filters (five 3-by-3 kernels are shown). CNNs have the advantage of reduced memory requirements, since each convolutional filter reuses the same kernel weights for all input values; however, this comes at the expense of increased processing demands, each convolutional filter requiring n^2 multiplication operations for each input value (where the convolutional kernels are sized n-by-n).
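The operation counts just described are easy to reproduce. The sketch below counts multiply-accumulate operations and weights for fully connected and convolutional layers; the layer sizes in the usage examples are illustrative values of our own choosing (stride-1, 'same'-padded convolution assumed), not configurations from the paper:

```python
def mlp_macs(layer_sizes):
    """Multiply-accumulate operations per inference pass (equal to the
    number of weights, ignoring biases) for a fully connected network
    given as [n_inputs, n_hidden..., n_outputs]."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def conv_macs(height, width, in_channels, n_filters, k):
    """Multiplications per pass for a convolutional layer: each of the
    n_filters k-by-k kernels is applied at every spatial position of
    every input channel."""
    return height * width * in_channels * n_filters * k * k

def conv_weights(in_channels, n_filters, k):
    """Stored kernel weights: reused at every spatial position."""
    return in_channels * n_filters * k * k

# Depth trade-off for a fully connected design (784 inputs, 10 outputs):
print(mlp_macs([784, 64, 10]))      # 50816 MACs and weights
print(mlp_macs([784, 32, 32, 10]))  # 26432 MACs: extra layer, ~half the cost

# Five 3-by-3 kernels on a 28-by-28 single-channel input, as in the figure:
print(conv_macs(28, 28, 1, 5, 3))   # 35280 multiplications per pass...
print(conv_weights(1, 5, 3))        # ...but only 45 stored weights
```

The last two calls make the memory/processing separation concrete: the convolutional layer is arithmetic-heavy but weight-light, which is why down-sampling after it matters so much for the following layers.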
In a CNN, each filter produces a processed copy of the input data (as illustrated in Figure [fig:cnn_graph]); without any form of down-sampling, this greatly increases the computational complexity of the following layers. An example of an efficient down-sampling method is the inclusion of max-pooling (or mean-pooling) after convolutional layers. As illustrated in Figure [fig:cnn_graph], max-pooling refers to partitioning the filtered images into non-overlapping k-by-k regions, each outputting the maximum pixel value within it (alternatively, mean-pooling would output the mean value). All of these additional parameters further increase the design-space dimensionality, forcing a designer to choose not only how many filters to use in each layer, but also the kernel and pooling sizes. This greatly affects both the performance and the computational complexity of the resulting networks. The key contributions of this work are as follows: * We introduce a design space exploration method, which employs machine learning techniques to predict classification accuracy, to automate the design of neural network hyper-parameters. The method is then validated with MLP and CNN designs targeting the CIFAR-10 and MNIST image recognition datasets. * We demonstrate that multi-objective hyper-parameter optimization can successfully be used as a method to reduce implementation cost (computational complexity). This work exists at the intersection of two fields: automated hyper-parameter optimization, and reduction of neural network computational complexity. To our knowledge, this work is the first to apply automated hyper-parameter optimization as a method of reducing computational complexity. The design of neural network hyper-parameters has long been considered unwieldy and unintuitive, and as a consequence ideal for automated hyper-parameter optimization techniques. However, previous works have been developed with the sole purpose of optimizing performance, with little regard to the resulting computational resource requirements. Two types of sequential hyper-parameter optimization algorithm have been presented previously; in both cases experimental results compared positively to human-designed alternatives. These findings were echoed by work demonstrating that a random search of a large design space, which contains areas that may be considered less promising, can at times exceed the performance of manually tuned hyper-parameters. Positive results have also been presented where similar algorithms were extended with a method to extrapolate the shape of learning curves during training, so as to evaluate fewer epochs for unfit solutions and reduce design time.
Since parameters which perform favourably for a large network may not be optimal for a smaller alternative, and since the most important hyper-parameters (those with the greatest impact on resulting performance) may vary for different data sets, the need for multi-objective optimization, especially where low-power platforms are targeted, is clear. Additionally, an automated machine-learning-based method has previously been applied to the multi-objective optimization problem of application-specific processor design, a field which also involves high-dimensionality solution spaces, and was demonstrated to identify Pareto-optimal architectures efficiently. There also exists a body of work attempting to reduce the computational complexity of neural network models through weight quantization or the removal of extraneous node connections (pruning); examples include methods that construct networks which reduce the need for multiplications, or that modify already-trained networks in order to minimize the number of non-zero weights. In one such work, the authors attempted to reduce the computational resources required when evaluating fully connected, as well as convolutional, networks by representing all weights as binary values of +1 or -1. Doing so reduces the number of multiplication operations performed in each forward pass; however, the requirement to store floating-point weights during training remains. That work also compared trained binary-weighted networks to traditional equivalents with equal layer dimensions, and similar performance was demonstrated; however, only very large network dimensions were considered, and further benefits may be obtained from smaller, optimized architectures. Instead of restricting weights to specific values, a pruning method has been presented in which a network can be trained while reducing the number of non-zero weights. The resulting compressed networks have lower bandwidth requirements and require fewer multiplications, owing to most weights being zero; reported results demonstrated up to 70% reductions in the number of floating-point operations required for various networks, with little to no reduction in performance. Such pruning methods are not mutually exclusive with the use of design space exploration tools, and could very well be applied in conjunction with the presented methodology in order to compress an already-optimized network configuration. This work presents a design space exploration approach that searches for Pareto-optimal hyper-parameter configurations and has been applied to both MLP and CNN topologies. The design space is confined to: the number of fully connected and convolutional layers, the number of nodes or filters in each layer, the convolution kernel sizes, the max-pooling sizes, the type of activation function, and the network training rate. These degrees of freedom constitute vast design spaces, and all strongly influence the cost and performance of the resulting networks.
For design spaces of such size, performing an exhaustive search is intractable (designs with astronomically many possible solutions are not uncommon); we therefore propose to model the response surface using an MLP for regression, where the set of explored solution points is used as its training set. The presented meta-heuristic is then used to predict the performance of candidate networks; only points which are expected not to be Pareto-dominated are explored. A flowchart describing the implementation is shown in Figure [alg:dse]. The general steps performed during the design space exploration can be broken down as follows (a hedged code sketch of this loop, together with the input encoding, is given at the end of this subsection). 1. Sample the next candidate point from a Gaussian distribution centred around the previously explored solution (or sample a random point for the first iteration). 2. Predict the candidate solution's performance using the regression neural network, and calculate its implementation cost using the weighted cost model (cf. Table [table:costs]). 3. Compare the predicted results to the current Pareto-optimal front. If the candidate is predicted to be Pareto-dominated, it is accepted only with a small probability; otherwise it is accepted. If rejected, the previously explored solution is rolled back and the algorithm returns to step 1. If accepted, the candidate model is trained and tested, and the evaluated results are added to the training set of the regression MLP (which is then retrained). Finally, if the training-set size exceeds the maximum number of desired iterations, the process ends; otherwise, the algorithm returns to step 1 and a new solution is sampled. The sampling strategy proposed is an adaptation of the Metropolis-Hastings algorithm: in each iteration a new candidate is sampled from a Gaussian distribution centred around the previously explored solution point. Performing this random walk limits the number of samples chosen from areas of the design space that are known to contain unfit solutions, thereby reducing wasted exploration effort. However, exploring an inferior solution may eventually lead to superior ones, so the probability of accepting such a solution must remain greater than zero; this also ensures that the training set for the regression MLP remains varied. All experimental results in Section [sec:experimental_results] were obtained with a fixed, non-zero acceptance probability. We choose to model the response surface using an MLP with an input set representative of the network hyper-parameters and a single output trained to predict the error of the corresponding networks. This MLP is composed of two hidden layers and a linear output layer; experimental results showed that sizing the hidden layers in proportion to the number of input nodes provided the best performance. The network inputs are formed as arrays characterizing all explored dimensions. Integer input parameters (such as the number of nodes in a hidden layer, or the size of the convolutional kernels) are scaled by the maximum possible value of the respective parameter, resulting in normalized variables between 0 and 1.
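The exploration loop and the input encoding described above can be sketched as follows. This is a simplified illustration, not the authors' implementation: an off-the-shelf scikit-learn MLPRegressor stands in for the two-hidden-layer regression network; `sample_neighbour`, `train_and_evaluate` and `cost` are user-supplied stand-ins for the Gaussian walk, a full training run, and the weighted cost model; the acceptance probability, the parameter ordering and the maxima are hypothetical; and `dominates`/`pareto_front` are the helpers sketched earlier.

```python
import random
from sklearn.neural_network import MLPRegressor

P_ACCEPT = 0.05  # hypothetical probability of keeping a predicted-dominated point

def encode(widths, activation, learning_rate,
           max_nodes=100, activations=("tanh", "sigmoid")):
    """Normalized regressor input: scaled layer widths, one-hot activation
    choice, then the learning rate (illustrative ordering and maxima)."""
    one_hot = [1.0 if activation == a else 0.0 for a in activations]
    return [w / max_nodes for w in widths] + one_hot + [learning_rate]

def explore(sample_neighbour, train_and_evaluate, cost, n_iters):
    predictor = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
    X, y, evaluated, current = [], [], [], None
    for _ in range(n_iters):
        candidate = sample_neighbour(current)   # Gaussian walk; random at start
        if X:  # predict only once the regressor has been fitted at least once
            pred = (predictor.predict([encode(*candidate)])[0], cost(candidate))
            if (any(dominates(e, pred) for e in evaluated)
                    and random.random() > P_ACCEPT):
                continue                        # reject and roll back
        err = train_and_evaluate(candidate)     # expensive: full training run
        evaluated.append((err, cost(candidate)))
        X.append(encode(*candidate)); y.append(err)
        predictor.fit(X, y)                     # retrain the response surface
        current = candidate
    return pareto_front(evaluated)
```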
For each parameter that represents a choice where the options have no numerical relation to each other (such as whether ReLU or sigmoid activation functions are used), an input node is added per option, and the node that represents the chosen option is given an input value of 1, all others 0. For example, a solution with two hidden layers of 20 nodes each (assuming a maximum of 100), using ReLU (with the other option being sigmoid functions) and with a learning rate of 0.5, would be presented as the input values [0.2, 0.2, 1, 0, 0.5]. The candidate networks themselves were trained with a standard classification loss function. Finally, for all except the reduced design problem, batch normalization was included after each network layer in order to smooth the response surfaces. [table:dse_parameters] In order to evaluate the efficiency with which the method approximates the true Pareto-optimal front, we first compare experimental results to those of an exhaustive search targeting the design of models for the MNIST dataset. In order to make an exhaustive search tractable, we limited the design space to only the values outlined in the first section of table [table:dse_parameters]. This resulted in a moderate design space, all of whose solutions were trained and tested. The results of executing the algorithm for 200 iterations (each iteration represents a design training, evaluation, and model update pass) are plotted alongside those of the exhaustive search in figure [fig:mnist_exhaustive]. These results demonstrate that the true Pareto-optimal front is very closely approximated by the presented method, while requiring very few solutions to be evaluated. It should be noted, however, that training is by nature non-deterministic, and training the same network twice may yield different performances; consequently, the generated results (shown in figure [fig:mnist_exhaustive]) dominate those of the exhaustive search at several points. The figure also demonstrates that the majority of solutions evaluated by the algorithm remain within a close vicinity of the true Pareto-front, successfully avoiding Pareto-dominated areas. Since the true Pareto-optimal front is known for this restricted case, we use the average distance from reference set (ADRS) as the metric of evaluation, quantifying how closely the approximated set matches the exact one: ADRS is the mean, taken over the solutions of the exact front, of the normalized distance to the closest solution of the approximate front. The evolution of the approximated Pareto-optimal set discovered by the algorithm, as a function of iterations completed, is plotted in figure [fig:mnist_mlp_reduced_results]. Evident from this figure is that the front obtained by the algorithm progressively approaches the true Pareto-optimum while evaluating only a comparatively small number of solutions. The proposed method identifies the optimal solutions with high accuracy, achieving low ADRS values of 7.1% after completing only 50 iterations, 5.0% after 100, and 3.6% after 200. The results in table [table:adrs] also demonstrate fast convergence, with the largest changes occurring over the first 30 iterations; further execution yields more gradual changes as the approximated Pareto-front asymptotically approaches the exact one. Besides decreasing design time (both in terms of computation and man-hours), these results demonstrate that by exploring less than 1% of the design space, the method can closely predict which hyper-parameters are required for optimal solutions.
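A compact sketch of the ADRS computation follows, using the relative-gap distance that is common in the design-space-exploration literature; the paper's exact distance function did not survive extraction, so this form is an assumption.

```python
def adrs(exact_front, approx_front):
    """Average Distance from Reference Set between two (error, cost) fronts."""
    def gap(ref, pt):
        # normalized worst-case objective gap of `pt` relative to reference `ref`
        return max(0.0,
                   (pt[0] - ref[0]) / ref[0],
                   (pt[1] - ref[1]) / ref[1])
    return sum(min(gap(r, p) for p in approx_front)
               for r in exact_front) / len(exact_front)

exact  = [(0.010, 100.0), (0.015, 60.0), (0.030, 20.0)]   # illustrative fronts
approx = [(0.011, 100.0), (0.016, 64.0), (0.030, 22.0)]
print(f"ADRS = {adrs(exact, approx):.1%}")                # ~9% for these points
```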
In addition, the generated set of Pareto-optimal configurations exposes the trade-offs application designers may want to make for cost-constrained systems, allowing more informed design choices to be made.

Table [table:adrs]. ADRS of the explored set and execution time versus iterations completed:

    iterations   ADRS of explored set   execution time
    10           30%                    2.0 min
    20           21%                    4.1 min
    30           8.4%                   6.4 min
    50           7.1%                   11.2 min
    70           6.1%                   16.4 min
    100          5.0%                   25.4 min
    150          4.5%                   45.2 min
    200          3.6%                   70.1 min

In order to evaluate the performance of the heuristic method on much larger designs, the algorithm was run on the remaining spaces described in table [table:dse_parameters] for both fully-connected and convolutional design problems. The total execution times (on an Intel _Xeon E5-1603_ with 8 GB of RAM and an Nvidia _Tesla K20c_) for the design examples are listed in table [table:runtime], and the corresponding Pareto-optimal results (plotted as functions of the number of iterations completed) are shown in figure [fig:experimental_results]. In these plots, the colour scheme presents solutions with low error in blue and low cost in red. Even though the outputs cannot be directly compared to the true results of an exhaustive search (the CIFAR-10 example design space is far too large for that), the trends discussed in section [sec:exhaustive] are mirrored by all plots in figure [fig:experimental_results].

Table [table:runtime]. Algorithm execution time and mean solution evaluation time per design target:

    target           algorithm execution time   mean solution evaluation time
    MNIST (MLP)      3.6 h                      30 s
    MNIST (CNN)      18 h                       2 min
    CIFAR-10 (CNN)   66 h                       8 min

Because of the intractable nature of such exhaustive searches, the generated results are not expected to always predict the true Pareto-optimal fronts. Instead, automated exploration must only match, or exceed, the fitness of manually designed networks in order to provide cost reductions. In comparison with manually designed architectures in the literature, the Pareto-optimal results in figure [fig:mnist_cnn_results] include points that offer performance equivalent to published designs, with implementation costs as low as 25% of their manually designed counterparts (when weighted with the same cost model detailed in table [table:costs]). This demonstrates that the proposed method can produce designs with vastly reduced cost requirements and the same level of performance as manual design approaches. Furthermore, the Pareto-optimal results also expose data points with substantial cost savings penalized only by a small decrease in performance. Given that increasing accuracy by only 0.01% requires a doubling of the implementation cost (for the designs shown in figure [fig:mnist_cnn_results]), these additional points are invaluable for application platforms with extremely stringent cost budgets. In order to validate the assumption that a neural network can be trained through regression to model the response surface with sufficient accuracy, the prediction error is plotted in figure [fig:rsm_error]. This graph plots the absolute value of the error (the percentage difference between the predicted and the evaluated performance of each explored solution) for each of the 500 algorithm iterations performed during the CNN design example (with results in figure [fig:mnist_cnn_results]). The narrow error spikes, which are expected, occur at points where the algorithm encounters previously unexplored areas.
As these solutions are added to the training set, the prediction error decreases as the response surface model is updated, and the spikes begin to occur less frequently. Outside of these sparse peaks, the prediction accuracy is exceptionally high; the mean error over the last 95 iterations (all points after the last spike) is only 0.35%.

A method to automate the multi-objective optimization of neural network hyper-parameters, reducing both algorithm error and computational complexity, was presented. When compared to the results of an exhaustive search on a restricted design space targeting the MNIST dataset, the presented method was shown to dramatically reduce the computation time required for convergence to a Pareto-optimal solution set. A low ADRS of 5% was achieved after only 100 iterations; in practice, fewer solution evaluations may be required, with correspondingly lower execution times than those in table [table:runtime]. Furthermore, the scalability of the method was demonstrated on larger design spaces for both fully-connected and convolutional models, targeting the CIFAR-10 dataset as well. Even when evaluated on massive design spaces, the presented method was found to converge efficiently to a diverse Pareto-optimal front. Not only was the automation of the hyper-parameter design process demonstrated to be both feasible and time-efficient, but, when compared to manually designed networks from the literature, the automated technique produced results with near-identical performance while reducing the associated costs severalfold. As machine-learning applications make further inroads into mobile and embedded market segments, the need to reduce time-to-market and costs will necessitate the use of such automated design methods.

This work was supported in part by a scholarship, as well as by equipment donations from the Nvidia Corporation.

M. Courbariaux, Y. Bengio, and J.-P. David. BinaryConnect: training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems 28, pages 3123-3131. Curran Associates, Inc., 2015.
T. Domhan, J. T. Springenberg, and F. Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In Proc. of the Twenty-Fourth Int. Joint Conf. on Artificial Intelligence, IJCAI, pages 3460-3468, July 2015.
S. Ishibushi et al. Statistical localization exploiting convolutional neural network for an autonomous vehicle. In Industrial Electronics Society, IECON 2015 - 41st Annual Conf. of the IEEE, pages 1369-1375, Nov 2015.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097-1105. Curran Associates, Inc., 2012.
T. Okabe, Y. Jin, and B. Sendhoff. A critical survey of performance indices for multi-objective optimisation. In Evolutionary Computation, 2003. The 2003 Congress on, volume 2, pages 878-885, Dec 2003.
S. R. Young et al. Optimizing deep learning hyper-parameters through an evolutionary algorithm. In Proc. of the Workshop on Machine Learning in High-Performance Computing Environments, MLHPC '15, pages 4:1-4:5, New York, NY, USA, 2015.
Artificial neural networks have recently gone through a rise in popularity, achieving state-of-the-art results in various fields, including image classification, speech recognition, and automated control. Both the performance and the computational complexity of such models are heavily dependent on the design of characteristic hyper-parameters (e.g., the number of hidden layers, nodes per layer, or choice of activation functions), which have traditionally been optimized manually. With machine learning penetrating low-power mobile and embedded areas, the need to optimize not only for performance (accuracy) but also for implementation complexity becomes paramount. In this work, we present a multi-objective design space exploration method that reduces the number of solution networks trained and evaluated through response surface modelling. Given design spaces whose size can easily become astronomical, manually designing a near-optimal architecture is unlikely, as opportunities to reduce network complexity while maintaining performance may be overlooked. This problem is exacerbated by the fact that hyper-parameters which perform well on specific datasets may yield sub-par results on others, and must therefore be designed on a per-application basis. In our work, machine learning is leveraged by training an artificial neural network to predict the performance of future candidate networks. The method is evaluated on the MNIST and CIFAR-10 image datasets, optimizing for both recognition accuracy and computational complexity. Experimental results demonstrate that the proposed method can closely approximate the Pareto-optimal front, while exploring only a small fraction of the design space.
It is becoming increasingly apparent that biological evolution of organisms' life history traits can occur on a time scale similar to that of their population dynamics, and that such rapid evolution can shape the ecological dynamics of interacting species. In particular, predator-prey and host-parasite interactions can be dramatically affected by rapid genetic variation within the populations, and we need to take this fact into account in food-web and epidemiological models. For instance, it has been demonstrated that rapid genetic variation can enlarge the period and modify the phase relations of predator-prey cycles. Recently, using a parsimonious eco-evolutionary predator-prey model, Morozov _et al._ have shown that the interplay between fast genetic variation within the prey and the selectivity of consumption of the prey by its predator can suppress large-amplitude oscillations in the system. Interestingly, the reported stabilization could occur even in the case when the carrying capacity of the prey is infinitely large; in contrast, the same system with non-changing life history traits would be globally unstable. Thus, rapid evolution could potentially provide an extra mechanism of persistence for trophic interactions in a highly eutrophic environment (known also as top-down control). This mechanism would answer a long-standing open question in theoretical ecology about possible solutions of the paradox of enrichment: what factors can stabilize predator-prey interactions when the supply of resource for the prey is large? The previous results on stabilization in the eco-evolutionary predator-prey model with prey structuring in highly eutrophic ecosystems were obtained by directly simulating the underlying model equations for particular parameterisations of the model functions. Obviously, this cannot be considered a rigorous proof of stability. Furthermore, considering other functional parameterizations of the inheritance kernel, as well as of the selectivity of predation, could potentially alter results obtained only by simulation. Thus, the central question is whether or not the main conclusions on the importance of within-population structuring for persistence and stability are generic. The main objective of this paper is to explore in detail the mathematical properties of the eco-evolutionary predator-prey model. Our analytical investigation is centred around two main issues: the existence of non-trivial stationary state(s) of the model (assuring the coexistence of the prey and predator), and the stability of the stationary state(s) for a general family of functions (kernels) describing genetic inheritance within the prey. We also establish global existence and positivity of solutions of the model. Mathematically, we consider a system of integro-differential equations (a generalization of the model cited above) in which z(x,t) describes the distribution of prey individuals across cohorts according to a certain life history trait x, which can vary between x_1 and x_2; hence the integration of z(x,t) over [x_1,x_2] gives the total amount of prey. The contribution to the growth rate of cohort x due to the reproduction of cohorts with life traits in the interval [y, y+dy] is determined by the inheritance kernel K(x,y). The mortality of prey in the model is due to predation only (i.e. we neglect natural mortality), which is achieved via a standard Holling type II parametrization
, in which the parameter β characterises the saturation of predation at high densities of the prey. The vulnerability of the prey to predation is cohort-dependent and is described by the function a(x), i.e. there is prey selectivity by the predator according to the life trait. The constant k is the food conversion coefficient describing the efficiency of transformation of the consumed prey biomass, and the predator is subject to a natural mortality μ, which is assumed to be constant. Model (eq:zoo)-(eq:fish) can be considered a standard Rosenzweig-MacArthur model in which genetic structuring and rapid evolution have been incorporated. Our model is similar to earlier work in which the evolution of cohorts of predator and prey in a chemostat was considered; unlike that model, however, here we allow mutations of the clones. In particular, the kernel can be constant, which models the scenario of perfect genetic mixing within the population. It is important to mention that we intentionally consider the carrying capacity of the prey population to be infinitely large, i.e. we model predator-prey interactions in a highly eutrophic environment. It is well known that the same model without population structuring is globally unstable and species persistence is impossible; thus the main question explored using model (eq:zoo)-(eq:fish) is whether or not population structuring and rapid evolution (as well as selectivity of the predator) can eventually stabilize this otherwise unstable system. Note that by rapid evolution we understand here variation in the cohort distribution which occurs on the same time scale as the ecological dynamics (i.e. the variation of the total biomass of prey and predator). Unlike Morozov _et al._, we explore the model for arbitrary parametrisations of the vulnerability to predation and of the inheritance kernel; we only make natural positivity and boundedness assumptions regarding the model parameters and kernels. Global existence, coexistence, and positivity of solutions of model (eq:zoo)-(eq:fish) can then be readily established, for example using previously developed methods. It is worthwhile to point out that, unlike in the case of standard "physiologically structured" population models (see for example the classical size-structured models and the references therein), in our model individuals are structured with respect to a genetically inherited property, which implies that there is no transport term in the equation. This allows us to work in the framework of bounded operators. On the other hand, recruitment is represented by an integral operator, which is of infinite rank in general, and eigenvalue problems involving integral operators are often difficult to analyse; see for example recent papers in which structured population models with distributed recruitment processes (modelled by an integral operator) were analysed. We also note that even though the operators arising from the model equations are bounded, positivity of solutions cannot be immediately established from, for example, the variation of constants formula; this is due to the negative feedback between the predator and prey populations. In the next two sections we show the existence of the nontrivial stationary state (coexistence state) of the model for a particular class of kernels, namely separable kernels of finite rank; then we derive the characteristic equations to obtain the stability condition of the nontrivial stationary state. (A sketch of the governing equations, reconstructed from the quantities introduced above, is given below for reference.)
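The display equations of system (eq:zoo)-(eq:fish) did not survive extraction, so the following LaTeX block is a hedged reconstruction assembled from the quantities introduced above (z, K, a, β, k, μ). The prey reproduction rate, denoted r, is inferred from formulas appearing later in the stability analysis, and the exact placement of constants is an assumption rather than the authors' verbatim system.

```latex
\begin{align}
\frac{\partial z(x,t)}{\partial t} &=
  r\int_{x_1}^{x_2} K(x,y)\, z(y,t)\,\mathrm{d}y
  \;-\; \frac{f(t)\,a(x)\,z(x,t)}{1+\beta Z(t)},            \tag{eq:zoo}\\
\frac{\mathrm{d}f(t)}{\mathrm{d}t} &=
  k\,f(t)\int_{x_1}^{x_2} \frac{a(x)\,z(x,t)}{1+\beta Z(t)}\,\mathrm{d}x
  \;-\; \mu\, f(t),                                          \tag{eq:fish}\\
Z(t) &= \int_{x_1}^{x_2} z(x,t)\,\mathrm{d}x. \nonumber
\end{align}
```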
Using the obtained generic condition we consider the stability property in a few particular, biologically relevant cases, for instance the case where the kernel is constant. Finally, we address the important question of the necessity of predation selectivity for the stabilization of model (eq:zoo)-(eq:fish), i.e. whether or not stability of the coexistence state is possible when the vulnerability to predation is constant. We summarize our results in the discussion section. Note that model (eq:zoo)-(eq:fish) always admits the trivial stationary state. Throughout the paper we assume that the kernel is strictly positive, but for the sake of completeness we note that if the kernel is not strictly positive then our model may admit semi-trivial stationary states in which the prey distribution satisfies an integral equation; for example, if the kernel vanishes for every x on some sub-interval, a distribution supported outside that interval will be, together with a suitable predator level, a non-trivial stationary state. In the next two sections we discuss the existence of strictly positive (coexistence, or 'non-trivial') stationary states of model (eq:zoo)-(eq:fish). We start with the relatively simple case of a separable kernel, in which the integral operator describing the recruitment process is of rank one (at most); then we discuss the case of a finite-rank approximation of the kernel and, finally, for a general kernel, we reformulate the steady state problem as an eigenvalue problem for a bounded operator. Assume first that the kernel is separable, i.e. a product of two single-variable functions, and look for a strictly positive stationary solution. Introducing suitable integrated quantities and assuming they are positive, we obtain from the stationary state equations ([sseq1]) a set of scalar equations; one can easily see that a unique positive solution of ([sseq1])-([sseq2]) exists if and only if the four-dimensional scalar system ([scalar1])-([scalar4]) has a strictly positive solution. From ([scalar2])-([scalar4]) we obtain a unique positive solution, as long as a solvability condition holds; substituting this into equation ([scalar3]) yields a further unique positive value, and the remaining unknowns then follow from ([scalar1])-([scalar4]). Hence the model admits a unique positive stationary state if condition ([sscond1]) is satisfied, for instance when the saturation is not very large and the vulnerability to predation is large enough. The approach above may be extended to the more general case of a finite-rank approximation of the kernel by non-negative functions. With the quantities defined in ([z1z2]) and suitable new variables, the stationary state equations ([sseq3]) of model (eq:zoo)-(eq:fish) lead to a finite-dimensional scalar system. Condition ([sscond1]) for a separable kernel suggests that for small enough values of the saturation parameter one should be able to establish existence (but not necessarily uniqueness) of a positive stationary state. In particular, for vanishing saturation the first and third equations of ([sseq1sep]) together yield a relation with which the nonlinear scalar system ([sseq2sep]) can be cast in the form ([sseq3sep]). Once a non-negative solution of system ([sseq3sep]) is found, it can be substituted into equation ([sseqsepf]) to determine the predator level, which, using the second equation of ([sseq1sep]), determines a unique positive prey distribution.
To establish the existence of a positive solution of system ([sseq3sep]) we utilise an idea which was recently employed for infinite-dimensional problems. The key idea is to recast the non-linear problem ([sseq3sep]) as an eigenvalue problem for a parameterised family of non-negative matrices. It follows from Perron-Frobenius theory that the spectral radius of such a matrix is an eigenvalue with a corresponding non-negative eigenvector, and the spectral radius depends continuously on the parameter. It follows, for example from Gershgorin's circle theorem, that the spectral radius tends to zero as the parameter grows. Hence, if there exists a parameter value at which the spectral radius is at least one, then there exists a value at which the spectral radius equals one exactly, and therefore one is an eigenvalue with a corresponding non-negative eigenvector, which we normalize appropriately. Note that the existence of such a parameter value depends on the coefficients of the finite-rank approximation of the kernel. In particular, if the kernel is concentrated around a diagonal point, then again Gershgorin's circle theorem implies the existence of the required parameter value. Also note that if the kernel is such (typically strictly positive) that it can be approximated with strictly positive coefficients, then the matrix above is positive, and the Perron-Frobenius theorem guarantees the existence of a strictly positive eigenvector; moreover, for a positive matrix the intermediate value theorem yields a parameter value at which the spectral radius equals one. We summarize our findings in the following lemma.

Lemma. Under the positivity assumptions above on the finite-rank kernel, model (eq:zoo)-(eq:fish) admits a strictly positive stationary state for vanishing saturation.

Next we rewrite system ([sseq4sep])-([sseq6sep]) in a more economic form, well-defined (via the left-hand sides of those equations) and continuously differentiable. We would like to apply the implicit function theorem to show that if the equation has a strictly positive solution at vanishing saturation, then it also has a strictly positive solution for small positive values of the saturation parameter. To this end we compute the Jacobian at the unperturbed solution: if the determinant of the Jacobian matrix evaluated at the strictly positive stationary state is non-zero, then by the implicit function theorem a strictly positive stationary state also exists for small enough values of the saturation parameter. Note that the value of the determinant depends on the particular finite-rank approximation of the kernel.

We briefly discuss here how the steady state problem can be formulated in the case of a general (i.e., non-separable) kernel. This case is challenging from the mathematical point of view, since the integral operator describing the recruitment process is of infinite rank in general. For a positive stationary state we define
$$\kappa^*(x)=\frac{f^*a(x)z^*(x)}{1+\beta z^*},\qquad z^*=\int_{x_1}^{x_2}z^*(x)\,\mathrm{d}x.$$
With this notation the (positive) steady state problem can be formulated, at least for sufficiently small values of β, as
$$\kappa^*(x)=\frac{r(1+\beta z^*)}{f^*}\int_{x_1}^{x_2}K(x,y)\,\frac{\kappa^*(y)}{a(y)}\,\mathrm{d}y,\tag{eqop1}$$
$$f^*=\frac{k}{\mu}\int_{x_1}^{x_2}\kappa^*(x)\,\mathrm{d}x,\tag{eqop2}$$
$$z^*=\frac{\int_{x_1}^{x_2}\frac{\kappa^*(x)}{a(x)}\,\mathrm{d}x}{f^*-\beta\int_{x_1}^{x_2}\frac{\kappa^*(x)}{a(x)}\,\mathrm{d}x}.\tag{eqop3}$$
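As an aside, the finite-rank existence argument above (a spectral radius crossing one, located via the intermediate value theorem) lends itself to a direct numerical sketch. In the fragment below the kernel, the vulnerability function, the grid, and the constant r are illustrative assumptions, and β = 0 is taken so that the fixed-point map of (eqop1) is linear in κ*.

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def steady_state_matrix(f_star, K, a, x, r=1.0):
    """Discretization of the beta=0 map kappa -> (r/f*) * int K(x,y) kappa(y)/a(y) dy.

    rho(M(f*)) is strictly decreasing in f*, mirroring the monotonicity used
    in the text, so bisection locates the value with spectral radius one.
    """
    w = np.gradient(x)                       # simple quadrature weights
    return (r / f_star) * K * (w / a)[None, :]

def find_f_star(K, a, x, lo=1e-6, hi=1e6, tol=1e-10):
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if spectral_radius(steady_state_matrix(mid, K, a, x)) > 1.0:
            lo = mid                         # radius still above one: raise f*
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = np.linspace(0.0, 1.0, 200)               # assumed trait interval and grid
K = np.ones((x.size, x.size))                # constant inheritance kernel
a = 0.5 + x                                  # assumed linear vulnerability, a(x) > 0
f_star = find_f_star(K, a, x)
vals, vecs = np.linalg.eig(steady_state_matrix(f_star, K, a, x))
kappa = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))   # Perron eigenvector
```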
This problem, (eqop1)-(eqop3), can be considered as an eigenvalue problem for a bounded operator defined via the right-hand sides of the equations, acting on an appropriately chosen Banach space. More precisely, if the operator has eigenvalue one with a corresponding strictly positive eigenvector, then the system admits a positive steady state. Note that the operator is positive, and the eigenvalue problem can be analysed using arguments similar to those in section 2.2 above; for infinite-dimensional problems see the recent literature. In particular, for a special case of the parameters we have the following result.

Theorem [ssgeneral]. In this special case, model (eq:zoo)-(eq:fish) admits a unique positive stationary state.

The proof of theorem [ssgeneral] is included in appendix A. It is worth pointing out that, unlike in the previous sections, here we are able to prove uniqueness of the coexistence steady state. To prove the existence of a positive steady state in the most general case, i.e. without the special assumptions, we could apply results from a forthcoming paper, but due to the technical difficulties involved this is outside the scope of the present paper.

In this section we address the local stability of the coexistence stationary state of the model. To this end we linearise the model around the steady state and analyse the arising eigenvalue problems. Our model is semi-linear, and so we may invoke standard results (Theorems 5.1.1 and 5.1.3 of the cited monograph) to justify that the stability results obtained in this section are valid. We linearise equations (eq:zoo)-(eq:fish) in the vicinity of the stationary state and consider perturbations of the prey distribution and of the predator biomass. We note that the resulting linear problem is governed by an analytic semigroup, hence the spectrum may contain only eigenvalues of finite multiplicity. To determine the possible eigenvalues we look for solutions of the linearised equations in the standard exponential form. This ansatz leads to the eigenvalue problem (eq:stability_1)-(eq:stability_2), which is rather complicated in general, since it contains integral equations. Nevertheless, in the following subsections we discuss some interesting special cases in which the eigenvalue problem becomes tractable and we can deduce analytical stability or instability results.

Consider first the case of a constant kernel. We integrate equation ([evalue1_1]), then multiply it by a suitable factor and add it to equation ([evalue1_2]). Since any non-trivial eigenvector is determined up to a constant multiplier, we may assume a convenient normalization of the eigenvector; and since we only want to show the existence of a positive eigenvalue with a corresponding non-trivial eigenvector (hence we are not characterising the whole point spectrum), we do not need to worry about possible eigenvalues whose eigenvectors violate this normalization. From ([evalue3_1]) and ([gvalue]) we then obtain from ([evalue1_1]) an expression for the eigenvector, and the equation for the eigenvalues can be derived by integrating ([main]) over the trait interval. When the trait interval is not small and the simplifications above are unavailable (though we may still neglect the saturation in the predation rate, i.e. set β = 0), we need to solve the full equation ([main3]), which in general is a difficult task since the eigenvalue has both real and imaginary parts.
However, we know that for sufficiently small parameter values the stationary state is always stable, and due to the continuous dependence of the model dynamics on the parameters (invoking standard results such as Theorems 3.4.1 and 3.4.4 of the cited monograph), the stability loss (if any) of the stationary state for larger parameter values should take place via a Hopf bifurcation, i.e. with a purely imaginary eigenvalue λ = iω. To find the bifurcation point we substitute λ = iω into ([main3]) and separate the real and imaginary parts of this equation. After some rearrangement we obtain a system of two real equations for ω; the second of these reads
$$\int_{x_1}^{x_2}\frac{a(x)\frac{r}{h}I_1}{\omega^2+\left[a(x)\frac{r}{h}I_1\right]^2}\,\mathrm{d}x=\frac{h}{r+\mu},\qquad(\mathrm{re}_1)$$
while the first, (im_1), equates a companion integral with the same denominator to hμ/(r+μ); here the integral I_1 is as defined earlier. One can further explore the possibility of solving equations (im_1)-(re_1) for a particular vulnerability function a(x). In our study we considered generic linear and parabolic parameterisations, for which it is easy to calculate the above integrals analytically. In each case we investigated the possibility of solving the system (im_1)-(re_1), i.e. of finding an ω satisfying both equations; such an ω would correspond to a Hopf bifurcation point. Our results show that for the given functions there is no solution for any combination of the other model parameters (we omit the details for the sake of brevity). Thus we can conclude that, for linear and parabolic parameterisations of a(x), the stationary state is always locally asymptotically stable for any trait interval [x_1,x_2]. One can also see that stability loss due to an increase of the saturation can be compensated for either by increasing the range of the trait, or by increasing the absolute value of the gradient of a(x); this is in agreement with earlier results, which were obtained via direct simulation of the model equations.

[Figure: Hopf bifurcation curves of model (eq:zoo)-(eq:fish) with a constant kernel and linear vulnerability, constructed using equations (im_2)-(re_2). Different curves correspond to different values of one model parameter, the remaining parameters being fixed. For a given curve the stability region is located on its left-hand side; the dashed horizontal lines show lower limits beyond which the gradient of the vulnerability becomes negative.]
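To make the bifurcation-curve construction concrete, here is a hedged numerical sketch. The vulnerability a(x), the trait interval, and all constants below are illustrative stand-ins, and only the recoverable real-part equation (re_1) is implemented; the imaginary-part companion (im_1) would be checked in the same way.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

R, H, MU = 1.0, 1.0, 0.2          # illustrative constants, not the paper's values
I1 = 1.0                          # stand-in for the integral I_1 defined in the text

def A(x, grad):
    """a(x)*(r/h)*I_1 with an assumed linear vulnerability a(x) = 0.5 + grad*x."""
    return (0.5 + grad * x) * (R / H) * I1

def G_re(omega, grad):
    """Left-hand side of (re_1) on an assumed trait interval [0, 1]."""
    integrand = lambda x: A(x, grad) / (omega**2 + A(x, grad)**2)
    return quad(integrand, 0.0, 1.0)[0]

def omega_candidate(grad):
    """Solve G_re(omega) = h/(r+mu); G_re is strictly decreasing in omega."""
    g = lambda w: G_re(w, grad) - H / (R + MU)
    lo, hi = 1e-8, 1e4
    if g(lo) * g(hi) > 0:
        return None               # no root: no Hopf candidate for this gradient
    return brentq(g, lo, hi)

# A genuine Hopf point must satisfy the imaginary-part equation as well;
# scanning parameter values for which both equations hold traces out curves
# like those in the figure above.
print(omega_candidate(0.5))
```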
Note that equations (im_2)-(re_2) only account for the fact that the stationary state loses its stability; they provide no information about which type of Hopf bifurcation actually occurs, a subcritical or a supercritical one. Knowledge of the type of Hopf bifurcation can be important, since after a supercritical bifurcation the trajectories remain bounded close to the equilibrium, whereas after a subcritical Hopf bifurcation they can become unbounded (no population persistence). Numerical simulations carried out with particular parametrisations show that both types of bifurcation can occur, depending on the model parameters; however, determining the particular type of Hopf bifurcation would require rather lengthy expressions and should be the subject of a separate study.

We next consider a separable kernel of the form k(x). The main technique of the stability analysis is similar to the one implemented in the case of a constant kernel. We integrate equation ([evalue1]), multiply it by a suitable factor and add it to ([evalue2]). We can again assume the same normalization of the eigenvector as before, and we do not need to worry about possible eigenvalues whose eigenvectors violate it. From ([gvalue_1]) and ([evalue1]) we obtain an expression for the eigenvector; we then multiply equation ([main_4]) by the appropriate weight and integrate it over [x_1,x_2], which should give 1 by the normalization chosen. After some simplification we arrive at the characteristic equation ([main_7]). We summarize our findings in the following theorem.

Theorem [k(x)]. The positive stationary state of (eq:zoo)-(eq:fish) with a kernel of the form k(x) is stable if the eigenvalues determined by the characteristic equation ([main_7]) have negative real parts, and it is unstable if there is at least one eigenvalue with positive real part.

Note that it is possible (using the same method as in the previous section) to prove the stability of the system in the case when both the saturation and the length of the trait interval are small; again, this stability does not depend on the sign of the gradient of the vulnerability. On the contrary, when the first of these quantities is not small but the second is, we can prove that the stationary state is unstable. Thus, again, there should exist a Hopf bifurcation curve separating the regions of stability and instability; for any particular vulnerability function, the construction of such a curve should be based on equation ([main_7]), with the eigenvalue replaced by iω. Finally, we can consider the situation where the vulnerability to predation is constant.
In this case the characteristic equation reduces to a form which can be rewritten as an equivalent equation (we are interested only in eigenvalues with non-negative real part). It is easy to see that we then always have eigenvalues with positive real part; thus the equilibrium is unstable for any choice of the remaining parameters. We can extend the previous techniques of stability analysis to deal with the more general case where the kernel is given by ([finite_rank]). We assume that the coefficients of the stationary state have already been calculated (see section 2.2 for details), for instance using numerical methods. We introduce suitable notation, multiply equation ([evalue_f1]) by the appropriate weight, integrate, and add it to equation ([evalue_f2]). Assuming again the same normalization, and with the stationary quantities given by ([sseq1sep]), we obtain from ([evalue_f1]) and ([gvalue_f]) an expression for the perturbation of the prey distribution of the form
$$\frac{\displaystyle\sum_{i=1}^n z_{i+2}\,k_i(x)}{\lambda+\dfrac{f^*a(x)}{1+\beta z^*}}.$$
To find the coefficients we multiply equation ([main_f]) by the corresponding weight functions and integrate over [x_1,x_2], which leads to a finite system of linear equations and hence to a characteristic equation.

For a general kernel the instability question can be treated in operator form. The relevant operators are shown to be compact, and along the half-line considered the spectral radius is constant. Next we establish that at any point of this half-line the spectral radius of the operator is greater than or equal to one. To this end, recall that for any bounded linear operator the spectral radius can be characterised via the norms of the operator iterates; using this characterisation we obtain lower estimates involving the double integral $\int_{x_1}^{x_2}\big\{\int_{x_1}^{x_2}k(x,y)\,\mathrm{d}y\big\}\,\mathrm{d}x$, which show that the spectral radius in question is at least one. To establish the existence of a point in the relevant region of the positive quadrant of the parameter plane where the spectral radius equals one exactly, it is convenient to re-parametrise the family of operators using polar coordinates. With this new parametrisation, and since the kernel is bounded above, the spectral radius of the integral operator is bounded above as well, so it drops below one for suitable parameter values. It then follows from the intermediate value theorem and the continuity of the spectral radius function that there is a point in the region where the spectral radius of the operator equals one. Note that the spectral radius itself is an eigenvalue, and there is a unique normalised strictly positive eigenvector corresponding to it. Hence we have proved the existence of a solution of the eigenvalue problem, which implies that the stationary state is unstable. Thus, the theorem above provides a sufficient condition for instability in the case of constant vulnerability, in general.

Food-web models including rapid evolution are now gaining more and more popularity in the ecological literature, in recognition of the fact that rapid evolution can dramatically affect population dynamics.
In this paper we have analytically investigated the recent eco-evolutionary predator-prey model proposed by Morozov _et al._, given by equations (eq:zoo)-(eq:fish). Previous numerical investigation showed the counter-intuitive result that rapid evolution of an organism's life trait, combined with predation selectivity, could stabilize this otherwise globally unstable system (when the carrying capacity of the prey is considered infinitely large and the functional response of the predator is assumed to be a destabilizing Holling type II response). Since those important results were obtained only using numerical simulation, and only for specific parametrizations of the inheritance kernel and the selectivity function, the natural question concerned the generality of the conclusions. Here we have addressed this question by investigating the model analytically and for generic model functions. The main mathematical outcomes of our study are the following:

(i) We found conditions for the existence of positive stationary states of model (eq:zoo)-(eq:fish) for the class of kernels given by ([finite_rank]) and arbitrary parametrizations of the vulnerability to predation.

(ii) We analytically derived stability conditions for the non-trivial stationary state in the case when the kernel is separable, in particular when the kernel is constant and there is perfect genetic mixing in the offspring of the prey population. The obtained characteristic equations allow us to construct the Hopf bifurcation curve without direct simulation of equations (eq:zoo)-(eq:fish). Although constructing the Hopf bifurcation curve still requires solving transcendental equations (e.g. (im_2)-(re_2)), this task is substantially faster than direct simulation of the integro-differential equations.

(iii) For an arbitrary function describing the selectivity of predation, we analytically proved the stability of the coexistence stationary state when the saturation of predation is small, even when the length of the interval of variation of the life trait [x_1,x_2] is not small. In particular, our investigation shows that for linear and parabolic selectivity functions the stationary state turns out to be always stable for a constant kernel.

(iv) In contrast, for a constant selectivity of predation, we found that stabilisation is never possible for a constant kernel, or even in some more complicated cases of separable kernels. For an arbitrary kernel we derived sufficient conditions for instability (see theorem [theorem]).

(v) The framework of stability analysis and stationary-state construction suggested in this paper is rather generic and can be applied to other eco-evolutionary models involving integro-differential equations.
From the biological perspective, the analytical results obtained have a clear interpretation. Stabilisation of the otherwise globally unstable predator-prey system is possible as a result of rapid evolution, and the necessary ingredients of stability are: (a) sufficiently large genetic variation within the prey population (the length of the trait interval [x_1,x_2] should not be too small), and (b) selectivity of predation with respect to the life trait. A natural direction for further work concerns inheritance kernels which (almost) vanish everywhere on [x_1,x_2] except the area around their maximum. Biologically, such kernels model the situation where the reproduction of the cohort with a given life trait is centred around that trait, with only a small probability for the offspring to deviate from this value; this represents the biologically relevant case of asexual reproduction. We are planning to address these issues in future work. Finally, it would be rather natural to incorporate diffusion into the model. This would allow small random variations in the genetic trait to be modelled in a deterministic fashion; for instance, it could account for random variation in the trait within the lifetime of an organism. Structured population models with diffusion in the state space are attracting interest, see e.g. recent papers on the topic. Since diffusion has a smoothing effect in general, it would be an interesting question to investigate whether the Hopf bifurcation shown here can be sustained in the analogous model with diffusion.

Here we prove theorem [ssgeneral] stated in section 2.3. In the special case considered there, the steady state problem reduces to a fixed-point equation, and we define a parametrised family of bounded linear integral operators accordingly. Since the kernel is strictly positive and bounded, the integral operator is irreducible and compact for every parameter value (by standard results together with the Fréchet-Kolmogorov theorem). It follows that the spectral radius of the operator is an isolated and dominant simple eigenvalue with a corresponding strictly positive eigenfunction; it also follows that the spectral radius is the only eigenvalue with a positive eigenvector. Note, moreover, that the spectral radius function is strictly monotone decreasing in the parameter, and that it is continuous. Consequently, there exists a unique parameter value for which the spectral radius equals one, with a corresponding strictly positive eigenvector. We normalise this eigenvector, so that the stationary prey population density, which also satisfies the steady state equation, is obtained from it.

* Acknowledgements * J. Z. Farkas was partly funded by a University of Stirling Research and Enterprise Support Grant. We thank Professors A. Gorban and S. Petrovskii (University of Leicester) for helpful discussions and comments. We also thank the anonymous referee for helpful suggestions and comments.

E. Bouin, V. Calvez, N. Meunier, S. Mirrahimi, B. Perthame, G. Raoul, and R. Voituriez. _Invasion fronts with variable motility: phenotype selection, spatial sorting and wave acceleration._ C. R. Math. Acad. Sci. Paris, 350 (2012), 761-766.
D. Dube, K. Kim, A. P. Alker, C. D. Harvell. _Size structure and geographic variation in chemical resistance of sea fan corals Gorgonia ventalina to a fungal pathogen._ Mar. Ecol. Prog. Ser., 231 (2002), 139-150.
W. Gentleman, A. Leising, B. Frost, S. Strom, J. Murray. _Functional responses for zooplankton feeding on multiple resources: a review of assumptions and biological dynamics._ Deep-Sea Res. II: Top. Stud. Oceanogr., 50 (2003), 2847-2875.
A.
Yu. Morozov. _Incorporating complex foraging of zooplankton in models: role of micro- and mesoscale processes in macroscale patterns._ In Dispersal, Individual Movement and Spatial Ecology: A Mathematical Perspective (eds M. Lewis, P. Maini & S. Petrovskii). Springer, New York, NY, 2011.
A. Yu. Morozov, E. G. Arashkevich, A. Nikishina, K. Solovyev. _Nutrient-rich plankton communities stabilized via predator-prey interactions: revisiting the role of vertical heterogeneity._ Math. Med. Biol., 28 (2011), 185-215.
D. N. Reznick, C. K. Ghalambor, K. Crooks. _Experimental studies of evolution in guppies: a model for understanding the evolutionary consequences of predator removal in natural communities._ Mol. Ecol., 17 (2008), 97-107.
Yu. V. Tyutyunov, O. V. Kovalev, L. I. Titova. _Spatial demogenetic model for studying phenomena observed upon introduction of the ragweed leaf beetle in the south of Russia._ Math. Model. Nat. Phenom. (2013).
In this paper we explore the eco-evolutionary dynamics of a predator-prey model where the prey population is structured according to a certain life history trait. The trait distribution within the prey population is the result of the interplay between genetic inheritance and mutation, as well as selectivity in the consumption of prey by the predator. The evolutionary processes are considered to take place on the same time scale as the ecological dynamics, i.e. we consider the evolution to be rapid. Previously published results show that population structuring and rapid evolution in such a predator-prey system can stabilize otherwise globally unstable dynamics, even with an unlimited carrying capacity of the prey. However, those findings were only based on direct numerical simulation of the equations and were obtained for particular parameterizations of the model functions, which obviously calls into question the correctness and generality of the previous results. The main objective of the current study is to treat the model analytically and consider various parameterizations of the predator selectivity and the inheritance kernel. We investigate the existence of a coexistence stationary state in the model and carry out a stability analysis of this state. We derive expressions for the Hopf bifurcation curve, which can be used for constructing bifurcation diagrams in the parameter space without the need for direct numerical simulation of the underlying integro-differential equations. We analytically show the possibility of stabilization of a globally unstable predator-prey system with prey structuring. We prove that the coexistence stationary state is stable when the saturation in the predation term is low. Finally, for a class of kernels describing genetic inheritance and mutation, we show that stability of the predator-prey interaction requires selectivity of predation according to the life trait.
The multi- and many-core (MC) era we have now reached was triggered, after the beginning of the century, by the stalling of single-processor performance. Technology allowed more transistors to be placed on a die, but they could not reasonably be utilized to increase single-processor performance. Predictions about the number of cores have only partly been fulfilled: today's processors have dozens rather than the predicted hundreds of cores (although the Chinese supercomputer announced in the middle of 2016 comprises 260 cores on a die). Despite this, the big players are optimistic. They expect that Moore's law persists, though based on presently unknown technologies. The effect of the stalled clock frequency is mitigated, and it is even predicted that "now that there are multicore processors, there is no reason why computers shouldn't begin to work faster, whether due to higher frequency or because of parallel task execution; and with parallel task execution it provides even greater functionality and flexibility!" Parallelism is usually considered in many forums to be the future, usually as the only hope rather than as a panacea. People dealing with parallelism are less optimistic. In general, technical development tends to reduce human effort, but "parallel programs ... are notoriously difficult to write, test, analyze, debug, and verify, much more so than the sequential versions." The problems have led researchers to the _viewpoint_ that _multicore hardware for general-purpose parallel processing is broken_. The essence of the present viewpoint is that multicore hardware can perhaps be mended. Although one can profoundly agree with the arguments that using manycore chips cannot contribute much to exploiting parallelism in general, and especially not to executing irregular programs, one has to realize also that this is not the optimal battlefield for manycore chips, at least not in their present architecture. Present manycore systems comprise many segregated processors, which make no distinction between two processing units that are neighbours within the same chip or are located in the next rack. The close physical proximity of the processing units offers additional possibilities, and provides a chance to implement Amdahl's dream of cooperating processors. The paradigms used presently, however, assume a private processor and a private address space for a running process, and no external world. In many-core systems it is relatively simple to introduce signals, storage, communication, etc.
, and to deploy them in reasonable time. They cannot, however, be utilized in a reasonable way if one cannot provide compatibility facades preserving the illusion of the private world. Cooperation must be implemented in a way which provides complete (upward) compatibility with the presently exclusively used single-processor approach (SPA). This means, on the one hand, that the new functionality must be formulated using the terms of conventional computing, while on the other hand it must provide considerably enhanced computing throughput and other advantages. It is well known that general-purpose processors have a huge performance handicap when compared to special-purpose chips, and that the presently used computing stack is the source of further serious inefficiencies. Proper utilization of the available manycore processors can eliminate a lot of these performance losses and in this way (keeping the same electronic and programming technology) can considerably enhance the (apparent) performance of the processor. Of course, there is no free lunch: making these changes requires a _simultaneous_ change in nearly all elements of the present computing stack. Before making these changes, one should scrutinize the promised gain, and whether the required efforts will pay off.

[Fig. [fig:flexibleproc] (TikZ source omitted): left, the dependency graph of the sample calculation; right, its execution by cooperating cores.]

Below, some easy-to-follow case studies are presented, all of which lead to the same conclusion: we need a cooperative and flexible architecture rather than a rigid one comprising segregated MCs, and the 70-year-old von Neumann computing paradigms should be extended.
At the end, the feasibility of implementing such an architecture is discussed. The recently introduced explicitly many-processor approach seems to be quite promising: it not only provides higher computing throughput, but also offers advantageous changes in the behavior of computing systems. Today's computing utilizes many forms of parallelism, both hardware (HW) and software (SW) facilities; the software methods have been systematically discussed and the hardware methods scrutinized in the literature. A remarkable difference between the two approaches is that while the SW methods tend to handle parallel execution explicitly, the HW methods tend to create the illusion that only one processing unit copes with the task, although some (externally invisible) helper units are utilized in addition to the visible processing unit. Interestingly enough, both approaches arise from the von Neumann paradigms: the abstractions _process_ and _processor_ require it. The inefficiency of using several processing units is nicely illustrated with a simple example (see also fig. [fig:flexibleproc], left side). A simple calculation comprising 4 operand loadings and 4 arithmetic operations, i.e. altogether 8 machine instructions, could theoretically be carried out in 3 clock cycles, provided that only dependencies restrict the execution of the instructions and an unlimited number of processing units (or at least 4 such units in this example) are available. It is shown that a single-issue processor needs 8 clock cycles to carry out the example calculation. Provided that memory access and instruction latency times cannot be further reduced, the only possibility to shorten the execution time is to use more than one processing unit during the calculation. Obviously, a fixed architecture can only provide a fixed number of processing units. Two such ideas are scrutinized in the example: a dual-issue single processor, and a two-core single-issue processor. The HW investment in both cases increases by a factor of two (not considering the shared memory here), while the performance increases only moderately: 7 clock cycles for the dual-issue processor and 6 clock cycles for the dual-core processor, versus the 8 clock cycles of the single-issue, single-core processor. The _obvious reasons here are the rigid architecture and the lack of communication possibilities_, respectively. Consider now a processor with a flexible architecture, where the processor can outsource part of its job: it can rent processing units from a chip-level pool just for the time it takes to execute a few instructions. The cores are smart: they can communicate with each other, they know the task to be solved, and they are able to organize their own work while outsourcing part of it to the rented cores. The sample calculation shown in fig. [fig:flexibleproc], left side, can then be solved as shown on the right side of the figure. The first core originally receives the complete task, which a conventional single-issue, single-core system would calculate in 8 clock cycles. This core, however, is more intelligent: using the hints hidden in the object code, it notices that the task can be outsourced to other cores.
For this purpose it rents, one by one, two further cores to execute the two multiplications. The rented cores are also intelligent, so they in turn outsource the loading of the operands to additional cores. These execute the outsourced job: they load the operands and return them to the requesting cores, which can then execute the multiplications and return the results to the renting core; that core finally rents another two cores for the final operations, and the two results are thus produced. This unusual kind of architecture must respond to some unusual requirements. First of all, the architecture must be able to organize itself as the received task requires, and build the corresponding "processing graph" (see fig. [fig:dynpar]). Furthermore, it must provide a mechanism for mapping the virtually infinite number of processing nodes to the finite number of cores. The loader cores must receive the address of the operand, i.e. at least some information must be passed to the rented core. Similarly, the loaded operand must be returned to the renting core in a synchronized way. In the first case synchronization is not a problem: the rented core begins its independent life when it receives its operands. In the second case the rented core finishes its assigned operation and sends the result asynchronously, independently of the needs of the renting core. This means that the architecture must provide a mechanism for transferring a (limited) amount of data between cores, a signalization mechanism for renting and returning cores, as well as latched intermediate data storage for passing data in a synchronized way. In the figure, the empty circles are the theoretically needed operations, and the shaded ones are the additional operations of the "smart" cores. The number of cores in use changes continuously as they are rented and returned; although _physically_ they may be the same core, _logically_ they are brand new. Note that the "smart" operations are much shorter (they comprise simple bit manipulations and multiplexing) than the conventional ones, which comprise complex machine instructions; and since the rented cores work in parallel (or at least mostly overlap), the calculation is carried out in 3 clock periods. The cycle period is somewhat longer, but the attainable parallelism approaches the theoretical optimum and is more than twice as high as that attainable using either dual-issue or dual-core processors. Although on average about 3 cores are needed, these cores can be the simplest processors, i.e. the decreasing complexity of the cores (over)compensates for the increasing complexity of the processor. In addition, since hidden parallelization (such as out-of-order execution and speculation) can be replaced by the functionality of the flexible architecture, the control part of the processor shrinks, the computational complexity decreases, and as a result the clock speed can be increased. A processor with such an internal architecture appears to the external world as a "superprocessor", having several times greater performance than could be extracted from a single-threaded processor. Such a processor can adapt itself to the task: unlike in the dual-issue processor, all (rented) units are permanently used. _The many-core systems with flexible architecture comprising cooperating cores can approach the theoretically possible maximum parallelism._
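The cycle counts quoted above can be reproduced with a deliberately idealized list-scheduling model. The Python sketch below is illustrative only: it ignores issue restrictions and inter-core communication costs, which is why its 2-worker result (4 cycles) is more optimistic than the 7- and 6-cycle figures for the rigid dual-issue and dual-core designs.

```python
# The 8-instruction example as a dependency DAG: four loads feed two
# multiplications, whose results feed the final add and subtract.
deps = {
    "ld_a": [], "ld_b": [], "ld_c": [], "ld_d": [],
    "mul1": ["ld_a", "ld_b"], "mul2": ["ld_c", "ld_d"],
    "add":  ["mul1", "mul2"], "sub":  ["mul1", "mul2"],
}

def schedule_length(deps, workers):
    """Greedy list scheduling: each worker issues one ready instruction per cycle."""
    done, cycles = set(), 0
    while len(done) < len(deps):
        ready = [op for op in deps
                 if op not in done and all(d in done for d in deps[op])]
        done.update(ready[:workers])     # issue up to `workers` ready operations
        cycles += 1
    return cycles

for w in (1, 2, 4):
    print(w, "unit(s):", schedule_length(deps, w), "cycles")
# -> 1 unit: 8 cycles; 2 units: 4 cycles; 4 units: 3 cycles (the dependency limit)
```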
_in addition, the number of cores can be kept at a strict minimum, allowing a reduction of the power consumption._

in many-many processor (supercomputer) systems the processing units are assembled using the spa, and so their maximum performance is bounded by amdahl's law. although amdahl's original model is pretty outdated, its simple and clean interpretation allows us to derive meaningful results even for today's computing systems. amdahl assumed that in some fraction $\alpha$ of the total time the computing system engages in parallelized activity, while in the remaining $(1-\alpha)$ part it performs some (from the point of view of parallelization) non-payload activity, like sequential processing, networking delay, control or organizational operations, etc. the essential point here is that all these latter activities behave _as if they were sequential processing_. under such conditions, the efficiency is calculated as the ratio of the total speedup and the number of processors $k$: $$e = \frac{s}{k} = \frac{1}{k(1-\alpha)+\alpha}. \label{eq:soverk}$$ although in the case of supercomputers $(1-\alpha)$ comprises contributions of a technically different nature (it can be considered as the "imperfectness" of the implementation of the supercomputer), it also behaves as if it were sequentially processed code. [supercomputertimeline] shows how this "imperfectness" decreased during the development of supercomputers, calculated from the actual data of the first three supercomputers in the year in question, over a quarter of a century. as the figure shows, this parameter behaves similarly to the moore observation, but is independent of it (because the parameter is calculated from relative quantities, any technology dependence is removed).

at first glance, it seems at least surprising to look for any dependence as a function of "imperfectness". the key is equ. ([eq:soverk]). since $\alpha$ approaches unity, the term $k(1-\alpha)$ determines the overall efficiency of the computing system. to _increase_ $k$ by an order of magnitude alone is useless if it is not accompanied by an order of magnitude _decrease_ in the value of $(1-\alpha)$. however, while increasing $k$ is simply a linear function, decreasing $(1-\alpha)$, as any kind of increasing perfectness, is exponentially more difficult. [supercomputertimeline] proves that today's supercomputers are built in spa, and makes it questionable whether a further significant decrease of the value $(1-\alpha)$ could be reached at reasonable cost. this means that it is hopeless _to build exa-scale computers using the principles drawn from the spa_.

looking carefully at equ. ([eq:soverk]), one can notice that its two terms describe two important behavioral features of the computing system. as already discussed, $(1-\alpha)$ describes how much the work of the many-processor system is _coordinated_. the factor $k$, on the other hand, describes how much the processing units _cooperate_. in the case of using the spa, the processing units are segregated entities, i.e. they do not cooperate at all. if we could make a system where the processing units behave differently in the presence of other processors, we could write a smaller effective multiplier in place of $k$ in equ. ([eq:soverk]). depending on how cores behave together in the presence of other cores when solving a computing task, the cooperation of the processing units can drastically increase the efficiency of many-processor systems. in other words, to increase the performance of many-many-processor computers, _the cores must cooperate_ (at least with some) other cores.
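a short numerical illustration of this efficiency argument follows; the two values of $(1-\alpha)$ are arbitrary, chosen only to show how the $k(1-\alpha)$ term dominates as the processor count grows.

```python
# efficiency of a k-processor system under amdahl's assumptions:
# a fraction alpha of the time is parallelized, (1 - alpha) behaves
# as sequential processing.
def speedup(alpha, k):
    return 1.0 / ((1.0 - alpha) + alpha / k)

def efficiency(alpha, k):
    return speedup(alpha, k) / k   # equals 1 / (k*(1 - alpha) + alpha)

for k in (10, 100, 1000, 10000):
    print(k,
          round(efficiency(1 - 1e-4, k), 3),   # imperfectness 1e-4
          round(efficiency(1 - 1e-5, k), 3))   # imperfectness 1e-5
# at k = 10000, the 1e-4 system already runs at ~50% efficiency:
# increasing k alone, without decreasing (1 - alpha), is useless.
```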
_using cooperating cores is inevitable for building supercomputers at a reasonable cost._

a computer computes everything, because it cannot do any other type of operation. computational density has reached its upper bound, so no further performance increase in that direction is possible. in addition to introducing different forms of hw and sw parallelism, it is possible to omit some non-payload, do-not-care calculations, by providing and utilizing special hw signals instead. the signals can be provided for the participating cores, and can be used to replace typical calculational instruction sequences. the compilation is simple: where the compiler would generate non-payload loop-organization commands, it should instead give a hint about renting a core for executing the non-payload instructions and providing external synchronization signals.

a simple example: when summing up the elements of a vector, the only payload instruction is the respective `add`. one has, however, to address the operand (which includes handling the index, calculating the offset and adding it to the base address), to advance the loop counter, to compare it to the loop bound, and to jump back conditionally. all those non-payload operations can be replaced by handling hw signals, if the cores can cooperate, resulting in a speed gain of about 3, using only one extra core. moreover, since the intermediate sum is also a do-not-care value until the summing is finished, a different sum-up method can be used, which may utilize dozens of cores and result in a speed gain of dozens. when organizing a loop, the partial sum is one of the operands, so it must be read before adding a new summand, and must be written back to its temporary storage, wasting instructions and memory cycles; in addition this excludes the possibility of parallelizing the sum-up operation. for details and examples see . this latter example also demonstrates that _the machine instruction is too rigid an atomic unit of processing_.

_utilizing hw signals from cooperating cores, rather than providing some conditions through (otherwise do-not-care) calculations, allows us to eliminate obsolete calculational instructions, and thus apparently accelerate the computation by a factor of about ten._

the need for multi-processing (among others) forced the use of exceptional instruction execution: a running process is _interrupted_, its hw and sw state is saved and restored, because the hard and soft parts of the _only_ processor must be lent to another process. the code of the interrupting process is effectively inserted into the flow of execution of the interrupted code. this maneuver causes indeterministic behavior of the processor: the time when two consecutive machine instructions in a code flow are executed becomes indeterminate. the above is due to the fact that during development, some of the really successful accelerators, like the internal registers and the highest-level cache, became part of the architecture: the soft part of the processor.
in order to change to a new thread, the current soft part must be saved in (and later restored from) the memory. utilizing asynchronous interrupts, as well as operating system services, implies a transition to a new operating mode, which is a complex and very time-consuming process. all these extensions were first developed when computer systems had only one processor, and the only way to provide the illusion of running several processes, each having its own processor, was to detach the soft part from the hard one. because of the lack of proper hardware support, this illusion depended on using sw services and on the architectures being constructed with the spa in mind, conditions that require rather expensive execution time: in modern systems a context change may require several thousand clock cycles. as hyper-threading proved, detaching the soft and hard parts of the processors results in considerable performance enhancement.

by having more than one processor and the explicitly many-processor approach, the context change can be greatly simplified. for a new task, such as providing operating system services or servicing external interrupts, a dedicated core can be reserved. the dedicated core can be prepared and held in supervisor mode. when the execution of the instruction flow follows, it is enough to clone the relevant portion of the soft part: for interrupt servicing nothing is needed, for using os services only the relevant registers and maybe the cache. (the idea is somewhat similar to utilizing shadow registers for servicing an asynchronous interrupt.) if the processors can communicate with each other using hw signals rather than os actions, and some communication mechanism different from using (shared) memory is employed, the apparent performance of the computing system becomes much higher.

_for cooperating cores no machine instructions (that waste real time, machine and memory cycles) are needed for a context change, allowing for a several-hundredfold more rapid execution in these spots._ the application can even run in parallel with the system code, allowing further (apparent) speedup.

using the many-processor approach creates many advantageous changes in the real-time behavior of computing systems. since the processing units do not need to save or restore anything, the servicing can start immediately and is restricted to the actual payload instructions. the dedicated processing units cannot be addressed by non-legal processing units, so issues like excluding priority inversion are handled at hw level. and so on.

from all points of view (the just-a-few and many-many processor cases, as well as utilizing kernel-mode or real-time services) we arrive at the same conclusion: segregated processors in many-processor systems do not allow a great increase in the performance of our computing systems, while cooperating processors can increase even the attainable single-threaded performance. amdahl contended this already half a century ago: "_the organization of a single computer has reached its limits and truly significant advances can be made only by interconnection of a multiplicity of computers in such a manner as to permit cooperative solution._"
at this point the many-core architectures have the advantage that the cores are in close proximity to one another: there is no essential difference between a core reaching its own register (or signal) and reaching that of another core. the obstacle is actually the spa: for a core and a process, no other core exists. in the suggested new approach, which can be called the _explicitly many-processor approach_ (empa), the cores (through their supervisor) can know about their neighbours. today, radical departures from conventional approaches (including rethinking the complete computing stack) are advocated, but at the same time a smooth transition must be provided to any radically new technology. _to preserve compatibility with conventional computing, the empa approach is phrased using the terms of conventional computing_ (i.e. it contains the spa as a subset).

some of the above-mentioned boosting principles are already implemented in the system. from the statistics one can see that in some spots a performance gain in the range 3-30 can be reached. different algorithms need different new accelerator building blocks in the frame of empa. for example, the gain of 3 in executing a loop, when used in an image-processing task where a 2-dimensional matrix is utilized for edge detection, means nearly an order of magnitude performance gain with the same calculational architecture when calculating a new point; and, to consider all points of the picture, another double loop is used. this means that a 4-core empa processor can produce nearly 100 times more rapid processing (not considering that several points can be processed in parallel on processors with more cores); see the arithmetic sketched after this section. this is achieved not by increasing computing density, but by replacing certain non-payload calculations with hw signals, and so executing 100 times fewer machine instructions.

the mc architecture comprising segregated cores is indeed broken. it can, however, be mended, if the many-core chips are manufactured in a form using cooperating cores. as the first step toward implementing such a system, for simulating its sophisticated internal operation and providing tools for understanding and validating it, an empa development system has been prepared. an extended assembler prepares empa-aware object code, while the simulator allows us to watch the internal operation of the empa processor. to illustrate the execution of programs using the empa method, a processing diagram is automatically prepared by the system, and different statistics are assembled. [fig:dynpar] shows the equivalent of fig. [fig:flexibleproc], running on an 8-core and a 4-core processor, respectively (for legend see ). the left-hand figure depicts the case when an "unlimited" number of processing units is available; the right-hand one shows the case when the processor has a limited number of computing resources to implement the maximum possible parallelism. the code assembled by the compiler is the same in both cases. the supervisor logic detects if not enough cores are available (see right side), and delays the execution (outsourcing more code) of the program fragments until some cores are free again. the execution time gets longer if the processor cannot rent enough cores for the processing, but the same code will run in both cases, without deadlock or violation of dependencies.

for electronic implementation, some ideas may be borrowed from the technology of reconfigurable systems.
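the back-of-the-envelope arithmetic behind the "nearly 100 times" claim above can be written down explicitly; the only input is the gain of about 3 per loop level quoted for the sum-up example.

```python
# illustrative arithmetic for the nested-loop argument above,
# assuming a gain of ~3 per loop level as quoted in the text.
gain_per_loop = 3
stencil_gain = gain_per_loop ** 2             # double loop over a 2-d
                                              # edge-detection stencil: ~9x,
                                              # "nearly an order of magnitude"
image_gain = stencil_gain * gain_per_loop ** 2  # outer double loop over all
                                                # pixels: ~81x, "nearly 100x"
print(stencil_gain, image_gain)               # -> 9 81
```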
in such reconfigurable systems, in order to minimize the need for transferring data, some local storage (block-ram) is located between the logical blocks, and a lot of wires are available for connecting them. in analogy with fpgas, the cores can be implemented as mostly fixed-functionality processing units, having multiplexed connecting wires to their supervisor with fixed routing. some latch registers and non-stored-program functionality gates can be placed near those blocks, which can be accessed by both the cores and the supervisor. the inter-core latch data can be reached from the cores using pseudo-registers (i.e. they have a register address, but are not part of the register file), and the functionality of the cores also depends on the inter-core signals. in the prefetch stage the cores can inform the supervisor about the presence of a metainstruction in their object code, and in this way the mixed-code instructions can be directed to the right destination. in order to be able to organize execution graphs, the cores (after renting) are in parent-child relation to unlimited depth.

as was very correctly stated, "due to its high level of risk, prototype development fits best within the research community." the principles and practice of empa differ radically from those of the spa. to compare the performance of both, empa needs a range of development. many of the present components, accelerators, compilers, etc., designed with the spa in mind, do not fit empa. the research community can accept (or reject) the idea, but it definitely warrants some cooperative work.

h. esmaeilzadeh. approximate acceleration: a path through the era of dark silicon and big data. in _proceedings of the 2015 international conference on compilers, architecture and synthesis for embedded systems_, cases '15, pages 31-32, piscataway, nj, usa, 2015. ieee press.
a suggestion is made for mending multicore hardware, which has been diagnosed as broken.
galaxy clustering as measured in redshift space contains the imprint of the linear growth rate of structure, in the form of a measurable large-scale anisotropy. this is produced by the coherent peculiar velocity flows towards overdensities, which add an angle-dependent contribution to the measured redshift. in linear theory, these _redshift-space distortions_ (rsd) in the clustering pattern can be quantified in terms of the ratio $\beta = f/b$ (where $b$ is the linear bias of the sample of galaxies considered). a value for $\beta$ can be obtained by modeling the anisotropy of the redshift-space two-point correlation function $\xi(r_p,\pi)$ (where $r_p$ and $\pi$ are the separations perpendicular and parallel to the line of sight) or, equivalently, of the power spectrum (see for a review). since $b$ can be defined as the ratio of the _rms_ galaxy clustering amplitude to that of the underlying matter, $b = \sigma_{8,g}/\sigma_8$, the measured product $\beta\sigma_{8,g}$ is equivalent to the predicted combination $f\sigma_8$. the latter is a prediction depending on the gravity theory, once normalized to the amplitude of matter fluctuations at the given epoch, e.g. using cmb measurements.

measurements of the growth rate are crucial to pinpoint the origin of cosmic acceleration, distinguishing whether it requires the addition of "dark energy" to the cosmic budget, or rather a modification of general relativity. these two radically alternative scenarios are degenerate when considering the expansion rate alone, as yielded, e.g., by the hubble diagram of type ia supernovae or baryonic acoustic oscillations (bao). although the rsd effect has been well known for a long time, its important potential in the context of dark energy studies has been fully appreciated only recently. this led to a true renaissance of interest in this technique, such that rsd have quickly become one of the most promising probes for future large dark energy surveys. this is the case for the recently approved esa euclid mission, which is expected to reach statistical errors of a few percent on measurements of the growth rate in several redshift bins using this technique (coupled to similar precision from the complementary weak-lensing experiment).

in general, forecasts of the statistical precision reachable by future projects on the measurements of different cosmological parameters have been produced through widespread application of the so-called fisher information matrix technique. this has also been done specifically for rsd estimates of the growth rate and related quantities. one limitation of these forecasts is that they necessarily imply some idealized assumptions (e.g. on the gaussian nature of errors) and have not been verified, in general, against systematic numerical tests. this is not easily doable in general, given the large size of planned surveys. a first attempt to produce general forecasts based on numerical experiments was presented by , who used mock surveys built from the millennium simulation to numerically estimate the random and systematic errors affecting their measurement of the growth rate from the vimos vlt deep survey. using a grid of reference survey configurations, they calibrated an approximate scaling relation for the relative error on $\beta$ as a function of survey volume and mean density. the range of parameters explored in this case was however limited, and only one specific class of galaxies (i.e. a single bias value) was analyzed.

the second crucial aspect to be taken into consideration when evaluating fisher matrix predictions is that they only consider statistical errors and cannot say anything about the importance of systematic effects, i.e. about the _accuracy_ of the expected estimates. this is clearly a key issue for projects aiming at percent or sub-percent precision, for which systematic errors will be the dominant source of uncertainty. in fact, a number of works in recent years suggest that the standard linear kaiser description of rsd is not sufficiently accurate on the quasi-linear scales where it is routinely applied. various non-linear corrections are proposed in these papers, the difficulty often being their practical implementation in the analysis of real data, in particular in configuration space. one may hope that in the future, with surveys covering much larger volumes, it will be possible to limit the analysis to very large scales, where the simple linear description should be adequate. still, ongoing surveys like wigglez, boss and vipers (guzzo et al., in preparation) will need to rely on the clustering signal at intermediate scales to model rsd.

here, we address in a more systematic and extended way the impact of random and systematic errors on growth rate measurements using rsd in future surveys. we compare the results directly to fisher matrix predictions, thoroughly exploring the dependence of statistical errors on the survey parameters, including, in addition to volume and density, the bias parameter of the galaxies used. this is also relevant, as one could wonder which kind of objects would be best suited to measure rsd in a future project. these will include halos of different mass (i.e. bias), up to those traced by groups and clusters of galaxies. potentially, using groups and clusters to measure rsd could be particularly interesting in view of massive galaxy redshift surveys like that expected from euclid, which can be used to build large catalogues of optically-selected clusters with measured redshifts. a similar opportunity will be offered by future x-ray surveys, such as those expected from the e-rosita mission, although in that case mean cluster redshifts will have to be measured first. this paper is complementary to the parallel work of , where we investigate the impact on rsd of redshift errors and explore how to disentangle the geometrical distortions introduced by the uncertainty in the underlying geometry of the universe (the alcock-paczynski effect) from measurements of rsd. also, while we were completing our work, independent important contributions in the same direction appeared in the literature by and .

the paper is organized as follows.
in [sec sims] we describe the simulations used and the mass-selected subsamples we defined; in [sec rsd] we discuss the technical tools used to estimate and model the two-point correlation function in redshift space, $\xi(r_p,\pi)$, and to estimate the intrinsic values of bias and distortion to be used as reference; in [sec err] we present the measured correlation functions and show the resulting statistical and systematic errors on $\beta$, as a function of the halo bias; here we discuss in detail how well objects related to high-bias halos, such as groups and clusters, can be used to measure rsd; in [sec form] we organise all our results into a compact analytic formula as a function of galaxy density, bias and survey volume; we then directly compare these results to the predictions of a fisher matrix code; finally, we summarize our results in [sec concl].

the core of this study is based on the high-resolution baryonic acoustic-oscillation simulations at the institute for computational cosmology (basicc) of , which used a very large number of particles to follow the growth of structure in dark matter in a large periodic box. the simulation volume was chosen to allow the growth of fluctuations to be modelled accurately on a wide range of scales, including those of the bao. the very large volume of the box also allows us to extract accurate measurements of the clustering of massive halos. the mass resolution of the simulation is high enough to resolve halos that should host the galaxies expected to be seen in forthcoming high-redshift galaxy surveys (e.g. luminous red galaxies in the case of sdss-iii boss). the cosmological parameters adopted are broadly consistent with recent data from the cosmic microwave background and the power spectrum of galaxy clustering: the matter density parameter, the cosmological constant density parameter, the normalization of density fluctuations (expressed in terms of their linear amplitude in spheres of radius $8\,h^{-1}$ mpc at the present day), the primordial spectral index, the dark energy equation of state, and the reduced hubble constant. we note the high value of the normalization of the power spectrum with respect to more recent wmap estimates. this has no effect on the results discussed here (but see for a method to scale self-consistently the output of a simulation to a different background cosmology).

outputs of the particle positions and velocities are stored from the simulation at selected redshifts. dark matter halos are identified using a friends-of-friends (fof) percolation algorithm with a linking length of a fixed fraction of the mean particle separation. position and velocity are given by the values of the center of mass. in this paper, only groups with at least a minimum number of particles are considered. this limit provides reliable samples in terms of their abundance and clustering, which we checked by comparing the halo mass function and correlation function against standard theoretical predictions. we use the complete catalogue of halos of the simulation snapshot, from which we select sub-samples with different mass thresholds (i.e. number of particles). this corresponds to samples with different bias values. table [tab halo masses] reports the main features of these catalogues: the threshold number of dm particles defining each catalogue (a catalogue being the set of groups, i.e. halos, with at least that many particles), the corresponding threshold mass, the total number of halos above the threshold,
and the number density, i.e. the number of halos divided by the simulation volume.

in doing this exercise, a specific problem is that, as shown in table [tab halo masses], catalogues with larger mass (i.e. higher bias) are also less dense. our aim is to separate the dependence of the errors on these two variables. to do so, once a population of a given bias is defined by choosing a given mass threshold, we construct a series of diluted samples obtained by randomly removing objects. the process is repeated down to a minimum density at which shot noise dominates and, for the least massive halos, the recovered $\beta$ is consistent with zero. in this way we obtain a series of sub-samples of varying density for fixed bias, as reported in table [tab dilution]. the full samples are the same used to build, e.g., figure [fig beta lin-exp].

in figure [fig surface] we plot the relative errors on $\beta$ measured from each catalogue of table [tab dilution], as a function of the bias factor and the number density. these 3d plots are meant to provide an overview of the global behavior of the errors; a more detailed description is provided in figures [fig fisher n]-[fig fisher b], where 2d sections along $b$ and $\bar{n}$ are reported. for all the samples considered, the volume is held fixed. as shown by the figure, the bias dependence is weak and approximately described by a mild power law, i.e. the error is slightly larger for higher-bias objects. this indicates that the gain of a stronger clustering signal is more than cancelled by the reduction of the distortion signal when higher-bias objects are considered. this is, however, fully true only for samples which are not too sparse intrinsically: we see in fact that at extremely low densities the relationship is inverted, with high-bias objects becoming favoured. at the same time, there is a clear general flattening of the dependence of the error on the mean density: the relation is not a simple power law, but becomes constant at high values of $\bar{n}$. in comparison, over the density range considered here, the old scaling formula of guzzo et al. would overestimate the error significantly. this behaviour is easily interpreted as showing the transition from a shot-noise dominated regime at low densities to a cosmic-variance dominated one, in which there is no gain in further increasing the sampling. such behaviour is clear for low-mass halos (i.e. low bias) but is much weaker for more massive, intrinsically rare objects.

we can now try to model an improved empirical relation to reproduce quantitatively these observed dependences. let us first consider the general power-law trend which describes well the behaviour of the error in the cosmic-variance dominated region (i.e. at high density). in figure [fig surface] such a power law is represented by a plane. we then need a function capable of warping the plane in the low-density region, where the relative error becomes shot-noise dominated. the best choice seems to be an exponential in the inverse density, with a characteristic density that, by construction, roughly corresponds to the threshold above which cosmic variance dominates. finally, we need to add a dependence on the bias inside the exponential, so that at low density the relative error decreases with increasing $b$. the grid shown in figure [fig surface] represents the result of a direct fit of this functional form to the data, showing that it is indeed well suited to describe the overall behaviour.
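to make the construction concrete, a minimal sketch of this empirical error model follows. the functional form traces the construction just described (a power law in bias over a $v^{-1/2}$ volume scaling, anticipated from the dependence derived below, warped by an exponential in the inverse density); the numerical constants and the power of the bias inside the exponential are placeholders to be calibrated against the catalogues, not values quoted from the text.

```python
import numpy as np

# sketch of the empirical scaling for the relative error on beta:
#   err ~ C * b^gamma * V^(-1/2) * exp(n0 / (b^2 * n))
# C, gamma and n0 are fit parameters (placeholder values below);
# the b^2 inside the exponential is an assumed form giving the
# required decrease of the low-density error with increasing bias.
C, GAMMA, N0 = 5e-2, 0.7, 1.7e-4

def relative_error_beta(bias, number_density, volume):
    """relative statistical error on beta for a tracer of given linear
    bias, mean number density [(h/mpc)^3] and survey volume [(mpc/h)^3]."""
    cosmic_variance = C * bias**GAMMA / np.sqrt(volume)
    shot_noise_boost = np.exp(N0 / (bias**2 * number_density))
    return cosmic_variance * shot_noise_boost

# example: a b=1.5 tracer at n=1e-3 (h/mpc)^3 in a 1 (gpc/h)^3 survey
print(relative_error_beta(1.5, 1e-3, 1e9))
```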
in the right panel we have oriented the axes so as to highlight the goodness of the fit: the _rms_ of the residuals between model and data is an order of magnitude smaller than the smallest measured values of the error itself. this gives our equation the predictive power we were looking for: if we use it to produce forecasts of the precision on $\beta$ for a given survey, we shall commit a negligible error (at least for values of bias and volume within the ranges tested here). to fully complete the relation, we only need to add the dependence on the volume, which is in principle the easiest. to this end, we split the whole simulation cube into sub-cubes of different sizes. by applying this procedure to 5 samples with different bias and number density (see table [tab dilution]) we make sure that our results do not depend on the particular choice of bias and density. figure [fig rbt vs vol] shows that the error scales as $v^{-1/2}$ independently of $b$ and $\bar{n}$, confirming the dependence found by . we can thus finally write the full scaling formula for the relative error on $\beta$ we were seeking, combining the volume, density and bias dependences calibrated above. clearly, by construction, this scaling formula quantifies random errors, not systematic ones.

the fisher information matrix provides a method for determining the sensitivity of a particular experiment to a set of parameters and has been widely used in cosmology. in particular, introduced an implementation of the fisher matrix aimed at forecasting errors on cosmological parameters derived from the galaxy power spectrum, based on its expected observational uncertainty. this was adapted by to the measurement of distances using the baryonic acoustic oscillations. following the renewed interest in rsd, over the past few years the fisher matrix technique has also been applied to predict the errors expected on $\beta$ and related parameters. the extensive simulations performed here provide us with a natural opportunity to perform a first simple and direct test of these predictions. given the number of details that enter the fisher matrix implementation, this cannot be considered as exhaustive; yet a number of interesting indications emerge, as we shall see.

we have computed fisher matrices for all catalogues in table [tab dilution], using a code following . in particular, our fisher matrix predicts errors on $\beta$ and $b$, given the errors on the linear redshift-space power spectrum modeled as in eq. ([eq kaiser spectrum beta]). we first limit the computations to linear scales, applying the standard cut-off in wavenumber. we also explore the possibility of including larger wavenumbers (that should better match the typical scales we fit in the correlation functions from the simulations), accounting for non-linearity through a conventional small-scale lorentzian damping term. our fiducial cosmology corresponds to that used in the simulation. we also choose a fixed reference value for the pairwise dispersion. we do not consider geometric distortions, whose impact on rsd is addressed in the parallel paper by .
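for reference, the linear model referred to above as eq. ([eq kaiser spectrum beta]) is the standard kaiser expression for the redshift-space power spectrum of linearly biased tracers; the form below is the textbook one, written here as a reconstruction rather than a verbatim copy of the paper's equation:

$$p_s(k,\mu) \;=\; b^2\left(1+\beta\mu^2\right)^2 p_m(k)\,, \qquad \beta \equiv \frac{f}{b}\,,$$

where $\mu$ is the cosine of the angle between the wavevector and the line of sight and $p_m(k)$ is the real-space matter power spectrum.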
to obtain the fisher predictions on $\beta$, we marginalize over the bias, to account for the uncertainty on its precise value, and over the pairwise velocity entering the damping term (when present). figure [fig fisher n] shows the measured relative errors on $\beta$ as a function of the number density, compared to the fisher forecasts for the two choices of the wavenumber cut-off. we also plot the scaling relation from eq. ([eq err mine general]), which best represents the simulation results. we see that the simulation results are in fairly good agreement with the fisher predictions when we limit the computation to very linear scales in the power spectrum (solid line). the inclusion of higher wavenumbers produces unrealistically small errors, with a wrong dependence on the number density. both the solid lines and the points reproduce the observed flattening at high number densities, which corresponds to the transition between a shot-noise and a cosmic-variance dominated regime. similarly, figure [fig fisher b] looks at the dependence of the error on the linear bias parameter, comparing the simulation results (points and scaling-formula best fit) to the fisher forecasts. the behaviour is similar to that observed for the number density: there is fairly good agreement when the fisher predictions are computed with the linear cut-off, except for very low values of the number density and the bias. again, when non-linear scales are included, the fisher predictions become too optimistic by a large factor.

we have performed an extensive investigation of statistical and systematic errors in measurements of the redshift-distortion parameter $\beta$ from future surveys. we have considered tracers of the large-scale distribution of mass with varying levels of bias, corresponding to objects like galaxies, groups and clusters. to this purpose, we have analyzed large catalogues of dark-matter halos extracted from a snapshot of the basicc simulation. our results clearly evidence the limitations of the linear description of redshift-space distortions, showing how errors depend on the typical survey properties (volume and number density) and the properties of the tracers (bias, i.e. typical mass). let us recap them and discuss their main implications.

* estimating $\beta$ using the hamilton/kaiser harmonic expansion of the redshift-space correlation function extended to typical quasi-linear scales leads to a systematic error of up to 10%. this is much larger than the statistical error of a few percent reachable by next-generation surveys. the largest systematic error is found for low-bias objects, and it decreases with increasing mass, reaching a minimum for the most massive halos considered. this reinforces the trend observed by .

* additional analyses of mock surveys from the millennium run confirm that the observed systematic errors are not the result of potentially missing sub-structure in the basicc halo catalogues.

* the use of the deprojected correlation function increases the statistical error, inducing also some additional systematic effects (details are given in appendix [sec deprojected] and in the companion paper by ).
* for highly biased objects, which are sparser and whose surveys typically cover larger, more linear scales, the simple kaiser model describes the simulated data fairly well, without the need for the empirical damping term with one extra parameter accounting for non-linear motions. this results in smaller statistical errors.

* we have derived a comprehensive scaling formula, eq. ([eq err mine general]), to predict the precision (i.e. relative statistical error) reachable on $\beta$ as a function of survey parameters. this expression improves on a previous attempt, generalizing the prediction to a population of arbitrary bias and properly describing the dependence on the number density. this formula can be useful to produce quite general and reliable forecasts for future surveys. one should in any case consider that there are a few implementation-specific factors that can modify the absolute values of the recovered _rms_ errors. for example, these would depend on the range of scales over which $\xi(r_p,\pi)$ is fitted. the values obtained here refer to fits performed over a fixed range of intermediate separations, identified through several experiments as an optimal range to minimize statistical and systematic errors for surveys of this size. theoretically, one may find it natural to push the minimum fitted separation, or both limits of the fitting range, to larger scales, so as to (supposedly) reduce the weight of nonlinear scales. in practice, however, in both cases we see that random errors increase in amplitude (while the systematic error is not appreciably reduced). similarly, one should also keep in mind that the formula is strictly valid at the redshift where it has been calibrated. there is no obvious reason to expect the scaling laws among the different quantities (density, volume, bias) to depend significantly on redshift; this is confirmed by a few preliminary measurements we performed on halo catalogues from another snapshot of the basicc. conversely, the magnitude of the errors may change, as shown, e.g., in ; we expect these effects to be described by a simple renormalization of the constant in the formula. finally, one may also consider that the standard deviations measured using the 27 sub-cubes could be underestimated if these are not fully independent. we minimize this by maximizing the size of each sub-cube, while having enough of them to build meaningful statistics; the side of each of the 27 sub-cubes is in fact large, benefiting from the large size of the basicc simulation.

* we have compared the error estimates from our simulations with idealized predictions based on the fisher matrix approach, customarily implemented in fourier space. we find good agreement, but only when the fisher computation is limited to significantly large, linear scales. when more non-linear scales are included (as an attempt to roughly match those actually involved in the fitting of $\xi(r_p,\pi)$ in configuration space), the predicted errors become unrealistically small. this indicates that the usual convention of adopting a conservative linear cut-off for this kind of study is well posed. on the other hand, it seems paradoxical that in this way the two methods are looking at different ranges of scales. the critical point clearly lies in the idealized nature of the fisher matrix technique: when moving up in wavenumber and thus adding more and more nonlinear scales, the fisher technique simply accumulates signal and dramatically improves the predicted error, clearly unaware of the additional "noise" introduced by the breakdown of linearity.
on the other hand, if in the direct fit of $\xi(r_p,\pi)$ (or the power spectrum) one conversely considers a correspondingly very linear range, a poor fit is obtained, with much larger statistical errors than shown, e.g., in fig. [fig beta lin-exp]. there is no doubt that smaller, mildly nonlinear scales at intermediate separations necessarily have to be included in the modelling if one aims at reaching percent statistical errors on measurements of $\beta$ (or $f\sigma_8$). if one does this in the fisher matrix, then the predicted errors are too small. the need to push our estimates to scales which are not fully linear will remain true even with surveys of the next generation, including tens of millions of galaxies over gpc$^3$ volumes, because that is where the clustering and distortion signals are (and will still be) the strongest. of course, our parallel results on the amount of systematic errors that plague estimates based on the standard dispersion model also reinforce the evidence that better modelling of nonlinear effects is needed on these scales. the strong effort being spent in this direction gives some confidence that significant technical progress will happen in the coming years (see the references therein). in any case, this limited exploration suggests once more that forecasts based on the fisher matrix approach, while giving useful guidelines on the error dependences, have to be treated with significant caution and possibly verified with more direct methods. a similar tension between fisher and monte carlo forecasts has recently been noticed by .

* finally, in appendix [sec likelihood] we have also clarified which is the most unbiased form to be adopted for the likelihood when fitting models to the observed redshift-space correlation function, proposing a slightly different form with respect to previous works.

with redshift-space distortions having emerged as a probe of primary interest in current and future dark-energy-oriented galaxy surveys, the results presented here further stress the need for improved descriptions of non-linear effects in clustering and dynamical analyses. on the other hand, they also indicate the importance of building surveys for which multiple tracers of rsd (with different bias values) can be identified and used in combination, to help understand and minimize systematic errors.

we warmly thank m. bersanelli for discussions and constant support and c. baugh for his invaluable contribution to the basicc simulations project. db acknowledges support by the università degli studi di milano through a phd fellowship. em is supported by the spanish micinn's juan de la cierva programme (jci-2010-08112), by cicyt through the project fpa-2009-09017 and by the community of madrid through the project hephacos (s2009/esp-1473) under grant p-esp-00346. financial support of prin-inaf 2007, prin-miur 2008 and asi contracts i/023/05/0, i/088/06/0, i/016/07/0, i/064/08/0 and i/039/10/0 is gratefully acknowledged. lg is partly supported by erc advanced grant # 291521 `darklight'.

acquaviva v., hajian a., spergel d. n., das s., 2008, phrvd, 78, 043514
alcock c., paczynski b., 1979, nature, 281, 358
angulo r. e., baugh c. m., frenk c. s., lacey c. g., 2008, mnras, 383, 755
angulo r. e., white s. d. m., 2010, mnras, 405, 143
bianchi d., 2010, master laurea thesis, university of milan
blake c., et al., 2011, mnras, 415, 2876
bueno belloso a., garcía-bellido j., sapone d., 2011, jcap, 10, 10
cabré a., gaztañaga e., 2009, mnras, 393, 1183
cappelluti n., et al., 2011, msais, 17, 159
davis m., peebles p. j. e., 1983, apj, 267, 465
davis m., efstathiou g., frenk c. s., white s. d. m., 1985, apj, 292, 371
de la torre s., guzzo l., 2012, arxiv:1202.5559
de lucia g., blaizot j., 2007, mnras, 375, 2
di porto c., amendola l., branchini e., 2012, mnras, 419, 985
eisenstein d. j., et al., 2011, aj, 142, 72
feldman h. a., kaiser n., peacock j. a., 1994, apj, 426, 23
fisher k. b., davis m., strauss m. a., yahil a., huchra j., 1994, mnras, 266, 50
fisher k. b., davis m., strauss m. a., yahil a., huchra j. p., 1994, mnras, 267, 927
fry j. n., 1985, phlb, 158, 211
guzzo l., et al., 2008, nature, 451, 541
hamilton a. j. s., 1992, apj, 385, l5
hamilton a. j. s., 1993, apj, 417, 19
hamilton a. j. s., 1998, in d. hamilton, ed., the evolving universe. kluwer, dordrecht, p. 185
hawken a. j., abdalla f. b., hütsi g., lahav o., 2012, mnras, 424, 2
hawkins e., et al., 2003, mnras, 346, 78
hewett p. c., 1982, mnras, 201, 867
jenkins a., frenk c. s., white s. d. m., colberg j. m., cole s., evrard a. e., couchman h. m. p., yoshida n., 2001, mnras, 321, 372
jennings e., baugh c. m., pascoli s., 2011, mnras, 410, 2081
kaiser n., 1987, mnras, 227, 1
kwan j., lewis g. f., linder e. v., 2012, apj, 748, 78
landy s. d., szalay a. s., 1993, apj, 412, 64
larson d., et al., 2011, apjs, 192, 16
laureijs r., et al., 2011, arxiv:1110.3193
lightman a. p., schechter p. l., 1990, apjs, 74, 831
linder e. v., 2008, aph, 29, 336
marulli f., bianchi d., branchini e., guzzo l., moscardini l., angulo r. e., 2012, mnras, 426, 2566
mcdonald p., seljak u., 2009, jcap, 10, 7
nesseris s., perivolaropoulos l., 2008, phrvd, 77, 023504
okumura t., jing y. p., 2011, apj, 726, 5
peacock j. a., 1999, cosmological physics, cambridge univ. press, cambridge
peebles p. j. e., 1980, the large-scale structure of the universe, princeton univ. press
percival w. j., white m., 2009, mnras, 393, 297
percival w. j., et al., 2010, mnras, 401, 2148
perlmutter s., et al., 1999, apj, 517, 565
riess a. g., et al., 1998, aj, 116, 1009
samushia l., et al., 2011, mnras, 410, 1993
samushia l., percival w. j., raccanelli a., 2012, mnras, 420, 2102
sánchez a. g., baugh c. m., percival w. j., peacock j. a., padilla n. d., cole s., frenk c. s., norberg p., 2006, mnras, 366, 189
saunders w., rowan-robinson m., lawrence a., 1992, mnras, 258, 134
scoccimarro r., 2004, phrvd, 70, 083007
seo h.-j., eisenstein d. j., 2003, apj, 598, 720
sheth r. k., mo h. j., tormen g., 2001, mnras, 323, 1
simpson f., peacock j. a., 2010, phrvd, 81, 043512
song y.-s., percival w. j., 2009, jcap, 10, 4
springel v., et al., 2005, nature, 435, 629
taruya a., nishimichi t., saito s., 2010, phrvd, 82, 063522
tegmark m., 1997, phrvl, 79, 3806
tinker j. l., robertson b. e., kravtsov a. v., klypin a., warren m. s., yepes g., gottlöber s., 2010, apj, 724, 878
tinker j. l., weinberg d. h., zheng z., 2006, mnras, 368, 85
wang l., steinhardt p. j., 1998, apj, 508, 483
wang y., 2008, jcap, 5, 21
wang y., et al., 2010, mnras, 409, 737
white m., song y.-s., percival w. j., 2009, mnras, 397, 1348
zhang p., liguori m., bean r., dodelson s., 2007, phrvl, 99, 141302
zurek w. h., quinn p. j., salmon j. k., warren m. s., 1994, apj, 431, 559

to estimate $\beta$, in section [sec fitting procedure] we defined a likelihood function comparing the measured correlation function and the corresponding parameterized models. our likelihood is simply given by the standard gaussian expression, where however the stochastic variable considered is not just the value of $\xi$ at each separation, but the transformed quantity $$y_{ij} = \log\left[1+\xi_{ij}\right],$$ which has the desirable property of placing more weight on large, more linear scales. this was first proposed by , who correspondingly adopt the following expression for the expectation value of the variance: $$\sigma_{ij}^2 = \left\{\log\left[1+\xi_{ij}+\delta(\xi_{ij})\right]-\log\left[1+\xi_{ij}-\delta(\xi_{ij})\right]\right\}^2 .$$ this simply maps onto the new variables the interval including 68% of the distribution in the original variables, i.e. twice the standard deviation if this were gaussian distributed. strictly speaking, an extra factor 1/2 would be formally required here if one aims at defining the equivalent of a standard deviation, but this is in the end ineffective in the minimization and thus in finding the best-fitting parameters.

$\beta$, as recovered from catalogues with varying density (but same volume and bias), using the two different definitions of the variance of each data point of eqs. ([eq scatter hawkins]) (open blue squares) and ([eq delta y]) (open red circles). the dashed line shows as reference the asymptotic common value of $\beta$ that both methods identically recover at high densities. note how using eq. ([eq delta y]) yields an unbiased estimate down to significantly smaller densities, whereas the estimator based on eq. ([eq scatter hawkins]) becomes rapidly more and more biased below a characteristic density. the intrinsic scatter of the measurements, as usual obtained from the 27 sub-cubes of this specific catalogue, also follows a similar trend.

however, the weighting factors in the likelihood definition depend explicitly on $\xi$, which may result in an improper weighting of the data when the correlation signal fluctuates near zero. we have directly verified that when the estimate is noisy, it is preferable to use a smooth weighting scheme rather than one that is sensitive to local random oscillations of $\xi$, which is more likely to yield biased estimates. this supported our choice of adopting the usual sample-variance expression estimated over realizations of the survey. this can be done using mock realizations or, alternatively, through appropriate jack-knife or bootstrap resamplings of the data. specifically, we find a significant advantage of the weighting scheme based on sample variance when dealing with low-density samples. this is shown in figure [fig bt vs n 20], where $\beta$ is estimated using the two likelihoods while gradually diluting the sample (note that all computations in this section use the linear-exponential model, with the real-space correlation function directly measured from the simulation). in order to understand the reasons behind this behaviour, we have studied independently the various terms composing the likelihood. we use one single sub-cube (i.e. 1/27 of the total volume) and consider two extreme values of the mean density. first, we consider the case of the highest density achievable by this halo catalogue.
in the upper panel of figure [fig enzo perc100 pi9.75 refhawkins] we plot a section of $\xi(r_p,\pi)$ at constant $\pi$, together with the model corresponding to the best-fit $\beta$ and bias parameters. in this density regime the values of the recovered best-fit parameters are essentially independent of the form chosen for the variance (as shown by the coincident values of $\beta$ on the right side of figure [fig bt vs n 20]). the match of the model to the data is very good. in the central panel we plot instead, for each bin, the absolute value of the difference between model and observation, together with the corresponding standard deviations in the two cases, which are virtually indistinguishable from each other. finally, the lower panel shows the full values of the terms contributing to the sum, again showing the equivalence of the two choices in this density regime.

top panel: cut-through of $\xi(r_p,\pi)$ at fixed $\pi$ (broken line), and corresponding best-fit model using the hawkins et al. form for the scatter of each data point (continuous line). central panel: residual values between the data and model values (light grey line) and values for the scatter of each point, according to the two definitions of eqs. ([eq delta y]) (solid red line) and ([eq scatter hawkins]) (dashed blue line). bottom panel: corresponding terms in the sum (see eq. ([eq fit app])). the two definitions for the scatter, as expected, produce virtually identical values for the likelihood.

however, when we sparsely sample the catalogue down to a much lower mean density (leaving all other parameters unchanged), a very different behaviour emerges (figure [fig enzo perc3.08 pi9.75 refhawkins]; in the upper panel of figure [fig bt vs n 20], the second blue square from the left shows the same behaviour when averaged over 27 sub-samples). the figure is as before, but now in the low-density regime. again, the model curve in the top panel corresponds to the best-fit parameters obtained using the hawkins et al. form of the scatter of each measurement, and the fit is very unsatisfactory. the bottom panel shows how the likelihood expression based instead on the standard deviation of $y$ from eq. ([eq delta y]) rejects these parameter values, giving high values (red solid curve); note the different scale on the ordinate with respect to the previous figure.

using the hawkins et al. definition for the variance yields a best-fit model that overestimates the data on almost all scales (top panel), corresponding to unphysical values of $\beta$ and the bias. the central panel now shows how in this regime the two definitions of the scatter (which weigh the data-model difference) behave in significantly different ways, with the hawkins et al. definition being much less stable than the one used here, and in general anti-correlated with the values in the upper panel. in the lower panel, the dashed line shows how this anti-correlation smooths down the peaks, resulting in erroneously low values for the summands that drive the fit to a wrong region of the parameter space. in the same panel, the solid line shows how the likelihood computed with our definition for these same parameters gives high values, thus correctly rejecting the model. consequently, where the scatter is not well defined (figure [fig enzo perc3.08 pi9.75 refhawkins], central panel), the corresponding summand receives a zero weight (lower panel).
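as an illustration of the weighting scheme advocated in this appendix, a minimal sketch of the likelihood evaluation in the transformed variable is given below; the function signature and the array handling are illustrative, with the scatter assumed to be estimated externally (from sub-cubes, mocks, jack-knife or bootstrap resamplings) rather than from the hawkins et al. expression.

```python
import numpy as np

def log_likelihood(xi_data, xi_model, sigma_y):
    """gaussian log-likelihood in the transformed variable y = log(1 + xi).

    sigma_y is the scatter of y estimated over realizations of the survey
    (the sample-variance weighting adopted in this appendix); bins with
    1 + xi <= 0 or vanishing scatter are discarded.
    """
    xi_data = np.asarray(xi_data, dtype=float)
    xi_model = np.asarray(xi_model, dtype=float)
    sigma_y = np.asarray(sigma_y, dtype=float)
    good = (1 + xi_data > 0) & (1 + xi_model > 0) & (sigma_y > 0)
    y_data = np.log(1 + xi_data[good])
    y_model = np.log(1 + xi_model[good])
    return -0.5 * np.sum(((y_data - y_model) / sigma_y[good]) ** 2)
```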
in a real survey, the direct measurement of the real-space correlation function is not possible. a way around this obstacle is to project $\xi(r_p,\pi)$ along the line of sight, i.e. along the direction affected by redshift distortions. we hence define the projected correlation function as $$w_p(r_p) = 2\int_0^{\pi_{\rm max}} \xi(r_p,\pi)\, d\pi . \label{eq projected}$$ inverting the integral we recover the real-space correlation function $\xi(r)$. more precisely, following , we have $$\xi(r) = -\frac{1}{\pi}\int_r^{\infty} \frac{dw_p(r_p)}{dr_p}\,\frac{dr_p}{\sqrt{r_p^2-r^2}}\,, \label{eq deprojected}$$ where $\pi$ in the prefactor is the usual mathematical constant, not to be confused with the line-of-sight separation in eq. ([eq projected]).

a more extended investigation of the effects arising when using the deprojected correlation function instead of the directly measured one is carried out in . here we limit the discussion to the impact of the deprojection technique on the estimate of $\beta$, as a function of the mass (i.e. the bias) of the adopted tracers, focussing on the systematic effects (figure [fig beta dep]).

values of $\beta$ obtained when the real-space correlation function is directly measured from the simulation (triangles) or deprojected as in real surveys (rhombs and inverted triangles); the latter correspond to two different integration limits in the projection. the two lower panels give the systematic and statistical errors as in figure [fig lin vs exp].

one possible source of systematic error in performing the deprojection is the necessity of adopting a finite integration limit $\pi_{\rm max}$ in eq. ([eq deprojected]). in figure [fig beta dep] two different choices of $\pi_{\rm max}$ are considered. we notice that these choices (purple inverted triangles and yellow rhombs) result in different slopes of $\beta$ as a function of bias, which differ from the slope obtained using the directly measured correlation function (green triangles). this is plausibly due to the fact that using a finite limit we are underestimating the integral. this effect grows when the bias increases, because of the corresponding growth of the clustering signal, which leads to a larger loss of power. however, we cannot use arbitrarily large values of $\pi_{\rm max}$, because the statistical error increases for larger integration limits (see the lowest panel of figure [fig beta dep]); this may be due to the increase of the shot noise at large separations. similarly, the drop of the correlation signal at small separations, due to the finite size of the dark matter halos, produces an impact that grows with bias. finally, as suggested previously and discussed extensively in , figure [fig beta dep] shows how using the deprojected correlation function in modelling rsd produces a statistical error about twice as large as that obtained using the directly measured one (lower panel).
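a crude numerical sketch of this inversion is given below, assuming a measured $w_p$ tabulated on a grid of $r_p$; it makes no attempt at the careful treatment of the integrable singularity at $r_p = r$, or of the finite upper limit, that a real analysis would require.

```python
import numpy as np

def xi_from_wp(rp, wp, r):
    """recover xi(r) from the projected correlation function wp(rp):
    xi(r) = -(1/pi) * int_r^inf [d wp / d rp] / sqrt(rp^2 - r^2) d rp.
    rp must be increasing, with wp assumed to have converged to ~0 at the
    largest rp; evaluate r between grid points to sidestep the integrable
    singularity (no special quadrature is used in this sketch)."""
    rp = np.asarray(rp, dtype=float)
    dwp = np.gradient(wp, rp)                 # numerical derivative d wp / d rp
    r = np.atleast_1d(np.asarray(r, dtype=float))
    xi = np.empty_like(r)
    for i, ri in enumerate(r):
        mask = rp > ri
        integrand = dwp[mask] / np.sqrt(rp[mask] ** 2 - ri ** 2)
        xi[i] = -np.trapz(integrand, rp[mask]) / np.pi
    return xi
```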
we investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (rsd) in future cosmological surveys by analysing large catalogues of dark-matter halos from the basicc simulation. these allow us to estimate the dependence of errors on typical survey properties, such as the volume, the galaxy density and the mass (i.e. bias factor) of the adopted tracer. we find that measures of the specific growth rate using the hamilton/kaiser harmonic expansion of the redshift-space correlation function down to quasi-linear scales are typically under-estimated by up to 10% for galaxy-sized halos. this is significantly larger than the corresponding statistical errors, which amount to a few percent, indicating the importance of non-linear improvements to the kaiser model for obtaining accurate measurements of the growth rate. the systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the halos considered. we compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the fisher information matrix technique, which is what is usually adopted to produce rsd forecasts, based on the fkp prescription for the errors on the power spectrum. we show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in fourier space. finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on $\beta$ as a function of the same parameters, which closely matches the simulation results in all explored regimes. this provides a handy and plausibly more realistic alternative to the fisher matrix approach, to quickly and accurately predict statistical errors on rsd expected from future surveys.

keywords: cosmological parameters; dark energy; large-scale structure of the universe.
quantum error correction (qec) and fault-tolerant quantum computation demonstrate that quantum information can in principle be stored and manipulated coherently for arbitrarily long times despite the presence of noise. the general framework of qec is the following. redundancy is introduced by encoding the information of a system into a larger system. the image of the former in the latter characterizes a code, while a particular embedding of one into the other is called an encoding. the encoded system is subjected to some noise. partial measurements, whose outcomes are known as the "error syndrome", are performed on it. conditioned on this error syndrome, a recovery operation is applied in order to restore the original information. this last step, called "decoding", is the subject of the present study. in the absence of structure in the code, we know from a classical result that finding the optimal recovery is np-hard. for practical purposes, one must either use codes with lots of structure, which typically offer poorer performance, or settle for suboptimal recovery. residual errors after decoding are therefore of two varieties: those due to the information-theoretic limitations of the code, and those arising from suboptimal decoding procedures. in the past decades, considerable progress has been made towards understanding this tradeoff in the classical setting (see e.g. and references therein). central to these advancements is the use of the message passing decoding algorithm pioneered by gallager, which often leads to near-optimal decoding. this technique was recently introduced in the quantum realm by ollivier et al. for the decoding of low density parity check (ldpc) codes (see also for related work).

concatenation of block codes is widely used in quantum information science and is a key component of almost all fault-tolerant schemes (a noticeable exception is topological quantum computing). as the name suggests, the system that redundantly encodes the information can itself be encoded in a yet larger system, adding an extra layer of redundancy. provided the initial error rate is below a threshold value, every extra level of concatenation should reduce the probability of error after decoding, so concatenation can in principle be repeated until the error is below any desired value. in this article, we demonstrate an efficient message passing algorithm that achieves optimal (maximum likelihood) decoding for _concatenated block codes_ with uncorrelated noise. we numerically investigate the message passing algorithm using the 5-qubit code and steane's 7-qubit code, and compare their performances to the commonly used blockwise minimal-distance decoder (based on a local rather than global optimization). the advantages of the message passing algorithm are substantial. on the one hand, for the 5-qubit code used on a depolarizing channel, the message passing algorithm can correctly decode the information for a noise level up to at least 0.1885 (the exact threshold is probably the hashing bound), compared to the value 0.1376 previously established using blockwise decoding. for steane's code, this enhancement is even greater, going from 0.0969 to at least 0.188.
On the other hand, away from these noise thresholds, the probability of error decreases at a significantly higher rate using optimal decoding. For instance, for a 0.1 depolarizing channel and using 4 levels of concatenation of the 5-qubit code, the probabilities that the blockwise decoding and the optimal decoding fail to correctly identify the error differ by more than three orders of magnitude. As a consequence, a given decoding error probability can be achieved with a substantially reduced error correction overhead. Our presentation of the stabilizer formalism follows ; see for the general theory. Denote by , and the three Pauli matrices, and by the identity matrix. The group is the multiplicative group generated by the Pauli matrices and the imaginary unit. The -qubit Pauli group is the -fold tensor product of . We denote the Pauli matrix acting on the qubit for , and similarly for and . Note that the s and the s form a generating set of , i.e. . The Clifford group on qubits is the largest subgroup of the unitary group that maps to itself under the adjoint action. The encoding of qubits into qubits can be specified by a matrix . It is a unitary matrix acting on qubits that are distributed into three different sets. The first "logical" qubits contain the information to be encoded; the next "stabilizer" qubits are set to the state ; and finally the remaining "gauge" qubits are in arbitrary states. The images of the Pauli operators acting on the first qubits are known as logical Pauli operators and . The images of the Pauli operators acting on the stabilizer qubits are called stabilizer generators, whereas the images of the operators acting on those qubits are called pure errors. Finally, the images of the Pauli operators acting on the remaining qubits are called gauge operators and . The stabilizer generators mutually commute, so they can be simultaneously measured. The outcome of that measurement is called the error syndrome. Since the stabilizer qubits are all in the state prior to encoding, we conclude that in the absence of noise the encoded state should be an eigenstate of all stabilizer generators; thus the error syndrome should be all ones. A non-trivial syndrome therefore indicates that an error has corrupted the register, and the task of decoding consists in finding the optimal recovery procedure given an error syndrome. To address the decoding problem, note that . In other words, any element can be written, up to an irrelevant phase, as a product where is a product of logical Pauli operators, is a product of pure errors, and is a product of gauge operators and stabilizer elements. Moreover, this decomposition can be found by running the circuit backward, which is efficient since . The pure-error part is completely determined by the syndrome: a given pure error appears in the decomposition if and only if the corresponding syndrome bit is non-trivial. The gauge part is irrelevant, because the information encoded in the qubits is invariant under its action. This reflects the fact that the stabilizer qubits are initially set to and that the gauge qubits are in random states. Thus, to undo the effect of an error, one needs to identify the most likely value of the logical part given the error, or equivalently given the syndrome. For simplicity, we will focus on Pauli channels, where errors are elements of the Pauli group distributed according to a known probability distribution.
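To make the syndrome extraction concrete, here is a small sketch in the binary symplectic representation: a Pauli string maps to (x|z) bit vectors and two Paulis commute iff their symplectic inner product vanishes. The generators below are the standard stabilizers of the 5-qubit code; all helper names are ours.

```python
def pauli_to_bits(pauli):
    x = [1 if c in "XY" else 0 for c in pauli]
    z = [1 if c in "ZY" else 0 for c in pauli]
    return x, z

def commutes(a, b):
    ax, az = pauli_to_bits(a)
    bx, bz = pauli_to_bits(b)
    return sum(ax[i] * bz[i] + az[i] * bx[i] for i in range(len(a))) % 2 == 0

FIVE_QUBIT_STABILIZERS = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

def syndrome(error):
    # one bit per generator: 1 iff the error anticommutes with it (the +1
    # eigenvalue convention of the text corresponds to all-trivial outcomes)
    return tuple(0 if commutes(error, g) else 1 for g in FIVE_QUBIT_STABILIZERS)

print(syndrome("IXIII"))  # a single X on qubit 2 -> (1, 0, 0, 0)
```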
Given this probability distribution over errors, one can compute the conditional probability of a logical transformation given the observed syndrome,

$$p(l \mid s) \;=\; \frac{1}{p(s)} \sum_{E} \delta[\mathcal{S}(E) = s]\,\delta[\mathcal{L}(E) = l]\,p(E) \;=\; \frac{1}{p(s)} \sum_{G} p(E = l\,T\,G), \qquad \text{(eq:prob\_block)}$$

where $\delta[\cdot]$ denotes the indicator function, $T$ is the pure error fixed by the syndrome $s$, and $G$ runs over all possible combinations of stabilizer generators and gauge operators. Given a finite block size, these probabilities can be computed and the optimal decoding can be evaluated. Decoding a block code thus consists of looking up a table containing the values of $p(l \mid s)$ for each syndrome. Typically, and in particular for a non-degenerate code over the depolarizing channel, this corresponds to the minimal-distance decoder, for which the chosen recovery is the error acting on the fewest qubits that is compatible with the observed syndrome. Concatenation is realized by encoding the qubits of the code in another code. There is no need for this other code to be identical to the original one; however, to simplify the presentation, we will assume that the same code is used at every concatenation layer and that it encodes a single qubit; generalizations are straightforward. This procedure can be repeated at the expense of an exponentially growing number of physical qubits. The number of stabilizer generators grows roughly like the number of physical qubits (it is a geometric sum); thus the syndrome can take exponentially many different values. Hence, even for a moderate number of concatenation levels, it is not feasible to construct a lookup table giving the optimal decoding procedure for each syndrome value. What is generally done to circumvent this double exponential blowup is to apply the optimal recovery independently for each concatenation layer (see e.g. chap. 6 of and references therein). One first measures the syndrome from each of the blocks of qubits of the last layer of concatenation, and optimally decodes them using the lookup table. One then moves one layer up and applies the same procedure to the blocks of the second-to-last layer, etc. When the initial error rate is below a certain threshold value, the probability that this procedure fails to correctly identify the error decreases doubly-exponentially with the number of layers. Hence, this decoding scheme based on hard decisions for each concatenation layer is efficient and leads to good error suppression, but it is nonetheless suboptimal. Let be the syndrome of the block of the concatenation layer . Denote the collection of syndromes whose stabilizers act non-trivially on the physical qubits associated to the block of the concatenation layer ; these sets can be defined recursively, with the initialization at the bottom layer. Finally, denote all the syndromes from the layers to . (See the above figure for a pictorial representation.)
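The lookup-table construction of eq. (eq:prob_block) can be reproduced by brute force for a single 5-qubit block. The sketch below (reusing the helpers of the previous sketch; all names are ours) enumerates all 4^5 Pauli errors, bins their probabilities by syndrome and by logical coset, and reads off the maximum-likelihood class. Phases are ignored throughout, and class labels are relative to an arbitrary reference error per syndrome, which is all a decoder needs.

```python
from itertools import product

def bits(pauli):
    x, z = pauli_to_bits(pauli)
    return tuple(x + z)

def xor(a, b):
    return tuple(u ^ v for u, v in zip(a, b))

gens = [bits(g) for g in FIVE_QUBIT_STABILIZERS]
stab_span = set()
for cs in product((0, 1), repeat=4):  # the 16 stabilizer group elements
    v = (0,) * 10
    for c, g in zip(cs, gens):
        if c:
            v = xor(v, g)
    stab_span.add(v)

# logical operators of the 5-qubit code (transversal, phases ignored)
LOGICALS = {"I": "IIIII", "X": "XXXXX", "Z": "ZZZZZ", "Y": "YYYYY"}

def build_table(p):
    # table[s][l] = P(logical class l, syndrome s); rep[s] is the reference
    # error fixing the (arbitrary) labeling of the four cosets
    table, rep = {}, {}
    for paulis in product("IXYZ", repeat=5):
        e = "".join(paulis)
        pr = 1.0
        for c in e:  # i.i.d. depolarizing prior
            pr *= (1 - p) if c == "I" else p / 3
        s = syndrome(e)
        r = rep.setdefault(s, bits(e))
        d = xor(bits(e), r)
        l = next(k for k, L in LOGICALS.items() if xor(d, bits(L)) in stab_span)
        table.setdefault(s, {"I": 0.0, "X": 0.0, "Y": 0.0, "Z": 0.0})[l] += pr
    return table, rep

table, rep = build_table(0.1)
s = syndrome("IXIII")
print(max(table[s], key=table[s].get))  # maximum-likelihood class for s
```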
Then is the set of all syndromes, and maximum-likelihood decoding consists in finding the most likely logical transformation given all of them. This probability can be factorized by conditioning on the logical errors of the second layer:

$$\begin{aligned} p(l_1 \mid \bar{s}_1) &= \sum_{\bar{L}_2} \delta[l_1 = \mathcal{L}(\bar{L}_2)]\, \frac{p(\bar{L}_2, \bar{s}_1)}{p(\bar{s}_1)} \\ &= \sum_{\bar{L}_2} \delta[l_1 = \mathcal{L}(\bar{L}_2)]\, \frac{p(s_1 \mid \bar{L}_2, \bar{s}_2)\, p(\bar{L}_2, \bar{s}_2)}{p(s_1, \bar{s}_2)} \\ &= \sum_{\bar{L}_2} \delta[l_1 = \mathcal{L}(\bar{L}_2)]\, \delta[s_1 = \mathcal{S}(\bar{L}_2)]\, \frac{p(\bar{L}_2 \mid \bar{s}_2)\, p(\bar{s}_2)}{p(s_1, \bar{s}_2)} \\ &= \sum_{\bar{L}_2} \frac{\delta[l_1 = \mathcal{L}(\bar{L}_2)]\, \delta[s_1 = \mathcal{S}(\bar{L}_2)]}{p(s_1 \mid \bar{s}_2)}\, \prod_{j=1}^{n} p(l_2^{(j)} \mid \bar{s}_2^{(j)}). \qquad \text{(eq:factor)} \end{aligned}$$

Above, $\mathcal{S}(\bar{L}_2)$ denotes the syndrome associated to the error pattern $\bar{L}_2$. This series of manipulations repeatedly uses Bayes' rule and the fact that the syndrome and logical error of a given level are completely determined by the logical errors of the next layer. The last step relies on the important assumption that the channel is memoryless, or more specifically, that the noise model does not correlate qubits across distinct blocks (errors on qubits within the same block may be correlated). Equation (eq:factor) shows that, by conditioning on the logical errors of each concatenation layer, the factor graph associated to this function is a tree, as depicted in the above graph. We have thus reduced optimal decoding to a sum-product problem (known as tensor network contraction in quantum information science) on a tree graph, which can be solved exactly and efficiently in the number of variables using a message passing algorithm (also known as belief propagation); see and references therein. Let us describe this algorithm in a general setting. The factor graph is a bipartite graph, and vertices from the two partitions are decorated with circles and boxes. Circle vertices are labeled, and each one carries a variable with values in a discrete set. Box vertices are labeled, and each one contains a function that depends on the variables from the adjacent circles, collectively denoted. The goal is to compute the marginals of the product of all box functions, i.e. this product summed over all variables but one, up to a normalization factor. To this end, messages are passed from the circles to the boxes and from the boxes to the circles following the rules

$$m_{c \to b}(x_c) = \prod_{b' \in n(c)\setminus b} m_{b' \to c}(x_c), \qquad m_{b \to c}(x_c) = \sum_{x_{n(b)\setminus c}} f_b(x_{n(b)}) \prod_{c' \in n(b)\setminus c} m_{c' \to b}(x_{c'}),$$

where $n(c)\setminus b$ means all neighbors of $c$ excluding $b$, and similarly for $n(b)\setminus c$. Note that these messages are functions of the discrete variables (i.e. they are vectors). The messages are initialized to the constant function 1. For a tree graph, the desired marginal of a variable is obtained from these messages after a number of steps equal to the depth of the variable, and is given by the normalized product of all incoming box-to-circle messages. In the case of interest, circles carry logical operators, and a box labeled $(m, j)$ carries the function $\delta[s_{m+1}^{(j)} = \mathcal{S}(l_m^{(nj-n+1)}, \ldots, l_m^{(nj)})]$, where the syndrome $s_{m+1}^{(j)}$ is fixed by the measurements. To complete the picture, extra box vertices carrying the function of eq. (eq:prob_block) need to be attached to the bottom leaves of the graph. The normalization factor can be evaluated by normalizing the obtained distribution. Thus, we can efficiently evaluate the conditional probability of every logical transformation. (Computing only the top marginal leads to a simplification of the message passing rules; in particular, messages need only flow from the bottom to the top of the graph.) The optimal recovery is then the one maximizing this function. The advantage of the message passing algorithm over the minimal-distance decoder comes from the fact that it does not throw away useful information.
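For readers who want to see the sum-product rules above in executable form, here is a generic sketch for tree factor graphs. The representation (messages as dicts over discrete values) and all names are ours; a real decoder would specialize the box functions to the indicator-weighted conditionals described in the text.

```python
from itertools import product

def marginal(root, factors, domains):
    # factors: box label -> (scope tuple of variable names, function of a
    # {var: value} dict); the graph is assumed to be a tree
    def var_to_box(v, parent_box):
        msg = {x: 1.0 for x in domains[v]}  # product of non-parent box messages
        for b, (scope, f) in factors.items():
            if b != parent_box and v in scope:
                m = box_to_var(b, v)
                for x in domains[v]:
                    msg[x] *= m[x]
        return msg

    def box_to_var(b, parent_var):
        scope, f = factors[b]
        others = [v for v in scope if v != parent_var]
        in_msgs = {v: var_to_box(v, b) for v in others}
        msg = {}
        for x in domains[parent_var]:
            total = 0.0
            for combo in product(*(domains[v] for v in others)):
                assign = dict(zip(others, combo))
                assign[parent_var] = x
                w = f(assign)
                for v in others:
                    w *= in_msgs[v][assign[v]]
                total += w
            msg[x] = total
        return msg

    raw = var_to_box(root, parent_box=None)
    Z = sum(raw.values())
    return {x: p / Z for x, p in raw.items()}

# toy check: two binary variables coupled by one factor, plus a prior on 'a'
domains = {"a": [0, 1], "b": [0, 1]}
factors = {"prior_a": (("a",), lambda t: 0.9 if t["a"] == 0 else 0.1),
           "equal":   (("a", "b"), lambda t: 1.0 if t["a"] == t["b"] else 0.2)}
print(marginal("b", factors, domains))  # {0: ~0.767, 1: ~0.233}
```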
Instead of computing the most likely recovery and passing it on to the next level of coding, the entire list of probabilities of the possible recoveries, conditioned on the observed syndrome, is passed on. In other words, the original channel is composed with the syndrome measurement and projected onto the logical algebra to yield a "conditionally renormalized" channel. Following the tradition for benchmarking QEC techniques, we investigate the performance of the message passing decoding algorithm using a depolarizing channel, where each qubit is independently subjected to the channel. We use the 5-qubit code concatenated with itself up to ten times, for an overhead of 9,765,625 physical qubits per logical qubit. Pauli errors are generated by picking each single-qubit operator independently according to the depolarizing probabilities. The associated logical error and syndromes are computed exactly. These syndromes are used by a blockwise decoding routine yielding one estimate, and by a message passing routine yielding the optimal decoding. A decoding is declared incorrect when its estimate differs from the actual logical error. This is repeated a large number of times to evaluate the probability that the decoding gives an incorrect estimate. [Fig. fig:flow caption: decoding error probability as a function of the level of concatenation for different depolarization rates; diamonds: message passing algorithm; circles: blockwise decoding; all data are from samples of encoded qubits.] Figure [fig:flow] shows the probability of incorrect decoding for both the blockwise and the optimal decoding as a function of the level of concatenation, for channel parameters ranging from 0.13 to 0.19. For the blockwise decoding, the error probability ceases to decrease with the concatenation level above the threshold of this decoding technique, which for this particular code is about 0.1376; hence all curves except the 0.13 one are above the threshold. On the other hand, optimal decoding succeeds in decreasing the error probability for values up to at least 0.1885, but appears to fail above that. We conjecture that the exact value of this threshold is the hashing bound, where the single-qubit coherent information vanishes, which is the highest threshold any non-degenerate code can achieve. Results obtained from Steane's code show quite similar behavior, with at least a 94% increase of the threshold, going from 0.0969 to at least 0.188, and appearing to fail at 0.1885. An interesting feature of the curves obtained from optimal decoding is their non-monotonicity. Blockwise decoding, on the other hand, always yields monotonic curves for this type of channel; thus its global behavior under concatenation can be predicted from a single level of coding. This is because decoding is performed independently on each concatenation layer. With the optimal decoder, information about the syndromes is propagated from one layer of concatenation to the next through the conditionally renormalized channel, which ceases to be depolarizing and varies from one qubit to the other. Thus, the non-monotonicity of the curves is a signature of the global optimization performed by the message passing algorithm. [Fig. fig:exponential caption: decoding error probability for depolarization rates of and 0.05; diamonds: obtained from Monte Carlo samples of encoded qubits; circles: produced using an exact numerical technique similar to that of ref. .] Figure [fig:exponential] shows the behavior of the decoding error probability as a function of the concatenation level away from the threshold values, i.e., in the natural operating regime of the code.
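The blockwise (hard-decision) benchmark lends itself to a compact Monte Carlo sketch, reusing syndrome(), sample_depolarizing_error(), bits(), xor(), stab_span, LOGICALS and build_table() from the previous sketches (all names ours). As a simplification, the same single-block table is reused at every layer, whereas a more careful hard decoder would recompute it with the effective error rate of each level.

```python
p = 0.05
table, rep = build_table(p)

def logical_label(e, s):
    # logical coset of error e, relative to the reference error of syndrome s
    d = xor(bits(e), rep[s])
    return next(k for k, L in LOGICALS.items() if xor(d, bits(L)) in stab_span)

def mul1(a, b):
    # product of two single-qubit Paulis, phases ignored
    x = (a in "XY") ^ (b in "XY")
    z = (a in "ZY") ^ (b in "ZY")
    return "IXZY"[x + 2 * z]

def blockwise_decode(error):
    # error: string over IXYZ of length 5**k; decode each 5-block
    # independently, pass the residual logical error up one layer, repeat
    while len(error) > 1:
        nxt = []
        for i in range(0, len(error), 5):
            block = error[i:i + 5]
            s = syndrome(block)
            l_true = logical_label(block, s)
            l_hat = max(table[s], key=table[s].get)  # hard decision
            nxt.append(mul1(l_true, l_hat))
        error = "".join(nxt)
    return error  # 'I' means the decoding succeeded

trials = 2000
fails = sum(blockwise_decode(sample_depolarizing_error(25, p)) != "I"
            for _ in range(trials))
print(fails / trials)  # failure-rate estimate at two concatenation levels
```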
Again, the advantages of the message passing algorithm are considerable. After 4 rounds of concatenation, message passing fails with a probability of roughly , whereas this probability is well above that for blockwise hard decoding. It takes several more layers of concatenation for the blockwise decoding to reach comparable performance. Again, results obtained from Steane's code show an even larger gap. Finally, we stress once again that message passing outputs the probability of an error rather than one particular value. A hard decision can then be made based on this probability. We observe that when decoding succeeds, this probability is typically very close to one (e.g. 0.999), whereas when it fails it is relatively low (typically 0.7); the algorithm "knows" when it is failing. This "flagging" of errors offers a great advantage when post-selection is an option. The possibility of operating the algorithm with soft inputs, i.e. noisy syndrome measurements, is also of interest in several circumstances. We have demonstrated an efficient message passing algorithm for the optimal decoding of concatenated quantum block codes on memoryless channels. Numerical results show substantial benefits of our approach over the widely used blockwise hard decoding, including an increase of the error thresholds and a greater error suppression rate. Message passing algorithms have been used on graphs with loops (describing e.g. LDPC codes, turbo-codes, or channels with memory) and often yield near-optimal decoding. The quantum generalizations of these schemes, including quantum LDPC codes and quantum turbo-codes, are promising avenues for the realization of quantum information technologies. Techniques reminiscent of message passing have been used to beat the hashing bound but were not efficiently implementable; efficient decoding may now be within reach using our techniques. A "hard" message passing scheme was also used in to obtain high fault-tolerant error thresholds; a full-fledged message passing scheme, although not optimal for the correlated errors that are typically present in fault-tolerant schemes, should further improve this threshold and may significantly reduce the resource overhead. I thank Harold Ollivier for several useful conversations on message passing algorithms and quantum error correction, and Graeme Smith and Jon Yard for comments on this paper. This work was supported in part by the Gordon and Betty Moore Foundation through Caltech's Center for the Physics of Information, by the National Science Foundation under grant no. PHY-0456720, and by Canada's NSERC.
We consider the problem of optimally decoding a quantum error correction code, that is, of finding the optimal recovery procedure given the outcomes of partial "check" measurements on the system. In general, this problem is NP-hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message passing algorithm. We compare the performance of the message passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the 5-qubit and Steane's codes on a depolarizing channel demonstrate significant advantages of the message passing algorithm in two respects: 1) optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; 2) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead.
the quantitative understanding of structure , function , dynamics and transport of biomolecules is a fundamental theme in contemporary life sciences .geometric analysis and associated biophysical modeling have been the main workhorse in revealing the structure - function relationship of biomolecules and contribute enormously to the present understanding of biomolecular systems .however , biology encompasses over more than twenty orders of magnitude in time scales from electron transfer and ionization on the scale of femtoseconds to organism life spanning over tens of years , and over fifteen orders of magnitude in spatial scales from electrons and nuclei to organisms . the intriguing complexity and extraordinarily large number of degrees of freedom of biological systemsgive rise to formidable challenges to their quantitative description and theoretical prediction .most biological processes , such as signal transduction , gene regulation , dna specification , transcription and post transcriptional modification , are essentially intractable for atomistic geometric analysis and biophysical simulations , let alone _ ab - initio _ quantum mechanical descriptions .therefore , the complexity of biology and the need for its understanding offer an extraordinary opportunity for innovative theories , methodologies , algorithms and tools .the study of subcellular structures , organelles and large multiprotein complexes has become one of the major trends in structural biology .currently , one of the most powerful tools for the aforementioned systems is cryo - electron microscopy ( cryo - em ) , although other techniques , such as macromolecular x - ray crystallography , nuclear magnetic resonance ( nmr ) , electron paramagnetic resonance ( epr ) , multiangle light scattering , confocal laser - scanning microscopy , small angle scattering , ultra fast laser spectroscopy , etc ., are useful for structure determination in general . in cryo - em experiments ,samples are bombarded by electron beams at cryogenic temperatures to improve the signal to noise ratio ( snr ) .the working principle is based on the projection ( thin film ) specimen scans collected from many different directions around one or two axes , and the radon transform for the creation of three - dimensional ( 3d ) images .one of major advantages of cryo - em is that it allows the imaging of specimens in their native environment .another major advantage is its capability of providing 3d mapping of entire cellular proteomes together with their detailed interactions at nanometer or subnanometer resolution .the resolution of cryo - em maps has been improved dramatically in the past two decades , thanks to the technical advances in experimental hardware , noise reduction and image segmentation techniques . by further taking the advantage of symmetric averaging ,many cryo - em based virus structures have already achieved a resolution that can be interpreted in terms of atomic models .there have been a variety of elegant methods and software packages in cryo - em structural determination .most biological specimens are extremely radiation sensitive and can only sustain a limited electron dose of illumination . 
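Since the limited electron dose translates directly into low signal-to-noise data (quantified in the next paragraph), a quick way to emulate such data is to corrupt a synthetic density with additive Gaussian noise at a prescribed SNR. Defining the SNR as signal variance over noise variance is one common convention (an assumption on our part), and the toy 1D profile is purely illustrative.

```python
import numpy as np

def add_gaussian_noise(density, snr):
    # assumption: SNR = Var(signal) / Var(noise)
    sigma = np.sqrt(density.var() / snr)
    return density + np.random.normal(0.0, sigma, density.shape)

x = np.linspace(-3.0, 3.0, 256)
clean = np.exp(-x**2) + 0.7 * np.exp(-(x - 1.5) ** 2 / 0.1)  # toy "map"
noisy = add_gaussian_noise(clean, snr=0.01)  # typical cryo-tomogram regime
```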
as a result ,cryo - em images are inevitably of low snr and limited resolution .in fact , the snrs of cryo - tomograms for subcellular structures , organelles and large multi - protein complexes are typically in the neighborhood of 0.01 .to make the situation worse , the image contrast , which depends on the difference between electron scattering cross sections of cellular components , is also very low in most biological systems .consequently , cryo - em maps often do not contain adequate information to offer unambiguous atomic - scale structural reconstruction of biological specimens .additional information obtained from other techniques , such as x - ray crystallography , nmr and computer simulation , is indispensable to achieve subnanometer resolutions .however , for cryo - em data that do not have much additional information obtained from other techniques , the determination of what proteins are involved can be a challenge , not to mention subnanometer structural resolution . to improve the snr and image contrast of cryo - em data , a wide variety of denoising algorithms has been employed .standard techniques , such as bilateral filter and iterative median filtering have been utilized for noise reduction .additionally , wavelets and related techniques have also been developed for cryo - em noise removing . moreover ,anisotropic diffusion or beltrami flow approach has been proposed for cryo - em signal recovering .however , cryo - em data denoising is far from adequate and remains a challenge due to the extremely low snrs and other technical complications .for example , one of difficulties is how to distinguish signal from noise in cryo - em data . as a result, one does not know when to stop or how to apply a threshold in an iterative noise removing process .there is a pressing need for innovative mathematical approaches to further tackle this problem . recently, persistent homology has been advocated as a new approach for dealing with big data sets . in general ,persistent homology characterizes the geometric features with persistent topological invariants by defining a scale parameter relevant to topological events .the essential difference between the persistent homology and traditional topological approaches is that traditional topological approaches describe the topology of a given object in truly metric free or coordinate free representations , while persistent homology analyzes the persistence of the topological features of a given object via a filtration process , which creates a family of similar copies of the object at different spatial resolutions .technically , a series of nested simplicial complexes is constructed from a filtration process , which captures topological structures continuously over a range of spatial scales .the involved topological features are measured by their persistent intervals .persistent homology is able to embed geometric information to topological invariants so that birth " and death " of isolated components , circles , rings , loops , pockets , voids or cavities at all geometric scales can be monitored by topological measurements .the basic concept of persistent homology was introduced by frosini and landi and by robins in 1999 independently .edelsbrunner et al . 
introduced the first efficient computational algorithm, and Zomorodian and Carlsson generalized the concept. A variety of elegant computational algorithms has been proposed to track topological variations during the filtration process. Often, the persistence diagram is visualized through barcodes, in which the horizontal line segments or bars represent homology generators that persist over ranges of filtration scales. It has been applied to a variety of domains, including image analysis, image retrieval, chaotic dynamics verification, sensor networks, complex networks, data analysis, computer vision, shape recognition and computational biology. The concept of persistent homology has also been used for noise reduction. It is generally believed that short-lifetime events (or bars) are of less importance and thus regarded as "noise", while long-lifetime ones are considered "topological signals", although this idea was challenged in a recent work. In topological data analysis, pre-processing algorithms are needed to remove this noise efficiently. Depending on the scale of a feature, a simple approach is to select a subset of landmark points as representative of the topological data. The points can be chosen randomly, spatially evenly, or from extreme values. More generally, certain functions can be defined as guidance for node selection to attenuate the noise effect, which is known as thresholding. Clustering algorithms with special kernel functions can also be employed to recover the topological signal. All of these methods can be viewed as a process of data sampling that preserves the significant topological features; they rely heavily on prior knowledge of the geometric or statistical information. In contrast, topological simplification, which removes the simplices and/or topological attributes that do not survive a certain threshold, focuses directly on the persistence of topological invariants. Gaussian noise, on the other hand, is known to generate a band of bars distributed over a wide range in the barcode representation. Thanks to the pairing algorithm, the persistence of a homology group is measured through an interval represented by a simplex pair. If the associated topological invariant is regarded as less important, the simplices related to this simplex pair are reordered. This approach, combined with Morse theory, proves to be a useful tool for denoising, as it can alter the data locally in the region identified as noise. Additionally, statistical analysis has been carried out to provide confidence sets for persistence diagrams. However, persistent homology has not, to our knowledge, been utilized for cryo-EM noise reduction. A large amount of experimental data for macroproteins and protein-protein complexes has been obtained from cryo-EM. To analyze these structural data, it is routine to fit them with the available high-resolution crystal structures of their component proteins. This approach has been shown to be efficient for analyzing many structures and has been integrated into useful software packages such as Chimera. However, this docking process is limited by data quality. For some low-resolution data, which usually also suffer from low SNRs, there is enormous ambiguity in structure fitting or optimization, i.e.
, a mathematically ill - posed inverse problem .sometimes , high correlation coefficients can be attained simultaneously in many alternative structures , while none of them proves to be biologically meaningful .basically , the fitting or optimization emphasizes more on capturing `` bulk '' regions , which is reasonable as greater similarities in density distributions imply higher possibility .however , little attention is paid to certain small `` linkage '' regions , which play important roles in biological system especially in macroproteins and protein - protein complexes .different linkage parts generate different connectivity , and thus directly influence biomolecular flexibility , rigidity , and even its functions . since persistent homology is equally sensitive to both bulk regions and small linkage regions , it is able to make a critical judgment on the model selection in structure determination , however , nothing has been reported on persistent homology based solution to ill - posed inverse problems , to our knowledge .although persistent homology has been applied to a variety of fields , the successful use of persistent homology is mostly limited to characterization , identification and analysis ( cia ) .indeed , persistent homology has seldom employed for quantitative prediction .recently , we have introduced molecular topological fingerprints ( mtfs ) based on persistent homology analysis of molecular topological invariants of biomolecules .we have utilized mtfs to reveal the topology - function relationship of macromolecules .it was found that protein flexibility and folding stability are strongly correlated to protein topological connectivity , characterized by the persistence of topological invariants ( i.e. , accumulated bar lengths ) .most recently , we have employed persistent homology to analyze the structure and function of nano material , such as nanotubes and fullerenes .the mtfs are utilized to quantitatively predict total curvature energies of fullerene isomers .the overall objective of this work is to explore the utility of persistent homology for cryo - em analysis .first , we propose a topology based algorithm for cryo - em noise reduction and clean - up .we study the topological fingerprint or topological signature of noise and its relation to the topological fingerprint of cryo - em structures .we note that the histograms of topological invariants of the gaussian random noise have gaussian distributions in the filtration parameter space .contrary to the common belief that short barcode bars correspond to noise , it is found that there is an inverse relation between the snr and the band widths of topological invariants , i.e. , the lower snr , the larger barcode band width is .therefore , at a low snr , noise can produce long persisting topological invariants or bars in the barcode presentation .moreover , for cryo - em data of low snrs , intrinsic topological features of the biomolecular structure are hidden in the persistent barcodes of noise and indistinguishable from noise contributions . to recover the topological features of biomolecular structures ,geometric flow equations are employed in the present work .it is interesting to note that topological features of biomolecular structures persist , while the topological fingerprint of noise moves to the right during the geometric flow iterations . 
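A minimal executable illustration of these barcode statements is 0-dimensional persistence for a 1D density profile under the superlevel-set filtration (isovalue swept from high to low), computed with a union-find and the elder rule. Real cryo-EM analyses use 3D cubical complexes and higher Betti numbers; this sketch (names ours, reusing the `clean`/`noisy` profiles from the earlier sketch) only shows how noise produces a band of bars.

```python
def beta0_barcode(values):
    # sweep the isovalue from high to low; a component is born at a local
    # maximum and, by the elder rule, the younger of two merging components
    # dies at the merge value
    order = sorted(range(len(values)), key=lambda i: -values[i])
    parent, birth, bars = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i], birth[i] = i, values[i]
        for j in (i - 1, i + 1):
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    if birth[ri] > birth[rj]:  # keep the older component
                        ri, rj = rj, ri
                    bars.append((birth[ri], values[i]))  # (birth, death)
                    parent[ri] = rj
    for r in {find(i) for i in parent}:
        bars.append((birth[r], None))  # essential component, never dies
    return bars

# the clean profile yields a couple of long bars; the noisy one adds a band
# of short bars whose spread grows as the SNR drops
for name, profile in (("clean", clean), ("noisy", noisy)):
    finite = [b - d for b, d in beta0_barcode(list(profile)) if d is not None]
    print(name, len(finite), max(finite, default=0.0))
```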
As such, "signal" and noise separate from each other during the geometric-flow-based denoising process, which makes it possible to prescribe a precise noise threshold for noise removal after a certain number of iterations. We demonstrate the efficiency of our persistent homology controlled noise removal algorithm for both synthetic data and cryo-EM density maps. Additionally, we introduce persistent homology as a new strategy for resolving the ill-posed inverse problem in cryo-EM structure determination. Although the structure determination of the microtubule data EMD 1129 is used as an example, similar problems are widespread in other intermediate- and low-resolution cryo-EM data. As EMD 1129 is contaminated by noise, a denoising preprocessing step is carried out using our persistent homology controlled geometric flow algorithm. A helix backbone is obtained for the microtubule intermediate structure. Based on the assumption that the voxels with high electron density values are the centers of tubulin proteins, we construct three different microtubule models, namely a monomer model, a two-monomer model, and a dimer model. We have found that all three models give rise to essentially the same high correlation coefficients with the cryo-EM data, i.e., 0.9601, 0.9607 and 0.9604. This ambiguity in structure fitting is very common with intermediate- and low-resolution data. Fortunately, after our topology-based noise removal, the topological fingerprint of the microtubule data is unique, as is true for all cryo-EM data or data generated by other molecular imaging modalities. It is interesting to note that although the three models offer the same correlation coefficients with the cryo-EM data, their topological fingerprints are dramatically different. It is found that the topological fingerprint of the microtubule intermediate structure (EMD 1129) can be captured only when two conditions are simultaneously satisfied: first, there must exist two different types of monomers, and additionally, the two types of monomers must form dimers. Therefore, based on topological fingerprint analysis, we can determine that only the third model is a correct model for the microtubule data EMD 1129. The rest of this paper is organized as follows. The essential methods and algorithms for geometric and topological modeling of biomolecular data are presented in Section [methods]. Approaches for geometric modeling, which are necessary for topological analysis, are briefly discussed, and methods for persistent homology analysis are described in detail. We illustrate the use of topological methods with both synthetic volumetric data and cryo-EM density maps; the persistence of their topological invariants is represented by barcodes, and the geometric interpretation of the topological features is given. Section [sec:ph_noise] is devoted to persistent homology based noise removal. The characterization of Gaussian noise is carried out over a variety of SNRs to understand the topological signature of noise. Based on this understanding, we design a persistent homology monitored and controlled algorithm for noise removal, which is implemented via the geometric flow. Persistent homology guided denoising is applied to the analysis of a supramolecular filamentous complex.
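Before moving on, here is a sketch of such a monitored denoising loop, with an explicit heat flow standing in for the geometric flow equation (whose exact form is not reproduced here) and a deliberately crude stopping rule based on a gap in the beta-0 bar lengths; the tuning constants and names are illustrative assumptions, not the paper's algorithm.

```python
def heat_step(u, dt=0.2):
    # explicit finite-difference heat flow with periodic boundaries
    return u + dt * (np.roll(u, 1) + np.roll(u, -1) - 2.0 * u)

def topological_denoise(u, n_iter=200, gap=0.2):
    # iterate the flow while monitoring the beta-0 barcode; stop once the
    # longest bar separates from the rest by a clear gap (a crude
    # one-signal criterion; `gap` is an illustrative constant)
    for it in range(1, n_iter + 1):
        u = heat_step(u)
        lengths = sorted(b - d for b, d in beta0_barcode(list(u))
                         if d is not None)
        if len(lengths) < 2 or lengths[-1] - lengths[-2] > gap:
            return u, it
    return u, n_iter

denoised, iters = topological_denoise(noisy.copy())
```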
In Section [sec:ph_microtubule], we demonstrate topology-aided structure determination on the microtubule cryo-EM data. Several aspects are considered, including helix backbone evaluation, coarse-grained modeling, and topology-aided structure design and evaluation. We show that topology is able to resolve ill-posed inverse problems. This paper ends with a conclusion. Persistent homology has been utilized to analyze biomolecular data collected by different experimental means, such as macromolecular X-ray crystallography, NMR, EPR, etc. Due to their different origins, these data may be available in different formats, which requires appropriate topological tools for their analysis. Additionally, their quality, i.e., resolution and SNR, varies from case to case, and thus a preprocessing step may be required. Moreover, although biomolecular structures are not a prerequisite for persistent homology analysis, the understanding of biomolecular structure, function and dynamics is crucial for the interpretation of topological results. As a consequence, appropriate geometric modeling is carried out in close association with topological analysis. Furthermore, information from geometric and topological modeling is, in turn, very valuable for data preprocessing and denoising. Finally, topological information is shown to be crucial for geometric modeling, structure determination and ill-posed inverse problems. In the panel of Fig. [fig:1129_barcodes_polish], bars can be roughly grouped into three parts from top to bottom, i.e., an irregular "hair-like" part on the top, a narrow regular "body" part in the middle, and a large regular "base" part at the bottom. Topologically, these parts represent different components of the microtubule intermediate structure. The irregular "hair-like" part corresponds to the partial monomer structures located at the top and bottom boundaries of the structure. As can be seen in Fig. [fig:microtubule_fitting], each monomer has lost part of its structure in the boundary regions. The regular "body" and "base" parts are basically related to the two types of monomers in the middle region, where the structure is free of boundary effects. From the barcodes, it can be seen that the "body" part has a later "birth" time and an earlier "death" time compared with the "base" part. The later birth is because this type of monomer has a relatively lower electron density: as the filtration is defined to go from the highest electron density values to the lowest, the corresponding barcodes appear later. The earlier death time, however, is because these monomers form dimers with the other type of monomers, represented by the "base" barcode part. It can be inferred from this nonuniform behavior that the monomers are not equally distributed along the helix backbone structure. Instead, two adjacent monomers of different types first form a dimer, and then all these dimers simultaneously connect with each other as the filtration goes on. Moreover, from the analysis in the previous section, it is clear that the "body" and "base" parts are the topological representations of the "type II" and "type I" monomers, respectively. For the panel of Fig. [fig:1129_barcodes_polish], there also exists a consistent pattern once the denoising process passes a certain stage. Two distinctive types of barcodes can be identified in the fingerprint, i.e.
, a shorter band of barcodes on the top and a longer band of bars on the bottom .topologically , these bars correspond to the rings formed between two adjacent helix circles of monomers or dimers . during the filtration, dimers are formed between type i " and type ii " monomers and soon after that , all dimers connect with each other and form the helix backbone . as the filtration goes on , type i " monomers from the upper helix circle first connect with type ii " monomers at the lower circle .geometrically , this means six monomers , three ( i - ii - i " ) from the upper layer and three ( ii - i - ii " ) from the lower layer , form a circle .as the filtration goes further , this circle evolves into two circles when two middle monomers on two layers also connect .however , these two circles do not disappear simultaneously . instead , one persists longer than the other .this entire process generates the unique topological fingerprint in barcodes .the topological fingerprint we extracted from the denoising process can be used to guide the construction and evaluation of our microtubule models . to this end, we analyze the topological features of three theoretical models .our persistent homology results for three models are demonstrated in figs .[ fig:1129_fit_barcodes ] * a * , * b * and * c * , respectively . it can be seen that all the three models are able to capture the irregular hair " region in their barcode chart . from the topological point of view , the first model is the poorest one .it fails to capture the regular fingerprint patterns in both and panels of the original cryo - em structure in fig .[ fig:1129_barcodes_polish ] * d*. with two different weight functions to represent two types of monomers , the second model delivers a relatively better topological result .it is able to preserve part of the difference between type i " and type ii " barcodes in the panel . in panel ,some nonuniform barcodes emerges .the persistent homology results are further improved in the third model when the intra - dimer and inter - dimer interactions are considered . in our third model ,fingerprint patterns of the cryo - em structure in both and panels of fig .[ fig:1129_barcodes_polish ] * d * are essentially recovered by those of fig .[ fig:1129_fit_barcodes ] * c*. even though their scales are different , their shapes are strikingly similar .the essential topological features that are associated with major topological transitions of the original cryo - em structure are illustrated in figs .[ fig:1129_homology ] * a * , * a * , * a * and * a*. as shown in figs .[ fig:1129_homology ] * b * , * b * , * b * and * b * , these features have been well - preserved during the denoising process . our best predicted model is depicted in figs .[ fig:1129_homology ] * c * , * c * , * c * and * c*. in these figure labels , subscripts and denote four topological transition stages in the filtration process , namely hetero - dimmer formation , large circles formation , evolution of one large circle into two circles , and finally death of one of two circles . by the comparison of denoising results ( figs .* b , b , b * and * a * ) with original structures ( figs . *a , a , a * and * a * ) , it is seen that , in the noise reduction process , although some local geometric and topological details are removed , fundamental topological characteristics are well preserved . 
As illustrated in Fig. [fig:1129_barcodes_polish], in the persistent homology description these fundamental topological characteristics are well preserved in the topological persistence patterns, which are further identified as fingerprints of the microtubule intermediate structure. We believe that topological fingerprints are crucial to the characterization, identification, and analysis of the biological structure. As demonstrated in Figs. [fig:1129_homology] *c*, *c*, *c* and *c*, once our model successfully reproduces the topological fingerprints, the simulated structure is able to capture the essential topological characteristics of the original one. Moreover, through the analysis in Section [sec:mode_evaluation], it can be seen that two conditions are essential to reproduce the topological fingerprint of EMD 1129. The first is the creation of two types of monomers. The second is the differentiation of intra-dimers and inter-dimers. Biologically, these requirements mean: 1) there are two types of monomers, i.e., -tubulin monomers and -tubulin monomers; and 2) intra-dimers and inter-dimers should behave differently from hetero-dimers. It should also be noticed that a higher correlation coefficient does not guarantee the success of a model, especially when the original data are of low resolution and low SNR. As can be seen in Section [sec:three_models], our three theoretical models have very similar fitting coefficients; the second model even has a slightly higher cross-correlation coefficient. However, only the third model is able to reproduce the essential topological features of the original cryo-EM data. This happens because topological invariants, i.e., connected components, circles, loops, holes or voids, tend to be very sensitive to "tiny" linkage parts, which are almost negligible in the density fitting process compared to the major body part. We believe these linkage parts play important roles in biological systems, especially in macroproteins and protein-protein complexes. Different linkage parts generate different connectivity and thus can directly influence biomolecular flexibility, rigidity, and even function. By associating topological features with geometric measurements, our persistent homology analysis is able to distinguish these connectivity parts. Therefore, persistent homology can play a unique role in protein design, model evaluation and structure determination. Cryo-electron microscopy (cryo-EM) is a major workhorse for the investigation of subcellular structures, organelles and large multiprotein complexes. However, cryo-EM techniques and algorithms are far from mature, due to limited sample quality and/or stability, low signal-to-noise ratio (SNR), low resolution, and the high complexity of the underlying molecular structures. Persistent homology is a new branch of topology that is known for its potential in the characterization, identification and analysis (CIA) of big data. In this work, persistent homology is, for the first time, employed for cryo-EM data CIA. Methods and algorithms for geometric and topological modeling are presented. Here, geometric modeling, such as the generation of density maps for proteins or other molecules, is employed to create known data sets for validating the topological modeling algorithms. We demonstrate that cryo-EM density maps and fullerene density data can be effectively analyzed by using persistent homology.
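For completeness, the fitting score referred to throughout is the standard cross-correlation coefficient between the experimental map and a model-simulated density; a minimal sketch is below (names ours). Near-identical scores such as the 0.9601, 0.9607 and 0.9604 quoted above are exactly the situation in which this single scalar cannot discriminate between models, motivating the topological criterion.

```python
def cross_correlation(map_a, map_b):
    # Pearson correlation between two (flattened) density arrays
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# e.g. scoring the denoised toy profile against the clean one
print(cross_correlation(denoised, clean))
```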
Since topology is very sensitive to noise, understanding the topological signature of noise is a must in cryo-EM CIA. We first investigate the topological fingerprint of Gaussian noise. We reveal that for Gaussian noise, the topological invariants, i.e., the Betti numbers, all exhibit Gaussian distributions in the filtration space, i.e., the space of volumetric density isovalues. At a low SNR, signal and noise are inseparable in the filtration space. However, after denoising with the geometric flow method, there is a clear separation between signal and noise for the various topological invariants. As such, a simple threshold can be prescribed to effectively remove noise. For the case of low SNR, the understanding of the noise characteristics in the filtration space enables us to use persistent homology as an efficient means to monitor and control the noise removal process. This new strategy for noise reduction is called topological denoising. Persistent homology has been applied to the theoretical modeling of a microtubule structure (EMD 1129). The backbone of the microtubule has a helix structure. Based on the helix structure, we propose three theoretical models. The first model assumes that protein monomers form the helix structure. The second model adopts two types of protein monomers evenly distributed along the helix chain. The last model utilizes a series of protein dimers along the helix chain. These models are fitted to the experimental data by least-squares optimization. It is found that all three models give rise to similarly high correlation coefficients with the experimental data, which indicates that the structural optimization is ill-posed. However, the topological fingerprints of the three models are dramatically different. In the denoising process, the cryo-EM data of the microtubule structure demonstrate a consistent pattern which can be recognized as the intrinsic topological fingerprint of the microtubule structure. By careful examination of the fingerprint, we reveal two essential topological characteristics which discriminate the protein dimers from the monomers. As such, we conclude that only the third model, i.e., the protein dimer model, is able to capture the intrinsic topological characteristics of the cryo-EM structure, and it must be the best model for the experimental data. It is believed that the present work offers a novel topology-based strategy for resolving ill-posed inverse problems. This work was supported in part by NSF grants DMS-1160352 and IIS-1302285, NIH grant R01GM-090208 and the MSU Center for Mathematical Molecular Biosciences initiative. The authors acknowledge the Mathematical Biosciences Institute for hosting valuable workshops.
in this work , we introduce persistent homology for the analysis of cryo - electron microscopy ( cryo - em ) density maps . we identify the topological fingerprint or topological signature of noise , which is widespread in cryo - em data . for low signal to noise ratio ( snr ) volumetric data , intrinsic topological features of biomolecular structures are indistinguishable from noise . to remove noise , we employ geometric flows which are found to preserve the intrinsic topological fingerprints of cryo - em structures and diminish the topological signature of noise . in particular , persistent homology enables us to visualize the gradual separation of the topological fingerprints of cryo - em structures from those of noise during the denoising process , which gives rise to a practical procedure for prescribing a noise threshold to extract cryo - em structure information from noise contaminated data after certain iterations of the geometric flow equation . to further demonstrate the utility of persistent homology for cryo - em data analysis , we consider a microtubule intermediate structure ( emd-1129 ) . three helix models , an alpha - tubulin monomer model , an alpha- and beta - tubulin model , and an alpha- and beta - tubulin dimer model , are constructed to fit the cryo - em data . the least square fitting leads to similarly high correlation coefficients , which indicates that structure determination via optimization is an ill - posed inverse problem . however , these models have dramatically different topological fingerprints . especially , linkages or connectivities that discriminate one model from another one , play little role in the traditional density fitting or optimization , but are very sensitive and crucial to topological fingerprints . the intrinsic topological features of the microtubule data are identified after topological denoising . by a comparison of the topological fingerprints of the original data and those of three models , we found that the third model is topologically favored . the present work offers persistent homology based new strategies for topological denoising and for resolving ill - posed inverse problems . key words : cryo - em , topological signature , geometric flow , topological denoising , topology - aided structure determination .
this paper is mainly concerned with the asymptotic limit of the following linear kinetic transport equation as , here , denotes the density of particles at time in the phase space , and is the macroscopic density , the velocity distribution is given . for the sake of clarity, we focus here on the case of the gaussian distribution with variance , our methodology can be applied to a wide range of distributions , as discussed at the end of this introduction . the time renormalizing factor in front of the bgk velocity operator in is chosen so as to obtain a nontrivial limit in the asymptotic regime of large deviations , _i.e. _ in order to capture the vanishing exponential tails of the density .we emphasize that the classical diffusive limit would involve an additional factor in front of the time derivative .[ [ scaling . ] ] scaling .+ + + + + + + + starting from the dimensionalized kinetic transport equation where is the rate of reorientation , and is the variance of the velocity distribution , we can always reduce to after the change of variables , , and .then , we are interested in the asymptotics with the appropriate scaling .it appears that the correct scaling in order to capture the small exponential tails of the distribution is such that and .accordingly , we set for some ( small ) parameter that drives the large scale asymptotics when it vanishes .we recover in the new variables .note that the appropriate rescaling is specific to the choice of a gaussian velocity distribution .[ [ diffusive - limit . ] ] diffusive limit .+ + + + + + + + + + + + + + + + to draw an instructive parallel , let us consider the diffusive limit of , namely the heat equation ( see and the references therein ) when the effective diffusion coefficient is small ( say ) or , equivalently , when investigating the large scale asymptotics for small , one deals with the same equation with vanishing viscosity , it is well - known that , under appropriate conditions , converges uniformly locally towards a viscosity solution of the following hamilton - jacobi equation our main purpose here is to obtain a similar result for the kinetic transport equation .[ [ large - deviations . ] ] large deviations .+ + + + + + + + + + + + + + + + + our work can be viewed as a preliminary contribution to the theory of large deviations for simple velocity jump processes .we follow the lines of evans and ishii ( see also ) , in which pde techniques were successfully applied to reformulate the ideas of fleming in the context of viscosity solutions .fleming s work was devoted to applying the logarithmic transformation , and ideas from stochastic control , to the freidlin - wentzell theory for stochastic differential equations with a small noise .[ [ logarithmic - transformation . ] ] logarithmic transformation . + + + + + + + + + + + + + + + + + + + + + + + + + + + as in the diffusive case , we perform the following hopf - cole transformation , the function satisfies [ [ limit - system . 
] ] limit system .+ + + + + + + + + + + + + our first result can be stated informally as follows : under suitable conditions ( see theorem [ hjlimit ] below ) , converges locally uniformly towards a viscosity solution of the following non - local hamilton - jacobi equation , where we have used the following notation , to avoid possible boundary layers at as , we assume that the initial condition for is of the form .this set as the initial condition for and for the limit problem .it is worth making some comments concerning the structure of the system .first of all , it is not a standard hamilton - jacobi equation as , and the one obtained in the case of bounded velocities .moreover , we notice that the first equation of does not contain enough information due to the apparition of for which extra dynamics are required .although it seems somehow sparse , the two additional ( in)equations are sufficient to determine a unique solution of the cauchy problem , as stated in the comparison principle below ( theorem [ theo : comp ] ) . to the best of our knowledge ,system is of a new kind .we refer to it as a hamilton - jacobi problem by analogy with which was obtained via a similar procedure .moreover , in the case of a compactly supported velocity distribution , the same procedure leads to a standard hamilton - jacobi problem . we should mention that non - local hamilton - jacobi equations of a very different type have been studied in the context of dislocations by g. barles , p. cardaliaguet, o. ley , r. monneau , and a. monteillet in a series of papers , see _e.g. _ and the references therein .[ [ viscosity - solutions . ] ] viscosity solutions .+ + + + + + + + + + + + + + + + + + + + equation should be read as a coupled system of hamilton - jacobi equations on and .accordingly , we define viscosity solutions of using a couple of test functions . [ def : subsol ]let .a upper semi - continuous function is a * viscosity sub - solution * of on if and only if : + _ ( i ) _ . + _( ii ) _ it satisfies the constraint + _ ( iii ) _ for all pair of test functions , if is such that both and have a local maximum at , then [ def : supersol ] let .a lower semi - continuous function is a * viscosity super - solution * of on if and only if : + _( i ) _ . + _( ii ) _ for all , is a global minimum of .moreover , is locally uniformly isolated : for any compact set , there exists such that for all . + _ ( iii ) _ for all pair of test functions , if is such that both and have a local minimum at , then let mention that the mimimality ( _ resp . _maximality ) in the definition of the super- ( _ resp ._ sub- ) solution arises with respect to variables only .this is consistent with the fact that there is no derivative in the velocity variable in .[ def : sol ] let .a function is a * viscosity solution * of on if its upper ( _ resp . _lower ) semi - continuous envelope is a sub- ( _ resp . _super- ) solution in the sense of definitions [ def : subsol ] and [ def : supersol ] above .[ [ convergence - and - uniqueness - for - the - limit - system . ] ] convergence and uniqueness for the limit system .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the following theorem states a comparison principle for viscosity ( sub / super-)solutions of the system .this establishes uniqueness of viscosity solutions as a corollary .the proof is contained in section [ sec : comp ] .[ theo : comp ] let ( resp . 
) be a viscosity sub-solution (resp. super-solution) of on . assume that and are such that ; then on . this result is extended to sub- and super-solutions with quadratic spatial growth at infinity in section [sec:621]. this growth condition is compatible with the fundamental solution of the limit system, as we shall see below. in section [sec:conv], we prove the convergence of the sequence as , towards the unique viscosity solution of .

[hjlimit] assume that the initial condition satisfies the following properties:
\[
\textbf{[a]}\quad b_0 = u_0 - \frac{\vert v\vert^{2}}{2}\in W^{1,\infty}(\mathbb{R}^{2n})\,,
\qquad
\textbf{[b]}\quad -\det\big(\mathrm{Hess}_{v}(u_0(x,v))\big)\neq 0\,,\quad D_{v}^{(3)}u_0\in L^{\infty}_{\mathrm{loc}}.
\]
let be the solution of , with the initial data . then converges locally uniformly towards , which is the unique viscosity solution of , as . in particular, for all , is an isolated global minimum of . however, we were not able to extend this convergence result to initial data unbounded with respect to the space variable. we leave this issue for future work.

heuristics. it is useful to give some heuristics, in order to understand how the supplementary condition appears in the limit, with equality if is reduced to the singleton . first, notice that the constraint immediately follows from if the left-hand side is bounded with respect to . as a consequence, the minimum of with respect to velocity is necessarily attained at . then, integrating against the probability measure , we obtain the following continuity equation. the probability measure is expected to concentrate on the minimum points of as . let us assume that we do have, in some sense, , where the weights satisfy . we notice that the constraint at each is clearly unsaturated, in the sense that . there, we expect to see the last contribution of vanish. this would lead to for each such . plugging this into , and using , we obtain successively . as we have formally by the chain rule, we expect eventually that , and even if , that is, somehow . obviously, all this reasoning is formal, but we shall make it rigorous in section [sec:conv].

comparison with the case of bounded velocities. we extend previous results obtained in to the case of unbounded velocities. the main difference lies of course in the velocity rescaling, which is not admissible in the case of bounded velocities. this has important consequences. firstly, in the case of bounded velocities, the asymptotic limit does not depend on , due to some averaging process arising in the velocity variable. secondly, the limit equation is a standard hamilton-jacobi equation, associated with an effective hamiltonian. in fact, the hamilton-jacobi equation obtained in the limit reads as , under appropriate integrability conditions, where the limit is taken as from above (in order to preserve the positivity of the denominator). caillerie has extended this result to the general case, without such integrability condition.
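the display did not survive extraction; as we recall it from the bounded-velocity literature cited here (a sketch, to be checked against the original sources), the limit equation is given implicitly by a relation of the type
\[
\int_{V}\frac{M(v)}{1+\partial_t u + v\cdot\nabla_x u}\,dv = 1,
\qquad 1+\partial_t u + v\cdot\nabla_x u > 0 ,
\]
which defines an effective hamiltonian through the positivity of the denominator mentioned above.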
the case of unbounded velocities is by far more subtle, since there is no averaging with respect to velocity. high velocities play a prominent role in the dispersion process. the case of bounded velocities is analogous to large deviations estimates for slow-fast systems as in , and references therein. in our case, the role of the fast variable is played by velocity, whereas the space variable is the slow one. in this work, we follow the hamiltonian viewpoint, focusing on the value function, solution of the hamilton-jacobi equation. there is a dual viewpoint, focusing on the trajectories of the underlying piecewise deterministic markov process (pdmp). we refer to for further reading.

[fig:min intro]

the fundamental solution. in sections [sec:fund] and [sec:explicit], we compute explicitly the fundamental solution of the non-local hamilton-jacobi equation in space dimension one, for . we follow a time-discrete iteration scheme based on the duhamel formulation of . since the problem is not translation invariant with respect to velocity, it is necessary to compute the solution for all initial data of the form , where . we obtain the following expression for the fundamental solution; the minimum value with respect to velocity is given by . the function is plotted in figure [fig:min intro] for successive values of time. a striking feature is that the solution does not converge to zero as , in contrast with the fundamental solution of the hamilton-jacobi equation associated with the heat equation.

comparison with the heat equation. at first glance, the asymptotics of the kinetic equation is linked with the asymptotics of the heat equation with vanishing viscosity, as the latter is the limit of the former under an appropriate diffusive rescaling, yet different from : . however, as shown in figure [fig:min intro], the two behaviours are radically different. we may informally present our results as follows: the scaling leads to the heat equation with vanishing viscosity after a simple chapman-enskog expansion, in order to preserve the diffusive scaling. the fundamental solution of the associated hamilton-jacobi equation, associated with the initial data , is . in particular, it converges to zero in long time, uniformly on compact intervals. on the other hand, we have computed the fundamental solution of the limit system , obtained after the more appropriate rescaling . for the sake of comparison, it is better to describe (which corresponds to the macroscopic density via the hopf-cole transform). surprisingly enough, it does not converge to zero, but to the function . we interpret this as follows: at a larger scale than the standard hyperbolic rescaling (both space and velocity are larger in ), we get nontrivial asymptotics, but the density remains uniformly exponentially small far from the origin, of the order .
as compared to the heat equation, lacks scaling invariance (with respect to velocity). this is emphasized by the fact that the fundamental solution of does not have a single-line expression, see below. furthermore, picking a high velocity from a gaussian distribution is a rare event, which can be completely reset at the next velocity jump. this leads to a predominance of low velocities that slows down the dispersion. this has to be compared with the fokker-planck equation having the same stationary velocity distribution, for which we expect that the large deviations potential in the scaling behaves like a solution of . technically speaking, we expect that the limit function is independent of the velocity variable, due to the additional drift that compactifies the velocity space. this is under investigation by the first author.

accelerated fronts in reaction-transport equations. as an application of this work, we investigate quantitatively front propagation in reaction-transport equations in section [sec:acc]. we focus on with an additional monostable reaction term (a hedged sketch of this model is given at the end of this passage): this models a population of particles that change velocity at rate one, pick up a random new velocity following a gaussian distribution, and divide at rate . moreover, new particles pick up their initial velocity from the same gaussian distribution. this type of model has been studied in in the case of bounded velocities, and in in the case of possibly unbounded velocities. in the case of bounded velocities, there exist traveling waves with constant speed; moreover, any solution of the cauchy problem with sufficiently decaying initial data spreads with the minimal speed . the classical fisher-kpp equation is obtained in the diffusive limit, provided that the rate of division is assumed to be of order . it is well known that front propagation then occurs at constant speed for any suitably decaying initial data (otherwise, see ). beyond this classical problem, there has been some recent interest in describing accelerating fronts in reaction-diffusion equations. garnier has investigated integro-differential equations, where the spreading operator is given by the convolution with a fat-tailed kernel; here, fat-tailed means that the kernel decays slower than exponentially. there, the level lines of the solution spread superlinearly, depending on the decay of the convolution kernel. cabré and roquejoffre have studied the fisher-kpp equation where the diffusion operator is replaced with a fractional diffusion operator, for some exponent . they describe quantitatively the acceleration of the front, which occurs at exponential rate, namely in a weak sense. this seminal work was continued by coulon and roquejoffre in . more recently, mirrahimi and méléard have described the limit of the reaction-diffusion equation with a fractional diffusion operator in dimension , after suitable rescaling and the hopf-cole transform. as opposed to our results, the limit function has the following simple expression, with separation of variables: . recently, spreading in the so-called cane toads equation has been studied intensively.
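before returning to the cane toads equation, here is the hedged sketch, promised above, of the kinetic reaction-transport model; the normalization follows the bounded-velocity papers cited in this passage and may differ from the authors' exact convention:
\[
\partial_t f + v\cdot\nabla_x f = \big(M(v)\rho - f\big) + r\,\rho\,\big(M(v)-f\big),
\qquad \rho(t,x)=\int f(t,x,v)\,dv ,
\]
where r > 0 is the division rate, the first bracket is the bgk relaxation and the second one encodes division with velocities redistributed according to the gaussian M. we now return to the cane toads equation.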
when the variable is unbounded, accelerated propagation has been proved independently by berestycki, mouhot and raoul, and by the first author, henderson and ryzhik. there is a formal analogy between and our problem, or rather . indeed, acceleration also happens due to the influence of the microscopic variable, which plays a similar role as the velocity variable in this paper. this is another example of a nonlinear acceleration phenomenon appearing in a structured model. both equations and describe spatial jumps of particles, with a focus on the distribution of long-range jumps. equation describes a velocity-jump process, with a focus on the distribution of high velocities, which correspond somehow to long-range spatial jumps. it was established in that solutions of behave in the long-time asymptotics as accelerating fronts, due to the (rare) occurrence of high velocities that send particles far from the bulk. furthermore, the location of the front is of the order of , in accordance with the scaling limit . however, the exact location of the front was not determined accurately, but it was estimated as in a weak sense (see theorem 1.11 therein for details). from the knowledge of the quantitative scaling limit procedure for the kinetic dispersion operator (sections 2 to 5), we aim to apply the same procedure as in , respectively for the classical fisher-kpp equation and for the fractional fisher-kpp equation. however, we face several issues, two of which are still open. starting from , after appropriate rescaling, we prove that converges towards a non-local hamilton-jacobi equation, very similar to , complemented with the additional constraint . so far, the limit is rigorously proven for bounded data only; however, this rules out compactly supported initial data. on the other hand, in order to estimate quantitatively the location of the front, as the boundary of the support of the limit function, some condition on the extremal trajectories is needed: the so-called freidlin condition. we propose an extension of this condition in our context, taking into account the non-local feature of the constraint. we check that this condition is satisfied, but only on a subset of . this rules out an immediate application of our comparison principle. we conjecture that the edge of the front is located around ; naturally, this lies between the two bounds obtained in . finally, we claim that our work can be extended in a straightforward way to include the velocity distribution for (this last assumption appearing for convexity reasons). the rate of expansion should be given in the limit by the following expression .

in this section, we prove the comparison principle stated in theorem [theo:comp]. we perform a classical doubling of variables; however, much attention has to be paid to the velocity variable. this is the main concern of this proof. in particular, the velocity variable is not doubled, which is consistent with the fact that there is no gradient with respect to velocity in the limit system . however, due to the unboundedness of the velocity space, it is mandatory to confine the velocities in order to get absolute extrema. let us define, for , the minimum values with respect to the velocity variable: . let , . let be suitably chosen below. since the limit system requires two test functions, we shall define ; notice that penalizes high velocities (a generic sketch of such a doubled functional is given below). we denote .
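the test functions did not survive extraction; purely for orientation, a standard doubling-of-variables functional with velocity penalization looks as follows (an illustrative guess, not necessarily the authors' exact choice):
\[
\Phi_{\alpha,\beta,\eta}(t,s,x,y,v)
= u(t,x,v) - \bar u(s,y,v)
- \frac{\vert t-s\vert^{2}}{2\alpha} - \frac{\vert x-y\vert^{2}}{2\alpha}
- \beta\,\frac{\vert v\vert^{2}}{2} - \frac{\eta}{T-t},
\]
where u is the sub-solution, \(\bar u\) the super-solution, \(\alpha,\beta,\eta\) are small positive parameters, and the velocity variable is shared rather than doubled.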
the assumption will give a confinement on the minima of , as follows.

[lem:minconf] let . for all , for all , we have . fix . suppose that there exists such that . then at , and this cannot hold. therefore we have .

let . we can now introduce the following maximum values . these two quantities are clearly finite from .

[lem:oomega] we have . let be a maximum point of . let . the following sequence of inequalities holds true .

we now prove that, for suitably chosen parameters, the supremum of is attained at . for that, we distinguish between two cases: , and .

case 1: . we denote by a maximum point of such that . this point exists thanks to the penalization in the three variables and the fact that is upper semi-continuous. we emphasize that this point depends on the parameters and , but for legibility we omit this dependency. we aim to prove that . we argue by contradiction, and assume that . we now show that necessarily satisfies a priori a bound independent of .

[lem:velconf] we have . the evaluation gives .

we define some auxiliary function with twice the number of variables, except for the velocity, as follows: . let realize the maximum of .

[lem:limit t0] the following limit holds true: . the evaluation yields ; we deduce the following estimates . therefore, the sequence converges as , up to a subsequence. a closer look shows that the limit is necessarily . we denote by an accumulation point. we have and . on the other hand, passing to the limit in the inequality , we deduce ; therefore, we have necessarily , and .

[lem:constraint] there exists such that for all , . the evaluation gives the following piece of information: . we rewrite as . since is a subsolution and is verified for small enough, we get that, as , by upper semi-continuity, the last contribution in the r.h.s. of is such that . therefore, there exists such that for all , , since we have assumed in this case that .

we now use the test function associated to the supersolution at the point . notice that the condition is verified for small enough. by using definition [def:supersol] of a super-solution, this yields . on the other hand, using the test function associated to the subsolution at the point , we obtain , by using definition [def:subsol] of a sub-solution. by subtracting from , we obtain ; letting , this gives . recall from the estimates in the proof of lemma [lem:limit t0] that holds true for some constant , and for small enough. recall also lemma [lem:velconf]. we obtain , where depends only on . hence, the choice establishes a contradiction. we conclude that in this case.

case 2: . we denote by a maximum point of . we aim to prove that . we argue again by contradiction and suppose that . the evaluation yields immediately the following bound: .

case 2.1: we first consider the case where . we introduce the following auxiliary function . let realize the maximum value of . we can prove, as in lemma [lem:limit t0], that the following limit holds true: . we use the test function associated to the supersolution at the point . notice that the condition is verified for small enough.
in order to apply the second criterion in , it is required that the set be reduced to , provided is sufficiently small.

[lem:26] there exists such that the set is reduced to , provided that . we argue by contradiction. assume that there exists a sequence such that contains some nonzero . it is required in definition [def:supersol] of a viscosity super-solution that be a locally uniformly isolated minimum over . hence, there exists such that for all . up to extraction of a subsequence, we can assume that converges to some nonzero . the following inequalities are satisfied . there is some subtlety here, because the lower semi-continuity of would only imply in full generality. however, the following argument establishes that such a is in fact a true limit. from the maximality of , we have . now, using the upper semi-continuity of and the lower semi-continuity of , we deduce that, as a consequence of the previous inequalities, all inequalities are equalities. thus, the following limit is well defined . we conclude from that . this yields the existence of such that , which is a contradiction.

the second criterion in writes as follows . on the other hand, using the test function associated to the subsolution at the point , we obtain . by subtracting from , we obtain ; passing to the limit , we get a contradiction.

case 2.2: there exists some nonzero . the following lemma describes the equality cases in lemma [lem:oomega].

[lem:oomega eq] assume , and let . then: 1. realizes the supremum of ; 2. is a minimum velocity also for . we repeat the proof of lemma [lem:oomega] with . by examining the case of equality, we realize that .

similarly as in case , we define some auxiliary function with twice the number of variables, except for the velocity, as follows: . let realize the maximum value of . we can prove, as in lemma [lem:limit t0], that the following limit holds true . one key observation is that the following strict inequality holds true, by the very definition of . since as , we expect this inequality to be strict for small enough as well. however, this is not compatible with the a priori lower semi-continuity of , as it may exhibit some negative jump when passing to the limit. the following lemma resolves this difficulty (see also the proof of lemma [lem:26]).

[lem:constraint2] there exists such that for all , . from the maximality of , we have . using the upper semi-continuity of and the lower semi-continuity of , we deduce that, as a consequence of the previous inequalities, all inequalities are equalities. thus, the following limit is well defined . now define and take sufficiently small such that both inequalities hold true; the former is a consequence of , while the latter is deduced from the lower semi-continuity of . then one has , and the lemma follows.

therefore, we can use the test function associated to the supersolution at the point . notice that the condition is verified for small enough. this yields . on the other hand, using the test function associated to the subsolution at the point , we obtain . by subtracting from , we obtain ; passing to the limit , we get . similarly as in case 1, the choice yields a contradiction. we conclude that in this case also. we are in a position to conclude.
by using that , we deduce that , that is, . we obtain the comparison by taking the limits , and .

in this section, we shall prove that converges locally uniformly towards the unique viscosity solution of the system . we first establish lipschitz bounds that allow us to extract a locally uniformly converging subsequence. then we prove that this limit is a solution of .

(uniform estimates.) [estimates] let be a solution of equation and define as . if satisfies condition , then is uniformly locally lipschitz. precisely, the following a priori bounds hold for all : the function satisfies . let us notice that we obtain a unique solution , and thus a unique , from a fixed-point method on the duhamel formulation of .

proof of bound (i). we define, for to be chosen later: . for any , attains a maximum at the point . suppose that . then we have , and as a consequence we have, at the maximum point, . moreover, the maximal character of also implies . it yields that and . introducing these last inequalities in yields , and thus one can choose such that, for sufficiently small, this last inequality is impossible, since the r.h.s. is when : . as a consequence, , and we have, passing to the limit , . one can carry out the same argument with to get the lower bound.

proof of bound (ii). to find lipschitz bounds, we use the same ideas on the difference . the equation for reads as follows . using the same argument as above with the correction function , we end up with . using the bound on , we find . taking again suitably, we get that the previous inequality is impossible for small . as a consequence,
\[
\forall (t,x,v)\in[0,T]\times\mathbb{R}^{2n},\qquad
b_h^{\varepsilon}(t,x,v)\;\leq\;\sup_{(x,v)\in\mathbb{R}^{2n}}\big(b_0(x+h,v)-b_0(x,v)\big).
\]
the same argument applies to : indeed, one can write an equation on of the following form, such that the maximum principle works, since the r.h.s. has the right sign when attains a maximum. finally,
\[
\forall (t,x,v)\in[0,T]\times\mathbb{R}^{2n},\qquad
\vert b_h^{\varepsilon}(t,x,v)\vert\;\leq\;\sup_{(x,v)\in\mathbb{R}^{2n}}\big\vert b_0(x+h,v)-b_0(x,v)\big\vert\;\leq\;\Vert\nabla_x b_0\Vert_{\infty}\,\vert h\vert,
\]
from which the estimate follows.

proof of bound (iii). let and . to obtain the local bound on the time derivative on , let us differentiate with respect to time; we shall multiply by and split the r.h.s. into two parts, , and estimate both parts separately. first, re-using the equation on and the estimate on the space derivative, we get a bound of the form
\[
\cdots \int e^{\frac{2\vert b^{\varepsilon}\vert_{\infty}}{\varepsilon}}\,dv' \;=:\; C(\varepsilon),
\]
where is uniformly bounded (and vanishes when ), since the ball contains the centered ball of radius . assume now that has a positive local maximum at . then, at this point, ; we thus deduce that , and thus
\[
\forall (t,x,v)\in[0,T]\times\mathbb{R}^{n}\times\mathcal{B}_R,\qquad
\vert\partial_t b^{\varepsilon}\vert\;\leq\; C(\varepsilon)\,T+\sup_{(x,v)\in\mathbb{R}^{n}\times\mathcal{B}_R}\vert\partial_t b^{\varepsilon}\vert(0,x,v)\,.
\]
we now recall to compute .
indeed, we have . the laplace method gives that is uniformly bounded in , locally in , as soon as any minimum point in velocity of is non-degenerate (in the sense that ) and the third-order derivatives in velocity are locally uniformly bounded, which is exactly hypothesis [b]. recall that we have by convention . therefore, under the condition , including the equality , the second candidate for the minimum value is ; we notice immediately that this candidate is always better than .

c. assume that the minimum is attained on the edge . then minimizing reduces to minimizing on the open interval ]0,t[, that is, on ]0,t[. therefore, only three possible candidates remain. after this discussion, we plot in figure [fig:fund] the final result obtained after minimizing among these three candidates.

[fig:fund]

starting from initial data of the form , a particle arriving at at time follows the following trajectory in the phase plane. in order to compute the fundamental kernel given by , we first fully determine on the set
\[
\big\{ (s_1,s_2,s_3)\in[0,t]^{3} \;:\; 0\leq s_1+s_2+s_3\leq t \big\}.
\]
we have by convention ; the latter quantity is finite for values in \([0,t]^{2}\) such that can be achieved. the minimum of the quantity on this set is necessarily attained on the boundary of the triangle. thus:
* the first minimizer is available only if the corresponding optimal time lies in [0,t], and then a candidate is ;
* the second minimizer is available only if the corresponding optimal time lies in [0,t], and then a candidate is ;
* the third minimizer is available only if the corresponding optimal time lies in [0,t]. we have , and the optimal times are given through .

d. assume that the minimum is attained on the edge . we deduce immediately from the expression that the fundamental solution satisfies ; for this, simply choose in the minimization problem. note that a similar bound can be derived for when .

for the sake of simplicity, we restrict to initial conditions compactly supported in , with a gaussian velocity distribution, of the form \(\cdots\left[\varepsilon^{1/2}\,M_{\varepsilon}(v)\right]\), for some , where is some open interval.
plugging this bound into the duhamel formula [eq:duhamel formula], we deduce this estimate from below, which, combined with , yields the following estimate on .

we now continue our discussion about the limit system . this limit system appears to be an obstacle problem, subject to the constraint . it is analogous to the following constrained hamilton-jacobi equation, which can be derived from the fisher-kpp equation using the same methodology as described above (see also for earlier works using a different framework based on large deviations). the so-called freidlin condition allows one to solve this obstacle problem by the following truncation procedure: first, solve the unconstrained problem ; then, truncate the solution by taking into account the constraint: . the solution can be written using the hopf-lax representation formula,
\[
\tilde u(t,x)
= \inf\Big\{ \int_0^t \cdots\,ds + \tilde u_0(X(0)) \;\Big\vert\; X(t)=x \Big\}
= \inf_{X\in\mathsf{H}^{1}_{loc}((0,t),\mathbb{R}^{n})} \Big\{ \int_0^t \Big[\frac{\vert\dot X(s)\vert^{2}}{2} - r\Big]\,ds \;\Big\vert\; X(0)\in\overline{G_0},\; X(t)=x \Big\}.
\]
the freidlin condition states that, starting from a final point such that , and tracing backward a minimizing trajectory, it verifies for all . this implies the following modification of the hopf-lax formula,
\[
\inf_{X}\Big\{ \int_0^t \Big[\frac{\vert\dot X(s)\vert^{2}}{2} - r\Big]\,ds \;\Big\vert\; X(0)\in\overline{G_0},\; X(t)=x,\; (\forall s)\;\tilde u(s,X(s))>0 \Big\},
\]
alternatively speaking: trajectories ending in the unconstrained area (i.e. ) have gone through the unconstrained area only. the freidlin condition holds true for the unconstrained problem .

in section [sec:explicit], we derived explicit formulas for extremal trajectories. let us define the following set . an important observation is that extremal trajectories ending in remain in in backward time. let . clearly, is an optimal choice in . we deduce from proposition [prop:traj] that an extremal trajectory joining and is necessarily given by one of the two following choices. the choice between the first case (referred to as trajectory 1) and the second case (referred to as trajectory 2) is made according to proposition [prop:traj]; see also figure [fig:fund opt] for a schematic diagram. we propose the following extension of the freidlin condition: let be an extremal trajectory such that the ending point belongs to , and assume that is such that ; then for all , . we refer to this as a non-local freidlin condition, as the sign condition is required for all along the projected backward trajectory. we prove below that this condition holds true. let be such an extremal trajectory belonging to . we denote by its starting point. the condition is rewritten using , and the requirement is: . we can conclude provided that the function has the following monotonicity for : increasing, then decreasing. indeed, it is nonnegative at and at ; therefore, it is positive for . there is some subtlety here, because the trajectory is associated with some parameter , which is generally different from . it means that the typical trajectories shown in figure [fig:fund opt] are not necessarily in phase with the background areas delimited by the plain red curves. there are quite a number of cases to discuss. however, we can overcome the complexity of the picture by remarking that the trajectory always finishes with a constant part, .
on this part, the function is clearly decreasing. to conclude, it remains to prove that the function is concave on the linear part of the trajectory. let it be , for some positive speed . on each of the zones, is concave: it is linear on and , and it is equal to on . in addition, there are two possible junctions: the one from to , and the one from to (see figure [fig:junction]).

junction from to (figure [fig:junction]a): let be the time of transition from to . it satisfies . the slope at is equal to 1. the slope at is ; there is no slope discontinuity at this junction.

junction from to (figure [fig:junction]b): let be the time of transition from to . it satisfies . the slope at is ; the slope at is . the slope is necessarily decreasing at this junction.

we conclude that is globally concave on the linear part of the trajectory, wherever it goes through. then it is decreasing on the constant part of the trajectory. consequently, has the required monotonicity.

g. barles, p. cardaliaguet, o. ley, and a. monteillet, existence of weak solutions for general nonlocal hamilton-jacobi equations, nonlinear analysis: theory, methods & applications, 71(7-8):2801-2810, 2009.

e. bouin, v. calvez, n. meunier, s. mirrahimi, b. perthame, g. raoul, r. voituriez, invasion fronts with variable motility: phenotype selection, spatial sorting and wave acceleration, c. r. math. acad. sci. paris, 350 (2012), 761-766.

e. bouin, v. calvez, and g. nadin, front propagation in a kinetic reaction-transport equation: travelling waves and accelerating fronts, archive for rational mechanics and analysis, (2015), to appear.

l. c. evans, h. ishii, a pde approach to some asymptotic problems concerning random differential equations with small noise intensities, annales de l'institut henri poincaré (c) analyse non linéaire (2), (1985).

a. n. kolmogorov, i. g. petrovsky, n. s. piskunov, étude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique, moscow univ. math. bull. 1 (1937), 1-25.

s. méléard and s. mirrahimi, singular limits for reaction-diffusion equations with fractional laplacian and local or non-local nonlinearity, communications in partial differential equations, vol. 40.5 (2015), pp. 957-993.

p. e. souganidis, front propagation: theory and applications, in viscosity solutions and applications (montecatini terme, 1995), volume 1660 of lecture notes in math., pages 186-242, springer, berlin, 1997.
we establish a large deviation theory for a velocity jump process, where new random velocities are picked at a constant rate from a gaussian distribution. the kolmogorov forward equation associated with this process is a linear kinetic transport equation in which the bgk operator accounts for the changes in velocity. we analyse its asymptotic limit after a suitable rescaling compatible with the wkb expansion. this yields a new type of hamilton-jacobi equation which is non-local with respect to the velocity variable. we introduce a dedicated notion of viscosity solution for the limit problem, and we prove well-posedness in the viscosity sense. the fundamental solution is explicitly computed, yielding quantitative estimates for the large deviations of the underlying velocity-jump process à la freidlin-wentzell. as an application of this theory, we conjecture exact rates of acceleration in some nonlinear kinetic reaction-transport equations.

key-words: large deviations, piecewise deterministic markov processes, hamilton-jacobi equations, viscosity solutions, scaling limits, front acceleration.

ams class. no.:
in this paper we investigate a stochastic version of a linear volterra equation of the general form , where , , , is a linear operator and is some mapping. to fix our attention, we shall consider the equation ([eq1]) in a separable hilbert space with a scalar product , a norm and a complete orthonormal system . the equation ([eq1]) encompasses a big class of equations and generalizes the heat and wave equations and even the linear navier-stokes equation. we refer to the excellent monograph for a rich survey. that kind of volterra equation has been studied by many authors in connection with problems arising in mathematical physics, particularly in viscoelasticity, heat conduction in materials with memory, energy balance and thermoviscoelasticity. in order to take into account random fluctuations, we have to consider the equation ([eq1]) with a random external force. these are our first considerations concerning the numerical treatment of stochastic volterra equations, so we will be grateful for readers' remarks and advice. next, we plan to study the probabilistic features of families of trajectories, take into account different noises and develop numerical schemes for cases where the analytic form of the resolvent is not known.

assume that is a probability space with a complete right-continuous filtration and that , , is a cylindrical wiener process with values in the space . let us omit, for convenience, the space variable in the equation ([eq1]) and introduce the process instead of the function . hence, we arrive at the following stochastic volterra equation . in this part of the paper we recall some results concerning solutions to ([eq2]). we restrict our considerations to papers containing the so-called resolvent approach to volterra equations. the notion of resolvent, or fundamental solution, for the volterra equation ([eq1]) probably comes from friedman and shinbrot, who studied deterministic volterra integral equations in banach spaces. for a recent survey we refer again to . in the sequel we shall assume that the equation ([eq1]) is well-posed, that is, that ([eq1]) admits a resolvent. as in the deterministic case, the mild solution to the stochastic volterra equation ([eq2]) is of the form given below (see the sketch after this paragraph), where , , is the resolvent family for the equation ([eq1]) determined by the operator and the function . in order to study the solution ([eq3]) it is enough to consider the stochastic convolution , where the stochastic integral is defined according to the particular case under consideration. stochastic volterra equations in the resolvent approach have been treated by several authors, see e.g. , and recently .
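the displayed formulas ([eq3]) and ([eq4]) did not survive extraction; in the standard resolvent framework they read, as a sketch in our notation,
\[
X(t) = R(t)X_{0} + \int_{0}^{t} R(t-\tau)\,dW(\tau),
\qquad
W^{R}(t) := \int_{0}^{t} R(t-\tau)\,dW(\tau),
\]
where R(t), t ≥ 0, denotes the resolvent family determined by the operator and the kernel function.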
in the first three papers, stochastic volterra equations are studied in connection with viscoelasticity and heat conduction in materials with memory. the paper is particularly significant because the authors were the first to extend the well-known semigroup approach, applied to stochastic differential equations, to the equation ([eq2]). the resolvent approach is a natural extension of the semigroup approach, which is well known from the theory of evolution equations. that approach enables one to follow some results and schemes obtained for semigroups. unfortunately, some results are not valid in our case, because the resolvent family , , does not satisfy the semigroup property.

clément and da prato studied the stochastic volterra equation ([eq2]) where was a self-adjoint, negative operator in the space , such that . they considered the stochastic volterra equation ([eq2]) driven by a noise term of the form , where was a sequence of real-valued, independent wiener processes. they assumed that the kernel function is completely positive. a consequence of the complete positivity of the function is that the solution to the following equation is nonnegative and nonincreasing for any . in fact, for some special cases the function may be found explicitly; for instance,
\[
\cdots\,, \qquad t\geq 0,\quad \gamma\geq 0 .
\]
in that case, the resolvent family determined by the operator and the function is given by the formula ([eq7]) and has the form , where .

the following results for the stochastic convolution are consequences of the properties of the stochastic integral.

[3.4] (, theorem 2) let be a spatially homogeneous wiener process and , , the resolvent for the equation ([eq2]). if hypothesis [h] holds, then the stochastic equation is a well-defined -valued process. for each , the random variable is a generalized, stationary random field on with the spectral measure
\[
\cdots\;\mu(d\lambda)\,.
\]

[3.5] (, theorem 3) assume that hypothesis [h] holds. then the process is function-valued for all if and only if . if for some and all , then, for each , is a sample continuous random field.

as we have already written, the volterra equation ([eq1]) comprises a big class of equations. in particular cases, when the operator and the function in the stochastic volterra equation ([eq2]) are fixed, it is possible to obtain some additional regularity results. this is natural, because in particular cases we may use some extra features of solutions to ([eq2]). for instance, apparently well known is the integrodifferential equation which interpolates between the heat and wave equations, that is, the volterra equation ([eq1]) where is the laplace operator and , where is the gamma function (a hedged sketch is given below). recently, the deterministic version was studied in detail by fujita and, independently, by schneider and wyss, and the stochastic version of that integrodifferential equation was treated in and . in this paper we shall demonstrate numerical results obtained for that equation in the deterministic version and in the stochastic one as well.
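the kernel in question was elided; from the cited works of fujita and of schneider and wyss, it is usually taken as \(a(t)=t^{\alpha-1}/\Gamma(\alpha)\). as a sketch (the range of the exponent is quoted from those works and should be checked there), the interpolating equation then reads
\[
u(t,x) = u_{0}(x) + \int_{0}^{t}\frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\,\Delta u(s,x)\,ds,
\qquad 1\leq\alpha\leq 2 ,
\]
which for \(\alpha=1\) is the integrated heat equation and for \(\alpha=2\) the integrated wave equation.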
in the theory of stochastic volterra equations we are interested not only in the existence, uniqueness and regularity of solutions, but in some asymptotics, too. the paper is concerned with a limit measure of a stochastic volterra equation driven by a very general noise in the form of a spatially homogeneous wiener process with values in the space of tempered distributions. that paper provides necessary and sufficient conditions for the existence of the limit measure and, additionally, gives the form of any limit measure.

let us summarize the results cited above. the resolvent operators corresponding to the volterra equation ([eq1]) do not form any semigroup. therefore it is not possible to obtain such strong results as in the case of evolution equations with semigroup generators. in the case of volterra equations one cannot use the fractional method of infinite dimensional stochastic calculus. that method, used for demonstrating continuity with respect to for convolutions with semigroups, enables one to obtain only some estimates for the convolutions ([eq4]). moreover, it is possible only in some special cases, see e.g. . it is clear that the volterra equation ([eq1]), in particular its stochastic version ([eq2]), is difficult to study. this results from the fact that equation ([eq1]) contains a wide class of equations. an essential role is played by the kernel function , which, in general, is assumed to be a locally integrable function. as the function and the operator determine the resolvent , the type of the function is very important. that is the reason why it is so difficult to obtain, in the general case, the continuity of the convolution ([eq4]) and some other theoretical results. for a more general convolution, significant in many applications, like , where is an appropriate process, it becomes far more difficult. therefore, in many cases numerical support of theoretical (analytical) considerations is demanded. we need computations for obtaining estimates in regularity results, choosing some parameters, choosing functions with required properties, and for visualization of the solutions obtained. numerical analysis is particularly important when we are not able to obtain analytical results. additionally, numerical schemes are especially useful for studying the asymptotics.

in this section we construct a scheme for the numerical solution of the volterra equation ([eq2]) without the random part, that is, for . we shall consider the case when is the laplace operator. denoting , we can write ([eq1a]) in the standard form . in the galerkin method one introduces a complete set of orthonormal functions on the interval , i.e. , and next a finite sequence of independent random variables , , with standard normal distribution. the approximation for the convolution ([eq4]) can be written in the form
\[
\cdots \;=\; \tau^{1/2}\sum_{i=0}^{I-1}\zeta_i\int_{-\infty}^{\infty}\phi_\alpha(t-s_i,\,x-y)\,w_2(y)\,dy .
\]
for further specification we choose (the constant represents the strength of the stochastic forces). with this assumption, after performing the integral ([eq21]) for particular , one obtains and ; these explicit forms were inserted into the numerical code. the sequence of independent random variables with standard normal distribution was generated using the subroutines gasdev and ran1 from . we illustrate the efficiency of the numerical approach on two examples of the function mentioned earlier.
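for orientation, here is a minimal python transcription of the time discretisation above; the function `phi_alpha_conv` (standing for the already space-convolved kernel \(\int\phi_\alpha(t-s_i,x-y)\,w_2(y)\,dy\)) and the grid parameters are placeholders of ours, since the paper's explicit formulas are not reproduced here.

```python
import numpy as np

def stochastic_convolution(phi_alpha_conv, t, x_grid, n_steps, sigma=1.0, seed=0):
    """Approximate the stochastic convolution at time t on x_grid by
        sigma * sqrt(tau) * sum_i zeta_i * phi_alpha_conv(t - s_i, x),
    following the time discretisation described in the text.
    phi_alpha_conv(s, x) is assumed to return the space-convolved kernel."""
    rng = np.random.default_rng(seed)
    tau = t / n_steps                      # time step
    s = tau * np.arange(n_steps)           # grid points s_i in [0, t)
    zeta = rng.standard_normal(n_steps)    # i.i.d. N(0, 1) variables
    acc = np.zeros_like(x_grid, dtype=float)
    for s_i, z_i in zip(s, zeta):
        acc += z_i * phi_alpha_conv(t - s_i, x_grid)
    return sigma * np.sqrt(tau) * acc

# usage sketch with a (hypothetical) gaussian kernel:
if __name__ == "__main__":
    heat_kernel = lambda s, x: (np.exp(-x**2 / (4.0 * (s + 1e-12)))
                                / np.sqrt(4.0 * np.pi * (s + 1e-12)))
    x = np.linspace(-1.0, 1.0, 201)
    w = stochastic_convolution(heat_kernel, t=1.0, x_grid=x, n_steps=400)
```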
as the initial value of we chose the gaussian . the grid in the variable contained intervals with , covering the interval . the bottom part of fig. [a1ds] shows an example of a single stochastic trajectory (i.e. the sum of the deterministic solution and the stochastic integral) for the same case.

ph. clément and g. da prato, some results on stochastic convolutions arising in volterra equations perturbed by noise, rend. mat. acad. lincei, s. 9, 7 (1996), 147-153.

ph. clément and g. da prato, white noise perturbation of the heat equation in materials with memory, dynamic systems and applications 6 (1997), 441-460.

ph. clément and g. da prato, stochastic convolutions with kernels arising in some volterra equations, in: volterra equations and applications (arlington, tx, 1996), 55-65, stability control theory methods appl. 10, gordon and breach, amsterdam, 2000.

ph. clément, g. da prato and j. prüss, white noise perturbation of the equations of linear parabolic viscoelasticity, rendiconti trieste, 1997.

a. friedman and m. shinbrot, volterra integral equations in banach spaces, trans. amer. math. soc. 126 (1967), 131-179.

y. fujita, integrodifferential equation which interpolates the heat equation and the wave equation, osaka j. math. 27 (1990), 309-321.

y. fujita, a probabilistic approach to volterra equations in banach spaces, diff. int. eqs. 5 (1992), 769-776.

a. karczewska, on the limit measure to stochastic volterra equations, j. int. eqs. appl. 15 (2003), 59-77.

a. karczewska, function valued stochastic convolutions arising in integro-differential equations, submitted.

a. karczewska, the fractional calculus used to linear stochastic volterra equations, submitted.

a. karczewska and j. zabczyk, regularity of solutions to stochastic volterra equations, rend. mat. acad. lincei, s. 9, 11, no. 3 (2001), 141-154.

w. h. press, s. a. teukolsky, w. t. vetterling and b. p. flannery, numerical recipes in fortran, second ed., cambridge university press, new york, 1992.

j. prüss, evolutionary integral equations and applications, birkhäuser, basel, 1993.

w. r. schneider and w. wyss, fractional diffusion and wave equations, j. math. phys. 30 (1989), 134-144.
the aim of the paper is to demonstrate the use of the galerkin method for some kinds of volterra equations, deterministic as well as stochastic. the paper consists of two parts: a theoretical one and a numerical one. in the first part we recall some apparently well-known results concerning the volterra equations under consideration. in the second one we describe the numerical algorithm used and then present some examples of numerical solutions, in order to illustrate the pertinent features of the technique used in the paper.

stochastic and deterministic volterra equations, galerkin method. 60h20, 65c30, 65r20, 60h05, 45d05
the problem of multiple hypothesis testing in the setting of quantum channels can be formulated as follows. assume that is an unknown quantum channel, but some a priori information is available, in the sense that is one of given channels , with probabilities . the task is to find a procedure that determines the true channel with the greatest possible probability of success. for quantum states, this problem was formulated by helstrom and has since been the subject of active research, see e.g. for an overview and further references. here an ensemble is given, where are quantum states with prior probabilities , with a similar interpretation as above. a testing procedure, or a measurement, for this problem is described by a positive operator valued measure (povm), defined as a collection of positive operators summing up to the identity operator. the value is interpreted as the probability that the procedure chooses while the true state is . the task is to maximize the average success probability over all povms. in the case , it is well known that the optimal povm is projection valued, given by the projections onto the positive and negative parts of the operator . for , there is no explicit expression for the optimal povm in general, but it is known that a povm is optimal if and only if it satisfies . this condition was obtained in using the methods of semidefinite programming.

in the case of quantum channels, a most general measurement scheme is described by a triple , where is an ancilla, is a (pure) state on and is a povm on . for , the value is interpreted as the probability that is chosen when the true channel is . the average success probability is then . the task is to maximize this value over all triples . it was observed that using entangled input states may give a greater success probability, and it was shown in that every entangled state is useful for some channel discrimination problem. however, there are situations when e.g. the maximally entangled input state does not give an optimal success probability. it is therefore important to find out whether an optimal scheme with a given input state exists. in the broader context of generalized decision problems, conditions for the existence of an optimal scheme with an input state having maximal schmidt rank were found in ; a related problem was studied in . in the present paper, we show that these conditions can be obtained using the methods of semidefinite programming. such methods were already applied before in the context of discrimination of quantum channels, see . compared to these works, we are more concerned with the choice of an optimal input state. it is an easy observation that if a given scheme is optimal, then must be an optimal measurement for the ensemble . we show that, at least in the case that the input state is assumed to have maximal schmidt rank, the optimality condition for a channel measurement can be divided into the condition for this ensemble and an additional condition that ensures optimality of the input state. if the schmidt rank of the input state is not maximal, we obtain a comparably weaker result, but show an example where the use of such an input state is required. as an important special case, we get a necessary and sufficient condition for the existence of an optimal scheme with a maximally entangled input state. if this condition is not fulfilled, we give an upper bound on the optimal success probability.
for discrimination of two channels, we use the known form of an optimal povm for two states to obtain a relatively simple condition in terms of the choi matrices of the involved channels, which we call the condition. we also derive an upper bound on the diamond norm, which is tighter than the previously known bound given e.g. in , see remark [rem:bounds] below. the results are applied to discrimination of covariant channels, qubit channels, unitary channels and simple projective measurements. the paper is organized as follows: in the next section we rewrite the problem as a problem of sdp, from which we obtain necessary and sufficient conditions for an optimal solution and derive an upper bound on the optimal success probability. in section [sec:two] we investigate the condition and the related bounds. in the last two sections, we study special cases of channels and present some examples demonstrating the results.

let be a finite dimensional hilbert space. we denote by the set of positive operators and by the set of states, that is, positive operators of unit trace. a completely positive trace preserving map is called a channel; we will denote the set of all channels by . any linear map is represented by its choi matrix, defined in as , where is a fixed orthonormal basis of (a sketch of this construction in code is given at the end of this section). note that if and only if is positive and .

an alternative description of a channel measurement is given in terms of process povms (or testers, see also ). a process povm is a collection of positive operators in with for some state . for any triple , there is a process povm such that holds for all and ; conversely, for any process povm, one can find some ancilla , a pure state and a povm such that ([eq:ppovm]) holds. to see this, let for , and observe that by the schmidt decomposition we have for some unitary operator and for every . denoting , we get ; since the channel in acts only on the first part of the system, we get , and holds with . conversely, let for . let be defined on the support of and 0 elsewhere; then is a povm on , where now , and holds as before.

using the description by process povms, we will show that the maximization of the success probability can be written as a problem of semidefinite programming: here is the canonical basis of and is any basis of the (real) linear subspace . to see this, note that according to , equation can be rewritten as: put ; then , and the problem of maximizing can be understood as the problem of maximizing . we have , and from it follows that . note also that since is block-diagonal, we may extend the maximization over all positive elements with (and not only over block-diagonal ones). to rewrite this in the more usable form stated above, we need to note that with if and only if for all and . to prove this statement, let us first assume that ; then for any , and . conversely, assume that for all and . consider as a hilbert space with the hilbert-schmidt inner product; then there is an orthonormal basis in , where each is a self-adjoint operator such that . with respect to this basis, each can be expressed as with some . from the condition we obtain . expressing and using the condition , we get ; since there is no restriction on for , we must have for all , and hence . to conclude the proof, from the condition we get . moreover, it is worth realizing that from the condition we get , hence .
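as a small illustration of the preliminaries above, here is a sketch of the choi matrix of a channel given by kraus operators; the ordering of the tensor factors is an assumption (conventions differ), and the function names are ours, not the paper's.

```python
import numpy as np

def choi_matrix(kraus, d):
    """Choi matrix C = sum_{i,j} |i><j| (x) Phi(|i><j|) of the channel
    Phi(X) = sum_k A_k X A_k^dagger, for a fixed orthonormal basis |i>."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E_ij = np.zeros((d, d), dtype=complex)
            E_ij[i, j] = 1.0
            phi_E = sum(A @ E_ij @ A.conj().T for A in kraus)
            C += np.kron(E_ij, phi_E)
    return C

def is_trace_preserving(kraus, d, tol=1e-10):
    # trace preservation: sum_k A_k^dagger A_k = identity
    S = sum(A.conj().T @ A for A in kraus)
    return np.allclose(S, np.eye(d), atol=tol)
```

as a quick sanity check of the convention: for the identity channel (kraus = [np.eye(d)]) this returns d times the projection onto the maximally entangled state.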
the following result is obtained using standard methods of semidefinite programming. the expression for the maximal success probability was obtained also in , in a more general setting.

[thm:opt_cond] let be a process povm. then is optimal if and only if there is some and some such that for all , and . moreover, in this case, the maximal success probability is .

first, we formulate the dual problem. let , be some basis of and let ; then the dual problem is: . let be dual feasible; then, since and , we must have . if we denote , then and from we obtain . hence there is some channel such that . from the condition we obtain for all . from here we see that the dual problem may be formulated as: . now let ; then is a primal feasible plan. moreover, belongs to the interior of the cone of positive operators; therefore, by slater's condition, we obtain that the duality gap is zero, in other words , where by we denote the primal optimal plan and by the dual optimal plan. since is feasible, we have , and we get: . as and , the sum may be zero if and only if all summands are zero. moreover, the trace of the product of two positive matrices is zero if and only if their product is zero. to see this, let and . we have , and hence and . by the above argumentation, we get from . on the other hand, the condition must hold for any dual feasible plan; but if the condition holds for some primal and dual feasible plans, then the duality gap for these plans is zero and they are optimal. this concludes the proof.

using this result, we can characterize optimality of measurement schemes with input states of maximal schmidt rank.

[coro:optimcond] let be a pure state such that is invertible. then a measurement scheme is optimal if and only if (a) majorizes for all [item:optimcondmajor], and (b) [item:optimcondptr].

let and let be the operator such that , so that the process povm corresponding to is given by , see . note that by our assumptions, is invertible, and . assume that is optimal; then, by theorem [thm:opt_cond], there must be some and such that , and summing up over we obtain . multiplying the above equality by from the left and by from the right, we get . using the above expression for , the two conditions follow easily from this equality. assume conversely that the conditions , are satisfied. put ; then (i) and (ii) imply that and . it follows that there are some positive number and such that . moreover, implies that for all , and it follows that ; this implies the optimality condition of theorem [thm:opt_cond], exactly as in its proof.

note that is the optimality condition for a povm in discrimination of the ensemble , where . in other words, holds if is an optimal povm for this ensemble and the majorization is satisfied. it follows that the existence of an optimal scheme with the given input state is equivalent to the condition (ii). clearly, in this case, is the optimal scheme and the optimal success probability is . next, we show that the conditions of corollary [coro:optimcond] are necessary for a general pure input state.

[coro:notfullrankstate] let be a pure state such that . then a measurement scheme is optimal only if the conditions , from the previous corollary hold.
we will show that the measurement scheme is optimal for some problem with a reduced input space. let us denote by the support of . since is pure, it must be of the form for some schmidt decomposition of . from here we see that , where is a subspace isomorphic to . let be the restriction of to and let be the projection onto ; then it is clear that . moreover, defines an optimal measurement scheme for the reduced channels , with a full schmidt rank input state. the rest follows from the previous corollary.

in general, the opposite implication does not hold. that is because if we limit the problem to some subspace of the original hilbert space, then in general we do not have a guarantee that the optimal input state will be supported on a subspace of the form ; in other words, we would have to maximize the average success probability over all choices of the subspace. we demonstrate this by the following simple example. let , where , and let , where is the optimal povm for discrimination of the ensemble . by we have and . it is easy to see that both conditions and are satisfied, but as argued in , there are cases when entangled input states give a strictly larger probability of success than any separable state, so that a scheme of the form cannot be optimal. it seems that optimality of input states strongly depends on the structure of the channels. in some cases it is even necessary to use an input state with lower schmidt rank, because using a maximal schmidt rank input state would "waste" some normalization of the input state on parts of the channels where it is unnecessary, as will be demonstrated in example [ex:spm]. it is an open question whether some stronger conditions for general input states can be obtained. see also for a discussion of a similar problem in the case of qubit pauli channels.

we will next present an upper bound for in the case that condition is violated. we assume that the input state is maximally entangled, but a similar bound can be obtained for any input state having a maximal schmidt rank.

[thm:errorestimate] let be an optimal povm for discrimination of the ensemble and let , . let denote the operator norm. then the optimal success probability satisfies . note that is the largest success probability that can be obtained by the maximally entangled input state; this implies the first inequality. further, note that we have by optimality of the povm . if now and are such that , then , correspond to a dual feasible plan, hence by . to obtain the tightest upper bound in this way, we put . by the choi isomorphism, there is some completely positive map such that . as was shown in (see corollary 2 and section 3.1), , where the diamond norm is defined as . moreover, since is completely positive, this norm simplifies to
\[
\cdots \;=\; \big\Vert \operatorname{Tr}_{\mathcal{K}} Z \big\Vert\,.
\]
in general, the bound that we obtain in this way does not have to be meaningful, that is, it may happen that . but, as will be demonstrated by the examples in the last section, there are cases when the bound is meaningful, or even tight.

[rem:epsilon] note that if , then , and the value of indicates how much the condition (ii) is violated. it is easy to see that ; this shows that if is small, the maximally entangled state is close to optimal.
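purely as a numerical companion to the two-channel bounds derived in the next section, here is a sketch that computes the positive part of the difference of two choi matrices and checks whether its partial trace over the output factor is proportional to the identity (our reading of the condition; the tensor ordering and normalizations are assumptions consistent with the choi_matrix sketch above).

```python
import numpy as np

def positive_part(H):
    # [H]_+ via the spectral decomposition of a hermitian matrix
    w, V = np.linalg.eigh(H)
    return (V * np.clip(w, 0.0, None)) @ V.conj().T

def partial_trace_out(C, d):
    # trace out the output factor of a (d*d) x (d*d) choi matrix,
    # assuming the ordering input (x) output used in choi_matrix above
    C4 = C.reshape(d, d, d, d)         # indices (i, a, j, b)
    return np.einsum('iaja->ij', C4)   # sum over the output index a

def mei_violation(C1, C2, d):
    # distance of Tr_out [C1 - C2]_+ from a multiple of the identity;
    # zero (up to tolerance) suggests the maximally entangled input is optimal
    D = partial_trace_out(positive_part(C1 - C2), d)
    return np.linalg.norm(D - (np.trace(D) / d) * np.eye(d))
```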
Let and . The following notation will be used throughout. Let ; then we put and we let be the maximally entangled state. Consider any two-outcome POVM on , given by for some operator on . The average success probability for the triple as defined by equation is: . The optimal POVM is obtained if is the projection onto the support of the positive part of ; in this case and .

[coro:maxent] An optimal measurement scheme with a pure maximally entangled input state exists if and only if the Choi operators satisfy .

By the remarks below Corollary [coro:optimcond], such a scheme exists if and only if , equivalently . Since we always have and , the condition can be rewritten as stated.

The following corollary describes the upper bound on the optimal probability.

[coro:bounds] We have the following bounds: . If the condition is satisfied, the inequalities become equalities.

We only have to note that if the condition is satisfied, then .

[rem:bounds] It is well known that is related to the diamond norm as . To our knowledge, the only known bounds on the diamond norm in terms of the Choi matrices are the following (see, e.g., Lemma 6 in ): , which are quite coarse. As in Remark [rem:epsilon], we obtain from Corollary [coro:bounds] the following new upper bound: , where . This shows that if is nearly satisfied, the above bounds are quite precise. To show that the upper bound given by is better than the bound , we show that in general we have ; since is a state, this implies the above inequality. We also see that this inequality is strict unless is of rank 1.

We now apply the results of the previous section to the problem of discrimination of covariant channels, unitary channels, qubit channels and measurements. In the case of covariant channels and unital qubit channels, similar results were obtained in for more general decision problems on families of quantum channels.

Let denote the unitary group of . For , let . Let be a group and let and be unitary representations. Assume that and are covariant channels, that is, . Irreducibility of plays a strong role, as we will see. In this case, the only non-zero projection that commutes with all is ; see, e.g., .

Let be channels satisfying . Assume that the representation is irreducible. Then the condition is satisfied for any .

Let denote the transpose of with respect to the fixed basis. Let and let be as in . We prove the proposition by showing that is invariant under ; by the discussion above, this implies that . For every we have .

In case the representation is reducible, let us sketch an upper bound on the optimal probability. By the previous proof we have , hence , where and are the projections onto the subspaces of the irreducible representations whose orthogonal sum is . Let ; then and we have .

Let and let us denote . Let be the Werner-Holevo channel , where denotes the transpose map with respect to the canonical basis. Then is a unitary channel, given by the unitary such that . It can be easily checked that and .

If is a linear map such that there is some satisfying , then , where . Moreover, for a self-adjoint , if and only if .

Let and be two qubit channels and let for as before.
By the previous remarks, the condition is equivalent to . We are now going to investigate this equality. Note that satisfies ([eq:tracepres]) with . Since is a unitary channel, we have . Using further properties of and , we get . The last equality follows from the fact that , where denotes the transpose with respect to the product basis, and that for any . Thus we have proved:

For a pair of qubit channels , the condition holds if and only if . In particular, this is true if . If both channels are unital, this holds for any , hence a maximally entangled input state is optimal, as was already observed in , and in in the case of qubit Pauli channels. If , then the condition is satisfied if , even if the channels are not unital.

Let and let , be the corresponding unitary channels. As was proved in , it is not necessary to use entangled inputs for optimal discrimination of two unitary channels. Nevertheless, it is an interesting question whether a maximally entangled state is also optimal; this will be addressed in this paragraph. Let . Since any input state may be replaced by , it is clear that discrimination of and is equivalent to discrimination of and the identity channel, and that a maximally entangled input state is optimal for one problem if and only if it is optimal for the other. We may therefore assume that and . Since the unitaries are given only up to a phase, we may also assume that . Put ; are the Choi matrices of the unitary channel and the identity. By the results of the appendix it is clear that if and only if , where and . Since the transpose is a linear map and , we see that is equivalent to . If , this condition is obviously satisfied. If , it is equivalent to . The unitary has a spectral decomposition , where and are numbers such that for every . By the assumption we must have ; this implies that either and , or has exactly two eigenvalues, each of the same multiplicity. The fact that in our calculation the eigenvalues are complex conjugates of each other is simply caused by the choice and does not have to be required in general. We have proved the following:

[prop:unit-chan] Let , be unitary channels. Put and let . Then holds if and only if either or has at most two different eigenvalues, each of the same multiplicity. Note that if is odd, (MEI) holds iff , in which case the two channels are perfectly distinguishable.

A special case of a channel is a measurement, which is given by a POVM. One-shot discrimination of quantum measurements was investigated in , where it was proved that entangled input states are necessary in some cases. The corresponding channel is defined as , and the Choi matrix has the form . Let , for two POVMs with outcomes. In this case, the condition (MEI) has the form . We will further investigate simple projective measurements.

Let and be two orthonormal bases in and let and . We will also assume that . Assume that for some . Then the condition (MEI) is satisfied if and only if . In this case, the condition is ; since for all , this can be true only if , that is, . From now on we will always assume that , equivalently , for all . Then , where and is the projection onto , so that the condition (MEI) becomes . Note that if , then , so that the condition trivially holds.
In this case, and are unital qubit channels, so this follows also from the previous results. Put ; then . The condition ([eq:simplemea]) is equivalent to for all , or for all . Note that the diagonal matrix , where , is invertible. Indeed, for some implies that , so that for all . But then , which is a contradiction. Hence for all and is positive definite.

Let us begin with . Let us denote . Clearly is a unitary matrix. It is straightforward to see that , so the condition becomes , which can be written as a matrix equation of the form . At this point we are ready to settle the case .

Let ; then the condition holds if and only if the matrix , defined as above, has one of the following forms for . In other words, one POVM is a cyclic permutation of the other.

Consider the equation and let ; then we obtain for . By putting , we obtain . It follows that some off-diagonal elements of the matrix must be zero. On the other hand, for , equation becomes , which shows that some off-diagonal elements must be non-zero. Putting, say, , the above equalities imply that also , and all other off-diagonal elements are nonzero. From the condition we obtain that and ; this implies that is of the form , and the bases and are just cyclic permutations of each other, modulo phase changes. Similarly, assuming that , we obtain that is of the form , which is just the other possible cyclic permutation of the basis, modulo complex phases. For the converse, it is easy to check that both and satisfy . See also the remark following the proof of Proposition [prop:const].

The basis vectors and are given only up to a phase; note that the condition is also invariant under phase changes. We may therefore assume that for all . Then the matrix has only real non-negative entries, hence it is self-adjoint. From we can express , and hence . Since both and are self-adjoint matrices, their functions are well defined by the spectral theorem. Multiplying out the right-hand side of the equation and using the obtained expression for , we obtain: . Clearly the operators and are self-adjoint. Let denote the real linear space of self-adjoint operators on , endowed with the Hilbert-Schmidt scalar product, which we denote by . Let us choose an orthonormal basis of composed of the operator and of operators such that . From equation we can see that: , where is some real linear combination of , hence and . It is easy to compute as follows: . Now it is easy to see that: , because from equation we see that as . We have proved the following:

Let and for some orthonormal bases and . Then the condition is equivalent to the equations , .

The obtained equations look rather complicated, but they yield some results in specific cases.

[prop:const] With the above notations, assume that for all . Then

1. If and is odd, then cannot be satisfied.
2. If and is even, then is satisfied if and only if , where is a symmetric unitary matrix with zero diagonal.
3. If , then always holds.

Let us assume that for all . We immediately see that and . As a direct consequence, we obtain from equation . Using this to our advantage in equation , we obtain that in this case is equivalent to . As in the proof of Proposition [prop:unit-chan], we see that has exactly two eigenvalues with the same multiplicity; this also implies (i).
If is even, it is clear that these eigenvalues must be equal to . Let be the eigenprojection corresponding to ; then , with . Conversely, it is easy to see that if is of this form, then holds; this finishes the proof of (ii). Now assume that , which means that for all . Again we immediately see that conditions and are satisfied, with . This implies (iii).

Note that we can conclude from the last statement that any two bases such that the unitary matrix mapping one basis to the other is hollow satisfy the condition . In particular, this happens if the basis is a permutation of leaving no element fixed.

A particular case of the situation described in Proposition [prop:const] is when the bases are mutually unbiased, that is, when for all . Then is a Hadamard matrix; see, e.g., for more information on complex Hadamard matrices. As we have seen, such bases can satisfy the condition only in even-dimensional Hilbert spaces. Below, we provide a further result for .

Recall that two Hadamard matrices are equivalent if one can be turned into the other by multiplication by diagonal unitaries and permutations. Up to equivalence, any Hadamard matrix can be put into dephased form, with all elements in the first row and column equal to unity. In this way, any four-dimensional Hadamard matrix is equivalent to a member of a one-parameter family , containing a unique matrix with real entries. Since the vectors and are given only up to a phase, the next result shows that this is the only case when is satisfied.

Let , and let the bases and be mutually unbiased. Then the condition is satisfied if and only if the corresponding Hadamard matrix satisfies , where , are diagonal unitaries and .

It is clear that . By Proposition [prop:const], we see that we need to search for a hollow symmetric unitary matrix such that each off-diagonal element has modulus . A general form for such a matrix can be found by a straightforward computation. We find that for , so that , where and . Conversely, suppose that for diagonal unitaries and . Since we also assume that all diagonal elements of are equal to 1, we obtain that we must have and . It is now easy to check that has the form , with , and .

In this section, we present examples based on the results of the previous sections. To illustrate how maximally entangled input states may be used for discrimination of two qubit channels, we present an example of discrimination of the identity channel and an amplitude damping channel. The amplitude damping channels are not unital, hence is not necessarily satisfied, and we will see that the maximally entangled state is indeed not optimal. The identity channel may be replaced by a unitary channel, with some changes to the following calculations. We will set .

[Figure: optimal success probability and the upper bound given by Thm. [thm:errorestimate], as functions of the parameter, for discrimination of the amplitude damping channel and the identity channel (Example [ex:qubit]).]

[ex:qubit] Let denote the complex Hilbert space with , and let denote some orthonormal basis of . The amplitude damping channel is represented by the Kraus operators , , defined as , where . The optimal success probability was computed numerically at a number of points of the parameter range; between these points straight lines were drawn, hence the figure looks like a continuous line. The resulting expressions for , and the numerical data for as functions of are plotted in Fig. [fig:prob-theta]. Even though it shows that the maximally entangled input state is not optimal, notice that the upper bound is close to for small values of the parameter.
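This example is easy to reproduce numerically; a sketch (ours), reusing abs_op, p_max_entangled and ptr_out from the earlier code block, and using the standard Kraus form of the amplitude damping channel with a damping parameter we call gamma:

```python
# Amplitude damping vs. identity, as in Example [ex:qubit]. For gamma > 0
# the partial trace of |C_ad - C_id| over the output is not a multiple of
# the identity, consistent with the maximally entangled state not being
# optimal here. Choi ordering input (x) output, Tr C = 2, as before.
import numpy as np

def choi(kraus, d=2):
    """Choi matrix C = sum_{ij} |i><j| (x) Phi(|i><j|)."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex); E[i, j] = 1.0
            C += np.kron(E, sum(K @ E @ K.conj().T for K in kraus))
    return C

def amplitude_damping_analysis(gamma):
    A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    A1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    C_ad, C_id = choi([A0, A1]), choi([np.eye(2)])
    return (p_max_entangled(C_ad, C_id, 2),
            ptr_out(abs_op(C_ad - C_id), 2, 2))
```

For small gamma the returned partial trace is close to a multiple of the identity, matching the observation above that the upper bound is then close to the achievable probability.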
[Figure: optimal success probability and the upper bound given by Thm. [thm:errorestimate], as functions of the parameter, for discrimination of the unitary channel and the identity (Example [ex:unitary]).]

[ex:unitary] Let and . Without loss of generality, we may always assume that we discriminate the identity channel against a unitary . As our first example, we consider the unitary matrix . At this point it is easy to compute and , as we can express them as functions of , which depends only on , so the bounds are functions of . Moreover, it is possible to compute by the numerical methods described in Example [ex:qubit]. The expressions for and are long and messy; they are plotted in Fig. [fig:prob-xi], together with the computed values of . From the figure it is once again clear that, on the one hand, as grows the upper bound becomes meaningless, but on the other hand, if is small then the upper bound and are small as well.

The second case we consider is a matrix of the form . Again, it is straightforward to obtain and and to compute numerically; the obtained functions are plotted in Fig. [fig:prob-xi-2]. Again it can be clearly seen that as grows towards , the upper bound becomes meaningless.

[Figure: optimal success probability and the upper bound given by Thm. [thm:errorestimate], as functions of the parameter, for discrimination of the unitary channel and the identity (Example [ex:unitary]).]

Let us demonstrate our results once more, this time for simple projective measurements.

[ex:spm] Let , , , , for . According to our previous results, the maximally entangled input state is not optimal for discriminating these two measurements. First we need to find . We get . The operator is diagonal in the orthonormal basis , , where . The following calculation is straightforward. We obtain . Now it is easy to see that , i.e. the bound is meaningful and the same for all , even though tends to from above in the formal limit . To confirm the correctness of the upper bound, we will find the optimal state for discrimination of the channels. Let us denote by , the channels corresponding to the simple projective measurements given by the sets of projectors , respectively, and denote . Notice that the channels and can be separated as follows , where is a state on and , and . Notice that , where . This leads to the following: in other words, it is sufficient to consider only input states of the form , where and are states such that , and . Since , where , is the POVM we use for discrimination of the channels, it is obvious that for to be maximal we also have to set , and according to Corollary [coro:notfullrankstate] the problem reduces to discriminating the channels and . Both and are unital qubit channels, and by our previous results the optimal input state is . In the product basis given by , , we have . This matrix has two eigenvalues and , each with multiplicity . We get and , which is exactly the same as our upper bound.
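Still on projective measurements, the cyclic-permutation criterion for d = 3 obtained in the previous section can also be confirmed numerically. A sketch (ours), reusing choi, abs_op and ptr_out from the earlier blocks, and reading the optimality condition for a maximally entangled input as Tr_K |C1 - C2| being a multiple of the identity:

```python
# d = 3 check (our own illustration): the measurement in basis B, i.e. the
# channel rho -> sum_i <b_i|rho|b_i> |i><i|, has Kraus operators |i><b_i|.
# For F a cyclic permutation of E (any phases), Tr_K |C_E - C_F| comes out
# proportional to the identity; for a generic basis G it does not.
import numpy as np

d = 3

def meas_kraus(B):
    return [np.outer(np.eye(d)[i], B[:, i].conj()) for i in range(d)]

E = np.eye(d)
F = np.roll(E, 1, axis=1)                                       # cyclic shift
G = np.linalg.qr(np.random.default_rng(2).normal(size=(d, d)))[0]  # generic

for B in (F, G):
    D = choi(meas_kraus(E), d) - choi(meas_kraus(B), d)
    print(np.round(ptr_out(abs_op(D), d, d), 6))
```

For F the printed matrix is 2 times the identity, while for G the diagonal entries differ, in line with the proposition that for d = 3 the condition forces one basis to be a cyclic permutation of the other.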
We presented necessary and sufficient conditions for optimality of a process POVM in channel discrimination, in particular for a process POVM corresponding to a measurement scheme with a full Schmidt rank input state. A necessary and sufficient condition for the existence of an optimal measurement scheme with a given full Schmidt rank input state was found. In the case of maximally entangled input states, an upper bound on the optimal success probability was given for the case when the optimality condition is not satisfied. For discrimination of two channels, we obtained a simple condition in terms of the Choi matrices of the channels and a new upper bound on the diamond norm. The results were applied to discrimination of four types of channels. For covariant channels, known results for the irreducible case were extended to some reducible cases and an upper bound on the optimal success probability was found. For qubit channels, the obtained condition generalizes previously known results to some pairs of non-unital channels. We proved that for discrimination of unitary channels, maximally entangled input states are optimal only in some very special cases. To the best of our knowledge, the results obtained for unitary channels and simple projective measurements are new.

An interesting open question is whether it is possible to obtain a similar condition for input states of lower Schmidt rank. As was shown, there are cases when such input states are optimal and there may even be no optimal full rank input states. The question is not only what the Schmidt rank of the optimal input state may be, but also how to select the subspace of the input Hilbert space that will form the support of the partial trace of the input state. Another possible future direction of research is to investigate optimal discrimination of more complex quantum processes.

We thank the anonymous referee for the valuable feedback and comments that made this paper more readable and better organized, and also for pointing out an easier proof of Prop. [prop:unit-chan]. This research was supported by grant VEGA 2/0069/16.

S. M. Barnett and S. Croke, Quantum state discrimination, Adv. Photonics, vol. 1, no. 2, pp. 238-278 (2009)
A. O. Barut, R. Rączka, _Theory of Group Representations and Applications_, Warszawa: PWN-Polish Scientific Publishers, 1980
A. Barvinok, _A Course in Convexity_, AMS, 2002
F. G. S. L. Brandão, M. Piani, P. Horodecki, Nature Communications 6, art. no. 7908 (2015)
G. M. D'Ariano, P. Lo Presti, M. G. A. Paris, Phys. Rev. Lett. 87, 270404 (2001)
G. M. D'Ariano, P. Lo Presti, M. G. A. Paris, J. Opt. B 4, 273 (2002)
G. Chiribella, Optimal networks for quantum metrology: semidefinite programs and product rules, New J. Phys., vol. 14, pp. 1-19 (2012)
G. Chiribella, G. M. D'Ariano, and P. Perinotti, Phys. Rev. Lett. 101, 180501 (2008)
M. D. Choi, Completely positive linear maps on complex matrices, Lin. Alg. Appl. 10, 285-290 (1975)
CVX Research, Inc., CVX: Matlab software for disciplined convex programming, version 2.1, http://cvxr.com/cvx, June 2015
M. Grant, S. Boyd, Graph implementations for nonsmooth convex programs, in Recent Advances in Learning and Control (a tribute to M. Vidyasagar), V. Blondel, S. Boyd, and H. Kimura, editors, pp. 95-110, Lecture Notes in Control and Information Sciences, Springer, 2008
G. Gutoski, J. Math. Phys. 53, 032202 (2012)
G. Gutoski and J. Watrous, Toward a general theory of quantum games, in _Proceedings of STOC 2007_, pp. 565-574
C. W. Helstrom, _Quantum Detection and Estimation Theory_, Academic Press, 1976
A. S. Holevo, Journal of Multivariate Analysis 3, pp. 337-394 (1973)
A. Jenčová, J. Math. Phys. 55, 022201 (2014)
A. Kitaev, Quantum computations: algorithms and error correction, Russian Mathematical Surveys 52, pp. 1191-1249 (1997)
K. Matsumoto, arXiv:1209.2392
M. Piani and J.
Watrous, Phys. Rev. Lett. 102(25), 250501 (2009)
M. F. Sacchi, Optimal discrimination of quantum operations, Phys. Rev. A 71, 062340 (2005)
M. F. Sacchi, Entanglement can enhance the distinguishability of entanglement-breaking channels, Phys. Rev. A 72, 014305 (2005)
M. F. Sacchi, Minimum error discrimination of Pauli channels, J. Opt. B: Quantum Semiclassical Opt. 7, S333-S336 (2005)
M. Sedlák, M. Ziman, Phys. Rev. A 90, 052312 (2014)
W. Tadej and K. Życzkowski, Open Systems and Information Dynamics, vol. 13, pp. 133-177 (2006)
J. Watrous, Chicago Journal of Theoretical Computer Science, vol. 2013, pp. 1-19 (2013)
H. Yuen, R. Kennedy and M. Lax, IEEE Transactions on Information Theory, IT-21 (1975)
M. Ziman, Phys. Rev. A 77, 062112 (2008)

Let be unit vectors, , . Let and ; then , where are the eigenvalues and the corresponding eigenvectors of . Moreover, we have , and since we do not worry about a phase, we may suppose that . From , we obtain , where . It follows that , so that , where . We obtain
We find optimality conditions for testers in discrimination of quantum channels. These conditions are obtained using semidefinite programming and are similar to the optimality conditions for discrimination of quantum states. We obtain a simple condition for the existence of an optimal tester with any given input state of maximal Schmidt rank, in particular with a maximally entangled input state. In the case when a maximally entangled state is not optimal, an upper bound on the optimal success probability is obtained. The results for discrimination of two channels are applied to covariant channels, qubit channels, unitary channels and simple projective measurements.
Financial time series are hard to model, since they are heavily influenced by unpredictable events. Natural disasters, uncertainty about public behaviour, statements from governments and central banks, etc., are all events that can drastically affect the market. As a consequence, financial data do not behave the same at all times, hence we cannot assume any stationarity property for them. The latter implies that classic techniques used to analyse time series are widely inadequate to model such data, therefore alternative methods have to be developed. The family of _Markov switching models_ (MSM) constitutes a possible solution, since these models allow us to effectively address the non-stationarity of financial data. The main idea behind the MSM is that, in order to take into account the changes in the behaviour of the data, we allow the distribution of the observations to change over time. A general MSM can be written in the following form , where indicates the state of the model at time , is the vector of the parameters characterizing the model, is the set of all observations up to time , is the set of all observed states up to time , is the set of all possible states, and is the function that governs the transitions between the states. The function defines how the observation at time depends on , and finally , where , , is the so-called _terminal time_. System clearly shows the intrinsic richness of the MSM approach. Particular realizations of allow the treatment of specific problems. Before getting into the details of our study, it is worth mentioning that in most of the dedicated literature we can distinguish between two classes of models. The first class consists of models that have complicated distributions for the data or a large number of states, but very simple transition laws, e.g., a first order Markov chain, see, e.g., . The second class is made up of models with simple assumptions and very few states, usually two, but with more complicated transition laws, see, e.g., .

The present paper is structured as follows: in Sections 1 through 4 we cover the mathematical and computational theory needed to establish the techniques that we then develop in subsequent sections; in Section 5 we introduce the jump-diffusion model, while in Section 6 we present a model that uses -stable distributions; in Section 7 we explain how the models can be concretely implemented and, in Section 8, we present the related results obtained by applying them to a relevant case study concerning the S&P500 index; conclusions and further developments are outlined in Section 9.

Bayesian inference is a branch of statistical inference that assumes the parameters of a probability distribution to be randomly distributed according to a _prior_ distribution.
In particular, the idea is to exploit the observed data, along with the Bayes rule, to generate the _posterior_ distribution of the aforementioned parameters. Therefore, the posterior distribution can be interpreted as the distribution of the parameters once we have taken into account both our subjective belief about them, namely the prior, and the data. Such an approach can be rigorously represented as follows , where is the prior distribution, is the distribution of the data depending on the parameter , and is the posterior of . Finally, is the marginal distribution of , namely . Clearly the choice of the prior can have a large impact on the posterior. A particularly convenient form of prior is what is known as a _conjugate prior_. We say that a prior distribution is conjugate if the posterior distribution derived from it belongs to the same family, as happens, e.g., for the beta-Bernoulli pair, namely . It follows that if we start with a prior and assume that the data are binomially distributed, we end up with a posterior. Hence, we do not have to update the distribution for each new observation, just its parameters. We would like to note that the latter is a particularly relevant aspect from the algorithmic point of view, since it translates into less computationally expensive code. For the sake of completeness, in the following subsections we list other particularly convenient choices for distribution pairs and, in order to give clear examples, we first start by explaining how the posterior of a set of independent identically distributed (i.i.d.) random variables is obtained.

Let be a set of i.i.d. random variables with density function . Moreover, let . Then , where is the likelihood function of the data, and is the set of all possible values of . For the rest of this paper we will denote the vector of observations by .

Assume that we have independent observations , where , and the value of is known. In order to perform Bayesian inference on the given data, we also need to place a distribution on . Therefore, we set . The corresponding likelihood function is , while for the posterior we have , hence .

Assume again that we have independent observations but, this time, is known, while is unknown. Taking to be inverse-gamma distributed with parameters and , and denoting the distribution by , we can write the density function of as follows , where is the extension of the factorial to the set of positive real numbers, known as the _gamma function_, and defined by . Therefore, the associated likelihood function is , hence the posterior is .

Unfortunately, not all distribution pairs are as convenient as the previously mentioned ones, especially from the point of view of the parameter simulation needed by concrete computational studies. When the posterior is a well-known distribution, as in the _normal-normal_ and _inverse gamma-normal_ cases, we can simulate the parameters using, e.g., existing R libraries. Otherwise, _ad hoc_ sampling algorithms have to be developed. The next section addresses these problems.

In this section, we describe two methods that will be used to sample the parameters, namely the _Gibbs sampling method_ and the _Metropolis-Hastings algorithm_. The latter will be used in situations where the posterior distribution is non-standard, while the former will be used when the distribution can be simulated using existing software.
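Before describing the samplers, we note that the conjugate updates described above are straightforward to code. A minimal sketch (ours, with hypothetical parameter names) for the normal-normal and inverse gamma-normal pairs:

```python
# Conjugate posterior updates (our own illustration). mu0, tau2 are the
# prior mean and variance on the unknown mean; alpha, beta are the prior
# inverse-gamma parameters on the unknown variance.
import numpy as np

def normal_mean_posterior(x, sigma2, mu0, tau2):
    """N(mu0, tau2) prior on the mean; N(mean, sigma2) data, sigma2 known."""
    n = len(x)
    var_post = 1.0 / (1.0 / tau2 + n / sigma2)
    mean_post = var_post * (mu0 / tau2 + x.sum() / sigma2)
    return mean_post, var_post

def variance_posterior(x, mu, alpha, beta):
    """IG(alpha, beta) prior on the variance; mean mu known."""
    n = len(x)
    return alpha + n / 2.0, beta + 0.5 * ((x - mu) ** 2).sum()

x = np.random.default_rng(1).normal(0.5, 1.0, size=200)
print(normal_mean_posterior(x, 1.0, 0.0, 10.0))  # posterior mean, variance
print(variance_posterior(x, 0.5, 2.0, 2.0))      # posterior IG parameters
```

Because only the parameters are updated, each new observation costs essentially nothing, which is the computational advantage noted above.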
Assume that we have a model with a finite number of parameters , and that we want to find the full posterior distribution . This goal can be quite difficult to reach, since the multivariate simulation of distributions is much more tangled and computationally heavy than its univariate counterpart. The Gibbs sampling approach allows the sampling of , knowing only the conditional distributions . Let be the number of simulations we want to perform. We assign arbitrary starting values to each of the parameters. Then, for every , we perform the following steps . Hence we can simulate each of the model parameters. The first simulations are discarded, being part of what is called the _burn-in period_, in order to get rid of the dependence on the arbitrary choice of the starting point, while the remaining values are assumed to be a suitable approximation of the real distribution. It is worth mentioning that the number of iterations, as well as the length of the _burn-in period_, should be chosen carefully, since for larger values of the simulations become too time consuming, while small values might not provide enough iterations for the sampler to converge.

The Gibbs sampler is rather easy to implement, but its major drawback is that it requires each to be readily samplable, where is the vector . The Metropolis-Hastings algorithm allows for a solution to such an inconvenience. In particular, it only requires a function proportional to the density function, and a _proposal distribution_ , which denotes a proper probability density function defined on the space of all possible values of . In what follows we provide the description of the general Metropolis-Hastings algorithm, which uses the full parameter vector , as reported in . We underline that the algorithm remains unchanged when is a scalar. Let be the number of simulations we want to perform. We assign an arbitrary starting value to the parameter vector . Then, for every , we perform the following steps .

Having defined a method to sample the parameters, we now face the task of simulating the states of the models we will be using. This problem is the subject of the next section.
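A compact sketch (ours) of the random-walk variant of the Metropolis-Hastings scheme just described; log_target is any function proportional to the log posterior, and a symmetric Gaussian proposal is assumed, so the Hastings ratio reduces to the ratio of target values:

```python
# Random-walk Metropolis-Hastings (our own illustration).
import numpy as np

def metropolis_hastings(log_target, theta0, n_iter, step=0.1, burn_in=1000,
                        rng=np.random.default_rng()):
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    chain = np.empty((n_iter, theta.size))
    lp = log_target(theta)
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_target(prop)
        # symmetric proposal: accept with probability min(1, target ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[t] = theta
    return chain[burn_in:]   # discard the burn-in period

# example: sampling a standard normal target
draws = metropolis_hastings(lambda th: -0.5 * float(th @ th), [2.0], 20000)
print(draws.mean(), draws.std())
```

The step size plays the same role as the proposal spread discussed above: too small and the chain explores slowly, too large and most proposals are rejected.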
in the next two sections we present the models that will be used laterin the paper _ the variation of certain speculative prices _ , see , benoit mandelbrot draws attention to the fact that the normal distribution is inadequate when it comes to describing economic and financial data .he argues that although the histograms of price changes seem to behave according to a gaussian distribution , a more careful analysis reveals that the large number of outliers makes the normal distribution fitted to the data much flatter than the actual data are , and with not enough density at the tails to include all the extreme values .if one tries to manipulate the variance of the gaussian distribution to accommodate the values around the mean , then the result is a distribution that is even worse than the previous one where the extreme values are concerned . in what followswe will show how to solve the aforementioned issue by using a gaussian distribution , to model the values around the mean , plus jumps of stochastic intensity , to include outlying values . specifically , our model is the following \end{cases}\ ] ]we divide the analysis of the model defined in into two components , the _ gaussian component _ and the _ jump component_. we will use the gaussian distribution to model most of the data by means of the random variable , where both the mean and the variance of are state dependent . in particular , we define the state dependence of the mean as follows hence each state has its own , constant mean , without further restrictions . concerning the variance , we assume that it increases depending on the state , namely where , , , which gives us hence , by , as we go up in states we also go up in volatility . jump diffusion models , first introduced into finance by robert c. merton in ,are currently widely accepted as an effective way to model the behaviour of financial data , see , e.g. , . in order to incorporate the jump feature in our model, we have to deal with two major difficulties .first , we have to find a distribution under which the sum of independent random variables behaves well , at least from the point of view of real statistical applications .this task is not as straightforward as it may seem , since even the sum of i.i.d .uniform random variables has a distribution that rapidly grows in complexity with the number of addends . to overcome this particular problem ,we have chosen to exploit the exponential distribution to model the i.i.d .jump amplitudes , as the sum of i.i.d .exponential random variables follows a _ gamma distribution _ , namely where is the gamma distribution in the parameterization , while and are given natural numbers .the aforementioned choice leads us to the second problem , which concerns the sign of the jumps . 
Obviously, financial shocks can have both positive and negative values, while the gamma distribution allows only for positive ones. The issue can be solved by multiplying the sum in by a random variable taking values in , with equal probability. We refer to the resulting distribution as the _symmetric gamma distribution_ and we denote it by . Assuming now that , the probability density function of is given by \(f(x)=\frac{1}{2}\frac{b^{\,n}}{\Gamma(n)}\lvert x\rvert^{\,n-1}e^{-b\lvert x\rvert}\), hence has mean equal to zero, and variance \(n(n+1)/b^{2}\). Looking at eq. , one can see that can be used to control how much influences the variance of the distribution. For example, if we take two random variables , we have and , which is a drastic increase in variance. Taking , on the other hand, gives us the variances and , which is a much smaller increase for the same change in . The previous, rather straightforward observation will be useful later, since in our model will represent the number of jumps at a certain point in time. Taking a large means that every extra jump only slightly increases the variance of the model, hence allowing for a finer analysis of the data. The next step consists in determining the length of the sum in eq. . In particular, we assume that such a sum has a state-dependent length, represented by a state-dependent Poisson random variable ; hence we have to determine the values of . In keeping with the interpretation of the states, see eq. , we want the number of jumps to increase as the state the data are in increases. This can be done by ordering the parameters. Moreover, in order to also allow the parameters to be sufficiently flexible for our purposes, we assume them to be distributed as follows , which clearly guarantees that . Summing up the definitions stated in subsections [gaussianelement] and [jumpelement], we can write the full model as follows . There is no analytic expression for the distribution of , but we can obtain an integral form of it using the following well-known fact. Let and be two independent random variables with density functions and , defined for . Then the sum is a random variable with density function given by the convolution . Therefore, by the convolution formula in , we have . Although not very useful in general, the expression in eq. can be computationally handled with little difficulty, a crucial fact for the concrete case study we will consider in section [casestudy]. In the next section we consider the -stable distribution model.

In section [jumpdiffusionmodel], we pointed out that the Gaussian distribution is not adequate to model financial data, mainly because of its slim tails, which we offset by adding jumps. In what follows, we will consider a different approach: namely, we will model the data using a distribution that has fatter tails than the Gaussian one, but still preserves its most important characteristics. There are multiple equivalent ways to define a stable distribution. We will consider the two most common ones; the interested reader can refer to, e.g., , for the others. A random variable is said to have a stable distribution if, for every and positive , there exist a positive number and a real number such that , where and are independent copies of and stands for _equal in distribution_. This implies that the sum of two stable independent identically distributed random variables is still a stable random variable, with the same distribution up to a _scale factor_ and a shift component.
As an example, we can consider two Gaussian random variables and , assumed to be independent copies of . Then , which means that . Alternatively, we can define the stable distribution using characteristic functions. Namely, a random variable is said to have a stable distribution if there exist parameters , such that its characteristic function has the form

\(\varphi(\theta)=\exp\!\left[-\sigma^{\alpha}\lvert\theta\rvert^{\alpha}\left(1-i\beta\,\mathrm{sgn}(\theta)\tan\tfrac{\pi\alpha}{2}\right)+i\mu\theta\right]\), for \(\alpha\neq 1\),
\(\varphi(\theta)=\exp\!\left[-\sigma\lvert\theta\rvert\left(1+i\beta\tfrac{2}{\pi}\,\mathrm{sgn}(\theta)\ln\lvert\theta\rvert\right)+i\mu\theta\right]\), for \(\alpha=1\).

We call the _stability_ parameter, the _skewness_ parameter, the _scale_ parameter and the _location_ parameter. For we obtain the normal distribution, which is the only member of the stable distribution family that has finite variance. For we have infinite variance but a well-defined mean, while, for , both the mean and the variance are undefined. We note that in general there is no closed-form solution for the probability density function of a stable distribution. The stable distribution will be denoted by for the remainder of the paper.

In the model we propose, the data are assumed to follow a symmetric -stable distribution, more precisely . The full model is presented in the following . The motivation behind the choice of the model represented by mainly relies on empirical observations of financial data, which exhibit fat tails that cannot be well described using the Gaussian approach. In particular, we believe that such a phenomenon can be suitably addressed by exploiting -stable distributions with . Moreover, financial data often exhibit structural breaks because of abrupt changes in the market, e.g. during the sub-prime mortgage credit crisis of 2008, which is the reason why we consider both the scale and the location parameters to be state-dependent. As we previously mentioned, in general there is no closed form for the density of an -stable distribution. Nevertheless, this problem can be circumvented using the fact that can be conditionally represented as a Gaussian random variable, see, e.g., , by introducing a random variable and using the property , which allows us to have an analytic likelihood function and significantly speeds up the sampling process. Analogously to what we considered in section [jumpdiffusionmodel], we have one mean for each state, without further restrictions, namely . We also want the scale parameter to be increasing with respect to the state, namely , where , which leads to the property , so that an increase in the state number indicates an increase in volatility.

In this section we get into the specifics of our two models. In particular, we provide the details regarding the likelihood functions, the priors and the posteriors, for both the _jump diffusion model_, described in section [jumpdiffusionmodel], and the _-stable model_, defined in section [alfastablemodel]. The concept of duration analysis is also explained, along with its importance. Both models will be characterized by four states, with the states being interpreted as _low_, _medium_, _high_ and _very high_ volatility _regimes_. Let us start by defining the following quantities . For the rest of this section we will suppress unneeded parameters. Therefore, e.g.,
the conditional posterior will be denoted by , the general rule being that the parameters that are not being inferred on are considered known. The description of the implementation is divided into three parts: the first part deals with the form of the likelihood function, the second with the priors, while the third part provides a detailed analysis of the different types of obtained posteriors.

We have to take into account whether there are jumps in the model or not, as well as the state of each observation. Hence, if , we define , while, if , we define as . Then the full likelihood function is , which has a standard form only if for every . As this very rarely happens, we will use the Metropolis-Hastings algorithm in this model.

* Mean: * we take the mean to be normally distributed. Moreover, we give the same prior to the means of all the states, namely .
* Variance: * the variance will have an inverse-gamma prior .
* H parameters: * we previously saw that in order for to hold, we need ; hence we define , for all , and make these parameters Fréchet distributed, namely . Then the density function of reads as follows , where , and is defined for .
* Poisson parameters: * for the priors of the Poisson parameters we refer to .
* Transition probabilities: * for the transition probabilities we will use a Dirichlet prior, namely , for every . The density function of this particular Dirichlet distribution is given by , and it is defined on the simplex , while, everywhere else, its value is zero. Finally, the parameter is a constant.

* Mean: * because the likelihood function depends on whether or not jumps have occurred, we have two different posteriors for the mean. In particular, if , by , we have for all , while, if , we obtain .
* Variance: * similarly to the previous point, we have to differentiate between the jump and no-jump cases. Therefore, if , by , we have ; otherwise, we obtain .
* H parameters: * in order to obtain , we need to transform the data, also taking into account the different states. In particular, we have the following cases , where we have used the transformed data set to obtain a posterior for when , namely , which gives us the posterior . Hence, when , the posterior is analogous to the one for , with the only difference being that we use instead of , which yields the posterior . In the case where we have jumps, i.e. , there is no analytic expression, therefore .
* Poisson parameters: * concerning the posterior of the theta parameters, for , we have .
* Transition probabilities: * the transition probabilities differ from the other parameters in that they do not depend directly on the observations. Instead, they depend on the vector of states. Assuming that the vector is known, the posterior distribution of the transition probability vector has the Dirichlet distribution , where is the number of transitions from state to state .

In what follows, we proceed analogously to subsection [subsec:jump_diffusion_model]. Using the fact that, in the present setting, our data are conditionally normal, see , the likelihood function reads as follows , and, unlike in the previous model, we do not have to worry about multiple cases.

* Mean: * the prior of the means is the same as in eq.
.
* Scale: * the distribution of the scale is analogous to that of the variance in the previous model, namely .
* H parameters: * these parameters are exactly the same as they were in the previous model; in fact, their role remains unchanged, since they allow for the volatility to increase as the states increase.
* Lambda: * the lambda parameter follows a stable distribution, hence .

* Mean: * the posterior of the mean is analogous to the one in eq. , with the only difference being the form of the variance. In particular, we have .
* Scale: * the posterior of the scale is .
* H parameters: * in what follows we limit ourselves to listing the needed transformations; therefore, we have . The posteriors are obtained as in the previous case.
* Lambda: * since there is no closed form for the posterior distribution of the lambda parameter, we only write .

The prior and posterior of the transition probabilities are the same in both proposed models. In fact, the transition probabilities do not depend on any of the parameters, but only on the state vector.

The expected duration of each state for an MSM is a quantity of significant interest. Having an estimate of how long a certain data set remains in a particular state can give us useful insights into how the model will behave for a certain period of time. In this section we explain how the expected duration can be calculated by exploiting the transition probabilities. The expected duration, denoted by , is defined as , where is the random variable that models the length of the time interval for which the time series is in state . The first thing we have to consider is , the probability of the data being in state , meaning , where . It just so happens that has a very simple closed form (for a first-order Markov chain, the expected duration of state \(i\) is \(1/(1-p_{ii})\)); we will use the previous expression later on, when we compare the state durations obtained in the present paper with those provided in . We will see that there is a significant difference in the state durations, showing that the models developed in this paper perform better than the one proposed in to model the time series of the Chicago Board Options Exchange volatility index, better known as VIX.

Our case study is concerned with the application of the above theory to developing an indicator that plays a role similar to that of the VIX. In particular, we use the set of S&P500 weekly prices, considering a time interval that runs from the 3rd of January, 2007 to the 29th of December, 2014. We picked this interval to include the sub-prime mortgage crash of 2008 as well as the subsequent period of relative calm. This choice allows us to analyse how our approach performs in both situations. We will show that our techniques improve the results stated in , where the model was very effective in periods of high volatility, but also too smooth in case of low volatility. Our results are summarised below with respect to both the _jump diffusion model_, defined in section [jumpdiffusionmodel], and the _-stable model_, provided in section [alfastablemodel].
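Before presenting the results, we sketch (our own illustration, with hypothetical parameter names) how a single observation is drawn from each of the two models for a fixed state: a Gaussian draw plus a Poisson number of exponentially distributed jumps with a random sign for the jump-diffusion model, and the Chambers-Mallows-Stuck method for the symmetric alpha-stable model.

```python
# One-step draws from the two observation models (our own sketch).
# mu, sigma, theta are the state-dependent mean, volatility and Poisson
# intensity; b is the exponential rate of the jump amplitudes.
import numpy as np

rng = np.random.default_rng(0)

def jump_diffusion_draw(mu, sigma, theta, b=40.0):
    n_jumps = rng.poisson(theta)
    jump = 0.0
    if n_jumps:
        # sum of n_jumps iid Exp(b) is Gamma(n_jumps, rate b); random sign
        jump = rng.choice([-1.0, 1.0]) * rng.gamma(n_jumps, 1.0 / b)
    return rng.normal(mu, sigma) + jump

def sas_draw(alpha, sigma, mu):
    """Symmetric alpha-stable draw via Chambers-Mallows-Stuck."""
    U = rng.uniform(-np.pi / 2, np.pi / 2)
    W = rng.exponential(1.0)
    X = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
         * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))
    return mu + sigma * X
```

With b = 40, as used below, each extra jump contributes little variance, which is exactly the "finer analysis" effect discussed in the model section.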
For the jump diffusion model we model the data as a zero-mean process in order to make the framework more parsimonious. We take the exponential distribution parameter to be equal to 40, in order to make the contribution of each extra jump to the variance relatively small. This choice of allows for a finer analysis. We first present the histograms of the sampled variances, see Fig. [fig:variance_comparison]. As we can see, the algorithm is rather accurate in sampling the variances. In particular, we recall that the theoretical posterior of the variances is an inverse-gamma distribution, which is exactly what we can observe in the histograms. Moreover, in Table [tbl:variance1] we report the point estimates of each variance value.

[Table [tbl:variance1]: Gaussian variance point estimates.]

One thing that stands out in the mean point estimates is the sign of the mean of the fourth state, which is negative. The latter should not come as a surprise, since it refers to the highest volatility value in the time series, namely the one related to the mortgage crisis of 2008. We recall that, during a severe financial crisis, most price movements are downward, resulting in a negative drift. In what follows, we present the transition probability matrix, see eq. , and the transition matrix, see eq. , namely . As in the previous case, we note the expected values of the state durations obtained using eq. , namely . We can note that the difference between the results in and those in is significant. In order to better explain the latter datum, let us define the volatility indicator within the present framework and make a comparison with the VIX index. In particular, we define a second volatility indicator, denoted by , which, analogously to the previous case, will stand for the expected standard deviation of the data at time , i.e. . In Fig. [fig:stable_comparison] we can see a visual comparison between the two values. Using eq. ,
we obtain , which is a significant increase over . This leads us to conclude that the estimate obtained from the jump diffusion model is closer to the VIX than the one obtained from the -stable distribution model. We now briefly explain the difference between the results in and . The expected duration of state 1 falls, while at the same time there is a drastic increase in the expected durations of states 3 and 4. Looking at the way the estimators behave in Fig. [fig:jump_comparison] and Fig. [fig:stable_comparison], it is worth noting that the estimator obtained from the jump diffusion model is much more _jagged_, because of the regular transitions from one state to another, while the one obtained from the stable distribution model is much smoother, seeing as the time series tends to stay in the high volatility states much longer. Furthermore, where the stable distribution model has to place observations that should be in the low volatility states into the high volatility ones, in order to solve the variance underestimation problem mentioned in , the jump diffusion model can simply add a few jumps to make up for the missing variance. This is why, despite its attempts to increase the variance by staying in the higher states, we see the indicator of the stable model _drooping_ and underestimating the low volatility, while, in this situation, the jump model stays much closer to the VIX.

In the present paper we have presented two novel techniques to implement a Markov switching model (MSM) type approach to non-stationary data, namely a jump diffusion MSM and an -stable MSM. In subsection [jump_diffusion_case] we have shown that the first one is very effective in mimicking the VIX index; moreover, its implementation can be carried out smoothly without sacrificing its theoretical peculiarities, see section [jumpdiffusionmodel]. A slightly different situation concerns the implementation of the second approach, subsection [alpha_stable_case], since even if the -stable MSM approach turns out to be quite effective, we have to consider sampling problems for one of its parameters, implying that computational results do not behave exactly the way they are meant to. We would like to underline that the achieved tractability of the jump diffusion model is a crucial point, and it shows how such a technique can be fruitfully used to model any kind of time series presenting pronounced tails, not just financial ones. As far as the issues of over-smoothing and excessive state duration are concerned, which have been stated in as the main deficiencies of the MSM approach to financial data, we have shown that, using the models presented here, the state durations can be significantly reduced, see subsection [jump_diffusion_case], and the problem of over-smoothing can be solved, see subsections [jump_diffusion_case] and [alpha_stable_case]. Concerning future developments, we aim at improving our jump diffusion MSM model by considering, instead of a simple first-order Markov transition law, a -th order Markov transition law. Other possibilities consist in dealing with a transition law that is state-duration dependent, or allowing the law to depend on other observable quantities used as indicators of the behaviour of the economy, e.g., real personal income, the industrial production index, the rate of private credit growth, etc.
* Acknowledgements: * We would like to sincerely thank Matteo Frigo for his insightful comments and his fundamental suggestions, which have helped us a lot in preparing the present work, especially with respect to the duration analysis.
We perform a detailed comparison between a _Markov switching jump diffusion model_ and a _Markov switching -stable distribution model_ with respect to the analysis of non-stationary data. We show that the jump-diffusion model is extremely robust, flexible and accurate in fitting financial time series. A thorough computational study involving the two models applied to real data, namely the S&P500 index, is provided. The study shows that the jump-diffusion model solves the over-smoothing issue stated in , while the -stable distribution approach is a good compromise between computational effort and performance in the estimation of implied volatility, which is a major problem widely underlined in the dedicated literature, see, e.g., .

Keywords: Markov switching, -stable distribution, jump-diffusion model, symmetric gamma distribution, regime switching, Markov chain Monte Carlo, Metropolis-Hastings algorithm.
In adaptive filtering, there is a class of algorithms specifically designed for sparse system identification, where the unknown system has only a few large coefficients while the remaining ones have such small amplitudes that they can be ignored without significant effect on the overall performance of the system. A good example is the zero-attracting least mean square (ZA-LMS) algorithm proposed in . This algorithm can achieve a higher convergence speed and, meanwhile, reduce the steady-state excess mean square error (MSE). Compared to the classic LMS algorithm, the ZA-LMS algorithm introduces an \(l_1\) norm in its cost function, which modifies the weight vector update equation with a zero-attractor term.

Recently, hypercomplex concepts have been introduced to solve problems related to three- or four-dimensional signals, such as vector-sensor array signal processing, colour image processing and wind profile prediction. As quaternion-valued algorithms can be regarded as an extension of complex-valued ones, adaptive filtering algorithms in the complex domain can be extended to the quaternion domain as well, such as the quaternion-valued LMS (QLMS) algorithm in . In this paper, we propose a novel quaternion-valued adaptive algorithm with a sparsity constraint, called the zero-attracting QLMS (ZA-QLMS) algorithm. The additional constraint is formulated based on the \(l_1\) norm. Both the QLMS and ZA-QLMS algorithms can identify an unknown sparse system effectively; however, a better performance in terms of convergence speed is achieved by the latter.

This paper is organized as follows. A review of basic operations in the quaternion domain is provided in Section [sec:quaternion_basic] to facilitate the following derivation of the ZA-QLMS algorithm. The proposed ZA-QLMS algorithm is derived in Section [sec:za-qlms]. Simulation results are given in Section [sec:simulations], and conclusions are drawn in Section [sec:conclusions].

A quaternion is a non-commutative extension of a complex number; normally a quaternion consists of one real part and three imaginary parts, denoted by subscripts , , and , respectively. A quaternion number can be described as , where , , , and are real-valued. When its real part is zero, a quaternion becomes a pure quaternion. In this paper, we consider the conjugate of as . The three imaginary units , , and satisfy . As quaternions have the non-commutativity property, in multiplication the exchange of any two factors will in general give a different result; for example, we have rather than .

To derive the quaternion-valued adaptive algorithm, the starting point is the general operation of differentiation with respect to a quaternion-valued vector.
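Before turning to differentiation, the quaternion algebra just recalled is easy to encode. A minimal sketch (ours), with quaternions stored as real 4-tuples and the Hamilton product implementing the multiplication rules above:

```python
# Quaternion arithmetic (our own sketch): q = a + b*i + c*j + d*k stored as
# (a, b, c, d); qmul is the Hamilton product with i^2 = j^2 = k^2 = ijk = -1.
import numpy as np

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def qconj(q):
    """Quaternion conjugate: negate the three imaginary parts."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

i = np.array([0., 1., 0., 0.]); j = np.array([0., 0., 1., 0.])
print(qmul(i, j), qmul(j, i))   # k and -k: the order of factors matters
```

The printed pair illustrates the non-commutativity that makes the order of factors in the derivations below significant.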
At first, we need to give the definitions of differentiation with respect to a quaternion and its conjugate. Assume that is a function of the quaternion variable , expressed as , where is in general quaternion-valued. The definition of can be expressed as , and the derivative of with respect to can be defined in a similar way, . With these definitions, we can easily obtain some product rules, such as , as well as the differentiation of quaternion-valued functions with respect to real variables. Suppose and are two quaternion-valued functions of the quaternion variable , and is a real variable; then we have the following result: . When the quaternion variable is replaced by a quaternion-valued vector , where , , the differentiation of the function with respect to the vector can be derived straightforwardly using a combination of ([eq:general_definition]): ([eq:vector_definition]). Similarly, we define as ([eq:conj_vector_definition]). Obviously, when , ([eq:vector_definition]) and ([eq:conj_vector_definition]) reduce to ([eq:general_definition]) and ([eq:conj_general_definition]), respectively.

To improve the performance of the LMS algorithm for sparse system identification, the ZA-QLMS algorithm is derived in this section. To achieve this, similar to , we add an \(l_1\) norm penalty term on the quaternion-valued weight vector \(\mathbf{w}[n]\) to the cost function, where \(\mathbf{w}[n]\) is the adaptive weight vector with a length of , \(\mathbf{x}[n]\) is the input sample vector, and \((\cdot)^{T}\) denotes the transpose operation. Moreover, the conjugate form of the error is given by

\(e^{*}[n] = d^{*}[n] - \mathbf{x}^{H}[n]\,\mathbf{w}^{*}[n]\).

Our proposed cost function with a zero-attractor term is given by

\(J_{0}[n] = e[n]\,e^{*}[n] + \gamma\,\lVert\mathbf{w}[n]\rVert_{1}\),

where \(\gamma\) is a small constant. The gradients of the above cost function with respect to \(\mathbf{w}^{*}\) and \(\mathbf{w}\) can be respectively expressed as \(\nabla_{\mathbf{w}^{*}} J_{0}[n] = \partial J_{0}[n]/\partial\mathbf{w}^{*}\) ([eq:conj_gradient_cost_function]) and \(\nabla_{\mathbf{w}} J_{0}[n] = \partial J_{0}[n]/\partial\mathbf{w}\) ([eq:gradient_cost_function]). From , we know that the conjugate gradient gives the maximum steepness direction for the optimization surface; therefore, the conjugate gradient \(\nabla_{\mathbf{w}^{*}} J_{0}[n]\) will be used to derive the update of the coefficient weight vector. Expanding the cost function, we obtain

\(J_{0}[n] = e[n]e^{*}[n] + \gamma\lVert\mathbf{w}[n]\rVert_{1} = d[n]d^{*}[n] - d[n]\mathbf{x}^{H}[n]\mathbf{w}^{*}[n] - \mathbf{w}^{T}[n]\mathbf{x}[n]d^{*}[n] + \mathbf{w}^{T}[n]\mathbf{x}[n]\mathbf{x}^{H}[n]\mathbf{w}^{*}[n] + \gamma\lVert\mathbf{w}[n]\rVert_{1}\). ([eq:extended_cost_function])

Furthermore,

\(\dfrac{\partial J_{0}[n]}{\partial\mathbf{w}^{*}} = \dfrac{\partial}{\partial\mathbf{w}^{*}}\big(d[n]d^{*}[n] - d[n]\mathbf{x}^{H}[n]\mathbf{w}^{*}[n] - \mathbf{w}^{T}[n]\mathbf{x}[n]d^{*}[n] + \mathbf{w}^{T}[n]\mathbf{x}[n]\mathbf{x}^{H}[n]\mathbf{w}^{*}[n]\big) + \dfrac{\partial\big(\gamma\lVert\mathbf{w}[n]\rVert_{1}\big)}{\partial\mathbf{w}^{*}}\). ([eq:extended_gradient_cost_function])
Details of the derivation process for the gradient are as follows:
$$\frac{\partial\big(d[n]\,d^*[n]\big)}{\partial \mathbf{w}^*[n]} = 0,$$
$$\frac{\partial\big(d[n]\,\mathbf{x}^H[n]\,\mathbf{w}^*[n]\big)}{\partial \mathbf{w}^*[n]} = d[n]\,\mathbf{x}^*[n],$$
$$\frac{\partial\big(\mathbf{w}^T[n]\,\mathbf{x}[n]\,d^*[n]\big)}{\partial \mathbf{w}^*[n]} = -\frac{1}{2}\,d[n]\,\mathbf{x}^*[n],$$
$$\frac{\partial\big(\mathbf{w}^T[n]\,\mathbf{x}[n]\,\mathbf{x}^H[n]\,\mathbf{w}^*[n]\big)}{\partial \mathbf{w}^*[n]} = \frac{1}{2}\,\mathbf{w}^T[n]\,\mathbf{x}[n]\,\mathbf{x}^*[n].$$
Moreover, the last part of the gradient of the cost function is given by
$$\frac{\partial\big(\gamma\,\|\mathbf{w}[n]\|_1\big)}{\partial \mathbf{w}^*} = \frac{1}{4}\,\gamma\,\mathrm{sgn}\big(\mathbf{w}[n]\big),$$
where the symbol $\mathrm{sgn}(\cdot)$ is a component-wise sign function defined as
$$\mathrm{sgn}(w) = \begin{cases} w/|w|, & w \neq 0, \\ 0, & w = 0. \end{cases}$$
Combining the above results, the final gradient can be obtained as
$$\nabla_{\mathbf{w}^*}J_0[n] = -\frac{1}{2}\,e[n]\,\mathbf{x}^*[n] + \frac{1}{4}\,\gamma\,\mathrm{sgn}\big(\mathbf{w}[n]\big).$$
With the general update equation for the weight vector
$$\mathbf{w}[n+1] = \mathbf{w}[n] - \mu\,\nabla_{\mathbf{w}^*}J_0[n],$$
where $\mu$ is the step size, we arrive at the following update equation for the proposed ZA-QLMS algorithm:
$$\mathbf{w}[n+1] = \mathbf{w}[n] + \mu\,e[n]\,\mathbf{x}^*[n] - \rho\,\mathrm{sgn}\big(\mathbf{w}[n]\big), \tag{eq:update_weight_vector}$$
where $\rho$ is a small positive constant proportional to $\mu\gamma$, the remaining constant factors being absorbed into $\mu$ and $\rho$. The last term represents the zero attractor, which forces near-zero coefficients toward zero and therefore accelerates the convergence process when the majority of the system coefficients are nearly zero, as in a sparse system. Note that without the zero-attractor term the equation reduces to the normal QLMS algorithm,
$$\mathbf{w}[n+1] = \mathbf{w}[n] + \mu\,e[n]\,\mathbf{x}^*[n].$$

In this part, simulations are performed for sparse system identification using the proposed algorithm in comparison with the QLMS algorithm. Two different sparse systems are considered, corresponding to scenario one and scenario two below. The input signal to the adaptive filter is colored, generated by passing a quaternion-valued white Gaussian signal through a randomly generated filter. The noise is quaternion-valued white Gaussian and is added to the output of the unknown sparse system, with a 30 dB signal-to-noise ratio (SNR) for both scenarios.

For the first scenario, the step size $\mu$ and the coefficient $\gamma$ of the zero attractor are fixed; the unknown sparse FIR filter has non-zero coefficients at the 2nd, 8th, 16th and 31st taps, and the magnitude of its impulse response is shown in Fig. [fig:w]. The learning curve, obtained by averaging over independent runs of each algorithm, is given in Fig. [fig:learning_curve_1], where we can see that the ZA-QLMS algorithm achieves a faster convergence speed than the QLMS algorithm while both reach a similar steady state.

For the second scenario, the length of the unknown FIR filter is reduced, still with the same number of active taps, and different fixed values of the step size and of $\gamma$ are used. The results are shown in Fig. [fig:learning_curve_2]. Again we see that the ZA-QLMS algorithm has a faster convergence speed and, in this specific scenario, has even converged to a lower steady-state error.
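To make the experiments reproducible in spirit, the following Python sketch (ours) implements the ZA-QLMS recursion of eq. (eq:update_weight_vector) on a synthetic sparse system; the filter length, sparsity pattern, signal model, and parameter values are illustrative assumptions, not the exact settings of the scenarios above:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as [a, b, c, d]
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qsign(q, eps=1e-12):
    # component-wise sign function: q/|q| away from zero, 0 at zero
    n = np.linalg.norm(q)
    return q / n if n > eps else np.zeros(4)

rng = np.random.default_rng(0)
L, n_samples = 16, 5000
mu, rho = 0.01, 1e-4                 # step size and zero-attractor strength
w_true = np.zeros((L, 4))            # sparse system: 3 active taps out of 16
w_true[[1, 7, 12]] = rng.standard_normal((3, 4))

x_seq = rng.standard_normal((n_samples + L, 4)) * 0.5  # quaternion-valued input
w = np.zeros((L, 4))
mse = np.zeros(n_samples)
for n in range(n_samples):
    x = x_seq[n:n + L]                                  # tap-input vector x[n]
    d = sum(qmul(w_true[m], x[m]) for m in range(L))    # system output w_o^T x[n]
    d = d + 1e-2 * rng.standard_normal(4)               # additive observation noise
    e = d - sum(qmul(w[m], x[m]) for m in range(L))     # e[n] = d[n] - w^T[n] x[n]
    for m in range(L):
        # w[n+1] = w[n] + mu * e[n] x*[n] - rho * sgn(w[n])
        w[m] = w[m] + mu * qmul(e, qconj(x[m])) - rho * qsign(w[m])
    mse[n] = np.sum(e**2)
# Setting rho = 0 recovers the plain QLMS recursion for comparison.
```

Comparing the learning curves obtained with `rho = 0` and `rho > 0` on such a sparse system reproduces the qualitative effect reported above: the zero attractor speeds up convergence of the many near-zero taps.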
In this paper, a quaternion-valued adaptive algorithm has been proposed for more efficient identification of unknown sparse systems. It is derived by introducing an $l_1$ penalty term into the original cost function, and the resultant zero-attracting quaternion-valued LMS algorithm can achieve a faster convergence rate by incorporating the sparsity information of the system into the update process. Simulation results have been provided to show the effectiveness of the new algorithm.

Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009, pp. 3125-3128.

X. R. Zhang, W. Liu, Y. G. Xu, and Z. W. Liu, "Quaternion-based worst case constrained beamformer based on electromagnetic vector-sensor arrays," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, Vancouver, Canada, May 2013, pp. 4149-4153.

X. R. Zhang, W. Liu, Y. G. Xu, and Z. W. Liu, "Quaternion-valued robust adaptive beamformer for electromagnetic vector-sensor arrays with worst-case constraint," Signal Processing, vol. 104, pp. 274-283, November 2014.

M. B. Hawes and W. Liu, "A quaternion-valued reweighted $l_1$ minimisation approach to sparse vector sensor array design," in Proc. of the International Conference on Digital Signal Processing, Hong Kong, August 2014.

M. D. Jiang, W. Liu, Y. Li, and X. R. Zhang, "Frequency-domain quaternion-valued adaptive filtering and its application to wind profile prediction," in Proc. of the IEEE TENCON Conference, Xi'an, China, October 2013.

M. D. Jiang, W. Liu, and Y. Li, "A general quaternion-valued gradient operator and its applications to computational fluid dynamics and adaptive beamforming," in Proc. of the International Conference on Digital Signal Processing, Hong Kong, August 2014.
Recently, quaternion-valued signal processing has received more and more attention. In this paper, the quaternion-valued sparse system identification problem is studied for the first time, and a zero-attracting quaternion-valued least mean square (LMS) algorithm is derived by considering the $l_1$ norm of the quaternion-valued adaptive weight vector. By incorporating the sparsity information of the system into the update process, a faster convergence speed is achieved, as verified by simulation results.

Keywords: quaternion; sparsity; system identification; adaptive filtering; LMS algorithm.
Understanding the flow inside sessile liquid structures of different shapes is important in a variety of solution-based material deposition and patterning processes. Solvent evaporation, inherent in these processes, is already known to lead to a rich variety of flows. The small length scale and general lack of shape symmetry imply the potential for capillary pressure gradients and corresponding flow phenomena. Finally, the non-instantaneous nature of the formation of these liquid structures adds another element to the flow complexity.

In the linked videos, we investigated the internal flow of inkjet-printed water rivulets of finite length using optical microscopy. Six-millimeter-long rivulets were formed by printing a pre-determined number of drops at a controlled frequency and spatial overlap. Microscope cover slips made of borosilicate glass were surface-coated and used as substrates. Two surface coatings were investigated: S1805 photoresist, with and without KOH etching. Rivulets were printed inside a controlled-humidity chamber at two relative humidity levels. The ambient, ink, and substrate were kept at room temperature. For the fluorescence microscopy videos, the water was seeded with Nile red polystyrene spheres at a fixed volume fraction. We report the experimental details and results for a wider range of printing parameters, ink properties, and substrate surfaces elsewhere.

The two surface coatings resulted in different values of the static advancing and receding contact angles of water and in different rivulet-formation behavior during printing. Water droplets on the photoresist-coated surface exhibited non-zero advancing and receding contact angles, while those on the etched surface exhibited a vanishing receding contact angle. Complete rivulet formation was inhibited on the first surface type due to the non-zero receding contact angle, which allowed capillary-driven contact-line recession and rivulet breakup into individual droplets. The zero receding contact angle on the second surface type prevented rivulet breakup but resulted in the formation of a distinctive bulge at the starting end of the water rivulet. The bulge grows immediately after the coalescence of the first few drops during rivulet formation.

We investigated the flow that causes bulge growth by using optical fluorescence microscopy. During rivulet formation, a distinctly pulsatile axial flow drives fluid away from the terminal end of the growing rivulet, where the printed drops are landing. The frequency of this flow is the same as the drop deposition frequency. It has been hypothesized that the large local mean curvature in the region where drops land causes a capillary pressure gradient along the length of the rivulet. The height- and width-averaged axial velocity was measured and showed that the amplitude of the pulsatile flow decreases with reduced relative humidity in the ambient, all other conditions being fixed. The decreased flow magnitude results in the inhibition of bulge formation.

The video entry to the 2013 Gallery of Fluid Motion shows: a schematic of the flow observation approach; visualization of rivulet breakup during printing onto a substrate with a non-zero receding contact angle (surface type 1); visualization of successful rivulet formation on surface type 2 and consequent bulge growth; a fluorescence microscopy video showing the pulsatile axial flow and the corresponding mean axial speed; and the effect of liquid evaporation on the axial flow and the inhibition of bulge growth.
We used optical microscopy to investigate flows inside water rivulets that were inkjet-printed onto different surfaces and under different ambient conditions. The acquired fluid dynamics videos were submitted to the 2013 Gallery of Fluid Motion.
the hugoniot curves ( see [ 6 ] ) correspond to the linkage between the velocity and the pressure in a shock wave travelling through a material after an impact . by starting from a position at rest , the algebraic equation of such curveshas the form where is the mass density of the material , is its speed of sound , and is a dimensionless constant retated to this material , obtained from experiments . for water , and by solving with respect to , we get where and whose unit is the inverse of an energy .we notice the relation the values of these parameters come from [ 5 ] .1.8ema general one dimension velocity - pressure model , with a constant mass density , is made of a dynamics equation of the form where is the time and is the position along a vertical upwards oriented axis ( with at sea level ) , and a hooke law of the form where for the pressure depending wave velocity , to be identified .these two equations compose a hyperbolic system whose riemann invariants have the form by derivating the expression of from the hugoniot curves , one gets identifying the two expressions and using lead to the formula this reads like a state law for our velocity pressure model .1.8emup to now we were only concerned with homogeneous equations since riemann invariants only exists in this case .we also have to take in account the gravity effects , since the pressure is increasing with the depth of water , and some friction effect since stillness is a stable configuration .we use a friction term of the strickler type as usual in hydraulics , but other choices are possible and will lead to similar results .that way , the dynamics equation is replaced by where is the gravity constant and the friction parameter .the hooke law and the state law are unchanged , and we take in the numerical experiments . the size of the friction parameter is a priori unknown and will be discussed later .we look for a quantity satisfying the transport equation since depends on we get to be compared with the hooke law .we get which is easily solved and gives for the choice the wave velocity can be written as a function of the strain density as we check now the conservation of the momentum we get where we introduce a pressure term , standing as a strain pressure , and we get the conservative equation for the momentum the system made of ( [ tsunami9 ] ) ( [ tsunami8 ] ) has the form of the well known euler equations for gas dynamics with a larger adiabatic coefficient instead of the usual value for gases , and can be handled in the same way , especially for the shock waves .the same rankine hugoniot condition is valid , connecting the velocity of a shock wave to the two states and by first compute the state at rest . in case of stillness the equations ( [ tsunami9 ] ) and ( [ tsunami8 ] ) reduce to since by denoting the strain density at rest, the integration gives recalling that we can use the atmospheric pressue at the surface ( ) and get which corresponds to the geostrophic equilibrium state .1.8emto compute the strain density profile we use the deviation variable and look for linkage of the form as in any with a source term ( see [ 4 ] , or annex below in section 5 ) , where and are constant .since stillness is reached for we have . 
besides , since does not depend on , we have which means that corresponds to the wave velocity , that we name the reference velocity , corresponding to a reference state such that .we compute , for which is always expected , to be introduce into ( [ tsunami8 ] ) which becomes \eta _ { z}\ + gq-\frac{gq_{0}}{c(q_{0})^{2}}\left({c^{2}-a^{2}(\frac{q - q_{0}}{q})^{2}}\right ) \ + \ \frac{kq}{\rho _ { 0}}\frac{a^{2}}{q^{2}}\left({q - q_{0}}\right ) ^{2}=0\ .\ ] ] this equation reduces to \ \eta _ { z}\ + \gq\ + \ \frac{gq_{0}}{c(q_{0})^{2}}\left[{a^{2}\left({\frac{q - q_{0}}{q}}\right ) ^{2}-c^{2}}\right ] \ + \ \frac{k\a^{2}}{\rho _ { 0}\ q}\left({q - q_{0}}\right ) ^{2}\ = \ 0\ .\ ] ] by multiplying by and using we get since this is a differential equation which can be integrated by using standard numerical methods . an increasing profile is expected .since the reference velocity far larger than the coefficient of is always negative in practice .the two last terms are always positive , and the friction term is the predominant one .the term is always negative and is always balanced by the friction term when the friction coefficent is not too small .1.8emthe value of is determined by the strength of the sismic wave at the bottom of the ocean , whose depth is denoted this corresponds to a reference state we have we get whichs gives 1.8emnow we can compute the profile of the stain wave as the solution of ( [ tsunami5 ] ) .figure 1 presents a series of numerical computation tests using the reference value , for a depth meters , that is an increasing of about 1500% above the natural pressure on the bottom of the ocean .2.1emthe velocity field increases from a few meters per second near the bottom to more than 300 meters per second near the front shock wave drawn here , which is here an hypothetic one .the tests performed with too small friction coefficients ( observed here for ) yield decreasing profiles , as expected from the remark above about the size of the friction term .the real front shock wave will progress more slowly than the strain wave , and its amplitude will decrease rapidly , and the effective values of the velocity field will be strongly reduced near the front schock wave . 1.8emthe front shock wave connects the geostrophic equilibrium state , with the velocity , to a value on the strain wave , with the velocity the rankine hugoniot condition ( [ tsunami4 ] ) gives the velocity of this shock wave , which reads here : we have the following result : the shock wave propagates slower than the stain wave , and faster than the local wave speed , that is * proof : * we fix and set and , as the left and right values of the shock wave. 
then the shock velocity reads we shall use some {0},q_{s}[$ ] such that we have where we get which reduces to and is equivalent to which is true since and 1.8emon the other hand , since the expression is an increasing function of for , we get obviously the other inequality.(end of proof ) 1.8emnow we can construct the whole wave , made of a regular part corresponding to a part of the strain wave and a front shock whose position is determined by the rankine hugoniot condition ( [ tsunami6 ] ) intertpreted as a differential equation whose solution gives the position of the shock .figure 2 shows the different positions of the shock wave for different values of the friction coefficient .1.8emthe velocity is equal to that is .the shock on the bottom of the ocean , at the depth corresponds to 15 times the value of the usual geostrophic pressure , thai is this corresponds to strain densities we notice that the variation from to corresponds to an increasing of 11.5 % which is an increasing of about 1400 % of the pressure .the velocity of the wave is computed from the values we notice that the velocities and for more important than .the real profiles are drawn in red , and are shaped as a part of the strain wave profile cut by the slower shock wave .the eliminated parts are drawn in blue .we observe an important difference between the velocites of the different fronts , for several values of the friction coefficient .we denote by the mean depth of the ocean .the wavelength of the saint - venant waves must satisfy a condition of the form where the number of sonic interactions ( back - and - forth ) between the bottom and the surface of the ocean .this condition means that along a horizontal distance of a wavelength , there are at least such sonic interactions .the use of the saint - venant model is as more appropriate as is great . as in [ 2 ] for rogue waves ,we propose to require which implies for example a wavelength greater than for an ocean depth of .a tsunami wave is expected when the condition ( [ tsunami7 ] ) is fullfilled . a linear model for the surface elevation propagationwas proposed in the historical paper [ 1 ] by k.kajiura .1.8emthe numerical tests in section 3 show the effect of the friction , whose effect is to shape up the amplitude of the wave near the front shock . at the same time , this front shock magnitude is eroded progressively as the wave propagates upwards to the surface . 
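Since the Rankine-Hugoniot condition plays the central role in the shock construction above, a short Python sketch may help make it concrete. The exact state law $p(q)$ and the numerical values did not survive extraction, so a generic power law is used here as a placeholder; this is a schematic illustration under stated assumptions, not the authors' computation:

```python
import numpy as np

def p(q, kappa=1.0, gamma=3.0):
    # placeholder state law p(q) = kappa * q**gamma; the exact law and its
    # constants are assumptions here, treated as free parameters
    return kappa * q**gamma

def shock_speed(q_l, u_l, q_r, u_r):
    """Shock speed s from the mass jump condition s [q] = [q u]."""
    return (q_r * u_r - q_l * u_l) / (q_r - q_l)

def momentum_jump_residual(q_l, u_l, q_r, u_r, s, **kw):
    """Residual of s [q u] = [q u^2 + p(q)]; an admissible shock must
    satisfy both jump conditions simultaneously."""
    lhs = s * (q_r * u_r - q_l * u_l)
    rhs = (q_r * u_r**2 + p(q_r, **kw)) - (q_l * u_l**2 + p(q_l, **kw))
    return rhs - lhs

# Example: a front shock connecting the state at rest (u = 0, q = q0)
# to a state (q_s, u_s) on the strain wave; an 11.5% jump in q is used,
# echoing the variation quoted in the text (all numbers illustrative).
q0, qs, us = 1.0, 1.115, 0.02
s = shock_speed(q0, 0.0, qs, us)
print(s, momentum_jump_residual(q0, 0.0, qs, us, s))
```

In a time-stepping code, the same `shock_speed` value would be fed into the position update for the front, integrated as an ordinary differential equation alongside the strain-wave profile.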
since the amplitude of the front shock decreases , the velocity of this front shock decreases too .by following the propagation of a wave with an incidence of angle ( that is only changing into and into ) , we get a longer path to travel with a weaker gravity constant ) and a delayed wave compare to the case withou incidence .this is show on figure 3 .1.8emthe value of the initial amplitude has a capital effect .it must be large enough to get , after erosion by the friction , a remaining wave near the surface which is sufficient to raise up the sea surface and provoke a wave .the physical wave starts as a sperical wave , and propagates according to the incidences .the part with a small incidence will reach the surface later and will help the formation of the water wave .the part with a larger incidence will disappear because of the friction effect .the question of the value of the friction coefficient stays open , since the use of the strain density was never done before .it seems from the numerical tests that the correct values lay between 1 and 10 .too large values provoke a sharp front wave which erodes rapidly and will never reach the surface with a sufficient amplitude to make a wave , which is not expected , since sometimes , tsunamis really occur .we consider a general 2x2 hyperbolic system whose first equation has the form we denote by and the eigenvalues of the flux matrix , which depend on and only .then the general form of the second equation is or by using the notations and is a source term , assumed to be not identically zero .we have the following result : the nonlinear non homogeneous system ( [ tsunami21 ] ) , ( [ tsunami22 ] ) admits non constant local solutions which are also solutions to the linear homogeneous system where is a real constant , with the linkage another real constant .* proof : * we look for local solution with a linkage of the form . then the system becomes since , the second equation becomes which has the form by introducing a real function whose derivative is now , integrating with respect to where an integrating constant which may depend on . 
next , derivating with respect to gives where we set we get and recalling that remains a new derivation with respect to leads to and since the solution is not constant , we get is some constants and next , and we have got for some constant or , locally , , which satisfies to ( [ tsunami23 ] ) , since .(end of proof ) 1.5emthis result is a very general one , since no special hypotheses were needed on the second equation ( [ tsunami22 ] ) .such waves are very common in the nature : water waves such as roll waves , rogue waves , tidal bore waves or also many other waves as reported in [ 2 ] or [ 4 ] .for example the double property of being either a solution to a non linear , non homogeneous systems and a linear homogeneous system provides the linkage between acoustics and gas dynamics in a wind instrument ( see [ 3 ] ) .1.8emin the case of a conservative system , invariant by galilean transform , the only choice of the function is reduced to we easily construct this way the usual saint venant system in hydraulics or the euler equations in gas dynamics .1.8emin section 3 , the state at rest is not , but so we look for a solution with the linkage since at rest .we find a differentiel equation which is more complex than ( [ tsunami24 ] ) .we have used the idea of source term linearization effects on waves as in [ 2],[3 ] and [ 4 ] .other models are developped in [ 1 ] , [ 5 ] and [ 7 ] .the data [ 6 ] were used to valuate the parameters . a .- y.leroux , m .- n.leroux , a mathematical model for rogue waves , using saint venant equations with friction .conservation laws pr .server : www.math.ntnu.no.html
By using the Hugoniot curve of detonics as a Riemann invariant of a velocity-pressure model, we get a conservative hyperbolic system similar to the Euler equations. The only differences are a larger value of the adiabatic constant than the usual one for gas dynamics, and the mass density being replaced by a strain density depending on the pressure. The model is not homogeneous, since it involves a gravity term and a friction term. After the seismic wave reaches the bottom of the ocean, one gets a pressure wave propagating toward the surface, made of a frontal shock wave followed by a regular decreasing profile. Since this regular profile propagates faster than the frontal shock wave, the amplitude of the pressure wave is strongly reduced by the time it reaches the surface. Only in the case of a strong earth tremor is the residual pressure wave still sufficient to generate a water elevation with a wavelength large enough to propagate as a Saint-Venant water wave and to become a tsunami when reaching the shore. We describe the construction of the model and the computation of the wave profile, and discuss whether a wave forms or not.

We propose a model using a constant mass density because variable-mass-density models are often unstable, since a tiny variation of the density always causes a large variation of the pressure. In the model presented here, the transported variable is not the mass density but a new variable, called the strain density, which has the same conservation properties without this drawback of numerical instability.

The first section deals with the construction of the velocity-pressure model, and the new conservative variable, the strain density, is designed in the second section. The new non-homogeneous Euler-like model is then studied in the third section, and one-dimensional numerical computations of the wave profile are reported in the fourth section for different values of the friction coefficient. We conclude with some discussion of whether a tsunami wave emerges or not.
the foreign exchange market ( fx ) , with its daily volume of over five trillions usd in 2009 is by far the world s largest financial market .any other financial market can hardly approach such volume .this market connects international institutions participating in currency exchange transactions all across the world and encompasses essentially everything of what is going on in the world , first of all including economic factors , political conditions and market psychology , all of them constantly changing . also , this market has direct influence on all other markets because any price is expressed in terms of a currency .the large volume makes it virtually impossible to control from outside and there is no friction ( transactions are basically commission free ) .due to time differences fx transactions are performed 24 h a day , 5 days a week with maximum volume between 13:00 and 16:00 gmt , when both american and european markets are open . hence , the fx time series relations represent an exceptionally complex network indeed , and they therefore constitute an especially challenging target of a detailed quantitative analysis . in connection with the almost continuous trading, fx is also much more effective and liquid than other speculative markets .a significance of this market ( an example of globalization ) is even more important , since it became an indicator of a condition of the world s economy . from a physicist s viewpoint fxis a complex system with extremely convoluted time dependencies .the fx effectiveness is intensified by the correlations between exchange rates known as the triangular arbitrage .it is possible only on small time scales and disappears immediately after taking advantage of inconsistent crossrates by the traders . to quantify such correlations we employed the multifractal analysis measuring nonlinear features of time series , in particular their multifractal spectra .especially interesting is a relation between the fractal properties of the exchange rates remaining in the triangular dependency . encapsulating thisrelation - especially of the empirical residuals within the triangle - may shed more light on this so far poorly understood issue .the resulting scale free statistics encodes information about complex interactions in fx .the fx data used in the present analysis include the following six indicative exchange rate pairs : usd / eur , eur / gbp , gbp / usd , jpy / gbp , gbp / chf , and chf / jpy sampled with 1 minute frequency over the period from 21:00 on january 2 , 2004 , until 21:00 on march 30 , 2008 ( 1183 days , 169 weeks ) .the selection of currency pairs and time interval was constrained by availability of the sufficient quality data , so not all important pairs could be included in our analysis . in a consequence , we deal with six synchronized time series of length that can be labeled as , , where , denotes a value of currency expressed in terms of a currency at time . consequently , the corresponding returns over the time period are expressed as let us define residual returns as which are expected to fulfill the following relation : departures from ( [ triangle ] ) generate the so - called triangular arbitrage opportunities that , whenever detected , may be exploited and in fact are commonly used for the risk free profit generation . in the contemporary markets an execution of the corresponding operation typically takes at most a few seconds and is thus far below the scales of 1 min . 
considered here .of course , this does not yet imply that returns of the corresponding three exchange rates synchronously evaluated at larger time scales obey eq .( [ triangle ] ) exactly .viewed at the same instant of time some mismatch may result just from the time needed ( a few seconds ) to execute the arbitrage opportunity and this introduces some dispersion .it may also reflect some noise component involved .clearly , on the larger time scales such effects of departure from zero in eq .( [ triangle ] ) become less and less relevant relative to the total return . for the two exchange rate triangles , eur - gbp - usd and gbp - chf - jpy , that operate within the six currency exchange rates considered here , the logarithmic returns defined by eq .( [ g ] ) are shown in fig . 1 for all the three exchange rates involved in each triangle accompanied by the sum of these three returns ( the left hand side of eq . ( [ triangle ] ) ) .these characteristics are presented for the shortest time lag here accessible of min ( left panels ) and for min ( right panels ) .as far as the magnitude of the fluctuations is concerned one sees essentially no sizeable difference for the four series so generated for min .the only visibly detectable difference is in the structure of fluctuations ; the sum of the three returns ( lowest row in each panel ) looks more uniform in each case and large fluctuations are less frequent .the situation changes considerably for min .what is natural , the magnitude of returns for the individual exchange rates significantly increases while , at the same time , their sum in each triangle decreases even relative to min .what seems also worth pointing already at this stage is that the fluctuations of returns in the eur / gbp exchange rate look more quiet than for the remaining two pairs in this triangle for both time lags of 1 and 60 min .the same applies to the chf / gbp exchange rate in the second triangle .irrespective of such details , the relation expressed by eq .( [ triangle ] ) definitely introduces a crucial factor that affects the dynamics of the currency exchange network in multiple ways .it first of all sets constraints on the dynamics by effectively reducing the number of independent degrees of freedom . for currencies instead of there in fact exists independent exchange rates .this crucially shapes topology of the corresponding exchange rate network structure .furthermore , some of the exchange rates may be primarily driven by the trade needs or some speculation specific arguments while dynamics of the others may be affected more by the market adjustments towards eliminating or at least reducing the arbitrage opportunities . in the followingwe analyze a few most informative statistical characteristics of time series for the exchange rates listed above .these characteristics determine the sectional organization of the paper . in each sectionsome novel results not discussed previously in literature are presented . 
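Before proceeding, the triangle relation of eq. ([triangle]) can be made concrete with a short Python sketch (ours, using synthetic placeholder series rather than the FX data analyzed here): it computes the three log-returns of eq. ([g]) and their sum, whose departures from zero are the residual returns.

```python
import numpy as np

def log_returns(price, dt=1):
    # g_dt(t) = ln p(t+dt) - ln p(t) for a regularly sampled series
    p = np.asarray(price, dtype=float)
    return np.log(p[dt:]) - np.log(p[:-dt])

rng = np.random.default_rng(1)
n = 10_000
ab = np.exp(np.cumsum(rng.standard_normal(n)) * 1e-4)   # e.g. USD/EUR
bc = np.exp(np.cumsum(rng.standard_normal(n)) * 1e-4)   # e.g. EUR/GBP
# third leg closes the triangle up to a tiny synthetic mispricing
ca = 1.0 / (ab * bc) * np.exp(1e-6 * rng.standard_normal(n))  # e.g. GBP/USD

for dt in (1, 10, 60):
    resid = log_returns(ab, dt) + log_returns(bc, dt) + log_returns(ca, dt)
    typical = np.abs(log_returns(ab, dt)).mean()
    print(dt, np.abs(resid).mean(), typical)  # residual vs typical leg return
```

With this toy construction, the absolute size of the residual is roughly independent of the time lag while the typical single-leg return grows with it, so the relative weight of the triangle violation declines with increasing time scale, mirroring the behavior discussed below.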
in section 2 we show that although the distribution of returns for the individual exchange rates can be approximated ( similar to the returns from other financial markets ) by the -gaussian distributions , the residual signals clearly do not .section 3 is devoted to an analysis of temporal correlations and detection of repeatable patterns of market activity .we found that exchange rates exhibit different temporal correlation properties depending on the trading significance of a particular currency pair .next , in section 4 we study multifractal properties of the exchange rates in terms of the singularity spectra and , for the first time , identify their anomalous structure : negative exponents and negative .finally , section 5 deals with the cross - correlation structure of the currency triangles , documenting occurrence of the epps effect , i.e. increase of coupling strength between the exchange rates with increasing sampling time , observed for surprisingly long .one of the most relevant quantitative characteristics of the financial dynamics is the functional form of the distribution of returns .the related - in the past well identified stylized fact - is the so - called inverse cubic power - law which applies to developed stock markets , to some emerging stock markets like the polish market , to the commodity market , as well as to the most traded currency exchange rates in the early 1990s .of course , this type of distribution is lvy - unstable and thus for the sufficiently long time lags the returns distribution is expected to converge towards a gaussian .this convergence , and thus departures from the inverse cubic power - law , has been found to be very slow as a function of . in the more recent data, however , this convergence appears significantly faster and departures from the inverse cubic power - law in the contemporary stock markets can be seen already for min . a formalism that appears attractively compact and economic for describing the two extremes -the inverse cubic as well as the gaussian distributions - including all the intermediate cases is the one based on the generalized nonextensive entropy .accordingly , optimization of the corresponding generalized entropic form under appropriate constraints yields the following -gaussian form for the distribution of probabilities where the constants , , and the -exponential function are defined in appendix . in order to attain a better stability in this kind of the analysis, we prefer to use cumulative form of the distribution ( [ px ] ) - see appendix for the corresponding formulae . as a standard procedure that makes the distributions for different exchange ratesdirectly comparable , we convert of eq .( [ g ] ) into the normalized returns defined as where is the standard deviation of returns over the period . the empirical cumulative distributions for all the exchange rates considered here versus their best fits in terms of the -gaussians ( eq . ( [ pcx ] ) ) are shown in fig . 2 .the left column corresponds to the three exchange rates form the eur - gbp - usd triangle while the right column to the exchange rates from the gbp - chf - jpy triangle .as one can see in all the cases the -gaussians provide a very reasonable representation over the whole span of fluctuations and for the increasing return time - lags of 1 min ., 10 min . 
andsome asymmetry between the left and right wings in the distributions , as expressed by the slightly different values of the corresponding -parameter can be detected .as expected , with the increasing time - lags the -values decrease which reflects an expected ( slow ) convergence to the gaussian ( ) distribution . among the pairs considered here one interesting difference in this respect can be detected however : the decrease of the values with increasing is slower for the intra - european exchange rates ( eur / gbp and gbp / chf ) than for the intercontinental ones .the convergence towards a gaussian distribution is thus slower in the former case .this result is qualitatively similar to an earlier observation based on data from 1992 - 93 ( i.e. long before the introduction of euro ) , which was documented in ref .this indicates that the global forex market is largely stable as regards the statistical distribution of returns .moreover , the degree of convergence towards the gaussian distribution appears to behave similarly as in the contemporary stock markets .it also needs to be noticed that analogously as the recent s&p500 analysis shows a slight departure from the inverse cubic power law ( which in the present formalism of the -gaussians corresponds to ) takes place already for min .perhaps this law is approached more accurately only for the time - lags even smaller than 1 min .it is interesting to look at the distributions of the residual returns as defined by eq .( [ triangle ] ) , since - according to our knowledge - they have not yet been shown in literature .3 shows such distributions for the same time scales : , 10 and 60 min .the -gaussians are now able to fit only the central part of the distributions up to about three mean standard deviations .the corresponding -values only weakly decrease with starting from for min .the tails of the empirical distributions are disparately thicker than expected by the -gaussian model and become even more such for larger .such an effect is in fact visible already in fig . 1 . 
the background fluctuations in their lowest panelssizeably decrease with increasing while at the same time the largest fluctuations remain of the same order as compared to the individual exchange rates shown in the upper panels .these outliers may reflect a longer time needed to balance departures from eq .( [ triangle ] ) resulting from sudden large returns in one of the pairs in the triangle .these characteristics are very similar in both the triangles considered here .the issue of the character of temporal correlations is equally important in the financial context and many related questions still remain open .the simplest measure of the temporal correlations is in terms of the autocorrelation function which for a function is defined as where denotes an average over .the most studied cases in the financial context correspond to the autocorrelation of returns , here represented by , and of the volatility which can be defined as the modulus of returns .4 shows the return autocorrelation as a function of , in the upper panel for the exchange rates from the eur - gbp - usd triangle and in the lower panel for the exchange rates from the gbp - chf - jpy triangle .similarly as for the typical stock market returns , in fact even faster because in all these six cases already for min ., such an autocorrelation is seen to assume values at the null level .this even faster disappearance of the fx return autocorrelation is probably related to a higher liquidity involved .characteristic for all the cases considered here is appearance of the correlation hole of negative autocorrelation for min . which reflects some anti - persistent tendencies on such short time scales .there is no unique explanation of this effect first mentioned in ref . ; it can originate from the divergent opinions of traders about the direction of imminent price changes as well as from certain actions of market - makers and banks .the depths of these holes is different for different pairs even within the same triangle . for the pairs that can be considered leading in the fx dynamics ( gbp / usd , usd / eur , and jpy / gbp ) this depth at mincan be seen to be smaller than for the other ( eur / gbp , gbp / chf , chf / jpy ) pairs .the volatility autocorrelations for the same six pairs of currencies are shown in fig .it is quite obvious , due to the log - log scale used in this figure , that their behavior can well be approximated by the power law time dependance with .this value of the scaling index does not differ from the value which is typical for a majority of stock markets .even more , all the volatility autocorrelations between events that are separated by more than about basic units ( 1 min . ) suddenly drop down and start oscillating between the positive and negative values ( not visible in this figure ) with a decreasing amplitude .this effect has recently been found also for stock markets in .its natural interpretation is that the so - determined time horizon of the power - law volatility autocorrelations corresponds to an average length of either low or high volatility clusters .as far as the fx exclusively characteristics are concerned , one more effect needs to be pointed out based on fig .5 . for the same pairs ( gbp / usd , usd / eur , jpy / gbp ) that above have been indicated as the leading ones in the fx dynamics , the volatility autocorrelation is systematically stronger than for the remaining pairs ( as the relative location of the corresponding lines shows ) . 
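The autocorrelation estimator used throughout this section can be written compactly; in the sketch below (ours), the input is placeholder white noise, for which both correlations are flat, whereas the actual return and volatility series behave as described in Figs. 4 and 5:

```python
import numpy as np

def autocorr(f, max_lag):
    # C(tau) = <(f(t)-<f>)(f(t+tau)-<f>)> / var(f)
    f = np.asarray(f, dtype=float) - np.mean(f)
    var = np.mean(f * f)
    return np.array([np.mean(f[:len(f) - tau] * f[tau:]) / var
                     for tau in range(1, max_lag + 1)])

g = np.random.default_rng(2).standard_normal(100_000)  # placeholder "returns"
c_returns = autocorr(g, 60)            # real FX returns: drop to ~0 within 1 min
c_volatility = autocorr(np.abs(g), 5_000)  # real volatility: slow power-law decay
```

Plotting `c_volatility` on log-log axes against the lag is exactly how the power-law exponent quoted above would be read off for empirical data.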
a somewhat more advanced method to quantify the character of financial temporal correlations is to use a variant of the correlation matrix . in this approach , initiated in ref . , the entries of the corresponding matrix are the correlation coefficients between the time series of returns representing different disconnected time - intervals , like the consecutive trading days or weeks .the structure of eigenvalues and eigenvectors of such a matrix allows then to quantify further characteristics of the temporal correlations .suppose from the time series of length one extracts disconnected series of length .of course , the condition has to be fulfilled . by using such time seriesas rows one forms an matrix .then , the correlation matrix is defined as , where is matrix transpose . by diagonalizing : one obtains the eigenvalues and the corresponding eigenvectors . in the limiting case of entirely random correlations ,the density of eigenvalues is known explicitely , and reads : where with , , and where is equal to the variance of the time series . for a better visualization, each eigenvector can be associated with the corresponding time series by the following expression : thus , these new time series form orthogonal components into which the original signal is decomposed .they reflect distinct patterns of oscillations common to all the time intervals labeled with .these time series can therefore be called the eigensignals .the above methodology on the weekly basis is now applied to the present fx data . our original time series of returns comprise complete weeks counted from sunday 21:00 until friday 22:00 gmt .the length is min . for each pair of currencies and for the residual signals ,the distributions of matrix elements are displayed in fig .as it can be seen , for a majority of the rates the empirical distributions are gaussian - like . only for the most heavily - traded pairs : usd / eur and gbp / usd has a significant number of non - gaussian entries .eigenvalue density for the corresponding correlation matrices for all the six currency exchange rates discussed here , including the residual time series representing departures from the triangle rule , are shown ( histograms ) in fig .the left panel corresponds to the eur - gbp - usd triangle and the right panel to the gbp - chf - jpy one . for comparison , the pure noise distribution - as prescribed ( eq . ( [ rho ] ) ) by the corresponding wishart ensemble of random matrices - is indicated by dashed lines .as one can see , besides some small departures at the edges , the empirical eigenvalue distributions do not differ much from their pure noise counterparts .this fact may indicate that the intra - week exchange rate behaviour does not involve any particularly significant repeatable patterns .this observation applies more to the pairs within the gbp - chf - jpy triangle .the most evident departures between the empirical and the theoretical distributions one observes for the usd / eur and gbp / usd exchange rates . in both these cases ,the two largest eigenvalues stay visibly outside the noise range and thus may carry some system - specific information .this is in agreement with the distributions of matrix entries shown in fig .6 . more insight into this issue can be gained by looking at the corresponding eigensignals as defined by eq .( [ eigensignal ] ) . for each exchange ratefour such eigensignals are shown in fig . 
8 : the ones corresponding to the first three largest eigenvalues and the fourth one which corresponds to an eigenvalue that is embedded deeply in the spectrum .indeed , the cases of the gbp / usd and of the usd / eur exchange rates look most spectacular . in both cases the first two eigensignals and even the third one display large outstanding fluctuations that by a factor of even about 50 surpass the neighbouring ones .their presence documents an enhanced market activity systematically at the same instants of time during the consecutive weeks .interestingly , such instants of time are concentrated more before the weekend than just after it .such special hours are 13.30 gmt and others .what is natural , the eigensignals from the bulk of the spectrum do not display such a kind of activity .another fact that deserves special attention is the eur / gbp exchange rate .even though belonging to the same triangle , the dynamics is equally smooth for all the eigensignals .one may hypothesize that this indicates a different mechanism that governs dynamics of the exchange rates within this pair as compared to the gbp / usd and usd / eur .it seems that in a world in which currency trade is dominated by gbp / usd and usd / eur , the complementary rate eur / gbp can in the first approximation be considered only as a spectator adjusting its value to the behaviour of the in - play rates .this receives additional support from the fact , that on time scales longer than min .considered here , the time series of eur / gbp has fluctuations resembling the ones corresponding to the other exchange rates . from the same perspective, the dynamics within the gbp - chf - jpy triangle looks more tranquil . yet, within the jpy / gbp and gbp / chf pairs one also sees the outlying fluctuations ( though relatively smaller than in the previous case ) at the well recognizable instants of time during the week period .at present , the most compact frame to globally grasp the whole richness of structures and correlations as identified above is - if applicable - in terms of the multifractal spectra .the presence of the long - range nonlinear power law temporal correlations , possibly accompanied by the non - gaussian character of fluctuations , constitute the necessary - likely not sufficient however - ingredients in this respect .furthermore , by now there exists quite convincing collection of evidence that the financial dynamics often carries signatures of multifractality . in this sectionwe therefore examine the multifractal characteristics of all the exchange rates considered in the previous sections . the multifractal detrended fluctuation analysis ( mfdfa ) is the most efficient practical method to quantify multifractality in the financial time series . in mfdfa for an discrete signalone starts with the signal profile , where denotes averaging over all s .then one divides the into non - overlapping segments of length ( ) starting from both the beginning and the end of the signal ( segments total ) . for each segment a local trend is estimated by fitting an order polynomial , which is then subtracted from the signal profile .for the so - detrended signal a local variance in each segment is calculated for each , from to , scale variable .finally , by averaging over all segments one calculates the order fluctuation function : ^{r/2 } \bigg\}^{1/r } , \label{ffunction}\ ] ] where . the relevant power law behavior of the fluctuation function reads : where is a generalized hurst exponent . 
For simple, monofractal signals $h(r)$ is independent of $r$; if, in contrast, $h(r)$ depends on $r$, the signal is multifractal. Then the singularity spectrum can be calculated as
$$f(\alpha) = r\,[\alpha - h(r)] + 1, \tag{falpha}$$
where $\alpha = h(r) + r\,h'(r)$ is the singularity strength. Equivalently, with the commonly used scaling exponent $\tau(r) = r\,h(r) - 1$, the singularity strength is expressed as $\alpha = \tau'(r)$, and then $f(\alpha) = r\,\alpha - \tau(r)$.

In earlier work, some results were shown concerning the requirements needed to reliably determine the multifractal spectra such that potentially spurious effects are eliminated. This in particular concerns the length of the time series and the quantitative characteristics of the temporal correlations that determine the size of the scaling intervals. Since, as shown above, the FX time series develop heavy tails, the range of the index $r$ needs to be appropriately narrow, and we consistently use such a restricted range. The detrending polynomial used is of second order, which, as usual in this kind of analysis, proves to be an optimal choice.

An example of the fluctuation functions for the GBP/USD returns is shown in Fig. 9; for the other exchange rate returns the overall picture looks qualitatively similar. The scaling of the fluctuation functions is quite convincing, and the scaling indices depend on $r$. This, as shown in the inset of the same figure, results in a concave-shaped $\tau(r)$, which is characteristic of conventional multifractals. The reference dashed line in this inset represents $\tau(r)$ calculated from the randomized original time series of GBP/USD returns, i.e., with the data points randomly shuffled, a procedure that entirely destroys the temporal correlations. From the corresponding linear dependence of $\tau(r)$ one straightforwardly identifies a monofractal with all the strength concentrated at a single value of $\alpha$.

Singularity spectra calculated for all the exchange rates are presented in Fig. 10. They are all multifractal, with widths ranging from ca. 0.15 (USD/EUR) to ca. 0.25 (GBP/CHF, CHF/JPY, EUR/GBP) and with their maxima located at around the same position, similarly to the typical stock market cases. The dispersion of the maxima of $f(\alpha)$ is, however, larger within the GBP-CHF-JPY triangle than within EUR-GBP-USD. An even more interesting difference is seen in the shapes of $f(\alpha)$ for the different exchange rates. The majority of them develop an asymmetric $f(\alpha)$, distorted somewhat towards the shapes characteristic of bifractals. The most beautiful and symmetric shape, resembling the model binomial cascade, is developed by the GBP/USD rate (the London-New York cable connection) and, to a lesser extent, by USD/EUR. Interestingly, these two are the leading and most traded exchange rates, and the graphs of their return fluctuations shown earlier look the most erratic among all the exchange rates considered here. A trace of a similar effect can be seen in the second, GBP-CHF-JPY, triangle, to which the discussion returns below.
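As a brief computational aside, the MFDFA procedure of this section (profile, segmentwise polynomial detrending, fluctuation function, and generalized Hurst exponents) can be sketched in Python as follows; the scale grid, the range of the index $r$, and the input series are illustrative choices of ours, not necessarily those used for the results above:

```python
import numpy as np

def mfdfa_h(x, scales, rs, m=2):
    """Generalized Hurst exponents h(r) by MFDFA with order-m detrending."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    N = len(profile)
    F = np.zeros((len(rs), len(scales)))
    for si, s in enumerate(scales):
        # non-overlapping segments counted from both ends of the signal
        starts = list(range(0, N - s + 1, s)) + list(range(N - s, -1, -s))
        t = np.arange(s)
        f2 = []
        for start in starts:
            seg = profile[start:start + s]
            res = seg - np.polyval(np.polyfit(t, seg, m), t)  # local detrending
            f2.append(np.mean(res ** 2))                      # local variance
        f2 = np.asarray(f2)
        for ri, r in enumerate(rs):
            if r == 0.0:   # logarithmic limit of the fluctuation function
                F[ri, si] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                F[ri, si] = np.mean(f2 ** (r / 2.0)) ** (1.0 / r)
    # h(r): slope of log F_r(s) versus log s
    return np.array([np.polyfit(np.log(scales), np.log(F[ri]), 1)[0]
                     for ri in range(len(rs))])

rs = np.linspace(-4.0, 4.0, 17)                     # illustrative r grid
scales = np.unique(np.logspace(2, 4, 15).astype(int))
h = mfdfa_h(np.random.default_rng(3).standard_normal(100_000), scales, rs)
tau = rs * h - 1.0   # then alpha = d tau / d r and f(alpha) = r*alpha - tau
```

For the uncorrelated input used here, `h` is flat near 0.5 (a monofractal); an $r$-dependent `h` on real return series produces the concave $\tau(r)$ and the $f(\alpha)$ spectra discussed in this section.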
there, the most symmetric shape of corresponds to the leading jpy / gbp exchange rate .the degree of symmetry can also be seen to go in parallel with the strength of the volatility autocorrelation ( fig .stronger volatility autocorrelation corresponds to the more symmetric shape of .these effects to some extent resemble the situation encountered in the human electrocardiogram .there the most healthy and at the same time the most erratic case generates the widest and the most symmetric singularity spectrum .11.5 cm 11.5 cm in contrast to the proper exchange rate returns , we do not observe such conventionally interpretable multifractal characteristics for time series of the residual returns .complexity of the processes underlying such signals can be assessed from the scale -parameter dependence of the fluctuation function for different values of .the result based on calculation within the same range of the parameters as before is shown in fig .11a for the eur - gbp - usd triangle and in fig .11b for the gbp - chf - jpy triangle . while for the individual values of the fluctuation functions clearly behave scale free to a similar accuracy as in fig . 9 , the -dependence of the corresponding scaling indices is significantly different .for the positive values of , with increasing their values , the slope ( in the log - log scale ) of systematically redirects its orientation such that becomes negative . forthe negative values of , on the other hand , the slope of - and thus - almost does not depend on which signals a monofractal character of the small fluctuations as these are predominantly filtered by the negative parameters .the resulting is shown in the inset and can be seen to have a profoundly different functional form as compared to the original returns case of figs . 9 and 10 .still , by randomly shuffling the residual returns time series one obtains the same monofractal form of ( dashed lines in the insets to fig .11a and 11b ) as before when the series of returns were shuffled .this signals that the presently observed anomalous functional form of for the residual returns is primarily encoded in the specific form of the temporal correlations in .11.5 cm 11.5 cm the singularity spectra that correspond to the above two cases are shown in fig .they develop essentially only the left wing which corresponds to the positive values of .somewhat related `` left - sided '' multifractals have in fact already been considered in the literature in applications to diffusion limited aggregates ( dla ) and to fully developed turbulence .this may signal further analogies between the fx dynamics and the physics of turbulence in accord and giving more arguments in favor of the conjecture put forward in ref . .furthermore , extends to the negative values of the singularity exponents where , at the edge , even assumes the negative values . 
to our knowledgesuch an anomalous form of multifractality has never been identified before in the context of the financial dynamics .however even such a possibility appears to be implicitly involved in the mandelbrot considerations on the fluid dynamics and already explicitly in more recent statements on the issue of negative critical dimensions .this latter extended study has been motivated by a rigorous demonstration of the presence of negative fractal quantities for the ( conformal invariant ) harmonic measure around a number of incipient percolation clusters .a related indication is that the `` multifractal anomalies '' arise when the system under study behaves canonically - in the statistical physics sense - instead of microcanonically .illustrated by means of the binomial cascade this extends to the situation such that the sum of partitions at each recursion is not preserved exactly but only in the average .several quantitative characteristics seen above indicate that the dynamics associated with the constraint imposed by the fx triangle rule belongs to this category of phenomena . in the real fx dynamicsthe triangle relation expressed by the eq .[ triangle ] is obeyed also only in the average .a complementary interpretation of the negative fractal dimensions is that they describe the missing fluctuations - therefore typically large and thus filtered by the positive parameters - in a studied finite size sample .such fluctuations are thus expected to come into view in another realization of a finite size sample from the same ensemble .it is well - known that time series of returns of different assets traded on the same market are typically cross - correlated .this holds true also for the currency market . unlike other studies before ,here we analyze a specific case of correlations between the exchange rates coupled by the triangle rule .we expect that deviations from the perfect triangle relation can be observed not only by means of the residual returns distributions ( fig . 3 ) but also by means of the eigenvalue spectra of the correlation matrix calculated from the triples of time series corresponding to currency triangles .we can exploit here the fact that our time series were recorded simultaneously .we follow the same procedure of constructing a correlation matrix as above , but now we consider the complete time series of length . for a few different choices of the time scale , we create two matrices of size , each for one currency triangle . due to( [ triangle ] ) , the triangle rule , if fulfilled , implies that has only two non - zero eigenvalues whose sum satisfies the condition : .existence of would thus mean a possibility of a triangular arbitrage . 9.0 cm 9.0 cm 9.0 cm 9.0 cm in fig .13 top panels show functional dependence of the eigenvalues of on for the two considered currency triangles . for the shortest min . , the data clearly does not comply with eq .( [ triangle ] ) and has three non - zero eigenvalues in both cases . although possibilities of the triangular arbitrage , with the today s computer trading , are not expected to last longer than a fraction of a second , in the correlation matrix representation their trace can be seen clearly on much longer time scales .this is because all the exchange rates were sampled precisely at the same time and , thus , the inconsistencies in exchange rates could not be consumed yet . 
since these inconsistencies ,as regards their absolute magnitude , are the same no matter which time scale one considers , their relative magnitude should gradually decline with increasing ( and , therefore , with increasing variance of the unnormalized returns ) .this effect should manifest itself by a declining value of . due to the fact that trace of is independent of , decreasing the level of be associated with an increase of the two remaining eigenvalues .this resembles the well - known epps effect observed on the stock markets .in fact , even the time scales at which saturates ( 50 - 100 min . ) are roughly the same as those found for stocks .the same effect can in general be observed for any triples of the exchange rates not neccessarily forming a triangle , as it is documented by two examples shown in bottom panels of fig . 13 , where the exchange rates are formed from 4 currencies and ,thus , do not constitute any cycle .the only quantitative difference between the eigenvalues of the triangles and the `` non - triangles '' is in the asymptotic magnitude of , which in the former case is zero and in the latter case is small but positive .one therefore sees that the triangle rule implies that the fully - developed couplings among the involved exchange rates are associated with a zero mode of .the origin of the epps effect in the forex market is likely to be similar to its counterpart for the stock market : a finite speed of information spreading among the assets .one possible source is a lack of transaction synchronicity on different assets , which introduces noise - like effects on their correlated evolution .however , this non - synchronicity of trading is probably not the unique cause of the epps effect in the currency market : the trading frequency on this market is much higher than its counterpart on the stock markets , yet the time scales of saturation are comparable on both market types .this suggests that some other factor may play an important role in the development of correlations .indeed , other sources of the epps effect have already been proposed and may be relevant here like the microstructure noise and the discretization error .however , their influence in this case has yet to be assessed .in this paper we analyzed time series of currency exchange rate returns for the two triples of currencies forming triangles : eur - gbp - usd and gbp - chf - jpy .market efficiency requires that cycling through currencies in such triangles must not be profitable except very short time scales , which is reflected in the triangle rule . for the original fx timeseries we find that the main statistical properties of the corresponding returns - their distributions , temporal correlations , and multifractality - are qualitatively similar to those found for other markets .however , we found also some quantitative differences between properties of different exchange rates which may reflect their different significance in the world currency exchange system .we also studied the residual signals consisting of short - time deflections from the perfect no - arbitrage condition .a related interesting observation is that while the proper exchange rate returns are well modeled by the -gaussian distributions , the residual returns develop disproportionately heavier tails . among the most illuminating viewsis the one that can be obtained after diagonalizing the correlation matrices constructed from time series representing different weeks and calculating the corresponding eigensignals , i.e. 
independent components of dynamics associated with repeatable patterns of activity .eigensignals carry system - specific information if one can identify large fluctuations which can be related to some periodic external perturbation of the market ( e.g. economical news releases ) .it occurs that such fluctuations are clearly visible already on 1 min .time scale for heavily - traded cross - rates like usd / eur and gbp / usd , but only on longer time scales for less frequently traded rates like eur / gbp .the same refers to the second triangle , in which more popular rates : jpy / gbp and gbp / chf have more characteristic eigensignals at 1 min .time scale , while the less popular chf / jpy rate has more universal ( noisy ) eigensignals .we argued that this effect reflects the fact that less popular exchange rates play a passive role , tuning their values according to changes in dominant rates as demanded by the triangle rule . a parallel effect is related to the different shapes of the singularity spectra for different currency pairs .in this respect the most symmetric spectrum is observed for the gbp / usd pair , while other currency pairs have spectra which are more asymmetric , especially those from the gbp - chf - jpy triangle .even more intriguing are the signatures of negative singularity exponents and negative singularity spectra for the triangle residual returns .this opens an exciting direction for further investigations towards perhaps establishing closer analogy between the fx dynamics and the phenomenon of turbulence .we also found that some inefficiency of the market allowed for extremely short time scales leads to the emergence of the epps effect , i.e. an increase of couplings between different exchange rates from the same triangle , if going from shorter to longer time scales .our result indicates that , from a point of view of returns , the influence of market inefficiency on cross - correlations among exchange rates can be neglected on time scales longer than , roughly , an hour .an increase of coupling with similar characteristics time scales involved is in fact observed for any triples of exchange rates not necessarily forming a triangle .the -gaussian distribution is defined by : where } { \gamma \left [ \frac{2-q}{1-q } \right ] } \sqrt{\frac{1-q}{\pi } \mathcal{b}_{q } } & ~~{\rm for } & q<1 \\[3 mm ] \frac{\gamma \left(\frac{1}{q-1}\right)}{\gamma \left(\frac{3-q}{2 ( q-1)}\right ) \sqrt{\frac{\pi}{(q-1 ) \mathcal{b}_q } } } & ~~{\rm for } & 1<q<3 \end{array } \right . \\ \bar{\mu } _ { q}= \,\int x\frac{\ \left [ p\left ( x\right ) \right ] ^{q}}{\int \left [ p\left ( x\right ) \right ] ^{q}dx}\ dx\equiv \left\langle x\right\rangle _ { q } , \ \ \ \ \mathcal{b}_{q}=\left [ \left ( 3 - 1\right ) \,\bar{\sigma}_{q}^{2}\right ] ^{-1 } \nonumber\end{aligned}\ ] ] and denotes the -exponential function ^ { \frac{1}{1-q}}. \label{eq}\ ] ] for this distribution asymptotically develops a power law form .in particular , for , on the level of the cumulative distribution , it recovers the inverse cubic power law .this is a particularly useful aspect of the functional form expressed by eq .( [ px ] ) because it at the same time provides a compact form for the probability distribution for any value of . instead of directly using eq .( [ px ] ) it is more practical to convert it to the cumulative form by defining where the + and - signs correspond to the right and left wings of the distribution , respectively . 
by using eq . ( [ px ] ) one obtains a closed - form expression for the cumulative distributions in terms of the gauss hypergeometric function .
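the tail behaviour implied by eq . ( [ px ] ) is easy to verify numerically . the sketch below normalizes the q - exponential density by quadrature instead of the closed - form normalization constant , and checks the asymptotic power law p(x) ~ x^(-2/(q-1)) , which for q = 3/2 recovers the inverse cubic law at the cumulative level ; the parameter values are illustrative .

```python
import numpy as np
from scipy.integrate import quad

def e_q(x, q):
    """q-exponential [1 + (1 - q) x]_+^(1/(1-q)); reduces to exp(x) as q -> 1."""
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian(x, q, B):
    # normalize by quadrature instead of the closed-form C_q given above
    Z, _ = quad(lambda t: e_q(-B * t * t, q), -np.inf, np.inf)
    return e_q(-B * x * x, q) / Z

q, B = 1.5, 1.0
x = np.array([10.0, 100.0, 1000.0])
p = q_gaussian(x, q, B)
# local log-log slope approaches -2/(q-1) = -4, i.e. an inverse cubic cumulative
print(np.log(p[1:] / p[:-1]) / np.log(x[1:] / x[:-1]))
```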
we present a systematic study of various statistical characteristics of high - frequency returns from the foreign exchange market . this study is based on six exchange rates forming two triangles : eur - gbp - usd and gbp - chf - jpy . it is shown that the exchange rate return fluctuations for all the pairs considered are well described by the nonextensive statistics in terms of q - gaussians . there exist some small quantitative variations in the nonextensivity q - parameter values for different exchange rates ( which depend also on the time scales studied ) and this can be related to the importance of a given exchange rate in the world s currency trade . temporal correlations organize the series of returns such that they develop multifractal characteristics for all the exchange rates , though with a varying degree of symmetry of the singularity spectrum . the most symmetric spectrum is identified for the gbp / usd pair . we also form time series of triangular residual returns and find that the distributions of their fluctuations develop disproportionately heavier tails as compared to small fluctuations , which excludes a description in terms of q - gaussians . the multifractal characteristics of these residual returns reveal such anomalous properties as negative singularity exponents and even negative singularity spectra . such anomalous multifractal measures have so far been considered in the literature in connection with diffusion limited aggregation and with turbulence . studying cross - correlations among different exchange rates we find that market inefficiency on short time scales leads to the occurrence of the epps effect on much longer time scales , comparable to the ones for the stock market . although the currency market is much more liquid than the stock markets and has a much larger transaction frequency , the building - up of correlations takes up to several hours , a time that does not differ much from what is observed in the stock markets . this may suggest that non - synchronicity of transactions is not the unique source of the observed effect .
situations where interference is known non - causally at the transmitter but not at the receiver model several useful multiuser communication scenarios . in , costa introduced and studied coding for such situations and called it `` writing on dirty paper '' . dirty paper coding ( dpc ) is now recognized as a powerful notion central to approaching capacity on multiuser channels .lattice - based ideas for dpc were suggested and shown to be capacity - approaching in .recently , many designs of lattice - based dpc schemes have been proposed in .lattice - based schemes typically use cosets of a convolutional code for lattice - quantizing or shaping to minimize the energy of the difference of the coded symbols and the interfering symbols .a part of the message bits is used to choose the specific coset used in the minimization .in addition to the shaping convolutional code , an error correction code needs to be used to obtain coding gain and approach capacity .the main source of complexity in lattice - based dpc designs is combining shaping and coding encoders / decoders at the transmitter / receiver .simple concatenation schemes are not applicable because of the following reasons - outer shaping followed by inner coding results in unshaped parity symbols that increase transmitted energy , while outer coding followed by inner shaping results in a poor inner code that needs to be iteratively decoded at the receiver with the outer code . in , encoding is done on a combined trellis of the source code ( turbo tcq ) and a channel code ( turbo tcm ) . at the receiver , decoding is done for turbo tcm followed by syndrome computation to recover message bits .the transmitter is complex in because of the use of the joint trellis .the dpc method proposed in is similar to that of . in ,multilevel coding is used , and there are different codes for different bits of the symbols . at the receiver , iterations have to be performed between decoders for some of the channel codes and the shaping decoder . in and , shaping follows channel coding and the receiver performs iterations between the shaping and channel decoders . in this work, we propose a lattice - based method that uses a novel combination of a convolutional code for sign - bit shaping and a low density parity check ( ldpc ) code for channel coding .as shown in specific designs and simulations , the method provides good shaping and coding gains at moderate complexity .the main idea for reducing complexity at the receiver is the introduction of a one - codeword delay at the transmitter , and the shaping of symbols from current message bits combined with parity bits from the previous codeword .this enables the ldpc decoder to be run first at the receiver ( with a one - codeword delay ) without any need for iterations with a shaping decoder . as an application ,we use the proposed dpc method to design codes for superposition coding in two - user gaussian broadcast channels . by simulations , we show that rate points outside the time - sharing region are achieved . the rest of the paper is organized as follows . after a brief review of the lattice - based dpc coding method in section [ cls ] , we present the proposed dpc method in section [ ps ] .this is followed by description and simulation of specific designs of dpc codes in section [ sr ] . in section [ bc ] , design of a superposition scheme using the proposed dpc method is described and simulation results are presented . 
concluding remarks are made in section [ conc ] .in a gaussian dirty paper channel , the received symbol vector ] denotes the transmitted vector , ] per symbol , and the interference power is denoted = p_s ] is encoded into a -symbol vector } ] , and we define vectors } ] with bits is mapped to a coset leader of the convolutional code using an inverse syndrome former .note that we need the rate of the convolutional code to be .let the coset chosen by be denoted .the sign - bit vector is chosen from so as to minimize the squared sum ( energy ) of the vector , where is the mmse factor and is the interference vector .that is , the minimization in ( [ eqn : min ] ) is implemented using the viterbi algorithm .the , are determined as follows .an ldpc code is used at the encoder with a systematic encoder .let ] , where is the parity - bit vector for the -th block .the parity - bit vector is delayed by one time step . for the -th block , the bits in $ ] and are rearranged by a permutation to form the vectors , .this permutation is necessary in an implementation of bicm .note that both the shaping and coding objectives have been met at the encoder .the transmitted symbols have minimal energy in the lattice defined by sign - bit shaping using the convolutional code . selected bits in successive blocks of symbols form codewords of the ldpc code . in summary ,the encoder structure achieves dpc shaping and ldpc coding with bit - interleaved modulation .the decoder for the proposed scheme is as shown in fig.[decoder ] .the demapper computes log likelihood ratios ( llrs ) for the bits from the received symbols in .the llrs of the message bits after a delay of one time step , and the llrs of the parity bits are de - interleaved .the llrs of the sign bits after a delay on one time step , and the output llrs of the de - interleaver are given as the input to the ldpc decoder .the ldpc decoder outputs message bits and bits of the sign bit vector of the previous block .now , the -bit sign vector is passed through the syndrome former to recover the remaining message bits .the demapper function at the receiver has to calculate llrs taking into account the modulo operation at the encoder .therefore , the received constellation is a replicated version of the -pam constellation used at the transmitter ( assuming that scaling factors have been corrected at the receiver ) .that is , the number of replications is chosen so that the average power of is approximately equal to the total average power , and the bit mapping of the symbol is the same as that for .the llr for the -th bit in the -th symbol is computed according to the constellation using the following formula : since the constellation mapping is nearly gray , iterations with the demapper do not provide significant improvements in coding gain , particularly for large .for simulations , we have taken , , with ; this results in .the constellation mapping is as given in fig .[ 16pam ] .we have chosen a rate-1/2 memory 8 ( 256 state ) non - systematic convolutional code with generator polynomials as the sign - bit shaping code .a non - systematic convolutional code is used to avoid error propagation problems .a randomly constructed irregular ldpc code ( 40000 , 35000 ) of rate with variable node degree distribution : and check node degree distribution is used as the channel code .the overall rate of transmission is seen to be bits per channel use .[ dpc ] shows ber plots over an awgn channel and a dpc channel with interference known at the transmitter .the 
interfering vector was generated at random for different power levels .the plot with interference did not change appreciably for all power levels of interference , and we have provided one plot for illustration .we see that a ber of is achieved at a snr of 19.45db with interference , and at a snr of 19.33 db without interference .we have simulated 1000 blocks of length 40000 to obtain sufficient statistics for a ber of . the awgn capacity at an snr of db for a rate of 3 bits per channel use .this shows that we are 1.46 db away from ideal dirty paper channel capacity .the granular gain is computed from the simulations to be 1.282db , where is the rate before channel coding , and is the transmit power ( obtained through simulations ) . from this , the shaping loss is calculated as follows : so , of the total gap of 1.46 db , we have a shaping gap of 0.2548db , and a coding gap of 1.2052db to capacity .we use the proposed scheme for superposition coding in a two - user gaussian broadcast channel and with .we let and , where is the total transmit power . here , user 2 is coded using dpc considering user 1 as interference .user 1 is shaped using sign - bit shaping and coded using an ldpc code over -pam .[ hbc ] shows a block diagram of the transmitter and receivers .the encoder structure for user 1 is as in fig .[ encoder ] with the interference vector .hence , for user 1 , the shaping coder minimizes the energy of .the demapper at receiver 1 calculates llr for the -th bit in the -th receiver symbol using the following formula . where for represents the _ a priori _ probability of the -pam symbol . at the receiver , we approximate using a gaussian distribution with variance assuming that the distribution of -pam symbols is approximately gaussian .we simulated a two user degraded broadcast channel with and using the proposed scheme with parameters from section [ sr ] .the total transmit power , power for user 1 and power for user 2 required for a bit error rate of ( at both receivers ) are estimated from the simulation and denoted , and , respectively .the snr for receiver 1 is computed as db . since dpc is done for user 2 , the effective snr at receiver 2 is computed as db . comparison with the snr needed for a single user capacity of 3 bits per channel use ( which is 17.99 db ) shows that the total loss for both the users is about 2.4642db .[ 256state1 ] shows the ( 3 , 3 ) rate pair in the capacity region of the two - user gaussian broadcast channel with total transmit power and noise power , , which is defined by , for .we see that the ( 3,3 ) rate point is clearly outside the time - sharing region .in this work , we have proposed a method for designing lattice - based schemes for dirty paper coding using sign - bit shaping and ldpc codes .simulation results show that the proposed design performs 1.46db away from the dirty paper capacity for a block length of at the rate of 3 bits / channel use .this performance is comparable to other results in the literature .however , as discussed in this article , a novel method for combining shaping and coding results in good gains at lesser complexity in our design , when compared to other lattice - based strategies . as an application, we have designed a superposition coding scheme for gaussian broadcast channels that is shown to perform better than time - sharing through simulations . out of the 1.46 db gap to capacity ,about 1.2 db is gap attributed to a sub - optimal choice of the ldpc code . 
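the gap bookkeeping above can be reproduced in a few lines ; the 1.53 db ultimate shaping gain used as a reference is our assumption of the standard sphere - hardening bound , and the small difference from the quoted 0.2548 db presumably reflects rounding in the granular - gain estimate .

```python
import math

R = 3.0                               # bits per channel use
snr_sim = 19.45                       # simulated SNR at BER 1e-5, with interference
snr_cap = 10 * math.log10(2 ** (2 * R) - 1)        # AWGN capacity limit, ~17.99 dB
print(round(snr_sim - snr_cap, 2))                 # total gap to capacity, ~1.46 dB

g_granular = 1.282                                 # granular gain from the simulation
g_ultimate = 10 * math.log10(2 * math.pi * math.e / 12)  # sphere bound, ~1.53 dB
shaping_gap = g_ultimate - g_granular              # ~0.25 dB
print(round(shaping_gap, 3), round(snr_sim - snr_cap - shaping_gap, 3))  # ~1.21 dB coding gap
```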
optimizing the ldpc code will require use of genetic algorithms and asymmetric density evolution , which are topics for future work .t. philosof , u. erez , and r. zamir , `` combined shaping and precoding for interference cancellation at low snr , '' in _ information theory , 2003 . proceedings .ieee international symposium on _, june-4 july 2003 , pp .68. y. sun , y. yang , a. liveris , v. stankovic , and z. xiong , `` near - capacity dirty - paper code design : a source - channel coding approach , '' _ information theory , ieee transactions on _ , vol .55 , no . 7 , pp .30133031 , july 2009 .y. sun , a. liveris , v. stankovic , and z. xiong , `` near - capacity dirty - paper code designs based on tcq and ira codes , '' in _ information theory , 2005 .isit 2005 . proceedings .international symposium on _ , sept .2005 , pp . 184188 .
dirty paper coding ( dpc ) refers to methods for pre - subtraction of known interference at the transmitter of a multiuser communication system . there are numerous applications for dpc , including coding for broadcast channels . recently , lattice - based coding techniques have provided several designs for dpc . in lattice - based dpc , there are two codes - a convolutional code that defines a lattice used for shaping and an error correction code used for channel coding . several specific designs have been reported in the recent literature using convolutional and graph - based codes for capacity - approaching shaping and coding gains . in most of the reported designs , either the encoder works on a joint trellis of shaping and channel codes or the decoder requires iterations between the shaping and channel decoders . this results in high complexity of implementation . in this work , we present a lattice - based dpc scheme that provides good shaping and coding gains with moderate complexity at both the encoder and the decoder . we use a convolutional code for sign - bit shaping , and a low - density parity check ( ldpc ) code for channel coding . the crucial idea is the introduction of a one - codeword delay and careful parsing of the bits at the transmitter , which enable an ldpc decoder to be run first at the receiver . this provides gains without the need for iterations between the shaping and channel decoders . simulation results confirm that at high rates the proposed dpc method performs close to capacity with moderate complexity . as an application of the proposed dpc method , we show a design for superposition coding that provides rates better than time - sharing over a gaussian broadcast channel .
quantum turbulence , as well as classical turbulence , is dominated by reconnection of vortical structures which is much simpler to treat in the framework of quantum fluids rather than in viscous fluids , while leading to similar features such as time asymmetry . despite the fundamental differences between the two forms of turbulence , there are reasons to believe that the understanding of quantum turbulence might shed new light on the understanding of its classical counterpart .quantum fluids dynamics is properly described by the gross pitaevskii equation ( gpe ) is the complex wave function . through the madelung transformation , equationcan be viewed in classical fluid dynamical terms as where denotes density , velocity , pressure , and the so - called quantum stress ( ) .defects in the wave function are interpreted as infinitesimally thin vortices of constant circulation , with healing length .gpe conserves the mass and the energy the main reason for preferring the gpe approach to others for the study of quantum turbulence is that it guarantees a natural dynamics of interacting vortices while resolving fine scales up to the vortex core . on the contrary ,methods based on the inviscid euler equations ( either their direct numerical simulation or vortex filament methods ) are unable to automatically perform vortex reconnections , being forbidden by euler dynamics .the numerical solution of the gpe is normally carried out by employing time splitting fourier methods and by imposing vortices in the form of singular phase defects in a unitary background density , i.e. when . however , these methods assume periodic boundary conditions .solutions which are not periodic must be mirrored in the directions lacking periodicity , thus imposing doubling of the degrees of freedom in each of those directions and a consequent increase of the computational effort .recent developments have shown that reconnections in quantum fluids are strictly related to topological features characterizing the interacting vortex tubes such as writhe , total torsion and intrinsic twist .these quantities depend on the fine details of the curve that describes the vortex centerline ( its third derivative with respect to the curvilinear abscissa is required for computing torsion ) and on the phase of the wavefunction in the neighborhood of the vortex centerline .therefore it is paramount to resort to high resolution numerical simulations of equation , especially in the proximity of the reconnection event . 
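for reference , a plausible reconstruction of the stripped formulas , assuming the standard dimensionless form of the gpe and of the madelung variables used throughout the quantum - fluids literature :

\[
 i\,\partial_t\psi = -\tfrac{1}{2}\nabla^2\psi + |\psi|^2\psi ,\qquad
 \rho = |\psi|^2 ,\qquad \mathbf{v} = \nabla\arg\psi ,
\]
\[
 N[\psi]=\int |\psi|^2\,d\mathbf{x} ,\qquad
 E[\psi]=\int\Bigl(\tfrac{1}{2}|\nabla\psi|^2+\tfrac{1}{2}|\psi|^4\Bigr)\,d\mathbf{x} .
\]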
with the goal of assessing the goodness of time - splitting fourier methods for singular solutions on uniform grids versus time - splitting finite differences on nonuniform grids , we first derive an analytic approximation of a two - dimensional steady state vortex that nullifies the right - hand - side of .then we perform a systematic comparison between the two approaches by measuring the deviation of the numerical solution from the initial condition ( being steady the initial condition should remain preserved ) .finally , we explore the possibility to evaluate the solution obtained by time - splitting fourier methods on nonuniform grids designed to guarantee higher spatial resolution in the proximity of vortex singularities .we seek a two - dimensional , steady - state solution of equation that represents a straight vortex centered at the origin .it is well - known that the classical two - dimensional euler vortex of circulation has azimuthal velocity where is the radius and ] , with boundary conditions and .we solve equation by central second order finite differences with equally spaced discretization points , . given the numerical solution of , the numerical approximation of the density is ^ 2\ ] ] where , .this rescaling provides denser points in the neighborhood of the origin , where they are mostly needed ( more than 95% of the points are in the interval ) .nevertheless , the computation of the initial solution for on a two - dimensional grid , for instance , requires to be interpolated. it would be therefore useful to have an analytic approximation of .however , since it is more convenient to find a pad approximation directly for ^ 2 ] , numerical solution of obtained by second - order finite differences on 5000 equispaced points , and different pad approximations . ] in figure [ fig : rhopade ] we show the comparison between the numerical solution of by employing second - order central finite differences with 5000 points and different pad approximations for .visual inspection confirms that is a poor representation of the solution of equation , especially for . and the numerical solution of with ( equally spaced grid , second - order finite differences , 5000 points ) . ] to appreciate quantitatively the error with respect to the numerical solution , in figure [ fig : errrho ] we report the relative error in a semilog plot .interestingly , the relative error does not reach its maximum close to the origin , meaning that any pad approximation reproduces quite well the behavior of the vortex for . on the other hand ,the maximum relative error is always reached for , i.e. in a region of interest for the numerical simulations that we will perform .widely used schemes for the numerical simulation of the dynamics of are the so - called time - splitting methods and the finite difference time domain methods ( see for a review ) .if we restrict the options to second - order accurate schemes in time , time splitting pseudospectral ( tssp ) methods , time splitting finite difference ( tsfd ) methods and crank nicolson finite difference ( cnfd ) method conserve the mass at the discretized level .however , cnfd is implicit and requires the solution of a coupled nonlinear system at each time step . for this reason we resorted to time splitting methods .we refer the reader to for higher - order time splitting methods . in ( * ?* example 4.1 ) tssp is suggested when the solution is smooth and tsfd otherwise , although the hint comes from a one - dimensional numerical experiment . 
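returning to the steady vortex profile , the two - point boundary value problem can be solved in a few lines ; the ode below , f'' + f'/r - f/r^2 + f ( 1 - f^2 ) = 0 with f(0) = 0 and f(R) = 1 , is the standard unit - winding vortex equation in healing - length units , which we assume is the ( stripped ) equation referred to in the text , and the grid here is coarser than the 5000 points used there .

```python
import numpy as np
from scipy.optimize import fsolve

R, m = 20.0, 400                           # domain [0, R] and grid size (illustrative)
r = np.linspace(0.0, R, m + 2)[1:-1]       # interior nodes
h = r[1] - r[0]

def residual(f):
    fl = np.concatenate(([0.0], f[:-1]))   # boundary condition f(0) = 0
    fr = np.concatenate((f[1:], [1.0]))    # boundary condition f(R) = 1
    d2 = (fl - 2.0 * f + fr) / h ** 2      # central second differences
    d1 = (fr - fl) / (2.0 * h)
    return d2 + d1 / r - f / r ** 2 + f * (1.0 - f ** 2)

f = fsolve(residual, np.tanh(r))           # tanh is a reasonable initial guess
print(f[0] / r[0], f[-1])                  # core slope estimate, and ~1 at the wall
```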
in what follows ,we analyze two approaches : a classical time splitting fourier method and a time splitting _ nonuniform _ finite difference method . in any case ,equation is split into two parts where .the solution of the first equation depends on the space chosen for the discretization and will be described in the next two sections .the second equation can be solved exactly , taking into account that is preserved by the equation .therefore for any in the spatial domain .if we denote by and the two partial numerical solutions , the approximation of , where , can be recovered by the so - called strang splitting equation can be solved exactly in time within the fourier spectral space . a part from the error at machine - precision level coming from the necessary direct and inverse fast fourier transforms ( ffts ) , the only possible considerable error might arise from an insufficient number of fourier modes .this is usually not a big deal when approximating smooth solutions fastly decaying to zero , since spectral order of convergence takes place . for this to happen, the unbounded domain has to be truncated to a computational bounded domain large enough to support the most of a periodic approximation of the solution . however , when simulating the dynamics of vortex solutions not decaying to zero , as in our case where , there are some issues to take into account : the low regularity of the solution at the origin , due both to and to , and lack of periodicity at the boundaries , also considering the usual extension of the computational domain and reflection of the solution ( see ) . in fact , after such a mirroring , the solution takes the same values at opposite boundaries , but its derivatives do not . in order to investigate the accuracy of fourier approximation for vortex solutions, we consider the fourier series expansion of the function obtained by mirroring with respect to the axis and . the final computational domain is therefore , with , for which .the choice of the parameters , and corresponds to a two - dimensional straight vortex as described above .different choices provide more regular functions or functions fastly decaying to zero , for which the derivatives at the boundaries are almost periodic .we compute a reference approximation by an expansion into a series with fourier modes and compare it with expansions ranging between and modes , in the norm . by mirroring ) for different choices of the parameters .only the case of a regular and fast decaying to zero function ( , and , green , upward triangles ) shows the typical spectral rate of convergence . ] for a quite regular and periodic function , corresponding to , and we observe in figure [ fig : regularity ] a typical spectral rate of convergence . 
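a minimal one - dimensional sketch of the strang step : the nonlinear flow is integrated exactly pointwise ( it preserves |psi| ) and the kinetic part exactly in fourier space ; periodic boundary conditions and the dimensionless convention above are assumed .

```python
import numpy as np

def strang_step(psi, dt, dx):
    """One Strang step for i psi_t = -0.5 psi_xx + |psi|^2 psi, periodic BCs."""
    k = 2 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    psi = psi * np.exp(-0.5j * dt * np.abs(psi) ** 2)   # half nonlinear flow (exact)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # full kinetic
    return psi * np.exp(-0.5j * dt * np.abs(psi) ** 2)  # half nonlinear flow (exact)

n, L = 256, 32.0
x = np.arange(n) * L / n
psi = (1 + 0.05 * np.cos(2 * np.pi * x / L)).astype(complex)
m0 = np.sum(np.abs(psi) ** 2) * (L / n)
for _ in range(100):
    psi = strang_step(psi, dt=1e-3, dx=L / n)
print(np.sum(np.abs(psi) ** 2) * (L / n) - m0)   # mass conserved to machine precision
```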
for any other choice of the parameters , which affects the regularity of the density ( , , ) , or the fast decay to zero of the function and its derivatives ( , , ) or the regularity of the phase ( , , ) we observe a strong order reduction .the same reduction occurs for the straight vortex ( , , ) .increasing the number of fourier coefficients so as to gain accuracy is often not an option .in fact , due to the necessary mirroring , this corresponds to a huge growth of the degrees of freedom .moreover , the use of hyperbolic sparse grids ( see , for instance ) is not possible , since the possibility of discarding coefficients and grid points is given only for highly regular solutions .the low regularity of the solutions to be approximated and the needed duplication along axes in order to satisfy at least the periodicity of the values of the solutions suggest to explore the alternative of a finite difference discretization in space .the main advantage of a finite difference approach is that the mirroring of the solution is not required , being the extension of the bounded domain replaced by the imposition of homogeneous neumann boundary conditions . given the low regularity of vortex solutions , we use centered second order finite differences . with the aim of increasing the spatial resolution around the vortex cores and keeping a reasonable degree of freedom , we employ a set of nonuniform grid points ( see , for instance , for locally adaptive finite element discretizations ) . the discretization of the laplace operator in one dimension with nonuniform finite differences on points provides the nonsymmetric matrix 0 & \ldots & 0 & \frac{2}{h_{m-1}^2 } & -\frac{2}{h_{m-1}^2 } \end{bmatrix}\ ] ] where , , .this is not exactly a second order approximation , although a discretization in which and a refinement with yields a first order term in the error decaying faster than the second order term ( see ) .the approximation for the two - dimensional and the three - dimensional cases can be simply obtained by kronecker products with the identity matrix .if we call the corresponding matrix , equation is transformed into the system of ordinary differential equations given the importance of the mass preservation , we investigate this issue for the numerical solution of system . a quadrature formula with positive weights for the computation of the mass writes it can be written as where denotes the transposed conjugate vector of and the matrix with diagonal .we define such that with . if is _ symmetric _, then the solution is an orthogonal matrix and this means that and therefore system preserves the mass at the discrete level if makes symmetric . from the structure of the matrix , it is clear that the vector of trapezoidal weights $ ] gives a matrix such that is symmetric .the extension to in the two - dimensional and three - dimensional cases is trivial and this is enough to get symmetric as well , in fact .we conclude that equation preserves the mass at the discrete level whenever the trapezoidal rule is used as quadrature formula and this is easily extended to any space dimension .system could be solved , for instance , by the crank nicolson scheme which preserves the discrete mass being symmetric ( see ) .this scheme is second order accurate in time , therefore the size of the time step has to be chosen such that the error is smaller than the time splitting error .moreover , crank nicolson scheme requires the solution of a linear system of equations with matrix at each time step . 
although this is not a big deal in one space dimension , since the matrix is tridiagonal , in higher dimensions the discretization yields a large , sparse , complex symmetric matrix .this implies the use of preconditioned krylov solvers for general matrices such as gmres or bicgstab or minimal residual methods for complex symmetric systems ( see ) .iterative methods converge to the solution up to a specified tolerance which therefore influences the mass conservation and the whole accuracy of the result . given these complications ,we prefer to consider a direct approximation of the exact solution nowadays there are several options for the computation of the action of the matrix exponential to a vector .we refer to for a review of _ polynomial _ methods which do not require the solution of linear systems . in this way ,the kinetic linear part is solved exactly in time , as in the fourier spectral method .in section [ sec : pade ] we have derived various approximations of for a straight , two - dimensional vortex , whose wavefunction is . in order to quantitativelycompare the two methods introduced in section [ sec : splitting ] , we measure the preservation of such a steady solution by reporting the relative error with , where is either or , the latter evaluated at any required by linear interpolation of . the origin is excluded since is zero therein .the time step is chosen such that is an integer , where is the final simulation time . in all our experiments , we selected , a reasonable value in quantum fluids simulations .the maximum over the continuum set in the error above is approximated by the maximum over a discrete set which will be specified later .although the preservation of the initial state may seem a trivial test , it is in fact a reliable and necessary experiment in order to validate the effectiveness of the proposed numerical methods .thanks to the reliability of the analytic solution , this test can show the influence of both the spatial approximation and the time splitting error in the numerical discretization of the pde . inwhat follows we will employ either tssp ( fourier ) or tsfd . for a computational grid with grid points in the physical domain of interest, tssp requires a total of degrees of freedom due to mirroring , whereas tsfd requires only degrees of freedom thanks to homogeneous neumann boundary conditions .+ we preliminary test the reliability of the three pad approximations and the numerical solution of equation obtained by central second order finite differences with uniformly distributed discretization points . for the solution of the gpe we employ tssp with fourier basis functions on a uniform two - dimensional computational grid .for this reason , the numerical solution of equation must be interpolated .results are reported in figure [ fig : diffic ] , where the relative error defined by is plotted versus time for different disks .the number of fourier modes is fixed to , i.e. 
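the pieces above fit in a short self - contained sketch : the nonsymmetric nonuniform - grid laplacian with neumann closures , the trapezoidal weights that symmetrize it under the similarity transform , and the exact - in - time kinetic step computed as the action of the matrix exponential , here with scipy s expm_multiply , one of the polynomial methods that avoid linear solves ; the graded grid is an arbitrary example .

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

x = np.cumsum(np.concatenate(([0.0], 0.05 * 1.05 ** np.arange(80))))  # graded grid
m, h = x.size, np.diff(x)
# three-point nonuniform stencil; first and last rows are Neumann closures
sub = np.concatenate((2 / (h[:-1] * (h[:-1] + h[1:])), [2 / h[-1] ** 2]))
dia = np.concatenate(([-2 / h[0] ** 2], -2 / (h[:-1] * h[1:]), [-2 / h[-1] ** 2]))
sup = np.concatenate(([2 / h[0] ** 2], 2 / (h[1:] * (h[:-1] + h[1:]))))
A = sp.diags([sub, dia, sup], offsets=[-1, 0, 1], format="csr")

w = np.concatenate(([h[0] / 2], (h[:-1] + h[1:]) / 2, [h[-1] / 2]))   # trapezoidal
S = (sp.diags(np.sqrt(w)) @ A @ sp.diags(1 / np.sqrt(w))).toarray()
print(np.abs(S - S.T).max())                     # ~ 0: similarity transform symmetric

psi = np.exp(-((x - x[m // 2]) ** 2)).astype(complex)
phi = expm_multiply(0.5j * 0.01 * A, psi)        # exact kinetic step, no linear solves
print(w @ np.abs(psi) ** 2 - w @ np.abs(phi) ** 2)   # discrete mass is preserved
```

the two - dimensional and three - dimensional operators follow by kronecker products of this matrix with the identity , as noted above .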
degrees of freedom .we compare the solution at each time step with the initial condition on the grid nodes within the considered disk .the worst approximation of the steady - state solution is the commonly used pad approximation , whereas turns out to be as accurate as the numerical solution .for this reason , in the following experiments we will consider only .all curves collapse on each other in the case of the largest disk , meaning that the maximum error occurs at the boundaries , mainly due to the non - periodicity of the solution .+ we compare the performance of the finite difference approximation on a uniform versus nonuniform grid .the uniform grid has the same step - size as the grid employed for the fourier approach and reported in figure [ fig : diffic ] .the nonuniform grid is generated by taking into account different constraints . given the smallest step - size at the origin ( in the vortex core ) , we linearly increase the step - size according to in both and and in both positive and negative directions .we choose so as to reach the boundaries exactly , and in order to keep the ratio , where .the number of points of the nonuniform grid is chosen such that the mean value of equals the step - size of the uniform grid .these constraints guarantee a reasonable nonuniform grid .the comparison between the uniform and nonuniform grids is shown in figure [ fig : univsnon ] . in all casesthe initial condition is , thus the error for the uniform grid can be compared directly with that in figure [ fig : diffic ] for the case of pad approximation with ( red squares in both figures ) .we first focus on the results with the same number of points and the same boundaries , i.e. red squares and blue circles in figure [ fig : univsnon ] .the discrete mass variation along time integration is comparable and of order .this confirms the conservation of mass also for the case of nonuniform grid , as discussed in section [ sec : tsfd ] . in the nonuniform casethe error is roughly one order of magnitude smaller than in the uniform case on small disks and for not too large , whereas the curve of the nonuniform case tends to jump onto the uniform one after a certain time as the radius of the disk of interest increases .this suggests the idea that the error arises at the boundaries , where does not exactly fulfill neumann boundary conditions .motivated by this , we have changed the boundaries from to and to check the dependency of the error on the choice of the truncated domain . in doing so ,we have preserved the constraints on the nonuniform grids discussed above , obtaining degrees of freedom for ( upward green triangles ) and degrees of freedom for ( downward black triangles ) . with reference to figure [ fig : univsnon ] ,the domain bounded at is clearly too small and the error is always very large compared to all the other cases . on the other hand , the curves for ( blue circles ) and ( downward black triangles ) behave roughly in the same way up to a certain value of , after which the case consistently show larger errors than the case .this reinforces the claim that the error arises from the borders .now we concentrate on our main goal , which is the comparison between tssp with fourier basis function on uniform grids and tsfd on a nonuniform grid that we fix to , , . 
in order to compare the error defined by for the two methods, we always evaluate the tssp solution on the nonuniform grid points ( spectral solutions can be evaluated everywhere ) .this set of points has the advantage of being denser in the vortex core , where higher spatial resolution is desirable .results are reported in figure [ fig : tsfdvstssp ] , where sp stands for spectral and nfd for nonuniform finite differences . keeping in mind that the spectral fourier approach needs mirroring , i.e.the number of modes in each direction must be doubled , we first choose a number of fourier modes in each direction to make it equal to the number of points of the reference case for nonuniform finite differences ( ) in the physical ( un - mirrored ) domain .the overall behavior of the error for these two cases is comparable : tssp ( red squares ) performs better than tsfd ( blue circles ) for small values of , whereas the opposite happens for intermediate values of . for large the two curves collapse on each other . due to the fact that tssp needs mirroring , i.e. , in figure [ fig : tsfdvstssp ]we explore also the cases with less fourier modes , namely ( upward green triangles ) and ( downward black triangles ) .as observed for the case , in the long term all curves seem to provide similar errors , regardless of the disk radius . on the other hand , for small values of ,the number of degrees of freedom plays a rle in that a larger number of fourier modes ensures smaller errors .it is important to keep in mind that , for what seen in figure [ fig : regularity ] , the tssp fourier approach does not retain the spectral accuracy because of the singular nature of the solution at the origin and the lack of periodicity at the boundaries . as a final remark ,we observe that the error of the fourier solution computed on its own uniform grid , reported in figure [ fig : diffic ] with red squares , is smaller than the error of the fourier solution evaluated on the nonuniform grid , reported in figure [ fig : tsfdvstssp ] with red squares .as expected , from figure [ fig : tsfdvstssp ] we have seen that the smaller the number of fourier modes , the larger the relative error with respect to the initial condition .we wish to check if there exists an upper limit to the maximum resolution of fourier spectral method . for doing so, we increase the number of fourier modes and , proportionally , the number of time steps as suggested in .results are shown in figure [ fig : maxrestssp ] .we observe high accuracy in the core ( see smaller disk , left plot ) for small values of , immediately followed by saturation . in a larger disk ( right plot ) , saturation kicks in almost immediately .the errors reported in figure [ fig : maxrestssp ] suggest that is a reasonable value of fourier modes for the preservation of a two - dimensional quantum vortex .as explained in the introduction , our motivation to explore the nonuniform finite difference approach is based on the need , for the study of vortex reconnections , of high _ local _ spatial resolution and , possibly , accuracy . ) at nonuniform grids.,title="fig : " ] ) at nonuniform grids.,title="fig : " ] instead of increasing the number of fourier modes so as to reach higher _ global _ spatial resolution , one can resort to a tssp method with a reasonable number of modes ( considering that mirroring is needed ) and then evaluate the tssp solution on a nonuniform grid , with denser points where they are needed . 
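the post - processing evaluation amounts to summing the trigonometric interpolant at arbitrary points ; a direct o(nm) sketch follows ( the nfft cited in the text performs the same task in near - linear time ) .

```python
import numpy as np

def eval_fourier(coeffs, L, pts):
    """Evaluate the 1-d trigonometric interpolant given by FFT coefficients
    (grid of n points on [0, L)) at arbitrary points, by direct summation."""
    n = coeffs.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)            # wavenumbers
    return np.exp(1j * np.outer(pts, k)) @ coeffs / n

n, L = 64, 2 * np.pi
xg = np.arange(n) * L / n
c = np.fft.fft(np.exp(np.sin(xg)))                        # smooth periodic test
pts = np.array([0.123, 1.0, 3.21])                        # off-grid points
print(np.abs(eval_fourier(c, L, pts) - np.exp(np.sin(pts))).max())  # ~ 1e-13
```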
in figure[ fig : tsspcoll ] we compare the reasonable case , for which the number of modes is relatively small , but not too small , with different nonuniform grids .we notice that the numerical integration itself is carried out only once and the fourier coefficients of the solution are stored at each time step .the evaluation at the grid points is performed afterward , in the post - processing stage , as many times as desired . moreover , tools like the nonuniform fast fourier transform ( nfft , see ) can be employed for the fast evaluation of trigonometric polynomials at arbitrary point sets .the constant spatial step - size of the tssp method is , whereas stands for the minimum value of the step - size , in the proximity of the origin , for the nonuniform grids . as seen before, there is a substantial difference in the error only for , whereas for larger values of evaluating the fourier solution on a nonuniform grid does not worsen the solution .it is important to note that vortex reconnections , usually , require a dynamics that takes a time of at least .evaluating a tssp solution on a locally refined grid is , thus , a very promising approach to study quantum vortex reconnections .after deriving a new accurate pad approximation for the density distribution of a two - dimensional steady - state vortex , we have used it as the initial condition for the gross pitaevskii equation to test the performance of the time - splitting fourier method .although it can not retain its classical spectral accuracy in space , being as accurate as low - order finite difference on nonuniform grids , it preserves quite well the steady - state solution , especially in the neighborhood of the singularity .the advantage of a post - processing evaluation on arbitrary points makes this approach suited for applications where _ local _ high resolution is required .[ [ section ] ] 10 url # 1`#1`urlprefixhref # 1#2#2 # 1#1 w. f. vinen , an introduction to quantum turbulence , phil .r. soc . a 366 ( 1877 )( 2008 ) 29252933 .m. s. paoletti , d. p. lathrop , quantum turbulence , ann .cond . mat .phys . 2 ( 2011 ) 213234 .c. f. barenghi , l. skrbek , k. r. sreenivasan , introduction to quantum turbulence , proc .usa 111 ( 1 ) ( 2014 ) 46474652 .u. frisch , turbulence , cambridge university press , cambridge , england , 1995 .s. b. pope , turbulent flows , cambridge university press , cambridge , england , 200 .f. hussain , k. duraisamy , mechanics of viscous vortex reconnection , phys .fluids 23 ( 2011 ) 021701(4 ) .s. zuccher , m. caliari , a. w. baggaley , c. f. barenghi , quantum vortex reconnections , phys .fluids 24 ( 125108 ) ( 2012 ) 121 .l. p. pitaevskii , vortex lines in an imperfect bose gas , sov .jetp 13 ( 1961 ) 45154 .gross , hydrodynamics of a superfluid condensate , j. math .phys . 4 ( 1963 ) 195207 .s. zuccher , r. l. ricca , helicity conservation under quantum reconnection of vortex rings , phys .e 92 ( 6 ) ( 2015 ) 061001 .l. kondaurova , v. lvov , a. pomyalov , i. procaccia , structure of a quantum vortex tangle in counterflow turbulence , phys .b 89 ( 2014 ) 014502 .m. d. bustamante , r. m. kerr , 3d euler about a 2d symmetry plane , physica d 237 ( 2008 ) 19121920 .r. hnninen , a. w. baggaley , vortex filament method as a tool for computational visualization of quantum turbulence , proc .usa 111 ( 1 ) ( 2014 ) 46674674 .j. koplik , h. levine , vortex reconnection in superfluid helium , phys .71 ( 9 ) ( 1993 ) 13751379 .a. j. allen , s. zuccher , m. caliari , n. proukakis , n. g. parker , c. f. 
barenghi , vortex reconnections in atomic condensates at finite temperature , phis .a 90 ( 2014 ) 013601 .m. w. scheeler , d. kleckner , d. proment , g. l. kindlmann , w. t. m. irvine , helicity conservation by flow across scales in reconnecting vortex links and knots , proc .usa 111 ( 43 ) ( 2014 ) 1535015355 .n. g. berloff , pad approximations of solitary wave solutions of the gross pitaevskii equation , j. phys . a : math .37 ( 2004 ) 16171632 .s. nazarenko , r. west , analytical solution for nonlinear schrdinger vortex reconnection , j. low temp .. 132 ( 1 ) ( 2003 ) 110 .d. p. meichle , c. rorai , m. e. fisher , d. p. lathrop , quantized vortex reconnection : fixed points and initial conditions , phys .b 86 ( 1 ) ( 2012 ) 014509 .w. bao , y. cai , mathematical theory and numerical methods for bose einstein condensation , kinet .models 6 ( 1 ) ( 2013 ) 1135 .m. thalhammer , m. caliari , c. neuhauser , high - order time - splitting hermite and fourier spectral methods , j. comput .228 ( 3 ) ( 2009 ) 822832 .v. gradinaru , strang splitting for the time - dependent schrdinger equation on sparse grids , siam j. numer .46 ( 1 ) ( 2007 ) 103123 .m. thalhammer , j. abhau , a numerical study of adaptive space and time discretisations for gross pitaevskii equations , j. comput .physics 231 ( 20 ) ( 2012 ) 66656681 .j. h. ferziger , m. peri , computational methods for fluid dynamics , 3rd edition , springer , 2002 .t. choi , minimal residual methods for complex symmetric , skew symmetric , and skew hermitian systems , tech . rep .anl / mcs - p3028 - 0812 , computation institute , university of chicago , chicago , illinois ( 2013 ) .m. caliari , p. kandolf , a. ostermann , s. rainer , comparison of software for computing the action of the matrix exponential , bit 54 ( 1 ) ( 2014 ) 113128 .w. bao , s. jin , p. a. markowich , on time - splitting spectral approximations for the schrdinger equation in the semiclassical regime , j. comput .175 ( 2 ) ( 2002 ) 487524 .j. keiner , s. kunis , d. potts , using nfft 3a software library for various nonequispaced fast fourier transforms , acm trans .software 36 ( 4 ) ( 2009 ) 19:119:30 ._ the case ._ the coefficients of this expansion are already known , however it is instructive to proceed with their derivation in order to understand how it works .we have to compute coefficients , , and , therefore we can use only equations .these equations are obtained by nullifying , respectively , the coefficients of the terms , and in the numerator ( lower - order powers of ) . by nullifying the coefficient of , we get from which . by nullifying the coefficient of , and replacing with the expression above, we get which gives .if we now nullify the coefficient of and replace with and with , we get the following equation clearly , is not acceptable , nor is .the only acceptable value is . as we mentioned before , equationcan not be satisfied exactly , however , an _ a posteriori _ evaluation reveals that the remaining coefficients of are smaller than and monotonically decreasing with . _ the case . _since we have to compute coefficients we need equations , which are obtained by imposing that the coefficients of the terms , , , and must be zero . by nullifying the coefficient of still get the same equation as for , , from which . by nullifying the coefficient of , and replacing with the expression above, we get which is easy to solve for leading to now we collect terms in and impose its coefficient to be zero . 
in this equationwe replace and with the expressions derived above and get the equation which we solve for : then we nullify the coefficient of , substitute all previously found , and , getting the equation which gives finally , we nullify the coefficient of , substitute , , and , and get the equation for this equation must be solved numerically and leads to many real solutions .however the only value that reproduces a physical behavior of for is .it is possible to compute the first derivative and verify that for all . in other words, is a physical , monotonically increasing , approximation of the density due to a two - dimensional quantum vortex .again , the coefficients of that are not zero are , indeed , smaller than and monotonically decreasing with ._ the case ._ now we have 7 coefficients to compute , therefore we need equations , i.e. we need to nullify the coefficients of for . by canceling the term and solving for get the usual expression . by nullifying the term , substituting and solving for get , which is the same expression obtained for . by canceling the term , substituting and found , and solving for we get by canceling the term , substituting , , and solving for we get by canceling the term , substituting , , and , and solving for we get by canceling the term , substituting all known , , , and solving for we get finally by canceling the term , substituting all , previously found , we get an equation for this equation has many real solutions , which can be determined numerically .however , the only value that leads to a physically acceptable for is .after computing all other coefficients and the first derivative , it is straighforward to verify that for all , i.e. is a physical , monotonically increasing , approximation of the density for a two - dimensional quantum vortex . as observed before , the coefficients of that are not zero are smaller than and monotonically decreasing with
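the coefficient - nullification procedure of the appendix can be automated symbolically . the sketch below uses a three - coefficient ansatz that enforces the far - field limit by construction ( our reading of the stripped formulas ) ; substituting f = sqrt(rho) into the vortex ode and dividing by f leaves a rational expression whose lowest numerator coefficients are nullified . several roots appear , and the physical one is selected , as in the text , by requiring a monotone density .

```python
import sympy as sp

r = sp.symbols('r', positive=True)
a1, a2, b1 = sp.symbols('a1 a2 b1')
# ansatz with rho -> 1 as r -> oo built in (assumption consistent with f(oo) = 1)
rho = r**2 * (a1 + a2 * r**2) / (1 + b1 * r**2 + a2 * r**4)
# ode/f for f = sqrt(rho): f'' + f'/r - f/r^2 + f(1 - f^2) = 0 becomes rational
expr = (rho.diff(r, 2) / (2 * rho) - rho.diff(r)**2 / (4 * rho**2)
        + rho.diff(r) / (2 * r * rho) - 1 / r**2 + 1 - rho)
num = sp.Poly(sp.expand(sp.fraction(sp.cancel(sp.together(expr)))[0]), r)
conditions = [c for c in reversed(num.all_coeffs()) if c != 0][:3]
print(sp.solve(conditions, [a1, a2, b1], dict=True))  # keep the monotone-density root
```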
we extensively study the numerical accuracy of the well - known time splitting fourier spectral method for the approximation of singular solutions of the gross pitaevskii equation . in particular , we explore its capability of preserving a steady - state vortex solution , whose density profile is approximated by a very accurate diagonal pad expansion of order 8 , here explicitly derived for the first time . although the fourier spectral method turns out to be only slightly more accurate than a time splitting finite difference scheme , the former is reliable and efficient . moreover , at a post - processing stage , it allows an accurate evaluation of the solution outside grid points , thus becoming particularly appealing when high resolution is needed , such as in the study of quantum vortex interactions . quantum fluids , nonuniform finite differences , time splitting , fourier spectral method
in brief , a bit commitment ( bc ) is the following task that is executed in two steps ( ( a ) and ( b ) below ) by two mistrustful parties , a sender , alice , and a receiver , bob.(a ) commit phase ( c - phase ) : alice chooses a bit ( or ) and commits it to bob .that is , she gives bob a piece of evidence that she has a bit in mind and that she can not change it ( in this case , the commitment is said to be binding ) .bob can not learn the value of the committed bit from that evidence until alice reveals further information ( in this case , the commitment is said to be concealing).(b ) opening phase ( o - phase ) : at a later time , alice opens the commitment .that is , she tells bob the value of and convinces him that it is indeed the genuine bit that she chose during the c - phase .if alice changes the value , it can be discovered by bob .a bc is an important cryptographic primitive with many applications in more sophisticated tasks and is of great theoretical and practical interest .current classical bc protocols are proven secure by invoking some unproven computational assumption ; that is , complexity of some kind of mathematical problems such as the hardness of factoring large integers .after the invention of the quantum computing algorithm that makes the computational assumption totally invalid , it has been brought to many researchers attention whether there exists a bc protocol that is guaranteed secure solely by physical principles . in recent years , mayers , lo , and chau have proven that an unconditionally secure bc is impossible ( no - go theorem for a bc ) under the standard nonrelativistic assumption. however , although their discussions are quite correct , their proofs are a bit formal and nonconstructive .it is not yet clear what prevents us from implementing the unconditionally secure bc protocol . in this paper , we give a constructive proof of the no - go theorem for a bc that would make things more transparent and convincing from the viewpoint of quantum information theory .we clarify why quantum mechanics does not help a quantum bc protocol to achieve more than a classical one does .first , let us consider an honest protocol .the most important point concerning the bc protocol is that alice needs to unveil a value of in the o - phase consistently with information transmitted in the c - phase . from the information - theoretic point of view, this implies that one bit of classical information should be transmitted from alice to bob at the end of the protocol .therefore , when we set and as the amounts of information in a bit transmitted in the c - phase and o - phase , the following identity holds : , the bc protocol is essentially a split transmission of one - bit information in two temporally separated steps : one in the c - phase and the other in the o - phase .only a fraction of the bit information needs to be transmitted in each step .noting this fact , we can formulate the quantum bc protocols reported so far as follows . in order to demand unconditional security, alice reveals to bob quantum information as a piece of evidence of her commitment by transmitting a system , such as a photon or an electron , in the c - phase . in the o - phase , she reveals to bob classical information which consists of the value of and the measurement basis on the system .finally , alice and bob test the consistency between the reported value of and the measurement results of the system . 
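filling in the symbols stripped from eqs . ( [ eq5 ] ) , ( [ eq4 ] ) and ( [ eq6 ] ) , a plausible reconstruction consistent with the surrounding text is

\[
|0^{AB}\rangle=|\bar0^{A}\rangle|\bar0^{B}\rangle ,\qquad
|1^{AB}\rangle=|\bar1^{A}\rangle|\bar1^{B}\rangle ,
\]
\[
|\psi_0^{AB}\rangle=\cos\theta\,|0^{AB}\rangle+\sin\theta\,|1^{AB}\rangle ,\qquad
|\psi_1^{AB}\rangle=-\sin\theta\,|0^{AB}\rangle+\cos\theta\,|1^{AB}\rangle ,
\]
\[
\rho_0^{B}=\cos^2\theta\,|\bar0^{B}\rangle\langle\bar0^{B}|+\sin^2\theta\,|\bar1^{B}\rangle\langle\bar1^{B}| ,\qquad
\rho_1^{B}=\sin^2\theta\,|\bar0^{B}\rangle\langle\bar0^{B}|+\cos^2\theta\,|\bar1^{B}\rangle\langle\bar1^{B}| .
\]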
according to the quantum description of the protocol involving classical communication suggested by tal mor ,let subsystem ( bob s system ) be the system with arbitrary dimensional state space that carries quantum information in the c - phase and subsystem ( alice s system ) be the system with arbitrary dimensional state space that carries classical information in the o - phase. let be the genuine states of the joint system to be prepared by alice according to her choice of . then , eq .( [ eq1 ] ) is equivalent to the condition that and are orthogonal in the joint hilbert space , ; i.e. , , to avoid confusion throughout this paper , we use superscripts to denote the appropriate state space for a state or an operator .according to the protocol , by transmitting a subsystem to bob , alice reveals , in general , nonorthogonal marginal states ( or ) in the c - phase .it is proven in appendix a that from condition ( [ eq2 ] ) , we can always find two mutually orthogonal purifications of in the orthogonal subspace in which the support of the state lies .any set of two orthonormal states in should be represented by two orthonormal states in the two - dimensional subspace spanned by in .furthermore , any such subspace can be defined by a set of two orthonormal states in : an appropriate choice of two sets of orthonormal states and and coefficients and , where makes up a schmidt basis for .therefore , given a set of the mutually orthogonal purifications in , we can always find the following form of schmidt decomposition , \left| \psi _{ 1}^{ab}\right\rangle = -\sin \theta \left| 0^{ab}\right\rangle + \cos \theta \left| 1^{ab}\right\rangle , % \end{array}% \right .\label{eq4}\ ] ] by choosing appropriate schmidt coefficients and absorbing any phase factors in the definition of the bases . as a result of eqs .( [ eq5 ] ) and ( eq4 ) , the marginal states and are commutable and diagonalized simultaneously by schmidt basis as denotes a partial trace over subsystem , and two states orthogonal on ; i.e. , .the forms of and can be freely chosen in the protocol and various complex forms have been proposed to prevent cheating of both parties , but the concrete forms are irrelevant to the subject in the following discussion .according to the model given in sec [ sec2 ] , we will evaluate the performance of alice s and bob s cheating .the purpose of bob s cheating is to obtain as much information as possible about during the c - phase from the marginal states . in the following ,the amount of available information about for bob during the c - phase is evaluated as a measure of his cheating performance . from the protocol agreed by alice and bob ,the states to be prepared by alice are known to them .therefore , bob can calculate the schmidt bases , and , that diagonalize the marginal states and beforehand .bob can perform an optimal measurement for distinguishing and by making use of the orthogonality between and . to describe a measure of the available information about for bob from , consider the fidelity between and . it is given by that and are orthogonal , we can calculate from eq .( eq6 ) as in the representation in which and are diagonal .therefore , eq . ( [ eq6 ] ) gives the smaller the fidelity is , the more bob can distinguish between and correctly ; therefore , he can gain more information about . to confirm this, we consider the quantum error probability which gives the lower limit of error rate for distinguishing and . 
it is given as again that and are orthogonal , it follows from eq .( [ eq6 ] ) that in the representation in which and are diagonal .thus , it follows that us now introduce distinguishability between and as , the larger the distinguishability is , the more bob can distinguish between and correctly .thus , the distinguishability gives a measure of the available information about for bob from .it is easily seen that and satisfy , there is a trade - off relationship between and ; that is , the smaller is , the larger is and the more correctly bob can distinguish and , and vice versa .let us turn to the information - theoretic measure of available information for bob .mutual information between the value of genuine and the value of that is judged from the measurement of and is an appropriate measure from the viewpoint of information theory .when alice chooses the value of commit bit between 0 and 1 with equiprobability , this measure depends only on and and is given by is an entropy function ( in bit ) .the purpose of alice s cheating is to unveil her commit bit at her will in the o - phase while ensuring unveiled does not conflict with bob s measurement of his subsystem revealed by her during the c - phase . from the agreed protocol, alice can calculate the purification given in eqs .( eq4 ) and ( [ eq5 ] ) beforehand . in the following ,her ability to change the commit bit is evaluated for two known cheating strategies as a measure of her cheating performance .this is a strategy which was first proposed by mayers. alice honestly reveals either or in the c - phase by transmitting subsystem of the joint system prepared in the arbitrary purification associated to either or . in the o - phase , by a local unitary operation on subsystem in her hand , she can change the joint state into any purification of her chosen that satisfies . then , according to her necessity , she changes the joint state into the fake states , , the state saturates the upper bound of in eq .( [ eq18 ] ) and is most parallel to the state . she tells bob the basis to be used for his measurement on subsystem that is found from her projection measurement of subsystem by an appropriate basis .bob performs projection measurement on his subsystem according to her instruction and checks her commitment from the consistency between the value of that is unveiled by alice in the o - phase and his measurement results .the fake state given in eq .( [ eq19 ] ) is optimal in mayer s strategy . 
to confirm this, we consider the probability that alice causes and bob finds an inconsistency between the unveiled value of and bob s measured data .it is proven in appendix b that is zero when alice prepares the genuine state or the purification , and she prepares the fake state , where equality holds if and only if lies in the subspace spanned by a set of orthonormal states .applying eq .( [ eq18 ] ) to eq .( [ eq20 ] ) yields state in eq .( [ eq19 ] ) yields the lower bound for the probability that depends only on and .therefore , the fake states in eq .( [ eq19 ] ) give the least possibility of disclosing her cheating to bob and they are optimal for this strategy .the lower limit is a convenient measure of alice s ability to change her commitment in the o - phase .it should be noted that mayer s strategy is asymmetric with respect to the value of that alice unveils in the o - phase .for example , consider alice reveals in the c - phase .then , if she unveils honestly in the o - phase , bob s measured data on his subsystem is perfectly consistent with her disclosure and is zero .conversely , if she wants to unveil , she can cheat bob successfully with the probability preparing the fake state . here ,( [ eq10 ] ) and ( [ eq21 ] ) are used to derive eq .( [ eq22 ] ) .thus , the lower limit depends on that alice unveils in the o - phase .it should be further noted that if .this means that mayer s strategy can be applicable only when .now let us turn to the information - theoretic measure of alice s cheating performance .let be mutual information between the value of that alice unveils and the value of that bob judged from his measurement on his subsystem in the o - phase .taking into account the asymmetry noted in the previous paragraph , we get the upper bound of as a function only of and as follows : , is considered to be a good information - theoretic measure of alice s ability to change her commit bit for this strategy .this is a strategy which was first given by koashi and imoto in the context of quantum key distribution , but later applied to the bc protocol by hardy and kent. according to this strategy , alice reveals in the c - phase by transmitting the subsystem of the joint system prepared in the arbitrary purification of . when she unveils her commitment in the o - phase , she can change the joint state into any purification of satisfying performing a local unitary operation on her subsystem .then , according to her choice of , she changes the joint state into the fake state , for example , so that when , , saturates the upper bound of in eq .( [ eq25 ] ) and is the most parallel to the state . 
she tells bob the basis to be used for his measurement on subsystem that is found from her projection measurement on her subsystem by an appropriate basis .bob performs projection measurement on his system according to her instruction and checks her commitment from the consistency between the value of that is unveiled by alice in the o - phase and his measurement results .it can also be proven from appendix b that the lower bound of the probability in this strategy depends only on and , and it is given by states in eq .( [ eq26 ] ) yield the lower bound .therefore , they are optimal for this strategy .the lower limit gives a convenient measure of alice s ability to change her commitment in the o - phase .in contrast to mayer s strategy , hardy - kent s strategy is symmetric with respect to the value of that alice unveils in the o - phase .the lower limit is independent of her disclosure of .the upper bound of the mutual information for hardy - kent s strategy is written in terms of as is considered to be a good information - theoretic measure of alice s ability to change commit bit in this strategy . to compare the cheating performances of alice and bob for both mayer s and hardy - kent s strategies , we plot the three information theoretic measures , , and in fig . f1 as a function of the fidelity chosen as a common parameter .this figure clearly shows that there is a trade - off relationship between bob s available information in the c - phase ( ) and alice s ability to change commit bit in the o - phase ( ) .it is clear that the sum is bounded ; i.e. , (for ) .this equation is a direct consequence of eq .( [ eq15 ] ) , showing a trade - off relationship between the distinguishability , a measure of bob s information gain in the c - phase , and the fidelity , a measure of alice s ability to change commit bit in the o - phase .therefore , there is a trade - off in the performance of alice s and bob s cheating .figure [ f1 ] also shows that hardy - kent s strategy is superior to mayer s with respect to ability to change commit bit when the value of is large .a secure bc protocol must not allow cheating by either parties , alice or bob . to satisfy this condition , both and should vanish simultaneously . however , fig .[ f1 ] indicates that this requirement is never satisfied because of the trade - off relationship between and .in addition , even if we choose a balanced condition for both parties , , both and are already large enough .therefore , it is concluded that a law of quantum physics does not help to improve the security of the bc protocol .as already proven generally by mayers , lo , and chau , this conclusion should be valid not only for the particular cheating strategies described in this paper but also for any strategies that alice and bob can choose . to understand this conclusion ,consider the entropy of entanglement ( or entanglement in brief ) , which is known to be a unique measure of the amount of entanglement for the pure state. entanglement of the purification is defined as the von neumann entropy of the marginal state of or equivalently as the shannon entropy of the squares of the schmidt coefficients of . 
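as a quick numerical check of this definition ( in the same two - qubit parameterization as the sketch above , which is our assumption for illustration only ) , the von neumann entropy of the marginal state indeed equals the shannon entropy of the squared schmidt coefficients :

```python
import numpy as np

def shannon_bits(p):
    p = p[p > 1e-15]
    return float(-(p * np.log2(p)).sum())

theta = 0.3
psi0 = np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)])  # schmidt form
m = psi0.reshape(2, 2)
rho_b = m.T @ m                                            # marginal on b

entropy_marginal = shannon_bits(np.linalg.eigvalsh(rho_b))
entropy_schmidt = shannon_bits(np.array([np.cos(theta)**2, np.sin(theta)**2]))
print(np.isclose(entropy_marginal, entropy_schmidt))       # the two agree
```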
from eqs .( [ eq4 ] ) , ( [ eq5 ] ) , ( [ eq13 ] ) , and ( [ eq16 ] ) , it is easily calculated as , it should be noted that is equivalent to maximum information available from that is transmitted from alice to bob in the c - phase ; that is , eqs .( [ eq1 ] ) and ( [ eq29 ] ) to ( [ eq31 ] ) , we obtain the inequality , inequality implies two things .first , the performance of alice s cheating , when it is measured by , is bounded by the entanglement of the purification , which is determined only by its marginal state ( see eq .( [ eq30 ] ) ) .second , it is also bounded by the amount of information that is revealed in the o - phase .these implications are reasonable for the following reason .when alice wants to cheat bob , what she can do is restricted to local operation and measurement on the subsystem in her hand after she has revealed by transmitting the subsystem .it is known to be a fundamental law of quantum information processing that entanglement can not be increased if we are allowed to perform only local operations and subselection on the subsystem of a joint system . in this restricted situation , the best she can do to cheat is use the local unitary operation that conserves the entanglement shared in the joint system and keeps the marginal states unchanged .otherwise , the strategy must be by far an optimal one because a fraction of the entanglement must be lost from the joint system and dissipate into the environment during the local operation .under such circumstances , alice can change the information content encoded only in the relative phase between coefficients of each term in the purification , but she can not change the information content encoded in their absolute values . it is the entanglement resource that is responsible for alice s cheating , and there is no cheating strategy that can break the bound given by entanglement as shown in eq .( [ eq32 ] ) .in addition , it is also reasonable that only partial information that is to be revealed in the o - phase can be used for alice s cheating but the partial information already revealed in the c - phase can not .conversely , we must be aware that alice makes use of partial information that is reserved to be revealed in the o - phase as an entanglement resource for cheating .it is worth noting that the present proof can be regarded as a concrete example of the general proof of the no - go theorem for a zero - knowledge - convincing protocol recently given by horodecki et al.horodecki our proof clearly indicates that if alice wants to convince bob that she has a definite value of a commit bit ( which is , of course , _ classical _ information ) in mind in the c - phase , the information provided by her to him in the c - phase has to carry nontrivial information about the commit bit in her mind . 
if the information revealed in the c - phase is independent of her commit bit , alice can always try to cheat by proposing the test which would give some result with certainty and independently of her commit bit in the o - phase .our proof suggests an information - theoretic ground for the no - go theorem of the zero - knowledge - convincing protocol .namely , any protocol with a test message that convinces bob that alice knows some state ( which is , in general , _ quantum _ information ) , the test message has to carry non - zero information about state to prevent alice s cheating .the present proof implies that the conjecture of mayers about two - party secure computation , which states that the symmetric protocol might be possible whereas the asymmetric tasks , such as unidirectional secure computations , would be impossible , is correct. in the unidirectional two - party computation , which allows only one of the two parties to learn the result , both members of the party can be a cheater and security requirements for both members are incompatible .such unidirectional protocols under the standard nonrelativistic assumption are necessarily insecure .we believe that unidirectional quantum communication does not achieve more than classical communication alone in the two - party model .however , it has not yet been proven that no non - trivial cryptographic tasks in the two - party model using bidirectional quantum communication are unconditionally secure . indeed, there are some proposals on the quantum protocols for non - trivial weaker tasks in two - party bidirectional quantum communication such as quantum coin - tossing and quantum gambling. it will still be important to solve the general problem concerning what is possible and what is impossible in two - party secure computation when unproven computational assumptions are abandoned .in conclusion , we have given constructive proof why an unconditionally secure quantum bc is impossible in the light of quantum information theory .the bc protocol is in essence the protocol in which one - bit information is split and revealed in two temporally separated steps : the c - phase and the o - phase .it ensures only a fraction of the bit information is revealed at a time . in the quantum bc protocol ,increasing the information revealed in the c - phase is to bob s advantage ; conversely , increasing the information revealed in the o - phase makes things to alice s advantage .this situation is similar to the classical protocol .furthermore , the protocol that is unbiased to both alice and bob is not secure for both . therefore , it is impossible to design a bc protocol whose security is established solely on the law of quantum physics .in addition , it has been clarified that , alice can make use of the entanglement resource , which is equal to the amount of information reserved to be revealed in the o - phase , to cheat . to prevent alice s cheating ,the information revealed in the c - phase must depend on her commit bit , and it must inevitably carry non - zero information about her commitment . it can be concluded that quantum mechanics itself makes designing an unconditionally secure bc protocol impossible .suppose that the states ( ) of joint system that is to be prepared by alice are mutually orthogonal on the joint space ; i.e. , and commute , they can be diagonalized simultaneously in terms of orthonormal bases and in as follows : and are mutually orthogonal ; i.e. 
, , and and are sets of real eigenvalues of satisfying and .thus , and have orthogonal supports in .marginal states revealed by alice to bob in the c - phase are commutable and , in general , nonorthogonal states . using this representation, we can write them as , we consider mutually orthonormal states and that lie in the subspace to which and belong respectively ; i.e. , , the marginal states for them are the states of subsystem represent the classical information transferred from alice to bob in the o - phase , different states and are orthogonal on the subspace . therefore , noting that the second terms in eq .( [ a5 ] ) vanishes , it is concluded that by choosing and so that is always possible to obtain mutually orthogonal purification of in the subspace in which the support of the state lies .suppose that the state of subsystem is measured to be when of the joint system prepared in the state is subjected to projection measurement by the orthonormal basis for .according to general results of quantum measurement theory , the state of the subsystem is projected onto the pure state eqs .( [ eq4 ] ) and ( [ eq5 ] ) , it is easily seen that , we find that , if and only if the basis is chosen so that the overlap between and and that between and are the same , i.e. , states and become mutually orthogonal . in the quantum bc protocol , alice and bob agree to use the measurement basis on subsystem that has a one - to - one correspondence to a state on through the joint state .she reveals to bob the measurement basis associated with her state in the o - phase , and he measures his subsystem by this basis .now , we consider the probability that alice causes an inconsistency between the value of that is unveiled by her in the o - phase and bob s measured data when alice prepares an honest state .alice projects her subsystem of the joint system prepared in onto a state among the complete orthonormal basis for space .she can perform such a projection on her subsystem at her own free will .correspondingly , the state of bob s system is projected to be , from appendix a , the states and satisfy , we obtain the identity .this identity implies that if bob follows alice s instruction and measures his system by the measurement basis given by her , the value of unveiled by alice in the o - phase is perfectly correlated with bob s measurement result , no matter what alice prepares or .therefore , if bob is honest enough to follow alice s instruction , the probability that bob finds an inconsistency in his data vanishes if alice prepares or .consequently , alice can transmit one bit of classical information to bob with certainty .next , we consider the probability when alice prepares a fake state that lies in joint space . when the joint system prepared in the state subjected to the projection measurement by using the orthonormal basis for and the result is , the state of the subsystem is projected onto the pure state the fidelity between and be .then , the probability that bob finds an inconsistency in his data is given by the condition of eq .( [ b3 ] ) , it follows that is the projector onto two - dimensional subspace in that is spanned by a set of orthonormal states , and is the overlap between the state in and subspace . the equal sign in inequality ( [ b9 ] ) holds if and only if state lies within subspace . from eqs .( [ b1]),([b6]),([b8 ] ) , and ( [ b9 ] ) , we obtain eq . ( [ b10 ] ) to eq .( [ b7 ] ) , we finally obtain , equality holds if and only if the state lies within subspace .g. 
brassard , c. crépeau , r. jozsa , and d. langlois , in _ proceedings of the 34th annual ieee symposium on foundations of computer science _ ( ieee computer society press , los alamitos , california , 1993 ) , pp . 362 - 371 .
we give a comprehensive and constructive proof of the no - go theorem of a bit commitment given by mayers , lo , and chau from the viewpoint of quantum information theory . it is shown that there is a trade - off relation between information acquired by bob during the commitment phase and the ability to change a commit bit by alice during the opening phase . it is clarified that a protocol that is unbiased to both alice and bob can not be , at the same time , secure against both parties . fundamental physical constraints that govern this no - go theorem are also discussed .
consider the problem of motion of two point particles with masses , in the absence of external forces ( the two - body problem ) . here and in what follows , all parameters belonging to the first particle will be labeled by index `` 1 '' and those belonging to the second particle , by `` 2 '' . in order to avoid unnecessary complication of our consideration , we assume the center of mass of the system to be at rest . if the motion of the system is uniform , one can readily come to the corresponding conclusions by introducing a moving coordinate frame . we shall look for the solution to the problem in the approximation used , for instance , in monograph , i.e. , in the nonrelativistic approximation . there is a well known solution to this problem involving the reduced mass ( cf . , e.g. , , 13 or , 12 ) . in this case , analysis of the motion of two particles is replaced by consideration of the motion of one fictitious -particle with a mass ( reduced mass ) , with this -particle assumed to move in a centrally symmetric field with a center at rest placed at the center of mass of the two particles . the magnitude of this field is governed by the force of interaction of the two particles with one another . on finding the law by which the -particle moves , one will be able to readily reconstruct the motion of the real particles and . one can offer another kind of solution . this solution assumes that each particle moves in its own stationary potential well and that for each body separately the energy and angular momentum conservation laws are valid . said otherwise , in place of an analysis of the motion of one -particle in a gravitational potential field associated with particle interaction , one may consider the motion of each real particle in the stationary gravitational potential field created by the other particle . this stationary field differs , however , from the gravitational field produced by the and masses . our task is to find these potential fields for each particle . the potential energy of interaction of two point masses at rest which are at a distance from one another is where is the gravitational constant , and is the vector from particle `` 1 '' to particle `` 2 '' . this expression can be written in another form : here and are the gravitational potentials created by the particles , and the coordinate is the distance between the particles . it is not convenient to use this coordinate when the particles are moving . it is more convenient to use a coordinate which one can reckon from the center of mass . taking this into account , we write the energies of the particles `` 1 '' and `` 2 '' , which are located at points and , similarly to formula ( [ 2 ] ) : here and are the potentials which we must find , and and are the coefficients connecting these potentials with the gravitational potentials generated by the particles and : this formulation suggests that we are describing not the energy of interaction of particles with one another but rather the potential energy of each particle separately , which is referenced to a certain level . as is common practice , we take for the zero level the energy of interaction of a particle with the other particle at infinity . let us turn now to the above potentials . we shall use for this purpose the relations derived for the well known solution involving the reduced mass . we place the origin of coordinates at the center of mass of the system under consideration .
in this case introduce a vector defining the relative positions of the particles we have defined this vector by rather than by as this is done usually , in order not to confuse it with the running coordinate which we reckon from the origin .vector is the same vector which enter in the ( [ 1 ] ) , ( [ 2 ] ) and ( [ 3 ] ) formulas .the equalities ( [ 6 ] ) and ( [ 7 ] ) yield differentiating eqs .( [ 6 ] ) ( [ 8 ] ) with respect to time , we come to similar relations for the point velocities here and are the velocities of the particles under consideration , and is the relative particle velocity .recall that it is the relative quantities and that are invoked in dealing with the problem with the use of reduced mass . and it is the coordinate that is employed to describe the trajectory of the -particle .we can use now eq .( [ 11 ] ) to express the kinetic energy of each particle in terms of the relative velocity : in eqs .( [ 12 ] ) and ( [ 13 ] ) , the kinetic energy is expressed both through the velocities of individual particles and the relative velocity .an analysis of eqs .( [ 12 ] ) and ( [ 13 ] ) suggests that the expressions for the energy written in terms of the velocities of individual particles and through the relative velocity differ in characteristic factors : we are going now to express the potential energy of each particle through its coordinates ( see eqs .( [ 4 ] ) and ( [ 5 ] ) ) , whereas up to now the potential energy of interaction of two particles was defined in terms of a relative parameter , namely , separation distance between the particles ( see eq . ( [ 1 ] ) ) .this suggests that a transition from description of the energy through separation distance between the particles , a relative parameter , to that in terms of the energy of each particle separately should bring about the appearance of factors of the kind of eq .( [ 14 ] ) .furthermore , by the virial theorem ( see , e.g. , , 10 , or , 6 ) , for the coulomb potential well the kinetic energy is related to the potential energy through where the line above denotes averaging over time .for circular motion , this relation is correct without averaging .thus , the potential energy is proportional to the kinetic energy .this all adds up to the following recasting of eqs .( [ 4 ] ) and ( [ 5 ] ) : we obtain and potentials both potentials and form a stationary potential wells placed at the origin in which the real particles and move .each particle , however , moves in its `` own '' potential well , i.e. , the particle moves in the potential well formed by the potential , and the particle , in the potential well created by the potential . the potentials and are naturally nothing more than conventional potentials . 
they have been introduced by convention to describe the motion of the real particles and .the time has come now to check whether the expressions for the potentials and are correct .how can one verify the correctness of these relations ?first , the sum of the potential energies of both particles should be equal to the potential energy ( [ 1 ] ) of interaction of the both particles with one another .summing eqs .( [ 15 ] ) and ( [ 16 ] ) in conjunction with eq .( [ 8 ] ) , we see clearly that this equality is upheld for all and at any moment of time .because each particle moves in its fixed potential well , the laws of conservation of energy and momentum should be met for each particle separately .let us check it .we use eqs .( [ 12 ] ) and ( [ 13 ] ) to find the ratio of the kinetic energies of the two particles whence next we take eqs .( [ 15 ] ) , ( [ 16 ] ) , and ( [ 8 ] ) to find the ratio of the potential energies of the two particles whence summing eqs .( [ 21 ] ) and ( [ 23 ] ) term by term , we come to the total energies and of each particle or , in a more concise form thus , we see that the ratio of the total energies of each particle obeys the equality similar to the relations ( [ 20 ] ) and ( [ 22 ] ) the sum of the total energies of each particle should yield the total energy of the whole system . using equalities ( [ 25 ] )we come to the total is conserved as the energy of a closed system . hence , as follows from equalities ( [ 27 ] ) and ( [ 28 ] ) , the total energies of each of the particles are conserved , i.e. , are coordinate independent thus , each particle moves in its potential well with its own energy .consider now the angular momentum of the system and of the particles separately .we express the angular momentum of particles through the coordinates and the relative velocity . combining eqs .( [ 8 ] ) and ( [ 11 ] ) , we write the angular momenta of the particles as follows = m_1\frac{m_2 ^ 2}{(m_1+m_2)^2}[\mathbf{l}\times\mathbf{v}],\ ] ] = m_2\frac{m_1 ^ 2}{(m_1+m_2)^2}[\mathbf{l}\times\mathbf{v}].\ ] ] as seen from eqs .( [ 30 ] ) and ( [ 31 ] ) , the vectors ] and r=|\mathbf l|= coordinate of vector is , because it is for the `` 2 '' ellipse that the constant . for the vector , the angular coordinate .apart from this , as already mentioned , for the ellipse `` 1 '' the constant . using eqs .( [ 44 ] ) and recalling that the directions of the vectors differ by , we come to express the parameters of the ellipse in terms of those of ellipses `` 1 '' and `` 2 ''. the angular momentum of the system should be the sum of momenta of the particles and equal to the momentum of the -particle , because it is the latter that describes the behavior of the system : recasting expression ( [ 39 ] ) to the form , we substitute in expression ( [ 47 ] ) from relation ( [ 46 ] ) , and from relation ( [ 41 ] ) , and take into account that for system of two particles .the end result is thus we have once again come to the well known result , namely , if we express the parameters of the ellipse along which the -point moves through the parameters of motion of real particles , then for the mass we should take , i.e. , the reduced mass . summing up, we have shown that in the `` two - body problem '' one can operate without using the concept of the reduced mass . 
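the checks above can also be verified numerically . the sketch below integrates the two - body motion with a velocity - verlet scheme and monitors , for each particle separately , the energy in its own well and the angular momentum . since the extracted formulas lost their symbols , the wells are written here in the reconstructed form u_1 = -g m_1 m_2 ^ 3/(m^2 r_1 ) and u_2 = -g m_2 m_1 ^ 3/(m^2 r_2 ) with m = m_1 + m_2 , which is our reading of the derivation above ( it is equivalent to the split u_1 = ( m_2 / m ) u , u_2 = ( m_1 / m ) u ) ; treat the explicit coefficients as a reconstruction , not as the paper s verbatim result :

```python
import numpy as np

G, m1, m2 = 1.0, 3.0, 1.0
M = m1 + m2
# reconstructed single-particle wells (our reading of the derivation above)
U1 = lambda r1: -G * m1 * m2**3 / (M**2 * r1)
U2 = lambda r2: -G * m2 * m1**3 / (M**2 * r2)

# center of mass at rest; initial separation 1, relative speed 1.5
x1, x2 = np.array([-m2 / M, 0.0]), np.array([m1 / M, 0.0])
v1, v2 = np.array([0.0, -1.5 * m2 / M]), np.array([0.0, 1.5 * m1 / M])

def acc(x1, x2):                       # newtonian accelerations
    d = x2 - x1
    r3 = np.linalg.norm(d) ** 3
    return G * m2 * d / r3, -G * m1 * d / r3

dt = 1e-4
a1, a2 = acc(x1, x2)
E1, E2, L1, L2 = [], [], [], []
for _ in range(50000):                 # velocity-verlet integration
    v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2
    x1 += dt * v1;       x2 += dt * v2
    a1, a2 = acc(x1, x2)
    v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2
    E1.append(0.5 * m1 * v1 @ v1 + U1(np.linalg.norm(x1)))
    E2.append(0.5 * m2 * v2 @ v2 + U2(np.linalg.norm(x2)))
    L1.append(m1 * (x1[0] * v1[1] - x1[1] * v1[0]))   # z-component of l1
    L2.append(m2 * (x2[0] * v2[1] - x2[1] * v2[0]))   # z-component of l2

print(np.ptp(E1), np.ptp(E2))          # ~0: each energy is separately conserved
print(np.mean(np.array(E1) / np.array(E2)), m2 / m1)  # energies split m2 : m1
print(np.mean(np.array(L1) / np.array(L2)), m2 / m1)  # momenta split m2 : m1
```

the printed spreads are at the level of the integrator error , confirming that each particle conserves its own energy , while the energy and angular momentum ratios come out as m_2 : m_1 , in agreement with the relations derived above .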
when operating without the reduced mass , however , one should use not the energy of particle interaction ( [ 37 ] ) but rather the energy of each particle , which is reckoned from the zero level . it might seem at first glance that this complicates the solution of the problem . this approach permits one , however , to separate the potential energies and consider separately the motion of each particle in its potential well . we note in conclusion that using the standard approach with the reduced mass requires a two - step procedure , in which one first finds the solution for the reduced mass and , after that , derives from this solution the parameters of motion of the particles of interest . application of the above technique permits one to calculate _ * directly * _ the possible trajectories of _ * each * _ particle . such an approach can also be used in the case when the particles have electric charges and ( in the case under consideration the charges must have opposite signs ) . such an approach can be used only if the velocities of the charges satisfy ( is the light velocity ) . in this case we can neglect the energy of the magnetic field which appears for moving charged particles in comparison with the energy of the electric field . said otherwise , in this case we neglect the vector potential and assume that the particles move in a scalar potential only . l. d. landau and e. m. lifshitz , mechanics , third edition , butterworth - heinemann ( translated from the russian ) , 1976 . i. i. olkhovskii , theoretical mechanics for physicists ( in russian ) , second edition , moscow university press , moscow , 1974 .
in contrast to the well - known solution of the two - body problem through the use of the concept of reduced mass , a solution is proposed involving separation of potentials . it is shown that each of the two point bodies moves in its own stationary potential well generated by the other body , and the magnitudes of these potentials are calculated . it is shown also that for each body separately the energy and the angular momentum laws are valid . the knowledge of the potentials in which the bodies are moving permits calculation of the trajectories of each body without resorting to the reduced mass . * key words : * mechanics , two - body problem , gravitational potential , virial theorem . pacs : 45.50.jf
consider the estimation of a random vector from the measurement model illustrated in figure [ fig : model ] .the random vector , which is assumed to have independent and identically distributed ( i.i.d . )components , is passed through a known linear transform that outputs .the components of are generated by a componentwise transfer function .this work addresses the cases where the distributions and have some unknown parameters , and , that must be learned in addition to the estimation of .such joint estimation and learning problems with linear transforms and componentwise nonlinearities arise in a range of applications , including empirical bayesian approaches to inverse problems in signal processing , linear regression and classification , and , more recently , bayesian compressed sensing for estimation of sparse vectors from underdetermined measurements . also , since the parameters in the output transfer function can model unknown nonlinearities , this problem formulation can be applied to the identification of linear - nonlinear cascade models of dynamical systems , in particular for neural spike responses .when the distributions and are known , or reasonably bounded , there are a number of methods available that can be used for the estimation of the unknown vector . in recent years, there has been significant interest in so - called approximate message passing ( amp ) and related methods based on gaussian approximations of loopy belief propagation ( lbp ) .these methods originate from cdma multiuser detection problems in , and have received considerable recent attention in the context of compressed sensing .see , also the survey article .the gaussian approximations used in amp are also closely related to standard expectation propagation techniques , but with additional simplifications that exploit the linear coupling between the variables and .the key benefits of amp methods are their computational simplicity , large domain of application , and , for certain large random , their exact asymptotic performance characterizations with testable conditions for optimality .this paper considers the so - called generalized amp ( gamp ) method of that extends the algorithm in to arbitrary output distributions ( many original formulations assumed additive white gaussian noise ( awgn ) measurements ) .however , although the current formulation of amp and gamp methods is attractive conceptually , in practice , one often does not know the prior and noise distributions exactly . to overcome this limitation , vila and schniter and krzakala _ et al . _ have recently proposed extension of amp and gamp based on expectation maximization ( em ) that enable joint learning of the parameters along with the estimation of the vector . while simulations indicate excellent performance , the analysis of these methods is difficult .this work provides a unifying analytic framework for such amp - based joint estimation and learning methods .the main contributions of this paper are as follows : * generalization of the gamp method of to a class of algorithms we call _ adaptive gamp _ that enables joint estimation of the parameters and along with vector . 
the methods are computationally fast and general with a potentially large domain of application . in addition , the adaptive gamp methods include the em - gamp algorithms of as special cases . * exact characterization of the asymptotic behavior of adaptive gamp . we show that , similar to the analysis of the amp and gamp algorithms in , the componentwise asymptotic behavior of adaptive gamp can be described exactly by simple scalar _ state evolution _ ( se ) equations . * demonstration of asymptotic consistency of adaptive gamp with maximum - likelihood ( ml ) parameter estimation . our main result shows that when the ml parameter estimation is computed exactly , the estimated parameters converge to the true values and the performance of adaptive gamp asymptotically coincides with the performance of the oracle gamp algorithm that knows the correct parameter values . remarkably , this result applies to essentially arbitrary parameterizations of the unknown distributions and , thus enabling provably consistent estimation on non - convex and nonlinear problems . * experimental evaluation of the algorithm for the problems of learning of sparse priors in compressed sensing and identification of linear - nonlinear cascade models in neural spiking processes . our simulations illustrate the performance gain of adaptive gamp and its asymptotic consistency . adaptive gamp thus provides a computationally efficient method for a large class of joint estimation and learning problems with a simple , exact performance characterization and provable conditions for asymptotic consistency . as mentioned above , the adaptive gamp method proposed here can be seen as a generalization of the em methods in . in , the prior is described by a generic -term gaussian mixture ( gm ) whose parameters are identified by an em procedure . the `` expectation '' or e - step is performed by gamp , which can approximately determine the marginal posterior distributions of the components given the observations and the current parameter estimates of the gm distribution . a related em - gamp algorithm has also appeared in for the case of certain sparse priors and awgn outputs . simulations in show remarkably good performance and computational speed for em - gamp over a wide class of distributions , particularly in the context of compressed sensing . also , using arguments from statistical physics , presents state evolution ( se ) equations for the joint evolution of the parameters and vector estimates and confirms them numerically . as discussed in section [ sec : em - gamp ] , em - gamp is a special case of adaptive gamp with a particular choice of the adaptation functions . therefore , one contribution of this paper is to provide a rigorous theoretical justification of the em - gamp methodology .
in particular , the current work provides a rigorous justification of the se analysis in along with extensions to more general input and output channels and adaptation methods . however , the methodology in is in other ways more general in that it can also study `` seeded '' or `` spatially - coupled '' matrices as proposed in . an interesting open question is whether the analysis methods in this paper can be extended to these scenarios as well . an alternate method for joint learning and estimation has been presented in , which assumes that the distributions on the source and output channels are themselves described by graphical models with the parameters and appearing as unknown variables . the method in , called hybrid - gamp , iteratively combines standard loopy bp with amp methods . one avenue of future work is to see if the methodology in this paper can be applied to analyze the hybrid - gamp methods as well . finally , it should be pointed out that while simultaneous recovery of unknown parameters is appealing conceptually , it is not a strict requirement . an alternate solution to the problem is to assume that the signal belongs to a known class of distributions and to minimize the maximal mean - squared error ( mse ) for the class . this minimax approach was proposed for amp recovery of sparse signals in . although the minimax approach results in estimators that are uniformly good over the entire class of distributions , there may be a significant gap between the mse achieved by the minimax approach and the oracle algorithm that knows the distribution exactly . indeed , this gap was the main justification of the em - gamp methods in . due to its asymptotic consistency , adaptive gamp provably achieves the performance of the oracle algorithm . the paper is organized as follows : in section [ sec : gampreview ] , we review the non - adaptive gamp and the corresponding state evolution equations . in section [ sec : gamp ] , we present adaptive gamp and describe ml parameter learning . in section [ sec : convprop ] , we provide the main theorems characterizing the asymptotic performance of adaptive gamp and demonstrating its consistency . in section [ sec : numex ] , we provide numerical experiments demonstrating the applications of the method . section [ sec : concl ] concludes the paper . a conference version of this paper has appeared in . this paper contains all the proofs , more detailed descriptions and additional simulations . before describing the adaptive gamp algorithm , it is useful to review the basic ( non - adaptive ) gamp algorithm of . consider the estimation problem in fig . [ fig : model ] where the componentwise distributions on the inputs and outputs have some parametric form , [ eq : pxzlam ] p_x(x|_x ) , p_y|z(y|z , _ z ) , where and represent parameters of the distributions and and some parameter sets . the gamp algorithm of can be seen as a class of methods for estimating the vectors and for the case when the parameters and are _ known_. in contrast , the adaptive gamp method that is discussed in section [ sec : gamp ] enables _ joint _ estimation of the parameters and along with the vectors and .
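before reviewing the algorithm itself , it helps to see one concrete estimation function . for a gauss - bernoulli input of the kind used later in the numerical example of section [ sec : numex ] , the scalar mmse estimate of x from an awgn observation r = x + n(0 , tau_r ) has a simple closed form . the sketch below is our own hedged illustration of such a sum - product estimation function ( the names rho and sx2 are ours ) , not code from the paper :

```python
import numpy as np

def npdf(r, var):                      # zero-mean gaussian density
    return np.exp(-0.5 * r**2 / var) / np.sqrt(2.0 * np.pi * var)

def gx_gauss_bernoulli(r, tau_r, rho, sx2):
    """mmse estimate e[x | r] for r = x + n(0, tau_r), where
    x = 0 with prob. 1 - rho and x ~ n(0, sx2) with prob. rho."""
    num = rho * npdf(r, sx2 + tau_r)
    pi = num / (num + (1.0 - rho) * npdf(r, tau_r))   # p(x nonzero | r)
    return pi * r * sx2 / (sx2 + tau_r)               # shrunk posterior mean
```

plugging such a scalar denoiser ( together with its output - channel analogue ) into the linear steps of algorithm [ algo : gamp ] is all that the basic method requires .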
in order to understand how the adaptation works , it is best to describe the basic gamp algorithm as a special case of the more general adaptive gamp procedure . the basic gamp algorithm corresponds to the special case of algorithm [ algo : gamp ] when the _ adaptation functions _ and output fixed values [ eq : hfix ] h^t_z(^t,,^t_p ) = ^t_z , h^t_x(^t,^t_r ) = ^t_x , for some _ pre - computed _ sequence of parameters and . by `` pre - computed '' , we mean that the values do not depend on the data through the vectors , , and . in the oracle scenario , and are set to the true values of the parameters and do not change with the iteration number . the _ estimation functions _ , and determine the estimates for the vectors and , given the parameter values and . as described in , there are two important sets of choices for the estimation functions , resulting in two variants of gamp : * _ sum - product gamp : _ in this case , the estimation functions are selected so that gamp provides a gaussian approximation of sum - product loopy bp . the estimates and then represent approximations of the mmse estimates of the vectors and . * _ max - sum gamp : _ in this case , the estimation functions are selected so that gamp provides a quadratic approximation of max - sum loopy bp , and and represent approximations of the map estimates . the estimation functions of the sum - product gamp are equivalent to scalar mmse estimation problems for the components of the vectors and observed in gaussian noise . for max - sum gamp , the estimation functions correspond to scalar map problems . thus , for both versions , the gamp method reduces the vector - valued mmse and map estimation problems to a sequence of scalar awgn problems combined with linear transforms by and . gamp is thus computationally simple , with each iteration involving only scalar nonlinear operations followed by linear transforms . the operations are similar in form to separable and proximal minimization methods widely used for such problems . appendix [ sec : gampdetails ] reviews the equations for the sum - product gamp . more details , as well as the equations for max - sum gamp , can be found in . in addition to its computational simplicity and generality , a key motivation of the gamp algorithm is that its asymptotic behavior can be precisely characterized when is a large i.i.d . gaussian transform . the asymptotic behavior is described by what is known as a _ state evolution _ ( se ) analysis . by now , there are a large number of se results for amp - related algorithms . here , we review the particular se analysis from , which is based on the framework in . [ as : gamp ] consider a sequence of random realizations of the gamp algorithm , indexed by the dimension , satisfying the following assumptions : * [ as : gauss ] for each , the matrix has i.i.d . components with and the dimension is a deterministic function of satisfying for some as . * [ as : px0 ] the input vectors and initial condition are deterministic sequences whose components converge empirically with bounded moments of order as [ eq : thetaxinit ] _n ( , ^0 ) ( x,^0 ) , to some random vector for some .
see for the precise definition of this form of convergence . * [ as : pyz ] the output vectors and are generated by [ eq : zax ] = , y_i = h(z_i , w_i ) i=1, ,m , for some scalar function and vector representing an output disturbance . it is assumed that the output disturbance vector is deterministic , but empirically converges as [ eq : wlim ] _n w , where is as in assumption [ as : gamp ] ( b ) and is some random variable . we let denote the conditional distribution of the random variable . * the estimation function and its derivative with respect to are lipschitz continuous in at , where is a deterministic parameter from the se equations below . a similar assumption holds for . * the adaptation functions and are set to for some deterministic sequence of parameters and . also , in the estimation steps in lines [ line : zhat ] , [ line : shat ] , and [ line : xhat ] of algorithm [ algo : gamp ] , the values of and are replaced with the deterministic parameters and from the se equations defined below .
in order that we do not repeat ourselves , in algorithm [ algo : se ] , we have written the se equations for adaptive gamp : for non - adaptive gamp , the updates and can be ignored as the values of and are pre - computed . with these definitions , we can state the main result from .[ thm : stateevogamp ] consider the random vectors and generated by the outputs of gamp under assumption [ as : gamp ] .let and be the random vectors in with the parameters determined by the se equations in algorithm [ algo : se ] .then , for any fixed , the components of and converge empirically with bounded moments of order as [ eq : thetalimgamp ] _n _ x^t _ x^t , _z^t _ z^t . where and are given in .in addition , for any , the limits [ eq : taulimgamp ] _ n_ r^t = _ r^t , _p^t = _ p^t , also hold almost surely .the theorem shows that the behavior of any component of the vectors and and their gamp estimates and are distributed identically to a simple scalar equivalent system with random variables , , and .this scalar equivalent model appears in several analyses and can be thought of as a _ single - letter characterization _ of the system . remarkably , this limiting property holds for essentially arbitrary distributions and estimation functions , even ones that arise from problems that are highly nonlinear or noncovex . from the single - letter characterization , one can compute the asymptotic value of essentially any componentwise performance metric such as mean - squared error or detection accuracy .similar single - letter characterizations can also be derived by arguments from statistical physics .[ algo : gamp ] [ line : taup ] [ line : phat ] [ line : lamz ] [ line : zhat ] [ line : shat ] [ line : taus ] [ line : taur ] [ line : rhat ] [ line : lamx ] [ line : xhat ] [ line : taux ] as described in the previous section , the standard gamp algorithm of considers the case when the parameters and in the distributions in are known .the adaptive gamp method proposed in this paper , and shown in algorithm [ algo : gamp ] , is an extension of the standard gamp procedure that enables simultaneous identification of the parameters and along with estimation of the vectors and . the key modification is the introduction of the two _ adaptation functions _ : and . in each iteration , these functions output estimates , and of the parameters based on the data , , , and .we saw the standard gamp method corresponds to the adaptation functions in which outputs fixed values and that do not depend on the data , and can be used when the true parameters are known . for the case when the true parameters are not known , we will see that a simple maximum likelihood ( ml ) can be used to estimate the parameters from the data . to understand how to estimate parameters via the adaptation functions ,observe that from theorem [ thm : stateevogamp ] , we know that the distribution of the components of are distributed identically to the scalar in .now , the distribution of only depends on three parameters , and .it is thus natural to attempt to estimate these parameters from the empirical distribution of the components of . to this end , let be the log likelihood [ eq : phixll ] _x(r,_x,_r,_r ) = p_r(r|_x,_r,_r ) , where the right - hand side is the probability density of a random variable with distribution then , at any iteration , we can attempt to perform a maximum - likelihood ( ml ) estimate + & & = _ _ x _ x _ ( _ r,_r ) s_x(_r^t ) \ { _ j=1^n _ x(r^t_j,_x,_r,_r)}. 
[ eq : hxml ] here , the set is a set of possible values for the parameters . the set may depend on the measured variance . we will see the precise role of this set below . similarly , the components of and are jointly distributed according to the scalar , which depends only on the parameters and . thus , we can define the likelihood where the right - hand side is the joint probability density of with distribution then , we can attempt to estimate via the ml estimate again , the set is a set of possible covariance matrices . it is useful to briefly compare the above ml parameter estimation with the em - gamp method proposed by vila and schniter in and krzakala _ et al . _ in . both of these methods combine the bayesian amp or gamp algorithms with a standard em procedure as follows . first , the algorithms use the sum - product version of the amp / gamp algorithms , so that the outputs can provide an estimate of the posterior distributions on the components of given the current parameter estimates . specifically , at any iteration , define the distribution + & = & p_x(x_j|_x^1 ) . [ eq : phatpost ] for the sum - product amp or gamp algorithms , it is shown in that the se equations simplify so that and , if the parameters were selected correctly . therefore , from theorem [ thm : stateevogamp ] , the conditional distribution should approximately match the distribution for large . if , in addition , we treat and as sufficient statistics for estimating given and , then can be treated as an approximation for the posterior distribution of given the current parameter estimate . some justification for this last step can be found in . using the approximation , we can approximately implement the em procedure to update the parameter estimate via a maximization + & : = & _ _ x _ x _ j=1^n , [ eq : lamem ] where the expectation is with respect to the distribution in . in , the parameter update is performed only once every few iterations to allow to converge to the approximation of the posterior distribution of given the current parameter estimates . in , the parameter estimate is updated every iteration . a similar procedure can be performed for the estimation of .
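to make the ml adaptation step concrete , here is a sketch of an adaptation function for the gauss - bernoulli prior used later in section [ sec : numex ] . it fits ( rho , sx2 ) to the pseudo - data r^t by minimizing the negative log - likelihood of the scalar model r = x + n(0 , tau_r ) ; this assumes the sum - product scaling in which the gaussian channel gain is one and its variance is tau_r , whereas the general ml step in ( [ eq : hxml ] ) also searches over these channel parameters . the function names and the use of a generic local optimizer are our choices :

```python
import numpy as np
from scipy.optimize import minimize

def npdf(r, var):
    return np.exp(-0.5 * r**2 / var) / np.sqrt(2.0 * np.pi * var)

def hx_ml(r, tau_r):
    """fit (rho, sx2) by maximum likelihood on the pseudo-data r^t,
    assuming r = x + n(0, tau_r) with a gauss-bernoulli x."""
    def nll(params):
        rho, sx2 = params
        px = rho * npdf(r, sx2 + tau_r) + (1.0 - rho) * npdf(r, tau_r)
        return -np.sum(np.log(px + 1e-300))
    res = minimize(nll, x0=[0.2, 1.0],                 # l-bfgs-b with bounds
                   bounds=[(1e-3, 1.0), (1e-3, 1e3)])
    return res.x                                       # estimated (rho, sx2)
```

since the maximization may be non - convex , a coarse grid over ( rho , sx2 ) can be used to seed the local search .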
however , at the current time , we can only prove the asymptotic consistency result , theorem [ thm : consistent ] , for the ml adaptation functions and described above .that being said , it should be pointed out that em - gamp update is generally computationally much simpler than the ml updates in and .for example , when is an exponential family , the optimization in is convex .also , the optimizations in and require searches over additional parameters such as and .thus , an interesting avenue of future work is to apply the analysis result , theorem [ thm : consistent ] below , to see if the em - gamp method or some similarly computationally simple technique can be developed which also provides asymptotic consistency .given the distributions in assumption [ as : gamp ] , compute the sequence of parameters as follows : * _ initialization _ set with [ eq : seinit ] _x^0 = ( x , ^0 ) , _x^0 = _ x^0 , where the expectation is over the random variables in assumption [ as : gamp](b ) and is the initial value in the gamp algorithm . * _ output node update : _ compute the variables associated with the output nodes + [ eq : outse ] compute the variables _ p^t & = & _ x^t , _ p^t = _ x^t , + _ z^t & = & h_z^t(p^t , y , _p^t ) , [ eq : lambarzse ] + _ r^t & = & -^-1 , [ eq : taubarrse ] + _ r^t & = & ( _ r^t)^2 , + _ r^t & = & _ r^t , [ eq : alphase ] + where the expectations are over the random variables and is given in . * _ input node update : _ compute + [ eq : inse ] _x^t & = & h_x^t(r^t,_r^t ) , [ eq : lambarxse ] + _ x^1 & = & _ r^t , + ^1_x & = & ( x,^1 ) , + where the expectations are over the random variables in .before proving the asymptotic consistency of the adaptive gamp method with ml adaptation , we first prove the following more general convergence result . [ as : agamp ] consider the adaptive gamp algorithm running on a sequence of problems indexed by the dimension , satisfying the following assumptions : * same as assumption [ as : gamp](a ) to ( c ) with . * for every , * [ thm : stateevo ] consider the random vectors and generated by the outputs of the adaptive gamp under assumption [ as : agamp ] .let and be the random vectors in with the parameters determined by the se equations in algorithm [ algo : se ] .then , for any fixed , the components of and converge empirically with bounded moments of order as [ eq : thetalim ] _ n _ x^t _ x^t , _z^t _ z^t . where and are given in .in addition , for any , the limits [ eq : taulamlim ] also hold almost surely .the result is a natural generalization of theorem [ thm : stateevogamp ] and provides a simple extension of the se analysis to incorporate the adaptation .the se analysis applies to essentially arbitrary adaptation functions .it particular , it can be used to analyze both the behavior of the adaptive gamp algorithm with either ml and em - gamp adaptation functions in the previous section .the proof is straightforward and is based on a continuity argument also used in .we cam now use theorem [ thm : stateevo ] to prove the asymptotic consistency of the adaptive gamp method with the ml parameter estimation described in section [ sec : mladapt ] .the following two assumptions can be regarded as _ identifiability_ conditions .[ def : pxident ] consider a family of distributions , , a set of parameters of a gaussian channel and function .we say that is _ identifiable with gaussian outputs _ with parameter set and function if : * the sets and are compact . 
* for any true " parameters , and , the maximization + & & , [ eq : lamxscore ] is well - defined , unique and returns the true value , .the expectation in is with respect to and .* for every and , , the function is pseudo - lipschitz continuous of order in .[ def : pzident ] consider a family of conditional distributions , generated by the mapping where is some random variable and is a scalar function .let be a set of covariance matrices and let be some function .we say that conditional distribution family is _ identifiable with gaussian inputs _ with covariance set and function if : * the parameter sets and are compact . * for any true " parameter and true covariance ,the maximization + & & , [ eq : lamzscore ] is well - defined , unique and returns the true value , , the expectation in is with respect to and .* for every and , the function is pseudo - lipschitz continuous in of order .in addition , it is continuous in uniformly over and . definitions [ def : pxident ] and [ def : pzident ] essentially require that the parameters and can be identified through a maximization . the functions and can be the log likelihood functions and , although we permit other functions as well , since the maximization may be computationally simpler .such functions are sometimes called _ pseudo - likelihoods_. the existence of a such a function is a mild condition . indeed ,if such a function does not exists , then the distributions on or must be the same for at least two different parameter values . in that case , one can not hope to identify the correct value from observations of the vectors or .[ as : agamp - ml ] let and be families of distributions and consider the adaptive gamp algorithm , algorithm [ algo : gamp ] , run on a sequence of problems , indexed by the dimension satisfying the following assumptions : * same as assumption [ as : gamp](a ) to ( c ) with .in addition , the distributions for the vector is given by for some true " parameter and the conditional distribution of given is given by for some true " parameter .* same as assumption [ as : agamp](c ) . *the adaptation functions are set to and .[ thm : consistent ] consider the outputs of the adaptive gamp algorithm with ml adaptation as described in assumption [ as : agamp - ml ] .then , for any fixed , * the components of and in converge empirically with bounded moments of order as in and the limits hold almost surely . * in addition , if , and the family of distributions , is identifiable in gaussian noise with parameter set and pseudo - likelihood ( see definition [ def : pxident ] ) , then [ eq : lamxcons ] _n ^t_x = _ x^t = ^*_x almost surely . *similarly , if for some , and the family of distributions , is identifiable with gaussian inputs with parameter set and pseudo - likelihood ( see definition [ def : pzident ] ) then [ eq : lamzcons ] _n ^t_z = _ z^t = ^*_z almost surely ._ proof : _ see appendix [ sec : consistentpf ] .the theorem shows , remarkably , that for a very large class of the parameterized distributions , the adaptive gamp algorithm is able to asymptotically estimate the correct parameters .moreover , there is asymptotically no performance loss between the adaptive gamp algorithm and a corresponding oracle gamp algorithm that knows the correct parameters in the sense that the empirical distributions of the algorithm outputs are described by the same se equations .there are two key requirements : first , that the optimizations in and can be computed. 
these optimizations may be non - convex . secondly , that the optimizations are performed over sufficiently large sets of gaussian channel parameters and , so that the se equations can be guaranteed to eventually enter these sets . in the examples below , we will see ways to reduce the search space of gaussian channel parameters . recent results suggest that there is considerable value in learning priors in the context of compressed sensing , which considers the estimation of sparse vectors from underdetermined measurements ( ) . it is known that estimators such as lasso offer certain optimal min - max performance over a large class of sparse distributions . however , for many particular distributions , there is a potentially large performance gap between lasso and the mmse estimator with the correct prior . this gap was the main motivation for , which showed large gains of the em - gamp method due to its ability to learn the prior . here , we illustrate the performance and asymptotic consistency of adaptive gamp in a simple compressed sensing example . specifically , we consider the estimation of a sparse vector from noisy measurements , where the additive noise is random with i.i.d . entries . here , the output " channel is determined by the statistics of , which are assumed to be known to the estimator . so , there are no unknown parameters . as a model for the sparse input vector , we assume the components are i.i.d . with the gauss - bernoulli distribution , [ eq : gaussbern ] x_j ~\ { ll 0 &= 1- , + ( 0,_x^2 ) & = . where represents the probability that a component is non - zero ( i.e. the vector s sparsity ratio ) and is the variance of the non - zero components . the parameters are treated as unknown . in the adaptive gamp algorithm , we use the estimation functions , , and corresponding to the sum - product gamp algorithm . as described in appendix [ sec : gampdetails ] , for the sum - product gamp the se equations simplify so that and . since the noise variance is known , the initial output noise variance obtained by adaptive gamp in algorithm [ algo : gamp ] exactly matches that of oracle gamp . therefore , for , the parameters and do not need to be estimated , and conveniently simplifies to where \times [ 0 , + \infty) ] . to illustrate the asymptotic consistency of the adaptive algorithm , we consider signals of length and . we perform and random trials for the long and short signals , respectively , and plot the average mse of the reconstruction against . as expected , for large , the performance of adaptive gamp is nearly identical ( within ) to that of oracle gamp .
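to make the input estimation function concrete, the following is a minimal python sketch (our illustration, not the authors' code) of the sum-product input denoiser for the gauss-bernoulli prior of ( [ eq : gaussbern ] ): given the gamp quantity r, interpreted as an observation r = x + n(0, tau_r) of x in gaussian noise, it returns the posterior mean and variance of x. the variable names rho, sigx2 and tau_r are ours.

    import numpy as np

    def gauss_bernoulli_denoiser(r, tau_r, rho, sigx2):
        # posterior mean / variance of x given r = x + N(0, tau_r), where
        # x = 0 with probability 1 - rho and x ~ N(0, sigx2) with probability rho
        v1 = sigx2 + tau_r                     # variance of r when x is "on"
        p_on = rho * np.exp(-r**2 / (2*v1)) / np.sqrt(v1)
        p_off = (1 - rho) * np.exp(-r**2 / (2*tau_r)) / np.sqrt(tau_r)
        pi = p_on / (p_on + p_off)             # posterior probability that x != 0
        m = sigx2 / v1 * r                     # conditional mean given "on"
        v = sigx2 * tau_r / v1                 # conditional variance given "on"
        xhat = pi * m                          # E[x | r]
        xvar = pi * (v + m**2) - xhat**2       # var[x | r]
        return xhat, xvar

in adaptive gamp, the unknown pair (rho, sigx2) entering this denoiser is itself re-estimated at every iteration, e.g. by the ml maximization in ( [ eq : lamxscore ] ).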
we have presented an adaptive gamp method for the estimation of i.i.d . vectors observed through a known linear transform followed by an arbitrary , componentwise random transform . the procedure , which is a generalization of the em - gamp methodology of , estimates both the vector and the parameters of the source and of the componentwise output transform . in the case of large i.i.d . gaussian transforms , it is shown that the adaptive gamp method is provably asymptotically consistent in that the parameter estimates converge to the true values . this convergence result holds over a large class of models with essentially arbitrarily complex parameterizations . moreover , the algorithm is computationally efficient since it reduces the vector - valued estimation problem to a sequence of scalar estimation problems in gaussian noise . we believe that this method is applicable to a large class of linear - nonlinear models with provable guarantees and can have applications in a wide range of problems . we have mentioned the use of the method for learning sparse priors in compressed sensing . future work will include learning of parameters of output functions as well as possible extensions to non - gaussian matrices . as described in , the sum - product estimation can be implemented with the estimation functions [ eq : gsp ] g_x^t(r,_r,_x ) & : = & , [ eq : gxsp ] + g_z^t(p , y,_p,_z ) & : = & , [ eq : gzsp ] + g_s^t(p , y,_p,_z ) & : = & ( g_z^t(p , y,_p,_z ) - p ) , where the expectations are with respect to the scalar random variables [ eq : gspdist ] the estimation functions correspond to scalar estimates of random variables in additive white gaussian noise ( awgn ) . a key result of is that , when the parameters are set to the true values ( i.e. ) , the outputs and can be interpreted as sum - product estimates of the conditional expectations and . the algorithm thus reduces the vector - valued estimation problem to a computationally simple sequence of scalar awgn estimation problems along with linear transforms . moreover , the se equations in algorithm [ algo : se ] reduce to a particularly simple form , where and in are given by ,\ ] ] where the expectations are over the random variables and is given in . the covariance matrix has the form \beta\tau_{x0 } - \taubar_p^t & \beta\tau_{x0 } - \taubar_p^t \\[0.3em ] \end{bmatrix},\ ] ] where is the variance of and is the asymptotic measurement ratio ( see assumption 1 for details ) . the scaling constant becomes . the update rule for also simplifies to ,\ ] ] where the expectation is over the random variables in . bayati and montanari s analysis in employs certain deterministic models on the vectors and then proves convergence properties of related empirical distributions . to apply the same analysis here , we need to review some of their definitions . we say a function is _ pseudo - lipschitz _ of order if there exists an such that for any , , now suppose that for each , is a set of vectors [ eq : vsetapp ] ^(n ) = \{_i(n ) , i=1, ,(n ) } , where each element and is the number of elements in the set . thus , can itself be regarded as a vector with components . we say that _ empirically converges with bounded moments of order _ as to a random vector on if : for all pseudo - lipschitz continuous functions , , of order , when the nature of convergence is clear , we may write ( with some abuse of notation ) or finally , let be the set of probability distributions on with bounded moments , and suppose that is a functional to some topological space . given a set as in , write for , where is the empirical distribution on the components of . also , given a random vector with distribution , write for .
then, we will say that the functional is _ weakly pseudo - lipschitz continuous _ of order if where the limit on the right hand side is in the topology of .the proof follows along the adaptation argument of .we use the tilde superscript on quantities such as , and to denote values generated via a non - adaptive version of the gamp .the non - adaptive gamp algorithm has the same initial conditions as the adaptive algorithm ( i.e. ) , but with and replaced by their deterministic limits and , respectively .that is , we replace lines [ line : zhat ] , [ line : shat ] and [ line : xhat ] with _ i^t & = & g_z^t(p_i^t , y_i,_p^t,_z^t ) , _i^t = g_s^t(p_i^t , y_i,_p^t,_z^t ) , + _ j^1 & = & g_x^t(r_j^t , _ r^t,_x^t ) .this non - adaptive algorithm is precisely the standard gamp method analyzed in .the results in that paper show that the outputs of the non - adaptive algorithm satisfy all the required limits from the se analysis .that is , where and are the sets generated by the non - adaptive gamp algorithm : the limits are now proven through a continuity argument that shows that the adaptive and non - adaptive quantities must asymptotically agree with one another . specifically , we will start by proving that the following limits holds almost surely for all the proof of the limits and is achieved by an induction on . although we only need to show the above limits for , most of the arguments hold for arbitrary .we thus present the general derivation where possible . to begin the induction argument , first note that the non - adaptive algorithm has the same initial conditions as the adaptive algorithm .thus the limits and hold for and , respectively .we now proceed by induction .suppose that and the limits and hold for some and , respectively . since has i.i.d .components with zero mean and variance , it follows from the marenko - pastur theorem that that its -norm operator norm is bounded .that is , there exists a constant such that [ eq : matrixbnd ] _n _ k c_a , _n ^_k c_a .this bound is the only part of the proof that specifically requires . from ,we obtain where ( a ) is due to the norm inequality . since , we have that for any positive numbers and [ eq : pbndtwo ] ( a + b)^k 2^k(a^k + b^k ) . applying the inequality into, we obtain now , since and are the outputs of the non - adaptive algorithm they satisfy the limits to prove , we first prove the empirical convergence of to . 
towards this end , let be any pseudo - lipschitz continuous function of order .then \right| } \nonumber \\ & \leq \frac{1}{m } \sum_{i = 1}^m \left|\phi(p_i^{t},y_i ) - \phi(\tilde{p}_i^{t},y_i ) \right| \nonumber\\ & + \left| \frac{1}{m } \sum_{i = 1}^m \phi(\tilde{p}_i^{t},y_i ) -\exp\left[\phi(p^t , y)\right]\right| \nonumber \\ & { { \overset{\mathrm{(a)}}{\leq } } } \frac{l}{m } \sum_{i = 1}^m \left(1 + |p_i^t|^{k-1 } + |\tilde{p}_i^t|^{k-1 } + |y_i|^{k-1 } \right)|p_i^t - \tilde{p}_i^t| \nonumber \\ & + \left| \frac{1}{m } \sum_{i = 1}^m \phi(\tilde{p}_i^{t},y_i ) -\exp\left[\phi(p^t , y)\right]\right| \nonumber \\ & { { \overset{\mathrm{(b)}}{\leq } } } lc \delta_p^t+\left| \frac{1}{m } \sum_{i = 1}^m \phi(\tilde{p}_i^{t},y_i ) -\exp\left[\phi(p^t , y)\right]\right| .\label{eq : phipybnd}\end{aligned}\ ] ] in ( a ) we use the fact that is pseudo - lipschitz , and in ( b ) we use hlder s inequality with and define as ^{k/(k-1 ) } } \nonumber \\ & \leq \frac{1}{m}\sum_{i=1}^m \left ( 1 + |p_i^t|^{k-1 } + |\tilde{p}_i^t|^{k-1 } + |y_i|^{k-1}\right)^{k/(k-1 ) } \nonumber \\ & \leq \mbox{const}\x\left [ 1 + \left(\frac{1}{m}\left\|{\mathbf{p}}^t\right\|_k^k\right)^{\frac{k-1}{k } } \right . \nonumber\\ & \left.+ \left(\frac{1}{m}\left\|\porc^t\right\|_k^k\right)^{\frac{k-1}{k } } + \left(\frac{1}{m}\left\|{\mathbf{y}}\right\|_k^k\right)^{\frac{k-1}{k } } \right ] , \label{eq : cpbnddef}\end{aligned}\ ] ] where the first step is from jensen s inequality .since satisfy the limits for the non - adaptive algorithm we have : [ eq : ptildeybnd ] < \infty \\ \lim_{n \rightarrow \infty } \frac{1}{m}\|{\mathbf{y}}\|_k^k & = \lim_{n \rightarrow \infty } \frac{1}{m}\sum_{i = 1}^m |y_i|^k = \exp\left[|y|^k\right ] < \infty\end{aligned}\ ] ] also , from the induction hypothesis , it follows that the adaptive output must satisfy the same limit < \infty.\ ] ] combining , , , , we conclude that for all [ eq : pyconv ] _n ( ^t , ) ( p^t , y ) . the limit along with and the continuity condition on in assumption [ as : gamp](d ) prove the limit in .the limit together with continuity conditions on in assumptions [ as : gamp ] show that , and hold for .for example , to show , we consider the limit of the following expression where at ( a ) we used the lipschitz continuity assumption .similar arguments can be used for and . to show , we proceed exactly as for . due to the continuity assumptions on ,this limit in turn shows that holds almost surely .then , and follow directly from the continuity of in assumptions [ as : gamp ] , together with and .we have thus shown that if the limits and hold for some , they hold for .thus , by induction they hold for all . 
finally , to show , let be any pseudo - lipschitz continuous function , and define \right|,\end{aligned}\ ] ] which, due to convergence of non - adaptive gamp , can be made arbitrarily small by choosing large enough .then , consider \right| \nonumber\\ & \leq \epsilon_n^t + \frac{1}{n } \sum_{j = 1}^n \left|\phi(x_j , \hat{r}_j^{t } , \xhat_j^{\tp1 } ) - \phi(x_j , \tilde{r}_j^{t } , \tilde{x}_j^{\tp1 } ) \right| \nonumber\\ & { { \overset{\mathrm{(a)}}{\leq } } } \epsilon_n^t + l\|{\mathbf{r}}^t - \rorc^t\|_1 + l\|\hat{{\mathbf{x}}}^{\tp1 } - \xorc^{\tp1}\|_1 \nonumber\\ & + \frac{l^\prime}{n}\sum_{j = 1}^n \left(|\hat{r}_j^t|^{k-1 } + |\tilde{r}_j^t|^{k-1 } \right)(|\hat{r}_j^t - \tilde{r}_j^t| + |\xhat_j^{\tp1 } - \tilde{x}_j^{\tp1}| ) \nonumber\\ & + \frac{l^\prime}{n}\sum_{j = 1}^n \left(|\xhat_j^{\tp1}|^{k-1 } + |\tilde{x}_j^{\tp1}|^{k-1}\right)(|\hat{r}_j^t - \tilde{r}_j^t| + |\xhat_j^{\tp1 } - \tilde{x}_j^{\tp1}| ) \nonumber\\ & { { \overset{\mathrm{(b)}}{\leq } } } \epsilon_n^t + l\left(\delta_r^t\right)^{\frac{1}{k } } + l\left(\delta_x^t\right)^{\frac{1}{k}}\nonumber\\ & + l^{\prime}\left(\delta_r^t\right)^{\frac{1}{k}}\left((\tilde{m}_x^{\tp1})^{\frac{k-1}{k}}+(\hat{m}_x^{\tp1})^{\frac{k-1}{k } } + ( \tilde{m}_r^{t})^{\frac{k-1}{k}}+(\hat{m}_r^{t})^{\frac{k-1}{k}}\right)\nonumber\\ & + l^{\prime}\left(\delta_x^t\right)^{\frac{1}{k } } \left((\tilde{m}_x^{\tp1})^{\frac{k-1}{k}}+(\hat{m}_x^{\tp1})^{\frac{k-1}{k } } + ( \tilde{m}_r^{t})^{\frac{k-1}{k}}+(\hat{m}_r^{t})^{\frac{k-1}{k}}\right ) \label{eq : xbound } \ ] ] where , are constants independent of and in ( a ) we use the fact that is pseudo - lipshitz , in ( b ) we use -norm equivalence and hlder s inequality with . by applying of , and since, , , , and converge to a finite value we can obtain the first equation of by taking .the second equation in can be shown in a similar way .this proves the limits . also , the first two limits in are a consequence of and .the second two limits follow from continuity assumptions in assumption [ as : gamp](e ) and the convergence of the empirical distributions in .this completes the proof .part ( a ) of theorem [ thm : consistent ] is a direct application of the general result , theorem [ sec : stateevo ] . to apply the general result ,first observe that assumptions [ as : agamp - ml](a ) and ( c ) immediately imply the corresponding items in assumptions [ as : agamp ] .so , we only need to verify the continuity condition in assumption [ as : agamp](b ) for the adaptation functions in and .we begin by proving the continuity of .fix , and let be a sequence of vectors and be a sequence of scalars such that [ eq : yptaulim ]_ n ( ^(n),^(n ) ) ( y , p^t ) _ n _ p^(n ) = ^t_p , where and are the outputs of the state evolution equations . for each , let [ eq : lamhatcons ] _ z^(n ) = h_z^t(^(n),^(n),_p^(n ) ) .we wish to show that , the true parameter .since and is compact , it suffices to show that , any limit point of any convergent subsequence is equal to .so , suppose that to some limit point on some subsequence . from and the definitionit follows that now , since and , we can apply the continuity condition in definition [ def : pzident](c ) to obtain \geq 0.\end{aligned}\ ] ] also , the limit and the fact that is psuedo - lipschitz continuous of order implies that [ eq : phizcompc ] .but , property ( b ) of definition [ def : pzident ] shows that is the maxima of the right - hand side , so [ eq : phizcompd ] = . 
since , by definition [ def : pzident](b ) , the maxima is unique , .since this limit point is the same for all convergent subsequences , we see that over the entire sequence . we have thus shown that given limits , the outputs of the adaptation function converge as thus , the continuity condition on in assumption [ as : agamp](b ) is satisfied .the analogous continuity condition on can be proven in a similar manner .so , it remains to show parts ( b ) and ( c ) of theorem [ thm : consistent ] .we will only prove ( b ) ; the proof of ( c ) is similar .also , since we have already established , we only need to show that the output of the se equations matches the true parameter .that is , we need to show .this fact follows immediately from the selection of the adaptation functions : + & & _ _ x _ x _ ( _ r,_r ) s_x(_r^t ) + & & _ _ x _ x _ ( _ r,_r ) s_x(_r^t ) + & & + & & _ x^ * [ eq : lamxlim ] where ( a ) follows from the se equation ; ( b ) is the definition of the ml adaptation function when interpreted as a functional on a random variable ; ( c ) is the definition of the random variable in where ; and ( d ) follows from definition [ def : pxident](b ) and the hypothesis that .thus , we have proven that , and this completes the proof of part ( b ) of theorem [ thm : consistent ] .the proof of part ( c ) is similar .
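the consistency mechanism of part (b) can be checked numerically in a toy scalar setting. the sketch below is our illustration, not taken from the paper: it draws r = x + n(0, tau) with x gauss-bernoulli, and recovers the true sparsity level by maximizing the empirical log-likelihood over a grid, a finite-n analogue of the maximization in definition [ def : pxident ]:

    import numpy as np

    def avg_loglik(r, rho, sigx2, tau):
        # empirical average log-likelihood of r = x + N(0, tau) under a
        # gauss-bernoulli prior with parameters (rho, sigx2)
        v1 = sigx2 + tau
        p = rho * np.exp(-r**2 / (2*v1)) / np.sqrt(2*np.pi*v1) \
            + (1 - rho) * np.exp(-r**2 / (2*tau)) / np.sqrt(2*np.pi*tau)
        return np.mean(np.log(p))

    rng = np.random.default_rng(0)
    n, rho_true, sigx2, tau = 100_000, 0.2, 1.0, 0.1
    x = rng.normal(0.0, np.sqrt(sigx2), n) * (rng.random(n) < rho_true)
    r = x + rng.normal(0.0, np.sqrt(tau), n)

    grid = np.linspace(0.01, 0.99, 99)
    rho_hat = grid[np.argmax([avg_loglik(r, g, sigx2, tau) for g in grid])]
    print(rho_hat)   # close to 0.2, concentrating further as n grows

as n grows, the maximizer concentrates on the true sparsity level, which is exactly the role the identifiability conditions play in the proof.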
we consider the estimation of an i.i.d . ( possibly non - gaussian ) vector from measurements obtained by a general cascade model consisting of a known linear transform followed by a probabilistic componentwise ( possibly nonlinear ) measurement channel . a novel method , called adaptive generalized approximate message passing ( adaptive gamp ) , that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector is presented . the proposed algorithm is a generalization of a recently - developed em - gamp that uses expectation - maximization ( em ) iterations where the posteriors in the e - steps are computed via approximate message passing . the methodology can be applied to a large class of learning problems including the learning of sparse priors in compressed sensing or identification of linear - nonlinear cascade models in dynamical systems and neural spiking processes . we prove that for large i.i.d . gaussian transform matrices the asymptotic componentwise behavior of the adaptive gamp algorithm is predicted by a simple set of scalar state evolution equations . in addition , we show that when a certain maximum - likelihood estimation can be performed in each step , the adaptive gamp method can yield asymptotically consistent parameter estimates , which implies that the algorithm achieves a reconstruction quality equivalent to the oracle algorithm that knows the correct parameter values . remarkably , this result applies to essentially arbitrary parametrizations of the unknown distributions , including ones that are nonlinear and non - gaussian . the adaptive gamp methodology thus provides a systematic , general and computationally efficient method applicable to a large range of complex linear - nonlinear models with provable guarantees .
consider a uniform rectilinear displacement of a finite fluid slice of viscosity by another fluid of viscosity in 2d porous media or a hele - shaw cell as shown in fig .[ fig : schematic ] .the frontal interface becomes unstable if , otherwise it is the rear interface which features the fingering instability , for .the viscosity of the fluids depends on the solute concentration , i.e. .fluids are assumed to be incompressible and neutrally buoyant . with an additional condition of slow diffusion, the above - mentioned flow problem can be described in terms of darcy - korteweg equations coupled with a convection - diffusion equation for the mass conservation of the solute concentration .for the dimensionless formulation of the equations the diffusive length and time scales , and , are used as the respective characteristic scales .characteristic pressure , velocity , concentration and viscosity are taken to be and , respectively . here is the constant permeability of the homogeneous porous medium . for simplicitywe have assumed a constant isotropic diffusion of the solute concentration , characterized by the diffusion coefficient .the dimensionless equations in a lagrangian frame of reference , moving with the speed are written as , here is the gap - averaged velocity having longitudinal and transverse components and , respectively , is the dynamic pressure , is the dimensionless korteweg stress constant and the operator . the governing equations ( [ eq : cont ] ) - ( [ eq : diff ] ) are associated with the following boundary conditions : at the longitudinal boundaries , ; and at the transverse boundaries , ( representing the constant pressure ) , , in the lagrangian frame of reference ( shown in fig .[ fig : schematic ] ) . the initial velocity is considered to be , while the initial distribution of the solute concentration is , inside the finite slice and outside of that .the relationship between the dynamic viscosity of the underlying fluids and the solute concentration is assumed to be of arrhenius type , where is the log - mobility ratio .hence , the displacement of a less ( more ) viscous slice by a more ( less ) viscous ambient fluid is represented by ( ) .direct numerical simulations of eqs .( [ eq : cont])-([eq : diff ] ) in terms of the stream function , ( ) , \cdot\hat{k } , \nonumber \\ & & \\ \label{eq : vs2 } & & c_t + \psi_yc_x - \psi_xc_y = \nabla^2 c,\end{aligned}\ ] ] are performed using a highly accurate pseudo - spectral method . here represents a unit vector normal to the -plane .numerical simulations are performed in a rectangular domain of dimensionless width , and length , where , the ratio of the length to the width of the domain , is the aspect ratio. a convergence analysis of the numerical method and grid independence have been performed .nonlinear simulations are carried out in the above - mentioned computational domain with and spectra , and it has been observed that the maximum relative error between these two sets of simulations is of . 
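to make the time stepping concrete, here is a minimal python sketch (our simplification, not the authors' code) of one fourier pseudo-spectral step for the transport equation ( [ eq : vs2 ] ), with diffusion treated implicitly and advection explicitly. the stream function is assumed to have been obtained from the darcy-korteweg solve at the current step, and periodic boundaries are assumed for brevity (the actual simulations use the boundary conditions stated above, together with dealiasing):

    import numpy as np

    def step_concentration(c, psi, dt, Lx, Ly):
        # advance c_t + psi_y c_x - psi_x c_y = laplacian(c) by one step dt
        # on a periodic Lx-by-Ly grid (semi-implicit fourier method)
        nx, ny = c.shape
        kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)[:, None]
        ky = 2*np.pi*np.fft.fftfreq(ny, d=Ly/ny)[None, :]
        k2 = kx**2 + ky**2
        ch, ph = np.fft.fft2(c), np.fft.fft2(psi)
        cx = np.real(np.fft.ifft2(1j*kx*ch)); cy = np.real(np.fft.ifft2(1j*ky*ch))
        px = np.real(np.fft.ifft2(1j*kx*ph)); py = np.real(np.fft.ifft2(1j*ky*ph))
        adv = py*cx - px*cy                  # psi_y c_x - psi_x c_y
        ch_new = (ch - dt*np.fft.fft2(adv)) / (1.0 + dt*k2)
        return np.real(np.fft.ifft2(ch_new))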
further refinement of the spatial discretization points in either direction does not alter the dynamics of the fingering pattern . hence , for optimal computational cost we discretize our computational domain of size \times [ 0 , 1024 ] . [figure : temporal evolution of the interfacial length for the different viscosity relations ; the inset images show the concentration distribution approximately at the time of interaction between the stable and unstable interfaces .] the onset of fingering instability and the interaction between the stable and unstable interfaces can be quantified appropriately from the temporal evolution of the interfacial length , . in the diffusion - dominated regime retains a constant value equal to the width of the domain , and increases with the growth of the fingers . hence , the onset of fingering is marked as the instant when starts increasing from the constant value of the diffusive regime . [ fig : nonlinear_il ] shows the temporal evolution of the interfacial length corresponding to the simulations of figs . [ fig : nonlinear1 ] and [ fig : nonlinear2 ] . it shows that the displacements remain stable over the first two decades of time and that the fingers form at for the viscosity relations and , while at for . decreases with time after the interaction of the fingers with the respective stable interface . the inset images show the concentration distributions approximately at such instants of interaction and depict how this time differs for the different viscosity profiles . hence , all the qualitative features of the dynamics observed in the absence of any gradient stress can be seen even with the gradient stresses , but with different quantitative measures due to the influence of the transient interfacial tension . [figure : ( a ) , ( b ) for a more and a less viscous finite slice of width , for and .] vf enhances the concentration gradient and the area available for diffusive flux across the interface by stretching the fluid - fluid interface , and hence modifies the rate at which mixing occurs . it would be interesting to understand the effect of the korteweg stresses on the mixing of a more or less viscous finite slice . the degree of mixing is defined as , , in terms of the global variance of the concentration field , , where represents the spatial averaging over the domain . the degree of mixing of the more and less viscous finite slices has been plotted in fig . [ fig : degree_mixing ] corresponding to the simulations of figs . [ fig : nonlinear1 ] - [ fig : nonlinear2 ] . due to the stronger instability for the case of than , the degree of mixing , , is higher in the former than in the latter ( see fig . [ fig : degree_mixing](a ) ) .
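both diagnostics used in this discussion can be computed directly from the concentration field; a short sketch (variable names ours), assuming a uniform grid, with the degree of mixing normalized so that it saturates at 1 in the completely mixed state:

    import numpy as np

    def interfacial_length(c, dx, dy):
        # I(t) = integral of |grad c| over the domain; grows once fingers form
        cx = np.gradient(c, dx, axis=0)
        cy = np.gradient(c, dy, axis=1)
        return np.sum(np.sqrt(cx**2 + cy**2)) * dx * dy

    def degree_of_mixing(c, var_max):
        # chi(t) = 1 - sigma^2(t)/sigma^2_max, with sigma^2 the global
        # variance of the concentration field
        return 1.0 - np.var(c) / var_max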
in fig . [ fig : degree_mixing](b ) , has been shown for the three cases : ( i ) , ( ii ) and ( iii ) . it depicts an identical temporal evolution of for both the more and less viscous slices until the interaction of the fingers with the respective stable interface . afterwards , becomes higher in the latter case than in the former , and it asymptotically saturates to the maximum value of , corresponding to the completely mixed state . the coincidence of the three curves in the time interval confirms the results shown in figs . [ fig : nonlinear1](a ) , [ fig : nonlinear2 ] . here we discuss the lsa of a miscible slice in the presence of the korteweg stresses with the viscosity relations given by eqs . ( [ eq : visco_new1])-([eq : visco_new2 ] ) and compare the results obtained with those of the dns . finally , the obtained lsa results are compared with an lsa of an immiscible slice , which confirms that the korteweg stress and the surface tension have identical effects on the instability . [figure : ( a ) ; ( b ) temporal evolution of the growth rate of the most dangerous wave perturbation ( ) . the inset shows the maximum growth , , over all possible wave numbers , , marking the onset of instability . the parameter values are and .] [figure : ( a ) ivc ; the inset is magnified near the onset of instability . ( b ) dispersion curves at for with the self - similar lsa .] the initial - boundary value problem described above possesses a self - similar diffusive decaying solution in a similarity - transformed domain , /2 ] . here and correspond to the larger and smaller growth rates , respectively . furthermore , and are the surface tensions at the left and right interfaces , is the width of the middle layer , is the fluid velocity and are the viscosities of the left , middle and right fluid layers , respectively . these show that the obtained analytic expressions for the dispersion relations at the two interfaces involve explicitly the dynamic viscosities of the three fluid layers , not their ratio . as a consequence , the dispersion curves corresponding to become different for two sets of viscosities although they have the same viscosity ratio of two adjacent fluid layers . for instance , we consider three sets of viscosities in such a way that the interface separating the middle layer from the right layer becomes unstable for the first two cases , while for the third , it is the interface between the left and the middle layer that features the instability . the corresponding dispersion curves ( vs. ) are shown in fig . [ fig : daripaimmiscible ] for ; the solid line corresponds to the case of , the dashed line to , and the circles represent the situation when . the viscosities of the left and right layers are chosen to be equal so that the problem reduces to an analogous form of the finite miscible slice discussed above . although the ratio of the high viscosity to the low viscosity remains the same , , the dispersion curve for the second case differs from the remaining two . it signifies that the dispersion curves corresponding to a more and a less viscous middle layer will overlap only if the dynamic viscosities of the less and more viscous fluids are kept unchanged . moreover , the growth rate is always higher for the case with the larger dynamic viscosities of the fluids . another important observation is the variation of the cutoff wave number ( ) of the unstable mode and of the most unstable wave number ( ) with the variation of the surface tension . the qualitative changes of these two quantities are similar to those observed in miscible displacements under the influence of the korteweg stress . both in miscible and immiscible fluid systems , and decrease with the increase of the magnitude of and of the surface tension force , respectively . we have theoretically investigated the vf instability of miscible slices with transient interfacial tension . an appropriate choice of the dynamic viscosity of the fluid in the characteristic scales manifests its importance in the study of vf with transient interfacial tension . our analysis is capable of reproducing the results existing in the literature . the results obtained show that the onset of vf and the subsequent dynamical patterns can become identical , only under a certain scaling analysis , for both the more and less viscous slices in the presence of the korteweg stress , similar to the case when these stresses are absent . synergetic mixing with vf and alternating injection will be the same with the present viscosity model irrespective of the choice of the viscosity of the fluid that fills the hele - shaw cell before the injection starts . a similar effect of the dynamic viscosity is also observed in immiscible fluids . the findings of this letter will certainly help to understand multiphase viscous flows with a different viscosity for each phase , or flows in which the viscosity depends non - monotonically on the solute concentration ; for instance , during the mixing of chemical components , pollutant contamination in aquifers , co_2 sequestration , etc . miscible displacement of viscous fluids with korteweg stress has important applications in chemistry . in this letter we present a preliminary understanding of the relative importance of the korteweg stress to the viscous stress by incorporating one of the simplest situations , that of a constant diffusivity , , of the solute concentration . a velocity - induced or concentration - dependent diffusivity should be studied for a more realistic situation . the more interesting and challenging problem of miscible vf in the presence of korteweg stresses induced by both concentration and temperature gradients is the focus of the authors future work .
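as a concrete supplement to the immiscible results discussed above (the dependence of the growth rate on the dynamic viscosities rather than on their ratio alone), the same feature is already visible in the classical single-interface growth rate for immiscible displacement in a porous medium of permeability k, sigma(k_w) = k_w [ u ( mu2 - mu1 ) - gamma k k_w^2 ] / ( mu1 + mu2 ). this is the textbook saffman-taylor expression, quoted here only as an illustration and not the three-layer dispersion relations used above. scaling both viscosities by s > 1 weakens the stabilizing term by a factor 1/s and hence raises the growth rate:

    import numpy as np

    def growth_rate(kw, U, mu1, mu2, gamma, K):
        # classical single-interface growth rate at wavenumber kw
        return kw * (U*(mu2 - mu1) - gamma*K*kw**2) / (mu1 + mu2)

    kw = np.linspace(0.01, 3.0, 300)
    s1 = growth_rate(kw, U=1.0, mu1=1.0, mu2=5.0, gamma=1.0, K=1.0)
    s2 = growth_rate(kw, U=1.0, mu1=2.0, mu2=10.0, gamma=1.0, K=1.0)  # same ratio
    print(s1.max() < s2.max())   # True: larger dynamic viscosities grow faster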
we perform linear stability analyses ( lsa ) and direct numerical simulations ( dns ) to investigate the influence of the dynamic viscosity on viscous fingering ( vf ) instability in miscible slices . selecting the characteristic scales appropriately the importance of the magnitude of the dynamic viscosity of individual fluids on vf in miscible slice has been shown in the context of the transient interfacial tension . further , we have confirmed this result for immiscible fluids and manifest the similarities between vf in immiscible and miscible slices with transient interfacial tension . in a more general setting , the findings of this pletter will be very useful for multiphase viscous flow , in which the momentum balance equation contains an additional stress term free from the dynamic viscosity . displacement processes through porous rocks and mixing of two miscible fluids are active areas of research , having several industrial and environmental applications , such as enhanced oil recovery , hydrology and filtration , carbon capture and storage , etc . vf , a hydrodynamic instability that occurs in both the immiscible and miscible fluids while displacing a more viscous fluid by a less viscous one , is inherent in such flow configuration . in immiscible fluids surface tension force at the interface acts against the instability . on the other hand , in miscible fluids , where a thermodynamically stable interface does not exist , a transition zone relaxes with time due to diffusion and acts against the finger growth . experiments reveal , when the diffusion is slow , a steep gradient in the form of density , concentration or temperature between the underlying fluids gives rise to a weak transient interfacial tension that mimic surface tension effect . this was first discussed by korteweg in 1901 , who introduced an additional stress term , known as the korteweg stress , in the equation of motion . the existence of korteweg stress or transient surface tension is also observed in the experiments of colloidal suspensions and in the binary liquid system of isobuteric acid and water . chen and wang analyzed the influence of vf instability on the spreading of a localized fluid slice having higher mobility than the surrounding fluid . on the other hand , de wit _ et al . _ studied the same problem in the context of separation in a chromatographic column when the viscosity of the sample is higher than the solvent . mishra _ et al . _ have shown that the onset of vf instability and the subsequent finger pattern near the onset are identical for both the less and more viscous slice . influence of the korteweg stresses on vf instability was investigated theoretically by joseph and his co - workers and chen _ et al . _ however , to the best of the authors knowledge the influence of such stresses on the nonlinear vf instability of more and less viscous miscible slices in a hele - shaw cell has never been addressed adequately . in particular this letter addresses the question , what is the influence of the korteweg stress that describes volume forces arising because of the nonlocal molecular interactions on the vf at the rear and frontal interfaces of a localized slice ? such a classical complex pattern dynamics has been investigated through a highly accurate fourier - spectral method based direct numerical simulations . 
it has been proved theoretically that an appropriate choice of the dynamic viscosity of the underlying fluids results in an identical onset of fingering instability , and that the subsequent finger patterns are also identical , for both more and less viscous slices in the presence of the korteweg stresses . the dns results are found to be in excellent agreement with the corresponding lsa . also , the similarities between immiscible slices and miscible ones with transient interfacial tension have been established through lsa , which affirms the classical nature of this study .
let us recall here some basic facts about the relationship between 3-edge colorings ( also called tait colorings ) and hamiltonian cycles in cubic graphs ; the details can be found in . by a _cubic graph _ we understand a connected 3-regular multi - graph that is allowed to have double edges , but no loops ( obviously , a graph with loops can not have either edge colorings or hamiltonian cycles ) . consider a set of three distinct elements called `` colors '' ( say , , where stands for `` red '' , for `` green '' , and for `` blue '' ) .3-edge coloring _ , or _ tait coloring _ is an assignment of a color to every edge such that the edges incident with each vertex have distinct colors .every 3-edge coloring of a cubic graph gives rise to three distinct 2-factors ( that is , 2-regular spanning subgraphs ) of called _ tait cycles _ : each tait cycle is the union of edges painted in two colors out of the three ( the complement to a tait cycle is a perfect matching the union of disjoint edges painted in the third color ) .if a cubic graph has a hamiltonian cycle , then also admits a 3-edge coloring that is unique up to a permutation of colors : just paint the hamiltonian cycle ( which always has even length ) in two intermittent colors , and paint the complement perfect matching in the remaining third color .the above connection between 3-edge colorings and hamiltonian cycles suggests the following method of enumerating ( and actually finding ) all the hamiltonian cycles in a cubic graph : 1 .find all 3-edge colorings ( tait colorings ) of a given cubic graph up to permutations of colors ; 2 .find all the corresponding 2-factors ( tait cycles ) ; 3 .check for connected 2-factors ( hamiltonian cycles ) .this procedure gives the complete list of hamiltonian cycles in a cubic graph .for the enumeration of 3-edge colorings we use an exhaustive backtracking algorithm that works in two runs . during the first run it dynamically separates all the vertices into two types : `` rigid '' ones( that admit a unique coloring of edges incident with them ) , and `` soft '' ones ( with exactly two possibilities of coloring the incident edges ) ; no backtracking is needed on this stage .the second run is the actual painting of edges : after it successfully colored the graph or was unable to complete the coloring , it returns to the last visited soft vertex and tries a different possibility .the details are explained in the next section .first we partition the set of vertices , of the graph into two disjoint subsets of rigid ( ) and soft ( ) vertices .we note that this partition is not canonical .initially we put and dynamically change their content .we also introduce a temporary set of _ unidentified _ vertices that we already visited , and an ordered list of colored edges .we label the vertices of by integers . for the vertex with number denote the numbers of adjacent vertices by . an edge connecting and we denote by ] incident with it to the list , and we add their endpoints to .now we check if any of the vertices in are the endpoints of at least two edges in . 
if this is the case , we move all such vertices from to , and for every such vertex we also add the remaining third edge incident to it to the set ( if it is not already there ) .we continue the above procedure until there is no vertex left in that is an endpoint of at least two edges in .now pick the vertex from with the smallest number , say , and move it into .note that is an endpoint of a single edge in the current set .next , we append to the two remaining edges incident with , add their endpoints to , and again check if any of the vertices in bound at least two edges in . if they do , such vertices are moved to , and the missing edges incident with these vertices are added to .otherwise , we pick a vertex from with the smallest number , move it to and repeat the procedure until becomes empty , or , equivalently , until coincides with . since is connected , this would mean that . setting and obtain the required partition . the final list of edges provides the order in which we attempt to paint the edges of the graph .as above , we start at the vertex 0 and paint the edges ,\,[0,n^0_1],\,[0,n^0_2] ] to and the vertices to .none of these vertices is an endpoint of two edges in , so we remove the vertex 1 from , append it to , and add the edges ] to . again , none of the vertices in is an endpoint of two edges in , so we move the vertex 2 to and add the edges ] to . nowthe vertex 3 bounds two edges in , namely , ] . we move it to ,add the edge ] and ] . let us start coloring the graph .we paint the edges ,[0,3],[0,7] ] in color and ] in color , and ] incident with the rigid vertex 3 necessarily have color . at the last soft vertex 4 both possibilities of coloring the edges ,[4,7] ] in , and ] in colors respectively , we get the last coloring .thus , there exist 4 distinct tait colorings of the graph ( up to permutations of colors ) listed in the following table : [ cols= " < , < , < , < , < " , ] the first one of these 4 edge colorings has no hamiltonian cycles associated with it , whereas the other three ones produce two hamiltonian cycles each . they are listed below ( we indicate in brackets the corresponding alternating colors ) : is clear that the computational complexity of this algorithm is of order since the set of soft vertices depends on the ordering of vertices of , the complexity also depends on this ordering . to give an upper bound for for a simple graph without double edges we note that , every time we add two new edges to incident with a soft vertex , we encounter one of the three possibilities : 1 .both new endpoints belong to ; 2 .one new endpoint belongs to an one to ( the vertex from then gives rise to new vertices that are added to ) ; 3 .both new endpoints belong to . in any case , every soft vertex gives rise to at least two new vertices in and the lower bound 2 is attained when the both endpoints of the edges incident with a soft vertex belong to thus , when the union becomes equal to for the first time , we have the inequality in particular , it implies that at this stage .let be the girth of ( that is , the length of the shortest cycle in ) .when we reach the stage , every new soft vertex gives rise to at least rigid vertices .this means that no more than vertices will be added to .therefore , in the case , i.e. , when is triangle free , the number of soft vertices as .thus , the speed of our algoritm is on a par with the fastest algoritms that find a single edge coloring or a single hamiltonian cycle in a cubic graph , or even better ( cf ., e.g. 
, ) .the absence of short cycles makes the algorithm even faster with complexity bounded from above by .( note that the presence of double edges does not slow down the algorithm because at least one of their two common endpoints is rigid . )it is instructive to compare the above complexity estimate with the results of .let $ ] denote the eigenvalues of the ( normalized ) adjacency matrix of a simple cubic graph .consider the mean and the variance of the exponents .for each fixed the points form clusters called _ filars _ that enjoy a fractal - like structure . from the results of combined with the above considerations it follows that the closer is the point to the origin , the faster works our algorithm for the corresponding graph .this algorithm was implemented in c++ code and compiled on a windows x86 machine ( pentium iv 3.40 ghz processor with 1 gb of ram ) using ccg gnu compiler ( the program code is given in appendix ) .a good benchmark for testing programs that search for a hamiltonian cycle is provided by the horton graph , displayed on fig .2 . it is a cubic bipartite graph on 96 vertices without hamiltonian cycles , but with many `` long '' cycles ( that is , cycles of length close to 96 ) .some programs choke when they reach such a long cycle , not being able to transform it into a hamiltonian one .our program completed the search in steps ( so that the actual complexity is of order ) .the process took 6336 sec .of machine time , found 143982592 tait colorings and no hamiltonian cycles .our program is an open source program and its ansi c++ code is available at the following address : ( we do not present it here because of its length ) .the code does not use any platform specific header files , and with minor modifications it can be compiled with essentially any c++ compiler that is not mentally challenged .* acknowledgement * we thank j. filar for his interest in this work .the work of ve , sr was supported , in part , by the australian research council discovery grant dp0666632 .the work of pz was partially supported by the president of russian federation grant nsh - u329.2006.1 and by the russian foundation for basic research grant 05 - 01 - 00899 .tutte , w. t. _ graph theory_. cambridge univ . press , 1984 .eppstein , d. improved algorithms for 3-coloring , 3-edge - coloring , and constraint satisfaction , 12th acm - siam symp .discrete algorithms , washington , 2001 , 329337 .bondy , j. a. and murty , u. s. r. , _ graph theory with applications_. ny , north holland , 1976 .ejov , v. v. , filar , j. a. , lukas , s. k. and zograf , p.g .clustering of spectra and fractals of regular graphs .preprint math.co/0610742 ( to appear in jmaa ) .
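for readers who want a compact reference implementation of the overall pipeline (enumerate tait colorings, then test the tait cycles for connectedness), here is a minimal python sketch of the same idea. it omits the rigid/soft preprocessing that gives the c++ program its speed, pins the colors of the three edges at vertex 0 to factor out color permutations, and assumes a simple cubic graph:

    def tait_colorings(n, edges):
        # yield proper 3-edge-colorings of a simple cubic graph,
        # up to permutation of the three colors
        inc = [[] for _ in range(n)]
        for i, (u, v) in enumerate(edges):
            inc[u].append(i); inc[v].append(i)
        color = [-1] * len(edges)
        for c, e in enumerate(inc[0]):      # pin the three edges at vertex 0
            color[e] = c
        todo = [i for i in range(len(edges)) if color[i] == -1]

        def feasible(e, c):
            u, v = edges[e]
            return all(color[f] != c for w in (u, v) for f in inc[w] if f != e)

        def rec(k):
            if k == len(todo):
                yield tuple(color); return
            e = todo[k]
            for c in range(3):
                if feasible(e, c):
                    color[e] = c
                    yield from rec(k + 1)
                    color[e] = -1
        yield from rec(0)

    def hamiltonian_cycles(n, edges, coloring):
        # tait cycles of a coloring that are single (hamiltonian) cycles
        for drop in range(3):               # keep the two remaining colors
            adj2 = [[] for _ in range(n)]
            for (u, v), c in zip(edges, coloring):
                if c != drop:
                    adj2[u].append(v); adj2[v].append(u)
            cyc, prev, cur = [0], -1, 0     # walk the 2-factor from vertex 0
            while True:
                a, b = adj2[cur]
                prev, cur = cur, (b if a == prev else a)
                if cur == 0:
                    break
                cyc.append(cur)
            if len(cyc) == n:
                yield cyc

    # k4 is cubic: one tait coloring up to permutation, three hamiltonian cycles
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    for col in tait_colorings(4, edges):
        for cyc in hamiltonian_cycles(4, edges, col):
            print(col, cyc)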
we propose an effective algorithm that enumerates ( and actually finds ) all 3-edge colorings and hamiltonian cycles in a cubic graph . the idea is to make a preliminary run that separates the vertices into two types : `` rigid '' ( such that the edges incident to them admit a unique coloring ) and `` soft '' ones ( such that the edges incident to them admit two distinct colorings ) , and then to perform the coloring . the computational complexity of this algorithm is on a par with ( or even below ) the fastest known algorithms that find a single 3-edge coloring or a hamiltonian cycle for a cubic graph .
connecting the bench of experiment and the blackboard of theory offers physicists opportunities for creativity that we propose to make explicit .traditional views underplay the physicist s role in making these connections .although physicists have wished for mathematics that would connect directly to experiments on the bench , the equations of quantum mechanics express quantum states and operators not directly visible in spectrometers or other devices . herewe look into quantum mechanics as mathematical language used to model behaviors of devices arranged on the laboratory bench . after separating models as mathematical objects from any assertion that a certain model describes a given experiment with devices ,we ask : given a certain form of model , which models , if any , fit the behavior of some particular devices on the bench ?in contrast to any hope for a seamless , unique blackboard description of devices on a laboratory bench , we argue , based on mathematical proofs presented in sec . 3 , that no matter what experimental trials are made , if a quantum model generates calculated probabilities that match given experimentally determined relative frequencies , there are other quantum models that match as well but that differ in their predictions for experiments not yet performed .the proofs demonstrate what before could only be suspected : between the two pillars of calculation and measurement must stand a third pillar of choice making , involving personal creativity beyond logic , so there can be no reason to expect or demand that any two people choose alike . what does recognizing choice mean for physicists ? in physics , as in artistic work , pleasure and joy come from the choices one makes that lead to something interesting . looking back, physicists can hardly help noticing that their proudest accomplishments , whether theoretical or experimental , have involved choices made by reaching beyond logic on the basis of intuition , hunches , analogies some kind of guess , perhaps inspired , but still outside of logic . to understand the proofs is to see opportunities for making guesses .although hunches and guesses and intuition can be as personal as dreams , the recognition of guesswork as a permanent pillar of physics has more than personal impact : 1 . describing device behaviorwill be recognized in sec . 4 as a bi - lingual enterprise , with a language of wrenches and lenses for experimental trials on the bench and a different language of states and operators for the blackboard , linked by metaphors as guesswork .we show how freedom to choose particles as constituents of models of devices both helps in modeling devices and allows us to replace a widespread but questionable postulate of `` state reductions '' by a theorem .the need for bi - lingual descriptions bridging bench and blackboard gives local color to certain concepts . in sec .5 we develop a notion of _ force _ in the context of light detection that gives meaning both at the blackboard and the bench not only to expectation values of light forces , but also to higher - order statistics associated with them , with application to models and detecting devices by which to better distinguish one source of weak light from another .3 . in sec . 
6 we uncover a symmetry , pertaining to entanglement , to make vivid the way judgments about how to model light are interdependent with judgments about how to model light - detectors .experimental records can hold : ( 1 ) numerals interpreted as the settings of knobs that control an experiment , and ( 2 ) numerals interpreted as experimental outcomes , thought of as the clicks and flashes and electronically tallied pulses by which the devices used in the laboratory respond to and measure whatever is at issue . as an abstraction by which one can model experimental outcomes, quantum theory offers what we shall call _ theoretical outcomes _ ( referred to in the literature variously as _ outcomes , results , the finding of particles _ and _ the finding of states _ ) .probabilities of theoretical outcomes are expressed in terms of states and operators by what we shall call _ quantum models_. we discuss these first , and then distinguish the probabilities expressed by models from relative frequencies of experimental outcomes . the propositions about linking models to devices ( sec .3 ) can be proved using any formulation of quantum mechanics that includes probabilities of theoretical outcomes .here is a standard formulation taken from dirac and von neumann , bulwarked by a little measure theory ; however , as discussed in sec.[sec:4 ] , we invoke no postulate of state reductions .let be a hilbert space , let be any self - adjoint operator of unit trace on ( otherwise known as a density operator ) , and let be a -algebra of subsets of a set of possible_ theoretical outcomes_. by a _ theoretical outcome _ we mean a number or a list of numbers as a mathematical object , in contrast to an experimental outcome extracted from an experimental record .let be any projective resolution on of the identity operator on ( which implies that for any , is a self - adjoint projection ) .let be a unitary time - evolution operator ( typically defined by a schrdinger equation or one of its relativistic generalizations ) .these mathematical objects can be combined to define a probability distribution on , parameterized by : , \label{eq : mu}\ ] ] where is the probability of an outcome in the subset of , for the parameter value . for a probability to be compared to anything experimental , one needs to make explicit the dependence of the and that generate it on the experimentally controllable parameters .it is convenient to think of these parameters as the settings of various knobs ; to express them we let and be mathematically arbitrary sets ( interpreted as sets of knob settings ) .let be the set of functions from to density operators acting on .let be the set of functions from to projective resolutions of the identity on of the identity operator on .then what we shall call _ a specific quantum model _ is a triple of functions together with , , and a unitary evolution operator . by the basic rule of quantum mechanics , such a specific quantum model generates a probability - distribution as a function of knob settings and time : & & \qquad\mu(a , b , t ) ( \omega)=\mathrm{tr}[u(t ) \rho(a)u^\dag(t)e(b)(\omega ) ] .\label{eq : muab } \end{aligned}\ ] ] often one needs something less specific than a triple . by a _ model _we shall mean a set of properties of that limit but need not fully specify , , and .for example , in modeling entangled light , we might construct a model in this sense by specifying relevant symmetry properties of , , and , leaving many fine points unspecified . * remarks * 1. 
an element can be a list : ; similarly an element can be a list .2 . by the _ domain _ of a model ,we mean the cartesian product of the sets .models of a given domain can differ as to the functions , , and defined on the sets , , and , respectively , so that different models with a given domain can differ in the states , operators , and probabilities that they assert .3 . in defining the domain of a model, we view as a function from ] . ] and a little algebra implies & \leq & |\mathrm{tr}(a_1^{1/2}ea_2^{1/2})| + |\mathrm{tr}(a_1^{1/2}(1-e)a_2^{1/2})| \nonumber \\[3pt ] \noalign{\goodbreak } & \leq & ( \mathrm{tr}\,a_1)^{1/2 } [ \mathrm{tr}(ea_2^{1/2}a_2^{1/2}e)]^{1/2}\nonumber\\[3pt ] & & \mbox { } + [ \mathrm{tr}(a_1^{1/2}(1-e)(1-e)a_1^{1/2})]^{1/2 } ( \mathrm{tr}\,a_2)^{1/2 } \nonumber \\[3pt ] & = & [ \mathrm{tr}(ea_2^{1/2}a_2^{1/2}e)]^{1/2 } + [ \mathrm{tr}(a_1^{1/2}(1-e)(1-e)a_1^{1/2})]^{1/2 } \nonumber \\[3pt ] & = & [ \mathrm{tr}(a_2e)]^{1/2 } + [ 1 - \mathrm{tr}(a_1e)]^{1/2 } .\end{aligned}\ ] ] expanding the notation , we have & & \qquad\mathrm{tr}[\rho(a_1)^{1/2}\rho(a_2)^{1/2}]\leq \min_{b , t,\omega}\{[\mathrm{tr}(\rho(a_2)u^\dag(t)e(b)(\omega)u(t))]^{1/2 } \nonumber \\[3pt ] & & \hskip1.85in\mbox { } + [ 1-\mathrm{tr}(\rho(a_1)u^\dag(t)e(b)(\omega)u(t))]^{1/2}\},\end{aligned}\ ] ] which , with eq .( [ eq : muab ] ) , completes the proof ._ example _ : for , if for some and , and , then it follows that \leq \epsilon^{1/2 } + \delta^{1/2} ] . in this sensethe models and conflict concerning their predictions .the connection of any specific quantum model to experiments is via a probability function .this and the proofs of propositions 2 and 4 show something that experiments can not show , namely that modeling an experiment takes guesswork , and that a model , once guessed , is subject to surprises arising in experiments not yet performed .some guesses get tested ( one speaks of _ hypotheses _ ) , but testing a guess requires other guesses not tested . by way of example , to guide the choice of a density operator by which to model the light emitted by a laser , one sets up the laser , filters , and a detector on a bench to produce experimental outcomes . but to arrive at any but the coarsest properties of a density operator one needs , in addition to these outcomes , a model of the detector , and concerning this model , there must always be room for doubt ; we can try to characterize the detector better , but for that we have to assume a model for one or more sources of light . when we link bench and blackboard , we work in the high branches of a tree of assumptions , holding on by metaphors , where we can let go of one assumption only by taking hold of others . because of the guesswork needed to bridge between models and experiments , describing device behavior is forever a bi - lingual enterprise , with a language of wrenches and lenses for the bench and a different language of states and operators for the blackboard. we will show how some words work as metaphors , straddling bench and blackboard , where by ` words ' we mean to include whatever mathematical symbols are used to describe devices .we consider the mathematics of quantum mechanics not in contrast to words but as blackboard language , words of which are sometimes borrowed for use at the bench to describe devices . 
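the inequality obtained above holds for every projection, and it is easy to probe numerically; the sketch below (ours, requiring numpy and scipy) draws random qubit density operators and a random rank-one projection and checks tr( rho1^{1/2} rho2^{1/2} ) <= [ tr( rho2 e ) ]^{1/2} + [ 1 - tr( rho1 e ) ]^{1/2}:

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(1)

    def rand_density(d):
        # random d-dimensional density operator (wishart construction)
        a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        rho = a @ a.conj().T
        return rho / np.trace(rho).real

    def rand_projection(d):
        # projection onto a random one-dimensional subspace
        v = rng.normal(size=d) + 1j * rng.normal(size=d)
        v /= np.linalg.norm(v)
        return np.outer(v, v.conj())

    for _ in range(1000):
        r1, r2, e = rand_density(2), rand_density(2), rand_projection(2)
        lhs = np.trace(sqrtm(r1) @ sqrtm(r2)).real
        rhs = np.sqrt(np.trace(r2 @ e).real) + np.sqrt(1 - np.trace(r1 @ e).real)
        assert lhs <= rhs + 1e-9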
by showing some choices of metaphorical uses of the words _ state , operator , spacetime , outcome , and particle _ , we promote freedom to invent particles as needed to describe interesting features of device behavior .recognizing choices in word use reflects back on how we formulate quantum mechanics : the notion of repeated measurements ` of a state ' will be revealed as neither necessary nor sensible , and the so - called postulate of state reductions will evaporate , leaving in its place a theorem .we start by looking at several related but distinct uses of _spacetime coordinates_. in the laboratory one uses clocks and rulers to assign coordinates to acts of setting knobs , transmitting signals , recording detections , _ etc _ , and one thinks of these experimentally generated coordinates as points of a spacetime something mathematical .we call this a ` linear spacetime ' to distinguish it from a second spacetime , that we call ` cyclic , ' onto which the linear spacetime is folded , like a thread wound around a circle , so that experimental outcomes for different trials can be tallied in bins labeled by coordinates .distinct from linear and cyclic spacetimes , any quantum - mechanical model involves a third spacetime on which are defined solutions of a schrdinger equation ( or one of its relativistic generalizations ) , and it is with reference to this spacetime that particles as theoretical constructs are defined .any quantum model written in terms of particles generates probabilities , and if the probabilities of the model fit the relative frequencies of experimental outcomes well enough , one is tempted to say that one has `` seen the particles '' ; however , because particles in their mathematical sense are creatures of models , and multiple , conflicting models are consistent with any given experimental data , this `` seeing of particles '' stands on guesswork and metaphor , needed , for example , to bind the _ electron _ as a solution of the dirac equation defined on a model spacetime to a flash from a phosphor on a screen .this metaphorical role of _ electron , photon , etc ._ , though habitual and easily overlooked , can be noticed when a surprise prompts one to make a change in the use of the word _ electron _ at the blackboard while leaving the use at the bench untouched , or _vice versa_.=-1 next we address notions of ( a ) components of a theoretical outcome , ( b ) a distinction between signal particles and probe particles , and ( c ) various measurement times . we take these in order .the term _ theoretical outcome _ pertains to a vector space of multi - particle wave functions defined on a model spacetime .this vector space is a tensor product of factors , one factor for each particle . for a resolution of the identity that factorsaccordingly , we shall view each theoretical outcome for this resolution as consisting of a list of components , one component for each of the factors .a probability density for such multi - component outcomes can be viewed as a joint probability density for the component parts of the outcome , modeling the joint statistics of the detection of many particles .we who model are always free to shift the boundary between states ( as modeled by density operators ) and measuring devices ( as modeled by resolutions of the identity ) so as to include more of the measuring devices within the scope of the density - operator part of the model .consider for example a coarse model that portrays a detecting device by a resolution of the identity . 
while a resolution of the identity has no innards, detecting devices do. to model, say, a photo-diode and its accompanying circuitry, we can replace the coarse model by a more detailed model: the quantum state asserted by the coarse model becomes what we call a _signal_ state, a factor in a tensor product (or more generally a sum of tensor products) accompanied by factors for one or more _probe-particle states_. according to this more detailed model, the signal state is measured only indirectly, via a probe state with which it has interacted, followed by a measurement of the probe state, as modeled by a resolution of the identity that works on the probe factor, not the signal factor. recognizing probe states as free choices in modeling clarifies a variety of times relevant to quantum measurements. for any quantum model, the form of eq. ([eq:muab]) links an outcome (whether single- or multi-component) to some single point in time; however, the use of such a model is to describe an actual or anticipated experiment, and for this, as described above, one is always free to choose a more detailed model, in which the state of the coarse model appears as the _signal_ state that interacts with a probe state, followed by a measurement of the probe state at some time after the interaction. the detailed model thus replaces the point in time by a time stretch during which the signal and probe states interact, thereby separating the time during which the signal state interacts with the probe from the time at which the probe state is measured. in more complex models involving more probe states, a succession of `` times of measurement '' in the sense of interactions can be expressed by a single resolution of the identity. finally, in modeling spatially dispersed signal states that interact with entangled probe particles, one can notice a prior `` probe-interaction time '' during which the probe particles must interact with one another, in order to have become entangled. recognizing choice in modeling allows one to sidestep a long-troubling issue in formulating quantum language. in logical conflict with the schrödinger equation as the means of describing time evolution, dirac and other authors introduce _state reductions_ by a special postulate that asserts an effect of a resolution of the identity on a quantum state, allegedly needed to express repeated measurements of a system. once we recognize the modeling freedom to make signal-probe interactions explicit, we can always replace any story about devices involving a `` state to be measured repeatedly '' by a model in which a _signal_ state interacts with a succession of probe states, followed by a simultaneous measurement of all the probe states, as expressed by a single resolution of the identity and a composite state that incorporates both signal and probe states. thus any apparent need for a postulate to do with `` repeated measurements '' evaporates, and with it the unfortunate appearance of state reductions in a postulate. although inconsistent as a postulate, state reduction still works in many cases as a trick of calculation, as justified by the following theorem. *theorem*: assume any specific model of the form of eq. ([eq:muab]), and assume an outcome with two components $j$ and $k$.
if the projection for this outcome is a tensor product $e_a(j)\otimes e_b(k)$, then for any density-operator function, the joint probability distribution induces a conditional probability distribution for $k$ given $j$ that matches the quantum probability of $k$ obtained using a `` reduced density operator '' given by the usual rule for state reduction applied to the outcome $j$. _proof_: streamline notation by suppressing the dependence on $b$, $t$, and $\omega$, and incorporate the unitary evolution into $\rho$, so that the relevant form is that of eq. ([eq:mu]). for any state $\rho$ it follows from eq. ([eq:mu]) that $\pr(j,k)=\mathrm{tr}[\rho(e_a(j)\otimes e_b(k))]$. the conditional probability of $k$ given $j$ is defined by bayes rule:
$$\pr(k|j) = \frac{\mathrm{tr}[\rho(e_a(j)\otimes e_b(k))]}{\mathrm{tr}[\rho(e_a(j)\otimes\sum_{k'}e_b(k'))]}.$$
by the definition of a resolution of the identity, we have $\sum_{k'}e_b(k')=1_b$; recalling that $e_a(j)\otimes 1_b$ is a projection that commutes with $1_a\otimes e_b(k)$, one then has
$$\pr(k|j) = \mathrm{tr}[\rho_j(1_a\otimes e_b(k))],$$
for an operator
$$\rho_j = \frac{(e_a(j)\otimes 1_b)\,\rho\,(e_a(j)\otimes 1_b)}{\mathrm{tr}[\rho(e_a(j)\otimes 1_b)]}.$$
this matches the ` reduced density operator ' obtained by the usual rule of state reduction. q.e.d. *remarks*: 1. in the case of a pure state, the reduction yields again a pure state, the projected and renormalized original, which is one form of the usual rule for state reduction, but here obtained by calculation with no need for any postulate. 2. either of the outcome components can be a composite, so the theorem applies to cases involving more than two outcome components. 3. in relativistic formulations of quantum mechanics, detections at spatially separated locations $a$ and $b$ can be modeled by projections of the form assumed. of interest in particle physics, astrophysics, and emerging practical applications, sources of weak light are characterized experimentally by the outcomes of detectors. because detector outcomes are statistical, trial-to-trial differences in outcomes can arise both from trial-to-trial irregularity in the sources and from quantum indeterminacy in their detection. as we shall see, detecting devices work in two parts, one of which balances a light-induced force against some reference. by taking advantage of the freedom to invent probe particles when we model particle detectors, we are led to a quantum-mechanical expression of force in the context of balancing devices, with application to models and detecting devices by which to better distinguish one source of weak light from another. in newtonian physics, the word _force_ is used both on the blackboard and with balancing devices. in quantum physics, _force_, as used at the blackboard, gets re-defined in terms of the expectation values pertaining to dynamics of wave functions. we will find useful a concept of force in characterizing light. because of its employment in experimental work, our concept of _force_ necessarily takes on local coloring special to one or another experimental bench; we develop a notion of _force_ in the context of light detection that gives meaning both at the blackboard and the bench not only to expectation values of forces, but also to higher-order statistics associated with them. these higher-order statistics allow the expression, within quantum mechanics, of the teetering of a balance that happens when forces are nearly equal. we begin by reviewing some details of detector behavior. under circumstances to be explored, particle detectors employed to decide among possible quantum states produce unambiguous experimental outcomes. seen up close, a detecting device consists of two components. the first is a _transducer_ such as a photo-diode that responds to light by generating a small current pulse.
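the theorem above also lends itself to a direct numerical check. the sketch below (our own illustration; the two-qubit state and the projector labels are invented for the example) verifies, for a random density operator, that bayes-rule conditioning on outcome $j$ reproduces the probabilities computed from the projected-and-renormalized (`` reduced '') density operator.

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = m @ m.conj().T
rho /= np.trace(rho).real          # random two-qubit density operator

# one-qubit projectors onto |0> and |1> for detectors a and b
p0 = np.diag([1.0, 0.0]); p1 = np.diag([0.0, 1.0])
e_a = [p0, p1]; e_b = [p0, p1]
eye2 = np.eye(2)

j = 0  # condition on detector a reporting outcome j
# bayes rule: pr(k|j) = tr[rho (e_a(j) x e_b(k))] / tr[rho (e_a(j) x 1)]
joint = [np.trace(rho @ np.kron(e_a[j], e_b[k])).real for k in range(2)]
bayes = np.array(joint) / sum(joint)

# usual state-reduction rule: project, renormalize, then trace out subsystem a
pj = np.kron(e_a[j], eye2)
rho_j = pj @ rho @ pj / np.trace(rho @ pj).real
rho_b = rho_j.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)   # partial trace over a
reduced = np.array([np.trace(rho_b @ e_b[k]).real for k in range(2)])

assert np.allclose(bayes, reduced)
print(bayes, reduced)
```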
to tally a transducer response as corresponding to one or another theoretical outcome in the sense of quantum mechanics, one has to _classify_ the response using some chosen criterion. as phrased in the engineering language of inputs and outputs, the response of the transducer is fed as an input to a second component of the detector, in effect an unstable balance implemented as a flip-flop (made of transistors organized into a cross-coupled differential amplifier). the flip-flop produces an out_put_ intended to announce a decision between two possible experimental out_comes_, say 0 and 1. if we think classically, we picture the flip-flop as a ball and two bins, one bin for each possible outcome, separated by a barrier, the height of which can be adjusted, as shown in fig. [fig:1]. the ball, starting in bin 0, is kicked by the transducer; an outcome of 1 is recorded if and only if the balance is tipped and the ball rolls past the barrier into bin 1. this ball-and-bin technique avoids ambiguity by virtue of a convention that gives the record a certain leeway: it does not matter if the ball is a little off center in its bin, so long as the ball does not teeter on the barrier between bins. although usually producing an unambiguous outcome, the flip-flop, seen up close, can teeter in its balancing, perhaps for a long time, before slipping into one bin or the other. absent some special intervention, two parties (people or machines) to which a teetering output fans out can differ in how they classify this output as an outcome: one finds a 0, the other finds a 1. to reduce the risk of disagreement, the two parties have to delay their reading of the output, hopefully until the ball slips into one or the other bin. ugly in this classical cartoon are two related features: (1) the ball can teeter forever, so that waiting is no help, and (2) the mean time for teetering to end is entirely dependent on some _ad hoc_ assumption about `` noise ''. although we have described the flip-flop classically, it is built of silicon and glass, presumably amenable to quantum modeling. to see what quantum models offer us, the first step is to recall that a quantum model implies probabilities to be related to an experiment, so that inventing a quantum model and choosing an experimental design go hand in hand. thirty years ago, thinking not in quantum but in circuit terms, we designed and carried out an experiment to measure teetering of the output of a flip-flop. as we recognized only recently, the record of this experiment is compatible with some quantum models. a quantum model to be offered shortly describes the experiment already performed and serves as a guide for designing future experiments to exploit what can be called a statistical texture of force, previously obscured by the `` noise '' invoked in classical analysis. to experiment with the teetering of a detecting device comprised of a transducer (a photo-diode) connected to a flip-flop, shown in fig. [fig:2], we replace the transducer by a laboratory generator of weak electrical pulses to drive the flip-flop. (as noted in the caption of fig. [fig:2], one could instead replace one transistor of the flip-flop by a photo-transistor.)
putting the flip-flop into a teetering state takes very sensitive adjustment of the pulse generator, achieved by feedback to the pulse generator from a running average of the outcomes produced downstream. the output of the flip-flop is made to fan out, as shown in fig. [fig:3], to a matched pair of auxiliary flip-flops, each of which acts as a detector, not of the incoming light but of the output of the first flip-flop. the auxiliary flip-flops are clocked at a time later than the first flip-flop is. the experimental outcome consists of two binary components, one from each auxiliary flip-flop. if after the waiting time the output of the first flip-flop is still teetering, the auxiliary flip-flops can differ, one registering a 1 while the other registers a 0; the disagreement between the two flip-flops registers the teetering of the output of the first flip-flop. the measured relative frequency of disagreements between the auxiliary flip-flops is shown in fig. [fig:4]. we want to model the teetering statistics that we will use to discriminate among various sources of weak signals. traditional analyses of solid-state detecting devices and their flip-flops invoke quantum mechanics only to determine parameters for classical stories involving voltage and current. analyzed that way, teetering in a photo-diode-based detector that employs a flip-flop made of transistors arises in two ways: first, there can be teetering in the entry of electrons and holes into the conduction band of the photo-diode; second, there can be teetering in the response of the flip-flop to whatever amplified pulse comes from the photo-diode. although both these teeterings involve electrons and holes going into a conduction band, the statistical spread of outputs for a given state is blamed on _noise_ unconnected with the signal, and known analyses of a flip-flop invoke _noise_ to evade the embarrassment of a possible infinite hesitation. avoiding the invocation of `` noise, '' we picture the flip-flop quantum mechanically as a pair of probe particles. light acting via the transducer applies a force to the two probe particles. this scattering process transforms an initially prepared state of the light and probes to an out-state consisting of a sum (or integral) of products, each of which has a factor for the light and a factor for the two probe particles of the flip-flop. after some waiting time of evolution, the two probe particles are measured, as expressed by a resolution of the identity that ignores the signal state; thus the probabilities of theoretical outcomes of the detection after the interaction are expressible by a reduced density operator obtained by tracing over the signal states. in this view, teetering shows up in the probability of detecting the two probe particles on different sides of a reference; we will show how this probability depends on both the signal detected and a waiting time, and how planck's constant enters this dependence. in the model presented here, we simplify the effect of the signal state to that of preparing, at time 0, a pair of probe-particle wave functions.
for simplicity, the probe-particle wave functions have only one space dimension. let $x$ be the space coordinate for one probe particle and $y$ be the space coordinate for the other. the difference between one possible signal state and another is reflected by concentrating the initial probe-state wave functions slightly to one side or the other of an energy hump centered at the origin. (as might be expected, teetering is most pronounced in the borderline case of a signal state that puts the initial probe-particle wave functions evenly over the energy hump.) we model one auxiliary flip-flop as a resolution of the identity that has a theoretical outcome of 1 for $x>0$ and 0 for $x<0$; similarly the other is modeled as a resolution of the identity for the sign of $y$. thus the two-component theoretical outcomes are 00 for $x<0$, $y<0$; 01 for $x<0$, $y>0$; 10 for $x>0$, $y<0$; and 11 for $x>0$, $y>0$. by assuming a coupling between the two probe particles, we will model how increasing the waiting until time $t$ to detect the probe particles decreases the probability of disagreement, _i.e._ diminishes the probability of $x$ and $y$ being measured with different signs. thinking of the $x$-probe particle as a wave function concentrated near an energy hump, assuming that the long-time behavior of the particle depends only on the hump curvature, and for the moment neglecting coupling between the two probe particles, we express the dynamics of the $x$-particle by the schrödinger equation for an unstable oscillator:
$$i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2} - \frac{1}{2}m\omega^2x^2\psi, \label{eq:phiprob}$$
where the instability comes from the minus sign in the term proportional to $x^2$. we express the $y$-probe particle similarly. in order to produce growth over time in the correlation of the detection probabilities, we put in a coupling term of dimensionless relative strength $\lambda$ between the two particles. this produces the two-particle schrödinger equation that is the heart of our model, eq. ([eq:pairphys]). the natural time parameter for this equation is $1/\omega$; similarly there is a natural distance parameter $\sqrt{\hbar/(m\omega)}$. for the initial condition, we will explore a wave packet of the form
$$\psi(x,y,0) \propto \exp[-(x-c)^2/2b^2]\,\exp[-(y-c)^2/2b^2]. \label{eq:initxy}$$
for $c=0$, this puts the recording device exactly on edge, while positive or negative values of $c$ bias the recording device toward 1 or 0, respectively. as discussed in appendix a, the solution to this model is a gaussian joint probability density in the sum and difference coordinates, with time-dependent widths
$$b_1^2(t) = b^2\left[1+\left(\frac{\hbar^2}{\omega^2m^2b^4}+1\right)\sinh^2\omega t\right], \qquad b_2^2(t) = b^2\left[1+\left(\frac{\hbar^2}{\omega^2m^2b^4(\lambda-1)}-1\right)\sin^2\sqrt{\lambda-1}\,\omega t\right]. \label{eq:bphysdef}$$
the probability of two detections disagreeing is the integral of this density over the second and fourth quadrants of the $(x,y)$-plane. for the especially interesting case of $c=0$, this integral can be evaluated explicitly, as shown in appendix a:
$$\pr(\mathrm{disagree}) = \frac{2}{\pi}\tan^{-1}\left(\frac{\displaystyle 1+\left[\frac{\hbar^2}{\omega^2m^2b^4(\lambda-1)}-1\right]\sin^2\sqrt{\lambda-1}\,\omega t}{\displaystyle 1+\left(\frac{\hbar^2}{\omega^2m^2b^4}+1\right)\sinh^2\omega t}\right)^{1/2}. \label{eq:edge1phys}$$
this formula works for all real $\lambda$. for $\lambda>1$, it shows an oscillation, as illustrated in fig. [fig:4]. for the case $\lambda<1$, the numerator takes on the same form as the denominator, but with a slower growth with time and lacking the oscillation, so that the probability of disagreement still decreases with time, but more slowly. picking values of $\lambda$ and $b$ to fit the experimental record, we get the theoretical curve of fig. [fig:4], shown in comparison with the relative frequencies (dashed curve) taken from the experimental record. the curve shown corresponds to particular fitted values of $\lambda$ and of $b$ expressed as a multiple of the characteristic distance.
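eq. ([eq:edge1phys]) is straightforward to evaluate. the sketch below (ours) works in the dimensionless units of appendix a, where $\hbar=m=\omega=1$ so that the expression reduces to eq. ([eq:edge1]); the values of $\lambda$ and $b$ are illustrative placeholders, not the fitted ones.

```python
import numpy as np

def disagreement_probability(t, lam, b):
    """dimensionless disagreement probability, p = (2/pi) arctan(b2(t)/b1(t)),
    written here for the oscillatory regime lam > 1."""
    b1_sq = b**2 * (1 + (1 / b**4 + 1) * np.sinh(t) ** 2)
    s = np.sqrt(lam - 1)
    b2_sq = b**2 * (1 + (1 / (b**4 * (lam - 1)) - 1) * np.sin(s * t) ** 2)
    return (2 / np.pi) * np.arctan(np.sqrt(b2_sq / b1_sq))

t = np.linspace(0.0, 3.0, 7)
for lam in (1.5, 4.0):          # lam > 1 gives the oscillatory numerator
    p = disagreement_probability(t, lam, b=1.0)
    print(f"lambda = {lam}:", np.round(p, 4))
# at t = 0 the probability is (2/pi)*arctan(1) = 0.5: an on-edge flip-flop
# yields disagreement half the time, and the probability decays with waiting.
```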
according to this model, a design to decrease the half-life of disagreement calls for making both the oscillator frequency $\omega$ and the coupling $\lambda$ large. raising $\lambda$ above 1 has the consequence of the oscillation, which can be stronger than that shown in fig. [fig:4]. when the oscillation is pronounced, the probability of disagreement, while decreasing with the waiting time, is not monotonic, so in some cases judging sooner has less risk of disagreement than judging later. as proposition 2 of sec. [sec:3] suggests, one can construct alternatives to the above model of a flip-flop. instead of initial probe states specified by `` locating blobs, '' expressed in the choice of the value of $c$ in eq. ([eq:initxy]), a model can employ initial probe states specified by momenta. in this `` shooting of probe particles at an energy hump, '' the initial wave functions are concentrated in a region away from the origin and propagate toward the energy saddle. writing a 0 is expressed by an expectation momentum for the initial state less than that for the initial state that corresponds to writing a 1. hints for this approach are in the paper of barton, which contains a careful discussion of the energy eigenfunctions for the single inverted oscillator of eq. ([eq:phiprob]), as well as of wave packets constructed from these eigenfunctions. such a model based on an energy distinction emphasizes the role of a flip-flop as a decision device: it `` decides '' whether a signal is above or below the energy threshold. for finite times, eq. ([eq:edge1phys]) has a well-defined limit as $\hbar\to 0$; this classical limit contrasts with the quantum-mechanical eq. ([eq:edge1phys]) in how the disagreement probability depends on the waiting time. quantum behavior is also evident in the entanglement exhibited by the quantum-mechanical model. at $t=0$ the wave function is the unentangled product state of eq. ([eq:initxy]). although it remains a product state when viewed in the sum and difference coordinates discussed in appendix a, as a function of the $(x,y)$-coordinates it becomes more and more entangled with time, as it must to exhibit correlations in detection probabilities for the $x$- and $y$-particles. by virtue of a time-growing entanglement and the stark contrast between eq. ([eq:edge1phys]) and its classical limit, the behavior of the 1-bit recording device exhibits quantum-mechanical effects significantly different from any classical description. the alternative model based on energy differences can be expected to depend on a _sojourn time_ with its interesting dependence on planck's constant, as discussed by barton. both models thus bring planck's constant into the description of decision and recording devices, not by building up the devices atom by atom, so to speak, but by tying quantum mechanics directly to the experimentally recorded relative frequencies of outcomes of uses of the devices. detection of teetering in a detector of weak light pulses allows finer distinctions by which to characterize a source of that light. without teetering, a first measure of a weak light source is its mean intensity, expressed operationally as the fraction of 1s detected in a run of trials in which it illuminates a detector. now comes a refinement that draws on teetering.
if the detector's balancing flip-flop fans out to the auxiliary flip-flops, two sources that produce the same fraction of 1s can be tested for a finer-grained distinction as follows. for each source, using feedback to stabilize the relation between the source and the detector, so that the running fraction of 1s detected is held nearly steady, conduct one run of trials for a fixed waiting time, a second run of trials for another fixed waiting time, _etc._ for each source, record the fraction of trials in each run for which the outcome components from the two auxiliary flip-flops disagree. these additional data express additional statistical `` texture '' by which to compare one source against another. even when two sources produce the same overall fractions of 1s, they are still measurably distinguishable if, for some waiting time, they consistently show strong differences between their disagreement fractions. for example, if the first source has more classical jitter than the second, so that its quantum state bounces up and down in expectation energy from pulse to pulse, then the first source is more apt than the second to push the probe particles both over the hill or neither over the hill. in other words, the first source will produce more teeterings, and hence we would find its disagreement fraction significantly greater than that of the second. with the freedom to invoke probe particles as developed in secs. 4 and 5, we can show a striking additional freedom of choice in modeling, resolvable only by going beyond the application of logic to experimental data. this freedom pertains to entanglement. suppose that experimental trials yield outcomes consistent with a model according to which a source of entangled weak light illuminates a pair of unentangled detectors at two separate locations, $a$ and $b$. models of detectors detailed enough to invoke probe-particle states, as in sec. 5, must specify how these probes are prepared. as discussed in a different context long ago, there is the possibility of entangling the probe particles, amounting to entangling the detectors. this possibility of an entangled pair of detectors points to a symmetry relation between entanglement of signals and entanglement of probes. to see how this works, consider first modeling a single detector involving a probe state prepared by choosing some $q_0$, where $q_0$ is a parameter such as the expectation momentum for this state. consider also a set of possible signal states labeled by a parameter $q$. assume that the detector model calls for detecting the probe particle after its interaction with the signal particle, as expressed by a measurement operator acting on the probe alone. then the probability of outcome $j$ resulting from an initial signal interacting with a probe has the form of the square of a complex amplitude that depends both on the parameter $q$ that labels the signal state and on the parameter $q_0$ that labels the probe state, with a unitary $u$, acting on the combined signal-probe space of wave functions, expressing the signal-probe interaction.
assume for the moment that the probability in eq. ([eq:pj]) is symmetric under interchange of the signal and probe parameters, which implies that the corresponding amplitudes agree up to some real-valued phase function $\theta$. to see how this symmetry impacts the modeling of a pair of detectors, consider two such detectors, one at location $a$, the other at $b$, having identical initial probe states $|p_a(q_0)\rangle$ and $|p_b(q_0)\rangle$, respectively. for unentangled signals having initial states $|s_a(q_1)\rangle$ and $|s_b(q_2)\rangle$, the amplitude for the joint outcome $j$ at $a$ and $k$ at $b$ is then
$$[(1_{sa}\otimes e_a(j))\otimes(1_{sb}\otimes e_b(k))](u_a\otimes u_b)[(|s_a(q_1)\rangle|p_a(q_0)\rangle)\otimes(|s_b(q_2)\rangle|p_b(q_0)\rangle)] \label{eq:joe}$$
(which can be written as a product of $a$- and $b$-factors, expressing the lack of correlation when there is no entanglement). now consider the same pair of detectors responding to an entangled signal state
$$\mathcal{n}[s_{a1}s_{b2} + e^{i\theta}s_{a2}s_{b1}], \label{eq:entstate}$$
where $\mathcal{n}$ is a normalizing constant, dependent on $q_1$ and $q_2$, and we have condensed the notation by writing $s_{a1}$ for $|s_a(q_1)\rangle$, _etc._ the combined signal-probe state written with the tensor products in the order assumed in eq. ([eq:joe]) is then
$$\mathcal{n}[s_{a1}p_{a0}s_{b2}p_{b0} + e^{i\theta}s_{a2}p_{a0}s_{b1}p_{b0}]; \label{eq:entstate3}$$
thus the joint probability of outcomes for the entangled signal state is
$$\pr(j,k|\mathcal{n}[s_{a1}s_{b2}+e^{i\theta}s_{a2}s_{b1}],\,p_{a0}p_{b0}) = \|\mathcal{n}[(1_{sa}\otimes e_a(j))\otimes(1_{sb}\otimes e_b(k))](u_a\otimes u_b)[s_{a1}p_{a0}s_{b2}p_{b0} + e^{i\theta}s_{a2}p_{a0}s_{b1}p_{b0}]\|^2. \label{eq:joe1}$$
assuming the invariance up to phase of eq. ([eq:syma]), the exchange of signal and probe parameters results only in phases that leave the probability unaffected, leading to the relation
$$\pr(j,k|\mathcal{n}[s_{a1}s_{b2}+e^{i\theta}s_{a2}s_{b1}],\,p_{a0}p_{b0}) = \pr(j,k|s_{a0}s_{b0},\,\mathcal{n}[p_{a1}p_{b2}+e^{i\theta}p_{a2}p_{b1}]),$$
so that outcome probabilities are the same for an entangled state measured by unentangled detectors as they are for an unentangled state measured by entangled detectors. without assuming eqs. ([eq:symp]) and ([eq:syma]), one can still ask: given a model that ascribes probabilities of outcomes to an entangled signal measured via an unentangled probe state, is there an alternative model that ascribes the same probabilities to an unentangled signal state measured via an entangled probe state? we conjecture that the answer is `` yes, '' in which case no experiment can distinguish between a model that asserts entangled signal states measured via unentangled probes and a model that asserts unentangled signal states measured via entangled probes. tai tsun wu called our attention to the conflict between the schrödinger equation and state reductions. we thank howard e. brandt for discussions of quantum mechanics. we thank dionisios margetis for analytic help. this work was supported in part by the air force research laboratory and darpa under contract f30602-01-c-0170 with bbn technologies. starting with eq.
([eq:pairphys]), and writing the time as the natural time parameter $1/\omega$ times a dimensionless `` $t$ '' and the coordinates as the natural distance parameter times dimensionless `` $x$ '' and `` $y$, '' respectively, we obtain the dimensionless form of the two-particle equation, eq. ([eq:pair]). this equation is solved by introducing a non-local coordinate change to the sum and difference coordinates $u\propto x+y$ and $v\propto x-y$. with this change of variable, eq. ([eq:pair]) separates, so the general solution is a sum of products of the form $\phi(u,t)\,\chi(v,t)$. the function $\phi$ satisfies its own schrödinger equation, which is the quantum-mechanical equation for an unstable harmonic oscillator, while $\chi$ satisfies an oscillator equation whose interpretation varies according to the value of $\lambda$, as follows: (a) for $\lambda<1$, it expresses an unstable harmonic oscillator; (b) for $\lambda=1$, it expresses a free particle; and (c) for $\lambda>1$, it expresses a stable harmonic oscillator. the last case will be of interest when we compare behavior of the model with an experimental record. by translating eq. ([eq:initxy]) into the $(u,v)$-coordinates, one obtains gaussian initial conditions for $\phi$ and $\chi$; we deal with $\phi$ and $\chi$ in order. from (5.3) of ref. [barton], the solution for $|\phi|^2$ is a gaussian whose squared width is, in dimensionless form,
$$b_1^2(t) = b^2\left[1+\left(\frac{1}{b^4}+1\right)\sinh^2 t\right]. \label{eq:bdef}$$
similarly, integrating the green's function for the stable oscillator over the initial condition for $\chi$ yields a gaussian whose squared width is
$$b_2^2(t) = b^2\left[1+\left(\frac{1}{b^4(\lambda-1)}-1\right)\sin^2\sqrt{\lambda-1}\,t\right]. \label{eq:b2def}$$
multiplying these and changing back to the $(x,y)$-coordinates yield the joint probability density, eq. ([eq:jointxy]). the probability of two detections disagreeing is the integral of this density over the second and fourth quadrants of the $(x,y)$-plane. this is most conveniently carried out in the $(u,v)$-coordinates. for the especially interesting case of $c=0$ (and $\lambda>1$), this integral can be transformed into
$$\begin{aligned} \pr(\mathrm{disagree}) &= \frac{2}{\pi b_1(t)b_2(t)}\int_0^{\infty}dv\int_{-v}^{v}du\,\exp\left\{-\frac{u^2}{b_1^2(t)}-\frac{v^2}{b_2^2(t)}\right\} \\ &= \frac{4}{\pi}\int_0^{\infty}e^{-v^2}\,dv\int_0^{b_2(t)v/b_1(t)}du\,e^{-u^2} \\ &= \frac{2}{\pi}\tan^{-1}\left(\frac{b_2(t)}{b_1(t)}\right) \\ &= \frac{2}{\pi}\tan^{-1}\left(\frac{\displaystyle 1+\left[\frac{1}{b^4(\lambda-1)}-1\right]\sin^2\sqrt{\lambda-1}\,t}{\displaystyle 1+\left(\frac{1}{b^4}+1\right)\sinh^2 t}\right)^{1/2}. \label{eq:edge1}\end{aligned}$$
it is easy to check that this formula works not only when $\lambda>1$ but also for the case $\lambda\leq 1$. for $\lambda<1$, the numerator takes on the same form as the denominator, but with a slower growth with time, so that the probability of disagreement still decreases with time exponentially, but more slowly. converting eq. ([eq:jointxy]) from dimensionless back to physical time and distance variables results in eq. ([eq:jointxyphys]) with eqs. ([eq:bphysdef]), and similarly eq. ([eq:edge1]) leads to eq. ([eq:edge1phys]). j. m. myers and f. h. madjid, in: quantum computation and information, s. j. lomonaco, jr. and h. e. brandt (eds.), contemporary mathematics series, vol. 305, american mathematical society, providence, 2002, pp. 221-244. a. verevkin, g. n. goltsman, and r. sobolewski, in: opto-canada: spie regional meeting on optoelectronic, photonics, and imaging, technical digest of spie, vol. td01, spie, bellingham, wa, 2002, pp. 39-40.
although quantum states nicely express interference effects, outcomes of experimental trials show no states directly; they indicate properties of probability distributions for outcomes. we prove categorically that probability distributions leave open a choice of quantum states and operators and particles, resolvable only by a move beyond logic, which, inspired or not, can be characterized as a guess. by recognizing guesswork as inescapable in choosing quantum states and particles, we free up the use of particles as theoretical inventions by which to describe experiments with devices, and thereby replace the postulate of state reductions by a theorem. by using the freedom to invent probe particles in modeling light detection, we develop a quantum model of the balancing of a light-induced force, with application to models and detecting devices by which to better distinguish one source of weak light from another. finally, we uncover a symmetry between entangled states and entangled detectors, a dramatic example of how the judgment about what light state is generated by a source depends on choosing how to model the detector of that light. keywords: quantum mechanics, modeling, detection, metastability, agreement. pacs: 03.65.ta, 03.65.nk, 84.30.sk.
severe acute respiratory syndrome (sars) is a new respiratory disease which was first identified in china's southern province of guangdong. sars is not merely a local endemic disease: it poses a serious risk to the medical community, is a threat to international travelers, is having a substantial negative economic impact in parts of east asia, and is spreading world-wide. the serious danger sars poses to the medical community is illustrated by the numerous cases of transmission to health-care workers. startlingly, the man who awakened the world to the dangers of sars, dr. carlo urbani, succumbed to the disease. cases of transmission between aircraft passengers are suspected, and relatively short visits to epidemic regions have resulted in infection. the most striking feature of sars, however, has proven to be its ability to rapidly spread on a global scale. one man with sars made flights from hong kong to munich to barcelona to frankfurt to london, back to munich and frankfurt, before finally returning to hong kong. another individual, an airport worker, appears to have transmitted the disease to many people. clearly, there is an unfortunate interaction between the incubation period of the virus, the widely distributed severity and infectiousness of sars in different people, and the speed and volume of passenger air travel. the adverse economic impact in parts of east asia far exceeds the disruption of previous outbreaks of avian influenza, earning comparison with the 1998 financial market crisis in that part of the world. although the causative agent of sars has been determined, a detailed understanding of the causative virus's pathogenicity and routes of transmission and of the dynamics of the epidemic is still at a very early stage. it is uncertain how the virus is transmitted: by droplet or airborne transmission or person-to-person contact. the recent development of laboratory tests promises to improve the epidemiological situation somewhat. sars is a public health crisis on a scale rarely seen. the obvious question in such a crisis is, `` can sars be contained? '' in this study, we report transmission parameters and epidemic dynamics from a model based on classes of people who are susceptible, exposed, infectious, diagnosed, and recovered (`` seijr '') that includes the effect of patient isolation. our model is consistent with the possibility of containment in toronto, ontario. sars was first identified in november 2002 in the guangdong province of china. by february, officials in hong kong reported their first cases of sars, and no later than march of this year the virus reached canada. as of april, canada is the only location outside of asia which has seen deaths as a result of sars (so far). u.s. health officials are currently investigating cases in many states (april 17, 2003). an individual exposed to sars may become infectious after an incubation period of a few days (or longer). most infected individuals either recover, typically after a week or more, or die, with reported mortality at the level of several percent or higher. sars appears to be most serious in older people, especially those who have other medical problems such as heart or liver disease. its symptoms are similar to pneumonia or other respiratory ailments and include a high fever, shortness of breath, dry cough, headache, stiff or achy muscles, fatigue and diarrhea. these symptoms, however, are not uniform.
in the us, for example, the disease seems to be a milder one than in asia. the result has been that sars was, and for the moment remains, a diagnosis of exclusion. presently, there is no treatment for sars, and diagnostic tests are just becoming available. the mortality rate is reported to be several percent or higher world-wide. experts estimate that the large majority of people with sars recover without medical intervention, while the condition of the remaining victims requires medical care. as of april 2003, the world health organization (who) was reporting cases (a mixture of probable and suspected cases) in countries around the world, along with a growing number of deaths. although researchers in the erasmus medical center in rotterdam recently demonstrated that a coronavirus (some of which produce common colds) is the causative agent of sars, the mode of transmission still remains unknown. the current hypothesis is that sars is transmitted mainly by close person-to-person contact, which may explain the relatively slow transmission scale. however, it could also be transmitted through contaminated objects, air, or by other unknown ways. it is also a mystery how the disease originated, whether in birds, pigs or other animals, nor is it known if the origin is rural or urban. in this article, a simple model for sars outbreaks is formulated. the model is used in conjunction with _global and local_ sars data to estimate the initial growth rate of the sars epidemic. these rates are used to estimate sars' _basic reproductive number_, $r_0$, the classical epidemiological measure associated with the reproductive power of a disease. $r_0$ estimates the _average_ number of secondary cases of infection generated by a _typical_ infectious individual in a population of susceptibles, and hence it is used to estimate the initial growth of a sars outbreak. we estimate $r_0$ (using data from ontario, hong kong and singapore) and find it not too different from past estimates of $r_0$ for influenza, despite the fact that _superspreaders_ of sars have been identified. in fact, the parameter values resulting from this fit can, in our population-scaled model, lead to extremely high levels of infection. we show, via simple extrapolation, that the estimated rate of growth is consistent with the reported date for the first cases of sars in hong kong; however, the first cases in toronto may be several weeks earlier than the february 23 date of the first case reported by the canadian health ministries. our best `` rough '' estimate for toronto is that the first case occurred sometime around january 29th, and not later than february 28th. the data for hong kong are fitted by fixing several parameters based on estimates of the observed rates for the corresponding processes. the growth rate is estimated from observed `` model-free '' exponential growth in singapore and hong kong. the _average_ diagnostic rate, the measure of heterogeneity between the two susceptible classes, and the effectiveness of patient isolation measures are then varied to fit the initial data for hong kong and singapore.
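the `` model-free '' growth rate amounts to a least-squares fit of a straight line to the logarithm of the cumulative case counts, followed by a backward extrapolation to one case for the crude start-date estimate. a sketch of the procedure (ours; the case counts below are invented placeholders, not the who data):

```python
import numpy as np

# illustrative cumulative case counts on consecutive days (placeholder data)
days = np.arange(10)
cases = np.array([40, 46, 52, 60, 68, 78, 90, 103, 118, 135])

# fit log(cases) = log(c0) + r * t   ==>   cases ~ c0 * exp(r * t)
r, log_c0 = np.polyfit(days, np.log(cases), 1)
print(f"estimated growth rate r = {r:.3f} per day")

# backward extrapolation to a single case, c(t0) = 1, gives the crude
# start-of-outbreak estimate used in the text: t0 = -log(c0) / r
t0 = -log_c0 / r
print(f"extrapolated start: {t0:.1f} days before day 0")
```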
to model the data in toronto, we must postulate that the parameters describing the rate of diagnosis ( ) and isolation ( ) in the canadian outbreak changed radically on march .two hospitals in toronto were closed about that time : scarborough grace hospital on march and york central hospital on march .the remainder of this article is organized as follows : section introduces the basic model and gives a formula for the basic reproductive number ; section describes the results of simulations and connections to data ; and , section collects our final thoughts .u.s . data is limited and sparsely distributed while the quality of china s data is hard to evaluate . on the other hand, there appears to be enough data for toronto , singapore and hong kong to make limited preliminary predictions using a model that includes the effects of _ suspected _ mechanisms for the spread of sars .limited data and inconclusive epidemiological information place severe restrictions on efforts to model the global spread of the sars etiological agent .thus , we model _ single _ outbreaks , ignoring demographic processes other than the impact of sars on survival .the model is applied to data from toronto , hong kong and singapore .because the outbreak dynamics in singapore and hong kong are different from those in toronto , some of the results may only be indicative of what is happening in those regions of the world ( in particular our parameters and may change ). the situation must be re - evaluated frequently as sars continues its travels around the world .here we describe a model that incorporates , in a _ rather crude _ way , some of the important characteristics suggested in the literature ( unequal susceptibility , symptomatic and asymptomatic individuals , mode of transmission , superspreaders , etc . )the goal is to use the results for single outbreaks as a first step in our efforts to gauge the _ global _ impact of sars .hence , we focus on three `` closed '' populations ( southern ontario ( toronto ) , singapore and hong kong ) and postulate differences in the degree of susceptibility to sars . these differences may be due to variations in contact rates , age - dependent susceptibility or `` unknown '' genetic factors .this last assumption is handled ( in a rather crude and arbitrary way ) via the introduction of two distinct susceptible classes : , the most susceptible , and , less so .initially , and where is the proportion of the population size that is initially at higher risk of sars infection .the parameter is a measure of reduced susceptibility to sars in class . ( `` exposed '' ) denotes the class composed of asymptomatic , possibly infectious ( at least some of the time ) individuals .typically , it takes some time before asymptomatic infected individuals become infectious .the possibility of limited transmission from class is included , in a rather crude way , via the parameter ( see table ) .the class denotes infected , symptomatic , infectious , and undiagnosed individuals .-individuals move into the diagnosed class at the rate .individuals recover at the rates ( class ) and ( class ) . the rate denotes sars disease - induced mortality .the classes is included to keep track of the cumulative number of diagnosed and recovered , respectively .furthermore , it is assumed that diagnosed individuals are handled with _care_. 
hence, they might not be (_effectively_) as infectious as those who have not been diagnosed. the parameter $l$ takes into account their reduced impact on the transmission process (small $l$ represents effective measures taken to isolate diagnosed cases, and _vice versa_). table 1 includes parameter definitions and the initial values used. our sars epidemiological model is a nonlinear system of differential equations for the classes defined above, which is referred to as `` seijr, '' after the variables used to name the classes. + the values of two of the parameters are not known and are fixed arbitrarily, while the remaining free parameters are varied and optimized to fit the existing data (least-squares criterion) for hong kong, singapore and toronto. we did not explore the sensitivity of the model to variations in the arbitrarily fixed parameters because they are not known and cannot be controlled. all other parameters were roughly estimated from data and the current literature. in particular, the transmission rate $\beta$ is calculated from the dominant root of the third-order equation obtained from the linearization around the disease-free equilibrium. the parameters $\alpha$ and $l$ were allowed to vary when fitting the data for each location (singapore, hong kong and toronto). some restrictions apply, for example, on the diagnostic rate. we also require that $\gamma_2=\gamma_1$, a statement that members of the diagnosed class recover at the same rate as members of the undiagnosed class; the corresponding recovery period has been reported to lie within a range of days. from the next-generation operator approach, we obtain the following expression for the basic reproductive number:
$$r_0 = \beta\left\{\rho + (1-\rho)p\right\}\left\{\frac{q}{k} + \frac{1}{\alpha+\gamma_1+\delta} + \frac{\alpha l}{(\alpha+\gamma_1+\delta)(\gamma_2+\delta)}\right\},$$
which can easily be given an epidemiological interpretation. the use of parameters estimated from hong kong (table 1) gives values of $r_0$ for hong kong, for toronto (assuming exponential growth), and for singapore. initial rates of growth for sars outbreaks in different parts of the world (see figure [myfig00]) are computed using the data provided by who and the canadian ministry of health. these rates are computed exclusively from the number of cases reported between march and april. growth-rate values are obtained for the aggregated world data, hong kong, canada, toronto and singapore. for our numerical simulations, we start with one infectious individual (not yet diagnosed), and _crude_ estimates for the start of sars outbreaks are obtained by extrapolating backward in time under the assumption of initial exponential growth at the `` model-free '' rate estimated from the time series of the cumulative number of sars cases. results for toronto, hong kong, singapore and aggregated world data are tabulated. the estimated `` world '' start of the outbreak is november, a date consistent with the fact that the first sars case was detected in guangdong, china in november. these dates are used as the starting time of the respective outbreaks.
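because the display of the seijr equations did not survive in this copy, the sketch below writes out one plausible form consistent with the parameter definitions above and with the structure of $r_0$; it should be read as our reconstruction rather than a verbatim copy of the authors' system, and every numerical value is a placeholder rather than a fitted estimate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# placeholder parameters (per day); not the fitted values from the paper
beta, q, l, p = 0.75, 0.1, 0.05, 0.2
k, alpha, gamma1, gamma2, delta = 1/6, 1/3, 1/8, 1/8, 0.01
n = 4.0e6

def seijr(t, y):
    s1, s2, e, i, j, r = y
    # force of infection: undiagnosed i, partially infectious e (factor q),
    # and imperfectly isolated diagnosed j (factor l)
    foi = beta * (i + q * e + l * j) / n
    return [-foi * s1,
            -foi * p * s2,                 # class s2 is less susceptible
            foi * (s1 + p * s2) - k * e,
            k * e - (alpha + gamma1 + delta) * i,
            alpha * i - (gamma2 + delta) * j,
            gamma1 * i + gamma2 * j]

rho = 0.4                                  # fraction initially at higher risk
y0 = [rho * n, (1 - rho) * n, 0.0, 1.0, 0.0, 0.0]
sol = solve_ivp(seijr, (0, 200), y0)

# basic reproductive number in the form quoted in the text
r0 = beta * (rho + (1 - rho) * p) * (q / k + 1 / (alpha + gamma1 + delta)
     + alpha * l / ((alpha + gamma1 + delta) * (gamma2 + delta)))
print(f"r0 = {r0:.2f}; depleted susceptible fraction = "
      f"{(n - sol.y[0, -1] - sol.y[1, -1]) / n:.3f}")
```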
+ for the case of the province of ontario , canada the total population is approximately million .we assume that the population at _ major _ risk of sars infection lives in ontario s southern part ( particularly toronto ) , and is approximately of the total population ( in our model ) .it is worth pointing out that this value of is not critical ( that is , the most sensitive ) in the model .the `` model - free '' approximately exponential growth rates for the various regions of the world are roughly similar _ except for canada _ from march 31st ( day 61 ) to april 6th ( beginning the day of the jump in the number of reported canadian cases ) , the number of diagnosed cases grew , where is measured in days .this rate is substantially higher than elsewhere in the world . in the subsequent week( beginning april 7th , day 68 ) the number of probable or suspected canadian cases rapidly rolls over to a smaller growth rate not too far from the rest of the world .we conclude , based on the coincidence of the canadian hospital closures , the jump in the reported number of canadian sars cases on march 31st and the rapid rise in recognized cases in the following week , that canadian doctors were rapidly diagnosing pre - existing cases of sars ( in either class or on march 26th ) .if we make the assumption that the fundamental disease spreading parameters other than and are roughly constant throughout the world prior to march 26th , we can reach two important conclusions .beginning on march 26th , in toronto : * changed from a number days to days , and * changed from an uncertain and relatively large value to .if we assume that the fundamental growth rate is essentially constant from one region of the world to another , it is difficult for our model to produce growth rates well above the world average , except as a transient response to differences in diagnostic rate ( due to delays in response or change in policy ) .similarly , the seijr model requires fairly small values of to achieve a rapid roll - over in the growth rate of recognized cases .the parametric details of how a `` second '' initial condition for toronto on march 26th is generated do not affect the qualitative aspect of this argument : the canadian data prior to march 31st ( the day of the large jump ) are probably not as meaningful as data after that date , and hence only bound the model from below prior to march 26th .the essential aspect of this before - and - after hospital closure argument is that there were substantially more undiagnosed people in classes and than in class on march 26th .this is a reasonable assumption given that the number of cases reported by canadian officials more than double from march 30th to march 31st .the introduction of behavioral changes starting on march ( days),alters the fate of the disease in a dramatic fashion ( see table ) .+ fitting the model to the hong kong and singapore data is carried out in a similar fashion with , ( hong kong has about million inhabitants , singapore million ) .the estimated transmission rate from hong kong data is and for singapore .both hong kong and singapore s data are fit with the value .hong kong and singapore s measure of contact between diagnosed sars cases and susceptibles are and , respectively ( see figure 4 ) .even though there is some heterogeneity in the parameters for hong kong and singapore , they provide an important calibration of our model .their values for and are roughly consistent with each other , indicating that the difference with toronto is 
significant within our model , and pointing to the joint importance of rapid diagnosis and good isolation of diagnosed patients in controlling an outbreak .while there is some indication in the data from hong kong of a possible slowing of the outbreak , we did not attempt to analyze the slowing or assess its significance .a simple model that can capture the effect of average infectiousness in a heterogeneous population and the effect of isolating diagnosed patients has been introduced to explore the role of patient isolation and diagnostic rate in controlling a sars outbreak . by examining two cases with relatively clean exponential growth curves for the number of recognized cases , we are able to calibrate a seijr model with parameters ( sars diagnostic rate ) and ( isolation effectiveness ) .we then use our seijr model to examine the non - exponential dynamics of the toronto outbreak .two features of the toronto data , the steep increase in the number of recognized cases after march 31st and rapid slowing in the growth of new recognized cases , robustly constrained the seijr model by requiring that and days .the model is also used to look at the impact of drastic control measures ( isolation ) .the fitting of data shows that the initial rates of sars growth are quite similar in most regions leading to estimates of between and despite the recent identification of _superspreaders_. model simulations are fitted to sars reported data for the province of ontario , hong kong and singapore .good fits are obtained for reasonable values of , the rate of identification of sars infections ; `` reasonable '' values of the control parameters ( a measure of isolation ) ; possible values of , a _ crude _ measure of reduced susceptibility ( due to genetic factors , age or reduced contact rates ) ; a _ crude _ measure of the relative degree of infectiousness of asymptomatic individuals ; possible values of a measure of initial levels of population heterogeneity ; and , reasonable values of n the _ effective _ population size .it is worth noticing that for values of larger than the predictions ( proportion of cases at the end of the outbreak , etc . )are _ roughly _ the same . the introduction of behavioral changes that follow the identification of the first case ( reduce values of at the time of the identification and moving aggressively to identify cases of sars by increasing ) result in a dramatic reduction in the total number of cases and on mortality in toronto .given the fact that sars appears to kill between three and seven percent of infected ( diagnosed ) cases ( ) , it seems quite appropriate to isolate diagnosed people .although we do not examine the effect of quarantine by varying , it seems intuitive that quarantining those who came into close contact with positively diagnosed individuals will reduce the total number of cases .+ model results and simple estimates suggest that _ local _ outbreaks may follow similar patterns .furthermore , the use of relative extreme isolation measures in conjunction with rapid diagnosis has strong impact on the local dynamics ( toronto s situation ) .however , if sars has shown us anything it is that `` undetected '' and `` unchecked '' local disease dynamics can rapidly become a global issue . 
the research in this article used the latest data available (april data for canada, hong kong and singapore). recent disclosures reaffirm the importance of carrying out the analysis excluding data from china. we have redone the analysis including the data collected up to late april, and our conclusions remain the same. current data seem to support higher values for sars-induced mortality rates. however, our model is _most sensitive_ to the parameters $l$ (effectiveness of isolation) and $\alpha$ (diagnostic rate). it is not as sensitive to changes in the mortality rate; in fact, using a higher mortality rate rather than a lower one changes the number of cases by only a modest fraction. in toronto, we have estimated the number of diagnosed cases under the fitted parameters before and after march 26th; perfect isolation after march 26th ($l=0$) reduces this number further. the assumption of homogeneous mixing implies that our model is likely to overestimate the size of the outbreak. hence, the situation in toronto seems to support the view that this _outbreak_ is being contained. obviously, the crude model (by design) cannot handle high levels of variability (a stochastic model would be desirable). this possibility is tested (as is often done with deterministic models) by looking at the sensitivity of the model to its parameters ($l$ and $\alpha$ being the most critical). such sensitivity analyses can also help `` estimate '' the variability in $r_0$. we thank penny j. hitchcock, norman l. johnson, krastan b. blagoev, and the t-11 epidemiology discussion group at los alamos national laboratory and hans frauenfelder for enhancing our ability to carry out this research. we also thank fred brauer, simon levin, james watmough (who reconfirmed our value of $r_0$ by the next-generation method), carlos w. castillo-garsow, and miriam nuno for their recent comments. this research has been supported through the center for nonlinear studies at los alamos national lab under department of energy contract w-7405-eng-36 and partially supported by nsf, nsa and sloan foundation grants to carlos castillo-chavez. during the final stages of preparation, it came to our attention that prof. roy anderson is examining similar questions about sars outbreak dynamics. r. m. anderson and r. m. may, _infectious diseases of humans_, oxford university press (1991). c. castillo-chavez, h. w. hethcote, v. andreasen, s. a. levin, w. m. liu, _epidemiological models with age structure, proportionate mixing, and cross-immunity_, j. math. biol. 27, 233-258 (1989). bradsher, keith. `` carrier of sars made flights before treatment. '' _the new york times._ april 11, 2003. mcneil, donald g. and lawrence k. altman. `` how one person can fuel an epidemic. '' _the new york times._ april 15, 2003. schoen, john w. `` sars business impact spreading. '' _msnbc news service._ april 2, 2003. `` record sars deaths in hong kong. '' _msnbc news service._ website: http://www.msnbc.com/news/885653.asp. april 15, 2003. `` sars hits airlines, qantas cuts jobs. '' _msnbc news service._ website: http://www.msnbc.com/news/897719.asp. april 9, 2003. c. drosten, _et al._, `` identification of a novel coronavirus in patients with severe acute respiratory syndrome, '' _new england j. med._ (2003). t. g. ksiazek, _et al._, `` a novel coronavirus associated with severe acute respiratory syndrome, '' _new england j. med._ (2003). j. l. gerberding, `` faster . . . but fast enough? '' _new england j. med._
pomfret, john. `` doctor says health ministry lied about disease. '' _the washington post._ april 9, 2003. `` time line. '' _msnbc news service._ website: http://www.msnbc.com/news/885653.asp. `` record sars deaths in hong kong. '' _msnbc news service._ (april 2003). stein, rob. `` scientists identify cause of outbreak. '' _the washington post._ april 17, 2003. `` iaff issues sars guidelines for first responders. '' _fire chief magazine._ website: http://www.firechief.com. april 9, 2003. `` symptoms and treatment of sars. '' _c health._ website: http://chealth.canoe.ca/. stein, rob. `` faq on sars. '' _the washington post._ website: http://www.washingtonpost.com/wp-dyn/articles/a11849-2003apr2.html. sloan, rennie. `` will sars wreak havoc here? '' _msnbc news service._ website: http://www.msnbc.com/news/894251.asp. april 3, 2003. kalb, claudia. `` tracking sars. '' _newsweek._ april 28, 2003. private communication, april 21st, 2003. international society for infectious diseases. promed-mail. archive number 20030325.0744. _sars - worldwide_ (march 25, 2003). cdc (centers for disease control): _basic information about sars_. website: http://www.cdc.gov/ncidod/sars/factsheet.htm (april 16th, 2003). bradsher, keith. `` sars is here to stay, hong kong and singapore tell citizens. '' _the new york times._ april 8, 2003. f. brauer, c. castillo-chavez, _mathematical models in population biology and epidemiology_, springer verlag new york (2000). o. diekmann, j. a. p. heesterbeek, _mathematical epidemiology of infectious diseases: model building, analysis and interpretation_, wiley (2000). canadian ministry of health. summary of severe acute respiratory syndrome (sars) cases: canada and international. http://www.hc-sc.gc.ca/pphb-dgspsp/sars-sras/eu-ae/ international society for infectious diseases. promed-mail. archive number 20030416.0930. _sars - worldwide_ (april 16th, 2003). coomer, john. suspected sars cases near 150 in u.s. _the reporter. vanderbilt medical center._ april 11, 2003. pomfret, john. `` underreporting, secrecy fuel sars in beijing, who says. '' _the washington post._ april 17, 2003. world health organization. cumulative number of reported probable cases of severe acute respiratory syndrome (sars). http://www.who.int/csr/sarscountry/en/ cdc (centers for disease control): _isolation and quarantine_. website: http://www.cdc.gov/ncidod/sars/isolationquarantine.htm (april 3, 2003). cdc (centers for disease control): _information for close contacts of sars patients_. website: http://www.cdc.gov/ncidod/sars/factsheetcc.htm (april 16th, 2003). cdc (centers for disease control): _sars: frequently asked questions._ website: http://www.cdc.gov/ncidod/sars/faq.htm. april 16, 2003. pomfret, john. `` china seals hospitals to fight sars. '' _the washington post._ april 25, 2003. `` ministers may review sars status. '' _bbc news._ http://news.bbc.co.uk/1/hi/health/2979623.stm (april 27, 2003). table 1. parameter definitions and values that fit the cumulative number of cases in class $j$ (`` diagnosed '') for hong kong; these parameters are used to compute the basic reproductive number.
in this article we use global and regional data from the sars epidemic in conjunction with a model of susceptible , exposed , infective , diagnosed , and recovered classes of people ( `` seijr '' ) to extract average properties and rate constants for those populations . the model is fitted to data from the ontario ( toronto ) in canada , hong kong in china and singapore outbreaks and predictions are made based on various assumptions and observations , including the current effect of isolating individuals diagnosed with sars . the epidemic dynamics for hong kong and singapore appear to be different from the dynamics in toronto , ontario . toronto shows a very rapid increase in the number of cases between march 31st and april 6th , followed by a _ significant _ slowing in the number of new cases . we explain this as the result of an increase in the diagnostic rate and in the effectiveness of patient isolation after march 26th . our best estimates are consistent with sars eventually being contained in toronto , although the time of containment is sensitive to the parameters in our model . it is shown that despite the empirically modeled heterogeneity in transmission , sars average reproductive number is , a value quite similar to that computed for some strains of influenza . although it would not be surprising to see levels of sars infection higher than ten per cent in some regions of the world ( if unchecked ) , lack of data and the observed heterogeneity and sensitivity of parameters prevent us from predicting the long - term impact of sars . the possibility that 10 or more percent of the world population _ at risk _ could eventually be infected with the virus in conjunction with a mortality rate of three - to - seven percent or more , and indications of _ significant _ improvement in toronto support the stringent measures that have been taken to isolate diagnosed cases .
quantum cascade lasers (qcls) are high-power, coherent light sources emitting in the mid-infrared (mid-ir) and terahertz (thz) frequency ranges. qcls are electronically driven, unipolar devices whose active core consists of tens to hundreds of repetitions of a carefully designed stage. the qcl active core can be considered a superlattice (sl), in which each stage is a multiple-quantum-well (mqw) heterostructure, where confined electronic states with specific energy levels are formed because of quantum confinement. the concept of achieving lasing in semiconductor sls was first introduced by kazarinov and suris in 1971. the first working qcl was demonstrated by faist _et al._ two decades later. qcls are typically iii-v material systems grown on gaas or inp substrates. molecular beam epitaxy (mbe) and metal-organic chemical vapor deposition (mocvd) are the techniques that enable precise growth of thin layers of various iii-v alloys. it is also possible to incorporate strain into the structure, as long as the total strain in a stage is balanced. both the precision and the possibility of introducing strain bring great flexibility to the design of the qcl active core, so lasing over a wide range of wavelengths (from 3 to 190 µm) has been achieved. the growth techniques produce high-quality interfaces, with atomic-level roughness. mid-ir qcls have widespread military and commercial applications. a practical portable detector requires mid-ir qcls to operate at room temperature (rt), in continuous-wave (cw) mode, and with high (watt-level) output power. furthermore, these qcls must also have high wall-plug efficiency (wpe, the ratio of emitted optical power to the electrical power pumped in) and long-term reliability under these high-stress operating conditions. as the stress likely stems from excessive nonuniform heating while lasing, improving device reliability and lifetime goes hand-in-hand with improving the wpe. in qcls, multiple conduction subbands are formed in the active core by means of quantum confinement. qcls are unipolar devices, meaning that lasing is achieved through radiative intersubband transitions (transitions between two conduction subbands) instead of radiative interband transitions (transitions between the conduction and valence bands), as in traditional quantum-well (qw) semiconductor lasers. as a result, electrons do not recombine with holes after the radiative transitions and can be used to emit another photon.
in order to reuse electrons, the same mqw heterostructure is repeated many times (25-70) in the qcl active core (the so-called cascading scheme). figure [fig:bandstructure] depicts a typical conduction-band diagram of two adjacent stages in a qcl under an electric field. each stage consists of an injector region and an active region. the injector region has several thin wells separated by thin barriers, so a miniband is formed, with multiple subbands that are close in energy and whose associated wavefunctions have high spatial overlap. typically, the lowest few energy levels in the miniband are referred to as the injector levels. the injector levels collect the electrons that come from the previous stage and inject them into the active region. the active region usually consists of 2-3 wider wells separated by thin barriers. consequently, a minigap forms in the active region between the upper lasing level (3) and the lower lasing level (2). another important energy level in the active region is the ground state (1). there is a thin barrier (usually the thinnest among all layers) between the injector region and the active region, called the injection barrier. by design, the injector levels are close in energy and strongly coupled to the upper lasing level because of the thin injection barrier. the upper and lower lasing levels have large spatial overlap, which allows a radiative transition between the two levels; the wavelength of the emitted light is determined by the energy spacing between these two levels. the lower lasing level overlaps with the ground state for efficient electron extraction. emission of longitudinal optical (lo) phonons by electrons is the dominant mechanism for electron extraction, so the energy spacing between the lower lasing level and the ground state is designed to be close to the lo phonon energy to facilitate extraction. with careful design, the electron lifetime in the upper lasing level is longer than in the lower lasing level, so population inversion can be achieved. after reaching the ground state, electrons tunnel through the injector into the upper lasing level of the next stage, and the process is repeated. of course, the lasing-mechanism description above is idealized. in reality, the efficiency of the radiative transition between the upper and lower lasing levels is very low. in recent years, considerable focus has been placed on improving the wpe and output power of qcls for rt cw operation. a wpe of 8.4% with 1.3 w of output power around 4.6 µm was shown in 2008. shortly thereafter, lyakh _et al._
reported 12.7% wpe and 3 w of power at 4.6 µm. watt-level power with 6% wpe at 3.76 µm, and then lower power at 3.39 µm and 3.56 µm, was reported by bandyopadhyay _et al._; a wpe of 21% with 5.1 w of output power around 4.9 µm was demonstrated in 2011. much higher wpe and/or output power has been achieved at lower temperatures or in pulsed mode near 4.8 µm. a summary of recent developments can be found in review papers. while good output powers and wpes have been achieved, long-term reliability of these devices under rt cw operation remains a critical problem. these devices are prone to catastrophic breakdown owing to reasons that are not entirely understood, but are likely related to thermal stress that stems from prolonged high-power operation. this kind of thermal stress is worst in short-wavelength devices that have high strain and high thermal-impedance mismatch between layers. in addition to improved device lifetime, we seek better cw temperature performance (higher characteristic temperatures $t_0$ and $t_1$, defined below). the first aspect is a weaker temperature dependence of the threshold current density. empirically, the threshold current density (the current density at which the device starts lasing) has an exponential dependence on the operating temperature: $j_{\rm th} \propto \exp(t/t_0)$. a higher characteristic temperature $t_0$ is preferred in qcl design, as it means less variation in $j_{\rm th}$ as the temperature changes. another key temperature-dependent parameter is the differential quantum efficiency $\eta_d$ (also called the slope efficiency or external quantum efficiency), defined as the amount of output optical power per unit increase in the pumping current; its temperature dependence is commonly captured as $\eta_d \propto \exp(-t/t_1)$. the differential quantum efficiency is directly proportional to the wpe ($\mathrm{wpe} \propto \eta_p \eta_d$, where $\eta_p$ is the feeding efficiency). therefore, the higher the $t_1$, the closer $\exp(-t/t_1)$ is to unity, and the higher the wpe. recently, deep-well structures with tapered active regions have demonstrated significant improvements in $t_0$ and $t_1$ with respect to the conventional device, underscoring that the suppression of leakage plays a key role in temperature performance. still, the microscopic mechanisms and leakage pathways that contribute to these empirical performance parameters remain unclear. under high-power rt cw operation, both electron and phonon systems in qcls are far from equilibrium. under such nonequilibrium conditions, both electronic and thermal transport modeling are important for understanding and improving qcl performance. electron transport in both mid-ir and thz qcls has been successfully simulated via semiclassical (rate equations and monte carlo) and quantum techniques (density matrix, nonequilibrium green's functions (negf), and lately wigner functions). inp-based mid-ir qcls have been addressed via semiclassical and quantum transport approaches (8.5- and 4.6-µm devices). there has been a debate about whether electron transport in qcls can be described using semiclassical models, in other words, about how much of the current in qcls is coherent. theoretical work by iotti and rossi shows that the steady-state transport in mid-ir qcls is largely incoherent. monte carlo simulation has also been used to correctly predict transport near threshold.
however, short-wavelength structures have pronounced coherent features, which cannot be addressed semiclassically. negf simulations accurately and comprehensively capture quantum transport in these devices, but are computationally demanding. density-matrix approaches have considerably lower computational overhead than negf, but are still capable of capturing coherent-transport features. a comprehensive review of electron-transport modeling was recently written by jirauschek and kubis. electronic simulations that ignore radiative transitions are applicable for modeling qcls below or near threshold, where the interaction between electrons and the laser electromagnetic field can be ignored. such simulations are useful for predicting quantities such as the threshold current density. however, in order to accurately model qcls under lasing operation, the effect of the laser field on electronic transport has to be included. in some cases, the effects of the laser field can be very strong, especially for high-wpe devices, where the field-induced current can be dominant. when included in simulations, the laser field is typically modeled either as an additional scattering mechanism or as a time-dependent sinusoidal electric field. in this work, we ignore the effect of the laser field on electron dynamics. thermal transport in qcls is often described through the heat diffusion equation, which requires an accurate thermal conductivity in each region, a challenging task for the active core, which contains many interfaces. it is also very important to include nonequilibrium effects, such as the nonuniform heat-generation rate stemming from the nonuniform temperature distribution, and the feedback that the nonequilibrium phonon population has on electron transport.
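to make the thermal side concrete, here is a minimal sketch of a steady-state 1d heat-diffusion solve with a temperature-dependent conductivity; the geometry, the k(t) law, the heat-generation level, and the boundary temperatures are all illustrative assumptions rather than values from this chapter.

```python
# minimal 1d steady-state heat-diffusion sketch: d/dx( k(T) dT/dx ) = -Q(x).
# geometry, k(T) law, and Q(x) are illustrative placeholders, not device values.
import numpy as np

L, n = 2e-6, 201                     # 2-um-thick region, grid points (toy)
x = np.linspace(0.0, L, n); dx = x[1] - x[0]
Q = np.full(n, 5e14)                 # uniform heat generation, W/m^3 (toy)
k = lambda T: 15.0 * (300.0 / T)     # toy temperature-dependent conductivity, W/m/K
T = np.full(n, 77.0)                 # initial guess; heat sinks at 77 K at both ends

for _ in range(2000):                # fixed-point iteration until converged
    kT = k(T)
    kw = 0.5 * (kT[:-1] + kT[1:])    # conductivity evaluated on cell faces
    Tnew = T.copy()
    # interior update from  kw_e*(T_e - T_i) + kw_w*(T_w - T_i) + Q*dx^2 = 0
    Tnew[1:-1] = (kw[1:] * T[2:] + kw[:-1] * T[:-2] + Q[1:-1] * dx**2) / (kw[1:] + kw[:-1])
    Tnew[0], Tnew[-1] = 77.0, 77.0   # fixed-temperature boundary conditions
    if np.max(np.abs(Tnew - T)) < 1e-8:
        T = Tnew
        break
    T = Tnew
print("peak temperature rise: %.2f K" % (T.max() - 77.0))
```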
in this chapter, we present a multiphysics (coupled electronic and thermal transport) and multiscale (bridging between a single stage and the device level) simulation framework that enables the description of qcl performance under far-from-equilibrium conditions. we present the electronic (sec. [sec:electron]) and thermal (sec. [sec:thermal]) transport models, then bring them together for the electrothermal simulation of a real device structure (sec. [sec:example]). we strive to cover the basic ideas while pointing readers to the relevant references for derivation and implementation details. depending on the desired accuracy and computational burden, one can model electronic transport in qcls with varying degrees of complexity. the goal is to determine the modal gain (proportional to the population inversion between the upper and lower lasing levels) under various pumping conditions (current or voltage) and lasing conditions (pulsed or continuous wave). a typical electron-transport simulator relies on accurately calculated quasibound electronic states and associated energies in the direction of confinement. electronic wavefunctions and energies are determined by solving the schrödinger equation or, in highly doped systems, the schrödinger equation combined with poisson's equation. section [sec:kp] introduces a schrödinger solver coupled with a poisson solver. more information about other solvers for electronic states can be found in the review paper and references therein. the simulations of electronic transport fall into two camps, depending on how the electron single-particle density matrix is treated. the diagonal elements of the density matrix represent the occupations of the corresponding levels, and the off-diagonal elements represent the ``coherence'' between two levels. transport is semiclassical or incoherent when the off-diagonal coherences are much smaller than the diagonal terms and can be approximated as proportional to the diagonal terms times the transition rates between states. in that case, the explicit calculation of the off-diagonal terms is avoided and only the diagonal elements are tracked, which simplifies the simulation considerably. when the off-diagonal terms are appreciable, transport is partially coherent and has to be addressed using quantum-transport techniques, discussed below. semiclassical approaches assume that electronic transport between stages is largely incoherent ``hopping'' transport. the key quantities are the populations of electronic states that are confined in the qcl growth direction, between which electrons transfer due to scattering events. the scattering rates can be obtained empirically or, more rigorously, via fermi's golden rule. common semiclassical approaches are the rate equations and ensemble monte carlo (emc), the latter solving a boltzmann-like transport equation stochastically.
in the rate-equation approach, scattering between the relevant states, i.e., the injector level, the upper and lower lasing levels, and the ground state, is captured through transition rates. the rates include all relevant (radiative and nonradiative) scattering mechanisms, and can be either empirical parameters or calculated. the computational requirements of rate-equation models are low, so they are suitable for fast numerical design and optimization of different structures (a toy numerical sketch of this approach is given at the end of this subsection). the heterostructure in the qcl active core is a quasi-two-dimensional (quasi-2d) system, where electrons are free to move in the plane while being confined in the cross-plane (growth) direction; the confinement results in the formation of quasibound states and discrete energy levels corresponding to the bottoms of 2d energy subbands. the electron wavefunctions in 3d are plane waves in the plane and confined wavefunctions along the growth direction. electronic transport is captured by a boltzmann-like semiclassical transport equation, which can be solved via the stochastic emc technique assuming instantaneous hops between states in 3d due to scattering. the simulation explicitly tracks the energy level and in-plane momentum of each particle in the simulation ensemble (typically a large ensemble of particles). tracking in-plane dynamics makes emc more detailed than the rate-equation model. the transition rates are generally computed directly from the appropriate interaction hamiltonians, and therefore depend on the energy levels as well as the wavefunction overlaps between different electronic states. emc allows us to include nonequilibrium effects in transport, which is covered in more detail in sec. [sec:emc]. density matrix and negf are the two most widely used techniques to describe quantum transport in qcls. recently, a wigner-function approach was also successfully used to model a superlattice. in semiclassical approaches, the central quantity of interest is the distribution function, the probability of an electron occupying an eigenstate and having a given in-plane kinetic energy. the quantum-mechanical analogue is the single-electron density matrix, whose diagonal elements are occupations and whose off-diagonal elements are the spatial coherences between pairs of states at a given in-plane energy. when employing semiclassical methods, the off-diagonal matrix elements are assumed to be much smaller than the diagonal elements. this approximation may fail in some cases, for example, when two eigenstates with a large spatial overlap have similar energies. this scenario often arises when modeling terahertz qcls, but can also come up in mid-ir qcls. in these cases, semiclassical models fail.
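as a concrete toy instance of the rate-equation picture referenced above, the sketch below integrates a three-level system (upper and lower lasing levels, with extraction to the ground state folded into the lower-level lifetime) to steady state; the current density and all lifetimes are hypothetical placeholders, not values from this chapter.

```python
# toy three-level rate-equation sketch for one stage: upper (3) and lower (2)
# lasing levels; extraction to the ground state is folded into tau21.
# the current density and all lifetimes are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

q = 1.602e-19                                     # elementary charge, C
J = 4e7                                           # pump current density, A/m^2 (toy)
tau32, tau31, tau21 = 2.0e-12, 3.0e-12, 0.3e-12   # lifetimes, s (toy)

def rates(t, n):
    n3, n2 = n                                    # sheet densities, m^-2
    dn3 = J / q - n3 / tau32 - n3 / tau31         # injection minus decay out of 3
    dn2 = n3 / tau32 - n2 / tau21                 # filling of 2 minus fast
    return [dn3, dn2]                             # lo-phonon depopulation

sol = solve_ivp(rates, (0.0, 50e-12), [0.0, 0.0], max_step=1e-13)
n3, n2 = sol.y[:, -1]
print("steady-state inversion n3 - n2 = %.3g m^-2" % (n3 - n2))
# analytic check: n3 = (J/q)/(1/tau32 + 1/tau31) and n2 = n3 * tau21 / tau32
```

because tau21 is much shorter than tau32 here, the lower level empties quickly and the inversion stays positive, which is exactly the design goal described earlier.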
the density-matrix models that have been employed for qcl modeling can be categorized into two groups. the first includes hybrid methods, where transport is treated semiclassically within a region of the device (typically a single stage) while the effects of tunneling between different regions, separated by barriers, are treated quantum mechanically using a density-matrix formalism with phenomenological dephasing times. the second group involves completely quantum-mechanical methods that rely on microscopically derived markovian master equations that guarantee the positivity of the density matrix. both methods are more computationally expensive than their semiclassical counterparts, because the density matrix contains many more elements than its diagonal semiclassical analogue. the nonequilibrium green's function technique (for a good overview, see the references) relies on the relationships between single-particle time-ordered green's functions and correlation functions. the correlation function, often referred to as the lesser green's function, is one of the central quantities and can be understood as a two-time generalization of the density matrix, resolved over the magnitude of the in-plane wave vector. the correlation function contains both spatial correlations as well as temporal correlations between two times (the latter are not included in semiclassical or density-matrix models). typically, the potential profile is assumed to be time independent, in which case the correlation function depends only on the time difference. fourier transforming over the time difference into the energy domain gives the energy-resolved correlation function, which is the quantity that is usually solved for numerically. the main advantages of the negf formalism are that it provides spectral (energy-resolved) information and that it includes the effects of collisional broadening (the broadening of energy levels due to scattering), which is particularly important when the states are close in energy. these advantages carry a considerable computational cost, so negf calculations are much more time consuming than density-matrix approaches. here, we focus on presenting semiclassical modeling of electron transport in qcl structures via emc. the solver consists of two parts, a coupled schrödinger-poisson solver and a transport kernel. we solve for the electronic states using the coupled schrödinger-poisson solver and feed the energy levels and the wavefunctions of the relevant electronic states to the transport kernel. the transport kernel keeps track of the electron momentum, energy, and distribution among subbands. if the electron density inside the device is high, the transport kernel periodically feeds the electron distribution back to the schrödinger-poisson solver, which updates the electronic states. this loop is repeated until the electron distribution converges. by doing so, we solve for both electron transport and the electronic band structure self-consistently. since the active qcl core consists of repeated stages, the wavefunctions in any stage can be obtained from the wavefunctions in any other stage by translation in space and energy. this translational symmetry makes it possible to simulate electron transport in only one generic central stage instead of in the whole qcl core. typically, electronic states in nonadjacent stages have negligible overlap, which also means that the transition rates between them are negligible.
as a result, it is sufficient to limit interstage scattering events to only those between adjacent stages. figure [fig:emcsche] shows a schematic of three adjacent stages under an applied field. we simulate electron transport in the central stage, while nearest-neighbor interstage and intrastage scattering is allowed. periodic boundary conditions (pbcs) are applied in the simulation, i.e., whenever an electron scatters from the central stage out to the next stage, an electron scatters from the previous stage into the central stage, and vice versa. pbcs are justified by the cascading scheme. we employ the $\mathbf{k}\cdot\mathbf{p}$ method to solve the schrödinger equation and couple it to a poisson solver. the $\mathbf{k}\cdot\mathbf{p}$ method is an efficient way to solve for the electronic band structure near the band edges, where transport happens in qcls. the method considers the contributions from the conduction band (c), the light-hole band (lh), and the spin-orbit split-off band (so) (the heavy-hole band (hh) decouples from the other three at the band edge). the contributions from the lh and so bands are especially important for narrow-gap materials. moreover, in modern qcls, strain-balanced structures have been employed to obtain enhanced performance. in these structures, alternate layers are compressively or tensilely strained while the whole structure is strain free, with carefully designed thicknesses of each layer. the $\mathbf{k}\cdot\mathbf{p}$ method allows for convenient inclusion of the effects of strain on the band structure. the implementation details of the solver can be found in the references. the solver can only solve for a finite structure rather than an infinite periodic one. as a result, we need to simulate a finite number of stages and add artificially high barriers at the two ends to confine all the states. if a stage is far enough from the boundaries, the calculated band structure in it should be the same as if we were to solve for the whole periodic structure. tests have confirmed that three stages, which we also use in emc, are enough when solving for the electronic states to ensure that the central-stage states are unaffected by the simulation-domain potential boundaries. the states from the central stage are then translated in energy and position to the neighboring stages according to the stage length and the applied electric field. when we need to solve for electron transport and electronic states self-consistently, it is necessary for the solver to be able to automatically pick out the electronic states belonging to the central stage. one intuitive criterion is to calculate the ``center of mass'' for each state (the expectation value of the cross-plane coordinate, $\langle z \rangle$) and assign those falling in the central stage to that stage. however, in our three-stage scheme, this method may pick up states that are too close to the boundary. one can either extend the number of stages in the solver to five, so that the three stages in the middle are all far from the boundary, or use additional criteria, such as requiring that there be more than 50% probability of finding the electron in the central stage, based on the probability-density distribution, or requiring that the location of the probability-density peak be in the central stage (a toy sketch of such a selection criterion is given below). additional criteria requiring strong confinement of states have been explored in the references. the emc kernel tracks the hopping transitions of electrons between subbands and stages until convergence, and outputs the transport information needed to calculate experimentally relevant quantities such as the current density and modal gain.
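the selection criterion mentioned above can be illustrated with a short sketch; the wavefunction, grid, and stage length below are synthetic placeholders, and the 50% probability-weight threshold follows the description in the text.

```python
# sketch of picking central-stage states using <z> plus a probability-weight
# criterion; the wavefunction here is a synthetic placeholder on a uniform grid.
import numpy as np

Lstage = 45e-9                          # stage length, m (hypothetical)
z = np.linspace(0.0, 3 * Lstage, 900)   # three simulated stages
dz = z[1] - z[0]

def central_stage_state(psi):
    """true if <z> lies in the central stage and >50% of |psi|^2 is inside it."""
    w = np.abs(psi)**2
    w /= w.sum() * dz                   # normalize the probability density
    z_mean = np.sum(z * w) * dz         # "center of mass" <z>
    inside = (z >= Lstage) & (z < 2 * Lstage)
    weight_in = np.sum(w[inside]) * dz  # probability of finding the electron there
    return (Lstage <= z_mean < 2 * Lstage) and (weight_in > 0.5)

# toy gaussian state centered in the middle stage
psi = np.exp(-0.5 * ((z - 1.5 * Lstage) / (0.2 * Lstage))**2)
print(central_stage_state(psi))         # -> True
```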
in the transport kernel, both electron-electron interactions and electron-lo-phonon interactions are considered. other scattering processes, such as intervalley scattering, impurity scattering, and interface-roughness scattering, can be considered under different circumstances. photon emission is not considered, either. because emc tracks individual particles, nonequilibrium electron transport is automatically captured. (emc tracks individual simulation particles, each of which might represent thousands of real electrons.) the most important scattering mechanism in qcls is electron-lo-phonon scattering, which facilitates the depopulation of the lower lasing level. as shown in the references, phonon confinement has little effect on the electronic transport; therefore, for simplicity, lo phonons are treated as bulklike dispersionless phonons with energy $\hbar\omega_{\rm lo}$. the transition rate between an initial and a final state can be derived from fermi's golden rule; it involves the electronic charge and the static and high-frequency permittivities of the material, and the integrals run over the in-plane kinetic energy of the final state and the cross-plane momentum transfer. the quantity $\mathcal{i}_{if}(q_z) = \left| \int dz\, \psi_f^*(z)\, e^{i q_z z}\, \psi_i(z) \right|^2$ is defined as the overlap integral (oi) between the initial and final states, where $q_z$ is the cross-plane momentum transfer (eq. [eq:oi]). the integration is over the angle between the initial and final in-plane momenta, the cross-plane momentum component of the final state, and the kinetic energy of the final state. the rate also contains the number of lo phonons with a given momentum. the expression can be further simplified in the equilibrium case, where the phonon number follows the bose-einstein distribution. in order to model nonequilibrium phonon effects, we numerically integrate the expression using a phonon number histogram resolved over both energy and momentum. according to the uncertainty principle, position and momentum cannot both be determined simultaneously. since our electrons are all confined in the central stage (their extent in the growth direction is finite), the cross-plane momentum is not exactly conserved during the scattering process. this analysis does not affect the momentum conservation in the plane, because we assume infinite uncertainty in position there. previously, the cross-plane momentum conservation has been treated through the momentum-conservation approximation (mca) or through a broadening of the cross-plane momentum according to the well width. the mca forbids a phonon emitted in a transition between one pair of subbands from being reabsorbed in a transition between a different pair, and thus might underestimate the electron-lo interaction strength. the concept of well width is hard to apply in a mqw structure such as the qcl active core. we observe that the probability of a phonon with cross-plane momentum $q_z$ being involved in an interaction is proportional to the overlap integral in equation ([eq:oi]). figure [fig:overlap] depicts the typical overlap integrals for both intersubband and intrasubband transitions. as a result, in each electron-lo-phonon scattering event, we randomly select a $q_z$ following the distribution from the overlap integral (fig. [fig:overlap]). depending on the mechanism (absorption or emission), a phonon with that momentum is removed from or added to the histogram according to the 2d density of states (dos) and the effective simulation area. once the phonons with a certain momentum are depleted, transitions involving such phonons become forbidden.
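the nonequilibrium-phonon bookkeeping described above might look roughly as follows; the bin layout, the sampled overlap-integral weight, the event sequence, and the anharmonic-decay treatment are all illustrative simplifications, not the chapter's actual implementation.

```python
# toy bookkeeping for a nonequilibrium lo-phonon histogram over cross-plane
# momentum bins; all numbers and the sampling weight are placeholders.
import numpy as np

nbins, dt, tau_decay = 64, 1e-15, 5e-12   # q_z bins, time step (s), lo->la lifetime (s)
Nq = np.zeros(nbins)                      # excess lo-phonon number in each bin
rng = np.random.default_rng(0)

# placeholder for the overlap-integral weight over q_z (cf. fig. [fig:overlap])
w = np.exp(-np.linspace(0.0, 4.0, nbins))
p_qz = w / w.sum()

def emit():
    b = rng.choice(nbins, p=p_qz)         # sample q_z from the oi distribution
    Nq[b] += 1.0                          # emission adds a phonon to the bin

def absorb():
    b = rng.choice(nbins, p=p_qz)
    if Nq[b] > 0:                         # absorption needs phonons in the bin;
        Nq[b] -= 1.0                      # a depleted bin forbids the transition
        return True
    return False

def decay_step():
    Nq[:] = Nq * np.exp(-dt / tau_decay)  # anharmonic decay toward equilibrium

for _ in range(2000):                     # toy loop: net emission plus decay
    emit(); emit(); absorb(); decay_step()
print("total excess lo phonons:", Nq.sum())
```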
(figure [fig:overlap]: overlap integral from eq. ([eq:oi]) versus cross-plane phonon wave vector for several intersubband and intrasubband transitions. figure reproduced from y. b. shi and i. knezevic, j. appl. phys. 116, 123105 (2014), with the permission of aip publishing.) in order to couple the emc solver to the thermal-transport solver, we need to keep a detailed log of heat generation during electron transport. among all the relevant scattering events, electron-lo-phonon scattering is the only inelastic mechanism and therefore the only mechanism that contributes to heat generation. as a result, the total energy emitted and absorbed in the form of lo phonons is recorded during each step of the emc simulation. the nonequilibrium lo phonons decay into longitudinal acoustic (la) phonons via a three-phonon anharmonic decay process. the formulation and the parameters here follow the references. the simulation results of emc including nonequilibrium phonons are shown in section [sec:example]. the dominant path of heat transfer in a qcl structure is depicted in fig. [fig:heatflow]. the operating electric field of a typical qcl is high, which means that considerable energy is pumped into the electronic system. these energetic, ``hot'' electrons relax their energy largely by emitting lo phonons. lo phonons have high energies but flat dispersions, so their group velocities are low and they are poor carriers of heat. an lo phonon decays into two la phonons via a three-phonon process referred to as anharmonic decay. la phonons have low energy but high group velocity and are the main carriers of heat in semiconductors. if we neglect the diffusion of optical phonons, the flow of energy in a qcl can be described by coupled balance equations for the lo-phonon, acoustic-phonon, and electron energy densities: the rate of increase in the lo-phonon energy density equals the difference between the rate of its generation by electron-lo-phonon scattering and the rate of anharmonic decay into la phonons, while the acoustic-phonon energy density grows through the anharmonic decay and spreads through the heat-diffusion term $\nabla \cdot (\kappa \nabla t_{\rm a})$; here $\kappa$ is the thermal conductivity in the system and $t_{\rm a}$ is the acoustic-phonon (lattice) temperature. in a nonequilibrium steady state, both the lo and la energy densities are constant, so the rate of lo-phonon generation equals the rate of anharmonic decay, and the steady-state heat diffusion equation $\nabla \cdot (\kappa \nabla t_{\rm a}) + q = 0$ holds. as described in the previous section, the heat-generation rate $q$ on the right-hand side of equation ([eq:heatdiff]) can be obtained by recording electron-lo-phonon scattering events in the electronic emc as $q = \frac{n}{n_{\rm sim}\, t_{\rm sim}} \left( \sum \hbar\omega_{\rm em} - \sum \hbar\omega_{\rm abs} \right)$, where $n = n_s / l$ is the electron density ($n_s$ is the sheet density and $l$ is the length of a single stage), while $n_{\rm sim}$ and $t_{\rm sim}$ are the number of simulation particles and the simulation time, respectively; $\hbar\omega_{\rm em}$ and $\hbar\omega_{\rm abs}$ are the energies of the emitted and absorbed lo phonons, respectively.
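a minimal numerical rendering of this heat-generation tally, with placeholder numbers standing in for the emc scattering log:

```python
# sketch: heat-generation rate from the emc energy tallies; all inputs below
# are toy placeholders for the quantities logged by the transport kernel.
E_emitted  = 3.2e-12    # total lo-phonon energy emitted by all particles, J (toy)
E_absorbed = 0.9e-12    # total lo-phonon energy absorbed, J (toy)
N_sim = 50_000          # number of simulation particles (toy)
t_sim = 20e-12          # simulated time, s (toy)
n_s   = 3.9e14          # sheet doping density per stage, m^-2 (toy)
L     = 45e-9           # stage length, m (toy)

n = n_s / L                                          # 3d electron density, m^-3
Q = n * (E_emitted - E_absorbed) / (N_sim * t_sim)   # volumetric rate, W/m^3
print("heat-generation rate Q = %.3g W/m^3" % Q)
```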
to solve equation ([eq:heatdiff]), we need information on both the thermal conductivity and the heat-generation rate; they are discussed in subsections [sec:iiivsl] and [sec:device], respectively. the qcl active core is a sl: it contains many identical stages, each with several thin layers made from different materials and separated by heterointerfaces. the thermal-conductivity tensor of a sl system reduces to two values: the in-plane thermal conductivity (in-plane heat flow is assumed isotropic) and the cross-plane thermal conductivity. experimental results have shown that, in sls, the thermal conductivity is very anisotropic, while both values are smaller than the weighted average of the constituent bulk materials. both effects can be attributed to the interfaces between adjacent layers. here, we discuss a semiclassical model for describing the thermal-conductivity tensor of iii-v sl structures. note that the model described here is in principle applicable to sls in other material systems, as long as they have high-quality interfaces and thermal transport is mostly incoherent. in particular, we focus on thermal transport in iii-arsenide-based sls, as they are most commonly used in mid-ir-qcl active cores. under the qcl operating conditions of interest (temperatures up to and around rt), thermal transport is dominated by acoustic phonons and is governed by the boltzmann transport equation (bte). to obtain the thermal conductivity, we solve the phonon bte with the full phonon dispersion in the relaxation-time approximation. to capture both the anisotropic thermal transport and the reduced thermal conductivity in sl systems, we need to observe the twofold influence of the interface. first, it reduces the thermal conductivity by affecting the acoustic-phonon population close to the interfaces. second, it introduces an interface thermal boundary resistance (itbr), which is still very difficult to model. common models are the acoustic mismatch model (amm) and the diffuse mismatch model (dmm); the former assumes a perfectly smooth interface and only considers the acoustic mismatch between the two materials, while the latter assumes complete randomization of momentum after phonons hit the interface. as most iii-v-based qcls are grown by mbe or mocvd, both well-controlled techniques allowing consistent atomic-level precision, neither amm nor dmm captures the essence of a iii-v sl interface. figure [fig:slsche] shows a schematic of interface roughness in a lattice-matched sl. (figure [fig:slsche]: the jagged dashed boundaries depict transition layers of characteristic thickness between the two materials; this effective interface roughness leads to phonon-momentum randomization and to interface resistance in cross-plane transport. figure reproduced from s. mei and i. knezevic, j. appl. phys. 118, 175101 (2015), with the permission of aip publishing.) we introduce a simple model that calculates a more realistic itbr (a key part in calculating the cross-plane conductivity) by interpolating between the amm and dmm transmission rates using a specularity parameter. the model has a single fitting parameter: the effective interface rms roughness $\delta$.
since the growth environment is well controlled, using a single $\delta$ to describe all the interfaces is justified. we use $\delta$ to calculate a momentum-dependent specularity parameter, $p(\vec{q}) = \exp(-4 q^2 \delta^2 \cos^2\theta)$, where $q$ is the magnitude of the phonon wave vector and $\theta$ is the angle between $\vec{q}$ and the normal direction to the interface. consistent with the twofold impact of interface roughness, $\delta$ affects the thermal conductivity through two channels. apart from entering the itbr calculation, an effective interface scattering rate dependent on the same specularity parameter is added to the internal scattering rate to calculate the modified in-plane conductivity (see the detailed derivations in the references). by adjusting only $\delta$, typically between 1 and 2, the thermal conductivity calculated using this model fits a number of different experiments. thermal transport inside the active core of a qcl is usually treated phenomenologically: the in-plane conductivity is typically assumed to be 75% of the weighted average of the bulk thermal conductivities of the constituent materials, while the cross-plane conductivity is treated as a fitting parameter (constant for all temperatures) to best fit the experimentally measured temperature profile. we calculated the thermal-conductivity tensor of a qcl active core and showed that the typical assumption is not accurate and that the degree of anisotropy is temperature dependent (fig. [fig:sltc]). (figure [fig:sltc]: the solid, dashed, and dash-dotted curves show the calculated in-plane, cross-plane, and averaged bulk thermal conductivities, respectively; the inset shows the ratio between the calculated in-plane and the averaged bulk thermal conductivities. figure reproduced from s. mei and i. knezevic, j. appl. phys. 118, 175101 (2015), with the permission of aip publishing.) the ratio between the in-plane conductivity and the averaged bulk value (inset to fig. [fig:sltc]) varies between 45% and 70% over the temperature range of interest. the cross-plane conductivity has a weak dependence on temperature, in keeping with the common assumption in simplified models; the weak temperature sensitivity means that the itbr dominates cross-plane thermal transport. these results show that it is important to carefully calculate the thermal-conductivity tensor in qcl thermal simulation, and we will use this thermal-conductivity model in the device-level simulation. the active core is not the only region we need to model in a device-level thermal simulation. figure [fig:qclsche] shows a typical schematic (not to scale) of a qcl device in a thermal simulation with a substrate-side mounting configuration. the active core (in this case, consisting of 36 stages and 1.6 µm thick) is embedded between two cladding layers (4.5-µm-thick gaas). the waveguide is supported by a gaas substrate. an insulation layer is deposited around the waveguide and then etched away from the top to make the contact. finally, a contact layer (au) and a thin layer of solder are deposited on top. there is no heat generation in the regions other than the active core. further, these layers are typically thick enough to be treated as bulk materials.
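before turning to those bulk layers, here is a minimal sketch of the interface-transmission interpolation described above; the amm and dmm transmission values, the phonon wave vector, and the roughness below are illustrative assumptions, not the fitted numbers from this model.

```python
# sketch of interpolating interface transmission between amm (specular) and
# dmm (diffuse) limits using the specularity p(q); all numbers are toy values.
import numpy as np

delta = 1.5e-10                     # effective rms roughness, m (hypothetical)

def specularity(q, theta, delta=delta):
    """probability of specular (mirror-like) scattering at the interface."""
    return np.exp(-4.0 * q**2 * delta**2 * np.cos(theta)**2)

def transmission(q, theta, t_amm, t_dmm):
    """interface transmission: specular fraction uses amm, the rest uses dmm."""
    p = specularity(q, theta)
    return p * t_amm + (1.0 - p) * t_dmm

q = 2.0e9                           # phonon wave vector magnitude, 1/m (toy)
print(transmission(q, theta=0.0, t_amm=0.95, t_dmm=0.55))
```

short-wavelength (large-q) phonons see a rough interface and scatter diffusely, while long-wavelength phonons are transmitted almost specularly, which is the qualitative behavior the single-parameter model is built to capture.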
bulk-substrate (gaas or inp) thermal conductivities are readily obtained for iii-v materials from experiment, as well as from relatively simple theoretical models (table [table:bulktc]). (table [table:bulktc]: thermal conductivity as a function of temperature for the materials in a qcl structure.) the length of a qcl device is much greater than its width; therefore, we can assume the length is infinite and carry out a 2d thermal simulation. the schematic of the simulation domain (not to scale) is shown in fig. [fig:qclsche]. the boundary of the simulation region is highlighted in green, and the boundary conditions (heat sink at a fixed temperature, convective boundary condition, or adiabatic boundary condition) can be applied independently to each boundary. typically, the bottom boundary of the device is connected to a heat sink, while the other boundaries have the convective boundary condition at the environment temperature (single-device case) or the adiabatic boundary condition (qcl-array case). typical values for the layer thicknesses are in the micrometer range. we use the finite-element method to solve for the temperature distribution. the whole device is divided into different regions according to their material properties. each stage of the active region is treated as a single unit, with the heat-generation rate tabulated in the device table in order to capture the nonuniform behavior among stages. the active core is very small, but it is also the only region with heat generation, small thermal conductivity, and spatial nonuniformity. to capture the behavior of the active region while saving computational time, we use a nonuniform mesh in the finite-element solver to emphasize the active-core region. figure [fig:qclmesh] shows a mesh generated in the simulation. it is known that, among all the stages in the active core, the temperature $t_i$ and the electric field $f_i$ (where $i$ represents the stage index) are not constant, but we have no _a priori_ knowledge of how they depend on the stage index. however, we know that the charge-current continuity equation must hold, with $\nabla \cdot \vec{j} = 0$ in the steady state; this implies that the current density must be uniform, as the current flow is essentially in one dimension, along the growth direction. this insight is key to bridging the single-stage and device-level simulations. from sec. [sec:emc], we can obtain the heat-generation rate inside the active core by running the single-stage emc simulation. each single-stage emc is carried out at a specific electric field $f$ and temperature $t$ and outputs both the current density $j$ and the heat-generation rate $q$. by sweeping $f$ and $t$ over the range of interest, we obtain a table connecting field and temperature to the corresponding current density and heat-generation rate, $[f, t] \to [j, q]$. however, from the discussion above, the input to the thermal simulation needs to be the constant current density $j$. therefore, we ``flip'' the recorded table into a so-called device table, $[j, t] \to [f, q]$, suitable for the coupled simulation. figure [fig:thermalflchart] depicts the flowchart of the device-level electrothermal simulation.
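a toy sketch of the table ``flip'' described above, inverting j(f) at fixed temperature by interpolation; the swept data below are synthetic placeholders for actual emc output, and the inversion assumes j increases monotonically with f (which would fail inside a negative-differential-conductance region).

```python
# sketch of flipping [F,T] -> (J,Q) into a device table keyed by (J,T) -> (F,Q).
# the swept arrays are synthetic stand-ins for single-stage emc results.
import numpy as np

F = np.linspace(40e5, 60e5, 11)                 # field sweep, V/m (toy)
T = np.array([77.0, 150.0, 300.0])              # lattice temperatures, K (toy)
J = np.array([2e7 * (F / F.min())**2 / (1 + t / 300.0) for t in T])  # toy J(F,T)
Q = 0.9 * J * F                                 # toy heat generation ~ J*F, W/m^3

def device_table(j_target, t_idx):
    """return (F, Q) at a given current density and temperature index."""
    jrow = J[t_idx]
    if not (jrow.min() <= j_target <= jrow.max()):
        raise ValueError("j_target outside the swept range")
    f = np.interp(j_target, jrow, F)            # invert J(F): needs monotonic J
    q = np.interp(j_target, jrow, Q[t_idx])
    return f, q

print(device_table(j_target=2.5e7, t_idx=0))
```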
before the simulation, we obtain the device table $[j, t] \to [f, q]$, as discussed above. we also have to calculate the thermal conductivities (in-plane and cross-plane) of the active region as a function of temperature and tabulate them, based on the model described in sec. [sec:iiivsl]. we also need the bulk thermal conductivities of the other materials in the device (cladding layer, substrate, insulation, contact, and solder) as a function of temperature. these material properties are standard and already well characterized. (figure [fig:thermalflchart]: flowchart of the device-level electrothermal simulation, which yields the electric-field and temperature profiles across the whole device. based on the tabulated information from the single-stage simulation and the assumed current density, stage-by-stage profiles of the electric field and the heat-generation rate are obtained. an accurate temperature-dependent thermal-conductivity model, which includes the boundary resistances of the layers, and the temperature-profile guess are used as input to the heat diffusion equation, which is then iteratively solved (with an updated temperature profile in each step) until the thermal boundary conditions are satisfied.) each device-level thermal simulation is carried out in a certain environment (i.e., for a given set of boundary conditions) and with a certain current density. at the beginning of the simulation, an initial temperature profile is assigned. with the input from the device table and the thermal-conductivity data in each region, we use a finite-element method to iteratively solve the heat diffusion equation until convergence. at the end of the simulation, we obtain a thermal map of the whole device. further, from the temperature in each stage and the injected current density, we obtain the nonuniform electric-field distribution. with the electric field in each stage and given the stage thickness, we can accurately calculate the voltage drop across the device and obtain the current-voltage characteristic. by changing the mounting configuration (layer thicknesses and widths) or the boundary conditions, the temperature profile can be changed. in this section, we present detailed simulation results for a 9-µm gaas/algaas mid-ir qcl based on a conventional three-well active-region design. the chosen structure has 36 repetitions of the single stage; each stage has 16 layers. starting from the injection barrier, the layer thicknesses in one stage (in ångströms) are *46*/19/*11*/54/*11*/48/*28*/34/*17*/30/**//**//*26*/30. here, the barriers (algaas) are in bold while the wells (gaas) are in normal font; the underlined layers are doped. the results at 77 k are shown here. figure [fig:pageband] shows the electronic states of the chosen structure under the design operating field of 48 kv/cm, calculated from the coupled solver (see sec. [sec:kp]). the active-region states of the central stage are represented by bold red curves; 1, 2, and 3 are the ground state and the lower and upper lasing levels, respectively. the lowest two injector states are labeled explicitly, and the other (blue) states together form the miniband. (when the electron density in the qcl is high, the electronic bands have to be calculated self-consistently with emc.) (figure [fig:pageband]: $\gamma$-valley subbands in two adjacent stages of the simulated gaas/algaas-based structure. the bold red curves denote the active-region states (1, 2, and 3 represent the ground state and the lower and upper lasing levels, respectively). the blue curves represent injector states. figure reproduced from y. b. shi and i. knezevic, j. appl. phys. 116, 123105 (2014), with the permission of aip publishing.)
the current density versus field ($j$-$f$) curve, one of the key qcl characteristics at a given temperature, is intuitive to obtain in emc. after calculating the electronic band structure at a certain field, the wavefunctions, energy levels, and effective masses of each subband in each stage are fed into the emc solver. in the emc simulation, we include all the scattering mechanisms described in sec. [sec:emc]. since we employ periodic boundary conditions, the current density can be extracted from how many electrons cross the stage boundaries in a certain amount of time in the steady state. the net flow of electrons is calculated by subtracting the flow between the central stage and the previous stage from the flow between the central stage and the next stage in each time step. the current density is then calculated as $j = e\, \delta n_{\rm net} / (a\, \delta t)$, where $\delta t$ is the time interval during which the net flow $\delta n_{\rm net}$ is recorded and $a$ is the effective in-plane area of the simulated device. since doping is the main source of electrons, the area is calculated as $a = n_{\rm sim} / n_s$, where $n_{\rm sim}$ is the number of simulated electrons and $n_s$ is the sheet doping density (per stage) in the fabricated device. due to the stochastic nature of emc, we need to average the current density over multiple time steps. in practice, one can record the net cumulative number of electrons per unit area that leave a stage over time and obtain a linear fit to this quantity in the steady state; the slope yields the steady-state current density (a toy sketch of this extraction is given below). from each individual simulation, we extract the current density at a given electric field and temperature. to obtain the $j$-$f$ curve at that temperature, we sweep the electric field. to demonstrate the importance of including nonequilibrium-phonon effects, we carry out the simulation with thermal phonons alone and with both thermal and excess nonequilibrium phonons. figure [fig:jf] shows the $j$-$f$ curve for the simulated structure with (filled squares) and without (empty squares) nonequilibrium phonons at 77 k. it can be seen that the current density at a given field considerably increases when nonequilibrium phonons are included, and the trend holds up to 60 kv/cm. this difference is prominent at low temperatures (< 200 k) and goes away at rt. we calculate the modal gain as proportional to the population inversion, $g_m \propto z_{32}^2\, \gamma\, \delta n / (\epsilon_0\, n_{\rm ref}\, \lambda\, l_{\rm stage}\, \gamma_{32})$, where $\epsilon_0$ is the permittivity of free space. some constants are obtained from experiment: the waveguide confinement factor $\gamma$, the stage length $l_{\rm stage}$, the optical-mode refractive index $n_{\rm ref}$, and the full width at half maximum $\gamma_{32}$ of the transition. the dipole matrix element between the upper and lower lasing levels (in nm) and the emission wavelength (in µm) are also estimated in experiment, but we calculate these two terms directly. the dipole matrix element is calculated as $z_{32} = \langle \psi_3 | z | \psi_2 \rangle$; its value is slightly different at different fields, as the band structure changes. similarly, the wavelength of the emitted photon also changes with field; one can calculate it from the energy difference between the upper and lower lasing levels. the calculated wavelength at 48 kv/cm is 8.964 µm.
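the slope-fit extraction of the steady-state current density described above can be sketched as follows, with synthetic data standing in for the emc crossing log:

```python
# sketch of the current-density extraction: fit a line to the cumulative net
# number of electrons (per unit area) crossing a stage boundary; the slope
# times e gives J. the data below are synthetic placeholders.
import numpy as np

e = 1.602e-19
t = np.linspace(0.0, 20e-12, 2001)             # simulated time, s
j_true = 2.5e7                                 # "true" current density, A/m^2 (toy)
rng = np.random.default_rng(1)
crossings = j_true / e * t + rng.normal(0, 5e13, t.size)  # net crossings per m^2

steady = t > 5e-12                             # discard the initial transient
slope, _ = np.polyfit(t[steady], crossings[steady], 1)
print("extracted J = %.3g A/m^2" % (e * slope))
```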
the population inversion $\delta n$ entering the gain expression is obtained from emc; again, due to the randomness of emc, the population inversion needs to be averaged over a period of time after the steady state has been reached. figure [fig:gm] shows the modal gain of the device with nonequilibrium (filled squares) and thermal (empty squares) phonons as a function of (a) electric field and (b) current density at 77 k. the horizontal dotted line indicates the total estimated loss in the device, which is used to help find the threshold current density $j_{\rm th}$. the lasing threshold is achieved when the modal gain equals the total loss. we consider two sources of loss, mirror ($\alpha_{\rm m}$) and waveguide ($\alpha_{\rm w}$), so the total loss is $\alpha_{\rm m} + \alpha_{\rm w}$. the intercepts between the total-loss line and the gain-versus-field [fig. [fig:gm](a)] and gain-versus-current-density [fig. [fig:gm](b)] curves give the threshold field and threshold current density, respectively. like the current density, the modal gain of the device is also considerably higher when nonequilibrium phonons are considered, which leads to a lower threshold field and a lower threshold current density (a toy sketch of the threshold extraction is given below). the reason for the increased current density and modal gain with nonequilibrium phonons can be attributed to the enhanced injection selectivity and efficiency. the way to obtain the heat-generation rate is similar to how we get the current density. we record the cumulative net energy emission as a function of time and fit a straight line to the region where the simulation has reached a steady state; the slope of the line is used as the steady-state heat-generation rate. figure [fig:hgen] shows the heat-generation rate as a function of electric field at 77 k; the filled squares and the empty squares depict the situations with and without nonequilibrium phonons, respectively. this section serves to illustrate how the described simulation is implemented in practice, and what type of information it provides at the single-stage and device levels. first, the single-stage coupled simulation has to be performed at different temperatures, as in fig. [fig:singlestagejf](a). we note that the calculated $j$-$f$ curves show a negative-differential-conductance region, which is typical for such calculations but generally not observed in experiment; instead, a flat dependence is typically recorded. at every temperature and field, we also record the heat-generation rate, as depicted in fig. [fig:singlestagejf](b). second, the thermal model for the whole structure is developed. considering that growth techniques improve over time, structures grown around the same time should have similar properties. since the device studied here was built in 2001, we assume the active core should have an effective rms roughness similar to that of other lattice-matched gaas/alas sls built around the same time. based on our previous simulation work on fitting sl thermal conductivities, we choose a corresponding effective rms roughness in this calculation. figure [fig:pagetc] shows the calculated in-plane (solid line) and cross-plane (dashed line) thermal conductivities of the active core, along with the calculated bulk thermal conductivity (dash-dotted line) of the gaas substrate.
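returning to the gain curves of fig. [fig:gm], the threshold extraction amounts to locating the intercept of the gain curve with the total-loss line; a toy sketch with made-up numbers:

```python
# sketch of reading the threshold current density off the gain curve: find
# where the modal gain reaches the total loss (mirror + waveguide). numbers
# below are toy values, not the simulated device's results.
import numpy as np

Jgrid = np.linspace(1e7, 6e7, 26)        # current-density sweep, A/m^2 (toy)
Gm = 5.0e-7 * (Jgrid - 0.8e7)            # toy monotonic modal gain, 1/cm
alpha_m, alpha_w = 5.0, 15.0             # mirror and waveguide loss, 1/cm (toy)
total_loss = alpha_m + alpha_w

# first current density where the gain reaches the loss line, by interpolation
J_th = np.interp(total_loss, Gm, Jgrid)
print("threshold current density J_th = %.3g A/m^2" % J_th)
```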
the structure we considered operated in pulsed mode at 77 k. depending on the duty cycle, the temperature distribution in the device can differ considerably. figure [fig:tempprofileschematic] depicts a typical temperature profile across the device, while fig. [fig:tempprofiledutycycle] depicts the profile across the active core alone at duty cycles of 100% (essentially continuous-wave lasing, if the device achieved it) and 0.01% (as in experiment). clearly, cw operation would result in dramatic heating of the active region. finally, fig. [fig:jv] shows the current-voltage curve of the entire simulated device at 77 k at duty cycles of 0.01% and 100%, together with the experimental data. we overviewed electronic and thermal transport simulation of qcls, as well as recent efforts in device-level electrothermal modeling of these structures, which is appropriate for transport below threshold, where the effects of the optical field are negligible. we specifically focused on mid-ir qcls, in which electronic transport is largely incoherent and can be captured by the ensemble monte carlo technique. the future of qcl modeling, especially for near-rt cw operation, will likely include improvements on several fronts: 1) further development of computationally efficient yet rigorous quantum-transport techniques for electronic transport, to fully account for the coherent-transport features that are important in short-wavelength mid-ir devices; 2) a better understanding of, and better numerical models for describing, the role of electron-electron interaction, impurities, and interface roughness in device characteristics; 3) holistic modeling approaches in which electrons, phonons, and photons are simultaneously and self-consistently captured within a single simulation. the goal of qcl simulation should be nothing less than excellent predictive value for device operation across a range of temperatures and biasing conditions, along with unprecedented insight into the fine details of the exciting nonequilibrium physics that underscores the operation of these devices. the authors gratefully acknowledge support by the u.s. department of energy, basic energy sciences, division of materials sciences and engineering, physical behavior of materials program, award no. de-sc0008712. the work was performed using the resources of the uw-madison center for high throughput computing (chtc).
note : this is a book chapter that will appear in _ handbook of optoelectronic device modeling and simulation _ , taylor & francis books , 2017 . editor : joachim piprek . for table of contents , see http://www.nusod.org/piprek/handbook.html
in earlier work, the average capacity was analyzed for a point-to-point channel model where slow changes result in varying severity of the noise. in this work, we apply a similar concept to interference networks by assuming that slowly changing deep-fading conditions result in link erasures. we consider the linear interference network introduced by wyner, with two fading effects taken into account: long-term fluctuations that result in link erasures over a complete block of time slots, and short-term fluctuations that allow us to assume that any specific joint realization of the non-zero channel coefficients takes place with zero probability. we study the problem of associating receivers with transmitters and setting up the backhaul links for coordinated multi-point (comp) transmission, in order to achieve the optimal average degrees of freedom (dof). this problem was previously studied for the case of no erasures. here, we extend those schemes to account for the occurrence of link erasures, and propose new schemes that achieve better average dof at high probabilities of erasure. we use the standard model for the interference channel with $k$ single-antenna transmitters and $k$ single-antenna receivers, $y_i(t) = \sum_{j} h_{i,j}(t)\, x_j(t) + z_i(t)$, where $t$ is the time index, $x_j(t)$ is the transmitted signal of transmitter $j$, $y_i(t)$ is the received signal at receiver $i$, $z_i(t)$ is the zero-mean unit-variance gaussian noise at receiver $i$, and $h_{i,j}(t)$ is the channel coefficient from transmitter $j$ to receiver $i$ over the time slot $t$. we remove the time index in the rest of the paper for brevity unless it is needed. for any positive integer $k$, we use $[k]$ to denote the set $\{1, 2, \ldots, k\}$. each transmitter can only be connected to its corresponding receiver as well as the one following receiver, and the last transmitter can only be connected to its corresponding receiver. more precisely, $h_{i,j}$ is identically zero unless $i \in \{j, j+1\}$. in order to consider the effect of long-term fluctuations (shadowing), we assume that communication takes place over blocks of time slots, and let $p$ be the probability of block erasure. in each block, we assume that each link existing in the network topology is erased, i.e., its coefficient is set to zero, independently with probability $p$. moreover, short-term channel fluctuations allow us to assume that in each time slot, all non-zero channel coefficients are drawn independently from a continuous distribution. finally, we assume that global channel state information is available at all transmitters and receivers. for each $i \in [k]$, let $\mathcal{t}_i \subseteq [k]$ be the transmit set of receiver $i$, i.e., those transmitters with the knowledge of message $w_i$. the transmitters in $\mathcal{t}_i$ cooperatively transmit the message $w_i$ to receiver $i$. the messages are assumed to be independent of each other. the _cooperation order_ $m$ is defined to be the maximum transmit set size, $m = \max_i |\mathcal{t}_i|$. a message assignment strategy is defined by a sequence of transmit sets $\mathcal{t}_{i,k}$, $i \in [k]$, $k \in \{1, 2, \ldots\}$, with $\mathcal{t}_{i,k} \subseteq [k]$ and $|\mathcal{t}_{i,k}| \leq m$.
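a small sketch of the network model just described: transmitter j can reach receivers j and j+1, and each existing link is erased independently with probability p in a given block. the code below only illustrates the topology and the erasure process, not any coding scheme.

```python
# toy sketch of the wyner network with block erasures: build the K x K binary
# connectivity for one block; all links outside {j, j+1} are absent by model.
import numpy as np

def wyner_connectivity(K, p, rng):
    """return C[i, j] = True if the link from transmitter j to receiver i survives."""
    C = np.zeros((K, K), dtype=bool)
    for j in range(K):
        for i in (j, j + 1):               # wyner model: own and next receiver
            if i < K and rng.random() >= p:
                C[i, j] = True
    return C

rng = np.random.default_rng(0)
print(wyner_connectivity(K=6, p=0.3, rng=rng).astype(int))
```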
if the last transmitter in the subnetwork is the transmitter $x$, and $n_x = 1$, then each message in this subnetwork is available at the transmitter with the same index. we use lemma [lem:equiv] to describe message assignment strategies for large networks through repeating patterns of short ternary strings. given a ternary string of fixed length whose digits sum to its length, we define the associated sequence $\{n_j\}$ by repeating the digits of the string periodically along the network. we now evaluate all possible message assignment strategies satisfying the cell association constraint using ternary strings through the above representation. we restrict our attention to irreducible message assignments only, and note that if there are two transmitters with indices $x$ and $y$, $x < y$, such that $n_x = n_y = 2$, then there is a third transmitter with index $z$, $x < z < y$, that carries no messages. it follows that any string defining message assignment strategies that satisfy the cell association constraint has to have one of four forms, in which every occurrence of the digit $2$ is matched with an occurrence of the digit $0$ and all remaining digits equal $1$. we now introduce the three candidate message assignment strategies illustrated in figure [fig:msgassignment], and characterize the tdma per user dof achieved through each of them; we will show later that the optimal message assignment strategy at any value of $p$ is given by one of the three introduced strategies. we first consider the message assignment strategy defined by the all-ones string; here, each message is available at the transmitter having the same index. [lem:highp] under the restriction to this message assignment strategy, the average per user dof equals the value $\tau_1(p)$ achieved by the optimal tdma scheme. for a set $\mathcal{a} \subseteq [k]$, define $\mathcal{b}$ as the set of indices of transmitters that exclusively carry the messages for the receivers in $\mathcal{a}$, and let $\bar{\mathcal{b}}$ denote the complement set; more precisely, $\mathcal{b} = [k] \backslash \cup_{i \notin \mathcal{a}} \mathcal{t}_i$. [lem:dofouterbound] if there exist a set $\mathcal{a} \subseteq [k]$, a function $f$, and a function $g$ whose definition does not depend on the transmit power constraint, through which the missing transmit signals can be recovered from the received signals and messages associated with $\mathcal{a}$, then the sum dof is bounded accordingly. [thm:mone] the average per user dof for the cell association problem is given by $\tau(p) = \max\{\tau_1(p), \tau_2(p), \tau_3(p)\}$, where $\tau_1$, $\tau_2$, and $\tau_3$ are the per user dof expressions of the three candidate strategies. in order to prove the statement, we need to show the converse; we do so by using lemma [lem:dofouterbound] to show that for any irreducible message assignment strategy satisfying the cell association constraint, and any network realization, the asymptotic per user dof is given by that achieved through the optimal tdma scheme. consider message assignment strategies defined by strings having one of the first three forms. we view each network realization as a series of atomic subnetworks, and show that for each atomic subnetwork, the sum dof is achieved by the optimal tdma scheme.
for an atomic subnetwork consisting of a given number of users, we note that only some of the users are active in the optimal tdma scheme; we now show, using lemma [lem:dofouterbound], that the sum dof for the users in the subnetwork is bounded by the number of active users. let the users in the atomic subnetwork have consecutive indices; then we use lemma [lem:dofouterbound] with the set $\mathcal{a}$ consisting of the active users, except in the cases of message assignment strategies defined by strings of two of the forms with an even number of ones, where we use a slightly modified set $\mathcal{a}$. we now note that each transmitter that carries a message for a user in the atomic subnetwork and has an index in $\bar{\mathcal{b}}$ is connected to a receiver in $\mathcal{a}$, and this receiver is connected to one more transmitter with an index in $\mathcal{b}$; hence, the missing transmit signals can be recovered from the received signals and the messages associated with $\mathcal{a}$. the condition in the statement of lemma [lem:dofouterbound] is then satisfied, allowing us to prove that the sum dof for users in the atomic subnetwork is upper bounded by the claimed value. the proof is similar for message assignment strategies defined by strings that have the fourth form. however, there is a difference in selecting the set $\mathcal{a}$ for atomic subnetworks in which the messages of two consecutive users are both available at the same transmitter; in this case, we apply lemma [lem:dofouterbound] with the set $\mathcal{a}$ defined as above, but including and excluding the appropriate indices. it can be seen that the condition in lemma [lem:dofouterbound] is satisfied in this case as well, and the proved upper bound on the sum dof for each atomic subnetwork is achievable through tdma. in figure [fig:monemax], we plot $\max\{\tau_1(p), \tau_2(p), \tau_3(p)\}$ at each value of $p$. the result of theorem [thm:mone] implies that the message assignment strategies considered in lemmas [lem:highp], [lem:lowp], and [lem:middlep] are optimal at high, low, and middle values of the erasure probability, respectively. we note that in densely connected networks at a low probability of erasure, the _interference-aware_ message assignment strategy in figure [fig:lowp] is optimal; through this assignment, the maximum number of interference-free communication links can be created for the case of no erasures. on the other hand, the linear nature of the channel connectivity does not affect the choice of optimal message assignment at a high probability of erasure. as the effect of interference diminishes at a high probability of erasure, assigning each message to a unique transmitter, as in the strategy in figure [fig:highp], becomes the only criterion of optimality. at middle values of $p$, the message assignment strategy in figure [fig:middlep] is optimal; in this assignment, the network is split into four-user subnetworks.
in the first subnetwork, the assignment is optimal as the maximum number of interference-free communication links can be created for the two events corresponding to the two possible sizes of the atomic subnetwork. we have shown that there is no message assignment strategy for the cell association problem that is optimal for all values of $p$. we show in this section that this statement is true even for the case where each message can be available at more than one transmitter ($m > 1$). recall that, for a given value of $m$, we say that a message assignment strategy is universally optimal if it can be used to achieve the optimal average per user dof for all values of $p$. [thm:comp] for any value of the cooperation constraint $m$, there does not exist a universally optimal message assignment strategy. for $m = 1$, the proof follows from theorem [thm:mone]. for general $m$, we show that any message assignment strategy that enables the achievability of the optimal average per user dof at high probabilities of erasure is not optimal for the case of no erasures, i.e., cannot be used to achieve the optimal per user dof at $p = 0$. for any message assignment strategy, consider the average number of transmitters in a transmit set that can be connected to the designated receiver, and note that a message assignment strategy that is optimal at high probabilities of erasure has to maximize this average. it follows that there exists a threshold value of $p$ such that, for any message assignment strategy that enables the achievability of the optimal average per user dof above this threshold, almost all messages are assigned to the two transmitters that can be connected to the designated receiver. we recall that, for the case of no erasures, the optimal average per user dof is known. we also note that, following the same footsteps as in the proof of the corresponding theorem in prior work, we can show that for any message assignment strategy of the above kind, the per user dof for the case of no erasures is bounded away from the optimal value; we do so by using lemma [lem:dofouterbound] with a set $\mathcal{a}$ whose complement consists of the indices $i = (2m-1)(j-1)+m$, $j \in \mathbf{z}^{+}$. each message is then assigned as illustrated in figure [fig:mtwojonenew]. we note that certain transmit signals are inactive, and hence, we split the network into five-user subnetworks with no interference between successive subnetworks. we explain the transmission scheme in the first subnetwork and note that a similar scheme applies to each following subnetwork.
in the proposed transmission scheme, any receiver is either inactive or receives its desired message without interference, and no transmitter transmits more than one message for any network realization. it follows that 1 dof is achieved for each message that is transmitted. four of the five messages in the subnetwork are transmitted through their designated transmitters whenever the corresponding channel coefficients are not erased. note that the transmit beam for a message contributing to a neighbouring received signal can be designed to cancel its interference there; similarly, the interference caused by a message at the other neighbouring receiver can be cancelled through the second transmitter carrying it. it follows that 1 dof is achieved for each of these messages whenever the relevant links are present. also, a message is transmitted through its alternative transmitter if it cannot be transmitted through its designated transmitter and the neighbouring message is not occupying the alternative transmitter; more precisely, this happens when the designated link is erased while the alternative link is present, thereby achieving an extra dof. similarly, the symmetric message can be transmitted through its alternative transmitter under the analogous conditions. finally, the remaining message will be transmitted through its transmitter if the competing message is not transmitted through that transmitter and the neighbouring message is not causing interference at the designated receiver. the competing message is not transmitted through the shared transmitter whenever its own link is present, and the neighbouring message does not cause interference whenever its link is erased or it can be routed through its alternative transmitter. more precisely, the message is transmitted if and only if both of the following are true: the shared transmitter is free, and the potential interferer is either erased or routed elsewhere. it follows that a corresponding fraction of a dof is achieved for this message. similarly, the symmetric message can be transmitted through its transmitter if and only if the competing message is not transmitted through it, and the potential interferer is either not transmitted or can be transmitted without causing interference. the above conditions are satisfied with a probability that follows from the independence of the link erasures; since we have counted twice the event that both routing alternatives are simultaneously available, the corresponding term is subtracted, and the dof achieved for this message follows by inclusion-exclusion. summing the dof achieved for the messages in the subnetwork, we conclude the claimed lower bound, which is the same inequality as stated above. although the scheme of theorem [thm:mtwoicaware] is optimal for the case of no erasures ($p = 0$), we know from theorem [thm:comp] that better schemes exist at high erasure probabilities. since in each five-user subnetwork in the scheme of theorem [thm:mtwoicaware], only three users have their messages assigned to the two transmitters that can be connected to their receivers, and two users have only one of these transmitters carrying their messages, the achieved average per user dof, normalized by its trivial upper bound, approaches a suboptimal limit as $p \to 1$. this leads us to consider an alternative message assignment where the two transmitters carrying each message are exactly the two transmitters that can be connected to its designated receiver; such an assignment drives the normalized ratio to its optimal limit as $p \to 1$. in the following theorem, we analyze a transmission scheme based on this assignment. [thm:mtwoic] for $m = 2$, the stated average per user dof is achievable. for any message assignment, no message can be transmitted if the links from both transmitters carrying the message to its designated receiver are absent, and hence, the average dof achieved for each message is at most $1 - p^2$. it follows that the outer bound holds, and we then need only to prove that the inner bound is valid. in the achieving scheme, each message is assigned to the two transmitters that may be connected to its designated receiver, i.e.,
$\mathcal{t}_i = \{i-1, i\}$ for every $i \in [k]$. a further scheme, introduced in prior work, could also be considered here; however, we did not include it, as it does not increase the maximum of the bounds derived above at any value of $p$. finally, although the considered channel model allows for using an interference alignment scheme over multiple channel realizations (symbol extensions), all the proposed schemes require only coding over one channel realization because of the sparsity of the linear network. we considered the problem of assigning messages to transmitters in a linear interference network with link erasure probability $p$, under a constraint that limits the number of transmitters at which each message can be available. for the case where $m = 1$, we identified the optimal message assignment strategies at different values of $p$, and characterized the average per user dof. for general values of $m$, we proved that there is no message assignment strategy that is optimal for all values of $p$. we finally introduced message assignment strategies for the case where $m = 2$, and derived inner bounds on the average per user dof that are asymptotically optimal as $p \to 0$ and as $p \to 1$. a. el gamal, v. s. annapureddy, and v. v. veeravalli, "interference channels with coordinated multi-point transmission: degrees of freedom, message assignment, and fractional reuse," _submitted to ieee trans. inf. theory_, available at http://arxiv.org/abs/1211.2897, nov. 2012.
a linear interference network is considered. long-term fluctuations (shadow fading) in the wireless channel can lead to any link being erased with probability $p$. each receiver is interested in one unique message that can be available at $m$ transmitters. in a cellular downlink scenario, the case where $m = 1$ reflects the cell association problem, and the case where $m > 1$ reflects the problem of setting up the backhaul links for coordinated multi-point (comp) transmission. in both cases, we analyze degrees of freedom (dof) optimal schemes for the case of no erasures, and propose new schemes with better average dof performance at high probabilities of erasure. for $m = 1$, we characterize the average per user dof, and identify the optimal assignment of messages to transmitters at each value of $p$. for general values of $m$, we show that there is no strategy for assigning messages to transmitters in large networks that is optimal for all values of $p$.
in the past two decades, the lattice boltzmann (lb) method has achieved great success in modeling various fluid systems, and its capability in simulating multiphase flows can be recognized as a unique advantage that distinguishes it from traditional numerical methods. one of the most fundamental problems in the lb method for multiphase flows is how to describe the interfacial dynamics, which is a natural consequence of the intermolecular interactions between different phases. up to now, based on different physical pictures of the interactions, several types of lb multiphase models have been established, which can be substantially divided into four categories: the colour model, the pseudo-potential model, the free-energy model, and the phase-field based model. in most of these lb multiphase models, the interface is not tracked explicitly, and the region with non-zero density gradient is identified as the interface; therefore the physics of the interface tracking equation is unknown. fortunately, the phase-field theory provides a firm foundation for the interface physics, in which the interface is tracked by an order parameter $\phi$ that obeys the cahn-hilliard equation (che), $$\frac{\partial \phi}{\partial t} + \nabla \cdot (\phi\,\mathbf{u}) = \nabla \cdot \bigl(m\,\nabla\mu\bigr)\,, \qquad (1)$$ where $\phi$ represents the order variable, $m$ is the mobility coefficient, and $\mu$ is the chemical potential, a function of $\phi$, in which the coefficients $\beta$ and $\kappa$ are related to the interface thickness $w$ and the surface tension $\sigma$ by fixed relationships. in eq. (1), $\mathbf{u}$ is the fluid velocity, which is governed by the incompressible navier-stokes equations $$\nabla \cdot \mathbf{u} = 0\,,$$ $$\rho\Bigl(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\Bigr) = -\nabla p + \nabla\cdot\bigl[\rho\nu\bigl(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathrm{T}}\bigr)\bigr] + \mathbf{F}_s + \mathbf{G}\,,$$ where $\rho$ is the fluid density, $p$ is the pressure, $\nu$ is the kinematic viscosity, $\mathbf{F}_s$ is the surface tension force, and $\mathbf{G}$ is the external force. some researchers have constructed lb multiphase models based on the phase-field theory, where the interface needs to be tracked explicitly by an index or order distribution function. he _et al._ proposed an lb model for incompressible multiphase flows, in which they adopted an index distribution function to track the interface and a pressure distribution function for solving the flow field. based on this model, they successfully simulated the two-dimensional rayleigh-taylor instability, and later the three-dimensional case, using the d3q15 lattice structure in the lb equations for both the interface capturing and the flow field. although this model is rather robust, it suffers from some limitations, one of which is that the recovered interface equation is inconsistent with the che, as noticed by zheng _et al._; to this end, they developed an lb model for the che in which a source term based on a spatial difference of the distribution function is introduced. the model was subsequently extended to three dimensions using the d3q7 lattice model. recently, zu _et al._ introduced another similar lb model for the che, where a spatial difference term on the equilibrium distribution function was included. they also modified the equilibrium distribution function in the lb equation for the flow field such that the continuity equation (3a) can be derived; however, the computation of the macroscopic pressure and velocity in their scheme is implicit. more recently, we proposed a novel lb model for two-dimensional multiphase flows: on the one hand, a simpler time-dependent source term is incorporated in the lb equation for the interface capturing, so that the two-dimensional che can be recovered correctly; on the other hand, an equilibrium distribution function is delicately designed for the flow field so as to derive the correct continuity equation, while the hydrodynamic properties can be computed explicitly.
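the explicit form of $\mu$ and of the relations for $\beta$ and $\kappa$ is elided in the extracted text; the standard choices for this family of phase-field lb models are the double-well potential $\mu = 4\beta\phi(\phi-1)(\phi-\frac{1}{2}) - \kappa\nabla^2\phi$ with $\beta = 12\sigma/w$ and $\kappa = 3\sigma w/2$, which the following sketch assumes:

```python
import numpy as np

# Chemical potential for the phase-field model. The double-well form and the
# relations beta = 12*sigma/W, kappa = 3*sigma*W/2 are standard choices for
# this model family and are assumptions here, not quotations of the paper.
def chemical_potential(phi, sigma, W, dx=1.0):
    beta = 12.0 * sigma / W
    kappa = 1.5 * sigma * W
    # simple 7-point central Laplacian (the paper uses the isotropic
    # lattice stencil shown later in the text)
    lap = sum(np.roll(phi, 1, a) + np.roll(phi, -1, a) for a in range(phi.ndim))
    lap = (lap - 2.0 * phi.ndim * phi) / dx**2
    return 4.0 * beta * phi * (phi - 1.0) * (phi - 0.5) - kappa * lap
```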
this improved model has also been extended to study axisymmetric multiphase flows. as a continuation of these works, in this paper an efficient three-dimensional lb model for incompressible multiphase flow systems is developed based on the multiple-relaxation-time (mrt) method. this model has some distinct advantages. firstly, the model for the che requires only seven discrete velocities in three dimensions (d3q7); therefore the expenditure in data storage and computational time is smaller than that of other models using the d3q15 lattice. secondly, the mrt collision model is adopted, which has a better accuracy and stability than the single-relaxation-time (srt) model commonly used in other lb multiphase models. finally, the present model is able to deal with fluid flows at a large peclet number or a high reynolds number. the rest of this paper is organized as follows. sec. [sec:method] presents our three-dimensional mrt lb model for multiphase flows. the model is then verified by several classical numerical experiments in sec. [sec:results]. finally, we give a brief summary in sec. [sec:sum]. the lb equation with an mrt collision model for the che can be written as $$f_i(\mathbf{x} + \mathbf{c}_i\delta_t,\, t + \delta_t) = f_i(\mathbf{x}, t) - \bigl(\mathbf{M}^{-1}\mathbf{S}\mathbf{M}\bigr)_{ij}\bigl[f_j(\mathbf{x}, t) - f_j^{eq}(\mathbf{x}, t)\bigr] + \delta_t F_i(\mathbf{x}, t)\,, \qquad (4)$$ where $f_i$ is the order distribution function used to track the interface and $f_i^{eq}$ is the corresponding equilibrium distribution function; the discrete velocities $\mathbf{c}_i$, the weighting coefficients $w_i$, and the sound speed $c_s$ depend on the choice of the discrete-velocity model, and $\eta$ is a parameter related to the mobility. in this work, an efficient d3q7 discrete-velocity model is adopted for the che; its weighting coefficients contain a free parameter, and to ensure positive weighting coefficients this parameter must lie in a bounded range. the transformation matrix $\mathbf{M}$ of the d3q7 model is constructed based on the polynomial set of the discrete velocities. $\mathbf{S}$ in eq. (4) is a diagonal relaxation matrix; if all the relaxation parameters are equal to each other, the mrt model reduces to the srt model. to recover the correct che, the source term in eq. (4) should be defined as $$F_i = \bigl[\mathbf{M}^{-1}\bigl(\mathbf{I} - \tfrac{\mathbf{S}}{2}\bigr)\mathbf{M}\bigr]_{ij} R_j\,,$$ where $\mathbf{I}$ is the unit matrix and $R_j$ involves the time derivative of $\phi\,\mathbf{u}$. in the present model, the order parameter is computed by $$\phi = \sum_i f_i\,,$$ and the density is taken as a linear function of the order parameter, $$\rho = \phi\,(\rho_l - \rho_g) + \rho_g\,,$$ where $\rho_l$ and $\rho_g$ represent the densities of the liquid and gas phases, respectively. the evolution of lb equation (4) is commonly divided into two steps, i.e., the collision process, $$f_i^{*}(\mathbf{x}, t) = f_i(\mathbf{x}, t) - \bigl(\mathbf{M}^{-1}\mathbf{S}\mathbf{M}\bigr)_{ij}\bigl[f_j(\mathbf{x}, t) - f_j^{eq}(\mathbf{x}, t)\bigr] + \delta_t F_i(\mathbf{x}, t)\,,$$ and the propagation process, $$f_i(\mathbf{x} + \mathbf{c}_i\delta_t,\, t + \delta_t) = f_i^{*}(\mathbf{x}, t)\,.$$ to reduce the matrix operations, it is advisable to implement the collision process of the mrt model in the moment space.
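the collision-propagation splitting above can be sketched in a few lines of numpy; this is a minimal illustration under our own assumptions (periodic boundaries, a precomputed collision matrix $\mathbf{M}^{-1}\mathbf{S}\mathbf{M}$, and a standard d3q7 velocity ordering), not the paper's implementation:

```python
import numpy as np

# D3Q7 velocity set: rest particle plus 6 face neighbours (a standard
# ordering; the paper's exact ordering is an assumption here).
C = np.array([[0,0,0],[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])

def mrt_collide_stream(f, feq, Fsrc, Minv_S_M, dt=1.0):
    """One MRT step for the order-parameter populations f[i, x, y, z].

    Collision is written in velocity space as
        f* = f - (M^-1 S M)(f - feq) + dt * Fsrc,
    which is algebraically equivalent to relaxing the moments m = M f.
    """
    post = f - np.einsum('ij,j...->i...', Minv_S_M, f - feq) + dt * Fsrc
    # streaming: shift each population along its lattice velocity (periodic)
    for i in range(f.shape[0]):
        post[i] = np.roll(post[i], shift=tuple(C[i]), axis=(0, 1, 2))
    return post

# the order parameter is then recovered as phi = f.sum(axis=0)
```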
by premultiplying the transformation matrix $\mathbf{M}$, we can easily derive the equilibrium distribution function in the moment space, in which the x-, y-, and z-components of the macroscopic velocity appear explicitly. similarly, the source term can be represented in the moment space. the chapman-enskog analysis is carried out on the lb evolution equation (4), and the results demonstrate that the che can be derived correctly from the present mrt model; the relationship between the mobility and the relaxation parameter is also derived as $$m = \eta\, c_s^2 \Bigl(\frac{1}{s_\phi} - \frac{1}{2}\Bigr)\delta_t\,,$$ where $s_\phi$ is the relaxation rate associated with the first-order moments. the mrt lb equation with a source term for the nses reads $$g_i(\mathbf{x} + \mathbf{c}_i\delta_t,\, t + \delta_t) = g_i(\mathbf{x}, t) - \bigl(\mathbf{M}^{-1}\mathbf{S}\mathbf{M}\bigr)_{ij}\bigl[g_j(\mathbf{x}, t) - g_j^{eq}(\mathbf{x}, t)\bigr] + \delta_t\, G_i\,, \qquad (20)$$ where $g_i$ is the density distribution function and $g_i^{eq}$ is the corresponding equilibrium distribution function. to simulate fluid flows in three dimensions, one can choose among several types of lattice velocity models, such as d3q15 or d3q19; the d3q15 lattice model is used in this work due to its smaller data storage and higher computational efficiency. following the work of qian, the discrete velocities of the d3q15 lattice model consist of a rest velocity, six face-neighbour velocities, and eight corner-neighbour velocities, and the weight coefficients are assigned according to this ordering. the transformation matrix in eq. (20) is constructed analogously, and the corresponding diagonal relaxation matrix contains the relaxation rates of the hydrodynamic moments. the source term in eq. (20) is defined as $$G_i = \bigl[\mathbf{M}^{-1}\bigl(\mathbf{I} - \tfrac{\mathbf{S}}{2}\bigr)\mathbf{M}\bigr]_{ij}\, T_j\,,$$ where $T_j$ depends on the total force, in which $\mathbf{F}_s$ is the interfacial force, with the surface tension taken in the potential form $\mathbf{F}_s = \mu\nabla\phi$. in the present model, the macroscopic pressure and velocity can be obtained explicitly from the zeroth- and first-order moments of $g_i$, supplemented with half-time-step force corrections. the collision process of the mrt lb equation for the nses is also carried out in the moment space; after some algebraic manipulations, the equilibrium distribution function and the source term in the moment space can be derived. we also conduct the chapman-enskog analysis on lb equation (20), and the results show that the present model correctly recovers the nses, with the kinematic viscosity determined by $$\nu = c_s^2 \Bigl(\frac{1}{s_\nu} - \frac{1}{2}\Bigr)\delta_t\,,$$ where $s_\nu$ is the relaxation rate related to the shear viscosity. in practical applications, the derivative terms in the present model should be discretized by suitable difference schemes. as widely adopted in the references, an explicit difference scheme, $$\partial_t \chi(\mathbf{x}, t) = \frac{\chi(\mathbf{x}, t) - \chi(\mathbf{x}, t - \delta_t)}{\delta_t}\,,$$ is used for computing the time derivative in eq. (12), and the isotropic central schemes $$\nabla\chi(\mathbf{x}) = \sum_{i \neq 0} \frac{w_i\, \mathbf{c}_i\, \chi(\mathbf{x} + \mathbf{c}_i\delta_t)}{c_s^2\, \delta_t}\,, \qquad \nabla^2\chi(\mathbf{x}) = \sum_{i \neq 0} \frac{2 w_i \bigl[\chi(\mathbf{x} + \mathbf{c}_i\delta_t) - \chi(\mathbf{x})\bigr]}{c_s^2\, \delta_t^2}$$ are employed for calculating the gradient and the laplacian operator, respectively. in the above equations, $\chi$ represents an arbitrary function. it should be noted that these schemes not only preserve second-order accuracy in space, but also ensure the global mass conservation of the multiphase system. in this section, we validate the present three-dimensional mrt lb model with several numerical examples. we first test the performance of the three-dimensional lb model in interface capturing by simulating two classical benchmark problems: the rotation of the zalesak's sphere and the deformation field flow. in these simulations, only the evolution equation (4) is used, since the velocity field is specified in advance. next, we simulate the three-dimensional rayleigh-taylor instability to show the ability of the present model for multiphase flows, where a comparison between the numerical results and available reference results is also conducted.
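before turning to the tests, the d3q15 stencil and the isotropic difference schemes just described can be sketched as follows; the weights $2/9$, $1/9$, $1/72$ and $c_s^2 = 1/3$ are the standard d3q15 values of qian _et al._ and are assumed here:

```python
import numpy as np

# D3Q15 lattice: rest particle, 6 face neighbours, 8 corner neighbours.
C15 = np.array([[0,0,0]] +
               [[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]] +
               [[sx,sy,sz] for sx in (1,-1) for sy in (1,-1) for sz in (1,-1)])
W15 = np.array([2/9] + [1/9]*6 + [1/72]*8)
CS2 = 1.0/3.0

def grad_iso(chi, dt=1.0):
    """Isotropic central gradient built from the lattice stencil."""
    g = np.zeros((3,) + chi.shape)
    for ci, wi in zip(C15[1:], W15[1:]):
        shifted = np.roll(chi, shift=tuple(-ci), axis=(0, 1, 2))  # chi(x + c_i dt)
        for a in range(3):
            g[a] += wi * ci[a] * shifted
    return g / (CS2 * dt)

def laplace_iso(chi, dt=1.0):
    """Isotropic Laplacian built from the same stencil."""
    lap = np.zeros_like(chi)
    for ci, wi in zip(C15[1:], W15[1:]):
        lap += 2.0 * wi * (np.roll(chi, shift=tuple(-ci), axis=(0, 1, 2)) - chi)
    return lap / (CS2 * dt**2)
```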
[figure 1: snapshots of the rotating zalesak's sphere during one period: (a) the present mrt model; (b) the previous three-dimensional lb model.]
[figure 2: snapshots of the rotating zalesak's sphere during one period: (a) the present mrt model; (b) the present srt model.]
rotation has been widely used in the literature to test interface-tracking methods. here we consider the rotation of the zalesak's sphere, which has a slot of 16 lattice units in width, in a cubic domain. the sphere is initially centered in the domain, and its radius occupies 40 lattice units. the revolution of the sphere is driven by a constant-vorticity velocity field, with the angular velocity chosen such that the sphere completes one cycle every full period of time steps. in our simulations, the interface thickness and surface tension are fixed as 2.0 and 0.04, respectively; the relaxation matrix is set to fixed values, and the periodic boundary condition is applied at all boundaries. to examine the mobility effect, we introduce the dimensionless peclet number, defined as $\mathrm{pe} = u_0 l_0 / m$, where $l_0$ is the characteristic length and $u_0$ the characteristic velocity; different mobilities can be obtained by changing the value of $m$. figure 1 shows the revolution process of the sphere during one period at a fixed peclet number. it is seen from fig. 1 that the present model can precisely capture the interface shape over one period, and the sphere returns to its initial configuration at the end of the period, which is in accordance with the expected results.
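the driving field of this test is easy to set up; in the sketch below, the grid size and the rotation period are assumptions, as their values are elided in the extracted text:

```python
import numpy as np

# Rotating (constant-vorticity) velocity field for the Zalesak test.
N = 100                      # grid points per direction (assumed)
T = 10000                    # time steps per revolution (assumed)
omega = 2.0 * np.pi / T
x, y, z = np.meshgrid(np.arange(N), np.arange(N), np.arange(N), indexing='ij')
xc = yc = (N - 1) / 2.0
u = -omega * (y - yc)        # rigid rotation about the z-axis through the centre
v =  omega * (x - xc)
w = np.zeros_like(u)
# after T steps the slotted sphere should return to its initial state,
# so the error norm defined next directly measures the capturing accuracy
```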
for comparison, the results obtained by the previous three-dimensional lb model are also presented in fig. 1. it is observed that the previous model not only produces some obvious sawtooth patterns on the sphere surface, but also induces some unphysical disturbances around the computational domain. this implies that the present model can obtain a more accurate and stable interface. it has been mentioned above that the srt model can be regarded as a special case of the mrt model when all the relaxation parameters are equal to each other. owing to the additional adjustable relaxation factors, the mrt model should show more potential to achieve a better numerical stability than the srt model; to illustrate this point, we present a comparison between them using this case. figure 2 depicts snapshots of the rotating sphere during one period obtained by the mrt and srt models. it is clearly seen that the results of the srt model are unstable: the slot of the sphere is slightly distorted, and some extra jetsam is produced at the corners of the computational domain. in contrast, the present mrt model can capture the moving interface of the sphere correctly. to further quantify the accuracy of the interface capturing, the global relative error of the order parameter is introduced as $$e_\phi = \frac{\sum_{\mathbf{x}} \bigl|\phi(\mathbf{x}, t) - \phi^{*}(\mathbf{x}, 0)\bigr|}{\sum_{\mathbf{x}} \bigl|\phi^{*}(\mathbf{x}, 0)\bigr|}\,,$$ where $\phi(\mathbf{x}, t)$ is the value of the order parameter at position $\mathbf{x}$ and time $t$, and $\phi^{*}(\mathbf{x}, 0)$ is the exact solution at the initial time. we computed the relative errors generated by the previous lb model and the present mrt and srt (bgk) models at various peclet numbers, and show the results in fig. 3. as seen from this figure, the mrt model is more accurate than the srt model, which in turn is more accurate than the previous model, especially at large peclet numbers. the effect of the free model parameter is also examined, and it is found from fig. 3 that this parameter has almost no effect on the accuracy of the mrt model; without loss of generality, in the following simulations we keep it fixed.
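the error norm defined above translates directly into code:

```python
import numpy as np

def global_relative_error(phi, phi_exact):
    """E_phi = sum |phi - phi_exact| / sum |phi_exact|, summed over all nodes."""
    return np.abs(phi - phi_exact).sum() / np.abs(phi_exact).sum()

# usage after one full period of the Zalesak rotation:
# err = global_relative_error(phi_final, phi_initial)
```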
[figure 4: snapshots of the interface in the deformation field flow: (a) the present mrt model; (b) the present srt model; (c) the previous lb model; times shown from left to right over one period.]
the above test does not induce large changes of the interface. to show the capability of the present mrt model, in this subsection we consider the rather challenging problem of the deformation field flow, in which the interface undergoes a large deformation. the initial setup of this problem is described as follows: a sphere with a fixed radius is placed in a cubic computational domain, centered off the domain centre. the velocity field of this flow is strongly nonlinear, and we postmultiply it by a time-dependent factor to make the flow time-periodic, so that the deformation reverses halfway through the period. in our simulations, the period and the peclet number are prescribed; the other physical parameters and boundary conditions are set as those in the last test. in theory, the sphere will deform continuously until half of the period, after which the velocity field is reversed in time, and the sphere goes back and returns to its original position. figure 4 shows the evolution of the interface pattern obtained with the mrt model, the srt model, and the previous lb model. it can be clearly observed that the present mrt model gives an accurate prediction of the evolution of the interface: the interface attains its largest deformation at half of the period and moves back to the initial configuration at the end of the period. these behaviours of the interface conform to the theoretical results. in contrast, the results of the srt model and the previous three-dimensional lb model are unstable: near the time of maximal deformation, some jagged shapes are produced by the srt model in the vicinity of the interface, while the previous lb model performs worst in tracking the interface, with a massive amount of unphysical disturbances clearly observable in the system. we also perform simulations with different peclet numbers and compare the relative errors generated by these lb models; the results are presented in fig. 5. from this figure, one can find that the present mrt model is more accurate than the srt model and the previous lb model.
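the deformation velocity field itself is elided in the extracted text; a standard choice for this benchmark is leveque's three-dimensional field with a cosine time modulation, sketched below under that assumption:

```python
import numpy as np

# LeVeque's 3D deformation (vortex) field, modulated by cos(pi*t/T) so the
# flow reverses at t = T/2; whether the paper uses exactly this field is an
# assumption, as its formula is elided.
def deformation_velocity(N, t, T):
    s = (np.arange(N) + 0.5) / N                     # unit-cube coordinates
    x, y, z = np.meshgrid(s, s, s, indexing='ij')
    g = np.cos(np.pi * t / T)                        # time modulation
    u = 2 * np.sin(np.pi*x)**2 * np.sin(2*np.pi*y) * np.sin(2*np.pi*z) * g
    v = -np.sin(2*np.pi*x) * np.sin(np.pi*y)**2 * np.sin(2*np.pi*z) * g
    w = -np.sin(2*np.pi*x) * np.sin(2*np.pi*y) * np.sin(np.pi*z)**2 * g
    return u, v, w   # divergence-free by construction
```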
at last, we simulate the benchmark problem of the rayleigh-taylor instability (rti). rti is a classical and common instability phenomenon that occurs at a perturbed interface between two different fluids subject to gravity. rti studies are useful since the phenomenon has particular relevance and importance in fields including inertial confinement fusion and astrophysics. for this reason, rti has been a subject of intensive research over the past 60 years, using theoretical analysis, experimental methods, and numerical approaches. however, to the best of our knowledge, most previous numerical studies are limited to the two-dimensional case, and relatively little attention has been paid to the three-dimensional rti; moreover, the reynolds numbers considered in previous simulations are in general small. in this subsection, we apply the present mrt model to study the three-dimensional rayleigh-taylor instability, and examine the effect of the reynolds number on the evolution of the interface.
[figure 6: evolution of the density contours of the three-dimensional rti: (a) re = 1024; (b) re = 4000.]
the physical problem considered here is a rectangular box with a fixed aspect ratio, where $w$ is the box width. the initial interface is located at the midplane, with an imposed square-mode perturbation of the form $\cos(2\pi x / w) + \cos(2\pi y / w)$ scaled by a small amplitude, and the initial order distribution is then set accordingly. the reynolds number (re) characterizing rti can be defined as $$\mathrm{re} = \frac{w\sqrt{g\,w}}{\nu}\,,$$ where $g$ is the gravitational acceleration and $\nu$ is the kinematic viscosity. in our simulations, we take the densities of the liquid and gas phases as 3.0 and 1.0, corresponding to an atwood number of 0.5; the peclet number defined in refs. [10,11] is fixed at 50.0; the relaxation factors are chosen to be unity except for the ones related to the kinematic viscosity. the periodic boundary conditions in the lateral directions and the no-slip boundary condition in the vertical direction are applied in our studies. figure 6(a) shows the evolution of the density contours in the immiscible rti at a reynolds number of 1024. it can be seen that, due to the gravity effect, the heavy fluid and the light one penetrate into each other at early times, forming the spike and the bubble, respectively. after that, the heavy fluid rolls up along the flank of the spike, and a mushroom-like structure appears (t = 3.0), which can be attributed to the kelvin-helmholtz instability providing the rolling motion of the interface. finally, the mushroom develops further and becomes much bigger.
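the parameter choices above can be turned into concrete lattice values as follows; the box width and the gravitational acceleration in lattice units are assumptions (the paper's values are elided), and the characteristic time is taken as the common choice $\sqrt{w/g}$:

```python
import numpy as np

# Deriving the simulation parameters for the RTI run.
W = 100                                      # lattice box width (assumed)
rho_l, rho_g = 3.0, 1.0
At = (rho_l - rho_g) / (rho_l + rho_g)       # Atwood number = 0.5
Re = 1024.0
g = 1.0e-4                                   # gravitational acceleration (assumed)
nu = W * np.sqrt(g * W) / Re                 # from Re = W*sqrt(g*W)/nu
t_c = np.sqrt(W / g)                         # characteristic time (common choice)
s_nu = 1.0 / (3.0 * nu + 0.5)                # from nu = c_s^2*(1/s - 1/2)*dt, c_s^2 = 1/3
print(f"At={At}, nu={nu:.4e}, s_nu={s_nu:.4f}, t_c={t_c:.1f}")
```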
to observe the evolution of the interface more clearly, we plot in fig. 7 the interface patterns at the diagonal vertical plane for the above two reynolds numbers. it is found that, for both re = 1024 and re = 4000, the development of the initial mode follows a pattern similar to that of two-dimensional simulations. subsequently, two pairs of counter-rotating vortices are formed at the spike tip and the saddle point (see time 3.0), which is significantly different from the two-dimensional results. the vortices grow with time for re = 1024, while they become unstable for re = 4000, resulting in the mixing of the two fluids in the vicinity of the spike tip. we also give a quantitative study of the reynolds number effect. figure 8 depicts the evolution of the positions of the bubble front and the spike tip obtained by the present mrt model together with the previous numerical results of ref. [15]. it is shown that the results obtained by our model agree well with those in ref. [15], which verifies the numerical accuracy of the present mrt model in dealing with complex interfacial flows. from fig. 8, one can also find that there is no evident difference in the trajectory of the bubble front at the two reynolds numbers, while the spike tip moves slightly faster at the larger reynolds number.
[figure 7: interface patterns at the diagonal vertical plane with different reynolds numbers: (a) re = 1024; (b) re = 4000; the corresponding times from left to right are t = 1.0, 2.0, 3.0, 4.0, where t is normalized by the characteristic time.]
[figure 8: time evolution of the positions of the bubble front and the spike tip at re = 1024 and re = 4000, respectively.]
in this work, an efficient three-dimensional lattice boltzmann model based on the mrt collision method is proposed for multiphase flow systems. this model is a straightforward extension of our previous two-dimensional model to three dimensions. the present model for the che utilizes only seven discrete velocities in three dimensions, while most previous lb models need at least fifteen discrete velocities; as a result, the computational efficiency of the present model can be greatly improved in simulating three-dimensional multiphase flows. in addition, the mrt collision model is adopted in the lb equations for both the che and the nses, which has a better stability than the srt model. two classical interface-capturing problems, the rotation of the zalesak's sphere and the deformation field flow, were used to test the model, and the results show that the present mrt model is more stable and accurate than the srt model and the previous lb model in tracking the interface. finally, the present mrt model is applied to study the three-dimensional rayleigh-taylor instability at various reynolds numbers. it is found that the numerical results at low reynolds numbers agree well with previous data, while the instability at a high reynolds number induces a more complex structure of the interface. this work is financially supported by the national natural science foundation of china (grant no. 11272132) and the fundamental research funds for the central universities (grant no. 2014ts065).
in this paper, an efficient three-dimensional lattice boltzmann (lb) model with a multiple-relaxation-time (mrt) collision operator is developed for the simulation of multiphase flows. this model is an extension of our previous two-dimensional model (h. liang, b. c. shi, z. l. guo, and z. h. chai, phys. rev. e 89, 053320 (2014)) to three dimensions, using the d3q7 (seven discrete velocities in three dimensions) lattice for the cahn-hilliard equation (che) and the d3q15 lattice for the navier-stokes equations (nses). due to the smaller number of lattice velocities used, the computational efficiency can be significantly improved in simulating real three-dimensional flows, and at the same time the present model can recover the che and nses correctly through the chapman-enskog procedure. we compare the present mrt model with the single-relaxation-time model and the previous three-dimensional lb model using two benchmark interface-tracking problems, and the numerical results show that the present mrt model achieves a significant improvement in the accuracy and stability of the interface capturing. the developed model is also able to deal with multiphase fluids with very low viscosities due to the use of the mrt collision model, which is demonstrated by the simulation of the classical rayleigh-taylor instability at various reynolds numbers. the maximum reynolds number considered in this work reaches 4000, which is larger than those of almost all previous simulations. it is found that the instability induces a more complex structure of the interface at a high reynolds number.
let $a$ be a self-adjoint, possibly unbounded, operator on a separable hilbert space $\mathcal{h}$ such that the spectrum of $a$ is separated into two disjoint components, that is, $$\operatorname{spec}(a) = \sigma \cup \sigma' \quad\text{with}\quad d := \operatorname{dist}(\sigma, \sigma') > 0\,.$$ let $v$ be a bounded self-adjoint operator on $\mathcal{h}$. it is well known (see, e.g., theorem v.4.10 in ) that the spectrum of the perturbed self-adjoint operator $b = a + v$ is confined in the closed $\|v\|$-neighbourhood of the spectrum of the unperturbed operator $a$, that is, $$\operatorname{spec}(b) \subset \overline{\mathcal{o}_{\|v\|}\bigl(\operatorname{spec}(a)\bigr)}\,,$$ where $\mathcal{o}_{\varepsilon}(\cdot)$ denotes the open $\varepsilon$-neighbourhood of the corresponding set. in particular, if $\|v\| < \frac{d}{2}$, then the spectrum of the operator $b$ is likewise separated into two disjoint components $\omega$ and $\omega'$, where $$\omega = \operatorname{spec}(b) \cap \mathcal{o}_{d/2}(\sigma) \quad\text{and}\quad \omega' = \operatorname{spec}(b) \cap \mathcal{o}_{d/2}(\sigma')\,.$$ therefore, under this condition, the two components of the spectrum of $b$ can be interpreted as perturbations of the corresponding original spectral components $\sigma$ and $\sigma'$ of $a$. clearly, the condition is sharp in the sense that if $\|v\| \geq \frac{d}{2}$, the spectrum of the perturbed operator may not have separated components at all. the effect of the additive perturbation $v$ on the spectral subspaces for $a$ is studied in terms of the corresponding spectral projections. let $p$ and $q$ denote the spectral projections for $a$ and $b$ associated with the borel sets $\sigma$ and $\mathcal{o}_{d/2}(\sigma)$, respectively. it is well known that $\|p - q\| \leq 1$, since the corresponding inequality holds for every difference of orthogonal projections in $\mathcal{h}$ (see, e.g., section 34 in ). moreover, if $\|p - q\| < 1$, then the spectral projections $p$ and $q$ are unitarily equivalent (see, e.g., theorem i.6.32 in ). in this sense, if the inequality $\|p - q\| < 1$ holds, the spectral subspace $\operatorname{ran} q$ can be understood as a rotation of the unperturbed spectral subspace $\operatorname{ran} p$. the quantity $$\theta := \arcsin\bigl(\|p - q\|\bigr)$$ serves as a measure for this rotation and is called the _maximal angle_ between the spectral subspaces $\operatorname{ran} p$ and $\operatorname{ran} q$. a short survey on the concept of the maximal angle between closed subspaces of a hilbert space can be found in section 2 of ; see also theorem 2.2 in , section 2 of , and references therein. it is a natural question whether the bound $\|v\| < \frac{d}{2}$ is sufficient for the inequality $\|p - q\| < 1$ to hold, or if one has to impose a stronger bound on the norm of the perturbation in order to ensure $\|p - q\| < 1$. basically, the following two problems arise: 1. what is the best possible constant $c_{\mathrm{crit}} \in \bigl(0, \frac{1}{2}\bigr]$ such that $\|v\| < c_{\mathrm{crit}}\, d$ implies $\|p - q\| < 1$? 2. given $x < c_{\mathrm{crit}}$, what is the best possible bound on the maximal angle $\theta$ under the hypothesis $\|v\| \leq x\, d$? the main result of this paper states that $\theta \leq m\bigl(\frac{\|v\|}{d}\bigr)$, where the function $m$ on $\bigl[0, c_{\mathrm{crit}}\bigr]$ is given by $$m(x) = \begin{cases} \frac{1}{2}\arcsin(\pi x) & \text{for}\quad 0 \leq x \leq \frac{4}{\pi^2+4}\,,\\[0.15cm] \arcsin\Bigl(\sqrt{\frac{2\pi^2 x - 4}{\pi^2-4}}\,\Bigr) & \text{for}\quad \frac{4}{\pi^2+4} < x < 4\,\frac{\pi^2-2}{\pi^4}\,,\\[0.15cm] \arcsin\Bigl(\frac{\pi}{2}\bigl(1-\sqrt{1-2x}\,\bigr)\Bigr) & \text{for}\quad 4\,\frac{\pi^2-2}{\pi^4} \leq x \leq \kappa\,,\\[0.15cm] \frac{3}{2}\arcsin\Bigl(\frac{\pi}{2}\bigl(1-\sqrt[3]{1-2x}\,\bigr)\Bigr) & \text{for}\quad \kappa < x \leq c_{\mathrm{crit}}\,. \end{cases}$$ here, $\kappa$ is the unique solution to the equation $$\arcsin\Bigl(\frac{\pi}{2}\bigl(1-\sqrt{1-2\kappa}\,\bigr)\Bigr) = \frac{3}{2}\arcsin\Bigl(\frac{\pi}{2}\bigl(1-\sqrt[3]{1-2\kappa}\,\bigr)\Bigr)$$ in the interval $\bigl(4\,\frac{\pi^2-2}{\pi^4}, \frac{1}{2}\bigr)$, and the function $m$ is continuous on $\bigl[0, c_{\mathrm{crit}}\bigr]$ and continuously differentiable on the interior of its domain. numerical calculations give the estimate $c_{\mathrm{crit}} \approx 0.4548$. the estimate in theorem [thm:mainresult] remains valid if the constant $\kappa$ in the definition of the function $m$ is replaced by any other constant within a certain interval, see remark [rem:kapparepl] below. however, the particular choice of $\kappa$ ensures that the function $m$ is continuous and as small as possible. in particular, the resulting bounds improve upon the previously known ones, see remark [rem:estoptimality] below.
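for illustration, the bound can be implemented directly. in the sketch below, the first branch $\frac{1}{2}\arcsin(\pi x)$ is reconstructed from the inverse function $t$ given later in the text and is an assumption of this sketch, while $\kappa$ and $c_{\mathrm{crit}}$ are obtained numerically from the branch-matching conditions:

```python
import numpy as np

PI = np.pi

def branch2(x): return np.arcsin(np.sqrt((2*PI**2*x - 4) / (PI**2 - 4)))
def branch3(x): return np.arcsin(0.5*PI*(1 - np.sqrt(1 - 2*x)))
def branch4(x): return 1.5*np.arcsin(0.5*PI*(1 - np.cbrt(1 - 2*x)))

def bisect(f, a, b, it=200):
    for _ in range(it):
        m = 0.5*(a + b)
        a, b = (m, b) if f(a)*f(m) > 0 else (a, m)
    return 0.5*(a + b)

# kappa: intersection of the two upper branches; c_crit: where M reaches pi/2
kappa  = bisect(lambda x: branch3(x) - branch4(x), 4*(PI**2 - 2)/PI**4, 0.43)
c_crit = bisect(lambda x: branch4(x) - 0.5*PI, kappa, 0.476)

def M(x):
    if x <= 4/(PI**2 + 4):        return 0.5*np.arcsin(PI*x)
    if x <  4*(PI**2 - 2)/PI**4:  return branch2(x)
    if x <= kappa:                return branch3(x)
    return branch4(x)             # valid up to c_crit

print(f"kappa = {kappa:.4f}, c_crit = {c_crit:.4f}")  # c_crit close to 0.4548
```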
from theorem [thm:mainresult] we immediately deduce that the resulting constants are the best respective bounds for the two problems (i) and (ii) known so far. the paper is organized as follows: in section [sec:optprob], based on the triangle inequality for the maximal angle and a suitable a priori rotation bound for small perturbations (see proposition [prop:genrotbound]), we formulate a constrained optimization problem whose solution provides an estimating function for the maximal angle between the corresponding spectral subspaces, see definition [def:optprob], proposition [prop:mainestimate], and theorem [thm:soloptprob]. in this way, the approach by albeverio and motovilov is optimized and, in particular, a proof of theorem [thm:mainresult] is obtained. the explicit solution to the optimization problem is given in theorem [thm:soloptprob], which is proved in section [sec:soloptprob]; the technique used there involves variational methods and may also be useful for solving optimization problems of a similar structure. finally, appendix [app:sec:inequalities] is devoted to some elementary inequalities used in section [sec:soloptprob]. in this section, we formulate a constrained optimization problem whose solution provides an estimate on the maximal angle between the spectral subspaces associated with isolated parts of the spectrum of the corresponding perturbed and unperturbed operators, respectively. we make the following notational setup. [app:hyphyp] let $a$ be as in theorem [thm:mainresult], and let $v$ be a bounded self-adjoint operator on the hilbert space $\mathcal{h}$. for $t \in [0,1]$, introduce $b_t = a + t v$, and denote by $q_t$ the spectral projection for $b_t$ associated with the open $\frac{d}{2}$-neighbourhood of $\sigma$. under hypothesis [app:hyphyp], one has $\|t v\| < \frac{d}{2}$ for $t \in [0,1]$. taking into account the inclusion of the perturbed spectrum in the neighbourhood of the unperturbed one, the spectrum of each $b_t$ is likewise separated into two disjoint components. in particular, the mapping $t \mapsto q_t$ is norm continuous, see, e.g., theorem 3.5 in ; cf. also the forthcoming estimate. for arbitrary $0 \leq s \leq t \leq 1$, we can consider $b_t$ as a perturbation of $b_s$. taking into account the a priori bound, we then observe that the spectral components remain separated; furthermore, it follows from this and the inclusion that the relevant spectral component of $b_t$ is exactly the part of $\operatorname{spec}(b_t)$ that is contained in the open $\frac{d}{2}$-neighbourhood of $\sigma$. now let $n \in \mathbb{n}$ be arbitrary, and let $0 = t_0 < t_1 < \cdots < t_n = 1$ be a finite partition of the interval $[0,1]$.
in particular, if partitions with arbitrarily small mesh size are considered, then, as a result of the norm continuity, the norm of each corresponding projector difference is arbitrarily small as well. at the same time, the corresponding riemann sums are arbitrarily close to the integral. once the a priori bound has been generalized to the case where the operator $v$ is allowed to be unbounded, this argument is an easy and straightforward way to prove the known rotation bound. albeverio and motovilov demonstrated that a stronger result can be obtained from this argument: they considered a specific finite partition of the interval $[0,1]$ whose step sizes additionally satisfy a suitable condition. in this case, it follows from the preceding estimates and proposition [prop:genrotbound], along with a specific choice of the partition, that for every fixed parameter the right-hand side of the resulting inequality is minimized. an equivalent and more convenient reformulation of this approach is to maximize the parameter in the estimate over all possible choices of the partition parameters for which the right-hand side of the estimate takes a fixed value. obviously, we can generalize the estimate to the case where the finite sequence of parameters is allowed to be just increasing and not necessarily strictly increasing. altogether, this motivates the following considerations. [def:params] for $x \in \bigl[0, \frac{1}{2}\bigr]$, define the set of admissible finite parameter sequences, and associate with each such sequence the increasing sequence generated by the recursion below; every finite partition of the interval $[0,1]$ can be recovered from such a sequence. [lem:seq] (a) for every $x$, the mapping $\bigl[0, \frac{1}{2}\bigr] \ni t \mapsto t + x(1-2t)$ is increasing; (b) the sequence generated by the recursion is increasing and eventually constant. in this respect, the approach to optimize the parameter with a fixed right-hand side can now be formalized in the following way. [def:optprob] let $\mathcal{r}$ denote the (non-linear) operator that maps every admissible sequence to the corresponding increasing and eventually constant sequence given by the recursion. moreover, for a continuous and strictly increasing mapping on $\bigl[0, \frac{\pi}{2}\bigr]$, define the objective and the admissible set accordingly; for every fixed $\theta \in \bigl(0, \frac{\pi}{2}\bigr]$, we have to maximize the objective over the admissible set. this constrained optimization problem plays the central role in the approach presented in this work. the following proposition shows how this optimization problem is related to the problem of estimating the maximal angle between the corresponding spectral subspaces. [prop:mainestimate] assume hypothesis [app:hyphyp], and let $\bigl[0, \frac{\pi}{2}\bigr] \ni \theta \mapsto s(\theta) \in \bigl[0, s\bigl(\frac{\pi}{2}\bigr)\bigr] \subset \bigl[0, \frac{1}{2}\bigr]$ be continuous and strictly increasing; then the maximal angle between the corresponding spectral subspaces is bounded in terms of the inverse of this mapping. the mapping $\theta \mapsto t(\theta)$ considered below is continuous and strictly increasing; it therefore satisfies the hypotheses of proposition [prop:mainestimate]. in this respect, it remains to compute $t(\theta)$ for $\theta \in \bigl(0, \frac{\pi}{2}\bigr]$. [thm:soloptprob] there is a unique $\vartheta \in \bigl(\arcsin\bigl(\frac{2}{\pi}\bigr), \frac{\pi}{2}\bigr)$ at which the two upper branches below coincide, and the quantity $t(\theta)$ has the representation $$t(\theta) = \begin{cases} \frac{1}{\pi}\sin(2\theta) & \text{for}\quad 0 < \theta \leq \arctan\bigl(\frac{2}{\pi}\bigr)\,,\\[0.1cm] \frac{2}{\pi^2} + \frac{\pi^2-4}{2\pi^2}\sin^2\theta & \text{for}\quad \arctan\bigl(\frac{2}{\pi}\bigr) < \theta < \arcsin\bigl(\frac{2}{\pi}\bigr)\,,\\[0.1cm] \frac{1}{2} - \frac{1}{2}\bigl(1 - \frac{2}{\pi}\sin\theta\bigr)^2 & \text{for}\quad \arcsin\bigl(\frac{2}{\pi}\bigr) \leq \theta \leq \vartheta\,,\\[0.1cm] \frac{1}{2} - \frac{1}{2}\bigl(1 - \frac{2}{\pi}\sin\bigl(\frac{2\theta}{3}\bigr)\bigr)^3 & \text{for}\quad \vartheta < \theta \leq \frac{\pi}{2}\,. \end{cases}$$ the mapping $\bigl(0, \frac{\pi}{2}\bigr] \ni \theta \mapsto t(\theta)$ is strictly increasing, continuous, and continuously differentiable. theorem [thm:mainresult] is now a straightforward consequence of proposition [prop:mainestimate] and theorem [thm:soloptprob].
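before turning to the proof, a quick numerical consistency check is possible: under the same reconstruction assumption as in the earlier sketch (the first branch of $t$ is reconstructed, not quoted), each branch of the representation of $t$ is inverted exactly by the corresponding branch of the bound $m$ from theorem [thm:mainresult]:

```python
import numpy as np

# One sample angle per branch of t, paired with the matching branch of M;
# the exact inverse relation holds analytically, so the residual is at
# machine-precision level.
PI = np.pi
checks = [
    (0.3, lambda th: np.sin(2*th)/PI,
          lambda x: 0.5*np.arcsin(PI*x)),
    (0.6, lambda th: 2/PI**2 + (PI**2-4)/(2*PI**2)*np.sin(th)**2,
          lambda x: np.arcsin(np.sqrt((2*PI**2*x - 4)/(PI**2 - 4)))),
    (0.9, lambda th: 0.5 - 0.5*(1 - (2/PI)*np.sin(th))**2,
          lambda x: np.arcsin(0.5*PI*(1 - np.sqrt(1 - 2*x)))),
    (1.4, lambda th: 0.5 - 0.5*(1 - (2/PI)*np.sin(2*th/3))**3,
          lambda x: 1.5*np.arcsin(0.5*PI*(1 - np.cbrt(1 - 2*x)))),
]
for th, t_branch, M_branch in checks:
    assert abs(M_branch(t_branch(th)) - th) < 1e-12
print("each branch of M inverts the corresponding branch of t")
```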
according to theorem [thm:soloptprob], the mapping $\theta \mapsto t(\theta)$ is continuous and strictly increasing; let $m$ denote its inverse, mapping $\bigl[0, t\bigl(\frac{\pi}{2}\bigr)\bigr]$ onto $\bigl[0, \frac{\pi}{2}\bigr]$. furthermore, the function $m$ is continuously differentiable on the interior of its domain since the mapping $\theta \mapsto t(\theta)$ is continuously differentiable. let $v$ be a bounded self-adjoint operator on $\mathcal{h}$ satisfying $\|v\| \leq t\bigl(\frac{\pi}{2}\bigr)\, d$. the case $v = 0$ is obvious, so assume that $v \neq 0$; then $a$ and $v$ satisfy hypothesis [app:hyphyp]. applying proposition [prop:mainestimate] to the mapping $t$ finally gives the claimed bound, which completes the proof. [rem:kapparepl] numerical evaluations give approximate values for the relevant interval boundaries. the estimate remains valid if the constant $\kappa$ is replaced by any other constant within this interval. this can be seen by applying proposition [prop:mainestimate] to each of the two mappings obtained by extending the respective branches of $t$; these mappings indeed satisfy the hypotheses of proposition [prop:mainestimate]: both are obviously continuous and strictly increasing and, by particular choices of the parameters, it is easy to see from the considerations in section [sec:soloptprob] that they are less than or equal to $t$, see the equation below. the statement of theorem [thm:soloptprob] actually goes beyond that of theorem [thm:mainresult]. as a matter of fact, instead of equality in the representation of $t$, it would be sufficient for the proof of theorem [thm:mainresult] to have that the right-hand side of the representation is just less than or equal to $t$. this, in turn, is rather easy to establish by particular choices of the parameters, see lemma [lem:critpoints] and the proof of lemma [lem:twoparams] below. however, theorem [thm:soloptprob] states that the right-hand side provides an exact representation for $t$, and most of the considerations in section [sec:soloptprob] are required to show this stronger result. as a consequence, the bound from theorem [thm:mainresult] is optimal within the framework of the approach based on the iterated estimate. in fact, the following observation shows that a bound substantially stronger than the one from proposition [prop:genrotbound] is required, at least for small perturbations, in order to improve on theorem [thm:mainresult]: one can modify the approach by replacing the corresponding term and relaxing the constraint accordingly; yet, it follows from theorem [thm:soloptprob] that the corresponding optimization procedure leads to exactly the same result. this can be seen from the fact that each of the modified objectives is of the form of the right-hand side of the representation (cf. the computation of $t_1$ in section [sec:soloptprob] below), so that we are actually dealing with essentially the same optimization problem. in this sense, the function $t$ is a fixed point of the approach presented here. we close this section with a comparison of theorem [thm:mainresult] with the strongest previously known result by albeverio and motovilov. [rem:estoptimality] one has equality between $m$ and the estimating function of albeverio and motovilov for small values of the argument, and the strict inequality holds for all larger admissible values; since the mappings involved are strictly increasing, this implies the claimed improvement. we split the proof of theorem [thm:soloptprob] into several steps. we first reduce the problem of computing $t$ to the problem of solving suitable finite-dimensional constrained optimization problems. the corresponding critical points are then characterized in lemma [lem:critpoints] using lagrange multipliers.
the crucial tool to reduce the set of relevant critical points is provided by lemma [lem:paramsubst]. finally, the finite-dimensional optimization problems are solved in lemmas [lem:twoparams], [lem:threeparams], and [prop:soloptprob]. throughout this section, we make use of the notations introduced in definitions [def:params] and [def:optprob]. in addition, we fix the following notation for the admissible sets and objective values associated with a fixed number of parameters. since each admissible set is compact, the corresponding supremum is attained. moreover, we observe that the optimal values are monotone in the number of parameters; in fact, we show below that they stabilize after finitely many steps, see lemma [prop:soloptprob]. let $\theta$ be arbitrary and let an admissible sequence be given. it follows from part (b) of lemma [lem:seq] that the associated sequence generated by the recursion is increasing and eventually constant; since the objective depends on the parameters only through this limit, we obtain an explicit representation of the objective. an immediate conclusion of this representation is the following statement. [lem:parampermut] the value of the objective does not depend on the order of the entries of the parameter sequence. another implication of the representation is the fact that the objective can be considered as a continuous function of the parameters. since the admissible set is compact as a closed bounded subset of an $(n+1)$-dimensional space, we deduce that the supremum can be written as a maximum. hence, the optimal value is determined by a finite-dimensional constrained optimization problem, which can be studied by use of lagrange multipliers. taking into account the definition of the admissible set, it follows from this and the representation that there is some point at which the maximum is attained. [lem:critpoints] let a critical point of the constrained problem with multiplier $\lambda_0 \neq 0$ be given, and assume that the constraint is active. then either all non-trivial entries of the parameter sequence coincide, in which case inserting this into the representation yields the corresponding explicit equation, or the entries take exactly two distinct values; in the latter case, the two values satisfy the lagrange conditions, in which the multiplier does not depend on the index of the entry. indeed, if all entries coincide, the claim follows directly. now assume that not all entries coincide. a straightforward calculation shows that $x = \frac{2}{\pi^2}$ is the only critical point of the mapping $$\Bigl[0, \frac{1}{\pi}\Bigr] \ni x \mapsto \frac{\sqrt{1-\pi^2 x^2}}{1-2x}\,,$$ cf. figure [fig:functionplot]. the image of this point is $\frac{\pi}{\sqrt{\pi^2-4}}$. moreover, the endpoints $0$ and $\frac{1}{\pi}$ are mapped to $1$ and $0$, respectively. in particular, every value in the interval $\bigl(1, \frac{\pi}{\sqrt{\pi^2-4}}\bigr)$ has exactly two preimages under the mapping, and all the other values in the range have exactly one. [lem:paramsubst] let an optimal sequence for the problem with parameter $\theta$ be given; then every truncation of this sequence is optimal for the correspondingly reduced problem. suppose this were not the case; the trivial case obviously agrees with the hypothesis. let a counterexample be given and denote the corresponding truncated quantities accordingly. it follows from part (b) of lemma [lem:seq] that the associated sequences can be compared.
in particular , one has since .assume that , and let with .denote and .again by part ( b ) of lemma [ lem : seq ] , one has and .taking into account part ( a ) of lemma [ lem : seq ] and the definition of the operator , one obtains that iterating this estimate eventually gives , which contradicts the case from above .thus , as claimed .lemma [ lem : paramsubst ] states that if a sequence solves the optimization problem for , then every truncation of solves the corresponding reduced optimization problem .this allows to exclude many sequences in from the considerations once the optimization problem is understood for small .the number of parameters in can thereby be reduced considerably .the following lemma demonstrates this technique .it implies that the condition in lemma [ lem : critpoints ] is always satisfied except for one single case , which can be treated separately .[ lem : lambda_0 ] for and ] and .we start with the case .[ lem : twoparams ] the quantity has the representation \frac{2}{\pi^2}+\frac{\pi^2 - 4}{2\pi^2}\sin^2\theta & \text { for } \quad\arctan\bigl(\frac{2}{\pi}\bigr)<\theta<\arcsin\bigl(\frac{2}{\pi}\bigr)\,,\\[0.1 cm ] \frac{1}{2}-\frac{1}{2}\bigl(1-\frac{2}{\pi}\sin\theta\bigr)^2 & \text { for } \quad \arcsin\bigl(\frac{2}{\pi}\bigr)\le\theta\le\frac{\pi}{2}\ , .\end{cases}\ ] ] in particular , if and with , then the strict inequality holds .the mapping \ni\theta\mapsto t_1(\theta) ] , and continuously differentiable on .since , the representation is obviously correct for . for one has , so that by representation. this also agrees with the claim .now let be arbitrary .obviously , one has if , and if .hence , and if . by lemmas [ lem : parampermut ] , [ lem : critpoints ] , and [ lem : lambda_0 ]there are only two sequences in that need to be considered in order to compute .one of them is given by with .for this sequence , representation yields the other sequence in that needs to be considered is with and satisfying and where it turns out shortly that this sequence exists if and only if . using representation and the relations in , one obtains the objective is to rewrite the right - hand side of in terms of .it follows from and the relations and that taking into account that , equation can be rewritten as in turn , this gives that is , we show that the second case in does not occur . since , by equation one has , which implies that , combining relations and , can be expressed in terms of alone .hence , by equation the quantity can be written as a continuous function of the sole variable . taking the limit in equationthen implies that and , therefore , .this yields for every by continuity , that is , the sequence can exist only if .taking into account that satisfies , it now follows from that the sequence exists if and only if and , in this case , one has equations and finally gives for .as a result of lemmas [ lem : parampermut ] , [ lem : critpoints ] , and [ lem : lambda_0 ] , the quantities , , and are the only possible values for , and we have to determine which of them is the greatest . the easiest case is since then is the only possibility for .the quantity is relevant only if . in this case, it follows from parts ( [ app : lem1:b ] ) and ( [ app : lem1:c ] ) of lemma [ app1:lem1 ] that gives the greatest value of the three possibilities and , hence , is the correct term for here . for , by part ( [ app : lem1:d ] ) of lemma [ app1:lem1 ] the quantity is greater than .therefore , is given by in this case .finally , consider the case . 
since , it follows from part ( [ app : lem1:e ] ) of lemma [ app1:lem1 ] that is greater than and , hence , coincides with .this completes the computation of for ] is continuously differentiable on each of the corresponding subintervals .it remains to prove that the mapping is continuous and continuously differentiable at the points and .taking into account that for , the continuity is straightforward to verify .the continuous differentiability follows from the relations and where the latter is due to this completes the proof .so far , lemma [ lem : paramsubst ] has been used only to obtain lemma [ lem : lambda_0 ] .its whole strength becomes apparent in connection with lemma [ lem : parampermut ] .this is demonstrated in the following corollary to lemma [ lem : twoparams ] , which states that in the sequences with do not need to be considered .[ cor : critpoints : ln ] in the case in lemma [ lem : critpoints ] one has .the case is obvious . for let with for some . in particular, one has , which implies that .hence , it follows from lemma [ lem : twoparams ] that by lemmas [ lem : parampermut ] and [ lem : paramsubst ] one concludes that this leaves as the only possibility in .we now turn to the computation of for ] the equation has a unique solution .moreover , the quantity has the representation \dfrac{1}{2}-\dfrac{1}{2}\biggl(1-\dfrac{2}{\pi}\sin\bigl(\dfrac{2\theta}{3}\bigr)\biggr)^3 & \text { for } \quad \vartheta<\theta\le\frac{\pi}{2}\ , . \end{cases}\ ] ] in particular , one has if , and the strict inequality holds for ] is strictly increasing , continuous on ] be arbitrary .it follows from lemmas [ lem : parampermut ] , [ lem : critpoints ] , and [ lem : lambda_0 ] and corollary [ cor : critpoints : ln ] that there are only two sequences in that need to be considered in order to compute .one of them is with . for this sequence representation yields other sequence in that needs to be considered is , where and and are given by and .using representation , one obtains according to lemma [ app : lem3 ] , this sequence can exist only if satisfies the two - sided estimate .however , if exists , combining lemma [ app : lem3 ] with equations and yields therefore , in order to compute for ] .moreover , taking into account lemma [ lem : twoparams ] , equation , and the inequality , it follows that if and only if .this proves the claimed representation for . by lemma[ lem : twoparams ] and the choice of it is obvious that the mapping \ni\theta\mapsto t_2(\theta) ] , and continuously differentiable on . in order to prove theorem [ thm : soloptprob ] , it remains to show that coincides with .[ prop : soloptprob ] for every ] be arbitrary . as a result of equation, it suffices to show that for all .let and let .the objective is to show that .first , assume that .we examine the two cases and . if , then . in this case, it follows from lemma [ lem : twoparams ] that with .hence , by lemma [ lem : paramsubst ] one has .if , then which is possible only if , that is , . in this case , one has .taking into account representation , it follows from lemma [ app : lem4 ] that so , one concludes that again .now , assume that satisfies . since, in particular , , lemma [ lem : threeparams ] implies that it follows from lemmas [ lem : parampermut ] and [ lem : paramsubst ] that that is , once again . 
hence , by lemmas [ lem : parampermut ] , [ lem : critpoints ] , and [ lem : lambda_0 ] and corollary [ cor : critpoints : ln ] the inequality holds for all , which implies that .now the claim follows by induction .we close this section with the following observation , which , together with remark [ rem : estoptimality ] above , shows that the estimate from theorem [ thm : mainresult ] is indeed stronger than the previously known estimates .[ rem : amvss ] it follows from the previous considerations that where and \to\bigl[0,\frac{\pi}{2}\bigr] ] be arbitrary and set .define by and by using representation , a straightforward calculation shows that in both cases one has if ( cf .remark [ rem : kapparepl ] ) , that is , , then it follows from lemma [ lem : threeparams ] that .if , then the inequality holds since , in this case , is none of the critical points from lemma [ lem : critpoints ] .so , in either case one has .[ app1:lem1 ] the following inequalities hold : 1 . ,[app : lem1:a ] 2 . ,[app : lem1:b ] 3 . ,[app : lem1:c ] 4 . ,[app : lem1:d ] 5 . .[app : lem1:e ] one has which is strictly positive if and only if a straightforward analysis shows that the last inequality holds for , which proves ( [ app : lem1:a ] ) . for onehas and .thus , the inequality in ( b ) becomes an equality for .therefore , in order to show ( b ) , it suffices to show that the corresponding estimate holds for the derivatives of both sides of the inequality , that is , this inequality is equivalent to for , which , in turn , follows from .this implies ( [ app : lem1:b ] ) .the claim ( [ app : lem1:c ] ) follows immediately from finally , observe that for , the right - hand side of is positive if and only if is less than .this is the case if and only if , which proves ( [ app : lem1:d ] ) .the proof of claim ( [ app : lem1:e ] ) is analogous .[ app : lem2 ] there is a unique such that and define by obviously , the claim is equivalent to the existence of such that for and for .observe that for .in particular , is strictly decreasing on the interval ] , so that the inequality holds on ] and has exactly one root in the interval . combining this with equation ,one obtains that has a unique zero in and that changes its sign from minus to plus there . observing that and , this yields on , that is , is strictly decreasing on ] . since on ] , that is , is strictly increasing on ] . observing the inequality , one deduces that \,.\ ] ] moreover , one has . combining this with and the fact that is strictly increasing on ] .straightforward calculations show that so that taking into account that , that is , , this leads to in particular , is the only critical point of in the interval and changes its sign from plus to minus there .moreover , using and , one has , so that since , this proves the two - sided inequality .further calculations show that where the polynomial is strictly negative on the interval ] by the claim is equivalent to the inequality for .since and , hence , , one has .moreover , a numerical evaluation gives .therefore , in order to prove for , it suffices to show that has exactly one critical point in the interval and that takes its maximum there . 
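several of the remaining steps locate a unique critical point through a sign change of a derivative on an interval . a hedged numerical illustration of that pattern follows ; the function is a stand-in of ours , not one of the paper's ( garbled ) auxiliary polynomials .

```python
# sign-change argument in miniature: a strictly monotone derivative has a
# unique root on the interval, located here with brentq. stand-in function.
import numpy as np
from scipy.optimize import brentq

def dpsi(t):
    # derivative with exactly one sign change on [0, pi/2]:
    # dpsi(0) = 1 > 0, dpsi(pi/2) = -pi < 0, and dpsi is strictly decreasing
    return np.cos(t) - 2.0 * t

root = brentq(dpsi, 0.0, np.pi / 2)
print(root)  # unique critical point of the primitive, approx 0.4502
```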
using and taking into account that , one computes hence , for one obtains where \to{\mathbb{r}} ] and that changes its sign from minus to plus there .moreover , takes a positive value in this root of , so that on ] .it follows from that on , so that is strictly decreasing .in particular , one has on .a straightforward calculation yields where .the polynomial is positive and strictly decreasing on the interval ] and satisfies .one computes where the polynomial is strictly increasing on ] . since on ] , that is , is strictly increasing on $ ] . with and one deduces that has a unique zero in and that changes its sign from minus to plus there . since and , it follows that has a unique zero in , where it changes its sign from minus to plus . finally , observing that and , one concludes that for .the author is indebted to his ph.d .advisor vadim kostrykin for introducing him to this field of research and fruitful discussions .the author would also like to thank andr hnel for a helpful conversation . [10 ] n. i. akhiezer , i. m. glazman , _ theory of linear operators in hilbert space _ , dover publications , new york ( 1993 ) .s. albeverio , a. k. motovilov , _ sharpening the norm bound in the subspace perturbation theory _, complex anal .oper . theory * 7 * ( 2013 ) , 13891416 .l. g. brown , _ the rectifiable metric on the set of closed subspaces of hilbert space _ ,* 337 * ( 1993 ) , 279289 . c. davis , _ the rotation of eigenvectors by a perturbation _ , j. math .* 6 * ( 1963 ) , 159173 . c. davis , w. m. kahan , _ the rotation of eigenvectors by a perturbation ., siam j. numer .7 * ( 1970 ) , 146. t. kato , _ perturbation theory for linear operators _ , springer - verlag , berlin heidelberg ( 1966 ) .v. kostrykin , k. a. makarov , a. k. motovilov , _ on a subspace perturbation problem _ , proc .* 131 * ( 2003 ) , 34693476 .v. kostrykin , k. a. makarov , a. k. motovilov , _ existence and uniqueness of solutions to the operator riccati equation . a geometric approach _ ,contemp . math . *327 * , amer .( 2003 ) , 181198 .v. kostrykin , k. a. makarov , a. k. motovilov , _ perturbation of spectra and spectral subspaces _ ,* 359 * ( 2007 ) , 7789 .k. a. makarov , a. seelmann , _ metric properties of the set of orthogonal projections and their applications to operator perturbation theory _ , e - print arxiv:1007.1575 [ math.sp ] ( 2010 ) .a. seelmann , _ notes on the theorem _ , e - print arxiv:1310.2036 [ math.sp ] ( 2013 ) .
the problem of variation of spectral subspaces for linear self-adjoint operators under an additive bounded perturbation is considered . the aim is to find the best possible upper bound on the norm of the difference of two spectral projections associated with isolated parts of the spectrum of the perturbed and unperturbed operators . in the approach presented here , a constrained optimization problem on a specific set of parameters is formulated , whose solution yields an estimate on the arcsine of the norm of the difference of the corresponding spectral projections . the problem is solved explicitly . this optimizes the approach by albeverio and motovilov in [ complex anal . oper . theory * 7 * ( 2013 ) , 1389-1416 ] . in particular , the resulting estimate is stronger than the one obtained there .
in wireless communications , information is transmitted in the form of waves and space is considered the physical medium for information transfer . hence , as a physical process , waves propagate in space via line of sight or via multiple paths due to reflection , diffraction and scattering by objects present in the physical environment . like any other physical phenomenon , wave propagation is governed by the laws of physics . these laws determine the process itself as well as the amount of diversity waves carry along their path . the spatial diversity of multipath influences the amount of information that can be communicated through wave propagation ; thus , using the spatial diversity of multipath we can ensure better system performance , including capacity improvement , high transmission rates , improved bit error rates , etc . . in effect , the study of the spatial degrees of freedom of different multiple antenna systems ( i.e. , multi-user mimo systems , distributed mimo systems , mimo cognitive radio systems , etc . ) has gained renewed attention and has more recently been addressed by . this motivates us to study the fundamental limits that space imposes on the degrees of freedom of band limited signals observed over finite spatial and temporal windows . in this paper , our aim is to determine the upper limit to the degrees of freedom of signals available in a band limited multipath wavefield when the wavefield is observed in , or coupled to , a limited source-free region of space over a finite time window . we may assume that multiple antennas or sensors are located in the region of space to sample the observable multipath field for signal processing or communication purposes . we , however , aim to find an upper bound on the available degrees of freedom without explicitly considering a specific propagation condition , physical setup or application , and thus to show that the coupling of time and band limited multipath signals into a spatial region is fundamentally limited by a finite number of spatial modes . throughout the paper , we will frequently refer to the radius/size of the multipath observation region . our derived result has great significance in a wide range of applications , including ( i ) measuring the number of receive antennas required to sample a given region to maximize the performance gain , ( ii ) characterizing broadband beamforming techniques for next generation wireless communication to provide high quality video and audio , ( iii ) developing interference alignment schemes for mimo wireless networks , and ( iv ) characterizing the degrees of freedom of distributed multi-antenna communications for broadband transmissions . we review the degrees of freedom available in spatially diverse multipath fields in different contexts . earlier works focused on multipath fields that exhibit rich scattering , where there are independent fading paths between transmitter and receiver antenna elements . according to these works , the available degrees of freedom is the minimum of the numbers of transmit and receive antenna elements , and channel capacity can be improved remarkably by increasing the number of antenna elements . however , insufficient antenna spacing violates the assumption of independent fading and prevents channel capacity from increasing linearly with the degrees of freedom . the impact of fading correlation on spatially diverse multipath fields was studied by a large number of research works ( e.g.
, ) . afterwards , independent works provided the characterization of the spatial degrees of freedom in multi-antenna systems as a function of the area and geometry of the antenna arrays and the angular spread of the physical environment . in addition , estimated the degrees of freedom available in source-free narrowband multipath fields observed over a spatial window and showed that the available degrees of freedom scales with the spatial dimension in terms of wavelengths . in contrast , poon et al . and franceschetti applied antenna theory and slepian's theory of spectral concentration , respectively , to derive a fundamental limit on the degrees of freedom available in wideband multi-antenna systems for a given constraint on the area of the spatial region and the observation time , and defined the degrees of freedom as a product of the spatial degrees of freedom and the degrees of freedom of the wideband channel itself . however , for wideband transmissions space , time and frequency are strongly coupled : the available bandwidth and the observation time over space differ , respectively , from the actual bandwidth and observation time depending on the available spatial information , and the works of did not take this into account . in another approach , characterized multi-antenna systems in a wideband transmission regime and stated that in the case of wideband frequency transmission , space and time are strongly coupled . however , how information is conserved in space-time was left as an open and important problem . the analysis in this paper considers a wideband multipath wavefield observed within a limited source-free region of space over a finite time window . the signals observable within this wavefield are studied as solutions to the helmholtz wave equation , and they are encoded in an infinite but countable number of orthogonal waveforms or spatial modes . this mathematical framework is similarly used in . however , in comparison , our derived result is more accurate , since we have considered the effect of the available spatial information not only on the frequency bandwidth but also on the observation time . further , the degrees of freedom result provided in is derived by using a complex geometrical argument to extend the narrowband degrees of freedom result of to a broadband scenario , and results in a loose bound . further , it is unclear there , for different spatial modes , how the usable ( effective ) bandwidth varies from the given frequency bandwidth . in this work , on the contrary , the degrees of freedom result is derived in a simple manner . moreover , we clarify how ( and why ) , at each spatial mode , the observable signals are band limited within an effective frequency bandwidth rather than the given frequency bandwidth . in addition , we illustrate that beyond a certain spatial mode the effective bandwidth becomes zero , which , in turn , truncates the wavefield from its infinite representation to a finite number of spatial modes . afterwards , by counting the number of spatial modes required to represent any signal within the given multipath field , we derive an analytical expression to determine the degrees of freedom of the signal wavefields . we depict the strong coupling relation between space and time as well as between space and frequency in spatially diverse wideband multipath fields . we show that the effective observation time is fixed , independent of the spatial modes , different from the given observation time , and essentially related to the spatial dimension of the observable region ; a small numerical illustration of this point follows below .
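this space-time coupling admits a back-of-envelope check : a wave needs up to 2R/c of extra travel time to cross a region of radius R , so an observation window T over the region plausibly carries information over roughly T + 2R/c . we treat this expression as an assumption here , since the explicit formula is garbled in this copy ; the appendix argument via the support of the legendre kernel points the same way .

```python
# back-of-envelope effective observation time, assuming T_eff = T + 2R/c
c = 3.0e8                      # wave speed (m/s), electromagnetic case
T = 1.0e-8                     # observation window (s), illustrative
for R in (0.5, 2.0, 10.0):     # region radius (m), illustrative
    print(R, T + 2.0 * R / c)  # grows with the region, not with the mode
```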
for broadband transmissions , on the other hand , the observable signal at each spatial mode is band limited within an effective frequency bandwidth : the usable bandwidth at the lower spatial modes is equal to the given frequency bandwidth , but for the higher modes the usable bandwidth is less than the given frequency bandwidth . the coupling relations also indicate that for spatially diverse wideband multipath fields , the classical degrees of freedom result of the _ time-bandwidth product _ cannot be extended directly to the product of the spatial degrees of freedom and the _ time-bandwidth product _ as shown in ; rather , the degrees of freedom result should portray how time and band limited signals are coupled to a limited region of space . our derived degrees of freedom result evidently portrays the impact of the coupling relations on the available degrees of freedom in spatially diverse wideband multipath fields . we also show the effect of the acceptable signal to noise ratio ( snr ) on the available degrees of freedom of each spatial mode . the remainder of the paper is organized as follows . in section [ problem ] , the problem statement together with background on shannon's time-frequency degrees of freedom and the eigenbasis expansion of the wavefield is discussed . in section [ results ] , we present our main results , while section [ plot ] provides a graphical analysis of our derived results . next , section [ insights ] elaborates the physical insights of the main results and briefly discusses the applications . we summarize the main contributions of this paper in section [ conclusion ] . in this paper , we consider a multipath field band limited to ] within a spatial window enclosed by a spherical region of radius . here , represents the mid-band frequency . any signal sampled or recorded within this spatial region can be expressed as a function of space and time whose spectrum lies within the frequency range and whose time function lies within the time interval . since it is not possible to confine any waveform in both time and frequency , we consider that the spectrum is confined entirely within the frequency range and that the time function is negligible outside the time interval . we now express the space-time signal as rcl ( * _ x_*,t ) = ^_-(*_x_*,)e^j t d where is the fourier transform of with respect to , * _ x _ * represents a position in 3d space , such that denotes the euclidean distance of * _ x _ * from the origin , which is the center of the region of interest , and . due to the band limitedness , is assumed to be zero outside the band ] and limited to the time interval ] and its spectrum is band limited to ] . as a result , the observable multipath field captures the information content of the time domain signal over a time interval .
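the band-limitation assumed above is easy to emulate numerically . the following sketch ( all values illustrative , not the paper's ) synthesizes a signal from tones inside [ f0 - w , f0 + w ] and confirms via an fft that essentially all of its energy sits in that band :

```python
# numerical illustration of a signal band limited to [f0 - W, f0 + W]
import numpy as np

f0, W = 1.0e9, 0.2e9                  # mid-band frequency, half-bandwidth (Hz)
fs = 8.0e9                            # sampling rate, well above 2 * (f0 + W)
t = np.arange(2048) / fs

rng = np.random.default_rng(0)
tones = rng.uniform(f0 - W, f0 + W, 32)          # frequencies inside the band
phases = rng.uniform(0.0, 2.0 * np.pi, 32)
u = np.cos(2.0 * np.pi * tones[:, None] * t + phases[:, None]).sum(axis=0)

spec = np.fft.rfft(u * np.hanning(t.size))       # window to limit leakage
f = np.fft.rfftfreq(t.size, 1.0 / fs)
out = (f < f0 - W) | (f > f0 + W)
print(np.abs(spec[out]).max() / np.abs(spec).max())  # small out-of-band residue
```

with the band-limitation in hand , the statement that the observable field captures the time domain signal over an enlarged time interval is formalized next .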
in the following theorem , we formalize this statement for the mode space - time wavefield generated by the mode time domain signal .[ observation time of the spatial modes ] given a multipath field observed over a spherical region of radius for a time interval that is encoded in a countable number of spatial modes , then it is possible to capture information about the underlying mode time domain signal over an effective time interval , this effective time interval is independent of the spatial mode index .proof of the theorem is provided in appendix [ a1 ] .observing the mode space - time wavefield over a time window ] .hence , the effective time interval is essentially related to the spatial observation region .this indicates the coupling relation between space and time .further , the effective time interval is fixed and independent of spatial modes . in the next subsection, we show the coupling relation between space and frequency .this relation truncates the expansion in to a finite number of spatial modes .the performance of wireless communication systems is highly determined by noise . ideally ,if the wireless communication systems are noiseless , it would be possible to measure signals with infinite precision and each spatial mode would have an effective bandwidth equal to the given frequency bandwidth , i.e. , from to .however , in practical systems , signals are perturbed by noise .hence , it is not possible to detect signals within the band of frequencies where the signal to noise ratio ( snr ) drops below a certain threshold .this threshold is dependent on the antenna/ sensor sensitivity or the robustness of the signal processing method to noise . to determine how noise affects the available bandwidth at each spatial mode ,let us assume that is the white gaussian noise on the surface of the spherical region ( at radius ) associated with the antenna/ sensor at the angular position .hence , from and , the space - frequency spectrum on the sphere is rcl [ field ] ( r , , ) = _ n=0^ _ m =- n^n _ nm ( ) j_n(r)y_nm()+ _ r ( , ) . in the following theorem and corollary, we characterize the white gaussian noise at the different modes .[ white gaussian noise in given a zero mean white gaussian noise with variance in represented by a random variable where , such that for any function the complex scalar rcl _ i _ ^2 _ r ( ) _ i^ ( ) d = _ r ( ) , _ i ( ) is also a zero mean gaussian random variable with variance .* eqn 8.1.35 ) considering to be the orthonormal basis functions , the spatial fourier coefficients for the noise is rcl [ noise ] _ nm ( ) = _ ^2 _ r ( , ) y_nm^ ( ) d. applying theorem , are also zero mean gaussian random variables with variance rcl e\{|_nm()|^2 } = _ 0 ^ 2 ( ) _ ^2 |y_nm^()|^2 d= _ 0 ^ 2 ( ) where the noise power is independent of the mode .further , since the noise is white gaussian , the noise power is the same at all frequencies , i.e. , rcl e\{|_nm()|^2 } = _ 0 ^ 2 .based on corollary , we can define the mode space - frequency spectrum at radius as rcl [ coefficient ] _nm(r , ) = _ nm ( ) j_n(r)+ _ nm ( ) and we assume that the noise and the signal are not dependent on each other . 
here , the mode space - frequency spectrum takes white gaussian noise into account .this white gaussian noise has the property that each spatial mode is perturbed independently of all the others .further , is the mode signal spectrum band limited over the range ] is encoded in a finite number of spatial modes where the effective frequency bandwidth of the mode space - frequency spectrum is rcl[bw ] w_n = 2w , & + f_0+w- f_n , & + 0 , & + here , is the lowest spatial mode beyond which the critical frequency , is the lowest spatial mode beyond which the critical frequency and rcl [ cf ] f_n = + ( ) with the threshold depicting the ability of the system to detect signals buried in noise .further , we consider that the power of the mode signal spectrum is finite and bounded for all modes , orders and frequencies , i.e. , rcl [ emax ] e\{|_nm()|^2}^2 , hence , the maximum snr of the signal spectrum for any mode is rcl [ k ] ( snr ) _proof of the theorem is given in appendix [ a2 ] .the effective frequency bandwidth of each spatial mode is related to the spatial observation region and varies from the given frequency bandwidth depending on the critical frequency .this portrays the strong coupling relation between space and frequency .further , for , the critical frequency is greater than the upper bound of the given frequency range . therefore , we can truncate the expansion in to a finite number of spatial modes as rcl [ expansion1 ] ( * _ x _ * , ) = _ n=0^n _ _ m =- n^n_ nm(r , ) y_nm ( ) . using , the upper bound for the spatial modes is rcl [ nmax ] n_= e ( f_0+w ) + ( ) where is the ceiling value , since by definition spatial modes are integers .we are now in a position to provide an upper bound to the available degrees of freedom of wideband signals observed over finite spatial and temporal windows . in order to do so ,it is useful to think of mode space - frequency spectrum in time domain , in which case we obtain rcl[s ] _nm(r , t)= _ _ n _ nm(r , ) e^jt d where is the inverse fourier transform of with respect to and the integration is taken over with ] for where is defined in .+ we expand over the frequency range using the fourier series expansion , similar to , as follows rcl [ s ] _nm(r,)= _ = -^c_nm(r ) e^-j where the fourier coefficients are proportional to the samples of and is the effective frequency of the mode defined in .the result is obtained from when .it illustrates that the samples of determine the coefficients in the fourier series expansion .therefore , analogous to shannon s work , we can reconstruct the mode space - time signal from its samples as follows rcl [ ftruncation ] _nm(r , t)&= & _= -^_nm(r,)e^j2w_0n(t- ) where is the mid band frequency of the mode .we obtain by substituting the fourier series in , applying and then exchanging integration and summation .+ hence , it is possible to reconstruct the space - time signal by summing the mode space - time signals for all possible values of and over all propagation directions , i.e. 
, rcl ( * _ x_*,t ) & = & _ n=0^n _ _ m =- n^n _ nm(r , t ) y_nm ( ) and substituting yields rcl [ stsignal ] ( * _ x_*,t ) & = & _ n=0^n _ _ m =- n^n_ = -^_nm(r,)e^j2w_0n(t- ) y_nm ( ) . observe that the spherical harmonics are orthogonal over the spherical region as shown in . further , considering rcl _ ( t ) = e^j2w_0n(t- ) , the functions are orthogonal over time . a proof of the orthogonality of the functions is provided in appendix [ a3 ] . therefore , following the same reasoning as shannon , discussed in section [ shannon ] , the minimum number of terms required in the sum that satisfies the constraints on observation region size , bandwidth and observation time window provides the available signal degrees of freedom within the given multipath field . here , can be truncated to ] over the time interval } \nonumber \\ & t&= d /f_0 , ~~\mbox{}\end{aligned}\ ] ] where , and are real numbers . furthermore , and represents the extreme broadband scenario . hence , can be rewritten as \end{aligned}\ ] ] where considering , and in fig . [ stf ] , the degrees of freedom in is plotted as a function of the radius of the spherical region and the fractional bandwidth . it is evident from the figure that for a given observation time window , there is a sub-quadratic growth in the available degrees of freedom with increasing region size and bandwidth . note that we consider a scenario with a small value of . increasing ( which is equivalent to minimizing the effect of noise ) , we can achieve higher signal degrees of freedom . [ fig . [ stf ] : degrees of freedom vs. radius and fractional bandwidth at a fixed observation time window ; the radius , fractional bandwidth and observation time are defined in . ] next , we portray the effect of the snr on the signal degrees of freedom considering different values of . it is evident from fig . [ rho ] that for a given observation time window , increasing the value of , we can obtain a growth in the degrees of freedom as a function of ( a ) the radius of the observable region and ( b ) the bandwidth , respectively . [ fig . [ rho ] : degrees of freedom at a fixed time window , ( a ) for a fixed bandwidth and ( b ) for a fixed radius of the region . ] we now present the signal degrees of freedom as a function of bandwidth at a fixed observation time . the parameter is the radius of the observable spherical region with . the results in fig . [ space ] ( a ) demonstrate that the degrees of freedom increases sub-quadratically as the radius of the observable region increases . on the contrary , it is clear from fig . [ space ] ( b ) that , considering a fixed bandwidth , by increasing the radius of the observable region it is possible to obtain a rapid non-linear growth in the degrees of freedom as a function of observation time . [ fig . [ space ] : degrees of freedom ( a ) for a fixed time window and ( b ) for a fixed bandwidth . ] note that the two scenarios mentioned above clearly indicate that we can obtain significantly higher signal degrees of freedom by increasing the radius of the observable region , irrespective of bandwidth or observation time . this significant growth is also observed if we increase the bandwidth but keep the radius of the region and the observation time constant . from the results , increasing the frequency or the radius leads to a sub-quadratic growth in the degrees of freedom .
on the other hand , increasing the observation time window or the snr does not provide such a significant increase in the degrees of freedom . in this section , we elucidate the physical insights of the main results and briefly discuss their applications . we can make the following comments based on theorem : * from a spatial diversity perspective , in shannon's proposed communication model , wideband signals are encoded in only one spatial mode or one channel over which information is transmitted . on the contrary , our proposed model contains spatially diverse wideband signals that are encoded in a finite number of spatial modes . as a result , intuitively , the available degrees of freedom of any wideband signal observed over finite spatial and temporal windows should be shannon's degrees of freedom result extended from one spatial mode to modes . our derived result clearly complies with this intuition . the number of modes is the number of independent channels available to receive information due to the availability of measurements over a spatial region . this means that each mode or channel has its own time-frequency degrees of freedom . spatial diversity , therefore , provides a number of independent channels over which time-frequency information can be transmitted . * the degrees of freedom result does not agree with the well-established result of evaluating the degrees of freedom of spatially diverse wideband signals as a product of spatial degrees of freedom and time-frequency degrees of freedom . even though space , time and frequency are separate entities in the propagation of waves , in spatially diverse wideband transmissions space and time as well as space and frequency are strongly coupled , and the results of fail to show those coupling relationships . on the contrary , our derived result takes the coupling relations into account . * shannon's work considers broadband transmission over a single channel and shows that the channel has _ ` time-bandwidth product + 1 ' _ degrees of freedom . on the contrary , in addition to broadband transmission , we take spatial diversity into account . therefore , in this work , we consider broadband transmission over a finite number of channels . our analysis indicates that each of these channels has _ ` effective time - effective bandwidth product + 1 ' _ degrees of freedom . this means that , considering spatial diversity , we can capture more information from broadband transmission . for higher modes , the effective ( usable ) bandwidth is less than the measured bandwidth , and so not all spatial modes can convey the same amount of information . therefore , from theorem , for modes above , the _ ` effective time - effective bandwidth product + 1 ' _ decreases as the mode increases . let us consider that the threshold is equivalent to the maximum snr of the mode signal spectrum , i.e. , . this means that signals below the frequency are submerged in noise and cannot be detected . as a result , from we obtain rcl[asymptotic ] d & & _ + _ + & & + _ which yields the following observations : * for non-spatially diverse multipath fields , in , reduces to 1 , reduces to and becomes zero . thus , non-spatially diverse multipath fields provide degrees of freedom . * for narrowband wavefields , in , reduces to , whereas both and become zero . hence , there are degrees of freedom available in narrowband wavefields operating at the mid-band frequency .
as a result , in terms of wavelengths ,degrees of freedom available in narrowband wavefields is where is the wavelength corresponding to the mid band frequency . *if any signal observable within a multipath field is representable with only one sample in time domain , then , by substituting in , we obtain + rcl [ t=0 ] d&&(e ( f_0 + w ) + 1)^2 + 4w^2 + & & + 2w . +this equation indicates that even when there is only one sample available in time domain , spatial diversity influences the amount of information that can be captured within the observable region .the derived result represents the degrees of freedom of signals observable within a broadband multipath field over finite spatial and time windows , assuming the signals are submerged in noise for frequencies less than the critical frequency and are not detectable .this result is consistent with shannon s time - frequency degrees of freedom result when we take sample at a single spatial point .further , is consistent with the degrees of result derived in ( * ? ? ?44 ) at wavelength when we consider narrowband frequency transmissions . the degrees of freedom result obtained in this paper can be used to provide insights and bounds in the following areas : * in the context of spatial broadband communications such as wireless mimo communications , this work addresses the fundamental question of how the spatial degrees of freedom is interrelated to the time - frequency degrees of freedom .the result provides insights into gains or losses of degrees of freedom in space and time - frequency analysis . * for broadband beamforming the degrees of freedomcharacterises the maximum resolution that can be obtained .the greater the degrees of freedom the higher the resolution can be obtained .this has particular importance in this area since we have more variables to work with to perform broadband beamforming .the performance of beamforming in wireless networks improves with the available degrees of freedom and has been shown in . in next generation of wireless communications capable of transmitting high quality video and audio ,array gain is obtained by using broadband beamforming which exploits the spatial degrees of freedom and the effective bandwidth of each of these spatial degrees .our work shows that as more spatial degrees are exploited for beamforming , for a receive antenna occupying a limited spatial region , the effective bandwidth of the higher spatial degrees or modes are less than the bandwidth and decreases with the mode index . * for broadband reception of signals by antennas placed within a given spatial region , initially linear growth in the degrees of freedomis obtained with increasing number of antennas .once the number of antennas is greater than , the increase in degrees of freedom reduces with each antenna added until number of antennas is equivalent to .after that no gain can be obtained .this is because the wavefield constraint results in correlations between channels when the number of antennas becomes too large . 
*interference alignment is a promising new area introduced in the last two decades .this seeks to solve the spectrum scarcity in wireless communications by utilizing the available degrees of freedom in space , time and frequency .however , in mimo wireless networks , interference alignment uses the parallel channels in space offered by spatial degrees of freedom for alignment .our derived results can be used to develop an interference alignment scheme for mimo wireless networks .our results show that optimal signal alignment needs to consider that not all spatial channels are equal and can place interference on the spatial channels that have the lower time - frequency degrees of freedom . hence, the interference channels should be placed in the spatial channels corresponding to the higher spatial modes .further , the degrees of freedom analysis of this work provides the maximum degrees of freedom that can be utilized in these broadband communications with interference . *recently , distributed mimo communications have seen an increase in importance due to the popularity of sensor and ad - hoc networks .distributed mimo includes all muti - user communication configurations where the communications input and outputs are distributed over different users .works of have studied the spatial degrees of freedom for these considering different channel conditions and showed performance gains .these works looked at only narrowband transmissions , however , practical wireless transmissions are performed over a bandwidth .considering this , our work shows the maximum degrees of freedom available over space , time and frequency for users in a limited spherical region cooperating to receive broadband information .also , we show how the time - frequency degrees of freedom is distributed over the spatial modes .this paper provides an upper bound to the degrees of freedom of any signal observed within a band limited multipath wireless field over finite spatial and temporal windows .this upper bound is obtained characterizing the multipath field as solution to helmholtz wave equation encoded in a finite number of spatial modes .the analysis of the work shows that the effective observation time is independent of spatial modes and related to the spatial dimension of the observable region .further , for broadband transmissions , at each spatial mode , the observable signals are band limited within an effective frequency bandwidth and depending on the mode , the effective bandwidth varies from the given frequency bandwidth .these findings imply that when both spatial diversity and broadband transmissions are taken in account , space and time as well as space and frequency can not be decoupled . + the degrees of freedom result derived in this work takes the coupling relations into account and portrays the interrelation between spatial degrees of freedom and time - frequency degrees of freedom . 
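as a rough companion to these observations , the bookkeeping behind the degrees of freedom count can be sketched numerically . everything marked assumed below is an assumption of ours , since the explicit formulas are garbled in this copy : we take the effective time as T + 2R/c , a critical frequency growing linearly in the mode index , and a per-mode shannon count of effective time times effective bandwidth plus one , weighted by the 2n + 1 orders of mode n .

```python
# hedged sketch of the per-mode degrees-of-freedom bookkeeping.
# ASSUMED forms (ours): f_n = n*c/(pi*e*R), T_eff = T + 2R/c,
# per-mode count (2n+1) * (T_eff * W_n + 1), truncated once W_n hits zero.
import numpy as np

def dof(R, T, f0, W, c=3.0e8):
    total, n = 0.0, 0
    T_eff = T + 2.0 * R / c
    while True:
        f_n = n * c / (np.pi * np.e * R)             # assumed critical frequency
        W_n = min(2.0 * W, max(0.0, f0 + W - max(f_n, f0 - W)))
        if W_n == 0.0 and n > 0:
            return total, n                          # higher modes carry nothing
        total += (2 * n + 1) * (T_eff * W_n + 1.0)
        n += 1

for R in (0.5, 1.0, 2.0):                            # metres, illustrative
    print(R, dof(R, T=1.0e-6, f0=1.0e9, W=0.2e9))
```

even with these stand-in forms , the qualitative behaviour discussed above is visible : the count grows quickly with the region size , and the higher modes contribute over ever narrower effective bands .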
from a spatial diversity perspective , shannon's proposed communication model considers a wideband signal encoded in only one spatial mode or one channel over which information is transmitted ; the available degrees of freedom of a spatially diverse wideband signal encoded in a finite number of spatial modes is shannon's degrees of freedom result extended from one spatial mode to modes . this means that each mode or channel has its own time-frequency degrees of freedom . we also show that , analogous to time , space can be treated as an information bearing object , since the degrees of freedom increases sub-quadratically as the size of the observable spatial region increases , irrespective of bandwidth or time window . further , the derived result portrays how the degrees of freedom is affected by the acceptable snr at each spatial mode . let be the inverse fourier transform of ; then the inverse fourier transform of is rcl[ot ] _nm(r , t)= a_nm(t)p_n ( ) where the time domain coefficients are the inverse fourier transform of and the legendre polynomials are the inverse fourier transform of . in , we can consider as the mode time domain signal . therefore , the mode space-time signal is a convolution of the mode time domain signal with the legendre polynomial . the convolution with the legendre polynomial represents the wavefield constraint , and the information content in the mode space-time signal is contained in . further , from the definition in , the legendre polynomials are defined only for . this characteristic of the legendre polynomials is also evident from fig . [ lp ] . [ fig . [ lp ] : legendre polynomials for . ] we observe the mode space-time signal over a time window ] . if we consider that the mode space-time signal is observed within a sphere of radius , it is possible to capture information about the mode time domain signal over a maximum time interval of . hence , observing the mode space-time signal within a spherical region of radius over the time window ] . the observable multipath field on the surface of the sphere ( at radius ) is rcl ( r , , ) = _ n=0^_m =- n^n _ nm(r , ) y_nm ( ) applying parseval's theorem with respect to the spherical harmonics , the average power of the observable multipath field from all propagation directions is a sum over the average power in the different modes , such that rcl _e\{|(r , , ) |^2 } d= _n=0^_m =- n^n e\{|_nm(r,)|^2}. since the noise is independent of the signal , using we obtain rcl [ parserval ] e\{|_nm(r,)|^2}= e\{|_nm()|^2 } |j_n(r)|^2 + e\{|_nm()|^2}. according to corollary , . therefore , we can rewrite as rcl e\{|_nm(r,)|^2}= e\{|_nm()|^2 } |j_n(r)|^2 + _ 0 ^ 2 . from this , the signal to noise ratio ( snr ) for the mode at frequency is rcl ( snr)_nm ( ) & = & [ snrn ] + & & ( snr ) _ , |j_n(r)|^2 [ snr ] where follows from and . next , based on the properties of the bessel functions and the spherical bessel functions , and following a few intermediate steps , we can derive the following bound on the spherical bessel functions for large rcl [ j ] |j_n(z)| ^n , where is the gamma function . therefore , rcl [ snr3 ] ( snr)_nm ( ) ^n+1 . now , using the exponential inequality , for , we rewrite as rcl ( snr)_nm ( ) _ , e^(e r / c-2n ) , where . since , for , we have rcl ( snr)_nm ( ) < ( snr ) _ , e^(e r / c-2n ) .
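the bound just derived reflects the high-pass behaviour of the spherical bessel functions : j_n ( z ) stays negligible until z is of order n . a quick numerical check with scipy ( the threshold and grid are illustrative choices of ours ) :

```python
# numerical check: the 'turn-on' point of |j_n(z)| grows roughly linearly in n
import numpy as np
from scipy.special import spherical_jn

z = np.linspace(1e-6, 60.0, 4000)
for n in (0, 5, 15, 30):
    jn = np.abs(spherical_jn(n, z))
    turn_on = z[np.argmax(jn > 1e-3)]  # first z where |j_n| exceeds 1e-3
    print(n, round(float(turn_on), 2))
```

for a fixed radius this turn-on point translates into the critical frequency below which mode n cannot hold the snr above the threshold , which is exactly the mechanism invoked next .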
note that at the mode , it is not possible to detect signals within the band of frequencies where the snr drops below a certain threshold . hence , for a frequency to be usable to capture information at the mode , the snr must be larger than or equal to the threshold . the frequency at which the is at least equal to the threshold is the critical frequency ( where ) . therefore , rcl[eq : detec ] ( snr ) _ ,e^(2e f_n r / c-2n ) = . this result is easily derived based on the reasoning provided in section [ effectivebw ] . here , we briefly discuss the reasoning : as depicted in fig . [ sbessel ] , except for the order , the spherical bessel functions show a high-pass characteristic . as a result , for frequencies less than a critical frequency , the magnitude of the order spherical bessel function is negligible . therefore , at each spatial mode , for frequencies less than the critical frequency , it is not possible to maintain the snr at least equal to the threshold . now , making the subject of the formula in yields . this means that for spatial modes , signals below frequency are not detectable , since will not be satisfied . observe that for any particular mode , if , the effective bandwidth of that mode is . in addition , if , the effective bandwidth of this mode and of the modes above it is zero . it should also be noted that for a fixed value of the radius , is active within the frequency range as depicted in fig . [ sbessel ] ; hence , the effective bandwidth of the mode is . these arguments are written mathematically as . since rcl _ ( t ) = e^j2w_0n(t- ) , rcl [ timebasis ] _ -^ _ ( t )^_(t ) dt & = & _ -^e^j2w_0n(t- ) e^-j2w_0n(t- ) dt + & = & _ -^ dt + & = & \ { rl 0 & + & . where and for , . here , is derived using the fact that rcl & _ -^ & dt + & = & \ { rl 0 & + & . thus , the functions are orthogonal over time . we start from and , by expanding the sum , we obtain s. hua , c. geng , t. gou , and s. a. jafar , `` degrees of freedom of mimo x networks : spatial scale invariance , one-sided decomposability and linear feasibility , '' in _ ieee international symposium on information theory proceedings ( isit ) _ , 2012 , pp . 2082-2086 . a. ozgur , o. leveque , and d. tse , `` spatial degrees of freedom of large distributed mimo systems and wireless ad hoc networks , '' _ ieee journal on selected areas in communications _ , vol . 31 , no . 2 , pp . 202-214 , february 2013 . r. a. kennedy , p. sadeghi , t. d. abhayapala , and h. m. jones , `` intrinsic limits of dimensionality and richness in random multipath fields , '' _ ieee transactions on signal processing _ , vol . 55 , no . 6 , pp . 2542-2556 , june 2007 . a. s. y. poon , r. w. brodersen , and d. n. c. tse , `` degrees of freedom in multiple-antenna channels : a signal space approach , '' _ ieee transactions on information theory _ , vol . 51 , no . 2 , pp . 523-536 , feb . 2005 . f. bashar , s. a. salehin , and t. d. abhayapala , `` analysis of degrees of freedom of wideband random multipath fields observed over time and space windows , '' in _ ieee statistical signal processing workshop _ , gold coast , australia , 29 jun - 02 jul 2014 , accepted . f. bashar and t. d. abhayapala , `` degrees of freedom of band limited signals measured over space , '' in _ the 12th international symposium on communications and information technologies ( iscit 2012 ) _ , 2012 , pp . 735-740 . d. slepian , `` prolate spheroidal wave functions , fourier analysis and uncertainty - iii : the dimension of the space of essentially time- and band-limited signals , '' _ bell system technical journal _ , vol .
41 , pp . 1295-1336 , july 1962 . m. dollard , `` on the time-bandwidth concentration of signal functions forming given geometric vector configurations , '' _ ieee transactions on information theory _ , vol . 10 , no . 4 , pp . 328-338 , oct . 1964 . a. bostrom , g. kristensson , and s. strom , _ transformation properties of plane spherical and cylindrical scalar & vector wave functions _ , elsevier science publishers , 1991 , ch . 4 , pp . 165-210 . j. krolik and d. swingler , `` multiple broad-band source location using steered covariance matrices , '' _ ieee transactions on acoustics , speech and signal processing _ , vol . 37 , no . , pp . 1481-1494 , oct . 1989 . a. el-keyi and b. champagne , `` cooperative mimo-beamforming for multiuser relay networks , '' in _ ieee international conference on acoustics , speech and signal processing ( icassp 2008 ) _ , march 2008 , pp . 2749-2752 . k. gomadam , v. r. cadambe , and s. a. jafar , `` a distributed numerical approach to interference alignment and applications to wireless interference networks , '' _ ieee transactions on information theory _ , vol . 57 , no . 6 , pp . 3309-3322 , june 2011 . m. franceschetti , m. d. migliore , and p. minero , `` the capacity of wireless networks : information-theoretic and physical limits , '' _ ieee transactions on information theory _ , vol . 55 , no . 8 , pp . 3413-3424 , august 2009 .
the study of the degrees of freedom of signals observed within spatially diverse broadband multipath fields is an area of ongoing investigation and has a wide range of applications , including characterising broadband mimo and cooperative networks . however , a fundamental question arises : given a size limitation on the observation region , what is the upper bound on the degrees of freedom of signals observed within a broadband multipath field over a finite time window ? in order to address this question , we characterize the multipath field as a sum of a finite number of orthogonal waveforms or spatial modes . we show that ( i ) the `` effective observation time '' is independent of the spatial modes and different from the actual observation time , and ( ii ) in wideband transmission regimes , the `` effective bandwidth '' is spatial-mode dependent and varies from the given frequency bandwidth . these findings clearly indicate the strong coupling between space and time as well as between space and frequency in spatially diverse wideband multipath fields . as a result , the signal degrees of freedom do not agree with the well-established degrees of freedom result as a product of spatial degrees of freedom and time-frequency degrees of freedom . instead , analogous to shannon's communication model where signals are encoded in only one spatial mode , the available signal degrees of freedom in spatially diverse wideband multipath fields is the _ time-bandwidth _ product result extended from one spatial mode to finitely many modes . we also show that the degrees of freedom is affected by the acceptable signal to noise ratio ( snr ) in each spatial mode . degrees of freedom , multipath propagation , spatial sampling , broadband mimo networks , distributed mimo .
data storage demands have been increasing very rapidly , leading to a lot of interest in distributed storage systems . erasure-coded distributed storage systems provide reliable data storage at a fraction of the cost of systems based on simple data replication . characterizing the reliability of the storage system is an important problem . reliability has been explored for exponential repair times under both serial and parallel repair models . recently , there has been interest in characterizing the reliability for deterministic repair times , where geometric interpretations are provided . however , there is no complete characterization of reliability for the parallel and serial repair models . the authors of consider the availability of k-out-of-n systems with general repair times , while this paper considers the reliability of k-out-of-n systems . in this paper , we give the laplace-stieltjes transform of the probability of data loss for both the parallel and serial repair models . using the laplace-stieltjes transform formulas , the mean time to data loss can be found . even though the expressions are difficult to evaluate exactly , we can find the dominant terms for the case when the repair rate is much higher than the failure rate , and find that the dominant term for serial repair is the same as in the case of exponential repair in . however , we also find that parallel repair gives the same dominant term as serial repair , thus not achieving the factor improvement in mean time to data loss given in for parallel exponential repair . the results are compared in simulations , where we see that the mean time to data loss with parallel repair is higher , but the asymptote as the repair rate increases is the same . the rest of the paper is organized as follows . in section 2 , we give our results for the serial repair model . section 3 presents our results for the parallel repair model . section 4 gives the numerical results and section 5 concludes this paper . the serial repair model can be described with the following assumptions . 1 . components ( disks ) are identical and independent . 2 . the failure rate for each disk is constant , and equals ( the failure time is exponentially distributed for each disk ) . 3 . the repair times are independent and fixed , and equal . 4 . the repair of the previously broken components must be re-initialized when a new component fails , and only the newest broken component is repaired . 5 . the system stops working when components have failed for , . suppose that the probability of data loss in time is . then the laplace-stieltjes transform of reliability is given as ] , where is the random variable representing the time to data loss , and . the next result gives the laplace-stieltjes transform of reliability for parallel repair . the laplace-stieltjes transform of reliability for parallel repair is given by where the above can be simplified to the proof steps are very similar to those for the serial repair model ; the difference is that all the failed components are repaired simultaneously , so that a repair takes the system from state i to state 0 rather than to state i-1 as in the serial repair model . using , we can find the mean time to data loss as . given the exact expression of , we can evaluate . however , the expression is cumbersome and thus omitted . we instead consider the scenario where , and give the mean time to data loss in that scenario ; a simulation sketch of the two models follows below .
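before stating the closed-form approximation , the two models can be checked against a small event-driven monte carlo . the sketch below follows our reading of assumptions 1-5 : exponential failures , deterministic repairs that are re-initialized by every new failure , loss once n - k + 1 disks are down , and , in the parallel model , a completed repair returning the system to state 0 . parameter values are illustrative .

```python
# hedged monte carlo sketch of serial vs. parallel deterministic repair
import numpy as np

rng = np.random.default_rng(1)

def mtdl(n, k, lam, gamma, parallel, runs=2000):
    m = n - k                              # failures the system can tolerate
    times = np.empty(runs)
    for r in range(runs):
        t, j = 0.0, 0                      # elapsed time, failed-disk count
        while True:
            if j == 0:
                t += rng.exponential(1.0 / (n * lam))  # wait for a failure
                j = 1
            # race between the next failure and the re-initialized repair
            x = rng.exponential(1.0 / ((n - j) * lam))
            if x < gamma:                  # another disk fails first
                t += x
                j += 1
                if j > m:                  # n - k + 1 disks down: data loss
                    times[r] = t
                    break
            else:                          # repair completes first
                t += gamma
                j = 0 if parallel else j - 1
    return times.mean()

lam, gamma = 0.5, 0.1                      # failure rate / repair time, illustrative
for par in (False, True):
    print("parallel" if par else "serial", mtdl(4, 2, lam, gamma, par))
```

runs like this reproduce the qualitative picture reported in the numerical section : parallel repair yields a somewhat larger mean time to data loss , while both estimates approach the same asymptote as the repair time shrinks .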
the mean time to data loss for parallel deterministic repair is approximately given as the proof follows by using the expression for and ignoring all terms of lower order . the detailed steps are omitted . we note that the approximation is the same , thus giving the same asymptotic expression as in the case of the serial repair model . for the exponential parallel repair model with repair rate , the mean time to data loss was characterized in , and is given as with an approximation given as thus , even though exponential parallel repair has times higher mtdl than serial repair , the mtdl approximations for the serial and parallel deterministic repair models are the same . we assume and . we assume that the failure rate is per year . we plot the increase in mean time to data loss with decreasing time to repair in figure [ fig : awesome_image ] . we note that for the serial repair model , the mean time to data loss is lower for the deterministic repair model than for the serial exponential repair model . we further see that parallel deterministic repair has a higher mean time to data loss than serial deterministic repair . finally , we note that the approximation result approximates all three mean time to data loss expressions as the repair time becomes smaller . this paper gives the laplace-stieltjes transform of the probability of data loss for the k-out-of-n distributed storage system with deterministic repair times . two repair models , namely serial and parallel repair , are considered . we show that for a failure rate much lower than the repair rate , the mean time to data loss for the two models is the same , unlike the case for exponential repair models . l. mearian , `` world's data will grow by 50x in next decade , idc study predicts , '' computerworld , jun . 2011 . p. m. chen , e. k. lee , g. a. gibson , r. h. katz and d. a. patterson , `` raid : high-performance , reliable secondary storage , '' acm computing surveys , vol . 26 , no . 2 ( 1994 ) , pp . 145-185 . j. e. angus , `` on computing mtbf for a k-out-of-n : g repairable system , '' ieee transactions on reliability , vol . 37 , no . 3 ( 1988 ) , pp . 312-313 . m. s. moustafa , `` time dependent availability of k-out-of-n : g systems with exponential failures and general repairs , '' economic quality control , vol . , pp . 59-66 , 2003 . v. aggarwal , c. tian , v. a. vaishampayan , and y. r. chen , `` distributed data storage systems with opportunistic repair , '' proc . ieee infocom 2014 . a. campello and v. a. vaishampayan , `` reliability of erasure coded systems : a geometric approach , '' proc . ieee international conference on big data 2013 , santa clara , ca , oct . 2013 . v. a. vaishampayan and a. campello , `` reliability of erasure coded storage systems : a combinatorial-geometric approach , '' ieee transactions on information theory , vol . , pp . 5795-5809 , nov .
in this paper , we find the laplace - stieltjes transform of the probability of data loss for the k - out - of - n distributed storage system with deterministic repair times . we consider two repair models , namely serial and parallel repair . we show that for failure rate much lower than the repair rate , the mean time to data loss for the two models is the same , unlike the case for exponential repair models . k - out - of - n storage systems , parallel repair , serial repair , reliability , mean time to data loss
suppose we take samples , , of size , which are independent and identically distributed ( i.i.d . ) as a -variate distribution . here , we consider situations where the data dimension is very high compared to the sample size . let and assume , , with ] and ] , , such as =({\mbox{\boldmath { }}}_{i1}, ... ,{\mbox{\boldmath { }}}_{in})^t ] . they showed that has the asymptotic normality as when the underlying distribution is gaussian and . also , note that only under the gaussian assumption . contrary to that , from corollary 3.1 , has the asymptotic normality as even for non - gaussian situations and . also , one can claim that without any assumptions such as ( a - i ) . in figure 3 , we summarize the findings obtained by averaging the outcomes from 4000 ( say ) replications for ( i ) to ( iii ) . here , the first replications were generated for ( a ) when and the last replications were generated for ( b ) when . we defined when was falsely rejected ( or not ) for , and when was falsely rejected ( or not ) for . we gave to estimate the size in the left panels and to estimate the power in the right panels . their standard deviations are less than . let . from theorem 4.1 in view of remark 5 , we expected that and for ( [ 4.1 ] ) are close to and , respectively . in figure 4 , we give the averages ( in the left panels ) and the sample variances ( in the right panels ) of and from the outcomes for ( b ) when in cases ( i ) to ( iii ) . from remark 5 , the asymptotic variance for was given by . from figures 3 and 4 , we observe that gives good performance for the gaussian case . however , for non - gaussian cases such as ( ii ) and ( iii ) , does not seem to give a preferable performance . in particular , it gave quite bad performance for ( iii ) . that is probably because ( a - i ) ( or ( a - ii ) ) is not met in ( iii ) . on the other hand , gave adequate performance for high - dimensional cases even in the non - gaussian situations . we observed that is quite robust against other non - gaussian situations as well . hence , we recommend using for the test of ( [ 1.1 ] ) and for the estimation of . figure 3 : empirical size ( left panels ) and power ( right panels ) of the tests by ( [ 4.1 ] ) and ( sr ) in cases ( i ) , ( ii ) the chi - squared distribution with degree of freedom , and ( iii ) ; the asymptotic powers are denoted by the solid lines in the right panels .
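the size - and - power protocol behind figure 3 can be outlined in code . the sketch below is purely illustrative and hypothetical : stat , sample_h0 , sample_h1 and crit are placeholders , not the estimators or critical values of this paper ; it only shows how size and power are estimated as rejection frequencies over ( say ) 4000 replications .

```python
import numpy as np

def empirical_size_power(stat, sample_h0, sample_h1, crit, reps=4000, seed=0):
    # size: fraction of replications rejecting under the null;
    # power: fraction of replications rejecting under the alternative
    rng = np.random.default_rng(seed)
    size = np.mean([stat(sample_h0(rng)) > crit for _ in range(reps)])
    power = np.mean([stat(sample_h1(rng)) > crit for _ in range(reps)])
    return size, power
```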
and are denoted by the dashed lines in the left panels and their sample variances , and , are denoted by the dashed lines in the right panels in cases of ( i ) to ( iii ) .the asymptotic variance of was given by which was denoted by the solid lines in the right panels.,title="fig : " ] + ( i ) .+ and are denoted by the dashed lines in the left panels and their sample variances , and , are denoted by the dashed lines in the right panels in cases of ( i ) to ( iii ) .the asymptotic variance of was given by which was denoted by the solid lines in the right panels.,title="fig : " ] + ( ii ) the chi - squared distribution with degree of freedom .+ and are denoted by the dashed lines in the left panels and their sample variances , and , are denoted by the dashed lines in the right panels in cases of ( i ) to ( iii ) .the asymptotic variance of was given by which was denoted by the solid lines in the right panels.,title="fig : " ] + ( iii ) .in this section , we give several applications of the results in section 3 .we construct a confidence interval for by ,\end{aligned}\ ] ] where .then , from corollary 3.1 , it holds that as under ( a - ii ) , ( a - iii ) and ( a - v ) .hence , one can estimate by .if one considers as a candidate of , one can check whether is a valid candidate or not according as or not . as discussed in section 3, holds the consistency property when ( a - iv ) is met , and holds the asymptotic normality when ( a - v ) is met . here ,we propose a method to check whether ( a - iv ) holds or not .let .we have the following result .assume ( a - i ) .it holds that as . from proposition 5.1 , one can distinguish ( a - iv ) and ( a - v ) .if is sufficiently small , one may claim ( a - iv ) , otherwise ( a - v ) .let . here , is the ( population ) rv - coefficient which is a multivariate generalization of the squared pearson correlation coefficient .note that ] and ^t ] from 10000 replications . on the other hand , we considered the pmse for the full isoprenoid ( genes ) .then , similar to above , we constructed % ci of the pmse by ] by schwarz s inequality .it concludes the result .let .when we consider the singular value decomposition of , it follows that , where denote singular values of , and ( or ) denotes a unit left- ( or right- ) singular vector corresponding to .then , it holds that similarly , we claim that , so that thus under ( a - iii ) , it holds that as .then , we claim that as , so that under ( a - iii ) and ( a - v ) by noting that and , from lemma 3.1 and ( [ a.6 ] ) , we have that as under ( a - i ) , ( a - iii ) and ( a - v ) from the fact that as under ( a - v ) . from ( [ a.5 ] ), it holds that , so that .then , from lemma 3.1 and , it holds that as under ( a - i ) .thus , under ( a - iv ) , from chebyshev s inequality , we can claim the result .we first consider the first result of lemma a.1 .let for all .let and .note that and under ( a - ii ) . here , we claim that , so that then , under ( a - ii ) , we have that for , it is necessary to consider the terms of because it does not hold that unless or . here , under ( a - ii ) , we evaluate that for sufficiently large from the fact that for all .similarly , for other terms , we can evaluate the order as .hence , we can claim that under ( a - ii ) , so that from ( [ a.7 ] ) . by noting that , we can conclude the first result of lemma a.1 . from ( [ a.2 ] ) , we have that under ( a - i ) then , from ( [ a.6 ] ) , we have that as under ( a - i ) , ( a - iii ) and ( a - v ) . 
by combining ( [ a.8 ] ) with ( [ a.4 ] ) , from the fact that , we can conclude the result .let for .note that .also , note that here , we have for , that .then , we consider applying the martingale central limit theorem given by mcleish .let ^{1/2} ] as under ( a - ii ) , ( a - iii ) and ( a - v ) .then , by combining ( [ a.12 ] ) with lemmas 3.2 and a.2 , we have that as under ( a - ii ) , ( a - iii ) and ( a - v ) .it concludes the result .we first consider the proof of corollary 4.1 . from theorem 3.1 , under ( a - i ) and ( a - iv ) , we have that as from the fact that as under ( a - i ) and ( a - iv ) .it concludes the result of corollary 4.1 .next , we consider the proof of theorem 4.1 . from corollary 3.1 , under ( a - ii ) , ( a - iii ) and ( a - v ) , we have that as from the fact that as under ( a - ii ) .we can conclude the results of size and power when ( a - v ) is met in theorem 4.1 .we note that as under ( a - iv ) , so that we obtain the result of power when ( a - iv ) is met from corollary 4.1 .hence , by considering the convergent subsequence of , we can conclude the result of power in theorem 4.1 .the proofs are completed .next , we consider the case when ( a - v ) is met . from ( [ a.5 ] ) , it holds that , so that as under ( a - v ) .then , from lemma 3.1 and ( [ a.1 ] ) , under ( a - i ) and ( a - v ) , we claim that as .note that as under ( a - v ) .thus under ( a - i ) and ( a - v ) , it holds that as .then , from lemma 3.3 , under ( a - i ) and ( a - v ) , we have that as it concludes the result when ( a - v ) is met .the proof is completed .let .similarly to ( [ a.5 ] ) , it holds that then , by noting that and , from lemma 5.1 , we have that as under ( a - i ) .thus , under ( a - iv ) with , from chebyshev s inequality , we can claim the result of corollary 5.1 .similarly to the proof of lemma a.2 , under ( a - i ) , ( a - iii ) , ( a - v ) and ( a - v ) with , we can claim that as .thus , similar to ( [ a.13 ] ) , from lemma 3.3 , we can conclude the result .research of the first author was partially supported by grant - in - aid for young scientists ( b ) , japan society for the promotion of science ( jsps ) , under contract number 26800078 .research of the second author was partially supported by grants - in - aid for scientific research ( b ) and challenging exploratory research , jsps , under contract numbers 22300094 and 26540010 .m. aoshima , k. yata , asymptotic normality for inference on multisample , high - dimensional mean vectors under mild conditions , methodol .( 2013 ) , in press .doi : 10.1007/s11009 - 013 - 9370 - 7 .z. bai , h. saranadasa , effect of high dimension : by an example of a two sample problem , statist .sinica 6 ( 1996 ) 311 - 329 .chen , y .- l .qin , a two - sample test for high - dimensional data with applications to gene - set testing , ann .38 ( 2010 ) 808 - 835 .a. wille , p. zimmermann , e. vranova , a. frholz , o. laule , s. bleuler , l. hennig , a. prelic , p. von rohr , l. thiele , e. zitzler , w. gruissem , p. bhlmann , sparse graphical gaussian modeling of the isoprenoid gene network in arabidopsis thaliana , genome biol . 5 ( 2004 )
in this paper , we consider testing the correlation coefficient matrix between two subsets of high - dimensional variables . we produce a test statistic by using the extended cross - data - matrix ( ecdm ) methodology and show the unbiasedness of the ecdm estimator . we also show that the ecdm estimator has the consistency property and the asymptotic normality in high - dimensional settings . we propose a test procedure by the ecdm estimator and evaluate its asymptotic size and power theoretically and numerically . we give several applications of the ecdm estimator . finally , we demonstrate how the test procedure performs in actual data analyses by using a microarray data set . * keywords : * correlations test ; cross - data - matrix methodology ; graphical modeling ; large , small ; pathway analysis ; rv - coefficient . * kazuyoshi yata and makoto aoshima * , institute of mathematics , university of tsukuba , ibaraki , japan
information is about making distinctions or differences . in james gleick s book , _ the information : a history , a theory , a flood _ , he noted the focus on differences in the seventeenth - century polymath john wilkins , who was a founder of the royal society . in 1641 , the year before newton was born , wilkins published one of the earliest books on cryptography , _ mercury or the secret and swift messenger _ , which not only pointed out the fundamental role of differences but noted that any ( finite ) set of different things could be encoded by words in a binary code . `` for in the general we must note , that whatever is capable of a competent difference , perceptible to any sense , may be a sufficient means whereby to express the cogitations . it is more convenient , indeed , that these differences should be of as great variety as the letters of the alphabet ; but it is sufficient if they be but twofold , because two alone may , with somewhat more labour and time , be well enough contrived to express all the rest . '' ( chap . xvii , p.
69 ) . wilkins explains that a five - letter binary code would be sufficient to code the letters of the alphabet , since $2^5 = 32 > 24$ : `` thus any two letters or numbers , suppose a.b . being transposed through five places , will yield thirty two differences , and so consequently will superabundantly serve for the four and twenty letters ... . '' ( chap . xvii , p. 69 ) as gleick noted : `` any difference meant a binary choice . any binary choice began the expressing of cogitations . here , in this arcane and anonymous treatise of 1641 , the essential idea of information theory poked to the surface of human thought , saw its shadow , and disappeared again for [ three ] hundred years . ''
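wilkins binary - code observation is easy to make concrete : five binary places give $2^5 = 32$ codewords , more than enough for a 24 - letter alphabet . the following python snippet is purely illustrative ( the 24 - letter alphabet shown is a hypothetical one , not wilkins own ) :

```python
# five binary "places" suffice for 24 letters since 2**5 = 32 >= 24
letters = "abcdefghiklmnopqrstuwxyz"   # hypothetical 24-letter alphabet (no j, v)
assert len(letters) == 24 and 2 ** 5 >= len(letters)
code = {ch: format(i, "05b") for i, ch in enumerate(letters)}
print(code["a"], code["z"])             # '00000', '10111'
```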
in this paper , we will start afresh by deriving an information - as - distinctions notion of logical entropy from the new logic of partitions that is mathematically dual to the usual boolean logic of subsets . then the usual shannon entropy will be essentially derived from the concepts behind logical entropy as another way to measure information - as - distinctions . this treatment of the various notions of shannon entropy ( e.g. , mutual , conditional , and joint entropy ) will also explain why their interrelations can be represented using a venn diagram picture . the logic normally called `` propositional logic '' is a special case of the logic of subsets originally developed by george boole . in the boolean logic of subsets of a fixed non - empty universe set , the variables in formulas refer to subsets and the logical operations such as the join , meet , and implication are interpreted as the subset operations of union , intersection , and the conditional . then `` propositional '' logic is the special case where is the one - element set whose subsets and are interpreted as the truth values and ( or false and true ) for propositions . in subset logic , a _ valid formula _ or _ tautology _ is a formula that evaluates to the universe set no matter which subsets are substituted for the variables . this gives the same result as the usual definition of the partition meet in the literature , defined there with refinement as the ordering relation . perhaps surprisingly , the other logical operations such as the implication do not seem to be defined for partitions in the literature . since the subset operation of implication is , we define the _ partition implication _ as the partition whose dit set is : . or , in this case , $\left ( \frac{1}{3}\right ) ^{n}$ . since the probability of a typical message is for , the typical messages are equiprobable . hence the number of typical messages is $\left [ \prod_{k } p_{k}^{-p_{k}}\right ] ^{n}$ where $\log \left\{ \left [ \prod_{k } p_{k}^{-p_{k}}\right ] ^{n}\right\} = n\log \left [ \prod_{k } p_{k}^{-p_{k}}\right ]$ . $\ln w = \ln n ! - \sum_{i}\ln n_{i } ! \approx n\left [ \ln \left ( n\right ) -1\right ] -\sum_{i}n_{i}\left [ \ln \left ( n_{i}\right ) -1\right ]$ where ( and where the formula with logs to the base $e$ only differs from the usual base $2$ formula by a scaling factor ) . entropy is in fact an excellent numerical approximation to for large ( e.g. , in statistical mechanics ) . but the common claim is that shannon s entropy has the _ same functional form _ as entropy in statistical mechanics , and that is simply false . if we use a three - term stirling approximation , then we obtain an even better numerical approximation . mackay also uses stirling s approximation to give a `` more accurate approximation '' ( using the next term in the stirling approximation ) to the entropy of statistical mechanics than the shannon entropy .
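the numerical claim about the multinomial count can be checked directly . the following sketch is ours ( an arbitrary example distribution , computed via the log - gamma function ) ; it compares the per - trial log of the multinomial coefficient with the shannon entropy :

```python
from math import lgamma, log

p = [0.5, 0.3, 0.2]                 # example distribution
N = 10 ** 6                          # number of trials
counts = [round(N * pi) for pi in p]
# ln W = ln N! - sum_i ln n_i!, via the log-gamma function
lnW = lgamma(N + 1) - sum(lgamma(c + 1) for c in counts)
print(lnW / (N * log(2)))                   # (1/N) log2 W, close to H(p) for large N
print(-sum(pi * log(pi, 2) for pi in p))    # shannon entropy H(p), about 1.4855 bits
```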
] but no one would suggest using that `` more accurate '' entropy formula in information theory .shannon s formula should be justified and understood by the arguments given previously , and not by over - interpreting the approximate relationship with entropy in statistical mechanics .the basic datum is `` the '' set of elements with the equal probabilities . in that basic case of an equiprobable set, we can derive the dit - bit connection , and then by using a probabilistic average , we can develop the shannon entropy , expressed in terms of bits , from the logical entropy , expressed in terms of ( normalized ) dits , or vice - versa .given with equiprobable elements , the number of dits ( of the discrete partition on ) is so the normalized dit count is : normalized dits .that is the dit - count or logical measure of the information is a set of distinct elements . or the normalized count . ]but we can also measure the information in the set by the number of binary partitions it takes ( on average ) to distinguish the elements , and that bit - count is : bits . by solving the dit - count and the bit - count for and equating, we can derive each measure in terms of the other : and the dit - bit conversion formulas .the common thing being measured is an equiprobable where .the dit - count for is and the bit - count for is , and the bit - dit connection gives the relationship between the two counts .using this dit - bit connection between the two different ways to measure the `` information '' in , each entropy can be developed from the other .we start with the logical entropy of a probability distribution : .it is expressed as the probabilistic average of the dit - counts or logical entropies of the sets with equiprobable elements . , there is no necessity that is an integer so the dit - counts for are extrapolations while the bit - counts or binary partition counts for are already extrapolations even when is an integer but not a power of . ]but if we switch to the binary - partition bit - counts of the information content of those same sets of equiprobable elements , then the bit - counts are and the probabilistic average is the shannon entropy : . both entropies have the mathematical form : and differ by using either the dit - count or bit - count to measure the information in .clearly the process is reversible , so one can use the dit - bit connection in reverse to develop the logical entropy from the shannon entropy .thus the two notions of entropy are simply two different ways , using distinctions ( dit - counts ) or binary partitions ( bit - counts ) , to measure the information in a probability distribution .moreover the dit - bit connection carries over to the compound notions of entropy so that the shannon notions of conditional entropy , mutual information , and joint entropy can be developed from the corresponding notions for logical entropy . since the logical notions are the values of a probability measure , the compound notions of logical entropy have the usual venn diagram relations such as the inclusion - exclusion principle .there is a well - known analogy between the `` venn diagram '' relationships for the shannon entropies and the relationships satisfied by any measure on a set ( , ) . as l. l. 
campbell puts it , the analogy : `` suggests the possibility that and are measures of sets , that is the measure of their union , that is the measure of their intersection , and that is the measure of their difference . the possibility that is the entropy of the `` intersection '' of two partitions is particularly interesting . this `` intersection , '' if it existed , would presumably contain the information common to the partitions and . '' all of campbell s desiderata are precisely true when : * `` sets '' = dit sets , and * `` entropies '' = normalized counting measure of the ( dit ) sets , i.e. , the logical entropies . since the logical entropies are the values of a measure , by developing the corresponding shannon notions from the logical ones , we have an explanation of why the shannon notions also exhibit the same venn diagram relationships . the expository strategy is to first develop the shannon and logical compound notions of entropy separately and then to show the relationship using the dit - bit connection . given two partitions and on a finite set , how might one measure the new information that is provided by that was not already in ? campbell suggests associating sets with partitions so the conditional entropy would be the measure of the difference between the sets . taking the information as distinctions , we take the difference between the dit sets , i.e. , , and then take the normalized counting measure of that subset of : the _ logical conditional entropy _ of given . when the two partitions and are joined together in the join , whose blocks are the non - empty intersections , their information as distinctions is also joined together as sets , ( the `` union '' mentioned by campbell ) , which has the normalized counting measure of : the _ logical entropy of the joint distribution _ , which is the probability that two samplings of the joint distribution will yield a pair of _ distinct _ ordered pairs , i.e. , with an -distinction _ or _ a -distinction .
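campbell s desiderata can be made concrete with a toy computation . the sketch below is ours ( a hypothetical four - element universe and two example partitions ) ; it represents each partition by its dit set and obtains all the logical entropies as normalized counts of unions , intersections and differences :

```python
from itertools import product

def dits(partition, U):
    # the dit set: ordered pairs of elements lying in distinct blocks
    block = {x: i for i, b in enumerate(partition) for x in b}
    return {(u, v) for u, v in product(U, U) if block[u] != block[v]}

U = [0, 1, 2, 3]                                  # equiprobable points
pi, sigma = [{0, 1}, {2, 3}], [{0, 2}, {1, 3}]    # example partitions
N = len(U) ** 2
h_pi, h_sigma = len(dits(pi, U)) / N, len(dits(sigma, U)) / N
h_join = len(dits(pi, U) | dits(sigma, U)) / N    # join entropy: union of dit sets
h_cond = len(dits(pi, U) - dits(sigma, U)) / N    # conditional: difference of dit sets
m = len(dits(pi, U) & dits(sigma, U)) / N         # mutual info: intersection of dit sets
# inclusion-exclusion holds exactly for the counting measure
assert abs(h_join - (h_pi + h_sigma - m)) < 1e-12
print(h_pi, h_sigma, h_join, h_cond, m)           # 0.5 0.5 0.75 0.25 0.25
```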
for the definition of the conditional entropy , we simply take the product measure of the set of pairs and that give an -distinction but not a -distinction .thus given the first draw , we can again use a venn diagram to compute the probability that the second draw will have but . to illustrate this using venn diagram reasoning ,consider the probability measure defined by on the subsets of .given the first draw , the probability of getting an -distinction on the second draw is and the probability of getting a -distinction is . a draw that is a -distinction is , a fortiori , an -distinction so the area is contained in the area .then the probability of getting an -distinction that is not a -distinction on the second draw is the difference : .figure2.eps figure 2 : = probability of an -distinction but not a -distinction on .since the first draw was with probability , we have the following as the product measure of the subset of ^{2} ] that are -distinctions but not -distinctions : ] -\left [ 1-\sum _ { y}p\left ( y\right ) ^{2}\right ] = h\left ( x , y\right ) -h\left ( y\right ) ] .the shannon conditional entropy for partitions and is based on subset reasoning which is then averaged over a partition . given a subset ,a partition induces a partition of with the blocks .then is the probability distribution associated with that partition so it has a shannon entropy which we denote : .the shannon conditional entropy is then obtained by averaging over the blocks of : _ shannon conditional entropy of _ _ _ given _ _ .since the join is the partition whose blocks are the non - empty intersections , .developing the formula gives : = h\left ( \pi \vee\sigma\right ) -h\left ( \sigma\right ) ] where is the normalized dit count for the discrete partition on a set with equiprobable elements .hence that same equiprobable set requires the bit - count of binary partitions to distinguish its elements .similarly is the normalized dit count for ( the discrete partition on ) a set with equiprobable elements , so it requires binary partitions to make those distinctions .those binary partitions are included in the binary partitions ( since a -distinction is automatically a -distinction ) and we do nt want the -distinctions so they are subtracted off to get : bits . taking the same probabilistic average ,the average number of binary partitions needed to make the -distinctions but not the -distinctions is : = \sum_{x , y}p\left ( x , y\right ) \log\left ( \frac{p\left ( y\right ) } { p\left ( x , y\right ) } \right ) = h\left ( x|y\right ) . ] .figure6.eps figure 6 : + \left [ 1-p\left ( y\right ) \right ] -\left [ 1-p\left ( x , y\right ) \right ] ] _ logical mutual information in a joint probability distribution_. the probability of two independent draws differing in _ either _ the _ or _the is just the logical entropy of the joint distribution : = 1-\sum_{x , y}p\left ( x , y\right ) ^{2} ] .the usual heuristic motivation for shannon s mutual information is much like its development from the logical mutual information so we will take that approach at the outset .the logical mutual information for partitions can be expressed in the form : ] _ shannon s mutual information for partitions_. keeping the log s separate gives the venn diagram picture : \\ & = h\left ( \pi\right ) + h\left ( \sigma\right ) -h\left ( \pi\vee \sigma\right)\end{aligned}\ ] ] inclusion - exclusion analogy for shannon entropies of partitions . to move from partitions to probability distributions , consider again the joint distribution on . 
then developing the shannon mutual information from the logical mutual information amounts to replacing the block probabilities in the join by the joint probabilities and the probabilities in the separate partitions by the marginals ( since and ) , to obtain : _ shannon mutual information in a joint probability distribution_. then the same proof carries over to give the : figure8.eps figure 8 : inclusion - exclusion `` picture '' for shannon entropies of probability distributions .the logical mutual information formula : + \left [ 1-p\left ( y\right ) \right ] -\left [ 1-p\left ( x , y\right ) \right ] \right ] ] .thus the genuine venn diagram relationships for the product probability measure that gives the logical entropies carry over , via the dit - count to bit - count conversion , to give a similar venn diagram picture for the shannon entropies .two partitions and are said to be ( stochastically ) _ independent _ if for all and , . if and are independent , then : , so that : shannon entropy for partitions additive under independence . in ordinary probability theory , two events for a sample space are said to be _ independent _ if .we have used the motivation of thinking of a partition - as - dit - set as an event in a sample space with the probability of that event being , the logical entropy of the partition .the following proposition shows that this motivation extends to the notion of independence .[ independent partitions have independent dit sets]if and are ( stochastically ) independent partitions , then their dit sets and are independent as events in the sample space ( with equiprobable points ) . for independent partitions and , we need to show that the probability of the event is equal to the product of the probabilities and of the events and in the sample space . by the assumption of stochastic independence , we have so that . by the previous structure theorem for the mutual information set : , where the union is disjoint so that : so that : . hence the logical entropies behave like probabilities under independence ; the probability that _ and _ distinguishes , i.e. , , is equal to the probability that distinguishes times the probability that distinguishes : logical entropy multiplicative under independence .it is sometimes convenient to think in the complementary terms of an equivalence relation `` equating '' or identifying rather than a partition distinguishing .since can be interpreted as the probability that a random pair of elements from are distinguished by , i.e. , as a distinction probability , its complement can be interpreted as an _ identification probability _ ,i.e. , the probability that a random pair is equated by ( thinking of as an equivalence relation on ) . in general , \left [ 1-h\left ( \sigma\right ) \right ] = 1-h\left ( \pi\right ) -h\left ( \sigma\right ) + h\left ( \pi\right ) h\left ( \sigma\right ) = \left [ 1-h\left ( \pi\vee\sigma\right ) \right ] + \left [ h\left ( \pi\right ) h\left ( \sigma\right ) -m(\pi , \sigma\right ] ] . thus if and are independent , then the probability that the join partition identifies is the probability that identifies times the probability that identifies : \left [ 1-h\left ( \sigma\right ) \right ] = \left [ 1-h\left ( \pi\vee\sigma\right ) \right ] ] . the _ kullback - leibler divergence _ ( or _ relative entropy _ ) is defined as a measure of the distance or divergence between the two distributions where .a basic result is the : with equality if and only if _ information inequality _ . 
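before turning to the divergence , the multiplicativity of logical entropy under independence is easy to verify numerically ; a minimal sketch with hypothetical marginals :

```python
px, py = [0.5, 0.3, 0.2], [0.6, 0.4]        # hypothetical marginals
pxy = [p * q for p in px for q in py]        # independent joint distribution

h = lambda dist: 1.0 - sum(t * t for t in dist)   # logical entropy: 1 - sum of squares
m = h(px) + h(py) - h(pxy)                   # mutual information by inclusion-exclusion
print(m, h(px) * h(py))                      # equal under independence: 0.2976 = 0.2976
```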
given two partitions and , the inequality obtained by applying the information inequality to the two distributions and on the sample space : with equality iff independence . in the same manner, we have for the joint distribution : with equality iff independence . the _ symmetrized kullback - leibler divergence _is : = h_{s}\left ( p||q\right ) -\left [ \frac{h\left ( p\right ) + h\left ( q\right ) } { 2}\right ] ] so that taking the sum for gives : -\frac{1}{2}\left [ \left ( 1-{\textstyle\sum\nolimits_{i } } p_{i}^{2}\right ) + \left ( 1-{\textstyle\sum\nolimits_{i } } q_{i}^{2}\right ) \right ] \\ & = h\left ( p\vert q\right ) -\frac{h\left ( p\right ) + h\left ( q\right ) } { 2}\text{.}\ ] ] logical divergence = _ jensen difference _ between probability distributions .then the information inequality implies that the logical cross - entropy is greater than or equal to the average of the logical entropies : with equality iff .the half - and - half probability distribution that mixes and has the logical entropy of ] that develops via the dit - bit connection to : ] .thus the logical divergence develops via the dit - bit connection to the symmetrized version of the kullback - leibler divergence .the following table summarizes the concepts for the shannon and logical entropies .we use the case of probability distributions rather than partitions , and we use the abbreviations , , and .[ c]l|c|c| & & + & & + & & + & & + & & \left [ 1-h\left ( y\right ) \right ] ] + & & + & & + & & + & & /2 $ ] + & & + the above table shows many of the same relationships holding between the various forms of the logical and shannon entropies due ultimately to the dit - bit connection . the dit - bit connection between the two notions of entropy is based on them being two different measures of the `` amount of information - as - distinctions , '' the dit - count being the normalized count of the distinctions and the bit - count being the number of binary partitions required ( on average ) to make the distinctions .logical entropies arise naturally as the normalized counting measure for partition logic just as probabilities arise as the normalized counting measure for subset logic , where the two logics are dual to one another .all the forms of logical entropy have simple interpretations as the probabilities of distinctions .shannon entropy is a higher - level and more refined notion adapted to the theory of communications and coding where it can be interpreted as the average number of bits necessary per letter to code the messages , i.e. , the average number of binary partitions necessary per letter to distinguish the messages .ricotta , carlo and laszlo szeidl 2006 . towards a unifying approach to diversity measures : bridging the gap between the shannon entropy and rao s quadratic index ._ theoretical population biology_. 70 : 237 - 43 .
the logical basis for information theory is the newly developed logic of partitions that is dual to the usual boolean logic of subsets . the key concept is a `` distinction '' of a partition , an ordered pair of elements in distinct blocks of the partition . the logical concept of entropy based on partition logic is the normalized counting measure of the set of distinctions of a partition on a finite set just as the usual logical notion of probability based on the boolean logic of subsets is the normalized counting measure of the subsets ( events ) . thus logical entropy is a measure on the set of ordered pairs , and all the compound notions of entropy ( join entropy , conditional entropy , and mutual information ) arise in the usual way from the measure ( e.g. , the inclusion - exclusion principle ) just like the corresponding notions of probability . the usual shannon entropy of a partition is developed by replacing the normalized count of distinctions ( dits ) by the average number of binary partitions ( bits ) necessary to make all the distinctions of the partition .
the characteristic polynomials of random matrices have attracted considerable attention in the last few years , one reason being that their correlations seem to reflect the correlations of the eigenvalues . in this paperwe continue with our investigation of the second - order correlation function of the characteristic polynomial of a sample covariance matrix .this function is given by where , , and is a complex or real sample covariance matrix defined as follows : * complex sample covariance matrices .* let be a distribution on the real line with expectation , variance and finite fourth moment , and for given with , let denote the matrix whose entries are complex random variables whose real and imaginary parts are independent , each with distribution .let denote the conjugate transpose of .then the hermitian matrix is called the ( unrescaled ) _ sample covariance matrix _ associated with the distribution .* real sample covariance matrices .* let be a distribution on the real line with expectation , variance and finite fourth moment , and for given with , let denote the matrix whose entries are real random variables with distribution .let denote the transpose of .then the symmetric matrix is called the ( unrescaled ) _ sample covariance matrix _ associated with the distribution .we are interested in the asymptotic behavior of the values as , for certain choices of the sequences , , , . in a recent paper we considered the `` square '' case where the difference is fixed .we showed that in this situation , the second - order correlation function of the characteristic polynomial of a complex sample covariance matrix is asymptotically given ( after the appropriate rescaling ) by the sine kernel in the bulk of the spectrum , by the airy kernel at the soft edge of the spectrum , by the bessel kernel at the hard edge of the spectrum .moreover , similar results were obtained for real sample covariance matrices .the purpose of this note is to derive similar results for the `` non - square '' case where the difference tends to infinity in such a way that the ratio tends to some constant sufficiently quickly .we will show that in this situation , the second - order correlation function of the characteristic polynomial of a complex sample covariance matrix is asymptotically given ( after the appropriate rescaling ) by the sine kernel in the bulk of the spectrum , by the airy kernel at the edge of the spectrum .note that both edges of the spectrum are `` soft '' when .these results are in consistence with results by ben arous and pch , tao and vu , soshnikov , pch , and feldheim and sodin , who obtained similar results for the ( more relevant ) correlation function of the eigenvalues , yet under stronger assumptions on the underlying distributions .ben arous and pch proved the occurrence of the sine kernel in the bulk of the spectrum for a certain class of complex sample covariance matrices with .very recently , tao and vu extended this result to a quite general class of sample covariance matrices , still with .soshnikov and pch established the occurrence of the airy kernel at the upper edge of the spectrum for real and complex sample covariance matrices whose underlying distributions are symmetric with exponential tails ( or at least finite 36th moments ) .recently , feldheim and sodin proposed another approach to obtain these results , which also works for the lower edge of the spectrum .thus , our results add some support to the wide - spread expectation that correlation functions in random matrix 
theory are universal , and subject to very weak moment conditions only .it is well - known that the asymptotic distribution of the global spectrum of the rescaled sample covariance matrix is given ( both in the complex case and in the real case ) by the marenko - pastur distribution of parameter , which has the density where and . note that as well as implicitly depend on , but this dependence will be kept implicit throughout this paperalso , note that are the solutions to the equation unless otherwise mentioned , we will always assume that for all , as , and as , where }0;1{]} ] .( the case where remains bounded is already considered in . )furthermore , we will usually omit the index of the parameters , and also of several auxiliary functions defined below . by proposition [ gf ], we have the integral representation where denotes a contour around the origin .similarly as in , we will show that the main contribution to the contour integral in comes from a small neighborhood of the point . to determine the asymptotic behavior of the values in theorems [ bulk-1 ] and [ edge-1 ], we proceed similarly as in , but we have to address some additional complications arising from the circumstance that . to deal with the this problem , we will use various well - known _ uniform asymptotics _ for the modified bessel function of large positive order ( see chapters 10 and 11 in olver ) . for convenience , we state these results ( and several bounds deduced therefrom ) in section 3 of this paper .we will choose the contour consisting of the following parts , ( see figure [ newcontour ] ) : * , , * , , * , , * , , * , , here is a positive real number which will finally be chosen sufficiently large , and for the bulk of the spectrum and for the edge of the spectrum ..,width=249 ] the rough idea behind the choice of this contour is the following : after the appropriate rescaling , the main contribution to the integral comes from the part , for which we can use the uniform asymptotics for the modified bessel function in the positive half - plane .the other parts are essentially error terms , which can be made arbitrary small by picking and sufficiently large . however ,to prove this , we need rather precise approximations and bounds for the modified bessel function close to the imaginary axis , and this is our main motivation for the choice of the above contour . for the contours and , we can use the uniform bound for the modified bessel function _ near _ the imaginary axis , but away from the turning points .for the contours and , we can use the uniform bounds and for the modified bessel function _ on _ the imaginary axis , including the vicinity of the turning points . in the following proofs, we adopt the convention that all asymptotic bounds may depend on the sequence as well as on the `` shift parameters '' , unless otherwise indicated .a similar remark applies to large positive constants , typically denoted by . after these comments , we turn to the proofs of the main theorems .let be as in theorem [ bulk-1 ] , and let .we assume throughout this proof that is sufficiently large . by ( [ intrep ] ) and the definition of , we have where and is the contour specified above , with .note that implicitly depends on . for any ] .we will show that for any , there exists some such that the following holds : then , by symmetry , similar bounds hold for the integrals along and . 
using these results ,it is easy to see that now , by laplace inversion , we have , for , , ( see p.245 in ) .we therefore get now replace the local shift parameters with and observe that ( which is a simple consequence of ) to obtain . to prove , we write where and since a simple calculation yields ( recall our convention that implicit constants in -terms may depend on , , which are regarded as fixed ) .we first prove . in doing so, we use the notation to denote a bound involving an implicit constant depending also on ( in addition to , ) .let .substituting and in the integral on the left - hand side in , we obtain now , using taylor expansion , we have the following approximations , for : for the second approximation , we have also used the uniform asymptotics for the modified bessel function .indeed , since and for , , it follows from that whence the approximation for . inserting the above approximations into and recalling the definition of yields .we now prove and . for , write with ] .since , we then have the estimates in particular , and show that is uniformly bounded .moreover , the proof of and is therefore reduced to showing that for any , there exists some such that for the first integral in , note that for ] and using , as well as the uniform bound for the modified bessel function , it therefore follows that for all sufficiently large ( depending on ) , where is a positive constant which does not depend on and which may change from step to step in the sequel . combining this with the estimates , , it follows that since for sufficiently large , the right - hand side is clearly bounded above by , this proves . for the second integral in , note that for , we have and from and .thus , we may use the bounds and for the modified bessel function on the imaginary axis .writing with ] , let we will show that for any , there exists some such that the following holds : then , by symmetry , similar bounds hold for the integrals along and . using these results , it is easy to see that substituting , , and shifting the path of integration back to the line ( which is easily justified by cauchy s theorem ) , we obtain making the replacements , and multiplying by , we further obtain by the integral representation for the airy kernel ( see proposition 2.2 in ) , the latter expression is equal to , whence and . to prove , we proceed similarly as in the preceding proof . to begin with , we write where and similarly as above , we then have starting from this representation , the proof is by and large similar to the preceding proof. however , a notable difference is given by the fact the leading - order terms cancel out now , which is why we have to keep track of several additional terms in the asymptotic approximations .we begin with the contour integral for .let .substituting and in the integral on the left - hand side in , we obtain similarly as in the preceding proof , we use the notation to denote a bound involving an implicit constant depending also on ( in addition to , ) .then , putting for abbreviation and using taylor expansion , we have the approximations , for , for the second approximation , we have used the uniform asymptotics for the modified bessel function .indeed , since and for , , it follows from that now , by straightforward expansion , plugging this into the preceding expression and using once again the assumption yields the approximation for . 
inserting the above approximations into and ordering with respect to fractional powers of , we obtain with by the characterizing equation for the edge of the spectrum , we have thus , the terms in the large round brackets cancel out , and the remaining sum can be simplified to recall that at the upper edge and the lower edge of the spectrum , respectively .plugging this into the expression for and simplifying , we obtain and the proof of is complete .the proof of is similar to that of . setting with ] , so we can use and to bound .let be a small constant which will be chosen later , let denote the subset of those $ ] such that , and let denote the complement of this subset . using the stronger bound on and the weaker bound on , we then have , for sufficiently large , here denotes the length of the interval , is a constant which depends only on , and is a constant which may additionally depend on .( of course , both constants may change from occurrence to occurrence as usual . ) clearly , once is fixed , we may make the first term arbitrarily small by choosing sufficiently large .hence , to complete the proof of , it remains to establish an appropriate bound on the second term , on . to this purpose , first note that and therefore for the upper edge of the spectrum , we have , which implies that the interval is empty for sufficiently small and sufficiently large , and is proven .for the lower edge of the spectrum , we have as well as ( as is readily verified ) and therefore , for sufficiently small and sufficiently large , \cap 2 \arcsin \left ( \frac{\xi}{(1-\gamma_\infty)(1 + 2\varepsilon)};\frac{\xi}{(1-\gamma_\infty)(1 - 2\varepsilon ) } \right ) \,.\end{aligned}\ ] ] in particular , for some positive constant depending only on and .moreover , can be made arbitrarily small by picking sufficiently close to zero .thus , is also proven .the proof of theorem 1.3 is complete now .in this section we state some uniform asymptotic approximations for bessel functions from the literature ( see chapters 10 and 11 in olver ) . moreover, we deduce several asymptotic bounds for bessel functions which are sufficient for controlling the error bounds in the preceding proofs .these asymptotic bounds should be well - known , but we have not been able to find explicit statements suiting our purposes in the literature . throughout this section ,we assume that and .by proposition 3.1 we have the following bound on , uniformly in the set : using proposition 3.2 it can be shown that this bound in fact remains valid up to the imaginary axis : fix , and consider with , .we then have the uniform asymptotic approximation .we now use the following asymptotic approximations for the airy function and its derivative , which hold for , ( see chapters 11.1 and 11.8 in olver ) : using the inequalities , , and the fact that , are holomorphic functions , it follows that uniformly in such that and .it can be checked that for , , and for some depending only on .hence , uniformly in , where is defined as in and . inserting this into , it follows that the -bound holding uniformly in . thus , for , we have when and when .a comparison of the definitions and ( together with a careful discussion of the choice of branches ) shows that which completes the proof of the lemma .[ jboundlemma ] there exists a constant such that for all sufficiently large , more precisely , for any , there exists a constant such that for all sufficiently large , these bounds are not the best possible , but they are sufficient for our purposes . 
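the uniform asymptotics collected in this section can be sanity - checked numerically . the following sketch is ours ( python with scipy as an added dependency ) ; it compares the leading - order uniform approximation for $i_\nu ( \nu z )$ on the positive real axis , in the standard form from olver , against the exact value :

```python
import numpy as np
from scipy.special import iv

def iv_uniform(nu, z):
    # leading-order uniform asymptotic for I_nu(nu*z), z > 0 (olver, ch. 10):
    # I_nu(nu z) ~ exp(nu*eta) / (sqrt(2 pi nu) * (1+z^2)^(1/4)),
    # with eta = sqrt(1+z^2) + ln( z / (1 + sqrt(1+z^2)) )
    s = np.sqrt(1.0 + z * z)
    eta = s + np.log(z / (1.0 + s))
    return np.exp(nu * eta) / (np.sqrt(2.0 * np.pi * nu) * np.sqrt(s))

z = 1.3
for nu in (50, 100, 400):
    print(nu, iv(nu, nu * z) / iv_uniform(nu, z))   # ratio tends to 1 as nu grows
```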
in terms of the modified bessel function , they read and respectively .this is the form in which they have been used in the last section .we start from the uniform asymptotic approximation , which we now consider for only .note that is a decreasing function of with for and for .there exists a constant such that latexmath:[\[\begin{aligned } \label{jbound - tmp } for all for all sufficiently large .we now use the bounds which follow from well - known asymptotic approximations for the airy function and its derivative ( see section 11.1 in olver ) . in particular , these bounds imply that and throughout the real line .it therefore follows from that there exist constants such that for all for all sufficiently large . to deduce , use the first term inside the minimum and observe that since for , for , and is an analytic function of with ( see section 11.10 in olver ) ,
we consider the sample covariance matrices of large data matrices which have complex matrix entries and which are non - square in the sense that the difference between the number of rows and the number of columns tends to infinity . we show that the second - order correlation function of the characteristic polynomial of the sample covariance matrix is asymptotically given by the sine kernel in the bulk of the spectrum and by the airy kernel at the edge of the spectrum . similar results are given for real sample covariance matrices .
the challenge in facility location problems is to place service centers or facilities so that demand points are optimally served ( see for example ref . for an overview ) . facilities can be hospitals , supermarkets , fire stations , libraries , warehouses , or any other supply centers providing vital resources to the population living at the demand points ( e.g. , households or cities ) . here we consider the case where the demand points are at regular intervals along a one - dimensional geographic object , such as a road or a river , and where every demand point is a possible location for a facility . the number of people who require the facilities services is assumed to be known at each demand point . this number is typically very heterogeneous across geographic space . depending on the context , there are different strategies for the placement of the facilities . in this article , we concentrate on the -median problem , an important special case , where the objective is to minimize the average distance between a person s demand point and the nearest facility . ( a recent summary of the vast literature on the -median problem can be found in ref . ) . figure [ pmed_intro ] : the -median problem along a line . the population is known at the demand points . in this article , the distance between neighboring demand points is assumed to be constant . facilities will be placed on of these demand points . ( in the figure , . ) their locations are to be determined so that the average distance between a demand point and the nearest facility , weighted by , is minimized . after the facilities have been located , the line can be divided into segments so that the -th segment corresponds to the service region of the -th facility . let us call the facility locations from left to right . these positions are chosen among the demand points , which are equidistant ( i.e. , for ) along a line ( see fig . [ pmed_intro ] ) . if the population at is denoted by , the -median problem consists of minimizing the cost function . since the denominator is constant for a given location problem , we could in principle directly minimize the numerator in eq . [ pmedian ] and ignore the denominator . we have decided to keep the denominator so that equals the average distance . can then be more easily compared across different location problems .
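for concreteness , the cost function of eq . [ pmedian ] is straightforward to evaluate ; a minimal python sketch ( our variable names , not the paper s notation ) :

```python
import numpy as np

def pmedian_cost(pop, facilities):
    # average distance to the nearest facility, weighted by population;
    # demand points sit at unit spacing, positions 0 .. n-1
    x = np.arange(len(pop))
    d = np.abs(x[:, None] - np.asarray(facilities)[None, :]).min(axis=1)
    return float((pop * d).sum() / pop.sum())

pop = np.array([5, 1, 1, 9, 2, 2, 7, 1])   # hypothetical populations p_j
print(pmedian_cost(pop, [0, 3, 6]))         # cost of one candidate placement
```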
because only trips to the nearest facility play a role in eq . [ pmedian ] , the line along which the demand points are located can be partitioned into segments or service regions . demand points belong to the same segment if and only if they share the same closest facility , see fig . [ pmed_intro ] . the length of facility s service region is given by . we will now take a closer look at the relation between and the population density around facility . at first sight , it is plausible that the spatial density of facilities should follow the same trend as the population density : where there are more people , there should be proportionately more facilities . however , as we will see shortly , the -median solution does not follow this rule , which would give every facility an equal number of customers . instead , facilities are less abundant per capita in the high - demand regions than in the low - demand regions . figure [ cont_approx ] : ( a ) if the population ( gray histogram ) varies little between neighboring demand points , it can be approximated by a continuous function ( black curve ) . ( b ) the function is defined as the length of the segment covering position . strictly speaking , is a piecewise constant function . however , if the spatial variations in are sufficiently small , can be approximated by a continuous function ( indicated by the dotted curve ) . for a spatially heterogeneous population distribution , it is difficult to deduce this general trend directly from eq . [ pmedian ] . with certain approximations , however , the problem becomes analytically tractable ; essentially , we translate the line of reasoning developed in ref . and for the two - dimensional -median problem to the one - dimensional case . first we define the population density , which is the number of people per unit length in the vicinity of . equation [ pmedian ] can be rewritten as where we have used the new notation to replace sums by integrals . if we allow to be piecewise constant , this expression is still exact , but later it will be more convenient to approximate with a continuous function ( fig . [ cont_approx]a ) . next we define to be the length of the segment serviced by the facility closest to ( see fig . [ cont_approx]b ) . the average distance from facility to a point inside its service region is equal to , where depends on the exact location of the facility . for example , if is close to the center of the segment , . in the spirit of a mean - field approximation , we will now assume that varies little over the size of a segment . then we can replace the exact distance , , in the numerator of eq . [ pmed_cont_approx ] with its average . the index was dropped in eq . [ pmed_mf_approx ] assuming that most facilities will be close to the center of their service region so that is approximately constant . unlike in eq . [ pmedian ] , the locations no longer appear explicitly in eq . [ pmed_mf_approx ] . instead we have to find the function that minimizes subject to the constraint that there are facilities . this constraint can be expressed as . introducing a lagrange multiplier , the problem is equivalent to finding the zero of the functional derivative , solved by . the lagrange multiplier can be eliminated by inserting this expression into eq . [ constraint ] . after some algebra , we arrive at eq . [ sqrt_scaling ] .
\label{sqrt_scaling}\ ] ] the lengths of the service regions are thus inversely proportional to the square root of the population density .the spatial density of facilities increases , but the per - capita density decreases with growing population .the square - root scaling is a compromise providing most services where they are most needed , namely in the densely populated regions , but still leaving sufficient resources in sparsely populated regions where travel distances are longer .this result implies an economy of scales : in crowded cities fewer facilities per capita can supply a larger population than in rural areas .if facilities and demand points are not restricted to be along a line , but can be placed in two - dimensional space , the scaling exponent is instead of ( see section [ conclusion ] ) .however , economies of scale are also predicted in two dimensions .empirical studies have indeed reported this effect for certain classes of real facilities .the calculation in the previous section assumes that the population density varies little within a service region . as we can see from eq .[ sqrt_scaling ] , this implies that the segment length is also a smooth function ( fig .[ cont_approx]b ) .real census data , however , typically reveal strongly varying populations even on small spatial scales . in fig .[ test_sets]a d , we show population numbers near three us interstate highways and the navigable mississippi river .the data were generated from the us census of the year 2000 .first , interstates 5 , 10 , 90 and the mississippi river were parameterized by arc length and markers were placed at regular 1-km intervals. then census blocks within 10 km of the highways or the mississippi were identified and their population assigned to the nearest kilometer marker . as is clear from fig .[ test_sets]a d , neither of the four populations is a smooth function . whether the assumptions behind eq .[ sqrt_scaling ] are valid , is questionable , but it turns out that the scaling law for the service regions still holds with surprising accuracy . as a function of position along ( a ) interstate 5 , ( b ) 10 , ( c ) 90 , ( d ) the navigable part of the mississippi river .the small squares below the -axes indicate the optimal -median positions of facilities .( e ) map of the roads , the river , and the facility locations ., width=325 ] versus the mean population in this service region .lines indicate least - squares fits to eq .[ log - log - scaling ] .scaling is in good agreement with the analytic prediction .,width=325 ] to compute the scaling exponent , facilities are placed on each of the four test data sets .the optimal locations are calculated with the efficient algorithm of ref . .their positions along the roads and the river in geographic space are shown in fig .[ test_sets]e .the segment lengths are calculated for each facility . in fig .[ scaling_exact_opt ] , is plotted versus the mean value of inside the segment , denoted by . and , of the segment are half - integers , is defined as if or is an integer , or is added to the sum ( i.e. , half of the population is assigned to the facility on the right , half to the equally distant facility on the left ) . ]ordinary least - squares fits of to the data yield slopes ( i-5 ) , ( i-10 ) , ( i-90 ) , and ( mississippi river ) , close to the prediction of eq .[ sqrt_scaling ] .the correlations are strong ; is consistently bigger than . 
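the least - squares fits quoted above are straightforward to reproduce ; the sketch below fits $\log_{10} s_i$ against $\log_{10}$ of the mean segment population exactly as described , with eq . [ sqrt_scaling ] predicting a slope of $-1/2$ . the function name is hypothetical .

```python
import numpy as np

def fit_scaling_exponent(seg_lengths, seg_mean_pop):
    """ordinary least-squares slope of log10(s_i) versus log10(mean
    population per demand point in segment i); eq. [sqrt_scaling]
    predicts a slope of -1/2."""
    x = np.log10(np.asarray(seg_mean_pop, dtype=float))
    y = np.log10(np.asarray(seg_lengths, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept
```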
assuming that the residuals are log - normally distributed , the predicted value is in all cases within the 95% confidence intervals .thus , the equivalent of eq .[ sqrt_scaling ] , , obtained by replacing the continuous variables and by their discrete counterparts and , is a good approximation .this observation demonstrates that scaling at the exact -median configuration is robust even in the presence of strong spatial fluctuations .that the square - root scaling of the service regions is discernible even for realistically heterogeneous input , establishes a potential link to previous empirical work .data collected in ref . suggest , at least for certain classes of facilities , a sublinear dependence of service facilities on population numbers .it has been conjectured that the -median model or a generalization thereof might explain this trend .admittedly , we are looking in this article at a simplified linear geometry . yetthat sublinear scaling is robust even for substantially noisy input , might be viewed as supporting evidence for this conjecture . however , there is more to the problem than first meets the eye . although it is mathematically convenient to assume that facilities are placed to minimize an objective function such as eq .[ pmedian ] , it is far from clear that the exact minimum will be achieved in reality .decisions about facility locations are probably more haphazard in real life .for example , site selections may be swayed by political interests , short - term fluctuations in property prices , or based on an incomplete knowledge of the actual demand .even if the best effort is made to reach the global optimum , `` accidents of history '' may keep the facility locations trapped in a costlier local optimum .it seems overly optimistic to draw conclusions about the scaling of real service regions only from the best of all solutions . the available literature for real facility distributions rather than the numerically optimal ones discussed in sec .[ exact_numerical ] also justifies cautious skepticism , as some significant differences to the -median result have been observed in reality , albeit in two dimensions .how many facility configurations with costs near , but not necessarily equal to , the global minimum exist ?there is no simple way to answer this question .although the algorithm of ref . can find the global optimum very efficiently , it does not provide information about non - optimal solutions .scanning all possible configurations is out of the question because their number is too vast . even for our smallest test dataset ( i-5 ) there are different ways to locate the facilities .the situation is reminiscent of many - particle systems in physics where one wishes to calculate the large number of micro - states at a certain energy level out of an even larger number of all conceivable micro - states . 
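the size of the configuration space invoked above is the binomial coefficient $\binom{N}{p}$ , the number of ways to choose $p$ facility sites among $N$ demand points . a small sketch evaluates its logarithm safely via the log - gamma function ; the example numbers are placeholders , since the actual $N$ and $p$ for the i-5 data set are not legible in this copy .

```python
from math import lgamma, log

def log10_num_configs(n_sites, p_facilities):
    """log10 of binomial(N, p), the number of distinct facility placements."""
    ln = (lgamma(n_sites + 1) - lgamma(p_facilities + 1)
          - lgamma(n_sites - p_facilities + 1))
    return ln / log(10)

# illustrative only -- not the article's actual N and p for i-5:
print(log10_num_configs(2000, 50))   # roughly 1e2, i.e. ~10^100 configurations
```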
in that context, statistical mechanics has developed many powerful numerical tools .we will build on this analogy in order to estimate the number of non - optimal facility locations .let us call the number of facility locations with costs between and .the function plays the role of the `` density of states '' in statistical mechanics .as we will see , increases very rapidly as exceeds the minimum , so that it will be more convenient to work with its logarithm , the entropy .our aim is to calculate with monte carlo simulations .several methods exist ; here we apply the wang - landau algorithm .first , the range of possible costs is divided into small discrete intervals of length .then a random walk through the set of facility locations is performed and we count , in the form of a histogram , how often each interval is visited . the main idea behind the wang - landau algorithm is to bias the random walk in such a manner that all intervals are visited equally often .for such a `` flat histogram '' we obtain equally good statistics for all intervals , an advantage when is the basis of further calculations .we describe details of our implementation in app .[ wang_landau ] .( i.e. , the logarithm of the density of states ) versus the cost .the inset shows the same four curves as the main panel , but with rescaled abscissa where is the number of demand points.,width=325 ] from calculations for four different empirical population distributions ( fig .[ entropy ] ) it is clear that is singular at , the smallest possible cost .thus , increases enormously in the vicinity of and the density of states grows even more rapidly .the results for four different empirical population distributions suggest that follows approximately the same curve ( inset of fig .[ entropy ] ) if regarded as a function of , where is the total number of demand points .therefore , it appears to be a universal feature that for all realistic populations a large number of different possible configurations must be considered if the assumption of optimality is relaxed .this observation raises the question : can the scaling relation of eq .[ sqrt_scaling ] still be observed if facility locations are not exactly optimal , but are among the numerous configurations achieving almost but not exactly ?if we randomly select a facility configuration with a cost in the interval ] .it is also conceivable that not all configurations within this range are equally likely , so that the best - fit exponents may behave differently .we may also replace the -median model by a different optimization principle ( e.g. , competitive facility location such as the hotelling model ) which can change the exponent at the optimum .however , we believe that a steep increase in the number of possible configurations is a generic tendency of most models that relax the constraint of strict optimization even to a small degree .the author thanks m. e. moses , s. banerjee , b. blasius , and h. youn for stimulating discussions .the author acknowledges support from imperial college .the wang - landau algorithm is designed to calculate the density of states for in some interval ] with a probability ^{-1} ] .simultaneously we maintain a histogram , which counts how often a cost between and is encountered during the course of a random walk . 
at the beginning $s(c) = 0$ for all $c$ . the random walk through the set of facility locations proceeds as follows . starting from an arbitrary initial configuration , a new set of facility positions is generated by moving one randomly chosen facility to a neighboring demand point ; a proposed move is admissible only if the target site is free and lies inside the domain , i.e. $r_1 \neq q_1$ and $r_i - 1 \neq r_{i-1}$ for $i = 2 , \ldots , p$ for moves to the left , and $r_i + 1 \neq r_{i+1}$ for $i = 1 , \ldots , p-1$ and $r_p \neq q_N$ for moves to the right . in addition , a uniform random number decides whether an admissible move is accepted : following the standard wang - landau rule , a move to a configuration with cost $c'$ is accepted with probability $\min\{1 , \exp[s(c) - s(c')]\}$ , which biases the walk toward cost intervals visited rarely so far . the runs are carried out in cost windows which become smaller as $c$ approaches $c_\text{min}$ . we interpolate between all estimates of the density of states , which all differ from the real one by a multiplicative constant , with a straightforward least - squares algorithm to obtain a single curve for the entropy over all measured values of $c$ . there is exactly one constant left to be fixed because the wang - landau algorithm can calculate the entropy only up to an additive constant . we adopt the normalization that the entropy vanishes at the extrapolated maximum .
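for concreteness , a compact sketch of the procedure just described follows . it reuses `pmedian_cost` from the earlier sketch ; the cost window , bin count , sweep length , flatness criterion and the choice $f \to \sqrt f$ ( i.e. halving $\ln f$ ) are standard wang - landau ingredients and stand in for whichever settings the original runs used .

```python
import numpy as np

def wang_landau(population, p, c_lo, c_hi, n_bins=200, sweeps=10_000,
                ln_f_final=1e-6, flat_tol=0.8, seed=0):
    """wang-landau estimate of the entropy s(c) over cost bins in
    [c_lo, c_hi]; uses pmedian_cost from the earlier sketch."""
    rng = np.random.default_rng(seed)
    n = len(population)
    edges = np.linspace(c_lo, c_hi, n_bins + 1)
    s = np.zeros(n_bins)            # entropy estimate, up to a constant
    hist = np.zeros(n_bins)
    r = np.sort(rng.choice(n, size=p, replace=False))  # initial configuration

    def bin_of(cost):
        return int(np.clip(np.searchsorted(edges, cost) - 1, 0, n_bins - 1))

    b = bin_of(pmedian_cost(r, population))
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(sweeps):
            i = rng.integers(p)
            r_new = r.copy()
            r_new[i] += rng.choice((-1, 1))   # hop to a neighboring site
            # admissible: inside the domain, facilities distinct and ordered
            if 0 <= r_new[i] < n and np.all(np.diff(r_new) > 0):
                b_new = bin_of(pmedian_cost(r_new, population))
                if rng.random() < np.exp(min(0.0, s[b] - s[b_new])):
                    r, b = r_new, b_new
            s[b] += ln_f                      # update even on rejection
            hist[b] += 1
        seen = hist > 0
        if hist[seen].min() > flat_tol * hist[seen].mean():
            hist[:] = 0
            ln_f *= 0.5                       # the standard f -> sqrt(f) step
    return edges, s
```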
the -median problem is a common model for optimal facility location . the task is to place facilities ( e.g. , warehouses or schools ) in a heterogeneously populated space such that the average distance from a person s home to the nearest facility is minimized . here we study the special case where the population lives along a line ( e.g. , a road or a river ) . if facilities are optimally placed , the length of the line segment served by a facility is inversely proportional to the square root of the population density . this scaling law is derived analytically and confirmed for concrete numerical examples of three us interstate highways and the mississippi river . if facility locations are permitted to deviate from the optimum , the number of possible solutions increases dramatically . using monte carlo simulations , we compute how scaling is affected by an increase in the average distance to the nearest facility . we find that the scaling exponents change and are most sensitive near the optimum facility distribution . quantitative studies in many branches of science frequently reveal scaling laws where two sets of observables are related by a power law over several orders of magnitude . examples range from astronomy ( e.g. , kepler s third law ) to biology where , for example , kleiber s law states that the metabolic rates of mammals scale approximately as the three - quarter power of their body mass . here we look at a problem from economic geography , the relationship between the spatial distribution of a population and the distribution of service establishments ( e.g. , post offices or gas stations ) . physicists typically enjoy the luxury of measuring scaling exponents in carefully designed and repeatable experiments . in biology and the social sciences , by contrast , the exact circumstances of an experiment are generally more difficult to control and to repeat . as a consequence , power - law exponents are frequently obfuscated by noise in the measurement and in the process generating the scaling law itself . the remaining uncertainty can lead to heated debates if , for example , the scaling exponent in kleiber s law is not truly instead of . the available geographic data for the distribution of service establishments leave similar room for interpretation so that various scaling laws have been proposed . facing such controversies , theorists often try to calculate the `` correct '' exponent from deterministic models . one recurring idea is that scaling should emerge naturally from some appropriate model if an objective function ( energy dissipation , earnings , travel distance , etc . ) is optimized . this approach has led to elegant theories , but it leaves one key problem unaddressed . knowing that evolutionary biology , human decisions , or other processes shaping the available empirical data are intrinsically stochastic , there is in principle a huge variety of outcomes . how many different solutions are conceivable ? how close to optimal does the observed solution need to be in order to exhibit the theoretically predicted scaling exponent ? here we study a model which serves as an example of computational techniques suited to address these questions . the model is the -median problem of optimal facility location along a strongly heterogeneously populated line ( e.g. , a transcontinental highway ) . the task is to place facilities along the line and find the configuration that minimizes an objective function , in this case the average distance to the nearest facility . 
ignoring small - scale heterogeneity in the population , an analytic calculation predicts a simple scaling law for the length of the line segments served by different facilities . the exact optimum locations can be computed numerically for realistic input data and are in good agreement with the analytic prediction . using techniques from statistical physics , we calculate the number of possible facility locations for non - minimal costs . with monte carlo simulations we will then quantify how deviations from the optimum make it less likely to find the theoretical exponent .
a large number of empirical studies has shown that financial time series exhibit statistical features strongly departing from the gaussian behavior .this finding dates back to the work of mandelbrot , whose attention was mainly focused in recognizing realizations of stable processes , and to the analysis of fama concerning the long tailed nature of the dow jones industrial average single components . since these fundamental contributions the modeling of financial returns has considerably grown , and very heterogeneous models , able to reproduce the degree of asymmetry and the excess of kurtosis of the measured distributions , have been proposed .a non exhaustive list includes approaches developing from specific distributional assumptions , as it is the case of the lvy flights , the generalized student- or tsallis distributions and the exponential one .past empirical analysis have also proved the existence of non trivial scalings of higher order correlations between returns at different times , pointing toward the existence of a secondary stochastic process , as fundamental as that of the price , governing the volatility of returns .many effective mechanisms allowing to reproduce the observed correlation structures , where the stochastic nature of the volatility plays a central role , were proposed .discrete time models include arch - garch processes , and multifractal models , inspired by the cascades originally introduced by kolmogorov in the context of turbulent flows . as far as continuous time approaches are concerned , fractional brownian motion and stochastic volatility models have been extensively analyzed . for a review of the latter approach we suggest and remind the reader to the discussion in section [ sec : model ] .focusing on the continuous time stochastic volatility framework , in this work we aim at reproducing many of the above mentioned facts which are generally accepted as universal evidences , shared among different markets in different times .the structure of the paper is the following . after introducing a general class of stochastic models driving the evolution of the volatility , in section [ sec : model ] we concentrate on a linear one able to reproduce an inverse gamma distribution in the long run . in section[ sec : fattails ] we detail the derivation of the moments of the probability density function of the returns over the time lag , taking into account explicitly the time at which the process has started and deriving rigorously the stationary limit of the volatility .we describe the mechanism through which the power law distribution of induces fat tails on for all the finite time lags . in sections [ sec :lev ] and [ sec : autoc ] we derive the analytical expressions of the leverage correlation and the volatility autocorrelation functions respectively . in section [ sec :estimation ] we propose a systematic methodology for estimating the model parameters , and we apply it to the time series of the daily returns of the standard & poor 500 index . the relevant conclusions , along with possible perspectives , will be summarized in section [ sec : concl ] .we consider a model where the asset price is a function of the stochastic centred log - return and is a constant drift coefficient .we assume that can be modeled with the following stochastic differential equation ( sde ) where is the instantaneous volatility of the price and is the increment of a standard wiener process . since , from the above assumption we have that and for all . 
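before specializing the volatility dynamics , it is useful to have a simulation sketch at hand . the euler - maruyama scheme below assumes the linear multiplicative form $dY_t = (aY_t + b)\,dt + cY_t\,dW_{1,t}$ with $\sigma_t = Y_t$ , anticipating the model introduced in the next subsection ; since the inline symbols are stripped in this copy , that identification and the parameter names are our assumptions .

```python
import numpy as np

def simulate_returns(a, b, c, rho, y0, n_steps, dt=1.0 / 252.0,
                     rng=np.random.default_rng(1)):
    """euler-maruyama sketch of
        dX_t = sigma_t dW_{2,t},   sigma_t = Y_t   (assumed identification)
        dY_t = (a Y_t + b) dt + c Y_t dW_{1,t},
    with corr(dW_1, dW_2) = rho; returns the increments dX."""
    y = float(y0)
    dx = np.empty(n_steps)
    sq = np.sqrt(dt)
    for t in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1.0 - rho * rho) * rng.standard_normal()
        dx[t] = y * sq * z2                    # return increment
        y += (a * y + b) * dt + c * y * sq * z1
        y = max(y, 1e-12)                      # keep the volatility positive
    return dx
```

a mean - reverting drift ( negative coefficient on $Y_t$ in this parametrization ) together with $b > 0$ is what yields a stationary inverse - gamma law for the driving process .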
in the context of stochastic volatility models ( svms )the instantaneous volatility is assumed to be a function of an underlying driving process , i.e. .typically , the dynamics chosen for corresponds to a particular case of the following general multiplicative diffusion process with suitable constraints on the parameters , in order to ensure the well definiteness of the process .moreover , the two standard wiener processes are possibly correlated with ] satisfies an integral volterra equation of the second kind , whose derivation is detailed in appendix [ app : leverage ] , and the final expression of the leverage correlation reads \times \right.\\ \left.\exp{\left[\left ( 2 a + \frac{3}{2 } c \right)\tau\right ] } -\frac{b}{a+c } \,\mu_2(t ) \exp{\left[\left ( a+\frac{c}{2 } \tau \right)\right]}\right\ } \ , , \label{eq : lev3}\end{gathered}\ ] ] which inherits the explicit dependence on from the moments of . in order to compare the previous expression with real data , following the discussion at the beginning of section [ sec : fattails ], we take the limit , so that we can replace and with their asymptotic values , whose general expression , valid for , is substitution in eq .reveals that the first term vanishes , and the leverage correlation reduces to where the leverage decay time reads so , the model correctly forecasts the exponential decay of and its vanishing for negative correlation times .the volatility autocorrelation provides an estimate of how much the volatility at time depends on the value it had at time and it is usually defined as \ , { \mathrm{var}}[dx_{t+\tau}^2]}}\ , .\label{eq : acorrdef}\ ] ] it is a well known stylized fact that decays with multiple time scales and in particular , it shows a long range memory effect , vanishing over a time scale of the order of few years for stock indexes . for the model under investigation ,the volatility autocorrelation can be computed exactly too .recalling again the novikov theorem and the fact that , the correlation entering the numerator of becomes } \,dw_{1,t } \right\rangle } \,dt^2 \,,\end{gathered}\ ] ] but , due to the presence of , the second term results to be of order and therefore it can be discarded .the exact expression of the autocorrelation function can be obtained as explained in appendix [ app : autocorr ] , leaving us with +\exp{\left[(a+c)\tau\right ] } \left [ \mu_4(t ) \!+\ ! \frac{2b}{a+c } \mu_3(t ) \!-\ !\mu_2(t)\left ( \mu_2(t ) \!+\ !\frac{2b}{a+c } \mu_1(t ) \right ) \right ] \right\}\,,\ ] ] where the denominator of eq . has been approximated with =c^2\left [ 3\mu_4(t)-\mu_2(t)^2 \right]\,dt^2 $ ] in view of the stationary limit for .after replacing the moments with their asymptotic expressions we end with \ , , \label{eq : acorrfinal}\ ] ] where the coefficients read and we also defined the two volatility autocorrelation time scales as at this point it is crucial to notice that in deriving eq .and eq . 
we assumed implicitly that the moments of up to the order do converge asymptotically .recalling the expression of the shape parameter in , we see this assumption imposes which has to be interpreted as a consistency relation for the model .this constraint imposes the following strict ordering between the time scales of the model where the second inequality for follows from the convergence of third moment of which requires .the expression obtained for fails to capture the persistence of this correlation identified in several analysis reviewed in .the lacking of power law scaling would not be , in principle , a serious drawback as far as one of the two time scales involved in was sufficiently long .however , the ordering , which is peculiar of the considered model , makes these scales too close each other and the volatility autocorrelation to decay as fast as , an undesired feature shared with other models , such as the stein - stein one .the persistence of can be accounted for introducing a non linear volatility , as it is for the exponential ornstein - uhlenbeck model , or coupling a third stochastic equation driving the dynamics of the long run value of as in .a further possibility to induce a non exponential time decay would be to consider a non linear drift term for the dynamics of , even though the analytical tractability of the present model will not be preserved .now we provide a systematic methodology for estimating the model parameters , which are the constants , , entering the dynamics of , plus the correlation coefficient .we perform the estimation over the standard & poor 500 ( s&p500 ) index daily returns from 1970 to 2010 , approximating with where ( one trading day ) . taking into account that is independent of and that is distributed accordingly to a folded normal law , the following relations hold for the model the constants and can be measured directly from the data , providing us an estimation of the ratio through the relation the value of these quantities extracted from the series of the daily returns of the s&p500 indexare reported in table [ tab : spotvalues ] .it is crucial to observe that the value obtained for the ratio is compatible with the constraint , supporting the consistency of our model and the convergence of the volatility autocorrelation .moreover , the same ratio provides an estimate of ( see eq . )implying for the order of the highest converging moment a value ; consequently , relation indicates the following range for the tail index of the leverage correlation provides a way to obtain the two further relations needed to fix the four free parameters of the models .indeed , a two parameters fit of the function gives estimates for the time scale and for the limit with the results reported in table [ tab : fitlev ] and fig .[ fig : levfit ] . cccl estimators & & + a & & 0.1457 & + b & & 0.0295 & + c & & 0.0107 & + & & 1.7895 & + and . 
] in particular , the value obtained for the leverage time scale , , and for its amplitude are consistent with those quoted in past analysis of different stock indexes such as the dow jones industrial average , and confirm the short range nature of this effect .ccrl estimators & & + & & 0.0864 & yr + & & -30.9515 & + at this point all the parameters can be recovered through the following relations ^{-1 } \\ a & = c\ , d \\b & = -\frac{a+c}{\sqrt{c } } \,\frac{c}{b } \\\label{eq : paramsfinalrho } \rho & = - \frac { b \,(a+c)}{a \,(2 a + c ) } \,{\mathcal{l}}(0^+ ) \,.\end{aligned}\ ] ] ccrl parameter & & + & & & + & & & + & & & + & & & + the final results , reported in table [ tab : parsfinal ] , show a negative correlation coefficient , in agreement with the known leftward asymmetry of daily return distributions .moreover , our calibration provides for the relaxation time of the volatility process a finite value , implying that , from a practical point of view , the limit is equivalent to .the fitted values of and provides a good description of real data , as shown in fig .[ fig : levfit ] ; on the other hand , fig .[ fig : acorrth ] shows that the theoretical volatility autocorrelation for the estimated values of the parameters , eq . , does not capture the long range persistence of the empirical volatility , as expected from the constraints , while it describes correctly the exponential decay for small values of .finally , it is important to compare the return pdf predicted by the model with the data sample from which the model parameters were estimated . since we model the return dynamics for increasing , it is even more important to asses to which extent the diffusion process is able to capture the scaling properties of the empirical distribution over different time horizons . 
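the multi - horizon comparison just described can be carried out with a short script : aggregate the daily returns into $n$-day returns and set the simulated histograms , generated with `simulate_returns` from the earlier sketch and the calibrated parameters , against the empirical ones . the bin choices and horizons below are illustrative .

```python
import numpy as np

def n_day_returns(daily, n):
    """non-overlapping aggregation of daily returns to an n-day horizon."""
    m = (len(daily) // n) * n
    return np.asarray(daily[:m], dtype=float).reshape(-1, n).sum(axis=1)

def compare_horizons(daily_empirical, a, b, c, rho, y0, horizons=(1, 5, 25)):
    """model pdfs versus aggregated empirical pdfs; the simulated path
    comes from simulate_returns in the earlier sketch."""
    sim = simulate_returns(a, b, c, rho, y0, n_steps=len(daily_empirical))
    for n in horizons:
        emp = n_day_returns(daily_empirical, n)
        mod = n_day_returns(sim, n)
        bins = np.linspace(min(emp.min(), mod.min()),
                           max(emp.max(), mod.max()), 60)
        h_emp, _ = np.histogram(emp, bins=bins, density=True)
        h_mod, _ = np.histogram(mod, bins=bins, density=True)
        l1 = np.sum(np.abs(h_emp - h_mod)) * (bins[1] - bins[0])
        print(f"{n}-day horizon: L1 distance between pdfs = {l1:.3f}")
```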
at this aim , with the parameters fixed from the daily s&p500 series , we reconstruct the theoretical pdfs simulating the process at different time scales , , and we compare them with the corresponding empirical distributions obtained aggregating the daily returns .this comparison is shown in fig .[ fig : linpdf ] and fig .[ fig : linlogpdf ] .the daily distribution is very well reproduced by the theoretical pdf , which is able to fully capture the leptokurtic nature of the daily data .the plots also confirm that the diffusive dynamics , once the parameters have been fixed at the daily scale , follows closely the evolution of the empirical curves for larger .in particular , it captures the progressive convergence in the central region to a distribution with vanishing skewness and kurtosis .in this work , we have introduced a class of svms where the volatility is driven by the general process with multiplicative noise analyzed in detail in .more specifically , we focused on the set of parameters resulting in an inverse gamma stationary distribution for the process .we provided an analytical characterization of the moments of the return distribution , revealing the role played by the power law behavior of the inverse gamma in the emergence of fat tails .nevertheless , even though the highest order moments of diverge for every time lag , the analytical expressions we obtained reveal the vanishing of both the skewness and the kurtosis , in agreement with the normality of returns for long horizons .as far as the estimation procedure is concerned , it is worth noticing that we do not exploit directly the statistical properties of the instantaneous volatility which is an hidden process , but on the contrary we infer the inverse gamma parameters from well established robust stylized facts holding at the daily scale .indeed our model correctly predicts zero autocorrelation for the returns , and the short range exponential decay of the leverage .the persistence of the volatility autocorrelation over yearly horizons is not captured , and in this perspective we would like to explore the possibility of coupling a third sde in the same spirit of .moreover , we expect that relaxing the time homogeneity of the processes , as done in , we may induce time scalings more general than the exponential one .we also expect the proposed dynamics to be a good candidate to describe the price and volatility dynamics even at higher frequencies .this belief is supported by the empirical analysis discussed in the literature concerning the statistical properties of volatility proxies for intra - day data .a further perspective would be to explore possible ways to characterize analytically the pdf associated to the process or its characteristic function .this task requires to solve the fokker - planck equation for the pdf or its equivalent version in the fourier space , analogously to what has been done in for the heston case .such a result would also allow for an application of the model in the context of market risk evaluation , possibly exploiting efficient fourier methodologies such as those proposed in .here we report the explicit expressions of the coefficients entering the expansion of the moments of for the cases and . 
they were used to plot the analytical curves in fig .[ fig : t0scaling ] .where the coefficients and , entering the expansion of the moments of , readafter expressing in terms of its integral solution form to , the function can be rewritten in the form taking into account that for we can always split as the function becomes where we changed the variable of integrations to .since the process is normally distributed with zero mean and variance , and recalling the expression of the gaussian characteristic function , , we can write application of the novikov theorem also gives where we expressed the wiener variation in terms of a gaussian white noise as . replacing the previous expressions in eq .we conclude that has to satisfy which is a volterra equation of the second kind , whose solution leads to eq . .with reference to the model , the cross correlation can be computed exactly . provided to express as integral solution from to is straightforward to check that satisfies the following equation which is an ode provided that the correlation has been computed at the lower order . in particular , for the case , we need the following correlation whose substitution in eq . provides the solution we wish to acknowledge the anonymous referee for fruitful comments .we sincerely thank guido montagna for useful discussions and comments about this work , and we also gratefully acknowledge the continuous support over past years of oreste nicrosini .
in this work we address the statistical characterization of a linear stochastic volatility model featuring an inverse gamma stationary distribution for the instantaneous volatility . we detail the derivation of the moments of the return distribution , revealing the role of the inverse gamma law in the emergence of fat tails , and of the relevant correlation functions . we also propose a systematic methodology for estimating the model parameters , and we describe the empirical analysis of the standard & poor 500 index daily returns , confirming the ability of the model to capture many of the established stylized facts as well as the scaling properties of the empirical distribution over different time horizons .
the recent massive work in the area of compressed sensing , surveyed in , rigorously demonstrated that one can algorithmically recover sparse ( and , more generally , compressible ) signals from incomplete observations .the simplest model is a -dimensional signal with a small number of nonzeros : such signals are called -sparse .we collect nonadaptive linear measurements of , given as where is some by measurement matrix .we then wish to efficiently recover the signal from its measurements .a necessary and sufficient condition for exact recovery is that the map be one - to - one on the set of -sparse vectors .cands and tao proved that under a stronger ( quantitative ) condition , the sparse recovery problem is equivalent to a convex program and therefore is computationally tractable .this condition is that the map is an almost isometry on the set of -sparse vectors : a measurement matrix satisfies the _ restricted isometry condition _ ( ric ) with parameters for if we have under the restricted isometry condition with parameters , the convex program exactly recovers an -sparse signal from its measurements .the restricted isometry condition can be viewed as an abstract form of the uniform uncertainty principle of harmonic analysis ( , see also and ) .many natural ensembles of random matrices , such as partial fourier , bernoulli and gaussian , satisfy the restricted isometry condition with parameters , provided that see e.g. section 2 of and the references therein . therefore , a computationally tractable exact recovery of sparse signals is possible with the number of measurements roughly proportional to the sparsity level , which is usually much smaller than the dimension .an important alternative to convex programming is greedy algorithms , which have roots in approximation theory .a greedy algorithm computes the support of iteratively , at each step finding one or more new elements ( based on some `` greedy '' rule ) and subtracting their contribution from the measurement vector .the greedy rules vary .the simplest rule is to pick a coordinate of of the biggest magnitude ; this defines the well known greedy algorithm called orthogonal matching pursuit ( omp ) , known otherwise as orthogonal greedy algorithm ( oga ) .greedy methods are usually fast and easy to implement , which makes them popular with practitioners .for example , omp needs just iterations to find the support of an -sparse signal , and each iteration amounts to solving one least - squares problem ; so its running time is always polynomial in , and .in contrast , no known bounds are known on the running time of as a linear program . future work on customization of convex programming solvers for sparse recovery problems may change this picture , of course . for more discussion ,see and .a variant of omp was recently found in that has guarantees essentially as strong as those of convex programming methods .this greedy algorithm is called regularized orthogonal matching pursuit ( romp ) ; we state it in section [ s : stable ] below . under the restricted isometry condition with parameters , romp exactly recovers an -sparse signal from its measurements . summarizing ,_ the uniform uncertainty principle is a guarantee for efficient sparse recovery ; one can provably use either convex programming methods or greedy algorithms ( romp ) . _a more realistic scenario is where the measurements are inaccurate ( e.g. contaminated by noise ) and the signals are not exactly sparse . 
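before turning to the noisy setting , we include a working sketch of romp . the identification and regularization steps below ( take the $n$ largest coordinates of the proxy $u = A^* r$ , then keep the maximal - energy band of comparable magnitudes , $|u_i| \le 2 |u_j|$ ) follow the published description of the algorithm ; the stopping - rule details and the tolerance are our own practical additions .

```python
import numpy as np

def romp(A, y, n, tol=1e-10):
    """regularized orthogonal matching pursuit (sketch).
    A : m-by-d measurement matrix, y : measurements, n : target sparsity."""
    m, d = A.shape
    I = np.array([], dtype=int)                # identified support
    r = y.astype(float).copy()
    for _ in range(n):                         # at most n iterations
        if np.linalg.norm(r) <= tol or len(I) >= 2 * n:
            break
        u = A.T @ r                            # proxy / observation vector
        J = np.argsort(np.abs(u))[::-1][:n]    # n largest coordinates
        J = J[np.abs(u[J]) > 0]
        if len(J) == 0:
            break
        mags = np.abs(u[J])                    # descending by construction
        # regularization: consecutive band with comparable magnitudes
        # (largest <= 2 * smallest) carrying the most energy
        best, J0, i = -1.0, J[:1], 0
        while i < len(J):
            j = i
            while j < len(J) and mags[i] <= 2.0 * mags[j]:
                j += 1
            e = float(np.sum(mags[i:j] ** 2))
            if e > best:
                best, J0 = e, J[i:j]
            i = j
        I = np.union1d(I, J0).astype(int)
        coef, *_ = np.linalg.lstsq(A[:, I], y, rcond=None)
        r = y - A[:, I] @ coef                 # new residual
    x = np.zeros(d)
    if len(I):
        coef, *_ = np.linalg.lstsq(A[:, I], y, rcond=None)
        x[I] = coef
    return x
```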
in most situations that arise in practice , one can not hope to know the measurement vector with arbitrary precision .instead , it is perturbed by a small error vector : . herethe vector has unknown coordinates as well as unknown magnitude , and it needs not be sparse ( as all coordinates may be affected by the noise ) . for a recovery algorithm to be stable ,it should be able to approximately recover the original signal from these perturbed measurements .the stability of convex optimization algorithms for sparse recovery was studied in , , , .assuming that one knows a bound on the magnitude of the error , , it was shown in that the solution of the convex program is a good approximation to the unknown signal : .in contrast , the stability of greedy algorithms for sparse recovery has not been well understood .numerical evidence suggests that omp should be less stable than the convex program , but no theoretical results have been known in either the positive or negative direction .the present paper seeks to remedy this situation .we prove that _ romp is as stable as the convex program ._ this result essentially closes a gap between convex programming and greedy approaches to sparse recovery . regularized orthogonal matching pursuit ( romp ) [ t : stability ] assume a measurement matrix satisfies the restricted isometry condition with parameters for .let be an -sparse vector in .suppose that the measurement vector becomes corrupted , so we consider where is some error vector .then romp produces a good approximation to : note that in the noiseless situation ( ) the reconstruction is exact : .this case of theorem [ t : stability ] was proved in .our stability result extends naturally to the even more realistic scenario where the signals are only approximately sparse . here and henceforth ,denote by the vector of the biggest coefficients in absolute value of .[ t : stabsig ] assume a measurement matrix satisfies the restricted isometry condition with parameters for .consider an arbitrary vector in .suppose that the measurement vector becomes corrupted , so we consider where is some error vector .then romp produces a good approximation to : * 1 .* the term in the corollary can be replaced by for any .this change will only affect the constant terms in the corollary .* 2 . * by applying corollary [ t : stabsig ] to the largest coordinates of and using lemma [ l : ve ] below , we also have the error bound for the entire vector : * 3 .* for the convex programming method , the stability bound was proved in , and even without the logarithmic factor .we conjecture that this factor is also not needed in our results for romp .* 4 . * unlike the convex program , romp succeeds with absolutely no prior knowledge about the error ; its magnitude can be arbitrary . in the terminology of , the convex programming approach needs to be `` noise - aware '' while romp needs not .* one can use romp to approximately compute a -sparse vector that is close to _ the best -term approximation _ of an arbitrary signal . to this end ,one just needs to retain the biggest coordinates of .indeed , corollary [ c : napprox ] below shows that the best -term approximations of the original and the reconstructed signals are close : * 6 . 
*an important special case of corollary [ t : stabsig ] is for the class of compressible vectors , which is a common model in signal processing , see , .suppose is a compressible vector in the sense that its coefficients obey a power law : for some , the -th largest coefficient in magnitude of is bounded by .then yields the following bound on the reconstructed signal : as observed in , this bound is optimal ( within the logarithmic factor ) ; no algorithm can perform fundamentally better . the rest of the paperis organized as follows . in section [s : proof ] , we prove our main result , theorem [ t : stability ] . in section [s : consequences ] , we deduce the extension for approximately sparse signals , corollary [ t : stabsig ] , and a consequence for best -term approximations , corollary [ c : napprox ] . in section [ s : implementation ] , we demonstrate some numerical experiments that illustrate the stability of romp .we shall prove a stronger version of theorem [ t : stability ] , which states that _ at every iteration _ of romp , either at least of the newly selected coordinates are from the support of the signal , or the error bound already holds .[ t : it ] assume satisfies the restricted isometry condition with parameters for .let be an -sparse vector with measurements . then at any iteration of romp , after the regularization step where is the current chosen index set, we have and ( at least ) one of the following : 1 .[ j support ] ; 2 .[ error ] .in other words , either at least of the coordinates in the newly selected set belong to the support of or the bound on the error already holds .we show that the iteration invariant implies theorem [ t : stability ] by examining the three possible cases : * case 1 : ( ii ) occurs at some iteration .* we first note that since is nondecreasing , if ( ii ) occurs at some iteration , then it holds for all subsequent iterations . to show that this would then imply theorem [ t : stability ], we observe that by the restricted isometry condition and since , then again by the restricted isometry condition and definition of , thus we have that thus ( ii ) of the iteration invariant would imply theorem [ t : stability ] .* case 2 : ( i ) occurs at every iteration and is always non - empty . * in this case , by ( i ) and the fact that is always non - empty , the algorithm identifies at least one element of the support in every iteration .thus if the algorithm runs iterations or until , it must be that , meaning that .then by the argument above for case 1 , this implies theorem [ t : stability ] .* case 3 : ( i ) occurs at each iteration and for some iteration .* by the definition of , if then for that iteration . by definition of , this must mean that this combined with part 1 of proposition [ p : cons ] below ( and its proof , see ) applied with the set yields then combinining this with part 2 of the same proposition , we have since , this means that the error bound ( ii ) must hold , so by case 1 this implies theorem [ t : stability ] .we now turn to the proof of the iteration invariant , theorem [ t : it ] .we will use the following proposition from .[ p : cons ] assume a measurement matrix satisfies the restricted isometry condition with parameters . then the following holds . 1 ._ ( local approximation ) _ for every -sparse vector and every set , , the observation vector satisfies 2 ._ ( spectral norm ) _ for any vector and every set , , we have 3 . 
_( almost orthogonality of columns ) _ consider two disjoint sets , .let denote the orthogonal projections in onto and , respectively . then the proof of theorem [ t : it ] is by induction on the iteration of romp .the induction claim is that for all previous iterations , the set of newly chosen indices is disjoint from the set of previously chosen indices , and either ( i ) or ( ii ) holds . clearly if ( ii ) held in a previous iteration , it would hold in all future iterations .thus we may assume that ( ii ) has not yet held .since ( i ) has held at each previous iteration , we must have let be the residual at the start of this iteration , and let , be the sets found by romp in this iteration . as in , we consider the subspace and its complementary subspaces the restricted isometry condition in the form of part 3 of proposition [ p : cons ] ensures that and are almost orthogonal . thus is close to the orthogonal complement of in , the residual thus still has a simple description : [ residual ] here and thereafter , let denote the orthogonal projection in onto a linear subspace . then by definition of the residual in the algorithm , . to complete the proofwe need that .this follows from the orthogonal decomposition and the fact that .now we consider the signal we seek to identify at the current iteration , and its measurements : to guarantee a correct identification of , we first state two approximation lemmas that reflect in two different ways the fact that subspaces and are close to each other .[ c : proj ] we have by definition of , we have .therefore , by lemma [ residual ] , , and so now we use part 3 of proposition [ p : cons ] for the sets and whose union has cardinality at most by .it follows that as desired .[ l : uj ] consider the observation vectors and . then for any set with , since , we have by lemma [ c : proj ] and the restricted isometry condition that to complete the proof , it remains to apply part 2 of proposition [ p : cons ] , which yields .we next show that the energy ( norm ) of when restricted to , and furthermore to , is not too small . by the regularization step of romp , since all selected coefficients have comparable magnitudes , we will conclude that not only a portion of energy but also of the _ support _ is selected correctly , or else the error bound must already be attained .this will be the desired conclusion .[ c : uj ] we have .let = .since , the maximality property of in the algorithm implies that by lemma [ l : uj ] , furthermore , since , by part 1 of proposition [ p : cons ] we have putting these three inequalities together , we conclude that this proves the lemma .we next bound the norm of restricted to the smaller set , again using the general property of regularization . in our context , lemma 3.7 of applied to the vector yields along with lemma [ c : uj ] this directly implies : [ c : uj0 ] we have now finish the proof of theorem [ t : it ] . the claim that follows by the same arguments as in .the nontrivial part of the theorem is its last claim , that either ( i ) or ( ii ) holds .suppose ( i ) in the theorem fails .namely , suppose that , and thus set . by the comparability property of the coordinates in and since , there is a fraction of energy in : where the last inequality holds by lemma [ c : uj0 ] . 
on the other hand , we can approximate by as since , , and using lemma [ l : uj ] , we have furthermore , by definition of , we have .so , by part 1 of proposition [ p : cons ] , using the last two inequalities and , we conclude that this is a contradiction to ( [ e : ubig ] ) so long as if this is true , then indeed ( i ) in the theorem must hold .if it is not true , then by the choice of , this implies that this proves theorem [ t : it ] .next we turn to the proof of corollary [ t : stabsig ] .we first partition so that . then since satisfies the restricted isometry condition with parameters , by theorem [ t : stability ] and the triangle inequality , the following lemma as in relates the -norm of a vector s tail to its -norm .an application of this lemma combined with will prove corollary [ t : stabsig ] .[ l : ve ] let , and let be the vector of the largest coordinates in absolute value from . then let denote the largest entry of . if then so the claim holds .thus we may assume this is not the case .then we have simplifying gives the desired result . by lemma 29 of , we have applying lemma [ l : ve ] to the vector we then have combined with , this proves the corollary .we now show that by truncating the reconstructed vector , we obtain a -sparse vector very close to the original signal .[ c : napprox]assume a measurement matrix satisfies the restricted isometry condition with parameters for .let be an arbitrary vector in , let be the measurement vector , and the reconstructed vector output by the romp algorithm . then where denotes the best -sparse approximation to ( i.e. the vector consisting of the largest coordinates in absolute value ) .let and , and let and denote the supports of and respectively . by corollary [ t : stabsig ] , it suffices to show that . applying the triangle inequality ,we have we then have and since , we have . by the definition of , every coordinate of in is greater than or equal to every coordinate of in in absolute value .thus we have , thus , and so this completes the proof .* corollary [ c : napprox ] combined with corollary [ t : stabsig ] and implies that we can also estimate a bound on the whole signal : section describes our experiments that illustrate the stability of romp .we experimentally examine the recovery error using romp for both perturbed measurements and signals .the empirical recovery error is actually much better than that given in the theorems .first we describe the setup of our experiments . for many values of the ambient dimension , the number of measurements , and the sparsity , we reconstruct random signals using romp . for each set of values , we perform trials . initially , we generate an gaussian measurement matrix . for each trial , independent of the matrix , we generate an -sparse signal by choosing components uniformly at random and setting them to one . in the case of perturbed signals , we add to the signal a -dimensional error vector with gaussian entries . 
in the case of perturbed measurements ,we add an -dimensional error vector with gaussian entries to the measurement vector .we then execute romp with the measurement vector or in the perturbed measurement case .after romp terminates , we output the reconstructed vector obtained from the least squares calculation and calculate its distance from the original signal .figure [ fig : meas2 ] depicts the recovery error when romp was run with perturbed measurements .this plot was generated with for various levels of sparsity .the horizontal axis represents the number of measurements , and the vertical axis represents the average normalized recovery error .figure [ fig : meas2 ] confirms the results of theorem [ t : stability ] , while also suggesting the bound may be improved by removing the factor .figure [ fig : sig4 ] depicts the normalized recovery error when the signal was perturbed by a gaussian vector .the figure confirms the results of corollary [ t : stabsig ] while also suggesting again that the logarithmic factor in the corollary is unnecessary .
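the experiments described above are easy to reproduce in outline . the driver below uses the `romp` sketch given earlier ; the concrete dimension , sparsity , noise level and trial count are placeholders , since the exact values used for the figures are not legible in this copy .

```python
import numpy as np

rng = np.random.default_rng(7)
d, n, trials = 256, 20, 100        # placeholders for the article's values

for m in (60, 100, 140, 180):      # number of measurements
    errs = []
    for _ in range(trials):
        A = rng.standard_normal((m, d)) / np.sqrt(m)   # gaussian matrix
        x = np.zeros(d)
        x[rng.choice(d, size=n, replace=False)] = 1.0  # n-sparse signal
        e = 0.05 * rng.standard_normal(m)              # measurement noise
        x_hat = romp(A, A @ x + e, n)                  # perturbed measurements
        errs.append(np.linalg.norm(x_hat - x) / np.linalg.norm(x))
    print(f"m = {m}: mean normalized recovery error {np.mean(errs):.3f}")
```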
we demonstrate a simple greedy algorithm that can reliably recover a vector from incomplete and inaccurate measurements . here is a measurement matrix with , and is an error vector . our algorithm , regularized orthogonal matching pursuit ( romp ) , seeks to close the gap between two major approaches to sparse recovery . it combines the speed and ease of implementation of the greedy methods with the strong guarantees of the convex programming methods . for any measurement matrix that satisfies a uniform uncertainty principle , romp recovers a signal with nonzeros from its inaccurate measurements in at most iterations , where each iteration amounts to solving a least squares problem . the noise level of the recovery is proportional to . in particular , if the error term vanishes the reconstruction is exact . this stability result extends naturally to the very accurate recovery of approximately sparse signals .
as a relativistic wave equation , the dirac equation provides naturally a description of an electron .following dirac s discovery of the linear equation of the electron , there appears the fundamental idea of nonlinear description of an elementary spin-1/2 particle which makes it possible to take into account its self - interaction .heisenberg put forward the idea to use a nonlinear dirac ( nld ) equation as a possible basis model for a unified field theory .a key feature of the nld equation is that it allows solitary wave solutions or particle - like solutions the stable localized solutions with finite energy and charge .that is , the particles appear as intense localized regions of field which can be recognized as the basic ingredient in the description of extended objects in quantum field theory .different self - interactions give rise to different nld models mainly including the thirring model , the soler model , the gross - neveu model ( equivalent to the massless soler model ) , and the bag model ( _ _ i.e. _ _ the solitary waves with finite compact support ) , all of which attracted wide interest of physicists and mathematicians around the 1970s and 1980s , especially on looking for the solitary wave solutions and investigating the related physical and mathematical properties . for the nld equation in ( 1 + 1 ) dimensions ( _ _ i.e. _ _ one time dimension plus one space dimension ) ,several analytical solitary wave solutions are derived in for the quadric nonlinearity , for fractional nonlinearity as well as for general nonlinearity by using explicitly the constraint resulting from energy - momentum conservation , and summarized by mathieu .in contrast , there are few explicit solutions in ( 1 + 3 ) dimensions except for some particular cases shown in in spite of their existence claimed by mathematicians for various situations ( the readers are referred to an overview on this topic ) , and most understanding is based on numerical investigations , _ _e.g. _ _ . beyond this , the study of the nld equation in ( 1 + 1 ) dimensions could be very helpful for that in ( 1 + 3 ) dimensions since the ( 1 + 1)-dimensional nld equation correspond to the asymptotic form of the equation in the physically interesting case of ( 1 + 3 ) dimensions as emphasized by kaus .that is , some qualitative properties of the nld solitary waves could be similar in such two cases .an interesting topic for the nld equation is the stability issue , which has been the central topic in works spread out over several decades in an effort that is still ongoing .analytical studies of the nld solitary wave stability face serious obstacles , while results of computer simulations are contradictory .the stability analysis of the nld solitary waves is still a very challenging mathematical problem to date .recent efforts in this direction can be found in .another rising mathematical interest related to the nld equation is the analysis of global well - posedness , _ _e.g. _ _ see and references therein . 
in the case of that theoretical methods were not able to give the satisfactory results , numerical methods were used for obtaining the solitary wave solutions of the nld equation as well as for investigating the stability .an important step in this direction was made by alvarez and carreras in 1981 , who simulated the interaction dynamics between the ( 1 + 1)-dimensional nld solitary waves of different initial charge for the soler model by using a second - order accurate crank - nicholson ( cn ) scheme .they first saw there : charge and energy interchange except for some particular initial velocities of the solitary waves ; inelastic interaction in binary collisions ; and oscillating state production from binary collisions .motivated by their work , shao and tang revisited this interaction dynamics problem in 2005 by employing a fourth - order accurate runge - kutta discontinuous galerkin ( rkdg ) method .they not only recovered the phenomena mentioned by alvarez and carreras but also revealed several new ones , _ _e.g. _ _ collapse in binary and ternary collisions of two - humped nld solitary waves ; a long - lived oscillating state formed with an approximate constant frequency in collisions of two standing waves ; full repulsion in binary and ternary collisions of out - of - phase waves .their numerical results also inferred that the two - humped profile could undermine the stability during the scattering of the nld solitary waves .note in passing that the two - humped profile was first pointed out by shao and tang and later gotten noticed by other researchers .besides the often - used cn and rkdg methods , there exist many other numerical schemes for solving the ( 1 + 1)-dimensional nld equation : split - step spectral schemes , the linearized cn scheme , the semi - implicit scheme , legendre rational spectral methods , multi - symplectic runge - kutta methods , adaptive mesh methods _ etc_. the fourth - order accurate rkdg method is very appropriate for investigating the interaction dynamics of the nld solitary waves due to their ability to capture the discontinuous or strong gradients without producing spurious oscillations , and thus performs better than the second - order accurate cn scheme . however , the high cost due to the relatively more freedoms used in each cell and the stringent time step constraint reduce its practicality in more realistic simulations where realtime and quantitative results are required .recently , there has been a remarkable upsurge of the interest in the nld models , as they emerge naturally as practical models in other physical systems , such as the gap solitons in nonlinear optics , bose - einstein condensates in honeycomb optical lattices , as well as matter influencing the evolution of the universe in cosmology . in view of such new trend ,longtime stable , efficient , conservative and high - order accurate numerical methods for solving the nld equation are highly desirable .finite difference methods , usually as the first try in practice , enable easy coding and debugging and thus are often used by physicists and engineers .however , detailed discussion and careful comparison on finite difference solvers for the nld equation are not existed . 
to this end , the present work as the first step will extendedly propose the finite deference schemes for solving the nld equation with the scalar and vector self - interaction .a general and precise comparison among them will be presented .however , all of these finite difference methods are often of the second order accuracy and thus sustain fast error growth with respect to time .to achieve relatively slow error growth , high - order accurate numerical methods are required . by exploiting the local conservation laws of the nld equation , we present exponential operator splitting ( os ) schemes which are time reversible andcan conserve the discrete charge .one of the high - order accurate os schemes is afterwards adopted to investigate the interaction dynamics for the nld solitary waves under the general scalar and vector self - interaction .it should be noted that the experiments carried out in the literatures are all limited to the collisions of the nld solitary waves under the quadric scalar self - interaction . here , the binary collisions of the nld solitary waves under the cubic scalar self - interaction or under the vector self - interaction or under the linear combination of the scalar and vector self - interactions are all studied for the first time .the paper is organized as follows .there is a brief review of the nld equation in section [ sec2:gnld ] and the solitary wave solutions are also derived there for the general scalar and vector self - interaction .the numerical schemes are presented in section [ sec : diffsch ] and corresponding numerical analysis is given in section [ sec : discussion4fdm ] .the numerical results are presented with discussion in section [ sec : discussion4fdm ] .the paper is concluded in section [ sec : conclusion ] with a few remarks .this section will introduce the ( 1 + 1)-dimensional nld equation with the scalar and vector self - interaction and derive its two solitary wave solutions . throughout the paper , units in which both the speed of light and the reduced planck constant are equal to one will be used .our attention is restricted to the nld equation in dimensions which can be written in the covariant form /\partial \wbar{{{\mbox{\boldmath \small }}}}=0 , \label{generalnld}\ ] ] where is the spinor with two complex components , denotes the adjoint spinor , is the complex conjugate transpose of , \psi \gamma \psi \psi \psi \psi \psi \gamma \psi \psi \gamma \psi \psi \psi ] which shows that the power exponent to is . in such sense , we call that the self - interaction lagrangian has the power exponent . 
hereafter the _ quadric _ and _ cubic _ self - interactions will refer to the case and the case , respectively . the self - interaction implies the so - called homogeneity relation \[ \bar{\psi}\,\frac{\partial l_\text{i}[\bar{\psi},\psi]}{\partial\bar{\psi } } = ( k+1)\,l_\text{i}[\bar{\psi},\psi ] . \label{ppl = l}\ ] combining it with the definition of the lagrangian , and the energy - momentum tensor are independent of the time , because \[ \begin{aligned } & \equiv \omega\,\bar{\psi}\gamma^0\psi + \mathrm{i}\,\bar{\psi}\gamma^1\psi_x - m\,\bar{\psi}\psi + l_\text{i}[\bar{\psi},\psi ] , \\ t_{00 } & \equiv -\mathrm{i}\,\bar{\psi}\gamma^1\psi_x + m\,\bar{\psi}\psi - l_\text{i}[\bar{\psi},\psi ] , \quad t_{01}\equiv \mathrm{i}\,\bar{\psi}\gamma^0\psi_x , \\ t_{10 } & \equiv -\omega\,\bar{\psi}\gamma^1\psi , \quad t_{11}\equiv -\mathrm{i}\,\bar{\psi}\gamma^1\psi_x + l[\bar{\psi},\psi ] , \end{aligned}\ ] where . using the conservation laws further gives = 0 . \label{t11 } the first equation implies that is imaginary because . thus , without loss of generality , we may assume that is real and is imaginary , and they are in the form , where both and are real functions to be determined , and is assumed to satisfy the inequality . on the other hand , combining the first equation in with the second equation in yields = 0 , which becomes for . combining with leads to , which reduces to for , because for . when , the solution of may be derived as follows : for initial data and . in fact , under the previous assumption , one may verify . if the coefficients and in belong to the set for , or for , then from eq . one has a non - trivial for the localized solution as follows . at this point , the standing wave solution of the nld equation with has been derived , and will be denoted as follows , where and are given in eqs . and , respectively . this solution represents a solitary wave with zero velocity and contains some special cases in the literature , e.g. , . it has been pointed out in that the profile of the charge density for the standing wave under the scalar self - interaction ( i.e. , and ) with can be either one - humped or two - humped , which was also recently confirmed for any by other researchers in . they further pointed out there that the profile can only be one - humped for any in the case of and . for the linear combined self - interaction with and , we find that the profile can also be one - humped ( see figs . [ fig.os-dg-rhoq ] and [ fig : k1-sv ] , where the charge density is denoted by for convenience ) or two - humped ( see fig . [ fig : k1 - 2humped - sv ] ) . this subsection will derive another solution of the ( 1 + 1)-dimensional nld equation with the self - interaction by using the lorentz covariance of the nld equation . consider a frame f with an observer o and coordinates . the observer o describes a particle by the wavefunction , which obeys the nld equation with given in , i.e. ,
\[ \left(\mathrm{i}\gamma^{\mu}\partial_{\mu}\psi - m\psi + \partial l_\text{i}/\partial \bar{\psi}\right)(x , t)=0 . \] in another inertial frame f with an observer o and coordinates given by the lorentz transformation with a translation in the -direction , which is called a `` boost '' , where is any given position , is the relative velocity between the frames in the -direction , and is the lorentz factor . according to the relativity principle , the observer o describes the same particle by , which should also satisfy \[ \left(\mathrm{i}\gamma^{\mu}\partial^{\prime}_{\mu}\psi^{\prime } - m\psi^{\prime } + \partial l_\text{i}/\partial \bar{\psi}^{\prime}\right)(x^{\prime } , t^{\prime})=0 . \] using some algebraic manipulations , the `` transformation '' matrix may be found as follows , which takes to under the lorentz transformation , i.e. , where is the sign function which returns if , if , and if . applying the transformation with to the standing wave solution gives another solution of the nld equation , i.e. , the moving wave solution . this solution represents a nld solitary wave with velocity and reduces to the standing wave if we set and . this subsection will show that the nld equation with given in remains invariant under the time reversal operation , where denotes the complex conjugate operation on , i.e. , the time - reversal operator satisfies due to the anticommutation relation , and is the unit matrix . from the relations , it can be easily verified that , so that the self - interaction lagrangian in satisfies \[ l_\text{i}[\psi^{\prime}](x^{\prime } , t^{\prime } ) = l_\text{i}[\psi](x , t ) , \] under the time reversal transformation . applying the time - reversal operator to the nld equation and using the definition as well as the relations and leads to an equation which has the same form as shown in . that is , if a spinor satisfies the nld equation , then the spinor transformed by the time reversal operation will also satisfy the same nld equation , i.e. , the nld equation is time reversible under the operation . as we mentioned in section [ sec : intro ] , some numerical methods have been well developed for the nld equation with a scalar or vector self - interaction . this section will present and compare , in an extended setting , several numerical methods for solving the ( 1 + 1)-dimensional nld equation with the general scalar and vector self - interaction . their numerical analyses will be presented in section [ sec : discussion4fdm ] . for the sake of convenience , divide the bounded spatial domain into a uniform partition with a constant stepsize , give a grid in time with a time stepsize , and recast the nld equation into , where , all of which are real functions , and the dependence of on is implied . use to denote the approximation of , and define the forward , backward and centered difference operators in space and time by : , where is the identity operator , and and are the translation operators in time and space , respectively , defined by , whose inverses exist and are defined by . besides , several symbols are also introduced for arithmetic averaging operators : , and for an extrapolation operator : . * crank - nicolson schemes * the cn scheme and its linearized version have been studied in for the nld equation with the quadric scalar self - interaction . for the system , the extension of the cn scheme in may be written as , by approximating at the point with a compact central difference quotient in place of the partial derivative , where . the above cn scheme ( named cn hereafter ) is fully implicit and forms a nonlinear algebraic system .
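since the cn scheme forms a nonlinear algebraic system at every time step , some iterative solver is needed in practice . the sketch below shows one minimal way to carry out such a step with a fixed - point ( picard ) iteration ; it is a generic illustration and not the paper 's implementation : the function name cn_step_picard is ours , and rhs is a placeholder for the centered - difference spatial operator together with the nonlinear self - interaction terms .

```python
import numpy as np

def cn_step_picard(psi_n, rhs, tau, tol=1e-12, max_iter=100):
    """one crank-nicolson step psi^{n+1} = psi^n + tau/2*(rhs(psi^n) + rhs(psi^{n+1})),
    solved by fixed-point (picard) iteration.  `rhs` maps a complex state array to
    d(psi)/dt and stands in for the discrete spatial operator plus the nonlinear terms."""
    f_n = rhs(psi_n)
    psi = psi_n + tau * f_n                      # explicit euler predictor
    for _ in range(max_iter):
        psi_new = psi_n + 0.5 * tau * (f_n + rhs(psi))
        if np.max(np.abs(psi_new - psi)) < tol:  # stop once the fixed point is reached
            return psi_new
        psi = psi_new
    return psi
```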
in practice , a linearization technique is often used to overcome the difficulty of directly solving such a nonlinear algebraic system . two linearization techniques for numerical methods of the nonlinear schrödinger equation are borrowed here . the first linearized cn scheme we consider applies the extrapolation technique wholly to the nonlinear self - interaction terms in , and will be called lcn1 . the second linearized cn scheme , denoted by lcn2 , is obtained by partially applying the extrapolation technique to the nonlinear self - interaction terms . it is worth noting that the above linearized cn schemes are not linearized versions of the cn scheme . the lcn2 scheme may conserve the charge and behaves better than the lcn1 scheme ( _ vide post _ ) . for the ( 1 + 1)-dimensional nld equation with the quadric scalar self - interaction lagrangian , the cn scheme ( named cn0 ) proposed in is , and its linearized version ( called lcn0 ) is given in as follows . we will show in section [ sec : discussion4fdm ] that the cn , cn0 and lcn0 schemes conserve the charge and that the cn scheme further conserves the energy . * odd - even hopscotch scheme * the odd - even hopscotch scheme is a numerical integration technique for time - dependent partial differential equations , see . its key point is that the forward euler - central difference scheme is used at the odd grid points while at the even points the backward euler - central difference scheme is applied , so that the odd - even hopscotch scheme may be implemented explicitly . such a scheme applied to the system becomes . in the following we will call it hs . * leapfrog scheme * the leapfrog scheme looks quite similar to the forward scheme , see , e.g. , , except that it uses the values from the previous time step instead of the current one . for the system , it is , which is a three - level explicit scheme in time with a central difference in space , and will be named lf . * semi - implicit scheme * another three - level scheme considered here for the system is , which is obtained by approximating the nonlinear terms explicitly but the linear terms implicitly , and will be called si . it is worth noting that such a semi - implicit scheme has been studied for the nld equation with the quadric scalar self - interaction in . this subsection discusses exponential operator splitting schemes for the nld equation . for convenience , we rewrite the system as follows , where the linear operator and both nonlinear operators and are defined by . then the problem may be decomposed into three subproblems as follows . due to the local conservation laws , which are discussed in section [ sec : dis - nonlinear ] , solutions of the nonlinear subproblem or may be expressed as an exponential of the operator or acting on the `` initial data '' . thus we may introduce the exponential operator splitting scheme for the nld equation or , imitating that for linear partial differential equations , see , e.g. , and references therein .
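as an illustration of the explicit schemes just listed , the following is a minimal sketch of the lf ( leapfrog ) scheme for the ( 1 + 1)-dimensional nld equation with the quadric scalar self - interaction ( the soler model ) . it assumes the representation gamma^0 = sigma_3 and gamma^1 = i sigma_2 , a common choice that we adopt here as an assumption , under which the equation reads i psi_t = -i sigma_1 psi_x + m sigma_3 psi - g ( psibar psi ) sigma_3 psi , with psibar psi = |psi_1|^2 - |psi_2|^2 ; the coupling g and all names are ours .

```python
import numpy as np

def nld_rhs(psi, dx, m=1.0, g=1.0):
    """H(psi) in  i d(psi)/dt = H(psi) = -i sigma1 psi_x + m sigma3 psi - g (psibar psi) sigma3 psi
    for a two-component spinor psi of shape (2, nx), periodic in x.
    assumes gamma^0 = sigma3, gamma^1 = i*sigma2, so psibar psi = |psi1|^2 - |psi2|^2."""
    psi_x = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2.0 * dx)  # centered difference
    s = np.abs(psi[0])**2 - np.abs(psi[1])**2                                  # psibar psi
    h = np.empty_like(psi)
    h[0] = -1j * psi_x[1] + (m - g * s) * psi[0]   # sigma1 swaps components; sigma3 = diag(1, -1)
    h[1] = -1j * psi_x[0] - (m - g * s) * psi[1]
    return h

def leapfrog_step(psi_prev, psi_curr, dx, tau, m=1.0, g=1.0):
    """three-level leapfrog (lf) step: psi^{n+1} = psi^{n-1} - 2 i tau H(psi^n)."""
    return psi_prev - 2j * tau * nld_rhs(psi_curr, dx, m, g)
```

the three - level recursion needs one starting step besides the initial data ; that first step can be supplied by a single step of any two - level scheme from this section .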
based on the exact or approximate solvers of those three subproblems , a more general -stage -th order exponential operator splitting method for the system can be cast into a product of finitely many exponentials as follows , where denotes the time stepsize used within the -th stage and satisfies , and is any permutation of . hereafter we call this operator splitting scheme os( ) . although one single product of finitely many exponentials is employed here , it should be pointed out that a linear combination of such finite products can also be used to construct exponential operator splitting schemes , as shown in . a simple example is the well - known second - order accurate operator splitting method of strang ( named os( ) ) with . another example is the fourth - order accurate operator splitting method with , which is denoted os( ) in the following . another operator splitting scheme is studied in for the nld equation ( [ generalnld ] ) but only with the quadric scalar self - interaction lagrangian , and the second - order accurate strang method is applied there . for the system , it is based on the following operator decomposition with . for the linear parabolic equation , which is an irreversible system , a more general exponential operator splitting scheme and its accuracy as well as stability are discussed in , based on linear combinations of products of finitely many exponentials . it is shown that for such an irreversible system , negative weights or negative time stepsizes may lead to instability ; and the highest order of the stable exponential operator splitting approximation ( only with positive weights and positive sub - stepsizes in time ) is two . however , for time - reversible systems , such as hamiltonian systems , the schrödinger equation and the nld equation with given in , it is immaterial whether or not the weights or time stepsizes are positive ; a general framework for understanding the structure of exponential operator splitting schemes was presented in , where both specific error terms and order conditions were solved analytically . we now solve the linear subproblem . denote its `` initial data '' by at the -th stage in . if the spinor is periodic ( e.g. , -periodic ) with respect to , the fourier spectral method is employed to solve , and gives . here and denote the discrete fourier transform operator and its inverse , respectively , defined by , where is the grid point number , and the matrix exponential in can be easily evaluated as follows , with . when the spinor is not periodic with respect to , the fifth - order accurate finite difference weno scheme will be used to solve the linear subproblem . the readers are referred to for details . in this case , the linear subproblem can also be solved by using the method of characteristics . the nonlinear subproblems and are left to be solved now . their `` initial data '' is still denoted by at the -th stage in , and we define . for the nonlinear subproblem , it is not difficult to verify that , so that using this local conservation law gives the solution at of with the `` initial data '' as follows . for the nonlinear subproblem , one may similarly derive the following local conservation laws by direct algebraic manipulations , using the fact that , and are all real .
consequently , integrating from to gives its solution as follows , where and . it is because the local conservation laws and are fully exploited here that we can solve the nonlinear subproblems and exactly , which explains the higher accuracy of the os method relative to the other methods . in summary , we have * the cn and cn0 schemes are nonlinear and implicit , and could be solved by iterative algorithms such as the picard iteration and newton 's method . * the lcn0 , lcn1 , lcn2 and si schemes are linear and implicit . * the hs , lf , and os( ) schemes are explicit . before investigating the performance of the numerical methods proposed in section [ sec : diffsch ] , this section first carries out a numerical analysis of them , including the accuracy in the sense of the truncation error , the time reversibility , and the conservation of the charge or energy . if is periodic , then the cn , cn0 , lcn0 , lcn1 , lcn2 , hs , lf and si schemes are of order , and the os( ) scheme is of order for arbitrarily large . the proof is straightforward , using directly the taylor series expansion for the finite difference schemes and the fourier spectral analysis for the os( ) scheme , and is thus skipped here to save space . the cn , cn0 , hs , lf , si , and os( ) schemes are time reversible , but the lcn0 , lcn1 , lcn2 schemes are not . we give the proof for the cn and lcn1 schemes as an example ; the others can be proved in a similar way . according to the transformation , the relation between the transformed finite difference solution and the original one should be , with . consequently , we have , and then using the relations in yields for . applying the time - reversal operator to the cn scheme and using the commutation relation and eqs . and leads to , which is exactly the cn scheme applied to . that is , the cn scheme is invariant under the time - reversal transformation , namely , it is time reversible . the fact that the lcn1 scheme is not time reversible can be observed directly by noting \[ \begin{aligned } & \mathcal{t}\left[\mathrm{i}\,\ell_t^\text{e}\left(f_\text{s}\,\gamma^0\psi + f_\text{v}\,\bar{\psi}\gamma_{\mu}\psi\,\gamma^0\gamma^{\mu}\psi\right)_j^{n}\right ] \\ \neq\ & \mathrm{i}\,\ell_t^\text{e}\left(f^{\prime}_\text{s}\,\gamma^0\psi^{\prime } + f^{\prime}_\text{v}\,\bar{\psi}^{\prime}\gamma_{\mu}\psi^{\prime}\,\gamma^0\gamma^{\mu}\psi^{\prime}\right)_j^{n^{\prime}-1 } . \end{aligned}\ ] next , we will discuss the conservation of the discrete energy , linear momentum and charge defined below for the numerical methods given in section [ sec : diffsch ] . after performing the integration over the computational domain , it can be checked that the following leibniz rule holds for any two spinors and , and then we have , thus the identity holds . because the operator is the adjoint operator of and is a hermitian matrix , we get the identity directly by rearranging the summation . the identity can be easily verified by using the fact . the proof of ( resp . ) is similar to that of ( resp . ) . [ thm : conservation ] the cn , cn0 , lcn0 , lcn2 , and os( ) schemes conserve the discrete charge , but only the cn scheme conserves the discrete energy .
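to make the splitting construction concrete , here is a minimal sketch of a strang ( os( ) ) step for the scalar case , under the same representation assumption as before ( gamma^0 = sigma_3 , gamma^1 = i sigma_2 ) : the linear subproblem is solved exactly mode by mode in fourier space via a 2x2 matrix exponential , and the nonlinear subproblem is solved exactly as a pointwise phase rotation , using the local conservation of |psi_1|^2 and |psi_2|^2 along the nonlinear flow derived above . kgrid stands for the fourier wavenumbers 2*pi*np.fft.fftfreq(nx , d = dx ) , and all names are ours .

```python
import numpy as np

def linear_step_fft(psi, kgrid, tau, m=1.0):
    """exact solver of the linear subproblem i psi_t = -i sigma1 psi_x + m sigma3 psi
    on a periodic grid: per fourier mode, apply
    exp(-i tau (k sigma1 + m sigma3)) = cos(w tau) I - i sin(w tau)/w (k sigma1 + m sigma3),
    with w = sqrt(k^2 + m^2)."""
    p1, p2 = np.fft.fft(psi[0]), np.fft.fft(psi[1])
    w = np.sqrt(kgrid**2 + m**2)
    c = np.cos(w * tau)
    s = tau * np.sinc(w * tau / np.pi)        # equals sin(w tau)/w, safe as w -> 0
    q1 = c * p1 - 1j * s * (m * p1 + kgrid * p2)
    q2 = c * p2 - 1j * s * (kgrid * p1 - m * p2)
    return np.array([np.fft.ifft(q1), np.fft.ifft(q2)])

def nonlinear_step_exact(psi, tau, g=1.0):
    """exact solver of the nonlinear subproblem i psi_t = -g (psibar psi) sigma3 psi:
    |psi1|^2 and |psi2|^2 are conserved along this flow, so psibar psi is frozen and
    the step is a pointwise phase rotation (the local conservation law used above)."""
    s = np.abs(psi[0])**2 - np.abs(psi[1])**2
    return np.array([np.exp(1j * g * s * tau) * psi[0],
                     np.exp(-1j * g * s * tau) * psi[1]])

def strang_step(psi, kgrid, tau, m=1.0, g=1.0):
    """one strang step: half linear, full nonlinear, half linear."""
    psi = linear_step_fft(psi, kgrid, 0.5 * tau, m)
    psi = nonlinear_step_exact(psi, tau, g)
    return linear_step_fft(psi, kgrid, 0.5 * tau, m)
```

every substep here is unitary , so the discrete charge sum(|psi|^2)*dx is conserved up to roundoff , in line with theorem [ thm : conservation ] .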
we begin with the discrete conservation law of charge for the cn scheme .performing the inner product of and the cn scheme leads to and then the conservation law of the discrete charge can be easily verified by taking directly the real part as follows where lemma [ le:1 ] ( a ) is applied to the first term in eq ., ( b ) to the second term and ( c ) to the third and fourth terms .similarly , it can be verified that holds for the cn0 , lcn0 and lcn2 schemes .performing the inner product of the cn scheme and , keeping the imaginary part and applying lemma [ le:1 ] give directly the conservation of the discrete energy for the fourier spectral method , using the fact that in is a unitary matrix yields and then \right\|^2 \\ & = \left\|\exp\left ( -\mi \tau_i ( \kappa { { \mbox{\boldmath \small }}}^0{{\mbox{\boldmath \small }}}^1+m{{\mbox{\boldmath \small }}}^0)\right ) \mathcal{f}\left({{\mbox{\boldmath \small }}}^{(0)}\right)\right\|^2 = \left\| \mathcal{f}\left({{\mbox{\boldmath \small }}}^{(0)}\right)\right\|^2 = \left\|{{\mbox{\boldmath \small }}}^{(0)}\right\|^2 , \ ] ] where parseval s identity is applied twice .it can be readily verified that the matrix exponents in eqs . and are unitary , thus holds both for eqs . and , __ i.e. _ _ should be conserved for solutions of the nonlinear subproblems .therefore , the os( ) scheme satisfies the conservation law of charge .it will be verified by numerical results in section [ sec : numericalresults ] that the lcn1 , hs , lf and si schemes do not conserve the discrete charge or energy , and none of the numerical methods presented in section [ sec : diffsch ] conserves the discrete linear momentum .this section will conduct numerical simulations to compare the performance of numerical schemes proposed in section [ sec : diffsch ] and then utilize the os( ) scheme to investigate the interaction dynamics for the nld solitary waves under the scalar and vector self - interaction . for those localized nld solitary waves , the periodic boundary condition for the os( ) scheme and the non - reflection boundary condition for other schemescould be adopted at the boundaries of the computational domain if a relatively large computational domain has been taken in our numerical experiments .all calculations are performed on a lenovo desktop computer with intel core i5 650 cpu and 4 gb ram using double precision in the 3.0.0 - 24-generic x86_64 linux operation system and the compiler is gcc 4.6.1 .the computational domain will be taken as ] in example [ eg.scalar-2humped ] . and the particle mass in eq .is chosen to be .[ eg.1 ] the first example is devoted to comparing the numerical performance of all the numerical methods in section [ sec : diffsch ] in terms of the accuracy , the conservativeness , the efficiency and the error growth .a one - humped solitary wave with the velocity is simulated here under the quadric scalar self - interaction ( _ _ i.e. _ _ and ) , traveling from right to left with the parameters in : , , and .the -rkdg method is also included here for comparison , which is assembled with a fourth - order accurate runge - kutta time discretization in time and the legendre polynomials of degree at most as local basis functions in the spatial galerkin approximation . .example [ eg.1 ] .part i : numerical comparison of the accuracy , the conservativeness and the efficiency at with the time stepsize being set to .the cpu time in seconds is recorded for the finest mesh . 
[ eg.scalar ] the collision of two equal one - humped solitary waves under the scalar self - interaction , i.e. , case b1 in table [ 3caselist ] , is studied in this example . the interaction dynamics for the quadric case are shown in the left plot of fig . [ fig : k1-s ] , where two equal waves with the initial amplitude of move close at a velocity of and overlap each other , then separate into a left moving wave and a right moving wave with the amplitude of and the velocity of . similar phenomena are observed for the cubic case shown in the right plot of fig . [ fig : k1-s ] , except that ( 1 ) the two waves now overlap more strongly around , due to the stronger nonlinearity ; ( 2 ) after the collision , the amplitude decreases to from the initial amplitude of , while the velocity also decreases to . in both cases , the discrete charge , energy and linear momentum are approximately conserved in the interaction , since their variation at is under - . [ eg.v ] the collision of two equal one - humped solitary waves under the vector self - interaction , i.e. , case b2 in table [ 3caselist ] , is studied in this example . to the best of our knowledge , this is the first time that binary collisions of the nld solitary waves under the vector self - interaction have been studied . the interaction dynamics for the quadric case are shown in the left plot of fig . [ fig : k1-v ] , where the waves keep their shape and velocity after the collision . a totally different phenomenon appears for the cubic vector self - interaction , as displayed in the right plot of fig . [ fig : k1-v ] . the initial one - humped equal waves first merge into a single wave , then separate and overlap again . around , collapse happens and highly oscillatory waves are generated , moving outward with a big velocity near , while a one - humped wave with small amplitude is formed at the center . in both cases , the discrete charge , energy and linear momentum are approximately conserved in the interaction , since their variation at is under - . note in passing that the collapse here is different from that shown in . it was reported there that strong negative energy and radiation appear when the collapse happens during the binary collision of two - humped waves . [ eg.scalar-vector ] this example is devoted to investigating , for the first time , the collision of two equal nld solitary waves under the scalar and vector self - interaction , i.e. , case b3 in table [ 3caselist ] . the interaction dynamics for the quadric case are shown in the left plot of fig . [ fig : k1-sv ] , where two equal waves with the initial amplitude of move close at a velocity of and overlap each other , then separate into a left moving wave and a right moving wave with the amplitude of and the velocity of . the collapse similar to that shown in the right plot of fig . [ fig : k1-v ] happens again for the cubic vector self - interaction , see the right plot of fig . [ fig : k1-sv ] . the initial one - humped equal waves first merge into a single wave at , then separate and overlap again . around , collapse happens and highly oscillatory waves are generated , moving outward with a big velocity near .
in both cases , the discrete charge , energy and linear momentum are approximately conserved in the interaction , since their variation at is under - . [ eg.scalar-2humped ] as reported before in , collapse happens in binary and ternary collisions of the nld solitary waves under the quadric scalar self - interaction if two - humped waves are involved . in this example , we will show further that collapse could happen in binary collisions of equal two - humped waves under the cubic scalar self - interaction and under the linear combination of scalar and vector self - interactions . first , case b4 in table [ 3caselist ] is studied and the interaction dynamics are shown in fig . [ fig : k1 - 2humped ] , which clearly shows that ( 1 ) collapse happens in both the quadric and cubic cases but is stronger in the latter ; ( 2 ) in the quadric case , two initial waves at the same velocity are decomposed into groups with different velocities after the collision , but there is no such decomposition for the cubic case . in the left plot of fig . [ fig : k1 - 2humped ] , the highly oscillating waves with small amplitude move outward at a big velocity of , while the one - humped waves with big amplitude follow them at a small velocity of . in both cases , the discrete charge , energy and linear momentum are approximately conserved in the interaction , since their variation at is under - . second , the binary collision of equal two - humped solitary waves under the scalar and vector self - interaction , i.e. , case b5 in table [ 3caselist ] , is plotted in fig . [ fig : k1 - 2humped - sv ] . the phenomena are very similar to those shown in fig . [ fig : k1 - 2humped ] , and the `` decomposition '' phenomenon for the quadric case is more obvious than that shown in the left plot of fig . [ fig : k1 - 2humped ] . several numerical methods for solving the nld equation with the scalar and vector self - interaction have been presented and compared theoretically and numerically . our results have revealed that among them , the os( ) scheme , one of the fourth - order accurate os methods , performs best in terms of the accuracy and the efficiency . particularly , the os( ) scheme is usually more accurate than the -rkdg method on a mesh of the same size , while the former needs much less computational cost than the latter . such superior performance of the os methods is credited to the full use of the local conservation laws of the nld equation , such that the nonlinear subproblems resulting from them are solved exactly . the interaction dynamics for the nld solitary waves under the quadric and cubic self - interaction have been investigated with the os( ) scheme . we have found that such interaction dynamics depend on the power exponent of the self - interaction . actually , it has been observed for the first time in our numerical experiments that ( 1 ) collapse happens in the collision of two equal one - humped nld solitary waves under the cubic vector self - interaction , but such collapse does not appear in the corresponding quadric case ; ( 2 ) two initial waves at the same velocity are decomposed into groups with different velocities after collapse in binary collisions of two - humped nld solitary waves under the quadric scalar self - interaction or under the quadric scalar and vector self - interaction , but such a phenomenon does not show up in the corresponding cubic case . more efforts on the interaction dynamics for the nld solitary waves under more general self - interactions with the os( ) method are ongoing . sihong shao was partially
supported by the national natural science foundation of china ( project no .11101011 ) and the specialized research fund for the doctoral program of higher education ( project no .20110001120112 ) .huazhong tang was partially supported by the national natural science foundation of china ( project no .10925101 ) .the authors would also like to thank the referees for many useful suggestions .
this paper presents a review of the current state - of - the - art of numerical methods for nonlinear dirac ( nld ) equation . several methods are extendedly proposed for the ( 1 + 1)-dimensional nld equation with the scalar and vector self - interaction and analyzed in the way of the accuracy and the time reversibility as well as the conservation of the discrete charge , energy and linear momentum . those methods are the crank - nicolson ( cn ) schemes , the linearized cn schemes , the odd - even hopscotch scheme , the leapfrog scheme , a semi - implicit finite difference scheme , and the exponential operator splitting ( os ) schemes . the nonlinear subproblems resulted from the os schemes are analytically solved by fully exploiting the local conservation laws of the nld equation . the effectiveness of the various numerical methods , with special focus on the error growth and the computational cost , is illustrated on two numerical experiments , compared to two high - order accurate runge - kutta discontinuous galerkin methods . theoretical and numerical comparisons show that the high - order accurate os schemes may compete well with other numerical schemes discussed here in terms of the accuracy and the efficiency . a fourth - order accurate os scheme is further applied to investigating the interaction dynamics of the nld solitary waves under the scalar and vector self - interaction . the results show that the interaction dynamics of two nld solitary waves depend on the exponent power of the self - interaction in the nld equation ; collapse happens after collision of two equal one - humped nld solitary waves under the cubic vector self - interaction in contrast to no collapse scattering for corresponding quadric case . nonlinear dirac equation , solitary wave , interaction dynamics , finite difference method , operator splitting method
since its introduction , distance correlation has had many applications in , e.g. , life science and variable selection , and has been analyzed and extended in various aspects . if distance correlation were implemented straightforwardly from its definition , its computational complexity could be as high as a constant times for a sample size . this fact has been cited numerous times in the literature as a disadvantage of adopting the distance correlation . in this paper , we demonstrate that an o( ) algorithm for a version of the distance correlation exists . the main idea behind the proposed algorithm is to use an idea rooted in the avl tree structure . the same idea has been utilized to develop a fast algorithm for computing kendall 's rank correlation coefficient . we extend it to make it suitable for our purpose . the derivation of the fast algorithm also involves a significant reformulation of the original version of the distance correlation . details are presented in this paper . in simulations , not only do we demonstrate the effectiveness of the fast algorithm , but we also verify that the advantage of using distance correlation ( in comparison with other existing methods ) becomes more evident when the sample size increases . these experiments become feasible due to the availability of the proposed fast algorithm . in one experiment ( see details in section [ sec : screening ] ) , we increased the sample size -fold from a previously published simulation study . the rest of this paper is organized as follows . section [ sec : reviewdistcorr ] reviews the distance covariance / correlation and its relevant properties . in section [ sec : unbiased ] , we consider a reformulation of the distance covariance , such that the new estimator is both unbiased and a u - statistic . in section [ sec : algorithm ] , an algorithm with an average complexity of is presented . extensive simulations are presented in section [ sec : simulations ] to demonstrate the additional capability we obtained due to the proposed fast algorithm . finally , some concluding remarks are made in section [ sec : conclude ] . a detailed description of the algorithm is relegated to the appendix , along with most of the technical proofs . distance covariance and distance correlation were introduced in 2005 by one of the co - authors of this paper , g. j. székely , in several lectures , to address a deficiency of pearson 's correlation , namely that pearson 's correlation can be zero for dependent variables . in the following , we start with a definition of the distance covariance . the population distance covariance between random vectors and with finite first moments is the nonnegative number defined by a weighted norm measuring the distance between the joint characteristic function ( c.f .
) of and , and the product of the marginal c.f.s of and . if and take values in and , respectively , is , where . the integral exists provided that and have finite first moments . this immediately shows that the distance covariance is zero if and only if the underlying random variables are independent . the beauty of the kernel is that the corresponding sample statistic has the following surprisingly simple form . denote the pairwise distances of the observations by , and the pairwise distances of the observations by , for , and denote the corresponding double centered distance matrices by and , where . it is clear that the row sums and column sums of these double centered matrices are 0 . the squared sample distance covariance is given by the simple formula , the corresponding squared sample variance is , and we can define the sample distance correlation as the standardized sample covariance : . for more details see and a discussion paper . it is clear that is rigid motion invariant and scale invariant . for recent applications of distance correlation , see , e.g. , li2012 and . the population version of the distance covariance and distance correlation can be defined without characteristic functions , see . this definition is as follows . let and be random variables with finite expectations . the random distance functions are and . here the primed random variable denotes an independent and identically distributed ( i.i.d . ) copy of the variable , and similarly are i.i.d . introduce the real - valued function \[ m(x , f_x ) = e|x - x'| = \int |x - x'| \ , df_x(x ' ) , \] where is the cumulative distribution function ( cdf ) of , and , which is a real - valued random variable . for simplicity we write and . next we define the counterpart of the centered distance matrices . the centered distance function is . for random variables we have , where \[ = e[m(x , f_x ) ] = \int \!\!\int |x - x'| \ , df_x(x ' ) \ , df_x(x ) . \] similarly define the centered distance function and the random variable . now for i.i.d . , and i.i.d . , such that and have finite expectations , the population distance covariance is defined by . we have that is always nonnegative , and equals zero if and only if and are independent . it is clear by inspection that , without further effort , the implementation of the sample distance covariance and the corresponding sample distance correlation requires o( ) steps .
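for concreteness , the following is a minimal o( ) reference implementation of the squared sample distance covariance and the sample distance correlation for univariate samples , written directly from the double - centering formulas above ; the function names are ours , and this is exactly the straightforward implementation whose cost the rest of the paper improves upon .

```python
import numpy as np

def dcov2(x, y):
    """squared sample distance covariance of two 1-d samples, computed directly
    from the double-centered distance matrices; o(n^2) time and memory."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                  # pairwise distances a_kl
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()                                # average of entrywise products

def dcor(x, y):
    """sample distance correlation: the standardized sample distance covariance."""
    v = dcov2(x, x) * dcov2(y, y)
    return np.sqrt(dcov2(x, y) / np.sqrt(v)) if v > 0 else 0.0
```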
in this paperwe show that for real - valued random variables and , we do not need more than o( ) steps .in this section , a reformulation is given in section [ sec : reformulation ] .we then show in section [ sec : u - stat ] that the newly formed unbiased estimator is a u - statistic .we will work with the unbiased version of the squared sample distance covariance , which is published in .the definition is as follows .let be a symmetric , real valued matrix with zero diagonal , .define the -centered matrix as follows : the -th entry of is here the -centered " is so named because as shown below , the corresponding inner product ( which will be specified in ) defines an unbiased estimator of the squared distance covariance .let denote a sample of observations from the joint distribution of random vectors and .let be the euclidean distance matrix of the sample from the distribution of , and be the euclidean distance matrix of the sample from the distribution of .then if , for , the following is an unbiased estimator of squared population distance covariance .the proof of the above proposition is in the appendix of .let denote the inner product defined in .the following notations will be used . define the column and row sums as follows : will need the following lemma .[ lem : omega1 ] if is the inner product defined in then we have for the proof see the appendix .formula will be used to prove that ( i ) the estimator in is a u - statistic and thus we can apply the relevant limit theorems to study its asymptotic behavior ; ( ii ) the estimator in can be computed in steps .suppose is a sample . for positive integer ,let denote all the distinct -subsets of .for a set , we define notation .let be a symmetric real - valued or complex - valued kernel function of variables . for each ,the associated u - statistic of order , , is equal to the average over ordered samples of size of the sample values . in other words , for u - statistics , we can verify the following lemma .[ lem : check01 ] for , we denote where is defined in after removing the element .then we must have in , each term is counted times on both sides .hence the equality holds .in fact , using arithmetic deduction , one can prove that the converse of the above is also true .in other words , the _ jackknife invariance _ is a necessary and sufficient condition for being u - statistics . for a very similar ( equivalent ) approach[ lem : check02 ] if there exists a positive integer , such that for any , function satisfies and , then there must be a kernel function of order , such that can be written in a form as in ; i.e. , is a u - statistic .a proof of the above can be found in the appendix .the two lemmas above show that the recursive relation is a necessary and sufficient condition for a u - statistic . for later use ,we explicitly restate the result below .[ th : check01 ] let be a statistic of a sample .let , , be a statistic of a reduced sample ; i.e. , is the statistic after removing the observation .the necessary and sufficient condition for to be a u - statistic of order is holds for all .the above can be extended to a two - sample problem , in which a sample is for . by replacing with ,all the previous arguments still hold . .combine lemma [ lem : check01 ] and lemma [ lem : check02 ] , and simplify , we have .let denote the inner product that is defined in .note is based on the entire sample ( i.e. , . 
for ,let denote the corresponding statistic after knocking out pair from the entire sample .the following lemma establish counterpart for , where .[ lem : omega2 ] for , let , , , and denote the corresponding sums after entry is removed from the sample . if is the inner product that is defined in after knocking off the -th entry , we have we will not provide the proof for lemma [ lem : omega2 ] , because it will be identical with the proof of lemma [ lem : omega1 ] . [th : u - stat ] estimator inner product that is defined in is a u - statistic .the kernel function of the corresponding u - statistic is the inner product that was defined in with .see a proof in the appendix .now we know that is a u - statistic and it is easy to see that is in fact a u - statistic with a degenerate kernel under the null hypothesis of independence of and , thus we can see from corollary 4.4.2 of that if the second moments of and are finite then under the null hypothesis , the limit distribution of has the form , where , and are i.i.d .standard normal random variables . under the alternative hypothesiswe have that , thus we can easily construct a consistent test of independence . for a technically much more difficult approach ,see where a similar result was derived for a related v - statistic using deep results on complex - valued gaussian processes .we now argue that when and are univariate , there is an o algorithm to implement .we start with several intermediate results , which are presented as lemmas below .[ lem : aidot ] denote for , we also denote we have a proof is relegated to the appendix . due to symmetry , the following is the counterpart fact for .we state it without a proof .[ lem : bidot ] denote for , we denote we have using formulas and , the following two equations can be easily established .we state them without a proof .[ cor : abdots ] we have and the following lemma will be used .[ lem : aijbij ] we define a sign function , for , for any sequence , for , we define the following is true : .\ ] ] we have .\end{aligned}\ ] ] per the definition of , one can verify that the above equates to .[ lem : gamma ] for any sequence , there is an o algorithm to compute for all ( ) , where .again , we relegate the proof to the appendix .the main idea of the proposed algorithm is a modification as well as an extension of the idea that was used in and , which developed a fast algorithm for computing the kendall s rank correlation coefficient .the principle of the avl tree structure was adopted .despite they are in a similar spirit , the algorithmic details are different .we now present the main result in the following theorem .[ th : fastalgo ] the unbiased estimator of the squared population distance covariance ( that was defined in ) can be computed by an o( ) algorithm . in lemma[ lem : omega1 ] , the unbiased statistic has been rewritten as in . for the first term on the right hand side of , per lemmas [ lem : aijbij ] and[ lem : gamma ] , there is an o( ) algorithm to compute it . for the second term on the right hand side of , note that quantities , and that were defined in lemmas [ lem : aidot ] and [ lem : bidot ] , respectively , are partial sums , which can be computed for all s with o( ) algorithms .the factor is inserted , because one may need to sort s or s in order to compute for , and .then by and , all and can be computed at order o( ) .consequently , the second term on the right hand side of can be computed by using an o( ) algorithm . 
for the third term on the right hand side of , using and in corollary [ cor : abdots ] , we can easily see that it can be computed via an o( ) algorithm .from all the above , the theorem is established .for readers convenience , we present a detailed algorithm description in appendix , where algorithm [ alg : partialsum2d ] realizes the idea that is described in the proof of lemma [ lem : gamma ] ; algorithm [ alg : sub01 ] is a subroutine that will be called in algorithm [ alg : partialsum2d ] ; and the algorithm [ alg : fadcor ] is the algorithm that can compute for the distance covariance at o( ) .in section [ sec : implement ] , we describe a matlab and c based implementation of the newly proposed fast algorithm .this fast algorithm enables us to run some simulations with sample sizes that were impossible to experiment with before its appearance .we report some numerical experiments in section [ sec : effectiveness ] .distance correlation has been found helpful in feature screening . in section [ sec : screening ] , we redo experiments on this regard , increasing the sample size from to . it is observed that the advantage of using the distance correlation is more evident when the sample size becomes larger .the fast algorithm was implemented in matlab , with a key step ( of dyadic updating ) being implemented in c. it was then compared against the direct ( i.e. , slow ) implementation .table [ tab : fadcor01 ] presents the average running time for the two different implementations in matlab with replications at each sample size .the sample size goes from ( ) to ( ) . in all these cases , the two methods ended with identical solutions ; this validates our fast algorithm .note a comparison in matlab is not desirable for our fast algorithm .the direct method calls some matlab functions , which achieve the speed of a low - level language implementation , while the implementation of the fast method is not . in theory, the fast algorithm will compare more favorably if both methods are implemented in a low - level language , such as in c or c++ ..running times ( in seconds ) for the direct method and the fast method for computing the distance correlations .the values in the parentheses are sample standard errors . at each sample size , repetitions were run.[tab : fadcor01 ] [ cols=">,>,>",options="header " , ] the fast method allows us to study how the sample distance correlation converge to the population counterpart as a function of the sample size .[ fig : converge01 ] shows the convergence of the sample distance correlation and pearson s correlation .it is worth noting that in cases ( 3)-(8 ) , the pearson s correlation quickly converges to zero , while the sample distance correlation clearly stays away from zero .this experiments shows that a previous observation in fig .[ fig : showcases02 ] should occur with large probability . covering interval of both sample pearson s correlation ( solid line , with low and upper sample quartiles marked by ` ' ) and sample distance correlation ( dotted lines , with both quartiles marked by ` ' ) .the horizontal axis equates the size .the vertical axis corresponds to the values of correlations . in cases ( 3 ) through ( 8) , the two correlations clearly converge to different constants , when the pearson s correlation always seems to converge to zero . 
in li2012 , distance correlation has been proposed to facilitate feature screening in ultrahigh - dimensional data analysis . the proposed sure independence screening procedure based on the distance correlation ( dc - sis ) has been proven to be effective in their simulation study . due to the use of the direct method , they restricted their sample size to . we redo the simulations as in li2012 , but increase the sample size to , i.e. , times the size originally attempted . it is observed that the use of distance correlation becomes more advantageous when the sample size increases . the screening algorithm , which was initially advocated by , works as follows . for each covariate , a ` marginal utility ' function is computed . such a marginal utility function can be pearson 's correlation , the distance correlation that is discussed in this paper , or another dependence measure such as the one in jasa2011sir that was also used in the simulation studies of li2012 . the ` screening ' is based on the magnitude of the values of these marginal utility functions . sometimes a forward , backward , or hybrid stepwise approach is proposed . in this paper , we refrain from further discussion of this potential research direction . our simulation setup follows the one in . note that an alternative approach named _ sure independent ranking and screening _ ( sirs ) was compared against . for a sample , , of two random variables and , the sirs dependence measure ( i.e. , the marginal utility function ) is defined as ^2 , where is an indicator function . the formulation in the above definition seemingly hints at an o( ) algorithm . the following theorem shows that it can be computed via an o( ) algorithm . the proof and the algorithmic details are relegated to the appendix . [ th : sirs ] for a sample , , of a bivariate random vector , the sirs measure in can be computed via an algorithm whose average complexity is o( ) . for completeness , we state our simulation setup below . we generate from a normal distribution with zero mean and covariance matrix , and the error term from the standard normal distribution . two covariance matrices are considered to assess the performance of the dc - sis and to compare it with existing methods : ( 1 ) and ( 2 ) . note that a covariance matrix with entries enjoys a known cholesky decomposition : , where if , and , , for and , . in our simulations , we take advantage of this known decomposition . the dimension varies from to . each experiment was repeated times , and the performance is evaluated through the following three criteria : 1 . : the minimum model size needed to include all active predictors . we report the , and quantiles of out of the replications . : the proportion that an individual active predictor is selected for a given model size in the replications .
: the proportion that all active predictors are selected for a given model size in the replications .the is used to measure the model complexity of the resulting model of an underlying screening procedure .the closer to the minimum model size the is , the better the screening procedure is .the sure screening property ensures that and are both close to one when the estimated model size is sufficiently large .different from , the is chosen to be ] denotes the integer part of .an innovative stopping rule is introduced in for dc - sis .we did not implement it here , because the new stopping rule requires a multivariate version of the distance correlation , which is not covered by this paper .the example is designed to compare the finite sample performance of the dc - sis with the sis and the sirs . in this example , we generate the response from the following four models : * , * , * , * , where is an indicator function .the regression functions in models ( 1.a)-(1.d ) are all nonlinear in .in addition , models ( 1.b ) and ( 1.c ) contain an interaction term , andmodel ( 1.d ) is heteroscedastic . following fan and lv ( 2008 ) ,we choose for , and , where , and .we set in this example to be consistent with the experiments in : challenging the feature screening procedures under consideration . for each independence screening procedure, we compute the associated marginal utility between each predictor and the response .that is , we regard as the predictor vector in this example .tables [ tab : ex1a ] , [ tab : ex1b ] , and [ tab : ex1b-2 ] present the simulation results for , and .the performances of the dc - sis , sis , and sirs are quite similar in model ( 1.a ) , indicating that the sis has a robust performance if the working linear model does not deviate far from the underlying true model .the dc - sis outperforms the sis and the sirs significantly in models ( 1.b)-(1.d ) .both the sis and the sirs have little chance to identify the important predictors and in models ( 1.b ) and ( 1.c ) , and in model ( 1.d ) . comparing tab.s [ tab : ex1a ] and [ tab : ex1b ] with the counterparts in, one can clearly see that the advantage of using the distance correlation becomes more evident , observing smaller sample quantiles of for dc - sis , and larger coverage probabilities in and .distance correlation has been found useful in many applications .a direct implementation of the distance correlation led to an algorithm with sample size . we propose a fast algorithm .its computational complexity is on average .armed with this fast algorithm , we carry out some numerical experiments with sample sizes that have not been attempted before .we found that in many cases , the advantage of adopting the distance correlation becomes even more evident .the proposed fast algorithm certainly makes the distance correlation more applicable in situations where statistical dependence needs to be evaluated .algorithm [ alg : fadcor ] is the algorithm that can compute for the distance covariance at o( ) .algorithm [ alg : partialsum2d ] realizes the idea that is described in the proof of lemma [ lem : gamma ] .algorithm [ alg : sub01 ] is a subroutine that will be called in algorithm [ alg : partialsum2d ] .[ alg : fadcor ] algorithm : fast computing for distance covariance ( fadcor ) * inputs : * observations , and . +* outputs : * the distance covariance that was defined in . 1 . sort , and .let and denote the order indices ; i.e. , if for , , then is the smallest observations among . 
+ similarly if for , , then is the smallest observations among .2 . let , and denote the order statistics .+ denote the partial sums : they can be computed using the following recursive relation : , 3 .compute , , , and that are defined in lemma [ lem : aidot ] and [ lem : bidot ] , using the following formula : for , we have 4 .compute and per their definitions in lemma [ lem : aidot ] and [ lem : bidot ] .5 . using and ,compute .[ alg:1.part2 ] 6 . using and ,compute and .[ alg:1.part3 ] 7 ._ partialsum2d _ to compute for , , , and . 8 . using to compute .[ alg:1.part1 ] 9 .finally , apply the results of steps [ alg:1.part2 ] ., [ alg:1.part3 ] . , and [ alg:1.part1 ] .[ alg : partialsum2d ] algorithm : fast algorithm for a -d partial sum sequence ( partialsum2d ) * inputs : * observations , , and . + * outputs : * quantity that is defined in lemma [ lem : aijbij ] . 1 .compute for the order statistics for . then rearrange triplets s such that we have .each triplet ( ) stay unchanged .2 . let denote the order statistics for , and assume that , are the order indices ; i.e. , if , then is the -th smallest among . without loss of generality, we may assume that .evidently aforementioned function is invertible .let denote its inverse .define the partial sum sequence : for , the following recursive relation enables an o( ) algorithm to compute for all s , 4 . for ,define again the above partial sums can be computed in o( ) steps .compute 6 . callsubroutine _ dyadupdate _ to compute for for all , we have that [ alg : sub01 ] subroutine : a dyadic updating scheme ( dyadupdate ) * inputs : * sequence and , where is a permutation of .+ * outputs : * quantities , . 1 . recall that we have assumed .if is not dyadic , we simply choose the smallest such that . recall that for , , we define a close interval .\ ] ] 2 .assign , and .3 . for , we do the following . 
1 .fall all s , such that .then for these s , do update 2 .find nonnegative integers such that let .for , compute 3 .one can verify the following equalities : the following will be used in our simplification too .we have similarly , we have in the following , we simplify the statistic in .we have .\end{aligned}\ ] ] furthermore , we have \\ & & + \frac{1}{n(n-2)^2 ( n-3 ) } \sum_{i \neq j } ( a_{i\cdot } + a_{\cdot j})(b_{i\cdot } + b_{\cdot j})\\ & & -\frac{a_{\cdot \cdot } b_{\cdot \cdot}}{(n-1)(n-2)^2(n-3 ) } \\ & \stackrel{\mbox{\eqref{eq : check04},\eqref{eq : check06}}}{= } & \frac{1}{n(n-3 ) } \sum_{i \neq j } a_{ij}b_{ij } -\frac{4}{n(n-2)(n-3 ) } \sum_{i=1}^n a_{i\cdot}b_{i\cdot } \\ & & + \frac{1}{n(n-2)^2 ( n-3 ) } \sum_{i \neq j } ( a_{i\cdot } + a_{\cdot j})(b_{i\cdot } + b_{\cdot j } ) -\frac{a_{\cdot \cdot } b_{\cdot \cdot}}{(n-1)(n-2)^2(n-3)}.\end{aligned}\ ] ] now bringing in and , we have \\ & = & \frac{1}{n(n-3 ) } \sum_{i\neq j } a_{ij}b_{ij } -\frac{2}{n(n-2)(n-3 ) } \sum_{i=1}^n a_{i\cdot}b_{i\cdot } + \frac{a_{\cdot \cdot } b_{\cdot \cdot}}{n(n-1)(n-2)(n-3)},\end{aligned}\ ] ] which is .we use arithmetic induction .suppose , becomes by defining , we can verify that is a kernel function with variables .consequently , can be written as .now suppose for any , has the form as in , with the function that was defined above .applying with , we can show that still has the form as in , with the same function that was defined above .we omit further details .it is evident to verify that the followings are true : for , we then have for the right hand side of , we have the following : }{(n-1)(n-3)(n-4 ) } \\ & & + \frac{(n-4)a_{\cdot \cdot } b_{\cdot \cdot } + 4\sum^n_{k=1 } a_{k\cdot } b_{k \cdot}}{(n-1)(n-2)(n-3)(n-4 ) } \\ & = & \frac { \sum_{i \neq j } a_{ij}b_{ij}}{n-3 } - \frac{2}{(n-2)(n-3 ) } \sum^n_{i=1 } a_{i\cdot } b_{i \cdot } + \frac{a_{\cdot \cdot } b_{\cdot \cdot}}{(n-1)(n-2)(n-3)}.\end{aligned}\ ] ] compare with , we can verify that the above equates to , which ( per theorem [ th : check01 ] ) indicates that is a u - statistic .the kernel function of the corresponding u - statistic is the inner product that was defined in with . without loss of generality ( wlog ) , we assume that .we have note that we can verify the following equations : where .we can rewrite as follows : we will argue that the three summations on the right hand side can be implemented by o algorithms . first , term is a formula for partial sums .it is known that an o( ) algorithm exists , by utilizing the relation : second , after sorting s at an increasing order , sums is transferred into a partial sums sequence .hence it can be implemented via an o( ) algorithm .if quicksort ( * ? ? ?* section 5.2.2 : sorting by exchanging ( pages 113 - 122 ) ) is adopted , the sorting of s can be done via an o( ) algorithm .we will argue that sums can be computed in an o( ) algorithm .wlog , we assume that , is a permutation of the set .wlog , we assume that is dyadic ; i.e. , , where or is a nonnegative integer . for , , we define an close interval .\ ] ] we then define the following function where , , and .we argue that computing the values of for all , can be done in o( ) .first of all , it is evident that for all , we have suppose for all , s have been computed for all and . for each , there is only one , such that $ ] . by the definition of , we have the above dynamic programming style updating scheme needs to be run for times ( i.e. 
, for all ) , however each stage requires no more than updates .overall , the computing for all takes no more than o( ) . for a fixed , ,we now consider how to compute for .if , obviously we have . for , there must be a unique sequence of positive integers , such that since , we must have .we then define as follows one can then verify the following : for , since , the above takes no more than o( ) numerical operations .consequently , computing for all , can be done in o( ) .( we realized that the above approach utilized the avl tree structure . ) from all the above , we established the result .we have the following sequence of equations : ^ 2 & = & \sum_{j=1}^n \left[\sum_{i=1}^n x_i\mathbf{1}(y_i < y_j ) \right ] \cdot \left[\sum_{k=1}^n x_k \mathbf{1}(y_k < y_j ) \right ] \\ & = & \sum_{j=1}^n \sum_{i=1}^n \sum_{k=1}^n x_i \cdot x_k \cdot \mathbf{1}(y_i < y_j \mbox { and } y_k < y_j)\\ & = & \sum_{i=1}^n x_i \left [ \sum_{k : y_i \le y_k } x_k \sum_{j=1}^n \mathbf{1}(y_k< y_j ) + \sum_{k : y_i > y_k } x_k \sum_{j=1}^n \mathbf{1}(y_i < y_j ) \right ] .\end{aligned}\ ] ] the last expression implies the following steps to compute for sirs . since s , s , and s are partial sums , it is easy to verify that each of the above steps can be done within o operations on average , hence the entire algorithm takes o operations on average .adelson - velskii , g. , and landis , e. m. 1962 , `` an algorithm for the organization of information , '' _ proceedings of the ussr academy of sciences _ , 146 , 263266 .( russian ) english translation by myron j. ricci in soviet math .doklady , 3:1259 - 1263 , 1962 .kong , j. , klein , b. e. k. , klein , r. , and wahba , g. 2012 , `` using distance correlation and ss - anova to acess associations of familial relationnships , lifestyle factors , diseases , and mortality , '' _ proceedings of the national academy of sceineces _ , 109(50 ) , 2035220357 .koroljuk , v. s. , and borovskich , y. v. 1994 , _ theory of u - statistics _ , vol .273 of _ mathematics and its applications _ , dordrecht : kluwer academic publishers group .translated by p. v. malyshev , d.v .malyshev from the 1989 russian original edition .
distance covariance and distance correlation have been widely adopted for measuring the dependence of a pair of random variables or random vectors . if the computation of distance covariance and distance correlation is implemented directly according to its definition , then its computational complexity is o( ) , which is a disadvantage compared to other faster methods . in this paper we show that the computation of distance covariance and distance correlation of real - valued random variables can be implemented by an o( ) algorithm , and this is comparable to other computationally efficient algorithms . the new formula we derive for an unbiased estimator of the squared distance covariance turns out to be a u - statistic . this fact implies some nice asymptotic properties that were derived before via more complex methods . we apply the fast computing algorithm to some synthetic data . our work will make distance correlation applicable to a much wider class of applications . * author's footnote : * dr. xiaoming huo is a professor in the school of industrial and systems engineering at the georgia institute of technology . he has been serving as a rotator at the national science foundation since august 2013 . mailing address : 765 ferst dr , atlanta , ga 30332 ( email : xiaoming.gatech.edu ) . dr. gábor j. székely is a program officer at the national science foundation . mailing address : 4201 wilson blvd , arlington , va 22203 ( email : gszekely.gov ) . keywords : distance correlation , fast algorithm , statistical dependence
in , christopher hardin and alan taylor present a remarkable consequence of the axiom of choice . consider an arbitrary set and a function . we think of the domain of as representing time , and for each , we think of as the state of some system at time . the question hardin and taylor address is whether or not it is possible to predict the state of the system from its past behavior . in other words , can we predict from ? if there is no restriction on the function , then it seems that gives us no information about , and therefore it is hard to see how such a prediction could be possible . nevertheless , using the axiom of choice , hardin and taylor produce a prediction strategy with the property that for every function , the prediction is correct for all but countably many values of . it may be helpful to imagine that at each time there is an agent who is making the prediction of . each agent can see the past but not the present or future , so the agent at knows but no other values of . hardin and taylor make an extensive study of the predictions that can be made by agents in this and similar situations in their book . one idea considered by hardin and taylor is the possibility that the agents can see the past but do not know what time it is ; in other words , the agent at time knows the value of for every , but does not know the value of . they define a prediction strategy to be _ anonymous _ if it can be used by agents whose knowledge is restricted in this way , and they show that even with this restriction on the knowledge of the agents , there is a prediction strategy that guarantees that for every function , all but countably many agents are correct . in this paper we consider several variations on the notion of anonymity . what if the agents not only do not know what time it is , but also do not know the rate at which time passes ? what if each agent knows the order of past events , but nothing about the length of time between events ? we will see that in some cases the agents are still able to perform well , with only countably many making incorrect predictions , while in other cases they do very badly , with only countably many making correct predictions . to state our results formally , we will need some definitions . let denote the set of all functions from to . each such function represents a possible time evolution of the system under study . following hardin and taylor , we call such functions _ scenarios _ . given a scenario , the predictions made by all of the agents can be thought of as determining another scenario , which we will denote . thus , , and for any time , denotes the prediction of made by the agent at time under the scenario . since the agent at can see only the past , scenarios and will be indistinguishable to him if , and therefore his predictions in these scenarios should be the same . we therefore make the following definition . a function is called a _ predictor _ if it has the property that for all functions and all , if then . we say that the agent at _ guesses correctly _ if his prediction of the value of is correct , that is , if ; otherwise , agent _ guesses incorrectly _ . we can now state hardin and taylor's main result from . [ thm : ht ] there is a predictor such that for every scenario , only countably many agents guess incorrectly ; in other words , is countable . the details of the proof can be found in , but the idea is simple .
using the axiom of choice , we can let be a well ordering of .now for any and any , let = \{g \in { } ^\mathbb{r}s : f \upharpoonright ( -\infty , x ) = g \upharpoonright ( -\infty , x)\} ] .in other words , is the least scenario that is consistent with the information available to the agent at time . let .hardin and taylor show that is well ordered ( by the usual ordering on ) , and any well ordered subset of is countable .( to see why a well ordered set must be countable , define a one - to - one function by letting be a rational number larger than but smaller than the next element of , if there is one . ) thus , with this predictor , only countably many agents guess incorrectly .next we present hardin and taylor s definition of anonymity .suppose that is a scenario , and define another scenario by the equation , for some constant .in other words , , where is the function . we will call a _ shift function_. we could think of as the scenario that represents precisely the same time evolution of the system as , but shifted in time by units . now consider agents at times and who can see the past , but who do nt know where they are located in time . for any , we have this means that the past from the point of view of the agent at in scenario looks exactly the same as the past from the point of view of the agent at in scenario .we would therefore expect these agents to make the same predictions .in other words , we would expect , or equivalently .this motivates hardin and taylor s definition of anonymity .a predictor is _ anonymous _ if for every scenario and every real number , anonymity of a predictor can be thought of as a sort of invariance under shifts .this suggests a natural generalization of the notion of anonymity ( see ) .let be a family of functions from to .a predictor is_ -anonymous _if for every scenario and every , thus , anonymity is just -anonymity , where is the family of shift functions .but there are other families of functions that we might use instead : if is a scenario and , then is a scenario that is the same as , except that events have been shifted in time , and also the rate at which events happen has been changed .thus , a -anonymous predictor is one that can be used by agents who not only do nt know what time it is , they also do nt know how fast time passes . similarly , - and -anonymous predictors are predictors that can be used by agents who are insensitive to more extreme distortions of time .hardin and taylor ask in how successful a -anonymous predictor can be ( see question 7.8.3 on p. 82 ) .note that , and therefore the requirement of -anonymity becomes stronger as increases . 
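since the displayed anonymity conditions are elided in this copy , one natural way to write them out , reconstructed from the surrounding prose ( the symbols \( p \) for the predictor , \( s_c \) for the shift , and \( \mathcal{f} \) for the family are our notational assumptions ) , is
\[
p(f \circ s_c)(x) \;=\; p(f)(x + c) \qquad \text{for every scenario } f \text{ and all } c , x \in \mathbb{R},
\]
for anonymity with respect to the shift functions \( s_c(x) = x + c \) , and more generally
\[
p(f \circ t)(x) \;=\; p(f)(t(x)) \qquad \text{for every scenario } f , \text{ every } t \in \mathcal{f} , \text{ and all } x \in \mathbb{R},
\]
for \( \mathcal{f} \) - anonymity ; the first condition is the special case of the second in which \( \mathcal{f} \) is the family of shifts .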
we will show that there is a -anonymous predictor that always performs well ( see theorem [ thm : main1 ] ) , but there is a scenario in which all -anonymous predictors perform badly ( see theorem [ thm : main2 ] ) .we need a few definitions and lemmas before we can prove our main result about -anonymous predictors .recall that a scenario is said to be _ periodic _ if there is a number such that for all , ; is called a _ period _ of .equivalently , is periodic with period if .we will need to generalize this concept to functions in .we will say that a scenario is _ affine - invariant _ if there is some such that is not the identity function and .since our agents can see only the past , it will also be convenient to have names for scenarios that are periodic or affine - invariant up to some point in time .suppose that is a real number and is a scenario .if there is some such that , then we say that is _ past - periodic on with period . similarly , if there is some nonidentity function such that , then we say that is _ past - affine - invariant on . clearly , if a scenario is periodic or affine - invariant , then it is also past - periodic or past - affine - invariant . in the other direction ,if a scenario is past - periodic or past - affine - invariant , then it can be modified to make it periodic or affine - invariant , as our next two lemmas show .[ lem : perext ] suppose the scenario is past - periodic on .then there is a periodic scenario with the property that .choose so that is past - periodic on with period , and define an equivalence relation on by let ] , for every real number . to define ,we simply let the value of on any equivalence class ] .it is easy to see that agrees with on and is periodic .[ lem : afext ] suppose the scenario is past - affine - invariant on .then there is a scenario that is affine - invariant and has the property that .since is past - affine - invariant on , there is some nonidentity function such that . for any positive integer , let denote the -fold composition of with itselfsimilarly , let denote the -fold composition of with itself , and let be the identity function .now if we define to mean that there is some integer such that , then is an equivalence relation on .the rest of the proof is similar to the proof of lemma [ lem : perext ] : is constant on the intersection of each equivalence class with , and we can define to agree with on and to be constant on all equivalence classes .we will also need to know that the periods in periodicity and past - periodicity of a function always match .[ lem : permatch ] suppose is past - periodic on with period .if is also periodic , then it has period .suppose is periodic with period . to see that is also a period of , consider any real number .choose an integer such that is less than .then we are now ready to prove that there is a -anonymous predictor that performs very well .[ thm : main1 ] there is a -anonymous predictor such that for every scenario , is countable . in other words ,only countably many agents guess incorrectly .the predictor we use is similar to the one described in . 
using the axiom of choice ,fix a well ordering of that lists periodic scenarios first , then other affine - invariant scenarios , and then the rest of the scenarios .suppose is an arbitrary scenario and is an arbitrary agent .let = \ { g \in { } ^{\mathbb{r } } s : f \upharpoonright ( -\infty , x ) = ( g \circ t ) \upharpoonright ( -\infty , x ) \text { for some }\} ] .to define , choose any for which , and let we first need to show that is well defined that is , that the value of does nt depend on the choice of .suppose that for some scenario and some agent , there are two distinct functions such that in order to prove that is well defined , we need to show that . in other words , letting and , we must show that . of course, if then this clearly holds , so we may assume that .let .then , and it follows from that and in particular , reveals that is past - affine - invariant on , so lemma [ lem : afext ] tells us that there is an affine - invariant function that agrees with on .therefore agrees with on , so is an element of ] , it follows that .since lists affine - invariant scenarios first , and is affine - invariant , we deduce that is affine - invariant as well .so we may fix an in with the property that is not the identity function and an immediate consequence of this is that . by replacing with if necessary , we may assume that . to prove that , we now consider two cases .case 1 : and commute .if , then since , this means that the linear function has two distinct fixed points a contradiction , since is not the identity function .hence , and since we have , we conclude that .therefore as required. case 2 : and do not commute .consider any .in what follows , we abbreviate expressions like to to prevent parentheses from stacking up .note that implies that , and therefore .thus , we let the reader verify that is a shift function , and since and do nt commute , it is not the identity function .therefore we can fix so that . equation now tells us that ; in other words , is past - periodic on with period . by lemma [ lem : perext ] we know that there is a periodic scenario that agrees with on .fix such a .then agrees with on , and therefore is an element of ] . to see why ,suppose is an element of ] .the other inclusion has an almost identical proof .the fact that = [ f]_{t(x)} ] , we can choose some such that , and therefore applying the definition of ( equation ) , we have since was arbitrary , it follows that .finally , we must prove that with the predictor , only countably many agents guess incorrectly .let be the set of agents who guess incorrectly ; that is , .suppose that and are both in and .we claim that .to prove this claim , we first show that \subseteq [ f]_x ] .fix a in so that agrees with on . since , it follows trivially that agrees with on ; hence , is in ] , the -least element of ] . in other words , .we now claim that .suppose , toward contradiction , that .choose a in such that agrees with on .then by the definition of , where the last equal sign follows from the fact that and agree on and . however , this is a contradiction , since we assumed that guesses incorrectly .so .combining and , we have . now , suppose has an infinite descending chain .by the claim we have just proven , this gives us an infinite descending chain in , contradicting the fact that is a well ordering .thus has no infinite descending chain , so it is well ordered , and therefore countable .in this section , we construct a scenario in which agents using _ any _ -anonymous predictor perform poorly . 
broadly speaking, this scenario ensures that every agent sees the same thing looking into the past , up to distortion by elements in .the condition of -anonymity then implies that every agent guesses the same thing .the scenario takes each of its values only countably many times , so only countably many agents guess correctly .we spend the rest of the section working out the details of this argument .central to the construction of this scenario will be functions that are _ smooth _ ( that is , infinitely differentiable at each point in their domain ) , but fail to be analytic at certain points . for example the function is smooth everywhere , but fails to be analytic at 0 . note in particular that for every positive integer , , where denotes the derivative of .we can use to define a _ smooth transition function _\to [ 0,1] ] , and for every positive integer , . (the subscripts and here indicate that these are one - sided derivatives . ) by shifting and rescaling , we can define similar smooth transition functions connecting other points .let and be two points in , and suppose that and ; we say that is _ above and to the right of _we define the function \to [ q_1 , q_2] ] .when we use the notation , we implicitly assume that is above and to the right of .notice that since is increasing , it is invertible. also , by the chain rule , we have and using the chain rule times , we have in particular , at the endpoints of the domain we have the one - sided derivatives .thus , is very flat at and . before continuing ,we note one reason that these smooth transitions functions will be useful to us .suppose that , , and are three points in the plane , with above and to the right of , which is above and to the right of .if we concatenate and ( in the obvious way ) , we obtain a smooth increasing function passing through all three of the points , , and . repeating this reasoning, we can create a smooth increasing function passing through a sequence of points going up and to the right in .we now use our smooth transition functions to define an equivalence relation on .we will say that a point in is a _ rational pair _ if both and are rational numbers .since the set of rational pairs is countable , the set is countable as well . for real numbers and , we define to mean that there is some such that either or .we will say that a finite sequence of numbers is an _-path of length _ from to if now we let mean that there is an -path from to . for example , if , then is an -path of length 2 from to , so .for any real number , the one - term sequence is an -path of length 1 from to , and therefore ; in other words , is reflexive .we have already seen that is reflexive .clearly is symmetric .it follows that the reverse of an -path is another -path , and therefore is symmetric .transitivity of can be proven by concatenating -paths . to see that the equivalence classes are countable , observe that since is countable , for any there are only countably many such that .it follows that there are only countably many -paths starting at . for each , use the mean value theorem to fix a so that . rearranging this equation and taking the limit of both sides yields the left side of this equation is the definition of , and the right side equals by hypothesis , so .but now the same reasoning can be applied to , yielding .since we can continue to repeat this reasoning , the lemma follows by induction .[ thm : main2 ] there is a set and a scenario such that for every -anonymous predictor , is countable . 
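to make the smooth - transition construction above concrete , here is a minimal numerical sketch ( ours ; the helper names are assumptions ) of the standard bump - based transition function and its rescaling between two points , as used to build smooth increasing functions through a sequence of points :

import math

def u(x: float) -> float:
    """smooth but non-analytic at 0: every one-sided derivative vanishes there."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def transition(t: float) -> float:
    """increasing smooth map [0,1] -> [0,1], infinitely flat at both endpoints."""
    return u(t) / (u(t) + u(1.0 - t))

def transition_between(p1, p2, x):
    """rescaled transition through p1 = (x1, q1) and p2 = (x2, q2),
    with p2 above and to the right of p1."""
    (x1, q1), (x2, q2) = p1, p2
    t = (x - x1) / (x2 - x1)            # map [x1, x2] onto [0, 1]
    return q1 + (q2 - q1) * transition(t)

# concatenating such pieces through points going up and to the right yields a
# smooth increasing function passing through all of them:
pts = [(0.0, 0.0), (1.0, 0.5), (3.0, 2.0)]
for xv in (0.0, 0.5, 1.0, 2.0, 3.0):
    seg = (pts[0], pts[1]) if xv <= 1.0 else (pts[1], pts[2])
    print(xv, transition_between(seg[0], seg[1], xv))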
in other words ,only countably many agents guess correctly .let {\sim_f^ * } : x \in \mathbb{r } \} ] .suppose is a -anonymous predictor .we claim that using this predictor , all agents make the same guess in the scenario .assuming this claim , we can let {\sim_f^ * } \in s ] guess correctly , and everyone else guesses incorrectly .thus , proving this claim will suffice to prove the theorem . to construct ,let be an increasing sequence of rational numbers converging to with .let to be an increasing sequence of rationals converging to with the property that for every , notice that for every we have and and therefore for each negative integer ,let and .for every integer , let , and for let .we claim that the following function has the properties that we want : x \in [ w+i , w+i+1] i \ge 0 ] . then for each in ] , then now , to show that , suppose .fix large enough that .now suppose . then ] for some integer , and .note that , since . therefore {\sim_f^ * } & & \text{(by the definition of )}\\ & = [ x]_{\sim_f^ * } & & \text{(since )}\\ & = f(x ) & & \text{(by the definition of ).}\end{aligned}\ ] ] there is plenty to discover about -anonymous predictors when .in particular ,what if consists of increasing polynomials or , alternatively , increasing analytic functions ?what if consists of increasing computable functions ? at what point do agents start to perform poorly ?9 hardin , c. s. and taylor , a. d. , a peculiar connection between the axiom of choice and predicting the future , _ amer .math . monthly _ * 115 * ( 2008 ) , pp .hardin , c. s. and taylor , a. d. , _ the mathematics of coordinated inference : a study of generalized hat problems _ , springer international publishing , cham , switzerland , 2013 .
consider an arbitrary set and an arbitrary function . we think of the domain of as representing time , and for each , we think of as the state of some system at time . imagine that , at each time , there is an agent who can see and is trying to guess ; in other words , the agent is trying to guess the present state of the system from its past history . in a 2008 paper , christopher hardin and alan taylor use the axiom of choice to construct a strategy that the agents can use to guarantee that , for every function , all but countably many of them will guess correctly . in a 2013 monograph they introduce the idea of anonymous guessing strategies , in which the agents can see the past but do not know where they are located in time . in this paper we consider a number of variations on anonymity . for instance , what if , in addition to not knowing where they are located in time , agents also do not know the rate at which time is progressing ? what if they have no sense of how much time elapses between any two events ? we show that in some cases agents can still guess successfully , while in others they perform very poorly .
the evolution of many complex systems in the natural , economic , and social sciences is usually presented in the form of time series . in order to analyze time series , a number of statistical measures have been introduced in the literature . these include such concepts as probability distributions , autocorrelations , multi - fractals , complexity , or entropy densities . recently , it has been pointed out that the _ transfer entropy _ ( te ) is a very useful instrument for quantifying statistical coherence between time - evolving statistical systems . in particular , in schreiber's paper it was demonstrated that te is especially expedient when global properties of time series are analyzed . prominent applications are in multivariate analysis of time series , including , e.g. , the study of multichannel physiological data or bivariate analysis of historical stock exchange indices . methods based on te have substantial computational advantages , which are particularly important when analyzing large amounts of data . in all past works , including , the emphasis has been on various generalizations of transfer entropies that were firmly rooted in the framework of shannon's information theory . these so - called shannonian transfer entropies are , indeed , natural candidates due to their ability to quantify , in a non - parametric and explicitly non - symmetric way , the flow of information between two time series . an ideal testing ground for various te concepts are financial - market time series , because of the immense amount of electronically recorded financial data . recently , economics has become an active research area for physicists . they have investigated stock markets using statistical - physics methods , such as percolation theory , multifractals , spin - glass models , information theory , complex networks , path integrals , etc . in this connection the name econophysics has been coined to denote this new hybrid field on the border between statistical physics and ( quantitative ) finance . in the framework of econophysics it has become steadily evident that market interactions are highly nonlinear , unstable , and long - ranged . it has also become apparent that all agents ( e.g. , companies ) involved in a given stock market exhibit interconnectedness and correlations , which represent an important internal force of the market . typically one uses correlation functions to study the internal cross - correlations between various market activities . the correlation functions have , however , at least two limitations : first , they measure only linear relations , although it is clear that linear models do not faithfully reflect real market interactions . second , all they determine is whether two time series ( e.g. , two stock - index series ) have correlated movement . they do not , however , indicate which series affects which ; in other words , they do not provide any directional information about cause and effect . some authors use such concepts as time - delayed correlation or time - delayed mutual information in order to construct asymmetric `` correlation '' matrices with inherent directionality . this procedure is in many respects _ ad hoc _ , as it does not provide any natural measure ( or quantifier ) of the information flow between the involved series . in the present paper we study multivariate properties of stock - index time series with the help of the econophysics paradigm .
in order to quantify the information flow between two or more stock indices , we generalize schreiber's shannonian transfer entropy to rényi's information setting . with this we demonstrate that the corresponding new transfer entropy provides more detailed information concerning the excess ( or lack ) of information in various parts of the underlying distribution resulting from updating the distribution on the condition that a second time series is known . this is particularly relevant in the context of financial time series , where the knowledge of tail - part ( or marginal ) events such as spikes or sudden jumps bears direct implications , e.g. , in various risk - reducing formulas in portfolio theory . the paper is organized as follows : in section [ sec2 ] we provide some information - theoretic background on shannon and rényi entropies ( res ) . in particular , we identify the _ conditional _ rényi entropy with the information measure introduced in ref . . apart from satisfying the chain rule ( i.e. , the rule of additivity of information ) , the latter has many desirable properties that are to be expected from a conditional information measure . another key concept , the _ mutual _ rényi entropy , is then introduced in close analogy with shannon's case . the ensuing properties are also discussed . the _ transfer _ entropy of schreiber is briefly reviewed in section [ sec4 ] . there we also comment on the effective transfer entropy of marschinski _ et al . _ . the core quantity of this work , the rényian _ transfer _ entropy ( rte ) , is motivated and derived in section [ sec7 ] . in contrast to the shannonian case , the rényian transfer entropy is generally not positive semi - definite . this is because the re non - linearly emphasizes different parts of the involved probability density functions ( pdf's ) . with the help of campbell's coding theorem we show that the rte rates a gain / loss in the risk involved in the next - time - step behavior of a given stochastic process , say , resulting from learning a new piece of information , namely the historical behavior of another ( generally cross - correlated ) process , say . in this view the rte can serve as a convenient _ rating _ factor of _ riskiness _ in inter - connected markets . we also show that the rényian transfer entropy allows us to amend spurious effects caused by the finite size of a real data set , which in shannon's context must otherwise be handled by means of the surrogate - data technique and the ensuing effective transfer entropy . in section [ sec10 ] we demonstrate the usefulness and formal consistency of the rte by analyzing cross - correlations in various international stock markets . on a qualitative level we use simultaneously recorded data points of eleven stock exchange indices , sampled at a daily ( end - of - trading - day ) rate , to construct the _ heat maps _ and _ net flows _ for both shannon's and rényi's information flows . on a quantitative level we explicitly discuss time series from the dax and s&p500 market indices gathered on a minute - tick basis in the period from december 1990 till november 2009 in the german stock exchange market ( deutsche börse ) .
presented calculations of rnyi and shannon transfer entropiesare based on symbolic coding computation with the open source software .our numerical results imply that rte s among world markets are typically very asymmetric .for instance , we show that there is a strong surplus of an information flow from the asia - pacific region to both europe and the u.s .a surplus of the information flow can be also observed to exists from europe to the u.s . in this last casethe substantial volume of transferred information comes from tail - part ( i.e. , risky part ) of underlying asset distributions .so , despite the fact that the u.s . contributes more than half of the world trading volume , this is not so with information flow .further salient issues , such as dependence of rte on rnyi s parameter or on the data block length are numerically also investigated . in this connectionwe find that the cross - correlation between dax and s&p500 has a long - time memory which is around 200 - 300 mins .this should be contrasted with typical memory of stock returns which are of the order of seconds or maximally few minutes .various remarks and generalizations are proposed in the concluding section [ sec12 ] . for readers convenience we give in appendix a a brief dictionary of market indices used in the main text and in appendix b we tabulate an explicit values of effective transfer entropies used in the construction of heat maps and net information flows .in order to express numerically an amount of information that is shared or transferred between various data sets ( e.g. , two or more random processes ) , one commonly resorts to information theory and especially to the concept of entropy . in this sectionwe briefly review some essentials of shannon s and rnyi s entropy that will be needed in following sections .the entropy concept was originally introduced by clausius in the framework of thermodynamics . by analyzing a carnot engine he was able to identify a new state function which never decreases in isolated systems .the microphysical origin of clausius phenomenological entropy was clarified more than years later in works of boltzman and ( yet later ) gibbs who associated clausius entropy with the number of allowed microscopic states compatible with a given observed macrostate .the ensuing _ boltzmann gibbs entropy _ reads where is boltzmann s constant , is the set of all accessible microstates compatible with whatever macroscopic observable ( state variable ) one controls and denotes the number of microstates .it should be said that the passage from boltzmann gibbs to clausius entropy is established only when the conditional extremum of subject to the constraints imposed by observed state variables is inserted back into .only when this _ maximal entropy prescription _ is utilized turns out to be a thermodynamic state function and not mere functional on a probability space . in information theory , on the other hand , the interest was in an optimal coding of a given source data . by _optimal code _ is meant the shortest averaged code from which one can uniquely decode the source data .optimality of coding was solved by shannon in his 1948 seminal paper . according to shannon s _source coding theorem _ , the quantity corresponds to the averaged number of bits needed to optimally encode ( or zip " ) the source dataset with the source probability distribution . 
on a quantitative level ( [ ii.2.a ] )represents ( in bits ) the minimal number of binary ( yes / no ) questions that brings us from our present state of knowledge about the system to the one of certainty .it should be stressed that in shannon s formulation represents a discrete set ( e.g. , processes with discrete time ) , and this will be also the case here .apart from the foregoing _ operational _ definitions , eq .( [ ii.2.a ] ) has also several axiomatic underpinnings .axiomatic approaches were advanced by shannon , khinchin , fadeev an others . the quantity ( [ ii.2.a ] )has became known as shannon s entropy ( se ) .there is an intimate connection between boltzmann gibbs entropy and shannon s entropy .in fact , thermodynamics can be viewed as a specific application of shannon s information theory : the thermodynamic entropy may be interpreted ( when rescaled to `` bit '' units ) as the amount of shannon information needed to define the detailed microscopic state of the system , that remains uncommunicated " by a description that is solely in terms of thermodynamic state variables . among important properties of se is its concavity in , i.e. for any pair of distributions and , and a real number holds eq . ( [ i.19.a ] ) follows from jensen s inequality and a convexity of for .concavity is an important concept since it ensures that any maximizer found by the methods of the differential calculus yields an absolute maximum rather than a relative maximum or minimum or saddle point . at the same time it is just a sufficient ( i.e. , not necessary ) condition guarantying a unique maximizer .it is often customary to denote se of the source as rather than .note that se is generally not convex in !it should be stressed that the entropy ( [ ii.2.a ] ) really represents a self - information : the information yielded by a random process about itself .a step further from a self - information offers the _joint entropy _ of two random variables and which is defined as and which represents the amount of information gained by observing jointly two ( generally dependent or correlated ) statistical events .a further concept that will be needed here is the _ conditional entropy _ of given , which can be motivated as follows : let us have two statistical events and and let event has a sharp value , then the gain of information obtained by observing is here the conditional probability . for general random one defines the conditional entropy as the averaged shannon entropy yielded by under the assumption that the value of is known , i.e. from ( [ ii.a.5a ] ) , in particular , follows that identity ( [ ii.2.6.a ] ) is known as additivity ( or chain ) rule for shannon s entropy . in statistical thermodynamicsthis rule allows to explain , e.g. , gibbs paradox .applying eq .( [ ii.2.6.a ] ) iteratively , we obtain : another relevant quantity that will be needed is the _ mutual information _ between and .this is defined as : and can be equivalently written as this shows that the mutual information measures the average reduction in uncertainty ( i.e. , gain in information ) about resulting from observation of .of course , the amount of information contained in about itself is just the shannon entropy : notice also that from eq .( [ eq : mi ] ) follows and so provides the same amount of information on as does on . for this reasonsthe mutual information is not a useful measure to quantify a flow of information .in fact , the flow of information should be by its very definition directional . 
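as a concrete illustration of the shannon quantities reviewed above , the following short python sketch ( ours , not from the paper ; probabilities are assumed given as plug - in arrays ) computes the joint , conditional and mutual informations , in bits , from a joint pmf :

import numpy as np

def shannon(p):
    """shannon entropy in bits of a pmf given as a flat array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(pxy):
    """i(x;y) = h(x) + h(y) - h(x,y) for a joint pmf given as a 2-d array."""
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)
    return shannon(px) + shannon(py) - shannon(pxy.ravel())

def conditional_entropy(pxy):
    """h(x|y) = h(x,y) - h(y), the chain rule used in the text."""
    return shannon(pxy.ravel()) - shannon(pxy.sum(axis=0))

# example: a correlated binary pair
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
print(mutual_information(pxy))   # > 0, and symmetric in x and y
print(conditional_entropy(pxy))  # <= h(x), since conditioning cannot hurt here

the symmetry of the mutual information , visible in the second function , is precisely why it cannot by itself quantify a directed flow of information .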
in the following we will also find useful the concept of _ conditional mutual entropy _ between and given , which is defined as . the latter quantifies the averaged mutual information between and provided that is known . applying ( [ eq:condmii ] ) and ( [ eq:mi2 ] ) iteratively , we may write
\[
\begin{aligned}
&= i(x ; y_1 \cap \cdots \cap y_n \cap z_1 \cap \cdots \cap z_m) - i(x ; z_1 \cap \cdots \cap z_m)\,.
\end{aligned}
\label{eq:condmiiii}
\]
for further details on the basic concepts of shannon's information theory , we refer the reader to classical books , e.g. , ash and , more recently , csiszár and shields . rényi introduced in refs . a one - parameter family of information measures presently known as _ rényi entropies _ . in practice , however , only the singular name rényi's entropy is used . the re of order of a distribution on a finite set is defined as . for the re ( [ renyi ] ) one can also formulate a source coding theorem . while in the shannon case the cost of a codeword is a linear function of its length , so that the optimal code has the minimal cost out of all codes , in the rényi case the cost of a codeword is an exponential function of its length . this is , in a nutshell , the essence of the so - called campbell's coding theorem ( cct ) . according to this , the re corresponds to the averaged number of bits needed to optimally encode the discrete source with probability , provided that the codeword lengths are exponentially weighted . from the form ( [ renyi ] ) one can easily see that for , the re depends more on the probabilities of the more probable values and less on the improbable ones ; this dependence is more pronounced for higher . on the other hand , for , marginal events are accentuated with decreasing . in this connection we should also point out that campbell's coding theorem for the re is equivalent to shannon's coding theorem for the se provided one uses , instead of , the _ escort distribution _ : . the pdf was first introduced by rényi and brought into the physical context by beck , schlögl , kadanoff and others ( see , e.g. , refs . ) .

[ fig. [ fig2 ] : the escort distribution for event probability and varying rényi parameter ; arrows indicate decreasing values of for ( dark arrow ) or increasing values of for ( lighter arrow ) . ]

note ( cf . fig. [ fig2 ] ) that for , the escort distribution emphasizes the more probable events and de - emphasizes the more improbable ones ; this trend is more pronounced for higher values of . for , the escort distribution accentuates the more improbable ( i.e. , marginal or rare ) events ; this dependence is more pronounced for decreasing . this fact is clearly seen in fig. [ fig1 ] . so by choosing different we can `` scan '' or `` probe '' different parts of the involved pdf's . it should be stressed that apart from the cct , the re has yet further operational definitions , e.g. , in the theory of guessing , in the buffer overflow problem , or in the theory of error block coding . the re is also underpinned by various axiomatics . in particular , it satisfies the same khinchin axioms as shannon's entropy , save for the additivity axiom ( chain rule ) : , where the conditional entropy is defined with the help of the escort distribution ( [ ii.15.a ] ) ( see , e.g. , refs . ) .
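a hedged numerical sketch of the rényi entropy and the escort distribution just discussed ( our own code ; q is rényi's parameter , and distributions are plain arrays ) :

import numpy as np

def renyi_entropy(p, q):
    """s_q = (1/(1-q)) log2 sum_i p_i^q ; reduces to shannon's entropy as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:
        return float(-np.sum(p * np.log2(p)))
    return float(np.log2(np.sum(p ** q)) / (1.0 - q))

def escort(p, q):
    """rho_i = p_i^q / sum_j p_j^q ; q > 1 accentuates the probable events,
    q < 1 accentuates the rare (marginal) ones, as described in the text."""
    p = np.asarray(p, dtype=float)
    w = p ** q
    return w / w.sum()

p = np.array([0.7, 0.2, 0.1])
for q in (0.5, 1.0, 2.0):
    print(q, renyi_entropy(p, q), escort(p, q))
# the printed entropies decrease with q, and the escort weights shift from the
# tail (q = 0.5) toward the peak (q = 2.0) of the distribution.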
for re reduces to the shannon entropy : as one can easily verify with lhospital s rule .we define the _ joint rnyi entropy _ ( or the _ joint entropy _ of order ) for two random variables and in a natural way as : the _ conditional entropy _ of order of given is similarly as in the shannon case defined as the averaged rnyi s entropy yielded by under the assumption that the value of is known . as shown in refs . this has the form & = & \\frac{1}{1-q}\log_2 \frac{\sum_{x\ \ !\in \ \ ! { x } , \ \ ! y \ \ !{ y}}p^{\,q}(x , y)}{\sum_{y\ \ !\in \ \ ! { y } } q^{\,q}(y)}\ , .\label{re.condentropy}\end{aligned}\ ] ] in this connection it should be mentioned that several alternative definitions of the conditional re exist ( see , e.g. , refs . ) , but the formulation ( [ re.condentropy ] ) differs from other versions in a few important ways that will show up to be desirable in the following considerations .the conditional entropy defined in ( [ re.condentropy ] ) has the following important properties , namely : : , where is a number of elements in , : : only when uniquely determines ( i.e. , no gain in information ) , : : , : : when and are independent then . unlike shannon s case one cannot , however , deduce that the equality implies independency between event and .also the inequality ( i.e. , an extra knowledge about lessens our ignorance about ) does not hold here in general .the latter two properties may seem as a serious flaw .we will now argue that this is not the case and , in fact , it is even desirable .first , in order to understand why does not imply independency between and we define the information - distribution function which represents the total probability caused by events with information content . with thiswe have and thus taking the inverse laplace transform with the help of the so - called _post s inversion formula _ we obtain \right|_{q \ = \k/(x \ln 2 ) + 1 } .\label{iii.b.20a}\end{aligned}\ ] ] analogous relation holds also for and associated . as a resultwe see that when working with of different orders we receive much more information on underlying distribution than when we restrict our investigation to only one ( e.g. , to only shannon s entropy ) .in addition , eq . ( [ iii.b.20a ] ) indicates that we need all ( or equivalently all , see ) in order to uniquely identify the underlying pdf . in view of eq .( [ iii.b.20a ] ) we see that the equality between and at some neighborhood of merely implies that for some .this naturally does not ensure independency between and .we need equality for all ( or for all ) in order to secure that holds for all which would in turn guarantee that .therefore , all re with ( or all with ) are generally required to deduce from an independency between and . in order to understand the meaning of the inequality we first introduce the concept of mutual information .the _ mutual information of order _ between and can be defined as ( cf .( [ eq : mi2 ] ) ) & = & \ s_q^{(r)}(x)\ + \s_q^{(r)}(y ) \ - \ s_q^{(r)}(x\cap y ) \, , \label{re.mutinf}\end{aligned}\ ] ] which explicitly reads & = & \ \frac{1}{1-q}\log_2 \frac{\sum_{x\ \ !\in \ \ ! { x } , \ \ ! y \ \ !\in \ \ ! { y } } q^{\,q}(y ) p^{\,q}(x)}{\sum_{x\ \ ! \in \ \ ! { x } , \ \ !\in \ \ ! { y}}q^{\,q}(y)p^{\,q}(x|y ) } \ , .\label{ii.20a}\end{aligned}\ ] ] note that we have again the symmetry relation as well as the consistency condition .so similarly as in the shannon case , rnyi s mutual information formally quantifies the average reduction in uncertainty ( i.e. 
, gain in information ) about that results from learning the value of , or vice versa . from eq . ( [ re.mutinf ] ) we see that the inequality in question , i.e. , implies . according to ( [ ii.20a ] ) this can be violated only when
\[
\begin{aligned}
\sum_{x \in X} p^{\,q}(x) \ < \ \sum_{x \in X} \langle \mathcal{p}^{\,q}(x|y) \rangle_q \qquad \mbox{for} \quad 0 < q < 1\,.
\end{aligned}
\label{2.b.26a}
\]
here denotes an average with respect to the escort distribution ( see eq . ( [ ii.15.a ] ) ) . by taking into account the properties of the escort distribution , we can deduce that holds when the larger - probability events of receive , by learning , a lower value . as for the marginal events of , these are indeed enhanced by learning , but the enhancement rate is smaller than the suppression rate of the large probabilities . for instance , this happens when ; the inequality ( [ 2.b.28a ] ) then ensures that holds . the left inequality in ( [ 2.b.28a ] ) saturates when ; see also fig. [ fig2a ] . for , the situation is analogous . here the properties of the escort distribution imply that holds when the marginal events of get , by learning , a higher probability . the suppression rate for large ( i.e. , close - to - peak ) probabilities is now smaller than the enhancement rate of the marginal events . this happens , for example , for distributions with fulfilling again the inequality ( [ 2.b.28a ] ) . this can also be seen directly from fig. [ fig2a ] when we reverse the sign of . when we set , then both inequalities ( [ 2.b.26a ] ) are simultaneously satisfied , yielding , as it should . in contrast to the shannonian case , where the mutual information quantifies the average reduction in uncertainty resulting from observing / learning further information , in the rényi case we should use campbell's coding theorem in order to properly understand the meaning of . according to the cct , corresponds to the minimal average cost of a coded message with a non - linear ( exponential ) weighting / pricing of codeword lengths . while according to shannon we never increase ignorance by learning ( i.e. , possible correlations between and can only reduce the entropy ) , in rényi's setting extra knowledge about might easily increase the minimal price of coding because of the nonlinear pricing . since the cct penalizes long codewords , which in shannon's coding have low probability , the price of the code may easily increase , as we have seen in examples ( [ ii.20aaa ] ) and ( [ ii.20aaaa ] ) . in the key context of financial time series , the risk valuation of large changes such as spikes or sudden jumps is of crucial importance , e.g. , in various risk - reducing formulas in portfolio theory . the role of campbell's pricing can in these cases be interpreted as a risk - rating method which puts an exponential premium on rare ( i.e. , risky ) asset fluctuations . from this point of view , the mutual information represents a _ rating factor _ which rates the gain / loss in risk in resulting from learning a new piece of information , namely information about . the _ conditional mutual information _ of order between and given is defined as . note that , because of the validity of the chain rule ( [ iii.b.15a ] ) , relations ( [ ii.a.8a ] ) and ( [ eq:condmiiii ] ) also hold true for the re .
to close this section , we stress that information entropies are important primarily because there are various coding theorems which endow them with an operational ( that is , experimental ) meaning , and not because of intuitively pleasing aspects of their definitions . while coding theorems do exist both for the shannon entropy and the rényi entropy , there are ( as yet ) no such theorems for the tsallis , kaniadakis , naudts , and other currently popular entropies . the information - theoretic significance of such entropies is thus not obvious . since the information - theoretic aspect of entropies is of crucial importance here , we will in the following focus only on the se and the re . as seen in section [ sec2b ] , the mutual information quantifies the decrease of uncertainty about caused by the knowledge of . one could thus be tempted to use it as a measure of informational transfer in general complex systems . a major problem , however , is that shannon's mutual information contains no inherent directionality , since . some early attempts tried to resolve this complication by artificially introducing directionality via time - lagged random variables . in this way one may define , for instance , the _ time - lagged mutual _ ( or _ directed kullback - leibler _ ) _ information _ as . the latter describes the average gain of information when replacing the product probability by the joint probability . so the information gained is due to the cross - correlation effect between the random variables and ( respectively , ) . it was , however , pointed out in ref . that prescriptions such as ( [ iii.a.1.a ] ) , though directional , also take into account some part of the information that is statically shared between the two random processes and . in other words , these prescriptions do not capture statistical dependences that truly originate only in the source stochastic process , but include the effects of a common history ( such as , for example , a common external driving force ) . for this reason , schreiber introduced in ref . the concept of the ( shannonian ) transfer entropy ( ste ) . the latter , apart from directionality , accounts only for those cross - correlations between the statistical time series and whose genuine origin is in the `` source '' process . the essence of the approach is the following . let us have two time sequences described by the stochastic random variables and . let us assume further that the time steps ( data ticks ) are discrete , with an elementary time lag of size and with ( being a reference time ) . the transfer entropy can then be defined as
\[
\begin{aligned}
&= i\!\left( x_{t_{m+1}} ;\, x_{t_1} \cap \cdots \cap x_{t_m} \cap y_{t_{m-l+1}} \cap \cdots \cap y_{t_m} \right) - i\!\left( x_{t_{m+1}} ;\, x_{t_1} \cap \cdots \cap x_{t_m} \right) .
\end{aligned}
\label{iii.a.23a}
\]
the last line of ( [ iii.a.23a ] ) indicates that represents the difference between the information about the next state gained from the joint past of and , and the information gained from the past of alone .
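to illustrate how the transfer entropy of ( [ iii.a.23a ] ) , and a rényian counterpart built from the conditional entropy of eq . ( [ re.condentropy ] ) , can be estimated from symbolized series , here is a hedged python sketch ( our own ; it uses naive plug - in frequencies , so the finite - sample effects that motivate the effective transfer entropy are not corrected here ) :

from collections import Counter
import numpy as np

def _history_probs(x, y, m=1, l=1):
    """plug-in probabilities p(x_{t+1}, x-history of length m, y-history of length l)."""
    joint = Counter()
    for t in range(max(m, l) - 1, len(x) - 1):
        xp = tuple(x[t - m + 1 : t + 1])
        yp = tuple(y[t - l + 1 : t + 1])
        joint[(x[t + 1], xp, yp)] += 1
    n = sum(joint.values())
    return {k: v / n for k, v in joint.items()}

def shannon_te(x, y, m=1, l=1):
    """t_{y->x} = sum p(x', xp, yp) log2 [ p(x'|xp, yp) / p(x'|xp) ]."""
    p = _history_probs(x, y, m, l)
    p_xpyp, p_x1xp, p_xp = Counter(), Counter(), Counter()
    for (x1, xp, yp), v in p.items():
        p_xpyp[(xp, yp)] += v
        p_x1xp[(x1, xp)] += v
        p_xp[xp] += v
    return sum(v * np.log2(v * p_xp[xp] / (p_xpyp[(xp, yp)] * p_x1xp[(x1, xp)]))
               for (x1, xp, yp), v in p.items())

def renyi_cond_entropy(pjoint, q):
    """s_q(a|b) = (1/(1-q)) log2 [ sum_{a,b} p^q(a,b) / sum_b p^q(b) ], q != 1."""
    pb = Counter()
    for (a, b), v in pjoint.items():
        pb[b] += v
    num = sum(v ** q for v in pjoint.values())
    den = sum(v ** q for v in pb.values())
    return float(np.log2(num / den) / (1.0 - q))

def renyi_te(x, y, q, m=1, l=1):
    """rte_{y->x} as a difference of conditional renyi entropies; unlike the
    shannonian case, this may legitimately come out negative."""
    p = _history_probs(x, y, m, l)
    with_y, without_y = {}, Counter()
    for (x1, xp, yp), v in p.items():
        with_y[(x1, (xp, yp))] = v
        without_y[(x1, xp)] += v
    return renyi_cond_entropy(dict(without_y), q) - renyi_cond_entropy(with_y, q)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000)
x = np.empty_like(y); x[0] = 0
x[1:] = y[:-1] ^ (rng.random(y.size - 1) < 0.1)   # y drives x with 10% noise
print(shannon_te(x, y))            # clearly positive: information flows y -> x
print(shannon_te(y, x))            # near zero: no genuine flow x -> y
print(renyi_te(x, y, q=0.8))       # probes the tail-weighted part of the flow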
in this appendix we specify explicit values of the effective transfer entropies that are employed in section [ sec10 ] . these are calculated for an alphabet with .

j .- s . yang , w. kwak , t. kaizoji and i .- m . kim , _ eur . phys . j. _ * b61 * ( 2008 ) 389 ; k. matia , m. pal , h. salunkay and h.e . stanley , _ europhys . lett . _ * 66 * ( 2004 ) 909 ; h.e . stanley , l.a.n . amaral , x. gabaix , p. gopikrishnan and v. plerou , _ physica _ * a299 * ( 2001 ) 1 .

t.w . chaundy and j.b . mcleod , _ edinburgh mathematical notes _ * 43 * ( 1960 ) 7 ; for h. tverberg's , p.m. lee's or d.g . kendall's axiomatics of shannon's entropy see , e.g. , s. guiaşu , _ information theory with applications _ ( mcgraw - hill , new york , 1977 ) .

this exponential weighting is also known as kolmogorov - nagumo averaging . while the linear averaging is given by , the exponential weighting is defined as with . the factor is known as the campbell exponent . one can map with to with via the duality that exists between and . in fact , we can observe that and . so with and with carry an equal amount of information .

y. liu , p. cizeau , m. meyer , c .- k . peng and h.e . stanley , _ physica _ * a245 * ( 1997 ) 437 ; _ physica _ * a245 * ( 1997 ) 441 ; y. liu , p. gopikrishnan , p. cizeau , m. meyer , c .- k . peng and h.e . stanley , _ phys . rev . _ * e60 * ( 1999 ) 1390 .
in this paper we quantify the statistical coherence between financial time series by means of the rényi entropy . with the help of campbell's coding theorem we show that the rényi entropy selectively emphasizes only certain sectors of the underlying empirical distribution while strongly suppressing others . this accentuation is controlled with rényi's parameter . to tackle the issue of the information flow between time series , we formulate the concept of rényi's transfer entropy as a measure of information that is transferred only between certain parts of the underlying distributions . this is particularly pertinent in financial time series , where the knowledge of marginal events such as spikes or sudden jumps is of crucial importance . we apply the rényian information flow to stock - market time series from world stock indices as sampled at a daily rate in the time period 02.01.1990 - 31.12.2009 . corresponding _ heat maps _ and _ net information flows _ are represented graphically . a detailed discussion of the transfer entropy between the dax and s&p500 indices based on minute - tick data gathered in the period from 02.04.2008 to 11.09.2009 is also provided . our analysis shows that the bivariate information flow between world markets is strongly asymmetric , with a distinct information surplus flowing from the asia - pacific region to both european and us markets . an important yet less dramatic excess of information also flows from europe to the us . this is particularly clearly seen from a careful analysis of the rényi information flow between the dax and s&p500 indices . pacs numbers : 89.65.gh , 89.70.cf , 02.50.-r keywords : econophysics ; rényi entropy ; information transfer ; financial time series
with the rapid growth of mobile demands and the ever - increasing use of smart phones , mobile network has been one of the fastest growing technologies that impact major aspects of our life . in recent years ,the 4 g mobile communication system is being deployed worldwide , leading to a rapid growth in the mobile network capacity , which further dramatically stimulates the mobile demands . as mobile network evolves from voice - oriented to media - oriented , incremental improvements of networkscan no longer keep the pace with the increase in mobile data demands , since there exist several challenges that the current mobile network can hardly address . _1 ) convergence of heterogeneous wireless networks_. there exist various heterogeneous wireless networks , i.e. pluralistic standards such as gsm , umts , lte and wlan , and they will coexist for a long time . however , as current wireless operators typically deploy their networks by vertically constructed method , these heterogeneous networks can hardly interconnect with each other , which makes operators incapable of efficiently optimizing and dynamically scheduling from the global perspective .for example , although there may be many mobile networks around us , we may not be able to access the most appropriate one or select multiple networks simultaneously .the only thing we can do is to access one specific network all the time , even if this network performs quite poorly ._ 2 ) efficient utilization of wireless resources_. as a result of the difficulty in achieving the convergence of heterogeneous networks , many network devices are not fully utilized and plenty of wireless resources are wasted , while generating enormous amounts of energy consumption .while several big carriers , including at&t , verizon , t - mobile and sprint , say that in the next few years they may not have enough spectrum to meet the demands for mobile data , many scientists and engineers , including martin cooper , father of the mobile phone , are convinced that the main reason for spectrum crisis is that the spectrum resources are not fully utilized ._ 3 ) network innovations_. proliferating mobile demands compel wireless operators to build sustained innovations to continuously boost their network performances .however , tightly coupling with specific hardware and lack of flexible control interfaces , the current mobile network can hardly provide a fast track for technological innovations ._ 4 ) wireless applications and services_. wireless services proliferate significantly , e.g. big data , and different kinds of services require very different network characteristics . 
unfortunately , the current mobile network often can only support these wide - ranging and very different services with the same network characteristics and , consequently , it may provide users with poor quality of service ( qos ) and poor quality of experience ( qoe ) .because the current mobile network is vertically constructed over the years , it generally becomes ossified and closed , and has difficulty to meet the above mentioned challenges .this calls for the research and deployment of new generation of mobile network , specifically , 5 g , which is capable of providing a more open and efficient mobile network .in particular , 5 g should benefit from the convergence of heterogeneous networks , facilitate network evolution , boost the network performance , enhance users qos and qoe , and at the same time save resources and energy .these objectives require mobile networks to offer much more efficient controlling and user data processing functions across the network layer down to physical layer by jointly considering the both layers together .however , since the physical layer technologies in mobile networks are much more complicated than in the wired scenarios , implementing efficient control and programmable user planes across the network layer and the physical layer is extremely challenging .the problem gets worse as the mobile environment becomes more complex , e.g. high mobility , frequent handovers , and heavy interferences .consequently , the key technologies of 5 g system are still open and attract increasingly attentions . software defined network ( sdn ) , an innovative paradigm , advocates separating the control plane and data plane , and abstracting the control functions of the network into a logically centralized control plane , referred to as sdn controller .aiming to establish a new paradigm for 5 g mobile network , in this article , we propose a cross - layer software - defined 5 g network architecture . in our architecture , the concept of sdn controller is extended to take the control functions of the physical layer into consideration as well , not just those of the network layer .furthermore , we also introduce the novel cloud computing pool which considers the programmable user data processing functions of both the network layer and the physical layer jointly . 
specifically , by abstracting the control functions of the both layers , we introduce a logically centralized programmable control plane oriented towards both network and physical layers , through which we achieve the fine - grained controlling and flexible programmability of the both layers .for example , we can control the packets forwarding in network layer as well as direct the beamforming and interference canceling in physical layer .the cloud computing pool proposed in our architecture provides powerful computing capability to implement the baseband data processing of multiple heterogeneous networks , which efficiently improves the convergence of heterogeneous wireless networks .moreover , the programmable scheme in our architecture flexibly supports the network evolution and the deployment of any potential technological innovations which is particularly important as the key technologies of network and physical layers of 5 g .we also discuss the main technical challenges of our architecture , including the fine - grained control strategies oriented to the both network and physical layers , network virtualization that supports the customizability , and programmability that is beneficial to network evolution .qualitative simulations are deployed to validate the advantages of our proposal .our study demonstrates that our proposed 5 g architecture is capable of achieving its design goals , namely , convergence of heterogeneous networks , fine - grained controllability , efficient programmability for network evolution , and network and service customizability , and therefore it offers an architecture design of future mobile network .the remainder of this article is organized as follows .we first offer a rethinking of mobile network , and this is followed by the introduction to sdn .next , we present our cross - layer software - defined 5 g architecture and analyze its features and system feasibility. then we interpret the key technologies and challenges of our architecture .several quantitative performance results are provided .finally , we conclude the article by summarizing our contribution .the challenges to the current mobile network are mostly caused by the ossification , closeness , and vertically constructed features . simply evolving the traditional networking architecturecan hardly meet the requirements of future mobile networks .we rethink the problems of the current mobile networking and envisage how the future mobile network looks like , and consequently propose the design goals of our architecture .* _ ossified _ _ vs _ _controllable_*. controlling in the current mobile network is limited and ossified .it is quite difficult to dynamically adjust the control strategies when the network status changes quickly .for example , many users with high mobility may go through numerous and frequent handovers , especially when the handovers occur among heterogeneous networks , which needs more fine - grained and dynamical controlling .therefore , efficiently controlling is a fundamental function that the future network has to provide . *_ closed _ _ vs _ _ programmable_*. 
technological innovations in the mobile network need to be developed continuously in order to meet the demands. unfortunately, mobile network technologies are usually solidified in the hardware. for example, unequal error protection (uep), a physical layer technology, can provide higher video quality while consuming less spectrum. but making such performance-enhancing changes typically requires replacing the hardware, due to the tight coupling with the hardware in the form of specific asics designed for each protocol of the lte network. in particular, dynamically deploying specific technologies based on the run-time network status becomes especially difficult. therefore, it is necessary and highly desirable to introduce greater programmability into mobile networks. * _vertical_ _vs_ _converging_*. the vertically constructed method has dominated mobile networking almost since the first generation mobile communication system was conceived, and it has brought great successes to mobile communication. however, as the scale of the mobile network quickly enlarges and the network becomes ever more dense, traditional vertically constructed networking severely restricts the development of mobile networks. for example, 3g, lte and wlan can hardly interconnect, and operators are incapable of controlling the network from a global perspective. thus, efficient convergence of heterogeneous networks must be supported in the future mobile network. * _unified_ _vs_ _evolvable_*. multiple mobile protocols and pluralistic standards will coexist for a long time. each of them possesses distinctive advantages for specific services or situations. for example, 3g provides better voice services while wlan suits video applications. the diversified world stimulates pluralistic demands, and thus it is almost impossible to find one unified network with solidified technologies that fulfills all expectations. consequently, the future mobile network must support smooth evolution for pluralistic standards. based on the above analysis, we can envisage the basic characteristics of the future mobile network and naturally define the _ * design goals * _ of our proposed 5g architecture, which are summarized as follows. * beneficial to the convergence of heterogeneous networks; * fine-grained controllability for mobile networks; * high programmability and open interfaces for network technological innovations; * high evolvability and fast deployment; * customizability of networks and services. sdn has become one of the hottest topics in computer networking. by separating the control plane and the data plane, sdn abstracts the control functions of the network layer into a logically centralized control plane, known as the sdn controller. this sdn controller makes the rules and controls the behaviors of data plane devices from a global perspective. openflow is the most common realization of sdn. many manufacturers, such as hp, nec and ibm, produce commercial openflow switches, and several kinds of openflow controllers, e.g. nox and floodlight, are available. the openflow protocol is continually released by the open networking foundation (onf), and sdn is becoming ever more popular in networking today.
in recent years, several researchers have attempted to extend sdn into wireless and mobile networks. openradio mostly focuses on the programmable data plane design of a software-defined wireless network, but this work does not propose a holistic network structure with a control plane architecture. cellsdn is a software-defined cellular network architecture that covers both the access network and the core network; however, this work is oriented to the lte network, and it can hardly be adapted to the circumstances of heterogeneous networks. softran is a software-defined control plane for the radio access network that introduces a virtual big base station, but this architecture mostly focuses on the one-tier situation consisting of micro-cells, which is not suitable for scenarios with heterogeneous networks. openrf and softcell focus on the software-defined wifi network and core network, respectively. meanwhile, mobiflow and openroad focus on the transport network, which is similar to a wired network, and offer no special support for the mobile access network. moreover, almost all the existing works, except for openradio and openrf, neglect the vital programmability features. table [tab1] compares our proposed architecture with these existing works. in order to address the challenges and to achieve the goals introduced in the previous sections, we propose a cross-layer software-defined 5g architecture, as depicted in fig. [fig:architecture]. we introduce a logically centralized control plane, which makes the rules and controls the behaviors of network devices as well as provides appropriate application program interfaces (apis) for various applications. to achieve efficient control of both the network layer and the physical layer, we design a fine-grained control strategy that is compatible with both layers. because the limitation of poor computing capability in baseband data processing and the lack of interconnection severely impede the controllability of the traditionally designed physical layer, we introduce a novel cloud-based baseband data processing pool. the radio transceiver devices and the baseband data processing devices are completely separated. we also introduce the programmability feature to make the network more evolvable and more flexible for technological innovations. our architecture contains four main parts: the wireless access layer, the cloud computing resource pool, the centralized control plane, and the application plane. the wireless access layer consists of a large number of physical remote radio units (prrus) distributed at various locations. it enables several virtual rrus (vrrus) corresponding to different mobile communication protocols to coexist in one shared prru via radio frequency virtualization technology, to efficiently support fast deployment and the convergence of heterogeneous networks. for example, one prru can quickly deploy one 3gpp vrru and one wlan vrru (vap) simultaneously, according to the requirements. the cloud computing resource pool is comprised of a large number of physical processors with high-performance computing and storage capability as well as high speed links. the current mobile network adopts vertically constructed networking, in which one rru is tightly coupled with one specific baseband unit (bbu).
to loosen this inefficient coupling, in our architecture the traditional bbus, base station controllers (bscs) and routers are replaced by virtual ones (vbbus, vbscs, and vrouters), which are implemented in shared physical processors through virtualization. these shared physical processors can simultaneously run multiple vbbus, vbscs, and vrouters by allocating appropriate computing and storage resources. specifically, each vbbu can deploy appropriate wireless protocol software to implement physical layer programmability. the corresponding virtual elements (vbscs, vbbus, vrouters, and vrrus) constitute a complete wireless network with specific protocols. our cloud-based data processing pool significantly improves the computing capacity of baseband processing and overcomes the interconnection difficulty of traditional mobile networks, which greatly benefits the convergence of heterogeneous networks. to guarantee the run-time properties, the wireless access layer is connected to the cloud computing resource pool through high speed optical links. the wireless access layer and the cloud computing resource pool comprise the data plane of our architecture. the centralized control plane is the `` brain '' of our architecture, abstracting and combining the control functions of both the network and physical layers. the control plane can acquire the configurations and real-time status of both layers to make decisions from a global perspective. we partition the control plane into several functional submodules. _1) flow scheduler_, which is related to the network layer, is similar to the wired sdn controller. it makes the policies and schedules the behaviors of each virtual forwarding element, i.e. guiding them in processing packets. _2) wireless scheduler_, which is oriented to the physical layer, is the `` highlight '' of our architecture. it can deploy appropriate wireless protocol software into each vbbu. moreover, it also dynamically schedules the physical layer actions, for example beamforming, power control, interference cancelling, massive mimo and carrier aggregation. _3) virtualization_, which is an indispensable element of sdn, makes it possible to customize networks and services. specifically, the virtualization scheduler manages the strategies of virtualization. we introduce three kinds of virtualization: flow space virtualization, cloud level virtualization, and spectrum level virtualization, which conspicuously improve the network performance from different angles. _4) resource scheduler_. because there are various resources in our architecture, including spectrum, computing, and storage, it is necessary to efficiently allocate these resources to different virtual elements, which is implemented in the resource scheduler. _5) global-view database_. all the above-mentioned submodules need to know the static configurations and dynamic status of the whole network and, therefore, we introduce a database with a global view to collect this information. the application plane contains a variety of network applications, each of which utilizes the apis abstracted by the control plane to guarantee its qos and qoe. we now emphasize that our proposed cross-layer software-defined 5g architecture achieves the design goals set out in section [s2]. firstly, we abandon the traditional vertically constructed networking method and introduce a cloud computing pool to process the baseband data of heterogeneous networks.
therefore, our architecture conspicuously benefits the convergence of heterogeneous networks. with our architecture, the mobile network can easily acquire a real-time global network view, which enables its centralized control plane to efficiently control the entire network by considering both the network and physical layers as a whole. this makes the network much more open and controllable. the flexible programmability not only provides opportunities for the fast deployment of technological innovations but also makes the network more evolvable. our proposed three types of virtualization further enhance the customizability of networks and services. in recent years, sdn and network virtualization have attracted increasing attention from both the academic and industrial communities. many prototypes, testbeds and products have been developed and produced, in particular openflow controllers, switches and protocols, as well as the geni network virtualization platform. furthermore, data center networking and cloud computing also stimulate the fast development of sdn and virtualization technologies. moreover, there are plenty of prototypes and testbeds for software defined radio (sdr), as well as for its extended version, cognitive radio, which offer considerable programmability. we also note that china mobile ltd. actively focuses on the research and development of a cloud based mobile network architecture, referred to as cran. based on these advances, we believe that our proposed architecture is quite feasible. the first technical challenge is to design the fine-grained control strategy, and we divide the problem into two subproblems: a jointly optimizing strategy and individual control strategies. the jointly optimizing strategy makes decisions to achieve efficient control according to the static configurations and real-time status of both the network and physical layers. consider for example that the qos for a user at the cell boundary deteriorates. we may need to determine the appropriate routing and bandwidth requirements as well as to schedule beamforming and interference cancelling. after the decision has been made, the control plane decomposes the decision into two parts corresponding to the two layers, and then leverages the flow-based `` match-action '' strategy for both layers to make rules and control the behaviors of each virtual element, as illustrated in fig. [fig:control]. correspondingly, each data plane device integrates an sdn agent, which complies with the control strategy, resolves the control flow, and communicates with the control plane. virtual elements constantly report their real-time status to the control plane, which may then dynamically refresh the global-view database. since the responsibilities of the network layer and the physical layer are distinctly different, the individual control strategies for the two layers differ, which defines the second subproblem of our fine-grained control strategy. a minimal illustrative sketch of the flow-based match-action agent is given below.
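to make the flow-based `` match-action '' strategy concrete, the following is a minimal python sketch of an sdn agent with a rule table spanning network-layer and physical-layer fields. all field names and actions (dest_ip, beamforming:on, and so on) are hypothetical placeholders of our own, not part of the proposed architecture's interface.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Rule:
    match: Dict[str, str]     # e.g. {"dest_ip": "10.0.0.7"} or {"modulation": "64qam"}
    actions: List[str]        # e.g. ["forward:vbbu1", "rate:s1"] or ["beamforming:on"]

class SdnAgent:
    """runs inside a virtual element; applies rules pushed by the control plane."""
    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def install(self, rule: Rule) -> None:
        self.rules.append(rule)               # rule pushed down by the control plane

    def process(self, packet: Dict[str, str]) -> List[str]:
        for rule in self.rules:               # first matching rule wins
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["send_to_controller"]         # table miss: defer to the control plane

agent = SdnAgent()
agent.install(Rule({"dest_ip": "10.0.0.7"}, ["forward:vbbu1", "rate:s1"]))
agent.install(Rule({"user": "cell-edge"}, ["beamforming:on", "interference_cancelling:on"]))
print(agent.process({"dest_ip": "10.0.0.7"}))   # ['forward:vbbu1', 'rate:s1']
```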
_network layer_. since the packet header of each flow possesses several function fields called _match fields_, such as the ip address and the mac address, the data plane devices can process arriving packets by checking the match fields. specifically, when a virtual element receives a packet, it first checks whether this flow matches its control rules. if so, it executes the corresponding actions. otherwise, the packets will be dropped or sent to the control plane. an example can be interpreted as: *if* dest ip = xx.xx.xx.xx, *then* forward to vbbu1 with speed s1. _physical layer_. although the physical layer control also adopts the `` match-action '' strategy, the match fields, rules, and actions are quite different from those of the network layer. since the key technologies of the 5g physical layer are still open, the format of the match fields should be elastic. for example, the basic information field should contain the encoding mode, modulation type, carrier information, _etc_. additionally, the power control field may present the power control information, while the beamforming field and the interference cancelling field should provide the corresponding information. the control plane dynamically makes decisions according to the run-time status, and then sends control flows to the vbbus. the `` actions '' in the vbbus refer to the appropriate physical layer technologies. for example, when the qos of a user deteriorates, the control plane may launch beamforming, while when users at cell boundaries encounter severe inter-cell interference, the control plane may start interference cancelling. network virtualization, a promising enabler of the future network, allows multiple concurrent virtual networks to run on shared substrate resources. network virtualization is an indispensable part of sdn, while the sdn architecture facilitates network virtualization. network virtualization is responsible for customizing networks and services. therefore, how the virtualization scheme is designed directly influences the system performance and resource utilization. there are three types of virtualization in our architecture, flow space virtualization, cloud level virtualization, and spectrum level virtualization, and we now elaborate on each of them. _flow space virtualization_. different operators or network services require different network characteristics. as the control strategy in our architecture is flow based, the flow space can be divided into multiple slices, each of which corresponds to one virtual network. as shown in fig. [fig:virtualization], we introduce a flow space abstracting layer. by abstracting the control functions of virtual elements, this layer slices the flow space according to the requirements of operators or services, and provides each flow slice with one controller. virtual networks are then generated, and each virtual network is managed by its corresponding controller. the virtual networks share the same underlying data plane devices. in a vbsc or vrouter, different virtual networks have different rules and run different actions. operators often need different vbbus due to their specific wireless protocols. _cloud level virtualization_. the control plane first creates vbbus, vbscs and vrouters by virtualizing physical processors and allocating appropriate computing and storage resources. then it establishes the forwarding and data processing rules for the virtual elements. moreover, it deploys the corresponding wireless protocol software in the vbbus by utilizing the programmability. _spectrum level virtualization_.
it refers to the virtualization of the spectrum by radio frequency virtualization technology, which enables several vrrus with different wireless protocols to coexist in one shared prru. spectrum level virtualization extends the virtualization scope, lessens the networking difficulty and saves cost. the physical layer programmability faces several challenges, including how to guarantee sufficient computing capability and how to efficiently implement physical layer technologies. our proposed cloud-based baseband processing pool provides powerful computing capability. in the physical layer, although different wireless protocols operate quite differently from each other, they always share some common modules, such as modulation, coding and interleaving. inspired by sdr, we modularize wireless protocols and establish a wireless module library in the vbbus. vbbus can select different modules, and then connect the selected modules to implement a specific protocol. as the 5g physical layer technologies are still open, our proposed programmability provides the flexibility for technological innovations, benefits future network evolution, and enables fast deployment and customization of wireless protocols. although at the current stage the proposed architecture has not been deployed in a real scenario, we evaluate its performance with real traces from a large cellular deployment. we show that our proposed architecture significantly improves energy efficiency and qos utility. to truly reflect the performance advantages, we use the real datasets named `` sitefinder '', released by ofcom on behalf of the uk government. these datasets record information on a massive number of base stations, supplied by the mobile network operators. the information contains location information (latitude and longitude), operator information, base station type, transmission power, communication protocol (gsm, umts, and tetra), frequency band, antenna information, etc. considering that the geographical area covered by these datasets is too wide, we select a geographic area from manchester. consequently, there are 159 gsm base stations as well as 255 umts base stations, deployed by five operators, i.e. o2, orange, three, t-mobile, and vodafone. in recent years, many researchers have focused on energy efficiency in mobile communications and, as a result, many energy-efficient mechanisms have been presented. however, a form of stalemate exists: these efficient mechanisms are difficult to implement and deploy in real scenarios, since the network functions are highly solidified in the hardware and the network devices are inflexible to control. meanwhile, our proposed architecture provides flexible programmability and centralized control, which makes it easy to implement these energy-efficient schemes in real scenarios. considering that network traffic at midnight is low, i.e. the tide effect, we propose two energy-efficient strategies: 1) the centralized control plane intelligently turns off base stations on behalf of each individual operator, referred to as the iit strategy; 2) the control plane turns off base stations from a global perspective, i.e. with cooperation across operators, referred to as the cit strategy. it is notable that both of these strategies need to guarantee coverage of the whole area. fig. [fig:energy] depicts the energy-saving performance for gsm and umts; a simplified sketch of a coverage-preserving switch-off procedure is given below.
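the following is a simplified python sketch of the coverage-preserving switch-off idea behind the iit and cit strategies. the greedy elimination rule and the disc coverage model are our own simplifying assumptions, not the paper's exact optimization procedure.

```python
import math

def covered(user, stations, radius):
    """disc coverage model: a user is covered if some active station is within radius."""
    return any(math.dist(user, s) <= radius for s in stations)

def greedy_switch_off(stations, users, radius):
    """switch off stations one by one as long as every user remains covered."""
    active = list(stations)
    for s in list(active):
        candidate = [t for t in active if t != s]
        if all(covered(u, candidate, radius) for u in users):
            active = candidate           # station s is redundant at this traffic level
    return active

# iit: run greedy_switch_off separately on each operator's stations;
# cit: run it once on the union of all operators' stations (global view),
# which can only switch off at least as many stations as iit.
stations = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.1)]
users = [(0.2, 0.0), (0.9, 0.1)]
print(greedy_switch_off(stations, users, radius=0.6))   # [(0.5, 0.1)]
```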
as the coverage range of each base station increases, both strategies turn off more base stations and save more energy. for example, at one of the evaluated coverage ranges, 39 percent of the base stations can be turned off and 35 percent of the energy can be saved by utilizing the iit strategy for umts. moreover, as the figures show, the cit strategy clearly achieves much larger energy-efficiency benefits than iit, since cit allows the control plane to optimize the energy-efficient solution across multiple mobile operators. in the current mobile network, every end user has no choice but to access one specific network. they may not be able to access the most appropriate one or to select multiple networks simultaneously, which leads to poor qos. in contrast, in our architecture, all the network devices are programmable and controlled by the centralized control plane. consequently, the control plane may guide the end users to access the most appropriate networks, by which qos can be improved. we assume that there are 10,000 end users and that each one runs an ongoing mobile service. we define the qos utility between each pair of user and base station, and assume the utility follows the uniform distribution in the range $[0,1]$ after normalizing. the location (latitude and longitude) of each end user is randomly generated. we introduce four types of strategies: 1) current strategy: each service is bound to one specific network. 2) intra-operator optimizing (intra-opr): the control plane guides each end user to access the base station, belonging to the specific operator, with the largest qos utility. 3) inter-operator optimizing (inter-opr): breaking the limit of one specific operator. 4) breaking the limit of the operator as well as of one communication standard (heter-stra), i.e. allowing the end users to access a base station among several heterogeneous networks (in this scenario: across umts and gsm). as fig. [fig:qos](a) shows, both intra-opr and inter-opr obtain much more qos utility than the current strategy, and this advantage becomes much more obvious as the coverage range increases. after that, we consider the heterogeneous scenario and combine the base stations. fig. [fig:qos](b) confirms the performance advantages of heter-stra. a simplified simulation sketch of these access strategies is given below.
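the following python sketch mimics the access-strategy comparison on synthetic data. the operator and standard assignments, the fixed home-station binding for the `` current '' strategy, and the uniform utilities are our own illustrative assumptions.

```python
import random

random.seed(0)
operators = ["o2", "orange", "three"]
standards = ["gsm", "umts"]
stations = [(i, random.choice(operators), random.choice(standards)) for i in range(30)]
users = [(j, random.choice(operators)) for j in range(100)]
utility = {(u[0], s[0]): random.random() for u in users for s in stations}  # uniform [0,1]

def total_utility(allowed):
    """each user accesses the allowed base station with the largest qos utility."""
    return sum(
        max((utility[(u[0], s[0])] for s in stations if allowed(u, s)), default=0.0)
        for u in users
    )

home = {u[0]: random.randrange(len(stations)) for u in users}      # fixed binding
current   = total_utility(lambda u, s: s[0] == home[u[0]])
intra_opr = total_utility(lambda u, s: s[1] == u[1] and s[2] == "umts")
inter_opr = total_utility(lambda u, s: s[2] == "umts")             # any operator, one standard
heter     = total_utility(lambda u, s: True)                       # cross operators and standards
print(current, intra_opr, inter_opr, heter)   # expect current < intra_opr < inter_opr < heter
```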
in this article, we have proposed a cross-layer software-defined 5g architecture, which addresses some of the key technical challenges facing the future mobile network. our architecture is able to control the mobile network from a global perspective through fine-grained control of both the network and physical layers. moreover, the cloud computing pool in our architecture provides powerful computing capability to implement the baseband data processing of multiple heterogeneous networks, while the programmability feature efficiently supports network evolution and technological innovations. the key technologies we introduced in our architecture include the fine-grained control strategy, network virtualization, and programmability, and the key features of our proposed 5g architecture are controllability, programmability, evolvability and customizability. since the key technologies of 5g are still open, we believe our flexible architecture is capable of supporting the 5g standards that eventually emerge and of efficiently satisfying the requirements of mobile services.
chen b x (2012) carriers warn of crisis in mobile spectrum. the new york times, accessed 17 april 2012.
bansal m, mehlman j, katti s, levis p (2012) openradio: a programmable wireless dataplane. in: proceedings of the first workshop on hot topics in software defined networks, ser. hotsdn 12, new york, usa, pp 109-114.
gudipati a, perry d, li l e, katti s (2013) softran: software defined radio access network. in: proceedings of the second acm sigcomm workshop on hot topics in software defined networking, ser. hotsdn 13, new york, usa, pp 25-30.
jin x, li l e, vanbever l, rexford j (2013) softcell: scalable and flexible cellular core network architecture. in: proceedings of the ninth acm conference on emerging networking experiments and technologies, ser. conext 13, new york, usa, pp 163-174.
yap k k, sherwood r, kobayashi m, huang t y, chan m, handigol n, mckeown n, parulkar g (2010) blueprint for introducing innovation into wireless mobile networks. in: proceedings of the second acm sigcomm workshop on virtualized infrastructure systems and architectures, ser. visa 10, new york, usa, pp 25-32.
in the past few decades, the world has witnessed rapid growth in mobile communication and reaped great benefits from it. even though the fourth generation (4g) mobile communication system is just being deployed worldwide, proliferating mobile demands call for newer wireless communication technologies with even better performance. consequently, the fifth generation (5g) system is already emerging in the research field. however, simply evolving the current mobile networks can hardly meet such great expectations, because over the years the infrastructures have generally become ossified, closed, and vertically constructed. aiming to establish a new paradigm for 5g mobile networks, in this article we propose a cross-layer software-defined 5g network architecture. by jointly considering both the network layer and the physical layer, we establish two software-defined programmable components, the control plane and the cloud computing pool, which enable effective control of the mobile network from a global perspective and benefit technological innovations. specifically, through the cross-layer software-defined design, the logically centralized and programmable control plane abstracts the control functions from the network layer down to the physical layer, through which we achieve fine-grained control of the mobile network, while the cloud computing pool provides powerful computing capability to implement the baseband data processing of multiple heterogeneous networks. the flexible programmability feature of our architecture makes it convenient to deploy cross-layer technological innovations and benefits network evolution. we discuss the main challenges of our architecture, including the fine-grained control strategies, network virtualization, and programmability. the architecture significantly benefits the convergence of heterogeneous networks, and it enables much more controllable, programmable and evolvable mobile networks. quantitative simulations validate these performance advantages.
the study of quantum graphs has become popular in a number of fields in quantum mechanics, ranging from molecular physics and the physics of disordered systems to quantum chaology and quantum computation (see _e.g._ ). quantum graphs serve as computationally inexpensive models with the ability to mimic a variety of features also present in more realistic quantum systems. for example, the now 20 year old conjecture by bohigas, giannoni and schmit (bgs), stating that the spectral statistics of quantum systems whose classical limit is chaotic follow those of random hermitian or unitary matrices in the semiclassical limit, is well reproduced on quantum graphs. in this paper, we will address two fundamental, but seemingly disconnected, questions related to quantum graphs; namely, we will look at * _ways to introduce symmetries on connected quantum graphs_ and investigate * _the degree of complexity or randomness necessary on a quantum graph to fall within the universal regime of random matrix statistics._ the first point has hardly been addressed in the context of quantum graphs. symmetries on quantum graphs play an important role in studies on quantum random walks, considered recently in the context of quantum computation (see _e.g._ ). the speed-up of mixing parameters of quantum random walks over classical random walks found on certain graphs is indeed related to interference effects due to symmetries in the quantum propagation. we will suggest a method for imposing a large class of symmetries on certain types of graphs, which has potential applications in the design of effective quantum random walks. it is furthermore expected that symmetries on graphs will have a profound influence on the statistical properties of spectra of quantum graphs. the existence of discrete symmetries and associated good quantum numbers on connected quantum graphs is expected to lead to deviations from random matrix results. the second point addresses the range of validity of the bgs conjecture. it is widely believed that the spectra of unitary propagators on quantum graphs follow random matrix statistics if the correlation exponents of an underlying stochastic dynamics are bounded away from zero in the limit of large graph sizes and the lengths of the arcs of the graph are incommensurate. we will argue here that the last condition can be considerably relaxed and that, in the context of regular graphs, the existence or absence of random matrix statistics is related to the commutativity properties of certain sets of _connectivity matrices_ to be defined in detail later. similar results for the spectra of laplacians of regular graphs have been reported in . a related discussion of spectra of adjacency matrices of cayley graphs of certain groups can be found in . + we start by briefly reviewing the notion of a quantum graph. a quantum graph is given by an underlying graph and a set of local scattering matrices at the vertices as well as a set of arc lengths. a (finite) _directed graph_ or _digraph_ consists of a finite set of _vertices_ and a set of ordered pairs of vertices called _arcs_. we denote by $V$ and $E$ the set of vertices and the set of arcs of the digraph, respectively.
given an ordering of the vertices, the _adjacency matrix_ of a digraph $\Gamma$ on $N$ vertices, denoted by $A_\Gamma$, is the $N \times N$ matrix whose $(i,j)$-th element is defined by $$\left( A_\Gamma \right)_{ij} = \begin{cases} 1 & \text{if } (v_i, v_j) \in E, \\ 0 & \text{otherwise.} \end{cases}$$ an _undirected graph_ (for short, _graph_) is a digraph whose adjacency matrix is symmetric. the _line digraph_ of a digraph $\Gamma$, denoted by $L\Gamma$, is defined as follows (see _e.g._ ): the vertices of $L\Gamma$ are the arcs of $\Gamma$ and, given two arcs $e_1, e_2$ of $\Gamma$, the ordered pair $(e_1, e_2)$ is an arc of $L\Gamma$ if and only if the head of $e_1$ coincides with the tail of $e_2$. a _quantum graph_ associated with a digraph on $N$ vertices may then be defined in terms of a set of unitary vertex scattering matrices $\sigma^{(v)}$ and a set of arc lengths $L_e$ defined for every arc $e$. waves propagate freely along the directed arcs; transitions between incoming and outgoing waves at a given vertex $v$ are described by the scattering matrix $\sigma^{(v)}$. the two sets specify a unitary propagator of dimension $|E|$ defining transitions between arcs, which has the form $$U(k) = S\, D(k) \quad \text{with} \quad D(k)_{e'e} = \delta_{e'e}\, e^{\mathrm{i} k L_e},$$ where $k$ is the wave number and $S_{e'e} = \sigma^{(v)}_{e'e}$ if arc $e$ ends at the vertex $v$ at which arc $e'$ starts, with $S_{e'e} = 0$ otherwise. the local scattering matrices depend on the boundary conditions and local potentials at the vertex, which we do not want to specify here any further. for our purposes, we may regard the $\sigma^{(v)}$'s as arbitrary unitaries. let $d_{\mathrm{in}}(v)$ and $d_{\mathrm{out}}(v)$ be the number of incoming and outgoing arcs of a vertex $v$, respectively. a sufficient and necessary condition for a digraph to be quantisable in the way above is then that $d_{\mathrm{in}}(v) = d_{\mathrm{out}}(v)$ for every vertex $v$. this means in particular that if $\Gamma$ is an undirected graph then it is quantisable. the `` classical '' dynamics corresponding to a quantum graph defined by a unitary propagator $U(k)$ is given by a stochastic process with transition matrix $$M_{e'e} = \left| U(k)_{e'e} \right|^{2}.$$ note that both the quantum mechanics as well as the associated stochastic dynamics relate to transitions between arcs and are thus defined on the line digraph of $\Gamma$. the paper is organised as follows. in section [sec:reg-graph], we introduce the notion of regular quantum graphs and discuss a factorisation of the propagator in terms of connectivity matrices for a special class of such graphs. in section [sec:sym-reg], we relate the existence or absence of symmetries on a connected regular quantum graph to properties of the connectivity matrices. we discuss some specific examples for completely connected graphs, including statistical properties of the spectra, in section [sec:examples]. in section [sec:rmt] we show numerically that by inscribing a _single_ unitary matrix into a large regular quantum graph one still obtains random matrix statistics, despite huge degeneracy in the set of arc lengths and scattering matrices, for a generic choice of connectivity matrices. a minimal numerical sketch of the propagator construction defined above is given below.
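the following is a minimal numpy sketch of the arc-scattering propagator $U(k) = S\,D(k)$ defined above. the example digraph, the rotation coin and the arc lengths are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def propagator(k, n_vertices, arcs, lengths, sigma):
    """U(k)_{e'e} = sigma^{(v)}_{e'e} * exp(i k L_e) whenever arc e ends at the
    vertex v at which arc e' starts; arcs is a list of (tail, head) pairs."""
    m = len(arcs)
    D = np.diag(np.exp(1j * k * np.asarray(lengths, dtype=float)))
    S = np.zeros((m, m), dtype=complex)
    for v in range(n_vertices):
        outs = [i for i, (t, _) in enumerate(arcs) if t == v]   # arcs leaving v
        ins  = [i for i, (_, h) in enumerate(arcs) if h == v]   # arcs entering v
        for r, e_out in enumerate(outs):
            for c, e_in in enumerate(ins):
                S[e_out, e_in] = sigma[v][r, c]
    return S @ D

# a 2-vertex digraph with d_in = d_out = 2 at both vertices (two parallel arcs
# each way); the 2x2 vertex scattering matrices are arbitrary unitaries.
theta = 0.4
u2 = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]], dtype=complex)
arcs = [(0, 1), (0, 1), (1, 0), (1, 0)]
U = propagator(k=2.3, n_vertices=2, arcs=arcs,
               lengths=[1.0, 2 ** 0.5, 3 ** 0.5, 5 ** 0.5], sigma={0: u2, 1: u2})
M = np.abs(U) ** 2                                   # classical transition matrix
assert np.allclose(U @ U.conj().T, np.eye(4))        # unitarity
assert np.allclose(M.sum(axis=0), 1.0)               # stochasticity
```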
we will implement symmetries on quantum graphs for which the wave dynamics at the vertices of the digraph is `` locally indistinguishable '' when going from one vertex to the next. we will restrict ourselves to wave dynamics on $d$-regular digraphs. recall that a digraph is said to be _$d$-regular_ if, for every vertex $v$, $d_{\mathrm{in}}(v) = d_{\mathrm{out}}(v) = d$, and thus $|E| = dN$. extending the concept of local indistinguishability to quantum graphs, we will consider quantum graphs on $d$-regular digraphs with local scattering matrices and sets of outgoing arc lengths that are identical at every vertex up to permutations of the incoming or outgoing channels. that is, there are a common $d \times d$ unitary matrix and a common set of $d$ arc lengths, together with local permutation matrices, such that every local scattering matrix $\sigma^{(v)}$ and every local set of outgoing lengths is obtained from the common ones by permuting the incoming and outgoing channels. combining the common local data into a single $d \times d$ matrix $C$, we call a quantum graph with these properties a _regular quantum graph_. the matrix $C$ is called the _coin_ in the context of quantum random walks on graphs. we denote by $J_d$ the $d \times d$ matrix with all elements equal to 1, and $I_N$ is the $N \times N$ identity matrix. the following observation will be useful in what follows (see also ): [proadjlg] let $A_\Gamma$ be the adjacency matrix of a $d$-regular digraph $\Gamma$ on $N$ vertices. the adjacency matrix of $L\Gamma$ has, up to reordering of the arcs, the form $$A_{L\Gamma} = \left( \bigoplus_{i=1}^{d} \rho_i \right) \left( J_d \otimes I_N \right), \qquad \text{([adj-lg])}$$ where the matrices $\rho_i$ of dimension $N$ have entries 0 or 1 and represent discrete functions on the vertex set, that is, every row of each $\rho_i$ contains exactly one non-zero entry, and in addition $$\sum_{i=1}^{d} \rho_i = A_\Gamma. \qquad \text{([cond-rho])}$$ * remark: * a given matrix $\rho_i$ assigns to every vertex a specific arc, see fig. [fig:graph]. note that, for $d > 1$, the choice of matrices $\rho_i$ is not unique and that the $\rho_i$'s do not need to be invertible. different sets of $\rho_i$'s fulfilling the conditions in the proposition give rise to adjacency matrices of the line digraph which are equivalent up to permutations in the ordering of the arcs in ([arc-ord]). + condition [cond-rho] ensures that every arc of $\Gamma$ is accounted for exactly once; writing out eqn. ([adj-lg]), we obtain $$\bigoplus_{i=1}^{d} \rho_i = \begin{bmatrix} \rho_1 & 0 & \cdots & 0 \\ 0 & \rho_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \rho_d \end{bmatrix}, \quad \text{and thus} \quad \left( \bigoplus_{i=1}^{d} \rho_i \right) \cdot \left( J_d \otimes I_N \right) = \begin{bmatrix} \rho_1 & \rho_1 & \cdots & \rho_1 \\ \rho_2 & \rho_2 & \cdots & \rho_2 \\ \vdots & \vdots & \ddots & \vdots \\ \rho_d & \rho_d & \cdots & \rho_d \end{bmatrix} \, .$$ the choice of matrices $\rho_i$ now fixes a certain ordering of the arcs; ordering the arcs accordingly, one deduces that the non-zero matrix elements of $A_{L\Gamma}$ as defined in ([adj-lg]) refer exactly to the allowed transitions in the line digraph of $\Gamma$. we pointed out in the introduction that the wave propagation on a quantum graph actually lives on the line digraph of the underlying digraph. generalising ([adj-lg]) to describe unitary propagation on digraphs, we write $$U = \begin{bmatrix} c_{11}\,\rho_1 & c_{12}\,\rho_1 & \cdots & c_{1d}\,\rho_1 \\ c_{21}\,\rho_2 & c_{22}\,\rho_2 & \cdots & c_{2d}\,\rho_2 \\ \vdots & \vdots & \ddots & \vdots \\ c_{d1}\,\rho_d & c_{d2}\,\rho_d & \cdots & c_{dd}\,\rho_d \end{bmatrix}, \qquad \text{([qu-lg])}$$ with $C = (c_{ij})$ being the unitary coin, and where the matrices $\rho_i$ fulfil condition ([cond-rho]). we have to add the additional constraint here that the $\rho_i$'s are invertible, that is, that they are permutation matrices. the condition is necessary to ensure that $U$ is unitary. we will refer to the permutations $\rho_i$ as the _connectivity matrices_ in what follows, see fig. [fig:graph]. a short numerical sketch of this block construction follows.
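a minimal numpy sketch of ([qu-lg]), assuming cyclic shifts as connectivity matrices purely for illustration; it checks that permutation connectivity matrices together with a unitary coin yield a unitary propagator.

```python
import numpy as np
from scipy.linalg import block_diag

def regular_propagator(C, rhos):
    """block (i, j) of U equals c_ij * rho_i, i.e. U = (sum-oplus_i rho_i)(C kron I_N)."""
    N = rhos[0].shape[0]
    return block_diag(*rhos) @ np.kron(C, np.eye(N))

def cyclic_shift(N, j):
    """(rho_j)_{kl} = delta_{k, (l + j) mod N}."""
    P = np.zeros((N, N))
    P[[(l + j) % N for l in range(N)], range(N)] = 1.0
    return P

rng = np.random.default_rng(0)
d, N = 3, 5
C, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))  # random coin
U = regular_propagator(C, [cyclic_shift(N, j) for j in (1, 2, 4)])
assert np.allclose(U @ U.conj().T, np.eye(d * N))   # permutations + unitary coin => unitary U
```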
the matrix $U$ in ([qu-lg]) satisfies all the properties of a regular quantum graph as defined above. the matrix $C$ is in particular the coin from which the local scattering matrices at the vertices can be deduced. from $U$ one obtains the connectivity matrices, which thus specify the pairs of arcs related through a transition at a given vertex. * remark: * in contrast to ([adj-lg]), where different decompositions of $A_\Gamma$ of the form ([cond-rho]) lead to equivalent adjacency matrices (up to reordering of the arcs), this is no longer the case for ([qu-lg]). different sets of connectivity matrices lead here to different regular quantum graphs, which may have very different spectral properties, as will be discussed in the next section. * remark: * note that not all regular quantum graphs can be written in the form ([qu-lg]). any pair of permutation matrices $P$ and $Q$ leaving the adjacency matrix of the line graph of a $d$-regular graph invariant transforms an associated propagator $U$ of a regular graph into a propagator $P U Q$ of a $d$-regular, albeit different, quantum graph. if $U$ is of the form ([qu-lg]), one easily finds permutations $P$ and $Q$ such that $P U Q$ is not of this form. so far we have considered general regular digraphs. in the special case where the underlying graph is undirected, it is natural to consider associated time-reversal symmetric regular quantum graphs; that is, regular quantum graphs for which for every (wave) path there exists an equivalent time-reversed path undergoing the same transitions. a time-reversal symmetric unitary propagator of the form ([qu-lg]) for an undirected regular graph can be constructed by choosing symmetric coin and connectivity matrices, that is, $$C = C^{\top} \qquad \text{and} \qquad \rho_i = \rho_i^{\top} \quad \text{for every } i.$$ note that the symmetry conditions for the connectivity matrices severely limit the choice of possible graphs and decompositions. we note first that if a regular quantum graph can be written in the form ([qu-lg]) and there exists an invertible matrix $S$ such that $$S \rho_i S^{-1} = \rho_i \quad \text{for every } i, \qquad \text{([comm])}$$ then $$\tilde{S}\, U\, \tilde{S}^{-1} = U \quad \text{with} \quad \tilde{S} = I_d \otimes S, \qquad \text{([comm1])}$$ independently of the choice of the coin. the result follows immediately from $[\, C \otimes I_N ,\ I_d \otimes S \,] = 0$. it is obvious that condition ([comm]) implies $[\tilde{S}, U] = 0$. [prop1] let $U$ be of the form ([qu-lg]), where the connectivity matrices $\rho_i$ commute pairwise, $\sum_i \rho_i$ is the adjacency matrix of a $d$-regular digraph and $C$ is a unitary matrix. let $V$ be the unitary matrix simultaneously diagonalising the $\rho_i$'s, and let $\chi^{i}_{m}$ be the $m$-th eigenphase of the connectivity matrix $\rho_i$, where the order is determined by the transformation $V$. the spectrum of $U$ is then $$\operatorname{spec}(U) = \bigcup_{m=1}^{N} \operatorname{spec}(U_m), \quad \text{where} \quad \left( U_m \right)_{ij} = c_{ij}\, \chi^{i}_{m}.$$ define $\tilde{V} = I_d \otimes V$ and note that $\tilde{V} U \tilde{V}^{\dagger}$ commutes with the diagonal matrices carrying the eigenphases. thus there exist permutation matrices reordering $\tilde{V} U \tilde{V}^{\dagger}$ such that it is indeed block-diagonal of the form stated in the proposition.
* remark: * note that the decomposition is independent of the coin. it can be shown that a set of commuting connectivity matrices of a connected graph always forms a subset of the regular (permutation) representation of an abelian group, and the underlying symmetry of the corresponding quantum graph is given by that group. (the commutativity of the $\rho_i$'s does in fact imply that $\Gamma$ is a cayley digraph of an abelian group; it must therefore have the form of a discretised torus). the sub-spectra obtained from proposition [prop1] may then be characterised in terms of the eigenbasis of the generators of the abelian group represented by the connectivity matrices. let $g_1, \ldots, g_s$ be the generators of such an abelian group, with the connectivity matrices given by products of their regular representations. the eigenbasis of the connectivity matrices may then be written in dirac notation as $|m_1, \ldots, m_s\rangle$, and the sub-spectra obtained from ([sub-ab]) are characterised by a set of `` quantum numbers '' $(m_1, \ldots, m_s)$. such a regular quantum graph is thus a discretised version of a quantum system whose underlying classical dynamics has $s$ integrals of motion in involution. some additional degrees of freedom are represented by the coin, which may or may not be related to classically chaotic dynamics depending on the properties of $C$ and the group. next we consider the case that a symmetry $S$ exists with $[S, \rho_i] = 0$ for every $i$ and $S$ has degenerate eigenvalues; $S$ could for example represent a geometric symmetry of the graph, that is, a permutation with eigenvalues $\pm 1$ only. let us assume that $S$ has $r$ distinct eigenvalues $\lambda_1, \ldots, \lambda_r$, each with multiplicity $g_l$, with $\sum_{l=1}^{r} g_l = N$. let $V$ be a unitary matrix diagonalising $S$, grouping equal eigenvalues together; then $\tilde{V} = I_d \otimes V$ brings $U$ into block-diagonal form with blocks $U_l$. the spectrum of $U$ is now decomposed in the following way: [prop2] let $U$ be of the form ([qu-lg]) and let the matrices $S$, $\rho_i$ have the properties described above; $U$ is then block-diagonal with $$\operatorname{spec}(U) = \bigcup_{l=1}^{r} \operatorname{spec}(U_l), \qquad \dim U_l = d\, g_l. \qquad \text{([sub-pi])}$$ the proof goes along the lines of the proof of proposition [prop1]. the decomposition is again independent of the coin, but the sub-spectra are now of dimension $d\, g_l$. there is a trivial symmetry, independent of the particular choice of the $\rho_i$'s, related to the fact that every permutation matrix has an eigenvalue 1 with corresponding eigenvector $(1, \ldots, 1)^{\top}$. the symmetry in question has the form $S = J_N$, having two distinct eigenvalues $N$ and $0$, where the eigenvalue $0$ has geometric multiplicity $N - 1$. as a consequence, any $U$ of the form ([qu-lg]) can be block-diagonalised containing the coin $C$ as a $d \times d$ block, and thus $\operatorname{spec}(C) \subset \operatorname{spec}(U)$. constructing particular examples, it is useful to start with the completely symmetric graph, namely that of a _fully connected graph_. this graph, also called the _complete graph_ (taken here with loops), has adjacency matrix $J_N$. as $[J_N, \rho] = 0$ for every permutation matrix $\rho$, a natural choice of connectivity matrices is given by the cyclic shifts $$(\rho_j)_{kl} = \delta_{k,(l+j)\operatorname{mod} N}, \qquad \chi^{j}_{m} = e^{2\pi\mathrm{i}\, \frac{jm}{N}}, \quad j, m = 1, \ldots, N,$$ which form the regular representation of the cyclic group; the $\chi^{j}_{m}$ are the corresponding eigenphases. the spectral statistics are obtained either by averaging over the wave number for a randomly chosen but then fixed coin, or by averaging over an ensemble of coins; both approaches are equivalent under very general conditions. the numerical results shown in fig. [fig:stat]a) indeed suggest poisson statistics, apart from deviations in the form factor on short scales due to the ` chaotic nature ' of the coin. a small numerical check of the block decomposition for this cyclic case is given below.
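the following numpy sketch verifies the block decomposition of proposition [prop1] for commuting cyclic shifts (a circulant digraph rather than the full complete graph, chosen for brevity): every eigenvalue of each $d \times d$ sub-block $U_m$ is indeed an eigenvalue of $U$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 3, 8
C, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

def shift(N, j):
    """cyclic shift: (rho_j)_{kl} = delta_{k, (l + j) mod N}."""
    P = np.zeros((N, N))
    P[[(l + j) % N for l in range(N)], range(N)] = 1.0
    return P

shifts = (1, 2, 3)
rhos = [shift(N, j) for j in shifts]
U = np.zeros((d * N, d * N), dtype=complex)
for i in range(d):
    for j in range(d):
        U[i * N:(i + 1) * N, j * N:(j + 1) * N] = C[i, j] * rhos[i]

spec = np.linalg.eigvals(U)
for m in range(N):                                    # quantum number m
    chi = np.exp(2j * np.pi * np.array(shifts) * m / N)
    for lam in np.linalg.eigvals(np.diag(chi) @ C):   # d x d sub-block U_m
        assert np.abs(spec - lam).min() < 1e-8        # sub-spectra exhaust spec(U)
```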
[fig:stat: form factor and nearest neighbour spacing distribution for (a) $\rho$'s forming the regular representation of the cyclic group; (b) $\rho$'s representing the symmetric group; (c) a generic set of $\rho$'s without non-trivial symmetries. the dashed curve in (b), labelled red. poisson, corresponds to a distribution of degenerate levels that are otherwise poisson distributed.] next, we consider a specific example of a non-abelian group, namely the symmetric group $S_4$ with 24 elements; we will discuss spectral properties of general groups elsewhere. the regular representation of the group can be decomposed in terms of its irreducible representations (for short _irreps_); it contains each $d_\gamma$-dimensional irrep exactly $d_\gamma$ times. the group has 2 one-dimensional, 1 two-dimensional and 2 three-dimensional irreps, such that $\sum_\gamma d_\gamma^2 = 24$. denote the irreps of a group element $g$ by $D^{(\gamma)}(g)$, with the index $\gamma$ counting different irreps of the same dimension; there exists then a transformation $V$ such that the connectivity matrices take the block form discussed in section [sec:sec-part]. note that the sub-matrices related to $d_\gamma$-dimensional irreps occur now $d_\gamma$ times in the decomposition. we thus have 5 independent sub-spectra, 2 of dimension 24, 1 of dimension 48 and 2 of dimension 72, of which the latter are of multiplicity two and three, respectively. the huge degeneracy in the spectra can clearly be seen in the spectral statistics; it is manifest in the peak at small spacings in the nearest neighbour distribution (see figure [fig:stat]b) and leads to a characteristic enhancement of the form factor for small arguments. the spectra appear to be uncorrelated otherwise; note, however, that the spectrum of each sub-matrix alone is correlated following cue statistics, which manifests itself in the deviations from purely poisson behaviour in the spacing distribution (cf. dashed curve) as well as in the behaviour of the form factor, which for small arguments is dominated by the sub-spectra of the three-dimensional irreps. the overwhelming number of decompositions of the form ([cond-jd]) will of course have no common symmetry apart from the trivial symmetry discussed in section [sec:sec-part]. even though no further analytical results can be given in this case, a numerical study may reveal interesting insights into the range of validity of the universal rmt regime. figure [fig:stat]c) shows the level statistics of a regular quantum graph obtained from a fully connected graph for a generic choice of connectivity matrices. one indeed finds good agreement with random matrix theory for the cue ensemble. deviations in the form factor for small arguments can be attributed to the fact that the spectrum of the coin is contained in the full spectrum. the spacing statistics shown in the figures are computed as sketched below.
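a short sketch of the unfolding used for the nearest neighbour spacing statistics; the reference curves for poisson and cue statistics (the wigner surmise) are standard expressions, not taken from the paper.

```python
import numpy as np

def nearest_neighbour_spacings(U):
    """unfolded spacings of the eigenphases of a unitary matrix (mean spacing 1)."""
    phases = np.sort(np.angle(np.linalg.eigvals(U)))
    gaps = np.diff(np.append(phases, phases[0] + 2 * np.pi))   # wrap around the circle
    return gaps * len(phases) / (2 * np.pi)

# a histogram of nearest_neighbour_spacings(U) approximates p(s):
# poisson statistics give p(s) = exp(-s), while for the cue the wigner
# surmise p(s) = (32 / pi^2) s^2 exp(-4 s^2 / pi) is a good approximation.
s = np.linspace(0, 3, 200)
p_poisson = np.exp(-s)
p_cue = (32 / np.pi ** 2) * s ** 2 * np.exp(-4 * s ** 2 / np.pi)
```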
after removing this separable part of the spectrum, as done for the nns in figure [fig:stat]c), there is good agreement with random matrix results. it is worth keeping in mind that this is a highly non-random matrix; we are dealing here with the spectrum of a unitary matrix which has only a small number of non-zero elements, of which only a few are independent. in particular, the arc lengths in the graph are not incommensurate; the arcs in the graph indeed share only a few different lengths among them. still, universality is obtained. the origin of the complexity in this type of quantum graph is here clearly not due to randomness in the choice of the matrix elements but due to _the lack of a common symmetry_ in the set of connectivity matrices. the results in the last section suggest that the spectral statistics of regular quantum graphs can to a large extent be controlled by properties of the permutation matrices, independently of the coin. it is thus natural to ask whether we can reduce the dimension of the coin to its smallest possible value, namely $d = 2$, by considering large 2-regular quantum graphs and still obtain random matrix correlations. we can only expect random matrix statistics on a regular quantum graph if the corresponding quantum graph with randomly chosen arc lengths falls into the random matrix category. we therefore need to consider 2-regular graphs leading to fast (classical) mixing and not, for example, diffusive networks like ring graphs exhibiting 1d anderson localisation. the $d$-regular digraphs with the fastest mixing rates are so-called $d$-ary _de bruijn graphs_ of order $k$, arising as the $k$-th line graph generation of a complete graph of size $d$; that is, they are defined iteratively by repeatedly taking line digraphs. de bruijn graphs have size $N = d^{k}$ and are equipped with a complete symbolic dynamics of order $k$. they play an important role in coding theory and parallel algorithms. numerical evidence suggests that quantum graphs based on de bruijn digraphs with incommensurate arc lengths follow random matrix statistics in the limit of large graph sizes even for $d = 2$. in fig. [fig:debrstat] we show results for a regular quantum graph based on a binary de bruijn graph with k = 9 and a quantum propagator of the form ([qu-lg]), where the $\rho_i$ are permutation matrices and the coin is a $2 \times 2$ unitary matrix. the statistics in fig. [fig:debrstat] are obtained by averaging over the space of $2 \times 2$ unitary coins with respect to the haar measure. the connectivity matrices, of dimension $2^{9} = 512$, have been chosen randomly. to avoid accidental symmetries, different sets of connectivity matrices have been produced and the statistics of the corresponding ensemble averages combined. the spectral statistics of these regular quantum graphs again agree very well with cue statistics. recall that the unitary matrix of size 1024 has non-zero matrix elements which take on only 4 different (complex) values! we have thus constructed extremely non-random matrices which still show universal random matrix statistics, and have thereby shown that the bgs conjecture is valid far beyond the regimes previously thought to be included in the conjecture. [fig:debrstat: form factor and nearest neighbour spacing distribution for a binary de bruijn graph of order $k = 9$.] a sketch of the corresponding de bruijn construction follows.
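a sketch of a binary de bruijn quantum graph propagator under our own indexing convention (arcs as $(k+1)$-bit integers); the arc-length phases are absorbed into the coin here, which is a simplification of the ensemble described above.

```python
import numpy as np

def de_bruijn_propagator(k, C):
    """arcs of the binary de bruijn graph of order k are (k+1)-bit integers;
    arc a = b_1 ... b_{k+1} scatters into the arcs b_2 ... b_{k+1} c with
    amplitude C[c, b_1], where b_1 is the incoming channel at the head vertex
    and the shifted-in bit c is the outgoing channel."""
    m = 2 ** (k + 1)
    U = np.zeros((m, m), dtype=complex)
    for a in range(m):
        b1 = a >> k                       # leading bit = incoming channel
        for c in (0, 1):                  # shifted-in bit = outgoing channel
            U[(2 * a + c) % m, a] = C[c, b1]
    return U

theta = 0.3
C = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]], dtype=complex)
U = de_bruijn_propagator(9, C)            # 1024 x 1024, entries from only 4 values
assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))   # unitary despite huge degeneracy
```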
similar numerical results were also found for the spectra of the laplacian of regular graphs; a laplacian on a $d$-regular undirected, loop-less graph is a symmetric matrix whose non-zero matrix elements take on only two values. when averaging over sets of $d$-regular graphs, agreement with goe statistics was found. this underlines once more that the origin of universality in spectral statistics does not lie in the randomness of the matrix elements. we introduce a decomposition of certain regular quantum graphs which separates the quantum propagator on a graph into a topological part containing the connectivity matrices and a trivial part containing the quantum scattering information at the vertices. this allows one to implement global symmetries on the graph by choosing the connectivity matrices according to desired symmetry properties. we demonstrate that the complexity in the quantum spectrum (which may be seen to take on its maximal value when the statistics coincide with rmt) can here be linked to the amount of complexity contained in the set of permutation matrices building up the quantum graph. we present examples where, for a given graph and a fixed coin matrix, we were able to construct anything from poisson to rmt statistics just by changing the set of connectivity matrices. by doing so, we leave the local properties of the graph invariant, but change the way in which incoming and outgoing channels between vertices are connected, and thus the global structure of the wave dynamics. we take this concept to its extreme by demonstrating numerically that unitary matrices representing 2-regular quantum graphs whose non-zero matrix elements take on only four different values still follow cue statistics for de bruijn graphs. we believe that our results open up new perspectives in understanding universality in spectral statistics. this transforms the question from a continuous into an essentially discrete problem, focusing on the way local scattering processes are connected and condensing the parameter space to an absolute minimum (namely four-dimensional). + * acknowledgement: * we would like to thank andreas winter for interesting discussions and stephen creagh and jens marklof for reading the manuscript and for valuable comments. the main part of the work was carried out during a _royal society industrial fellowship_ by one of us (gt), who would like to thank the royal society and hewlett-packard, bristol, for financial support.
g. tanner, _j. phys. a: math. gen._ *34*, 8485-8500 (2001), nlin.cd/0104014.
a. terras, _amer. math. monthly_ *109*, 121-139 (2003); a. terras, _fourier analysis on finite groups and applications_ (cambridge university press, cambridge, 1999).
we introduce the concept of regular quantum graphs and construct connected quantum graphs with discrete symmetries. the method is based on a decomposition of the quantum propagator in terms of permutation matrices which control the way incoming and outgoing channels at vertex scattering processes are connected. symmetry properties of the quantum graph as well as its spectral statistics depend on the particular choice of permutation matrices, also called connectivity matrices, and can now be easily controlled. the method may find applications in the study of quantum random walks and networks, and may also prove useful in analysing universality in spectral statistics.
a variety of applications share the core task of selecting a subset of $k$ columns from a short, wide matrix $A$ with $n$ rows and $m$ columns. the criteria for selecting these columns typically aim at preserving information about the span of $A$ while generating a well-conditioned submatrix. classical and recent examples include experimental design, where we select observations or experiments; preconditioning for solving linear systems and constructing low-stretch spanning trees (here $A$ is a version of the node-edge incidence matrix and we select edges in a graph); matrix approximation; feature selection in $k$-means clustering; sensor selection; and graph signal processing. in this work, we study an elegant randomized approach that holds promise for all of these applications. this approach relies on sampling columns of $A$ according to a probability distribution defined over its submatrices: the probability of selecting a set $S$ of $k$ columns from $A$, with $k \ge n$, is $$P(S) \propto \det\left( A_S A_S^{\top} \right),$$ where $A_S$ is the submatrix consisting of the selected columns. this distribution is reminiscent of _volume sampling_, where $k \le n$ columns are selected with probability proportional to the determinant of the $k \times k$ matrix $A_S^{\top} A_S$, i.e., the squared volume of the parallelepiped spanned by the selected columns. (volume sampling does _not_ apply to $k > n$ as the involved determinants vanish.) in contrast, $P$ uses the determinant of an $n \times n$ matrix and uses the volume spanned by the _rows_ formed by the selected columns. hence we refer to sampling from $P$ as _dual volume sampling (dvs)_. *contributions.* despite the ostensible similarity between volume sampling and dvs, and despite the many practical implications of dvs outlined below, efficient algorithms for dvs are not known and were raised as open questions in .
in this work, we make two key contributions: * we develop polynomial-time randomized sampling algorithms and their derandomization for dvs; surprisingly, our proofs require only elementary (but involved) matrix manipulations. * we establish that $P$ is a _strongly rayleigh_ measure, a remarkable property that captures a specific form of negative dependence. our proof relies on the theory of real stable polynomials, and the ensuing result implies a provably fast-mixing, practical mcmc sampler. moreover, this result implies concentration properties for dual volume sampling. the selection of columns from a short and wide matrix has many applications. our algorithms for dvs hence have several implications and connections; we note a few below. *experimental design.* the theory of optimal experimental design explores several criteria for selecting the set of columns (experiments) $S$. popular choices include the a- and e-optimality objectives $\|A_S^{\dagger}\|_F$ and $\|A_S^{\dagger}\|_2$; here, $A^{\dagger}$ denotes the moore-penrose pseudoinverse of $A$, and the minimization ranges over all $S$ such that $A_S$ has full row rank. a-optimal design, for instance, is statistically optimal for linear regression. finding an optimal solution for these design problems is np-hard, and most discrete algorithms use local search. it has been shown (thm. 3.1 in ) that dual volume sampling yields an approximation guarantee for both a- and e-optimal design: if $S$ is sampled from $P$, then $$\mathbb{E}\left[\|A_S^{\dagger}\|_F^2\right] \le \frac{m - n + 1}{k - n + 1}\,\|A^{\dagger}\|_F^2; \quad \mathbb{E}\left[\|A_S^{\dagger}\|_2^2\right] \le \left( 1 + \frac{n(m - k)}{k - n + 1} \right) \|A^{\dagger}\|_2^2.$$ prior work provides a polynomial time sampling algorithm only for the base case $k = n$. our sampling algorithms achieve the bound in expectation, and the derandomization in section [sec:derand] achieves the bound deterministically. recently (and in parallel), approximation bounds for a-optimality were achieved via a different algorithm combining a convex relaxation and a greedy method. other methods for experimental design include leverage score sampling and predictive length sampling. *low-stretch spanning trees and applications.* objectives [eq:opt] also arise in the construction of low-stretch spanning trees, which have important applications in graph sparsification, preconditioning and solving symmetric diagonally dominant (sdd) linear systems, among others. in the node-edge incidence matrix of an undirected graph with $n$ nodes and $m$ edges, the column corresponding to an edge $(u, v)$ is the difference of the corresponding coordinate vectors, and the stretch of a spanning tree can be expressed through the singular values of the associated submatrix. in those applications, we hence search for a set of edges (a spanning tree) with low stretch. a small numerical sketch of the dvs weight and the design objectives above is given below.
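for concreteness, here is a small numpy sketch of the unnormalized dvs probability and the a-/e-optimality costs discussed above; the random test matrix and the candidate subset are arbitrary illustrative choices.

```python
import numpy as np

def dvs_weight(A, S):
    """unnormalized dvs probability of a column subset S: det(A_S A_S^T)."""
    As = A[:, list(S)]
    return np.linalg.det(As @ As.T)

def a_optimality_cost(A, S):
    """frobenius-norm objective ||A_S^+||_F^2 (a-optimality)."""
    return np.linalg.norm(np.linalg.pinv(A[:, list(S)]), 'fro') ** 2

def e_optimality_cost(A, S):
    """spectral-norm objective ||A_S^+||_2^2 (e-optimality)."""
    return np.linalg.norm(np.linalg.pinv(A[:, list(S)]), 2) ** 2

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8))               # n = 3 rows, m = 8 columns
S = (0, 2, 5, 7)                          # a candidate subset of k = 4 columns
print(dvs_weight(A, S), a_optimality_cost(A, S), e_optimality_cost(A, S))
```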
*network controllability.* the problem of sampling columns in a matrix also arises in network controllability problems. for example, in , the authors consider selecting control nodes (under certain constraints) over time in complex networks to control a linear time-invariant network. after transforming the problem into a column subset selection problem for a short and wide controllability matrix, the objective becomes essentially an e-optimal design problem, for which the authors use greedy heuristics. we focus on a specific, widely applicable setup of column subset selection: _picking $k$ columns from a short and wide matrix_ (to optimize certain criteria). this setup was analyzed in , where the authors minimize a norm of the pseudoinverse of the induced submatrix. formally, let $A$ be an $n \times m$ matrix with $n \le m$ and full row rank (so its rank equals $n$), and let $k$ be a sampling parameter with $n \le k \le m$. we write $A_S$ for the submatrix of $A$ formed by the columns indexed by $S \subseteq [m]$; $\|\cdot\|_F$ and $\|\cdot\|_2$ denote the frobenius and spectral norms, respectively, and $A^{\dagger}$ denotes the moore-penrose pseudoinverse of $A$. the initial motivation in for using this objective was to find low-stretch spanning trees in an undirected graph, a problem that reduces to the spectral-norm case. however, the problem arises much more widely. for example, it is known that a solution to the frobenius-norm case is the statistically optimal solution for experimental design in linear regression. additional applications include low-rank matrix approximation, feature selection in $k$-means clustering and sensor selection, among many others. given its many applications, finding efficient methods to solve this problem is valuable. a promising approach advanced in is a sampling algorithm based on determinants of submatrices. although the authors refer to their approach as `` volume sampling, '' it is different from the original volume sampling method of ; therefore, to avoid confusion, we will refer to the method as _dual volume sampling (dvs)_ (see [sec:setup]). it has been shown that dvs solves the problem well. specifically, if we sample a subset $S$ of cardinality $k$ via dvs, then the approximation guarantees $$\mathbb{E}\left[\|A_S^{\dagger}\|_F^2\right] \le \frac{m - n + 1}{k - n + 1}\,\|A^{\dagger}\|_F^2; \quad \mathbb{E}\left[\|A_S^{\dagger}\|_2^2\right] \le \left( 1 + \frac{n(m - k)}{k - n + 1} \right) \|A^{\dagger}\|_2^2$$ hold. unfortunately, a polynomial time sampling algorithm is provided only for the case $k = n$. no polynomial time algorithm for dvs in the more important setting $k > n$ is currently known. further, no _derandomized_ algorithm that satisfies the dvs bounds deterministically is known either; the development of efficient algorithms for both these problems was raised as an open problem. the main focus of this paper is to provide positive answers to these open problems. as a result, dvs becomes a practical method for column subset selection (in the harder $k > n$ regime). in particular, we provide two polynomial time sampling methods for dvs: (i) an exact algorithm with running time polynomial in $m$ and $k$; and (ii) an efficient approximate sampling algorithm via markov chains. moreover, we also present a derandomized version of our exact sampling algorithm that deterministically samples a subset satisfying the bounds above. our exact sampling algorithm uses a perturbation argument to compute the partition function and the marginal probabilities. this computation, if done naively, would take time exponential in $m$. we refer to the perturbation as the _$\epsilon$-trick_. further, based on the computation of marginal probabilities, we propose a derandomization of our first algorithm to obtain a method that samples deterministically.
to obtain our fast mixing markov chain sampler, we appeal to the powerful theory of stable polynomials and _ strongly rayleigh _measures .in particular , we first establish that the measure induced by dual volume sampling is ( homogeneously ) strongly rayleigh .thereafter , fast mixing follows from the recent result of , that provides a fast mixing ( for suitably initialized ) markov chain for sampling from strongly rayleigh measures .* contributions . * in summary ,the key contributions of this paper are : 1 .the _ first _ polynomial - time exact dvs algorithm ( section [ sec : dualvol ] ) 2 .the _ first _ polynomial - time derandomized dvs algorithm that deterministically generates a subset satisfying ( section [ sec : derand ] ) .3 . a fast - mixing approximate dvs sampler built on markov chains for strongly rayleigh measures ;this yields a method that runs in time _linear _ in the dataset size ( section [ sec : mcmc ] ) .[ [ notation . ] ] notation . + + + + + + + + + from a matrix with columns , we sample a set ] .we denote the singular values of by , in decreasing order .we will assume has full row rank , so .we also assume that for every ] such that .observe that while ostensibly similar , these distributions are quite different : one is for sampling and the other for .both are related to volumes induced by submatrices is proportional to the square of the volume of the parallelepiped spanned by the _ rows _ of , while is proportional to the volume spanned by the _ columns _ of .these differences and connections make closely related yet different from , which explains our choice of name `` dual volume sampling . ''we describe in this section our method to sample from the distribution .our first method relies on the key insight that , as we show , the marginal probabilities for dvs can be computed in polynomial time . to demonstrate this , we begin with the partition function and then derive marginals .the partition function has a convenient simple closed form , which follows from the cauchy - binet formula and was also derived in .[ lem : partition ] for with and , we have }\det(a_s a_s^\top ) = \binom{m - n}{k - n}\det(a a^\top).\ ] ] next , we will need the marginal probability that a given set ] denotes the ( set ) complement of , and denotes the orthogonal complement of .[ thm : marginal ] let ] and .then the marginal probability of in dvs is \times \left[\prod_{j=1}^{r(b)}\sigma_j^2(b)\right ] \times \gamma \over z_a}.\end{aligned}\ ] ] we prove theorem [ thm : marginal ] via a perturbation argument that connects dvs to volume sampling .specifically , observe that for and it holds that carefully letting bridges volumes with `` dual '' volumes .the technical remainder of the proof further relates this equality to singular values , and exploits properties of characteristic polynomials .a similar argument yields an alternative proof of lemma [ lem : partition ] .we show the proofs in detail in appendix [ app : sec : partition ] and [ app : sec : marginal ] respectively . 
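as a quick sanity check of lemma [ lem : partition ], the closed form can be verified numerically by brute-force enumeration over all size-k subsets. the sketch below (in python, with hypothetical dimensions n, m, k kept small) is illustrative only, since the enumeration is exponential in m:

    import itertools
    from math import comb

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 3, 7, 5                        # short, wide A with n <= k <= m
    A = rng.standard_normal((n, m))

    # brute-force partition function: sum of det(A_S A_S^T) over all |S| = k
    brute = sum(
        np.linalg.det(A[:, list(S)] @ A[:, list(S)].T)
        for S in itertools.combinations(range(m), k)
    )
    closed = comb(m - n, k - n) * np.linalg.det(A @ A.T)
    assert np.isclose(brute, closed)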
in this section ,we show how to compute the partition function and marginal distributions for .we base our analysis on a simple perturbation argument , which we call the _ -trick_.this serves as a bridge between dual volume sampling and volume sampling , based on which we can perform the desired computations in polynomial time .theorem [ thm : partition ] illustrates the -trick , which is subsequently used in theorem [ thm : marginal ] to obtain polytime computation of the marginal distribution .this theorem is the key building block for our final polynomial time exact dvs algorithm of section [ sec : dualvol ] .before we dive into partition functions and marginals , we recall two easily verified facts about determinants that will be useful in our analysis : the partition function of , happens to have a pleasant closed - form formula . although this formula is known , and follows immediately by an application of the cauchy - binet identity , we present an alternative proof based on the -trick perturbation for its conceptual value and subsequent use .[ thm : partition ] given where and , we have }\det(a_s a_s^\top ) = \left(\begin{array}{c}m - n\\k - n\end{array}\right ) \det(a a^\top).\end{aligned}\ ] ] first note that for and any , by we have taking limits as on both sides we have let us focus on .we construct an identity matrix , then we have }\right ) .\end{split}\ ] ] in other words , this value is proportional to the probability of sampling columns from } ] , , and let \backslash t ] and .then the marginal probability of sampling via dual volume sampling is given by \times \left[\prod_{j=1}^{r(b_{t_c})}\sigma_j^2(b_{t_c})\right ] \times \gamma_t \over \binom{m - n}{k - n}\det(a a^\top)}.\end{aligned}\ ] ] the proof is an application of the -trick ; due to its length we defer it to appendix [ app : sec : marginal ] . [ [ complexity . ] ] complexity .+ + + + + + + + + + + the numerator of in theorem [ thm : marginal ] requires time to compute the first term , to compute the second and to compute the third .the denominator takes time , amounting in a total time of to compute the marginal probability .the marginal probabilities derived above directly yield a polynomial - time _ exact _dvs algorithm . instead of -sets, we sample ordered -tuples ^k ] be the sampled tuple .since each permutation of the is equally likely , it follows that the joint probability of the tuple can be decomposed into a product of conditional probabilities as given these conditional probabilities , sampling is straightforward : in the -th iteration , we calculate the conditional probability of given for each ] .doing so introduces a small distortion factor of order .this procedure can be further accelerated by applying volume - preserving random projections , resulting in an algorithm with running time .this time is linear in but of high order in , and thus efficient only if . 
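the sequential tuple-sampling logic described above can be sketched as follows. this is a deliberately brute-force illustration: the conditional probabilities are obtained here by exhaustive summation over completions, which is exponential in m, whereas the algorithm in the text obtains them in polynomial time from the closed-form marginals of theorem [ thm : marginal ]:

    import itertools
    import numpy as np

    def dvs_weight(A, S):
        B = A[:, sorted(S)]                  # det(B B^T) is permutation-invariant
        return np.linalg.det(B @ B.T)

    def sample_dvs_bruteforce(A, k, rng):
        n, m = A.shape
        chosen = []
        for _ in range(k):
            w = np.zeros(m)                  # unnormalized P(next = j | prefix)
            for j in range(m):
                if j in chosen:
                    continue
                rest = [c for c in range(m) if c not in chosen + [j]]
                for T in itertools.combinations(rest, k - len(chosen) - 1):
                    w[j] += dvs_weight(A, chosen + [j] + list(T))
            chosen.append(int(rng.choice(m, p=w / w.sum())))
        return set(chosen)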
for sdd matrices, recent work may yield improvements to this running time. [ thm : approx ] for any and there is an algorithm that, in time, samples a subset from an approximate distribution with and. in summary, we obtain two sampling algorithms: an exact one with running time, and an approximate one with time. next, we derandomize the above sampling algorithm to _deterministically_ select a subset that satisfies the bound for the frobenius norm, thereby answering another open question. the key insight for derandomization is that conditional expectations can be computed in polynomial time, given the marginals in theorem [ thm : marginal ]: [ cor : conditional ] given a prefix ^{t-1} ] of selected columns, the conditional expectation of \|a_s^\dagger\|_f^2 can be computed in polynomial time. with this corollary, the greedy procedure selects at each step the element with the smallest conditional expectation and appends it to our selection:. the final set is the non-ordered version of the resulting tuple. theorem [ thm : derandbounds ] shows that this greedy procedure succeeds, and implies a deterministic version of the bound ( [ eq : bounds ] ). [ thm : derandbounds ] the greedy derandomization selects a column set satisfying the frobenius-norm bound in ( [ eq : bounds ] ) deterministically. the proof constructs an iterative algorithm where, in each iteration, we go through all elements that have not yet been selected and compute the expectation conditioned on including this element in the current set. we then choose the element with the lowest conditional expectation to actually include in the current set. such inclusions only decrease the conditional expectation, thus retaining the bound in theorem [ thm : derandbounds ]. the detailed proof is deferred to appendix [ app : sec : greedy ]. * complexity. * at each iteration, the greedy selection computes the conditional expectation for each remaining item; the total running time over all iterations follows accordingly. the approximation bound for the spectral norm is slightly worse than that in prior work, but of the same order if. in the remainder of the paper, we investigate dvs more deeply and discover that it possesses a remarkable structural property, namely, the _strongly rayleigh (sr)_ property. this property has proved remarkably fruitful in a variety of recent contexts, including recent progress in approximation algorithms, fast sampling, graph sparsification, extensions to the kadison-singer problem, and certain concentration of measure results, among others. for dvs the sr property has two major consequences: it leads to a fast mixing practical mcmc sampler, and it implies results on concentration of measure. [ [ strongly - rayleigh - measures . ] ] strongly rayleigh measures. + + + + + + + + + + + + + + + + + + + + + + + + + + + sr measures were introduced in the landmark paper of borcea, brändén and liggett, who develop a rich theory of negatively associated measures.
in particular , we say that a probability measure }\to { \mathbf{r}}_+ ] with _ disjoint _ support .this property reflects a `` repelling '' nature of , a property that occurs more broadly across probability , combinatorics , physics , and other fields see and references therein .the negative association property turns out to be quite subtle in general ; the class of sr measures captures a strong notion of negative association and provides a framework for analyzing such measures .specifically , sr measures are defined via their connection to real - stable polynomials .a multivariate polynomial ] is real - stable .notable examples of sr measures are determinantal point processes , balanced matroids , bernoullis conditioned on their sum , among others .it is known ( see ( * ? ? ?* pg . 523 ) ) that the class of sr measures is exponentially larger than the class of determinantal measures .theorem [ thm : sr ] establishes the sr property for dvs and is the main result of this section . here andin the following , we use the notation .[ thm : sr ] let and .then the multiaffine polynomial } \det(a_s a_s^\top)\prod_{i\in s}z_i\quad=\quad \sum_{|s|=k , s\subseteq[ m ] } \det(a_s a_s^\top)z^{s},\ ] ] is real stable .consequently , is an sr measure .the proof of theorem [ thm : sr ] relies on key properties of real stable polynomials and sr measures established in .essentially , the proof demonstrates that the generating polynomial of can be obtained by applying a few carefully chosen stability preserving operations to a polynomial that we know to be real stable .stability , although easily destroyed , is closed under several operations noted in the important proposition below .[ prop : basic ] let be a stable polynomial .the following properties preserve stability : * substitution * : for ; * differentiation * : for any ] be an diagonal matrix . using the cauchy - binet identity we have } \det ( ( ay)_{:,t})\det((a^\top)_{t , : } ) = { \sum\nolimits}_{|t|=n , t\subseteq [ m]}\det(a_t^\top a_t)y^t .\end{split}\ ] ] thus , when , the ( diagonal ) indicator matrix for , we obtain .consequently , in the summation above only terms with survive , yielding we are now ready to sketch the proof of theorem [ thm : sr ] .notationally , it is more convenient to prove that the `` complement '' polynomial }\det(a_sa_s^\top)z^{s_c} ] but then have to constrain the support to the case when and .in other words , we may write ( using iverson - brackets ) }{\llbracket s_c\cap r=\emptyset \wedge s_c\cap t=\emptyset \rrbracket}\det(l_{t , t})z^{s_c}w^{r}.\ ] ] next , we truncate polynomial at degree by restricting . by ( * ? ? ?* corollary 4.18 ) this truncation preserves stability , whence \\ |s_c\cup r|=m - n}}{\llbracket s_c\cap r=\emptyset \rrbracket}\det(l_{s\backslash r , s\backslash r})z^{s_c}w^{r},\ ] ] is also stable . 
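one numerically checkable consequence of the sr property (via negative association) is pairwise negative correlation of the membership indicators: pr[i, j in s] <= pr[i in s] pr[j in s]. a small brute-force check, for illustration only:

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, k = 2, 6, 4
    A = rng.standard_normal((n, m))

    subsets = list(itertools.combinations(range(m), k))
    w = np.array([np.linalg.det(A[:, list(S)] @ A[:, list(S)].T) for S in subsets])
    p = w / w.sum()

    def prob(event):        # probability that every column in `event` is chosen
        return sum(pi for S, pi in zip(subsets, p) if event <= set(S))

    for i, j in itertools.combinations(range(m), 2):
        assert prob({i, j}) <= prob({i}) * prob({j}) + 1e-12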
using prop .[ prop : basic]-(iii ) , setting retains stability ; thus \\r|=m - n } } { \llbracket s_c\cap r=\emptyset \rrbracket}\det(l_{s\backslash r , s\backslash r})z^{s_c}y^{|r|}\\ & = \sum_{s\subseteq [ m]}\bigl({\sum\nolimits}_{|t|=n , t\subseteq s}\det(l_{t , t})\bigr)y^{|s|-|t|}z^{s_c } = \sum_{s \subseteq [ m ] } e_n(l_{s , s})y^{|s|-n}z^{s_c},\end{aligned}\ ] ] is also stable .next , differentiating , times with respect to and evaluating at preserves stability ( prop .[ prop : basic]-(ii ) and ( i ) ) .in doing so , only terms corresponding to survive , resulting in }e_n(l_{s , s})z^{s_c } = ( k - n)!\sum_{|s|=k , s\subseteq[m]}\det(a_s a_s^\top ) z^{s_c},\ ] ] which is just ( up to a constant ) ; here , the last equality follows from prop .[ prop : elsym ] .this establishes stability of and hence of .since is in addition multiaffine , it is the generating polynomial of an sr measure , completing the proof .the sr property of established in theorem [ thm : sr ] implies a fast mixing markov chain for sampling .the states for the markov chain are all sets of cardinality .the chain starts with a randomly - initialized active set , and in each iteration we swap an element with an element with a specific probability determined by the probability of the current and proposed set .the stationary distribution of this chain is the one induced by dvs , by a simple detailed - balance argument .the chain is shown in algorithm [ algo : mcmc ] .* input : * the matrix of interest , the target cardinality * output : * initialize ] uniformly randomly + with probability the convergence of the markov chain is measured via its mixing time : the _ mixing time _ of the chain indicates the number of iterations that we must perform ( starting from ) before we can consider as an approximately valid sample from .formally , if is the total variation distance between the distribution of and after steps , then is the _ mixing time _ to sample from a distribution -close to in terms of total variation distance .we say that the chain mixes fast if is polynomial in the problem size .the fast mixing result for algorithm [ algo : mcmc ] is a corollary of theorem [ thm : sr ] combined with a recent result of on fast - mixing markov chains for homogeneous sr measures .theorem [ thm : fastmc ] states this more precisely .[ thm : fastmc ] the mixing time of markov chain shown in algorithm [ algo : mcmc ] is given by since is -homogeneous sr by theorem [ thm : sr ] , the chain constructed for sampling following that in mixes in time .[ [ implementation . ] ] implementation .+ + + + + + + + + + + + + + + to implement algorithm [ algo : mcmc ] we need to compute the transition probabilities .let and assume .by the matrix determinant lemma we have the acceptance ratio thus , the transition probabilities can be computed in time .moreover , one can further accelerate this algorithm by using the quadrature techniques of to compute lower and upper bounds on this acceptance ratio to determine early acceptance or rejection of the proposed move .[ [ initialization . ] ] initialization .+ + + + + + + + + + + + + + + a remaining question is initialization .since the mixing time involves , we need to start with such that is sufficiently bounded away from .we show in appendix [ app : sec : init ] that by a simple greedy algorithm , we are able to initialize such that , and the resulting running time for algorithm [ algo : mcmc ] is , which is _ linear _ in the size of data set and is efficient when is not too large . 
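a direct (if unoptimized) implementation of the exchange chain in algorithm [ algo : mcmc ] is sketched below. it recomputes the determinant from scratch at every step instead of using the matrix determinant lemma, and uses a plain random initialization rather than the greedy one discussed above, so it should be read as illustrative:

    import numpy as np

    def dvs_mcmc(A, k, n_steps, rng):
        n, m = A.shape
        S = list(rng.choice(m, size=k, replace=False))
        logdet = lambda idx: np.linalg.slogdet(A[:, idx] @ A[:, idx].T)[1]
        cur = logdet(S)
        for _ in range(n_steps):
            i = int(rng.integers(k))                   # position in S to swap out
            t = int(rng.choice([c for c in range(m) if c not in S]))
            T = S.copy(); T[i] = t
            new = logdet(T)
            if np.log(rng.random()) < new - cur:       # metropolis acceptance
                S, cur = T, new
        return set(S)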
[ [ concentration . ] ] concentration .+ + + + + + + + + + + + + + show concentration results for strong rayleigh measures . as a corollary of our theorem [ thm : sr ] together with their results , we directly obtain tail bounds for dvs . [ [ algorithms - for - experimental - design . ] ] algorithms for experimental design .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + widely used , classical algorithms for finding an approximate optimal design include fedorov s exchange algorithm ( a greedy local search ) and simulated annealing .both methods start with a random initial set , and greedily or randomly exchange a column with a column .apart from very expensive running times , they are known to work well in practice .yet so far there is no theoretical analysis , or a principled way of determining when to stop the greedy search . curiously , our mcmc sampler is essentially a randomized version of fedorov s exchange method .the two methods can be connected by a unified , simulated annealing view , where we define with temperature parameter .driving to zero essentially recovers fedorov s method , while our results imply fast mixing for , together with approximation guarantees . through this lens, simulated annealing may be viewed as initializing fedorov s method with the fast - mixing sampler . in practice, we observe that letting improves the approximation results , which opens interesting questions for future work .[ thm : fastmc ] the mixing time of markov chain shown in algorithm [ algo : mcmc ] is given by the proof of the fast mixing time bound relies on the theory of strongly rayleigh measures , for which the following important result was recently shown in : [ thm : srmix ] for any strongly rayleigh k - homogeneous probability distribution }\to \mathbb{r}_+ ] where and , we have . * ( theorem [ thm : fastmc ] ) .* since is -homogeneous strongly rayleigh by theorem [ thm : sr ] , the chain constructed for sampling following that in mixes in time .one can first sample and take the complement to get when the chain has mixed well , resulting in algorithm [ algo : mcmc ] .it remains to prove theorem [ thm : sr ] .let \backslash s ] .we also use the notation .[ thm : sr ] let and .then the multiaffine polynomial } \det(a_s a_s^\top)\prod_{i\in s_c}z_i\quad=\quad \sum_{|s|=k , s\subseteq[ m ] } \det(a_s a_s^\top)z^{s_c},\ ] ] is real stable . in other words, is a -homogeneous strongly rayleigh ( sr ) measure .the proof of theorem [ thm : sr ] relies on key properties of real stable polynomials and sr measures established in the remarkable work of .a polynomial ] is stable ; * diagonalization * : is stable , and hence is stable . with propositions [ prop : det ] and [ prop : basic ]in hand , we are now ready to prove theorem [ thm : sr ] . 
* ( theorem [ thm : sr ] ) .* let {i=1}^m) ] using volume sampling .therefore , using the definition of we have }&\det(a_s^\top a_s + { \varepsilon}i_k ) = { 1\over { \varepsilon}^{k - n } } e_k ( a^\top a + { \varepsilon}i_m)\\ & = { 1\over { \varepsilon}^{k - n } } e_k ( \operatorname{diag}([(\sigma_1 ^ 2(a ) + { \varepsilon } ) , ( \sigma_2 ^ 2(a ) + { \varepsilon}),\ldots , ( \sigma_n^2(a ) + { \varepsilon } ) , { \varepsilon},\ldots,{\varepsilon}]))\\ & = \binom{m - n}{k - n}\prod\nolimits_{i=1}^n ( \sigma_i^2(a ) + { \varepsilon } ) + o({\varepsilon}).\end{aligned}\ ] ] now taking the limit as we obtain }\det(a_s a_s^\top ) & = \lim_{{\varepsilon}\to 0}\binom{m - n}{k - n}\prod\nolimits_{i=1}^n ( \sigma_i^2(a ) + { \varepsilon } ) + o({\varepsilon } ) = \binom{m - n}{k - n}\det(a a^\top).\end{aligned}\ ] ]the marginal probability of a set ] .then we have }{\left[\begin{array}{c}b_{t_c}\\ \sqrt{{\varepsilon}}u_{t_c } \\\sqrt{{\varepsilon}}c_{t_c}\end{array}\right]}^\top\right)\end{aligned}\ ] ] where we construct an orthonormal matrix whose columns are basis vectors. since we are free to chose any orthonormal , we simply let it be .let ] and \backslash s_1 ] and .let be the eigenvalue decomposition of where .further , let be the complement column space of , thus we have \left[\begin{array}{c c } q_{b_{t_c } } & q_{b_{t_c}}^\perp \end{array}\right ] = i_{|t_c| } = i_{n-|t|}\end{aligned}\ ] ] then for any \backslash [ r(b_{t_c})] ] . with the volume sampling algorithm proposed in ,the resulting running time would be .[ thm : projection ] for any , and , the random gaussian projection of where satisfies for all ] to a lower - dimensional space in a way that the values for submatrix determinants are preserved up to a small multiplicative factor .then we perform volume sampling .we project columns of } ] , namely matrix with row deleted .we have \\ & = \sum_{(i_t,\ldots , i_k)\in[m]^{k - t+1 } } \|a_s^\dagger\|_f^2 \overrightarrow{p}(s_1=i_1,\ldots , s_k = i_k ; a \mid s_1=i_1,\ldots , s_{t-1}=i_{t-1})\\ & = \sum_{(i_t,\ldots , i_k)\in[m]^{k - t+1 } } \|a_s^\dagger\|_f^2 { \overrightarrow{p}(s_1=i_1,\ldots , s_k = i_k ; a ) \over \overrightarrow{p}(s_1=i_1,\ldots , s_{t-1}=i_{t-1 } ; a)}\\ & = { \sum_{(i_t,\ldots , i_k)\in[m]^{k - t+1}}\det(a_{\{i_1,\ldots , i_k\ } } a_{\{i_1,\ldots , i_k\}}^\top ) \|a_{\{i_1,\ldots , i_k\}}^\dagger\|_f^2 \over \sum_{(i_t,\ldots , i_k)\in[m]^{k - t+1}}\det(a_{\{i_1,\ldots , i_k\ } } a_{\{i_1,\ldots , i_k\}}^\top ) } \\ & = { \sum_{j=1}^n\sum_{(i_t,\ldots , i_k)\in[m]^{k - t+1 } } \det(a_{\{i_1,\ldots , i_k\}}^j ( a_{\{i_1,\ldots , i_k\}}^j)^\top ) \over \sum_{(i_t,\ldots , i_k)\in[m]^{k - t+1}}\det(a_{\{i_1,\ldots , i_k\ } } a_{\{i_1,\ldots , i_k\}}^\top ) } \\\end{aligned}\ ] ] while the denominator is the ( unnormalized ) marginal distribution , the numerator is the summation of ( unnormalized ) marginal distribution for . by theorem [ thm : marginal ]we can compute this expectation in time .* input : * matrix to sample columns from , the target size * output : * set such that with the guarantee initialize as empty tuple compute conditional expectation ] is minimized . 
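the greedy rule above can be prototyped as follows. here the conditional expectation is computed by exhaustive enumeration (exponential in m) purely for illustration, whereas the actual algorithm evaluates it in polynomial time via theorem [ thm : marginal ]:

    import itertools
    import numpy as np

    def cond_expectation(A, prefix, k):
        # E[ ||A_S^+||_F^2 | prefix contained in S ], by brute force over completions
        m = A.shape[1]
        rest = [c for c in range(m) if c not in prefix]
        num = den = 0.0
        for T in itertools.combinations(rest, k - len(prefix)):
            B = A[:, list(prefix) + list(T)]
            w = np.linalg.det(B @ B.T)
            num += w * np.linalg.norm(np.linalg.pinv(B), 'fro') ** 2
            den += w
        return num / den

    def greedy_derandomized_dvs(A, k):
        prefix = []
        for _ in range(k):
            cand = [c for c in range(A.shape[1]) if c not in prefix]
            prefix.append(min(cand, key=lambda j: cond_expectation(A, prefix + [j], k)))
        return set(prefix)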
since at the beginning we have \le { m - n + 1\over k - n+1}\|a^\dagger\|_f^2 ; \quad t\sim p(t;a),\end{aligned}\ ] ] it follows that the conditional expectation satisfies \le { m - n + 1\over k - n+1}\|a^\dagger\|_f^2.\end{aligned}\ ] ] hence we have \le { m - n + 1\over k - n+1}\|a^\dagger\|_f^2.\end{aligned}\ ] ] further , by using standard bounds relating the operator norm to the frobenius norm , we obtain , whereby ^\top\bigr).\end{aligned}\ ] ] the rhs is a distribution induced by volume sampling .greedily choosing columns of one by one gives a approximation to the maximum volume submatrix .this results in a set such that thus , .note that in practice it is hard to set to be exactly , but a small approximate value suffices .
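the greedy initialization sketched above (choose columns one at a time to maximize the induced volume) can be written as follows; this is a simplified variant of the appendix's procedure, intended only to show how one obtains a starting set s with det(a_s a_s^t) bounded away from zero:

    import numpy as np

    def greedy_init(A, k):
        n, m = A.shape
        S = []
        for _ in range(k):
            def score(j):
                B = A[:, S + [j]]
                G = B.T @ B if len(S) + 1 <= n else B @ B.T   # Gram of chosen columns
                return np.linalg.slogdet(G)[1]                # -inf if singular
            S.append(max((c for c in range(m) if c not in S), key=score))
        return S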
we study _dual volume sampling_, a method for selecting columns from a short and wide matrix such that the probability of selecting a subset is proportional to the _volume_ spanned by the rows of the induced submatrix. this method was proposed in prior work, which showed it to be a promising method for column subset selection and its multiple applications. however, its wider adoption has been hampered by the lack of polynomial time sampling algorithms. we remove this hindrance by developing an exact (randomized) polynomial time sampling algorithm as well as its derandomization. thereafter, we study dual volume sampling via the theory of real-stable polynomials and prove that its distribution satisfies the ``strongly rayleigh'' property. this result has remarkable consequences, especially because it implies a provably fast-mixing markov chain sampler that makes dual volume sampling much more attractive to practitioners. this sampler is closely related to classical algorithms for popular experimental design methods that are to date lacking theoretical analysis but are known to work well empirically.
the use of latent variables has several interesting applications in statistics , econometrics and related fields like quantitative marketing .models like tobit , and probit , ordered probit and multinomial probit are good examples . references and examplescould be found in geweke , keane and runkle ( 1997 ) and allenby and rossi ( 1999 ) , especially for the multinomial probit .when the estimation process uses some simulation technique , in particular in bayesian analysis , the need of drawing a sample from the distribution of the latent variable naturally arises .this procedure _ augments _ the observed data with a new variable which will be referred as a _ latent _ variable .it is usually referred to as _ data augmentation _ ( tanner and wong ( 1987 ) , tanner ( 1991 ) , gelfand and smith ( 1990 ) ) and in other contexts as _ imputation _ ( rubin 1987 ) .the data augmentation algorithm is used when the likelihood function or posterior distribution of the parameter given the latent data is simpler than the posterior given the original observed data . if the distribution of the variables is a multivariate normal in dimensions and is the indicator function of the set , the drawing is made from the normal distribution truncated to ( and/or ) .the goal of this paper is to produce exact ( or perfect ) samples from random variables with distributions supported on a -dimensional box ; we call _ box _ the cartesian product of bounded intervals .we construct a discrete - time stationary markov process in the state space whose time - marginal distribution at any time ( that is , the law of ) has a given density distribution with support in .the construction is then implemented to perfectly simulate normal vectors of reasonable large dimension truncated to bounded boxes .the approach is also useful to show uniqueness of the invariant measure for a family of processes in a infinite - dimensional space ^{z^d} ] , .if is the configuration at time , then at time a site is chosen uniformly in , say and is substituted by a value chosen with the density conditioned to the values of the other coordinates ; that is with the density given by where , , .note that does not depend on .the distribution in with density is reversible measure for this dynamics .we construct a stationary gibbs sampler as a function of a sequence of independent identically distributed and uniform in ] such that is measurable with respect to the field generated by the uniform random variables and the schedule in ] .in fact we construct simultaneously processes } , -\infty ] is the gibbs sampler starting at time with configuration . for each fixed , },\,s\le t,\,\zeta\in\cb) ] the process } , s\le t'\le t) ] and the initial .the crucial property of the coupling is that does not depend on .the construction is a particular implementation of the _ coupling from the past ( cftp ) _ algorithm of propp and wilson ( 1996 ) to obtain samples of a law as a function of a finite ( but random ) number of uniform random variables . 
for an annotated bibliography on the subject see the web page of wilson ( 1998 ) .a markov process having as unique invariant measure is constructed as a function of the uniform random variables .the processes starting at all possible initial states run from negative time to time using the uniform random variables and a fixed updating schedule .if at time 0 all the realizations coincide , then this common state has distribution .if they do not coincide then start the algorithm at time using the random variables and continue this way up to the moment all realizations coincide at time 0 ; is the maximal such that this holds .the difficulty is that unless the dynamics is monotone , one has to effectively couple all ( non countable ) initial states .murdoch and green ( 1998 ) proposed various procedures to transform the infinite set of initial states in a more tractable finite subset . in the normal case , for some intervals and covariance matrices , the maximal coupling works without further treatment .moeller ( 1999 ) studies the continuous state spaces when the process undergoes some kind of monotonicity ; this is not the case here unless all correlations are non negative .philippe and robert ( 2003 ) propose an algorithm to perfectly simulate normal vectors conditioned to be positive using coupling from the past and slice sampling ; the method is efficient in low dimensions .consider a -dimensional random vector with normal distribution of density where the covariance matrix is a positive definite matrix , is the mean vector and in this case we say that .we denote the truncation of to the box ; its density is where =\int_{\cb}f({\bx})d{\bx} ] and for our method is faster than the rejection method .the simulations indicate that the computer time for the rejection method grows exponentially with while it grows like for our method . using a simple program in matlab in desktop microcomputer the method permits to simulate up to dimension for the interval ^d ] , our method is less efficient but in any case it is much faster than the rejection one . since our method uses several times the same uniform variables , to compare the computation time we have counted the number of times each of the one - dimensional uniform random variable is used .it is expected that if the box is sufficiently small and the covariance matrix has a `` neighbor structure '' , the number of steps may grow as . in this caseone should be able to simulate the state of an infinite dimensional gaussian field in a finite box .correlated normal vectors can be mapped into independent standard normal vectors using the cholesky transformation ( see chapter xi in devroye ( 1986 ) ) . 
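to make the coupling-from-the-past protocol above concrete, the following generic skeleton (a sketch, not the paper's implementation) runs the coupled dynamics from increasingly remote past times, reusing the same randomness, until all tracked initial states coalesce at time 0. here `coupled_update` is assumed to apply the identical pair (u, kappa) to every state, and only a finite list of representative initial states is tracked, whereas the construction in the text certifies coalescence of the whole state space coordinatewise:

    import numpy as np

    def cftp(initial_states, coupled_update, rng):
        T = 1
        randomness = []                   # randomness[j] is used at time -(j+1)
        while True:
            while len(randomness) < T:    # extend the past; never redraw old times
                randomness.append((rng.random(),
                                   int(rng.integers(len(initial_states[0])))))
            states = [np.array(s, dtype=float) for s in initial_states]
            for t in range(T - 1, -1, -1):    # run from time -T up to time 0
                u, kappa = randomness[t]
                states = [coupled_update(s, u, kappa) for s in states]
            if all(np.array_equal(states[0], s) for s in states[1:]):
                return states[0]              # coalescence: an exact sample
            T *= 2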
in the case of truncated normals to a box , the transformation maps the box into a set .one might simulate standard normals and reject if they do not belong to the transformed box .the difficulties are similar to those of the rejection method : as the dimension grows , the transformed set has small probability and the expected number of iterations grows exponentially with the dimension .when the truncating set is not a box , our method generally fails as the coupling event for each coordinate has probability zero in general .we illustrate this with an example in section [ s6 ] .another possibility , also discussed by devroye ( 1986 ) is to compute the marginal of the first coordinate , then the second marginal conditioned to the first one and so on .the problem here is that the computation of the marginals conditioned to all the other coordinates be in may be as complicated as to compute the whole truncated vector . in section [ s2 ]we define coupling and maximal coupling . in section [ s3 ]we describe the stationary theoretical construction of the gibbs sampler and the properties of the coupling .section [ s4 ] is devoted to the pseudocode of the perfect simulation algorithm . in section [ s5 ]we compute the functions entering the algorithm for the truncated normal case .in section [ s6 ] we give some examples and compare our perfect simulation algorithm with the rejection one based on the uniform distribution .a _ coupling _ of a family of random variables with a label set , is a family of random variables with the same marginals ; that is , such that an event is called _ coupling event _ , if are identical in .that is , we consider continuous random variables in with densities satisfying for any coupling event .this is always true if is countable and in the normal case treated here . when there is a coupling event such that identity holds in the coupling is called _maximal_. see thorisson ( 2000 ) for a complete treatment of coupling , including historical quotations .a natural maximal coupling of these variables is the following .let be the corresponding cumulative distribution functions and define let .let be a random variable uniformly distributed in ] and for , }(x):=\int_{-\infty}^{x}\inf_{\lambda\in \lambda_\ell}g_\lambda(y)dy;\qquad r_{[\ell]}:=r_{[\ell]}(\infty)\ , .\end{aligned}\ ] ] assume }= 1 ] and for define }(x)= g_\lambda(x ) - r_{[\ell]}(x ) + r_{[\ell]} ] , is the conditional density of the component of given the other coordinates given in .reversibility follows from the identity for any and continuous functions from to , where and .in particular the law with density is invariant for the gibbs sampler .it is indeed the unique invariant measure for the dynamics , as we show in theorem [ p31 ] . 
to define a stationary version of the processwe construct a family of couplings for each time interval ] and , a family of independent variables with uniform distribution in and independent of .now for each we define a process },\,t\ge s) ] and later each coordinate is either a ( random ) point or the full interval .more precisely , for and set }= 0\ , , \qquad r_{[s , s]}(x)= 0\ , , \qquad d_{[s , s]}(x)=0\ , , \qquad \eta_{[s , s]}=\cb.\end{aligned}\ ] ] fix and assume }(x) ] and } ] is the probability that the coupling event is attained at coordinate for all the processes starting at times smaller than or equal to .the coupling event is attained for the process starting at when } ] , for given by the biggest such that } ] does not depend on and , for , }(\cdot),\,r_{[s , t]},\,d_{[s , t ] } ) \hbox { is a function of } ( ( u_n,\kappa(n)),\,n = s+1,\dots , t-1).\\ & & \eta_{[s , t ] } \hbox { is a function of } ( r_{[s , t]}(\cdot),\,r_{[s , t]},\,d_{[s , t ] } ) \hbox { and } ( u_t,\kappa(t))\,.\end{aligned}\ ] ] the coordinate of } ] as , we get the following explicit expression for : }^{-1}(u_t)\,{\bf 1}\{r_{[n+1,t]}<u_t\leq r_{[n , t]}\}\ , , \qquad\mbox { if}\;k=\kappa(t ) ; \end{array } \right.\,.\end{aligned}\ ] ] for each and we now construct the process },\ , t\ge s) ] and for , } ( k)&=&\left\ { \begin{array}{lll } \zeta^\zeta_{[s , t-1]}(k)\,,\qquad \mbox{if}\;k\neq \kappa(t),\\ \\ \sum_{n = s}^{t-1 } \one\{u_t\in[r_{[n+1,t]},r_{[n , t]}]\}\ ,d^{-1}_{[n+1,t]}(u_t)\\ \qquad\qquad \qquad\qquad + \ ; \one\{u_t > r_{[s , t]}\ } \,\hg_{[s , t]}^{-1}(u_t|\zeta^\zeta_{[s , t-1]})\ , , \qquad\mbox { if } \;k=\kappa(t ) \end{array } \right.\,.\end{aligned}\ ] ] for each fixed and , }(k),\ , \zeta\in\cb),\,s\le t) ] is the process starting with configuration at time .stationarity follows from the construction .indeed , for all and , } ] as a function of .the function \times [ 0,1]^{\{-\infty,\dots , t-1\}}\to b(\kappa(t)) ] if , then follows .this also implies that } = \zeta_t ]converges weakly to the law of as .the same is true for } ] has been reduced to a point . is finite for all because the event and occurs for infinitely many with probability one .the same argument shows the exponential decay of the tail of .[ [ remark ] ] remark + + + + + + the velocity of convergence of the gibbs sampler to equilibrium depends on the values . in turn , these values depend on the size of the box and the correlations of the distribution . strongly correlated vectors produce small values and hence slow convergence to equilibrium .the efficiency of the algorithms discussed in the next sections will also depend on these values .the construction of section [ s3 ] is implemented in a perfect simulation algorithm .let be a box contained in . for each , let and as in and .when we denote by the value of .let be the updating schedule .it can be either a family of iid chosen uniformly in or the periodic sequence {{\rm mod } \d}+1 ] be the function defined by let be boxes contained in , ] + ` if ` ` , compute the ` `th coordinate of ` ` : ` + ` ` + ` end `in this section we implement the construction in the normal case .we start with elementary facts of the one dimensional normal distribution . 
[[ one - dimension ] ] one dimension + + + + + + + + + + + + + let be the standard normal density in and the corresponding distribution function .let be real numbers .a random variable with mean and variance has _ truncated normal distribution _ to the interval ] .let let ] : .let and be defined by and let ] coincides with the normal with mean up to and with the normal with mean from this point on .[ p22 ] the integrand in has the following explicit expression as a consequence,{\bf 1}\{x < x(i,\sigma)\}\nonumber\\ & & \qquad+\,\frac{1}{a^{+}}\left[\phi\left(\frac{x(i,\sigma)-\mu^+}{\sigma}\right ) -\phi\left(\frac{a-\mu^+}{\sigma}\right)\right]{\bf1}\{x\geq x(i,\sigma)\}\nonumber\\ & & \qquad+\,\frac{1}{a^{-}}\left[\phi\left(\frac{x-\mu^-}{\sigma}\right ) -\phi\left(\frac{x(i,\sigma)-\mu^-}{\sigma}\right)\right]{\bf 1}\{x\geq x(i,\sigma)\}. \nonumber\end{aligned}\ ] ] in particular , + \frac{1}{a^{-}}\left[\phi\left(\frac{b-\mu^-}{\sigma}\right ) -\phi\left(\frac{x(i,\sigma)-\mu^-}{\sigma}\right)\right]\ ] ] and for each ] such that .[ [ remark-1 ] ] remark + + + + + + .indeed , because . if , then all densities coincide unless ; by hypothesis this trivial case is excluded . [ [ multivariate - normal ] ] multivariate normal + + + + + + + + + + + + + + + + + + + let , be the vector conditioned to and .the law of conditioned to is the truncated normal , where we see that does not depend on and \,.\end{aligned}\ ] ] where [ [ remark-2 ] ] remark + + + + + + since for all , almost surely for all .this implies that the truncated multivariate normal case falls under theorem s [ p31 ] hypothesis and our algorithm works .[ [ back - to - one - dimension .- minimum - of - truncated - normals ] ] back to one dimension .minimum of truncated normals + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + we prove proposition [ p22 ] . to simplify notation write instead of .observing that it is sufficient to prove for . the proof is based in the following elementary lemmas . [ s14 ]let .then where [ s16 ] for all it holds [ [ proof - of - proposition - p22 ] ] proof of proposition [ p22 ] + + + + + + + + + + + + + + + + + + + + + + + + + + let .by lemma [ s14 ] we have hence if and only if \,.\end{aligned}\ ] ] but by lemma [ s16 ] the interval in is empty for all .this implies that , from where holds for . 
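the one-dimensional conditional laws above translate directly into code. the sketch below (assuming scipy; numerically fragile deep in the tails) draws from a truncated normal by inversion and performs one gibbs sweep for a multivariate normal truncated to a box, using the standard conditional mean and variance of a gaussian vector:

    import numpy as np
    from scipy.stats import norm

    def rtruncnorm(mu, sigma, a, b, u):
        """Map u ~ Uniform(0,1) to N(mu, sigma^2) truncated to [a, b] by inversion."""
        Fa, Fb = norm.cdf((a - mu) / sigma), norm.cdf((b - mu) / sigma)
        return mu + sigma * norm.ppf(Fa + u * (Fb - Fa))

    def gibbs_sweep(x, mu, Sigma, a, b, rng):
        d = len(x)
        for kk in range(d):
            idx = [i for i in range(d) if i != kk]
            S12 = Sigma[kk, idx]
            S22inv = np.linalg.inv(Sigma[np.ix_(idx, idx)])
            cmu = mu[kk] + S12 @ S22inv @ (x[idx] - mu[idx])   # conditional mean
            csd = np.sqrt(Sigma[kk, kk] - S12 @ S22inv @ S12)  # conditional std
            x[kk] = rtruncnorm(cmu, csd, a[kk], b[kk], rng.random())
        return x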
the corresponding identity ( [ marzo1 ] )follows by applying lemma [ s14 ] to and .in this section we compare our method with the rejection method in some examples .the conclusion is that the rejection method may be better than ours in dimension 2 for some regions but ours becomes better and better as dimension increases .then we show why the method does not work when the region is not a box .this discards the following tempting approach : multiply the target vector by a matrix to obtain a vector with iid coordinates .the transformed vector is much easier to simulate ; the problem is that it is now conditioned to a transformed region .when the region is not a box , the conditioned vector is not an iid vector and a coupling must be performed .we show here that in general the corresponding coupling event has probability zero .[ [ the - rejection - method ] ] the rejection method + + + + + + + + + + + + + + + + + + + + we compare our algorithm with the following rejection algorithm : simulate a uniformly distributed vector in ] .let and .we consider two cases .( i ) boxes \times[0,1]+(x_1,x_2) ] ( type ) and \times[0,r] ] example 1 .fix and , where is the identity matrix and .we consider two cases : ( i ) boxes ^d ] , .the results are shown in figure [ tres ] and table [ cuatro ] for ( i ) and ( ii ) , respectively . ] .the results are shown in figure [ cinco ] .[ [ strongly - correlated - variables ] ] strongly correlated variables + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the algorithm slows down very fast with the dimension when the variables are strongly correlated .an associate editor proposes to consider a gaussian vector truncated to the box ^d ] .if we rotate and scale the coordinates to obtain iid s , i.e. , if we put , then the box constraints are of the form , where and . when the constraints are of this form , the conditional distribution of has a probability density function whose support depends on the condition . in such casethe maximal coupling in our approach may have a coupling event with zero probability .indeed , it could happen that this implies that for all . to see this in ,observe that the transformation takes the rectangular box into a parallelogram with sides not in general parallel to the coordinate axes .we need to simulate the standard bi - dimensional normal conditioned to .it is still true that the law of conditioned on is normal truncated to the slice of the parallelogram .it is a matter of geometry that there are different and such that ; this implies the infimum above is zero .take for instance , and conditioned to the box \times[0,1] ] : which corresponds to an infinite volume gaussian field , each coordinate truncated to \subset \r ] ( corresponding to , but now it is constant in the coordinates ) . to determine the value of site at time one explores backwards the process ; calling the random variable used for the last time ( say ) before to update site there are two cases : or . if , we do not need to go further back , the value is determined . if , we need to know the values of neighbors at time . 
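to quantify the comparison with the rejection method, note that for independent standard normal coordinates the acceptance probability of uniform proposals factorizes across coordinates, so the expected cost grows geometrically with the dimension. a short illustration (box [0, 1]^d, assumed iid coordinates):

    import numpy as np
    from scipy.stats import norm

    a, b = 0.0, 1.0
    # per-coordinate acceptance: mass of [a, b] over (box width) * (max density on [a, b])
    acc1 = (norm.cdf(b) - norm.cdf(a)) / ((b - a) * norm.pdf(a))
    for d in (2, 5, 10, 20, 50):
        print(d, acc1 ** d)        # acceptance rate decays like acc1**d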
repeating the argument, we construct an ``oriented percolation'' structure which will eventually finish if the associated branching process is subcritical. this is obtained by dominating the percolation structure with a branching process which dies out with a certain probability and produces offspring with the complementary probability. these probabilities depend on the length of the interval and on the strength of the interaction governed by the covariances. one can imagine that for small intervals and weak interaction things will work. we thank christian robert and håvard rue for discussions. propp, j. g. and wilson, d. b. (1996) exact sampling with coupled markov chains and applications to statistical mechanics. proceedings of the seventh international conference on random structures and algorithms (atlanta, ga, 1995). _random structures algorithms *9*_, no. 1-2, 223-252. wilson, d. b. (1998) annotated bibliography of perfectly random sampling with markov chains. in d. aldous and j. propp, editors, _microsurveys in discrete probability_, volume 41 of _dimacs series in discrete mathematics and theoretical computer science_, pages 209-220. american mathematical society. updated versions can be found at http://dimacs.rutgers.edu/~dbwilson/exact.
the target measure is the distribution of a random vector in a box, a cartesian product of bounded intervals. the gibbs sampler is a markov chain with this target as invariant measure. a ``coupling from the past'' construction of the gibbs sampler is used to show ergodicity of the dynamics and to perfectly simulate the target. an algorithm to sample vectors with a multinormal distribution truncated to a box is then implemented.
in this paper we show how to solve the inverse problem for ifsm in the case of trajectories of stochastic processes in. the method is based on the solution of the inverse problem for ifsm due to forte and vrscay. this is an extension of classical ifs methods which can be used for approximating a given element of, and thus in particular trajectories of stochastic processes on this space. the final goal of this approach is simulation. indeed, several methods are currently available to simulate paths of stochastic processes and in particular of the brownian motion. paths of the bm can be simulated using the properties of the increments of the process, as in the euler scheme, as the limit of a random walk, or via a decomposition like the kac-siegert/karhunen-loève series. in this paper we first propose an ifsm (iterated function systems with maps) operator whose fixed point is the trajectory of the bm. we then use this representation of the process to simulate its trajectories. the resulting simulated trajectories are self-affine, continuous and fractal by construction. this fact produces more realistic trajectories than other schemes in the sense that their geometry is closer to that of the true bm's trajectories. the ifsm trajectory of the bm can then be used to generate more realistic solutions of stochastic differential equations. the paper is organized as follows: section [ sec : ifsm ] recalls the theory of ifsm on, section [ sec : l2 ] recalls some details for stochastic processes with trajectories in and the link with the ifsm theory, and section [ sec : bm ] presents the application of the ifsm theory to the problem of simulation, with particular attention to the case of the brownian motion. the basic idea of iterated function systems (ifs) can be traced back to some historical papers, but the use of such systems to construct fractals and other similar sets was first described by hutchinson (1981). the fundamental result on which the ifs method is based is banach's fixed point theorem. the mathematical context is the following: given in a complete metric space, find a contractive operator that admits a unique fixed point such that is small enough. in fact, if one is able to solve _the inverse problem_ with arbitrary precision, it is possible to identify with the operator which has it as fixed point. the fundamental theorems on which the ifs method is based are the following: (banach theorem) let be a complete metric space; suppose there exists a mapping such that for all and some. is said to be the contractivity factor of. then there exists a unique such that and for any we have when. (collage theorem) let be a complete metric space. given, suppose that there exists a contractive map with contractivity factor such that. if is the fixed point of, then. let be a complete metric space and be two contractive mappings with fixed points and. then where and is the contractivity factor of. we are going to use a particular class of ifs operators, known as ifsm (ifs with maps), introduced by forte and vrscay in 1994.
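to illustrate banach's theorem in the ifsm setting, the sketch below iterates a toy ifsm operator on [0, 1] built from two non-overlapping affine maps w_k(x) = (x + k)/2; the coefficients alpha_k, beta_k are made up, chosen with max |alpha_k| < 1 so that the operator is a contraction in the sup norm and the iteration converges to its unique (self-affine) fixed point:

    import numpy as np

    x = np.linspace(0.0, 1.0, 1024)
    alpha, beta = (0.6, -0.4), (0.1, 0.5)      # hypothetical IFSM coefficients

    def T(u):
        Tu = np.empty_like(u)
        half = len(x) // 2
        for kk, (al, be) in enumerate(zip(alpha, beta)):
            seg = slice(kk * half, (kk + 1) * half)
            # w_k^{-1}(x) = 2x - k pulls the k-th half of [0,1] back to [0,1]
            Tu[seg] = al * np.interp(2 * x[seg] - kk, x, u) + be
        return Tu

    u = np.zeros_like(x)
    for _ in range(60):                        # contraction => geometric convergence
        u = T(u)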
in this case, we need to simulate the value of the trajectory of the brownian motion on a fixed grid (using one of the known methods) and we use these points to approximate the integrals in the quadratic form. we then solve the constrained quadratic programming problem using standard algorithms (see e.g.); a simplified sketch of this fitting step is given after this section. figure [ fig2 ] bottom shows an example of a trajectory generated using the ifsm approach with wavelet type maps for,. figure [ fig2 ] top represents the euler trajectory built on 50 gaussian terms which has been used to build the ifsm. as one can notice, the ifsm path shows more ``fractal'' complexity than the corresponding euler path. we have proposed a new method to generate paths of the brownian motion. these ifsm paths seem to mimic more closely the fractal nature of the trajectory of the brownian motion than existing schemes. at the current stage we are not able to show formal properties of the ifsm path in terms of strong and weak approximation (see). open source software for generating ifsm trajectories written in the c and r languages is available via the ` ifs ` package at http://cran.r-project.org for free download. forte, b. and vrscay, e.r. (1998), ``inverse problem methods for generalized fractal transforms'', in _fractal image encoding and analysis_, nato asi series f, ed. y. fisher, springer verlag, heidelberg.
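as a companion to the fitting step referenced above, here is a hedged sketch of the inverse problem on a simulated path: generate a discrete brownian path, build the design matrix of the ifsm operator for two simple halving maps, and fit the coefficients by bound-constrained least squares so that the collage distance is small. the maps, grid and bounds (|alpha_k| < 1 to keep the operator contractive) are simplifications of the wavelet-type construction and quadratic program used in the paper:

    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(2)
    N = 1024
    B = np.concatenate([[0.0],
                        np.cumsum(rng.standard_normal(N - 1) / np.sqrt(N - 1))])
    t = np.linspace(0.0, 1.0, N)

    n_maps = 2
    half = N // n_maps
    Phi = np.zeros((N, 2 * n_maps))            # alpha-columns, then beta-columns
    for kk in range(n_maps):
        seg = slice(kk * half, (kk + 1) * half)
        Phi[seg, kk] = np.interp(n_maps * t[seg] - kk, t, B)   # B(w_k^{-1}(t))
        Phi[seg, n_maps + kk] = 1.0                            # constant term beta_k

    res = lsq_linear(Phi, B, bounds=([-0.99] * n_maps + [-np.inf] * n_maps,
                                     [0.99] * n_maps + [np.inf] * n_maps))
    alpha, beta = res.x[:n_maps], res.x[n_maps:]
    collage_error = np.linalg.norm(Phi @ res.x - B) / np.sqrt(N)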
several methods are currently available to simulate paths of the brownian motion. in particular, paths of the bm can be simulated using the properties of the increments of the process, as in the euler scheme, as the limit of a random walk, or via a decomposition like the kac-siegert/karhunen-loève series. in this paper we first propose an ifsm (iterated function systems with maps) operator whose fixed point is the trajectory of the bm. we then use this representation of the process to simulate its trajectories. the resulting simulated trajectories are self-affine, continuous and fractal by construction. this fact produces more realistic trajectories than other schemes in the sense that their geometry is closer to that of the true bm's trajectories. the ifsm trajectory of the bm can then be used to generate more realistic solutions of stochastic differential equations. * ams subject classification: * * keywords: * iterated function systems, brownian motion, simulation of stochastic differential equations
fundamental aspects of phase transitions in complex systems can be revealed by the analysis of its microcanonical thermostatistics , which is characterised by the well known entropy , where denotes the density of states of a system with energy . in particular , the analysis of inflection points of the microcanonical inverse temperature plays an important role in the identification of stable , unstable and metastable regions in the phase diagram , providing alternative insights to the usual canonical analysis .also , free - energy profiles can be obtained from the caloric curves vs , from where one can easily evaluate the values of barrier heights and latent heats . in this way , the microcanonical thermostatistics analysis has been incorporated in many studies in the literature , _e.g. _ refs . to name a few .nevertheless , any analysis must rely on data obtained from efficient exploration of the configurational space .it is well known that numerical simulations performed with monte carlo ( mc ) and molecular dynamics ( md ) methods pose limitations to the achievement of reliable data sampling .such limitations are related to the critical slowing down , which is observed in studies of continuous phase transitions , and to the entrapment in local minima , in the case of systems with rugged energy landscapes . in both cases ,the configurational space is poorly explored in a reasonable computational simulation time , which may produce biased physical averages . to overcome the trapping problem ,it has been suggested that configurations must be sampled using algorithms based on generalised ensembles , where the updates are performed with non - boltzmann statistical weights .for instance , the multicanonical algorithm ( muca ) , the extended gaussian ensemble ( ege ) , tsallis statistical weight , and replica exchange method ( rem ) either use a series of boltzmann weights or any convenient generalised sampling weight .muca simulations sample configurations with a weight in such a way that the energy distribution is uniform , .thus , a precise determination of is equivalent to obtain an estimate for the density of states , _i.e. _ . the weights $ ] follows from the parameterization of the entropy , where and are the so - called multicanonical parameters .the iterative procedure to obtain the muca parameters is described in detail in references , and can be read as , {m } , \label{eq : an } \\\ ] ] where stands for the multicanonical simulation .the recursion steps require the discretisation of the energy for continuous energy models .therefore , it is convenient to define , where is the binsize , is an integer , and is a constant that defines a reference energy .all the energies in the interval contribute to the histogram .methods to improve sampling based on simulations at different temperatures have been proposed to either be conducted in parallel ( rem ) or as a random walk between different temperatures . in rem , non - interacting replicas of the system are simultaneously simulated by the usual mc or md algorithms , and from time to time , pairs of replicas at neighboring temperatures are exchanged with a transition probability . 
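as a concrete sketch of the accumulated multicanonical recursion for the parameters b_m(e) referenced in (eq. [ eq : bn ]) (berg's standard rule, written here for a histogram h over bins of width eps; the companion parameters a_m follow from b_m as in the references):

    import numpy as np

    def muca_update(b, g_acc, H, eps):
        """One multicanonical iteration: update b(E) from the new histogram H."""
        H = np.maximum(H.astype(float), 1e-12)          # guard empty bins
        # per-bin statistical weight of the new run
        g = H[:-1] * H[1:] / (H[:-1] + H[1:])
        g_acc = g_acc + g
        ratio = np.where(g_acc > 0, g / g_acc, 0.0)     # damp poorly sampled bins
        b = b + ratio * (np.log(H[1:]) - np.log(H[:-1])) / eps
        return b, g_acc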
from the data produced by simulations performed at a single temperature or at a set of temperatures , with ,it is necessary to employ a reweighing scheme to evaluate physical averages at a given temperature .reweighting techniques use data from either a single histogram or multiple histograms obtained from mc or md simulations .recently , a simple method called statistical weighted histogram analysis method ( st - wham ) has been proposed as an iteration - free procedure to obtain an estimate for the microcanonical inverse temperature . in this methodthe usual wham equations are converted into a weighted average of the individual densities of states obtained from simulations carried out with different sampling weights . from energy histograms produced by multiple simulations , st - wham yields a statistical temperature , which is an estimate of the inverse microcanonical temperature .interestingly , there is a numerical procedure based on the multicanonical recursion relations ( [ eq : an ] ) and ( [ eq : bn ] ) , which is called st - wham - muca , that can be used to replace the direct integration in order to evaluate the entropy .although both st - wham and st - wham - muca have the advantage of _ a posteriori _ discretisation of energies , their naive implementations may lead to biased evaluations of physical quantities for continuous energy models just like all the aforementioned algorithms . as described in sectionii , the estimates for inverse microcanonical temperature depends on the derivatives of the energy histograms ( see eq .( [ eq : st - wham2 ] ) ) . here, we analyse how the estimates are energy binning dependent and , in section iii , we present two alternative approaches that avoids the need for energy binning to evaluate the microcanonical caloric curve for continuous energy models : ( i ) a proposal by berg and harris , which involves an empirical cumulative distribution ( cdf ) and uses discrete fourier series ; and ( ii ) a bayesian approach to model this cdf and assumes that the thermodynamic phase transitions are well described by the coexistence of two phases . a comparative analysis between these approachesis made in order to characterise for two systems that undergo first - order - like phase transitions : the folding transition of a coarse - grained protein model for ubiquitin and the aggregation transition of two heteropolymers that follows a fibonacci sequence .these examples allow us to describe the statistical and systematic errors involved in the numerical calculations of and , which are presented in section iv .conclusions on this comparative analysis are presented in section v.the st - wham yields a direct estimate of the inverse microcanonical temperature by considering the statistical inverse temperature where , and .it is preferable to write eq .( [ eq : st - wham1 ] ) as note that for simulations with the canonical weight . with the set of estimates , muca recurrence relations ( [ eq :an ] ) and ( [ eq : bn ] ) can be applied to obtain estimates for the microcanonical entropy , this st - wham - muca algorithm is quite simple if one has .we can numerically evaluate the derivatives in eq .( [ eq : st - wham2 ] ) in a naive way , where the derivatives at energies follow from a linear regression around this point .for instance , we use a linear regression with points ; selecting points means that the derivative at is calculated with the values of , where .we chose a value for according to the energy binsize . 
consequently , we calculate the derivatives in the energy range . in this method , it is more convenient to directly calculate the derivative of than the derivative of .we calculate the linear regression with a subroutine easily adapted from the linear fit subroutine in .another approach can be devised by considering an algorithm based on the cumulative distribution function ( cdf ) .the advantage of such approach is that it avoids histogramming when describing probability densities , dismissing the need for any _ ad hoc _ energy discretisation .the method defines an estimator for the cdf , where the function is an empirical cumulative distribution function ( ecdf ) for the probability density .the algorithm sorts the energy time series of length ndat in an ascending order , so any outliers can be eliminated by constructing a restricted ecdf in the range between two meaningful points and ( in general one takes and ) .the basic idea is to propose an approximating function to describe , from where the difference function is defined , this function can be expanded in fourier series , which gives the fourier coefficients , here we use the same criteria of where the maximum number mmax of coefficients is obtained by imposing the two - sided kolmogorov test .notably , eq . ( [ eq : di ] ) provides all the information that one needs to obtain a parameter - free energy probability density .the reason is because eq .( [ eq : re - def ] ) yields a smooth estimate of the cdf given mmax coefficients in the fourier expansion , even if one assumes a linear _ ansatz _ for ( see ) . in turn, it becomes easy to obtain estimates for the probability density by differentiation , which consists in a smooth estimation of .now , let us go back to the numerical differentiation in eq .( [ eq : st - wham2 ] ) . to obtain parameter - free derivatives , we just need to differentiate again .this corresponds to the following numerical calculation , unlike the linear regression method , where we calculate the derivative of , here is more convenient to compute the derivative of directly .we have included in appendix the function derpd(x ) , which can be used to estimate this derivative from the fourier coefficients .one can easily incorporate this function in the program cdf_pd subroutine provided by berg and harris .next we present our approach to implement a bayesian analysis . to this end , input data ( denoted by ) is the empirical cumulative distribution and not the histograms that are dependent on the energy binsize .thus , the aim is to describe the empirical cumulative distribution by a given model ( called model m ) and using the bayesian analysis , extract values for its parameters ( denoted by ) from the _ posterior _ probability density function ( pdf ) . here, denotes possible parameters related to the experimental conditions , which is the temperature in our numerical experiment , and refers to a set of observed experimental points . from the _posterior _ pdf , we extract the desired probability distributions for each parameter , which correspond to the marginalized distributions .these marginalized distributions are depicted with histograms in fig .[ fig:6 ] for the ubiquitin . 
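a compact sketch of this binning-free route: sort the energies, form the ecdf, expand the deviation from a linear ansatz in a sine series (mmax fixed by hand here, rather than by the kolmogorov test), and differentiate the smooth reconstruction analytically to obtain h(e) and dh/de without histograms:

    import numpy as np

    def ecdf_fourier(E, mmax=16):
        E = np.sort(np.asarray(E, float))
        a, b = E[0], E[-1]
        x = (E - a) / (b - a)                    # energies mapped to [0, 1]
        F = (np.arange(1, len(E) + 1) - 0.5) / len(E)
        R = F - x                                # deviation from the linear ansatz
        ms = np.arange(1, mmax + 1)
        d = np.array([2 * np.trapz(R * np.sin(np.pi * m * x), x) for m in ms])

        def pdf(e):                              # h(E) = dF/dE
            y = (np.asarray(e, float) - a) / (b - a)
            return (1 + (np.pi * ms) @ (d[:, None] * np.cos(np.pi * np.outer(ms, y)))) / (b - a)

        def dpdf(e):                             # dh/dE, as needed in eq. (st-wham2)
            y = (np.asarray(e, float) - a) / (b - a)
            return -((np.pi * ms) ** 2) @ (d[:, None] * np.sin(np.pi * np.outer(ms, y))) / (b - a) ** 2

        return pdf, dpdf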
for each protein , our _ experimental _ data are produced by dividing the observed energy range into 35 points , and the _ experimental _ errors are extracted from the jackknife procedure . we anticipate here that these _ experimental _ data are displayed in figures [ fig:4 ] and [ fig:10 ] for the folding - unfolding transition of ubiquitin and for the aggregation of two fibonacci sequences , respectively . within a bayesian approach , the set of observations is just one set among the possible experimental outcomes at the points . thus , given a fixed model m and a set of experimental values , parameter estimation within the bayesian approach proceeds as follows , where is the _ prior _ distribution for a fixed model m. since our aim is to characterise in a first - order phase transition region , we assume that the energy distribution ( _ i.e. _ a continuous estimate for in eq . ( [ eq : st - wham2 ] ) ) is well described by a double - peak function . this function characterises the coexisting phases at temperatures close enough to the transition temperature , and its shape is a consequence of the free - energy barrier between the ordered and disordered phases . therefore , the expected energy distribution can be written as a normalized sum of two gaussian distributions with peaks centered at energies and , corresponding to the disordered and ordered phases , respectively . if we take the model m to be the cumulative distribution function of , and recall that for a gaussian distribution the cdf is the error function , we arrive at the model given in eq . ( [ eq : fe ] ) . this is a 5-parameter model , which we call model i in our analysis , with . now , to obtain the _ posterior _ probability of the parameter set , given the data set , from the _ likelihood _ times the initial _ prior _ probabilities for our model i , we use a markov chain monte carlo method . to this end , we consider the metropolis algorithm with a gaussian proposal function , where are the errors in our computational experiment . the values are calculated with the accepted values for the parameters of model i . in the following , we present results where we evaluate the performance of the different approaches in obtaining continuous estimates for and , consequently , the estimate for the microcanonical inverse temperature . the comparative analysis is made with data obtained from both mc and md simulations , where we employ two simplified off - lattice protein models , _ i.e. _ all atoms in the polypeptide chain are replaced by beads located at the -atom position , and which present a continuous energy spectrum .
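a minimal sketch of such a sampler is given below ( python ; scipy is assumed for the error function , and the parameter ordering , box priors and step sizes are illustrative assumptions ) . model i is written as a weighted sum of two gaussian cdfs , the likelihood is gaussian in the residuals weighted by the jackknife errors , and flat priors are imposed as a box constraint .

....
import numpy as np
from scipy.special import erf

def model_cdf(E, th):
    # two-gaussian model i: w*Phi((E-E1)/s1) + (1-w)*Phi((E-E2)/s2)
    E1, s1, E2, s2, w = th
    Phi = lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0)))
    return w * Phi((E - E1) / s1) + (1.0 - w) * Phi((E - E2) / s2)

def log_post(th, E, y, sig, lo, hi):
    if np.any(th < lo) or np.any(th > hi):     # flat prior over a box
        return -np.inf
    r = (y - model_cdf(E, th)) / sig           # residuals / jackknife errors
    return -0.5 * np.sum(r * r)                # gaussian log-likelihood

def metropolis(E, y, sig, th0, lo, hi, step, nsteps=100_000, seed=1):
    rng = np.random.default_rng(seed)
    th = np.asarray(th0, float)
    lp = log_post(th, E, y, sig, lo, hi)
    chain = np.empty((nsteps, th.size))
    for k in range(nsteps):
        prop = th + step * rng.standard_normal(th.size)
        lp_new = log_post(prop, E, y, sig, lo, hi)
        if np.log(rng.random()) < lp_new - lp:  # metropolis acceptance
            th, lp = prop, lp_new
        chain[k] = th
    return chain   # histograms of the columns give the marginalized pdfs
....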
[ figure : caloric curve for ubiquitin obtained with the linear regression ( ) and the cdf method , with data generated from independent md simulations at ; statistical errors for the cdf method were calculated with 80 jackknife bins . ] [ figure : caloric curve for ubiquitin ; error bars are calculated with 20 jackknife bins and are shown only for the cdf method ; inset : comparative behaviour of produced with the cdf method and the bayesian approach assuming model i . ] ubiquitin is a small 76 - residue protein ( pdb code 1ubq ) for which evidence suggests a two - state folding mechanism . it has received considerable attention for what concerns the solvent response and the temperature dependence in secondary structure formation and stretching experiments . to study the performance of the numerical approaches in evaluating through the folding - unfolding transition , we perform simulations with a coarse - grained version of ubiquitin but with a rather detailed potential . we use the structure - based model implemented in the smog web server to perform md simulations with the gromacs package . the analyses were made from measurements during 250 ns of simulation after a period of thermalization . measurements were obtained from 7 independent simulations at temperatures with and . first , we evaluate the performance of the approaches considering the data obtained from a single md trajectory produced at , which is close to the transition temperature evaluated with statistics obtained from the 7 independent md simulations . figure [ fig:1 ] displays the estimates for the caloric curve using the linear regression to evaluate derivatives of from data obtained at . this figure shows how noisy the caloric curve can be as a function of the energy binsize and 1.3 , keeping the number of points fixed . figure [ fig:2 ] compares what seems a good energy discretisation with the cdf method . error bars based on 20 jackknife bins are presented only for the cdf method . now , fig . [ fig:3 ] compares both methods when we consider the statistics obtained with the entire set of temperatures , . to present error bars for the cdf method , we have increased the number of jackknife bins to 80 because of this larger statistics . nice agreement between both methods is obtained if we choose conveniently . figure [ fig:4 ] displays what we call the input _ experimental _ points used to perform the bayesian analysis . this _ experimental _ data was obtained from md simulations at . the dashed line corresponds to the cumulative distribution obtained with model i. the parameters of this model that fit the input data are displayed in table 1 . we include values calculated from the means of the marginalized distributions , and the global modes that presented the smallest /d.o.f . of the model . to verify how the proposed model describes the energy distribution when compared with the cdf method and naive histogramming , we show in fig . [ fig:5 ] the results obtained from these methods for the statistics collected at . we realize that model i does not properly recover the region between the peaks of the energy distributions obtained with the previous numerical methods . the inset in this figure compares the calculations of following from the cdf and bayesian approaches .
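the jackknife error bars quoted above can be reproduced with a generic block - jackknife routine ; a minimal sketch follows ( python ; the estimator argument is any function mapping an energy time series to a curve , e.g. the st - wham estimate on a fixed energy grid ) .

....
import numpy as np

def jackknife(series, nbins, estimator):
    # estimator: callable mapping a 1d time series to an array (a curve)
    blocks = np.array_split(np.asarray(series), nbins)
    full = estimator(np.concatenate(blocks))
    loo = np.array([estimator(np.concatenate(blocks[:i] + blocks[i + 1:]))
                    for i in range(nbins)])          # leave-one-block-out
    mean = loo.mean(axis=0)
    # standard jackknife variance: (N-1) * mean squared deviation
    err = np.sqrt((nbins - 1) * np.mean((loo - mean) ** 2, axis=0))
    return full, err
....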
in fig . [ fig:6 ] we display the marginalized distributions of , including the correlation between the parameters and to illustrate their interdependence . we have always used flat _ priors _ over appropriate ranges to obtain the parameter distributions . [ figure : marginalized distributions and their correlation , with data obtained for ubiquitin at . ] figure [ fig:7 ] shows the caloric curve for the aggregation of two chains of monomers described by the ab model , a coarse - grained off - lattice protein model that replaces the all - atom potentials by simpler physical interactions . this model reduces the protein to a chain of monomers of two types : the hydrophobic ( ) and the polar or hydrophilic ( ) types , located at the atoms . the ab model has also been used in studies exploring the aggregation phenomenon , where more than one chain is included in the system , and in studies of the microcanonical thermostatistics of heteropolymers . here , we consider a simple system which consists of two heteropolymers defined by fibonacci sequences with monomers , _ i.e. _ abbabbababbab . the statistics for this system amounts to sweeps per replica produced with rem . attempts to exchange the replicas occur every sweeps , using a scheme that alternates attempts between even replicas and their neighbors , and odd replicas and their neighbors . although one has several ways to define the temperature set , here we determine it using an arithmetic progression . we consider replicas choosing , and . [ figure : caloric curves for the aggregation of two fibonacci sequences with the linear regression ( ) and the cdf method , for an energy time series obtained from rem at ; statistical errors for the cdf method were calculated with 20 jackknife bins . ] figure [ fig:7 ] depicts the caloric curves obtained with energy binsizes , and . the value was used to calculate the derivatives of . different values of allow one to verify the systematic errors associated with and with the use of the linear regression . for a fixed , small values of ( or ) tend to reproduce the statistical fluctuations of the data . on the other hand , a value as large as eliminates the physical information associated with the small s - loop at . for comparison , we include in fig . [ fig:8 ] the estimates from the cdf method . these comparisons are based on a single time series selected from data produced with rem at the inverse temperature . it is important to calculate the caloric curve reliably when assessing the canonical critical temperature and the latent heat of the transition . we can figure out how important the fluctuations observed in these curves are by calculating the statistical errors associated with the cdf method . figure [ fig:8 ] displays the statistical error bars calculated with 20 jackknife bins for the dataset with measurements obtained with rem at . both methods present comparable statistical error bars . this figure shows that the small s - loop at does not result from the statistical fluctuations in our dataset . therefore , any smooth estimate of this curve at , obtained for example with ( ) , would hide physical information .
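the replica - exchange bookkeeping described above ( temperature set from an arithmetic progression , alternating even / odd swap attempts ) can be sketched as follows ( python ; the standard pairwise acceptance probability min ( 1 , exp [ ( beta_i - beta_j ) ( e_i - e_j ) ] ) is assumed , the numerical values are illustrative , and the configuration swap is left as a stub ) .

....
import numpy as np

def make_betas(T0, dT, R):
    # temperature set from an arithmetic progression, as in the text
    return 1.0 / (T0 + dT * np.arange(R))

def attempt_swaps(betas, energies, parity, rng):
    # alternate parity=0 (even pairs) and parity=1 (odd pairs) between calls
    for i in range(parity, len(betas) - 1, 2):
        delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if delta >= 0.0 or rng.random() < np.exp(delta):
            energies[i], energies[i + 1] = energies[i + 1], energies[i]
            # the replica configurations would be exchanged here as well

rng = np.random.default_rng(5)
betas = make_betas(0.5, 0.05, 7)          # illustrative, not the paper's values
energies = rng.normal(-5.0, 1.0, 7)       # stand-in for the replica energies
for sweep in range(10):
    attempt_swaps(betas, energies, sweep % 2, rng)
....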
[ figure : caloric curve for the aggregation of two fibonacci sequences ; error bars , calculated with 20 jackknife bins , are shown for the cdf method ; inset : comparative behaviour of as predicted with the cdf method and the bayesian analysis assuming model i . ] in fig . [ fig:9 ] we include the data produced with all replicas together . comparison of the results presented in this figure with those obtained by multicanonical simulations in refs . reveals that they are in excellent agreement : even the small s - loop near appears in our case . computation of the inverse aggregation temperature also resulted in quantitative agreement . larger error bars appear around , probably because of the lack of temperatures in the set used to exchange the replicas . the averaged estimates from both derivative calculation methods agreed for a particular choice of and of the number of points in the linear regression method . now , fig . [ fig:10 ] shows the _ experimental _ data we have used to perform the bayesian analysis . again , we initially keep the analysis restricted to a single energy time series , in this case produced at . we have included the bayesian estimates ( dashed line ) of the cumulative function assuming model i . the parameters for this model are presented in table i. to verify how well this model reproduces , we include in fig . [ fig:11 ] the energy distributions obtained with a naive histogramming procedure , the cdf method , and the bayesian analysis . clearly , the bayesian analysis does not reproduce the expected behaviour between the two peaks . this leads to a biased estimation of , as can be seen in the inset of this figure . moreover , the approach with model i does not reproduce the small s - loop observed at ( data not shown ) , in contrast to the estimates with a convenient choice of or with the cdf method . as a consequence , we again do not pursue further analysis with the bayesian approach including all the statistics from the 7 replicas . we presented two alternative approaches for the estimation of the energy probability distributions that avoid the need for energy binning in order to obtain the microcanonical thermostatistics analysis from st - wham . numerical comparisons between the approaches were presented , and we showed the statistical and systematic errors that can arise when one evaluates the microcanonical inverse temperature for two continuous energy models that exhibit first - order - like phase transitions . our results indicate that the cdf method yields reliable estimates for both and when compared with the linear regression method , and performs far more successfully than the bayesian approach with model i. unlike the linear regression method , the cdf method avoids the undesirable task of choosing the energy binsize for a careful evaluation of the microcanonical temperature , as highlighted in the analysis of the aggregation transition of the two fibonacci sequences . in this case , the caloric curve presents a quite unusual behaviour with two s - loops , indicating two transitions . in particular , we showed that the use of large values for in the linear regression method would hide physical information about the small s - loop at .
on the other hand , the small s - loop could not be detected with the bayesian analysis ( data not shown ) as a consequence of the poor evaluation of and of its derivative ( fig . [ fig:11 ] ) . the reason is that model i consists of only 5 parameters . this is a low number compared to the cdf method , which allows a variable number of parameters ( _ i.e. _ fourier coefficients ) defined according to the demand of the ecdf . for instance , the cdf method needs 13 fourier coefficients to obtain probability densities for the ubiquitin data , and this number goes up to 74 for the fibonacci sequences . furthermore , model i was constructed on the hypothesis that the coexistence of two thermodynamic bulk phases can be well described by two gaussian distributions . actually , this is an approximation , because the energies in between the two peaks , describing mixed - phase configurations , are not properly taken into account in the two - gaussian model . we thank bernd berg for carefully reading our manuscript . this work has been supported by the brazilian agencies fapesp , cnpq and rusp ( university of são paulo ) . derivative of the probability density :

....
      function derpd(x)
c     derivative f'(x) of the probability density, obtained from the
c     fourier coefficients dn(j) of the cdf expansion; pi and zero are
c     defined in the included files
      include '../../libs/fortran/implicit.sta'
      include '../../libs/fortran/constants.par'
      parameter(nmax=100)
      common /cdfprob/ xmin,xrange,dn(nmax),m ! m fourier coefficients
      derpd=zero
      do j=1,m
        aux=j*pi/xrange
        aux=aux*aux
        derpd=derpd-dn(j)*aux*sin(j*pi/xrange*(x-xmin))
      enddo
      return
      end
....

d.h.e . gross , microcanonical thermodynamics : phase transitions in small systems , lecture notes in physics 66 ( world scientific , singapore , 2001 ) . d.h.e . gross and j.f . kenney , j. chem . phys . ( 2005 ) 224111 . frigori , l.g . rizzi , and n.a . alves , j. phys . conf . ser . ( 2010 ) 012018 . s. schnabel , d.t . seaton , d.p . landau , and m. bachmann , phys . rev . e 84 ( 2011 ) 011127 . rocha , s. schnabel , d.p . landau , and m. bachmann , phys . rev . e 90 ( 2014 ) 022601 . frigori , l.g . rizzi , and n.a . alves , j. chem . phys . ( 2013 ) 015102 . c. junghans , m. bachmann , and w. janke , phys . rev . lett . 97 ( 2006 ) 218103 . c. junghans , m. bachmann , h. arkin , and w. janke , j. chem . phys . 128 ( 2008 ) 085103 . m. möddel , w. janke , and m. bachmann , phys . chem . chem . phys . 12 ( 2010 ) 11548 . t. bereau , m. bachmann , and m. deserno , j. am . chem . soc . ( 2010 ) 13129 . church , c.e . ferry , and a.e . van giessen , j. chem . phys . ( 2012 ) 245102 . cho , j. kim , j. lee , t. keyes , j.e . straub , and k.s . kim , phys . rev . lett . ( 2014 ) 157802 . a. mitsutake , y. sugita , and y. okamoto , biopolymers 60 ( 2001 ) 96 . d.p . landau and k. binder , a guide to monte carlo simulations in statistical physics ( cambridge university press , 2000 ) . b.a . berg , monte carlo methods , fields institute communications vol . 26 , edited by n. madras ( american mathematical society , providence , ri , 2000 ) , p. 1 - 24 . b.a . berg , comput . phys . commun . 153 ( 2003 ) 397 . johal , a. planes , and e. vives , phys . rev . e 68 ( 2003 ) 056113 . t. neuhaus and j.s . hager , phys . rev . e 74 ( 2006 ) 036702 . frigori , l.g . rizzi , and n.a . alves , eur . phys . j. b 75 ( 2010 ) 311 . i. andricioaei and j.e . straub , phys . rev . e 53 ( 1996 ) r3055 . hansmann and y. okamoto , phys . rev . e 56 ( 1997 ) 2228 . k. hukushima and k. nemoto , j. phys . soc . japan 65 ( 1996 ) 1604 . j. kim , t. keyes , and j.e . straub , j. chem . phys . ( 2010 ) 224107 . a.m. ferrenberg and r.h . swendsen , phys . rev . lett . 61 ( 1988 ) 2635 [ erratum : 63 ( 1989 ) 1658 ] . a.m. ferrenberg and r.h . swendsen , phys . rev . lett . 63 ( 1989 ) 1195 . n.a . alves , b.a . berg , and s. sanielevici , nucl . phys . b 376 ( 1992 ) 218 . j. kim , t. keyes , and j.e . straub , j. chem . phys . ( 2011 ) 061103 .
l.g . rizzi and n.a . alves , j. chem . phys . ( 2011 ) 141101 . b.a . berg and r.c . harris , comput . phys . commun . 179 ( 2008 ) 443 . a. caldwell , d. kollár , and k. kröninger , comput . phys . commun . ( 2009 ) 2197 . w.h . press _ et al . _ , _ numerical recipes _ ( cambridge university press , london , 1986 ) . j. lee and j.m . kosterlitz , phys . rev . b 43 ( 1991 ) 3265 . j. jacob , b. krantz , r.s . dothager , p. thiyagarajan , and t.r . sosnick , j. mol . biol . ( 2004 ) 369 . chung and a. tokmakoff , proteins 72 ( 2008 ) 474 . dastidar and c. mukhopadhyay , phys . rev . e 72 ( 2005 ) 051928 . sulkowska , a. kloczkowski , t.z . sen , m. cieplak , and r.l . jernigan , proteins 71 ( 2008 ) 45 . a. das and c. mukhopadhyay , proteins 75 ( 2009 ) 1024 . a. imparato and a. pelizzola , phys . rev . lett . ( 2008 ) 158104 . j.k . noel , p.c . whitford , k.y . sanbonmatsu , and j.n . onuchic , nucl . acids res . 38 ( 2010 ) w657 . c.e . fiore , j. chem . phys . ( 2011 ) 114107 .

table i. parameters of model i obtained from the bayesian analysis ( the five parameter columns correspond , in the order of the values , to the two peak energies , the two gaussian widths and the relative weight ; the last column is the /d.o.f . of the fit ) :

examples & estimate & & & & & & /d.o.f . \\
ubiquitin & mean & 250.36(37) & 22.73(22) & 102.64(27) & 22.31(15) & 0.4474(31) & 0.131 \\
ubiquitin & global mode & 250.51 & 22.64 & 102.55 & 22.27 & 0.4474 & 0.130 \\
fibonacci sequences & mean & 0.797(15) & 0.7786(79) & -8.128(37) & 1.968(25) & 0.4838(35) & 0.350 \\
fibonacci sequences & global mode & 0.803 & 0.7754 & -8.147 & 1.958 & 0.4841 & 0.349
microcanonical thermostatistics analysis has become an important tool to reveal essential aspects of phase transitions in complex systems . an efficient way to estimate the microcanonical inverse temperature and the microcanonical entropy is achieved with the statistical temperature weighted histogram analysis method ( st - wham ) . the strength of this method lies in its flexibility , as it can be used to analyse data produced by algorithms with generalised sampling weights . however , for any sampling weight , st - wham requires the calculation of derivatives of energy histograms , which leads to non - trivial and tedious binning tasks for models with a continuous energy spectrum , such as those for biomolecular and colloidal systems . here , we discuss two alternative methods that avoid the need for such energy binning to obtain continuous estimates for in order to evaluate by using st - wham : ( i ) a series expansion to estimate probability densities from the empirical cumulative distribution function ( cdf ) , and ( ii ) a bayesian approach to model this cdf . a comparison with a simple linear regression method is also carried out . the performance of these approaches is evaluated considering coarse - grained protein models for folding and peptide aggregation .
the security of coherent state continuous variable quantum key distribution ( cv - qkd ) is fundamentally based on the inability of an eavesdropper to perfectly distinguish between non - orthogonal quantum states . in this paper , we look at how much information a potential eavesdropper can gain when trying to distinguish between two pure coherent states as opposed to distinguishing between two mixed coherent states . this is of particular interest in cv - qkd protocols , such as post - selection , where it is important to determine whether an eavesdropper obtains more information in the case of distinguishing between two pure coherent states or distinguishing between two mixed states . in our analysis , we will use the _ probability of error _ ( ) measure to distinguish between quantum states . we point out that one could potentially consider other distinguishability measures such as the kolmogorov distance , the bhattacharyya coefficient and the shannon distinguishability ( for a review of these measures see ) . however , as we shall see , the probability of error measure has a number of useful properties and can be directly calculated for the quantum states we consider in our analysis . we consider the distinguishability between two general quantum states that are described by the two density matrices and . it was originally shown by helstrom that the probability of error between these two density matrices is minimized by performing an optimal positive operator - valued measure ( povm ) . in this case , the probability of error for distinguishing between two general quantum states can be expressed in terms of the eigenvalues of the difference of the two density matrices . we note that when the two states are indistinguishable the probability of error is 1/2 . on the other hand , in the case when the two states are completely distinguishable the probability of error is 0 . [ figure 1 : phase - space illustration of ( a ) two pure coherent states ( and ) and ( b ) two mixed coherent states ( and ) ; the dotted lines and shadings in ( b ) indicate which of the two coherent states are mixed . ] we now look at distinguishing between two coherent states using the previously defined probability of error . a coherent state is defined as , where is the displacement operator . it is also a minimum uncertainty state and an eigenstate of the annihilation operator , i.e. , where is the amplitude of the electromagnetic wave . any two coherent states and are always non - orthogonal and only approach orthogonality in the limit where the separation between their amplitudes becomes large . in the following analysis we will define a coherent state displaced in the amplitude and phase quadratures by amounts and , respectively , as . consequently , we can write the density operators of the two pure coherent states and that we consider here as . in our analysis we also consider two mixed coherent states , i.e. an equally weighted mixture of coherent states mirrored in the phase quadrature , with both mixtures having the same amplitude component . the density operators corresponding to these two mixed states , and , are defined accordingly . figs . ( [ two_mixed_vs_two_pure_discrete]a ) and ( [ two_mixed_vs_two_pure_discrete]b ) give a two - dimensional phase space illustration of the two pure coherent states and the two mixed coherent states defined by eq . ( 2 ) and eq . ( 3 ) , respectively . according to eq . ( [ error_probability ] ) we need to determine the eigenvalues of for both the two pure states and the two mixed states .
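the helstrom expression itself did not survive in the text above ; for equal _ a priori _ probabilities it takes the standard form ( quoted here for convenience ) ,

\[
p_e \;=\; \tfrac{1}{2}\bigl( 1 - \tfrac{1}{2}\textstyle\sum_k |\lambda_k| \bigr) ,
\]

where the \( \lambda_k \) are the eigenvalues of the traceless hermitian operator \( \hat\rho_1 - \hat\rho_0 \) ; equivalently \( p_e = \tfrac12 ( 1 - \tfrac12 \lVert \hat\rho_1 - \hat\rho_0 \rVert_1 ) \) . the matrix whose eigenvalues are needed in what follows is therefore \( \hat\rho_1 - \hat\rho_0 \) .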
to do this we write in its matrix representation , which can be expanded in terms of the orthogonal fock or number states defined as , where is the creation operator of a harmonic oscillator and . for example , the coherent state written in terms of this fock basis is . once is written in matrix form we can then numerically determine its eigenvalues . in this fock state expansion the inner product of an arbitrary coherent state with a fock state is given by , where and are fock states . calculating the general matrix coefficients for the case of the two pure coherent states , and similarly for the two mixed state case , we obtain the corresponding matrix elements . numerically we calculate the eigenvalues of eq . ( [ matrices_pure ] ) and eq . ( [ matrices_mixed ] ) up to certain values of and . then , according to eq . ( [ error_probability ] ) , this gives us the probability of error in distinguishing between the two quantum states . now , having numerically calculated , we would like to interpret this in terms of the information gained from using the distinguishing measure . in the context of cv - qkd it is important to determine how much shannon information an eavesdropper can obtain by distinguishing between two ( pure or mixed ) quantum states . the information obtained by distinguishing between two states can be calculated using the shannon information formula for a binary symmetric channel . figure ( [ 3d_plot ] ) shows the difference between the shannon information obtained by distinguishing between two pure coherent states compared with distinguishing between two mixed states . this information difference is defined as . figure ( [ 3d_plot ] ) plots in terms of the amplitude and phase quadrature displacements of the pure and mixed states as defined in eq . ( 3 ) and eq . ( 4 ) , respectively . here we have expanded up to fock states , i.e. in our numerical analysis . there are two main features of this plot . firstly , we notice that , given our distinguishability measure and initial configuration of coherent states in phase space , two mixed states never give more information than two pure states , i.e. . secondly , there is a flat region where the information gain is zero , i.e. the information from distinguishing between two mixed states is the same as that of two pure states . this means that as we move the states further and further apart in the amplitude quadrature ( while keeping the phase quadrature fixed ) , the probability of error tends to zero for both configurations and hence the information difference vanishes . the same result occurs when the amplitude quadrature is fixed while the phase quadrature is varied . this is somewhat surprising , because the more separated the components of the two mixed states become , the more indistinguishable the mixtures are , and consequently less information can be extracted . but in this case what it is telling us is that at some point the two mixed states start " behaving " like two pure states . [ figure 2 : information difference in terms of the quadrature displacements ; in this case the two mixed states never give more information than two pure states . ] in conclusion , we have shown that in a continuous variable quantum key distribution protocol where an eavesdropper needs to distinguish between two pure coherent states rather than two mixed coherent states ( where the various mixtures have the same amplitude component ) , the eavesdropper will never get more information from the two mixed coherent states . we showed this using the probability of error as the distinguishability measure along with the shannon information formula .
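a compact numerical sketch of the whole pipeline is given below ( python / numpy ; the displacement convention , the truncation nmax and the particular phase - space configuration are illustrative assumptions , not the paper 's exact parameters ) : coherent states are built in a truncated fock basis , the helstrom probability of error follows from the eigenvalues of the density - matrix difference , and the shannon information is that of a binary symmetric channel .

....
import numpy as np
from math import lgamma

def coherent(alpha, nmax):
    # amplitudes <n|alpha> = exp(-|alpha|^2/2) alpha^n / sqrt(n!)
    n = np.arange(nmax)
    logfact = np.array([lgamma(k + 1) for k in n])
    return np.exp(-0.5 * abs(alpha) ** 2) * alpha ** n / np.exp(0.5 * logfact)

def perr(rho0, rho1):
    # helstrom bound for equal priors
    lam = np.linalg.eigvalsh(0.5 * (rho1 - rho0))
    return 0.5 * (1.0 - np.sum(np.abs(lam)))

def shannon(pe):
    # binary symmetric channel: I = 1 + pe log2 pe + (1-pe) log2 (1-pe)
    h = lambda p: 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1.0 - h(pe)

nmax = 60                                  # truncation; adequate for |alpha| ~ 1
a, p = 1.0, 1.0                            # quadrature displacements (illustrative)
ket = lambda al: coherent(al, nmax)        # convention alpha = x + i p assumed
proj = lambda v: np.outer(v, v.conj())
rho0p, rho1p = proj(ket(a + 1j * p)), proj(ket(-a + 1j * p))   # two pure states
rho0m = 0.5 * (proj(ket(a + 1j * p)) + proj(ket(a - 1j * p)))  # mixtures mirrored
rho1m = 0.5 * (proj(ket(-a + 1j * p)) + proj(ket(-a - 1j * p)))# in phase quadrature
dI = shannon(perr(rho0p, rho1p)) - shannon(perr(rho0m, rho1m))
print("information difference (pure - mixed):", dI)           # expected >= 0
....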
we use the probability of error as a measure of distinguishability between two pure and two mixed symmetric coherent states in the context of continuous variable quantum cryptography . we show that the two mixed symmetric coherent states ( in which the various components have the same real part ) never give an eavesdropper more information than two pure coherent states .
as summarized by norberg , an insurance company which invests its wealth in a financial market is exposed to two kinds of risk . the first kind , called insurance risk , is the traditional liability risk caused by insurance claims , and the second kind , called financial risk , is the asset risk related to risky investments . the interplay of the two risks unavoidably leads to a complicated stochastic structure for the wealth process of the insurance company . paulsen proposed a general continuous - time risk model in which the cash flow of premiums less claims is described as a semimartingale and the log price of the investment portfolio as another semimartingale . since then the study of ruin in the presence of risky investments has experienced a vital development in modern risk theory ; some recent works include paulsen , klüppelberg and kostadinova , heyde and wang , hult and lindskog , bankovsky _ et al . _ , and hao and tang . during this research , much attention has been paid to an important special case of paulsen s set - up , the so - called bivariate lévy - driven risk model , in which the two semimartingales are independent lévy processes fulfilling certain conditions so that insurance claims dominate financial uncertainties . a well - known folklore says that risky investments may impair the insurer s solvency just as severely as do large claims ; see norberg , kalashnikov and norberg , frolova _ et al . _ , and pergamenshchikov and zeitouny . in this paper , we describe the insurance business in a discrete - time risk model in which the two risks are quantified as concrete random variables . this discrete - time risk model builds an efficient platform for investigating the interplay of the two risks . the ruin probabilities of this model have been investigated by nyrhinen , tang and tsitsiashvili , collamore , and chen , among many others . concretely , for each , denote by the insurer s net loss ( the total amount of claims less premiums ) within period and by the stochastic discount factor ( the reciprocal of the stochastic accumulation factor ) over the same time period . then the random variables , , and , , represent the corresponding insurance risks and financial risks , respectively . in this framework , we consider the stochastic present values of aggregate net losses specified as and consider their maxima . if , , form a sequence of independent and identically distributed ( i.i.d . ) random pairs fulfilling and , then , by lemma 1.7 of vervaat , converges almost surely ( a.s . ) as . in this case , denote by the a.s . limit . clearly , is non - decreasing in and thus , if and , then also converges a.s . to a limit , denoted by , as . we conduct risk analysis of the insurance business through studying the tail probabilities of and for . the study of tail probabilities is of fundamental interest in insurance , finance , and , in particular , quantitative risk management . moreover , the tail probability of with is immediately interpreted as the finite - time or infinite - time ruin probability . in most places of the paper , we restrict ourselves to the standard framework in which , , form a sequence of i.i.d . random variables with generic random variable and common distribution on , , , form another sequence of i.i.d . random variables with generic random variable and common distribution on , and the two sequences are mutually independent .
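to make the quantities in ( [ s ] ) and ( [ m ] ) concrete , a minimal simulation sketch follows ( python ; the pareto - type choices for the net losses and the discount factors are purely illustrative and are not taken from the paper ) . each path accumulates the discounted net losses and records the running maximum , so empirical tail / ruin probabilities drop out directly .

....
import numpy as np

rng = np.random.default_rng(7)

def sample_paths(n, nsim, draw_X, draw_Y):
    X = draw_X((nsim, n))                  # net losses X_i per period
    Y = draw_Y((nsim, n))                  # stochastic discount factors Y_i
    disc = np.cumprod(Y, axis=1)           # prod_{j<=i} Y_j
    S = np.cumsum(X * disc, axis=1)        # discounted aggregate losses S_k
    M = np.maximum(S, 0.0).max(axis=1)     # running maxima M_n (with S_0 = 0)
    return S[:, -1], M

alpha = 2.0
draw_X = lambda s: rng.pareto(alpha, s) - 1.0        # heavy-tailed, mean zero
draw_Y = lambda s: 0.4 * (1.0 + rng.pareto(3.0, s))  # here E[Y^alpha] < 1
Sn, Mn = sample_paths(10, 500_000, draw_X, draw_Y)
for x in (20.0, 50.0, 100.0):
    print(x, np.mean(Sn > x), np.mean(Mn > x))       # empirical tail probabilities
....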
under the assumption that the insurance risk has a regularly - varying tail dominating that of the financial risk , tang and tsitsiashvili obtained some precise asymptotic formulas for the finite - time and infinite - time ruin probabilities . the dominating relationship between and holds true if we consider the classical black scholes market , in which the log price of the investment portfolio is modelled as a brownian motion with drift and , hence , has a lognormal tail , lighter than every regularly - varying tail . however , empirical data often reveal that the lognormal model significantly underestimates the financial risk . it shows particularly poor performance in reflecting financial catastrophes such as the recent great recession since 2008 . this intensifies the need to investigate the opposite case , where the financial risk has a regularly - varying tail dominating that of the insurance risk . in this case , the stochastic quantities in ( [ s ] ) and ( [ m ] ) become much harder to tackle , the difficulty lying in the study of the tail probability of the product of many independent regularly - varying random variables . tang and tsitsiashvili gave two examples for this opposite case illustrating that , as anticipated , the finite - time ruin probability is mainly determined by the financial risk . chen and xie also studied the finite - time ruin probability of this model , and they obtained some related results applicable to the case with equally heavy - tailed insurance and financial risks . in this paper , under certain technical conditions , we give a unified treatment in the sense that no dominating relationship between the two risks is required . that is to say , the obtained formulas hold uniformly for the cases in which the insurance risk is more heavy - tailed than , less heavy - tailed than , or equally heavy - tailed as the financial risk . in our main result , under the assumption that every convex combination of and is of strongly regular variation ( see definition [ srv ] below ) , we derive some precise asymptotic formulas for the tail probabilities of and for . all the obtained formulas appear as linear combinations of and . hence , if one of and dominates the other , then this term remains in the formulas while the other term is negligible ; otherwise , both terms are simultaneously present . these formulas are in line with the folklore quoted before , confirming that whichever of the insurance and financial risks has the heavier tail plays the dominating role in leading to the insurer s insolvency . in the rest of this paper , section [ sec2 ] displays our results and some related discussions after introducing the assumptions , section [ sec3 ] prepares some necessary lemmas , and section [ sec4 ] proves the results . throughout this paper , all limit relationships hold for unless otherwise stated . for two positive functions and , we write or if , write if both and , and write if both and . for a real number , we write and . we restrict our discussions within the scope of regular variation . a distribution on is said to be of regular variation if for all and the relation holds for some . in this case , we write . however , such a condition is too general to enable us to derive explicit asymptotic formulas for the tail probabilities of the quantities defined in ( [ s ] ) and ( [ m ] ) .
to overcome this difficulty , our idea is to employ some existing results and techniques related to the well - developed concept of convolution equivalence . a distribution on is said to be convolution equivalent if for all and the relations and hold for some , where stands for the -fold convolution of . more generally , a distribution on is still said to be convolution equivalent if is . in this case , we write . relation ( [ l(alpha ) ] ) itself defines a larger class , denoted by . it is known that the constant in relation ( [ s(alpha ) ] ) is equal to ; see cline and pakes . we shall use the notation as above for the moment generating function of a distribution throughout the paper . the class coincides with the well - known subexponential class . examples and criteria for membership of the class for can be found in embrechts and cline . note that the gamma distribution belongs to the class for some but does not belong to the class . hence , the inclusion is proper . recent works in risk theory using convolution equivalence include klüppelberg _ et al . _ , doney and kyprianou , tang and wei , griffin and maller , griffin _ et al . _ , and griffin . for a distribution on , define , which is still a proper distribution on . actually , if is a real - valued random variable distributed as , then denotes the conditional distribution of on . for every , it is clear that if and only if . we now introduce a proper subclass of the class . [ srv ] a distribution on is said to be of strongly regular variation if defined by ( [ u v ] ) belongs to the class for some . in this case , we write . examples and criteria for membership of the class can be given completely in parallel with those in embrechts and cline . this distribution class turns out to be crucial for our purpose . clearly , if follows for some , then , since the conditional distribution of on belongs to the class . our standing assumption is as follows : [ assumption a ] every convex combination of and , namely for , belongs to the class . some interesting special cases of assumption [ assumption a ] include : a. and ; or , symmetrically , and . b. , , and ; or , symmetrically , , , and . c. , , and the function is -regularly varying ( that is to say , for every ) . for ( a ) and ( b ) , recall the fact that , if , , and , then and are equivalent ; see theorem 2.1(a ) of klüppelberg and the sentences before it . this fact can be restated as : if , , and , then and are equivalent . by this fact , the verifications of ( a ) and ( b ) are straightforward . for ( c ) , by theorem 2.0.8 of bingham _ et al . _ , the relation holds uniformly on every compact -set of . then the verification can be done by using theorems 3.4 and 3.5 of cline . in this subsection , we assume that and are two independent sequences of i.i.d . random variables with distributed as on and as on . under assumption [ assumption a ] , by lemma [ c2 ] below ( with ) , we have . note that both and are finite under assumption [ assumption a ] . the moments of will appear frequently in the paper , so we introduce a shorthand for to help with the presentation . starting with this asymptotic formula and proceeding with induction , we shall show in our main result that the relations and hold for every , where the coefficients , , and are given by . furthermore , we shall seek to extend relations ( [ r1 ] ) and ( [ r2 ] ) to . for this purpose , it is natural to assume ( which excludes the case ) to guarantee the finiteness of the constants , , and .
note in passing that implies , which is an aforementioned requirement for and to be a.s . finite . straightforwardly , it is easy to see that . actually , when we use the elementary inequality for any nonnegative sequence , and when we use minkowski s inequality . in order for to be a.s . finite , we need another technical condition . the finiteness of can be verified similarly to ( [ bound of moment ] ) . applying the dominated convergence theorem to the expressions for and , we obtain . now we are ready to state our main result , whose proof is postponed to sections [ proof of ( a)][proof of ( c ) ] . [ no1 ] let and be two independent sequences of i.i.d . random variables with distributed as on and as on . under assumption [ assumption a ] , we have the following : a. relations ( [ r1 ] ) and ( [ r2 ] ) hold for every ; b. if , then relation ( [ r1 ] ) holds for ; c. if and , then relation ( [ r2 ] ) holds for . as we pointed out before , theorem [ no1 ] does not require a dominating relationship between and . even in assertions ( b ) and ( c ) , where is assumed , there is not necessarily a dominating relationship between and , though the conditions on and are no longer exactly symmetric . additionally , theorems 5.2(3 ) and 6.1 of tang and tsitsiashvili are two special cases of our theorem [ no1](a ) with and , respectively . since the famous work of kesten , the tail probabilities of and have been extensively investigated , mainly in the framework of random difference equations and mostly under so - called cramér s condition that holds for some . traditional random difference equations appearing in the literature are often different from ones such as ( [ m1 ] ) and ( [ t recursive equation ] ) below associated to our model . nevertheless , under our standard assumptions on and , these subtle differences are not essential and the existing results can easily be transformed to our framework . we omit such details here . corresponding to our model , kesten s work shows an asymptotic formula of the form , assuming , among others , that fulfills cramér s condition and fulfills a certain integrability condition involving . kesten s constant , though positive , is generally unknown . see enriquez _ et al . _ for a probabilistic representation of this constant . goldie studied the same problem but in a broader scope , and he simplified kesten s argument . note that cramér s condition is essentially used in these works . among the few works on this topic beyond cramér s condition we mention grey and goldie and grübel .
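for orientation , the shape of relations ( [ r1 ] ) and ( [ r2 ] ) can be restated with placeholder coefficient names ( as the introduction notes , the formulas are linear combinations of the two marginal tails ; the exact coefficient expressions in ( [ bc ] ) are not reproduced here ) :

\[
\pr ( s_n > x ) \;\sim\; a_n \, \overline{f}(x) + b_n \, \overline{g}(x) , \qquad
\pr ( m_n > x ) \;\sim\; \tilde a_n \, \overline{f}(x) + \tilde b_n \, \overline{g}(x) ,
\qquad x \to \infty ,
\]

for every finite , and , under the extra moment conditions of parts ( b ) and ( c ) of theorem [ no1 ] , also for with the limiting coefficients .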
for the case where for some , for some , and , indicating that the insurance risk dominates the financial risk , grey s work shows a precise asymptotic formula similar to ours . goldie and grübel interpreted the study in terms of perpetuities in insurance and finance , and they derived some rough asymptotic formulas . corresponding to our model , their results show that exhibits a light tail if is light tailed and , while must exhibit a heavy tail once , regardless of the tail behavior of , all being consistent with the consensus on this topic that risky investments are dangerous . we also refer the reader to hult and samorodnitsky , collamore , blanchet and sigman , and hitczenko and wesołowski for recent interesting developments on the topic . in contrast to these existing results , we do not require cramér s condition or a dominating relationship between and in theorem [ no1](b ) , ( c ) . the coefficients and appearing in our formulas , though still generally unknown , assume transparent structures as given in ( [ bc ] ) , which enable one to easily conduct numerical estimates . the condition in theorem [ no1](b ) , ( c ) is made mainly to ensure the finiteness of and . however , it excludes some apparently simpler cases such as and classical random walks ( corresponding to ) . the tail behavior of the maximum of a random walk with negative drift , especially with heavy - tailed increments , has been systematically investigated by many people ; see , for example , feller , veraverbeke , korshunov , borovkov , denisov _ et al . _ , and foss _ et al . _ , among many others . the study of random walks hints that the tail probabilities of and behave essentially differently between the cases and . actually , if , then all of , , and diverge to as , and theorem [ no1 ] leads to . this fails to give precise asymptotic formulas for and , though it is still consistent with kesten and goldie s formula since . for this case , intriguing questions include how to capture the precise asymptotics , other than kesten and goldie s , for and , and how to connect the asymptotics for and as to kesten and goldie s formula . the approach developed in the present paper does not seem efficient enough to give a satisfactory answer to either of these questions . admittedly , the standard complete independence assumptions on the two sequences and , though often appearing in the literature , are not of practical relevance . however , theorem [ no1 ] offers new insights into the tail probabilities of the sums in ( [ s ] ) and their maxima in ( [ m ] ) , revealing the interplay between the insurance and financial risks . furthermore , extensions that incorporate various dependence structures into the model are to be expected , usually without much difficulty . we show in the next subsection a simple example of such an extension .
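before turning to that extension , the kesten goldie regime discussed above can be illustrated numerically ; the sketch below ( python ; the lognormal / exponential choices are illustrative , satisfy cramér s condition with index 2 , and are not the paper s assumptions ) iterates a random difference equation to near - stationarity and reads off the power - law tail index with a hill estimator .

....
import numpy as np

rng = np.random.default_rng(11)

def stationary_sample(nsim, nburn, draw_A, draw_B):
    # iterate R <- A R + B to (approximate) stationarity, vectorised over paths
    R = np.zeros(nsim)
    for _ in range(nburn):
        R = draw_A(nsim) * R + draw_B(nsim)
    return R

def hill(sample, k):
    # hill estimator of the tail index from the k largest order statistics
    x = np.sort(sample)[-k:]
    return 1.0 / np.mean(np.log(x / x[0]))

# lognormal A tuned so that E[A^kappa] = 1 at kappa = 2:
# E[A^s] = exp(mu s + s^2 sig^2 / 2), and 2*mu + 2*sig^2 = 0 below
mu, sig = -0.5, np.sqrt(0.5)
draw_A = lambda n: rng.lognormal(mu, sig, n)
draw_B = lambda n: rng.exponential(1.0, n)       # light-tailed additive term
R = stationary_sample(200_000, 400, draw_A, draw_B)
print("hill estimate of kappa:", hill(R, 2000))  # should come out near 2
....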
under assumption [ assumption a ] , by ( [ pi ] ) and lemma [ c2 ] below, we can conduct an induction procedure to obtain and for every , where additionally , under the conditions of theorem [ no1](b ) , ( c ) , letting leads to where and are independent of and .it is easy to verify the finiteness of and .we summarize the analysis above into the following corollary and will show a sketch of its proof in section [ proof of cor ] .[ no2]let be a sequence of i.i.d .random pairs with common fgm distribution ( [ fgm ] ) . under assumption[ assumption a ] , we have the following : a. relations ( [ r3 ] ) and ( [ r4 ] ) hold for every ; b. if , then relation ( [ r3 ] ) holds for ; c. if and , then relation ( [ r4 ] ) holds for . as a sanity check , letting , the results in corollary [ no2 ] coincide with those in theorem [ no1 ] .in this section , we prepare a series of lemmas , some of which are interesting in their own right .we first recall some well - known properties of distributions of regular variation and convolution equivalence . if for some , then for every and every there is some constant such that potter s bounds hold whenever and ; see theorem 1.5.6(iii ) of bingham _ et al .since if and only if defined by ( [ u v ] ) belongs to , potter s bounds above can easily be restated in terms of a distribution as that , for every and every there is some constant such that the inequalities hold whenever and . by lemma 5.2 of pakes , if then it holds for every that the first lemma below describes an elementary property of convolution equivalence .[ l1]let be i.i.d .real - valued random variables with common distribution for some .then for every and , write by relation ( [ pakes ] ) , we have and where in the last step we used and the dominated convergence theorem . similarly , plugging these limits into ( [ 123 ] ) yields the desired result . hereafter , for distributions , denote by a convex combination of , where .[ new1]let be distributions and let .the following are equivalent : a. for every ; b. for some and the relation holds for every and every .first prove that ( b ) implies ( a ) .denote by this specific element in such that .for every , it is easy to see that and that by ( [ assumption ] ) .thus , follows from the closure of the class under weak equivalence as mentioned in the last paragraph of section [ a ] . for the other implication ,we only need to use ( a ) to verify ( [ assumption ] ) . for arbitrarily fixed and every , each of the sums and is proportional to a convolution - equivalent tail .thus , by the arbitrariness of , relation ( [ assumption ] ) follows .the following lemma shows the usefulness of convolution equivalence in dealing with the tail probability of the sum of independent random variables .note that the lemma does not require any dominating relationship among the individual tails .additionally , in view of lemma [ new1 ] , letting in lemma [ l2 ] retrieves theorem 1 of li and tang .[ l2]let be distributions on and let . if for every , then and clearly , we only need to prove relation ( [ for n ] ) .introduce independent random variables with distributions , respectively . for every and , according to whether or not there is exactly only one occurring in the union , we split the probability on the right - hand side into two parts as first we deal with . 
for a real vector ,write , and for each , write we have since is proportional to a convolution - equivalent tail , by the dominated convergence theorem , hence , it follows from ( [ assumption ] ) and the dominated convergence theorem that next we turn to .write , which has a convolution - equivalent tail proportional to , and let be i.i.d .copies of . clearly , thus , by lemma [ l1 ] , plugging ( [ a ] ) and ( [ b ] ) into ( [ c ] ) yields the desired result . due to the connection between convolution equivalence and strongly regular variation, we can restate lemmas [ new1 ] and [ l2 ] in terms of strongly regular variation .actually , the next lemma shows an equivalent condition for assumption [ assumption a ] .[ new2]let be distributions and let .the following are equivalent : a. for every ; b. for some and the relation holds for every and every .the lemma below expands the tail probability of the product of independent , nonnegative , and strongly regular random variables , forming an analogue of the well - known breiman s theorem in a different situation . for breimans theorem , see breiman and cline and samorodnitsky . [ c2]let be independent nonnegative random variables with distributions , respectively , and let .if for every , then the distribution of belongs to the class and the next lemma shows kesten s bound for convolution tails without the usual requirement .it improves lemma 5.3 of pakes for the case .[ kesten]let be a distribution on .if for some , then for every there is some constant such that the relation holds for all and all . when , the assertion has been given in lemma 5.3 of pakes , we only need to consider ( for which must hold ) .let be a sequence of i.i.d .random variables with common distribution , and set . clearly , note that the distribution of still belongs to the class and .hence , for every , by lemma 5.3 of pakes , there is some constant such that , for all and all , by ( [ potter s for l ] ) , there are some constants and such that , for all and all , plugging ( [ k2 ] ) into ( [ k1 ] ) and noticing that , we have , for all and all , for , we choose an integer .then , for and , using the same derivations as in ( [ k1])([k3 ] ) , we obtain at last , for and , it is obvious that a combination of ( [ k3])([k5 ] ) gives that , for some constant and for all and all , by setting to be small enough such that , we complete the proof .the following lemma will be crucial in proving theorem [ no1](b ) , ( c ) .[ infinite]let be a sequence of ( arbitrarily dependent ) random variables with common distribution on , let be another sequence of i.i.d .random variables with common distribution on , and let the two sequences be mutually independent .assume that there is some distribution for such that assume also that .then choose some large constant such that the inequality holds for all , and then introduce a nonnegative random variable with a distribution clearly , for all and for all large . the inequality for all means that is stochastically not greater than , denoted by . moreover , since , there is some large but fixed constant such that .for this fixed , define and then introduce another nonnegative random variable with a distribution clearly , , for all , and for all .thus , . 
let , , copies of independent of .choose some such that .by lemma [ kesten ] , there is some constant such that , for all and all , noticeably , the derivation in ( [ inf1 ] ) tacitly requires that are positive .nevertheless , in case assigns a mass at , the upper bound in ( [ inf1 ] ) is still correct and can easily be verified by conditioning on . by lemma [ c2 ] , moreover , by ( [ potter s ] ) , there is some constant such that , for all and , now we start to estimate the tail probability in ( [ bound ] ) . choosing some large such that .clearly , for all , \\[-8pt ] & \leq & \biggl ( \sum_{i>\sqrt{x / x_{0}}}+\sum _ { n < i\leq\sqrt { x / x_{0 } } } \biggr ) \pr \biggl ( x^{\ast}\prod _ { j=1}^{i}y_{j}^{\ast } > \frac { x}{i^{2 } } \biggr ) \nonumber \\ & = & i_{1}(x)+i_{2}(n , x ) , \nonumber\end{aligned}\ ] ] where is understood as in case . first we deal with . by chebyshev s inequality , this means that converges to at least semi - exponentially fast since .thus , we deal with .we further decompose it into three parts as \\[-8pt ] & & { } + \sum_{n <i\leq\sqrt{x / x_{0}}}\pr \biggl ( x^{\ast } \prod _ { j=1}^{i}y_{j}^{\ast } > \frac{x}{i^{2}},\prod_{j=1}^{i}y_{j}^{\ast } \leq1 \biggr ) \nonumber \\ & = & i_{21}(n , x)+i_{22}(n , x)+i_{23}(n , x ) .\nonumber\end{aligned}\ ] ] by conditioning on and then applying ( [ inf1])([inf22 ] ) , we obtain since , it follows that applying both ( [ inf1 ] ) and ( [ inf22 ] ) , we have which implies that similarly , applying ( [ inf22 ] ) twice , which , together with , gives that a combination of relations ( [ two terms])([inf25 ] ) completes the proof .we first prove relation ( [ r1 ] ) .it is easy to verify that where denotes equality in distribution ; see also theorem 2.1 of tang and tsitsiashvili .we proceed with induction . for , it follows from lemma [ c2 ] that thus , relation ( [ r1 ] ) holds for .now we assume by induction that relation ( [ r1 ] ) holds for and prove it for . by this induction assumption and assumption [ assumption a ] , we know that every convex combination of the distributions of and belongs to the class . applying lemma [ l2 ] with , we have which , together with assumption [ assumption a ] , implies that every convex combination of the distributions of and belongs to the class .applying lemma [ c2 ] , we obtain therefore , relation ( [ r1 ] ) holds for .next we turn to relation ( [ r2 ] ) .introduce a sequence of random variables through the recursive equation equipped with .it is easy to see that for .then the proof of relation ( [ r2 ] ) can be done by using the recursive equation ( [ t recursive equation ] ) and going along the same lines as in the proof of relation ( [ r1 ] ) above .note that and increasingly converge to the finite constants and . also recall lemma [ infinite ] .hence , for every , there is some large integer such that both and hold .now we start to deal with . 
on the one hand , for every , by theorem [ no1](a ) , relation ( [ t4 ] ) , and assumption[ assumption a ] , in turn , we obtain \\[-8pt ] & \sim & ( 1-\varepsilon ) ^{-\alpha } \bigl ( a_{n_{0}}\overline { f}(x)+b_{n_{0}}\overline{g}(x ) \bigr ) + \delta \varepsilon^{-\alpha } \bigl ( \overline{f}(x)+\overline{g}(x ) \bigr ) \nonumber \\ & \leq & \bigl ( ( 1-\varepsilon ) ^{-\alpha}a_{\infty } + \delta \varepsilon^{-\alpha } \bigr ) \overline{f}(x)+ \bigl ( ( 1-\varepsilon ) ^{-\alpha}b_{\infty}+\delta\varepsilon^{-\alpha } \bigr ) \overline{g}(x ) .\nonumber\end{aligned}\ ] ] on the other hand , by theorem [ no1](a ) and relation ( [ t3 ] ) , by the arbitrariness of and in ( [ t5 ] ) and ( [ t6 ] ) , we obtain relation ( [ r1 ] ) for .first we establish an asymptotic upper bound for .as in the proof of theorem [ no1](b ) , for every , suitably choose some large integer such that relations ( [ t3 ] ) , ( [ t4 ] ) , and the relation hold simultaneously . for every , by theorem [ no1](a ) , relation ( [ t4 ] ), assumption [ assumption a ] , and relation ( [ t7 ] ) , in turn , we obtain since and are arbitrary positive constants , it follows that for the corresponding asymptotic lower bound , as analyzed in the proof of theorem [ no1](a ) , it suffices to prove that where is the weak limit of the sequence defined by ( [ t recursive equation ] ) .we apply the method developed by grey to prove ( [ lower ] ) .consider the stochastic difference equation which inherits a stochastic structure from ( [ t recursive equation ] ) .note that the weak solution of ( [ stochastic difference equation ] ) exists and is unique .furthermore , the limit distribution of is identical to this unique solution and , hence , it does not depend on the starting random variable .see vervaat and goldie for these and related statements .it is easy to check that ; see the proof of theorem 1 of grey for a similar argument . construct a new starting random variable independent of with tail starting with , the recursive equation ( [ t recursive equation ] )generates the sequence correspondingly . comparing ( [ starting random variable ] ) with ( [ stochastic difference equation ] ) ,we see that and , hence , every are stochastically not greater than ; namely , it holds for all and all that furthermore , it holds that where the last step is analogous to ( [ cited ] ) .thus , by assumption [ assumption a ] , the distribution of belongs to the class .then , by going along the same lines of the proof of theorem [ no1](a ) and using equation ( [ t recursive equation ] ) starting with , we obtain with since weakly converges to and , it is easy to see that and , with the latter subject to a straightforward application of the dominated convergence theorem . thus , substituting ( [ lower 2 ] ) into ( [ lower 1 ] ) and letting on the right - hand side of the resulting formula , we arrive at relation ( [ lower ] ) as desired .clearly , the recursive equations ( [ m1 ] ) , ( [ t recursive equation ] ) , and the identity for still hold since is a sequence of i.i.d . random pairs . introduce four independent random variables , , , and with distributions , , , and , respectively , and let them be independent of . 
using decomposition ( [ pi ] ), we have when , applying lemma [ c2 ] to each term on the right - hand side of ( [ decomposition ] ) gives then , as in the proof of theorem [ no1](a ) , proceeding with induction according to ( [ decomposition ] ) leads to ( [ r3 ] ) .relation ( [ r4 ] ) can be derived similarly .this proves corollary [ no2](a ) .corollary [ no2](b ) , ( c ) can be verified by the similar ideas used in proving theorem [ no1](b ) , ( c ) .the key ingredient is establishing a relation similar to ( [ bound ] ) .write , , , and so on .it follows from ( [ c1 ] ) that as in the proof of lemma [ infinite ] , we can construct independent random variables and both with tails equal to for all large such that , , and . for some large such that , we write then , going along the same lines of the rest of the proof of lemma [ infinite ] , we obtain which suffices for our purpose .the authors are very grateful to the two reviewers and the associate editor for their insightful comments and constructive suggestions , which have helped significantly improve this work .li s research was supported by the national natural science foundation of china ( grant no .11201245 ) and the research fund for the doctoral program of higher education of china ( grant no .20110031120003 ) .tang s research was supported by a centers of actuarial excellence ( cae ) research grant ( 20132016 ) from the society of actuaries .
consider an insurance company exposed to a stochastic economic environment that contains two kinds of risk . the first kind is the insurance risk caused by traditional insurance claims , and the second kind is the financial risk resulting from investments . its wealth process is described in a standard discrete - time model in which , during each period , the insurance risk is quantified as a real - valued random variable equal to the total amount of claims less premiums , and the financial risk as a positive random variable equal to the reciprocal of the stochastic accumulation factor . this risk model builds an efficient platform for investigating the interplay of the two kinds of risk . we focus on the ruin probability and the tail probability of the aggregate risk amount . assuming that every convex combination of the distributions of and is of strongly regular variation , we derive some precise asymptotic formulas for these probabilities with both finite and infinite time horizons , all in the form of linear combinations of the tail probabilities of and . our treatment is unified in the sense that no dominating relationship between and is required .
consider the wave equation with non - homogeneous boundary conditions : system ( [ contwavecontrolled ] ) is said to be _ exactly controllable _ in time if , for all , there exists a control function such that the solution of ( [ contwavecontrolled ] ) can be driven to rest at time , i.e. we also introduce the adjoint wave equation with homogeneous boundary conditions : this system is well known to be well posed in the energy space and the energy below is conserved in time : the hilbert uniqueness method ( hum ) introduced in allows one to show that the property of _ exact controllability _ for ( [ contwavecontrolled ] ) is equivalent to the _ boundary observability _ of ( [ contwaveadjoint ] ) . the _ observability property _ of the wave equation ensures that the following _ observability inequality _ holds for all solutions of ( [ contwaveadjoint ] ) provided : the best constant in ( [ contobsineq ] ) is the so - called _ observability constant _ . the observability time has to be larger than the characteristic time which is needed by any solution associated to initial data supported in a very narrow neighborhood of to travel along the characteristic rays , get to the boundary and come back along the characteristics . it is also well known that for all and all solutions of the adjoint problem ( [ contwaveadjoint ] ) with initial data , the following _ admissibility inequality _ holds : so that for all , and are equivalent norms . as a consequence of these results , it is easy to see that for all , there exists a control driving the solution of ( [ contwavecontrolled ] ) to rest at , i.e. such that ( [ contnullcontrol ] ) holds . this turns out to be equivalent to the fact that for all solutions of ( [ contwaveadjoint ] ) , where is the duality product between and . the hum control , the one of minimal -norm , has the explicit form where is the solution corresponding to the minimum of the quadratic functional the effect of replacing the continuous controlled wave equation ( [ contwavecontrolled ] ) or the corresponding adjoint problem ( [ contwaveadjoint ] ) by discrete models has been intensively studied in recent years , starting from some simple numerical schemes on uniform meshes like finite differences or linear - finite element methods in and , more recently , more complex schemes like the discontinuous galerkin ones in . in all these cases , the convergence of the approximation scheme in the classical sense of numerical analysis does not suffice to guarantee that the sequence of discrete controls converges to the continuous ones , as one could expect . this is due to the fact that there are classes of initial data for the discrete adjoint problem generating high frequency wave packets propagating at a very low group velocity that , consequently , cannot be observed from the boundary of the domain during a finite time , uniformly as the mesh - size parameter tends to zero . this leads to the divergence of the discrete observability constant as the mesh size tends to zero . high frequency pathological phenomena have also been observed for numerical approximation schemes of other models , like the linear schrödinger equation ( cf .
) , in which one is interested in the uniformity of the so - called _ dispersive estimates _ , which play an important role in the study of the well - posedness of the non - linear models . several _ filtering techniques _ have been designed to face these high frequency pathologies : the _ fourier truncation method _ , which simply eliminates all the high frequency fourier components propagating non - uniformly ; the _ bi - grid algorithm _ introduced in and and rigorously studied in , or in the context of the finite difference semi - discretization of the and wave equation and of the schrödinger equation ( cf . ) , which consists in taking initial data with slow oscillations obtained by linear interpolation from data given on a coarser grid ; and the _ numerical viscosity _ method , which by adding a suitable dissipative mechanism damps out the spurious high frequencies ( , ) . we should emphasize that the _ mixed finite element method _ analyzed in is , as far as we know , the only method that preserves the propagation and controllability properties of the continuous model uniformly in the mesh size without requiring any extra filtering . the interested reader is referred to the survey articles and for a presentation of the development and the state of the art in this topic . the purpose of the present paper is to analyze the behavior of the quadratic - finite element semi - discretization of the problems ( [ contwavecontrolled ] ) and ( [ contwaveadjoint ] ) from the uniform controllability / observability point of view . in * section [ sectintrop2 ] * we introduce in a rigorous way the discrete analogue of ( [ contwavecontrolled ] ) and ( [ contwaveadjoint ] ) and explain the minimization process generating the discrete controls . in * section [ sectfour ] * we analyze the spectral problem associated to this discrete model and reveal the co - existence of two main types of fourier modes : an _ acoustic _ one , of physical nature , related to the nodal components of the numerical solution , and an _ optic _ one , of spurious nature , related to the curvature with which the quadratic finite element method perturbs the linear approximation . we also study finer properties of the spectrum , for example the _ spectral gap _ , identifying three regions of null gap : the highest frequencies on both acoustic and optic modes and the lowest frequencies on the optic one . the content of this section is related to previously existing work . for instance , the dispersive properties of higher - order finite element methods have been analyzed in in the setting of the helmholtz equation . an explicit form of the acoustic dispersion relation was obtained for approximations of arbitrary order . it was shown that the numerical dispersion displays three different types of behavior , depending on the approximation order relative to the mesh - size and the wave number . in * section [ sectboundobseig ] * we obtain some spectral identities allowing us to analyze the discrete observability inequality for the adjoint system . in * section [ sectingham ] * we show that the _ fourier truncation _ of the three pathological regions of the spectrum leads to a uniform observability inequality . in * section [ sectbigrid ] * we prove that a filtering mechanism consisting in , firstly , considering piecewise linear initial data and , secondly , preconditioning the nodal components by a bi - grid algorithm guarantees uniform observability properties .
within the proof , we use a classical _ dyadic decomposition argument _ , which mainly relies on the fact that for this class of initial data the total energy can be bounded by the energy of the projection on the low frequency components of the acoustic dispersion relation . we should emphasize that our results are finer than the ones in or , where one obtains uniform observability properties for finite element approximations of any order , but by filtering the fourier modes much under the critical scale . here we only consider the particular case of quadratic finite element approximation on meshes , but we get to the critical filtering scale . note however that the results in and apply in the context of non - uniform grids as well . in * section [ sectconvergence ] * we present the main steps of the proof of the convergence of the discrete control problem under the assumption that the initial data in the corresponding adjoint problem are filtered through a fourier truncation or a bi - grid algorithm . * section [ sectopenpbms ] * is devoted to presenting the conclusions of the paper and some related open problems . let , and be the _ nodes _ of a _ uniform grid _ of the interval , consider the following second - order algebraic equation in , whose solutions are with . the superscripts and stand for _ acoustic _ and _ optic _ . we will also need the square roots of the fourier symbols ( [ fouriersymbols ] ) , the so - called _ dispersion relations _ : for each , set we refer to and as the _ acoustic _ and the _ optic branch _ of the spectrum . the corresponding eigenvectors are * the resonant mode . * up to this point , we have explicitly calculated solutions of the eigenvalue problem ( [ p2spectral ] ) . to do this , we have supposed that . but is also an eigenvalue corresponding to the _ resonant _ mode , the superscript standing for _ resonant _ . the components of the corresponding eigenvector are for any normalized eigenvalue , define , , ( see figure [ figp2eigenvalues ] ) . [ figure [ figp2eigenvalues ] : the eigenvalues ( left ) versus their square roots ( right ) : the continuous ones ( blue ) , the acoustic ( red ) , the optic ( black ) and the resonant ( green ) modes . ] * normalized eigenvectors . * for any eigenvector , we define the -normalized eigenvector . [ figures : the normalized eigenvectors ; in blue / red , their nodal / midpoint components . ]
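the acoustic / optic splitting of the discrete spectrum is easy to observe numerically . the sketch below assembles the standard p2 mass and stiffness matrices for the dirichlet eigenvalue problem on ( 0,1 ) and solves the generalized eigenproblem ; it is a minimal illustration using the textbook p2 element matrices , not the paper's code .

```python
import numpy as np
from scipy.linalg import eigh

def p2_wave_spectrum(N):
    """Assemble P2 (quadratic) finite element stiffness K and mass M for
    -u'' = lambda * u on (0,1) with homogeneous Dirichlet conditions,
    using N equal elements, and return the discrete frequencies sqrt(lambda).
    Degrees of freedom alternate node / midpoint / node / ..."""
    h = 1.0 / N
    Ke = np.array([[7.0, -8.0, 1.0], [-8.0, 16.0, -8.0], [1.0, -8.0, 7.0]]) / (3.0 * h)
    Me = (h / 30.0) * np.array([[4.0, 2.0, -1.0], [2.0, 16.0, 2.0], [-1.0, 2.0, 4.0]])
    ndof = 2 * N + 1
    K = np.zeros((ndof, ndof))
    M = np.zeros((ndof, ndof))
    for e in range(N):
        idx = np.arange(2 * e, 2 * e + 3)   # left node, midpoint, right node
        K[np.ix_(idx, idx)] += Ke
        M[np.ix_(idx, idx)] += Me
    lam = eigh(K[1:-1, 1:-1], M[1:-1, 1:-1], eigvals_only=True)  # drop boundary dofs
    return np.sqrt(lam)

freqs = p2_wave_spectrum(N=50)
print(freqs[:5] / np.pi)       # low acoustic frequencies: close to 1, 2, 3, 4, 5
print(freqs[-1] * (1.0 / 50))  # h * (top optic frequency) stays O(1): blow-up like 1/h
```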
using the expression ( [ discretenorms ] ) of the discrete norms and the identity ( [ p2spectral3 ] ) , we obtain the following representations of the and -norms of the acoustic and optic eigenvectors in terms of their nodal components , for all and :
\[
\cdots \sum_{j=0}^{n } \bigl| \tilde{\varphi}_{j+1}^{\alpha , k } + \tilde{\varphi}_{j}^{\alpha , k } \bigr|^{2 } + \frac{h}{12 } \sum_{j=0}^{n } \bigl| \tilde{\varphi}_{j+1}^{\alpha , k } - \tilde{\varphi}_{j}^{\alpha , k } \bigr|^{2 } \label{h0normeig}
\]
and then , using the representation formula ( [ h0normeig ] ) , the identities and ( [ p2spectral5 ] ) , we obtain let us remark that blows up as . with the above notation , where and stand for the _ nodal _ and _ midpoint _ components . using the explicit form of the - norm and the characterization of the resonant mode ( [ p2resonanteigenvector ] ) , we obtain that and therefore , the normalized resonant mode satisfies let us introduce the sets of eigenvalues and of -normalized eigenfunctions , i.e. * fourier representation of discrete solutions . * since is an orthonormal basis in , the initial data in ( [ p2adjoint ] ) admit the following fourier representation : with , and . correspondingly , the solution of ( [ p2adjoint ] ) can be represented as in ( [ p2adjointsol ] ) , where the total energy ( [ p2energy ] ) of the solutions of ( [ p2adjoint ] ) is then as follows : * limits and monotonicity of the eigenvalues . * firstly , let us remark that as , , and as , . on the other hand , the so - called _ group velocities _ , which are first - order derivatives of the fourier symbols ( [ fouriersymbols ] ) or of the corresponding dispersion relations ( [ dispersionrelations ] ) , verify the following positivity condition which means that _ the acoustic branch is strictly increasing _ and _ the optic one is strictly decreasing _ in . consequently , the high frequency wave packets involving only the acoustic or the optic modes and concentrated around a given frequency propagate in opposite directions . moreover , at or , the group velocities satisfy which , according to the analysis in , shows , in particular , that there are waves concentrated on each mode which propagate at arbitrarily slow velocity . the main result of this section is as follows : for all and all , the following identity holds for both acoustic and optic eigensolutions : moreover , for the resonant mode , the following identity holds : [ propobservabilityeigenvectors ] the identity ( [ observabilityeigenvectorsao ] ) is the discrete analogue of the continuous one , where is the -normalized eigenfunction corresponding to the eigenvalue . [ remark0 ] due to the monotonicity of the fourier symbols , we have that and , for all .
the quadratic equation has the roots and . this allows us to guarantee that , for all . [ remark1 ] due to the form of the denominator in the right hand side of ( [ observabilityeigenvectorsao ] ) and from the above lower and upper bound of the fourier symbols and the behavior of the group velocities , we deduce that the coefficient of in the right hand side of ( [ observabilityeigenvectorsao ] ) is singular as both when or and when and . [ remark2 ] fix and . obviously , it is enough to prove ( [ observabilityeigenvectorsao ] ) for the un - normalized eigenvectors . we will use two approaches to prove the identity ( [ observabilityeigenvectorsao ] ) . the first one consists in using the classical multiplier ( which is a discrete version of the continuous one ) in the simplified spectral problem ( [ p2spectral6 ] ) and then applying the _ abel summation by parts formula _ for all and . in what follows , we will add the superscript to the solution of ( [ p2spectral4 ] ) . in this way , we deduce the following identity : replacing the crossed sum obtained from the identity with into the one with , we get the following equality : using the representation ( [ h1normeig ] ) of the -norm of the optic and acoustic eigenvectors in terms of the nodal components , we obtain that ( [ obseig ] ) is equivalent to replacing the representation of the -norm of the eigenvectors ( [ h0normeig ] ) into the one of the -norm , ( [ h1normeig ] ) , we obtain : on the other hand , the and -norms of the eigenvectors are related as follows : replacing ( [ h0h1normeig ] ) into ( [ obseig2 ] ) , we get
\[
\cdots \; h^{2 } \bigl\| \tilde{\boldsymbol{\varphi}}^{\alpha , k}_{h } \bigr\|_{h,1}^{2 } . \label{obseig3}
\]
by combining ( [ obseig1 ] ) and ( [ obseig3 ] ) , we obtain from which the identity ( [ observabilityeigenvectorsao ] ) follows immediately . the second approach to prove ( [ observabilityeigenvectorsao ] ) is much more direct . it consists in using the representation ( [ h1normeig ] ) of the -norm of the eigenvectors , the trigonometric identities ( [ trigonometricidentities ] ) , the fact that and the relation ( [ p2spectral5 ] ) . thus , for as in ( [ observabilityeigenvectorsao ] ) , we get : the identity ( [ observabilityeigenvectorsr ] ) follows by combining the explicit expressions of the components of the resonant eigenvector ( [ p2resonanteigenvector ] ) and ( [ discretenorms ] ) . this concludes the proof of ( [ observabilityeigenvectorsr ] ) . * the observability inequality . * in this section we prove that the discrete observability inequality ( [ p2observabilityinequalitygeneral ] ) holds uniformly as in a truncated class of initial data for the observation operator introduced in ( [ observationoperators ] ) . more precisely , consider and and correspondingly the wave numbers and introduce the subspace of given by consider the truncated subspace defined by ( see figure [ figp2truncatedclass ] ) . [ figure [ figp2truncatedclass ] : the truncated class ; in red / black / green , the acoustic / optic / resonant mode . ] the main result of this subsection is as follows : for all and independent of , all initial data introduced in ( [ p2subsptruncation ] ) , with , given by ( [ wavenumbers ] ) , and all
\[
t > \frac{ \cdots }{ \min \bigl\{ \min_{\eta \in [ 0 , \eta_{+}^{a } ] } \partial_{\eta } \lambda^{a } ( \eta ) , \ \min_{\eta \in [ \eta_{+}^{o } , \eta_{-}^{o } ] } ( -\partial_{\eta } \lambda^{o } ( \eta ) ) \bigr\} } ,
\]
[ theoremfouriertruncation ] the observability inequality ( [ p2observabilityinequalitygeneral ] ) holds uniformly as for the operator in ( [ observationoperators ] ) .
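before turning to the proof , note that the truncation itself is a one - line operation in the discrete sine basis . the sketch below filters nodal dirichlet data by zeroing all modes above a cutoff fraction ; the function name and the default cutoff are illustrative assumptions .

```python
import numpy as np

def fourier_truncate(u, delta=0.5):
    """Fourier truncation of grid data u_1..u_N (homogeneous Dirichlet
    ends excluded): expand in the discrete sine basis
    phi_k(x_j) = sin(k*pi*j/(N+1)) and discard all modes k > delta*N."""
    N = u.size
    j = np.arange(1, N + 1)
    K = np.sin(np.pi * np.outer(j, j) / (N + 1))  # symmetric sine-basis matrix
    coeffs = (2.0 / (N + 1)) * (K @ u)            # forward transform
    coeffs[int(delta * N):] = 0.0                 # kill the high frequencies
    return K @ coeffs                             # inverse transform

x = np.linspace(0.0, 1.0, 102)[1:-1]
u = np.sin(np.pi * x) + 0.3 * np.random.default_rng(2).standard_normal(x.size)
print(np.linalg.norm(u - fourier_truncate(u)))    # only high-frequency content removed
```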
the fact that the eigenmodes involved in the class are such that the gap in each branch has a strictly positive lower bound uniformly as allows us to apply ingham theorem ( cf . , theorem 4.3 , pp . ) . more precisely , the spectral gap on each branch is bounded as follows :
\[
\cdots \min_{\eta \in [ 0 , \eta_{+}^{a } ] } \partial_{\eta } \lambda^{a } ( \eta ) > 0 , \quad \forall \, k\pi h , ( k+1 ) \pi h \in [ 0 , \eta_{+}^{a } ]
\]
and
\[
\cdots \min_{\eta \in [ \eta_{+}^{o } , \eta_{-}^{o } ] } ( -\partial_{\eta } \lambda^{o } ( \eta ) ) > 0 , \quad \forall \, k\pi h , ( k+1 ) \pi h \in [ \eta_{+}^{o } , \eta_{-}^{o } ] .
\]
then
\[
\min \bigl\{ \min_{\eta \in [ 0 , \eta_{+}^{a } ] } \partial_{\eta } \lambda^{a } ( \eta ) , \ \min_{\eta \in [ \eta_{+}^{o } , \eta_{-}^{o } ] } ( -\partial_{\eta } \lambda^{o } ( \eta ) ) \bigr\} > 0
\]
( with uniform inclusion as due to the fact that and do not depend on ) , and that for , the weight is strictly positive , we can guarantee that then the proof concludes by taking in ( [ p2observabilityinequalitygeneral ] ) with given by ( [ observationoperators ] ) the observability constant which is independent of . * the admissibility inequality . * using the _ direct inequality _ in ingham theorem ( cf . , pp . 60 , ( 4.8 ) ) , we can also prove that the inequality ( [ p2directinequalitygeneral ] ) , with as in ( [ observationoperators ] ) , holds uniformly as for all , with , where is the constant of the direct -ingham inequality and * 1 . the observability inequality . * in this section , will be an _ odd _ number . we consider the space containing _ piecewise linear functions _ given below and the space of the discrete functions whose _ nodal components _ are given by a _ bi - grid algorithm _ , i.e. the even components are arbitrarily given and the odd ones are computed as the average of the two even neighboring values : we also define the subspace of the aim of this section is to prove that the observability inequality ( [ p2observabilityinequalitygeneral ] ) still holds uniformly as for initial data in the bi - grid subspace introduced in ( [ p2subsbigrid ] ) : [ theorembigrid ] for all and all initial data in the adjoint problem ( [ p2adjoint ] ) belonging to introduced in ( [ p2subsbigrid ] ) , the observability inequality ( [ p2observabilityinequalitygeneral ] ) with given by ( [ observationoperators ] ) holds uniformly as . note that , in the bi - grid filtering mechanism we have designed , the data under consideration have been taken , before filtering through the classical bi - grid algorithm , to be piecewise linear in each interval , , which imposes a further restriction . this allows us to obtain the sharp observability time . the bi - grid filtering algorithm proposed in theorem [ theorembigrid ] yields the optimal observability time , i.e. the characteristic one . this is due to the fact that for a numerical scheme the minimal time required for the observability to hold is , where is the minimal group velocity involved in the corresponding solution . from our analysis , we will see that the bi - grid filtering algorithm above acts mainly as a fourier truncation of the whole optic diagram and of the second half ( the high frequency one ) of the acoustic one . consequently , taking into account \( \min_{\eta } \partial_{\eta } \lambda^{a } ( \eta ) \) on the first half of the acoustic diagram , we conclude that and then the observability time of the numerical scheme is sharp : . the following two auxiliary results hold : if the initial data in ( [ p2adjoint ] ) belong to , then the resonant fourier coefficients in ( [ p2adjointsol ] ) vanish , i.e.
and the optic and acoustic ones are related by the following two identities : and [ proplineardata ] taking squares in ( [ opticalacoustical1 ] ) and ( [ opticalacoustical2 ] ) and in view of ( [ p2eignormalized ] ) , we deduce that we will prove only ( [ nullresonant ] ) and ( [ opticalacoustical1 ] ) , the proof of ( [ opticalacoustical2 ] ) being similar to the one for ( [ opticalacoustical1 ] ) . observe that the fourier representation of the identity that characterizes is
\[
\cdots \sin ( k\pi x_{j+1/2 } ) + ( \widehat{u}^{r}_{+ } + \widehat{u}^{r}_{- } ) \frac{ \sqrt{15 } }{ 2\sqrt{2 } } \sin ( ( n+1 ) \pi x_{j+1/2 } ) ,
\]
for all . multiplying ( [ bigrid1 ] ) by , , adding in and taking into account that , for all , we conclude the two identities ( [ nullresonant ] ) and ( [ opticalacoustical1 ] ) . the total energy of data in ( [ p2adjoint ] ) can be written only in terms of the nodal components and coincides with the one of the -finite element method . taking into account the form of the fourier coefficients ( [ nullresonant ] ) and ( [ opticalacoustical ] ) corresponding to linear initial data , we obtain that the fourier representation of the total energy ( [ p2energylineardata ] ) is as follows :
\[
\mathcal{e}_{h } ( \mathbf{u}_{h}^{0 } , \mathbf{u}_{h}^{1 } ) = \frac{1}{2 } \sum_{k=1}^{n } \lambda^{a , k}_{h } \Big [ \Big ( 1 + \frac{ \lambda^{o , k } }{ \lambda^{a , k } } \frac{ w_{1 } ( \lambda^{a , k } ) }{ w_{1 } ( \lambda^{o , k } ) } \Big ) + \Big ( 1 + \frac{ w_{1 } ( \lambda^{a , k } ) }{ w_{1 } ( \lambda^{o , k } ) } \Big ) \cdots \Big ] .
\]
the second auxiliary result establishes that for initial data in the bi - grid subspace in ( [ p2subsbigrid ] ) , the high frequency fourier coefficients on the acoustic branch can be evaluated in terms of the low frequency ones : [ proplowhighfreqacoustical ] for each element introduced in ( [ p2subsbigrid ] ) , the following identities hold : and for all , with taking squares in ( [ lowhighfreqacou1 ] ) - ( [ lowhighfreqacou2 ] ) , we obtain that where due to the orthogonality in of the eigenvectors in , to the fact that and using the representation ( [ p2eignormalized ] ) of the normalized eigenvectors , the following identity holds now , using the fact that , we have taking ( [ p2spectral5 ] ) into account , we have for , we obtain . since , we obtain ( [ lowhighfreqacou1 ] ) by equating the expressions from the identity ( [ proof2 ] ) corresponding to and to . the proof of ( [ lowhighfreqacou2 ] ) is similar , based on the fact that , from which for we obtain that , which concludes ( [ nullmidfrequency ] ) .
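the bi - grid preconditioning of the nodal components used above is elementary to implement . the following is a minimal sketch of the averaging step on a uniform grid with homogeneous dirichlet ends ; it builds only the nodal values ( for piecewise linear data the midpoint values are likewise averages of the two neighbouring nodes ) .

```python
import numpy as np

def bigrid_nodal_data(v):
    """Bi-grid class of nodal data: the even (coarse-grid) components are
    given by v, with v[0] = v[-1] = 0 for the Dirichlet ends, and each odd
    component is the average of its two even neighbours."""
    v = np.asarray(v, dtype=float)
    u = np.empty(2 * v.size - 1)
    u[0::2] = v                        # coarse-grid values at the even nodes
    u[1::2] = 0.5 * (v[:-1] + v[1:])   # linear interpolation at the odd nodes
    return u

coarse = np.concatenate(([0.0], np.random.default_rng(3).standard_normal(9), [0.0]))
print(bigrid_nodal_data(coarse))
```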
replacing the fourier representations ( [ lowhighfreqacou3 ] ) and ( [ lowhighfreqacou4 ] ) into the total energy with linear initial data ( [ p2energylineardatafourier ] ) , we obtain that the energy corresponding to initial data in ( [ p2subsbigrid ] ) is given by ( [ p2energlinearbigriddatafourier ] ) , where the low and high frequency coefficients are given by and for any which does not depend on and any solution ( [ p2adjointsol ] ) of ( [ p2adjoint ] ) , let us introduce its projection on the first frequencies of the acoustic branch to be observe that the projection ( [ p2projacoudelta ] ) is still a solution of ( [ p2adjoint ] ) , therefore its total energy is conserved in time . set the following result provides a bound of the total energy of the solutions of ( [ p2adjoint ] ) with initial data as in ( [ p2subsbigrid ] ) in terms of the total energy of their projections on the first half of the acoustic mode : for any solution of ( [ p2adjoint ] ) with initial data introduced in ( [ p2subsbigrid ] ) , there exists a constant which does not depend on such that [ propprojacousticallowfreq ] in order to obtain the estimate ( [ estimprojacousticallowfreq ] ) , we claim that it is sufficient to prove that there exist independent of such that and , for all . assuming this for a moment , we can take for which ( [ estimprojacousticallowfreq ] ) holds . let us analyze the boundedness of the four coefficients . observe that involves the product of and for . but is singular only for , whereas for , due to the increasing nature of , we have . also is singular as , but since is decreasing in we have for all . in the same way , contains the product of with , for . the second factor has a singularity as , but is evaluated for which is far from the singularity for all . since contains a factor , we deduce that has the same singularities as , i.e. , but since we work on the first half of the acoustic diagram , is far from that singularity . we conclude the existence of independent of such that . the coefficient contains two terms . the first of them is constituted by the factors and . since is not singular for any , then is bounded for all . on the other hand , has three singularities : , and . since all -s situated on the first half of the acoustic mode are far from all three singularities , we deduce the boundedness of . the second term is a product of four factors : , , and . in view of our previous analysis , we deduce the boundedness of the last three factors . the first factor blows up like for small . nevertheless , in the same range of -s , behaves like , compensating in this way the singularity of the first factor , so that is bounded for all . the coefficient also contains two terms . the first of them includes the factors : , , and . we have already analyzed the first , the third and the fourth ones . the second one blows up like for small . but , as we said , behaves like for small , compensating the singularity of the second factor , so that is bounded for all . the second term in contains the factors , , and and is bounded by the same arguments we used for the first term . consequently , there exist such that , which concludes the proof . [ remark3 ] set and . observe that and , where is the space in ( [ p2subsbigrid ] ) . we want to point out that the result of theorem [ theorembigrid ] is no longer true when is replaced by , with , so that the condition on the initial data to be linear is sharp .
indeed , when replacing by , in particular in ( [ p2energlinearbigriddatafourier ] ) has to be substituted by one can show that , for , it is not longer true that as , so that this factor can not compensate the singularity of as , like for .consequently , for , at least is not bounded for .proposition [ proplowhighfreqacoustical ] ensures that the total energy of initial data in introduced in ( [ p2subsbigrid ] ) is uniformly bounded by the energy of their projection on the first half of the acoustic mode .on the other hand , theorem [ theoremfouriertruncation ] guarantees that the observability inequality ( [ p2observabilityinequalitygeneral ] ) with as in ( [ observationoperators ] ) holds uniformly as in the class of truncated data lying on the first half of the acoustic mode .combining these two facts , one can apply a dyadic decomposition argument as in and conclude the proof of theorem [ theorembigrid ] .* 2 . the admissibility inequality . * in the rest of this section , our aim is to prove the direct inequality ( [ p2directinequalitygeneral ] ) for the boundary operator in ( [ observationoperators ] ) using the spectral identities ( [ observabilityeigenvectorsao ] ) and ( [ observabilityeigenvectorsr ] ) .firstly , let us observe that for all matrix operator and all , we have the following inequality : for any matrix , the matrix norm involved in the right hand side of ( [ directineq1 ] ) is defined as in the above definition of the norm of a matrix , we can reduce the set of test functions to introduced in ( [ p2seteigenvectors ] ) , which is an orthonormal basis in . let us remark that for any eigenfunction , the corresponding eigenvalue verifies the identity consequently , for any matrix and , we have for in ( [ observationoperators ] ) , using the identities ( [ directineq2 ] ) , ( [ observabilityeigenvectorsao ] ) and ( [ observabilityeigenvectorsr ] ) , we conclude that is a quantity independent of .in this section , we describe the algorithm of constructing the discrete controls of minimal -norm and we prove their convergence towards the continuous hum boundary control in ( [ conthumcontrol ] ) as , under the hypothesis that both inverse and direct inequalities ( [ p2observabilityinequalitygeneral ] ) and ( [ p2directinequalitygeneral ] ) hold uniformly as .as we saw in the previous sections , the above hypothesis holds when the initial data in the adjoint system ( [ p2adjoint ] ) is filtered through a fourier truncation or a bi - grid algorithm . using the _ admissibility inequality _ ( [ p2directinequalitygeneral ] ) and the _ observability one _ ( [ p2observabilityinequalitygeneral ] ), one can prove the _ continuity _ and the _ uniform coercivity _ of defined by ( [ p2functionalgeneral ] ) on , where can be both the truncated space ( [ p2subsptruncation ] ) or the bi - grid one ( [ p2subsbigrid ] ) .moreover , it is _ strictly convex_. applying the _ direct method for the calculus of variations _ ( dmcv ) ( cf . ) , one can guarantee the existence of an unique minimizer of , i.e. 
: moreover , the euler - lagrange equation ( [ p2eulerlagrange ] ) associated to characterizes the optimal control ( [ p2optimalcontrol ] ) .remark that when the space of initial data in ( [ p2adjoint ] ) is restricted to a subspace , for example the ones given by ( [ p2subsptruncation ] ) or ( [ p2subsbigrid ] ) , the exact controllability condition ( [ p2exactcontrol ] ) holds for all .this does not imply that the final state in the controlled problem ( [ p2controlledpbm ] ) is exactly controllable to the rest , but its orthogonal projection from on the subspace , i.e. set to be the last component of the control in ( [ p2optimalcontrol ] ) ( the other ones vanish ) .since and taking into account the inverse inequality ( [ p2observabilityinequalitygeneral ] ) , we obtain : where and is the observability constant in ( [ p2observabilityinequalitygeneral ] ) under filtering .set , , and be the space of square summable sequences . since is a hilbertian basis in each , , the initial data to be controlled in the continuous problem ( [ contwavecontrolled ] ) admits the following fourier decomposition : moreover , their -norm has the following fourier representation : since the set introduced in ( [ p2seteigenvectors ] ) is a basis in , the initial data to be controlled in the discrete problem ( [ p2controlledpbm ] )admit the following decomposition with moreover , their -norm can be written in terms of the fourier coefficients as follows : ) , we consider initial data with the following two properties : and then where is the last component of the discrete optimal control given by ( [ p2optimalcontrol ] ) obtained by the minimization of the functional on the subspace defined in ( [ p2subsptruncation ] ) or in ( [ p2subsbigrid ] ) and is the continuous hum control ( [ conthumcontrol ] ) . moreover , if the convergences in ( [ p2convdatacontracoustical ] ) and ( [ p2convdatacontropt ] ) are strong , then the convergence of controls in ( [ convcontr ] ) is also strong .[ theoremweakconvergence ] firstly , let us observe that from ( [ p2convdatacontracoustical ] ) and ( [ p2convdatacontropt ] ) , we obtain that there exists a constant independent of such that by combining ( [ boundp2control ] ) and ( [ p2boundeddatacontrol ] ) , we get the uniform boundedness as of the discrete control in , so that it is sufficient to prove that the weak limit coincides with the continuous hum control given by ( [ conthumcontrol ] ) . the control can be characterized as the unique control in ( [ contwavecontrolled ] ) which can be expressed as the space derivative of a solution of the adjoint problem ( [ contwaveadjoint ] ) evaluated at .then , we have to prove that is an admissible control of the continuous wave equation , i.e. it verifies the identity ( [ identitycontcontrol ] ) , and that , where is the solution of the adjoint problem ( [ contwaveadjoint ] ) for some initial data .* step i - the weak limit is a control in the continuous problem ( [ contwavecontrolled ] ) . * since is an orthonormal basis in , then the fact that is a control in ( [ contwavecontrolled ] ) , so it satisfies ( [ identitycontcontrol ] ) , it is equivalent to prove ( [ identitycontcontrol ] ) for all initial data of the form , .the solution of the adjoint problem ( [ contwaveadjoint ] ) with this kind of initial data is .the condition ( [ identitycontcontrol ] ) is equivalent to the following one : let us check that satisfies ( [ identitycontrolfourier ] ) . 
to do it ,we distinguish between the two cases of subspaces of filtered data .firstly , let us consider the case of truncated initial data , i.e. is given by ( [ p2subsptruncation ] ) .a particular class of initial data in is for which the solution of the discrete adjoint problem ( [ p2adjoint ] ) is , for all . from ( [ p2eulerlagrange ] ), we see that the discrete control verifies the identity let us fix ( independent of ) . in that case , as strongly in , so that , taking into account the weak convergence ( [ p2weakconvcontrol ] ) , we can pass to the limit as in the left hand side of ( [ p2eulerlagrangefourier ] ) and we obtain the left hand side of ( [ identitycontrolfourier ] ) with substituted by . on the other hand , taking into account the condition ( [ p2convdatacontracoustical ] ) , which is valid for all test sequences in and in particular for the basis functions of , ( meaning that the weak convergence in is a pointwise convergence ) , and additionally the fact that and as for each fixed , passing to the limit as in the right hand side of ( [ p2eulerlagrange ] ) , we obtain the right hand side of ( [ identitycontrolfourier ] ) .then , the weak limit of the optimal control obtained by minimizing the functional on in ( [ p2subsptruncation ] ) is a control for the continuous problem .let us consider now the case of linear initial data given by a bi - grid algorithm , i.e. is given by ( [ p2subsbigrid ] ) . taking into account propositions [ proplineardata ] and [ proplowhighfreqacoustical ], we see that for initial data the fourier representation ( [ fourierrepresentationdata ] ) has the more particular form the basis function for the space is given by where and is defined by ( [ normh0 ] ) .let us fix and consider the homogeneous problem ( [ p2adjoint ] ) with initial data , , for which the solution takes the form where the low frequency coefficients are and the high frequency ones are and by considering the particular class of solutions given by ( [ typicalsolutionbigrid ] ) into ( [ p2eulerlagrange ] ) , we see that the control satisfies the identity where the error terms are and passing to the limit as in the left hand side and in the first term in the right hand side of ( [ p2eulerlagrangefourierbigrid ] ) can be done as for the truncated space in ( [ p2subsptruncation ] ) .therefore , in order to prove that the weak limit satisfies ( [ identitycontrolfourier ] ) , it is enough to show that the error terms are small as , i.e. from the fact that the -norm of the discrete control is uniformly bounded as , the cauchy - schwartz inequality and the explicit expressions of , and , we obtain with and where and are as in ( [ basisbigrid ] ) . on the other hand ,since the - norm of the initial data to be controlled is uniformly bounded as , we see that with and for a fixed , let us study the convergence as of the terms , , : * from ( introduced in ( [ normh0 ] ) ) , , , and , we see that as . * from , , and , we see that as .* remark that , , , but .nevertheless , , so that at the end as .* remark that and .observe that like .this compensates the singularity of so that as . 
* since it involves the factor .this cancels the singularity introduced by and ensures that as .* since it contains the factor which compensates the singularity of so that as .this concludes ( [ converrorbigrid ] ) and the fact that the weak limit of the sequence of discrete hum controls obtained by minimizing the functional over the bi - grid class in ( [ p2subsbigrid ] ) is a control in the continuous problem ( [ contwavecontrolled ] ) .* step ii - the weak limit is the normal derivative of a solution of the continuous adjoint problem ( [ contwaveadjoint ] ) . *consider ( which in what follows can be both the subspace in ( [ p2subsptruncation ] ) or the one in ( [ p2subsbigrid ] ) ) to be the minimum of the functional .due to the uniform nature of the observability inequality ( [ p2observabilityinequalitygeneral ] ) , is uniformly bounded , i.e. there exists a constant independent of such that due to property ( [ nullresonant ] ) , the resonant mode in the solution of the adjoint problem ( [ p2adjoint ] ) for initial data in the filtered space in ( [ p2subsptruncation ] ) or ( [ p2subsbigrid ] ) vanishes , so that the fourier representation of the total energy in the left hand side of ( [ unifboundminimizer ] ) is valid for both filtered spaces in ( [ p2subsptruncation ] ) and ( [ p2subsbigrid ] ) .remark however that the high frequency components vanish for data in the truncation subspace in ( [ p2subsptruncation ] ) . on the other hand , for data in the bi - grid space in ( [ p2subsbigrid ] ) , the relations between the optic and the acoustic modes and the high frequencies in the acoustic mode and the lowest ones described in propositions [ proplineardata ] and [ proplowhighfreqacoustical ] hold . from ( [ unifboundminimizer ] ), we deduce that set , observe that and denote by the corresponding solution of ( [ contwaveadjoint ] ) .firstly , we prove that in fact , for arbitrary functions and , we will prove the following estimate : where is the projection on the acoustic branch defined by ( [ p2projacoudelta ] ) . in a similar way , we define the projection on the optic branch , . in order to prove ( [ estimateweakconv ] ), we decompose its right hand side as follows : taking into account that , by integration by parts , we have from the cauchy - schwartz and the admissibility inequality , the bound ( [ unifboundminimizer ] ) and the fact that for all , we deduce that ) , we conclude ( [ p2weakconvdiscnormalderiv ] ) by using the following three ingredients : * the weak convergence ( [ p2convdatacontracoustical ] ) combined with the strong convergence , and allowing us to pass to the limit as in the sense of in the sum in the right hand side of so that we can guarantee that the first term in the right hand side of ( [ estimateweakconv ] ) is small as . * the density of in , allowing to choose so that is arbitrarily small . * an appropriate choice of the mesh size according to , so that is arbitrarily small .let us check that in .indeed , using ( [ p2weakconvcontrol ] ) and then ( [ p2weakconvdiscnormalderiv ] ) , we have : for an arbitrary .therefore , in and also , which , jointly with the continuous observability inequality ( [ contobsineq ] ) gives that in and then in , .this means that once we have identified that the weak limit of the discrete controls is the continuous hum control , we have the -convergence of the discrete minimizer to the continuous one .* step iii - strong convergence of the discrete controls . 
* in order to prove that converges strongly in to as , it is enough to prove that using as test function in ( [ p2eulerlagrange ] ) the minimizer , we have that where let us remark that using the strong convergence of the acoustic part of the initial data to be controlled ( [ p2convdatacontracoustical ] ) and the boundedness of the energy of the minimizer of , on the other hand , taking into account the uniform boundedness of the energy of the minimizer of , we get which concludes ( [ p2convl2normcontr ] ) and the strong convergence of the optimal control . in , it was proved that , for initial data in the continuous control problem ( [ contwavecontrolled ] ) belonging to the more regular space , the numerical controls obtained for the finite difference or the linear finite element semi - discretization of the wave equation ( [ contwavecontrolled ] ) converge to the continuous hum controls with an error order . this is due to the fact that , where can be each one of the dispersion relations for the finite difference or finite element approximation of the wave equation . in , we observed the fact that the acoustic dispersion relation of the - finite element method approximates the continuous one with error order for all , so that the convergence error for the numerical controls obtained by the bi - grid algorithm in the quadratic approximation of the wave equation increases to under the same regularity assumptions on the continuous initial data to be controlled . [ remark5 ] * all the results in this paper can be extended to finite element methods of arbitrary order , with the additional difficulty that when computing the eigenvalues , the quadratic equation ( [ p2spectral6 ] ) has to be replaced by a -th order algebraic equation in which is technically complicated to solve explicitly . the same difficulty arises when passing to several space dimensions . * the results in , providing a general method to obtain uniform observability results for time discretizations of conservative systems , lead to the extension of our observability results for the - space semi - discretization to fully discrete conservative approximations of the wave equation . * the extension of the results in this paper to non - uniform meshes is a completely open problem . * the last open problem we propose is related to and concerns the dispersive properties of the schrödinger equation approximated in space using the - finite element method . designing appropriate bi - grid algorithms taking care of all the singularities of both group velocity and acceleration simultaneously is an open problem .
ainsworth m. , _ discrete dispersion relation for hp - version finite element approximation at high wave number _ , siam j. numer . anal . , 42(2004 ) , 553 - 575 .
castro c. , micu s. , _ boundary controllability of a linear semi - discrete wave equation derived from a mixed finite element method _ , numer . math . , 102(3)(2006 ) , 413 - 462 .
dacorogna b. , _ direct methods in the calculus of variations _ , springer - verlag , 1989 .
ervedoza s. , _ spectral conditions for admissibility and observability of wave systems _ , numer . math . , 113(3)(2009 ) , 377 - 415 .
ervedoza s. , _ admissibility and observability for schrödinger systems : applications to finite element approximation schemes _ , asymptot . anal . , to appear .
ervedoza s. , zheng c. , zuazua e. , _ on the observability of time - discrete conservative linear systems _ , j. functional analysis , 254(12)(2008 ) , 3037 - 3078 .
ervedoza s. , zuazua e.
, _ the wave equation : control and numerics _ , in _ control and stabilization of pdes _ , p. m. cannarsa and j. m. coron eds . , lecture notes in mathematics , cime subseries , springer verlag , to appear .
glowinski r. , _ ensuring well posedness by analogy : stokes problem and boundary control for the wave equation _ , j. comput . physics , 103(2)(1992 ) , 189 - 221 .
glowinski r. , li c. h. , lions j. l. , _ a numerical approach to the exact boundary controllability of the wave equation . i. dirichlet controls : description of the numerical methods _ , japan j. appl . math . , 7(1)(1990 ) , 1 - 76 .
hughes t. r. j. , reali a. , sangalli g. , _ duality and unified analysis of discrete approximations in structural dynamics and wave propagation : comparison of p - method finite elements with k - method nurbs _ , comput . methods appl . mech . engrg . , 197(49 - 50)(2008 ) , 4104 - 4124 .
ignat l. , zuazua e. , _ numerical dispersive schemes for the nonlinear schrödinger equation _ , siam j. numer . anal . , 47(2)(2009 ) , 1366 - 1390 .
ignat l. , zuazua e. , _ convergence of a two - grid algorithm for the control of the wave equation _ , j. eur . math . soc . , 11(2)(2009 ) , 351 - 391 .
infante j. - a. , zuazua e. , _ boundary observability for the space semidiscretization of the wave equation _ , m2an , 33(1999 ) , 407 - 438 .
lions j. l. , _ contrôlabilité exacte , perturbations et stabilisation des systèmes distribués _ , vol . 1 , masson , paris , 1988 .
loreti p. , komornik v. , _ fourier series in control theory _ , springer , 2004 .
loreti p. , mehrenberger m. , _ an ingham type proof for a two - grid observability theorem _ , esaim : cocv , 14(3)(2008 ) , 604 - 631 .
marica a. , zuazua e. , _ localized solutions and filtering mechanisms for the discontinuous galerkin semi - discretizations of the wave equation _ , c. r. acad . sci . paris ser . i , 348(2010 ) , 1087 - 1092 .
marica a. , zuazua e. , _ on the quadratic finite element approximation of waves : propagation , observation , control and numerical implementation _ , in proceedings of `` cfl-80 : a celebration of 80 years of the discovery of cfl condition '' , c. kubrusly and c. a. moura , eds . , springer proceedings in mathematics , springer verlag , to appear .
micu s. , _ uniform boundary controllability of a semidiscrete 1-d wave equation with vanishing viscosity _ , siam j. control optim . , 47(6)(2008 ) , 2857 - 2885 .
negreanu m. , zuazua e. , _ convergence of a multigrid method for the controllability of a wave equation _ , c. r. math . acad . sci . paris , 338(2004 ) , 413 - 418 .
tcheugoué tébou l. r. , zuazua e. , _ uniform boundary stabilization of the finite difference space discretization of the wave equation _ , advances comput . math . , 26(2007 ) , 337 - 365 .
zuazua e. , _ propagation , observation , control and numerical approximations of waves _ , siam review , 47(2)(2005 ) , 197 - 243 .
we study the propagation , observation and control properties of the wave equation on a bounded interval discretized in space using the quadratic - classical finite element approximation . a careful fourier analysis of the discrete wave dynamics reveals two different branches in the spectrum : the _ acoustic _ one , of physical nature , and the _ optic _ one , related to the perturbations that this second - order finite element approximation introduces with respect to the one . on both modes there are high frequencies with vanishing group velocity as the mesh size tends to zero . this shows that the classical property of continuous waves of being observable from the boundary fails to be uniform for this discretization scheme . as a consequence of this , the controls of the discrete waves may blow - up as the mesh size tends to zero . to remedy these high frequency pathologies , we design filtering mechanisms based on a bi - grid algorithm for which one can recover the uniformity of the observability constant in a finite time and , consequently , the possibility to control with uniformly bounded - controls appropriate projections of the solutions . this also allows showing that , by relaxing the control requirement , the controls are uniformly bounded and converge to the continuous ones as the mesh size tends to zero .
true to `` stigler s law of eponymy '' ( stigler , 1980 ) , simpson s paradox has a long history in statistics going back to yule s ( 1903 ) ` spurious ' association , but it is currently credited to simpson ( 1951 ) for reframing it as a ` paradox ' ; see blyth ( 1972 ) . the paradox seems to have a number of alternative conceptions , and thus , it is often described interchangeably as a counter - intuitive statistical result pertaining to :
( a ) statistical associations that reverse themselves , such as `` a marginal association can have a different direction from each conditional association '' ( agresti , 2013 ) .
( b ) either the magnitude or the direction of an association between two variables is influenced by a third variable , such as `` the association between a pair of variables reverses sign upon conditioning on a third variable '' ( pearl , 2014 ) .
( c ) apparent statistical associations that after closer scrutiny of the data are rendered ` spurious ' ( yule , 1903 ) .
the recent discussions in statistics have focused on adopting one of the perspectives ( a)-(c ) , and using actual or hypothetical data to either explain away the paradox or criticize other proposed ` solutions ' . the current dominating view revolves around perspective ( b ) , which differs from ( a ) in so far as it emphasizes the causal dimension of conditioning on a confounder ; see pearl ( 2009 ) , spirtes et al . armistead ( 2014 ) put forward a dissenting view by arguing that perspective ( b ) is too narrow to explain the different facets of the paradox : a strong case can be made that simpson s paradox has different dimensions that are often conflated or ignored in the literature . as argued by wasserman ( 2004 ) : the primary aim of this paper is to shed light on the different conceptions of the paradox by bringing out the similarities and differences between perspectives ( a)-(c ) . the key is provided by yule s idea of ` spuriousness ' in ( c ) . beginning with yule ( 1903 ) , the problem of ` fictitious ' associations and ` spurious ' correlations was a recurring theme in yule s papers that culminated in yule ( 1926 ) on `` nonsense - correlations '' . although he shed some light on the issues involved , he did not succeed in establishing a direct link between spurious associations and invalid probabilistic assumptions , for reasons to be discussed in the sequel . the notion of statistical misspecification can be used to formalize the term ` spurious ' as ` statistically untrustworthy ' results , stemming from unreliable inference procedures . this enables one to distinguish between two distinct cases of association reversal :
case 1 . the reversal is statistically trustworthy due to statistical adequacy .
case 2 . the reversal is statistically untrustworthy due to statistical misspecification .
it turns out that the statistical misspecification perspective suggests that in both cases there is nothing counterintuitive to explain . in section 2 , we discuss the case where the reversal is statistically trustworthy due to the fact that the statistical models involved are _ statistically adequate _ : the invoked probabilistic assumptions are valid for the particular data .
when this is not the case , the inference results are likely to be statistically untrustworthy ( spurious ) . this is discussed in section 3 using two empirical examples that bring out the distinction between statistical and substantive misspecification . the statistical misspecification argument is illustrated further in section 4 using several widely discussed examples of the paradox . in section 5 , we revisit the causal ` resolution ' of the paradox in an attempt to delineate the modeling and inference issues raised by the statistical misspecification perspective . consider the case of a linear regression ( lr ) model :
\[
y_{t } = \beta_{0 } + \beta_{1 } x_{1t } + \beta_{2 } x_{2t } + u_{t } , \quad ( u_{t } \mid x_{1t } = x_{1t } , x_{2t } = x_{2t } ) \backsim \textsf{niid } ( 0 , \sigma_{u}^{2 } ) , \ t \in \mathbb{n } , \label{rm1}
\]
where ` niid ' stands for ` normal , independent and identically distributed ' . it is often insufficiently appreciated that the error assumptions imply a particular statistical parameterization for the unknown parameters in terms of the moments of the observable process underlying the data ( see appendix ) . alternatively , one can derive the parameterization directly using the joint distribution of the observable random variables involved :
\[
\begin{pmatrix } y_{t } \\ x_{1t } \\ x_{2t } \end{pmatrix } \backsim \textsf{niid } \left ( \begin{pmatrix } \mu_{1 } \\ \mu_{2 } \\ \mu_{3 } \end{pmatrix } , \begin{pmatrix } \sigma_{11 } & \sigma_{12 } & \sigma_{13 } \\ \sigma_{12 } & \sigma_{22 } & \sigma_{23 } \\ \sigma_{13 } & \sigma_{23 } & \sigma_{33 } \end{pmatrix } \right ) . \label{j}
\]
in this case , the regression and skedastic functions take the form :
\[
e ( y_{t } \mid x_{1t } = x_{1t } , x_{2t } = x_{2t } ) = \beta_{0 } + \beta_{1 } x_{1t } + \beta_{2 } x_{2t } , \quad var ( y_{t } \mid x_{1t } = x_{1t } , x_{2t } = x_{2t } ) = \sigma^{2 } ,
\]
where the parameterizations of are ( table 1 ) :
\[
\beta_{0 } = \mu_{1 } - \beta_{1 } \mu_{2 } - \beta_{2 } \mu_{3 } , \quad \beta_{1 } = \frac{ \sigma_{12 } \sigma_{33 } - \sigma_{13 } \sigma_{23 } }{ \sigma_{22 } \sigma_{33 } - \sigma_{23}^{2 } } , \quad \beta_{2 } = \frac{ \sigma_{13 } \sigma_{22 } - \sigma_{12 } \sigma_{23 } }{ \sigma_{22 } \sigma_{33 } - \sigma_{23}^{2 } } , \label{b}
\]
\[
\sigma_{u}^{2 } = \sigma_{11 } - \sigma_{12 } \left ( \frac{ \sigma_{12 } \sigma_{33 } - \sigma_{13 } \sigma_{23 } }{ \sigma_{22 } \sigma_{33 } - \sigma_{23}^{2 } } \right ) - \sigma_{13 } \left ( \frac{ \sigma_{13 } \sigma_{22 } - \sigma_{12 } \sigma_{23 } }{ \sigma_{22 } \sigma_{33 } - \sigma_{23}^{2 } } \right ) = \sigma_{11 } - \sigma_{12 } \beta_{1 } - \sigma_{13 } \beta_{2 } . \label{s}
\]
these results offer the key to elucidating perspectives ( a)-(b ) on simpson s paradox . * perspective ( a ) on simpson s paradox . * the correlation between and is positive ( ) , but the coefficient in ( [ rm1 ] ) is negative ( ) . * is this reversal of association possible , and under what circumstances ?
* in light of the parameterization of in ( [ b ] ) , its numerator is negative when
\[
\left [ ( \sigma_{12 } \sigma_{33 } - \sigma_{13 } \sigma_{23 } ) < 0 \right ] \ \rightarrow \ \left [ \frac{ \sigma_{13 } \sigma_{23 } }{ \sigma_{33 } } > \sigma_{12 } \right ] .
\]
multiplying both terms in the last expression by yields
\[
\frac{ \sigma_{13 } \sigma_{23 } }{ \sigma_{33 } \sqrt{ \sigma_{11 } \sigma_{22 } } } = \rho_{13 } \rho_{23 } > \rho_{12 } = \frac{ \sigma_{12 } }{ \sqrt{ \sigma_{11 } \sigma_{22 } } } ,
\]
where and . hence , and occur when the following conditions hold :
( i ) the correlation coefficients and have the _ same sign _ ,
( ii ) the product of and is greater than , i.e. and
( iii ) the determinant of the correlation matrix of is positive :
\[
\det \left [ corr ( \mathbf{z}_{t } ) \right ] = 1 - \rho_{12}^{2 } - \rho_{13}^{2 } - \rho_{23}^{2 } + 2 \rho_{12 } \rho_{13 } \rho_{23 } > 0 .
\]
condition ( iii ) ensures that in ( [ j ] ) is proper , giving rise to a well - defined conditional distribution ; see spanos and mcguirk ( 2002 ) . assuming without any loss of generality , let the relevant correlations be , which satisfy ( i)-(iii ) above : ( a ) for values , ; ( b ) for values , . note that the sign of reflects the common sign of . in light of these results , it is clear that there is nothing paradoxical , or surprising , about the reversal of sign between the simple correlation [ stemming from the joint distribution ] and the regression coefficient [ stemming from the conditional distribution ] . this reversal is due to the conditions ( i)-(iii ) above , which are easily testable in practice ; see spanos ( 2006b ) . it is well - known that there is a direct connection between and the regression coefficient of in the context of the simple linear regression
\[
y_{t } = \alpha_{0 } + \alpha_{1 } x_{1t } + \varepsilon_{t } , \quad ( \varepsilon_{t } \mid x_{1t } = x_{1t } ) \backsim \textsf{niid } ( 0 , \sigma_{\varepsilon}^{2 } ) , \ t \in \mathbb{n } , \label{rm2}
\]
whose implicit statistical parameterization of is
\[
\alpha_{0 } = \mu_{1 } - \alpha_{1 } \mu_{2 } , \quad \alpha_{1 } = \frac{ \sigma_{12 } }{ \sigma_{22 } } , \quad \sigma_{\varepsilon}^{2 } = \sigma_{11 } - \frac{ \sigma_{12}^{2 } }{ \sigma_{22 } } .
\]
this is because is a scaled reparameterization of :
\[
\rho_{12 } = \frac{ \sqrt{ \sigma_{22 } } }{ \sqrt{ \sigma_{11 } } } \alpha_{1 } . \label{cr}
\]
in the above numerical example , and , confirming the sign reversal . this implies that one can consider the question of association reversal by comparing the inference results in ( [ rm1 ] ) and ( [ rm2 ] ) . in conclusion , it is very important to emphasize that in the above example , both lr models , ( [ rm1 ] ) and ( [ rm2 ] ) , are assumed to be statistically adequate : their probabilistic assumptions are valid .
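the sign - reversal conditions ( i)-(iii ) are straightforward to verify numerically from the parameterization in ( [ b ] ) . the sketch below works with standardized variables ( all variances equal to one ) ; the particular correlation values are illustrative assumptions satisfying ( i)-(iii ) , not the example's own numbers .

```python
def beta1_from_correlations(r12, r13, r23):
    """Partial regression coefficient of x1 in the two-regressor model for
    standardized variables (sigma_ii = 1), from the parameterization (b):
    beta1 = (s12*s33 - s13*s23) / (s22*s33 - s23**2)."""
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    assert det > 0, "condition (iii): correlation matrix must be positive definite"
    return (r12 - r13 * r23) / (1 - r23**2)

# illustrative values: r13 and r23 share a sign and r13*r23 = 0.36 > r12
r12, r13, r23 = 0.2, 0.6, 0.6
print(r12, beta1_from_correlations(r12, r13, r23))  # 0.2 versus -0.25: sign reversal
```

the marginal association ( positive correlation ) and the partial coefficient point in opposite directions precisely because the product of the other two correlations exceeds the first , exactly as conditions ( i)-(iii ) require .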
in the case of real data on , one needs to establish the statistical adequacy of both models using comprehensive misspecification testing . what are the probabilistic assumptions that need to hold for data ? in this section we bring out more explicitly the probabilistic assumptions comprising the linear regression ( lr ) model with a view to illustrate the role of statistical misspecification in shedding light on the various aspects of simpson s paradox . traditionally , the probabilistic assumptions underlying the linear regression ( lr ) model are specified in terms of the error term ; see appendix . it turns out , however , that such specifications are often incomplete and sometimes include non - testable assumptions . table 1 specifies the lr model , generically defined by , in terms of the statistical generating mechanism ( gm ) and assumptions [ 1]-[5 ] that constitute a complete , internally consistent and testable set of assumptions in terms of the observable process underlying the data . this provides a purely probabilistic construal for the notion of a statistical model , viewed as a particular parameterization of the process . intuitively , the statistical model comprises the totality of probabilistic assumptions one imposes on the process with a view to render data a ` typical ' realization thereof . the ` typicality ' is testable using thorough misspecification testing ; see spanos ( 2006a ) .
table 1 : linear regression model
statistical gm : \( y_{t } = \beta_{0 } + \boldsymbol{\beta}_{1}^{\top } \mathbf{x}_{t } + u_{t } , \ t \in \mathbb{n } \)
[ 1 ] normality : \( ( y_{t } \mid \mathbf{x}_{t } = \mathbf{x}_{t } ) \backsim \mathsf{n } ( \cdot , \cdot ) \)
[ 2 ] linearity : \( e ( y_{t } \mid \mathbf{x}_{t } = \mathbf{x}_{t } ) = \beta_{0 } + \boldsymbol{\beta}_{1}^{\top } \mathbf{x}_{t } \)
[ 3 ] homoskedasticity : \( var ( y_{t } \mid \mathbf{x}_{t } = \mathbf{x}_{t } ) = \sigma^{2 } \)
[ 4 ] independence : \( \{ ( y_{t } \mid \mathbf{x}_{t } = \mathbf{x}_{t } ) , \ t \in \mathbb{n } \} \) is an independent process
[ 5 ] t - invariance : \( ( \beta_{0 } , \boldsymbol{\beta}_{1 } , \sigma^{2 } ) \) do not change with \( t \)
statistical parameterization : \( \beta_{0 } = e ( y_{t } ) - \boldsymbol{\beta}_{1}^{\top } e ( \mathbf{x}_{t } ) , \ \boldsymbol{\beta}_{1 } = [ cov ( \mathbf{x}_{t } ) ] ^{-1 } cov ( \mathbf{x}_{t } , y_{t } ) , \ \sigma^{2 } = var ( y_{t } ) - cov ( \mathbf{x}_{t } , y_{t } ) ^{\top } [ cov ( \mathbf{x}_{t } ) ] ^{-1 } cov ( \mathbf{x}_{t } , y_{t } ) \)
* statistical adequacy . * an estimated lr model is said to be _ statistically adequate _ when all assumptions [ 1]-[5 ] are valid for data . in practice , statistical adequacy can be appraised using comprehensive misspecification testing ; see spanos ( 1999 , 2015 ) . the importance of establishing statistical adequacy stems from the fact that it secures the statistical reliability of inference based on such a model . that is , the inference propositions associated with the lr model , including the optimal properties of the mle estimators and the relevant error probabilities of the t and f tests , are reliable in the sense that their actual sampling distributions approximate closely the theoretical ones derived by invoking the validity of assumptions [ 1]-[5 ] . * unreliability of inference . *
*Unreliability of inference*. When any subset of the assumptions [1]-[5] is invalid, the reliability of inference based on such procedures is called into question. Statistical misspecifications are likely to give rise to inconsistent estimators, as well as to induce sizeable discrepancies between the nominal (assumed) error probabilities and the actual ones in testing. For instance, when any of the assumptions [2], [4]-[5] are invalid, the OLS estimators of $(\beta_{0},\boldsymbol{\beta}_{1})$ are likely to be inconsistent, and the nominal error probabilities associated with the significance t-tests for the coefficients are likely to have significant discrepancies from the actual error probabilities; see Spanos and McGuirk (2001), Spanos (2010). Applying a t-test at a small nominal significance level when the actual type I error probability is substantially larger is likely to give rise to unreliable inferences. It is important to emphasize that for assumptions [4] and [5] to be testable, one needs to select an ordering of interest for the data. In the case of time-series data, the ordering of interest is invariably 'time', which is an interval scale variable. For cross-section data, however, there are often several orderings of interest, depending on the individual unit being observed, and the modeler needs to think about such potential orderings as they relate to [4]-[5]. Potential orderings for cross-section data can vary from gender (nominal scale) to age (ratio scale), etc.

Let us return to example 1, where the problem of association reversal can be viewed in the context of comparing the regression coefficients $\beta_{1}$ and $\alpha_{1}$ of $x_{1t}$ in two linear regression models:
\[
\text{Model 1: } y_{t}=\beta_{0}+\beta_{1}x_{1t}+\beta_{2}x_{2t}+u_{t},\qquad
\text{Model 2: } y_{t}=\alpha_{0}+\alpha_{1}x_{1t}+\varepsilon_{t},
\]
where Model 1 corresponds to ([rm1]) and Model 2 to ([rm2]). In the previous section it was argued that, when both models are statistically adequate, it could happen that the estimated coefficients $\widehat{\beta}_{1}$ and $\widehat{\alpha}_{1}$ differ in both sign and magnitude. There is, however, a sizeable literature on 'omitted variables' which would call Model 2 misspecified when $x_{2t}$ turns out to be statistically significant; see Greene (2011). In what sense is Model 2 misspecified if its assumptions [1]-[5] (Table 1) are valid? Similarly, the literature on causal modeling would test the significance of the covariances $\sigma_{13}$ and $\sigma_{23}$, as they relate to the regression coefficients, to decide whether $x_{2t}$ is a confounder; see Pearl (2011). How does this relate to the statistical misspecification perspective?
A closer look at the literature suggests that statistical misspecification is often conflated with substantive misspecification, using confusing and confused claims, such as: the OLS estimator $\widehat{\alpha}_{1}$ in Model 2 is an inconsistent estimator of $\beta_{1}$ in Model 1 (Greene, 2011), ignoring the fact that the two coefficients represent very different parameterizations: $\alpha_{1}=\sigma_{12}/\sigma_{22}$, whereas $\beta_{1}$ also involves the covariances $\sigma_{13}$, $\sigma_{23}$ and the variance $\sigma_{33}$. To make any sense of such comparisons, one needs to distinguish between _statistical_ and _substantive adequacy_, because the former requires only that assumptions [1]-[5] are valid for the data in hand. Assumptions [1]-[5] have nothing to do with the substantive claim that the LR model includes all 'substantively' relevant variables. The latter is a substantive assumption that pertains to the explanatory potential of the estimated model as it relates to the phenomenon of interest. Substantive inadequacy can arise from missing but relevant variables, false causal claims, etc. The crucial importance of this distinction stems from the fact that when Models 1-2 are statistically misspecified, both the test for an omitted variable, as well as the tests for deciding whether $x_{2t}$ is a confounder or a mediator, are likely to give rise to untrustworthy results; see Spanos (2006b). This distinction is also important when the term 'spurious' is employed without being qualified to differentiate between _statistically_ and _substantively spurious_ inference results. Indeed, the term 'spurious correlation' is often used to describe the case where the statistical significance of a correlation coefficient is taken at face value, and an attempt is made to explain it away using substantive arguments; see Sober (2001). More often than not, however, one can show that the statistical significance is more apparent than real, because it is just an untrustworthy result stemming from a statistically misspecified model.

The problem of 'spurious' associations, first noted by Pearson (1896), was high up on Yule's agenda during the first quarter of the 20th century, and he returned to it on several occasions; see Yule (1909, 1910, 1921). Yule (1926) is the culmination of his efforts to unravel the puzzle of 'spurious' results, using the high correlations between time series as an example. He used data measuring the ratio of Church of England marriages to all marriages ($x_{t}$) and the mortality rate ($y_{t}$) over the period 1866-1911 to demonstrate that their estimated correlation was both very high and statistically significant. He described this result as a 'nonsense-correlation', because no substantive connection between the two series could plausibly account for it, and he went on to reject any attempt, however ingenious, to rationalize such a statistical result on substantive grounds. Yule (1926) attempted to articulate the premise that 'nonsense-correlations' have something to do with the fact that his time series data are _not_ 'random series'. He could not establish a clear and direct link between 'spurious' associations and statistical misspecification, however, because he was missing two key components that were yet to be integrated into statistics. The first is the notion of a 'parametric statistical model', innovated by Fisher (1922), and the second is the theory of 'stochastic processes', founded by Kolmogorov (1933). The former comprises all the probabilistic assumptions imposed on the data, and the latter formalizes the notion of a 'random series' into a realization of an IID stochastic process, as well as departures from it in the form of probabilistic concepts for dependence and heterogeneity.
*Yule's reverse engineering*. Given that there was no notion of a prespecified parametric statistical model comprising the probabilistic assumptions imposed on the data, Yule resorted to 'reverse engineering': he went on to consider the formula for estimating the sample standard error and to elicit the implicit probabilistic assumptions that render it a 'good' estimator of the distribution standard error. Let us emulate Yule's reverse engineering using the sample correlation coefficient, which is the focus of his paper:
\[
\widehat{corr}(x_{t},y_{t})=\frac{\frac{1}{n}\sum_{t=1}^{n}(y_{t}-\overline{y})(x_{t}-\overline{x})}{\sqrt{\left[\frac{1}{n}\sum_{t=1}^{n}(x_{t}-\overline{x})^{2}\right]\left[\frac{1}{n}\sum_{t=1}^{n}(y_{t}-\overline{y})^{2}\right]}}, \tag{cor}
\]
where
\[
\overline{x}=\frac{1}{n}\sum_{t=1}^{n}x_{t},\quad \overline{y}=\frac{1}{n}\sum_{t=1}^{n}y_{t},\quad \widehat{Var}(x_{t})=\frac{1}{n}\sum_{t=1}^{n}(x_{t}-\overline{x})^{2},\quad \widehat{Var}(y_{t})=\frac{1}{n}\sum_{t=1}^{n}(y_{t}-\overline{y})^{2},\quad \widehat{Cov}(x_{t},y_{t})=\frac{1}{n}\sum_{t=1}^{n}(y_{t}-\overline{y})(x_{t}-\overline{x}),
\]
viewed as a 'good' estimator of the distribution correlation coefficient:
\[
corr(x_{t},y_{t})=\frac{Cov(x_{t},y_{t})}{\sqrt{Var(x_{t})\,Var(y_{t})}}.
\]
The first assumption implicit in these formulae is the _constancy_ of the moments:
\[
E(y_{t})=\mu_{1},\ E(x_{t})=\mu_{2},\ Var(y_{t})=\sigma_{11},\ Var(x_{t})=\sigma_{22},\ Cov(x_{t},y_{t})=\sigma_{12},\ t\in\mathbb{N},
\]
which corresponds to a form of the _Identically Distributed (ID) assumption_. The formulae for $\widehat{Var}(x_{t})$, $\widehat{Var}(y_{t})$ and $\widehat{Cov}(x_{t},y_{t})$ implicitly assume _non-correlation_ over $t\neq s$; otherwise they should have included covariance terms over $t\neq s$. Yule also sought to unveil the implicit distributional assumption, because the sample moments are not always 'optimal' estimators of the distribution moments. For instance, the estimators in ([cor]) will be 'optimal' under Normality, but they will be non-optimal if the distribution is uniform; see Carlton (1946).
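A minimal sketch of the estimator in ([cor]) built from the $1/n$ moment estimators, assuming numpy; the simulated IID normal series is a hypothetical stand-in for data satisfying the implicit assumptions:

```python
import numpy as np

def yule_corr(x, y):
    """Sample correlation assembled from the 1/n moment estimators in (cor)."""
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    cov = ((y - ybar) * (x - xbar)).sum() / n
    vx = ((x - xbar) ** 2).sum() / n
    vy = ((y - ybar) ** 2).sum() / n
    return cov / np.sqrt(vx * vy)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(size=200)
print(yule_corr(x, y))   # close to 0.5/sqrt(1.25) ~ 0.447 under IID normality
```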
In light of the fact that under Normality the ID assumption reduces to the constancy of the first two moments, and non-correlation coincides with _independence_, one could make a case that the implicit parametric statistical model underlying the above formulae is the **simple bivariate Normal** model in Table 2.

Table 2: The simple (bivariate) Normal model
Statistical GM: $\mathbf{Z}_{t}=\boldsymbol{\mu}+\mathbf{u}_{t},\ t\in\mathbb{N}$
[1] Normality: $\mathbf{Z}_{t}\backsim\mathsf{N}(.,.)$
[2] Constant mean: $E(\mathbf{Z}_{t})=\boldsymbol{\mu}$
[3] Constant variance: $Var(\mathbf{Z}_{t})=\boldsymbol{\Sigma}$
[4] Independence: $\{\mathbf{Z}_{t},\ t\in\mathbb{N}\}$ is an independent process,
where
\[
\mathbf{Z}_{t}:=\begin{pmatrix} y_{t}\\ x_{t}\end{pmatrix},\quad \boldsymbol{\mu}:=\begin{pmatrix}\mu_{1}\\ \mu_{2}\end{pmatrix},\quad \boldsymbol{\Sigma}:=\begin{pmatrix}\sigma_{11} & \sigma_{12}\\ \sigma_{12} & \sigma_{22}\end{pmatrix}. \tag{bn}
\]
When any of the assumptions [1]-[4] are invalid for the particular data, the estimated correlation coefficient is likely to be 'spurious' (statistically untrustworthy). Granted, certain departures from particular assumptions, such as [2]-[4], are more serious than other departures, say from [1]. A glance at the t-plots of Yule's (1926) data suggests, to borrow his phrasing on p. 5, that they are anything but 'random series' (aka IID).

[Fig. 1: t-plot of $x_{t}$ - ratio of Church of England marriages to all marriages.] [Fig. 2: t-plot of $y_{t}$ - the mortality rate for the period 1866-1911.]

Both data series exhibit clear departures from IID (Figs. 1-2) in the form of mean-heterogeneity (trending mean) and dependence (irregular cycles). To bring out the cycles in the original data more clearly, one needs to subtract the trending means using, say, a generic 3rd degree trend polynomial.

[Fig. 3: t-plot of detrended $x_{t}$.] [Fig. 4: t-plot of detrended $y_{t}$.]

In light of the direct relationship between the correlation ($\rho_{12}$) and the regression coefficient ($\alpha_{1}$) in ([cr]), one can pose the question of statistical adequacy in the context of the linear regression model, which yields:
\[
y_{t}=\underset{(1.416)}{-10.847}+\underset{(.020)}{.419}\,x_{t}+\widehat{u}_{t},\quad R^{2}=.905,\ s=.664,\ n=46, \tag{eq5}
\]
where the standard errors are reported in brackets below the coefficient estimates.
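A sketch of the generic polynomial detrending described above, assuming numpy; the two trending series are simulated stand-ins for Yule's data, not the actual 1866-1911 observations:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(46, dtype=float)                  # 46 annual observations, as in 1866-1911
x = 10 - 0.10 * t + 0.001 * t**2 + rng.normal(scale=0.3, size=46)  # trending series
y = 25 - 0.25 * t + rng.normal(scale=0.5, size=46)                 # another trending series

def detrend(z, t, deg=3):
    """Subtract a generic polynomial trend of the given degree (OLS fit)."""
    coef = np.polynomial.polynomial.polyfit(t, z, deg)
    return z - np.polynomial.polynomial.polyval(t, coef)

print("raw corr:      ", np.corrcoef(x, y)[0, 1])   # large: shared trending means
print("detrended corr:", np.corrcoef(detrend(x, t), detrend(y, t))[0, 1])  # near zero
```

The large raw correlation is driven entirely by the common trends; once the trending means are removed, the association between the noise components is negligible.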
Both coefficients are statistically significant, since the t-ratios are:
\[
\tau_{0}(\mathbf{z}_{0})=\frac{10.847}{1.416}=7.660\,[.000],\qquad \tau_{1}(\mathbf{z}_{0})=\frac{.419}{.020}=20.95\,[.000],
\]
and the p-values are given in square brackets. Note that the implied correlation (see ([cor])) yields the value reported in Yule (1926). Once the statistical misspecification of ([eq5]) is accounted for, however, the estimated relationship between $x_{t}$ and $y_{t}$ turns out to be totally statistically insignificant. In summary, the notion of statistical adequacy provides a direct and testable link between statistical misspecification and statistically untrustworthy (spurious) associations, or inference results more generally. A likely criticism of this link is that the probabilistic assumptions of the assumed model in ([bn]) are too strong, in contrast to the current statistical practice favoring as weak a set of assumptions as possible. The short reply to such a charge is that weaker but non-testable assumptions (i) do not render the assumed model less vulnerable to statistical misspecifications, and (ii) underestimate the importance of securing statistical adequacy. In addition, weak assumptions often rely on asymptotic sampling distributions without testing the validity of the assumptions invoked by the limit theorems; see Spanos (2015). The truth of the matter is that the trustworthiness of all inference results relies exclusively on the approximate validity of the probabilistic assumptions imposed on the data, and nothing else, as argued by Le Cam (1986, p. xiv).

In this sub-section we consider an empirical example based on cross-section data, because statistical adequacy is less well appreciated in such a context. Consider the case where a practitioner wants to evaluate the effect of education on a person's income. The data refer to education, $x_{t}$ - years of schooling, and income, $y_{t}$ - thousands of dollars, for working people within the age group of 30-40 years old, selected from a city's population. The estimated LR model yields:
\[
y_{t}=\underset{(1.957)}{53.694}-\underset{(.147)}{.474}\,x_{t}+\widehat{u}_{t},\quad R^{2}=.096,\ s=3.307,\ n=100. \tag{eq1}
\]
Both coefficients appear to be statistically significant, since the t-ratios are:
\[
\tau_{0}(\mathbf{z}_{0})=\frac{53.694}{1.957}=27.437\,[.0000],\qquad \tau_{1}(\mathbf{z}_{0})=\frac{.474}{.147}=3.224\,[.001].
\]
The practitioner is surprised by the negative sign of the coefficient of $x_{t}$, since it implies that additional years of education contribute negatively to one's income. He takes a closer look at the data and decides to run separate linear regressions for men ($y_{1t}$, $x_{1t}$) and women ($y_{2t}$, $x_{2t}$). The estimated LR model for men yields:
\[
y_{1t}=\underset{(2.236)}{45.229}+\underset{(.172)}{.409}\,x_{1t}+\widehat{u}_{1t},\quad R^{2}=.973,\ s=2.371,\ n_{1}=50. \tag{eq2}
\]
The estimated LR model for women yields:
\[
y_{2t}=\underset{(2.937)}{35.106}+\underset{(.199)}{.675}\,x_{2t}+\widehat{u}_{2t},\quad R^{2}=.193,\ s=2.124,\ n_{2}=50. \tag{eq3}
\]
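A small sketch reproducing the t-ratios and p-values reported for ([eq1]), assuming scipy is available; the degrees of freedom $n-2=98$ follow from the reported sample size:

```python
from scipy import stats   # scipy assumed available

def t_ratio(est, se, df):
    """t-ratio and two-sided p-value for a single coefficient."""
    t = est / se
    return t, 2 * stats.t.sf(abs(t), df)

# Estimates and standard errors reported in (eq1); n = 100, k = 2 -> df = 98.
for name, est, se in [("constant", 53.694, 1.957), ("x_t", -0.474, 0.147)]:
    t, p = t_ratio(est, se, df=98)
    print(f"{name:8s}: tau = {t:8.3f}, p = {p:.4f}")
```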
The estimation results in ([eq2])-([eq3]) indicate that, for both estimated regressions, the coefficients are statistically significant and the sign of the coefficient of the education variable is positive. The positive sign of the estimated slope coefficients clearly contradicts the negative sign in ([eq1]), which is usually interpreted as a case where a statistical association is reversed. This is considered an example of Simpson's paradox when viewed from perspective (b), where gender ($D_{t}$) is viewed as a confounding variable that correlates with both $y_{t}$ - income and $x_{t}$ - education. In econometrics, this is usually viewed as a case of 'omitted-variable bias'; see Greene (2011). According to Pearl (2014), p. 10, the only way to decide whether to rely on the aggregated-data regression in ([eq1]) or the disaggregated-data regressions ([eq2])-([eq3]) is to use causal calculus.

[Fig. 7: residuals from equation ([eq1]).]

Upon reflection, however, the statistical misspecification perspective provides an alternative way to resolve the paradox on statistical adequacy grounds. The above estimation and testing results in ([eq1])-([eq3]) are trustworthy only when the model assumptions [1]-[5] are valid for the particular data for each of the three estimated equations. Estimating the aggregated data equation ([eq1]) using 'gender' as the ordering of interest, and plotting the residuals (Fig. 7), suggests that ([eq1]) is _statistically misspecified_, because the t-plot is far from being Normal white-noise. Assumption [5] is clearly invalid, since the sample mean of the residuals is not constant around zero, but shifts from positive for the first half to negative for the second, and the variance appears smaller for the second half; see Spanos (1999). This form of t-heterogeneity differs from that in Yule's data discussed above. This is confirmed by the auxiliary regression using the residuals ($\widehat{u}_{t}$) from ([eq1]):
\[
\widehat{u}_{t}=\underset{(2.00)}{-15.986}+\underset{(.134)}{.967}\,x_{t}+\underset{(.616)}{6.556}\,D_{t},\quad R^{2}=.54,\ s=2.263,\ n=100,
\]
where $D_{t}$: 1 - male, 0 - female; the coefficient of $D_{t}$ is statistically significant, since $6.556/.616\simeq 10.64$.
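A sketch of such an auxiliary regression, assuming numpy; the toy data below are hypothetical, constructed only so that the residuals shift with the gender dummy, mimicking the misspecification flagged above:

```python
import numpy as np

def aux_regression(resid, x, d):
    """OLS of residuals on (1, x_t, D_t); a significant D_t coefficient flags
    the mean-shift misspecification of assumption [5]."""
    A = np.column_stack([np.ones_like(x), x, d])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    s2 = ((resid - A @ coef) ** 2).sum() / (len(x) - A.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(A.T @ A)))
    return coef, coef / se          # estimates and their t-ratios

# Toy check: residuals that shift with the dummy produce a large |t| on D_t.
rng = np.random.default_rng(4)
x = rng.normal(10, 2, size=100)
d = np.repeat([1.0, 0.0], 50)
resid = 3.0 * (d - d.mean()) + rng.normal(scale=1.0, size=100)
print(aux_regression(resid, x, d)[1])
```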
The Berkeley graduate admissions data (Bickel et al., 1975) provide a classic empirical example. The aggregated data are:

              Males   Females    Total
   Admit       3738      1494     5232
   Deny        4704      2827     7531
   Total       8442      4321    12763

The estimated admission probabilities for males and females based on the aggregated data are:
\[
\widehat{\theta}_{m}=\frac{3738}{8442}=.44,\qquad \widehat{\theta}_{f}=\frac{1494}{4321}=.35.
\]
[Tables A-F: the corresponding admit/deny counts by gender for each of the six departments (A)-(F); the individual cell counts did not survive the extraction, apart from the total of 933 applicants for department A.]

Viewing each applicant pool as a realization of a Bernoulli process, the implicit statistical model is:
Statistical GM: $X_{t}=\theta+u_{t},\ t\in\mathbb{N}$
[1] Bernoulli: $X_{t}\backsim\mathrm{Ber}(\theta)$
[2] Constant mean: $E(X_{t})=\theta$
[3] Constant variance: $Var(X_{t})=\theta(1-\theta)$
[4] Independence: $\{X_{t},\ t\in\mathbb{N}\}$ is an independent process.

Consider two IID Bernoulli processes $X_{t}\backsim\mathrm{BerIID}(.2,.16)$ and $Z_{t}\backsim\mathrm{BerIID}(.6,.24)$, with means $.2$ and $.6$ and variances $.16$ and $.24$, respectively. As can be seen from simulated realizations of such processes, the concentration of longer 'runs' [groups of successive values of 0s or 1s] switches from the value 0 to the value 1 as the mean increases above $.5$. Hence, any attempt to ignore the differences in the two moments of such processes will give rise to a misspecified Bernoulli model. That invalidates any inferences based on the aggregated data, and the only potentially reliable inferences can be drawn from the disaggregated data.
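A minimal simulation of the two Bernoulli processes above, assuming numpy; it shows why pooling them into one 'aggregated' series violates the constant-mean assumption [2]:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.binomial(1, 0.2, size=500)   # Bernoulli process with mean .2 (variance .16)
z = rng.binomial(1, 0.6, size=500)   # Bernoulli process with mean .6 (variance .24)

pooled = np.concatenate([x, z])
print("disaggregated means:", x.mean(), z.mean())
print("pooled mean:        ", pooled.mean())   # ~ .4: describes neither process

# The pooled series has a shifting mean: its first and second halves have very
# different proportions of 1s, so the 'runs' switch character mid-sample and a
# single-theta Bernoulli model for it is misspecified.
```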
The above statistical misspecification perspective can be used to explain the seemingly contradictory results in Lindley and Novick's (1981) hypothetical data shown below. This is a particularly interesting example because, as argued by Armistead (2014), the ordering of interest might become apparent only after the data are collected. For instance, in a clinical trial the 'gender' or/and 'age' ordering(s) might turn out to be relevant after the data are collected. The estimated $\theta$s for the aggregated data in Table 6:
\[
\widehat{\theta}_{w}=\frac{20}{40}=.5,\qquad \widehat{\theta}_{b}=\frac{16}{40}=.4,
\]
are very different from those based on the disaggregated data (Table 7), rendering the former statistically untrustworthy, because it imposes an invalid assumption: that the means of the Bernoulli process underlying the disaggregated data are constant.

Table 6: Lindley-Novick aggregated data
              White    Black
   High          20       16
   Low           20       24
   Total         40       40

[Table 7: Lindley-Novick disaggregated data - the same 80 plants broken down by height; each panel (short, tall) cross-tabulates White/Black against High/Low yield. The individual cell counts did not survive the extraction.]

Pearl (2014) claims that the only way to resolve the paradox is to use causal calculus. Viewing examples 2-5 from the misspecification perspective, however, lends support to Armistead's (2014) key argument. Indeed, in cases where the 'third variable' represents an _ordering_ of potential interest for the particular data, the only relevant criterion for deciding which orderings are relevant for the statistical analysis of the particular data is the _statistical adequacy_ of the estimated equations. That is, when two or more alternative orderings are potentially relevant for a particular data set, one needs to test the statistical adequacy of all three equations relative to each of these orderings before one can draw reliable conclusions concerning how to resolve any apparently paradoxical results. Where does this leave the Pearl (2014) claim quoted above? Cartwright (1979) rightly points out that reliance on regularities and frequencies for statistical inference purposes is not sufficient for representing substantively meaningful causal relations. On the other hand, imposing causal relations that belie the chance regularities in the data would only give rise to untrustworthy inference results.
While the causal dimension remains an important component in delineating the issues raised by Simpson's paradox, it is not the only relevant, or even the most important, dimension in unraveling the puzzle. Indeed, the suggestion that in cases where the third variable (ordering of interest) is noncausal one should accept the results based on the aggregated data (Pearl, 2009) is called into question by examples 2-5. This is because, when the model estimated using the aggregated data is statistically misspecified, the causal inference results pertaining to conditional independence are likely to be untrustworthy. One way or another, the modeler needs to account for the statistical information not accounted for by the original statistical model, with a view to ensure the trustworthiness of the ensuing statistical results. It is interesting to note that Yule (1926) considered the third-variable causal explanation, but questioned its value as a general 'solution' to the problem. A crucial issue that needs to be addressed by the causal explanation is that conditioning on a third variable is not as straightforward as adherents to this 'explanation' of Simpson's paradox would have us believe. In practice, the question of whether a particular variable constitutes a confounder is not just a matter of testing whether it relates to the other variables in the right way; see Pearl (2009), Spirtes et al. (2000). Before such testing can even begin, one needs to test for the statistical adequacy of the estimated model with respect to a relevant ordering. Although 'time' is the obvious ordering for time series, it is no different from other deterministic orderings for cross-section data, such as gender, marital status, age, geographical position, etc.; only the scale of measurement differs. When the original model is statistically misspecified, it needs to be respecified with a view to secure statistical adequacy. Often one can restore statistical adequacy using generic terms relating to that ordering. To secure substantive adequacy, however, one needs to replace such generic terms with proper explanatory random variables without forgoing the statistical adequacy. The latter ensures the reliability of testing whether a variable is a confounder or not; see Spanos (2006b). Yule (1926) considered 'time' as a third variable and expressed his misgivings. Viewing his comment from the vantage point of today's probabilistic perspective, the proposal to 'condition' on a third variable raises technical issues, since the conditional distribution, defined by:
\[
f(y_{t}\mid x_{t},d_{t};\boldsymbol{\varphi})=\frac{f(y_{t},x_{t},d_{t};\boldsymbol{\psi})}{f(x_{t},d_{t};\boldsymbol{\phi})},\qquad \forall y_{t}\in\mathbb{R}_{y},
\]
makes no probabilistic sense when $d_{t}$ is a _deterministic ordering_ (variable) such as time; see Williams (1991). This issue arises more clearly in cases where the ordering was deemed potentially important only after the data had been collected, such as having plants grow short or tall, blood pressure being high or low, black or white plants, etc.; see Armistead (2014). How does one bridge the gap between a deterministic ordering of interest and conditioning on a third random variable related to that ordering? *Separating modeling from inference*.
The statistical misspecification perspective suggests that, to ensure the reliability of inference, one needs to separate the initial stages of _specification_ (initial model selection), _misspecification testing_ and _respecification_ from _inference_ proper. The latter includes testing for _substantive adequacy_, such as attributing causality to statistical associations. In practice, this requires focusing first on the ordering(s) of interest that could potentially reveal statistical misspecifications pertaining to dependence and heterogeneity, as uncovered by misspecification testing. The next step is to respecify the initial model with a view to account for the statistical information revealed by the misspecification testing. This is usually achieved by employing _generic_ terms, such as shifts, trends and lags, to 'capture' such forms of systematic statistical information. Once statistical adequacy is secured, one can then proceed to 'model' such information by replacing the generic terms with appropriate explanatory variables, with a view to improve the _substantive adequacy_ without forgoing the statistical adequacy. This is because a third degree trend polynomial might capture the mean heterogeneity in the data to ensure the statistical reliability of inference, but from the substantive perspective it represents ignorance. Replacing the trend polynomial with explanatory variables, without forsaking statistical adequacy, will add to our understanding of the phenomenon of interest; see Spanos (2010). Viewing the problem from a broader perspective, the primary reason for the untrustworthiness is that the question of probing for the nature of any causal connections pertains to _substantive_, and not _statistical adequacy_, even though the distinction between the two might not always be clear cut or obvious; see Spanos (2010). This distinction is crucial because any attempt to probe for substantive adequacy, including causal connections, before securing the statistical adequacy of the assumed statistical model is likely to give rise to unreliable results. To avoid this problem of unreliable inferences, one needs to establish the statistical adequacy of the original model first, before probing for any form of substantive adequacy, such as attributing a causal interpretation to statistical associations. These include probing for the appropriateness of a particular confounder, or choosing between different potential confounders; see Spanos (2006b) for an extensive discussion. This distinction is crucial in differentiating between _statistically_ and _substantively_ 'spurious' inferential results. Unfortunately, in the statistics and philosophy of science literatures the term 'spurious' is often used to describe the latter; see Blyth (1972). What is often insufficiently appreciated is that one needs to establish first that there is a statistically trustworthy association, before attempting to explain it away as substantively spurious. The statistical misspecification perspective also calls into question certain philosophical discussions of Simpson's paradox that focus primarily on the 'numbers' associated with the relevant probabilities/associations, as in the case of example 1.
A typical representation of Simpson's paradox in terms of events is:
\[
P(A\mid B)<P(A\mid\lnot B),\quad\text{and yet}\quad P(A\mid B\cap C)>P(A\mid\lnot B\cap C)\ \text{ and }\ P(A\mid B\cap\lnot C)>P(A\mid\lnot B\cap\lnot C), \tag{ph}
\]
where '$\lnot$' denotes the 'negation' operator. Malinas and Bigelow (2016) illustrate ([ph]) using made-up numbers that satisfy the above inequalities, and describe the source of the paradox in terms of such numerical examples. They proceed to claim that their artificial illustration provides a way to explain an empirical example from Cohen and Nagel (1934) concerning death rates in 1910 from tuberculosis in Richmond, Virginia and New York City. As argued above, however, in the case of observed data some of the 'numbers' used in such arguments might be statistically untrustworthy, undermining the soundness of the logical argument in ([ph]). Indeed, oversimplifications of the form ([ph]) contribute to the perpetuation of the misconceptions beleaguering the paradox.
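A tiny sketch verifying that the inequalities in ([ph]) can hold simultaneously, in the spirit of Malinas and Bigelow's made-up numbers; the counts below are hypothetical and assume only the Python standard library:

```python
from fractions import Fraction as F

# Hypothetical (trials, successes) counts for each (B, C) cell, chosen so
# that conditioning on C and on not-C each reverses the marginal association.
cells = {("B", "C"): (20, 15), ("B", "~C"): (80, 16),
         ("~B", "C"): (80, 56), ("~B", "~C"): (20, 3)}

def p(b, c=None):
    """P(A | b) if c is None, else P(A | b, c), from the cell counts."""
    keys = [(b, c)] if c else [(b, "C"), (b, "~C")]
    n = sum(cells[k][0] for k in keys)
    s = sum(cells[k][1] for k in keys)
    return F(s, n)

assert p("B") < p("~B")                    # P(A|B) < P(A|~B) marginally
assert p("B", "C") > p("~B", "C")          # reversed within C
assert p("B", "~C") > p("~B", "~C")        # reversed within ~C
print(p("B"), p("~B"), p("B", "C"), p("~B", "C"), p("B", "~C"), p("~B", "~C"))
```

The point of the paper, of course, is that when such 'numbers' come from real data they are trustworthy only if the underlying statistical model is adequate.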
What is often insufficiently appreciated in statistical modeling and inference is that the inference propositions (optimal estimators, tests and predictors, and their sampling distributions) depend crucially on the validity of the probabilistic assumptions one imposes on the data. The totality of these assumptions comprises the underlying statistical model, which is used to define the distribution of the sample and the likelihood function. If the statistical model is misspecified, in the sense that any of its assumptions are invalid for the particular data, the reliability of inference based on such a model is usually undermined, giving rise to untrustworthy evidence.

The paper revisited Simpson's paradox using the statistical misspecification perspective, with a view to shed light on several salient features of the paradox. Using this perspective, it was argued that the key to unraveling the various counterintuitive results associated with this paradox is to formalize the vague notion of 'spurious' inference results into 'statistically untrustworthy' results, which can be evidenced using misspecification testing. This enables one to distinguish between two different cases of the paradox as it relates to the reversal of statistical associations: case 1, where the reversal is statistically trustworthy because the underlying statistical models are statistically adequate (example 1), and case 2, where the apparent reversal is statistically untrustworthy due to statistical misspecification (examples 2-5). The real issue is whether the inference results pertaining to statistical associations are statistically trustworthy or not, and the key criterion for appraising that is statistical adequacy. Hence, the statistical misspecification perspective puzzles out Simpson's paradox because in both cases there is nothing counterintuitive to explain. The statistical misspecification perspective was also used to revisit the causal dimension of the paradox by distinguishing between statistical and substantive inadequacy (spuriousness). Ensuring the reliability of any inferences relating to testing whether a third variable constitutes a confounder requires that the underlying statistical model be statistically adequate. This is particularly problematic for the causal resolution of the paradox when the third variable is related to a relevant ordering of interest which is revealed after the data are collected. In such cases one needs to account for any departures from the model assumptions as they relate to the ordering in question, and replace the generic terms used to capture the neglected statistical information with substantively meaningful explanatory variables.

References

Armistead, T.W. (2014), Resurrecting the third variable: a critique of Pearl's causal analysis of Simpson's paradox, _The American Statistician_, 68: 1-7.
Bickel, P.J., E.A. Hammel and J.W. O'Connell (1975), Sex bias in graduate admissions: data from Berkeley, _Science_, 187: 398-404.
Bishop, Y.M., S.E. Fienberg and P.W. Holland (1975), _Discrete Multivariate Analysis: Theory and Practice_, MIT Press, Cambridge, MA.
Blyth, C.R. (1972), On Simpson's paradox and the sure-thing principle, _Journal of the American Statistical Association_, 67: 364-366.
Carlton, A.G. (1946), Estimating the parameters of a rectangular distribution, _The Annals of Mathematical Statistics_, 17: 355-358.
Cartwright, N. (1979), Causal laws and effective strategies, _Noûs_, 13(4): 419-437.
Cohen, M. and E. Nagel (1934), _An Introduction to Logic and the Scientific Method_, Harcourt, Brace and Company, New York.
Fisher, R.A. (1922), On the mathematical foundations of theoretical statistics, _Philosophical Transactions of the Royal Society A_, 222: 309-368.
Greene, W.H. (2011), _Econometric Analysis_, 7th ed., Prentice Hall, New Jersey.
Kolmogorov, A.N. (1933), _Foundations of the Theory of Probability_, 2nd English edition, Chelsea Publishing Co., NY.
Le Cam, L. (1986), _Asymptotic Methods in Statistical Decision Theory_, Springer-Verlag, NY.
Lindley, D.V. and M.R. Novick (1981), The role of exchangeability in inference, _The Annals of Statistics_, 9: 45-58.
Malinas, G. and J. Bigelow (2016), Simpson's paradox, in _The Stanford Encyclopedia of Philosophy_ (Summer 2016 edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/sum2016/entries/paradox-simpson/.
Pearl, J. (2009), _Causality: Models, Reasoning, and Inference_, 2nd ed., Cambridge University Press, NY.
Pearl, J. (2011), Why there is no statistical test for confounding, why many think there is, and why they are almost right, Department of Statistics, UCLA.
Pearl, J. (2014), Comment: understanding Simpson's paradox, _The American Statistician_, 68(1): 8-13.
Pearson, K. (1896), Mathematical contributions to the theory of evolution - on a form of spurious correlation which may arise when indices are used in the measurement of organs, _Proceedings of the Royal Society of London_, 60: 489-498.
Samuels, M.L. (1993), Simpson's paradox and related phenomena, _Journal of the American Statistical Association_, 88(421): 81-88.
Simpson, E.H. (1951), The interpretation of interaction in contingency tables, _Journal of the Royal Statistical Society_, Series B (Methodological), 13: 238-241.
Sober, E. (2001), Venetian sea levels, British bread prices, and the principle of the common cause, _The British Journal for the Philosophy of Science_, 52: 331-346.
Spanos, A. (1986), _Statistical Foundations of Econometric Modelling_, Cambridge University Press, Cambridge.
Spanos, A. (1999), _Introduction to Probability Theory and Statistical Inference_, Cambridge University Press.
Spanos, A. (2006a), Where do statistical models come from?
Revisiting the problem of specification, pp. 98-119 in _Optimality: The Second Erich L. Lehmann Symposium_, edited by J. Rojo, Lecture Notes-Monograph Series, vol. 49, Institute of Mathematical Statistics.
Spanos, A. (2006b), Revisiting the omitted variables argument: substantive vs. statistical adequacy, _Journal of Economic Methodology_, 13: 179-218.
Spanos, A. (2010), Statistical adequacy and the trustworthiness of empirical evidence: statistical vs. substantive information, _Economic Modelling_, 27: 1436-1452.
Spanos, A. (2015), Statistical mis-specification testing in retrospect and prospect, working paper, Virginia Tech.
Spanos, A. and A. McGuirk (2001), The model specification problem from a probabilistic reduction perspective, _American Journal of Agricultural Economics_, 83: 1168-1176.
Spanos, A. and A. McGuirk (2002), The problem of near-multicollinearity revisited: erratic vs. systematic volatility, _Journal of Econometrics_, 108: 365-393.
Spirtes, P., C.N. Glymour and R. Scheines (2000), _Causation, Prediction, and Search_, MIT Press, MA.
Stigler, S.M. (1980), Stigler's law of eponymy, _Transactions of the New York Academy of Sciences_, 39: 147-157.
Wasserman, L. (2004), _All of Statistics: A Concise Course in Statistical Inference_, Springer, NY.
Williams, D. (1991), _Probability with Martingales_, Cambridge University Press, Cambridge.
Yule, G.U. (1903), Notes on the theory of association of attributes in statistics, _Biometrika_, 2: 121-134.
Yule, G.U. (1909), The applications of the method of correlation to social and economic statistics, _Journal of the Royal Statistical Society_, 72: 721-730.
Yule, G.U. (1910), On the interpretation of correlations between indices or ratios, _Journal of the Royal Statistical Society_, 73: 644-647.
Yule, G.U. (1921), On the time-correlation problem, with especial reference to the variate-difference correlation method, _Journal of the Royal Statistical Society_, 84: 497-537.
Yule, G.U. (1926), Why do we sometimes get nonsense-correlations between time-series? A study in sampling and the nature of time-series, _Journal of the Royal Statistical Society_, 89: 1-64.

Appendix

The traditional specification of the LR model takes the form:
\[
y_{t}=\beta_{0}+\boldsymbol{\beta}_{1}^{\top}\mathbf{X}_{t}+u_{t},
\]
\[
\text{[i] } E(u_{t}\mid\mathbf{X}_{t}=\mathbf{x}_{t})=0,\quad \text{[ii] } E(u_{t}^{2}\mid\mathbf{X}_{t}=\mathbf{x}_{t})=\sigma^{2},\quad \text{[iii] } E(u_{t}u_{s}\mid\mathbf{X}_{t}=\mathbf{x}_{t})=0,\ t\neq s,\quad \text{[iv] } u_{t}\backsim\mathsf{N}(.,.),\ t\in\mathbb{N}.
\]
Assumptions [i]-[iii], relating to the first two moments of the conditional distribution, imply that the model parameters $\boldsymbol{\theta}:=(\beta_{0},\boldsymbol{\beta}_{1},\sigma^{2})$ have the following statistical parameterizations in terms of the primary parameters of the joint distribution:
\[
\beta_{0}=E(y_{t})-\boldsymbol{\beta}_{1}^{\top}E(\mathbf{X}_{t}),\quad \boldsymbol{\beta}_{1}=[Cov(\mathbf{X}_{t})]^{-1}Cov(\mathbf{X}_{t},y_{t}),\quad \sigma^{2}=Var(y_{t})-\boldsymbol{\beta}_{1}^{\top}[Cov(\mathbf{X}_{t})]\boldsymbol{\beta}_{1}. \tag{p}
\]
Assumption [i] implies that:
\[
E(u_{t}\mid\mathbf{X}_{t}=\mathbf{x}_{t})=0\;\leftrightarrow\;E(y_{t}\mid\mathbf{X}_{t}=\mathbf{x}_{t})=\beta_{0}+\boldsymbol{\beta}_{1}^{\top}\mathbf{x}_{t}. \tag{ce}
\]
The law of iterated expectations (Williams, 1991), $E\left[E(y\mid\sigma(\mathbf{X}))\right]=E(y)$, applied to ([ce]) yields $E(y_{t})=\beta_{0}+\boldsymbol{\beta}_{1}^{\top}E(\mathbf{X}_{t})$, and thus $\beta_{0}=E(y_{t})-\boldsymbol{\beta}_{1}^{\top}E(\mathbf{X}_{t})$. Multiplying the statistical GM by $\left[\mathbf{X}_{t}-E(\mathbf{X}_{t})\right]$ and taking expectations yields:
\[
Cov(y_{t},\mathbf{X}_{t}):=E\left(\left[y_{t}-E(y_{t})\right]\left[\mathbf{X}_{t}-E(\mathbf{X}_{t})\right]^{\top}\right)
=\boldsymbol{\beta}_{1}^{\top}E\left(\left[\mathbf{X}_{t}-E(\mathbf{X}_{t})\right]\left[\mathbf{X}_{t}-E(\mathbf{X}_{t})\right]^{\top}\right)+E\left(u_{t}\left[\mathbf{X}_{t}-E(\mathbf{X}_{t})\right]^{\top}\right).
\]
Since $E(u_{t}\mid\sigma(\mathbf{X}_{t}))=0$, the last term is zero:
\[
E(\mathbf{X}_{t}^{\top}u_{t})=E\left[\mathbf{X}_{t}^{\top}E(u_{t}\mid\sigma(\mathbf{X}_{t}))\right]=0,
\]
and hence $\boldsymbol{\beta}_{1}=[Cov(\mathbf{X}_{t})]^{-1}Cov(\mathbf{X}_{t},y_{t})$. In the case of $\sigma^{2}$, we use a theorem analogous to the LIE for the variance (Williams, 1991):
\[
Var(y_{t})=E\left[Var(y_{t}\mid\sigma(\mathbf{X}_{t}))\right]+Var\left[E(y_{t}\mid\sigma(\mathbf{X}_{t}))\right],
\]
where, by definition, $E\left[Var(y_{t}\mid\sigma(\mathbf{X}_{t}))\right]=\sigma^{2}$. The mean deviation of ([ce]) is:
\[
\left[\beta_{0}+\boldsymbol{\beta}_{1}^{\top}\mathbf{X}_{t}\right]-E\left(\beta_{0}+\boldsymbol{\beta}_{1}^{\top}\mathbf{X}_{t}\right)=\boldsymbol{\beta}_{1}^{\top}\left[\mathbf{X}_{t}-E(\mathbf{X}_{t})\right],
\]
and thus, by definition:
\[
Var\left[E(y_{t}\mid\mathbf{X}_{t})\right]=E\left[\boldsymbol{\beta}_{1}^{\top}\left[\mathbf{X}_{t}-E(\mathbf{X}_{t})\right]\left[\mathbf{X}_{t}-E(\mathbf{X}_{t})\right]^{\top}\boldsymbol{\beta}_{1}\right]=\boldsymbol{\beta}_{1}^{\top}\left[Cov(\mathbf{X}_{t})\right]\boldsymbol{\beta}_{1}.
\]
From this it follows that:
\[
Var(y_{t})=\sigma^{2}+\boldsymbol{\beta}_{1}^{\top}\left[Cov(\mathbf{X}_{t})\right]\boldsymbol{\beta}_{1}\;\longrightarrow\;\sigma^{2}=Var(y_{t})-\boldsymbol{\beta}_{1}^{\top}\left[Cov(\mathbf{X}_{t})\right]\boldsymbol{\beta}_{1},
\]
which yields the parameterization in ([p]).
The primary objective of this paper is to revisit Simpson's paradox using a statistical misspecification perspective. It is argued that the reversal of statistical associations is sometimes spurious, stemming from invalid probabilistic assumptions imposed on the data. The concept of statistical misspecification is used to formalize the vague term 'spurious results' as 'statistically untrustworthy' inference results. This perspective sheds new light on the paradox by distinguishing between statistically trustworthy and statistically untrustworthy association reversals. It turns out that in both cases there is nothing counterintuitive to explain or account for. This perspective is also used to revisit the causal 'resolution' of the paradox, in an attempt to delineate the modeling and inference issues raised by the statistical misspecification perspective. The main arguments are illustrated using both actual and hypothetical data from the literature, including Yule's "nonsense-correlations" and the Berkeley admissions study. Keywords: association reversal; spurious correlation; statistical misspecification; statistical vs. substantive adequacy; misspecification testing; untrustworthy evidence; causality; confounding.
There is a stochastic multigrid method and a deterministic one. The stochastic version is used to compute high dimensional integrals in Euclidean quantum field theory or statistical mechanics by a Monte Carlo method which uses updates at different length scales. The deterministic version solves discretized partial differential equations. One hopes to use both of them in simulations of lattice QCD, for updating the gauge fields and for computing fermion propagators in given gauge fields. In either case the aim is to beat critical slowing down (CSD) in nearly critical systems. Our notation is as follows: $\Lambda^{0}$ denotes a given "fundamental" lattice of spacing $a_{0}$. Coarser (block) lattices of increasing spacings $a_{j}$ are denoted $\Lambda^{1},\Lambda^{2},\dots$; typically the last layer consists of a single point. Interpolation operators $\mathcal{A}^{j}$ are introduced to transfer functions on coarser lattices into functions on finer lattices, while restriction operators $\mathcal{R}^{j}$ transfer functions from finer to coarser lattices ("variational coarsening"). A crucial problem is how to define and exhibit smooth functions in the disordered context, i.e. when translation symmetry is strongly violated. Other possible applications besides gauge theories are low lying states of spin glasses, the shape of a lightning, waves on fractal lattices (with bond percolation), or the localization of low lying electronic states in amorphous materials. In the case of deterministic MG, one wants to solve a discretized elliptic differential equation on $\Lambda^{0}$:
\[
D_{0}\,\xi^{0}=f^{0}. \tag{de}
\]
It might have arisen from an eigenvalue equation by inverse iteration. If $D_{0}$ has a small eigenvalue, then local relaxation algorithms suffer from CSD. After some relaxation sweeps on $\Lambda^{0}$ one gets an approximate solution whose error is not necessarily small but is _smooth_ (on length scale $a_{0}$). The unknown error $e^{0}$ satisfies the equation
\[
D_{0}\,e^{0}=r^{0}, \tag{dee}
\]
with the residual $r^{0}$. Given that $e^{0}$ is smooth, it can be obtained by smooth interpolation of a suitable function $e^{1}$ on $\Lambda^{1}$,
\[
e^{0}=\mathcal{A}^{1}e^{1}, \tag{intp0}
\]
that is, $e^{0}_{z}=\sum_{x\in\Lambda^{1}}\mathcal{A}^{1}_{zx}e^{1}_{x}$ with a kernel $\mathcal{A}^{1}_{zx}$ which depends smoothly on $z$. Now define a restriction operator $\mathcal{R}^{1}$ such that $\mathcal{R}^{1}\mathcal{A}^{1}=1$. Then ([intp0]) can be inverted, and applying $\mathcal{R}^{1}$ to both sides of ([dee]) yields an equation for $e^{1}$,
\[
D_{1}\,e^{1}=r^{1}, \tag{cgrideq}
\]
with $r^{1}=\mathcal{R}^{1}r^{0}$ and the effective operator $D_{1}=\mathcal{R}^{1}D_{0}\mathcal{A}^{1}$. Given $e^{1}$, one obtains $e^{0}$ from ([intp0]), and an improved solution of ([de]). Thus _the problem has been reduced to an equation on the lattice $\Lambda^{1}$, which has fewer points_. If necessary, one repeats the procedure, moving to $\Lambda^{2}$ etc. The procedure stops, because an equation on a "lattice" with only a single point is easy to solve. The iterated interpolation $\mathcal{A}^{[0j]}\equiv\mathcal{A}^{1}\mathcal{A}^{2}\cdots\mathcal{A}^{j}$ maps functions from $\Lambda^{j}$ to the fundamental lattice; kernels $\mathcal{A}^{[0j]}_{zx}$ which are smooth in this sense exist for arbitrarily disordered gauge fields. When there are no smooth functions in this sense at a given length scale, then $D_{0}$ has no low eigenvalue, and there is no CSD and no need for MG. The above answer appears natural, and the "projective MG" approach is in its spirit. If $\mathcal{A}^{[0j]}_{zx}$ is required to vanish for $z$ outside a neighbourhood of $x$, the problem involves Dirichlet boundary conditions. For large $j$, one may determine the kernels $\mathcal{A}^{[0j]}$ as solutions of the eigenvalue equation ([eva]) by multigrid iteration. _This is done successively: for each layer one uses the already determined kernels of the preceding layers._ We found that this works very well - cp. section [performance].
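A minimal two-grid sketch of ([dee])-([cgrideq]) for the ordered 1D Laplacian, assuming numpy; it uses full-weighting restriction and linear interpolation (a standard pair, not the paper's variational kernels), with the Galerkin coarse operator $D_{1}=\mathcal{R}D_{0}\mathcal{A}$:

```python
import numpy as np

def lap1d(n, h):
    """Discrete 1D Laplacian D0 with Dirichlet boundary conditions."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def twogrid_cycle(phi, f, h):
    """One two-grid cycle: damped-Jacobi smoothing, then the exact
    coarse-grid correction of (cgrideq)."""
    n = len(f)
    D0 = lap1d(n, h)
    for _ in range(2):                        # pre-smoothing sweeps
        phi = phi + 0.6 * (h**2 / 2.0) * (f - D0 @ phi)
    r0 = f - D0 @ phi                         # residual, as in (dee)
    nc = (n - 1) // 2
    R = np.zeros((nc, n))                     # full-weighting restriction
    for i in range(nc):
        R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
    A = 2.0 * R.T                             # linear interpolation
    e1 = np.linalg.solve(R @ D0 @ A, R @ r0)  # coarse equation D1 e1 = r1
    return phi + A @ e1                       # interpolate the correction (intp0)

n, h = 31, 1.0 / 32
f, phi = np.ones(n), np.zeros(n)
for _ in range(10):
    phi = twogrid_cycle(phi, f, h)
print(np.max(np.abs(lap1d(n, h) @ phi - f)))  # residual after 10 cycles
```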
Method (ii) will need of order $(L/a_{0})^{d}$ storage space and computational work for a $d$-dimensional system of linear extension $L$. Any iteration to solve ([de]) amounts to updating steps of the form
\[
\xi^{0}\;\mapsto\;\mathcal{M}\,\xi^{0}+\mathcal{N}f^{0}, \tag{iterlin}
\]
with the iteration matrix $\mathcal{M}$, whose norm governs the convergence. If $\|\mathcal{M}\|<1$, the iteration converges, with a relaxation time $\tau\simeq-1/\ln\|\mathcal{M}\|$. _Parameters in the algorithm - such as the operators $\mathcal{A}^{j}$, $\mathcal{R}^{j}$ and the damping parameters - are optimal if the cost functional $E=\|\mathcal{M}\|^{2}$ is at its minimum._ As an example, consider a twogrid iteration in which a standard relaxation sweep on $\Lambda^{0}$ with iteration matrix $\mathcal{M}_{s}$ is followed by exact solution of the coarse grid equation ([cgrideq]). The second step leads to an updating with some iteration matrix $\mathcal{M}_{1}$, and $\mathcal{M}=\mathcal{M}_{1}\mathcal{M}_{s}$. Therefore one may estimate $\|\mathcal{M}\|\leq\|\mathcal{M}_{1}\|\,\|\mathcal{M}_{s}\|$ (fine grid relaxation smoothens the error but does not converge fast - therefore $\|\mathcal{M}_{1}\|$ is suppressed, whereas $\|\mathcal{M}_{s}\|$ is not much smaller than 1), and try to optimize the parameters above by minimizing $\|\mathcal{M}_{1}\|$. Using the trace norm, $\|\mathcal{M}_{1}\|^{2}=\mathrm{tr}\,\mathcal{M}_{1}^{\dagger}\mathcal{M}_{1}$, one finds
\[
E_{1}=\mathrm{volume}^{-1}\sum_{z,w\in\Lambda^{0}}|(\mathcal{M}_{1})_{zw}|^{2}.
\]
Prescribing the restriction map, and determining the interpolation and the effective coarse operator by minimizing $E_{1}$, yields what we call the "ideal interpolation kernel" for a given restriction map. Since it has exponential tails, instead of vanishing for $z$ outside a neighbourhood of $x$, it is impractical for production runs, though.
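A sketch, assuming numpy, that assembles the two-grid iteration matrix $\mathcal{M}=\mathcal{M}_{1}\mathcal{M}_{s}$ for the 1D Laplacian, estimates the relaxation time $\tau\simeq-1/\ln\rho(\mathcal{M})$, and as a toy 'learning' step tunes a single damping parameter against the trace-norm cost; the stencils are standard choices, not the paper's optimized kernels:

```python
import numpy as np

n, h = 31, 1.0 / 32
D0 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2
Dinv = np.diag(1.0 / np.diag(D0))                 # inverse of the diagonal part

# Two-grid iteration matrix: damped-Jacobi smoother, then the exact
# coarse-grid correction M1 = 1 - A D1^{-1} R D0.
Ms = np.eye(n) - (2.0 / 3.0) * Dinv @ D0
nc = (n - 1) // 2
R = np.zeros((nc, n))
for i in range(nc):
    R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]     # full-weighting restriction
A = 2.0 * R.T                                      # linear interpolation
M1 = np.eye(n) - A @ np.linalg.solve(R @ D0 @ A, R @ D0)
M = M1 @ Ms

rho = max(abs(np.linalg.eigvals(M)))
print("spectral radius:", rho, " relaxation time:", -1.0 / np.log(rho))

# Toy 'learning rule': tune eta in phi -> phi + eta * Dinv * r by grid-search
# on the trace-norm cost E = tr(M^T M)/n (the full method optimizes kernels).
def cost(eta):
    Mj = np.eye(n) - eta * Dinv @ D0
    return np.trace(Mj.T @ Mj) / n

etas = np.linspace(0.1, 1.5, 281)
print("eta minimizing E:", min(etas, key=cost))
```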
A feed-forward artificial neural network (ANN) can perform the computations to solve ([dee]) by MG relaxation. The nodes ("neurons") of the NMG are identified with points of the MG as shown in Fig. [figur1]. The resulting NMG consists of two copies of the same MG, except that the last layer is not duplicated. In the standard MG approach, the basic interpolation kernels $\mathcal{A}^{j}$ interpolate from one layer to the preceding one, and each node is connected to some of the nodes in the preceding layer. In the upper half, the connection strength from $x\in\Lambda^{j}$ to $z\in\Lambda^{j-1}$ is $\mathcal{A}^{j}_{zx}$. In the lower half, node $z\in\Lambda^{j-1}$ is connected to $x\in\Lambda^{j}$ with strength $\mathcal{R}^{j}_{xz}$. In addition there is a connection between the two nodes which represent the same point in $\Lambda^{j}$. According to Hebb's hypothesis of synaptical learning, a biological neural network learns by adjusting the strength of its synaptical connections. The network receives as input an approximate solution of ([de]), from which the residual is then determined, and computes as output an improved solution. The desired output ("target") is the exact solution; it is a linear function of the input. Except on the bottom layer, each node receives as input a weighted sum of the outputs of those nodes below it in the diagram to which it is connected; the weights are given by the connection strengths. Our neurons are linear because our problem is linear: the output of each neuron is a linear function of its input. The result of the computation is
\[
\xi=\Big(\eta_{0}D_{0}^{-1}+\sum_{k\geq 1}\mathcal{A}^{[0k]}\,\eta_{k}D_{k}^{-1}\,\mathcal{R}^{[k0]}\Big)\,r^{0},
\]
where $\mathcal{R}^{[k0]}=\mathcal{R}^{k}\cdots\mathcal{R}^{2}\mathcal{R}^{1}$. The operators $\mathcal{A}^{j}$, $\mathcal{R}^{j}$ and the damping parameters $\eta_{k}$ are not needed separately, since they only enter in the combination above. The fundamental differential operator $D_{0}$ and its diagonal part are furnished as part of the problem. The connection strengths ("synaptical strengths") $\mathcal{A}^{j}$, $\mathcal{R}^{j}$ (and possibly $\eta_{k}$) need to be found by a learning process, in such a way that the actual output is as close as possible to the desired output. In supervised learning, pairs ("training patterns") are presented to an ANN. Given an input, the actual output is compared to the target, and the connection strengths are adjusted in such a way that the cost functional gets minimized. An iterative procedure to achieve this minimization is called a _learning rule_. Taking for the training inputs a complete orthonormal system of functions on $\Lambda^{0}$, the target is the exact solution for any input, and the output is given by ([iterlin]). The learning rule for the resulting cost functional
\[
E=\sum_{\nu}\|\mathrm{target}^{\nu}-\mathrm{output}^{\nu}\|^{2}=\mathrm{tr}\,\mathcal{M}^{\dagger}\mathcal{M}
\]
is our previous optimality condition for multigrid relaxation in sect. [criteria]. The variant (ii) in sect. [smoothness] involves a slightly different NMG. Instead of the connections between neighbouring layers of the multigrid, we now have connections from $\Lambda^{j}$ directly to $\Lambda^{0}$ with strengths $\mathcal{A}^{[0j]}_{zx}$. If we adopt variational coarsening, all connection strengths are determined by the interpolation kernels $\mathcal{A}^{[0j]}$. An approximation to $\mathcal{A}^{[0j]}_{zx}$ was computed by multigrid iteration; it does not depend on the right hand side. Updating at $x\in\Lambda^{j}$ changes the solution by
\[
\delta\xi_{z}=\mathcal{A}^{[0j]}_{zx}\,D_{j,x}^{-1}\,r^{j}_{x},\qquad r^{j}=\mathcal{R}^{[j0]}r^{0}.
\]
The convergence rate (in units of MG iterations) of the iteration is shown in Fig. [figur2]. One MG iteration involved one sweep (in checkerboard fashion) through each MG layer, starting with $\Lambda^{0}$.

References

G. Mack, in: Nonperturbative Quantum Field Theory, eds. G. 't Hooft et al. (Plenum, New York, 1989).
J. Goodman and A.D. Sokal, Phys. Rev. D 40 (1989) 2035.
W. Hackbusch, Multi-Grid Methods and Applications (Springer-Verlag, Berlin, 1985).
R. Brower, K. Moriarty, C. Rebbi and E. Vicari, Nucl. Phys. B 20 (1991) 89.
A. Hulsebos, J. Smit and J.C. Vink, Nucl. Phys. B 368 (1992) 379.
T. Kalkreuter, Phys. Lett. B 276 (1992) 485.
J. Hertz, A. Krogh and R.G. Palmer, Introduction to the Theory of Neural Computation (Addison-Wesley, Redwood City, CA, 1991).
We present evidence that multigrid (MG) works for wave equations in disordered systems, e.g. in the presence of gauge fields, no matter how strong the disorder. We introduce a "neural computations" point of view into large scale simulations: first the system must learn how to do the simulations efficiently, then it does the simulation (fast). The method can also be used to provide smooth interpolation kernels which are needed in multigrid Monte Carlo updates.
In robotics, caging is a type of grasping in which robots capture an object by surrounding or hooking it, so the caging problem is naturally addressed in terms of the geometric representation of the robots and the object. Its mathematical description has been only partially studied, with the focus on methods for caging an object with particular robots. Though such treatments are rigorous, they do not cover arbitrary target objects and robots. In this article we propose an essential description of caging that applies to arbitrary target objects and robots from a mathematical viewpoint, and this naturally leads us to a degree of difficulty of escaping and caging. It is a novel concept of caging which is connected with practical approaches. Caging or holding an object has been discussed in the mathematics literature and has been applied to robotic manipulation in parallel. Rimon and Blake considered caging by two circular robots driven by one parameter in the two dimensional plane, and formulated its conditions. Wang and Kumar proposed caging by multi-robot cooperation with abstract mathematical formulas. Caging in more than two dimensions has also been formulated, although only circular and spherical robots are treated there. These studies discuss the existence of a free movable space for the object that is closed off by the robots; hence path connectivity of the free space of the captured object is an important matter for investigating whether the object is caged or able to escape. As mentioned above, confinement by a caging formation has been studied in previous works, particularly for two dimensional caging problems. Additionally, although path connectivity can be examined using probabilistic search algorithms, the difficulty of the caging constraint has not been quantified. This paper therefore aims to describe the caging problem in robotics from the following viewpoints: to apply the formulation to arbitrary robots and objects, and to reveal the difficulty of the caging constraint mathematically. From a mathematical viewpoint, we recall the fundamental fact that a compact connected $k$-dimensional topological manifold in $n$-dimensional Euclidean space can be wrapped by the $(n-1)$-dimensional sphere $S^{n-1}$, which corresponds to holding in the real world. This fact is described well by the homotopy group, and is based on the mathematical fact that $S^{n-1}$ in $\mathbb{R}^{n}$ divides $\mathbb{R}^{n}$ into its inner side and outer side. The subspace $M$ has the configuration space $[M]$. Let us fix the sphere $S^{n-1}$; it then divides $[M]$ into the part $[M]_{in}$ of the inner side of $S^{n-1}$, the part $[M]_{out}$ of the outer side, and the intersecting part $[M]_{int}$, i.e., $[M]=[M]_{in}\coprod[M]_{out}\coprod[M]_{int}$. If a configuration belongs to $[M]_{in}$, the object is held inside the sphere. On the other hand, caging is to divide the configuration space $[M]$ mathematically. This is a fundamental problem of caging. There are many mathematical studies of this problem for caging certain proper geometrical objects using special robots, as mentioned above. Further, Fruchard and Zamfirescu geometrically studied the conditions under which a circle holds a convex object. Escaping from a cage is closely related to a combinatorial explosion and to the wire puzzle; it means that there might be a difference between practical caging and complete caging.
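Path connectivity of a discretized free space can be checked with a standard search. The following sketch, assuming numpy and considering only translations of a point object (a drastic simplification of the full configuration space), voxelizes a planar scene and uses breadth-first search to decide whether the object can escape to the boundary, i.e., whether its free-space component is bounded; the ring-of-obstacles scene is hypothetical:

```python
import numpy as np
from collections import deque

def can_escape(free, start):
    """BFS on a voxelized free space: the object is caged iff no collision-free
    path from 'start' reaches the boundary of the region."""
    n, m = free.shape
    seen = np.zeros_like(free, dtype=bool)
    q = deque([start]); seen[start] = True
    while q:
        i, j = q.popleft()
        if i in (0, n - 1) or j in (0, m - 1):
            return True                      # reached the outer boundary: escaped
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if free[a, b] and not seen[a, b]:
                seen[a, b] = True; q.append((a, b))
    return False                             # caged: free component is bounded

free = np.ones((21, 21), dtype=bool)
free[5, 5:16] = free[15, 5:16] = False       # closed ring of obstacles
free[5:16, 5] = free[5:16, 15] = False
print(can_escape(free, (10, 10)))            # False: caged
free[5, 10] = True                           # open a gap in the ring
print(can_escape(free, (10, 10)))            # True: escapes through the gap
```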
When we capture a complicated object by caging, we propose that this difficulty should be proactively considered from a practical viewpoint. Further, if we treat the Euclidean moves probabilistically, we could assign a natural measure on the moves. Let us consider the $n$-dimensional real Euclidean space $\mathbb{R}^{n}$ and the $n$-dimensional Euclidean group $\mathrm{SE}(n)$. An element $g$ of $\mathrm{SE}(n)$ acts on each point $x$ of $\mathbb{R}^{n}$ by $g\cdot x$, such that for $g_{1},g_{2}\in\mathrm{SE}(n)$, $(g_{1}g_{2})\cdot x=g_{1}\cdot(g_{2}\cdot x)$, and for the identity element $e$, $e\cdot x=x$. The action of $\mathrm{SE}(n)$ on $\mathbb{R}^{n}$ has the matrix representation: for $g\in\mathrm{SE}(n)$ there are elements $A\in\mathrm{SO}(n)$ and $b\in\mathbb{R}^{n}$ such that, for $x\in\mathbb{R}^{n}$,
\[
g=\begin{pmatrix} A & b\\ 0 & 1\end{pmatrix},\qquad g\begin{pmatrix} x\\ 1\end{pmatrix}=\begin{pmatrix} Ax+b\\ 1\end{pmatrix}.
\]
In this article we refer to a $k$-dimensional topological manifold in $\mathbb{R}^{n}$ as a _$k$-subspace_. We say that the $k$-subspaces $S_{1}$ and $S_{2}$ of $\mathbb{R}^{n}$ are _congruent_ if there is an element $g$ of $\mathrm{SE}(n)$ such that $S_{2}=g\,S_{1}$; we denote it by $S_{1}\sim S_{2}$, and write $[S]$ for the congruence class of $S$. The quotient space of the set of the $k$-subspaces divided by $\mathrm{SE}(n)$ is denoted by $\mathfrak{M}$, which classifies the shapes of subspaces in $\mathbb{R}^{n}$. We call $\mathfrak{M}$ the _moduli of the shapes_; $[S]\in\mathfrak{M}$. For the Lie group $\mathrm{SE}(n)$ there is its Lie algebra $\mathfrak{se}(n)$, whose elements generate one-parameter subgroups; for economy of notation, we use the same notation for its matrix representation.

[lm:liealg] For $g=\begin{pmatrix} A & b\\ 0 & 1\end{pmatrix}\in\mathrm{SE}(n)$, where $A\in\mathrm{SO}(n)$ and $b\in\mathbb{R}^{n}$, the inverse is $g^{-1}=\begin{pmatrix} A^{-1} & -A^{-1}b\\ 0 & 1\end{pmatrix}$; a straightforward computation leads to the result.

Here we recall the properties of $\mathfrak{so}(n)$:

[lm:liealg_so(n)]
1. The matrix representation of an element $X$ of $\mathfrak{so}(n)$ is antisymmetric, $X^{\top}=-X$, so that $\exp(tX)\in\mathrm{SO}(n)$ for $t\in\mathbb{R}$.
2. The maximal rank of the matrix representation of $X\in\mathfrak{so}(n)$ is $n$ if $n$ is even and $n-1$ otherwise.
3. For the matrix representation we have the natural decomposition $\mathbb{R}^{n}=\ker X\oplus\mathrm{im}\,X$, i.e. the cokernel agrees with the kernel of $X$. Using the Euclidean inner product, this follows since $\langle Xv,w\rangle=-\langle v,Xw\rangle$, so the image of $X$ is orthogonal to its kernel.

Though the action of the Euclidean move could be regarded as a rotation in projective space via $\mathrm{SL}(n+1)$, the simple Euclidean move is given by the following lemma:

[lmm:rt] For $X\in\mathfrak{se}(n)$, the one-parameter family $g(t)=\exp(tX)\in\mathrm{SE}(n)$, $t\in[0,1]$, defines a continuous path from the identity; we call such a path a _simple Euclidean move_.

Let $[M]_{\mathcal{K}^{c}}$ be the subset of the configuration space $[M]$ consisting of the placements of $M$ that avoid the caging set $\mathcal{K}$. For $M_{1},M_{2}\in[M]_{\mathcal{K}^{c}}$, suppose there exists a continuous family $h\in C([0,1],\mathrm{SE}(n))$ satisfying $h(0)=e$, $h(1)M_{1}=M_{2}$ and $h(t)M_{1}\in[M]_{\mathcal{K}^{c}}$ for every $t\in[0,1]$. If such an $h$ exists, we say that $M_{1}$ and $M_{2}$ are $\mathcal{K}$-congruent, denoted by $M_{1}\sim_{\mathcal{K}}M_{2}$, and $h$ is a $\mathcal{K}$-congruent homotopy for $M_{1}$ and $M_{2}$. If we cannot find such an $h$, the two configurations fail to be $\mathcal{K}$-congruent. If every pair of configurations in $[M]_{\mathcal{K}^{c}}$ is $\mathcal{K}$-congruent to each other, we say that $[M]_{\mathcal{K}^{c}}$ is connected under caging. If we can find a pair $M_{1}$ and $M_{2}$ of $[M]_{\mathcal{K}^{c}}$ which are not $\mathcal{K}$-congruent, we call $\mathcal{K}$ a complete $\mathcal{K}$-caging set (Definition [def:caging]), and we say that we can completely cage $[M]_{\mathcal{K}^{c}}\in\mathfrak{M}_{\mathcal{K}^{c}}$. The configuration space $[M]_{\mathcal{K}^{c}}$ has a non-trivial geometrical structure in general. In fact, Fruchard and Zamfirescu considered the similar problem in which the caging set is a circle, and the caged region $[M]_{\mathcal{K}^{c}}(\subset[M])$ is not convex from a practical point of view. If the homotopy can be reduced to a concatenation of $\ell$ simple Euclidean moves but not to $\ell-1$ of them, we say that it is of $\ell$-th type, and that $M_{1}$ and $M_{2}$ are $\ell$-th $\mathcal{K}$-congruent. Further, if $M_{1}$ and $M_{2}$ are congruent but not $\mathcal{K}$-congruent, or cannot be $\ell$-th $\mathcal{K}$-congruent for any finite $\ell$, we say that $M_{1}$ and $M_{2}$ are $\infty$-th $\mathcal{K}$-congruent.
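A small sketch, assuming numpy and scipy, of the homogeneous-matrix representation of $\mathrm{SE}(2)$, the closed-form inverse from the lemma above, and a simple Euclidean move $\exp(tX)$ with $X\in\mathfrak{se}(2)$; the particular angles and translations are arbitrary:

```python
import numpy as np
from scipy.linalg import expm   # scipy assumed available

def se2(theta, b):
    """Homogeneous 3x3 matrix for g = (A, b) in SE(2): x -> A x + b."""
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    g = np.eye(3); g[:2, :2] = A; g[:2, 2] = b
    return g

def inv(g):
    """Closed-form inverse from the lemma: (A, b)^{-1} = (A^{-1}, -A^{-1} b)."""
    Ai = g[:2, :2].T                          # A in SO(2), so A^{-1} = A^T
    gi = np.eye(3); gi[:2, :2] = Ai; gi[:2, 2] = -Ai @ g[:2, 2]
    return gi

g = se2(0.7, np.array([1.0, -2.0]))
print(np.allclose(g @ inv(g), np.eye(3)))     # True

# A simple Euclidean move: the one-parameter path t -> exp(t X), X in se(2),
# with an antisymmetric rotation block and a translation part.
X = np.array([[0.0, -1.0, 0.5],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
print(expm(0.3 * X))                          # an element of SE(2)
```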
Figure 1: (a), (b), (c), and (d) show the piecewise Euclidean moves of the shaped object.

The decomposition into such piecewise moves is related to the link problem and to knot theory; thus the higher congruence is a profound problem. Some kinds of caging are surely based on the braid group of knot theory when the fundamental group of the complement is non-trivial: there might be configurations in $[M]_{\ck^c}$ that cannot be joined by a simple Euclidean move. It reminds us of parking in a small garage and of wire puzzles; it is also closely related to a probabilistic treatment of connected paths in $[M]_{\ck^c}$. When we cage an object using figures or objects of daily life, we do not need a complete caging in the sense of Definition [def:caging]; we only have to determine that the order of the k-th congruence is not small. In other words, roughly speaking, there is a degree of difficulty of taking the inside of the cage to its outside, though the inside and the outside are not rigorous notions mathematically. In daily life we need not discriminate between a higher type of congruence and a complete caging set. Though it is very difficult to introduce a measure on the path space in general, we could measure the difficulty of caging in this way, and the probabilistic treatment of connected paths in $[M]_{\ck^c}$ is not so complicated. Even for the first caging, it is not easy to decide whether a given configuration is the first caging or not, because the dimension of SE(3) is six. Determination of the first caging means the determination of a topological property. For example, if we reduce the continuous problem to a finite one by expressing the concerned region and moves in terms of voxels with 100 points per dimension, we have to deal with data on the order of 100^6, which is huge; we cannot handle it practically at this stage. Thus, in order to avoid these problems, there are several attempts and proposals, including the C-closure method; the second named author has investigated the problem using the C-closure concept. In another direction, intuitive geometrical features such as loop shapes and the double fork and neck help us derive sufficient conditions for the caging constraint.

Hiroyasu Hamada, National Institute of Technology, Sasebo College, 1-1 Okishin-machi, Sasebo, Nagasaki 857-1193, Japan; Institute of Mathematics for Industry, Kyushu University, Motooka 744, Nishi-ku, Fukuoka 819-0395, Japan.
Satoshi Makita, National Institute of Technology, Sasebo College, 1-1 Okishin-machi, Sasebo, Nagasaki 857-1193, Japan.
Shigeki Matsutani, National Institute of Technology, Sasebo College, 1-1 Okishin-machi, Sasebo, Nagasaki 857-1193, Japan; Institute of Mathematics for Industry, Kyushu University, Motooka 744, Nishi-ku, Fukuoka 819-0395, Japan. smatsu.ac.jp
Caging an object using robots such as a multifingered hand is a crucial problem in robotics. However, the question of what caging means for general geometrical objects and robots has not been well described mathematically, although there have been many rigorous studies on methods for caging an object with particular robots. In this article we investigate the caging problem more mathematically and describe it in terms of recursion of simple Euclidean moves. Using this description, we show that caging has a degree of difficulty which is closely related to a combinatorial problem and to wire puzzles. It implies that, from a practical viewpoint, the difficulty plays an important role in capturing an object by caging. *Keywords*: caging, Euclidean move, wire puzzle, robotics. *AMS*: 51M04, 57N15, 57N35, 70B15, 70E60.
the structure of magnetic field is often an important factor in many energetic processes in the solar corona .this especially refers to the topological features of magnetic structure such as null points , separatrix surfaces , and separator field lines .they serve as preferred sites for the formation of current sheets and the corresponding accumulation of the free magnetic energy .the magnetic reconnection process induced in the current sheets at some critical parameters allows the accumulated magnetic energy to convert into other forms : thermal , radiative and kinetic energy of plasma and accelerated particles .this process is considered to be a driving mechanism of many energetic phenomena in the solar atmosphere . over the last decade, it also became clear that the geometrical analogs of the separatrices , the so - called quasi - separatrix layers ( qsls , ) , have similar properties . there are indications that the qsls are probably more ubiquitous than the true separatrices .this increases the significance of the problem of determining qsls in a given magnetic configuration . in comparison with the separatrices, the determining of qsls requires a more sophisticated technique , which is based on a point - wise analysis of the magnetic field line connectivity .the basic quantity in this technique is the squashing degree or factor of elemental magnetic flux tubes .this quantity has previously been defined for the planar geometry , which provides a good approximation for describing magnetic structures in active regions with the characteristic size smaller than the solar radius .such an approximation , however , is hardly applicable for a global description of magnetic connectivity in the solar corona including the open magnetic field of the coronal holes .the corresponding large - scale structure of magnetic fields is also of substantial interest for solar physics , especially , for understanding solar eruptions .so the respective generalization of the above technique must allow us to determine for the coronal volume bounded by the photospheric and solar - wind surfaces .this immediately raises technical problems , which do not exist in the case of the planar geometry .first , both these boundary surfaces are closed , and therefore , none of them admits a global regular system of coordinates . 
to avoid a coordinate singularity of a spherical - pole type , at least two overlapping coordinate systems ( coordinate charts ) must be used in this case for describing the locations of the field line footpoints on each of the boundries .second , the solar - wind boundary surface generally can not be a sphere , but some other curvilinear surface whose geometry depends on the coronal magnetic field .these two requirements of the technique can be satisfied only by using a covariant approach to the description of with the coordinate systems that are generally different for each of the boundaries .the derivation of such a covariant expression for the squashing factor is one of the goals of the present work .the second goal of the work is to make an essential refinement of the squashing factor itself .the problem is that the large values of may be caused not only by squashing of elemental flux tubes in the volume but also by a projection effect at the boundary surfaces .the latter occurs at the field lines which are nearly touching the boundary at least at one of the footpoints .this effect , in particular , takes place in the vicinity of the bald patches ( bps , ) , which are the segments of the photospheric polarity inversion line ( pil ) , where the coronal field lines touch the photosphere .when analyzing magnetic connectivity , it is important to discriminate between the projection effect and volumetric squashing . for this purpose ,we derive a covariant expression for the perpendicular squashing factor , which describes the squashing of elemental flux tubes only in the directions orthogonal to the field lines . sections [ s : q_cov ] and [ s : q_perp ] present the derivations of and and demonstrate on the examples of planar and spherical geometry how to apply these general expressions .the difference between and is considered in detail in [ s : qvsqp ] by calculating and comparing these quantities for two particular magnetic configurations .the obtained results are summarized in [ s : s ] .consider a plasma - magnetic configuration in a finite volume with a smooth boundary of an arbitrary shape .it may generally consist of two or even more surfaces , for example , the photosphere and the solar - wind surface form a boundary for the entire solar corona .each of the two footpoints of a given magnetic field line may belong in general to any of these surfaces .we will use the designations launch " and target " for the footpoints and parts of the boundary surfaces at which the field lines start and end up .let and be the systems of curvilinear coordinates at the launch and target boundaries , respectively .the magnetic field lines connecting these boundaries define a mapping determined by some vector - function .the local properties of this mapping are described by the jacobian matrix .\label{d } \end{aligned}\ ] ] for each field line , this matrix determines a linear mapping from the tangent plane at the launch footpoint to the tangent plane at the target footpoint , so that a circle in the first plane is mapped into an ellipse in the second plane ( fig .[ f : f1]a ) .the aspect ratio of such an ellipse defines the degree of a local squashing of elemental flux tubes , which means that any infinitesimal circle centered at a given launch point is mapped along the field lines into an infinitesimal ellipse with this aspect ratio at the target footpoint .this generalizes a coordinate - free definition of the squashing factor to the case of curvilinear boundaries , whose tangent planes are 
generally not the same , as is in the case of plane boundaries considered in and . to derive an analytical expression forthe aspect ratio of the above ellipse , let us introduce first a vector function that describes in a three - dimensional ( 3d ) cartesian system of coordinates the locations of the footpoints at the launch boundary .then the vectors determine at this boundary the covariant vector basis tangent to the -coordinate lines .thus , is the corresponding covariant metric tensor , which determines local lengths and angles at the launch boundary .the dot here stands for the usual scalar product in 3d euclidean space . using ( [ ek ] ) and the standard gramm - schmidt procedure, one can construct an orthonormal basis hereafter ] of the covariant metric .the value is simply plus an additional value , which is independent of and whose expression does not matter for the present consideration .what actually matters is that runs values from to when and are tracing , respectively , the above circle and ellipse .the minimum and maximum correspond here to the minor and major axes of the ellipse , respectively , so that its aspect ratio is the large values of this ratio do not differ much from its asymptotic value note also that , since inverting equation ( [ are ] ) yields and . therefore equation ( [ q ] ) will be used as a covariant definition of the squashing factor .it is evident from the derivation of that this value is invariant to the direction of field line mapping .indeed , the inverse mapping implies locally that is a maximum stretching coefficient and is a minimum shrinking coefficient .such coefficients will coincide with the lengths of the major and minor axes of the ellipse obtained from a circle of a unit radius due to this inverse mapping .thus , although this new ellipse has different lengths of axes , their ratio is the same as for the previous one , which proves the statement .a formal proof of the statement is also not difficult to obtain by using the derived expressions of and in a similar way as in the case of plane boundaries .the invariancy of to the direction of field line mapping justifies its status of a correct measure for the magnetic connectivity .note also that for a given infinitesimal flux tube is a ratio of its cross section areas at the target and launch points .therefore , since the magnetic flux is conserved along the tubes , this value coincides with the corresponding inverse ratio of the normal field components , so that where and are normal components of the magnetic field to the boundaries at the conjugate launch and target footpoints . in practice ,the numerical calculation of through this ratio is more precise than that given by equation ( [ dltc ] ) and therefore it should be used for computing in equation ( [ q ] ) .the above mathematical construction is related to the cauchy - green deformation tensor known in the theory of elasticity .it can be written in our notations as where and represent , respectively , a finite deformation and covariant metric tensor of an elastic two - dimensional body . the contraction of the cauchy - green tensor with a pair of orthonormal vectors and yields the tensor such that its eigenvalues coincide with the squared semiaxes and of the above ellipse . 
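For a quick numerical illustration of this construction (my own sketch, not from the paper): once the footpoint mapping is known, in practice from field-line tracing, the Jacobian matrix can be estimated by central differences, and for plane boundaries the squashing factor follows as Q = N^2/|Delta|. The mapping `shear_map` below is a made-up stand-in for an actual field-line mapping.

```python
import numpy as np

def jacobian(mapping, x, y, eps=1e-6):
    """Central-difference Jacobian D of the footpoint mapping
    (x, y) -> (X, Y); `mapping` would normally trace a field line."""
    dX_dx = (np.array(mapping(x + eps, y)) - np.array(mapping(x - eps, y))) / (2 * eps)
    dX_dy = (np.array(mapping(x, y + eps)) - np.array(mapping(x, y - eps))) / (2 * eps)
    return np.column_stack([dX_dx, dX_dy])

def squashing_factor(D):
    """Plane-boundary squashing factor Q = N^2 / |Delta|, with N^2 the sum
    of squared Jacobian elements and Delta = det D."""
    return np.sum(D**2) / abs(np.linalg.det(D))

# Stand-in mapping: a shear (X, Y) = (x + 3y, y); analytically
# N^2 = 1 + 9 + 0 + 1 = 11 and Delta = 1, so Q = 11.
shear_map = lambda x, y: (x + 3.0 * y, y)
D = jacobian(shear_map, 0.2, -0.5)
print(squashing_factor(D))   # ~11
```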
The square root of the ratio of these eigenvalues defines, in accordance with ([are]) and ([q]), the squashing factor Q. It should be emphasized that this analogy is possible only in our general approach, where two independent systems of coordinates are used for describing the locations of the conjugate footpoints. This allows us to apply coordinate transformations only at the launch boundary, while keeping the coordinates at the target boundary unchanged. With respect to these transformations, the object defined by ([c]) does behave as a covariant second-rank tensor. The latter is not valid, however, if one global 3D system of coordinates is used for describing the entire field configuration, so that both boundaries are then subject to coordinate transformations. This has only a methodological meaning and does not exclude, of course, an application of the derived expressions to such particular cases. For example, consider a closed magnetic configuration in the half space $z \ge 0$ with global Cartesian coordinates $(x, y, z)$ and the photospheric boundary plane $z = 0$. The field line mapping is then given by $(x, y) \mapsto (X(x,y), Y(x,y))$. There are no more differences between upper and lower indices or between contravariant and covariant metrics; the latter simply turn into the Kronecker symbols $\delta_{ij}$ and $\delta^{kl}$. So equations ([n2c]), ([dltc]) and ([dlt2]) are reduced to
$$N^{2} = \left(\frac{\partial X}{\partial x}\right)^{2} + \left(\frac{\partial X}{\partial y}\right)^{2} + \left(\frac{\partial Y}{\partial x}\right)^{2} + \left(\frac{\partial Y}{\partial y}\right)^{2}, \qquad \Delta = \frac{\partial X}{\partial x}\frac{\partial Y}{\partial y} - \frac{\partial X}{\partial y}\frac{\partial Y}{\partial x} = \frac{B_{z}}{B_{z}^{*}},$$
as required in this case. Consider now a more complicated class of configurations, where both open and closed magnetic field lines are present. Let the configuration be described in one global system of spherical coordinates $(r, \theta, \phi)$, where $r = r_{\sun}$ corresponds to the photospheric launch boundary, while $r = r_{*}$ represents the target boundary. For the open field lines reaching the spherical solar-wind boundary we put $r_{*}$ equal to its radius, while for the closed ones we take $r_{*} = r_{\sun}$. Thus the field line mapping is $(\phi, \theta) \mapsto (\Phi(\phi,\theta), \Theta(\phi,\theta))$, which yields
$$\left[g_{ij}\right]^{*} = \left( \begin{array}{cc} r_{*}^{2}\sin^{2}\Theta & 0 \\ 0 & r_{*}^{2} \end{array} \right) \label{gijs*}$$
and
$$\left[g^{kl}\right] = \left( \begin{array}{cc} r_{\sun}^{-2}\sin^{-2}\theta & 0 \\ 0 & r_{\sun}^{-2} \end{array} \right), \label{gkls}$$
where the contravariant metric at the launch boundary is obtained from the corresponding covariant metric by inverting it simply as a matrix. Using these expressions and equation ([n2c]), we obtain
$$N^{2} = \frac{r_{*}^{2}}{r_{\sun}^{2}} \left\{ \frac{1}{\sin^{2}\theta}\left[ \sin^{2}\Theta \left(\frac{\partial \Phi}{\partial \phi}\right)^{2} + \left(\frac{\partial \Theta}{\partial \phi}\right)^{2} \right] + \sin^{2}\Theta \left(\frac{\partial \Phi}{\partial \theta}\right)^{2} + \left(\frac{\partial \Theta}{\partial \theta}\right)^{2} \right\}. \label{n2sph}$$
Equations ([dltc]) and ([dlt2]) in this case become
$$\Delta = \frac{r_{*}^{2}\sin\Theta}{r_{\sun}^{2}\sin\theta}\left( \frac{\partial \Phi}{\partial \phi}\frac{\partial \Theta}{\partial \theta} - \frac{\partial \Phi}{\partial \theta}\frac{\partial \Theta}{\partial \phi} \right) = \frac{B_{r}}{B_{r}^{*}}.$$
The obtained expressions for $N^{2}$ and $\Delta$ have seeming singularities at the poles, where they actually reduce in the generic case to resolvable indeterminacies. These indeterminacies are artificial and unrelated to any special properties of the magnetic structure; they are caused by the pole singularities inherent in the global spherical system of coordinates used. Moreover, their appearance is unavoidable in any other global system of coordinates on a closed sphere-like surface because of its intrinsic topological properties. Therefore this may generally reduce the precision of a numerical evaluation of Q near the pole-like points. To avoid such indeterminacies, at least two overlapping coordinate charts on each of the spherical boundaries are required.
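The covariant prescription is compact enough to state in a few lines of code. A sketch under the stated conventions (my own; `spherical_metrics` hard-codes the diagonal (phi, theta) metrics written above, and flux conservation replaces det D, as recommended in the text):

```python
import numpy as np

def covariant_Q(D, g_launch_contra, g_target_cov, Bn_launch, Bn_target):
    """Covariant squashing factor Q = N^2 / |Delta|, with
    N^2 = g*_{ij} D^i_k D^j_l g^{kl} and |Delta| = |Bn / Bn*|
    (normal field components used instead of det D for robustness)."""
    N2 = np.einsum('ij,ik,jl,kl->', g_target_cov, D, D, g_launch_contra)
    return N2 / abs(Bn_launch / Bn_target)

def spherical_metrics(theta_launch, r_launch, theta_target, r_target):
    """Diagonal (phi, theta) metrics on spherical boundaries:
    contravariant at the launch sphere, covariant at the target one."""
    g_contra = np.diag([1.0 / (r_launch * np.sin(theta_launch))**2,
                        1.0 / r_launch**2])
    g_cov = np.diag([(r_target * np.sin(theta_target))**2,
                     r_target**2])
    return g_contra, g_cov

# Sanity check: a purely radial field maps (phi, theta) identically between
# spheres r = 1 and r = 2.5; elemental tubes are unsquashed, so Q = 2.
g_contra, g_cov = spherical_metrics(0.7, 1.0, 0.7, 2.5)
D = np.eye(2)
print(covariant_Q(D, g_contra, g_cov, Bn_launch=1.0, Bn_target=1.0 / 2.5**2))
```

The value Q = 2 is the minimum possible, attained whenever the infinitesimal circle maps to a circle.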
for this purpose , it is sufficient to use two systems of spherical coordinates turned with respect to each other on the right angle in the -direction .switching from one of such systems in its polar regions to the other , as suggested , for example , by , makes it possible to resolve the problem of the pole indeterminacies .the required expressions for such calculations of can be obtained again from equations ( [ n2c ] ) and ( [ dltc ] ) with properly modified metric tensors .to find the perpendicular squashing factor , we need to know the field line mapping between infinitesimal planes orthogonal to the field lines at the conjugate footpoints ( fig .[ f : f1]b ) .note first that the projection effect at the boundaries is local , because it depends only on the orientations of the tangent planes at the boundaries with respect to the vectors of magnetic field at the footpoints .so the required mapping between the indicated orthogonal planes can be obtained from the respective mapping between the tangent planes by correcting it only at such footpoints .this implies that can be expressed in terms of the same values as and , in addition , the field vectors and , respectively , at the launch and target footpoints .we will derive by using the same procedure as for while modifying it in accordance with the above comments .the vector tracing the circle of unit radius is given by the same expression ( [ o ] ) .however , since it lies now in the plane perpendicular to the field line at the launch point , the corresponding orthonormal basis is chosen to be orthogonal to , so that this vector is mapped along the field lines into a vector lying in the plane perpendicular to the local field at the target footpoint .the respective mapping can be represented as a composition of three others according to the following diagram : here the mapping projects the vector along onto the plane tangential to the launch boundary to yield the vector which has a vanishing component along the vector perpendicular to such a plane .then this vector is mapped by the differential of the field line mapping determined by the jacobian matrix ( [ d ] ) into the vector which lies in the plane tangential to the target boundary .finally , the obtained vector is projected by along at the target footpoint onto the plane perpendicular to to result in eliminating now and from ( [ o_])([otp ] ) and using ( [ o ] ) with the basis from equations ( [ e1p])([e2p ] ) , we express in terms of -basis and magnetic field vectors at the conjugate footpoints .this allows us to calculate and then in a similar way as done before when deriving .the result is where the asterisk has the same meaning as in equations ( [ ot2 ] ) , ( [ n2c ] ) and ( [ dltc ] ) .thus , the obtained differs from only in the form of the metric tensors , which are determined now by the asterisk here implies automatically that the corresponding values refer to the target footpoint , therefore the tilde used for indicating this fact in intermediate equation ( [ otp ] ) is omitted in the final expression ( [ gpij * ] ) .one can also check that = \left[g_{\perp lk}\right]^{-1}$ ] and = \frac{\left(\bm{b\cdot}{{\bm \epsilon}}_{1 } \bm{\times}{{\bm \epsilon}}_{2}\right)^{2 } } { \bm{b}^{2 } } , \label{gp}\end{aligned}\ ] ] so that equation ( [ dltpc ] ) reduces to if , in addition , the magnetic flux conservation is taken into account .this expression should be used instead of ( [ dltpc ] ) for computing in ( [ qpc ] ) for the same reason that ( [ dlt2 ] ) should be used in ( [ q 
] ) .thus , formulas ( [ qpc])([dltpc2 ] ) completely define the covariant expression for .let us see now how these formulas work in the case of a closed magnetic configuration described in a global cartesian system of coordinates with the photospheric boundary plane .the formulas are significantly simplified in this case to yield where the values of the corresponding covariant and contravariant components of vectors and tensors do not differ from each other .note also that expression ( [ n2pp ] ) apparently diverges near the pil , where the normal field component vanishes .this is actually not a true singularity but rather an indeterminacy , which is resolved to give a low limit of if the pil has no bps . at the bps , such an indeterminacyis also resolved but it may generally have different limits of at the left and right sides of bps . thus , may experience a jump when crossing bps , unless the configuration is symmetric as in the example of [ s : hfttwst ] .similar to [ s : q_cov ] , consider also a more general class of configurations , where both open and closed field lines are present and bounded by a spherical solar - wind surface of radius and the photosphere of radius as before .it is convenient in this case to use matrix notations , in which equation ( [ n2pc ] ) is written as where \displaystyle\frac{\partial \theta}{\partial \phi } & \displaystyle\frac{\partial \theta}{\partial \theta } \end{array } \right ) \label{do}\end{aligned}\ ] ] is the jacobian matrix of the field line mapping .the covariant and contravariant metrics at the target and launch boundaries , respectively , are determined by the following matrices : ^ { * } \nonumber \\ & & = r^{2 } _ { * } \left ( \begin{array}{cc } \displaystyle \sin^{2}\theta \left(1-\frac{b_{\phi}^{*2}}{{\bm b}^{*2}}\right ) & \displaystyle -\sin\theta \frac{b_{\phi}^ { * } b_{\theta}^{*}}{\bm{b}^{*2 } } \\ [ 8pt ] \displaystyle -\sin\theta\frac{b_{\phi}^ { * } b_{\theta}^{*}}{\bm{b}^{*2 } } & \displaystyle 1-\frac{b_{\theta}^{*2}}{\bm{b}^{*2 } } \end{array } \right ) , \qquad \label{gpo * } \\ [ 8pt ] & & g^{\perp } \equiv \left[g^{\perp lk}\right ] \nonumber \\ & & = r^{-2}_{\odot } \left ( \begin{array}{cc } \displaystyle\sin^{-2}\theta\left(1+\frac{b_{\phi}^{2}}{b_{r}^{2}}\right ) & \displaystyle \frac{b_{\phi}b_{\theta}}{\sin\theta\ , b_{r}^{2 } } \\ [ 10pt ] \displaystyle\frac{b_{\phi}b_{\theta}}{\sin\theta\ , b_{r}^{2 } } & \displaystyle 1+\frac{b_{\theta}^{2}}{b_{r}^{2 } } \end{array } \right ) , \label{gpo}\end{aligned}\ ] ] where if a given footpoint belongs to a closed field line and otherwise .the matrix has two types of singularities , which actually lead only to indeterminacies in expression ( [ n2po ] ) .the first indeterminacies take place at the poles of spherical coordinates ( ) .they are already discussed above in connection with equation ( [ n2sph ] ) .the second indeterminacies take place at the pil , where .they are resolved in much the same way as occurred in the previous case of cartesian geometry to give a finite value of .if the pil has no bps , the length of the field lines vanishes near the pil , so that and , which results in .the presence of bps implies , however , a strong projection effect in their neighborhood and the corresponding singularities in the jacobian matrix at the bps .these singularities are exactly of the same type and value as those in but opposite in signs .so they cancel each other in equation ( [ n2po ] ) to generally provide different limits at the different sides of the bps , as 
discussed above for the case of the plane boundaries .it should be also noted that there is one more type of plausible singularities not yet discussed .these singularities have to appear if a given configuration has null points of magnetic field and the corresponding separatrix field lines . in this case , the elemental flux tubes enclosing the separatrix field lines split at each of the null points to produce singularities in the derivatives of the jacobian matrix .such singularities are due to volumetric rather than surface properties of magnetic configurations . therefore ,if present , they appear in both and distributions thereby indicating the existence of magnetic nulls in the corona .since the numerical derivatives are estimated as a ratio of finite coordinate differences , their absolute values may not exceed the ratio of the coordinate range at the target boundary to a chosen increment of coordinates at the launch boundary .the presence of this upper bound on possible values of the numerical derivatives prevents an overflow error in computations of the squashing factors at the footpoints of the null - point separatrices .a more detailed consideration of this type of singularity requires a special study , which goes far beyond the scope of the present work .to see the difference between and , their distributions are compared below for several magnetic configurations .an emphasis is made on their potentiality for determining so - called hyperbolic flux tubes ( hfts ) .they are defined as two intersecting qsls with extended and narrow photospheric footprints characterized by very large values of . neglecting the possible curving and twisting of an hft , its cross section variation in the longitudinal direction can be respresented as {ii1 } \label{-x-}\ ] ] in other words , the width of one of the qsls is shrunken to the thickness of the other in the process of the mapping of their cross sections along magnetic field lines .this is possible because the field lines in hfts exponentially converge in one transversal direction and diverge in the other .such a property is typical for hyperbolic flows in the theory of dynamical systems , which was coined in the term hft .the examples below demonstrate that is a more accurate quantity than for characterizing hfts , although also provides valuable complementary information on the magnetic structure . the most simple magnetic configuration in which one would expect the presence of an hft is the so - called x - line configuration , whose field is determined in cartesian coordinates by where is a dimensionless field gradient characterizing the strength of the transverse field compared to the longitudinal field on a characteristic length scale .the right - hand side of ( [ bx ] ) represents the same field in terms of the euler potentials which are constant along the field lines .the use of and significantly simplifies the calculation of the squashing factors in this configuration .suppose that its volume is restricted by , so that the planes are the corresponding boundaries .the constancy of and means that the boundary points and are related by and . in terms of the notations used in expressions ( [ n2 ] ) for the plane boundaries , this means that and .since everywhere , equation ( [ dlt ] ) reduces simply to , so that the resulting squashing factor is thus , is constant over the entire planes and , hence , it does not determine any qsl in the x - line configuration with the plane boundaries . 
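The constancy of Q for plane boundaries is easy to reproduce numerically. A sketch assuming the field takes the form B = B0(hx, -hy, 1) between the planes z = -1 and z = +1 (my reading of the elided field definition): the field lines integrate to the explicit footpoint mapping (x, y) -> (x e^{2h}, y e^{-2h}), whose Jacobian is the same everywhere, so Q = 2 cosh 4h at every footpoint.

```python
import numpy as np

h = 0.8  # assumed value of the dimensionless transverse field gradient

def xline_map(x, y):
    """Footpoint mapping of B = B0*(h*x, -h*y, 1) from z = -1 to z = +1:
    dx/dz = h*x and dy/dz = -h*y integrate to pure exponentials."""
    return np.array([x * np.exp(2 * h), y * np.exp(-2 * h)])

def Q(x, y, eps=1e-6):
    """Plane-boundary squashing factor Q = N^2 / |det D| by differencing."""
    D = np.column_stack([(xline_map(x + eps, y) - xline_map(x - eps, y)) / (2 * eps),
                         (xline_map(x, y + eps) - xline_map(x, y - eps)) / (2 * eps)])
    return np.sum(D**2) / abs(np.linalg.det(D))

# The same value, 2*cosh(4h), at every footpoint: no QSL is picked out.
for x, y in [(0.1, 0.1), (2.0, -3.0), (-5.0, 0.4)]:
    print(Q(x, y), 2 * np.cosh(4 * h))
```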
The same conclusion was reached earlier by using the norm N defined by ([n2]): since N is constant here, so is Q in this configuration. Nevertheless, evidence of the presence of QSLs in the X-line configuration was found when it is bounded by cubic, hemispheroidal and spherical surfaces. On the other hand, it is clear from a general point of view that there should be an HFT in such a configuration irrespective of the shape of the boundary surfaces. Indeed, in unbounded space this configuration has two genuine separatrix planes, which can be regarded as a limiting case of the bounded configuration with the boundaries moved off to infinity. Taking this into account, one could expect that the proper measure for QSLs must grow near these planes much more strongly than in the remaining volume, so as to indicate the corresponding QSLs there. This would provide, in the limit of large boundary separation, the expected continuous transition of such bounded configurations with QSLs to the unbounded configuration with the genuine separatrices. The failure of Q to determine an HFT in this simple case looks very surprising in light of its remarkable success in other, more complicated field configurations (see [s:hfttwst]). The reason for this failure actually lies in the above-mentioned projection effect, which is extremely large for the chosen type of boundaries. The transverse field component here grows linearly with the distance from the X-line, while the longitudinal component remains constant; so the farther from the X-line a given field line meets the boundary, the more the two become aligned with each other. Such an explanation is fully confirmed by calculations of the perpendicular squashing factor in this case. To derive an analytical expression for Q_perp, let us choose (x, y) as coordinates on both boundary surfaces, so that the quantities in ([qpc])-([dltpc2]) are evaluated in these variables. Assume for generality that the boundaries are defined by $z = z_{\mp}(x, y)$; then, according to ([xi]) and ([eta]), the vector functions $(x, y, z_{\mp}(x, y))$ define, respectively, the launch and target boundary surfaces. These formulas are needed for calculating ([ek]), ([glk]), ([ei]), ([n2pc])-([gpij*]), ([gplk]), ([dltpc2]) and ([bx]) in the chosen coordinates. With the help of such calculations, equation ([qpc]) yields
$$Q_{\perp} = \frac{2\cosh\left[2h\left(z_{+}-z_{-}\right)\right] + h^{2}\left(x_{+}^{2}+y_{+}^{2}+x_{-}^{2}+y_{-}^{2}\right)}{\sqrt{\left[1+h^{2}\left(x_{+}^{2}+y_{+}^{2}\right)\right]\left[1+h^{2}\left(x_{-}^{2}+y_{-}^{2}\right)\right]}}, \label{qpx}$$
in which $(x_{\pm}, y_{\pm}, z_{\pm})$ determine the coordinates of the conjugate footpoints at the defined boundaries. These expressions determine Q_perp as a function of the Euler potentials $\xi$ and $\eta$. Note also that $\xi = x$ and $\eta = y$ at $z = 0$, so expressions ([qpx])-([ypm]) define in addition the distribution of Q_perp in the plane $z = 0$, to which it is mapped along the field lines from the boundaries. More generally, the combination of these expressions with equations ([xi]) and ([eta]), resolved with respect to $x$ and $y$ as in ([xpm]) and ([ypm]), provides a parametrical representation of Q_perp, with $\xi$ and $\eta$ as parameters, in any plane between the boundaries. In the particular case of plane boundaries, we have to put $z_{\pm} = \pm 1$ in ([qpx])-([ypm]). Using then ([xi]), ([eta]), ([xpm]) and ([ypm]), expression ([qpx]) for Q_perp can be rewritten even as an explicit function of position. By comparing ([qpx]) and ([qx]), one can see also that the square-root factors containing the footpoint coordinates are responsible in this case for eliminating the projection effect.
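The same result can be reached without the closed-form expression by building Q_perp numerically: push a small disk orthogonal to B at the launch point along neighbouring field lines and project the image orthogonally to B at the target. The sketch below is my own construction for the assumed X-line field; only `B` and `to_target` would change for another configuration.

```python
import numpy as np

h = 0.8  # assumed transverse field gradient

def B(p):
    """Assumed X-line field B = (h*x, -h*y, 1)."""
    x, y, z = p
    return np.array([h * x, -h * y, 1.0])

def to_target(p):
    """Follow the field line from p to the plane z = +1 (closed form here)."""
    x, y, z = p
    return np.array([x * np.exp(h * (1 - z)), y * np.exp(-h * (1 - z)), 1.0])

def perp_basis(b):
    """Two orthonormal vectors spanning the plane orthogonal to b."""
    b = b / np.linalg.norm(b)
    a = np.array([1.0, 0, 0]) if abs(b[0]) < 0.9 else np.array([0, 1.0, 0])
    e1 = np.cross(b, a); e1 /= np.linalg.norm(e1)
    return e1, np.cross(b, e1)

def Q_perp(p0, eps=1e-6):
    """Perpendicular squashing factor: map a small disk orthogonal to B at the
    launch point along field lines; projecting the image orthogonally to B at
    the target is equivalent to sliding it along B onto the perpendicular
    plane there. Q_perp = (s1^2 + s2^2)/(s1*s2) from the singular values."""
    w1, w2 = perp_basis(B(p0))
    f0 = to_target(p0)
    e1, e2 = perp_basis(B(f0))
    M = np.empty((2, 2))
    for j, w in enumerate((w1, w2)):
        d = (to_target(p0 + eps * w) - f0) / eps   # pushed-forward vector
        M[0, j], M[1, j] = np.dot(e1, d), np.dot(e2, d)
    s = np.linalg.svd(M, compute_uv=False)
    return (s[0]**2 + s[1]**2) / (s[0] * s[1])

# Near the X-line Q_perp ~ 2*cosh(4h); it grows along the x and y axes,
# revealing the HFT ridges that the constant Q misses entirely.
for p in [np.array([0.05, 0.05, -1.0]), np.array([3.0, 0.05, -1.0])]:
    print(Q_perp(p))
```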
as a result of this , the distribution of in the plane shows very pronounced ridges " along the and axes ( fig .[ f : f2]a ) by revealing an expected hft with a characteristic x - type intersection of qsls along the x - line . according to equations ( [ x ] ) and ( [ y ] ) , such a structure shrinks and expands along these axes exponentially fast with to give a typical hft variation of its cross section ( see diagram [ [ -x- ] ] ) .with growing , the ridges of the distribution and so the corresponding qsls become thinner and thinner by extending on larger and larger distances from the x - line .thus , the perpendicular squashing factor defines indeed an hft such that it continuously transforms in the limit of large into the separatrix planes and . however , the peripheral field lines in this hft are still nearly parallel to the boundaries , which causes too strong of a variation of the hft cross section .aesthetically more pleasing hft in the x - line configuration can be obtained by using the boundaries which are orthogonal to the field lines .these are iso - surfaces of magnetic potential having the shape of hyperbolic paraboloids .it is natural to chose them passing through the points , which means that for such iso - surfaces .this condition using ( [ xpm ] ) and ( [ ypm ] ) yields which is a transcendental equation for the functions entering in expression ( [ qpx ] ) .this equation is not difficult to solve numerically for given and and to use the respective solution for the calculation of .an example of the resulting distribution in the plane at is shown in figure [ f : f2]b .the corresponding hft with the magnetic surface defined by and boundaries is presented in figure [ f : f2]c . for the chosen type of boundaries , , so both squashing factors define the same hft .the magnetic field for this hft has the simplest analytical form ; the hyperbolic paraboloids are also relatively simple boundary surfaces .so we believe that this example provides the simplest possible hft relevant for theoretical studies of basic mhd processes , such as magnetic pinching and reconnection , in three dimensions .the considered x - line configuration with boundaries is an important example , where , in contrast to , succeeds in determining an expected hft .however , this example is not representative enough to make a general conclusion on the potentialities of and for detecting qsls . because the field lines in such a configuration behave in a rather artificial way over a major part of the boundaries .a better comparison of and can be done by using a more realistic field . for this purpose, we have chosen the analytical model of a twisted magnetic field , hereafter called the t&d model .it describes approximate equilibria of a circular magnetic flux rope , whose interior force - free field is continuously embedded into a potential background field .the latter is produced by fictitious subphotospheric sources consisting of two magnetic monopoles of opposite signs and a line current , all located at the axis of symmetry of the rope .the axis itself is placed some depth below the photospheric plane and the minor radius of the rope is assumed to be much smaller than the major one and the distance between the monopoles . to compare and in detail ,we have computed their distributions for three sets of parameters which differ only in values of .two of these values ( and ) and all the remaining parameters are chosen to be exactly the same as in the t&d model . 
By growing this parameter while keeping the others fixed, we imitate an emergence of the flux rope from below the photosphere. In this process the configuration passes continuously through three distinct topological phases. For sufficiently small values there is a single BP and its separatrix surface; the first configuration represents this topological state. The corresponding distributions of Q and Q_perp are shown in Figures [f:f3]a and [f:f3]d, respectively. With further growth, this BP and the associated separatrix surface bifurcate into two parts to give birth to a BP separator field line; the second configuration represents this topological phase. The corresponding distributions of Q and Q_perp are shown in Figures [f:f3]b and [f:f3]e, respectively. The points marked on these figures are the footpoints of the BP separator and its photospheric contact points. Further growth leads to a complete disappearance of the bifurcated BPs and the associated separatrix surfaces; the third configuration represents this last phase. The corresponding distributions of Q and Q_perp are shown in Figures [f:f3]c and [f:f3]f, respectively. The magnetic field at these parameters becomes topologically trivial, since its field line mapping is continuous everywhere. The whole structure can be continuously transformed into a simple arcade-like configuration with the help of a suitable photospheric motion. Thus, as the flux rope emerges, the topological complexity of the configuration first increases and then abruptly decreases. On the contrary, the geometrical distortion of the field line mapping gradually increases during this process; both squashing factors Q and Q_perp continue to grow in narrow strips of the photospheric plane. It is clearly seen from the Q and Q_perp distributions plotted with the same grayscaling (Fig. [f:f3]) that Q_perp does not exceed Q anywhere and that the difference between them becomes smaller as the values grow. The most significant difference between them is seen in the first two phases near the BPs and the footprints of the associated separatrix surfaces. As previously anticipated, Q always rises in these regions, while Q_perp does not, except near the contact points of the BP separator (Fig. [f:f3]e). The value Q_perp does rise there, but only to give birth to a part of the HFT footprints, which mature eventually in the third phase (Fig. [f:f3]f). As concerns the indicated rise of Q, it is mainly caused by the projection effect: the field lines which are close to the BP separatrix surfaces approach the photosphere near the BPs at a small angle to the horizontal, which strongly distorts the footprints of the corresponding elemental flux tubes.
comparing and at that the latter is valid for the central part of the pil as well .the discussed features of the distributions become more transparent if one saturates the grey shading in the plots at the values ( fig .[ f : f4 ] ) .these new plots show that the qsls based on the distribution are characterized by a thinner and more uniform thickness .their footprints acquire at a clear fishhook - like shape in each of the photospheric polarities with maximums of reaching .the qsls rooted at such fishhooks " intersect each other by combining themselves into an hft .this structural feature seems to be very robust , because it appears even in twisted configurations which are not in force - free or magnetostatic equilibrium .one can see from figure [ f : f5]a that , except for an essential twisting distortion , the cross section of such an hft varies exactly according to diagram ( [ -x- ] ) .it has yet to be proved , but it seems to be quite natural that this hft is pinched into a vertical current sheet below the flux rope by its upward movement when the kink or torus instability is developed in the configuration at a sufficiently large twist of the field lines in the rope .this interpretation is very important for understanding the properties of sigmoidal structures in flaring configurations .figure [ f : f5]a suggests that the sigmoids in such configurations are simply pinching hfts illuminated by a hot plasma material which appears there due to the reconnection process in the above - mentioned vertical current sheet .this seems to be valid at least for the third topological phase of the flux rope emergence , while an additional interaction with the photosphere must be involved at the first and second topological phases , where the bps are present .panels ( _ d_)(_f _ ) in figures [ f : f3 ] and [ f : f4 ] demonstrate that the footprints of the bp separatrix surfaces follow very close to the hft footprints emerging gradually with growing .this implies the corresponding similarity in the shapes of such separatrix surfaces and hfts .so the explanations of the sigmoids that rely on either the presence of the bps or the hfts are not alternative but rather complementary , since they refer to different phases of the flux rope emergence .it should be noted also that both and distributions ( fig .[ f : f4 ] ) contain at the border of the flux rope two less pronounced horseshoe - like features with maximums .the qsl rooted at these horseshoes " has a helical shape ( fig .[ f : f5]b ) with a slightly varying cross section along the field lines .so this qsl has a structure qualitatively different from those two which form the hft .the comparison of and shows that the squashing of the flux tubes in this helical qsl is only in part due to the projection effect .the distribution demonstrates that the major contribution to comes from the shearing of the twisted field lines in the rope .thus , both distributions reveal a helical qsl which is a part of the inner border layer of the flux rope . 
In this respect, the considered example demonstrates that our squashing factors help identify flux ropes themselves. It is not a problem, of course, to locate the flux rope in the T&D model, where its parameters are known from the construction of the model. Yet identifying flux ropes in more complicated configurations, obtained, for instance, numerically from magnetogram data, is a real problem. If such configurations are topologically trivial, like those in the third phase of our example, the determination of QSLs seems to be the only method for identifying flux ropes. As shown above, both squashing factors allow us, in the case of the T&D model, to determine similar QSLs, except that Q_perp is more advanced than Q near BPs, whenever they appear. Therefore we think that, in general, if the numerical grid used for computing Q_perp is fine enough for detecting possible BP separatrix surfaces by sudden spikes in the Q_perp distribution, the use of Q_perp alone would be sufficient for the structural analysis of configurations. In practice, however, the required resolution of the grid cannot always be easily foreseen. Also, the computational cost of Q_perp is not really high, since the same input data as for Q can be used. So it is sensible to compute both distributions at a time and compare them in the same way as we did in the considered example. Some redundancy of the information contained in such distributions is not superfluous but useful, especially in the case of complicated real magnetic configurations. Thus, from the practical point of view, the value Q should be considered not as obsolete but rather as a complementary characteristic of magnetic connectivity. We have derived a covariant form of the squashing factor Q, which enables us to determine quasi-separatrix layers (QSLs) in both closed and open magnetic configurations with an arbitrary shape of boundaries. The corresponding expression for Q assumes that the Jacobian matrix of the field line mapping and the metric tensors at the footpoints are known. The expression also admits that such input data can be represented with the help of two different coordinate systems for determining the locations of the conjugate footpoints on the boundaries. This provides a firm theoretical basis for a global description of the field line connectivity in the solar corona. To eliminate the projection effect at the field lines which nearly touch the boundary, the perpendicular squashing factor Q_perp is also derived in a similar covariant form. The value Q_perp defines the degree of squashing of elemental magnetic flux tubes only in the directions orthogonal to the field lines. In the definition of Q_perp, the boundaries enclosing the magnetic configuration constrain only the length of the flux tubes while not affecting their cross sections at the footpoints.
For calculating Q_perp, the vectors of magnetic field at the footpoints are required in addition to the same input data as for Q. The use of both covariant squashing factors is demonstrated by calculating them for boundaries with planar and spherical geometries. The properties of Q and Q_perp are then compared by considering two examples of magnetic configurations. The first example is a classical X-line configuration of potential magnetic field in a plasma volume restricted by two boundary surfaces. It is easy to show that for plane boundaries perpendicular to the X-line the value Q is constant, so the Q distribution does not allow us to define any QSLs in such a configuration. The reason for this failure lies in the projection effect, which is very strong for the field lines distant from the X-line. We have also calculated an analytical expression of Q_perp for the same field but with boundaries of an arbitrary shape. In the case of the plane boundaries, this new value, in contrast to Q, has a non-uniform distribution, which does reveal the expected two QSLs. These QSLs intersect each other, combining themselves into a hyperbolic flux tube (HFT). A more elegant HFT is obtained for the X-line configuration with boundaries orthogonal to the field lines. To make a better comparison of the properties of Q and Q_perp, a second magnetic configuration, more relevant for solar physics, is considered. The respective field is defined by using the Titov & Démoulin (1999) model of a force-free flux rope embedded into a potential background field. Contrary to the case of the X-line configuration restricted by plane boundaries, both the Q and Q_perp distributions reveal QSLs in the twisted configuration. These distributions are similar everywhere except near bald patches (BPs) and the footprints of the associated separatrix surfaces, whenever the BPs exist. By definition, the value Q_perp is free of the projection effect, so that it rises near BPs only if the corresponding flux tubes are subject to a volumetric squashing. This is not the case, of course, for the value Q, which always rises in such regions of the photosphere. So, in comparison with Q, the value Q_perp shows itself again to be a superior characteristic for the analysis of magnetic connectivity. Nevertheless, we have argued that it is more practical in general to compute both squashing factors when analyzing the structure of a given magnetic configuration. This does not require significant additional effort, while making it easy to discriminate between the volumetric squashing of elemental flux tubes and the surface projection effect at the boundaries.
The study of magnetic connectivity in the solar corona reveals a need to generalize the field line mapping technique to an arbitrary geometry of the boundaries and of the systems of coordinates. Indeed, the global description of the connectivity in the corona requires the use of the photospheric and solar wind boundaries. Both are closed surfaces and therefore do not admit a global regular system of coordinates. At least two overlapping regular systems of coordinates for each of the boundaries are necessary in this case to avoid spherical-pole-like singularities in the coordinates of the footpoints. This implies that the basic characteristic of magnetic connectivity, the squashing degree or factor Q of elemental flux tubes, must be rewritten in covariant form. Such a covariant expression of Q is derived in this work. The derived expression is very flexible and highly efficient for describing the global magnetic connectivity in the solar corona. In addition, a general expression for a new characteristic Q_perp, which defines the squashing of the flux tubes in the directions perpendicular to the field lines, is determined. This new quantity makes it possible to filter out the quasi-separatrix layers whose large values of Q are caused by a projection effect at the field lines nearly touching the photosphere. Thus, the value Q_perp provides a much more precise description of the volumetric properties of the magnetic field structure. The difference between Q and Q_perp is illustrated by comparing their distributions for two configurations, one of which is the Titov & Démoulin (1999) model of a twisted magnetic field.
the difficulty in establishing a sensible global solution to the problem of low ( reynolds number ) viscous flow around simple objects , where the flow is uniform at infinity , has fascinated applied mathematicians for just over 150 years .stokes ( 1851 ) established that there was no solution to the two - dimensional , steady , incompressible , equations for asymptotically uniform flow around a cylinder in the biharmonic limit .this situation is now routinely described in modern tutorial discussions .see , for example , chapter 7 of acheson ( 1990 ) for an exercise on stokes paradox and a discussion of the corresponding situation for the sphere . the biharmonic equation can be solved in a neighbourhood of the cylinder ( i.e. a circle ) by any stream function of the form but there is no choice of for which for large .the analysis of this problem has lead to several classic papers ( oseen , 1910 ; lamb , 1911 ) and its understanding through the use of matched asymptotic expansions ( mae ) is one of the triumphs of perturbation theory .the reader is referred to van dyke ( 1964 ) for his classic survey of the work of kaplun ( 1957 ) , proudman and pearson ( 1957 ) and other key references . however , the mae approach , despite its immense power and diversity of expanding applications , does not give a clean resolution of the original difficulty in that such methods rely essentially on computing and then matching solutions to the problem defined on two regions : close to the cylinder and far from the cylinder .the purpose of this paper is to address the problem of finding _ global _solutions for the low reynolds number limit , i.e. to resolve the original paradox .there are , of course , other approaches to the paradox .recently , villas boas has considered the problem from a three - dimensional perspective and points out that there is then no paradox .a large class of fluids can be characterized by their density , , a scalar field not presumed to be constant , and their dynamic viscosity .the flow is characterized by a velocity vector field , and an associated scalar pressure field .conservation of mass is expressed by the continuity equation and the conservation of momentum is expressed by the navier - stokes equations acting on _ vectors _ should be understood as the ordinary laplacian acting on cartesian components . ] if the fluid is incompressible in the sense that is a constant in both time and space , we have the condition : to analyze matters further , we introduce the vorticity vector in the following discussion we demand incompressibility but allow for non - zero vorticity . using simple identities from vector calculus the navier - stokes equationsmay then be recast in the form taking the curl of this , we arrive at the vorticity equation where the kinematic viscosity .since the velocity field is divergence - free , we may introduce a vector potential such that and furthermore we may choose it so that it is divergence free : the vector potential can be reduced to a single function when there is an appropriate symmetry .the resulting object is a stream function .for example , planar 2d flow is obtained by setting ( and note that this automatically satisfies satisfies the divergence condition ) next we note that under the assumption that satisfies and the vorticity equation becomes for problems where it is possible to identify a natural length scale and a natural speed , it is normal practice to perform a non - dimensionalization of the variables and introduce the reynolds number . 
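The impossibility claim at the heart of the paradox can be verified symbolically. A minimal check (my own, assuming the elided stream function is the standard separable biharmonic solution psi = (A/r + B r + C r log r + D r^3) sin(theta)): the far-field condition psi ~ r sin(theta) forces C = D = 0 and B = 1, after which the two no-slip conditions on r = 1 are inconsistent.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
A = sp.symbols('A')

# The far-field behaviour psi ~ r*sin(theta) kills the r*log(r) and r**3
# terms and fixes the coefficient of r to 1; only A remains free.
g = A / r + r
psi = g * sp.sin(theta)

# Confirm psi solves the biharmonic equation in polar coordinates.
def lap(f):
    return sp.diff(f, r, 2) + sp.diff(f, r) / r + sp.diff(f, theta, 2) / r**2
assert sp.simplify(lap(lap(psi))) == 0

# No slip on the cylinder r = 1 requires g(1) = 0 and g'(1) = 0.
eqs = [g.subs(r, 1), sp.diff(g, r).subs(r, 1)]
print(sp.solve(eqs, [A], dict=True))   # [] : no solution -> Stokes paradox
```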
then the navierstokes equation becomes ( after rescaling the variables suitably to ) : throughout this paper we shall work in units in which the _ radius _ of the cylinder is taken to be unity . in such units the reynolds number is here based on the radius on the diameter . ] and is given by .the two stream functions are related by and furthermore etc .the old historical approach to the limiting case when and the flow is time - independent is to take the view that the non - linearities may then be ignored ( provided the non - linear term is well behaved ) and the time - independent navierstokes equations reduce to which is the biharmonic limit , also known as stokes flow .it is now well known ( see for example , chapter 8 of van dyke ( 1964 ) ) that the neglect of the non - linear terms can lead to inconsistencies , as is evidenced by the lack of any solution for asymptotically uniform two - dimensional flow past a cylinder .here we introduce the scalar vorticity function and write the navierstokes as the pair if we had exponential growth in vorticity , , then the linearized form would be or in terms of a single condition : which is the laplace - helmholtz equation .let us focus temporarily on the linearized time - dependent case based on the laplace - helmholtz equation .we let .it is an elementary exercise ( and we shall give equivalent details in a different context later ) to establish that for all , the laplace - helmholtz equation has a solution of the form \sin ( \theta ) \label{tdsf}\ ] ] that satisfies in other words , no matter how small the value of , the paradox does not exist . to summarize :if we have some time - dependence in the simplified form of exponential growth in vorticity the linearization is precisely the laplace - helmholtz problem rather than the biharmonic problem .addition of this type of extra variability , no matter how small , shows that the paradox evaporates , i.e. the existence of the paradox is unstable .villas boas , who has considered the problem from a three - dimensional perspective , also demonstrates that the paradox evaporates when extra _ spatial _ variability is incorporated . 
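This claim is easy to check numerically. A sketch under my assumed reading of the elided solution ([tdsf]), with radial profile g(r) = r + B/r + C K_1(k r) and k^2 = sigma/nu: the two no-slip conditions determine B and C for any k > 0, while the exponential decay of K_1 preserves the uniform stream at infinity.

```python
import numpy as np
from scipy.special import kv, kvp

def helmholtz_stream_coeffs(k):
    """Solve g(1) = 0 and g'(1) = 0 for g(r) = r + B/r + C*K1(k*r),
    an assumed radial profile of the Laplace-Helmholtz solution."""
    # g(1)  = 1 + B + C*K1(k)      = 0
    # g'(1) = 1 - B + C*k*K1'(k)   = 0
    M = np.array([[1.0, kv(1, k)],
                  [-1.0, k * kvp(1, k)]])
    B, C = np.linalg.solve(M, [-1.0, -1.0])
    return B, C

k = 0.05                       # k^2 = sigma/nu may be arbitrarily small
B, C = helmholtz_stream_coeffs(k)
g = lambda r: r + B / r + C * kv(1, k * r)
dg = lambda r, e=1e-6: (g(r + e) - g(r - e)) / (2 * e)
print(g(1.0), dg(1.0))         # ~0, ~0 : no slip on the cylinder
print(g(2000.0) / 2000.0)      # ~1    : uniform stream recovered far away
```

The system is solvable for every k > 0 because its determinant, K_1(k) + k K_1'(k) = -k K_0(k), never vanishes; this is the sense in which the paradox evaporates.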
In the subsequent analysis we will use the Laplace-Helmholtz model as an approximation for the time-_independent_ 2D case, where the modification is regarded as a simple way of approximating the combined effect of the non-linearities via a linear term. The question is whether we can do this in a sensible and self-consistent manner. We shall look at this in a variety of ways, starting with a rather _ad hoc_ approach. The production of a global solution to the equations requires a slightly unusual approach. We take the time-independent equation as our starting point and note some obvious facts. First, if we have a stream function satisfying Laplace's equation, then it satisfies equation ([oldfour]) identically. Second, in the high Reynolds number limit, although this is a singular limit, the Jacobian term should vanish identically, so that any solution of the corresponding reduced equation satisfies ([oldfour]); in the linear case this last equation is of Helmholtz type. Now introduce a (possibly complex) number of order 1 together with a parameter to be determined. In particular, if we consider solutions of a parametrized Helmholtz equation, then the Jacobian will vanish. Next suppose that we consider a more general stream function combining the two parts; a trivial calculation then tells us how the residual decomposes. Our approach is therefore to combine solutions for potential flow with a solution of the Helmholtz equation, eq. (23), that also satisfies the fourth-order equation given by eq. (24) without the identically zero non-linear term. When the parameter is small this PDE of course approaches the biharmonic equation. Then eq. ([expandedns]) offers the possibility that we can solve a PDE whose right side can be calculated and analyzed. The introduction of the small term on the right side of eq. ([expandedns]) may seem like an artificial device, but given that (a) we are seeking solutions of the problem as the Reynolds number tends to zero; (b) we are not modifying higher derivatives in the equations; and (c) we have a valid perturbation equation in eq. (28), we consider it valid to proceed with this approach. The author recognizes that the argument in the previous subsection is somewhat _ad hoc_, and initially gives no idea how to justify the choice of the parameters or, as we shall introduce, the relevant composite parameter. However, there is another route to justifying this approach. The full 2D time-independent incompressible Navier-Stokes equation is precisely eq. ([oldfour]). We already know from the existence of the Stokes paradox that simple linearization by dropping the non-linear term is hopeless. We might therefore consider starting from other linearizations, for example the Helmholtz linearization, where the new parameter has to be determined either from experiment or from deeper theoretical considerations. The linear starting point for the analysis is then not the biharmonic system but the Helmholtz-Laplace equation. In principle we can then consider justifying the choice of the parameter by matching with the full non-linear operator as closely as possible under a suitable norm. We will prove in the next section that _for all values of the parameter the Helmholtz-Laplace equation admits a solution satisfying both the boundary conditions on the circle and the velocity condition at infinity_, obtained by considering elementary solutions that are linear combinations of solutions of the Laplace equation and solutions of the Helmholtz equation. Note that this approach and our _ad hoc_ scheme are linked by a simple relationship between the parameters; their determination poses some
interesting challenges. We shall see in section 8 that there are good theoretical grounds for a particular choice; for now we will leave both parameters general. We now consider the well-trodden route to the analysis of a stream function associated with a flow that is uniform at infinity and satisfies appropriate boundary conditions on the cylinder. Using polar coordinates, we therefore want psi to approach r sin(theta) at infinity and to vanish, together with its radial derivative, on r = 1. We build the solution under the assumption that one part satisfies the Laplace equation and the other the Helmholtz condition; any sum of the two will satisfy the Laplace-Helmholtz equation. Given that the angular behaviour at infinity is sin(theta), we make the standard separable assumption psi = g(r) sin(theta) and seek solutions accordingly. The potential flow part (as usual) will be taken to be of the form (a r + b/r) sin(theta) for some constants. Our analysis will differ from Stokes' classic (Stokes, 1851) treatment in that the remaining part does not satisfy the separated biharmonic equation. Instead we use the Helmholtz equation, which in separated form is just the modified Bessel equation of order one in the variable k r. The general solution to this ODE is a combination of $I_{1}(kr)$ and $K_{1}(kr)$, the standard "modified" Bessel functions. It is now quite clear that we can construct the solution so as to preserve the boundary condition at infinity by discarding the $I_{1}$ contribution; the function $K_{1}$ decays exponentially if the real part of $k$ is positive. We are left with two arbitrary constants that can be determined by satisfying the boundary conditions on the circle given by eq. ([circlebc]). This is a matter of elementary algebra using some standard Bessel function identities. The final result for the total stream function can be simplified to:
$$\psi = \left[ r - \left(1 + \frac{2K_{1}(k)}{k K_{0}(k)}\right)\frac{1}{r} + \frac{2}{k K_{0}(k)}\,K_{1}(k r) \right] \sin(\theta). \label{streamfn}$$
We note some interesting facts about this expression. First, it has the _right_ behaviour as $r \to \infty$ provided the real part of $k$ is positive, as the Bessel function $K_{1}$ decays exponentially. Second, if we keep $r$ fixed and finite and let $k \to 0$ we obtain $\psi \approx K_{0}(k)^{-1}\left(r\log r - r/2 + 1/(2r)\right)\sin(\theta)$, and we recover a multiple of Stokes' (1851) solution satisfying the boundary conditions on the cylinder (but not at infinity); furthermore, the multiple is now reminiscent of that arising from the methods of matched asymptotic expansions. The claim therefore is that it is eq. ([streamfn]) that essentially resolves the paradox, as all the boundary conditions are satisfied, whereas the strict biharmonic limit does not satisfy them. The third observation concerns the neighbourhood of the surface of the cylinder, where for small $k$ the stream function is given by eq. ([nearcyl]). We also note that the limit as $k \to \infty$ is just the potential flow limit, $(r - 1/r)\sin(\theta)$. _Mathematica_ code for the evaluation of the stream function and velocity field is given in the appendix. At this stage we have no knowledge of how to fix the parameter $k$, and we now turn our attention to this issue and a more detailed analysis of eq. ([expandedns]). We note first that our working assumption of a power dependence of $k$ on the Reynolds number has not actually been necessary; all we need is that $k \to 0$ as $R \to 0$. We also note that a suitable choice recovers the result (van Dyke, 1964) that the limiting stream function for fixed $r$ and small $R$ is, from eq. ([nearcyl]), exactly the first term of the MAE result. (With the Reynolds number based on the diameter, the constants in the logarithmic denominator are adjusted accordingly.) Clearly other choices can be considered, as can other powers of $R$ or even a more general dependence still.
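A Python analogue of the appendix's Mathematica code is sketched below. The coefficients implement eq. ([streamfn]) as reconstructed above, obtained from the two no-slip conditions via the identity d/dx[x K_1(x)] = -x K_0(x); the printed checks confirm no slip on the cylinder and the uniform stream far away.

```python
import numpy as np
from scipy.special import kv

def stream_function(r, theta, k):
    """Total stream function of eq. (streamfn):
    psi = [r - (1 + 2*K1(k)/(k*K0(k)))/r + 2*K1(k*r)/(k*K0(k))] * sin(theta)."""
    c = 2.0 / (k * kv(0, k))
    g = r - (1.0 + c * kv(1, k)) / r + c * kv(1, k * r)
    return g * np.sin(theta)

def velocity(r, theta, k, e=1e-6):
    """(u_r, u_theta) = ((1/r) dpsi/dtheta, -dpsi/dr) by central differences."""
    ur = (stream_function(r, theta + e, k) - stream_function(r, theta - e, k)) / (2 * e * r)
    ut = -(stream_function(r + e, theta, k) - stream_function(r - e, theta, k)) / (2 * e)
    return ur, ut

k = 0.1
print(velocity(1.0, 0.7, k))    # ~ (0, 0): no slip on the cylinder
print(velocity(500.0, 0.7, k))  # ~ (cos 0.7, -sin 0.7): uniform stream
```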
however , in section 8 we will establish a theoretical basis for estimating these parameters , and argue that .we must now analyze the right side of eq .( [ expandedns])we call it we do so without any assumption as the the form of .we have , changing variables in the jacobian to polar coordinates : \ ] ] recall that we set .having imposed the boundary conditions , we have it follows that where is the reduced jacobian : and denotes the ` helmholtz artifact ' introduced by our approach , and denotes the non - linear jacobian term . some calculation with bessel function identities leads to from which it is manifest that and the non - linear terms vanish on the cylinder boundary irrespective of the choice of .furthermore , the asymptotic behaviour of the bessel functions tells us that provided , as .we deduce that is a bounded function for all .what is its order as a function of ?a straightforward estimate may be given by looking in a neighbourhood of the cylinder .it is straightforward to establish that where the latter expression applies as .we deduce that in the _ immediate _ neighbourhood of the cylinder which is provided only that as .although grows to a maximum as is increased away from unity , before reaching a maximum and then decaying for large , some numerical experiments with _ mathematica _ confirm that the maximum of slowly decreases as decreases to zero .so we can assert that is well behaved and is .when we consider , it is easy to see that has a maximum on , and that as .so this term also behaves .if we desire that the helmholtz artifact tends to zero faster than the non - linear term ( which is desirable for the credibility of our approach ) it is then natural to specify in the form and to demand that . in particular , the choice arranges that and we have established that .it also decays exponentially at infinity .the calculation of the drag can be done by integrating the pointwise force on the cylinder over its surface .the pointwise force has two components .one involves the local rate of strain in the fluid , and the other is the pressure force .the first requires a purely local calculation , but the second requires an integration of the pressure equation from infinity to the cylinder .a question is how this second part can be carried out _ without _ any _ global _ representation of the flow field .for the _ specific _ case of low reynolds number calculations with a certain symmetry there are ways around the problem that we shall exploit presently . 
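The exponential far-field decay invoked above is easy to confirm numerically. The short check below (with an arbitrary illustrative eps) compares K_1 against its leading asymptotic form, sqrt(pi/2x) e^{-x}, showing that the Bessel-K contribution to the stream function is negligible far from the cylinder, so only the potential-flow behaviour survives at infinity.

....
import numpy as np
from scipy.special import kv

eps = 0.1
for r in [50.0, 200.0, 800.0]:
    x = eps*r
    exact = kv(1, x)
    asymptotic = np.sqrt(np.pi/(2*x))*np.exp(-x)   # K_1(x) ~ sqrt(pi/2x) e^{-x} for large x
    print(r, exact, asymptotic, exact/asymptotic)  # ratio -> 1; both -> 0 exponentially
....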
to make these matters clear ,we shall summarize a first - principles calculation of the force using the results for cylindrical polar coordinates for the rate of strain as given by appendix a of acheson ( 1990 ) .the fluid velocity is given ( in our units ) by on the cylinder the rate - of - strain tensor has components , and the stress tensor , so the force on the cylinder boundary is we project this into the -direction and integrate over the circle to get the following formula for the drag ( force per unit length on the cylinder ) : \theta\ ] ] for low we estimate the pressure using the momentum equation in the form and a short calculation gives we need to integrate this in from infinity to a general point on the circle .in general this is awkward without a global form of .but if has the form then we have \ \ { \rm where}\ \{ \cal l } [ g(r ) ] \equiv \frac{1}{r}\frac{\partial\ } { \partial r}\left(r\frac{\partial g}{\partial r } \right)- \frac{g}{r^2}\ ] ] and the pressure equation becomes : + \ethvec \sin\theta \frac{\partial\ } { \partial r}{\cal l}[g(r)]\right\}\ ] ] we do the integration from infinity along and then work around the circle .carrying this out we obtain |_{r=1}\ ] ] we now evaluate the total integral for the force per unit length to obtain -\frac{\partial^2 \!g(r)}{\partial r^2 } \right]\ ] ] the drag coefficient for such a 2d problem is defined following tritton ( 1959 ) as and is now easily seen to be given by - \frac{\partial^2\!g(r)}{\partial r^2 } \right|\ ] ] evaluated on . given the vanishing of and this simplifies further to in the case given by the mae approach the stream function is given in the neighbourhood of the cylinder by eq .( 1 ) and then and are given by in our new model as is given by the radial part of eq .( [ streamfn ] ) , and this time , using some bessel identities , we obtain we recall now the original assumption that . if we fix but allow to vary the drag coefficient of our model is then we can plot the drag coefficient from our model , assuming that , with various choices for , and compare the results with those of tritton ( 1959 ) , which used an based on the diameter .tritton s data has been converted by halving _his _ reynolds number .also , in making a comparison with fig .8.5 of van dyke ( 1964 ) , it appears that van dyke has plotted , as otherwise it is not possible to reconcile that plot with tritton s data . 
in figure 1we show the data for the first three fibres used by tritton ( those that consider the lowest ) and * our model with picked to match the mae result , ; * our model with a least squares best fit ; * the one term mae result based on ; * the two term mae result based on .the choice of constants in the mae approach is itself somewhat arbitrary .that made by kaplun ( 1957 ) , giving eq .( [ nearcyl ] ) , is little more than convenience .these results give support to the new model and the agreement with experiment we get by taking about one half that implied by the mae approach is rather tantalizing .another use for the type of low reynolds numbers solution developed here is to provide a basis for _ iteration _ of the solution in powers of .the presence of the paradox obstructing a global representation in the standard approach makes this impossible .furthermore , we know from a corresponding analysis of the 3d spherical problem that even when the base solution makes sense , even the first iteration may fail : the _ stokes - whitehead _ paradox emerges .so it is of considerable interest to investigate the iteration of the system . in doingso we will keep general initially .we summarize the solution developed thus far . with given we have , combining equations ( [ expandedns],45 ) and our definitions , where and where these equations also serve to define the radial terms . from now on we will work in terms of these radial functions , so it is useful just to write down their explicit forms : we now attempt to refine the solution by writing the total solution as ideally we would like to arrange that the right side of this is given by the expansion and we write the remaining parts of the navier - stokes equation as we do not know how to solve this full non - linear system , and propose instead to treat a linearized form .the question now is whether to work with the raw system or to propose a helmholtz - damped variation along the lines of our original approach .we also need to understand which non - linear term to treat first . _ is of the form , for example , we know that is the lowest order in : , so we will give priority to killing this lowest order correction .to develop the solution we need to identify the greens s function for the biharmonic operator .in fact we do not need the full form as the angular structure of the right side is simple - we need only look for a pair of appropriate radial green s functions satisfying appropriate boundary conditions on the cylinder and at infinity . to this endwe write and require that and we also require that vanishes as and both and its first derivative vanish on . 
to proceed further we recall the elementary form of the laplacian in 2d cylindrical polar coordinates : so if is of the form then define the radial modal laplacian operator as we now have a pair of odes in the form in a simpler problem we would now construct a pair of green s functions , , with the properties that subject to , _ if it is possible _ , however , there are significant complications .the helmholtz term we have thought of as an modelling artifact has first to be understood as an optimal linear approximation to the full non - linear system .the first steps on this are considered in section 8 .we will eventually have to manage the fact that no function exists satisfying these boundary conditions .however , it is now consistent to proceed to construct , as this ( a ) exists , and ( b ) is needed for the management of the full non - linear system and its solution .we will now focus on the lowest order correction arising from the inertia term , for which is the required radial green s function .the building blocks for this are two different solutions of the ode the solution to the homogeneous problem is of the form to try to build a green s function , we would write down a pair of solutions : the condition at infinity requires that we set , so that the boundary conditions on the cylinder require that so the inner function is given by we now have four unknowns that must be obtained by imposing junction conditions to obtain the delta - function .the junction conditions are : these equations were solved and simplified using_ mathematica _v7 and the solutions are finally the two parts of the green s function are ( introducing the -dependence explicitly ) the solution for is then given formally by this stream function is as so we claim that a secondary paradox of stokes - whitehead type has been avoided . to proceedfurther we need to better understand the full non - linear system .the task now at hand is to develop a proper theoretical basis for the estimation of the parameter . to this endwe must write down the full equations . converting to polar coordinates , the navier stokes equations under consideration are let us assume that is small enough that the flow remains symmetric about the horizontal axis .we may then write a fourier decomposition then with as above , \\ \\ & = \frac{r_e}{2 r } \sum_{l=1}^{\infty } \sum_{n=1}^{\infty } \biggl[n[\sin((l - n)\theta)+\sin((l+n)\theta)]\frac{\partial \phi_l}{\partial r}{\cal l}_n \phi_n \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - l [ \sin((l+n)\theta)-\sin((l - n)\theta)]\phi_l \frac{\partial \ } { \partial r}{\cal l}_n \phi_n \biggr ] \end{split}\ ] ] doing the fourier analysis gives us \end{split}\ ] ] thus far no approximations have been made , other than to assume the flow remains symmetric about the -axis .now we approximate the model by considering only the contributions of the terms with .we then have the simpler , but coupled and non - linear system with our simple form of that is a solution of the laplace - helmholtz condition we have already determined the solution of the eq .( [ phitwonl]) is just as already given above . in order to generate maximal self - consistency of the linearization we must choose in such a way as to minimize the mismatch between the right sides of eq .( [ phionenl ] ) and eq .( [ philin ] ) . 
]let us look at the overall scaling behaviour in terms of the reynolds number .we let , so that the required matching is then of the form which strongly supports the scaling behaviour as .that is , we have a theoretical basis for setting .a determination of requires the introduction of a suitable rigorous criteria for minimizing the mismatch between the linear and non - linear forms , and this is under investigation .note that the _ full _ equations , but limited to the first two angular modes , can certainly be written _ without further approximation _ as for some unknown function , and our method can now be properly understood as that of working with some `` average '' value of , and noting that the resulting solution is free of a paradox .note also that writing the right side of this linearized system as a multiple of is not as arbitrary as it might seem , for we know that the right side of the full non - linear system vanishes identically when .that is , this laplace - helmholtz model has a proper theoretical justification , rather than merely being the basis of a convenient interpolation between the boundary conditions on the cylinder and at infinity .we could also consider generalizations where an improved _ ansatz _ for the form of is employed .the optimal average form and improved functional choices are under investigation .only once this has been done would it make sense to consider further iteration .some initial considerations suggest that as a function and that might be asymptotic to a constant independent of as , but further analysis is needed .we have constructed a _ global _stream function satisfying the two - dimensional viscous incompressible steady equations in the limit .the stream function satisfies the correct boundary conditions on a cylinder and infinity .the `` perturbation '' to the linearized equations introduced to accomplish this is of higher order in than the inertia terms , and this modification is now properly understood as a linear representation of the full non - linear theory .the results suggest improved agreement with experimental data over those obtained by the mae approach .further work is needed on this approach , in particular on comparisons with newer data sets and determination of the remaining free parameter from theoretical considerations .although the approach of this paper initially started with rather _ ad hoc _ considerations , the method developed here is founded on a deeper consideration based on the approximation of the non - linear equations with an optimal linear approximation based on equations of laplace - helmholtz type , rather than the biharmonic equation .the need to properly treat the non - linearities has also been illustrated by the observation that the presence of a paradox is unstable with respect to small changes in the system .the viscous and `` paradox - free '' stream function is given in non - iterated form by \sin ( \theta ) \ ] ] and the parameter .the drag coefficient with tritton s conventions is : the choice of from theoretical considerations has yet to be determined , but experimental drag data suggests .the stream function satisfies the correct boundary conditions on the cylinder and at infinity for all , and also reduces to that for potential flow as . _mathematica _ code for the stream function and velocity field are given in the appendix .i am grateful to bin zhou for his comments on the earlier ( 2006 ) version of this paper . 10 acheson , d.j . 1990 ._ elementary fluid dynamics_. clarendon press .kaplun , s. 
1957 Low Reynolds number flow past a circular cylinder. _J. Math. Mech._ *6*, 595-603.
Lamb, H. 1911 On the uniform motion of a sphere through a viscous fluid. _Philos. Mag._ [6], *21*, 112-121.
Milne-Thomson, L.M. 1996 _Theoretical Hydrodynamics_, fifth edition. Dover Publications.
Ockendon, J.R., Howison, S.D., Lacey, A. & Movchan, A. 2003 _Applied Partial Differential Equations_, rev. ed. Oxford University Press.
Oseen, C.W. 1910 _Ark. Mat. Astron. Fys._ *6*, No. 29.
Proudman, I. & Pearson, J.R.A. 1957 Expansions at small Reynolds numbers for the flow past a sphere and a circular cylinder. _J. Fluid Mech._ *2*, 237-262.
Stokes, G.G. 1851 On the effect of the internal friction of fluids on the motion of pendulums. _Trans. Camb. Phil. Soc._ *9*, Part II, 8-106.
Tritton, D.J. 1959 Experiments on the flow past a circular cylinder at low Reynolds number. _J. Fluid Mech._ *6*, 547-567.
Van Dyke, M. 1964 _Perturbation Methods in Fluid Mechanics_. Academic Press.
Villas Boas, M.B. 2007 Stokes paradox: a new approach (in Portuguese). Presentation at the 8th Congreso Iberoamericano de Ingeniería Mecánica.

The following code may be useful. First the stream function:

....
\[Psi][r_, \[Theta]_, \[Epsilon]_] :=
  Sin[\[Theta]] (r - (1 + 2 BesselK[1, \[Epsilon]]/\[Epsilon]/BesselK[0, \[Epsilon]])/r +
    2 BesselK[1, r \[Epsilon]]/\[Epsilon]/BesselK[0, \[Epsilon]])
....

The streamlines are then easily visualized:

....
ContourPlot[\[Psi][Sqrt[x^2 + y^2], ArcTan[x, y], 0.01], {x, -10, 10}, {y, -5, 5},
  AspectRatio -> 1/2, Contours -> 50,
  RegionFunction -> Function[{x, y}, x^2 + y^2 >= 1],
  Epilog -> Circle[{0, 0}, 1]]
....

The velocity field in Cartesian components:

....
CartesianVelocity[x_, y_, u_, \[Epsilon]_] :=
  Module[{r = Sqrt[x^2 + y^2], \[Theta] = ArcTan[x, y], ur, u\[Theta]},
    ur = -((u*((-1 + r^2)*\[Epsilon]*BesselK[0, \[Epsilon]] - 2*BesselK[1, \[Epsilon]] +
      2*r*BesselK[1, r*\[Epsilon]])*Cos[\[Theta]])/(r^2*\[Epsilon]*BesselK[0, \[Epsilon]]));
    u\[Theta] = u*(1 + r^(-2) - (BesselK[0, r*\[Epsilon]] - (2*BesselK[1, \[Epsilon]])/(r^2*\[Epsilon]) +
      BesselK[2, r*\[Epsilon]])/BesselK[0, \[Epsilon]])*Sin[\[Theta]];
    ur*{Cos[\[Theta]], Sin[\[Theta]]} + u\[Theta]*{-Sin[\[Theta]], Cos[\[Theta]]}]
....

This field can then be plotted over the exterior of the cylinder:

....
VectorPlot[If[x^2 + y^2 >= 1, CartesianVelocity[x, y, 1, 0.1], {0, 0}],
  {x, -10, 10}, {y, -5, 5},
  RegionFunction -> Function[{x, y}, x^2 + y^2 >= 1], AspectRatio -> 1/2,
  VectorScale -> Small, Epilog -> Circle[{0, 0}, 1]]
....
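For readers working outside Mathematica, the following Python sketch transcribes the CartesianVelocity routine above line for line (scipy's kv gives K_n; Mathematica's ArcTan[x, y] is numpy's arctan2(y, x)). It is a direct translation, not an independent derivation; the function name is ours.

....
import numpy as np
from scipy.special import kv   # modified Bessel functions K_n

def cartesian_velocity(x, y, u, eps):
    # direct transcription of CartesianVelocity from the Mathematica appendix
    r, th = np.hypot(x, y), np.arctan2(y, x)
    k0 = kv(0, eps)
    ur = -(u*((r**2 - 1.0)*eps*k0 - 2.0*kv(1, eps) + 2.0*r*kv(1, r*eps))
           * np.cos(th)) / (r**2 * eps * k0)
    uth = u*(1.0 + r**-2 - (kv(0, r*eps) - 2.0*kv(1, eps)/(r**2*eps)
           + kv(2, r*eps))/k0) * np.sin(th)
    return (ur*np.cos(th) - uth*np.sin(th),   # u_x
            ur*np.sin(th) + uth*np.cos(th))   # u_y

print(cartesian_velocity(1.0, 0.0, 1.0, 0.1))  # ~ (0, 0): no slip on the cylinder surface
....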
This paper proposes a solution to Stokes' paradox for asymptotically uniform viscous flow around a cylinder. The existence of a _global_ stream function satisfying a perturbative form of the two-dimensional Navier-Stokes equations for low Reynolds number is established. This stream function satisfies the appropriate boundary conditions on both the cylinder and at infinity, but nevertheless agrees with Stokes' original results at finite radius as the Reynolds number tends to zero. The Navier-Stokes equations are satisfied to a power-log power of the Reynolds number. The drag on the cylinder is calculated from first principles, and the free parameter of the approach can be chosen to give good agreement with data on drag. In this revised working paper we put our approach on a firmer mathematical basis using the Helmholtz-Laplace equation as a linear approximation to the Navier-Stokes system. In so doing we demonstrate the instability of the original paradox. We also demonstrate the absence of a paradox of Stokes-Whitehead class, and give further theoretical constraints on the free parameters of the model. Key words: Stokes paradox, fluid dynamics, Stokes flow, stream function, biharmonic equation, Helmholtz equation, low Reynolds number.
non - trivial power laws are known to characterize second order phase transitions .a great success of the theory of critical phenomena has been to develop methods allowing to predict these power laws .one of the most successful approaches used in the theory of critical phenomena is the renormalization group , which consists in trying to relate physical properties of a given system at different values of the external parameters ( like the temperature or the magnetic field ) . in the last three or four decades , other non - trivial power laws been found in all kinds of systems : transition to chaos by period doubling , geometrical problems like self avoiding walks ( which model polymers ) and random walks , sand pile models and several other self organised critical systems , coarsening , etc . in many cases , renormalization ideas could be extended to predict the exponents of these power laws . in this work ,we report recent results on simple models of genealogical trees .when one looks at the distribution of repetitions in a genealogical tree ( in the framework of the simple models defined below ) , one observes non trivial power laws .the exponents of these power laws can be calculated _ exactly _ by writing a relation on the generating function of the weights of the ancestors ( a quantity proportional to the number of times they appear in a genealogical tree ) which has the form of a simple renormalization transformation . beyond the intrinsic interest of these models to describe real genealogies , they constitute simple pedagogical examples for which renormalization ideas allow the exact prediction of non trivial exponents .let us first consider a simple neutral model of a closed population with sexual reproduction . by definition of the model ,the population size at generation in the past is and each individual at generation has two parents chosen at random among the individuals in the previous generation . here counts the number of past generations and so increases as one climbs up a genealogical tree . for simplicity we will consider either a population of constant size ( ) or a population size increasing exponentially with an average number of offsprings per couple , i.e. as counts the number of past generations; is the size of the population at present , while the constant size case corresponds to .a related model was introduced to study the genetic similarity between individuals in a population evolving under sexual reproduction , although there the two parents were distinct .we do not exclude this case here .clearly , the number of branches of the genealogical tree of any individual increases like and , as soon as the number of branches exceeds , there should be repetitions in this tree .let us denote by the number of times that an individual living at generation in the past appears in the genealogical tree of individual . at generation ,the only individual in the tree of is itself , therefore and the evolution of these repetitions satisfies the recursion the quantity we want to consider is the probability that an individual living at generation in the past appears times in the genealogical tree of individual ( living at generation ) .normalization implies the initial condition ( [ initial ] ) gives and the fact that each individual has two parents at the previous generation gives of repetitions after generations ( is not shown ) at and for a population of constant size . 
in figure 1a , and in figure 1b , .both figures show averages over 1000 samples.,width=340 ] of repetitions after generations ( is not shown ) at and for a population of constant size . in figure 1a , and in figure 1b , .both figures show averages over 1000 samples.,width=340 ] these probabilities can be measured by simulating small systems through a monte carlo procedure : for each individual of a population at generation , two parents are chosen at random among the individuals at generation .figure 1 shows the results of such simulations for two populations of constant sizes , for several values of with in fig .1a and in fig .we see that for small there are very few repetitions and decreases very fast with . on the other hand ,when increases , the shape of becomes independent of and of the population size , with a clear power law at small and a fast decay at large .figure 2 shows the distribution for several values of and a population which increases exponentially with time , . here , again , the shape becomes stationary in the interval where is large enough and is still large .this stationary shape is different from the one seen in fig . 1 . for a population size increasing by a factor at each generation .here , and averages over samples are performed .the generations shown are and .,width=340 ] the shape of becomes stationary for large and large in the sense that one gets a fixed distribution by an appropriate rescaling .in fact , introducing the rescaled quantities and where can be considered as a continuous variable for , ( [ norm1],[norm2 ] ) transform into and we expect to become a fixed distribution .this means that if we associate to each individual in the tree of at generation in the past a weight defined by the distribution of these weights becomes stationary in the scaling limit . from ( [ evolution],[wi ] )it is clear that these weights satisfy as we limit ourselves to the case of a population increasing exponentially at rate per generation ( so that ) , ( [ evolutionw ] ) reduces to the ratios can be interpreted as the probability of reaching individual by randomly climbing up the genealogical tree of . in the particular case of a population of constant size ( ) ,the factor in ( [ wp ] ) is easy to understand . for a population of increasing size ( ), there is a factor in ( [ wp ] ) instead of because of the factor in the definition ( [ wi ] ) of the weights .the key observation which allows one to calculate the distribution in the scaling limit ( large and large ) is that , for large and for large , _ the random variables which appear in the r.h.s . of ( [ wp ] ) become independent_. this is due to the fact that ( at least in the model we consider ) the weights ( of brothers and sisters ) in the r.h.s . of ( [ wp ] ) are uncorrelated .this independence , which is discussed in the appendix , will be the basis of the calculation of the fixed distribution in the following sections .one can consider some variants of the model defined above , for instance : * at each generation one could form fixed couples by making random pairs and assign to each individual at generation one of these pairs ( of parents ) chosen at random at the previous generation ( ) . in this casethe correlations between the weights would again be small in the scaling limit and they can be ignored in the r.h.s . of ( [ wp ] ) .* one can also consider an imaginary situation where each individual has parents ( instead of ) . 
in this case , the definition of the weights ( [ wi ] ) should be replaced by to keep normalized as in ( [ norm ] ) . for a population of constant size ,the evolution of the weights ( [ wp ] ) becomes as shown in the appendix , in the scaling limit , the correlations on the r.h.s .of ( [ wpp ] ) can be neglected in this case too . in the remaining of this work, we try to predict the stationary shape .the fact that the weights in the r.h.s . of ( [ wp ] )are uncorrelated greatly simplifies the problem .one can then consider that is the sum of independent identically distributed random variables , where is itself random .the probability of is clearly which for large becomes ( using the fact that ) a poisson distribution therefore for large , the number of terms ( is the number of children of ) in the r.h.s . of ( [ wp ] ) is randomly distributed according to ( [ poisson ] ) and these terms are uncorrelated .this becomes a problem of branching processes .if one introduces the generating function \rangle \label{qdef}\ ] ] and uses ( [ wp ] ) and the fact that the weights are independent , one finds that satisfies .\label{qrecursion}\ ] ] the normalization ( [ wi ] ) of the implies that we have for all recursions similar to ( [ qrecursion ] ) appear in the theory of branching processes , in particular in the galton - watson process , already introduced in the 19th century to study the problem of the extinction of families . from ( [ qdef],[qrecursion ] ), one can easily obtain recursions for the moments of the weigths , and so on .we see that for large , each moment of has a limiting value , as expected from the observation in the previous section that converges to a fixed distribution such that the limiting values of these moments etc ., can be obtained directly by expanding the solution of \label{fixedq}\ ] ] around ( choosing as normalization ) , several other properties of can be obtained from the fixed point equation ( [ fixedq ] ) or from the recursion ( [ qrecursion ] ) .the simplest one is the limit where is the solution ( ) of this limiting value ( for a population of constant size , i.e. ) is the coefficient of in and so is the fraction of the population whose descendants become extinct : there is a fraction of the population with no children , a fraction of the population with children but no grandchildren , and so on , and the sum of all these contributions gives .equations ( [ qrecursion],[fixedq ] ) have the form of a real space renormalization . as a consequence , one can predict that for , approaches its limit as a power law , where the exponent must be for the terms of order on both sides of ( [ fixedq ] ) to be equal . for ,this gives and ( [ new ] ) implies that at small , the distribution is a power law with given by ( [ betaeq ] ) , in agreement with the results of the simulations shown in figures 1 and 2 .in fact , for , the leading contribution in the difference consistent with ( [ fixedq ] ) is where is an arbitrary periodic function ( not necessarily constant ) of period ( i.e. ) .such periodic amplitudes are often present in the critical behavior of systems which have a discrete scale invariance .it is easy to calculate numerically the function for all values of from the fixed point equation ( [ fixedq ] ) which relates to points arbitrarily close to , where the linear approximation becomes excellent . 
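The displayed equations in this passage were lost in extraction, but the structure they force can be reconstructed: with Poisson offspring of mean 2 and weights halved at each generation, the fixed-point relation reads Q(lambda) = exp[2(Q(lambda/2) - 1)], the limiting value s satisfies s = e^{2(s-1)}, and matching the leading singular terms on both sides gives 2s * 2^{1+beta} = 1 for the small-w exponent. Treat these reconstructions as our reading of the garbled text. The sketch below solves them (written for a general number of parents m, with 2 replaced by m), giving s ~ 0.2032 and beta ~ 0.299 for m = 2; the function name is ours.

....
import numpy as np
from scipy.optimize import brentq

def s_and_beta(m=2):
    # s: nontrivial root of s = exp(m (s - 1)) in (0, 1), the weight of the atom at w = 0
    s = brentq(lambda s: s - np.exp(m*(s - 1.0)), 1e-12, 1.0 - 1e-9)
    # matching Q(lam) - s ~ |lam|^-(1+beta) in Q(lam) = exp(m (Q(lam/m) - 1)) gives
    # m * s * m**(1 + beta) = 1, i.e. beta = -log(m*s)/log(m) - 1
    beta = -np.log(m*s)/np.log(m) - 1.0
    return s, beta

print(s_and_beta(2))  # ~ (0.2032, 0.299): exponent of the small-w power law
....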
using this procedure, we could determine ( figure 3 ) the combination | \lambda|^{-\beta -1} ] versus for .we see clearly the periodic nature of the amplitude predicted by ( [ oscillations ] ) .discrepancies at small are due to the fact that the asymptotic regime is not yet reached . at too large , rounding errors in the difference make the resulst noisy and unreliable.,width=340 ] the knowledge of the periodic function determines in principle the whole expansion of in the limit . if we look for a solution of ( [ fixedq ] ) which starts as ( [ oscillations ] ) as , one finds by equating the two sides of ( [ fixedq ] ) order by order in powers of , ^ 3 + \dots \label{expan7}\end{aligned}\ ] ] in addition to the moments ( [ w2lim]-[w4lim ] ) of ( which are given by the expansion ( [ expan ] ) of ) and the exact values ( [ seq],[betaeq ] ) of and ,let us just mention two properties of the solution of ( [ fixedq ] ) which we checked by rather involved ways , and that we prefer to leave as conjectures : * is analytic in the whole complex plane of * grows extremely fast ( faster than the exponential of the exponential ... of the exponential of ) as . as a consequence , for large decays faster than any exponential but slower than any stretched exponential ( of exponent larger than 1 ) and even all the discussions of the present section can be repeated in the case of having parents .if we limit ourselves to a population of constant size ( as we did to obtain ( [ wpp ] ) ) , we find that satisfies the same fixed point equation ( [ fixedq ] ) as above with replaced by .\ ] ] this means that the distribution of the weights is exactly the same for the cases of ( i ) 2 parents and a population size increasing exponentially by a factor at each generation and ( ii ) a population of constant size with parents per individual .this can be checked by comparing figure 2 and figure 4 , where we show the distributions for a population of constant size and with parents per individual . 
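The stationary distributions compared in these figures can be reproduced with a short Monte Carlo, following the procedure described earlier in the text: every individual draws two parents uniformly at random in the previous generation, the repetition counts are propagated up the tree, and each count is normalized by its mean 2^G/N to give the weight w. The population size, number of generations, sample count, and binning below are illustrative choices of ours.

....
import numpy as np

def ancestor_weights(N=1000, G=14, samples=200, seed=0):
    # Monte Carlo for the weight distribution (constant population size N, two parents)
    rng = np.random.default_rng(seed)
    all_w = []
    for _ in range(samples):
        n = np.zeros(N)
        n[0] = 1.0                                  # repetition counts in the tree of one individual
        for _ in range(G):
            parents = rng.integers(N, size=(N, 2))  # two random parents for every individual
            nxt = np.zeros(N)
            np.add.at(nxt, parents[:, 0], n)
            np.add.at(nxt, parents[:, 1], n)
            n = nxt
        all_w.append(n / n.mean())                  # weights w = n / (2**G / N)
    return np.concatenate(all_w)

w = ancestor_weights()
hist, edges = np.histogram(w[w > 0], bins=np.logspace(-3, 1.5, 40), density=True)
# a log-log plot of (edges, hist) displays the small-w power law and the stationary shape
....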
for a population of constant size with ( a ) and ( b ) when the number of parents is 3 .the generations shown are and .,width=340 ] for a population of constant size with ( a ) and ( b ) when the number of parents is 3 .the generations shown are and .,width=340 ]despite its simplicity , it is not easy to extract more information on the function and consequently on the distribution from the fixed point equation ( [ fixedq ] ) .there are however two limiting cases around which one can apply a perturbation theory and extract a few more properties of the fixed distribution : close to 1 and very large .one can see from ( [ w2lim]-[w4lim ] ) that when , the successive moments of the weight diverge like .this indicates that if one writes the solution of the fixed point equation ( [ fixedq ] ) can be expanded in the following way where the functions resum the most divergent terms in the perturbative expansion ( [ expan ] ) in the range .if we insert the expansion ( [ expanbis ] ) into ( [ fixedq ] ) we get , by equating the two sides order by order in , a hierarchy of differential equations for the functions which can be solved and lead to \label{f2 } \\ & & f_3 ( y)= { 14 y^3 - 3 y^2 \over 36 \left(1 - { y \over 2 } \right)^3 } + { 17 y^2 - 6 y \over 36 \left(1 - { y \over 2 } \right)^3 } \ln \left [ 1 - { y \over 2 } \right ] \nonumber \\ & & \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; + { y^2 + 2 y \over 36 \left(1 - { y \over 2 } \right)^3 } \ln^2 \left [ 1 - { y \over 2 } \right ] .\label{f3 } \end{aligned}\ ] ] comparing these expressions for large negative with ( [ expan7 ] ) , one gets the expansions of , which both agree with what one would get by directly expanding ( [ seq],[betaeq ] ) .what the small expansion gives us in addition is the function which is found to be a constant function of to all orders in powers of , the non - constant nature of does not show up in the expansion in powers of .it is a non - perturbative contribution ( which vanishes to all orders in ) which could be calculated using wkb - like techniques .( the delta function contribution at is not shown ) for , with and ; , , ; , , ; , , ; , , ; and , , .averages over 1000 realizations have been carried out .the insert shows how the maximum varies with .,width=340 ] from ( [ expanbis]-[f2 ] ) and the definition ( [ new ] ) one finds that , for small , the continuous part of is an exponential corrections to this exponential shape are extractable from higher order terms ( , , ) .the other case which can be dealt with perturbatively is the limit of large .if is large and , the solution of ( [ fixedq ] ) is given by \nonumber \\ \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; \ ; + \left[{\lambda^3 \over 2 p^3}+ { \lambda^5 \over 120 p^4 } \right ] + o(p^{-2 } ) \ ; \ ; , \end{aligned}\ ] ] where each term represents a new order in powers of .this implies that can be written in terms of in the range as \ ; \ ; , \label{largep}\end{aligned}\ ] ] where each parenthesis represents a new order in .the gaussian shape in ( [ largep ] ) is not a surprise considering that , for large , each weight becomes the sum of a large number of independent contributions .one property which can be extracted from ( [ largep ] ) is the location 
of the maximum of figure 5 shows the shapes ( obtained by random samplings populations of constant sizes with parents per individual ) of the distribution for several choices of .the insert shows the values of extracted from these data .they agree with the prediction ( [ wsp ] ) that the maximum approaches with corrections of order as becomes large .we have seen that for simple neutral models of evolution with random mating , the distribution of ancestors repetitions in the genealogical tree of a present individual becomes stationary , with a fixed shape which can be described by a fixed point equation of the type ( [ fixedq ] ) .this shape is the same if one considers a population increasing exponentially at rate per generation with two parents per individual or a population of constant size with parents per individual .the fixed point equation ( [ fixedq ] ) allows one to determine exactly the exponent which characterizes at small .the determination of from ( [ fixedq ] ) is very reminiscent of the way one finds exponents in the renormalization group approach of critical phenomena .other properties ( large behavior , amplitude of the power law , ) of the fixed distribution are in principle extractable from ( [ fixedq ] ) but are more difficult to obtain than the exponent . the present work admits several extensions .in particular , one may consider the case where the probabilities ( that an individual has children ) is arbitrary ( instead of poissonian as in ( [ poisson ] ) ) .the fixed point equation ( [ fixedq ] ) becomes then simply and starting from this new fixed point equation , one can essentially repeat all the above calculations , including the determination of the exponent .if all the vanish for , one can see that for large , consequently , the distribution becomes a stretched exponential for large , recursions similar to ( [ wp ] ) describe the distribution of constraints in granular media .in such cases , the number of grains in direct contact and supporting the weight of a given grain is variable .this would correspond to considering that the number of parents is no longer constant over the whole population but may vary from individual to individual .finally let us mention that an interesting aspect of the problem is the calculation of the correlations between the genealogies of several contemporary individuals .one can show that for large , the weights of all the ancestors of two distinct individuals in the same population become the same after a number of generations .in this appendix we show , by calculating moments of the weights , that correlations become negligible in the r.h.s . of ( [ wp ] ) and ( [ wpp ] ) .it is convenient to rewrite ( [ wp ] ) as where for the random parent model of section 2 ( where each parent of is chosen at random among all the individuals of the previous generation ) , with probability , with probability and with probability ( as we did not exclude choosing the same parent twice ) .moreover there is no correlation between and if .lastly and are correlated for and this correlation together with when used in ( [ wp1 ] ) leads to as expected , since the definition ( [ wdef ] ) of was chosen to keep , and where ( the index has been omitted for simplicity ) . from ( [ initial],[evolution],[wdef ] ), we know that , and . 
thus for and ( [ w2a ] ) becomes so far this evolution equation is exact .if we consider that all the s are very large , ( [ w2b ] ) becomes so that for large ( in fact should not be too large to keep large enough , more precisely should be such that ) , the second moment of has a limiting value and we see from ( [ wwb ] ) that when one repeats the above calculation for higher correlations ( we did it up to three - point correlations ) , one finds that the correlations between the terms in the r.h.s . of ( [ wp1 ] )are negligible .this indicates that these correlations can be neglected ( of course a complete proof that all correlations are negligible in the scaling limit would be much better than our guess based on the computation of the lowest correlations ) .one can repeat the above calculation of correlations for several variants of the model , like those discussed at the end of section 2 .the exact formulae ( [ w2a],[wwa],[w2b ] ) are modified but one always find that , in the scaling regime , they reduce to ( [ w2c],[wwe ] ) , meaning that the correlations could be ignored .let us consider only the case where each individual has parents . to keep the notations simple ,we will limit the calculation to the case of a population of constant size one can then follow the same steps as above . starting from ( [ wpp ] ) ,one replaces ( [ wp1 ] ) by the correlations ( [ cor1]-[cor4 ] ) become in this case and ( [ w2a],[wwa ] ) read for large and large , we see ( using the fact that ) that and as .this again indicates that correlations can be neglected for large and large .interesting discussions with ugo bastolla and vincent hakim are gratefully acknowledged .scm acknowledges support from the alexander von humboldt foundation ( germany ) . c. domb , _ the critical point_. taylor and francis , london 1996 .mandelbrot , _ the fractal geometry of nature_. freeman , new york 1983. m. feigenbaum , j. stat .* 21 * ( 1979 ) 669 - 706 ; p. cvitanovi , _ universality in chaos _adam hilger , 1984 ( and references therein ) .b. duplantier , physica a * 163 * ( 1990 ) 158 - 182 ( and references therein ) .p. bak , c. tang , k. wiesenfeld , phys .* 59 * ( 1987 ) 381 - 384 ; p. bak , _ how nature works _oxford university press , 1997 ; d. dhar , _ studying self - organized criticality with exactly solved models _ , preprint cond - mat/9909009 .b. derrida , physica d * 103 * ( 1997 ) 466 - 477 ( and references therein ) .b. derrida , s.c .manrubia , and d.h .zanette , phys .rev . lett .* 82 * ( 1999 ) 1987 - 1990 .m. serva and l. peliti , j. phys .a : math . gen .* 24 * ( 1991 ) l705-l709 . t.e .harris , _ the theory of branching processes_. springer - verlag ohg , 1971 .burkhardt and j.m.j .van leeuwen , _ real space renormalization _, vol . 30 .springer , berlin 1982 .b. derrida , a. gervois , and y. pomeau , j. phys .a * 12 * ( 1979 ) 269 - 296 ; b. derrida , c. itzykson , and j.m .luck , com .* 94 * ( 1984 ) 115 - 132 ; d. sornette , phys . rep . * 297 * ( 1998 ) 239 - 270 , ( and references therein ) . v. hakim , private communication v. hakim and k. mallick , nonlinearity * 6 * ( 1993 ) 57 - 70 .coppersmith , c .- h .liu , s.n .majumdar , o. narayan , and t.a .witten , phys .e * 53 * ( 1996 ) 4673 - 4685 .b. derrida , s.c .manrubia , and d.h .zanette , _ on the genealogy of a population of biparental individuals _ , preprint 1999 .
We calculate the probability distribution P(w) of repetitions of ancestors in a genealogical tree for simple neutral models of a closed population with sexual reproduction and non-overlapping generations. Each ancestor at generation G in the past has a weight w which is (up to a normalization) the number of times this ancestor appears in the genealogical tree of an individual at present. The distribution P(w) of these weights reaches a stationary shape for large G, i.e. for a large number of generations back in the past. For small w, the stationary distribution is a power law, with a non-trivial exponent which can be computed exactly using a standard procedure of the renormalization group approach. Some extensions of the model are discussed, and the effect of these variants on the shape of the distribution is analysed. Genealogy, critical phenomena, renormalization group.
in the present paper the simplest three - dimensional system , describing the convection of fluid within an ellipsoidal rotating cavity , is considered .this system , suggested by glukhovsky and dolghansky , is as follows where , , are positive numbers . after the change of variables system takes the form where this system is a generalization of a classical lorenz system .system with the parameters , , is mentioned first in and after the corresponding change of variables can be transformed to the rabinovich system of waves interaction in plasma .consider system under the assumption that , , , are positive . in this case by one obtains the following : if , then has a unique equilibrium ( the trivial case ) .if , then has three equilibria : and , where and the real number is defined as .\ ] ]consider a differential equation with a continuously differentiable vector function .assume that for any initial value there exists a unique solution of defined for . herelet be an invariant set , i.e. for all .suppose is the jacobian matrix of consider a nonsingular -matrix .suppose that are eigenvalues of the matrix here * denotes a matrix transposition . * *[theorem : th1 ] given an integer $ ] and there is a continuously differentiable scalar function and a nonsingular matrix such that in this case the lyapunov dimension of the compact set is estimated as follows here is a derivative of the function with respect to the vector field : * *[theorem : th2 ] suppose that there is a continuously differentiable scalar function and a nonsingular matrix such that then any solution of system , bounded on tends to an equilibrium as .thus , if condition holds , then an attractor of system coincides with its stationary set .in it is proved that system is dissipative , i.e. it possesses a bounded absorbing set and , thus , has an attractor . by theorems [ theorem : th1 ] and [ theorem : th2 ] it can be formulated the assertion , concerning the lyapunov dimension of an attractor ( denote it by ) of system . this assertion is a generalization of the result obtained in for . [theorem : th3 ] suppose that either the inequality or the inequalities , are valid .+ if then any solution of system , bounded on , tends to an equilibrium as .+ if then we use the matrix then the eigenvalues of the corresponding matrix are the following ^{\frac{1}{2}}. & \end{aligned}\ ] ] the goal is to obtain a lyapunov - like function that allows one to check property of theorem [ theorem : th1 ] and property of theorem [ theorem : th2 ] . for considered eigenvalues one can obtain the following relation ^\frac{1}{2 } + \\ & & + \frac{2(1-s)}{\left[(\sigma - 1)^2 + a\left(\frac{\sigma}{a}+r\right)^2\right]^\frac{1}{2 } } \ , \omega(x , y , z),\end{aligned}\ ] ] where using the following lyapunov - like function ^{\frac{1}{2}}},\ ] ] where one obtains that therefore , if condition is satisfied and then for system with the given and theorem [ theorem : th1 ] gives estimation .if condition is valid and , then the conditions of theorem [ theorem : th2 ] are satisfied and any solution bounded on tends to an equilibrium as . 
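Most of the displayed formulas in this section were lost in extraction. For orientation, the closed-form Lyapunov dimension that this machinery yields in the classical Lorenz case (the case the exact formula below is said to coincide with) is commonly quoted as D_L = 3 - 2(sigma + b + 1)/(sigma + 1 + sqrt((sigma - 1)^2 + 4 sigma r)); treat this as our reading of the garbled text rather than a quotation. A one-line evaluation:

....
import numpy as np

def lorenz_lyapunov_dimension(sigma, r, b):
    # assumed closed form for the classical Lorenz system (see the hedge above)
    return 3.0 - 2.0*(sigma + b + 1.0)/(sigma + 1.0 + np.sqrt((sigma - 1.0)**2 + 4.0*sigma*r))

print(lorenz_lyapunov_dimension(10.0, 28.0, 8.0/3.0))  # ~2.401 at the standard parameters
....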
for system with we can obtain another upper estimation .[ theorem : th4 ] let , and the following relations hold then we use the same idea but choose the following matrix and the following lyapunov - like function where further it is shown that under conditions of the theorem it is always possible to choose the parameters , such that where hence , for the chosen and the main inequality of theorem [ theorem : th1 ] is satisfied and this completes the proof .[ conseq ] if 1 . , + or 2 . , and then the lyapunov dimension of the zero equilibrium of system with or coincides with .thus for we have note that this exact formula coincides with the formula for the classical lorenz system .in this section the above results , concerning generalization of lorenz system , are applied to system , describing the convection of fluid within an ellipsoidal rotating cavity .nonsingular linear transformation , obviously , does not change the lyapunov dimension . in general case , it is known ( see , e.g. ) the invariance of the lyapunov dimension under a diffeomorphism .it is known that for system if , then there exists one equilibrium .it is stable if or . if , then there exist two additional equilibria and they are stable if or , . here in by means of numerical simulations for the case when parameter it was found certain values of the parameters and for which in system it is observed either a periodic regime ( i.e. there exists a limit cycle ) or a chaotic regime ( and for system there exists a chaotic attractor ) . in for parameters in the phase space of systemthere was obtained a chaotic self - excited attractor and for parameters there was localized numerically a chaotic hidden attractor ( fig .[ fig : attractors ] ) . using linear transformation and relation ,one can obtain corresponding parameters for system such that there exists self - excited or hidden attractor . by numerical methods one can get the following values of lyapunov dimension for the local self - excited attractor of system : and for the local hidden attractor of system : these values correspond to the values of lyapunov dimension for the global attractor of system obtained in corollary [ conseq ] . for ( self - excited local attractor )we have and for ( hidden local attractor ) we have work was supported by the russian science foundation ( 14 - 21 - 00041 ) leonov , g. , kuznetsov , n. , and mokaev , t. ( 2015b ) .homoclinic orbit and hidden attractor in the lorenz - like system describing the fluid convection motion in the rotating cavity ., 28(doi:10.1016/j.cnsns.2015.04.007):166174 .leonov , g. a. and kuznetsov , n. v. ( 2013 ) . hidden attractors in dynamical systems . from hidden oscillations in hilbert - kolmogorov ,aizerman , and kalman problems to hidden chaotic attractors in chua circuits . , 23(1 ) .1330002 .semenov , v. , korneev , i. , arinushkin , p. , strelkova , g. , vadivasova , t. , and anishchenko , v. ( 2015 ) .numerical and experimental studies of attractors in memristor - based chua s oscillator with a line of equilibria .noise - induced effects . , 224(8):15531561 .
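The numerical dimension values quoted above for the self-excited and hidden attractors are obtained, in practice, from the Lyapunov exponents via the Kaplan-Yorke formula. The sketch below implements the generic Benettin-style computation (QR reorthonormalization of the tangent flow) for the classical Lorenz parameters, since the paper's specific parameter values are garbled here; the step size, integration length, initial data, and the simple Euler step for the tangent flow are illustrative simplifications of ours.

....
import numpy as np

def lorenz(v, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = v
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

def jacobian(v, sigma=10.0, r=28.0, b=8.0/3.0):
    x, y, z = v
    return np.array([[-sigma, sigma, 0.0],
                     [r - z, -1.0,  -x],
                     [y,      x,    -b]])

def kaplan_yorke_dimension(h=0.005, n=400000):
    v, Q, sums = np.array([1.0, 1.0, 1.0]), np.eye(3), np.zeros(3)
    for _ in range(n):
        Q = Q + h*(jacobian(v) @ Q)              # Euler step for the tangent flow (sketch accuracy)
        k1 = lorenz(v); k2 = lorenz(v + 0.5*h*k1)
        k3 = lorenz(v + 0.5*h*k2); k4 = lorenz(v + h*k3)
        v = v + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)  # RK4 step for the state
        Q, R = np.linalg.qr(Q)
        sums += np.log(np.abs(np.diag(R)))
    lam = np.sort(sums/(n*h))[::-1]              # Lyapunov exponents, descending
    c = np.cumsum(lam)
    j = np.where(c >= 0)[0][-1]                  # largest index with partial sum still >= 0
    return (j + 1) + c[j]/abs(lam[j + 1])

print(kaplan_yorke_dimension())                  # ~2.06 for the classical Lorenz attractor
....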
Over the past two decades, Lyapunov functions have been used for the estimation of attractor dimensions. By means of these functions, an upper estimate of the Lyapunov dimension for the Rössler attractor and exact formulas of the Lyapunov dimension for the Hénon, Chirikov, and Lorenz attractors have been obtained. In this report the simplest model, suggested by Glukhovsky and Dolzhansky, which describes a convection process in a rotating fluid, is considered. The system of differential equations for this model is a generalization of the Lorenz system. For the Lyapunov dimension of the attractor of the model, an upper estimate is obtained. Chaos, chaotic attractor, generalized Lorenz system, Lyapunov dimension, Lyapunov function, fluid convection.
the asian summer monsoon is the largest single abnormality in the global climate system ( shukla , 2007 ) .the seasonal rainfall brought by the southwest indian summer monsoon supplies 80% of southeast asia s annual precipitation and is vital to sustaining the region s agriculture which supports nearly a quarter of the world s population ( sinha et al . , 2007 ) .indian summer monsoon is one of the main weather systems on earth and variations in its intensity have broad economic effects .it has been the most important climate event in india .rainfall over india is subject to a high degree of variations leading to the occurrence of extreme monsoon rainfall deficient ( drought ) or excess ( flood ) over extensive areas of the country .floods and droughts result in many losses of lives , crops etc . ; these play havoc to indian economy and society .cause of abnormal variabilities in monsoon rainfall ( floods and droughts ) is not completely understood .consequently , accurate prediction of rainfall and its variability during monsoon season has been a challenging task .thus , there is greater need to understand the nature and variability of monsoon climatic conditions , especially , whether there is any extra - terrestrial influence ( e.g. cosmic ray variability ) in addition to natural terrestrial climatic conditions .more specifically , it is important to know whether indian monsoon rainfall is significantly influenced by changes in cosmic ray flux , and whether climate cooling is an effect of cosmic ray flux change .if so , then the possible physical mechanism(s ) must be identified .it is well known that cosmic ray flux varies in anti - phase with solar activity over all time scales . on the longer time scales ( millennial , centennial and multi - decadal ) , a number of studies have suggested solar / cosmic ray variability influence on the intensity of monsoonal rainfall in tropical and sub - tropical regions with conflicting results .for example , low rainfall in india coinciding with low solar activity ( or high cosmic ray intensity ) ( e.g. agnihotri et al . , 2002 ; tiwari et al ., 2005 ; gupta et al . , 2005 ; yadava and ramesh , 2007 ) and in north africa and south oman ( neff et al . , 2001 ) .these results imply that increased galactic cosmic ray ( gcr ) intensity is associated with a weakening of the monsoon ( decreased rainfall ) ( kirkby , 2007 ; singh et al . , 2011 ) .in contrast , low rainfall in equatorial east african ( e.g. verschuren et al . , 2000 ) , weaker chinese monsoon ( hong et al . , 2001 ) , and low tropical rainfall in gulf of mexico region have been observed , during high solar activity ( or low cosmic ray intensity ) .occurrence of periods of enhanced monsoonal precipitation in india slightly after the termination of the wolf , sporer and maunder minima periods ( low solar activity / high cosmic ray intensity ) have been reported by khare and nigam ( 2006 ) .this finding is in agreement with the finding of earlier workers , who reported high lake levels from mono lake and chad lake in the vicinity of solar minima ( cosmic ray maxima ) as well as the nile river in africa ( ruzmaikin et al . , 2006 ) .thus there are evidences , although sometimes contrary in nature , that suggest for some cosmic ray influence on monsoon rainfall on multi - decadal , centennial and millennial time scale . 
on shorter time scales ( decadal to inter annual ) too , solar activity / cosmic ray intensity influence on the rainfall changes in indian summer monsoonhave been suggested , but with conflicting results ( e.g. see jagannathan and bhalme , 1973 ; bhalme et al . , 1981 ;hiremath and mandi , 2004 ; bhattacharya and narasimha , 2005 ; badruddin et al ., 2006 , 2009 ) . understanding the factors that control ismr onset ,its variability and intensity are highly desired .in particular , it is extremely important to know about the role of extra - terrestrial sources ( e.g. cosmic rays ) in initiating and/or influencing the intensity of rainfall directly ( e.g. by changing the collision / coalescence efficiency in rain clouds ) or indirectly ( e.g. by altering the low cloud amount ) .it is particularly important to search for connection , if any , between the extreme deficiency ( droughts ) or excess ( floods ) in indian summer monsoon rainfall and cosmic ray flux variability during the same ismr periods , even though it is widely accepted that indian monsoon onset and intensity are controlled by large scale atmospheric ( e.g. land - sea temperature contrast ) and global features ( e.g. enso , qbo etc . ) .several studies have shown that the warm phase ( el nino ) is associated with weakening of indian monsoon with overall reduction in rainfall while the cold phase ( la nina ) is associated with the strengthening of the indian monsoon with enhancement in rainfall ( e. g. , sikka , 1980 ; pant and parthasarathy , 1981 ; rasmusson and carpenter , 1983 ) .all the el nino events during 1958 - 1988 were reported to be droughts and all the la nina events were associated with excess ismr . however , weakening of enso - ismr relationship after 1988 were reported in later studies ( e. g. , kripalani and kripalani , 1997 ; kumar et al ., 1999 ; ashok et al . , 2001 ; kripalani et al . , 2003 ) .further , for the 14 consecutive years beginning with 1988 ( 1988 to 2001 ) , there were no droughts , despite the occurrence of el nino ( gadgil et al . , 2004 ) .although 9 out of 12 drought years identified by us can be associated with el nino events , and 9 out of 12 flood years with la nina events , there are reports ( kumar et al . , 2002 )that out of 22 large negative ismr anomalies that occurred during 1871 - 2001 , only 11 were associated with el nino , while out of 19 large positive ismr anomalies that occurred during the same period , only 8 were associated with la nina .therefore , large deficient / excess ismr does occur in the absence of el nino / la nina and we do not yet understand adequately the response of monsoon to el nino ( gadgil et al . , 2004 ) . thus , there is the possibility of drought / floods in india being influenced by other external agents also .cosmic rays are the only source of ion production in the lower atmosphere .there are suggestions that cosmic ray flux variability may influence the earth s climate also . in view of these suggestions , although controversial , it will be interesting to search for any possibility of a link between gcr and rainfall variability .variations in precipitation potentially caused by changes in the cosmic ray flux have implications for the understating of the cloud and water vapour feedbacks .it is possible that any particular ( e.g. 
indian ) climate system is more sensitive to smaller variations in cosmic ray intensity than the other .the purpose of this investigation is to determine the relationship , if any , between the indian extreme weather ( drought / flood ) and cosmic ray flux variability .we analysed the gcr flux data to evaluate the possible existence of empirical evidence between cosmic ray variability and precipitation in india during monsoon season . for this study we utilized the gcr fluxes as recorded through the ground based neutron monitors , and perform analysis to look for any possibility of changes in pattern in indian rainfall , in particular , due to variations in gcr flux . for this purposewe adopt the methods of superposed epoch analysis ( singh and badruddin , 2006 ) and regression analysis .we find evidence for a possibility that gcr flux variability may have some influence in suppressing / enhancing the rainfall depending upon the decreasing / increasing nature of gcr variability , in favourable climatic conditions .in this work we adopt an approach that assumes that the rainfall changes can occur only with gcr changes if environmental conditions are suitable , and considering that the rate of gcr flux change , and not the mean gcr flux , may be the key ( laken et al . , 2010 ) .usoskin ( 2011 ) , in a recent review , concluded that it is not the intensity of cosmic rays but its variability that may affect climate .the gcr flux is provided by neutron monitors , which record neutrons generated chiefly by the primary cosmic ray protons that ionize the lower stratosphere and upper troposphere ( venkatesan and badruddin , 1990 ; bazilevskaya and svirzhevskaya , 1998 ) .continuous records of high quality cosmic ray intensity data , measured by neutron monitors located at different latitudes and longitudes on the earth s surface are available from 1964 onwards till date .reliable and good quality data of monsoon rainfall in india are also available for the period 1964 - 2011 ( see table 1 ) and many more years before that , at indian institute of tropical metrology , pune ( india ) website ( http://www.tropmet.res.in/ ) . for this work ,we have considered the 48-year period ( 1964 - 2011 ) for which both the gcr intensity and indian summer monsoon rainfall ( ismr ) data are available .a variation of in yearly mean gcr flux over a period of one solar cycle in anti - phase with the -year sunspot polar activity cycle is a well - observed known phenomena ( see fig .1 ) . a number of studies have utilized this change in gcr intensity over solar cycles to suggest ( or refute ) a possible connection between cosmic rays and climate ( clouds , rainfall , temperature etc . ) .however , significant changes / fluctuations in gcr intensity are observed when the data is averaged over monthly and daily time resolutions . at times , with these time resolutions , the gcr flux is observed to increase / decrease by a large amount ( a few percent ) during some months in the same year and during several days in the same month . 
the purpose of this paper is to search for the influence, if any, of gcr flux changes on the summer monsoon rainfall in india, at regional and seasonal or even shorter time scales. for this purpose, out of the 48 years from 1964 - 2011 (see table 1), we first identify one-fourth (12) of the years with the lowest rainfall in the four indian summer monsoon months (june - september) (see table 2); we call them deficient rainfall (drought) years. we then identify the same number of years (12) with the highest rainfall in the summer monsoon months (see table 2); we call them heavy rainfall (flood) years. in fig. 2, the deviation of the monthly precipitation during superposed 'drought' and 'flood' years from the superposed average precipitation of the 48 years (1964 - 2011) is shown. in fig. 1, we have plotted the yearly average gcr intensity as observed by neutron monitors located at two different latitudes and longitudes (see table 3), namely oulu (http://cosmicrays.oulu.fi/) and newark (http://neutronm.bartol.udel.edu/). unfortunately, there is no neutron monitor located in india whose data for the period 1964 - 2011 can be utilized for this analysis. however, the time variation shown in fig. 1 at the two locations on the earth is globally similar in nature, with different amplitudes at different latitudes; that is, the gcr variations observed at globally distributed monitors are similar in nature, differing only in amplitude. there are, however, some suggestions (e.g. eroshenko et al., 2010) that the rainfall and the humidity influence the incoming particle flux around the detector; moisture around the detector lowers both the neutrons incident on the surface and the albedo neutrons. in fig. 1, the drought and flood years are indicated by dashed and dotted vertical lines respectively. from fig. 1 we see that in india droughts / floods can occur at any level of the mean gcr flux: minimum, maximum or intermediate. in other words, these floods / droughts can occur at the maximum / minimum / increasing / decreasing phases of the solar activity cycle. thus, if we assume the mean gcr flux to be the key, then we can conclude that there is no influence of the gcr flux on the indian monsoon rainfall on the inter-annual scale. next, we proceed to search for any possible influence of gcr flux variability on the indian monsoon rainfall during the same period, assuming that it is more likely that rainfall changes occur with gcr flux changes only if environmental conditions are suitable, and that not the mean gcr flux but its variability may affect the rainfall amount / climate (see laken et al., 2010; usoskin, 2011). the cosmic ray count rate has a solar cycle dependence, so we normalized the count rate before performing the superposed epoch analysis. each year's data are normalized to the yearly average for that year and then converted to a percentage, which in turn allows a direct comparison of the different data sets (i.e. gcr intensity, sunspot number, 10.7 cm solar radio flux, total solar irradiance). first, we calculated the yearly average for the individual years; then each month's data are converted into a percentage, taking the yearly average as the reference. a minimal sketch of this selection and normalization procedure is given below.
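the following python sketch makes the two steps concrete. the file names and column layout are hypothetical stand-ins for the data sources cited above; only the procedure (rank the years by june - september rainfall, take the lowest and the highest one-fourth, and express each month's count rate as a percentage of that year's mean) follows the text.

```python
# hypothetical layout: ismr_1964_2011.csv has columns 'year' and 'jjas'
# (total june-september rainfall); oulu_monthly_1964_2011.csv has columns
# 'year', 'month', 'rate' (monthly mean neutron monitor count rate).
import pandas as pd

ismr = pd.read_csv("ismr_1964_2011.csv")

n = len(ismr) // 4                                  # one-fourth of 48 = 12
drought_years = ismr.nsmallest(n, "jjas")["year"].tolist()
flood_years = ismr.nlargest(n, "jjas")["year"].tolist()

gcr = pd.read_csv("oulu_monthly_1964_2011.csv")

# normalize each month to that year's mean and express it in percent;
# this removes the solar cycle level difference between years
yearly_mean = gcr.groupby("year")["rate"].transform("mean")
gcr["norm_pct"] = 100.0 * gcr["rate"] / yearly_mean
```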
the monthly resolution cosmic ray count rates of the oulu nm during drought and flood years, i.e. the mean count rate of individual months with standard deviation (σ) and standard error of the mean (sem = σ/√n), both before and after normalization, are tabulated in tables 4a and 4b. we then perform the superposed epoch analysis to study the rate of gcr flux variability during the ismr months (june - september), averaged over the 12 drought years and the 12 flood years separately. for this purpose we have utilized the normalized gcr intensity data of three neutron monitors located at different positions on the earth, namely oulu (finland), kiel (germany) and newark (usa) (see table 3). data from these three locations have been analysed to show that the nature of the variation is globally similar, differing only in amplitude. in fig. 3(a) we have plotted the superposed epoch results of the monthly averaged normalized gcr intensity data for the deficient rainfall years as observed by the oulu neutron monitor. we see that the gcr intensity is decreasing during the ismr (june - september) period (shaded). the rate of decrease has been calculated by fitting a linear curve (see table 5), taking the pre-monsoon (may) value as a reference. the best-fit result shows that the gcr count rate decreases (negative slope), with the linear correlation coefficient r given in table 5 (see also fig. 3). as the gcr intensity may fluctuate to a large extent on a day-to-day basis, we have also done the superposed epoch analysis of the daily normalized gcr count rate, as observed by the oulu neutron monitor, for the same 12 deficient rainfall years. the result of this analysis is plotted in fig. 4(a) and tabulated in table 5. we see a continuously decreasing gcr intensity during the summer monsoon period. we fitted a linear regression to this averaged data, considering the pre-monsoon (may) data as the reference; the best-fit line with negative slope is also shown (see table 5 and fig. 4). in both the monthly and the daily cases, we note that the regression line is steeper than it would be if obtained using the entire year. to show that such a variation is not confined to one location but is global in nature, we did a similar superposed epoch analysis and best-fit linear regression, as earlier, for the data of two more neutron monitor stations, namely kiel and newark, using the monthly average gcr count rate as well as the daily count rate (see table 5). we see a similar decreasing trend at these locations also. thus, we can infer that the trend of the rate of change in gcr flux will be similar in nature at indian locations also. next, we consider the same number (12) of heavy rainfall years and did a similar superposed epoch analysis of the gcr count rate (both monthly and daily) data for the same three neutron monitors. we also did a linear regression of these data for the ismr period, taking the pre-monsoon (may) value as a reference. we find that the gcr flux is increasing during ismr periods. the best-fitted linear curves with positive slope and correlation coefficients are clearly evident at all three neutron monitor stations and at both time resolutions (see figs. 3 and 4 and table 5).
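a compact sketch of the superposed epoch step described above, continuing the hypothetical data layout of the previous snippet: the normalized count rates are composited month by month over the key years, and a straight line is fitted through may - september, with may serving as the pre-monsoon reference. scipy's linregress returns the slope and the correlation coefficient r of the kind listed in table 5.

```python
from scipy.stats import linregress

def superposed_epoch(frame, key_years, months=(5, 6, 7, 8, 9)):
    """composite the normalized series month by month over the key years,
    then fit a line through may-september (month 5 = pre-monsoon reference)."""
    sel = frame[frame["year"].isin(key_years) & frame["month"].isin(months)]
    epoch = sel.groupby("month")["norm_pct"].mean()
    fit = linregress(epoch.index.to_numpy(), epoch.to_numpy())
    return epoch, fit.slope, fit.rvalue

# expectation from the text: negative slope for the drought composite,
# positive slope for the flood composite
_, slope_d, r_d = superposed_epoch(gcr, drought_years)
_, slope_f, r_f = superposed_epoch(gcr, flood_years)
print(f"drought: slope = {slope_d:+.3f} %/month, r = {r_d:+.2f}")
print(f"flood:   slope = {slope_f:+.3f} %/month, r = {r_f:+.2f}")
```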
the linear regression shows a line with positive slope [see figs. 3(e) and 4(e)], and from these figures it is clear that the slope would be less steep if a regression line were drawn for the entire year. in addition to cosmic rays, we extended our analysis to solar activity parameters, such as the sunspot number (ssn) and the 10.7 cm solar radio flux (http://omniweb.gsfc.nasa.gov/). the ssn is the oldest directly observed indicator of solar activity on the photosphere and a very useful index of solar activity; the 10.7 cm solar radio flux is an indicator of activity in the upper chromosphere and lower corona. we also considered total solar irradiance (tsi) data; however, these data are available only from 1979 onwards (http://www.ngdc.noaa.gov/; http://lasp.colorado.edu/home/sorce/data/tsi-data/). we analysed both monthly and daily resolution data of the solar parameters ssn, 10.7 cm solar radio flux and tsi for the drought and flood years, and carried out the superposed epoch analysis after normalizing the data as was done for the gcr count rate. superposed epoch results of the monthly averaged normalized ssn, 10.7 cm solar radio flux and tsi are plotted in fig. 3 for both drought years [fig. 3(b-d)] and flood years [fig. 3(f-h)]. the rate of change has been calculated by fitting a linear curve, taking the pre-monsoon (may) value as a reference; the best-fit results with the linear correlation coefficient are also shown (see fig. 3 and table 5). we have also done the superposed epoch analysis of the daily normalized solar parameter data for the same 12 drought [panels (b-d) of fig. 4] and flood [panels (f-h) of fig. 4] years. the rate of change has been calculated by fitting a linear curve to the data, and the best-fit results with the linear correlation coefficient are tabulated in table 5. we can see the difference in the nature of the variability (slopes) of the gcr flux and of the solar parameters (ssn, 10.7 cm solar radio flux, tsi), in the errors in the slopes and in the correlation coefficients (r) during drought and flood periods, favouring a gcr flux - rainfall relationship; the noticeable difference seen in the gcr variability is not so clear in the solar parameters considered here. however, it is possible that restricting the correlation analysis to the ismr months only reduces the apparent dependence on solar indices such as f10.7 and ssn; the anti-correlation between gcr flux and ssn / f10.7 becomes clearer if the analysis interval is extended to the whole year. as regards the change in temperature with rainfall changes, we find that there is a strong inverse relation between rainfall and temperature (see fig. 5), at least during the ismr period. in view of the results shown in figs. 3 and 4, there is an urgent need to quantify the extent of the influence, and to identify the physical mechanism(s) responsible for influencing the indian monsoon rainfall through cosmic ray flux variability.
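the same machinery extends unchanged to the solar parameters. a sketch of a table 5-style comparison follows; the ssn, f107 and tsi frames are hypothetical and are assumed to carry the same year / month / norm_pct layout produced above for the gcr data.

```python
# hypothetical frames, each normalized per year exactly as the gcr data
series = {"gcr": gcr, "ssn": ssn, "f107": f107, "tsi": tsi}

for name, frame in series.items():
    for label, years in [("drought", drought_years), ("flood", flood_years)]:
        _, slope, r = superposed_epoch(frame, years)
        print(f"{name:5s} {label:7s} slope = {slope:+.3f} %/month, r = {r:+.2f}")
```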
we found definite trends, i.e., on the average, heavy rainfall (floods) in india occurs during the ismr period when the gcr flux is increasing in the same season, and the gcr flux is decreasing in the ismr months during deficient rainfall (drought) in the indian summer monsoon period. however, a caveat must be added here: the rainfall changes can occur with gcr changes only if environmental conditions (yet to be identified) are suitable. this caveat implies that similar trends in rainfall changes with gcr flux changes (i.e., deficient rainfall associated with decreasing gcr flux and heavy rainfall associated with increasing gcr flux) may not be observed at all geographic locations from equator to pole and in all seasons simultaneously, although the nature of the gcr flux change is broadly similar at almost all locations on the earth, differing only in magnitude. moreover, there may be exceptions in india even during the ismr season due to unsuitable environmental conditions. as regards the breakdown of the enso - ismr connection after 1988, mentioned earlier, the breakdown may be because the ismr is less variable. similarly to enso, the gcr flux variability has similar properties during both the quiet interval from 1989 to 2002 and the drought and flood periods before and after this gap; thus the breakdown in the ismr - enso connection is not necessarily evidence for the ismr - gcr hypothesis. the possibility of both the enso and the gcr variability contributing to ismr variability in their own ways cannot be ruled out at this stage. more efforts and rigorous analyses are required to discriminate between enso and gcr as a cause of ismr variability.

[figure: rainfall and maximum temperature (tmax)jjas during ismr months.]

most of the studies that attempt to study a possible gcr - cloud - climate relationship are focused on longer time scales (millennial, centennial, multidecadal and decadal) (e.g. see reviews by carslaw et al., 2002; kirkby, 2007; singh et al., 2011; rao, 2011; and references therein). however, on shorter time scales too (inter-annual, seasonal and even smaller) attempts have been made to search for this relationship, with conflicting results. forbush decreases are sudden decreases of a few percent in cosmic ray intensity within about a day, which recover to the pre-decrease level within a week or so (e.g. see rao, 1972; venkatesan and badruddin, 1990; kudela, 2009 for reviews on cosmic ray variations at different time scales). these forbush decreases in cosmic rays are thought to be an important laboratory for testing a possible cosmic ray - climate connection. decreases in rainfall in the former soviet union have been reported during the days of forbush decreases (stozhkov et al., 1995). however, most of the eastern mediterranean stations present higher probabilities for a precipitation episode one day after a forbush decrease (mavrakis and lykoudis, 2006). precipitation changes in relation to gcr flux changes on a short time scale have also been studied by kniveton and todd (2001). a recent claim that forbush decreases affect atmospheric aerosol and cloud (svensmark et al., 2009) has been challenged by other studies (kulmala et al., 2009; laken et al., 2009; calogovic et al., 2010) which found no connection between cosmic rays, aerosols and clouds. however, a more recent study (dragic et al.
, 2011), based on an analysis of european region data, supports the idea that cosmic rays influence atmospheric processes and climate. earlier too, the claim of decreased cloudiness detected during forbush decreases (pudovkin and veretenenko, 1995) was not confirmed by palle and butler (2001), even for the same forbush decreases. moreover, the underlying physics suggested for a connection between cosmic rays, aerosols and cloud is still highly speculative (legras et al., 2010), and the empirical evidence for a cosmic ray - cloud relation is still inconclusive (usoskin, 2011). it was concluded in a review (kirkby, 2007), based on the available results for longer timescales (millennial, centennial and multidecadal), that an increased gcr flux appears to be associated with a cooler climate and a weakening of the monsoon, and a decreased gcr flux with a warmer climate and a strengthening of the monsoon. from our observations on a much shorter time scale during the monsoon season in india, we observe that a decreasing gcr flux corresponds to decreasing rainfall and an increasing gcr flux corresponds to increasing rainfall. moreover, our preliminary results reported earlier (badruddin et al., 2006, 2009) show that temperature and rainfall changes exhibit opposite behaviour, i.e., temperature is enhanced during deficient ismr periods and is lower in heavy rainfall ismr periods. the observation that the cosmic ray intensity is decreasing during the ismr months in almost all the years which are deficient in rainfall (drought years) may be interpreted to suggest that a gcr - rainfall relation is possible in the indian sub-continent during ismr periods, at least. thus, the gcr - rainfall relation should be considered as a potentially important driver of climate variability. a significant part of the precipitation that falls in the tropics is warm rain formed by coalescence of cloud droplets (kostinski and shaw, 2005).
formation of cloud droplets requires a water vapour supersaturation environment and particles able to act as cloud condensation nuclei. usually cloud droplets are formed on aerosol particles containing a certain soluble fraction. after condensation, droplets grow by vapour diffusion and by droplet - droplet collision (coalescence), the latter providing more rapid growth as droplet size increases (harrison and ambaum, 2009). electrical effects play an important role in cloud microphysics; both condensation and coalescence can be influenced by charge (pruppacher and klett, 1997; tinsley, 2008). early laboratory studies found that raindrops (of around 0.5 mm diameter) are about a factor of 100 more efficient at collecting aerosols when they are charged rather than neutral (barlow and latham, 1983). grover and beard (1975) calculated collision efficiencies and found a significant increase in collision efficiency when the droplets were loaded with a charge of the magnitude typical of thunderstorm clouds. khain et al. (2004), from their simulation results, showed that the injection of just a small fraction of charged particles rapidly triggered the collision process and led to raindrop formation a few minutes after the injection; thus, they suggested, seeding with charged particles may be a very efficient tool for rain enhancement. the collision efficiencies depend strongly on droplet charge and size, and the collision efficiency is much enhanced for charged-droplet collisions compared with neutral-droplet collisions. more specifically, they found that the collision efficiency between charged and neutral droplets, as well as between droplets charged with opposite polarity, is many orders of magnitude higher than in the case of gravity-induced collisions. thus, efficient collision takes place between cloud droplets, and coalescence into large droplets is enhanced by the electrical forces between charged droplets; this significantly increases the rate of raindrop formation (khain et al., 2004). another potential mechanism may operate through electrostatic image forces. because of electrostatic image forces, the electrical forces between charged droplets are always attractive at small separations, whatever the relative polarities of the colliding particles (tinsley, 2008). in this case, the attraction between droplets may lead to an increase in droplet size. as the droplet size increases, droplet - droplet collision (coalescence) will lead to more rapid growth, leading to an enhanced rate of raindrop formation, as suggested by harrison and ambaum (2009). this mechanism appears more likely since the rate of change of the gcr flux, and not the amount of gcr flux, is considered to be the key factor. however, more simulations and experiments need to be performed to demonstrate clearly how an increasing gcr flux corresponds to increasing rainfall and vice versa. thus it is expected that under proper atmospheric / environmental conditions (e.g. air humidity, aerosols, temperature, cloud type etc.) an increasing gcr flux will increase the coalescence efficiency, leading to bigger rain droplets, while a decreasing gcr flux will decrease the coalescence efficiency and suppress droplet growth.
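to illustrate why image forces are always attractive at close range, consider a standard textbook idealization: a point charge q at a distance r from the centre of an isolated, neutral conducting sphere of radius a (a crude stand-in for a charged particle approaching an uncharged droplet, not a full cloud-microphysical calculation). the induced image charges produce the force

\[
F(r) \;=\; -\,\frac{q^{2}}{4\pi\varepsilon_{0}}\,
\frac{a^{3}\left(2r^{2}-a^{2}\right)}{r^{3}\left(r^{2}-a^{2}\right)^{2}},
\qquad r > a,
\]

which is negative (attractive) for every r > a and grows without bound as r approaches a, so even a droplet carrying no net charge attracts a charged one at sufficiently small separations, consistent with the polarity-independent attraction noted by tinsley (2008).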
during a decreasing flux of cosmic rays, levitation / dispersion of low clouds due to electrical effects (levin and ziv, 1974) may also play some role, in such a way that it disperses the low cloud amount under the appropriate climatic conditions. we suspect that, under suitable environmental conditions, the rate of change of the charged particle (cosmic ray) flux modulates the droplet collision and coalescence efficiency and affects the rainfall to a certain extent. rain formation is a function of different macro- and micro-physical parameters. the important parameter for the microphysics is the ambient temperature where the clouds reside, and the formation of raindrops occurs through the processes of spontaneous coalescence and accretion (rogers and yau, 1989). although the initiation of raindrop coalescence remains an unsolved problem in cloud physics (kostinski and shaw, 2005), we suspect that charge-induced cloud microphysics, for example accelerating / decelerating coalescence into larger raindrops (harrison and ambaum, 2009), is the likely effect that plays some role in affecting the rainfall variability in india during the indian summer monsoon season, at least, depending on the increasing / decreasing rate of change of the charged particle (cosmic ray) flux in the corresponding period, under suitable environmental conditions (e.g. cloud type, temperature, pressure etc.). monsoon rainfall variability is connected with global precipitation (hulme et al., 1998), and there is a strong inverse relationship between the monsoon variability and the tropical belt temperature (see dugam and kakade, 1999; badruddin et al., 2006, 2009). considering that the change in monsoon rainfall variability is also consistent with the change in global mean precipitation (hulme et al., 1998) and that precipitation / rainfall is inversely related to temperature (see fig. 8), we suspect that the monsoon rainfall variability may have some influence on the changes in global temperature also. thus, it should be clarified whether monsoon / rainfall variability plays any role in global warming or whether its effects are only local. it has been suggested (ban-weiss et al., 2011) that evaporated water helps in cooling the earth as a whole and not just the local area of evaporation; on the other hand, a reduction in evaporated water is likely to contribute significantly to global warming. the possible influence of gcr on clouds is a controversial issue. it appears that gcr flux variability plays an important role in influencing the ismr in this season, at least. it is likely, as we suspect, that the physical state of the cloud droplets may play a significant role; local physical (cloud type, temperature, humidity etc.) and chemical conditions may play a major role (enghoff et al., 2011; duplissy et al., 2010; kirkby et al., 2011) in deciding the extent of the influence. the physics of liquid and ice clouds may differ (gierens and ponater, 1999). low clouds generally consist of liquid water droplets (marsh and svensmark, 2000), and it has been suggested that low-cloud liquid droplets over the tropics are more sensitive to cosmic ray variability (palle and butler, 2000). it is suggested that such studies (i.e., the effects of gcr flux variability on rainfall variability) over regional hydrological regions need to be carried out in more detail.
it is also suggested that the proper environmental conditions, in which the influence of gcr flux variability on rainfall is more significant, need to be identified. although the amount of cloud may be dependent on the gcr flux, in our hypothesis we do not consider a direct relationship between the gcr flux and the amount of cloud cover as the key; it is still controversial. we propose an alternate scenario, although speculative, in which the cosmic ray variability influences the rainfall from clouds that are formed under proper environmental conditions. we propose that an increasing / decreasing gcr flux influences the rainfall, which later results in enhanced / reduced evaporation. this change in evaporation from the earth's surface influences the low cloud amount, which in turn alters the planetary albedo, and consequently there is a change in the temperature. however, whether such influence is only regional or has a global effect needs to be verified. therefore, more research is needed to understand the relationships among the variability in gcr, ismr, surface evaporation, low cloud, planetary albedo and temperature. model studies are also needed to understand the extent to which such variability influences the regional and global rainfall and temperature. we find that the decreasing cosmic ray flux does play a role, in such a way that the rainfall over this region of the globe (india), at least, is reduced when the cosmic ray flux is decreasing. we speculate that the hypothesis proposed here on the basis of indian climate data can be extended to the whole tropical and sub-tropical belt, and that it may contribute to global temperature in some way. in conclusion, a gcr - ismr link seems plausible, and the gcr - rainfall relation should be considered as a potentially important driver of climate variability. however, further studies are required to improve our understanding of the link between cosmic rays and the summer monsoon climate over india. it is also necessary to fully investigate the contributions of the possible mechanisms discussed here to the variability in precipitation. further, once our hypothesis is confirmed, there is an urgent need to identify the local physical and chemical conditions conducive to a significant effect of gcr flux variability in influencing the rainfall / precipitation. we suggest the following scenario, although speculative, for a possible gcr flux - rainfall - temperature relationship.

1. increasing gcr flux → increasing rainfall → enhanced surface evaporation → increased low cloud → more scattering of solar radiation back to space (higher planetary albedo) → lower temperature.

2. decreasing gcr flux → decreasing rainfall → decreased surface evaporation → reduced low cloud → less scattering of solar radiation back to space (lower planetary albedo) → higher temperature.

we thank station manager ilya usoskin and the sodankyla geophysical observatory for the online availability of oulu neutron monitor data, the national science foundation (supporting the bartol research institute neutron monitors) and principal investigator john w. bieber for the online availability of newark neutron monitor data, and christian t.
steigies and the extraterrestrial physics department of the university of kiel for the online availability of kiel neutron monitor data. the availability of indian climate data through the indian institute of tropical meteorology, pune website, and its use, is gratefully acknowledged. we also acknowledge the use of ssn and 10.7 cm solar radio flux data available through the nasa / gsfc omniweb interface, and of total solar irradiance data through the national geophysical data center website and the sorce homepage. the authors also thank the editor and referees, whose comments and suggestions helped us to improve the paper.

badruddin, singh, y. p., singh, m., 2006. does solar variability affect indian (tropical) weather and climate?: an assessment. in: gopalswamy, n., bhattacharya, a. (eds.), solar influence on the heliosphere and earth's environment: recent progress and prospects (proc. ilws workshop). quest publications, 444-447.

gierens, k., ponater, m., 1999. comment on "variation of cosmic ray flux and global cloud coverage - a missing link in solar-climate relationships" by h. svensmark and e. friis-christensen (1997). j. atmos. sol.-terr. phys. 61, 795-797.

grover, s. n., beard, k. v., 1975. a numerical determination of the efficiency with which electrically charged cloud drops and small raindrops collide with electrically charged spherical particles of various densities. j. atmos. sci. 32, 2156-2165.

hong, y. t., wang, z. g., jiang, h. b., lin, q. h., hong, b., zhu, y. x., wang, y., xu, l. s., leng, x. t., li, h. d., 2001. a 6000-year record of changes in drought and precipitation in northeastern china based on a δ13c time series from peat cellulose. earth planet. sci. lett. 185, 111-119.

khain, a., pokrovsky, a., pinsky, m., seifert, a., phillips, v., 2004. simulation of effects of atmospheric aerosols on deep turbulent convective clouds using a spectral microphysics mixed-phase cumulus cloud model. part i: model description and possible applications. j. atmos. sci. 61, 2963-2982.

kulmala, m., asmi, a., lappalainen, h. k., carslaw, k. h., et al., 2009. introduction: european integrated project on aerosol cloud climate and air quality interactions (eucaari) - integrating aerosol research from nano to global scales. atmos. chem. phys. 9, 2825-2841.

pruppacher, h. r., klett, j. d., 1997. microphysics of clouds and precipitation (second revised and enlarged edition with an introduction to cloud chemistry and cloud electricity). kluwer academic publishers, dordrecht, pp. 954.

sikka, d. r., 1980. some aspects of the large-scale fluctuations of summer monsoon rainfall over india in relation to fluctuations in the planetary and regional scale circulation parameters. proc. indian acad. sci. earth planet. sci. 89, 179-195.

sinha, a., cannariato, k. g., stott, l. d., cheng, h., edwards, r. l., yadava, m. g., ramesh, r., singh, i. b., 2007. a 900-year (600 to 1500 a.d.) record of the indian summer monsoon precipitation from the core monsoon zone of india. geophys. res. lett. 34, l16707.
we study the role of galactic cosmic ray (gcr) variability in influencing the rainfall variability in the indian summer monsoon rainfall (ismr) season. we find that, on an average, during drought (low ismr) periods in india the gcr flux is decreasing, while during flood (high ismr) periods the gcr flux is increasing. the results of our analysis suggest the possibility that a decreasing gcr flux during the summer monsoon season in india may suppress the rainfall, while an increasing gcr flux may enhance it. we suspect that, in addition to the prevailing environmental conditions, significant levitation / dispersion of low clouds, and hence a reduced possibility of collision / coalescence to form raindrops, suppresses the rainfall during decreasing gcr flux in the monsoon season; on the other hand, an enhanced collision / coalescence efficiency during increasing gcr flux, due to electrical effects, may contribute to enhancing the rainfall. based on these observations, we put forward the idea that, under suitable environmental conditions, a changing gcr flux may influence precipitation by suppressing / enhancing it, depending upon the decreasing / increasing nature of the gcr flux variability during the monsoon season in india, at least. we further note that the rainfall variability is inversely related to the temperature variation during the ismr season. we suggest an explanation, although speculative, of how a decreasing / increasing gcr flux can influence the rainfall and the temperature. we speculate that the proposed hypothesis, based on indian climate data, can be extended to the whole tropical and sub-tropical belt, and that it may contribute to global temperature in a significant way. if correct, our hypothesis has important implications for the sun - climate link. keywords: sun - earth connection, galactic cosmic rays, summer monsoon rainfall, temperature