in this work we are interested in finding solutions to the problem consisting of the helmholtz equation defined in a two- ( or more ) dimensional finite volume : and the homogeneous dirichlet condition on the boundary : the set ( [ eq1])([eq2 ] ) is an eigenproblem in which the values are the eigenvalues and are the corresponding eigenfunctions .the eigenproblem ( [ eq1])([eq2 ] ) appears in many different areas of physics .it describes , for example , the behaviour of a particle confined in an infinitely deep potential ( in this case is proportional to the energy of the particle while is the probability density ) or vibrations of a homogeneous membrane ( is proportional to the vibration frequency , is the amplitude ) , it is useful in studying the propagation of electromagnetic waves in waveguides , etc .so , although the problem of finding eigenvalues and eigenfunctions of the laplace operator has been known for many decades , it remains very important in many fields .the standard analytical approach to problems like ( [ eq1])([eq2 ] ) is the method of separation of variables .the first step in the method is to choose an appropriate coordinate system .the choice depends on the shape of . in practice , only in some cases it is possible to find a system fitted to the geometry of a problem and to obtain the exact solutions using the separation of variables technique . in general, the shape of the boundary of may be arbitrary and no useful coordinate system may be found , so other methods may need to be used .there are many different attempts .amore has applied a collocation method using so - called little sinc functions for problems defined in two - dimensional domains of arbitrary shape .chakraborty et al . have presented an analytical perturbative method . in the two mentioned worksbrief surveys of other methods may be found .recently , steinbach et al . have formulated a boundary element domain decomposition method that enables to transform the original problem to a new one defined on the boundaries separating the subdomains .the goal of this work is to present two methods that are applicable to the eigenproblem ( [ eq1])([eq2 ] ) in case the domain is such that it can be naturally divided into two non - overlapping subdomains .the methods consist in the application of a variational principle allowing the use of trial functions that may experience jumps in values or derivatives when passing from one subdomain to other .the dirichlet - to - neumann ( dtn ) integral operator or the neumann - to - dirichlet ( ntd ) integral operator is used , both are defined on the interface separating the subdomains .each of the methods allows to replace the initial problem ( [ eq1])([eq2 ] ) with a new problem defined in one of the subdomains and on the interface .the methods are related to the dtn and the ntd embedding methods for the bound states of the schrdinger equation ( defined in ) and their relativistic counterparts .the dtn method for the schrdinger equation is a close relative of the embedding method proposed by inglesfield . in the inglesfield s methodthe green function formalism is used while in the dtn ( and ntd ) method an operator approach analogous to that employed in the r - matrix theory is applied .the structure of the paper is as follows . in section ii a systematic construction of a variational principle ( allowing the use of discontinuous trial functions ) for bound states of the helmholtz equation is presented . 
in sectioniii the dtn and ntd operators are defined .sections iv and v are devoted to the formalism of the dtn and ntd methods for bound states of the helmholtz equation . in sectionvi a numerical example is provided .let be a two- ( or more ) dimensional finite domain of such a shape that it may be in a natural way divided into two subdomains , and , separated by a smooth curve ( or surface ) denoted by , as shown in figure [ fig1 ] .thus , the boundary of consists of and while the boundary of is composed of and .partitioning of the domain into two subdomains and , separated by the interface ; is the unit vector normal to the interface at the point .,width=188 ] a position vector lying on will be denoted by and will be the unit vector normal to at the point ( we assume that is always pointed outward from ) . denoting may rewrite the initial problem ( [ eq1])([eq2 ] ) as the function and its gradient must be continuous in the whole domain , so it is obvious that the functions and obey the equations : where is the normal derivative of at .we want to determine the values of and the corresponding functions . basing on equations ( [ eq4 ] ) , ( [ eq6 ] ) , ( [ eq8 ] ) and ( [ eq9 ] ) and using a method proposed by gerjuoy et al . we define a functional that provides some estimate of one of the sought values : =\overline{k}\,^{2}+\big<\overline{\lambda}_{i } \big|[\delta+\overline{k}\,^{2 } ] \overline{\psi}_{i}\big>_{i } + \big<\overline{\lambda}_{ii}\big|[\delta+\overline{k}\,^{2 } ] \overline{\psi}_{ii}\big>_{ii } \nonumber \\ & & \qquad + \big(\overline{\lambda}\big|\overline{\psi}_{i } -\overline{\psi}_{ii}\big)+\big(\overline{\chi}\big| \nabla_{\perp}\overline{\psi}_{i}-\nabla_{\perp } \overline{\psi}_{ii}\big ) .\label{eq11}\end{aligned}\ ] ] the scalar products are defined as follows : ( is an infinitesimal volume element around the point , is an infinitesimal scalar element of the interface around the point , and denotes the complex conjugation ) . the value , the function ( vanishing on ) and the function ( vanishing on ) are some trial estimates of the exact quantities , and .the functions ( defined in ) , ( defined in ) , and ( both defined on ) play role of the lagrange functions including equations ( [ eq4 ] ) , ( [ eq6 ] ) , ( [ eq8 ] ) and ( [ eq9 ] ) in the functional . the first variation of the functional ( [ eq11 ] ) with respect to arbitrary variations of , , about , , ( supposing that the variations and vanish on and , respectively ) and , , , about some arbitrarily chosen , , , may be written as = 2k \delta k \left [ 1+\big<\lambda_{i}\big|\psi_{i}\big>_{i } + \big<\lambda_{ii}\big|\psi_{ii}\big>_{ii } \right ] \nonumber \\ & & \qquad + \big<[\delta+k^{2}]\lambda_{i}\big|\delta\psi_{i}\big>_{i } + \big<[\delta+k^{2}]\lambda_{ii}\big|\delta\psi_{ii}\big>_{ii } \nonumber \\ & & \qquad + \big(\lambda - \nabla_{\perp}\lambda_{i }\big|\delta\psi_{i}\big ) -\big(\lambda - \nabla_{\perp}\lambda_{ii } \big|\delta\psi_{ii}\big ) \nonumber \\ & & \qquad + \big(\chi+\lambda_{i}\big|\nabla_{\perp}\delta\psi_{i}\big ) -\big(\chi+\lambda_{ii}\big|\nabla_{\perp}\delta\psi_{ii}\big ) \nonumber \\ & & \qquad + \big(\lambda_{i}\big|\nabla_{\perp}\delta\psi_{i}\big)_{i } + \big(\lambda_{ii}\big|\nabla_{\perp}\delta\psi_{ii}\big)_{ii } , \label{eq14}\end{aligned}\ ] ] where in the above scalar product , is an infinitesimal scalar element of around the point ( cf . the definitions ( [ eq12 ] ) and ( [ eq13 ] ) ) .we seek such functions , , and for which the functional is stationary , i.e. 
its first variation is equal to zero .so the functions , , and fulfil the equations : \lambda_{i}(\mathbf{r})=0 , \qquad \mathbf{r } \in \gamma_{i } , \label{eq17}\ ] ] where . from equations ( [ eq19 ] ) and ( [ eq20 ] ) we obtain comparying equations ( [ eq17 ] ) , ( [ eq18 ] ) , ( [ eq21 ] ) and ( [ eq22 ] ) with equations ( [ eq4])([eq9 ] )we find that and obey the same differential equations and the same boundary conditions as and .this means that the functions and are proportional to and : the value of may be found using the formulas ( [ eq23 ] ) in equation ( [ eq16 ] ) : according to equations ( [ eq19 ] ) , ( [ eq20 ] ) and ( [ eq23 ] ) , we may write , \label{eq25}\ ] ] , \label{eq26}\ ] ] where and are arbitrary complex constants .now , let us assume , that the trial lagrange functions ,, , , appearing in the functional ( [ eq11 ] ) , are related to the estimates and in the same way that the functions , , , are related to the exact functions and . using the formulas obtained from equations ( [ eq23])([eq26 ] ) by replacing the functions , , , , and with the trial functions , , , , and , transforms the functional ( [ eq11 ] ) to = -\frac{\big<\overline{\psi}_{i}\big| \delta\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \delta\overline{\psi}_{ii}\big>_{ii } } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii } } -\frac{\big(a\nabla_{\perp}\overline{\psi}_{i } + [ 1-a]\nabla_{\perp}\overline{\psi}_{ii}\big| \overline{\psi}_{i}-\overline{\psi}_{ii}\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii } } \nonumber \\ & & \qquad + \frac{\big(b\overline{\psi}_{i}+[1-b]\overline{\psi}_{ii}\big| \nabla_{\perp}\overline{\psi}_{i } -\nabla_{\perp}\overline{\psi}_{ii}\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii}}. \label{eq27}\end{aligned}\ ] ] our functionalis supposed to estimate of a real quantity , so it should possess the property = \mathcal{f}[\overline{\psi}_{i},\overline{\psi}_{ii } ] .\label{eq28}\ ] ] after some rearrangements we find that equation ( [ eq28 ] ) is obeyed if and the final form of the functional is = -\frac{\big<\overline{\psi}_{i}\big| \delta\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \delta\overline{\psi}_{ii}\big>_{ii } } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii } } -\frac{\big(a\nabla_{\perp}\overline{\psi}_{i } + [ 1-a]\nabla_{\perp}\overline{\psi}_{ii}\big| \overline{\psi}_{i}-\overline{\psi}_{ii}\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii } } \nonumber \\ & & \qquad + \frac{\big([1-a^{*}]\overline{\psi}_{i}+a^{*}\overline{\psi}_{ii}\big|\nabla_{\perp}\overline{\psi}_{i } -\nabla_{\perp}\overline{\psi}_{ii}\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii}}. \label{eq30}\end{aligned}\ ] ] it is easy to verify that the exact functions are the stationary points of the functional ( [ eq30 ] ) : =0 , \label{eq31}\ ] ] and the corresponding stationary values are equal to : =k^{2}. 
\label{eq32}\ ] ] the initial problem ( [ eq1])([eq2 ] ) is then equivalent to the variational principle ( [ eq30])([eq32 ] ) .the important thing is that the functional ( [ eq30 ] ) allows to use trial functions and that , together with their gradients , are continuous in their domains , but do not have to match at , so at least one of the following cases may occur : it is worth noting that such functionals are rather rarely applied .more details about the construction of similar functionals may be found in the paper of szmytkowski et al . , where variational principles for bound states of the schrdinger and the dirac equations have been presented .let us assume that the subdomain is such that we are able to find analytically the functions obeying the helmholtz equation at some fixed real value of the parameter ( which need not be in the spectrum of the eigenproblem ( [ eq1])([eq2 ] ) ) : and the boundary condition now , let us define two mutually reciprocal integral operators and such that for every at the interface it holds that ( note that the operators and are not identical , equation ( [ op3 ] ) is valid only for the functions ) .the operator transforms the dirichlet datum into the neumann datum so it is called the dirichlet - to - neumann ( dtn ) operator . in analogy , the operator is called the neumann - to - dirichlet ( ntd ) operator . using integral kernels and of the operators , we may rewrite equations ( [ op3 ] ) and ( [ op4 ] ) in the following forms : now , let us analyze the eigensystem in the above system is an eigenvalue , is an eigenfunction and is some fixed real parameter .the eigensystem ( [ op7])([op9 ] ) is non - standard , because the eigenvalue appears not in the differential equation but in the boundary condition .eigenproblems of such type are known as the steklov eigenproblems . using the green s theorem ( and the condition ( [ op8 ] ) ) for two arbitrary eigenfunctions and we obtain in virtue of equation ( [ op7 ] ) the left - hand side of equation ( [ op10 ] ) vanishes . applying equation ( [ op9 ] ) leads us to \big(\psi_{n}\big|\psi_{n'}\big)=0 .\label{op11}\ ] ] there are two conclusions we may draw from equation ( [ op11 ] ) . first , if we take equal to , we see that the eigenvalues are real second , if the eigenfunctions and belong to different eigenvalues , they are orthogonal with respect to the surface scalar product ( [ eq13 ] ) : .\label{op13}\ ] ] now , let us assume that all the eigenfunctions of ( [ op7])([op9 ] ) are orthonormal on : and that the surface functions form a complete set in the space of single - valued square - integrable functions defined on and therefore obey the closure relation where is the dirac delta function on . combining the definition of the dtn operator ( [ op3 ] ) with equation ( [ op9 ] )we may write : observe that eigenvalues of the dtn operator are the eigenvalues of the eigensystem ( [ op7])([op9 ] ) and the associated eigenfunctions are the surface parts of the eigenfunctions .according to equation ( [ op5 ] ) , we may rewrite equation ( [ op16 ] ) as multiplying the above formula by , summing over and using the closure relation ( [ op15 ] ) leads us to the spectral expansion of the dtn operator kernel as the dtn operator and the ntd operator are mutually reciprocal , the spectral expansion of the ntd kernel takes form it is obvious from the expansions ( [ op18 ] ) and ( [ op19 ] ) that the operators and are hermitian .if the trial functions employed in the functional ( [ eq30 ] ) are continuous on , i.e. 
the functional reduces to the following form : = -\frac{\big<\overline{\psi}_{i}\big| \delta\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \delta\overline{\psi}_{ii}\big>_{ii } } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii } } + \frac{\big(\overline{\psi}_{i}\big|\nabla_{\perp}\overline{\psi}_{i } -\nabla_{\perp}\overline{\psi}_{ii}\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii}}. \label{d2}\ ] ] let us assume that the trial function is some function , obeying ( [ op1 ] ) and ( [ op2 ] ) : such choice of in virtue of equations ( [ op3 ] ) and ( [ d1 ] ) gives applying equations ( [ d3 ] ) , ( [ op1 ] ) and ( [ d4 ] ) to equation ( [ d2 ] ) leads us to such a form of the functional in which the only term containing is the integral : = -\frac{\big<\overline{\psi}_{i}\big| \delta\overline{\psi}_{i}\big>_{i } -\kappa^{2}\big < \psi^{(d ) } \big| \psi^{(d ) } \big>_{ii } } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big < \psi^{(d ) } \big| \psi^{(d ) } \big>_{ii } } + \frac{\big(\overline{\psi}_{i}\big|\nabla_{\perp}\overline{\psi}_{i } -\hat{\mathcal{b}}\,\overline{\psi}_{i}(\boldsymbol{\rho})\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big< \psi^{(d ) } \big| \psi^{(d ) } \big>_{ii}}. \label{d5}\ ] ] subtracting the complex conjugation of the helmholtz equation for multiplied by from the helmholtz equation for differentiated with respect to and multiplied by we obtain integration of ( [ d6 ] ) over after employing the green s theorem , the definition ( [ op3 ] ) , the hermiticity of and the continuity constraint ( [ d1 ] ) results in substitution of equation ( [ d7 ] ) transforms the functional ( [ d5 ] ) into the form = \frac{-\big<\overline{\psi}_{i}\big| \delta\overline{\psi}_{i}\big>_{i } + \big(\overline{\psi}_{i } \big| \nabla_{\perp}\overline{\psi}_{i } -\hat{\mathcal{b}}\overline{\psi}_{i}+\frac{\kappa}{2 } [ { \partial\hat{\mathcal{b}}}/ { \partial\kappa } ] \overline{\psi}_{i}\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \frac{1}{2\kappa } \big ( \overline{\psi}_{i } \big| [ { \partial\hat{\mathcal{b}}}/ { \partial\kappa } ] \overline{\psi}_{i}\big ) } , \label{d8}\ ] ] which depends only on the parameter and the trial function defined in and on .we arrive at a conclusion that the usage of the surface integral dtn operator allows us to reduce the initial problem defined in to the subdomain and the interface .we need to find such functions , , that make the functional ( [ d8 ] ) stationary .the associated stationary values are estimates of some of the values appearing in the problem ( [ eq1])([eq2 ] ) . in practice, it may be impossible to find analytically . 
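Since the displayed definitions in the operator section did not survive extraction, it is convenient, before turning to the basis-set algebraization, to record the relations that are used here. In the notation of the surviving formulas (\hat{\mathcal{B}} for the DtN operator, \hat{\mathcal{R}} for the NtD operator, \nabla_{\perp} for the normal derivative on the interface), and writing b_{n}(\kappa) for the Steklov eigenvalues (a symbol chosen here, as the original one was lost), one has, for every \psi obeying ([op1])-([op2]),

\[
\hat{\mathcal{B}}\,\psi(\boldsymbol{\rho}) = \nabla_{\perp}\psi(\boldsymbol{\rho}),
\qquad
\hat{\mathcal{R}}\,\nabla_{\perp}\psi(\boldsymbol{\rho}) = \psi(\boldsymbol{\rho}),
\qquad \boldsymbol{\rho} \in \Gamma,
\]

while the spectral expansions referred to in ([op18]) and ([op19]) take the form

\[
\mathcal{B}(\boldsymbol{\rho},\boldsymbol{\rho}') = \sum_{n} b_{n}(\kappa)\,
\psi_{n}(\boldsymbol{\rho})\,\psi_{n}^{*}(\boldsymbol{\rho}'),
\qquad
\mathcal{R}(\boldsymbol{\rho},\boldsymbol{\rho}') = \sum_{n} b_{n}^{-1}(\kappa)\,
\psi_{n}(\boldsymbol{\rho})\,\psi_{n}^{*}(\boldsymbol{\rho}'),
\]

where \psi_{n}(\boldsymbol{\rho}) are the orthonormal interface parts of the Steklov eigenfunctions. This is a reconstruction from the surrounding text, not a verbatim copy of the lost equations. With these relations in hand, the approximate solution proceeds by the basis-set expansion described next.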
in order to obtain some approximate solutionslet us represent the trial function as a linear combination of some basis functions , defined in : applying ( [ d9 ] ) to ( [ d8 ] ) yields =\frac{\overline{\mathsf{a}}\,^{\dag } \mathsf{\lambda}^{(d)}(\kappa)\overline{\mathsf{a } } } { \overline{\mathsf{a}}\,^{\dag } \mathsf{\delta}^{(d)}(\kappa)\overline{\mathsf{a } } } , \label{d10}\ ] ] where is an -component column vector with elements and is its hermitian adjoint , while and are hermitian matrices with elements \phi_{\nu}\big)\right ] , \label{d11}\ ] ] \phi_{\nu}\big ) .\label{d12}\ ] ] let and be such particular vectors and that make the functional ( [ d10 ] ) stationary with respect to variations in their components : =0 . \label{d13}\ ] ] from equations ( [ d13 ] ) and ( [ d10 ] ) we arrive at the algebraic eigensystem ( and its hermitian conjugate ) , where is defined as .\label{d15}\ ] ] the eigensystem ( [ d14 ] ) has eigenvalues and corresponding eigenvectors .these eigenvalues are second - order variational estimates of some among values of the system ( [ eq1])([eq2 ] ) . in virtue of equation ( [ d9 ] ) ,the eigenvectors , with the components , give us functions : which are first - order variational estimates of some of the eigenfunctions of the system ( [ eq1])([eq2 ] ) in the subdomain .now we may find the functions which are the estimates of the eigenfunctions in .let us expand them in the basis constitued by the eigenfunctions of the steklov system ( [ op7])([op9 ] ) : letting the point tend to the interface , employing the orthonormality relation ( [ op14 ] ) , and using the formula ( [ d1 ] ) , we obtain the previous section we started our reasoning with the matching condition ( [ d1 ] ) for the trial functions used in the functional ( [ eq30 ] ) .now , let us turn to another possibility and impose a weaker condition in this case , the functional ( [ eq30 ] ) simplifies to =-\frac{\big<\overline{\psi}_{i}\big| \delta\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \delta\overline{\psi}_{ii}\big>_{ii } } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii } } -\frac{\big(\nabla_{\perp}\overline{\psi}_{i}\big| \overline{\psi}_{i}-\overline{\psi}_{ii}\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } + \big<\overline{\psi}_{ii}\big| \overline{\psi}_{ii}\big>_{ii}}. \label{n2}\ ] ] we assume that the trial function is some function obeying the set ( [ op1])([op2 ] ) : investigation analogous to that leading to ( [ d7 ] ) yields and the functional ( [ n2 ] ) transforms to = \frac{-\big<\overline{\psi}_{i}\big| \delta\overline{\psi}_{i}\big>_{i } + \big(\nabla_{\perp}\overline{\psi}_{i } \big| \hat{\mathcal{r}}\nabla_{\perp}\overline{\psi}_{i } -\overline{\psi}_{i}-\frac{\kappa}{2 } [ { \partial\hat{\mathcal{r}}}/{\partial\kappa } ] \nabla_{\perp}\overline{\psi}_{i}\big ) } { \big<\overline{\psi}_{i}\big|\overline{\psi}_{i}\big>_{i } -\frac{1}{2\kappa } \big ( \nabla_{\perp}\overline{\psi}_{i } \big| [ { \partial\hat{\mathcal{r}}}/ { \partial\kappa } ] \nabla_{\perp } \overline{\psi}_{i}\big ) } .\label{n5}\ ] ] following the method of algebraization applied in case of the dtn method ( see the formulas ( [ d9])([d14 ] ) ) we arrive at the generalized matrix eigensystem ( and its hermitian matrix conjugate ) , where and are matrices with elements \nabla_{\perp}\phi_{\nu}\big)\right ] \label{n7}\ ] ] and \nabla_{\perp}\phi_{\nu}\big ) . 
\label{n8}\ ] ] eigenvalues of ( [ n6 ] ) are second - order variational estimates of some of the values appearing in the set ( [ eq1])([eq2 ] ) , while components of the associated eigenvectors yield the estimates of eigenfunctions of ( [ eq1])([eq2 ] ) in : the last step is to find estimates of eigenfunctions in .we expand as follows the orthonormality relation ( [ op14 ] ) , the properties of the ntd operator and the matching condition ( [ n1 ] ) lead to it is worth noticing that in general the dtn method and the ntd method will give different estimates of the solutions of the initial system .more details about the dtn and ntd methods ( for bound states of the schrdinger equation and the dirac equation in ) may be found in the works of szmytkowski and bielski .to test the two methods , a few series of numerical calculations have been performed .a system in which is a two - dimensional domain consisting of a semicircle of radius joined to a rectangle of sides and , as depicted in figure [ fig2 ] , has been examined .the first step is to decide which part of the whole domain should be and which one should be .the decision depends on the simplicity of the construction of the dtn and the ntd operators .the region in which it is easier to solve ( [ op7])([op9 ] ) should be taken as . in our examplethe subdomain is the semicircle and the subdomain is the rectangle .it is not difficult to verify that the eigenvalues of ( [ op7])([op9 ] ) in this case are where .the corresponding eigenfunctions are of the form \left\ { \begin{array } { ccc } \sin\left[\sqrt{\kappa^{2}-n^{2}\pi^{2}/4a^{2}}\,(y+b)\right ] & \textrm{for } & \kappa^{2 } \geq n^{2}\pi^{2}/4a^{2 } \\\sinh\left[\sqrt{n^{2}\pi^{2}/4a^{2}-\kappa^{2}}\,(y+b)\right ] & \textrm{for } & \kappa^{2 } < n^{2}\pi^{2}/4a^{2 } \end{array } \right . ,\label{num2}\end{aligned}\ ] ] where according to the relation ( [ op14 ] ) are \right)^{-1 } & \textrm{for } & \kappa^{2 } \geq n^{2}\pi^{2}/4a^{2 } \\\left(\sqrt{a}\,\sinh\left[\sqrt{n^{2}\pi^{2}/4a^{2}-\kappa^{2}}\,b\right]\right)^{-1 } & \textrm{for } & \kappa^{2 } < n^{2}\pi^{2}/4a^{2 } \end{array } \right . .\ ] ] using equations ( [ num1])([num3 ] ) in ( [ op18 ] ) and ( [ op19 ] ) , we obtain the kernels of the dtn and ntd operators .let us observe that in the examined case we may distinguish the even ( symmetric with respect to the y - axis ) and the odd ( antisymmetric ) modes .we may search for them separately ( which means working with smaller matrices ) , applying apropriate basis functions in ( [ d9 ] ) .for the even modes we may use \cos(m\beta\varphi ) , \quad \mathbf{r } \in \gamma_{i } \\ & & \qquad \qquad \qquad(\mu=2,3,\ldots , \quad n , m=1,2,\ldots ) \nonumber\end{aligned}\ ] ] and for the odd modes we may apply \sin(m\beta\varphi ) , \quad \mathbf{r } \in \gamma_{i } \\ & & \qquad \qquad \qquad ( \mu=1,2,\ldots , \quad n , m=1,2,\ldots ) \nonumber\end{aligned}\ ] ] ( each represents an unique combination of two integers and ) . in the above formulas is the angle between the y - axis and the position vector $ ] ( we assume that is positive for and negative for ) , is the length of , while and are some arbitrary real parameters .note that all the functions vanish on .the variational bases are formed from the functions with and .do not forget about the extra function ( [ num4 ] ) used for the symmetric states .the function is added to the basis because all the functions ( [ num5 ] ) are equal to zero at , while in general the eigenfunctions of the even modes may be nonzero at . 
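Once a basis has been chosen, the whole computational procedure, namely assembling the matrices, solving the algebraic eigensystem ([d14]) or ([n6]), and iterating the parameter as described in the next paragraph, reduces to a few lines of linear algebra. The sketch below is a schematic illustration under assumed interfaces: the routines that assemble the two Hermitian matrices from the basis functions and the DtN or NtD kernel are placeholders, and the names and tolerances are not taken from the paper.

```python
import numpy as np
from scipy.linalg import eig

def solve_mode(build_lambda, build_delta, kappa0, mode_index,
               tol=1e-10, max_iter=100):
    """Self-consistently solve Lambda(kappa) a = k2 * Delta(kappa) a,
    tracking one selected mode.

    build_lambda, build_delta : callables returning the Hermitian matrices
        of eqs. ([d11])-([d12]) (or ([n7])-([n8])) for a given kappa.
    kappa0     : initial guess for the eigenvalue of interest.
    mode_index : which eigenvalue (sorted in ascending order) to track.
    """
    kappa = kappa0
    k2, a = None, None
    for _ in range(max_iter):
        lam = build_lambda(kappa)
        dlt = build_delta(kappa)
        w, v = eig(lam, dlt)                 # generalized eigenproblem
        w = np.real(w)                       # real up to round-off for Hermitian pairs
        order = np.argsort(w)
        k2 = w[order[mode_index]]
        a = v[:, order[mode_index]]
        if abs(np.sqrt(abs(k2)) - kappa) < tol:
            break
        kappa = np.sqrt(abs(k2))             # feed the new estimate back in
    return k2, a
```

In practice one would also guard against eigenvalues changing their order between iterations, for instance by matching eigenvectors rather than relying on a sorted index.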
To obtain the estimates of some of the eigenvalues and eigenfunctions, we must establish an initial value of the parameter (which serves as a first estimate of the sought eigenvalue). Then we apply a chosen variational basis, calculate the matrix elements entering the system ([d14]) or ([n6]), and solve that matrix system. The resulting eigenvalues are used to set new values of the parameter (separately for each mode). We focus on an arbitrarily chosen state and use its new eigenvalue estimate, via ([num7]), as the updated value of the parameter. We then build new matrices, solve the new matrix system, and obtain new estimates of the eigenvalues, with the one of interest among them. The update ([num7]) is applied again, and the iterative procedure is repeated until convergence of the estimate is achieved.

[Table [tab1]: Convergence rate of the DtN and the NtD variational estimates for the lowest even mode and the lowest odd mode of the system used in the numerical example, obtained with the basis functions ([num4])-([num6]); the basis sizes and the starting values of the iteration for the first even and first odd modes are given in the original table, whose body did not survive extraction. SI units are used.]

References:
P. Amore, J. Phys. A: Math. Theor. 41, 265206 (2008)
S. Chakraborty, J. K. Bhattacharjee, S. P. Khastgir, J. Phys. A: Math. Theor. 42, 195301 (2009)
O. Steinbach, M. Windisch, Numer. Math., doi:10.1007/s00211-010-0315-6 (2010)
R. Szmytkowski, S. Bielski, Phys. Rev. A 70, 042103 (2004)
S. Bielski, R. Szmytkowski, J. Phys. A 39, 7359-7381 (2006)
J. E. Inglesfield, J. Phys. C 14, 3795-3806 (1981)
R. Szmytkowski, J. Phys. A: Math. Gen. 30, 4413-4438 (1997)
R. Szmytkowski, J. Math. Phys. 39, 5231-5252 (1998); erratum: J. Math. Phys. 40, 4181 (1999)
E. Gerjuoy, A. R. P. Rau, L. Spruch, Rev. Mod. Phys. 55, 725-774 (1983)
R. Szmytkowski, S. Bielski, Int. J. Quantum Chem. 97, 966-976 (2004)
G. Auchmuty, Numer. Funct. Anal. Optim. 25, 321-348 (2004)
Two methods for computing bound states of the Helmholtz equation in a finite domain are presented. The methods are formulated in terms of the Dirichlet-to-Neumann (DtN) and Neumann-to-Dirichlet (NtD) surface integral operators. They are adapted from the DtN and NtD methods for bound states of the Schrödinger equation. A variational principle that enables the use of these operators is constructed; it allows trial functions that are discontinuous in values or derivatives. A numerical example illustrating the usefulness of the DtN and NtD methods is given.
marvin minsky s `` society of mind '' theory postulates that our behaviour is not the result of a single cognitive agent , but rather the result of a society of individually simple , interacting processes called agents .the power of this approach lies in specialization : different agents can have different representations , different learning processes , and so on . on a larger scale , our society as a whole validates this approach : our technological achievements are the result of many cooperating specialized agents . in reinforcement learning ( rl ) , where the goal is to learn a policy for an agent interacting with an initially unknown environment , the importance of breaking large tasks into smaller pieces has long been recognized .specifically , there has been a lot of work on hierarchical rl methods , which decompose a task in a hierarchical way into subtasks .hierarchical learning can help accelerate learning on individual tasks by mitigating the exploration challenge of sparse - reward problems .one of the most popular frameworks for this is the options framework , which extends the standard rl framework based on markov decision processes ( mdp ) with temporally - extended actions . in this paper, we propose a framework of communicating agents that aims to generalize the traditional hierarchical decomposition to allow for more flexible task decompositions .for example , decompositions where multiple subtasks have to be solved in parallel , or in cases where a subtask does not have a well - defined end , but rather is a continuing process that needs constant adjustment , such as walking through a crowded street .we refer to our framework as separation - of - concerns .what is unique about our proposed framework is the way agents cooperate . to enable cooperation, the reward function of a specific agent not only has a component that depends on the environment state , but also a component that depends on the communication actions of the other agents .depending on the specific mixture of these components , agents have different degrees of independence .in addition , because the reward in general is state - specific , an agent can show different levels of dependence in different parts of the state - space .typically , in areas with high environment - reward , an agent will act independent of the communication actions of other agents , while in areas with low environment - reward , an agent s policy will depend strongly on the communication actions .in this initial work , we focus primarily on the theoretical aspects of our framework .formally , our framework can be seen as a sequential multi - agent decision making system with non - cooperative agents .this is a challenging setting , because from the perspective of one agent , the environment is non - stationary due to the learning of the other agents .we address this by defining trainer agents with a fixed policy . learning with these trainer agentscan occur in two ways : 1 ) by pre - training agents and then freezing their policy , or 2 ) by learning in parallel using off - policy learning .the rest of this document is organized as follows . in section 2 ,we discuss related work . in section 3 ,we introduce our model .section 4 contains a number of experiments .section 5 discusses the results and future work . finally , section 6 concludes .* hierarchical learning / options * our work builds upon the long line of work on hierarchical learning . introduced the maxq framework , which decomposes the value function in a hierarchical way . 
introduced the options framework .options are temporally extended actions consisting of an initialization set , an option policy and a termination condition .effectively , applying options to an mdp , changes it into a semi - mdp .options have also been popular as a mechanism for skill discovery. there has been significant work on option discovery . in the tabular setting , useful subgoal states can be identified , for example , by using heuristics based on the visitation frequency , by using graph partitioning techniques , or by using the frequency with which state variables change . with function approximation ,finding good subgoals becomes significantly more challenging . assumed that subgoal states were given ( hence , only the option policy needs to be learned ) . perform option discovery by identifying ` purposes ' at the edge of a random agent s visitation area .learning options towards those edge - purposes brings the agent quickly to a new region where it can continue exploration . a new architecture to learn the policy over options , the options themselves , as well as their respective termination conditions .this is accomplished without defining any particular subgoal .only the number of options is known beforehand . study hierarchical rl in the context of deep reinforcement learning . in their setting, a high - level controller specifies a goal for the low - level controller .once the goal is accomplished , the top - level controller selects a new goal for the low - level controller .the system was trained in two phases : in the first phase the low - level controller is trained on a set of different goals ; in the second phase the high - level and low - level controllers are trained in parallel . also use a system with a high - level and a low - level controller , but the high - level controller continuously sends a modulation signal to the low - level controller , affecting its policy .this setup can be seen as a special case of the framework that we introduce .* multi - agent systems * there is also a large amount of work on multi - agent rl ( e.g. , see for an overview ) .the standard multi - agent configuration includes multiple agents which are acting on the environment simultaneously and which receive rewards individually based on the joint actions .this can naturally be modelled as a stochastic game .multi - agent systems can be divided into fully cooperative , fully competitive or mixed tasks ( neither cooperative nor competitive ) . for a fully cooperative task , all agents share the same reward function .early works includes the integrated learning system ( ils ) by , which integrates heterogeneous learning agents ( such as search - based and knowledge - based ) under a central controller through which the agents critique each other s proposals .alternatively , proposed learning with an external critic ( lec ) and learning by watching ( lbw ) , which advocate learning from other agents in a social settings .it has been shown that a society of q - learning agents , which are watching each other can learn faster than a single q - learning agent .more recently , used a framework of communicating agents based on deep neural networks to solve various complex tasks .they evaluated two learning approaches .for the first approach , each agent learns its own network parameters , while treating the other agents as part of the environment .the second approach uses centralized learning and passes gradients between agents . 
for fully competitive tasks , which typically deal with the two - agent case only , the agents have opposing goals ( the reward function of one agent is the negative of the reward function of the otherour work falls into the mixed category , which does not pose any restrictions on the reward functions .a lot of work in the mixed setting focuses on static ( stateless ) tasks .separation of concerns ( soc ) can benefit from ideas and theoretical advances in multi - agent frameworks .nevertheless , it should be noted that there are key differences between soc and multi - agent methodologies .intrinsically , soc splits a single - agent problem into multiple parallel , communicating agents with simpler and more focused , but different objectives ( skills ) .in this section , we introduce our model for dividing a single - agent task into a multi - agent task . our single - agent task is defined by a markov decision process ( mdp ) , consisting of the tuple , where is the set of states ; the set of actions ; indicates the probability of a transition to state , when action is taken in state ; indicates the reward for a transition from state to state under action ; finally , the discount factor specifies how future rewards are weighted with respect to the immediate reward .actions are taken at discrete time steps according to policy , which maps states to actions .the goal is to maximize the discounted sum of rewards , also referred to as the _ return _ : we call the agent of the single - agent task the _ flat agent_. we expand this task into a system of communicating agents as follows . for each agent , we define an environment action - set , a communication action - set , and a _ learning objective _ , defined by a reward function plus a discount factor . furthermore , we define an action - mapping function that maps the joint environment - action space to an action of the flat agent .the agents share a common state - space consisting of the state - space of the flat agent plus the joint communication actions : .see figure [ fig : soc model ] for an illustration for . at time , each agent observes state and selects environment action and communication action according to policy .action is fed to the environment , which responds with an updated state .the environment also produces a reward , but this reward is only used to measure the overall performance of the soc model . 
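To make the interaction protocol concrete, the following sketch shows one possible way to organize the step loop of an SoC system. It is schematic and relies on assumed interfaces: the environment, the agent policies, the action mapping and the per-agent reward functions are placeholders introduced here for illustration, not objects defined in the paper.

```python
def soc_episode(env, agents, action_map, gamma_env=0.99):
    """Run one episode of a separation-of-concerns system.

    agents     : objects with .policy(state) -> (env_action, comm_action),
                 .reward(...) -> float and .update(...) for learning
    action_map : maps the joint environment actions to a flat-agent action
    """
    env_state = env.reset()
    comm = tuple(0 for _ in agents)       # last communication action of each agent
    ret, discount, done = 0.0, 1.0, False
    while not done:
        state = (env_state, comm)         # shared state: flat state + joint comm actions
        env_acts, comm_acts = zip(*(ag.policy(state) for ag in agents))
        flat_action = action_map(env_acts)            # joint env actions -> flat action
        next_env_state, env_reward, done = env.step(flat_action)
        next_state = (next_env_state, comm_acts)
        for j, ag in enumerate(agents):
            # each agent learns from its own reward, which may depend on the
            # communication actions of the other agents
            r_j = ag.reward(state, env_acts, comm_acts, next_state)
            ag.update(state, (env_acts[j], comm_acts[j]), r_j, next_state)
        ret += discount * env_reward      # environment reward: performance measure only
        discount *= gamma_env
        env_state, comm = next_env_state, comm_acts
    return ret
```

The environment reward is accumulated here only as an overall performance measure; as discussed next, learning is driven entirely by the agents' own reward functions.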
for learning , each agent uses its own reward function to compute reward .an important property of the soc model is that the reward function of a particular agent depends on the communication actions of the other agents .this provides a clear incentive for an agent to react in response to communication , even in the case of full observability .for example , agent a can ` ask ' agent b to behave in a certain way via a communication action that rewards agent b for this behaviour .full observability is not an explicit requirement of our framework .the general model described above can be extended in different ways .in particular , extensions that allow for further specialization of agents will increase the benefit of the soc model as a whole .some examples are : * state abstractions / information hiding : because the agents have different objectives , they can use different state - abstractions .* action - selection at different frequencies .* state - dependent gamma ( such that terminal states can be defined ) * state - dependent action sets .* example : hierarchical configuration * + one important example is the case where the agents are organized in a way that decomposes a task hierarchically . as an example , consider a configuration consisting of three agents : agent 0 is the top - level agent , and agent 1 and agent 2 are two bottom - level agents .the top - level agent only has communication actions , specifying which of the bottom level agents is in control .that is , and .agent 1 and agent 2 both have a state - dependent action - set that gives access to the environment actions if they have been given control by agent 0 .that is , for agent 1 : and vice - versa for agent 2 . by allowing agent 0 to only switch its actiononce the agent currently in control has reached a terminal state ( by either storing a set of terminal state conditions itself , or by being informed via a communication action ) , a typical hierarchical task decomposition is achieved .this example illustrates that our soc model is a generalization of a hierarchical model . like any multi - agent system , obtaining stable performance for some soc configurations can be challenging . in the next section, we discuss some learning strategies for obtaining stability .a common learning approach for mixed - strategy multi - agent systems like ours is to use a single - agent algorithm for each of the agents .while there is no convergence guarantee in this case , it often works well in practice . in this section, we examine under which conditions exact convergence can be guaranteed . by assigning a stationary policy to all agents , except agent , an implicit mdp is defined for agent with state space , reward function and ( joint ) action space .the proposition holds if the next state only depends on the current state and joint action . 
because the policies of all agents other than agent are fixed , knowing fixes a distribution over the environment and communication actions for each of the other agents .the distribution over these environment actions , together with the environment action of agent determines a distribution for the random variable .together with the ( distribution over ) communication actions this fixes a distribution for .it follows from proposition 1 that if we also define a policy for agent , then we get a well - defined value - function .let be a tuple of policies , assigning a policy to each agent : .we can now define a value - function with respect to reward function and discount factor of agent as follows : using this , we define the following independence relation between agents . agent is independent of agent if the value does not depend on the policy of agent . agent is dependent of agent if it is not independent of agent .note that the agent - state includes the communication action of agent .hence , this independence definition still allows for a dependence on the most recent communication action of agent ; the independence is only with respect to the future actions of agent , that is , its policy .a simple example of a case where this independence relation holds is the hierarchical case , where the actions of the top agent remain fixed until the bottom agent reaches a terminal state .to agent means that agent depends on agent .circles represent regular agents ; diamonds represent trainer agents . the cyclic graph shown in ( b )can be transformed into an acyclic graph by adding trainer agents . a trainer agent for agent defines fixed behaviour for the agents that agent depends on to ensure stable learning ., width=302 ] using the independence definition above , we can draw a _ dependency graph _ of an soc system , which is a directed graph with arrows showing the dependence relations : an arrow pointing from agent to agent means that agent depends on agent . in general , a dependency graph can be acyclic ( containing no directed cycles , see figure [ fig : dependency graphs]a ) or cyclic ( containing directed cycles , see figure [ fig : dependency graphs]b ) . if the dependency graph is an acyclic graph , using single - agent q - learning to train the different agents is especially straightforward , as shown by the following theorem . when the dependency graph is acyclic , using q - learning for each agent in parallel yields convergence under the standard ( single - agent ) conditions . because the graph is acyclic , some agents are independent of any of the other agents andhence will converge over time .once the q - values of these independent agents are sufficiently close to their optimal values ( such that the distance to the optimal value is smaller than the difference between two actions ) , their policies will not change anymore .once this occurs , the agents that only depend on these independent agents will converge .after this , the agents one step downstream in the graph will converge , and so on , until all agents have converged . in many cases, the dependency graph will be cyclic . in this case ,convergence of q - learning is not guaranteed .however , we can transform a cyclic graph into an acyclic one by using _ trainer agents_. 
a _ trainer agent _ , assigned to a particular agent , is a fixed - policy agent that generates behaviour for the agents that agent depends on .assigning a trainer agent to agent implicitly defines a stationary mdp for agent with a corresponding optimal policy that can be learned .hence , agent only depends on the trainer agent .the trainer agent itself is an independent agent .hence , trainer agents can be used to break cycles in dependency graphs .note that a cyclic graph can be transformed into an acyclic one in different ways . in practice , which agents are assigned trainer agents is a design choice that depends on how easy it is to define effective trainer behaviour . in the simplest case ,a trainer agent can just be a random or semi - random policy .as an example , the graph shown in figure [ fig : dependency graphs]b can be transformed into the graph shown in figure [ fig : dependency graphs]c by defining a trainer agent for agent that generates behaviour for agent 2 and 3 , and a trainer agent for agent that generates behaviour for agent and .learning with trainer agents can occur in two ways .the easiest way is to pre - train agents with their respective trainer agents , then freeze their weights and train the rest of the agents . alternatively , all agents could be learned in parallel , but the agents that are connected to a trainer agent use off - policy learning to learn values that correspond to the policy of the trainer agent , while the behaviour policy is generated by the regular agents .off - policy learning can be achieved by importance sampling , which corrects for the frequency at which a particular sample is observed under the behaviour policy versus the frequency at which it is observed under the target policy .this can be achieved as follows .consider agent with actions that depends on agent with actions .furthermore , consider that agent has a trainer agent attached to it mimicking behaviour for agent . in other words ,agent also has actions . at any moment in time, the actual behaviour is generated by agents and .if at time , agent selects action , while the selection probability for that action is , and the selection probability for that same action is for trainer agent , then the off - policy update for agent is : obtaining convergence does not necessarily mean that the policy that is obtained is a good policy .in the next section , we address the optimality of the policy . in the context of hierarchical learning , defines _ recursive optimality _ as a type of local optimality , in which the policy for each subtask is optimal given the policies of its children - subtasks .the recursive optimal policy is the overall policy that consists of the combination of all the locally optimal policies .the recursive optimal policy is in general worse than the optimal policy for a flat agent , but it can be much easier to find .we can define a similar form of optimality for an soc model : if the dependency graph of an soc model is acyclic ( with or without adding trainer agents ) , then we define a recursive optimal soc policy as the policy consisting of all locally optimal policies , that is , policy is optimal for agent , given the policies of the agents that it depends on. 
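Returning to the off-policy scheme with trainer agents: the displayed update rule did not survive extraction, so the sketch below gives one plausible tabular form in which the importance-sampling ratio reweights the temporal-difference update. The placement of the correction and all names are assumptions, not a reproduction of the paper's formula.

```python
def q_update_with_trainer(Q, s, a_own, r, s_next, actions,
                          p_trainer, p_behaviour, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step for an agent whose learning target is the
    MDP induced by the trainer agent, while the other agent's executed action
    was generated by the regular (learning) agent.

    Q           : dict mapping (state, own_action) -> value
    actions     : the agent's own action set (for the greedy bootstrap)
    p_trainer   : probability of the other agent's executed action under its trainer
    p_behaviour : probability of that same action under the regular agent
    """
    rho = p_trainer / p_behaviour                           # importance-sampling ratio
    greedy = max(Q.get((s_next, b), 0.0) for b in actions)
    td_error = r + gamma * greedy - Q.get((s, a_own), 0.0)
    Q[(s, a_own)] = Q.get((s, a_own), 0.0) + alpha * rho * td_error
```

In the alternative scheme mentioned above, the agents attached to trainer agents are simply pre-trained against them and then frozen, in which case no correction is needed.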
the learning strategies discussed in the previous section will converge to the recursive optimal policy .how close this policy is to the optimal policy depends on the specific decomposition , that is , the communication actions , the agent - reward functions and , potentially , the employed trainer agents .in this section , we illustrate the application of the soc model using two examples .first , we demonstrate the method on a tabular domain .then , we consider a pixel - based game and combine soc with deep reinforcement learning .the aim of the first experiment is to show the scalability of the soc model . for this, we use a navigation domain , shown in figure [ fig : navigation ] .the action set consists of a move forward action , a turn clockwise and a turn counterclockwise action .furthermore , we add a varying number of extra ` no - op ' actions ( actions without effect ) to control the complexity of the domain .the reward is -5 when the agent bumps into a wall and -1 for all other transitions .we compare a flat agent with an soc configuration consisting of a high and low - level agent .the high - level agent communicates a compass direction to the low - level agent , , and has no environmental actions ( ) .the low - level agent in turn has access to all environmental actions and no communication actions ( ) .the reward function of the high - level agent is such that it gets -1 on each transition .the reward function of the low - level agent is such that it gets -5 for hitting the wall and + 1 if it makes a move in the direction requested by the high - level agent .all agents are trained with q - learning and use -greedy exploration with a fixed of 0.01 and a step - size of 0.1 . clockwise and turn counter - clockwise .furthermore , additional ` no - op ' actions are added to increase the complexity of the learning task.,width=151 ] the left of figure [ fig : nav_learning ] shows the learning behaviour for tasks with different levels of complexity .specifically , we compare tasks with 5 , 10 and 20 no - op actions .while the complexity has only a small affect on the performance of the soc method , it affects the flat agent considerably .this is further illustrated by the right graph of figure [ fig : nav_learning ] , which shows the average return over 4000 episodes for a different number of no - op actions .overall , these results clearly illustrate the ability of the soc model to improve the scalability .note that implementing this with a hierarchical approach would require the high - level agent to know the available compass directions in each grid - cell to avoid giving the low - level agent a goal that it can not fulfill ( e.g. 
, it can not move north while it is in the top - left corner ) .by contrast , the high - level agent of the soc system does not require this information ._ comparison of the soc model and a flat agent in the navigation domain for three different task complexities ._ [ right ] _ scalability of soc learning : the average return over the first 4000 episodes on the navigation task as function of the number of added no - op actions.,title="fig:",width=245 ] _ comparison of the soc model and a flat agent in the navigation domain for three different task complexities ._ [ right ] _ scalability of soc learning : the average return over the first 4000 episodes on the navigation task as function of the number of added no - op actions.,title="fig:",width=249 ] in our second example , we compare a flat agent with the soc model on the game catch .catch is a simple pixel - based game introduced by .the game consists of a 24 by 24 screen of binary pixels in which the goal is to catch a ball that is dropped at a random location at the top of the screen with a paddle that moves along the bottom of the screen . in our case , both the ball and the paddle consist of just a single pixel .the available actions are , and .the agent receives + 1 reward for catching the ball , -1 if the ball falls of the screen and 0 otherwise .similar to the navigation domain , our soc model consists of a high - level and a low - level agent .the high - level agent has no direct access to the environment actions , but it communicates a desired action to the low - level agent : .the low - level agent has direct access to the environment actions and no communication actions : and .furthermore , the high - level agent has a discount factor of 0.99 and access to the full screen , whereas the low - level agent has a discount factor of 0.65 and uses a bounding box of 10 by 10 pixels around the paddle .the low - level agent only observes the ball when its inside the bounding box .the high - level agents receives a reward of + 1 if the ball is caught and -1 otherwise ; the low - level agent receives the same reward plus a small positive reward for taking the action suggested by the high - level agent .the high - level agent takes actions every 2 time steps , whereas the low - level agent takes actions every time step .both the flat agent and the high - level and low - level agents are trained using dqn . the flat agent uses a convolutional neural network defined as follows .the 24x24 binary image is passed through two convolutional layers , followed by two dense layers .both convolutional layers have 32 filters of size ( 5,5 ) and a stride of ( 2,2 ) .the first dense layer has 128 units , followed by the output layer with 3 units .the network uses the same activations and initializations as in .the high - level agent in the soc system uses an identical architecture to that of the flat agent .however , due to the reduced state size for the low - level agent , it only requires a small dense network .the network flattens the 10x10 input and passes it through two dense layers with 128 units each . the output is then concatenated with a 1-hot vector representing the communication action of the high - level agent . the merged outputis then passed through one final dense layer with 3 units ( see figure [ fig : catch_model ] ) .the left graph of figure [ fig : catch ] shows the results of the comparison .the soc model learns significantly faster than the flat agent . 
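For concreteness, the two architectures described above can be written down directly from the stated layer sizes. The sketch below uses PyTorch purely for illustration (the paper does not state which framework was used); ReLU activations are taken from the cited DQN setup, and 'valid' convolutions are assumed when computing the flattened feature size.

```python
import torch
import torch.nn as nn

class ScreenDQN(nn.Module):
    """24x24 binary screen -> 3 action values (flat agent and high-level agent)."""
    def __init__(self, n_actions=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),   # 24x24 -> 10x10
            nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.ReLU(),  # 10x10 -> 3x3
            nn.Flatten(),
            nn.Linear(32 * 3 * 3, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, screen):               # screen: (batch, 1, 24, 24)
        return self.net(screen)


class LowLevelDQN(nn.Module):
    """10x10 bounding box plus one-hot communication action -> 3 action values."""
    def __init__(self, n_actions=3, n_comm=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(),
            nn.Linear(10 * 10, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128 + n_comm, n_actions)

    def forward(self, box, comm_onehot):     # box: (batch, 1, 10, 10); comm_onehot: (batch, 3)
        h = self.body(box)
        return self.head(torch.cat([h, comm_onehot], dim=1))
```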
to show the importance of the co - operation between the low - level and the high - level agent, we performed an additional experiment where we varied the additional reward the low - level agent gets for taking the action suggested by the high - level agent .the results are shown in the right graph of figure [ fig : catch ] .if the additional reward is 0 , the low - level agent has no incentive to listen to the high - level agent and will act fully independent ; alternatively , if the additional reward is very high , it will always follow the suggestion of the high - level agent . because both agents are limited ( the high - level agent has a low action - selection frequency , while the low - level agent has a limited view ) , both these situations are undesirable .the ideal low - level agent is one that acts neither fully independent nor fully dependent . _ learning speed comparison on catch ( one epoch corresponds with 40 episodes ) . _[ right ] _ effect of the communication reward on the final performance of the soc system ., title="fig:",width=245 ] _ learning speed comparison on catch ( one epoch corresponds with 40 episodes ) . _[ right ] _ effect of the communication reward on the final performance of the soc system ., title="fig:",width=245 ]the experiments from the previous section showed the validity of the separation of concerns principle : separating a task into multiple related sub - tasks can result in considerable speed - ups . in the presented experiments ,the decomposition was made a priori . in future work, we would like to focus on ( partially ) learning the decomposition . in this case, we do not necessarily expect an advantage on single tasks , due to the cost of learning the decomposition . in the transfer learning setting , however , where a high initial cost for learning a representation can be offset by many future applications of that representation , it could prove to be useful . the soc configuration we used in both our examples consisted of a high - level agent that only communicates and a low - level agent that only performs environment actions .another direction for future work is to explore alternative configurations and use more than two agents .the reward function in reinforcement learning often plays a double role : it acts as both the performance objective , specifying what type of behaviour is desired , as well as the learning objective , that is , the feedback signal that modifies the agent s behaviour . that these two roles do not always combine well into a single functionbecomes clear from domains with sparse rewards , where learning can be prohibitively slow .work on intrinsic motivation aims to mitigate this issue by providing an additional reward function to steer learning . in a sense, the soc model takes this a step further : the performance objective , consisting of the reward function of the environment , is fully separated from the learning objective of the agents , consisting of the agent s reward function . this clear separation between performance objective and learning objectivefurther separates the soc model from options .options , once learned , aggregate the rewards obtained from the environment .hence , the top - level agent of a hierarchical system based on options learns a value function based on the environment reward .we argue for a clearer separation .we presented initial work on a framework for solving single - agent tasks using multiple agents . 
In our framework, different agents are concerned with different parts of the task. The framework can be viewed as a generalization of the traditional hierarchical decomposition. We identified conditions under which convergence of Q-learning occurs (to a recursive optimal policy) and presented experiments to validate the approach.

References:
Şimşek, Ö., Wolfe, A. P., and Barto, A. G. Identifying useful subgoals in reinforcement learning by local graph partitioning. In Proceedings of the International Conference on Machine Learning (ICML), 2005.
Foerster, J. N., Assael, Y. M., de Freitas, N., and Whiteson, S. Learning to communicate with deep multi-agent reinforcement learning. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2016.
Kulkarni, T. D., Narasimhan, K. R., Saeedi, A., and Tenenbaum, J. B. Hierarchical deep reinforcement learning: integrating temporal abstraction and intrinsic motivation, 2016. arXiv:1604.06057 [cs.LG].
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., and Hassabis, D. Human-level control through deep reinforcement learning. Nature, 518:529-533, 2015.
Silver, B., Frawley, W., Iba, G., Vittal, J., and Bradford, K. ILS: A framework for multi-paradigmatic learning. In Proceedings of the Seventh International Conference on Machine Learning (ICML), pp. 348-356, 1990.
In this paper, we propose a framework for solving a single-agent task by using multiple agents, each focusing on different aspects of the task. This approach has two main advantages: 1) it allows for specialized agents for different parts of the task, and 2) it provides a new way to transfer knowledge, by transferring trained agents. Our framework generalizes the traditional hierarchical decomposition, in which, at any moment in time, a single agent has control until it has solved its particular subtask. We illustrate our framework with a number of examples.
fmos is the near - infrared fiber multi - object spectrograph that has been in operation as one of the open - use instruments on the subaru telescope since 2009 .it can configure 400 fibers of aperture in a 30 diameter field of view at the primary focus .the 400 infrared spectra in two groups are taken by two spectrographs called irs1 and irs2 ( infrared spectrograph 1 and 2 ) in either of two modes : a low - resolution mode with a spectral resolution of 20 in the 0.9 - 1.8 m wavelength range , and a high - resolution mode of 5 in one of the quarter wavelength ranges of 0.90 - 1.14 , 1.01 - 1.25 , 1.11 - 1.35 , 1.40 - 1.62 , 1.49 - 1.71 , or 1.59 - 1.80 m .the bright oh - airglow emission lines are masked by the mask mirror installed in these spectrographs . the individual tools for image reduction in this package were developed during engineering and guaranteed - time observations since 2008 , in conjunction with other operation and reduction software . although these tools were not developed as part of the public reduction software of fmos , they are now stable , thus they have earned the name of `` fibre - pac '' ( fmos image - based reduction package ) .the basic concepts underlying the package are as follows : 1 .most of the reduction is processed by iraf .several complicated steps are processed by original software written in c using the cfitsio library .3 . almost all of the reduction processes are automated using script files in text format .4 . modification of the text files is done by general unix commands .the original 2-dimensional information is kept as far as possible throughout the reduction processes .these concepts enable easy implementation and open processing without the inconvenience of licensed or black box parts and ensure traceable operation with visual confirmation .the 2-dimensional information has advantages not only in filtering out unexpected noise using their small sizes but also on detection of faint emission - features . in this paper, we describe the reduction process of the fmos images based on its aug.9 - 2011 version , taking care of complex conditions in the infrared , using multiple fibers and the oh - suppressed spectrograph .the fmos images are acquired by a uniform interval non - destructive readout technique called `` up - the - ramp sampling '' ( hereafter `` ramp sampling '' for short ) .a typical exposure of 900 seconds consists of 54 images .after an exposure is finished , a final frame ( treated as a `` raw image '' in the reduction process ) is created where the count in each pixel is calculated by performing a least squares fit to the signal count of 54 images .in addition to suppressing readout noise , the advantages of the ramp sampling are 1 ) saturated pixels can be estimated from the counts prior to saturation , and 2 ) cosmic - ray events can be detected as an unexpected jump in the counts and removed from the final frame .the detection threshold of the cosmic - ray events has been currently set to 10 in an empirical way based on the real fmos images .for irs1 , the fit and the cosmic - ray rejection are done during the ramp sampling , so that the final frame is ready as the exposure finishes . for irs2, however , the nonlinear bias variation prevents fitting the slope during the exposure .instead , the ramp fitting is executed after the simple background subtraction ( cf . 3.2 ) has been performed for all of the images taken during the sampling .for example , 54 background - subtracted ramp images are prepared prior to fitting . 
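a minimal sketch of the up - the - ramp slope fitting and cosmic - ray rejection described above ; the array layout , the mad - based jump detection and the exact threshold are illustrative assumptions rather than the actual fibre - pac code .

import numpy as np

def ramp_fit(reads, dt=1.0, jump_sigma=10.0):
    """least-squares slope per pixel for an up-the-ramp stack of shape
    (n_reads, ny, nx); steps that jump by more than jump_sigma (in robust
    sigma units) are treated as cosmic-ray hits and replaced before the fit."""
    n = reads.shape[0]
    t = np.arange(n) * dt
    diffs = np.diff(reads, axis=0)                       # per-read increments
    med = np.median(diffs, axis=0)
    mad = np.median(np.abs(diffs - med), axis=0) + 1e-12
    good = np.abs(diffs - med) < jump_sigma * 1.4826 * mad
    cleaned = np.where(good, diffs, med)                 # patch flagged steps
    ramp = np.concatenate([reads[:1], reads[:1] + np.cumsum(cleaned, axis=0)], axis=0)
    t_c = t - t.mean()
    slope = (t_c[:, None, None] * (ramp - ramp.mean(axis=0))).sum(axis=0) / (t_c ** 2).sum()
    return slope   # counts per unit time; the "raw image" is slope times exposure time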
consequently the nonlinear bias component is subtracted together with the background photons .( 80mm,80mm)figure1.eps figure [ fig : raw_image ] shows the resulting `` raw image '' after ramp sampling 54 times using irs1 .raw images for irs2 do not exist for the reasons explained above .the raw images have the following components : 1 . thermal background ( e/900 s ) 2. suppressed oh - airglow ( e/900 s on average ) 3 .remaining cosmic ray events ( e / event ) 4 . read noise ( e rms/54 ramp readout ) 5 .object ( e/900 s for a 20 mag(ab ) object ) 6 .dark current ( e/900 s ) 7 .bias offset ( e / exposure ) 8 .cross - talk between quadrants ( .15% of the count ) 9 .bad pixels having no efficiency , unusual large dark current , or too large noise ( .2% of the pixels ) 10 . on - chipamplifier glow at the corners of the four quadrants ( e/54 ramp readout ) 11 .unknown external noise in a stripe pattern ( quite rare and the amplitude is e/900 s at most ) 12 .point spread function ( psf ) of the fiber of 5 pixels with a pitch of 10 pixels between spectra ( having almost no cross talk of the spectra on either side ) 13 .position of an individual fiber along the slit 14 .optical distortion 15 .quantum efficiency and its pixel - to - pixel variation ( `` flat '' pattern ) 16 .system throughput variation with wavelength 17 .atmospheric transmittance 18 .intrinsic absorption and emission of the objects the thermal background can be removed in the initial subtraction described in subsection 3.2 .the 2nd strongest component , residual airglow , can be subtracted by interpolation of the background of other fibers after the optical distortion is corrected using a dome - flat image taken in the same observation mode .the remaining cosmic - ray and other strong noise features having a size smaller than the psf of a fiber are removed in three subsequent stages with different threshold counts . besides the science frames , the following images are necessary for this reduction process . detector - flat ] reduction process ----------------- before performing a reduction of the science frames , a preparatory process is applied to the dome - flat and th - ar spectral images to determine the optical distortion and the wavelength calibration of the image .first , flat fielding : the dome - flat image is divided by the detector - flat image to remove differences in the quantum efficiency between pixels .second , bad pixel correction : the registered and temporally prominent bad pixels , which are picked up by subtraction of 3 median filtered image , are replaced with an interpolated value from the surrounding pixels . third , correction of the spatial distortion : the -axis of the image is converted using .( here , the origin is at the center of the image . )the four parameters are chosen to make the psf amplitude in the image projected along the -axis maximum ( cf .figure [ fig : spacial_distortion ] ) . in this modification process, the dome - flat spectrum from each fiber with 9 pixels in width is converted into a parallel line to make a `` combined 1d image '' ( as in figure [ fig : spectral_distortion ] ) that includes the pattern of the airglow mask .( 80mm,79mm)figure2a.eps ( 80mm,80mm)figure2b.eps fourth , correction of the spectral distortion : the -axis of the combined 1d image is converted using , in which the two parameters are determined for each line so as to minimize the shift and the magnification difference in the mask pattern . 
the 200 resulting and parametersare fitted with fourth - order polynomials in to remove the local matching error in the pattern .this conversion process makes the mask pattern straight in the column direction ( cf .figure [ fig : spectral_distortion ] ) , which is necessary for better subtraction of the residual airglow lines in the science frames .these parameters obtained from the dome - flat image are also applied to the th - ar spectral image to make a combined 1d image , as shown in figure [ fig : thar_lamp ] .although the slit image of each emission line and of the airglow masks should be parallel , alignment errors cause small differences between them .finally , wavelength calibration : the correlation between the observed wavelength and the pixels of each spectrum are determined , as represented by .the resulting , , , and coefficients are used to calculate the corresponding wavelengths in each spectrum without changing them , because the positions of the individual fibers in the slit may exhibit some scatter due to imperfect fiber alignment along the slit .the results of the wavelength calibration are confirmed by comparing the reduced th - ar spectra with an artificial image based on the known wavelengths of the th - ar emission lines ( cf. figure [ fig : thar_lamp ] ) .the typical calibration error of the spectra is less than 1 pixel , corresponding to 5 or 1.2 in the low- or high - resolution mode , respectively .( 80mm,18mm)figure3.eps ( 80mm,18mm)figure4.eps since the usual fmos observations are carried out using an nodding pattern of the telescope , simple sky subtractions can be performed using two different sky images : and ( where denotes the image taken at position of the -th pair ) . for irs2 , these sky - subtracted frames are calculated by the ramp fitting algorithm applied to the sky - subtracted sub - frames .when the brightness of the oh airglow varies monotonically , most of it can be canceled by merging these two images according to .the weight is chosen to make the sum of the absolute count a minimum within the range .figure [ fig : initial_subtraction ] shows a pair of sky - subtracted images and the merged one . typically , the weight is equal to about 0.5 .( 50mm,50mm)figure5a.eps ( 50mm,50mm)figure5b.eps ( 50mm,50mm)figure5c.eps after the initial background subtraction , cross talk is removed by subtracting 0.15% for irs1 and 1% for irs2 from each quadrant .next , the bias difference between the quadrants ( as indicated in figure [ fig : initial_subtraction ] , the lower left quadrant tends to show a higher bias level ) is corrected to make the average over each quadrant equal . after flat fielding using the detector flat image , the registered and temporally prominent bad pixels are rejected , together with the four adjacent pixels by interpolating the surrounding pixels .the processed image is converted into a combined 1d image based on the distortion parameters obtained in the preparatory reduction process .however , only one line of each spectrum ( 9 pixels in width ) is extracted instead of summing the 9 lines .one can thereby make a set of 9 combined 1d images each of which consists of a different part of each spectrum ( cf .figure [ fig : residual_subtraction ] ) .in other words , the psf of a fiber is divided into 9 pieces , only one of which is used in a combined 1d image .the flexure or temperature change in the spectrograph causes a small difference between the dome - flat and the scientific images along the vertical direction in position . 
this difference is corrected using the vertical position of the spectra of bright stars . in this way , the counts of the residual sky becomes smooth along the columns after the relative throughput correction of the fibers .the residual airglow lines are fitted and subtracted in these images , and then the relative throughput difference is multiplied to restore the noise level back to the original state .the residual subtracted images are recombined to form an image in which the psf of the fiber is determined .medium - level bad pixels having smaller size than the psf of a fiber are replaced at the end of this process .( 50mm,50mm)figure6a.eps ( 50mm,50mm)figure6b.eps ( 50mm,50mm)figure6c.eps there are two ways to observe scientific targets with the nodding pattern : `` normal beam switching '' ( nbs ) , in which all targets are observed at position ( all the fibers are supposed to observe `` blank '' sky at position ) , and `` cross beam switching '' ( cbs ) , in which less than half of the fibers are allocated to the targets at position while the others are at position . in the cbs observation mode , the same targets appear in both images collected in positions or . however , the target spectra in position are merged to minimize the absolute flux of the residual airglow lines in the initial background subtraction process .thus the same reduction process has to be applied to the images in position , replacing the negative spectra of position with the corresponding parts of the position image ( as shown in figure [ fig : merge_pair ] ) . the merged images ( or all of the position images in the nbs mode )are then combined into one averaged image .finally , the combined image is divided into a set of 9 combined 1d images again , in order to perform a fine subtraction of the residual background .the recomposed image also goes through a bad pixel rejection process .( 50mm,50mm)figure7a.eps ( 50mm,50mm)figure7b.eps ( 50mm,50mm)figure7c.eps although the correction for spectral distortions was performed to straighten the images of the airglow masks ( as in figure [ fig : spectral_distortion ] ) , there remain small differences among the throughput patterns of the spectra because of the local shape errors of the mask elements or because of the presence of dust on the mask mirror .these differences are corrected by dividing the averaged image by a `` relative '' dome - flat image in which the common spectral features have been removed by normalizing the count along lines and columns ( cf .figure [ fig : relative_domeflat ] ) . after correcting the relative differences of the throughput patterns among the spectra , the corresponding spectra from positions ( positive ) and ( negative ) taken in the cbs observation mode are ready to be combined .the negative spectra in the image are then inverted and rearranged so that they can be combined with the corresponding positive spectra ( as shown in figure [ fig : cbs_combine ] ) .( 80mm,18mm)figure8.eps ( 50mm,50mm)figure9a.eps ( 50mm,50mm)figure9b.eps ( 50mm,50mm)figure9c.eps in the process of fitting and subtract the residual background , some fraction of the object flux can be subtracted . 
the best way to retainthe object flux is to mask the objects during the fit .faint objects to be masked are therefore selected by a human eye on the combined image and the reduction process is repeated with these masks applied .the mask density should not be too high to make the residual background subtraction work well : the maximum density allowed is roughly 75% of the image . before the mask edge correction process , the distribution of noise in an image is truncated poisson distribution caused by the bad pixel rejection process in three subsequent stages .although the noise level in an image is almost homogeneous at this stage , the noise level map becomes complicated during the throughput correction and cbs combine . to estimate the noise level of each pixel, a frame is defined that consists of the squared noise level measured along columns just before the mask edge correction . here, the noise level is measured by a 3 clipping algorithm iterated ten times for each column , while some extra contribution proportional to the count is added to the clipped pixels .this square - noise frame is divided by the squared image of the relative dome flat , and reduced to half wherever a pair of spectra is averaged during the cbs combine .note that the noise level of the bright objects is probably underestimated because the systematic uncertainty ( i.e. sub - pixel shifts of the spectra in raw images and tiny variation of the throughput pattern caused by the instrument status such as temperature ) is the major contribution for them .to calibrate the flux of the scientific targets , at least one bright ( ) star in each science frames is needed as a spectral reference , because it must have almost the same atmospheric absorption feature as that for the scientific targets in the same field of view .all the spectra are divided by the reference spectrum , and then multiplied by the expected spectrum of the reference star whose flux and spectral type are known or can be determined by the observed values in two different wavelength regions . if the cataloged or estimated spectrum of the reference star is correct , all the observed spectra will then be calibrated accurately . in the next subsection , the method to estimate the reference star spectrum is described . since the flux and slope of the spectrum of a reference star are determined from the measured counts in an image, one needs the slope - removed template spectra including the intrinsic absorption features of the star .we analyzed 128 stellar spectra in the irtf spectral library split into seven groups : f0-f9iv / v ( 19 objects ) , g0-g8iv / v ( 14 objects ) , k0-k7iv / v ( 10 objects ) , f0-f9i / ii / iii ( 21 objects ) , g0-g9i / ii / iii ( 31 objects ) , k0-k3i / ii / iii ( 17 objects ) , and k4-k7i / ii / iii ( 16 objects ) . in each group, the spectra were averaged to improve the s / n ratio and divided by the results of the linear fitting , so that slope - removed template spectra of seven different types were prepared ( cf . figure [ fig : irtflib ] ) . 
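the construction of the slope - removed templates described above can be sketched as follows ; fitting the continuum in log - log space is an assumption about the stripped slope definition , and the function is illustrative rather than the code actually used .

import numpy as np

def slope_removed_template(wave, spectra):
    """average the spectra of one spectral-type group to improve the s/n and
    divide out a straight-line continuum fit, leaving only the intrinsic
    absorption features; the fit is done here in log-log space (assumption)."""
    mean_spec = np.mean(np.asarray(spectra, dtype=float), axis=0)
    logw, logf = np.log10(wave), np.log10(mean_spec)
    slope, intercept = np.polyfit(logw, logf, 1)
    continuum = 10.0 ** (slope * logw + intercept)
    return mean_spec / continuum, slope      # template and its measured slope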
here ,only f , g , and k type stars were used to make the slope - removed template spectra , because 1 ) their slope - removed spectra become roughly straight in - plot , 2 ) these stars are quite popular and easy to select in the target field , and 3 ) a and earlier type spectra are neither available in the irtf spectral library , nor in other database with similar qualities .next , the correlations between the spectral types and the slopes of the spectra have to be established .figure [ fig : type_slope ] shows the distribution of measured value of 128 stellar spectra from figure [ fig : irtflib ] .the stellar types are numbered from 0 ( f0 ) to 29 ( k9 ) while the slopes are defined by in .the correlations between the spectral types and the slopes are determined by second - order polynomial fits to the distributions of stellar types iii and v : as a result , any spectrum from f0 to k9 can be synthesized by multiplying a linear spectrum having a defined slope with the intrinsic absorption spectrum from the interpolation of the nearest two slope - removed template spectra .( 80mm,80mm)figure10a.ps ( 80mm,80mm)figure10b.ps ( 80mm,80mm)figure11.ps the first step in the calibration process is the relative throughput correction of fibers in which the averaged throughput of position and is used for the merged spectra in the cbs combine process . the next step is to remap the pixels with an increment of 5 / pixel ( 1.25 / pixel in the high - resolution mode ) based on the correlation between the observed wavelength and the pixels determined in the preparatory reduction process . in this remapping process , the observed wavelengths are multiplied by the count of each pixel in order to convert the value from photon count to .next , the converted count is divided by the atmospheric transmittance function of the airmass present during the observation , so as to roughly correct for the effects of atmospheric absorption . after this correction, the flux of the reference star is estimated from the converted count at around 1.31 m , assuming that the total system efficiency under good seeing condition is 2.5% including losses at the entrance of the fibers .the slope of the spectrum is then measured with a fixed efficiency ratio between 1.21 and 1.55 m .( the value of this ratio will be confirmed in the check process , along with the total system efficiency of 2.5% . ) finally , the observed scientific spectra are divided by the reference spectrum and multiplied by the stellar spectra from the measured flux and slope . here, the type v and iii absorption templates are adopted as the reference spectra , respectively bluer than g5 and redder than k1 ( cf .figure [ fig : type_slope ] ) .as a consequence , two of the intrinsic absorption templates of fv , gv , k1iii , and k4iii ( in figure [ fig : irtflib ] ) are used to interpolate the absorption of the reference spectrum . if the stellar type of the reference star is known , the corresponding synthesized spectrum is used instead of this predicted spectrum . 
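the synthesis of the reference - star spectrum from its measured flux and slope , as described above , can be sketched as follows ; the type - slope polynomial coefficients , the power - law form of the continuum and the template dictionary are placeholders , not the fits actually derived from the irtf library .

import numpy as np

def synthesize_reference(wave, flux_131, slope, templates, type_slope_poly):
    """reference spectrum = continuum with the measured slope, anchored at the
    flux near 1.31 micron, times the slope-removed absorption template
    interpolated between the two nearest spectral types (0 = f0 ... 29 = k9)."""
    sp_type = float(np.clip(np.polyval(type_slope_poly, slope), 0.0, 29.0))
    continuum = flux_131 * (wave / 1.31) ** slope        # assumed power-law form
    keys = sorted(templates)                             # numeric type of each template
    lower = [k for k in keys if k <= sp_type]
    upper = [k for k in keys if k >= sp_type]
    lo = lower[-1] if lower else keys[0]
    hi = upper[0] if upper else keys[-1]
    frac = 0.0 if hi == lo else (sp_type - lo) / (hi - lo)
    absorption = (1.0 - frac) * templates[lo] + frac * templates[hi]
    return continuum * absorption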
a resulting wavelength- and flux - calibrated imageis shown in figure [ fig : calib_image ] .the size of this image is 1800 pixels , the wavelength in m is , the -th spectrum is located between and , and the count is in .the square - noise frame is converted in a similar way except that all factors are multiplied twice .the 1d spectrum of each object is extracted from this image with a user - defined mask of 9 pixels , together with the square - noise frame .an example of the final 1d spectrum is shown in figure [ fig:1d_spectrum ] .( 80mm,80mm)figure12a.eps ( 80mm,80mm)figure12b.eps ( 80mm,80mm)figure13.ps the results are checked by comparing the resulting flux in the and bands with the photometric data in the catalog .if all the factors contributing to the system efficiency are normal , the flux should be consistent with the catalog value .the accuracy of the efficiency ratio between 1.21 and 1.55 m can also be confirmed by these diagrams for most cases .figure [ fig : mag_fnu ] shows an example of such a comparison .however , weather conditions , telescope and instrument focus , extended ( non - stellar ) morphology of targets , and position accuracy of the catalog may cause the lower observed flux of some targets than that expected from the catalog . under sufficiently good conditions, the observed flux should match the catalog magnitude as shown in the thick line in figure [ fig : mag_fnu ] with some downward scatter due to small allocation error of the fibers .also , if the estimated efficiency ratio between 1.21 and 1.55 is not correct , the points of and will have small offset in this figure .the reduction process for fmos images includes two special processing steps .one is the segmented processing of the spectra to handle a given part of the psf as a unit , while the other is the automatic modeling of the reference spectrum to calibrate the scientific targets .the segmented processing enable to keep the original 2-dimensional information which has a large effect on bad pixel filtering and detection of faint emission - lines .most of the processes are carried out automatically , but the object mask preparation and the reference star selection require user judgement .the fibre - pac is available from the fmos instrument page of the subaru web site , together with the sample dataset presented in this paper .the base reduction platform is iraf and the reduction scripts and sources are free and open , so that users can check what is happening in each step by sending the commands one at a time .this work was supported by a grant - in - aid for scientific research ( b ) of japan ( 22340044 ) and by a grant - in - aid for the global coe program `` the next generation of physics , spun from universality and emergence '' from the ministry of education , culture , sports , science , and technology ( mext ) of japan .iraf is distributed by the national optical astronomy observatories , which are operated by the association of universities for research in astronomy , inc ., under cooperative agreement with the national science foundation .fixsen , d. j. , offenberg , j. d. , hanisch , r. j. , et al .2000 , , 112 , 1350 iwamuro , f. , motohara , k. , maihara , t. , hata , r. , & harashima , t. 2001 , , 53 , 355 kimura , m. , et al .2010 , , 62 , 1135 pence , w. 1999 , astronomical data analysis software and systems viii , 172 , 487 rayner , j. t. , cushing , m. c. , & vacca , w. d. 2009 , , 185 , 289
the fibre - pac ( fmos image - based reduction package ) is an iraf - based reduction tool for the fiber multi - object spectrograph ( fmos ) of the subaru telescope . a number of special techniques are needed to reduce fmos images , because each image contains about 200 separate spectra contaminated by airglow emission lines that vary both spatially and in time , and because the airglow masks imprint complicated throughput patterns on the spectra . despite this complexity , almost all of the reduction steps are carried out automatically by scripts in text format , which makes it easy to check the commands one at a time . the package delivers wavelength- and flux - calibrated images together with their noise maps .
a linear molecular motor is either a macromolecule or macromolecular complex that moves along a filamentous track . in spite of its noisy stepping kinetics , on the average ,it moves in a directed manner .its mechanical work is fuelled by the input energy which , for many motors , is chemical energy .the distributions of the dwell times of a motor at discrete positions on its track as well as the duration of many complex motor - driven intracellular processes have been calculated using the methods of first - passage times , a well - known formalism in non - equilibrium statistical mechanics .experimentally measured distributions of dwell times of a motor can be utilized to extract useful information on its kinetic scheme . for motors which can step both forward and backward on a linear track , four distinct conditional dwell times can be defined ; distributions of these four conditional dwell times have been calculated for some motors . in this paperwe consider a specific molecular motor called dna polymerase ( dnap ) and argue that its movements on the track is characterized by _nine _ distinct conditional dwell times because of the coupling of its dual roles during its key biological function .we define these nine conditional dwell times and calculate their distributions analytically treating each of these as an appropriate first - passage time . as a byproduct of this exercise , we obtain an important result , namely , its mean velocity , that characterizes one of its average properties ; the theoretically predicted behaviour is consistent with the corresponding experimental observations reported earlier in the literature .the distributions of the nine conditional dwell times are new predictions which , we believe , can be tested by single - molecule experiments . deoxyribonucleic acid ( dna ) is a polynucleotide , i.e. , a linear heteropolymer whose monomeric subunits are drawn from a pool of four different species of nucleotides , namely , a ( adenine ) , t ( thymine ) , c ( cytosine ) and g ( guanine ) . in this heteropolymerthe nucleotides are linked by phosphodiester bonds .the genetic message is chemically encoded in the sequence of the nucleotide species .dna polymerase ( dnap ) , the enzyme that replicates dna , carries out a template - directed polymerization . during this processes ,repetitive cycles of nucleotide selection and phosphodiester bond formation is performed to polymerize a dna strand . in every elongation cycle ,hydrolysis of the substrate molecule supplies sufficient amount of energy to the dnap for performing its function .therefore , dnaps are also regarded as molecular motor that transduce chemical energy into mechanical work while translocating step - by - step on the template dna strand that serves as a track for these motors . in an _ in - vitro _ experiment , wuite et al . applied a tension on a ssdna .the two ends of this dna fragment were connected to two dielectric beads ; one end was held by micro - pipette , while the other end , trapped optically by a laser beam , was pulled .this dna fragment also served as a template for the replication process carried out by a dnap .replication converted the ssdna into a dsdna .the average rate of replication was found to vary _ nonmonotonically _ with the tension applied on the template strand .similar results were obtained also in the experiments carried out by maier et al . 
, where magnetic tweezers were used to apply the tension on template dna .the observed nonmonotonic variation of the average rate of replication was explained as a consequence of the difference in the force - extension curves of ssdna and dsdna .fidelity of replication carried out by a dnap is normally very high .it achieves such high accuracy by discriminating between the correct and incorrect nucleotides by _kinetic proofreading_. the mechanism of kinetic proofreading enables the dnap to reduce the error ratio to values far lower than the thermodynamically allowed value of , where is the free energy difference of enzyme substrate complex for correct and incorrect nucleotides .thus , dnap is capable of correcting most of its own error during the ongoing replication process itself .a dnap performs its normal function as a polymerase by catalyzing the elongation of a new ssdna molecule using another ssdna as a template .however , upon committing a misincorporation of a nucleotide in the elongating dna , the dnap can detect its own error and transfer the nascent dna to another site where it catalyses excision of the wrongly incorporated nucleotide .the distinct sites , where the polymerisation ( pol ) and exonuclease ( exo ) reactions are catalyzed , are separated by 3 - 4 nm on the same dnap .the nascent dna is transferred to the pol site from the exo site after the wrong nucleotide is cleaved from its tip by the dnap .thus , the transfer of the dna between the pol and exo sites couples the polymerase and exonuclease activities of the dnap . in the next sectionwe develop a microscopic model for the replication of a ssdna template that is subjected to externally applied tension , a situation that is very similar to the _ in - vitro _ experiment reported in refs .the rates of both pol and exo activities of the dnap enter into the expression that we derive for the average rate of elongation of the dna .the -dependence of this rate is consistent with the experimental observations reported in .we then define 9 distinct conditional dwell times of the dnap and identifying each of these with an appropriate first - passage time , we calculate their distributions analytically .we believe that experimental measurements of these distributions are likely to elucidate the nature of the interplay of the pol and exo activities of dnap .the nucleotides on the template dna are labelled sequentially by the integer index ( ) which also serves to indicate the position of the dnap on its track .the chemical ( or conformtional ) state of the dnap is denoted by a discrete variable ( ) .the state of the dnap is during replication is described by the pair .the kinetic scheme used for our model is adapted from that proposed originally by patel et al . andsubsequently utilized by various other groups .the kinetic scheme of our model is shown in figure ( [ fig1 ] ) , where the four different values , , and of are the allowed chemical states in the polymerase - active mode of the enzyme , while in chemical state the exonuclease catalytic site is activated .the structure of dna polymerase resembles a `` cupped right hand '' of a human , where its sub domains are recognized as palm , thumb and finger sub domains .template dna enters from the finger sub - domain and takes exit from thumb sub - domain . the catalytic site where the binding occursis located between finger and palm domain .transitions between polymerase activated kinetic states of the enzyme ( i.e. 
, chemical states 1,2 , 3 and 4 ) can be summarized as where and represent the closed and open finger configuration dnap , respectively , while denotes the length of the nascent dna strand .let us start with the state , labelled by , in which the finger domain of dnap is open and the dnap is located at the site on its template .now a substrate molecule ( dntp ) binds with the dnap and resulting state is labeled by `` 2 '' .the transition take place with rate , while corresponding reverse transition occurs with rate .binding energy of dntp switches the open finger configuration of dnap into closed finger configuration and the corresponding transition 2 3 take place at the rate .the reverse transition occurs at the rate .this new closed finger configuration of dnap catalyzes the formation of phosphodiester bond between dntp and nascent dna strand thereby elongating the nascent dna from length to ; this process is represented by the transition 3 4 ( ) that occurs at the rate ( being the rate of the reverse transition ) .finally , the transition completes one elongation cycle ; the corresponding rates of the forward and reverse transitions are and , respectively .the transition captures more than one sub - step which includes opening of the finger domain , release of and the forward movement of the dnap to the next site on the template . immediately after completing one elongation cycle ,the dnap is normally ready to bind with a new substrate molecule and initiate the next elongation cycle .however , if a wrong nucleotide is incorporated in an elongation cycle , the dnap is likely to transfer the nascent dna from the pol site to the exo site .this switching from pol to exo activity is represented by the transition which occurs at the rate ; the reverse transition , without cleavage , takes place at the rate . in the exo modethe cleavage of the last incorporated nucleotide , at the rate , effectively alters the position of the dnap from to .external load force tilts the free energies and alters the barriers for the forward and reverse transitions .but , not all the rate constants change significantly with the tension applied on the template .we hypothesize that only the following transitions are affected by the tension : ( i ) 3 4 , i.e. , the polymerization step , where new dntp subunit is incorporated into nascent dna chain and a single stranded nucleotide is converted into a double stranded dna .( ii ) 1 5 i.e. , the transfer of the nascent dna from the pol site to the exo site of the dnap .these two catalytic sites are separated by 3.5 nm and a transfer of the nascent dna between them includes major change in the dnap conformation that involves a hairpin .moreover , polymerase to exonuclease switching causes local melting of the dsdna .suppose is the _ change _ in the free energy barrier so that where is the boltzmann constant , is the absolute temperature and , , , are the values of the corresponding rate constants in the absence of external force . the symbols and in eqn.([eq - fishkolo ] ) are the load - sharing parameters .note that detailed balance is satisfied by our choice of the force - dependence of the rate constants when it is satisfied by the corresponding rates in the absence of the force .the expressions for and are derived in appendix a by relating these to which is the change in the stretching free energy when a ssdna is converted into dsdna . as we show in the next section , following force dependence of and , together with , i.e. 
, , shows a good qualitative agreement with the experimental data .in this subsection we derive the force - velocity curve for our model dnap motor and compare it with those reported earlier in the literature .let be the probability of finding dnap in chemical state , at the position on its track , at time .the probability to finding the dna polymerase in chemical state , irrespective of its position , is where is the total number of nucleotides in template dna strand .normalisation of the probability imposes the condition at all times .the time evolution of the probability is governed by following equations now we solve these equations in steady state and calculate the probability of finding the dna polymerase in chemical state ( ) . expressions for s are given in appendix b. now we define the average rate of polymerization and the average rate of excision as therefore , the average velocity of the dnap on its track is in figure ( [ force1 ] ) the average velocity of the dnap is plotted against the tension applied on dna track .rate constants used for this plot are collected from the literature and listed in table [ tab - rateconst ] .+ .numerical values of the rate constants used for graphical plotting of some typical curves obtained from the analytical expressions derived in this paper .[ cols="<,<",options="header " , ] + because of the -dependence of the form assumed in ( [ eq - fdeprate ] ) , at lower tension transition 2 3 is rate limiting while at higer values of tension 3 4 becomes the rate limiting step .frequent switching cause the significant increase in the exonuclease cleaving at higher forces . observed trend of variation of the average velocity is the direct consequence of the nonmonotonic behavior of the , shown in figure ( [ phie ] ) .the average velocity of a dnap and its dependence on the tension applied on the corresponding template does not provide any information on the intrinsic fluctuations in both the pol and exo activities of these machines .probing fluctuations in the kinetics of molecular machines have become possible because of the recent advances in single molecule imaging , manipulation and enzymology . in this sectionwe investigate theoretically how the fluctuations in the pol and exo activities of a dnap would vary with the tension applied on the template dna . for this purposewe use the same kinetic model introduced in section [ sec - introduction ] , that we have used in subsection [ sec - forcevel ] for calculating the average properties of dnap .the variable chosen to characterize the fluctuations in replication process is the time of dwell of dnap at a single nucleotide on the template , which is nothing but the effective duration of its stay in that location . while moving on the one dimensional template strand three different mechanical stepsare taken by dnap , which are + ( 1 ) forward step in the pol mode : . + ( 2 ) backward step in the pol mode : .+ ( 3 ) backward step ( caused by cleavage ) in the exo mode : .if a molecular motor takes more than one type of mechanical step then the fluctuations in the durations of its dwell at different locations can not be characterized by a single distribution ; instead , distributions of more than one type of conditional dwell times can be defined .so , in the context of our model of dnap , three different types of mechanical step would generate nine different distribution of conditional dwell times .we denote the forward , backward and cleavage steps are by the symbols , and , respectively . 
is the conditional dwell time of the dna polymerase when step m is followed by n , where the three allowed values of each of the subscripts and are . for the convenience of calculation of the distributions , first we assume that the dnap is already at the site on the template strand and that the rate constants for all the transitions leading to this special site are equated to zero .in other words , + ( 1 ) only for the transition ( and not for any ) , + ( 2 ) only for ( and not for any ) , + ( 3 ) only for ( and not for any ) . +now appropriate initial conditions will ensure the type of previous step taken by dnap .+ if is the probability of finding the dna polymerase in chemical state at site at time , then time evolution of these probabilities are governed by following master equation . these equation can be re - expressed in the following matrix form . here * p(t ) * is a column matrix , whose elements are , , , and .and now introducing the laplace transform of the probability of kinetic states by , solution of equation ( [ main1 ] ) in laplace space is , here is the vector of the probability of individual chemical state in laplace space and is the column vector of initial probabilities .+ determinant of matrix is a fifth order polynomial full expressions for , , , , and in terms of the primary rate constants are given in appendix c. following set of initial conditions guarantees that previous step taken by dna polymerase is a forward step . so three different distribution of dwell time , where first step is forward , are defined as follows : } \label{sipp}\ ] ] } \label{sipn}\ ] ] } \label{sipx}\ ] ] by applying the initial condition ( [ initial1 ] ) in equation ( [ main2 ] ) , we get mathematical expressions for , , , , , , , , , and are given in appendix d. 
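the conditional dwell - time densities defined above can also be cross - checked numerically , without the closed - form inverse laplace transforms , by integrating the five - state master equation with the exit channels treated as absorbing and reading off the exit fluxes . the sketch below assumes the layout of the rate matrix inferred from the kinetic scheme of figure [ fig1 ] and uses placeholder rate constants , not the values of table [ tab - rateconst ] .

import numpy as np
from scipy.linalg import expm

# placeholder rate constants (per second); substitute the measured values
k1, km1 = 90.0, 25.0        # dntp binding / unbinding
k2, km2 = 660.0, 1600.0     # finger closing / opening
k3, km3 = 360.0, 260.0      # phosphodiester bond formation / reversal
k4, km4 = 250.0, 1.0        # forward step to the next site / pol backstep
kx, kp, kexo = 0.2, 700.0, 900.0   # pol->exo transfer, exo->pol return, cleavage

# generator for the five chemical states at one template site; the exits
# (k4 from state 4, k-4 from state 1, kexo from state 5) enter only as losses,
# so the remaining dynamics is an absorbing (first-passage) problem.
M = np.array([
    [-(k1 + kx + km4), km1,          0.0,          0.0,         kp          ],
    [  k1,            -(km1 + k2),   km2,          0.0,         0.0         ],
    [  0.0,            k2,          -(km2 + k3),   km3,         0.0         ],
    [  0.0,            0.0,          k3,          -(km3 + k4),  0.0         ],
    [  kx,             0.0,          0.0,          0.0,        -(kp + kexo) ],
])

def dwell_densities(p0, times):
    """joint densities for leaving the site by a forward step, a pol backstep
    and an exo cleavage; p0 encodes the previous mechanical step (state 1 after
    a forward step, state 4 after a pol backstep, state 5 after a cleavage --
    an inference from the kinetic scheme, not stated explicitly above)."""
    rows = []
    for t in times:
        p = expm(M * t) @ p0
        rows.append((k4 * p[3], km4 * p[0], kexo * p[4]))
    return np.array(rows)

t = np.linspace(0.0, 0.2, 400)
psi_plus = dwell_densities(np.array([1.0, 0.0, 0.0, 0.0, 0.0]), t)   # previous step forward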
+ by inserting the inverse laplace transforms of the expressions ( [ flpp ] ) , ( [ flpn ] ) and ( [ flpx ] ) into the equations ( [ sipp ] ) , ( [ sipn ] ) and ( [ sipx ] ) , respectively , we get ^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{(a_{0}-a_{1}\omega_{2})k_{4}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{(a_{0}-a_{1}\omega_{3})k_{4}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{(a_{0}-a_{1}\omega_{4})k_{4}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{(a_{0}-a_{1}\omega_{5})k_{4}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] ^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{(b_{0}-b_{1}\omega_{2}+b_{2}\omega_{2}^{2}-b_{3}\omega_{2}^{3}+b_{4}\omega_{2}^{4})k_{-4}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{(b_{0}-b_{1}\omega_{3}+b_{2}\omega_{3}^{2}-b_{3}\omega_{3}^{3}+b_{4}\omega_{3}^{4})k_{-4}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{(b_{0}-b_{1}\omega_{4}+b_{2}\omega_{4}^{2}-b_{3}\omega_{4}^{3}+b_{4}\omega_{4}^{4})k_{-4}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{(b_{0}-b_{1}\omega_{5}+b_{2}\omega_{5}^{2}-b_{3}\omega_{5}^{3}+b_{4}\omega_{5}^{4})k_{-4}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] ^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{(c_{0}-c_{1}\omega_{2}+c_{2}\omega_{2}^{2}-c_{3}\omega_{2}^{3})k_{exo}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{(c_{0}-c_{1}\omega_{3}+c_{2}\omega_{3}^{2}-c_{3}\omega_{3}^{3})k_{exo}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{(c_{0}-c_{1}\omega_{4}+c_{2}\omega_{4}^{2}-c_{3}\omega_{4}^{3})k_{exo}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{(c_{0}-c_{1}\omega_{5}+c_{2}\omega_{5}^{2}-c_{3}\omega_{5}^{3})k_{exo}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] where , , , and are the roots of following equation the explicit expressions of and in terms of the primary rate constants of the kinetic model are given in appendix c. 
the coupled nature of the pol and exo activities is revealed by the mixing of the corresponding rate constants in the expressions of and .following initial conditions ensures that dna polymerase has reached to site by making a backward step : so three different distributions of dwell time , where first step is backward , are defined as follows : } \label{fsnp}\ ] ] } \label{fsnn}\ ] ] } \label{fsnx}\ ] ] after applying the above initial condition in equation ( [ main2 ] ) , we get full expressions for , , , , , and in terms of the primary rate constants of the kinetic model are given in appendix d. inverse transform of equation ( [ flnp ] ) , ( [ flnn ] ) and ( [ flnx ] ) gives the mathematical expression for , and .substituting the inverse laplace transforms of ( [ flnp ] ) , ( [ flnn ] ) and ( [ flnx ] ) into the equations ( [ fsnp ] ) , ( [ fsnn ] ) and ( [ fsnx ] ) respectively , we get the following distributions of the conditional dwell time : ^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{(d_{0}-d_{1}\omega_{2}+d_{2}\omega_{2}^{2}-d_{3}\omega_{2}^{3}+d_{4}\omega_{2}^{4})k_{4}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{(d_{0}-d_{1}\omega_{3}+d_{2}\omega_{3}^{2}-d_{3}\omega_{3}^{3}+d_{4}\omega_{3}^{4})k_{4}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{(d_{0}-d_{1}\omega_{4}+d_{2}\omega_{4}^{2}-d_{3}\omega_{4}^{3}+d_{4}\omega_{4}^{4})k_{4}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{(d_{0}-d_{1}\omega_{5}+d_{2}\omega_{5}^{2}-d_{3}\omega_{5}^{3}+d_{4}\omega_{5}^{4})k_{4}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] ^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{k_{x}k_{-1}k_{-2}k_{-3}k_{-4}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{k_{x}k_{-1}k_{-2}k_{-3}k_{-4}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{k_{x}k_{-1}k_{-2}k_{-3}k_{-4}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{k_{x}k_{-1}k_{-2}k_{-3}k_{-4}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] ^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{(e_{0}-e_{1}\omega_{2})k_{exo}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{(e_{0}-e_{1}\omega_{3})k_{exo}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{(e_{0}-e_{1}\omega_{4})k_{exo}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{(e_{0}-e_{1}\omega_{5})k_{exo}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] where , , , and are the roots of the equation ( [ eq - 
omega5 ] ) .now we consider the case where dna polymerase has arrived at site by making an exonuclease cleavage .the initial condition ensures that previous mechanical step is an exonuclease cleaving .now we define following distributions of conditional dwell time } \label{fsxp}\ ] ] } \label{fsxn}\ ] ] } \label{fsxx}\ ] ] after applying the above initial condition in equation [ main2 ] , we get the expressions for , , , , , , , and are given in appendix d. the values of , and are obtained from the inverse laplace transform of the ( [ flxp ] ) , ( [ flxn ] ) and ( [ flxn ] ) .after inserting the values of , and in equations ( [ fsxp ] ) , ( [ fsxn ] ) and ( [ fsxn ] ) , we get the exact analytical expression for , and .^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{(f_{0}-f_{1}\omega_{2}+f_{2}\omega_{2}^{2}-f_{3}\omega_{2}^{3}+f_{4}\omega_{2}^{4})k_{exo}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{(f_{0}-f_{1}\omega_{3}+f_{2}\omega_{3}^{2}-f_{3}\omega_{3}^{3}+f_{4}\omega_{3}^{4})k_{exo}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{(f_{0}-f_{1}\omega_{4}+f_{2}\omega_{4}^{2}-f_{3}\omega_{4}^{3}+f_{4}\omega_{4}^{4})k_{exo}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{(f_{0}-f_{1}\omega_{5}+f_{2}\omega_{5}^{2}-f_{3}\omega_{5}^{3}+f_{4}\omega_{5}^{4})k_{exo}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] ^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{k_{1}k_{2}k_{3}k_{4}k_{p}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{k_{1}k_{2}k_{3}k_{4}k_{p}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{k_{1}k_{2}k_{3}k_{4}k_{p}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{k_{1}k_{2}k_{3}k_{4}k_{p}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] ^{-\omega_{1}t } \nonumber \\ & + & \biggl[\dfrac{(g_{0}-g_{1}\omega_{2}+g_{2}\omega_{2}^{2}-g_{3}\omega_{2}^{3})k_{-4}}{(\omega_{2}-\omega_{1})(\omega_{2}-\omega_{3})(\omega_{2}-\omega_{4})(\omega_{2}-\omega_{5})}\biggr]e^{-\omega_{2}t } \nonumber \\ & + & \biggl[\dfrac{(g_{0}-g_{1}\omega_{3}+g_{2}\omega_{3}^{2}-g_{3}\omega_{3}^{3})k_{-4}}{(\omega_{3}-\omega_{1})(\omega_{3}-\omega_{2})(\omega_{3}-\omega_{4})(\omega_{3}-\omega_{5})}\biggr]e^{-\omega_{3}t } \nonumber \\ & + & \biggl[\dfrac{(g_{0}-g_{1}\omega_{4}+g_{2}\omega_{4}^{2}-g_{3}\omega_{4}^{3})k_{-4}}{(\omega_{4}-\omega_{1})(\omega_{4}-\omega_{2})(\omega_{4}-\omega_{3})(\omega_{4}-\omega_{5})}\biggr]e^{-\omega_{4}t } \nonumber \\ & + & \biggl[\dfrac{(g_{0}-g_{1}\omega_{5}+g_{2}\omega_{5}^{2}-g_{3}\omega_{5}^{3})k_{-4}}{(\omega_{5}-\omega_{1})(\omega_{5}-\omega_{2})(\omega_{5}-\omega_{3})(\omega_{5}-\omega_{4})}\biggr]e^{-\omega_{5}t } \end{aligned}\ ] ] where , , , and are the roots of the equation ( [ eq - omega5 ] ) .the distributions of the conditional dwell times , except , are plotted for a 
few typical values of the parameters in figs.[f++ ] and [ fxx ] . since is independent of the tension ,it has not been drawn graphically .we have also presented our numerical data , obtained from direct computer simulation , for the distributions plotted in fig.[f++ ] .each of these distributions is a sum of several exponentials .therefore , in general , these distributions are expected to peak at a nonzero value of time . however , some of the distributions in fig.[f++ ] and [ fxx ] appear as a single exponential .this single - exponential like appearance is an artefact of the parameters chosen for plotting these curves although , in reality , the full distributions are sum of several exponentials .an interesting feature of the distributions plotted in figs.[f++ ] and [ fxx ] is a non - monotonic variation of the probability of the most probable conditional dwell times with increasing ( see , for example , and ) .this trend of variation is a consequence of the nonmonotonic variation of with ( see fig.[phie ] ) .the distributions of the conditional dwell times have been extracted for some motors in the last decade from the data obtained from single - molecule experiments .but , to our knowledge , none of the distributions , , and have been measured experimentally so far specifically for the dnap motor . in this sectionwe first mention a few recently developed single - molecule techniques that probe some aspects of dnap kinetics during replication . in a landmark paper eid reported a single - molecule method for monitoring replication exploiting fluorescently labelled nucleotide monomers .the fluorophores are `` phospholinked '' ( i.e. , linked to the phosphate group of the nucleotide monomer ) . since dnap - catalyzed phosphodiester bond formation releases the fluorophore from the nucleotide , the temporal sequence of the color of the fluorescence provides the sequence of the nucleotides that are incorporated in the elongating dna .christian et al. have developed a single - molecule technique for monitoring replication by a dnap with base - pair resolution .this method is based on frster resonance energy transfer ( fret ) .use of this technique also makes it possible to discriminate between the polymerization activity and exonuclease activity of the dnap .it is likely that in near future appropriate adaptations of these or some combination of force - based and fluorescence - based single molecule techniques may achieve sufficiently high resolution required for measuring the nine distributions of conditional dwell times introduced in this paper .next , we propose a reduced description of the stochastic pause - and - translocation of the dnap in terms of fewer conditional dwell times which , as we explain below , may be measurable with the currently available single molecule techniques because these do not distinguish between chemical and mechanical backward steppings .let us define and as the distributions of conditional dwell times in which , regardless of the nature of the next step , the step taken by the dnap is a forward polymerase step , backward polymerase step and backward exonuclease step , respectively . for a given set of initial conditions , overall probability to leave the site should be unity .therefore , following conditions must be satisfied : }dt=\int_{0}^{\infty } \psi_{+}(t)dt=1\ ] ] }dt=\int_{0}^{\infty } \psi_{-}(t)dt=1\ ] ] }dt=\int_{0}^{\infty } \psi_{x}(t)dt=1\ ] ] i.e. , , and are probability distributions normalized to unity . 
therefore , the overall distribution of dwell time , irrespective of the type of steps taken by dnap , is the weighted sum where , and denote the probabilities of taking forward polymerase step , backward polymerase step and backward exonuclease step by a dnap , respectively .the explicit expressions for , and are given by where , in terms of the rate constants , are given in appendix b. + we now recast eqn ( [ eq - wtuncond ] ) in a form that would facilitate direct contact with experiments that are feasible with the currently available techniques .writing we identify the four new distributions of conditional dwell times to be \label{eq - xi+-}\ ] ] +q_{x}[\psi_{x-}(t)+\psi_{xx}(t ) ] \label{eq - xi--}\ ] ] where the symbols `` '' and `` '' denote forward and backward movements of the dnap irrespective of the mode of movement .for example , the dnap can move backward either by polymerase or exonuclease activity ; however , the newly defined conditional dwell times does not discriminate between these two modes of backward movement . for the purpose of comparison with experimental data , expressions( [ eq - q+]),([eq - q- ] ) and ( [ eq - qx ] ) for and and the expressions derived in section [ sec - results ] for the conditional dwell times should be substituted into the eqns.([eq - xi++])-([eq - xi ] ) . for a dnap with the data set given in table 1 ,the probabilities for a polymerase - dependent forward step ( ) , polymerase - dependent backward step ( ) and exonuclease activity ( ) are 0.9736 , .0261 and .0003 , respectively .thus , under normal circumstances back - stepping and exonuclease events are very unlikely . moreover ,two consecutive _ exonuclease _ steps would be extremely rare .however , the frequency of _ exonuclease _ activity of the dnap can be increased by using mutants of the same dnap . by increasing the concentration of pyrophosphate far above the equilibrium concentration ,back - stepping events can be made more frequent .besides , transfer - deficient mutants and exonuclease - deficient mutants can be used to test the effects of variation of the corresponding rate constants on the various dwell time distributions .dna replication is carried out by dnap which operates as a molecular motor utilizing the template dna strand as its track . in this paperwe have presented a theoretical model for dna replication that allows systematic investigation of the pol and exo activities as well as their coupling .more specifically , the situation considered here mimics an _ in - vitro _ experiment where a tension is applied on the template strand throughout the replication process .we have calculated the effect of the tension on the average speed of replication , capturing the effects of both the pol and exo activities of the same dnap .our theoretical results in section [ sec - fvcurve ] are in good qualitative agreement with the results of single molecule experiments reported in the literature . however , the intrinsic fluctuations in the pol and exo processes contain some additional information which can not be extracted from average properties .it is well known that the fluctuation in the dwell times provides a numerical estimate the number of kinetic states .more specifically , one defines a `` randomness parameter '' where is the dwell time and the symbol indicates average ; provides a lower bound on the number of kinetic states in each mechano - chemical cycle of the motor . 
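a minimal sketch of estimating the randomness parameter from a sample of measured dwell times ; since the defining formula is not reproduced above , the standard definition r = ( <t^2> - <t>^2 ) / <t>^2 is assumed here .

import numpy as np

def randomness_parameter(dwell_times):
    """r = (<t^2> - <t>^2) / <t>^2 ; its inverse gives a lower bound on the
    number of rate-limiting kinetic states per mechano-chemical cycle."""
    t = np.asarray(dwell_times, dtype=float)
    return t.var() / t.mean() ** 2

# sanity check: a single-exponential (one-state) process gives r close to 1
rng = np.random.default_rng(0)
print(randomness_parameter(rng.exponential(scale=0.1, size=100000)))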
moreover ,if is larger than unity in any parameter regime , it would indicate existence of branched pathways .furthermore , _conditional _ dwell times can reveal existence of correlations between individual steps of the mechano - chemical pathways of a molecular motor .besides , hidden substeps may be missed in the noisy data recorded in a single - motor experiment ; the distributions of conditional dwell times are quite useful in detecting such substeps . exact analytical expressions for the distributions of conditional dwell times that we report here may find use in the analysis of the experimental data for extracting these information .although both the pol and exo activities of the dnap have been studied extensively , the distributions of dwell times of dnap have not been measured so far in any single molecule experiment . in this paperwe have also mentioned a few recently developed single - molecule techniques for dnap which , after minor alteration , might be the appropriate tool for measuring the conditional dwell times introduced in this paper .we have also proposed a reduced description of the pause - and - translocation of dnap in terms of the distributions of fewer conditional dwell times which , in principle , can be measured by the currently available single - molecular techniques .we hope our model and results will motivate experiments to study the unexplored stochastic features of the kinetics of one of the most important genetic processes , namely dna replication driven by dnap .understanding this kinetics will throw light on the propagation of life from one generation to the next . + * acknowledgements * : dc thanks berenike maier for useful discussions .we also thank the anonymous referees for constructive criticism and suggestions which helped in significant improvement of the manuscript .this work has been supported by the dr .jagmohan garg chair professorship ( dc ) , j.c .bose national fellowship ( dc ) , department of biotechnology ( dc ) and council of scientific and industrial research ( aks ) .* appendix a * here the parameters with subscripts `` 1 '' and `` 2 '' correspond to ssdna and dsdna , respectively .let ( ) denote the average equilibrium projections of base pair in the direction of the applied force .suppose , ( are the corresponding free energies .then , for a given force , the free energy difference between single base - pair of dsdna and ssdna can be expressed as where the right - hand side can be evaluated if the functions are known . for the freely jointed chain ( fjc ) model of dna ,is established .\biggl(1+\dfrac{f}{k_{i}}\biggl)b_{i}^{max } \label{eq - fjc}\ ] ] where , and are , respectively , the elastic modulus , the persistence length and the average length of a base pair in the absence of any force . inserting the expression ( [ eq - fjc ] ) into the equation ( [ eq - fdiff ] )we numerically compute the free energy difference between single base pair of dsdna and that of ssdna for the given force . in figure [ phie ] we plot against the tension .the numerical values of the parameters that we use for this computation are given in the table [ tab - fdiff ] .we now assume that change in the barrier height that enters into the equation ( [ eq - fdeprate ] ) is equivalent to where is an integer .we would like to emphasize that is the stetching free energy difference between ssdna and dsdna i.e. , between the initial state 3 and final state 4 of the transition for which the barrier height , i.e. 

, the free energy difference between state 3 and the transition state , contains the force - induced extra term .our assumption is similar , in spirit , but not identical to the assumptions made by wuite et al . and maier et al. .the physical meaning of this assumption is that the tension - induced change of the barrier arising from the legnth mismatch between the ssdna and dsdna base pairs is equivalent to times the free energy difference .atomistic explanation of the physical origin of the tension - induced change of the activation energy would require more fine - grained modeling of the local neighborhood of the catalytic site which is beyond the scope of the markov kinetic models of the type developed in this paper . in our numerical calculations we use which is consistent with the best fit values reported in refs. .the parameter value should not be confused with the step size of the dnap which is 1 nucleotide .the polymerase and exonuclease catalytic sites are separated by about 3.5 nm .a dna molecule migrating from the polymerase site to the exonuclease site of dnap would cause local melting of more than one termial base pairs .therefore , based on arguments similar to those used earlier for the rate constant , we now expect . since no further information is available to fix the numerical value of , we use because this choice provides the best fit to the experimenta data .wlc model provides a slightly better quantitative estimate of the force extension curve of dsdna in the range of 0 to 10 pn force .however , given the uncertainties of the other parameters used for plotting our results graphically , the simpler fjc model is good enough .indeed , it produces the non monotonicity of as a function of force ( f ) which , in turn , can be used to estimate the mean rate of elongation as well as the conditonal dwell times .99 howard j 2001 _ mechanics of motor proteins and the cytoskeleton _ , ( sinauer associates , sunderland ) .bustamante c , keller d and oster g 2001 _ acc . chem .res _ * 34 * 412 kolomeisky a b and fisher m e 2007 _ annu .chem . _ * 58 * , 675 chowdhury d 2013 _ phys .rep . _ * 529 * , 1 chowdhury d 2013 _ biophys . j. _ * 104 * , 2331 kolomeisky a b , stukalin e b and popov a a 2005 _ phys . rev .e _ * 71 * , 031902 liao j c , spudich j a , parker d , delp s l 2007 _ proc .sci . _ * 104 * , 3171 linden m and wallin m 2007 _ biophys .j. _ * 92 * , 3804 tsygankov d , linden m , fisher m e 2007 _ phys rev e _ * 75 * , 021909 chemla y r , moffitt j r and bustamante c 2008 _ j. phys . chem .b _ * 112 * , 6025 moffitt j r , chemla y r and bustamante c 2010 _ methods in enzymology _ * 475 * , 221 garai a and chowdhury d 2011 _ epl _ , * 93 * , 58004 sharma a k and chowdhury d 2011 _ phys ._ * 8 * , 026005 kornberg a and baker t , _ dna replication _ , 2nd edn .( w.h . freeman and co. , new york , 1992 ) .sharma a k and chowdhury d 2012 _ biophys .lett . _ * 7 * , 1 wuite g j l , smith s b , young m , keller d and bustamante c 2000 _ nature _ * 400 * 103 maier b , bensimon d and croquette v 2000 _ proc . natlsci . _ * 97 * 12002 goel a , frank - kamenetskii m d , ellenberger t and herschbach d 2001 _ proc .* 98 * , 8485 .goel a , astumian r d and herschbach d 2003 _ proc .* 17 * 9699 andricioaei i , goel a , herschbach d and karplus m 2004 _ biophys . j. _ * 87 * , 1478 rouzina i and bloomfield v a 2001 _ biophys . j. _ * 80 * 882 kunkel t a 2009 _ cold spring harb . 
symp .biol _ , * 74 * , 91 ibarra b , chemla y r , plyasunov s , smith s b , lzaro j m , salas m and bustamante c 2009 _ embo j. _ * 28 * 2794 wong i , patel s s and johnson k a 1991 _ biochemistry _ * 30 * 526 xie p 2007 _ arch .* 1457 * 73 cramer p 2001 _ bioessays _ * 24 * 724 johnson k a 2010 _ biochimica et biophysica acta _ * 1804 * 1041 berdis a j 2009 _ chem . rev . _ * 109 * 2862 bustamante c , chemla y r , forde n r and izhaky d 2004 _ annu . rev ._ , * 73 * 705 ( 2004 ) .l s , derbyshire v , and steitz t a 1993 _ science _ * 260 * 352 wang j , sattar a k m a , wang c c , karam j d , konigsberg w h and steitz t a 1997 _ cell _ * 89 * 1087 shamoo y and steitz t a 1999 _ cell _ * 99 * 155 reha - krantz l j 2010 _ biochim .acta _ , * 1804 * , 1049 manosas m , spiering m m , ding f , bensimon d , allemand j f , benkovic s j and croquette v 2012 _ nucleic acids res ._ * 40 * , 6174 christian t d , romano l j and rueda d 2009 _ proc .usa _ * 106 * , 21109 .eid j et al .2009 _ science _ * 323 * , 133 .m c and i rouzina 2002 _ current opinion in structural biology _ * 12 * , 330 bustamante c , bryant z and smith s b 2003 _ nature _ * 421 * 423 bustamante c , smith s b , liphardt j and smith d 2000 _ curr . opin . in struct .
a dna polymerase (dnap) replicates a template dna strand. it also exploits the template as the track for its own motor-like mechanical movement. in the polymerase mode it elongates the nascent dna by one nucleotide in each step, but whenever it commits an error by misincorporating an incorrect nucleotide it can switch to an exonuclease mode; in the latter mode it excises the wrong nucleotide before switching back to its polymerase mode. we develop a stochastic kinetic model of dna replication that mimics an _in vitro_ experiment in which a single-stranded dna, subjected to a mechanical tension, is converted to a double-stranded dna by a single dnap. the tension-dependence of the average rate of replication, which depends on the rates of both the polymerase and exonuclease activities of the dnap, is in good qualitative agreement with the corresponding experimental results. we introduce nine novel distinct _conditional dwell times_ of a dnap. using the methods of first-passage times, we derive exact analytical expressions for the probability distributions of these conditional dwell times. the predicted tension-dependence of these distributions is, in principle, accessible to single-molecule experiments.
let be i.i.d . zero mean vectors with unknown covariance matrix .our objective is to estimate the unknown covariance matrix when the vectors are partially observed , that is , when some of their components are not observed .more precisely , we consider the following framework .denote by the -th _ _ component of the vector .we assume that each component is observed independently of the others with probability ] , thus showing that our procedures are minimax optimal up to a logarithmic factor . finally , section [ proof ] contains all the proofs of the paper .we emphasize that the results of this paper are non - asymptotic in nature , hold true for any setting of , are minimax optimal ( up to a logarithmic factor ) and do not require the unknown covariance matrix to be low - rank .we note also that to the best of our knowledge , there exists in the literature no minimax lower bound result for statistical problem with missing observations .the -norms of a vector is given by denote by the set of symmetric positive - semidefinite matrices .any matrix admits the following spectral representation where is the rank of , are the nonzero eigenvalues of and are the associated orthonormal eigenvectors ( we also set ) .the linear vector space is the linear span of and is called support of .we will denote respectively by and the orthogonal projections onto and .the schatten -norm of is defined by note that the trace of any satisfies .recall the _ trace duality _ property : we will also use the fact that the subdifferential of the convex function is the following set of matrices : ( cf . ) .we recall now the definition and some basic properties of sub - exponential random vectors .the -norms of a real - valued random variable are defined by we say that a random variable with values in is sub - exponential if for some . if , we say that is sub - gaussian .we recall some well - known properties of sub - exponential random variables : 1 .for any real - valued random variable such that for some , we have where can depend only on .2 . if a real - valued random variable is sub - gaussian , then is sub - exponential .indeed , we have a random vector is sub - exponential if are sub - exponential random variables for all .the -norms of a random vector are defined by we recall the bernstein inequality for sub - exponential real - valued random variables ( see for instance corollary 5.17 in ) [ bernstein ] let be independent centered sub - exponential random variables , and .then for every , we have with probability at least where is an absolute constant .the following proposition is the matrix version of bernstein s inequality for bounded random matrices ( see also corollary 9.1 in ) .[ prop : bernstein_bounded ] let be symmetric independent random matrices in that satisfy and almost surely for some constant and all . define then , for all with probability at least we have can now state the main result for the procedure ( [ nuclearnormest ] ) .[ theomain1 ] let be i.i.d .vectors in with covariance matrix . 
for any , we have on the event and as we see in theorem [ theomain1 ] , the regularization parameter should be chosen sufficiently large such that the condition holds with probability close to .the optimal choice of depends on the unknown distribution of the observations .we consider now the case of sub - gaussian random vector .[ assumption1 ] the random vector is sub - gaussian , that is .in addition , there exist a numerical constant such that note that gaussian distributions satisfy assumption [ assumption1 ] . under the above condition, we can study the stochastic quantity and thus properly tune the regularization parameter .the intrinsic dimension of the matrix can be measured by the effective rank see section 5.4.3 in .note that we always have .in addition , we can possibly have for approximately low - rank matrices , that is matrices with large rank but concentrated around a low - dimensional subspace .consider for instance the covariance matrix with eigenvalues and , then we have the following result , which requires no condition on the covariance matrix . [ lem1 ]let be i.i.d .random vectors satisfying assumption [ assumption1 ] .let be defined in ( [ equationy ] ) with ] .let be integers such that .let be i.i.d .random vectors in with covariance matrix .we observe i.i.d .random vectors such that where is an i.i.d .sequence of bernoulli random variables independent of .then , there exist absolute constants and such that and where denotes the infimum over all possible estimators of based on .the proof of the first inequality adapts to covariance matrix estimation the arguments used in the trace regression problem to prove theorems 1 and 11 in . by definition of , we have for any if , we deduce from the previous display that next , a necessary and sufficient condition of minimum for problem ( [ nuclearnormest ] ) implies that there exists such that for all for any of rank with spectral representation and support , it follows from ( [ kkt ] ) that for an arbitrary .note that by monotonicity of subdifferentials of convex functions and that the following representation holds where is an arbitrary matrix with .in particular , there exists with such that for this choice of , we get from ( [ th1-interm1 ] ) that where we have used the following facts and for any define . set .we have using cauchy - schwarz s inequality and trace duality , we get the above display combined with ( [ th1-interm2 ] ) give a decoupling argument gives finally , we get on the event that we now prove the spectral norm bound .note first that the solution of ( [ nuclearnormest ] ) is given by where and admits the spectral representation with positive eigenvalues and orthonormal eigenvectors .indeed , the solution of ( [ nuclearnormest ] ) is unique since the functional is strictly convex. a sufficient condition of minimum is with .we consider the following choice of with it is easy to check that .next , we have on the event the delicate part of this proof is to obtain the sharp dependence on . as a consequence ,the proof is significantly more technical as compared to the case of full observations . 
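to make that dependence concrete, the following monte carlo sketch applies the standard unbiased correction for bernoulli-masked data to the empirical covariance of the observed vectors and tracks the spectral-norm error as the observation probability shrinks. it is illustrative only: the synthetic covariance, sample size and dimension are placeholders, and the nuclear-norm penalized step of the procedure is omitted.

```python
import numpy as np

# Monte Carlo sketch of how the spectral-norm error of the de-biased
# empirical covariance grows as the observation probability delta decreases.
# Sigma is a synthetic approximately low-rank matrix; the nuclear-norm
# penalized step of the procedure is omitted, so this is illustrative only.

rng = np.random.default_rng(1)
p, n, rank = 100, 2000, 5
A = rng.standard_normal((p, rank))
Sigma = A @ A.T / rank + 0.01 * np.eye(p)      # approximately low-rank covariance
print("effective rank tr(Sigma)/||Sigma||:", np.trace(Sigma) / np.linalg.norm(Sigma, 2))

for delta in (1.0, 0.7, 0.4, 0.2):
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    mask = rng.random((n, p)) < delta          # Bernoulli(delta) observation pattern
    Y = X * mask
    S_Y = Y.T @ Y / n                          # empirical covariance of the masked data
    # Unbiased correction: Sigma = (1/delta - 1/delta^2) diag(E YY^T) + (1/delta^2) E YY^T
    S_tilde = (1.0/delta - 1.0/delta**2) * np.diag(np.diag(S_Y)) + S_Y / delta**2
    err = np.linalg.norm(S_tilde - Sigma, 2)   # spectral-norm error
    print(f"delta = {delta:.1f}   spectral-norm error = {err:.3f}")
```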
to simplify the understanding of this proof , we decomposed it into three lemmas that we prove below .define we have now combining a simple union bound argument with lemmas [ lem3 ] , [ lem4 ] and [ lem5 ] , we get with probability at least that and ,\end{aligned}\ ] ] where is an absolute constant .noting finally that and , we can conclude , up to a rescaling of the absolute constant , that ( [ prop1-bound-1 ] ) and ( [ prop1-bound-2 ] ) hold true simultaneously with probability at least .[ lem3 ] under the assumptions of proposition [ lem1 ] , we have with probability at least that where is an absolute constant .we have next , since the random variables and are sub - gaussian for any , we have where we have used assumption 1 in the last inequality .we can apply bernstein s inequality ( see proposition [ bernstein ] in the appendix below ) to get for any with probability at least that where is an absolute constant .next , taking combined with a union bound argument we get the result .[ lem4 ] under the assumptions of proposition [ lem1 ] , we have with probability at least that where is a large enough absolute constant . we have where define where are i.i.d .bernoulli random variables with parameter independent from and we want to apply the noncommutative bernstein inequality for matrices . to this end, we need to study the quantities and .we note first that .next , we set and . some easy algebra yields that we now treat and separately .denote by and the expectations w.r.t . and respectively .we have .next , we have consequently , we get for any with that \right)\notag\\ & \leq \delta^2 \sqrt{\mathbb e_x |x|_2 ^ 4 } \left ( \delta \sqrt{\mathbb e_x ( x^{\top}u)^4 } + ( 1-\delta ) \sum_{j=1}^p \sqrt{\mathbb e_x ( x^{(j)})^4}u_j^2 \right),\end{aligned}\ ] ] where we have applied cauchy - schwarz s inequality .we have again by cauchy - schwarz s inequality and assumption [ assumption1 ] that ^{1/2}\left[\mathbb e ( x^{(k)})^4\right]^{1/2}\\ & \leq & \left(\sum_{j=1}^p \sqrt{\mathbb e ( x^{(j)})^4}\right)^2\\ & \leq & c\left(\sum_{j=1}^p \|x^{(j)}\|_{\psi_2}^2\right)^2\\ & \leq & cc_1^{-2}\left(\mathrm{tr}(\sigma)\right)^2,\end{aligned}\ ] ] for some absolute constant .we have also , in view of ( [ subexp1 ] ) , with the same absolute constant as above and combining the three above displays with ( [ interm - lem4 - 2 ] ) , we get \notag\\ & \leq c c_1^{-2}\delta^2 \mathrm{tr}(\sigma)\|\sigma\|_\infty,\end{aligned}\ ] ] and combining the two above displays with ( [ interm - lem4 - 1 ] ) , we get where is an absolute constant . next , we treat .we have where we have used that in view of assumption [ assumption1 ] , we have then , combining proposition [ bernstein ] with a union bound argument gives for any where is an absolute constant .define and where is a large enough absolute constant .we have where we have used proposition [ prop : bernstein_bounded ] to get that [ lem5 ] under the assumptions of proposition [ lem1 ] , we have with probability at least that where is an absolute constant . in view of assumption [ assumption1 ], we have for any that and next , we have next , we have then we can apply proposition [ bernstein ] to get the result . 
in view of proposition [ lem1 ], we have on an event of probability at least that we assume further that ( [ measurements ] ) is satisfied with a sufficiently large constant so that we have , in view of ( [ prop1-bound-1 ] ) and ( [ prop1-bound-2 ] ) , on the same event that and we immediately get on the event that and combining these simple facts with ( [ lem - lambda - datadriven - interm1 ] ) , we get the result .this proof uses standard tools of the minimax theory ( cf .for instance ) . however , as for proposition [ lem1 ] , the proof with missing observations ( ) is significantly more technical as compared to case of full observations ( ) .in particular , the control of the kullback - leibler divergence requires a precise description of the conditional distributions of the random variables given the masked variables . to our knowledge, there exists no minimax lower bound result for statistical problem with missing observations in the literature .set where is a sufficiently small absolute constant .we consider first the case .define set for any .consider the associated set of symmetric matrices note that any matrix is positive - semidefinite if since we have by assumption by construction , any element of as well as the difference of any two elements of is of rank exactly .consequently , since for any .note also that for any , we have and provided that and consequently .indeed , we have in view of the condition .a similar reasoning gives the lower bound .denote by the block matrix with first block equal to .varshamov - gilbert s bound ( cf .lemma 2.9 in ) guarantees the existence of a subset with cardinality containing and such that , for any two distinct elements and of , we have let be i.i.d . with . for the sake of brevity , we set .recall that are random vectors in whose entries are i.i.d .bernoulli entries with parameter independent from and that the observations satisfy . denote by the distribution of and by the conditional distribution of given .next , we note that , for any , the conditional random variables are independent gaussian vectors where , we have .denote respectively by and the probability distribution of and the associated expectation , and by the expectation w.r.t for any .we also denote by and the expectation and conditional expectation associated respectively with and .next , the kullback - leibler divergences between and satisfies using that with defined in ( [ proof - theo2-interm1 ] ) , we get for any , any and any realization that 1 . and hence . and are supported on a -dimensional subspace of where .define .define the mapping as follows where for any , is obtained by keeping only the components with their index .we denote by the right inverse of .we note that and thus we get that denote by the eigenvalues of .note that for any in view of ( [ spectraltest ] ) if .we get , using the inequality for any , that taking the expectation w.r.t . to in the above display , we get for any that since . combining the above display with ( [ proof - theo2-interm2 ] ), we get thus , we deduce from the above display that the condition is satisfied for any if is chosen as a sufficiently small numerical constant depending on . 
in view of ( [ lower_2 ] ) and ( [ eq : condition c ] ) , ( [ eq : lower1 ] ) now follows by application of theorem 2.5 in .the lower bound ( [ eq : lower2 ] ) follows from ( [ eq : lower1 ] ) by the following simple argument .consider the set of matrices .for any two distinct matrices of , we have indeed , if ( [ eq : lower_spectral_1 ] ) does not hold , we get since by construction of . this contradicts ( [ lower_2 ] ) .next , ( [ eq : condition c ] ) is satisfied for any if is chosen as a sufficiently small numerical constant depending on . combining ( [ eq : lower_spectral_1 ] ) with ( [ eq : condition c ] ) and theorem 2.5 in gives the result .the case can be treated similarly and is actually easier .indeed if , then we have and .consequently , we can derive the lower bound by testing between the two hypothesis where and are covariance matrices with only one nonzero component on the first diagonal entry .for these covariance matrices , we have and .thus we have for some absolute constant .the rest of the proof is identical to the case .i wish to thank professor vladimir koltchinskii for suggesting this problem and the observation ( [ sigmarecons ] ) .
in this paper we study the problem of estimating a high-dimensional, approximately low-rank covariance matrix from observations with missing entries. we propose a simple procedure that is computationally tractable in high dimensions and does not require imputation of the missing data. we establish non-asymptotic sparsity oracle inequalities for the estimation of the covariance matrix in the frobenius and spectral norms, valid for any setting of the sample size and the dimension of the observations. we further establish minimax lower bounds showing that our rates are minimax optimal up to a logarithmic factor.
the need for higher - order radiative corrections is growing more and more important due to the increasing precision of measurements at the lhc and planned future colliders .the anticipated precision of future experiments will require the evaluation of three - loop corrections with arbitrary masses , for precision electroweak quantities ( for a recent review see ref . ) or a detailed understanding of the higgs potential and its stability . in this article , the calculation of general three - loop vacuum integrals is considered , integrals with vanishing external momentum and arbitrary propagator masses .such integrals may arise in low - energy observables , in the coefficients of low - momentum expansions ( see ref . ) or as building blocks in more general three - loop calculations . at the two - loop level ,analytical formulae for general vacuum integrals have been known for some time .when expanding in powers of within dimensional regularization , they can be written in terms of polylogarithms . at the three - loop level ,results for vacuum integrals are only available for one and two independent mass scales .the derivation of analytical results for the class of two - scale three - loop vacuum integrals requires the introduction of harmonic polylogarithms , and some cases are only known numerically . in light of these facts , a numerical approach to three - loop vacuum integrals with general mass pattern appears most promising . in ref . a numerical technique for the calculation of the four - propagator topology has been presented . in the following ,a method for the evaluation of all relevant master integrals is proposed , which is based on dispersion relations .this technique has been previously used for the numerical evaluation of two - loop self - energy and vertex integrals .for the master integrals considered in this paper , the dispersion relation approach leads to simple numerical integrals for their finite part . for two of the three master integral topologies, one can obtain one - dimensional numerical integral representations in terms of elementary functions .for the most complicated case , the six - propagator master integral , one may construct a two - dimensional integral in terms of elementary functions .note that in some applications it may be necessary to evaluate the master integrals to higher orders in .this happens when a master integral is multiplied by a coefficient that has poles in .the method described in this paper , in its present form , is not suitable for such situations .p3cmp3cmp3 cm & & + ( 0,0)1(0,1)(0,-1)(-1,0)(-1,0)1.414 - 4545(1,0)1.414135 - 135(-0.9,0.7)1(-0.9,-0.7)1(-0.5,0)2(0.3,0)3(1.05,0)4 & ( 0,0)1(-0.707,0.707)(0,-1)(0.707,0.707)(0,-1)(-0.707,0.707)(0,-1)(0.707,0.707)(-1,-0.4)1(-0.45,-0.2)2(1,-0.4)4(0.45,-0.2)3(0,0.9)5 & ( 0,0)1(-0.866,0.5)(0,-1)(0.866,0.5)(0,0)(0,0)(-0.866,0.5)(0,0)(0.866,0.5)(0,0)(0,-1)(-1,-0.4)1(-0.4,0)2(1,-0.4)4(0.4,0)3(0,0.9)5(0.1,-0.5)6 + & & this article begins by defining the set of three - loop vacuum master integrals in section [ sc : def ] .each master integral topology is discussed in turn in sections [ sc : u4][sc : u6 ] .several special cases , which require a modification of the integral representations , are treated separately in sections [ sc : u4 ] and [ sc : u5 ] . 
for the most complicated masterintegral , which is the subject of section [ sc : u6 ] , no such special case has been identified so far .the paper finishes with some comments on the implementation of the numerical integrations in section [ sc : num ] before concluding in section [ sc : concl ] .some useful formulae are collected in the appendix .after trivial cancellations of numerator and denominator terms , a general scalar three - loop vacuum integral may be written in the form ^{\nu_1 } [ ( q_1-q_2)^2-m_2 ^ 2]^{\nu_2 } } \nonumber \\ & \;\times \frac{1}{[(q_2-q_3)^2-m_3 ^ 2]^{\nu_3 } [ q_3 ^ 2-m_4 ^ 2]^{\nu_4 } [ q_2 ^ 2-m_5 ^ 2]^{\nu_5 } [ ( q_1-q_3)^2-m_6 ^ 2]^{\nu_6}}\,,\end{aligned}\ ] ] where , is the number of dimensions in dimensional regularization , and are integer numbers .the complete set of three - loop vacuum integrals can be reduced to a small set of master integrals with the help of integration - by - parts identities . in most cases ,not involving any special mass patterns , one can choose the following basis of three master integrals , see fig .[ fig : diag1 ] , besides integrals that factorize into products of one- and two - loop contributions .two other simple integrals that are often encountered ( see fig . [fig : diag2 ] ) can be reduced to these three with the help of integration - by - parts identities : , \label{eq : m111100 } \displaybreak[0 ] \\[1ex ] m(1,1,1,1,-1,0)\big|_{m_5=0 } & = \biggl\ { \frac{2 m_1 ^ 2}{3(d-2)(3d-8 ) } \bigl [ ( d-2)m_1 ^ 2 + ( 7d-18)m_2 ^ 2 \nonumber \\ & \qquad - 2(d-3)(m_3 ^ 2+m_4 ^ 2 ) \bigr ] \ , u_4(m_1 ^ 2,m_2 ^ 2,m_3 ^ 2,m_4 ^ 2 ) \nonumber \\ & \qquad+ \frac{1}{3}a_0(m_2 ^ 2)a_0(m_3 ^ 2)a_0(m_4 ^ 2 ) \biggr \ } + \bigl\ { m_1 \leftrightarrow m_2 \bigr\ } \nonumber \\ & \quad + \bigl\ { m_1 \leftrightarrow m_3 , m_2 \leftrightarrow m_4 \bigr\ } + \bigl\ { m_1 \leftrightarrow m_4 , m_2 \leftrightarrow m_3 \bigr\ } \ , , \end{aligned}\ ] ] where `` cycl '' refers to cyclic permutations of , and is the standard one - loop vacuum function ( see appendix [ sc:12 ] ) .p3cmp3 cm & + ( 0,0)1(0,1)(0,-1)(-1,0)1.414 - 4545(1,0)1.414135 - 135(-1.05,0)1(-0.5,0)2(0.3,0)3(1.05,0)4 & ( 0,0)1(-0.707,0.707)(0,-1)(0.707,0.707)(0,0.97)(0,-1)(-0.707,0.707)(0,-1)(0.707,0.707)(-1,-0.4)1(-0.45,-0.2)2(1,-0.4)4(0.45,-0.2)3(0,0.8)5 + & m(1,1,1,1,-1,0 ) ( =0 ) the master integrals , and have the following symmetry properties : * is symmetric under arbitrary permutations of . * is symmetric under the replacements , , and , as well as any combination thereof . * is symmetric under the replacements , , , and any combination thereof .let us begin with a dispersion relation for the double bubble loop integral in fig .[ fig : diag3 ] , \delta i_{\rm db}(s , m_1 ^ 2,m_2 ^ 2,m_3 ^ 2,m_4 ^ 2 ) & = \delta b_{0,m_1}(s , m_1 ^ 2,m_2 ^ 2 ) \ , b_0(s , m_3 ^ 2,m_4 ^ 2 ) \nonumber \\ & \quad + b_{0,m_1}(s , m_1 ^ 2,m_2 ^ 2 ) \ , \delta b_0(s , m_3 ^ 2,m_4 ^ 2),\end{aligned}\ ] ] where is the standard scalar one - loop self - energy function ( see appendix [ sc:12 ] ) , and .the discontinuities of these two functions are denoted by and , respectively . in dimensions ,they are given by and is the heaviside step function .( -0.6,0)0.6(0.6,0)0.6(0,0)(-1.2,0)(1.2,0)(-0.6,-0.6)(-2,0)(-1.2,0)(2,0)(1.2,0)(-0.9,-0.7)1(-0.3,-0.7)1(-0.6,0.7)2(0.6,-0.7)3(0.6,0.7)4(-1.6,0.1) inserting the dispersion relation eq . into the three - loop integral ,one obtains this integral is divergent , and thus a numerical integration of in dimension is not possible . 
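the one-loop building blocks entering these dispersion relations are straightforward to handle numerically. the sketch below evaluates the discontinuity of the b0 function and checks a once-subtracted dispersion relation for b0 itself against its feynman-parameter representation; it only illustrates the type of numerical dispersion integral involved and is not the subtraction used for the three-loop master integrals. the convention assumed is b0 = 1/eps_bar + b0_fin with renormalization scale mu, and the masses are illustrative.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the one-loop ingredients entering these dispersion relations: the
# discontinuity of B0 and a once-subtracted dispersion integral reproducing
# the finite part of Re B0.  Convention (assumed): B0 = 1/eps_bar + B0_fin,
# renormalization scale mu; masses and s below are illustrative.

def kallen(x, y, z):
    return x*x + y*y + z*z - 2.0*(x*y + y*z + z*x)

def delta_B0(s, m1sq, m2sq):
    """(1/pi) Im B0(s): lambda^{1/2}(s, m1^2, m2^2)/s above threshold, else 0."""
    thr = (np.sqrt(m1sq) + np.sqrt(m2sq))**2
    return np.sqrt(kallen(s, m1sq, m2sq)) / s if s > thr else 0.0

def B0_fin(s, m1sq, m2sq, musq=1.0):
    """Finite part of B0 from its Feynman-parameter representation (real s below threshold)."""
    f = lambda x: -np.log(abs((1.0 - x)*m1sq + x*m2sq - x*(1.0 - x)*s) / musq)
    return quad(f, 0.0, 1.0)[0]

def B0_disp(s, m1sq, m2sq, musq=1.0):
    """Once-subtracted dispersion relation, valid here for s below threshold."""
    thr = (np.sqrt(m1sq) + np.sqrt(m2sq))**2
    tail = quad(lambda sp: delta_B0(sp, m1sq, m2sq) / (sp * (sp - s)), thr, np.inf)[0]
    return B0_fin(0.0, m1sq, m2sq, musq) + s * tail

m1sq, m2sq, s = 1.0, 2.0, 0.5     # below-threshold point
print(B0_fin(s, m1sq, m2sq), B0_disp(s, m1sq, m2sq))   # the two should agree
```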
instead one may consider the sum the master integrals with one or two non - zero masses can be calculated analytically , with results collected in appendix [ sc : ana ] .the first four terms on the right - hand side of eq .precisely reproduce the divergencies of the general integral on the left - hand side , such that the remainder is finite and can be integrated numerically .it is given by & i_{\rm db , sub}(s , m_1 ^ 2,m_2 ^ 2,m_3 ^ 2,m_4 ^ 2 ) = \nonumber \\ & \qquad \delta b_{0,m_1}(s , m_1 ^ 2,m_2 ^ 2 ) \ ; \text{re}\bigl\ { b_0(s , m_3 ^ 2,m_4 ^ 2 ) - b_0(s,0,0 ) \bigr\ } \nonumber \\ & \qquad - \delta b_{0,m_1}(s , m_1 ^ 2,0 ) \ ; \text{re }\bigl\ { b_0(s,0,m_3 ^ 2 ) + b_0(s,0,m_4 ^ 2 ) - 2 b_0(s,0,0 ) \bigr\ } \nonumber \\ & \qquad + \text{re}\bigl\{b_{0,m_1}(s , m_1 ^ 2,m_2 ^ 2)\bigr\ } \ , \bigl [ \delta b_0(s , m_3 ^ 2,m_4 ^ 2 ) - \deltab_0(s,0,0 ) \bigr ] \nonumber \\ & \qquad - \text{re } \bigl\{b_{0,m_1}(s , m_1 ^ 2,0)\bigr\ } \ , \bigl [ \delta b_0(s,0,m_3 ^ 2 ) + \deltab_0(s,0,m_4 ^ 2 ) - 2\ , \delta b_0(s,0,0 ) \bigr ] \,,\end{aligned}\ ] ] which has been written in a way that makes its finiteness manifest . the divergent part of the function in eq . integrates to zero and thus can be ignored .a special treatment is required for the case , when develops an infrared divergence .this singularity occurs in the limit when the first loop momentum goes on - shell , . in this limit into the product .note that in dimensional regularization , but it has been kept here for illustration . by alternatively considering a mass regulator for the infrared divergence ,one arrives at the identity where is the basic scalar two - loop vacuum integral ( see appendix [ sc:12 ] ) , and is an infinitesimal mass parameter . in principle , one could directly evaluate for a small numerical value of according to the prescription in the previous subsection , although one has to pay the price of having numerical cancellations between the two terms from the last two terms in eq . . alternatively , it is possible to extract the dependence explicitly from . for this purpose ,let us consider the following small expansions : \nonumber \\& \quad - \pi^2 \delta(s - m_2 ^ 2 ) + { \cal o}(\delta^2 ) , \\ \text{re } \bigl\ { b_{0,m_1}(s,\delta^2,0)\bigr\ } & = \frac{1}{s } \log \frac{s}{\delta^2 } + { \cal o}(\delta^2 ) ,\\ \int_0^\infty ds\ , \delta b_{0,m_1}(s,\delta^2,m_2 ^ 2)\ , f(s ) & = \int_0^\infty ds\ , \delta b_{0,m_1}(s,0,m_2 ^ 2)\,\biggl [ f(s ) - f(m_2 ^ 2)\frac{m_2 ^ 2}{s } \biggr ] \nonumber \\ & \quad + \biggl ( 1 + \log \frac{\delta^2}{m_2 ^ 2}\biggr ) f(m_2 ^ 2 ) + { \cal o}(\delta^2),\end{aligned}\ ] ] where is some arbitrary well - behaved function that does not depend on . 
for the remaining terms in the integrand , involving and functions , one can simply set to zero .one then obtains \\[1ex ] & i_{\rm db , sub,0}(s , m_2 ^ 2,m_3 ^ 2,m_4 ^ 2 ) = \nonumber \\ & \qquad \delta b_{0,m_1}(s,0,m_2 ^ 2 ) \biggl [ \begin{aligned}[t ] & \text{re}\bigl\ { b_0(s , m_3 ^ 2,m_4 ^ 2 ) - b_0(s,0,0 ) \bigr\ } \\ & - \frac{m_2 ^ 2\ , a_{0,\rm fin}(m_2 ^ 2)}{s\ , a_{0,\rm fin}(s)}\ , \text{re}\bigl\ { b_0(m_2 ^ 2,m_3 ^ 2,m_4^ 2 ) - b_0(m_2 ^ 2,0,0 ) \bigr\ } \biggr ] \end{aligned } \nonumber \\ & \qquad - \delta b_{0,m_1}(s,0,0 ) \ ; \text{re }\bigl\ { b_0(s,0,m_3 ^ 2 ) + b_0(s,0,m_4 ^ 2 ) - 2 b_0(s,0,0 ) \bigr\ } \nonumber \\ & \qquad + \frac{s+m_2 ^ 2}{s(s - m_2 ^ 2 ) } \text{re}\bigl\{\log \frac{m_2 ^ 2-s}{m_2 ^ 2}\bigr \ } \ , \bigl [ \delta b_0(s , m_3 ^ 2,m_4 ^ 2 ) - \delta b_0(s,0,0 ) \bigr ] \nonumber \\ & \qquad - \frac{1}{s } \log \frac{s}{m_2 ^ 2 } \ , \bigl [ \delta b_0(s,0,m_3 ^ 2 ) + \deltab_0(s,0,m_4 ^ 2 ) - 2\ , \delta b_0(s,0,0 ) \bigr ] \ , , \displaybreak[0 ] \\[1ex ] & u_{4,\rm add,0}(\delta^2,m_2 ^ 2,m_3 ^ 2,m_4 ^ 2 ) = \nonumber \\ & \qquad -\biggl ( 1 + \log \frac{\delta^2}{m_2 ^ 2}\biggr ) \ , a_{0,\rm fin}(m_2 ^ 2)\ , \text{re}\bigl\ { b_0(m_2 ^ 2,m_3 ^ 2,m_4 ^ 2 ) - b_0(m_2 ^ 2,0,0 ) \bigr\ } \nonumber \\ & \qquad -\log \frac{\delta^2}{m_2 ^ 2 } \int_0^\infty ds \ , \frac{1}{s - m_2 ^ 2}a_{0,\rm fin}(s)\,\bigl [ \delta b_0(s , m_3 ^ 2,m_4 ^ 2 ) - \deltab_0(s,0,0 ) \bigr ] \nonumber \\ & \qquad + \log \frac{\delta^2}{m_2 ^ 2 } \int_0^\infty ds \ , \frac{1}{s}a_{0,\rm fin}(s)\,\bigl [ \delta b_0(s,0,m_3 ^ 2 ) + \deltab_0(s,0,m_4 ^ 2 ) - 2\ , \delta b_0(s,0,0 ) \bigr ] \nonumber \\ & \qquad -\pi^2 a_{0,\rm fin}(m_2 ^ 2)\ , \bigl [ \delta b_0(m_2 ^ 2,m_3 ^ 2,m_4^ 2 ) - \delta b_0(m_2 ^ 2,0,0 ) \bigr ] \displaybreak[0 ] \\& \quad = - \log \frac{\delta^2}{m_2 ^ 2 } \bigl [ t_3(m_2 ^ 2,m_3 ^ 2,m_4 ^ 2 ) -\sum_{i=2}^4 t_3(m_i^2,0,0 ) \bigr ] \nonumber \\ & \qquad \begin{aligned}[b ] -a_{0,\rm fin}(m_2 ^ 2)\ , \bigl [ & \text{re}\bigl\ { b_0(m_2 ^ 2,m_3 ^ 2,m_4^ 2 ) - b_0(m_2 ^ 2,0,0 ) \bigr\ } \\ & - \pi^2\ , \delta b_0(m_2 ^ 2,m_3 ^ 2,m_4 ^ 2 ) + \pi^2 \ , \delta b_0(m_2 ^ 2,0,0 ) \bigr ] \ , .\end{aligned}\end{aligned}\ ] ]the master integral can be addressed with a dispersion relation similar to eq .[ eq : disp1 ] . as in the previous section ,one first needs to subtract the divergencies to arrive at a finite integral suitable for numerical evaluation . for this purpose , it is useful to consider the following relation , which has been derived from integration - by - parts identities : \\+ \frac{\lambda^2_{125 } \lambda^2_{345}}{(3-d)^2(m_2 ^ 2-m_1 ^ 2+m_5 ^ 2)(m_3 ^ 2-m_4 ^ 2+m_5 ^ 2 ) } m(2,1,1,2,1,0)\ , , \label{eq : u5red}\end{gathered}\ ] ] where r = m_1 ^ 2/p^2 s= m_2 ^ 2/p^2 \lambda( ...) z = |z| e^{i\varphi}$ the logarithm is defined as } \log z & = \log|z| + i\varphi , \qquad \varphi \in ( -\pi,\pi]\,.\end{aligned}\ ] ] the mass derivative can be expressed in terms of and functions . 
after expanding in obtains + { \cal o}(\epsilon)\ , .\end{aligned}\ ] ] finally , the two - loop vacuum integral is given by [q_2 ^ 2-m_2 ^ 2][(q_1-q_2)^2-m_3 ^ 2 ] } \\ & = e^{2\gamma_{\rm e}\epsilon}\ , ( m_3 ^ 2)^{1 - 2\epsilon } \ , \frac{\gamma(1+\epsilon)^2}{2(1-\epsilon)(1 - 2\epsilon ) } \biggl [ \frac{1+x+y}{\epsilon^2 } -\frac{2}{\epsilon } \bigl ( x \log x + y \log y \bigr ) \nonumber \\ & \begin{aligned } \qquad + \biggl ( & x\log^2 x + y \log^2 y - ( 1-x - y)\log x \log y \\ & + \lambda(1,x , y ) \bigl ( 2 \log u \log v - \log x \log y - 2 \text{li}_2\,u - 2 \text{li}_2\,v + \frac{\pi^2}{3 } \bigr ) \biggr ) \end{aligned } \nonumber \\ & \begin{aligned}[b ] \qquad - \epsilon\biggl ( & \frac{x}{3}\log^3 x + \frac{y}{3 } \log^3 y - \frac{1-x - y}{2}\log x \log y \,\log(xy ) \\ & + \lambda(1,x , y ) \biggl\ { \frac{1}{2}\log x \log y \,\log(xy ) + \frac{4}{3 } \log^3(1-w ) \\ &\quad+ 2\log^2(1-w)\,\bigl ( \log(xy ) - \log w \bigr ) \\ & \quad+ \log(1-w)\bigl ( \frac{2\pi^2}{3 } + \log^2(xy ) \bigr ) + \frac{4}{3}\log^3 w + \frac{2\pi^2}{3 } \log w \\ & \quad -\frac{4}{3}\log^3 u + 2 \log^2 u \,\log \frac{v^2}{y^2 } - \log u \,\bigl ( \frac{2\pi^2}{3 } + \log^2 y \bigr ) \\ &\quad -\frac{4}{3}\log^3 v + 2 \log^2 v \,\log \frac{u^2}{x^2 } - \log v \,\bigl ( \frac{2\pi^2}{3 } + \log^2 x \bigr ) \\ & \quad - 2\log x \;\text{li}_2\frac{u^2}{x } - 2\log y \;\text{li}_2\frac{v^2}{y } + 2 \log(xy ) \;\text{li}_2\,w \\ & \quad + 2\text{li}_3\frac{u^2}{x } + 2\text{li}_3\frac{v^2}{y } - 2\text{li}_3\,w - 4\text{li}_3(1-w ) \\ & \quad+ 4\text{li}_3\,u + 4\text{li}_3\,v -2 \zeta(3 ) \biggr\}\biggr ) + { \cal o}(\epsilon^2 ) \biggr ] \ , , \end{aligned}\end{aligned}\ ] ] where & u = \tfrac{1}{2}\bigl[1+x - y+\lambda(1,x , y)\bigr ] , \quad v = \tfrac{1}{2}\bigl[1-x+y+\lambda(1,x , y)\bigr ] , \\[1ex ] & w = \bigl(\frac{u}{x}-1\bigr)\bigl(\frac{v}{y}-1\bigr ) , \\ & u = \frac{x}{u}(1-w ) , \quad v = \frac{y}{v}(1-w ) , \\ & w = \frac{x}{u } + \frac{y}{v } -1.\end{aligned}\ ] ] for one obtains the simpler expression \qquad - \epsilon\bigl ( & \frac{y}{3 } \log^3 y + ( 1-y ) \bigl\ { \log^2 y \ , \log(1-y ) - \frac{\pi^2}{3}\log y \\ & \quad+ 2\log y \ , \text{li}_2(1-y ) + 2\text{li}_3\,y + 4\text{li}_3(1-y ) - 2\zeta(3 ) \bigr\}\bigr ) \end{aligned } \nonumber \\ & \qquad+ { \cal o}(\epsilon^2 ) \biggr ] \,.\end{aligned}\ ] ] a. freitas , prog .phys . * 90 * , 201 ( 2016 ) [ arxiv:1604.00406 [ hep - ph ] ] .s. p. martin and d. g. robertson , phys .d * 90 * , no . 7 , 073010 ( 2014 ) [ arxiv:1407.4336 [ hep - ph ] ] .a. i. davydychev and j. b. tausk , nucl .b * 397 * , 123 ( 1993 ) . c. ford ,i. jack and d. r. t. jones , nucl .b * 387 * , 373 ( 1992 ) [ erratum - ibid . *504 * , 551 ( 1997 ) ] [ hep - ph/0111190 ] ; r. scharf and j. b. tausk , nucl .b * 412 * , 523 ( 1994 ) .d. j. broadhurst , z. phys .c * 54 * , 599 ( 1992 ) ; l. avdeev , j. fleischer , s. mikhailov and o. tarasov , phys .b * 336 * , 560 ( 1994 ) [ erratum : phys .b * 349 * , 597 ( 1995 ) ] [ hep - ph/9406363 ] ; j. fleischer and m. y. kalmykov , phys .b * 470 * , 168 ( 1999 ) [ hep - ph/9910223 ] .d. j. broadhurst , eur .j. c * 8 * , 311 ( 1999 ) [ hep - th/9803091 ] . k. g. chetyrkin and m. steinhauser , nucl .b * 573 * , 617 ( 2000 ) [ hep - ph/9911434 ] .y. schrder and a. vuorinen , jhep * 0506 * , 051 ( 2005 ) [ hep - ph/0503209 ] .m. y. kalmykov , jhep * 0604 * , 056 ( 2006 ) [ hep - th/0602028 ]. j. grigo , j. hoff , p. marquard and m. 
steinhauser , nucl .b * 864 * , 580 ( 2012 ) [ arxiv:1206.3418 [ hep - ph ] ] ; results in the form of mathematica code available at https://www.ttp.kit.edu/progdata/ttp12/ttp12-20/twomasstadpoles/ a. i. davydychev and m. y. kalmykov , nucl .b * 699 * , 3 ( 2004 ) [ hep - th/0303162 ] ; m. y. kalmykov , nucl .b * 718 * , 276 ( 2005 ) [ hep - ph/0503070 ] ; s. bekavac , a. g. grozin , d. seidel and v. a. smirnov , nucl .b * 819 * , 183 ( 2009 ) [ arxiv:0903.4760 [ hep - ph ] ] ; v. v. bytev , m. y. kalmykov and b. a. kniehl , nucl .b * 836 * , 129 ( 2010 ) [ arxiv:0904.0214 [ hep - th ] ] ; v. v. bytev , m. y. kalmykov and b. a. kniehl , comput .commun .* 184 * , 2332 ( 2013 ) [ arxiv:1105.3565 [ math - ph ] ] .s. laporta and e. remiddi , phys .b * 301 * , 440 ( 1993 ) .e. remiddi and j. a. m. vermaseren , int .j. mod .a * 15 * , 725 ( 2000 ) [ hep - ph/9905237 ] .s. groote , j. g. krner and a. a. pivovarov , nucl .b * 542 * , 515 ( 1999 ) [ hep - ph/9806402 ] ; s. groote , j. g. krner and a. a. pivovarov , eur .j. c * 11 * , 279 ( 1999 ) [ hep - ph/9903412 ] ; s. groote , j. g. krner and a. a. pivovarov , annals phys . *322 * , 2374 ( 2007 ) [ hep - ph/0506286 ] .s. bauberger , f. a. berends , m. bhm and m. buza , nucl .b * 434 * , 383 ( 1995 ) [ hep - ph/9409388 ] ; m. awramik , m. czakon and a. freitas , jhep * 0611 * , 048 ( 2006 ) [ arxiv : hep - ph/0608099 ] . s. bauberger and m. bhm , nucl .b * 445 * , 25 ( 1995 ) [ hep - ph/9501201 ] .s. p. martin , talk given at _loopfest xv _, buffalo , ny , usa ( 2016 ) ; s. p. martin and d. g. robertson , in preparation .k. g. chetyrkin and f. v. tkachov , nucl .b * 192 * , 159 ( 1981 ) .g. weiglein , r. scharf and m. bhm , nucl .b * 416 * , 606 ( 1994 ) [ hep - ph/9310358 ] .d. j. broadhurst , z. phys .c * 47 * , 115 ( 1990 ) .k. g. chetyrkin and f. v. tkachov , nucl .b * 192 * , 159 ( 1981 ) .g. t hooft and m. j. g. veltman , nucl .b * 153 * , 365 ( 1979 ) ; g. j. van oldenborgh and j. a. m. vermaseren , z. phys .c * 46 * , 425 ( 1990 ) ; a. denner , fortsch .phys . * 41 * , 307 ( 1993 ) [ arxiv:0709.1075 [ hep - ph ] ] .wolfram research , inc ., `` mathematica , version 10.2 , '' champaign , illinois , usa ( 2015 ) .
three-loop vacuum integrals are an important building block for the calculation of a wide range of three-loop corrections. until now, analytical results have been known only for integrals with one or two independent mass scales, but in the electroweak standard model and many extensions thereof one often encounters more mass scales of comparable magnitude. for this reason, a numerical approach to the evaluation of three-loop vacuum integrals with arbitrary mass pattern is proposed here. concretely, one can identify a basic set of three master integral topologies. with the help of dispersion relations, each of these can be transformed into one-dimensional or, for the most complicated case, two-dimensional integrals in terms of elementary functions, which are suitable for efficient numerical integration. + ayres freitas _pittsburgh particle-physics astro-physics & cosmology center (pitt-pacc), + department of physics & astronomy, university of pittsburgh, pittsburgh, pa 15260, usa_
today , there are many impressive archives painstakingly constructed from observations associated with an instrument. the hubble space telescope ( hst ) , the chandra x - ray observatory , the sloan digital sky survey ( sdss ) , the two micron all sky survey ( 2mass ) , and the digitized palomar observatory sky survey ( dposs ) are examples of this .furthermore yearly advances in electronics bring new instruments , doubling the amount of data we collect each year . for example ,approximately a gigapixels is deployed on all telescopes today , and new gigapixel instruments are under construction .this trend is bound to continue . just like what szalay says , the astronomy is facing `` data avalanche '' ( see e.g. , szalay & gray 2001 ) . how to organize , use , and make sense of the enormous amounts of data generated by today s instruments and experiments ?it is very time consuming and demands high quality human resources .therefore , better features and better classifiers are required . in addition, expert systems are also useful to get quantitative information .it is possible to solve the above questions with neural networks ( nns ) , because they permit application of expert knowledge and experience through network training .furthermore , astronomical object classification based on neural networks requires no priori assumptions or knowledge of the data to be classified as some conventional methods need .neural networks , over the years , have proven to be a powerful tool capable to extract reliable information and patterns from large amounts of data even in the absence of models describing the data ( cf .bishop 1995 ) and are finding a wide range of applications also in the astronomical community : catalogue extraction ( andreon et al .2000 ) , star / galaxy classification ( odewahn et al . 1992 ; naim et al .1995 ; miller & coe 1996 ; m & hakala 1995 ; bertin & arnout 1996 ; bazell & peng 1998 ) , galaxy morphology ( storrie - lombardi et al . 1992 ; lahav et al .1996 ) , classification of stellar spectra ( bailer - jones et al . 1998; allende prieto et al .2000 ; weaver 2000 ) .just to name a few , the rising importance of artificial neural networks is confirmed in this kind of task .there is also a very important and promising recent contribution by andreon et al .( 2000 ) covering a large number of neural algorithms . in this work , a class of supervised neural networks called learning vector quantization ( lvq ) was proposed .lvq shares the same network architecture as the kohonen self - organizing map ( som ) , although it uses a supervised learning algorithm .bazell & peng ( 1998 ) pioneered the use of it in astronomical applications .another class of supervised neural networks named multi - layer perceptrons ( mlp ) was presented .goderya & mcguire ( 2000 ) summarized progress made in the development of automated galaxy classifiers using neural networks including mlp .qu et al . ( 2003 ) experimented and compared multi - layer perceptrons ( mlp ) , radial basis function ( rbf ) , and support vector machines ( svm ) classifiers for solar - flare detection .meanwhile , an automated algorithm called support vector machines ( svm ) for classification was introduced .the approach was originally developed by vapnik ( 1995 ) .wozniak et al . ( 2001 ) and humphreys et al . (2001 ) have pioneered the use of svm in astronomy .wozniak et al . 
( 2001 ) evaluated svm , k - means and autoclass for automated classification of variable stars and compared their effectiveness .their results suggested a very high efficiency of svm in isolating a few best defined classes against the rest of the sample , and good accuracy for all classes considered simultaneously .humphreys et al .( 2001 ) used different classification algorithms including decision trees , k - nearest neighbor and support vector machines for classifying the morphological type of the galaxy .furthermore , they got the very promising results of their first experiments with different algorithms .celestial objects radiate energy over an extremely wide range of wavelengths from radio waves to infrared , optical to ultraviolet , x - ray and even gamma rays .each of these observations carries important information about the nature of objects .different physical processes show different properties in different bands . based on these, we apply learning vector quantization ( lvq ) , single - layer perceptron ( slp ) and support vector machines ( svm ) to classify agns , stars and normal galaxies with data from optical , x - ray , infrared bands . in this paperwe present the principles of lvq , slp and svm in section 2 . in section 3, we discuss the sample selection and analysis the distribution of parameters . in section 4the computed results and discussion are given .finally , in section 5 we conclude this paper with a discussion of general technique and its applicability .here the adopted learning vector quantization ( lvq ) algorithm is based upon the lvq_pak routines developed at the laboratory of computer and information sciences , helsinki university of technology , finland .their software can be obtained via the www from www.cis.hut.fi / research / lvq_pak/. if interested in the application of lvq in astronomy , we can refer to the papers of bazell & peng ( 1998 ) and cortiglioni et al .( 2001 ) .the lvq method was developed by kohonen ( 1989 ) who also developed the popular unsupervised classification technique known as the self - organizing map or topological map neural networks ( kohonen 1989 , 1990 ) .som performs a mapping from an -dimensional input vector onto two - dimensional array of nodes that is usually displayed in a rectangular or hexagonal lattice .the mapping is performed in such a way as to preserve the topology of the input data .this means that input vectors that are similar to each other in some sense , are mapped to neighboring regions of the two - dimensional output lattice .each node in the output lattice has an -dimensional reference vector of weights associated with it , one weight for each element of the input vector .the som functions compare the distance , in some suitable form , between each input vector and each reference vector in an iterative manner . with each iteration ,the reference vectors are moved around in the output space until their positions converge to a stable state . when the reference vector that is closest to a given input vector is found ( the winning reference vector ) , that reference vectoris updated to more closely match the input vector .this is the learning step .lvq uses that same internal architecture as som : a set of -dimensional input vectors are mapped onto a two - dimensional lattice , and each node on the lattice has an -dimensional reference vector associated with it . the learning algorithm for lvq ,i.e. , the method of updating the reference vectors , is different from that of som . 
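the lvq1 update rule spelled out in the next paragraph (attract the winning reference vector when its class label matches that of the input, repel it otherwise) is compact enough to sketch directly. the example below runs on synthetic two-class data and is purely illustrative; it is not the lvq_pak / optimized-learning-rate configuration actually used in this work.

```python
import numpy as np

# Bare-bones LVQ1 sketch on synthetic 2-D data: attract the winning reference
# vector for a correctly labelled input, repel it otherwise.  Not the LVQ_PAK
# (OLVQ1 learning rates, kNN labelling) setup used in the paper.

rng = np.random.default_rng(2)

# Two Gaussian classes in 2-D.
n_per_class = 500
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (n_per_class, 2)),
               rng.normal([3.0, 3.0], 1.0, (n_per_class, 2))])
y = np.repeat([0, 1], n_per_class)

# A few reference (codebook) vectors per class, initialized from the data.
refs_per_class = 5
idx = np.concatenate([rng.choice(np.where(y == c)[0], refs_per_class, replace=False)
                      for c in (0, 1)])
m = X[idx].copy()                       # reference vectors
m_labels = y[idx]

alpha0, n_epochs = 0.3, 50
for epoch in range(n_epochs):
    alpha = alpha0 * (1.0 - epoch / n_epochs)             # decreasing learning rate
    for i in rng.permutation(len(X)):
        c = np.argmin(np.linalg.norm(m - X[i], axis=1))   # winning reference vector
        sign = 1.0 if m_labels[c] == y[i] else -1.0       # attract or repel
        m[c] += sign * alpha * (X[i] - m[c])

# Classify each input by the label of its nearest reference vector.
pred = m_labels[np.argmin(np.linalg.norm(X[:, None, :] - m[None, :, :], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```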
because lvq is a supervised method , during the learning phasethe input data are tagged with their correct class .we define the input vector as : reference vector for output neuron as : define euclidean distance between the input vector and the reference vector of the neuron as : when is a minimum , the input vectors are compared to the reference vectors and the closest match is found using the formula where is an input vector , are the reference vectors , and is the winning reference vector .the reference vectors are then updated using the following rules : + if is in the same class as , if is in a different class from , if is not the index of the winning reference vector , the learning rate should generally be made to decrease monotonically with time , yielding larger changes for early iterations and more fine tuning as convergence is approached .the time is taken as positive integers . herewe adopt the optimized - leaning - rate ( see kohonen et al .1995 ) where if the classification is correct and if the classification is wrong . in this work ,the initial value of is selected , 0.3 , whereby learning is significantly speeded up , especially in the beginning , and the quickly find their approximate asymptotic values .two hundred codebook vectors in the codebook is adopted , meanwhile , 7 neighbors is used in knn - classification .the network is trained for 5000 epochs .there are several versions of the lvq algorithm for which the learning rules differ in some details .see kohonen ( 1995 ) for an explanation of the differences between these algorithms .when the learning phase is over , the reference vectors can be frozen , and any further inputs to the system will be placed into one of the existing classes , but the classes will not change .support vector machines ( svm ) are learning machines that can perform binary classification ( pattern recognition ) and real valued function approximation ( regression estimation ) tasks .svm creates functions from a set of labeled training data and operate by finding a hypersurface in the space of possible inputs .this hypersurface will attempt to split the positive examples from the negative examples .the split will be chosen to have the largest distance from the hypersurface to the nearest of the positive and negative examples .intuitively , this makes the classification correct for testing data that is near , but not identical to the training data . in detail , during the training phase svm takes a data matrix as input , and labels each sample as either belonging to a given class ( positive ) or not ( negative ) .svm treats each sample in the matrix as a point in a high - dimensional feature space , where the number of attributes determines the dimensionality of the space .svm learning algorithm then identifies a hyperplane in this space that best separates the positive and negative training samples .the trained svm can then be used to make predictions about a test sample s membership in the class . in brief , svm non - linearly maps their n - dimensional input space into a high dimensional feature space . 
in this high dimesional feature spacea linear classifier is constructed .more information can be found in burges tutorial ( 1998 ) or in vapnik s book ( 1995 ) .given some training data + if the data is linearly separable , one can separate it by an infinite number of linear hyperplanes .we can write these hyperplanes as among these hyperplanes , the one with the maximum margin is called by the optimal separating hyperplane .this hyperplane is uniquely determined by the support vectors on the margin .it satisfies the conditions \ge 1,\qquad i=1,\ldots , l.\ ] ] besides satisfying the above conditions , the optimal hyperplane has the minimal norm the optimal hyperplane can be found by finding the saddle point of the lagrange functional : -1)\ ] ] where are lagrange multipliers .the lagrangian has to be minimized with respect to , and maximized with respect to .the saddle point is defined as follows : where is the maximum point of subject to constraints therefore the optimal separating hyperplane has the form this solution only holds for linearly separable data , but has to be slightly modified for linearly non - separable data , the has to be bounded : where c is a constant chosen a priori . to generalize to non - linear classification, we replace the dot product with a kernel [ . for binary classification , stitson et al.(1996 ) andgunn ( 1998 ) stated it in detail .as for the multi - class classification can refer to weston and watkins ( 1998 ) .multi - layer perceptrons ( mlp ) are feedforward neural networks trained with the standard backpropagation algorithm . if no hidden layer , mlp are also called single - layer perceptron .they are supervised networks so they require a desired response to be trained .they learn how to transform input data into a desired response , so they are widely used for pattern classification . with one or two hidden layers , they can approximate virtually any input - output map .they have been shown to approximate the performance of optimal statistical classifiers in difficult problems .most neural network applications involve mlp .the basic mlp building unit is a model of artificial neuron .this unit computes the weighted sum of the inputs plus the threshold weight and passes this sum through the activation function ( usually sigmoid ) ( 18 ) , ( 19 ) : where is the linear combination of inputs of neuron , is the threshold weight connected to a special input , is the output of neuron , and is its activation function .herein we use a special form of sigmoidal ( non - constant , bounded , and monotone - increasing ) activation function - logistic function in a multilayer perceptron , the outputs of the units in one layer form the inputs to the next layer .the weights of the network are usually computed by training the network using the back - propagation ( bp ) algorithm .a multilayer perceptron represents a nested sigmoidal scheme ( 18 ) , its form for a single output neuron is where is the sigmoidal activation function , is the synaptic weight from neuron in the last hidden layer to the single output neuron , and so on for the other synaptic weights , is the -th element of the input vector .the weight vector denotes the entire set of synaptic weights ordered by layer , then the neurons in a layer , and then their number in a neuron .usually , astronomical object classification is based on the properties of spectra , photometry , multiwavelength and so on . 
in order to check the effectiveness and the efficiency of our provided methods , we classified objects with data from x - ray ( rosat ) , optical ( usno - a2.0 ) and infared ( 2mass ) bands . by positional cross - correlation of rosat , usno - a2.0 and 2mass released databases, we obtain the multi - wavelength data .the three catalogs are described in detail as follows : the rosat all - sky ( rass ) using an imaging x - ray telescope ( tr 1983 ) , are well suited for investigating the x - ray properties of astronomical objects .the rass bright source catalogue ( rbsc ) includes 18,811 sources , with a limiting rosat pspc countrate of 0.05 counts s in the 0.1 - 2.4 kev energy band .the typical positional accuracy is 30 .similarly , the rass faint source catalogue ( rfsc ) contains 105,924 sources and represents the faint extension to rbsc .the rbsc and rfsc catalogues contain the rosat name , positions in equatorial coordinates , the positional error , the source countrate ( ) and error , the background countrate , exposure time , hardness - ratios and and errors , extent ( ) and likelihood of extent ( ) , and likelihood of detection . the two hardness ratios and represent x - ray colors . from the count rate in the 0.1 - 0.4 kev energy band and the count rate in the 0.5 - 2.0 kev energy band, is given by : . is determined from the count rate in the 0.5 - 0.9 kev energy band and the count rate in the 0.9 - 2.0 kev energy band by : . is rosat total count rate in counts s .the parameters of and are source extent in arcsecond and likelihood of source extent in arcsecond , respectively .the amount of is specified , by which the source image exceeds the point spread function .the parameters of and reflect that sources are point sources or extent sources .for example , stars or quasars are point sources ; galaxies or galaxy clusters are extent sources. therefore and are useful for classification of objects .the usno - a2.0 ( monet et al . 1998 ) is a catalog of 526,280,881 stars over the full sky , compiled in the u.s .naval observatory , which contains stars down to about 20 mag over the whole sky .its astrometric precision is non - uniform , depending on position on schmidt plates , typically better than 1 .usno - a2.0 presents right ascension and declination ( j2000 , epoch of the mean of the blue and red plate ) and the blue and red magnitude for each star .the infrared data is the first large incremental data release from the two micron all sky survey ( 2mass ) .this release covers 2,483 square degree of northern sky observed from the 2mass facility at mt .hopkins , az . the catalogue contains 20.2 million point and 73,980 extended sources , and includes three bands j ( 1.25 m ) , h ( 1.65 m ) , and k ( 2.17 m ) magnitudes . for supervised methods , the input sample must be tagged with known classes .so the catalogues of known classes of astronomical objects need to be adopted .we choose known agns from the catalog of agn ( vron - cetty & vron , 2000 ) , which contains 13214 quasars , 462 bl lac objects and 4428 active galaxies ( of which 1711 are seyfert 1 ) .stars include all spectral classes of stars , dwarfs and variable stars , which are adopted from simbad database .normal galaxies are from third reference catalogue of bright galaxies ( rc3 ; de vaucouleurs et al . 1991 ) . 
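the positional cross-identification described in the next paragraphs can be sketched with astropy's skycoord matching; the coordinates and the single search radius below are placeholders (the real matching uses the rosat positional errors and the band-dependent radii quoted below).

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Sketch of a positional cross-match between an X-ray source list and an
# optical catalogue within a fixed search radius.  Coordinates are random
# placeholders; the actual analysis uses ROSAT positional errors and
# band-dependent matching radii.

rng = np.random.default_rng(3)
n_xray, n_opt = 200, 5000
xray = SkyCoord(ra=rng.uniform(0, 10, n_xray) * u.deg,
                dec=rng.uniform(-5, 5, n_xray) * u.deg)
opt = SkyCoord(ra=rng.uniform(0, 10, n_opt) * u.deg,
               dec=rng.uniform(-5, 5, n_opt) * u.deg)

idx, d2d, _ = xray.match_to_catalog_sky(opt)   # nearest optical counterpart
radius = 30.0 * u.arcsec                       # e.g. of order the ROSAT positional error
matched = d2d < radius

print(f"{matched.sum()} of {n_xray} X-ray sources have an optical "
      f"counterpart within {radius}")
# Keeping only one-to-one pairings corresponds to the 'unique entries'
# subsample retained in the analysis.
```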
studying the clustering properties of astronomical objects in a multidimensional parameter spaceneeds catalogue cross - correlation to get multi - wavelength parameters available for all sources .firstly , within a search radius of 3 times the rbsc and rfsc positional error , we positionally cross - identified the catalogue of usno - a2.0 with the rbsc and rfsc x - ray sources , and then cross - matched the data from x - ray and optical bands with infared sources in 2mass first released database within 10 arcsec radius . secondly , we similarly cross - identified the data from three bands with the catalogues of agns , stars and normal galaxies within 5 arcsec radius . only considering the unique entries, the total sample contains 1656 ( 29.9% ) agns , 3718 ( 67.0% ) stars , 173 ( 3.1% ) normal galaxies . in the whole process ,the obtained data of agns , stars and galaxies with catalogue counterparts are divided into four subclasses , ( i ) unique entries , ( ii ) multiple entries , ( iii ) the same entries , ( iv ) no entries . in detail , unique entries refer to the objects which have only one catalogue entry in the various catalogues , or which have a unique identification in private catalogues .multiple entries refer to the objects that have more than one catalogue entries in various catalogues .the same entries point to the two or three kinds of objects which have the same catalogue counterparts .no entries show that the objects may not be matched from one or more catalogues , by the reason of the incompleteness of catalogues .in addition , we point out the sample here is obtained by multi - wavelength cross - identification .for positional error , some sources unavoidably match the unrelated or fake sources . in order to keep sources as true as possibly, we only consider the unique entries , cross out the multiple entries , the same entries and no entries . certainly , knowing which are true sources , we need to compute the probability to assess the validity of identifications of the counterparts from three bands , just like what mattox et al .1997 , rutledge et al .2000 do with cross - association . owing to the restrictive aim of this work, we do nt investigate this respect in detail . in the paper ,the plausibility is based on the optical classification , x - ray characteristics like hardness ratios and extent parameter , and the infrared classification ( stocke et al .1991 ; motch et al .1998 ; pietsch et al . 1998 ; he et al .according to the results of the medium sensitivity survey ( emss ; stocke et al .1991 ) , x - ray - to - optical flux ratio , , was found to be very different for different classes of x - ray emitters .motch et al .( 1998 ) stated that , for source classification , the most interesting parameters are flux ratios in various energy bands , including the conventional x - ray hardness ratios , ratios as well as optical colors .they also presented that , although stars and agns have similar x - ray colors , their mean x - ray to optical ratios are obviously quite different and they are well separated in the vs. 
diagram .cataclysmic variables exhibit a large range of x - ray colors and ratios and can be somewhat confused with both agns and the most active part of the stellar population .however , the addition of a or optical index would allow to further distinguish between these overlapping population .( 2001 ) stated that galactic stars usually have bright optical magnitudes and weak x - ray emission , galaxies with fainter optical magnitudes and median x - ray emission , and agns with the faintest magnitudes and strongest x - ray emission . in their figure 1 .of vs. , agns and non - agns occupy different zones .pietsch et al . ( 1998 ) also used a conservative extent criterion ( and ) as an indicator that the x - ray emission does not originate from a nuclear source .since the corresponding parameter spaces overlap significantly for different classes of objects , an unambiguous identification based on one band data alone is not possible . in order to classify sources, we consider the data from optical , x - ray and infrared bands .the chosen parameters from different bands are ( optical index ) , ( optical - x - ray index ) , , ( x - ray index ) , ( x - ray index ) , , , ( infrared index ) , ( infrared index ) , ( infrared - x - ray index ) .motch et al .( 1998 ) showed that the x - ray to optical flux ratio can be approximate to , assuming an average energy conversion factor of 1 pspc cts s for a 10 erg s flux in the range of 0.1 to 2.4 kev .so can be viewed as an x - ray - to - optical flux ratio , similarly , is an x - ray - to - infrared flux ratio .the mean values of parameters for the sample are given in table 1 .table 1 indicates that some mean values of parameters have rather big scatter .the value of normal galaxies is obviously larger than those of agns and stars ; the value of agns is higher than those of stars and normal galaxies . for the mean values of , which subdivides the hard range ,there are only marginal differences between the individual classes of objects .this applies to the total sample .there is a trend that galaxies seem to have somewhat higher values than agns and stars .agns and stars have on the average the lower , i.e. , they have the softer spectral energy distribution ( sed ) . a significantly harder sed is found for normal galaxies with .this is indeed what is expected for this class of objects which exhibit a rather hard intrinsic spectrum caused by thermal bremsstrahlung from a hot ( ) plasma(cf .e.g. b 1996 ) .the mean values of and of normal galaxies is apparently larger than agns and stars .furthermore , those of agns are larger than stars .as table 1 shows , galaxies are not only 0.76 mag in , but they also have values , 0.37 mag , redder than stars .likewise , agns are redder than stars , too .we also find that the mean +2.5log(cr) and +2.5log(cr) values of agns are much higher than those of stars and galaxies .this can be explained by the fact that agns are strong x - ray emitters . [cols=">,<,<,<,<,<,<",options="header " , ] table 2 shows that the efficiency of classification is rather high , more than 90% when only considering the important features .apparently , it is simple and applicable to choose a few good features for classification . but compared to the results by the automated algorithms , such a method is a little inefficient .after all , the method is limited by itself for it ca nt avoid losing information only with a few features .what s more , sometimes it is very difficult to find such good features . 
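when such parameters are assembled programmatically, the construction might look as follows. the sketch builds one possible version of the ten-parameter feature vector described earlier in this section; the column names, and the use of the standard rass-style hardness-ratio definitions (hr1 from the 0.1-0.4 and 0.5-2.0 kev count rates, hr2 from the 0.5-0.9 and 0.9-2.0 kev count rates), are assumptions consistent with the text rather than the authors' exact pipeline.

```python
import numpy as np

def hardness_ratios(cr_soft, cr_hard, cr_hard1, cr_hard2):
    """assumed RASS-style hardness ratios:
    cr_soft: 0.1-0.4 keV, cr_hard: 0.5-2.0 keV,
    cr_hard1: 0.5-0.9 keV, cr_hard2: 0.9-2.0 keV (all in counts/s)."""
    hr1 = (cr_hard - cr_soft) / (cr_hard + cr_soft)
    hr2 = (cr_hard2 - cr_hard1) / (cr_hard2 + cr_hard1)
    return hr1, hr2

def build_features(cat):
    """one plausible 10-parameter feature vector per source; `cat` is assumed
    to be a dict of numpy arrays with hypothetical keys
    B, R (USNO), CR, HR1, HR2, ext, extl (ROSAT), J, H, K (2MASS)."""
    return np.column_stack([
        cat["B"] - cat["R"],                    # optical index
        cat["B"] + 2.5 * np.log10(cat["CR"]),   # ~ optical-to-X-ray index
        cat["CR"],
        cat["HR1"], cat["HR2"],                 # X-ray indices
        cat["ext"], cat["extl"],
        cat["J"] - cat["H"],                    # infrared indices
        cat["H"] - cat["K"],
        cat["J"] + 2.5 * np.log10(cat["CR"]),   # ~ infrared-to-X-ray index
    ])
```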
only depending on other tools , such as principal component analysis ( folkes et al .1996 , zhang et al .2003 ) , we can find the principal features . if the number of principal components is more than 3 , it is not appliable to use simple cutoff for the difficulty of visualization . as a result , it is better to apply automatic approaches under such situations . for lvq and slp ,as shown by tables 3 and 5 , the results are rarely affected by the number of space dimension when the space owns the important features .but for svm , in contrast , the result of table 4 is closely connected with the number of space dimension even including the important features .moreover , the more parameters considered , the higher the accuracy is . for low dimensional spaces , lvq and slp are better . while for high dimensional spaces , svm shows its superiority . moreover, the statistics listed in tables 3 - 5 give a view of how well the algorithms did in classifying agn and non - agn objects .these statistics tell us how effective a given method is at correctly identifying a true agn as an agn or a true non - agn as a non - agn .in other words ,how often does the method misidentify objects ? if the number of agn objects identified as non - agns were zero , the classified accuracy of agns is 100% .conversely , if the number of non - agns identified as agns were zero , the classified accuracy of stars and normal galaxies is 100% .the generally lower values of the classified accuracy of agns compared to those of stars and normal galaxies may be a result of the smaller sample size for agns ( 1656 vs. 3891 ) .this suggests that it would be useful to run these tests again with a larger sample base for the methods examined here .given our results for the methods presented here , we are encouraged that distinguishing between a number different types of objects should be possible .for such a project , a larger number of samples of each type of object would be necessary to have an adequate ability to distinguish between the classes .comparing the computed results , we conclude that lvq , svm and slp are effective methods to classify sources with multi - wavelength data . with the data from three bands, we can classify agns from stars and normal galaxies effectively by lvq , slp or svm .this also indicates that the chosen parameters are such good feature vectors to separate agns from stars and normal galaxies .we believe the performance will increase if the data are complete or the quality and quantity of data improves .moreover , these methods can be used to preselect agns from large numbers of sources in large surveys avoiding wasting time and energy , when studying agns or cosmology .the three supervised learning methods we investigated here gave comparable results in a number of situations .generally , the more features considered , the better results svm gave ; however , the results of lvq and slp were considerable with different number of attributes . 
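the per-class statistics discussed above (how often a true agn is identified as an agn, a true non-agn as a non-agn, and so on) can be computed directly from the predictions of any of the three classifiers; a small helper, with the label conventions of the earlier sketch:

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, classes=("AGN", "star", "galaxy")):
    """fraction of each true class recovered correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for c in classes:
        mask = y_true == c
        out[c] = float(np.mean(y_pred[mask] == c)) if mask.any() else float("nan")
    return out

# e.g. per_class_accuracy(y_te, svm.predict(X_te)) for the SVM trained earlier
```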
also ,the different methods , while giving different quality results in a number of cases , were comparable for most of the samples we examined .however , our results suggest that the parameters we choose did not adequately pick out characteristics of the objects in all cases .other parameters added from more bands that effectively summarize the features of sources , such as from radio band , appear to do better ( krautter et al .thus we can improve the classified accuracy of agns or stars and normal galaxies , even classify different types of agns .moreover , these methods can be used for other types of data , such as spectral data and photometric data .we believe that it would be beneficial to have more extensive comparisons between different methods . only then can we take some of the magic out of determining what parameters to choose and know which method to use better in different cases .the performances of lvq and slp are different from that of svm , which arises from different methods based on different theories .svm embodies the structural risk minimization ( srm ) principle , which is superior to empirical risk minimization ( erm ) principle that conventional neural networks employ .most neural networks including lvq and slp are designed to find a separating hyperplane .this is not necessarily optimal .in fact many neural networks start with a random line and move it , until all training points are on the right side of the line .this inevitably leaves training points very close to the line in a non - optimal way .however , in svm , a large margin classifier , i.e. a line approaching the optimal is sought . as a result, svm shows better performance than lvq and slp in the high dimensional space .sources classification depends on the quality and amount of real - time data and on the algorithm used to extract generalized mappings .availability of the high - resolution multi - wavelength data constantly increases .the best possible use of this observational information requires efficient processing and generalization of high - dimensional input data. moreover , good feature selection techniques , as well as good data mining methods , are in great demand .a very promising algorithm that combines the power of the best nonlinear techniques and tolerance to very high - dimensional data is support vector machines ( svm ) . in this work we have used histogram as the feature selection technique and applied lvq , slp and svm to multi - wavelength astronomy to classify agns from stars and normal galaxies .we conclude that the features selected by histogram are applicable and the performance of svm models can be comparable to or be superior to that of the nn - based models in the high dimensional space .the advantages of the svm - based techniques are expected to be much more pronounced in future large multi - wavelength survey , which will incorporate many types of high - dimensional , multi - wavelength input data once real - time availability of this information becomes technologically feasible .all these methods can be used for astronomical object classification , data mining and preselecting agn candidates for large survey , such as the large sky area multi - object fiber spectroscopic telescope ( lamost ) .various data , incuding morphology , photometry , spectral data and so on , can be applied to train the methods and obtain classifiers to classify astronomical objects or preselect intresting objects . 
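the histogram-based feature selection mentioned above is not spelled out in detail; one plausible reading, sketched below, scores each parameter by how little its class-conditional histograms overlap and keeps the top-ranked parameters. this is an illustrative assumption, not necessarily the authors' exact criterion.

```python
import numpy as np

def histogram_separation(x, y, classes=("AGN", "star", "galaxy"), bins=30):
    """1 - mean pairwise overlap of the normalised class-conditional histograms
    of feature x; 1 means perfectly separated classes, 0 identical ones."""
    edges = np.histogram_bin_edges(x, bins=bins)
    hists = []
    for c in classes:
        h, _ = np.histogram(x[y == c], bins=edges)
        hists.append(h / max(h.sum(), 1))
    overlaps = [np.minimum(hists[i], hists[j]).sum()
                for i in range(len(hists)) for j in range(i + 1, len(hists))]
    return 1.0 - float(np.mean(overlaps))

# rank the columns of a feature matrix X by this score:
# order = sorted(range(X.shape[1]),
#                key=lambda k: histogram_separation(X[:, k], y), reverse=True)
```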
when training sets are lacking, we may explore unsupervised methods or outlier-finding algorithms to discover unusual, rare, or even new types of objects and phenomena. in addition, with the development of the virtual observatory, these methods will become part of the toolkits of the international virtual observatory. we are very grateful to the anonymous referee for important comments and suggestions. we would like to thank the lamost staff for their sincere help. this research has made use of the simbad database, operated at cds, strasbourg, france. this paper has also made use of data products from the two micron all sky survey, which is a joint project of the university of massachusetts and the infrared processing and analysis center / california institute of technology, funded by the national aeronautics and space administration and the national science foundation. this research is supported by the national natural science foundation of china under grant no. 10273011. stitson, m. o., weston, j. a. e., gammerman, a., et al. 1996, theory of support vector machines, technical report csd-tr-96-17, department of computer science, royal holloway college, university of london
data mining is an important and challenging problem for the efficient analysis of large astronomical databases and will become even more important with the development of the global virtual observatory. in this study, learning vector quantization (lvq), single-layer perceptron (slp) and support vector machines (svm) are applied to multi-wavelength data classification. a feature selection technique is used to evaluate the significance of the considered features for the classification results. we conclude that lvq and slp show better performance when fewer features are used, whereas svm performs better when more features are considered. the focus of automatic classification is on developing efficient feature-based classifiers. the classifiers trained by these methods can be used to preselect agn candidates.
energy harvesting from motion is a means to power wireless sensor nodes in constructions , machinery and on the human body .a vibration energy harvester contains a proof mass whose relative motion with respect to a frame drives a transducer that generates electrical power .linear resonant devices are superior when driven by harmonic vibrations at their resonant frequency , but perform poorly for off - resonance conditions . as real vibrations may display a rich spectral content , sometimes of broadband nature , there has been considerable interest in using nonlinear suspensions to shape the spectrum of the harvester s response to better suit the vibrations . the wider spectral response of nonlinear devices is expected to be beneficial for broadband vibrations .the studies so far indicate some advantages of nonlinearities for broad - banded vibrations , but little is known about which conditions make a nonlinear harvester favorable compared to a linear one .this is due to lack of adequate theory and due to the studies being concerned about specific experimental or numerical examples of nonlinear harvesters that are compared to specific examples of linear harvesters that could have been chosen differently .furthermore , several studies do not consider the role of electrical loading which is known to have a dramatic influence on the consequences of mechanical nonlinearities for the output power .white noise is widely used in physics and engineering , and is also important in studying broadband energy harvesting .if the vibration spectrum is flat over the frequency range of the harvester , the harvester itself provides a cut - off making the infinite bandwidth of white noise a meaningful idealization .white noise approximates colored noise with correlation time sufficiently short compared to the characteristic times of the system .aspects of a nonlinear harvester s performance hinging on a finite correlation time and not present for white noise are , albeit interesting , necessarily relying on a limited vibration bandwidth .therefore white noise is a good case for investigating broadband performance .here we investigate theoretically the behavior of mechanically nonlinear energy harvesters driven by a gaussian white noise acceleration .we derive rigorous upper bounds on the output power for arbitrary elastic potential and show that subject to mild restrictions on the device parameters , it is possible to find a linear device that performs equally well as the upper bound .we give a compact expression for the output power that we use to numerically investigate the weak coupling limit of harvesters for different quartic polynomial potentials taking electrical loading fully into account .an energy harvester model that is nt technology specific is shown in fig .[ fig : ehsystem ] . 
the corresponding state space equations with a linear electromechanical transducer and a nonlinear mechanical suspensioncan be written where is the proof mass , its relative displacement , its velocity , the open - circuit internal energy , the transducer - electrode charge , the output voltage , the current , the damping coefficient , the load resistance , the clamped capacitance and the transduction factor .the device - frame acceleration is gaussian white noise with a two - sided spectral density .the equations can represent a piezoelectric or an electrostatic energy harvester .an electromagnetic harvester gives the same mathematical structure , but different physical interpretation .we use charge as the independent variable .using voltage instead is physically equivalent and also common , see e.g. .ensemble averages with respect to the stationary distribution generated by the process ( [ eq : sp1]-[eq : sp3 ] ) will be denoted by .the mean output power will be our main object of interest .a number of other expressions for immediately follow by using stationarity , ( [ eq : sp1 ] ) and ( [ eq : sp3 ] ) .we will use some of these expressions without giving the derivation .all results for linear systems are exact and taken from unless said otherwise . from ( [ eq : sp3 ] ) , .the second term on the right hand side of ( [ eq : sp2 ] ) is and can then be dropped in the limit .this is the weak coupling limit , which in the stationary state has the reduced probability density where is a normalization constant .we denote expectations in this limit by .in this section , we prove that a previously known lemma on the mechanical input power of linear harvesters also encompasses mechanically nonlinear ones , and discuss its consequences .we then show that known asymptotic formulas for large or small load resistances are upper bounds on output power .finally we find improved bounds that are asymptotically correct in both limits and compare to exact results for a linear harvester .the important observation that the mean input power is was made in where it was proved for linear harvesters . for our nonlinear system and ,all power is dissipated in the damper , ( [ eq : wst0 ] ) implies the equipartition theorem , and . for general , consider the input energy over a time interval .when the actually continuously differentiable is modeled as white noise , the appropriate stochastic representation of the energy is a stratonovich integral .we have where is an ito integral and has zero expectation .the input - energy expectation is then which yields the stated expression for .the observation means that is an efficiency that should be maximized , as opposed to linear narrow - band harvesting where power transfer is maximized .it also implies a power balance for linear harvesters , as where is the transducer electromechanical coupling factor , is the open - circuit stiffness and is the open - circuit quality factor .hence , it is impossible to improve significantly on a linear harvester that is already very efficient .the device in for example , has resulting in .the great number of harvesters , especially those with small volume , that perform substantially below their theoretical maximum , suggests that the weak coupling regime nevertheless has great practical relevance .the load resistance determines the electrical time scale distinguishing different regimes of operation .when is the fastest scale , i.e. 
, we have from ( [ eq : sp3 ] ) , it is readily proved that .one can also show that .hence , both asymptotic relations in ( [ eq : powsmalltau ] ) are upper bounds on the output power .we note that the bounds are valid for any that permits a stationary distribution and that the output power is otherwise independent of when . when the electrical time scale is the slowest in the system , i.e. when , we have the leftmost asymptotic formula in ( [ eq : powlargetau ] ) is also an upper bound .this is seen by using ( [ eq : sp3 ] ) to find which gives the inequality when dropping the second term .the rightmost asymptotic formula in ( [ eq : powlargetau ] ) need not be an upper bound as can be inferred already from linear theory .we note that ( [ eq : powlargetau ] ) , in contrast to ( [ eq : powsmalltau ] ) , is strongly dependent on as it is proportional to .the maximum power as a function of must necessarily be found at an intermediate value of between the small- and large- regimes . since the output power is respectively insensitive and sensitive to the nature of in these two regimes , the degree to which the maximum power can be improved by mechanical nonlinearities is an open question .we now address the potential benefits of nonlinear devices by deriving improved power bounds and comparing to linear behavior .define and find the values of the constants and that minimize .eliminate covariances between and using and use and ( [ eq : powxq ] ) to write the minimum value as next , use this to eliminate the variance of in ( [ eq : powxq ] ) and rearrange to obtain where we see that ( [ eq : ubound ] ) agrees with ( [ eq : powsmalltau ] ) and ( [ eq : powlargetau ] ) in their respective limits and is a tighter bound . the quantity can be used to eliminate the displacement variance in ( [ eq : ubound ] ) . using we find a _lower _ bound on which we substitute back into the power balance equation to obtain where the new bound is is manifestly less than and is asymptotically approaching the exact result at both the extreme limits of .we can interpret as the root - mean - square frequency of the spectrum of the displacement .this follows from representing the variances in terms of the spectral densities and of and respectively , that is the most optimistic estimate of output power permitted by ( ) is found for load resistances such that and is this can be compared to the exact output power of an optimally loaded linear harvester which is where is the open - circuit resonance .the two power expressions differ only in terms in the denominators : in ( [ eq : pu2opt ] ) v. in ( [ eq : plinopt ] ) . withall other parameters except load resistance held equal , a linear system can therefore be made to perform better than , worse than or equally to the bound depending on its stiffness .it will meet the performance of the bound if its stiffness is such that .the only fundamental restriction on the linear system is that it is stable , i.e. has which is equivalent to .hence , a linear device meeting the bound is realizable if _ therefore nonlinear harvesters are not fundamentally better than linear ones . _harvesters that have their spectrum shaped by nonlinear design of their proof mass suspension will , like linear resonant devices , typically be designed to have much less than the characteristic frequencies of proof - mass motion in order to maximize performance .we therefore expect to be a typical case for such nonlinear devices . 
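the moments and the root-mean-square frequency that enter these bounds can be estimated by direct simulation of the proof-mass dynamics. the sketch below integrates the weak-coupling limit of the model (electrical back-action on the mass neglected) with the euler-maruyama method; the sqrt(S) noise scaling assumes an acceleration autocorrelation S*delta(tau), one common convention that may need adjusting to the spectral-density definition in use, and all parameter values are illustrative.

```python
import numpy as np

def simulate_proof_mass(U_prime, m=1.0, b=0.1, S=1.0, dt=1e-3,
                        n_steps=500_000, seed=0):
    """euler-maruyama integration of
         dx = v dt,   m dv = (-U'(x) - b v) dt + m sqrt(S) dW,
    i.e. the weak-coupling (no electrical back-action) proof-mass dynamics."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    v = np.empty(n_steps)
    x[0] = v[0] = 0.0
    sq = np.sqrt(S * dt)
    for n in range(n_steps - 1):
        x[n + 1] = x[n] + v[n] * dt
        v[n + 1] = (v[n] + (-U_prime(x[n]) - b * v[n]) * dt / m
                    + sq * rng.standard_normal())
    return x, v

# quartic potential U(x) = k x^2 / 2 + k4 x^4 / 4 (bistable if k < 0)
k, k4 = -1.0, 1.0
x, v = simulate_proof_mass(lambda z: k * z + k4 * z ** 3)
omega_rms = np.sqrt(v.var() / x.var())   # rms frequency of the displacement spectrum
print("var(x) =", x.var(), "var(v) =", v.var(), "omega_rms =", omega_rms)
```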
a corresponding linear system performing equally to the bound , will then have .that is , its resonance lies within the frequency range of the nonlinear harvester s spectrum .we note that failure to fulfill the criterion ( [ eq : lincrit ] ) because of the second term on the r.h.s , corresponds to coupling strong enough that a linear device is not an alternative due to lack of stability or due to being only marginally stable .we would expect this situation for truly nonresonant devices with low damping . for approaching this limit from below , one has the high - efficiency situation discussed in section [ sec : powerbalance ] even with considerable damping ( moderate for the linear device ) .while ( [ eq : ubound2 ] ) is always an upper bound on the output power , it is quite possible that this bound is a poor approximation and considerably overestimates the actual output power .we might expect this situation when the spectrum has multiple peaks widely separated in frequency such as for quartic bistable potentials .if so , the actual performance can be met by a linear device with larger than by an amount in correspondance to the degree of overestimate .this has to be checked for each particular case .the criterion ( [ eq : lincrit ] ) is a sufficient , but not necessary , condition for the realizability of a linear harvester that performs equally well or better than a harvester characterized by .we now consider how to directly calculate the output power for concrete examples . from ( [ eq : sp1 ] ) and ( [ eq : sp3 ] ) it follows that . inserting this expression into , we obtain i.e. that the output power is proportional to the laplace transform of the velocity autocorrelation function . in the weak coupling limit , we can approximate by its value for to obtain the leading order . can be found from the transition probability by solving the fokker - planck equation corresponding to ( [ eq : sp1 ] ) and ( [ eq : sp2 ] ) with , i.e. the kramers equation . without pursuing itfurther , we remark that an alternative method to calculate the output power , and therefore also , would be to find a _stationary _ solution of the fokker - planck equation for the energy harvester in the weak coupling limit and use or .we determine numerically from the kramers equation by orthogonal function expansions and matrix continued fraction methods following .the spatial basis functions are where , is a normalization constant and are orthonormal polynomials with as weight function .we express all spatial - basis matrix elements in terms of the recurrence coefficients for which are determined by adapting the lanczos method described in to continuous variables .dimensionless variables distinguished by asterisk subscripts and based on a characteristic length scale and frequency scale are used , e.g. , , and . versus electrical time scale for mono- and bistable potentials at weak coupling , , ( bottom to top ) and .open squares : numerical solution for .solid circles : numerical solution for .solid lines : corresponding upper bounds .dotted line : solution from linearization around potential minimum with stiffness . ] .[ fig : pvr ] as a function of at weak coupling , , and .solid circles : numerical solution . dashed lines : solution from linearization around potential minima with stiffness or .thin solid line : upper bound .inset shows corresponding optimal load given by ( solid circles ) and the root - mean - square frequency ( solid line ) . 
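in the weak-coupling limit the output power is, as stated above, proportional to the laplace transform of the velocity autocorrelation function; in the sketch below the proportionality constant (involving the transduction factor and capacitance) is left out and the laplace variable is taken as the inverse electrical time scale, both of which should be checked against the full expressions. the trajectory from the previous example is reused.

```python
import numpy as np

def velocity_autocorrelation(v, dt, max_lag):
    """biased estimator of phi(tau) = <v(t) v(t + tau)> from a long stationary
    trajectory; returns (phi, tau)."""
    v = v - v.mean()
    n = len(v)
    phi = np.array([np.mean(v[:n - l] * v[l:]) for l in range(max_lag)])
    return phi, np.arange(max_lag) * dt

def laplace_at(phi, tau, s):
    """truncated numerical laplace transform int_0^T phi(tau) e^{-s tau} dtau."""
    return np.trapz(phi * np.exp(-s * tau), tau)

tau_e = 1.0                      # electrical time scale R*C (assumed value)
phi, tau = velocity_autocorrelation(v, dt=1e-3, max_lag=20_000)
P_rel = laplace_at(phi, tau, s=1.0 / tau_e)
print("output power up to a transduction-dependent prefactor:", P_rel)
```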
]we first consider the much studied symmetric quartic potential , choose such that and such that . traces for a bistable potential with and a monostable potential with are shown in fig . [fig : pvr ] . for small values of ,the output power collapses as predicted by ( [ eq : powsmalltau ] ) onto the same asymptotic form for both potentials . for mass vibrates around a potential minimum , giving a performance for larger that differs between the two cases due to their different linear stiffnesses at the minima , i.e. for the bistable potential and for the monostable potential . at , the quartic term in the potentialdetermines the behavior . in the intermediate case ,the two potentials give comparable maximum power even though there is a considerable difference between them for large . for weak coupling , the upper bounds ( [ eq : ubound],[eq : ubound2 ] ) simplify to with .in this limit we can calculate directly from the known expression for and the value of obtained from numerical quadrature using ( [ eq : wst0 ] ) as the probability density .then is independent of , but does depend on .we have so corresponds to the stiffness in standard stochastic equivalent linearization .the bound has a maximum value of at .the maximum value will therefore increase and shift to a larger when is lowered . as can be strongly dependent on the acceleration spectral density , the bound can have a nontrivial dependence on .for example , for and in fig .[ fig : pvr ] , we find respectively and for the bistable potential .this frequency difference is big enough for the bounds to cross . the value is small enough that the proof mass exhibits approximately linear dynamics around the potential minima , as indicated by the agreement between the dotted line in the figure and the numerical calculation .the root - mean - square displacement is then on the order of the half the separation between the potential minima for the bistable system , , is very different from the linear stiffness , and the bound grossly overestimates the actual performance . at small ,the longest time scale is that of interwell transitions as given by kramers rate problem and the large- asymptotics is only reached for values far above the optimum .this demonstrates the necessity of the more complicated numerical treatment in predicting maximum power as opposed to bounding it .[ fig : optpow ] shows the output power versus the parameter when the load is optimized for every .the value of the optimal in the inset varies correspondingly .together with the numerical solution and the value of the bound , we show the output for linear devices with stiffness or as an indication of when the proof mass mostly vibrates around the potential minima .the values of used to calculate the bound are shown in the inset .the maximum power is obtained for a negative value of , i.e with a bi - stable potential , like demonstrated for a fixed load and colored noise in .but , as the bound corresponds to a linear device with , more power can be obtained with a linear device .increasingly negative again leads to vibrations around the minima with rare interwell transitions as discussed above for small , and the bound s overestimate becomes large ( leaving the plot ) . for sufficiently negative , a linear system with stiffness gives less power . from the monotonic frequency - behavior of ( [ eq : plinopt ] ) , we can then conclude that a linear device with somewhat less than , but still larger than can match or outperform the bistable harvester . 
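the weak-coupling quantities used above can also be obtained by numerical quadrature over the reduced stationary density, which is proportional to exp(-U(x)/theta) with an effective temperature theta fixed by the noise level, damping and mass; the exact expression for theta is not restated here and is treated as an input. one standard form of the equivalent-linearization stiffness is then k_w = <x U'(x)> / <x^2>, which the sketch evaluates for the bistable quartic potential.

```python
import numpy as np

def weak_coupling_stats(U, U_prime, theta, x_max=10.0, n=200_001):
    """<x^2> and <x U'(x)> under w(x) proportional to exp(-U(x)/theta)."""
    x = np.linspace(-x_max, x_max, n)
    w = np.exp(-(U(x) - U(x).min()) / theta)
    w /= np.trapz(w, x)
    x2 = np.trapz(x ** 2 * w, x)
    xUp = np.trapz(x * U_prime(x) * w, x)
    return x2, xUp

k, k4, theta = -1.0, 1.0, 0.5          # bistable quartic and an assumed theta
U = lambda z: 0.5 * k * z ** 2 + 0.25 * k4 * z ** 4
Up = lambda z: k * z + k4 * z ** 3
x2, xUp = weak_coupling_stats(U, Up, theta)
k_w = xUp / x2                          # equivalent-linearization stiffness
print("<x^2> =", x2, " k_w =", k_w)
```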
for small negative and all positive values of in fig .[ fig : optpow ] , linear devices with the same stiffness or as the nonlinear devices have at their potential minima give more power .this can by understood from the quartic term of the potential limiting proof mass motion .we also note that the bound is a good approximation for positive , as was also the case in fig .[ fig : pvr ] .these considerations show that the motivation for utilizing nonlinear stiffness is rather one of necessity than one of advantage .implementation constraints such as , e.g. , package size and/or beam dimensioning may prohibit linear operation . in this respect , we can think of the quartic term of the potential as a model of proof mass confinement or beam stretching at large amplitudes ., hardening duffing spring ; dashed - dotted line : , negative tangential stiffness arises ; dashed line : , bi - stability arises ; symmetric bistable potential . ]we now consider a suspension made of a stable elastic material without built - in stress , choose and require , and .the lowest order nontrivial polynomial form can then be parametrized as where , and is a length scale , see fig .[ fig : quarticasym ] which illustrates how the potential varies with .we choose and as characteristic scales . a linear system with stiffness constrained to the same value as in ( [ eq : quarticasym ] ) , and therefore with , is used in some comparisons .relative to acceleration spectral density versus both for ( thick lines ) and for optimal at each point ( markers ) .thin solid line : linear device with .all devices have the same linear stiffness .medium thick , grey solid line : upper bound for . ] ( from bottom to top traces at the highest frequencies ) .solid lines : . dashed lines : . ]figure [ fig : pvs ] compares output power as function of acceleration spectral density for harvesters with different values of the parameter . to ease comparisonthe power is divided by .a linear harvester then appears as a horizontal line as shown for the particular case with . for each nonlinear potential ,results are shown both with fixed load ( lines ) and with optimized at each value of ( markers ) . is optimal for the linear system with , and therefore for all the shown potentials at small .the difference in output power between the two loading cases are moderate for these examples .it is largest for the largest values of which have the lowest .for example , for we have and from lowest to highest . from these valueswe also note that increased power correlates with lower as we would expect from the form of the bound ( [ eq : uboundwc ] ) .[ fig : pvs ] shows that the nonlinear devices with give an -range of better performance than their linear counterpart with .this is the case even with which is optimal only for that linear device .the consistently lower power for is due to the stiffening nature of the potential which limits motion and shifts the spectrum to higher frequencies .the other potentials have a range of softening behavior causing a shift to lower frequencies and higher power . also shown on dimensionless form in fig .[ fig : pvs ] ( grey line ) , is ( [ eq : uboundwc ] ) for evaluated with .each point of this curve represents an optimally loaded linear device with open - circuit frequency . for ,this corresponds to . 
if we compare to a linear system with instead of one with , it has outperforming all nonlinear cases in fig .[ fig : pvs ] over all values of base acceleration spectral density .the comparison between nonlinear and linear suspensions to judge their relative merits is only fair if the harvester responses are within approximately the same frequency range . in the preceding analysis we secured that by choosing the open - circuit frequency of the linear device approximately equal to the of the nonlinear device .we also discussed how this condition could be relaxed for weakly excited bistable systems .to be more specific on the spectral characteristics , the velocity spectral density for the bistable potential with and for the monostable potential with is plotted in in fig .[ fig : spect ] for a selection of -values . for both potentials ,the spectra demonstrate an increased broadening and a tendency of downwards - in - frequency shift of the spectral weight . despite their differences, these two potentials gave very similar performance in fig .[ fig : pvs ] and also display similar spectral shapes here . if we consider the curve for in fig .[ fig : spect ] , we see that the choice for the linear system discussed above lies within the spectrum of the nonlinear device and therefore is a fair case to compare to .even though we only considered simple phenomenological potentials ( [ eq : quarticasym ] ) , the broadening and flattening of the spectrum and the better - than - linear power - characteristic within an -range replicate experiments on a device with an asymmetric monostable potential .we have shown that when driven by white noise , harvesters with nonlinear stiffness do not have the fundamental performance advantage over linear ones that one could have expected from their wider spectrum .this followed for efficient devices from considerations on input power and for general coupling from power bounds .numerical examples were given for weak coupling .the findings do not preclude advantages of nonlinear - stiffness harvesters subject to vibrations significantly different from wide band noise , e.g. 
off-resonance, sufficiently band-limited vibrations. implementation constraints may render a nonlinear stiffness unavoidable or a desired value of linear stiffness unattainable. we demonstrated advantages when linear stiffness was constrained.
mechanically nonlinear energy harvesters driven by broadband vibrations modeled as white noise are investigated . we derive an upper bound on output power versus load resistance and show that , subject to mild restrictions that we make precise , the upper - bound performance can be obtained by a linear harvester with appropriate stiffness . despite this , nonlinear harvesters can have implementation - related advantages . based on the kramers equation , we numerically obtain the output power at weak coupling for a selection of phenomenological elastic potentials and discuss their merits .
characterizing the capacity of cellular networks is one of the fundamental problems in network information theory .unfortunately , even for the simplest setting consisting of two base stations ( bss ) having one serving user each , which is referred to as the two - user interference channel ( ic ) , capacity is not completely characterized for general channel parameters .exact capacity results being notoriously difficult to obtain , many researchers have recently studied approximate capacity characterizations in the shape of so - called `` degrees of freedom ( dof ) '' , which captures the behavior of capacity as the signal - to - noise ratio ( snr ) becomes large . the dof metric has received a great deal of attention and thoroughly analyzed as multiantenna techniques emerged , especially in cellular networks because of their potential to increase the dof of cellular networks . roughly speaking , equipping multiple antennas at the bs and/or users can drastically increase the sum dof of single - cell cellular networks proportionally with the number of equipped antennas .under multicell environment , cadambe and jafar recently made a remarkable progress showing that the optimal sum dof for the -user ic is given by , which corresponds to the -cell cellular network having one serving user in each cell .a new interference mitigation paradigm called interference alignment ( ia ) has been proposed to achieve the sum dof .multicell cellular networks having multiple serving users in each cell has been studied in under both uplink and downlink operation , each of which is called interfering multiple access channel ( imac ) and interfering broadcast channel ( ibc ) .it was shown in that multiple users in each cell is beneficial for increasing the sum dof of imac and ibc by utilizing multiple users in each cell for ia . as a natural extension , integrating multiantenna techniques and ia techniques has been recently studied to boost the dof of multicell multiantenna cellular networks .the dof of the -user ic having antennas at each transmitter and antennas at each receiver has been analyzed in .more recently , the imac and ibc models have been extended to multiantenna bs and/or multiantenna users , see and the references therein . in this paper, we study a multiantenna two - cell cellular network in which the first and second cells operate as uplink and downlink respectively . 
for better understanding on the motivation of the paper ,we introduce a simple two - cell cellular network in fig .[ figs : motivating_ex ] .the first cell consists of a bs having two antennas and three users but the second cell consists of a bs having three antennas and two users .let us consider how to operate or coordinate this example network in order to maximize its sum dof .as we will explain later , if both cells operate as the conventional uplink or downlink , then the sum dof is limited by two from the dof result of the two - user multiple input multiple output ( mimo ) ic in .hence , activating one of the two cells can trivially achieve the optimal sum dof for these cases .notice that the another option is to operate the first cell as uplink and the second cell as downlink or vice versa .for this case , the two - user mimo ic upper bound in is given by three , suggesting that it might be possible to achieve more than two sum dof .but it is at least impossible to achieve more than two dof by simply activating one of two cells .we will show that for this case the optimal sum dof is given by , strictly greater than that achievable by the conventional uplink or downlink operation .the previous work on the dof of multiantenna cellular networks , however , inherently assumes either uplink or downlink so that it can not capture the possibility of such dof improvement from the uplink downlink operation .therefore , the primary aim of this paper is to figure out whether operating as either the conventional uplink or downlink is optimal or not in terms of the dof for multicell multiantenna cellular networks .we focus on two - cell networks in which the first cell , consisting of a bs with antennas and users , operates as uplink and the second cell , constisting of a bs with antennas and users , operates as downlink .we completely characterize the sum dof and the result demonstrates that , depending on the network configuration , uplink downlink operation is beneficial for increasing the sum dof compared to the conventional uplink or downlink operation . in seminal work , cadambe andjafar showed that the optimal sum dof of the -user ic with time - varying channel coefficients is given by , achievable by signal space ia .the concept of this signal space alignment has been successfully adapted to various network environments , e.g. , see and the references therein .it was shown in that ia can also be attained on fixed ( not time - varying ) channel coefficients .a different strategy of ia was developed in called ergodic ia , which makes interference aligned in the finite snr regime and , as a result , provides significant rate improvement compared with the conventional time - sharing strategy in the finite snr regime .the dof of -user mimo ic has been considered in . for multisource multihop networks ,interference can not only be aligned , but it can be cancelled through multiple paths , which is referred to as interference neutralization .the work has exploited ia to neutralize interference at final destinations , which is referred to as aligned interference neutralization , and showed that the optimal sum dof two is achievable for -user -hop networks with relays . 
similar concept of ergodic ia has been proposed for interference neutralization in showing that ergodic interference neutralization achieves the optimal sum dof of -user -hop isotropic fading networks with relays in each layer .recently , it has been shown in that the optimal sum dof of the -user -hop network with relays is given by .the dof of cellular networks has been first studied by suh and tse for both uplink and downlink environments , called imac and ibc respectively .it was shown that , for two - cell networks having users in each cell , the sum dof is achievable for both uplink and downlink .hence , multiple users at each cell are beneficial for improving the dof of cellular networks . the imac and ibc models have been extended to have multiple antennas at each bs and/or user .for multiantenna imac and ibc , it was shown that there exists in general a trade - off between two approaches : zero - forcing by using multiple antennas and asymptotic ia by treating each antenna as a separate user .recently , reverse time division duplex ( tdd ) , i.e. , operating a subset of cells as uplink and the rest of the cells as downlink , has been actively studied in heterogeneous cellular networks , consisting of macro bss with larger number of antennas and micro bss with smaller number of antennas . under various practical scenarios ,potential benefits of reverse tdd have been analyzed in the context of coverage , area spectral efficiency , throughput , and so on .the rest of this paper is organized as follows . in section [ sec : prob_formulation ] , we introduce the uplink downlink multiantenna two - cell cellular network model and define its sum dof . in section [ sec : main_result ] , we first state the main result of this paper , the sum dof of the uplink downlink multiantenna two - cell cellular network .the proof of the main result is presented in section [ sec : achievability ] .we then discuss some related problems regarding the main result in section [ sec : discussion ] and finally conclude in section [ sec : conclusion ] .we will use boldface lowercase letters to denote vectors and boldface uppercase letters to denote matrices . throughout the paper , ] . on the other hand , the bs in cell ( bs ) equipped with antennas wishes to send an independent message to the user in the same cell ( user ) for all ] .here \in\mathbb{r}^{m_1\times 1} ] is the channel matrix from bs to bs , \in\mathbb{r}^{1\times m_2} ] is the scalar channel from user to user . also , \in \mathbb{r} ] is the transmit signal vector of cell .the additive noise vector at cell , denoted by \in \mathbb{r}^{m_1\times 1} ] , is assumed to follow .each user in cell and bs should satisfy the average power constraint , i.e. , \big)\le p ] and \|^2\right)\le p ] and ] and ] and ] by an achievable dof of user and , ] and ] , ,\cdots,\mathbf{h}_{\beta 1}[n_1])\in \mathbb{r}^{n_1\times 2n_1} ] , and ,\cdots , g_{\beta 1i}[n_1])\in \mathbb{r}^{n_1\times n_1} ] . as shown in the figure, user transmits a single stream via the beamforming vector , where ] .then , we can set linearly independent } ] . in particular , for a fixed , set , where ] satisfying the downlink interference nulling ( in ) condition , i.e. , for all ] as linearly independent vectors in the null space .linearly independent vectors can satisfy the downlink in condition , the number of possible streams for successful decoding at user is given by because one dimension is occupied by the inter - cell interference vectors as seen in fig .[ figs : simple_case1 ] . 
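the downlink interference-nulling step of the simple example can be made concrete with a small numerical sketch: the zero-forcing beamformers of the downlink bs are chosen as an orthonormal basis of the null space of the channel directions that must be protected. the dimensions below follow the example (three transmit antennas, a one-dimensional direction to protect) and are otherwise arbitrary.

```python
import numpy as np

def null_space_basis(A, rtol=1e-10):
    """orthonormal basis (columns) of the right null space of A, via the SVD."""
    A = np.atleast_2d(A)
    _, s, Vh = np.linalg.svd(A)
    rank = int((s > rtol * s.max()).sum()) if s.size else 0
    return Vh[rank:].conj().T

# toy instance: the downlink BS has 3 transmit antennas and must zero-force a
# one-dimensional interference direction (a stand-in for the aligned
# interference / cross link to be protected), leaving 3 - 1 = 2 beamformers.
rng = np.random.default_rng(1)
h_protect = rng.standard_normal((1, 3))
V = null_space_basis(h_protect)          # shape (3, 2)
print(np.allclose(h_protect @ V, 0.0))   # True: the protected direction sees no signal
```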
] hence , bs is able to decode its intended streams achieving one dof each since there is no inter - cell interference and } ] are linearly independent almost surely .finally , from the fact that total streams are delivered over time slots , is achievable . in the following three subsections , we introduce two ia in schemes forgeneral , , , and and then derive their achievable sum dof .we prove that the maximum achievable sum dof by the two proposed schemes coincides with in theorem [ thm : achievable_dof ] . as shown in fig .[ figs : simple_case1 ] , the first key ingredient follows uplink ia from the users in cell to the users in cell . unlike the simple case in fig .[ figs : simple_case1 ] , asymptotic ia using an arbitrarily large number of time slots is generally needed for simultaneously aligning interference from multiple transmitters at multiple receivers .the second key ingredient follows downlink in using antennas from bs to bs and the users in the same cell .we propose two ia in schemes generalizing the main idea in section [ subsec : main_idea ] .the first ia in scheme applies uplink inter - cell ia and downlink inter - cell and intra - cell in .specifically , the users in cell align their interferences at the users in cell . on the other hand, bs nulls out its inter - cell and intra - cell interferences using antennas , each of which is the interference to bs and the users in cell .define ] are assumed to be set such that they satisfy the three constraints in .define ^{n_1n_2} ] , into submessages .let ,\cdots , c^{(\mathbf{s})}_{\alpha i}[n]\big] ] , into submessages } ] denote a length- codeword of gaussian codebook generated i.i.d . from , that is associated with .let .communication will take place over a block of time slots .each of the codewords defined above will be transmitted via a length- time - extended beamforming vector . for easy explanation ,denote the length- time - extended inputs and outputs as &=\left[x_{\alpha i}[(m-1)d+1],\cdots , x_{\alpha i}[md]\right]^{\dagger}\in\mathbb{r}^{d\times 1},\nonumber\\ \bar{\mathbf{x}}_{\beta}[m]&=\left[\mathbf{x}_{\beta}[(m-1)d+1],\cdots , \mathbf{x}_{\beta}[md]\right]^{\dagger}\in\mathbb{r}^{m_2d\times 1},\nonumber\\ \bar{\mathbf{y}}_{\alpha}[m]&=\left[\mathbf{y}_{\alpha}[(m-1)d+1],\cdots , \mathbf{y}_{\alpha}[md]\right]^{\dagger}\in\mathbb{r}^{m_1d\times 1},\nonumber\\ \bar{\mathbf{y}}_{\beta j}[m]&=\left[y_{\beta j}[(m-1)d+1],\cdots , y_{\beta j}[md]\right]^{\dagger}\in\mathbb{r}^{d\times 1},\end{aligned}\ ] ] where ] and , ] .similarly , for ] , ] . that is, user transmits =\gamma\sum_{\mathbf{s}\in \mathcal{s}_t}\bar{\mathbf{v}}^{(\mathbf{s})}_{\alpha i}[m]c^{(\mathbf{s})}_{\alpha i}[m],\ ] ] and bs transmits =\gamma\sum_{j=1}^{n_2}\sum_{k=1}^{\frac{\lambda_2}{\lambda_1}t^{n_1 n_2}}\bar{\mathbf{v}}^{(k)}_{\beta j}[m]c^{(k)}_{\beta j}[m],\ ] ] where is chosen to satisfy the average power .figure [ fig : scheme_i_bc_mac ] illustrates how to construct these length- time - extended beamforming vectors for uplink inter - cell ia and downlink inter - cell and intra - cell in .the detailed construction of such beamforming vectors is explained in the following .since the overall construction is identical for all ] , define =\prod_{1\leq i\leq n_1,1\leq j\leq n_2}g_{\beta ji}[t]^{s_{ji}}\ ] ] for ] .set for all ] occupies at least dimensional subspace and at most dimensional subspace in dimensional space almost surely . 
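the time-extended beamforming values defined above, i.e. products of the cross-link gains raised to exponent tuples in [0 : t-1] for each transmitter-receiver pair, can be generated mechanically; a sketch of that construction follows (dimensions are illustrative, and the subsequent nulling and zero-forcing steps are omitted).

```python
import numpy as np
from itertools import product

def ia_beam_values(g, t):
    """asymptotic-IA beam construction: for every exponent tuple
    s = (s_ji) in {0,...,t-1}^(n1*n2), build the length-D vector whose entry
    at time slot tau is  prod_{j,i} g[j, i, tau] ** s[j, i].
    g has shape (n2, n1, D): cross-link gains over D time slots."""
    n2, n1, D = g.shape
    cols = []
    for s in product(range(t), repeat=n1 * n2):
        s = np.asarray(s).reshape(n2, n1)
        cols.append(np.prod(g ** s[:, :, None], axis=(0, 1)))
    return np.stack(cols, axis=1)            # shape (D, t**(n1*n2))

# toy check of the counting: n1 = n2 = 2 and t = 2 give 2**4 = 16 vectors,
# generically linearly independent in a D = 40 dimensional time extension.
rng = np.random.default_rng(2)
V = ia_beam_values(rng.standard_normal((2, 2, 40)), t=2)
print(V.shape, np.linalg.matrix_rank(V))
```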
from the fact that is a set of linearly independent vectorsalmost surely , ,j\in[1:n_2],\mathbf{s}\in\mathcal{s}_t}\big) ] , ] occupies at most dimensional subspace since the cardinality of is given by .therefore , lemma [ lemma : dim ] holds .+ from , , and . hence , in order to null out inter - cell interference by zero - forcing at bs , ,\mathbf{s}\in\mathcal{s}_t}\right)\end{aligned}\ ] ] for all ] . in order to null out intra - cell interference ,we first define dimensional subspace in dimensional space represented by }\right) ] occupies at most dimensions almost surely , which means the null space of ,j'\in[1:n_2],\mathbf{s}\in\mathcal{s}_t}\right) ] as a subset of basis consisting of the null space of ,j'\in[1:n_2],\mathbf{s}\in\mathcal{s}_t}\right) ] , , and ] orthogonal with the vectors in for all ] can be set to satisfy the downlink inter - cell and intra - cell in conditions almost surely .each submessage will be decoded by zero - forcing .we first introduce the following properties : * is a function of ,j'\in[1:n_2]} ] , ,j'\neq j} ] ( see and property ( a ) ) , based on the above properties , we prove that one dof is achievable for each submessage .+ + * decoding at bs : * + since ,k\in[1:\frac{\lambda_2}{\lambda_1}t^{n_1n_2}]} ] should be a set of linearly independent vectors .note that is a set of linearly independent vectors almost surely .furthermore , from property ( a ) , is a random projection of into dimensional space ( is set independent of ) .therefore , ,\mathbf{s}\in\mathcal{s}_t} ] . since ,k\in[1:\frac{\lambda_2}{\lambda_1}t^{n_1n_2}]} ] should be a set of linearly independent vectors and ,\mathbf{s}\in\mathcal{s}_t}\right)\end{aligned}\ ] ] should be satisfied for all ] is a set of linearly independent vectors almost surely since . now consider the condition in .lemma [ lemma : dim ] shows that ,\mathbf{s}\in\mathcal{s}_t}\right) ] is independent of .therefore is satisfied almost surely if which is satisfied from the assumption that in conclusion , each submessage intended to the users in the second cell can be decoded by achieving one dof almost surely . from the facts that each submessage is delivered via a length- codeword and total submessages are delivered during time slots , the sum dof is achievable under the three constraints in , , and .finally , since converges to as increases , the sum dof in is achievable . in this subsection , we prove that is achievable .assume that ]is set only for the intra - cell in , but not for inter - cell in . that is , should be satisfied for all ] , where } ] orthogonal with the vectors in for all $ ] if which is satisfied from the assumption that . now consider the decoding procedure . even though inter - cell interference from bs is not zero - forced, bs is able to decode all the intended submessages by zero - forcing if the number of dimensions occupied by all signal and interference vectors is less than or equal to , i.e. , which is satisfied from the assumption that .lastly , the condition for successful decoding at each user in cell is the same as in , which is satisfied from the assumption that . therefore , the second ia in scheme achieves the sum dof in .in this section , we discuss about the cell coordination problem figuring out the dof gain achievable by uplink downlink operation in more details in sections [ subsec : dof_gain ] and [ subsec : dof_hetnet ] and also propose a simple ia scheme exploiting delayed csi at transmitters ( csit ) in section [ subsec : delayed_csit ] . 
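the decodability conditions invoked throughout this section reduce to rank conditions on the stacked signal and interference beamformers after receive-side processing; a small helper to check them numerically on a toy instance (the construction of the actual vectors is as in the previous sketches):

```python
import numpy as np

def zero_forcing_decodable(signal_vecs, interference_vecs):
    """True iff the desired streams can be separated from the interference by
    zero-forcing: the signal columns must stay linearly independent of each
    other and of the span of the interference columns."""
    S = np.column_stack(signal_vecs)
    if len(interference_vecs) == 0:
        return np.linalg.matrix_rank(S) == S.shape[1]
    I = np.column_stack(interference_vecs)
    return (np.linalg.matrix_rank(np.hstack([S, I]))
            == S.shape[1] + np.linalg.matrix_rank(I))

# toy check in a 5-dimensional receive space: 2 desired streams and
# interference confined to a 2-dimensional subspace -> decodable almost surely.
rng = np.random.default_rng(3)
sig = [rng.standard_normal(5) for _ in range(2)]
intf = [rng.standard_normal(5) for _ in range(2)]
print(zero_forcing_decodable(sig, intf))
```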
in remark 1 of section [ sec : main_result ] , we have briefly explained the dof gain achievable by uplink downlink operation compared to the conventional uplink or downlink operation . in this subsection, we consider the _ cell coordination problem _ in more details for a general four - parameter space .specifically , the first cell consists of the bs with antennas and users and the second cell consists of the bs with antennas and users .the operation mode of each cell can be chosen to maximize the sum dof .unfortunately , the sum dof of the two - cell multiantenna ibc ( or imac ) is not completely characterized for a general .it was shown in that , for , the sum dof is given by which corresponds to the regime that zero - forcing is optimal .for , on the other hand , zero - forcing is not optimal in general and the sum dof has been characterized only for the symmetric case where and . specifically , the sum dof is given by if , which is achievable by treating each bs antenna as a separate user and then applying asymptotic ia proposed in .to figure out the dof gain from uplink downlink operation over a four - parameter space , for , we define }1_{d_{\sigma}(i , j , k , l)>d_{\sf upper}(i , j , k , l)}}{\lambda^4},\end{aligned}\ ] ] where and denotes the indicator function .note that is given from theorem 1 , which is the sum dof obtained by uplink downlink operation , and is an upper bound on the sum dof obtained by the conventional uplink or downlink operation .hence , from the definition of , uplink downlink operation is beneficial for improving the sum dof at least fraction of the entire four - parameter space .table [ table : gain_fraction ] states with respect to .as the space size increases , the fraction of subspace showing the dof gain from uplink downlink operation increases .for instance , uplink downlink operation can improve the sum dof more than 30 percent of the entire space when . depending on the relationship between , , , and ,the solutions of the above two linear programs are represented as in different forms .hence we first divide the entire four - parameter space into regimes as shown in table [ table : solution ] .* identify a feasible region of for , i.e. , the region of satisfying three constraints in .* find maximizing the objective function among the corner points in the feasible region , which provides . *repeat the above two steps for , which provides . *find .for instance , consider the first regime where in table [ table : solution ] .figure [ fig : feasible ] plots the feasible regions in and for this regime . for , the first constraint becomes inactive and thus at least one of the three corner points yields the maximum of , which gives when . for, on the other hand , only the second constraint becomes active and at least one of the two corner points yields the maximum , which gives when or .hence when . in the same manner, we can derive and , and for the rest of the regimes in table [ table : solution ] .from table [ table : solution ] , if and if .furthermore , in table [ table : solution ] coincides with in theorem [ thm : achievable_dof ] for all the regimes . for the regimewhere , for instance , is given by where the second equality follows since . in a similar manner, we can prove that for the rest of the regimes . in conclusion , which completes the proof . t. s. han and k. kobayashi , `` a dichotomy of functions of correlated sources from the viewpoint of the achievable rate region , '' _ ieee trans .inf . theory _ ,it-33 , pp .6976 , jan .1987 .h. 
weingarten , y. steinberg , and s. shamai ( shitz ) , `` the capacity region of the gaussian multiple - input multiple - output broadcast channel , '' _ ieee trans .inf . theory _ ,39363964 , sep .2006 .w. shin , n. lee , j .- b .kim , c. shin , and k. jang , `` on the design of interference alignment scheme for two - cell mimo interfering broadcast channels , '' _ ieee trans .wireless commun ._ , vol . 10 , pp .437442 , feb .2011 .m. a. maddah - ali , a. s. motahari , and a. k. khandani , `` communication over mimo x channels : interference alignment , decomposition , and performance analysis , '' _ ieee trans .inf . theory _ ,34573470 , aug .2008 .v. s. annapureddy , a. el gamal , and v. v. veeravalli , `` degrees of freedom of interference channels with comp transmission and reception , '' _ ieee trans .inf . theory _ ,58 , pp . 57405760 , sep .2012 .a. s. motahari , s. o. gharan , m. a. maddah - ali , and a. k. khandani , `` real interference alignment : exploiting the potential of single antenna systems , '' _ ieee trans .inf . theory _ , vol .60 , pp . 47994810 ,2014 . c. wang , h. sun , and s. a. jafar , `` genie chains and the degrees of freedom of the -user mimo interference channel , '' in _ proc .information theory ( isit ) _ , cambridge , ma , jul. 2012 .t. gou , s. a. jafar , c. wang , s .- w .jeon , and s .- y .chung , `` aligned interference neutralization and the degrees of freedom of the interference channel , '' _ ieee trans .inf . theory _ ,vol . 58 , pp . 43814395 , jul. 2012 .b. zhuang , r. a. berry , and m. l. honig , `` interference alignment in mimo cellular networks , '' in _ proc .ieee international conference on acoustics , speech and signal processing ( icassp ) _ , prague , czech republic , may 2011 . t. liu and c. yang , `` on the feasibility of linear interference alignment for mimo interference broadcast channels with constant coefficients , '' _ ieee trans .signal processing _21782191 , may 2013 .s. a. ayoughi , m. nasiri - kenari , and b. h. khalaj , `` on degrees of freedom of the cognitive mimo two - interfering multiple - access channels , '' _ ieee trans .wireless commun ._ , vol . 5 , pp .20522068 , jun . 2013 .a. ghosh , n. mangalvedhe , r. ratasuk , m. c. b. mondal , e. visotsky , t. a. thomas , j. g. andrews , p. xia , h. s. jo , h. s. dhillon , and t. d. novlan , `` heterogeneous cellular networks : from theory to practice , '' _ ieee commun . mag ._ , vol .5464 , jun . 2012 .m. kountouris and n. pappas , `` hetnets and massive mimo : modeling , potential gains , and performance analysis , '' in _ proc .ieee antennas and propagation in wireless communications ( apwc ) _ , torino , italy , sep .2013 .k. hosseini , j. hoydis , s. ten brink , and m. debbah , `` massive mimo and small cells : how to densify heterogeneous networks , '' in _ proc .ieee international conference on communications ( icc ) _ , budapest , hungary , jun. 2013 .c. s. vase and m. k. varanasi , `` the degrees of freedom region and interference alignment for the mimo interference channel with delayed csit , '' _ ieee trans .inf . theory _ ,43964417 , jul .
an uplink downlink two - cell cellular network is studied in which the first base station ( bs ) with antennas receives independent messages from its serving users , while the second bs with antennas transmits independent messages to its serving users . that is , the first and second cells operate as uplink and downlink , respectively . each user is assumed to have a single antenna . under this uplink downlink setting , the sum degrees of freedom ( dof ) is completely characterized as the minimum of , , , and , where denotes . the result demonstrates that , for a broad class of network configurations , operating one of the two cells as uplink and the other cell as downlink can strictly improve the sum dof compared to the conventional uplink or downlink operation , in which both cells operate as either uplink or downlink . the dof gain from such uplink downlink operation is further shown to be achievable for heterogeneous cellular networks having hotspots and with delayed channel state information . cellular networks , degrees of freedom , heterogeneous networks , interference alignment , multiantenna techniques , reverse tdd .
in the past two decades , the main focus of the cellular access engineering was on the efficient support of human - oriented services , like voice calls , messaging , web browsing and video streaming services .a common feature of these services is seen in the relatively low number of simultaneous connections that require high data rates . on the other hand ,the recent rise of m2 m communications introduced a paradigm shift and brought into research focus fundamentally new challenges .particularly , m2 m communications involve a massive number of low - rate connections , which is a rather new operating mode , not originally considered in the cellular radio access .smart metering is a showcase m2 m application consisting of a massive number of devices , up to , where meters periodically report energy consumption to a remote server for control and billing purposes . on the other hand ,the report size is small , of the order of bytes .the current cellular access mechanisms , considering all the associated overhead , can not support this kind of operation .there are on - going efforts in 3gpp that deal with the cellular access limitations , investigating methods for decreasing the access overhead for small data transmissions , access overload control and guaranteed quality of service .besides lte , proposes an allocation method for reports with deadlines in gprs / edge , showing that up to devices can be effectively supported in a single cell by avoiding random access and using a periodic structure to serve the devices such that the deadlines are met . in this letter, we consider a system with a periodically occurring pool of resources that are reserved m2 m communications and shared for uplink transmission by all m2 m devices .the re - occurring period is selected such that if a report is transmitted successfully within the upcoming resource pool , then the reporting deadline is met .we note that , if each device has a deterministic number of packets to transmit in each resource pool and if there are no packet errors , then the problem is trivial , because a fixed number of resources can be pre - allocated periodically to each device .however , if the number of packets , accumulated between two reporting instances , is random and the probability of packet error is not zero , then the number of transmission resources required per device in each transmission period is random . on the other hand , as the number of transmission resources in each instance of the resource pool is fixed , the following question arises : _ how many periodically reporting devices can be supported with a desired reliability of report delivery ( i.e. , % ) , for a given number of resources reserved for m2 m communications ? 
_ we elaborate and analyze the proposed approach in lte context ; however , the presented ideas are generic and implementable in other systems where many devices report to a single base station or access point .the rest of the letter is organized as follows .section [ sec : model ] presents the system model .section [ sec : analysis ] is devoted to the analysis of the proposed access method .section [ sec : results ] presents the numerical results and section [ sec : conclusions ] concludes the letter .we focus on the case of periodic reporting , where the length of the reporting interval ( ri ) , denoted by , depends on the application requirements .the m2 m resources for uplink transmission are reserved to occur periodically , at the end of each ri .the periodic reporting is typically modeled either as a poisson process with arrival rate , where devices can actually send none , one , or multiple reports within ri , or as a uniform distribution , where devices send a single packet per ri .our analysis will focus on the former case , but we note that the derived results can be readily applied to the latter arrival model , as well .we assume that all report arrivals that occur within the current reporting interval are served in the next reporting interval .the lte access combines tdma and fdma , such that access resources are represented in 2d , see fig .[ lte_grid ] .as depicted , time is organized in frames , where each frame is composed of subframes with duration ms .the minimum amount of uplink resources that can be allocated to a device is one resource block ( rb ) , corresponding to a single subframe and 12-subcarriers in frequency .we assume that the uplink resources are split into two pools , one reserved for m2 m services ( depicted in blue in fig .[ lte_grid ] ) , and the other used for other services .note that the approach of splitting the resources for m2 m and non - m2 m has often been used , as their requirements are fundamentally different .finally , we assume that there is a set of rbs reserved for m2 m resource pool in each subframe .the m2 m resource pool is divided into two parts , denoted as the preallocated and common pool , which reoccur with period , as depicted in fig .[ preallocated]a ) .we assume that there are reporting devices , and each device is preallocated an amount of rbs from the preallocated pool dimensioned to accommodate a single report and an indication if there are more reports , termed excess reports , from the same device to be transmitted within the same interval .the common pool is used to allocate resources for the excess reports , as well as all the retransmissions of the reports / packets that were erroneously received .these resources can only be reactively allocated and in our case we consider the lte data transmission scheme , where each transmission has an associated feedback that can be used to allocate the resources from the common pool ms ( subframes ) , which includes processing times at the base station and at the device , and which can be assumed negligible taking into account that the ri that we are considering is of the order of thousands subframes . 
] .the length of the m2 m resource pool , preallocated pool and common pool , expressed in number of subframes , are denoted by , and , respectively , see fig .[ preallocated]b ) , such that where denotes the fraction of rbs per subframe required to accommodate a report transmission and where the value of should be chosen such that a report is served with a required reliability .the analysis how to determine , given the constraints of the required number of rbs per report , number of devices and reliability , is the pivotal contribution of the letter and presented in section [ sec : analysis ] .finally , we note that the duration of has a direct impact on the delay ; in the worst case a ( successful ) report is delivered seconds after its arrival , which also defines the delivery deadline .denote by a random variable that models the total number of transmissions of report from device ; i.e. , includes the first transmission and any subsequent retransmissions that may occur due to reception failures .further , assume that the maximum value of is limited to , where is a system parameter , applicable to all reports from all devices .the probability mass function ( pmf ) of can be modeled as a geometric distribution truncated at : =\left\ { \begin{array}{l l } p_e^{k-1 } ( 1-p_e ) , & 1 \leq k \leq l-1 , \\p_e^{l-1 } , & k = l , \end{array } \right .\label{h0 } \end{aligned}\ ] ] where is the probability of a reception failure .recall that the reporting of device is modeled as a poisson process with arrival rate , where the device can send none ( ) , one ( ) or multiple reports ( ) within ri .as stated above , the first transmission of the first report of a device is sent in the preallocated pool , while all subsequent transmissions take place in the common pool .these include : a ) potential retransmissions of the first report , and b ) transmissions and potential retransmissions of all excess reports .denote by the random variable that corresponds to the total number of transmissions from device requiring resources in the common pool : the total number of transmissions of all devices requiring resources from the common pool is : the straightforward way to characterize is to derive its pmf .however , as the supposed number of reporting devices is very large , is a sum of a large number of independent identically distributed ( iid ) random variables .therefore , we apply the central limit theorem and assume that is a gaussian random variable , requiring characterization only in terms of the expectation and variance .we proceed by evaluation of ] , and show in section [ sec : results ] that this approach provides accurate results .the expectation of is : = \mathrm{e}\left[\sum_{i=1}^n r_i\right ] = n \cdot \mathrm{e}\left[r_i\right ] .\label{er}\ ] ] taking into account , it could be shown that : & = \mathrm{e } [ r_i | u_i = 0 ] \mathrm{p } [ u_i = 0 ] + \mathrm{e } [ r_i | u_i \geq 1 ] \mathrm{p } [ u_i \geq 1 ] , \nonumber \\ & = \mathrm{e } [ r_i | u_i \geq 1 ] \mathrm{p } [ u_i \geq 1 ] , \nonumber \\ & = \mathrm{e } \left [ \sum_{j=0}^{u_i } w_{i , j } - 1 | u_i \geq 1 \right ] ( 1 - e^{-\lambda t_{ri } } ) \label{eq : exp}. \end{aligned}\ ] ] by putting and applying wald s equation , the equation becomes : & = \left ( \mathrm{e } [ u_i | u_i \geq 1 ] \mathrm{e } [ w_{i , j } ] - 1 \right ) ( 1 - e^ { - 1 } ) , \nonumber \\ & = \frac{1 - p_e^l } { 1 - p_e } - ( 1 - e^ { - 1 } ) .\label{eq : exp1 } \end{aligned}\ ] ] where we used the fact that = 1 / ( 1 - e^ { - 1 } ) ] , i.e. 
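the derivation above can be checked by direct monte carlo simulation . the sketch below uses illustrative parameter values ( lambda t_ri = 1 , p_e = 0.1 , l = 5 , n = 30000 devices ) : it draws the number of reports per device from a poisson distribution , the number of transmissions per report from the truncated geometric distribution in ( [ h0 ] ) , compares the simulated mean of the common - pool demand r with the wald - equation expression , and sizes the common pool from the gaussian ( clt ) approximation for an outage of 10^-3 .

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameter values; the letter assumes one report per device per RI on average
n_dev  = 30_000     # number of reporting devices N
lam_t  = 1.0        # lambda * T_RI
p_e    = 0.1        # probability of a reception failure
L      = 5          # maximum number of transmission attempts per report
n_runs = 2_000      # simulated reporting intervals

def n_transmissions(n_reports):
    """per-report transmissions W: geometric with success prob 1-p_e, truncated at L (eq. [h0])"""
    return np.minimum(rng.geometric(1.0 - p_e, size=n_reports), L)

def common_pool_demand():
    """R = total transmissions needing common-pool resources in one reporting interval"""
    u = rng.poisson(lam_t, size=n_dev)        # reports per device
    w = n_transmissions(u.sum())              # transmissions of every report of every device
    return w.sum() - np.count_nonzero(u)      # first transmission of the first report is preallocated

samples = np.array([common_pool_demand() for _ in range(n_runs)], dtype=float)

# Wald-equation mean from the letter (valid for lambda * T_RI = 1)
e_w  = (1.0 - p_e**L) / (1.0 - p_e)
e_ri = e_w - (1.0 - np.exp(-1.0))
print(f"mean of R : simulated {samples.mean():.1f}, analytical {n_dev * e_ri:.1f}")

# CLT-based dimensioning: capacity c such that P[R > c] <= 1e-3
c_req = samples.mean() + 3.09 * samples.std()   # Q^{-1}(1e-3) ~ 3.09
print(f"capacity of {c_req:.0f} transmissions -> empirical outage {np.mean(samples > c_req):.4f}")
```

the conversion from a number of transmissions to resource blocks or subframes depends on the report size and modulation and is therefore left out of this sketch .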
, there is no report delivery failure when the total number of required transmissions is not greater then the capacity of the common pool , for .regardless of the scheduling policy applied in the common pool , it is always \leq 1 ] , } ] , i.e. the desired reliability to at least % , to validate the analytical upper bound , we simulated a random scheduler with repeats for each parameter combination .[ fig1 ] shows the percentage of system resources required to serve devices for a reporting interval ri of minute , which corresponds to the most demanding ri according to .the report size ( rs ) is bytes .it can be seen that in the worst case , for the available bandwidth of mhz and the lowest - order modulation ( qpsk ) , up to devices can be served with only % of the available system resources .if a larger bandwidth ( mhz ) and/or higher - order modulations ( -qam ) is used , then only % of the available resources are required to achieve the target report reliability .[ fig3 ] depicts the required fraction of system capacity for m2 m service , when the report rs varies between bytes and kbytes , the system bandwidth is set to mhz , modulation scheme is -qam , and .obviously , the report size has a large impact in the results , demanding up to 30% of the system capacity in the worst case .finally , we note that figs .[ fig1 ] and [ fig3 ] also demonstrate a tight match between the analytical and simulated results .\leq 10^{-3}$ ] , ri of 1 minute , rs of 100 bytes and . ]we have introduced a contention - free allocation method for m2 m that relies on a pool of resources reoccurring periodically in time . within each occurrence, feedback is used to reactively allocate resources to each individual device .the number of transmissions required by an m2 m device within the pool is random due to two reasons : ( 1 ) random number of arrived reports since the last reporting opportunity and ( 2 ) requests for retransmission due to random channel errors .the objective is to dimension the pool of m2m - dedicated resources in order to guarantee certain reliability in the delivery of a report within the deadline .the fact that the pool of resources is used by a massive number of devices allows to base the dimensioning on the central limit theorem .promising results have been shown in the context of lte , where even with the lowest - order modulation only % of the system resources are required to serve m2 m devices with a reliability of % for a report size of bytes .the proposed method can be applied to other systems , such as .a. laya , l. alonso , and j. alonso - zarate , `` is the random access channel of lte and lte - a suitable for m2 m communications ?a survey of alternatives , '' _ ieee commun .surveys tuts _ , vol .16 , no . 1416 , first 2014 .
this letter considers a wireless m2 m communication scenario with a massive number of m2 m devices . each device needs to send its reports within a given deadline and with a certain reliability , e.g. % . a pool of resources , available to all m2 m devices , is made available periodically for transmission . the number of transmissions required by an m2 m device within the pool is random for two reasons : the random number of reports arrived since the last reporting opportunity , and requests for retransmission due to random channel errors . we show how to dimension the pool of m2m - dedicated resources in order to guarantee the desired reliability of report delivery within the deadline . the fact that the pool of resources is used by a massive number of devices allows the dimensioning to be based on the central limit theorem . the results are interpreted in the context of lte , but they are applicable to any m2 m communication system . wireless cellular access , m2 m communications , lte
one of the objectives of time series analysis is to study the detailed structure of the equations of the underlying dynamical system which govern its temporal evolution .this includes the number of independent variables , the form of the flow functions , the nonlinearities in them and parameters of the system .this paper concentrates on the last aspect , i.e. , estimating the parameters of a nonlinear system from a single time series when partial information about the system dynamics is available . assuming that the number of independent variables and the structure of underlying dynamical evolution equations for a nonlinear system is known , we address the problem of determining the values of the parameters . in particular , given a time series for a single variable ( a scalar time series ) , we suggest a simple method which enables us to determine values of the unknown parameters dynamically . the unknown parameters may or may not appear in the evolution equation of the variable for which the time series is given . for this purpose, we employ a combination of two techniques , namely synchronization and adaptive control .owing mainly to the extreme sensitivity to initial conditions , engineering and controlling a nonlinear chaotic system requires a careful analysis .feedback based synchronization techniques are investigated in this context to force a chaotic system , to go to a desired periodic or chaotic orbit .such control mechanisms were suggested by pecora and carroll and many others with an aim to synchronize two chaotic orbits and to stabilize unstable periodic orbits or fixed points . in such mechanisms ,some of the independent variables are used as drive variables and the remaining variables are found to synchronize with the desired trajectory under suitable conditions .there have been many other important attempts in controlling chaotic systems using synchronization .the other method that we use is that of adaptive control which is used to bring back a system , deviated from a stable fixed point due to changes in parameters and variables , to its original state .this mechanism was suggested by huberman and lumer .it was generalized for an unstable periodic orbit and a chaotic orbit by john and amritkar where it was shown that it is possible to synchronize with an unstable periodic orbit or a chaotic orbit starting from a random initial condition and different value of the parameter . in this paper , we show that a simple combination of synchronization and adaptive control methods similar to that described by john and amritkar can be used for extracting information contained in a scalar time series .we approach the problem by considering a dynamical system , in which the number of independent variables and the structure of evolution equations are assumed to be known .a linear feedback function is added to the variable corresponding to that for which the time series is given .this acts as a _ drive _ variable .the feedback serves the purpose of synchronization of all the system variables .the feedback function in our case is proportional to the difference between the new and the old values of the drive variable .the system variables respond to this feedback by synchronizing with the corresponding values in the original system . in the context of application of synchronization techniques to telecommunications ,the new system to be reconstructed is often referred to as the _ receiver _ whereas the old system , from which the time series is made available is termed the _ transmitter_. 
we will borrow the terminology , although the meanings of terms in the two cases are not exactly identical .the synchronization as described above becomes exact when the receiver parameters are set equal to those of the transmitter and takes place whenever the _ conditional lyapunov exponents _ ( cle s ) as defined in the next section are all negative .now assume that precise values of only a few of the transmitter parameters are known to the receiver system .we show that , in such a case it is possible to write simple evolution equations for the unknown parameters ( initially set to arbitrary values ) , which when coupled with the system equations , yield precise values of all the state variables and the unknown parameters asymptotically to any desired accuracy .our method comprises of raising the unknown parameters to the status of variables of a higher dimensional dynamical system which evolve according to a simple set of evolution equations .the receiver forms a subsystem of this higher dimensional system which in addition contains the evolution equation for the unknown parameters .the input to this higher dimensional system is a scalar time series obtained from the transmitter system .thus our method uses a dynamical algorithm to estimate the parameters which are obtained asymptotically .we note that the method of estimating parameters using synchronization and minimization as proposed in ref . , is essentially a static method .the problem of estimating model parameters was also handled in ref . , in which starting with an ansatz , the optimal equations for parameter evolution are obtained .our method gives a simpler and a systematic derivation of the parameter control loop and in many cases , a better convergence rate .it is well known that a great deal of information about a chaotic system is contained in the time series of its variables .techniques like embedding the time series in a space with chosen dimensionality are available for studying the universality class and other _ global _ features of the system .our results suggest that a scalar time series , in addition to the information about the universality class also contains information about the exact values of the parameters of the underlying dynamical system , including the ones which appear in the evolution of other variables .the method and the required notation is developed in section ii .section iii consists of illustrations for lorenz and rssler systems and a set of equations in plasma physics .the effect of noise in the transmitter system is studied in section iv .finally we conclude in section v with a brief summary of results along with a few remarks .in this section , we will describe our method of parameter estimation , for a general system with variables and parameters .we will first consider the case when only one parameter is unknown to the receiver .consider an autonomous , nonlinear dynamical system with evolution equations , where is an -dimensional state vector whose evolution is described by the function .the dot represents time differentiation and are the parameters of the system .now suppose a time series for one variable , which without loss of generality can be taken as , is given as an output of the above system and in addition suppose the functional form of , and the values of all the parameters are known while the time evolution of the remaining variables and value of , the parameter are not known , then formally the problem at hand consists of writing a set of evolution equations which will yield 
the information about the unknown parameter and also other variables . with the unknown parameter explicitly for convenience we rewrite eq .( [ transmitter ] ) as , now we introduce a new system of variables whose evolution equations have identical form to that of .we fix as the drive variable and a feedback is introduced in the evolution of .the parameters are also the same except the one corresponding to the unknown parameter which will be set to an arbitrary initial value denoted by .thus the receiver system will have the structure , where is a feedback function which depends upon the drive variable and the variable .the feedback function can be most simply chosen to be proportional to the difference and the evolution for the drive variable can be written as , where is called the feedback constant .more general forms of the feedback function are also possible and give similar results .the receiver system is formed by eqs .( [ receive ] ) and ( [ feedback ] ) .if the parameter in these equations is set precisely equal to then the two sets of variables , and , after a transient time , evolve in tandem and show exact synchronization under suitable conditions , but because the value of is _ unknown _ to the receiver system , this does not happen .the solution is to set the parameter to an arbitrary initial value , while all others are set to the known values , and _ adapt _ it through a suitable evolution equation .the resulting -dimensional system then evolves all the receiver variables to correct values of the corresponding transmitter variables and simultaneously settles the value of to that of provided all the cle s as defined in the next subsection , are negative .the equation for evolution of the is chosen similar to those used in adaptive control mechanisms , and quite generally can have the form , the form of the function that we have chosen is where is another parameter in the combined -dimensional system formed by eqs .( [ receive ] ) , ( [ feedback ] ) and ( [ adaptation ] ) .we call it the _ stiffness constant_. the values of and together control the convergence rates involved in synchronization and adaptive evolution . towards the end of this subsection, we will show that the above form of function ( eq .( [ adaptation ] ) ) is obtained as a result of dynamic minimization of the synchronization error . the last factor in the eq .( [ adaptation ] ) , , needs some elaboration . in generalthe parameter may or may not explicitly appear in the evolution function in eq .( [ drive ] ) .this stresses a need for identification of two separate cases .if the function explicitly depends on , then the calculation of is straightforward . in case not appear in the function explicitly , it still indirectly affects the evolution of .the information about the value of is contained in the given time series .function `` taps '' this dependence .the calculation of in this case , when function does not explicitly depend on needs to be done carefully .this is done as follows : consider the system formed by ( [ receive ] ) and ( [ feedback ] ) in which , a change in the variable in one time step due to a change in the parameter can be estimated as follows . 
where is the variable of the receiver , such that its evolution contains the parameter explicitly .thus the last of the above equations gives us , a further complication arises if the variable itself does not appear in the function explicitly .in such a case further dependences appearing in more time steps may be considered .note that here , may appear in more than one flow functions and a summation over all such functions becomes necessary . in this case we can write , thus the last factor in eq .( [ adaptation ] ) takes the form , one such case appears in the example of lorenz system , which will be discussed in the next section .now , when more than one parameters of the transmitter are to be estimated , one may use a set of equations similar in form to that of eq .( [ adaptation ] ) .we will use such a set when we discuss lorenz system where it will be assumed that two or three parameters of the lorenz system are unknown to the receiver system .we note that a parameter estimation algorithm as described in ref . can also be used in the estimation of more than one unknown parameters .it uses autosynchronization method based on an active passive decomposition ( apd ) of a dynamical system and starts from an ansatz for the parameter control .in contrast , our method is a dynamical minimization for the synchronization error .this can be seen as follows : let us define the dynamical synchronization error as , where is the receiver parameter corresponding to the unknown parameter and is the drive variable .we note that if takes precisely the value of , then the transmitter and receiver synchronize , which makes the error as defined by eq .( [ errdef ] ) minimum , i.e. zero . to go to this minimum , we want to evolve such that it will go to a value making minimum . with an analogy to an equation in mechanics , where an overdamped particle goes to a mimimum of a potential , we write the following , which leads to , further , to the lowest order in , .hence eq .( [ minimisn0 ] ) can be written as , where is a proportionality constant .this equation is same as eq .( [ adaptation ] ) . in the next subsectionwe will define the _ conditional lyapunov exponents _ ( cle s ) for the newly reconstructed receiver system and state the condition for the combination of synchronization and adaptive control to work convergently such that parameter estimation is possible .consider the transmitter equations ( eq .( [ transmit ] ) ) and the receiver equations ( eqs .( [ receive ] ) , ( [ feedback ] ) and ( [ adaptation ] ) ) .convergence between two trajectories of these systems means that the receiver variables evolve such that the differences and all evolve to zero . 
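for completeness , one way to fill in the elided algebra behind eq .( [ adaptation ] ) is sketched below , assuming the synchronization error is e(mu ' ) = (1/2)[x_1'(mu ' ) - x_1]^2 and that the time step delta t is absorbed into the stiffness constant gamma ; this is a reconstruction of the argument just described , not an additional result .

```latex
% minimal reconstruction of the overdamped gradient-descent argument,
% assuming E(\mu') = \tfrac{1}{2}\,[x_1'(\mu') - x_1]^2
\begin{align*}
  \frac{d\mu'}{dt} &= -\gamma\,\frac{\partial E}{\partial \mu'}
                    = -\gamma\,\big(x_1' - x_1\big)\,
                      \frac{\partial x_1'}{\partial \mu'} ,\\
  \frac{\partial x_1'}{\partial \mu'} &\simeq
      \delta t\,\frac{\partial \dot{x}_1'}{\partial \mu'}
      \qquad\text{(to lowest order in the time step } \delta t\text{)} ,\\
  \Longrightarrow\quad
  \frac{d\mu'}{dt} &= -\gamma\,\big(x_1' - x_1\big)\,
      \frac{\partial \dot{x}_1'}{\partial \mu'} .
\end{align*}
```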
in the -dimensional space formed by these differences, origin acts as a fixed point and the condition for the algorithm to work is the same as the stability condition for this fixed point .if the above differences are considered to form an -dimensional vector then the differential evolves as , where the jacobian matrix is given by , where the function describes the evolution of the parameter as in eq .( [ genadapt ] ) and the derivatives in the matrix are evaluated at which is a fixed point .the condition for the convergence of our procedure is that the real part of the eigenvalues of the matrix or _ conditional lyapunov exponents _ ( cle s ) are all less than zero .it can be seen from the above matrix equation that choices of the feedback constant and the stiffness constant affect the values of conditional lyapunov exponents .thus the method will work convergently only for suitably chosen and .when these are chosen such that the largest of the cle s become positive , the algorithm does not work due to diverging trajectories . in the next sectionwe will illustrate the method using the examples of lorenz and rssler flows and a set of equations in plasma physics .as a first example , we study the lorenz system . we divide the discussion in two parts . in the first , we present the results when only a single parameter is estimated in a lorenz system .three different cases are discussed in detail . in the later part ,we extend our method for the case when more parameters are to be estimated .the lorenz system is given by , where form the state space and form the three dimensional parameter space .now assume that the time series for is given , and two of the three parameters are also known .we consider the following cases .case 1 : when the unknown parameter appears in the evolution of : here assuming to be the unknown parameter , we create a receiver system as described in the section i , given by , where are the new state variables and are the parameters , and being the same as those in the transmitter while is initially set to an arbitrary value . is the the feedback constant .these constitute the receiver system . is the _ drive _ variable .the parameter , which is initially set to an arbitrary value , is made to evolve through an equation similar the equation ( eq . ( [ adaptation ] ) ) .here we can use only the of the last factor in eq .( [ adaptation ] ) since there is a single equation involving parameter evolution . this equation along with the receiver system ( eq .( [ lorenz2 ] ) ) , can achieve required synchronization as well as parameter estimation since , a randomly chosen initial vector evolves to and as time .figure 1 displays the manner in which the synchronization takes place and how the parameter , initially set to an arbitrary value finally evolves towards the precise `` unknown '' value . in fig .1(a ) , ( b ) and ( c ) we show the differences as functions of time and we observe that they eventually settle down to zero after an initial transient . in fig .1(d ) we plot as a function of time which also goes to zero simultaneously .the synchronization as shown in fig .1 occurs when conditional lyapunov exponents for the receiver system coupled to the parameter evolution are all negative or at most zero .this restricts the suitable choices for and .the jacobian matrix , for the evolution of the vector is given by ( eq . 
( [ linflow ] ) ) , figure 2 shows the curve along which the largest cle becomes zero , in the plane .in region i , all nontrivial cle s are negative and the method works convergently , while in region ii , the largest cle becomes positive and no convergence takes place .nevertheless note that for any positive value of , there can always be a suitably chosen such that the convergence occurs . on the other hand ,there is a critical value of below which the method does not work .case 2 a : when the unknown parameter appears in the evolution of variable : here , we consider the case of as the unknown parameter ( [ lorenz1 ] ) and reconstruct the receiver as , while the evolution of takes the form , ( eqs .( [ adaptation ] ) and ( [ signcaln1 ] ) ) .similar to eq .( [ sprimedot ] ) we use only the of the derivative involved .when a time series for from ( [ lorenz1 ] ) is fed into the these equations , setting and to arbitrary initial condition , they finally evolve to the corresponding values of and . the associated jacobian matrix ( eq .( [ linflow ] ) is given by , figure 3 shows the curve along which the largest cle becomes zero , in the plane . in regioni , all nontrivial cle s are negative and the method works convergently , while in region ii , the largest cle is positive .let denote the time required for the convergence to the correct value of the parameter within a given accuracy , defined as . in fig .4 we plot ( ) as a function of the feedback constant , when the stiffness constant is held fixed . on the other hand , may be plotted as a function of for a fixed value of .this is plotted in fig .5 . in both fig . 4 and fig . 5 , is assumed to be unknown and a time series for is assumed to be given .the chosen accuracy for convergence was . in fig .6 we plot the time required for convergence of to to within a given accuracy as a function of logarithm of the accuracy , which is 0 with respect to the initial value .the straight line shows that the time required to achieve better accuracy grows exponentially .the slope of the line in fig .6 corresponds to the lyapunov exponent .it was compared with the lyapunov exponent computed using a numerical algorithm and a fair agreement was observed .case 2 b : when the unknown parameter appears in the evolution of variable : the case where the parameter appearing in the evolution of , ( eq .( [ lorenz1 ] ) ) is unknown , while the given time series is for is a particularly interesting case . since the variable not appear explicitly in the evolution equation for , the calculation of in eq .( [ adaptation ] ) has to be done using eq .( [ signcaln2 ] ) .thus with the evolution for , the complete receiver system becomes , an initial vector in the above system goes to and thus makes the estimation of the value of possible . herethe matrix takes the form , ( eq . ( [ linflow ] ) ) figure 7 shows the curve along which the largest cle becomes zero in the plane . in region(i ) , all cle s are negative and the condition of convergence is satisfied .finally we note that in all the three cases discussed above , since the time series for in eq .( [ lorenz1 ] ) is assumed to be known , acts as a drive variable .a similar procedure is possible when a time series for in eq .( [ lorenz1 ] ) is given as an input . here can be chosen as a drive variable which drives the evolution of the remaining variables as well as the unknown parameter .thus it is possible to know an unknown value of any of the parameters of the lorenz system from a single time series for or . 
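a compact numerical illustration of case 1 is sketched below in python : the transmitter is the lorenz system with ( sigma , r , b ) = ( 10 , 28 , 8/3 ) , the time series of the drive variable is fed to the receiver through the linear feedback , and the estimate of the `` unknown '' sigma evolves through the adaptive law ( eqs . ( [ lorenz2 ] ) and ( [ sprimedot ] ) ) . the feedback constant , stiffness constant and integration step are illustrative choices ; as the paper notes , the feedback constant must exceed a critical value , so some retuning may be needed .

```python
import numpy as np

# transmitter parameters (sigma is "unknown" to the receiver)
SIGMA, R, B = 10.0, 28.0, 8.0 / 3.0
EPS   = 20.0          # feedback constant (illustrative)
GAMMA = 2.0           # stiffness constant (illustrative)
DT, N_STEPS = 1e-3, 400_000

def transmitter(u):
    x, y, z = u
    return np.array([SIGMA * (y - x), R * x - y - x * z, x * y - B * z])

def receiver(v, x_drive):
    """receiver plus adaptive evolution of the estimate s of sigma (drive = x)"""
    xp, yp, zp, s = v
    dxp = s * (yp - xp) + EPS * (x_drive - xp)    # linear feedback in the drive variable
    dyp = R * xp - yp - xp * zp
    dzp = xp * yp - B * zp
    ds  = GAMMA * (x_drive - xp) * (yp - xp)      # ds/dt = -gamma (x'-x) d(dx'/dt)/ds
    return np.array([dxp, dyp, dzp, ds])

def rk4(f, u, *args):
    k1 = f(u, *args)
    k2 = f(u + 0.5 * DT * k1, *args)
    k3 = f(u + 0.5 * DT * k2, *args)
    k4 = f(u + DT * k3, *args)
    return u + DT * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

u = np.array([1.0, 1.0, 1.0])            # transmitter state
v = np.array([5.0, -5.0, 20.0, 4.0])     # receiver state, arbitrary initial estimate s = 4

for n in range(N_STEPS):
    x_drive = u[0]                       # the scalar "time series" fed to the receiver
    u = rk4(transmitter, u)
    v = rk4(receiver, v, x_drive)
    if n % 100_000 == 0:
        print(f"t = {n * DT:6.1f}   estimate of sigma = {v[3]:.4f}")

print(f"final estimate of sigma = {v[3]:.6f}  (true value {SIGMA})")
```

the estimate and the receiver variables settle to the transmitter values after a transient , mirroring the behaviour described for figure 1 ; holding the drive constant over each runge - kutta step is a zero - order - hold treatment of the sampled time series .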
herewe will consider the estimation of two or three parameters for the lorenz system ( [ lorenz1 ] ) .we have applied our method for estimation of two parameters of the lorenz system ( [ lorenz1 ] ) , taking or as drive variables .a typical receiver system , taking as the drive and as the unknown parameters , is constructed as , note that the same stiffness constant is used in controlling both the unknown parameters .we have found that with similar receiver structure to that in eq .( [ twoparest ] ) , it is possible to estimate any two of the three parameters and , when a time series for either of or is given . in fig . 8 ( a ) we plot the difference while fig. 8 ( b ) shows as functions of time , when the drive is and two parameters and are assumed unknown to the receiver .we see that the differences converge to zero , indicating that it is possible to estimate two parameters simultaneously .finally we mention that , if the time series for is given , estimation of all the three parameters is possible though in this case , the convergence is very slow .the method fails to estimate all the three parameters and , when time series for is given .we thus note that the detailed information about all the parameters of lorenz system is contained in a time series for either or variables and can be extracted as above .it should be however mentioned that when a time series for is given from a lorenz system , the eigenvalues of the associated matrix ( eq . ( [ jacob ] ) ) do not satisfy the condition of convergence for any choice of and .thus the method fails when a time series for is known .we next consider the rssler system of equations given by , which contains the three parameters .we have applied our procedure to estimate any of these parameters , when unknown , assuming the knowledge of a time series for the variable in the rssler system .the corresponding variable , which acts as a drive variable for and the evolution of the unknown parameter , then evolves through , while the unknown parameter evolves adaptively .thus with the given time series , fed into the evolution of the drive variable , we find that the convergence condition can be satisfied by a suitable choice of feedback constant and the stiffness constant . in fig .9 we show the convergence of ( ) to ( 0 , 0 , 0 , 0 ) when the parameter is unknown. thus our algorithm of parameter estimation works for as a drive variable and any of the three parameters can be estimated .we have however found that the convergence is not possible for or as the drive variables .finally , we have also applied our method to estimate two or three parameters of the rossler system with as a drive variable .it was seen that no choice of the feedback constant and the stiffness constant lead to convergences required for estimation . as our final example , we present a set of nonlinear equations appearing in plasma physics .this is the so called resonant three - wave coupling equations when high frequency wave is unstable and the remaining two are damped .these equations are , where and are the system parameters .we find that with time series given for either or , it is possible to know an unknown parameter or using synchronization and adaptive control .the method fails when a time series for is known .figure 10 displays the evolution of the differences between the transmitter and receiver variables as well as the evolution of as functions of time , when the time series for is known . 
as expected, the differences go to zero asymptotically .in this section we will study the effect of noise present in the transmitter system .we will take the example of the lorenz system ( eq .( [ lorenz1 ] ) for this purpose , where is assumed to be the unknown parameter and acts as a drive variable .assuming that there is a small additive noise present in the time series given for , we feed the noisy time series into the receiver system ( eq .( [ lorenz2 ] ) and carry out the parameter estimation as described .we find that for weak noise , the asymptotically estimated value of the parameter fluctuates around the correct value with a small amplitude .thus the estimation is possible using our method .the error in the estimation can be reduced by a suitable averaging over the time evolution of in the asymptotic limit . for increasing strengths of noise , the fluctuations in the estimated valuegrow larger and precise estimation becomes difficult . figure 11 shows the convergence of to when additive noise is present in the evolution of , the drive variable of the lorenz system , for which the time series is given .we define the accuracy ( ) in the estimation of as while denotes the strength of noise with uniform distribution ranging from to . in fig.12we plot the asymptotic value of a , the accuracy of the estimation of , against the strength of noise in .it can be seen from the curve that the accuracy grows linearly as the noise increases to a value of which corresponds to about 12% of the range of values .the plot thus shows that our method is quite robust for weak noise in , while it can fail as the noise strength increases to a larger value .to summarize , we have shown that a combination of synchronization based on linear feedback given into only a single receiver variable with an _ adaptive _ evolution for parameters unknown to the receiver , enables the estimation of the unknown parameters .the feedback comes from a scalar time series .we have also shown that our procedure corresponds to dynamic minimization of the synchronization error .we have presented examples of lorenz and rssler systems taking different candidate parameters to be unknown to the receiver as well as that of a plasma system obeying resonant three - wave coupling equation . in the lorenz system ( eq .( [ lorenz1 ] ) ) , any of the three parameters can be estimated when a time series is given for either of and , but the method fails when the known time series is for the variable .extensions to estimation of more than one parameters of the lorenz system are also presented as a representative case .estimation of two parameters is possible for both or as drive variables while estimation of all the three parameters is possible only when time series for is given . 
in the case of rssler system ( eq .( [ rossler1 ] ) ) the method works only when the time series is given for the variable where it is possible to estimate any of the three parameters .we find that in case of the plasma system , the parameters can be estimated with the feedback in the evolution for either or .we have thus numerically demonstrated that the explicit detailed information about the parameters of a nonlinear chaotic system is contained in the time series data of a variable and can be extracted under suitable conditions .this information includes the particular values of the parameters of the system which can be estimated even if they appear in the evolution of variables other than the one for which the time series is given .we have also checked the robustness of the method against the noise and it shows reasonable robustness against small noise though the error of estimation becomes larger as the noise strength is increased .the possibility of improving the efficiency of the method needs to be explored .this can be done , for example , by optimizing the choices of newly introduced parameters and or by trying to estimate initial values of variables of the transmitter system , corresponding to response variables and thereby starting from a `` better '' initial point .work in these directions is under progress .figures ( a ) , ( b ) , ( c ) and ( d ) show the differences respectively as functions of time , for the lorenz system ( eqs.([lorenz1 ] ) , ( [ lorenz2 ] ) and ( [ sprimedot ] ) ) .the unknown parameter is and the drive variable is .the figures show that the differences tend to zero asymptotically . which is set to an arbitrary initial value finally evolves to facilitating the parameter estimation to any desired accuracy in the asymptotic limit .fig.2 . the curve along which the largest conditional lyapunov exponent ( computed using eq .( [ jacob1 ] ) ) becomes zero in the -plane for the lorenz system with as the unknown parameter ( eqs .( [ lorenz2 ] ) and ( [ sprimedot ] ) is plotted . in region( i ) , the cle s are all negative and parameter estimation works convergently .region ( ii ) corresponds to a positive largest cle , where the method does not work .note that there is a critical below which the method does not work .nevertheless for any , an can be chosen so that the method works .the figure shows the curve along which the largest conditional lyapunov exponent for lorenz system with the parameter as unknown and as drive variable ( eqs .( [ lorenz2 ] ) and ( [ rprimedot ] ) becomes zero in the -plane . in region( i ) all the cle s are negative and the parameter estimation can be achieved . in region ( ii )the the largest lyapunov exponent is positive .the plot shows the time ( ) required for convergence of to to a given accuracy with a fixed value of the stiffness constant ( ) , as a function of the feedback constant , , for lorenz system .the drive is while the unknown parameter is ( eq . ( [ lorenz2 ] ) ) .it can be seen that the synchronization time tends to infinity when the largest cle becomes zero .the plot shows the time ( ) required for convergence of to 2 a given accuracy , with a fixed value of the feedback constant ( ) , as a function of the stiffness constant , , for the lorenz system .the drive is while the unknown parameter is ( eq . 
( [ lorenz2 ] ) ) .it can be seen that the synchronization time tends to infinity as approaches a value so as to make the largest cle zero .the graph shows the time , t , required to achieve the parameter estimation to within a given accuracy as a function of the accuracy , a , ( logarithmic scale ) normalized with respect to the initial deviation of the parameter from the correct value for lorenz system ( eq .( [ lorenz2 ] ) ) . the time series for assumed to be known while the value of is unknown .the straight line shows that the time required for a better accuracy grows exponentially .the curve along which the largest conditional lyapunov exponent ( computed using eq . ( [ jacob1 ] ) ) becomes zero in the -plane for the lorenz system with as the unknown parameter and as drive ( eqs .( [ lorenz4 ] ) and ( [ bprimedot ] ) is plotted . in region( i ) , the cle s are all negative and parameter estimation works convergently .region ( ii ) corresponds to a positive largest cle , where the method does not work .similar to other cases , there is a critical below which the method does not work .fig.8 . plots ( a ) and ( b ) show the differences and respectively as functions of time in the lorenz system ( eq.([lorenz1 ] ) .the unknown parameters are and and the drive variable is .the plots show that the differences go to zero , and hence indicate that a simultaneous estimation of more than one unknown parameters is possible .plots ( a ) , ( b ) , ( c ) and ( d ) show the differences as functions of time respectively , in the rssler system ( eq.([rossler1 ] ) ) .the unknown parameter is and the drive variable is .the figures show that the differences tend to zero asymptotically . which is set to an arbitrary initial value finally evolves to facilitating the parameter estimation .plots ( a ) , ( b ) , ( c ) and ( d ) show the differences as functions of time respectively , in the plasma system ( eq.([plasma1 ] ) ) .the unknown parameter is and the drive variable is .the figures show that the differences tend to zero asymptotically . which is set to an arbitrary initial value finally evolves to facilitating the parameter estimation .the graph shows the evolution of as a function of time , in the presence of an additive noise in the given time series for for lorenz system ( [ lorenz1 ] ) .the value of is assumed unknown .the plot shows that the difference fluctuates around zero with a small amplitude after an initial transient and a reasonably good estimation is possible using a suitable averaging over these fluctuations .fig.12 . the plot of asymptotic accuracy of parameter estimation ( ) , as a function of strength of the noise , , in the given time series of in lorenz system ( eq .( [ lorenz1 ] ) ) .the noise with strength takes uniformly distributed values from to .the drive is and the unknown parameter is .it is seen that the estimation of is stable for a range of noise strength growing from zero to about 2 which corresponds to about 12 % of the range of values .
a technique is introduced for estimating unknown parameters when a time series of only one variable from a multivariate nonlinear dynamical system is given . the technique employs a combination of two different control methods , a linear feedback for synchronizing system variables and an adaptive control , and is based on dynamic minimization of the synchronization error . the technique is shown to work even when the unknown parameters appear in the evolution equations of variables other than the one for which the time series is given . the technique not only establishes that explicit detailed information about all system variables and parameters is contained in a scalar time series , but also gives a way to extract it under suitable conditions . illustrations are presented for the lorenz and rssler systems and a nonlinear dynamical system in plasma physics . the technique is also found to be reasonably stable against noise in the given time series : the estimated value of a parameter fluctuates around the correct value , with the estimation error growing linearly with the noise strength for small noise .
it is now well established that middle - aged solar - type stars show variability on a wide range of time scales , including the intermediate time scales of years ( weiss 1990 ) .the evidence for the latter comes from a variety of sources , including observational , historical and proxy records .many solar - type stars seem to show cyclic types of behaviour in their mean magnetic fields ( e.g. weiss 1994 , wilson 1994 ) , which in the case of the sun have a period of nearly 22 years .furthermore , the studies of the historical records of the annual mean sunspot data since 1607 ad show the occurrence of epochs of suppressed sunspot activity , such as the _ maunder minimum _ ( eddy 1976 , foukal 1990 , wilson 1994 , ribes & nesme - ribes 1993 , hoyt & schatten 1996 ) .further research , employing ( eddy 1980 , stuiver & quey 1980 , stuiver & braziunas 1988 , 1989 ) and ( beer _ et al ._ 1990 , 1994a , b , weiss & tobias 1997 ) as proxy indicators , has provided strong evidence that the occurrence of such epochs of reduced activity ( referred to as _ grand minima _ ) has persisted in the past with similar time scales , albeit irregularly .these latter , seemingly irregular , variations are important for two reasons .firstly , the absence of naturally occurring mechanisms in solar and stellar settings , with appropriate time scales ( gough , 1990 ) , makes the explanation of such variations theoretically challenging .secondly , the time scales of such variations makes them of potential significance in understanding the climatic variability on similar time scales ( e.g. friis - christensen & lassen 1991 , beer _ at al ._ 1994b , lean 1994 , stuiver , grootes & braziunas 1995 , obrien _ et al ._ 1995 , baliunas & soon 1995 , butler & johnston 1996 , white _ et al ._ 1997 ) . in view of this , a great deal of effort has gone into trying to understand the mechanism(s ) underlying such variations by employing a variety of approaches . our aim here is to give a brief account of some recent results that may throw some new light on our understanding of such variations .theoretically there are essentially two frameworks within which such variabilities could be studied : stochastic and deterministic .here we mainly concentrate on the deterministic approach and recall that given the usual length and nature of the solar and stellar observational data , it is in practice difficult to distinguish between these two frameworks ( weiss 1990 ). nevertheless , even if the stochastic features play a significant role in producing such variations , the deterministic components will still be present and are likely to play an important role .the original attempts at understanding such variabilities were made within the linear theoretical framework .an important example is that of linear mean - field dynamo models ( krause & rdler 1980 ) which succeeded in reproducing the nearly 22 year cyclic behaviour . unfortunately such linear models can not easily and naturally account for the complicated , irregular looking solar and stellar variability .the developments in nonlinear dynamical systems theory , over the last few decades , have provided an alternative framework for understanding such variability . 
within this nonlinear deterministic framework ,irregularities of the grand minima type are probably best understood in terms of various types of dynamical intermittency , characterised by different statistics over different intervals of time .the idea that some type of dynamical intermittency may be responsible for understanding the maunder minima type variability in the sunspot record goes back at least to the late 1970 s ( e.g. tavakol 1978 , ruzmaikin 1981 , zeldovich _ et al ._ 1983 , weiss __ 1984 , spiegel 1985 , feudel _ et al .we shall refer to the assumption that grand minima type variability in solar - type starts can be understood in terms of some type of dynamical intermittency as the _ intermittency hypothesis_. to test this hypothesis one can proceed by adopting either a quantitative or a quantitative approach .given the complexity of the underlying equations , the most direct approach to the study of dynamo equations is numerical . ideally one would like to start with the full 3d dynamo models with the least number of simplifying assumptions and approximations .there have been a great deal of effort in this direction over the last two decades ( e.g. gilman 1983 , nordlund __ 1992 , brandenburg _ et al ._ 1996 , tobias 1998 ) .the difficulty of dealing with small scale turbulence has meant that a detailed fully self - consistent model is beyond the range of the computational resources currently available , although important attempts have been made to understand turbulent dynamos in stars ( e.g. cattaneo , hughes & weiss 1991 , nordlund __ 1992 , moss __ 1995 , brandenburg _ et al ._ 1996 , cattaneo & hughes 1996 ) and accretion discs ( e.g. brandenburg _ et al ._ 1995 , hawley _such studies have had to be restricted to the geometry of a cartesian box , which in essence makes them local dynamos , whereas magnetic fields in astrophysical objects are observed to exhibit large scale structure , related to the shape of the object , and thus can only be captured fully by global dynamo models ( tobias 1998 ) .furthermore , despite great advancements in numerical capabilities , these models still involve approximations and parametrisations and are extremely expensive numerically , specially if the aim is to make a comprehensive search for possible ranges of dynamical modes of behaviours as a function of control parameters . an alternative approach , which is much cheaper numerically , has been to employ mean - field dynamo models . despite their idealised nature ,these models reproduce some features of more complicated models and allow us to analyse certain global properties of magnetic fields in the sun .for example , the dependence of various outcomes of these models ( such as parity , time dependence , cycle period , etc . 
) on global properties , including boundary conditions , have been shown to be remarkably similar to those produced by full three - dimensional simulations of turbulent models ( brandenburg 1999a , b ) .this gives some motivation for using these models for our studies below .a number of attempts have recently been made to numerically study such models , or their truncations , to see whether they are capable of producing the grand minima type behaviours .there are a number of problems with these attempts .firstly , the developments in dynamical systems theory over the last two decades have uncovered a number of theoretical mechanisms for intermittency , each with their dynamical and statistical signatures .secondly , the simplifications and approximations involved in these models , make it difficult to decide whether a particular type of behaviour obtained in a specific model is in fact generic . andfinally , the characterisation of such numerically obtained behaviours as `` intermittent '' is often phenomenological and based on simple observations of the resulting time series ( e.g. zeldovich _ et al ._ 1983 , jones _ et al ._ 1985 , schmalz & stix 1991 , feudel _ et al ._ 1993 , covas _ et al ._ 1997a , b , c , tworkowski _ et al ._ 1998 , and references therein ) , rather than a concrete dynamical understanding coupled with measurements of the predicted dynamical signatures and scalings .there are , however , examples where the presence of various forms of intermittency has been established concretely in such dynamo models , by using various signatures and scalings ( brooke 1997 , ( covas & tavakol 1997 , covas _ et al ._ 1997c , brooke __ 1998 , covas & tavakol 1998 , covas _ et al . _1999b ) . given the inevitable approximations and simplifications involved in dynamo modelling ( specially given the turbulent nature of the regimes underlying such dynamo behaviours and hence the parametrisations necessary for their modelling in practice ) , a great deal of effort has recently gone into the development of approaches that are in some sense generic .the main idea is to start with various qualitative features that are thought to be commonly present in such settings and then to study the generic dynamical consequences of such assumptions .such attempts essentially fall into the following categories .firstly , there are the low dimensional ode models that are obtained using the normal form approach ( spiegel 1994 , tobias __ 1995 , knobloch _ et al .these models are robust and have been successful in accounting for certain aspects of the dynamos , such as several types of amplitude modulation of the magnetic field energy , with potential relevance for solar variability of the maunder minima type .the other approach is to single out the main generic ingredients of such models and to study their dynamical consequences . 
for axisymmetric dynamo models ,these ingredients consist of the presence of invariant subspaces , non - normal parameters and non - skew property .the dynamics underlying such systems has recently been studied in ( covas _ et al ._ , 1997c,1999b ; ashwin _ et al .this has led to a number of novel phenomena , including a new type of intermittency , referred to as _ in out intermittency _ , which we shall briefly discuss in section [ intermittencies ]the standard mean - field dynamo equation is given by where and are the mean magnetic field and mean velocity respectively and the turbulent magnetic diffusivity and the coefficient arise from the correlation of small scale turbulent velocities and magnetic fields ( krause & rdler , 1980 ) . in axisymmetric geometry ,( 1 ) is solved by splitting the magnetic field into meridional and azimuthal components , , and expressing these components in terms of scalar field functions , . in the followingwe shall also employ a family of truncations of the one dimensional version of equation ( [ dynamo ] ) , along with a time dependent form of , obtained by using a spectral expansions of the form : where , and are derived from the spectral expansion of the magnetic field and respectively , and are coefficients expressible in terms of and , is the truncation order , is the dynamo number and is the prandtl number ( see covas __ 1997a , b , c for details ) .recent detailed studies of axisymmetric mean field dynamo models have produced concrete evidence for the presence of various forms of dynamical intermittency in such models .we shall give a brief overview of these results in this section . #1#20.6#1 a particular form of this type of intermittency , discovered by grebogi , ott & yorke ( grebogi _ et al ._ 1982 , 1987 ) , is the so called `` attractor merging crisis '' , where as a system parameter is varied , two or more chaotic attractors merge to form a single attractor .there is both experimental and numerical evidence for this type of intermittency ( see for example ott ( 1993 ) and references therein ) .we have found concrete evidence for the presence of such a behaviour in a 6-dimensional truncation of mean - field dynamo model of the type ( [ truncated ] ) ( covas & tavakol 1997 ) and more recently , in a pde model of type ( [ dynamo ] ) ( see covas & tavakol ( 1999 ) for details ) .[ crisis1 ] shows an example of the latter which clearly demonstrates the merging of two attractors , with different time averages for energy and parity . for a concrete characterisation and scaling ,see covas & tavakol ( 1999 ) .this form of intermittency , first discovered by pomeau and manneville in the early 1980 s ( pomeau & manneville 1980 ) , has been extensively studied analytically , numerically and experimentally ( see bussac & meunier 1982 , richter __ 1994 and references therein ) .it is identified by long almost regular phases interspersed by ( usually ) shorter chaotic bursts . in particular , this type of intermittency has been found in a 12d truncated dynamo model of type ( [ truncated ] ) ( covas _ et al . _1997c ) , and more recently in a pde dynamo model of type ( [ dynamo ] ) ( covas & tavakol 1999 ) .[ typeia ] gives an example of such time series , where the irregular interruptions of the laminar phases by chaotic bursts can easily be seen . for a concrete characterisation , including the scaling for the average length of laminar phases see covas & tavakol ( 1999 ) . 
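a standard toy example may help make the laminar-phase statistics mentioned above concrete. the sketch below is not the truncated dynamo model of eq. ([truncated]); it uses the logistic map just below the period-3 tangent bifurcation, the textbook setting for pomeau-manneville (type i) intermittency, and simply measures the lengths of the laminar phases. all thresholds and parameter values are chosen for illustration only.

```python
import numpy as np

# Toy illustration of type I (Pomeau-Manneville) intermittency:
# the logistic map x -> r*x*(1-x) just below the period-3 tangent
# bifurcation at r_c = 1 + sqrt(8) shows long, almost periodic
# (laminar) phases interrupted by chaotic bursts.  This is NOT the
# dynamo model of the text, only a generic diagnostic sketch.

r_c = 1.0 + np.sqrt(8.0)          # ~3.8284, tangent bifurcation point
r = r_c - 2e-4                    # slightly below criticality
n_steps = 500_000
x = 0.5
traj = np.empty(n_steps)
for n in range(n_steps):
    x = r * x * (1.0 - x)
    traj[n] = x

# a point is "laminar" if the orbit almost repeats with period 3
laminar = np.abs(traj[3:] - traj[:-3]) < 1e-3

# lengths of consecutive laminar runs
runs, current = [], 0
for flag in laminar:
    if flag:
        current += 1
    elif current > 0:
        runs.append(current)
        current = 0
if current > 0:
    runs.append(current)

runs = np.array(runs)
print(f"number of laminar phases : {runs.size}")
print(f"mean laminar length      : {runs.mean():.1f}")
# type I intermittency predicts <length> ~ (r_c - r)^(-1/2):
# halving (r_c - r) should lengthen the mean phase by about sqrt(2)
```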
#1#20.6#1 an important feature of systems with symmetry ( as in the case of solar and stellar dynamos ) is the presence of invariant submanifolds .it may happen that attractors in such invariant submanifolds may become unstable in transverse directions .when this happens , one possible outcome could be that the trajectories can come arbitrarily close to this submanifold but also have intermittent large deviations from it .this form of intermittency is referred as _ on - off intermittency _ ( platt _ et al ._ 1993a , b ) .examples of this type of intermittency have been found in dynamo models , both phenomenologically ( schmitt _ et al ._ , 1996 ) and concretely in truncated dynamo models of the type ( [ truncated ] ) ( covas _ et al . _1997c ) . a generalisation of on - off intermittency, the in - out intermittency , discovered recently ( ashwin _ et al ._ 1999 ) is expected to be generic for axisymmetric dynamo settings .the crucial distinguishing feature of this type of intermittency is that , as opposed to on - off intermittency , there can be different invariant sets associated with the transverse attraction and repulsion to the invariant submanifold , which are not necessarily chaotic .this gives rise to identifiable signatures and scalings ( ashwin _ et al ._ 1999 ) .concrete evidence for the occurrence of this type of intermittency has been found recently in both pde and truncated dynamo models of the types ( [ dynamo ] ) and ( [ truncated ] ) respectively ( see covas _( 1999a , b ) for details ) .in the previous section , we have summarised concrete evidence for the presence of four different types of dynamical intermittency in both truncated and pde mean - field dynamo models .> from a theoretical point of view , the intermittency hypothesis may therefore be said to have been established , at least within this family of mean - field models . what remains to be seen is whether these types of intermittency still persist in more realistic models .an encouraging development in this connection is the discovery of a type of intermittency which is expected to occur generically in axisymmetric dynamo settings , independently of the details of specific models . despite these developments , testing the intermittency hypothesis poses a number of difficulties in practice : 1 .observationally , all precise dynamical characterisation of solar and stellar variability are constrained by the length and the quality of the available observational data .this is particularly true of the intermediate ( and of course longer ) time scale variations .such a characterisation is further constrained by the fact that some of the indicators of such mechanisms , such as scalings , require very long and high quality data .theoretically , there is now a large number of such mechanisms , some of which share similar signatures and scalings , which could potentially complicate the process of differentiation between the different mechanisms .an important feature of real dynamo settings is the inevitable presence of noise .this calls for a theoretical and numerical study of effects of noise on the dynamics , on the one hand ( e.g. meinel & brandenburg 1990 , moss_ 1992 , ossendrijver & hoyng 1996 , ossendrijver , hoyng & schmitt 1996 ) and on the signatures and scalings of various mechanisms of intermittency on the other . 
in this connectionit is worth bearing in mind that some types of intermittency do possess signatures that are rather easily identifiable .nevertheless , we believe the answer to these difficult questions can only be realistically contemplated once a more clear picture has emerged of all the possible types of intermittency that can occur in more realistic solar - type dynamo models ( and ultimately real dynamos ) and once their precise signatures and scalings , in presence of noise , have been identified .we would like to thank the organisers of this meeting for their kind hospitality and for bringing about the opportunity for many fruitful exchanges .we would also like to thank peter ashwin , axel brandenburg , john brooke , david moss , ilkka tuominen and andrew tworkowski for the work we have done together and edgar knobloch , steve tobias , alastair rucklidge , michael proctor and nigel weiss for many stimulating discussions .ec is supported by grant bd/5708/95 praxis xxi , jnict .ec thanks the astronomy unit at qmw for support to attend the conference .rt benefited from pparc uk grant no .l39094 . this research also benefited from the ec human capital and mobility ( networks ) grant `` late type stars : activity , magnetism , turbulence '' no .erbchrxct940483 .ashwin , p. , covas , e. & tavakol , r. , 1999 , _ nonlinearity _ , * 9 * , 563 .baliunas , s. l. & soon , w. , 1995 , _ astrophy .j. _ , * 450 * , 896 .beer , j. _ et al ._ , 1990 , _ nature _ * 347 * , 164 .beer , j. _ et al ._ , 1994a , in j. m. pap , c. frhlich , h. s. hudson & s. k. solaski ( eds . ) , _ the sun as a variable star : solar and stellar irradiance variations _ , cambridge university press , cambridge , p. 291 . beer , j. _ et al ._ , 1994b , in e. nesme - ribes ( ed . ) ,_ the solar engine and its influence on terrestial atmosphere and climate _ , springer - verlag , berlin , p. 221 .brandenburg , a. , 1999a , in : _ theory of black hole accretion discs _ , eds .m. a. abramowicz , g. bjrnsson & j. e. pringle , cambridge university press .brandenburg , a. , 1999b , in : _ helicity and dynamos _ , eds .a. a. pevtsov , american geophysical union , florida .brandenburg , a. , jennings r. l. , nordlund ., rieutord m. , stein r. f. , tuominen , i. , 1996 , _ jfm _ , * 306 * , 325 .brandenburg , a. , nordlund , . ,stein , r. f. , torkelsson , u. , 1995 , _ apj _ , * 446 * , 741 .brooke , j. m. , 1997 , _europhysics letters _ * 37 * , 3 .brooke , j. m. , pelt , j. , tavakol , r. & tworkowski , a. , 1998 , _a&a _ * 332 * , 339 .bussac , m. n. & meunier , c ._ j. de phys ._ , * 43 * , 585 .butler , c. j. & johnston , d. j. , 1996 , _j. atmospheric terrest ._ , * 58 * , 1657 .cattaneo , f. , hughes , d. w. & weiss , n. o. , 1991 , _ mnras _ , * 253 * , 479 .cattaneo , f. & hughes , d. w. , 1996 , _ phys .e _ * 54 * , 4532 .covas , e. , tworkowski , a. , brandenburg , a. & tavakol , r. , 1997a , _a&a _ * 317 * , 610 .covas , e. , tworkowski , a. , tavakol , r. & brandenburg , a. , 1997b , _ solar physics _* 172 * , 3 .covas , e. , ashwin , p. & tavakol , r. , 1997c , _ physical review e _ * 56 * , 6451 .covas , e. & tavakol , r. , 1997 , _ physical review e _ * 55 * , 6641 .covas , e. & tavakol , r. , 1998 , proceedings of the 5th international workshop `` planetary and cosmic dynamos '' , trest , czech republic , studia geophysica et geodaetica , 42 .covas , e. & tavakol , r. , 1999 , _multiple forms of intermittency in pde dynamo models _ , in preparation .covas , e. , tavakol , r. , tworkowski , a. , brandenburg , a. 
, brooke , j. m. & moss , d. , 1999a , _a&a _ , in press .preprint available at web address .covas , e. , tavakol , r. , ashwin , p. , tworkowski , a. & brooke , j. m. , 1999b , submitted to _ phys .a_. preprint available at web address .eddy , j. a. , 1976 , _ science _ , * 192 * , 1189 .feudel , w. jansen , & j. kurths , 1993 , _ int .j. of bifurcation and chaos _ * 3 * , 131 .foukal , p. v. , 1990 , _ solar astrophysics _ , wiley interscience , new york .friis - christensen , e. & lassen , k. , 1991 , science * 254 * , 698 .gilman , p. a. , 1983 , _ apj ._ , * 53 * , 243 .gough , d. , 1990 , _ phil .r. soc .lond . _ * a330 * , 627 .grebogi , c. , ott , e. , romeiras , f. , & yorke , j.a . , 1987 , _ phys .a. _ , * 36 * , 5365 .grebogi , c. , ott , e. , & yorke , j.a . , 1982 , phys .rev lett , * 48 * , 1507 hawley j.f . ,gammie c.f . ,balbus s.a . , 1996 ,apj * 464 * , 690 hoyt , d. v. & schatten , k. h. , 1996 , _ solar phys . _ * 165 * , 181 .jones , c. a. , weiss n.o ., cattaneo f. , 1985 , _physica 14d _ , 161 knobloch , e. , tobias , s. m. & weiss , n. o. , 1998 , _ mnras _ , * 297 * , 1123 .krause , f. & rdler , k .- h .mean field magnetohydrodynamics and dynamo theory _ , pergamon , oxford .lean , j. , 1994 , in e. nesme - ribes ( ed . ) , _ the solar engine and its influence on terrestial atmosphere and climate _ , springer - verlag , berlin , p. 163 .meinel , r. & brandenburg , a. , 1990,_a&a _ * 238 * , 369 .moss , d. , barker d.m ., brandenburg a. , tuominen i. , 1995,_a&a _ 294 , 155 moss , d. , brandenburg , a. , tavakol , r. & tuominen , i. , 1992,_a&a _ bf 265 , 843 .nordlund , . , brandenburg , a. , jennings , r. l. , rieutord , m. , ruokolainen , j. , stein , r. f. & tuominen i. , 1992 , _ apj _ , * 392 * , 647 obrien , s. r. , mayewsky , p.a. , meeker , l. d. , meese , d. a. , twickler , m. s. & whitlow , s. i. , 1995 , _ science _ , * 270 * , 1962 .ossendrijver , a. j. h. , hoyng , p. & schmitt ,d. , 1996,_a&a _ * 313 * , 938 .ossendrijver , a. j. h. & hoyng , p. , 1996,_a&a_ * 313 * , 959 .ott , e. , _ chaos in dynamic systems _, 1993 , cambridge press , cambridge platt , m. , spiegel , e. & tresser , c. , 1993a , _ phys ._ , * 70 * , 279 .pomeau , y. & manneville , p. , 1980 , _ commun ._ , * 74 * , 189 .ribes , j. c. & nesme - ribes , e. , 1993 , _ a&a _ * 276 * , 549 .richter , r. , kittel , a. , heinz , g. , fltgen , g. , peinke , j. & parisi , j. , 1994 , _ phys .b _ * 49 * , 8738 .ruzmaikin , a. a. , 1981 , _ comm ._ , * 9 * , 88 .schmalz , s. & stix , m. , 1991 , _a&a _ * 245 * , 654 .schmitt , d. , schssler , m. , & ferriz - mas , a. , 1996 , a&a , * 311 * , l1 .spiegel , e. , platt , n. & tresser , c. , 1993b , _ geophys . and astrophys .fluid dyn ._ , * 73 * , 146 .spiegel , e.a .1994 , in proctor m.r.e ., gilbert a.d .lectures on solar and planetary dynamos _ , cambridge univ .press , cambridge .spiegel , in _ chaos in astrophysics _, edited by j. r. butcher , j. perdang , & e. a. spiegel ( reidel , dordrecht , 1985 ) .stuiver , m. , grootes , p. m. & braziunas , t. f. , 1995 , _ quarternary res ._ * 44 * , 341 .stuiver , m. & braziunas , t. f. , 1988 , in f. r. stephenson & a. w. wolfendale ( eds . ) , _ secular solar and geomagnetic variations in the last 10 000 years _ , kluwer academic publishers , dordrecht , holland , p. 245 .stuiver , m. & braziunas , t. f. , 1989 , _ nature _ * 338 * , 405 .stuiver , m. & quay , p. d. , 1980 , _ science _ , * 207 * , 19 .tavakol , r. , 1978 , _ nature _ , * 276 * , 802 .tobias , s. m. 
, 1998 , _ mnras _ , * 296 * , 653 .tobias , s. m. , weiss , n.o .& kirk , v. , 1995 , _ mnras _ , * 273 * , 1150 .tworkowski , a. , tavakol , r. , brandenburg , a. , brooke , j. m. , moss , d. & tuominen i. , 1998 , _ mnras _ , * 296 * , 287 .weiss , n. o. , cattaneo , f. , jones , c. a. , 1984 , _ geophys . astrophys .fluid dyn ._ , * 30 * , 305 .weiss , n. o. , in _ lectures on solar and planetary dynamos _ , edited by proctor , m.r.e . and gilbert , a.d ., cambridge university press , cambridge ( 1994 ) weiss , n. o. & tobias , s. m. , in solar and heliospheric plasma physics , ed .g. m. simnett , c. e. alissandrakis & l. vlahos , 25 , springer , berlin , 1997 .weiss , n. o. 1990 , _ phil .r. soc ._ , * a330 * , 617 .white , w. b. , lean , j. , cayan , d. & dettinger , m. d. , 1997 , _ j. geophys ._ , * 102 * , 3255 .wilson , p. r. , 1994 , _ solar and stellar activity cycles _ , cambridge university press , cambridge .zeldovich , ya .b. , ruzmaikin , a. a. & sokoloff , d. d. , 1983 , _ magnetic fields in astrophysics _ , gordon and breach , new york .
we briefly discuss the status of the _ intermittency hypothesis _, according to which the grand minima type variability in solar-type stars may be understood in terms of dynamical intermittency. we review concrete examples which establish this hypothesis in the mean-field setting. we also discuss some difficulties and open problems regarding the establishment of this hypothesis in more realistic settings, as well as its operational decidability.
recently, pusey, barrett, and rudolph (hereafter abbreviated as pbr) proved an important theorem, showing that different quantum states represent physically different states of reality. differently put, the theorem refutes the idea that one and the same physical reality can be consistently described by two different quantum states. pbr's proof is based on the idea that independent preparation devices can be constructed which can be set up to prepare a quantum system either in a state $|\psi_0\rangle$ or in a different state $|\psi_1\rangle$. an ensemble of copies of this device can then be used to prepare independent states, with either or , depending on the preparation of each device. all these states are then destroyed in an -partite entanglement measurement. since each device can be prepared in two different ways, physically different situations can be constructed. pbr construct entanglement measurements with different possible outcomes, in such a way that there is a bijective mapping of each of the possible device settings to one outcome which is measured with zero probability if that setting is chosen. if we assume that there is a non-zero probability of at least that a device produces a physical reality which is compatible with _ both _ states $|\psi_0\rangle$ and $|\psi_1\rangle$, it follows from the independent preparation of the states that the product state would be compatible with _ all _ possible product states with a probability of at least . however, this would mean that _ all _ possible outcomes of the measurement would have zero probability, which leads to a contradiction. thus, given that quantum mechanics is correct, we must conclude that two different quantum states can never be consistent with the same reality. different proofs have been given, which lead to the same general conclusion. the tricky part of the pbr proof is the construction of the basis of the entanglement measurement. they show that the simplest case of two qubits is sufficient to prove the theorem only for a subset of possible state pairs that are ``not too close to each other''; more precisely, their scalar product must fulfill the inequality $|\langle\psi_0|\psi_1\rangle|\le 1/\sqrt{2}$. for a general proof, a larger number of independent preparation devices has to be used. here, i present an argument that even in the general case, it is always possible to reduce the problem to the two-qubit case, rendering the construction of entanglement measurement bases with more than two qubits unnecessary. in section 2, i recapitulate the two-qubit proof, based on pbr's paper.
in section 3 ,i present the alternative general proof , and section 4 compares both proofs .two identical devices , labelled by , can be prepared to either produce the quantum state or .independetly of the dimension if the hilbert space which and are defined in , a basis can always be chosen such that ( possibly after the multiplication of one of the states with a phase , without any effect to the physical state of the system ) thus , the modulus of the scalar product the two is given as the crucial point here is , that regardless of the dimension of , the whole problem can be reduced to two two - dimensional hilbert spaces , treating the quantum state generated by each device as a qbit .we define the four basis states of the product hilbert space by : on this product space , we define an operator , such that generates all four product states which can be jointly generated by the two devices : in matrix form , is given as a given product state with is now subject to an entanglement measurement , with a measurement basis given by states , generated by a unitary operator which in matrix form is given as with real numbers and . if the measurement returns the output , the quantum system will be projected to the entagled state .given that the system is prepared in the initial state by a corresponding setup of the two devices , the probability of recieving a measurement output is given as .pbr show that under certain conditions it is possible to choose and such that given the initial state , the probability of measuring becomes zero .this is the case if this equation yields for the solution using the identity this equation can also be written as the relationship between and in eq .( [ cosbcosw ] ) is plotted in fig .[ figure1 ] , showing that is only fulfilled in the range . if , becomes larger than , meaning that no real value for can be given in order to fulfill eq .( [ jmcj ] ) .the assumption as already mentioned in the introduction , that both devices independently produce a physical state which is compatible with and for at the same time with a non - zero probability of at least , leads to the desired contradiction with eq .( [ jmcj ] ) : in these cases , the product state would be compatible with all , meaning that all four possible measurement outcomes would get zero probability .note , however , that this argument fails if , and is thus not sufficient for a general proof .we now assume arbitrary states and , without any restriction to .we consider an ensemble of independent copies of the preparation device , with being an even number such that the devices can be divided into two groups , say , group 1 and 2 , each consisting of devices .we denote the devices in group 1 by , and in group 2 by . all devices in group 1can be prepared to generate a product state , with either , or . respectively , all deviced in group 2 can be prepared to generate a state .we can now define : using eq .( [ def_omega ] ) , the scalar product of these product states fulfills where we define with , in analogy to in eq .( [ def_omega ] ) . 
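the two-qubit construction of section 2 can be sanity-checked numerically. the text builds the measurement basis from a parametrised unitary; the sketch below instead uses the explicit four-outcome basis of pbr's introductory example for the boundary case $|\langle\psi_0|\psi_1\rangle|=1/\sqrt{2}$ (i.e. the second state is $|+\rangle$). the choice of this particular basis is an assumption on my part and is not the parametrisation used above.

```python
import numpy as np
from itertools import product

# PBR two-qubit check for the boundary case psi0 = |0>, psi1 = |+>.
# The four entangled outcomes below are the explicit basis from PBR's
# introductory example (assumed here, not the parametrised basis of
# the text).
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

xi = [
    (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
    (np.kron(ket0, minus) + np.kron(ket1, plus)) / np.sqrt(2),
    (np.kron(plus, ket1) + np.kron(minus, ket0)) / np.sqrt(2),
    (np.kron(plus, minus) + np.kron(minus, plus)) / np.sqrt(2),
]

# the measurement basis is orthonormal ...
gram = np.array([[np.dot(a, b) for b in xi] for a in xi])
assert np.allclose(gram, np.eye(4))

# ... and each outcome has zero probability for exactly one of the
# four preparations (00, 0+, +0, ++): the bijection exploited in the
# PBR contradiction.
preparations = [np.kron(a, b) for a, b in product([ket0, plus], repeat=2)]
probs = np.array([[np.dot(x, p) ** 2 for p in preparations] for x in xi])
print(np.round(probs, 3))
assert all(np.isclose(row, 0.0).sum() == 1 for row in probs)
```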
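the reduction step of section 3 can be illustrated in the same way: forming the two group states explicitly confirms that their overlap is the single-copy overlap raised to the power n/2, and gives the smallest even n for which this group overlap falls below 1/sqrt(2). the function and variable names below are mine; this is only a sketch of the argument above.

```python
import numpy as np

def group_state(psi, copies):
    """Tensor product of `copies` identical single-qubit states."""
    out = np.array([1.0])
    for _ in range(copies):
        out = np.kron(out, psi)
    return out

def minimal_even_n(overlap):
    """Smallest even N with overlap**(N/2) <= 1/sqrt(2), i.e. the two
    group states are 'far enough apart' for the two-qubit argument."""
    half = int(np.ceil(np.log(np.sqrt(2.0)) / -np.log(overlap)))
    return 2 * max(half, 1)

theta = np.deg2rad(20.0)                    # a pair of states that are 'too close'
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(theta), np.sin(theta)])
overlap = abs(np.dot(psi0, psi1))           # = cos(theta) > 1/sqrt(2)

n = minimal_even_n(overlap)
big0 = group_state(psi0, n // 2)            # group 1 prepared in psi0
big1 = group_state(psi1, n // 2)            # group 1 prepared in psi1
print(f"single-copy overlap      : {overlap:.4f}")
print(f"minimal even N           : {n}")
print(f"group overlap cos(Omega) : {abs(np.dot(big0, big1)):.4f}  (<= 0.7071)")
```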
since , it is always possible to choose large enough such that .if we now apply the pbr assumption as stated in the introduction , that there is at least the probability that all states , in group 1 and , in group 2 are each compatible with both and , this would imply that with non - zero probability , the product state were compatible with both and , and at the same time , were compatible with both and .thus , we can to perform the replacements , , and and repeat the two - qbit proof given in section 2 for the product states and .this is possible , since in eq .( [ basis_qbit ] ) there is no restriction to the hilbert space in which the two states are defined in . using these replacements , we yield a general proof of the pbr theorem , by reducing it to the two - qbit case .pbr prove the general case by constructing a hierarchy of multi - partite entanglement measurements , where the number of independently prepared quantum states must be chosen such that which is equivalent to where again the identity eq .( [ tan_cos_identity ] ) has been used .the proof presented here is identical to pbr s for the case , i.e. , being equivalent to otherwise , according to eq .( [ def_gross_omega ] ) , has to be chosen such that leading to the condition with being an even number , for the choice of .the minimal number of to prove the theorm with a given value of , is give in fig .[ figure2 ] for both the pbr prove and the alternative proof , as given by eqs .( [ n_pbr ] ) and ( [ n_alternative ] ) , respectively .the graph shows , that a larger number of indepentend preparation devices is necessary for the alternative proof , if .therefore , we could state that bpr s original proof is more `` efficient '' . on the other hand , the proof presented here works without the construction of a non - trivial multi - partite entanglement measurement basis for more than two qbits , but rests on a simple argument in the case where bpr s proof does not work with two qbits only .it shows that the general case follows immediately from the two qbit case .it is therefore simpler than the proof given by pbr .99 m. f. pusey , j. barrett , and t. rudolph : on the reality of the quantum state , _ nature physics _ * 8 * , 475478 ( 2012 ) r. colbeck , and r. renner : is a system s wave function in one - to - one correspondence with its elements of reality ? _ phys .lett . _ * 108 * , 150402 ( 2012 ) d. j. miller : alternative experimental protocol to demonstrate the pusey - barrett - rudolph theorem _ phys .a _ * 87 * , 014103 ( 2013 )
the theorem of pusey, barrett, and rudolph proves that different quantum states describe different physical realities. their proof is based on the construction of entanglement measurement bases of two, and of more than two, qubits. in this note, i show that a two-qubit entanglement measurement basis is sufficient for a general proof.
we study the impact of heavy - tailed traffic on the performance of scheduling policies in single - hop queueing networks .single - hop network models have been used extensively to capture the dynamics and scheduling decisions in real - world communication networks , such as wireless uplinks and downlinks , switches , wireless ad hoc networks , sensor networks , and call centers . in all these systems , one can not serve all queues simultaneously , e.g. , due to wireless interference constraints , giving rise to a scheduling problem .clearly , the overall performance of the network depends critically on the scheduling policy applied .the focus of this paper is on a well - studied class of scheduling policies , commonly refered to as max - weight policies .this class of policies was introduced in the seminal work of tassiulas and ephremides , and since then numerous studies have analyzed the performance of such policies in different settings , e.g. , see , and the references therein .a remarkable property of max - weight policies is their * throughput optimality * , i.e. , their ability to stabilize a queueing network whenever this is possible , without any information on the arriving traffic .moreover , it has been shown that policies from this class achieve low , or even optimal , average delay for specific network topologies , when the arriving traffic is light - tailed .however , the performance of max - weight scheduling in the presence of heavy - tailed traffic is not well understood .we are motivated to study networks with heavy - tailed traffic by significant evidence that traffic in real - world communication networks exhibits strong correlations and statistical similarity over different time scales .this observation was first made by leland _ through analysis of ethernet traffic traces .subsequent empirical studies have documented this phenomenon in other networks , while accompanying theoretical studies have associated it with arrival processes that have heavy tails ; see for an overview .the impact of heavy tails has been analyzed extensively in the context of single or multi - server queues ; see the survey papers , and the references therein .however , the related work is rather limited in the context of queueing networks , e.g. 
, see the paper by borst _ , which studies the `` generalized processor sharing '' policy .this paper aims to fill a gap in the literature , by analyzing the impact of heavy - tailed traffic on the performance of max - weight scheduling in single - hop queueing networks .in particular , we study the delay stability of traffic flows : a traffic flow is delay stable if its expected steady - state delay is finite , and delay unstable otherwise .our previous work gives some preliminary results in this direction , in a simple system with two parallel queues and a single server .the * main contributions * of this paper include : i ) in a single - hop queueing network under the max - weight scheduling policy , we show that any light - tailed flow that conflicts with a heavy - tailed flow is delay unstable ; ii ) surprisingly , we also show that for certain admissible arrival rates , a light - tailed flow can be delay unstable even if it does not conflict with heavy - tailed traffic ; iii ) we analyze the max - weight- scheduling policy , and show that if the -parameters are chosen suitably , then the sum of the -moments of the steady - state queue lengths is finite .we use this result to prove that by proper choice of the -parameters , all light - tailed flows are delay stable .moreover , we show that max - weight- achieves the optimal scaling of higher moments of steady - state queue lengths with traffic intensity .the rest of the paper is organized as follows .section 2 contains a detailed presentation of the model that we analyze , namely , a single - hop queueing network .it also defines formally the notions of heavy - tailed and light - tailed traffic , and of delay stability . in section 3we motivate the subsequent development by presenting , informally and through simple examples , the main results of the paper . in section 4we analyze the performance of the celebrated max - weight scheduling policy .our general results are accompanied by examples , which illustrate their implications in practical network settings .section 5 contains the analysis of the parameterized max - weight- scheduling policy , and the performance that it achieves in terms of delay stability .this section also includes results about the scaling of moments of steady - state queue lengths with the traffic intensity and the size of the network , accompanied by several examples .we conclude with a discussion of our findings and future research directions in section 6 .the appendices contain some background material and most of the proofs of our results .we start with a detailed presentation of the queueing model considered in this paper , together with some necessary definitions and notation .we denote by , , and the sets of nonnegative reals , nonnegative integers , and positive integers , respectively .the cartesian products of copies of and are denoted by and , respectively .we assume that time is slotted and that arrivals occur at the end of each time slot .the topology of the network is captured by a directed graph , where is the set of nodes and is the set of ( directed ) edges .our model involves single - hop traffic flows : data arrives at the source node of an edge , for transmission to the node at the other end of the edge , where it exits the network .more formally , let be the number of traffic flows of the network . 
a * traffic flow * consists of a discrete time stochastic arrival process , a source node , and a destination node , with , and .we assume that each arrival process takes values in , and is independent and identically distributed ( iid ) over time .furthermore , the arrival processes associated with different traffic flows are mutually independent .we denote by >0 ] , and light - tailed otherwise .the traffic of flow is buffered in a dedicated queue at node ( queue , henceforth . )our modeling assumptions imply that the set of traffic flows can be identified with the set of edges and the set of queues of the network .the service discipline within each queue is assumed to be `` first come , first served . ''the stochastic process captures the evolution of the length of queue .since our motivation comes from communication networks , will be interpreted as the number of packets that queue receives at the end of time slot , and as the total number of packets in queue at the beginning of time slot .the arrivals and the lengths of the various queues at time slot are captured by the vectors and , respectively . in the context of a communication network ,a batch of packets arriving to a queue at any given time slot can be viewed as a single entity , e.g. , as a file that needs to be transmitted .we define the * end - to - end delay of a file * of flow to be the number of time slots that the file spends in the network , starting from the time slot right after it arrives at , until the time slot that its last packet reaches . for , we denote by the end - to - end delay of the file of queue .the vector captures the end - to - end delay of the files of the different traffic flows . in general ,not all edges can be activated simultaneously , e.g. , due to interference in wireless networks , or matching constraints in a switch .consequently , not all traffic flows can be served simultaneously .a set of traffic flows that can be served simultaneously is called a * feasible schedule*. we denote by the set of all feasible schedules , which is assumed to be an arbitrary subset of the powerset of . for simplicity , we assume that all attempted transmissions of data are successful , that all packets have the same size , and that the transmission rate along any edge is equal to one packet per time slot .we denote by the number of packets that are scheduled for transmission from queue at time slot .note that this is not necessarily equal to the number of packets that are transmitted because the queue may be empty .let us now define formally the notion of a * scheduling policy*. the past history and present state of the system at time slot is captured by the vector at time slot 0 , we have . a ( causal ) scheduling policyis a sequence of functions , used to determine scheduling decisions , according to . using the notation above, the * dynamics * of queue take the form : for all , where denotes the indicator function of the event .the vector of initial queue lengths is assumed to be an arbitrary element of .we restrict our attention to scheduling policies that are * regenerative * , i.e. 
, policies under which the network starts afresh probabilistically in certain time slots .more precisely , under a regenerative policy there exists a sequence of stopping times with the folowing properties .i ) the sequence is iid .ii ) let , and consider the processes that describe the `` cycles '' of the network , namely , , and ; then , is an iid sequence , independent of .iii ) the ( lattice ) distribution of the cycle lengths , , has span equal to one and finite expectation . properties ( i ) and ( ii ) imply that the queueing network evolves like a ( possibly delayed ) regenerative process .property ( iii ) states that this process is aperiodic and positive recurrent , which will be crucial for the stability of the network .the following definition gives the precise notion of stability that we use in this paper .* definition 2 : ( stability ) * the single - hop queueing network described above is stable under a specific scheduling policy , if the vector - valued sequences and converge in distribution , and their limiting distributions do not depend on the initial queue lengths .notice that our definition of stability is slightly different than the commonly used definition ( positive recurrence of the markov chain of queue lengths ) , since it includes the convergence of the sequence of file delays .the reason is that in this paper we study properties of the limiting distribution of and , naturally , we need to ensure that this limiting distribution exists . under a stabilizing scheduling policy ,we denote by and the limiting distributions of and , respectively .the dependence of these limiting distributions on the scheduling policy has been suppressed from the notation , but will be clear from the context .we refer to as the steady - state length of queue .similarly , we refer to as the steady - state delay of a file of traffic flow .we note that under a regenerative policy ( if one exists ) , the queueing network is guaranteed to be stable .this is because the sequences of queue lengths and file delays are ( possibly delayed ) aperiodic and positive recurrent regenerative processes , and , hence , converge in distribution ; see .the stability of the queueing network depends on the rates of the various traffic flows relative to the transmission rates of the edges and the scheduling constraints .this relation is captured by the stability region of the network .* definition 3 : ( stability region ) * the stability region of the single - hop queueing network described above , denoted by , is the set of rate vectors : in other words , a rate vector belongs to if there exists a convex combination of feasible schedules that covers the rates of all traffic flows .if a rate vector is in the stability region of the network , then the traffic corresponding to this vector is called * admissible * , and there exists a scheduling policy under which the network is stable .* definition 4 : ( traffic intensity ) * the traffic intensity of a rate vector is a real number in [ 0,1 ) defined as : clearly , arriving traffic with rate vector is admissible if and only if .* throughout this paper we assume that the traffic is admissible*. 
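the formula in definition 4 did not survive extraction. one common way to compute the traffic intensity consistently with definition 3 is the small linear program below: the minimal total fraction of time-slots, split among feasible schedules, needed to cover the arrival rates. the exact equivalence with the paper's definition is an assumption here, and the three-flow example is purely hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def traffic_intensity(rates, schedules):
    """Smallest total time-fraction of feasible schedules needed to
    cover the arrival rates (cf. Definitions 3-4; the exact formula in
    the text is garbled, so this LP form is an assumption).
    rates: length-F array; schedules: list of 0/1 service vectors."""
    S = np.array(schedules, dtype=float).T      # F x J service matrix
    F, J = S.shape
    # minimise sum_j theta_j  subject to  S @ theta >= rates, theta >= 0
    res = linprog(c=np.ones(J), A_ub=-S, b_ub=-np.asarray(rates, dtype=float),
                  bounds=[(0, None)] * J, method="highs")
    return res.fun

# hypothetical example: three flows, two feasible schedules
schedules = [[1, 0, 1], [0, 1, 1]]
rates = [0.3, 0.5, 0.6]
rho = traffic_intensity(rates, schedules)
print(f"traffic intensity rho = {rho:.2f}  (admissible iff rho < 1)")
```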
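looking ahead to the max-weight and max-weight-alpha policies analysed below, the per-slot scheduling decision itself is simple to state: serve the feasible schedule with the largest total weight, where each queue contributes its length raised to the corresponding alpha-parameter. the sketch below only illustrates this decision rule; the variable names and the numbers are mine.

```python
import numpy as np

def max_weight_alpha_schedule(q, schedules, alpha):
    """Index of the feasible schedule maximising sum_f s_f * q_f**alpha_f.
    With alpha_f = 1 for every flow this is the ordinary max-weight rule;
    a small alpha_f de-emphasises flow f in the scheduling decision."""
    q = np.asarray(q, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    weights = [float(np.dot(s, q ** alpha)) for s in np.atleast_2d(schedules)]
    return int(np.argmax(weights))

schedules = [[1, 0, 1], [0, 1, 1]]
q = [40, 9, 3]                       # current queue lengths

# ordinary max-weight favours the long queue 1 ...
print(max_weight_alpha_schedule(q, schedules, alpha=[1, 1, 1]))      # -> 0
# ... while a small alpha on queue 1 lets queue 2 be served instead
print(max_weight_alpha_schedule(q, schedules, alpha=[0.2, 1, 1]))    # -> 1
```

choosing a small alpha-parameter for a heavy-tailed flow, while keeping the alpha-parameters of light-tailed flows equal to one, is exactly the tuning that corollary 1 below exploits.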
let us now define the property that we use to evaluate the performance of scheduling policies , namely , the delay stability of a traffic flow .* definition 5 : ( delay stability ) * a traffic flow is delay stable under a specific scheduling policy if the queueing network is stable under that policy and <\infty ] and ] , for all , then the network is stable and the steady - state queue lengths satisfy : < \infty , \qquad \forall f \in \{1,\ldots , f\}. \nonumber\ ] ] an earlier work by eryilmaz __ has given a similar result for the case of parallel queues with a single server ; see theorem 1 of . in this paperwe extend their result to a general single - hop network setting .moreover , we provide an explicit upper bound to the sum of the -moments of the steady - state queue lengths . before we do that we need the following definition . *definition 7 : ( covering number of feasible schedules ) * the covering number of the set of feasible schedules is defined as the smallest number for which there exist with .notice that the quantity is a structural property of the queueing network , and is not related to the scheduling policy or the statistics of the arriving traffic : it is the minimum number of time slots required to serve at least one packet from each flow . * theorem 3 : ( max - weight- scheduling ) * consider the single - hop queueing network described in section 2 under the max - weight- scheduling policy .let the intensity of the arriving traffic be . if <\infty ] .( sketch ) consider the single - hop queueing network of section 2 under the max - weight- scheduling policy .it can be verified that the sequence is a time - homogeneous , irreducible , and aperiodic markov chain on the countable state - space .the fact that this markov chain is also positive recurrent , and the related moment bound , are based on drift analysis of the lyapunov function and use of the foster - lyapunov stability criterion .this implies that converges in distribution , and its limiting distribution does not depend on .based on this , it can be verified that the sequence is a ( possibly delayed ) aperiodic and positive recurrent regenerative process .hence , it also converges in distribution , and its limiting distribution does not depend on . for a formal proof see appendix 5 .a first corollary of theorem 3 relates to the delay stability of light - tailed flows .* corollary 1 : ( delay stability under max - weight- ) * consider the single - hop queueing network described in section 2 under the max - weight- scheduling policy .if the -parameters of all light - tailed flows are equal to 1 , and the -parameters of heavy - tailed flows are sufficiently small , then all light - tailed flows are delay stable . with the particular choice of -parameters , theorem 3 guarantees that the expected steady - state queue length of all light - tailed flows is finite .lemma 1 relates this result to delay stability . combining this with theorem 1, we conclude that when its -parameters are chosen suitably , * the max - weight- policy delay - stabilizes a traffic flow , whenever this is possible*. max - weight- turns out to perform well in terms of another criterion too .theorem 3 implies that by choosing the -parameters such that <\infty ] is finite , for all .the following proposition suggests that this is the best we can do under any regenerative scheduling policy .* proposition 5 : * consider the single - hop queueing network described in section 2 under a regenerative scheduling policy . 
then , =\infty \e[q_f^c]=\infty , \qquad \forall f \in \{1,\ldots , f\}. \nonumber\ ] ] this result is well - known in the context of a m / g/1 queue , e.g. , see section 3.2 of .it can be proved similarly to theorem 1 .thus , when its -parameters are chosen suitably , * the max - weight- policy guarantees the finiteness of the highest possible moments of steady - state queue lengths*. although this paper focuses on heavy - tailed traffic and its consequences , some implications of theorem 3 are of general interest . in this sectionwe assume that all traffic flows in the network are light - tailed , and analyze how the sum of the -moments of steady - state queue lengths scales with traffic intensity and the size of the network .* corollary 2 : ( scaling with traffic intensity ) * let us fix a single - hop queueing network and constants and .the max - weight- scheduling policy is applied with , for all .assume that the traffic arriving to the network is admissible , and that the -moments of all traffic flows are bounded from above by .then , \leq \frac{m(k^*,\alpha , b)}{(1-\rho)^{\alpha } } , \nonumber\ ] ] where is a constant that depends only on , , and .moreover , under any stabilizing scheduling policy \geq \frac{m'(\alpha)}{(1-\rho)^{\alpha } } , \nonumber\ ] ] where is a constant that depends only on .if , for all , then theorem 3 implies that : \leq \frac{m(k^*,\alpha , b)}{(1-\rho)^{\alpha } } , \nonumber\ ] ] where is a constant that depends only on , , and . on the other hand , theorem 2.1 of implies that under any stabilizing scheduling policy there exists an absolute constant , such that \geq \frac{\tilde{m}}{(1-\rho)}. \nonumber\ ] ] utilizing jensen s inequality, we have : & \geq \sum_{f=1}^f ( e[q_f])^{\alpha } \nonumber \\ & \geq \frac{1}{f^{\alpha } } \big ( \sum_{f=1}^f e[q_f ] \big)^{\alpha}. \nonumber\end{aligned}\ ] ] consequently , there exists a constant that depends only on , such that \geq \frac{m'(\alpha)}{(1-\rho)^{\alpha } } , \nonumber\ ] ] under any stabilizing scheduling policy .similar scaling results appear in queueing theory , mostly in the context of single - server queues , e.g. , see chapter 3 of .more recently , results of this flavor have been shown for particular queueing networks , such as input - queued switches .all the related work , though , concerns the scaling of first moments .corollary 2 gives the precise scaling of higher order steady - state queue length moments with traffic intensity , and shows that max - weight- achieves the * optimal scaling*. we now turn our attention to the performance of the max - weight scheduling policy under bernoulli traffic , i.e. 
, when each of the arrival processes is an independent bernoulli process with parameter .we denote by the maximum number of traffic flows that any feasible schedule can serve .* corollary 3 : ( scaling under bernoulli traffic ) * consider the single - hop queueing network described in section 2 under the max - weight scheduling policy .assume that the traffic arriving to the network is bernoulli , with traffic intensity .then , \leq 2 \cdot k^ * \cdot s_{\max } \cdot \big ( \frac{1+\rho}{1-\rho } \big ) .\nonumber\ ] ] if all traffic flows are light - tailed and all the -parameters are equal to one , a more careful accounting in the proof of theorem 3 provides the following tighter upper bound : \leq \frac{2k^*}{1-\rho } \cdot \big ( s_{\max } + \sum_{f=1}^f e[a_f^2(0 ) ] \big ) .\nonumber\ ] ] if the traffic arriving to the network is bernoulli , then =\lambda_f ] queue , from which we infer that = \theta \big ( \frac{1}{1-\rho } \big) ] is finite .we call the aggregate length of queue during a regeneration cycle , and write , the random variable where and represent the first two ( or , in general , two consecutive ) regeneration epochs of the network . initially , we prove by contradiction that ] is infinite .using a truncation argument , similar to the one in lemma 4 of appendix 1.3 , it can be shown that ] is finite .hence , ] is finite , the ergodic theorem for regenerative processes implies that \qquad \mbox{w.p.1 } ; \nonumber\ ] ] see . moreover , since the network is stable under a regenerative scheduling policy , see theorem 2b of .the sequence is also a ( possibly delayed ) positive recurrent regenerative process .then , the ergodic theorem for regenerative processes and theorem 2e of imply that \qquad \mbox{w.p.1 } , \nonumber\end{aligned}\ ] ] and = p_f \cdot e[d_f ] .\nonumber\ ] ] to summarize , starting with the assumption that ] . the same can be shown if we start with the assumption ] is finite , for some . since every file has at least one packet , we have argued that under a regenerative scheduling policy , the sequences and converge in distribution .so , taking the limit as goes to infinity , we have : which , in turn , implies that \geq e[l_f ] .\nonumber\ ] ] combining this inequality with little s law and the assumption that ] is infinite , .the end - to - end delay of a file is bounded from below by the length of the respective queue upon its arrival , since the service discipline within each queue is `` first come , first served . ''so , we have argued that under a regenerative scheduling policy , the sequences and converge in distribution .so , taking the limit as goes to infinity , we have combining this with the basta property , which results in \geq e[q_f ] .\nonumber\ ] ] finally , the assumption that ] . for any given , there exists a constant , such that we define an event by by the strong law of large numbers , .because the sequence of events is nondecreasing , the continuity property of probabilities implies that .let us therefore fix some such that .let us consider the event we choose large enough so that and .note that note also that when both and occur , then so that the latter event has positive probability , which is the desired result follows . consider the single - hop queueing network described in section 2 under a regenerative scheduling policy . by definition, there exists a sequence of stopping times , which constitutes a ( possibly delayed ) renewal process , i.e. 
, the sequence is iid .moreover , the lattice distribution of cycle lengths has span equal to one and finite expectation . for ,let be an instantaneous reward on this renewal process , which is assumed to be an arbitrary function of .we define the truncated reward as , where is a positive integer . under a regenerative scheduling policy , the sequences and are ( possibly delayed ) aperiodic and positive recurrent regenerative processes .consequently , they converge in distribution , and their limiting distributions do not depend on ; see .let and be generic random variables distributed according to these limiting distributions .we denote by the aggregate reward , i.e. , the reward accumulated over a regeneration cycle .similarly , represents the truncated aggregate reward .* lemma 4 : * consider the single - hop queueing network described in section 2 under a regenerative scheduling policy .suppose that there exists a random variable with infinite expectation , and a nondecreasing function , such that , and \leq e[r^m_{agg } ] .\tag{1}\ ] ] then , =\infty . \nonumber\ ] ] by definition , cycle lengths have finite expectation , and ] .then , the renewal reward theorem implies that }{e[\tau_1-\tau_0 ] } = \lim_{t \to \infty } \frac{1}{t } \sum_{t=0}^{t-1 } r^m(t ) , \qquad \mbox{w.p.1 } ; \tag{2}\ ] ] see section 3.4 of .the sequence is a ( possibly delayed ) positive recurrent regenerative process , which is also uniformly bounded by .then , the ergodic theorem for regenerative processes implies that , \qquad \mbox{w.p.1 } ; \tag{3}\ ] ] see .( 1)-(3 ) give : }{e[\tau_1-\tau_0 ] } \leq e[\min\{r , m\ } ] .\nonumber\ ] ] by taking the limit as goes to infinity on both sides , and using the monotone convergence theorem , we obtain }{e[\tau_1-\tau_0 ] } \leq e[r ] ; \nonumber\ ] ] see section 5.3 of .finally , the fact that has infinite expectation implies that =\infty .\nonumber\ ] ]consider a heavy - tailed traffic flow . we will show that under any regenerative scheduling policy : =\infty .\nonumber\ ] ] combined with lemma 1 , this will imply that traffic flow is delay unstable .consider a fictitious queue , denoted by , which has exactly the same arrivals and initial length as queue , but is served at unit rate whenever nonempty .we denote by the length of queue at time slot .since the arriving traffic is assumed admissible , the queue length process converges to a limiting distribution .an easy , inductive argument can show that under a regenerative scheduling policy , the length of queue dominates the length of queue at all time slots .this implies that taking the limit as goes to infinity , and using the fact that both queue length processes converge in distribution , we have : in order to prove the desired result , it suffices to show that =\infty .\nonumber\ ] ] the time slots that initiate busy periods of queue constitute regeneration epochs .denote by the length of the cycle .the random variables are iid copies of some nonnegative random variable , with finite first moment ; this is because the length of queue is a positive recurrent markov chain , and the empty state is recurrent .we define an instantaneous reward on this renewal process : where is some finite integer . without loss of generality , assume that a busy period starts at time slot 0 , and let be the random size of the file that initiates it .since queue is served at unit rate , its length is at least packets over a time period of length at least time slots .this implies that the aggregate reward , i.e. 
, the reward accumulated over a renewal period , is bounded from below by consequently , the expected aggregate reward is bounded from below by & \geq \sum_{b=1}^{\infty } \min \big\ { \frac{b^2}{4 } , m^2 \big\ } \cdot p(a_h(0)=b ) \nonumber \\ & = \sum_{b=0}^{\infty } \min \big\ { \frac{b^2}{4 } , m^2 \big\ } \cdot p(a_h(0)=b ) \nonumber \\ & = e \big[\min \big\ { \frac{a_h^2(0)}{4 } , m^2 \big\ } \big ] .\nonumber\end{aligned}\ ] ] then , lemma 4 ( see appendix 1.3 ) applied to , implies that =\infty ] .consider the single - hop queueing network of figure 3 under the max - weight scheduling policy .assume that traffic flow 1 is heavy - tailed , traffic flows 2 and 3 are light - tailed , and also that .we will show that =\infty .\nonumber\ ] ] combined with lemma 1 , this will imply the delay instability of queue 2 .our proof is based on renewal theory , using a strategy similar to the one in the proof of theorem 2 .the time slots that initiate busy periods of the network constitute regeneration epochs .denote by the length of the cycle .the random variables can be viewed as iid copies of some nonnegative random variable , with finite first moment ; this is because the network is stable under the max - weight policy and the empty state is recurrent .we define an instantaneous reward on this renewal process : where is a positive integer . without loss of generality , assume that a renewal period of the system starts at time slot 0 .consider the set of sample paths of the network , where at time slot 0 , queue 1 receives a file of size packets , and all other queues receive no traffic ; we denote this set of sample paths by . clearly , the event has positive probability , as long as is in the support of , which we henceforth assume : our * proof strategy * is as follows : initially , queue 3 does not receive service under max - weight , so it starts building up . at the time slotwhen the service switches from schedule to schedule , and if the arrival processes of all traffic flows exhibit their `` average '' behavior , queues 1 and 3 are proportional to , whereas queue 2 remains small .then , max - weight will start draining the weights of the two feasible schedules at roughly the same rate , until one of them empties .let denote the departure rate from queue during this period .roughly speaking , the departure rates are the solution to the following system of linear equations : the last two equations follow from the facts that max - weight is a work - conserving policy , and that queues 1 and 2 are served simultaneously . if the rate at which traffic arrives to queue 2 is greater than the rate at which it departs from it , i.e. , or , equivalently , then queue 2 builds up during this time period , which is proportional to .this implies that =\infty ] .consider a set of feasible schedules such that : notice that where denotes the closure of the set .this is because we have a convex combination of feasible schedules , and the stability region is known to be a convex set ; see section 3.2 of .moreover , where denotes the -dimensional vector of ones .a well - known monotonicity property of the stability region is the following : if componentwise , and , then . 
using this property , we have : this , in turn , implies the existence of nonnegative numbers , adding up to 1 , and of feasible schedules , such that : under the max - weight- scheduling policy the sequence is a time - homogeneous , irreducible , and aperiodic markov chain on the countable state - space .we will prove that this markov chain is also positive recurrent , and we will establish upper bounds for the -moments of the steady - state queue lengths , provided that <\infty ]. then , \= v_f(q_f(t ) ) + e[\delta_f(t ) \mid q(t ) ] \cdot q_f^{\alpha_f}(t ) + e\big[\frac{\delta_f^2(t)}{2 } \cdot \alpha_f \cdot \xi(t)^{\alpha_f-1 } \ \big| \q(t)\big ] .\tag{25}\ ] ] since and , the last term can be bounded from above by \leq e\big[\frac{\delta_f^2(t)}{2 } \cdot \alpha_f \cdot ( q_f(t)+a_f(t))^{\alpha_f-1 } \q(t)\big ] .\tag{26}\ ] ] eqs .( 26)-(28 ) imply that & \leq 2^{\alpha_f-2 } \cdot \alpha_f \cdot \big ( e[a_f^2(t ) ] + 1 \big ) \cdot q_f^{\alpha_f-1}(t ) \nonumber \\ & + 2^{\alpha_f-2 } \cdot \alpha_f \cdot \big ( e[a_f^{\alpha_f+1}(t ) ] + e[a_f^{\alpha_f-1}(t ) ] \big ) \nonumber \\ & \leq k \cdot q_f^{\alpha_f-1}(t)+k , \tag{29}\end{aligned}\ ] ] where + 1 \big) ] .summing over all , gives : & \leq v(q(t ) ) + \sum_{f=1}^f ( \lambda_f - s_f(t ) \cdot 1_{\{q_f(t)>0\ } } ) \cdot q_f^{\alpha_f}(t ) \nonumber \\ & + \frac{1-\rho}{2 k^ * } \cdot \sum_{f=1}^f q_f^{\alpha_f}(t ) + \sum_{f=1}^f h \big ( \rho , k^*,\alpha_f , e[a_f^{\alpha_f+1}(0 ) ] \big ) .\nonumber\end{aligned}\ ] ] taking into account eq .( 20 ) , we have : & \leq v(q(t ) ) -\frac{1-\rho}{2 k^ * } \cdot \sum_{f=1}^f q_f^{\alpha_f}(t ) + \sum_{f=1}^f h \big ( \rho , k^*,\alpha_f , e[a_f^{\alpha_f+1}(0 ) ] \big ) \nonumber \\ & + \sum_{f=1}^f \big(\sum_{j=1}^{j } \theta_j \cdot s_f^j - s_f(t ) \big ) \cdot q_f^{\alpha_f}(t ) .\nonumber\end{aligned}\ ] ] by the definition of the max - weight- scheduling policy , the last term is nonpositive .so , \ \leq \v(q(t ) ) -\frac{1-\rho}{2 k^ * } \cdot \sum_{f=1}^f q_f^{\alpha_f}(t ) + \sum_{f=1}^f h \big ( \rho , k^*,\alpha_f , e[a_f^{\alpha_f+1}(0 ) ] \big ) . \nonumber\ ] ] then , the foster - lyapunov stability criterion and moment bound ( e.g. , see corollary 2.1.5 of ) implies that the sequence converges in distribution .moreover , its limiting distribution does not depend on , and satisfies \leq \frac{2 k^*}{1-\rho } \cdot \sum_{f=1}^f h \big ( \rho , k^*,\alpha_f , e[a_f^{\alpha_f+1}(0 ) ] \big ) .\nonumber\ ] ] based on this , it can be verified that the sequence is a ( possibly delayed ) aperiodic and positive recurrent regenerative process .hence , it also converges in distribution , and its limiting distribution does not depend on ; see .
we consider the problem of packet scheduling in single - hop queueing networks , and analyze the impact of heavy - tailed traffic on the performance of max - weight scheduling . as a performance metric we use the delay stability of traffic flows : a traffic flow is delay stable if its expected steady - state delay is finite , and delay unstable otherwise . first , we show that a heavy - tailed traffic flow is delay unstable under any scheduling policy . then , we focus on the celebrated max - weight scheduling policy , and show that a light - tailed flow that conflicts with a heavy - tailed flow is also delay unstable . this is true irrespective of the rate or the tail distribution of the light - tailed flow , or other scheduling constraints in the network . surprisingly , we show that a light - tailed flow can be delay unstable , even when it does not conflict with heavy - tailed traffic . furthermore , delay stability in this case may depend on the rate of the light - tailed flow . finally , we turn our attention to the class of max - weight- scheduling policies ; we show that if the -parameters are chosen suitably , then the sum of the -moments of the steady - state queue lengths is finite . we provide an explicit upper bound for the latter quantity , from which we derive results related to the delay stability of traffic flows , and the scaling of moments of steady - state queue lengths with traffic intensity .
periodic behavior is ubiquitous in living systems , from neural oscillations to circadian cycles .an example of a well studied biochemical oscillation is the phosphorylation - dephosphorylation cycle of the kaic protein .this phosphorylation - dephosphorylation cycle functions as a circadian clock allowing a cyanobacterium to tell time , i.e. , to oscillate in synchrony with day - night changes .another example of a biochemical oscillation that is related to a phosphorylation - dephosphorylation cycle of a protein happens in the activator - inhibitor model recently analyzed in .more generally , biochemical oscillations are typically associated with a protein that goes through a cyclic sequence of states .any such protein can be taken as an example of a brownian clock .brownian clocks are stochastic and , therefore , some uncertainty must be associated with them . quite generally ,uncertainty related to stochastic changes at the molecular level is an important issue in biophysics .for example , the berg and purcell limit on the maximal precision of a receptor that measures an external ligand concentration is such a fundamental result .the relation between precision of some kind and energy dissipation in biophysics has become an active area of research , often using concepts from stochastic thermodynamics .specific examples include a relation between energy dissipation and adaptation error in chemotaxis , bounds on the precision of estimating an external ligand concentration by a receptor related to energy consumption , a relation between energy dissipation and accuracy in biochemical oscillations , and the relation between information - theoretical quantities and entropy production in biophysically inspired models .this last example is also related to the growing field of information and thermodynamics .the question we investigate in this paper concerns the relation between precision and dissipation in brownian clocks .given that the clock should have a certain precision , what is the minimal energy budget required to run a clock with this precision ?we model a brownian clock as an inhomogeneous biased random walk on a ring .the different states of the clock can be interpreted as different states of a protein that influences a biochemical oscillation ; changes in these states would correspond to , e.g. , conformational changes or phosphorylation steps .we consider two classes of clocks .first , we analyze a clock driven by a constant thermodynamic force that can be generated by , for example , atp . for this class ,the general thermodynamic uncertainty relation we obtained in ( see also ) , establishes the best precision that can be obtained given a certain energy budget . within this classa precise clock requires a minimal energy dissipation .the second class is represented by a clock that is driven by a periodic external protocol .systems driven by such protocols reach a periodic steady state and are known as stochastic pumps " .experimental examples of such systems are the generation of rotational motion in an artificial molecular motor driven by an external protocol and the pumping of ions across membranes in red blood cells driven by an oscillating electric field . 
we show that a clock in this class can achieve high precision with an arbitrarily small energy budget .hence , a clock in this second class is fundamentally different from a clock driven by a fixed thermodynamic force .the mathematical treatment of systems that reach a periodic steady state , which are driven by deterministic protocols , is typically difficult . in particular , calculating the dispersion associated with the clock can be quite challenging . for our investigation on the fundamental differences between the two classes we consider a generic theoretical framework for which the protocol changes at random time intervals .such protocols have been realized in experiments . within this theoretical frameworkthe system , i.e. , the clock , and the external protocol together form a bipartite markov process .this property considerably simplifies calculations ; in particular , it allows us to calculate analytically the dispersion of the clock . using these analytical tools we find the optimal parameters that lead to a clock that can achieve high precision with arbitrarily low dissipation . with this proper tuning in hands ,we confirm numerically that the corresponding clock with a deterministic protocol can also achieve high precision with vanishing dissipation . for protocols that change at stochastic times ,we prove that given a periodic steady state with a certain probability distribution , it is always possible to build a steady state of a bipartite markov process , which comprises the system and the external protocol , that has the same probability distribution .this paper is organized as follows . in sec .[ mainsec2 ] we discuss a clock driven by a fixed thermodynamic force .our main result comes in sec .[ mainsec3 ] , where we show that a clock driven by an external protocol can combine high precision with arbitrarily low dissipation .we conclude in sec .[ mainsec4 ] .appendix [ app1 ] contains the thermodynamics of systems driven by external stochastic protocols . 
in appendix [ app2 ]we prove the equivalence between a periodic steady state and a steady state of a bipartite process composed of both system and external protocol .more details for the model analyzed in sec .[ mainsec3 ] are given in appendix [ app3 ] .the simplest model of a brownian clock is a biased random walk on a ring with , possibly different , states and arbitrary rates , as illustrated in fig .[ fig1main ] for .the transition rate from state to state is , whereas the transition rate from to is .time is counted by the number of full revolutions of the pointer .whenever the pointer undergoes the transition from state to state , one unit of clock time " has passed .since the clock is stochastic , a backward step from state to state could happen .if , in the next step , the pointer moves from to , one should not attribute the passing of a second time unit to such a sequence of events .hence , one counts a backward steps from to as a unit to prevent such over - counting .the stochastic variable that counts time thus is a fluctuating current that increases by one if there is a transition from to and it decreases by one if there is a transition from to .moves to state with rate , or to state with rate , and so on.,width=328 ] in the stationary state , the average is given by the probability current where the clock runs for a total time .the inverse is the average time for the clock to complete a cycle , which should correspond to the average period of oscillation of the biological function regulated by the clock .an alternative random variable for counting time would be the cycle completion time which is , however , well - defined only if .measuring time with this clock comes with a finite uncertainty where we have introduced the diffusion coefficient the clock is driven in the clockwise direction by , for example , a chemical potential difference that is related to the transition rates by the generalized detailed balance condition .this condition for this clock reads where and we set boltzmann constant multiplied by the temperature to in the equations throughout the paper . each revolution of the clock cost an amount of free energy . hence running the clock for a total time costs an average free energy the uncertainty of the clock , the cost of running it and its number of states are constrained by a universal thermodynamic uncertainty relation , which we discuss in the following . for a biased random walk with uniform rates and , the current is and the diffusion coefficient is . for this case , the cost in eq .times in eq . gives ] are given by \ ] ] and .\ ] ] besides the variable we also consider a variable , which is convenient for our calculations . whereas the variable marks a position in the clock the variable is determined by the energy of the state . if the external protocol changes during the period , for the variable the transition rates rotate in the clockwise direction , whereas the variable undergoes an effective backward transition , as illustrated in fig .[ fig2main ] .the random variable is the same as for the previous clock : counts the number of transitions between and in the clockwise direction minus the number of transitions in the anticlockwise direction .it turns out that analytical calculations with the above model that reaches a periodic steady state are complicated . 
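Before moving on to the periodically driven clock, here is a quick numerical check of the fixed-force bound stated earlier in this section. The closed forms for the uniform ring were lost in extraction, so the sketch states them explicitly as assumptions: for the cycle-counting variable, J = (k+ - k-)/N and D = (k+ + k-)/(2 N^2) are the standard long-time results, and the product of cost and squared uncertainty should never drop below 2.

```python
import numpy as np

# Uniform biased random walk on a ring of N states with forward rate kp and
# backward rate km (assumed closed forms for the long-time limit):
#   current    J     = (kp - km) / N
#   diffusion  D     = (kp + km) / (2 N**2)
#   entropy production rate sigma = (kp - km) * ln(kp / km)
# The thermodynamic uncertainty relation requires sigma * 2 D / J**2 >= 2.
def cost_times_uncertainty(kp, km, N=10):
    J = (kp - km) / N
    D = (kp + km) / (2 * N**2)
    sigma = (kp - km) * np.log(kp / km)
    return sigma * 2 * D / J**2          # N cancels in this product

for ratio in (1.01, 1.1, 2.0, 10.0):
    print(ratio, cost_times_uncertainty(ratio, 1.0))
```

The product depends only on the ratio k+/k-, approaching the bound of 2 in the linear-response limit and growing with the affinity, consistent with the discussion of the thermodynamic uncertainty relation above.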
in particular ,a method to calculate the diffusion coefficient for arbitrary is not available .however , if we consider a protocol that changes at stochastic times with a rate , analytical calculations become simpler . in appendix [ app1 ] , we explain a general theory for such stochastic protocols , along the lines of .we show that an analytical expression for the diffusion constant can be obtained in this case .furthermore , in appendix [ app2 ] we show that given a periodic steady state arising from a continuous deterministic periodic protocol , it is always possible to build a bipartite process comprising the system and the stochastic protocol that has the same probability distribution as the periodic steady state .the green backward arrows represent a jump with rate .a backward jump is equivalent to a forward rotation of the rates represented in fig .[ fig2main ] ., width=328 ] for the clock with stochastic protocol , the energies and energy barriers change at stochastic times , with a rate .the precise definition of the model for general is presented in appendix [ app3 ] . here in the main textwe discuss the case that is represented in fig [ fig3main ] .it turns out that the full bipartite process can be reduced to a markov process with four states only . in this reduced descriptionwe use the variable .the transition rates are related to one rotation of the transition rates .effectively , such a rotation corresponds to a backward jump of this variable , as illustrated for the deterministic protocol in fig .[ fig2main ] and explained in more detail in appendix [ app3 ] .as explained in appendix [ app3 ] , we can calculate current , entropy production rate , and diffusionconstant analytically for this clock with the stochastic protocol , which lead to the product as a function of the transition rates .the entropy production is equal to the rate of work done on the system due to the periodic variation of the external protocol .similar to the previous clock driven by a fixed thermodynamic force , if this clock runs for a time , the energetic cost is and the uncertainty is .for the simplest clock with , the minimum value of the product turns out to be , which is smaller than the universal limit for systems driven by a fixed thermodynamic force .we have obtained this product as a function of the transition rates up to . minimizing numerically , we find that the minimum decreases with , and that the transition rates at the minimum have the properties and .thus , in this limit , the energy barrier between states and becomes infinite , effectively blocking transitions between these states .moreover , the internal transitions are much faster than changes in the protocol , i.e. , the system equilibrates before the next change in the external protocol happens , which is common in studies about periodically driven systems . for this clock ,the product is minimized in the far from equilibrium regime , in contrast to the clock from sec .[ mainsec2 ] , for which the minimum occurs in the linear response regime . in this limit , the expressions for current and diffusion coefficient become and where .these expressions can be obtained by mapping the model in this special limit onto a biased random walk , as explained in appendix [ app3 ] . the basic idea behindthis mapping is to consider the position of the particle , i.e. 
, the state of the clock , in relation to the barrier .if the barrier moves and the particle is in state , then the particle crosses the barrier and moves to state , corresponding to a backward step of size of the random walk .otherwise , the particle moves one step closer to the barrier , i.e. , from state to , corresponding to a forward step of size .the entropy production is calculated with the expression in eq . ,which gives this expression for the entropy production , which is the rate of work done on the system , can be understood as follows .if there is a jump that changes the external protocol , the work done on the system is given by the energy change of the system after the jump .if the system is in a state , this energy change is .therefore , the rate of work done on the system in eq . is times a sum over all state of this energy difference multiplied by the probability of the system being in state before an external jump , which is . in marked contrast to the clock driven by a fixed thermodynamic force , the cost for this periodically driven clock is , in general , not proportional to the current that is given in eq . .before discussing the optimal energy profile that minimizes the product we consider the simple profile where is the kronecker delta . in this case , using eqs . , , and the product becomes }{(n-1)(1-e^{-e})}. \label{prodfor1}\ ] ]this expression implies a fundamental difference between the two kinds of clocks . if we choose the parameters and in such a way that , the product can reach an arbitrarily small value .for example , for and we obtain .the fact that it is possible to build a clock that has small uncertainty and dissipates arbitrarily low energy is the main result of this paper .such a dissipation - less clock is in stark contrast with a clock driven by a fixed thermodynamic force , which is constrained by the thermodynamic uncertainty relation .a physical explanation for this result is as follows .let us consider the case where is large enough so that the particle is practically never at position when the barrier moves forward .this condition amounts to . in this case, the position of the particle with respect to the energy barrier always diminishes by one when the barrier moves .the current is then given by the velocity of the barrier and the dispersion is , which is the dispersion of the random walk performed by the barrier that has only forward transitions with rate .work is done on the system only if the particle is at state when the barrier moves , which happens with probability . 
for large , the entropy production is then given by .the product of cost and uncertainty becomes .the condition guarantees a small dissipation , leading to a product that can be arbitrarily close to .the mechanism that allows for this scaling of the product with is the large energy barrier that determines the current and the dispersion .such a mechanism can not be realized with the clock driven by a fixed thermodynamic force from sec .[ mainsec2 ] .for which the product is minimized.,width=328 ] in the limit where the expressions , , and are valid , the minimum of is achieved with an optimal energy profile that depends on , as shown in fig [ fig4main ] .the negative value of the minimum of this energy profile grows with , and for larger the profile becomes flatter in the middle .hence , for large , the probability to be in the state with highest energy goes to zero and , from expressions and , and , respectively .current and diffusion are then determined by the unidirectional random walk performed by the barrier , as is the case of the simple profile from eq . with a large .we verified numerically that for this optimal profile the entropy production rate behaves as .the product can then become arbitrarily small for large .for example , for a clock with states and with an optimal energy profile , we get . hence , with this clock , an uncertainty costs approximately , which is much less then the minimal cost of found above for a clock with the same precision and driven by a fixed thermodynamic force .this clock with an optimal energy profile also relies on the mechanism of a large barrier that controls the dispersion and current of the clock , with the difference that the energy dissipation can be suppressed as .there are other energy profiles that lead to a dissipation - less and precise clock . if we choose a simple energy profile , with , from eqs ., , and , we obtain . a dissipation - less and precise clock can also be obtained with a deterministic protocol .we have confirmed with numerical simulations up to , using the optimal energy profile from fig .[ fig4main ] , that for a deterministic protocol and are the same as given by and , while becomes smaller .such a smaller diffusion comes from the fact that the deterministic protocol does not have the randomness associated with the waiting times for a change in the protocol .therefore , the product is even smaller in this case and also vanishes for large . for illustrative purposes we compare a specific clock driven by an external protocol with the results for clocks driven by a fixed thermodynamic force .in fig . [ fig5main ] , we show a contour plot of the product for .the energies of the clock are set to , , and , which is the optimal profile for .the parameters and determine the other transition rates in the following way .the parameters related to the energy barriers are set to and .the rate of change of the protocol is set to .hence , for large and , the product reaches its minimal value for , which is . for a clock driven by an external protocol .the parameters of the clock are set to , , , , , and . below the lines ,the product is smaller than , which is the optimal value of this product for a clock driven by a fixed affinity and ., width=328 ] this externally driven clock can be compared to an optimal clock driven by a fixed thermodynamic force with the same number of states .the product for the optimal clock driven by a fixed affinity saturates the inequality , i.e. 
, for this optimal clock follows the relation , which is an increasing function of the affinity .close to equilibrium , , the product reaches the minimal value .hence , a clock driven by a fixed thermodynamic force can not have a better tradeoff relation between cost and precision than the externally driven clock inside the region limited by the line in fig .[ fig5main ] .increasing the affinity leads to a larger region for which the externally driven clock has an smaller product .we have shown that a brownian clock driven by an external protocol can achieve small uncertainty in a dissipation - less manner .this result constitutes a fundamental difference between systems driven by a fixed thermodynamic force and systems driven by an external protocol . for the first case , small uncertainty does have a fundamental cost associated with it , which is determined by the thermodynamic uncertainty relation from .more realistic models related to biochemical oscillations do not typically have a simple space of states like the ring geometry considered in this paper .however , this feature does not represent a limitation in our fundamental bounds .first , the thermodynamic uncertainty relation is not limited to the ring geometry but valid even for any multicyclic networks of states .second , we have shown that it is possible to reach with a specific model , which is sufficient to prove that systems driven by an external periodic protocol can , in principle , achieve high precision with vanishingly small dissipation .main features of the protocol that achieves high precision in a dissipation - less manner are internal transitions much faster than changes in the external protocol , a large number of states , and a large energy barrier that effectively blocks transitions between one pair of states. this third property does not allow for cycle completions without a change in the external protocol .it remains to be seen whether further classes of protocols that also lead to exists . in particular , a quite different externally driven system , known as a hidden pump , that leads to a finite current with an arbitrarily low entropy production has been proposed in .it would be worthwhile to verify whether such hidden pumps can also be used to build a clock that reaches a finite precision with arbitrarily low dissipation .the theoretical framework for systems driven by a protocol that changes at stochastic times considered here was crucial to obtain our main result .with this theory the system and external protocol together form a bipartite markov process and quantities like the diffusion coefficient can be calculated with standard methods for steady states .this option represents a major advantage in relation to standard deterministic protocols that reach a periodic steady state , where a similar method to calculate the diffusion coefficient is not available .it is possible to consider a stochastic protocol that also has reversed jumps . in this case, the entropy production associated with generating the external protocol is finite .this well defined quantity can then be taken into account in a way consistent with thermodynamics . 
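As a parenthetical illustration of the "standard methods for steady states" invoked above for current and dispersion, the sketch below tilts the generator of a small Markov chain and reads off J and D from its dominant eigenvalue (the scaled cumulant generating function). This route is equivalent to, but shorter than, the characteristic-polynomial method of Koza used in the appendix; the finite-difference step and the toy three-state ring are arbitrary choices.

```python
import numpy as np

# For a current that counts i -> j jumps (minus j -> i jumps), tilt the
# generator by exp(+z) on the i -> j rate and exp(-z) on the j -> i rate.
# The largest eigenvalue lambda(z) of the tilted generator gives
#   J = lambda'(0),   D = lambda''(0) / 2.
def current_and_diffusion(L, i, j, dz=1e-4):
    def scgf(z):
        Lz = L.copy()
        Lz[j, i] *= np.exp(z)     # column convention: L[b, a] is the rate a -> b
        Lz[i, j] *= np.exp(-z)
        return np.max(np.linalg.eigvals(Lz).real)
    lp, l0, lm = scgf(dz), scgf(0.0), scgf(-dz)
    J = (lp - lm) / (2 * dz)
    D = (lp - 2 * l0 + lm) / (2 * dz**2)
    return J, D

# toy check: 3-state ring with uniform rates kp = 2, km = 1;
# expect J = (kp - km)/N = 1/3 and D = (kp + km)/(2 N**2) = 1/6
kp, km, N = 2.0, 1.0, 3
L = np.zeros((N, N))
for a in range(N):
    L[(a + 1) % N, a] += kp
    L[(a - 1) % N, a] += km
np.fill_diagonal(L, -L.sum(axis=0))
print(current_and_diffusion(L, 0, 1))
```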
if one chooses to also consider the entropy production due to the changes in the external protocol as part of the thermodynamic cost , then the thermodynamic uncertainty relation from sec .[ mainsec2 ] is again valid .this result follows from the fact that the uncertainty relation from is valid for any markov process , including the full bipartite process of system and protocol together . from a physical perspective, this observation is not surprising .if we also take the cost of generating the stochastic protocol into account , then our full bipartite process is a thermodynamic system driven by a fixed force , which obeys the thermodynamic uncertainty relation .for example , this cost of the external protocol would be of interest if the external protocol is driven by some chemical reaction . however , if the protocol is directed by some truly external process , e.g. , day light changes that influence a circadian clock or an external field applied to a system , then the entropic cost of the external protocol is irrelevant , independent on whether the protocol is deterministic or stochastic .it is in this case that our definition of cost for a system driven by an external protocol is meaningful .finally , the experimental confirmation of both the thermodynamic uncertainty relation for systems driven by a fixed thermodynamic force and the limit of high precision in the output with small dissipation for a system driven by an external periodic protocol remains an open challenge .promising candidates for the experimental realization of a brownian clock are single molecules , colloidal particles , and small electronic systems .in this appendix , we consider a theoretical framework for systems driven by periodic protocols that change at stochastic times . as a simple example of a periodic steady state we consider a two state system .the `` lower '' level has energy while the `` upper '' level has a time dependent periodic energy where is the period .the transition rates fulfill the detailed balance relation .the master equation reads (t),\ ] ] where is the probability that the level with energy is occupied . with the particular choice and the initial condition , the solution of this equation reads ''}dt'.\label{pttwostate}\ ] ] this solution has the property that , for large , the system reaches a periodic steady state independent of initial conditions that fulfills the relation . the function in a period obtained from eq .[ pttwostate ] is shown in fig .[ fig1 ] . .for the second case the horizontal axis is . for the periodic steady state we plot and for the steady statethe conditional probability for different values of .the parameters are set to and . , width=328 ] and have energy and , respectively .the protocol changes from to with a rate .,width=328 ] instead of an energy that changes continuously and deterministically with time we now consider discontinuous changes that take place at random times , as shown in fig .particularly , the transition rates for changes in the state of the system are now written as , where plays a role similar to in eq . .the detailed balance condition for jumps changing the state of the system reads .the period is partitioned in pieces , leading to .the energy can change to with jumps that take place with a rate , where for the jump is to .the reversed transition leading to an energy change from to is not allowed . 
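The two-state example just described can be reproduced numerically. The sketch below integrates the master equation for the occupation probability of the upper level under a deterministic periodic protocol; the sinusoidal energy E(t), the prefactor, and the symmetric detailed-balance split of the rates are assumptions standing in for the appendix's "particular choice", which did not survive extraction.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state system: lower level at energy 0, upper level at periodic E(t).
# All parameter choices below are illustrative assumptions.
tau, E0, Gamma = 1.0, 2.0, 5.0
E = lambda t: E0 * np.sin(2 * np.pi * t / tau)
k_up = lambda t: Gamma * np.exp(-E(t) / 2)    # lower -> upper
k_dn = lambda t: Gamma * np.exp(+E(t) / 2)    # upper -> lower (detailed balance: k_up/k_dn = e^{-E})

def rhs(t, p):                                # p = probability of the upper level
    return k_up(t) * (1 - p) - k_dn(t) * p

sol = solve_ivp(rhs, (0.0, 20 * tau), [0.0], dense_output=True, rtol=1e-8, atol=1e-10)
p = sol.sol
# after a transient the solution becomes periodic: p(t + tau) ~ p(t)
print(abs(p(18.3 * tau)[0] - p(19.3 * tau)[0]))
# average rate of work done on the system over one late period, (1/tau) * int p(t) dE/dt dt
ts = np.linspace(19 * tau, 20 * tau, 2001)
dEdt = E0 * (2 * np.pi / tau) * np.cos(2 * np.pi * ts / tau)
print(np.mean(p(ts)[0] * dEdt))
```

After the transient the solution repeats with period tau, and the printed work rate is the periodic-steady-state average referred to in the comparison that follows.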
the external protocol and the system together form a bipartite markov process that has states ( see fig .[ fig2 ] ) .furthermore , the external protocol alone is a unicyclic markov process with the irreversible transitions . to match with the protocol in eq . , the rate is set to .the full markov process of system and protocol together reaches a stationary state , with the joint probability that the protocol is in state and the system is in a generic state denoted by .the marginal probability of the state of the protocol is . for the present case . comparing the periodic steady state with the stationary state ,the quantity analogous to the probability is the conditional probability , where denotes the state with energy .this conditional probability is compared to in fig .[ fig1 ] . clearly , for larger the conditional probability of the steady state tends to the probability in the periodic steady state .more generally , in appendix [ app2 ] we prove that for any periodic steady state it is possible to construct a steady state of a bipartite process with a stationary probability that converges to the probability of the periodic steady state in the limit . for both protocols the system is out of equilibrium due to the time variation of the energy levels . for the periodic steady statethe average rate of work done on the system is the integrand is just the probability of being in the upper state with energy multiplied by the rate of energy change .the expression for the rate of work done on the system for the model with stochastic jumps in the protocol is the sum in corresponds to the integral in in eq ., is the average fraction of time that the protocol spends in state during a period , is equivalent to , and is related to in eq . .in fig . [ fig3 ] we compare with . for large ,they become the same , which is a consequence of the convergence of the corresponding probabilities shown in fig .even if for smaller the quantitative discrepancy between and is noticeable , the qualitative behavior is still similar , i.e. , in all cases the rate of work done on the system is an increasing function of . as a function of .for the periodic steady state this rate is denoted by .,width=328 ] we now consider the general case that includes an arbitrary network of states beyond the ring geometry of the models in the main text , which is similar to the framework from .the system and the external protocol together form a markov process with states labeled by the variables for the state of the system and for the state of the external protocol .this full markov process is bipartite , i.e. , a transition changing both variables is not allowed .a state of the system with the external protocol in state has free energy .the transition rates for a change in the state of the system fulfill the generalized detailed balance relation where is a thermodynamic force or affinity and is a generalized distance . for example , if the transition from to is related to a chemical reaction then is the chemical potential difference driving the reaction and is the number of molecules consumed in the reaction . 
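For the stochastic-protocol counterpart, the full bipartite generator can be built explicitly and the stationary distribution, the conditional occupation of the upper level, and the work rate obtained by simple linear algebra. The sketch below does this for the same two-state system; the discretization N, the rate parametrization, and the reconstructed work-rate formula (gamma times the energy change at a protocol jump, weighted by the probability just before the jump) are assumptions consistent with the description above, since the displayed formulas were lost.

```python
import numpy as np

# Bipartite construction: the protocol is a unicyclic chain of N steps with
# forward jump rate gamma = N / tau (no reversed jumps); within protocol step n
# the system sees the frozen energy E_n.  Parameters are illustrative.
tau, E0, Gamma, N = 1.0, 2.0, 5.0, 64
En = E0 * np.sin(2 * np.pi * np.arange(N) / N)
gamma = N / tau

dim = 2 * N                          # composite state (i, n): index = i + 2n, i = 1 is the upper level
W = np.zeros((dim, dim))             # column convention: W[to, from]
for n in range(N):
    lo, up = 2 * n, 2 * n + 1
    W[up, lo] += Gamma * np.exp(-En[n] / 2)            # system jump lower -> upper
    W[lo, up] += Gamma * np.exp(+En[n] / 2)            # system jump upper -> lower
    for i in (0, 1):
        W[i + 2 * ((n + 1) % N), i + 2 * n] += gamma   # protocol jump n -> n + 1
np.fill_diagonal(W, -W.sum(axis=0))

# stationary distribution of the full bipartite process (null vector of W)
w, v = np.linalg.eig(W)
p = np.real(v[:, np.argmin(np.abs(w))])
p /= p.sum()

p_up_given_n = p[1::2] / (p[0::2] + p[1::2])           # conditional occupation of the upper level
work_rate = gamma * np.sum(p[1::2] * (En[(np.arange(N) + 1) % N] - En))
print(p_up_given_n[:4])
print(work_rate)
```

As N grows, the conditional probabilities and the work rate should approach the periodic-steady-state values from the previous sketch, which is the convergence illustrated in the figure comparisons of this appendix.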
a jump changing the external protocol from to takes place with rate , while the reversed jump is not allowed .the master equation for the full bipartite process then reads \nonumber\\ & + \left[\gamma_{n-1}p_i^{n-1}(t')-\gamma_{n}p_i^{n}(t')\right ] , \label{meq}\end{aligned}\ ] ] where is the probability that the system is at state and the external protocol at state at time .we use the variable in this master equation in order to stress the difference with the variable used for the periodic steady state . in the following we consider only the stationary distribution , which is simply denoted .the entropy production , which characterizes the rate of dissipated heat in an isothermal system , is defined as the above inequality is demonstrated in .this entropy production does not include jumps that lead to a change in the external protocol .the mathematical expression for the entropy production of the full markov process also contains a contribution that comes from these jumps .this contribution is related to the entropy production due to the external protocol ( see also ) . as usual for thermodynamic systems driven by an external protocol, we do not take such contribution , which is irrelevant for the second law in eq ., into account .the first law reads where is the rate of work done on the system and is the rate of increase of the internal energy . since , the rate of dissipated heat is . in the stationary state which , with eq ., leads to the equation in the stationary state the first law then reads .using equation we can rewrite the entropy production in the form where is a probability current .the second term on the right hand side of this equation is the work done by the external variation of the protocol .the first term is the work related to the affinity ; this term would be present even if the protocol was constant in time .for the model considered in sec .[ mainsec3 ] of the main text only the second term is present .we now compare expression with the expression for entropy production for a standard periodic steady state .the master equation for the periodic steady state is . \label{meqcont}\ ] ] where is the probability of the system being in state at time .the generalized detailed balance relation in this case reads ,\ ] ] where the time dependent quantities have a period .we assume that for large eq . reaches a periodic steady state with the property . from the average energy that is also periodic ,i.e. , we obtain (t)dt\nonumber\\ = \int_{0}^{\tau}\sum_{i } r_i^{ps}(t)\dot{e}_i(t)dt.\end{aligned}\ ] ] this equation is equivalent to eq . .the standard entropy production rate from stochastic thermodynamics for this periodic steady state is where .this expression is analogous to the entropy production .the problem of determining a periodic steady state probability analytically is typically complicated , whereas finding the probability distribution of a steady state in the case of stochastic changes in the external protocol can be much easier .this framework should then be useful also for the analysis of the qualitative behavior displayed by a system driven by a deterministic external protocol that is preserved in the case of a discretized stochastic protocol .a main advantage of the stochastic protocols we consider here is that we can determine the diffusion coefficient defined in eq . 
.for a general model defined by the master equation , we calculate the diffusion coefficient associated with an elementary current between states and : the random variable in eq .is such that if there is a jump from to it increases by one and if there is jump from to it decreases by one .this random variable is a standard probability current of a steady state , therefore , the method from koza ( see also ) can be used to calculate the current and diffusion coefficient in the following way .the -dimensional matrix , where is a real variable , is defined as the modified generator associated with the current is a matrix with dimension given by where is the identity matrix with dimension multiplied by . as explained in , we can obtain and , defined in eqs . and , respectively , from the coefficients of the characteristic polynomial associated with , which are defined through the relation .\ ] ] the current and diffusion coefficient are given by and where the lack of dependence in indicates evaluation of the function at and the primes denote derivatives with respect to .in this appendix we prove that for any given periodic steady state it is possible to construct a bipartite process that has a stationary distribution corresponding to the distribution of the periodic steady state .we consider a periodic steady state following the master equation , which can be written in the form where stochastic matrix has period , i.e. , , and is the probability vector with states . the periodic steady state .the period is discretized in small intervals so that in each time interval the transition rates can be taken as time - independent . in the nth - time intervalthe system then follows the master equation with time independent transition rates where and .the formal solution of this equation is where and the superscript ( ) denotes the initial ( final ) distribution of the system in the time interval 12 & 12#1212_12%12[1][0] _( , ) _ _ ( , ) link:\doibase 10.1126/science.1108451 [ * * , ( ) ] link:\doibase 10.1016/j.mib.2008.10.003 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.96.038303 [ * * , ( ) ] link:\doibase 10.1073/pnas.0608665104 [ * * , ( ) ] link:\doibase 10.1073/pnas.1007613107 [ * * , ( ) ] link:\doibase 10.1038/nphys3412 [ * * , ( ) ] link:\doibase 10.1016/s0006 - 3495(77)85544 - 6 [ * * , ( ) ] link:\doibase 10.1073/pnas.0504321102 [ * * , ( ) ] link:\doibase 10.1073/pnas.0804688105 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.103.158101 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.104.248101 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.109.218103 [ * * , ( ) ] link:\doibase 10.1016/j.bpj.2013.12.030 [ * * , ( ) ] link:\doibase 10.1146/annurev.physchem.58.032806.104550 [ * * , ( ) ] link:\doibase 10.1038/nphys2276 [ * * , ( ) ] link:\doibase 10.1073/pnas.1207814109 [ * * , ( ) ] link:\doibase 10.1103/physreve.87.042104 [ * * , ( ) ] link:\doibase 10.1371/journal.pcbi.1003300 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.113.148103 [ * * , ( ) ] link:\doibase 10.1073/pnas.1411524111 [ * * , ( ) ] link:\doibase 10.1103/physrevlett.113.258102 [ * * , ( ) ] link:\doibase 10.1371/journal.pcbi.1003974 [ * * , ( ) ] link:\doibase 10.1088/1367 - 2630/16/10/103024 [ * * , ( ) ] link:\doibase 10.1088/1367 - 2630/17/5/055026 [ * * , ( ) ] link:\doibase 10.1088/1742 - 5468/2015/01/p01014 [ * * , ( ) ] link:\doibase 10.1038/ncomms8498 [ * * , ( ) ] link:\doibase 10.1088/0034 - 4885/75/12/126001 [ * * , ( ) ] link:\doibase 10.1103/physreve.85.021104 [ * * , ( ) ] link:\doibase 
p. pietzonka, a. c. barato, and u. seifert, ``affinity- and topology-dependent bound on current fluctuations,'' j. phys. a: math. theor. 49, 34lt01 (2016). j. m. r. parrondo, ``reversible ratchets as brownian particles in an adiabatically changing periodic potential,'' phys. rev. e 57, 7297 (1998). r. d. astumian, ``adiabatic operation of a molecular machine,'' proc. natl. acad. sci. usa 104, 19715 (2007). g. verley, c. van den broeck, and m. esposito, ``work statistics in stochastically driven systems,'' new j. phys. 16, 095001 (2014). m. esposito and j. m. r. parrondo, ``stochastic thermodynamics of hidden pumps,'' phys. rev. e 91, 052114 (2015). b. b. machta, ``dissipation bound for thermodynamic control,'' phys. rev. lett. 115, 260603 (2015). [further bibliography entries survive only as bare doi strings and are omitted.] footnotes: in this case, this random variable is indeed equivalent to our random variable. this cycle completion time is more similar to the random variable analyzed in the work cited there, where the dispersion of the time at which a peak of a biochemical oscillation occurs was considered. since, strictly speaking, the cycle completion time is, due to the possibility of backward jumps, not a meaningful variable anymore, we stick with our identification of counting time. this mapping of a periodic steady state onto a bipartite system with a larger number of states is different from the mapping introduced earlier, where systems driven by a fixed force are mapped onto systems driven by an external protocol, and vice versa, with the same number of states.
brownian clocks are biomolecular networks that can count time. a paradigmatic example is given by proteins that go through a cycle, thus regulating some oscillatory behaviour in a living system. typically, such a cycle requires free energy, often provided by atp hydrolysis. we investigate the relation between the precision of such a clock and its thermodynamic costs. for clocks driven by a constant thermodynamic force, a given precision requires a minimal cost that diverges as the uncertainty of the clock vanishes. in marked contrast, we show that a clock driven by a periodic variation of an external protocol can achieve arbitrary precision at arbitrarily low cost. this result constitutes a fundamental difference between processes driven by a fixed thermodynamic force and those driven periodically. as a main technical tool, we map a periodically driven system with a deterministic protocol to one subject to an external protocol that changes at stochastic time intervals, which simplifies the calculations significantly. in the non-equilibrium steady state of the resulting bipartite markov process, the uncertainty of the clock can be deduced from the calculable dispersion of a corresponding current.
the large area telescope ( lat ) is the main instrument on the fermi gamma - ray space telescope , which was launched in june 2008 and has been surveying the gamma - ray sky since then .the data collected have led to numerous discoveries in this previously relatively unexplored energy regime .the lat consists of 16 towers in a 4x4 array , each incorporating a tracker and a cesium - iodide - based calorimeter , all surrounded by an tiled scintillator anti - coincidence detector .each tracker module contains 36 planes of silicon - strip detectors , paired orthogonally in 18 layers interspersed with tungsten foils .the hit strip numbers and associated information in each of the tracker planes are read out into buffers in two read controllers ( rcs ) , one at each end of the plane , and nine rcs are read into a buffer in one of eight cable controller ( cc ) , where the data are stored for assembly into the complete event .each rc can accommodate up to 64 strips , but the cc can only accept the first 128 .if there are more than 128 hits , those from the top planes are lost first , potentially leading to a serious loss of resolution on the reconstructed photon direction .so we limit the number of hits in each rc so that the cc buffer is never completely filled . for events with many hits, this strategy tends to confine the hit loss to the lower planes , where the photon has started to shower , and where the tracker is sensitive to backsplash due to low - energy photons from the calorimeter , just below the tracker .figure [ truncevent ] shows a simulated high - energy photon showering in the tracker , with no limits on the number of hits read out .this can be done in simulation , but not in the real detector . as the shower develops , the number of hit clusters ( green x s ) in each layer increases .the blue lines indicate the tracks found by our new pattern - recognition algorithm .the inset shows the same event , but now the hit buffers are truncated as they would be in the standard configuration of the lat readout electronics .( see next panel . )many of the hits in the lower layers are lost ; the teal bars show the regions that become insensitive .because of the missing hits , the lower parts of the tracks are displaced , and this contributes to a shift in the reconstructed direction of the found tracks away from the true direction , and thus in the inferred photon direction . for this event ,the shift is .figure [ standard ] is a schematic representation of the standard configuration of the tracker readout , for one of the two orthogonal directions .note the two read controllers on each plane , each of which reads out the hits in half of the plane .all the planes are configured in the same way . in the actual readout, each read controller can accept 14 hits ; here , for illustration , we set the buffer limit to three . in this event ,the dark green arrow represents a photon showering in the tracker .the gray arrows indicate the direction of the information flow .the red x s are the recorded hits , and we show the lost hits in black .note that in this event the hits towards the center of the plane are the ones affected .the proposed configuration , shown in figure [ proposed ] , takes advantage of the fact that the width of the shower in a tracker plane is typically much smaller than the half - width of the plane , so that the hits tend to fall into one half or the other . 
instead of splitting the plane, we read out all the hits at one end , and double the buffer size .in the next plane , all the hits are read out at the other end .the maximum number of hits presented to each cable controller stays the same , but we now can use the currently often wasted capacity of the end farthest from the hits . in this example ,we lose only two hits in the new configuration , instead of the five in the standard .the remaining lost hits fall outside of the shower core , and will only marginally affect the tracking . this is a general feature of this configuration , since the hits are lost from the ends , rather than from the middle of the plane . as a variation of this configuration , we can `` taper '' the buffer limits , so that the buffer size is smaller at the top of the tracker , where there are fewer hits , allowing us to use the extra capacity at the bottom .the example we will use below tapers from 12 hits at the top to 49 at the bottom .readout configurations are defined in the onboard software , and can be uploaded to the orbiting instrument .in figure [ tapered ] , we show the original event , read out with a tapered configuration , as described above . note that even though there are still lost hits , the overall situation is much closer to the ideal one . in figure[ plot ] , we compare the angular deviations of the reconstructed tracks from the true direction ( `` psf ' 's ) , for the configurations discussed above , for an event sample consisting of photons with a energy spectrum ( uniform in ) , and incident on the lat at to the vertical , and with a minimal cut on the overall quality of the reconstruction .we select the subsample with energies between 10 and 300 gev that convert in the upper ( `` thin radiator '' ) section of tracker .the energy , angle and conversion point are all chosen to maximize the truncation effect .we include one extra configuration , called restricted . "this is similar to the standard configuration , except that the maximum number of hits in an rc is restricted to eight .this configuration allows us to compare the effects of truncation in the simulation to those in the real data , since our existing data can be truncated _ a posteriori _ to simulate the restricted configuration . to make the plot, we choose all events for which the psf for the standard configuration is below two degrees .since there are only three events between one and two degrees , the exact location of this cut is not crucial .to make a quantitative comparison , we take the means of the distributions in the previous panel , cutting the distributions off at five degrees . the qualitative conclusion does nt depend on this cut . the results of this simple study , shown in table [ results ] , indicate that there is definitely something to be gained by adopting one of the proposed configurations .( recent studies suggest that similar gains can be achieved using configurations which allow the cc buffers to overflow , but which provide a fuller reconstruction of events which originate in the lower part of the tracker . )before making a serious proposal for a change of configuration we need to do a more realistic study , using photons at all angles , and background events .we should also compare the simulation to real data , first by using the restricted configuration with existing data , and eventually by running tests on - orbit with the proposed configuration(s ) . 
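A toy calculation can convey why the single-ended and tapered configurations lose fewer useful hits than the standard split readout. In the sketch below, a shower is modeled as Gaussian hit clusters that widen with depth, each read controller keeps the hits nearest its own end, and the per-plane limits are 14 per RC (standard), 28 for a single doubled buffer (proposed), and a 12-to-49 taper; the plane width, shower profile, and shower position are invented for illustration, and the shared cable-controller budget of 128 hits is not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of one truncated shower: compare how many hits each readout
# configuration loses.  Geometry and shower shape are stand-in assumptions.
n_strips, n_planes = 1536, 18
centre = 0.55 * n_strips                      # shower axis, slightly off-centre
def shower_hits(plane):                       # more and wider clusters deeper in the stack
    n = rng.poisson(1 + 2.5 * plane)
    return np.sort(np.clip(rng.normal(centre, 5 + 4 * plane, n), 0, n_strips - 1))

def lost_standard(hits, limit=14):            # two RCs per plane, each keeps the hits nearest its end
    left = hits[hits < n_strips / 2]
    right = hits[hits >= n_strips / 2]
    return max(len(left) - limit, 0) + max(len(right) - limit, 0)

def lost_single_end(hits, limit):             # proposed: one RC per plane with a doubled (or tapered) buffer
    return max(len(hits) - limit, 0)

event = [shower_hits(p) for p in range(n_planes)]
taper = np.linspace(12, 49, n_planes).round().astype(int)

print("standard :", sum(lost_standard(h) for h in event))
print("proposed :", sum(lost_single_end(h, 28) for h in event))
print("tapered  :", sum(lost_single_end(h, t) for h, t in zip(event, taper)))
```

The counts illustrate the reduction in lost hits; the further point made in the text is that in the single-ended schemes the losses fall outside the shower core, so their effect on the track fit is expected to be marginal, which is the qualitative message of table [results].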
at the same time , we need to consider other effects of such a change .some possibilities are : degradation of the hardware trigger , principally through possible changes in timing ; decreased ability to identify out - of - time tracks ( ghosts ) , because of loss of granularity of the detailed trigger information ; and increased time to reconstruct the events off - line , which can be significant for our combinatoric algorithm .the fermi lat collaboration acknowledges sup- port from a number of agencies and institutes for both development and the operation of the lat as well as scientific data analysis .these include nasa and doe in the united states , cea / irfu and in2p3/cnrs in france , asi and infn in italy , mext , kek , and jaxa in japan , and the k. a. wallenberg foundation , the swedish research council and the national space board in sweden . additional support from inaf in italy and cnes in france for science analysis during the operations phaseis also gratefully acknowledged .
the fermi large area telescope (lat) consists of 16 towers, each incorporating a tracker made up of a stack of 18 pairs of orthogonal silicon strip detectors (ssds), interspersed with tungsten converter foils. the strip numbers of the struck strips in each ssd plane are collected by two read controllers (rcs), one at each end, and nine rcs are connected by one of eight cables to a cable controller (cc). the tracker readout electronics limit the number of strips that can be read out. although each rc can store up to 64 hits, a cc can store a maximum of only 128 hits. to ensure that the photon shower development and backsplash in the lower layers of the tracker do not compromise the readout of the upper layers, we artificially limit the number of strips read out into each rc to 14, so that no cc can ever see more than 126 hit strips. in this contribution, we explore other configurations that will allow for a more complete readout of large events, and investigate some of the consequences of using these configurations.
risk management is one of the top priorities in the financial industry today .a huge effort is being invested into developing reliable risk measurement methods and sound risk management techniques by academics and practitioners alike .morgan was the first to develop a comprehensive market risk management methodology based on the value - at - risk ( var ) concept .their product , riskmetrics has become extremely popular and widespread .it has greatly contributed to the dissemination of a number of basic statistical risk measurement methods and to the general acceptance of var as an industry standard .although backtests performed first by j. p. morgan and later by other market participants lent support to the riskmetrics model , its basic assumptions were shown to be questionable from several points of view ( see e.g. ) . moreover ,the existence of fat tails in real market data ( see for a discussion ) is in a clear conflict with riskmetrics assumption of normally distributed returns , which can lead to a gross underestimation of risk .furthermore , serious doubt has recently been raised as to the stability and information content of the empirical covariance matrices used by the model for calculating the risk of portfolios .accurate evaluation of risks in financial markets is crucial for the proper assessment and efficient mitigation of risk .therefore , it is important to see to what extent widespread methodologies like riskmetrics are reliable and what their possible limitations are . in particular , we try to understand why , despite the evident oversimplifications embedded in the model , it can perform satisfactorily .we will argue that the apparent success of riskmetrics is due basically to the way risk is quantified in this framework , which does not necessarily mean that this particular risk measure is the most adequate one .the paper is organized as follows . in section [ sec :overview ] a sketchy overview of the riskmetrics methodology will be presented . in section [ sec : success ]the reasons for the success of riskmetrics will be discussed .finally , section [ sec : concl ] is a short summary .it is well - known that daily returns are uncorrelated whereas the squared returns are strongly autocorrelated . as a consequence , periods of persistent high volatilityare followed by periods of persistent low volatility , a phenomenon known as `` volatility clustering '' .these features are incorporated in riskmetrics by choosing a particular autoregressive moving average process to model the price process ( see below ) .furthermore , riskmetrics makes the very strong assumption that returns are conditionally , which usually consists of the past return series available at time . ] normally distributed .since the standard deviation of returns is usually much higher than the mean , the latter is neglected in the model , i.e. about 50 times larger than the mean . ] , and , as a consequence , the standard deviation remains the only parameter of the conditional probability distribution function . 
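The two stylized facts just mentioned, uncorrelated returns but strongly autocorrelated squared returns, are easy to reproduce with exactly the kind of persistent-volatility dynamics RiskMetrics adopts (the EWMA recursion defined in the next section). The sketch below is purely illustrative; the decay factor and the sample length are arbitrary choices.

```python
import numpy as np

# Simulate returns with persistent (EWMA/IGARCH-type) volatility and compare
# the lag-1 autocorrelation of returns with that of squared returns.
rng = np.random.default_rng(7)
lam, T = 0.94, 100_000
r = np.empty(T)
sig2 = 1e-4
for t in range(T):
    r[t] = np.sqrt(sig2) * rng.standard_normal()
    sig2 = lam * sig2 + (1 - lam) * r[t] ** 2      # volatility driven by past returns

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

print("corr(r_t, r_{t+1})     :", round(lag1_autocorr(r), 3))       # close to 0
print("corr(r_t^2, r_{t+1}^2) :", round(lag1_autocorr(r ** 2), 3))  # clearly positive
```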
in order to avoid the usual problems related to the uniformly weighted moving averages , riskmetricsuses the so called exponentially weighted moving average ( ewma ) method which is meant to represent the finite memory of the market .accordingly , the estimator for the volatility is chosen to be where is a parameter of the model ( ) .the notation emphasizes that the volatility estimated on a given day ( ) is actually used as a predictor for the volatility of the next day ( ) .the daily var at confidence level ( e.g. 95% ) can then be calculated ( under the normality assumption ) by multiplying with the quantile of the standard normal distribution . moreover, this technique can be used to measure the risk of individual assets and portfolios of assets as well . for linear portfolios (i.e. containing no options ) the usual method to obtain the volatility is to estimate the covariance matrix of asset returns , element - by - element , using the ewma technique and then calculate the portfolio volatility as , where is the weight of asset in the portfolio .equivalently , however , one can calculate the return on the portfolio first , and then apply the ewma technique directly to the whole portfolio .finally , the value of the parameter is determined by an optimization procedure . on a widely diversified international portfolio, riskmetrics found that the value produces the best backtesting results .in order to explain the reasonably successful performance of riskmetrics , first we recall the work by nelson who showed that even misspecified models can estimate volatility rather accurately . more explicitly , in it is shown that if the return generating process is well approximated by a diffusion , a broad class of even misspecified arch models can provide consistent estimates of the conditional volatility . since riskmetrics can be considered as an igarch(1,1 ) model , where i.i.d.(0,1 ) , and . since from eq .( [ eq : vol.estim ] ) , it follows that just as in the igarch(1,1 ) model ( , , ) . ]the results of offer a natural explanation for the success of riskmetrics in estimating volatility .actually , in the riskmetrics framework , this estimate is used as a one - day ahead volatility forecast , nevertheless it seems that this does not significantly worsen its accuracy .however , if one uses this estimate to calculate ( as often required by regulators ) a multiperiod forecast using the simple `` square - root - of - time '' rule , which is derived on the basis of the assumption of uncorrelated returns . ] , the quality of the forecast is bound to decline with the number of periods . 
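The recursion and the VaR rule described above are compact enough to state in a few lines. In the sketch below the decay factor is set to 0.94, the daily value usually quoted for RiskMetrics (the number itself is elided in the text above), and the return series is synthetic Student-t data used only to illustrate the violation count.

```python
import numpy as np

# EWMA one-day-ahead volatility forecast and 95% VaR, RiskMetrics-style:
#   sigma^2_{t+1|t} = lam * sigma^2_{t|t-1} + (1 - lam) * r_t^2
rng = np.random.default_rng(42)
lam = 0.94
returns = 0.01 * rng.standard_t(df=7, size=2500)   # fat-tailed daily returns (synthetic)

sigma2 = np.empty_like(returns)
sigma2[0] = returns[:25].var()                     # arbitrary seed for the recursion
for t in range(len(returns) - 1):
    sigma2[t + 1] = lam * sigma2[t] + (1 - lam) * returns[t] ** 2

var_95 = 1.65 * np.sqrt(sigma2)                    # 95% VaR under the normality assumption
violations = (returns < -var_95).mean()
print(f"fraction of 95% VaR violations: {violations:.3%}")
```

Even though the simulated returns are fat-tailed, the fraction of days on which the loss exceeds 1.65 times the forecast volatility stays close to 5%, anticipating the explanation developed below.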
a comparative study of the rate of deterioration of these forecasts with time within the riskmetrics model resp .other , more sophisticated volatility models is an interesting topic for further research .let us turn to the apparent success of riskmetrics in estimating the var now .the model sets the confidence level at 95% .the prescription to obtain this 5% quantile is to simply multiply the volatility estimate by 1.65 ( as if returns were conditionally normally distributed ) .why such a recipe can pass the backtests can be understood if one analyzes numerically the 5% quantiles of a few leptokurtic ( fat tailed ) distributions .it is very often found that despite the presence of fat tails , for many distributions the 5% quantile is roughly -1.65 times the standard deviation .for example , the 5% quantile of the student distribution with 7 degrees of freedom ( which is leptokurtic and has a kurtosis of 5 similar to the typical kurtosis of returns in financial markets ) is -1.60 , very close to -1.65 , or , conversely , the -1.65 percentile is 4.6% .this is illustrated in fig .[ fig : fig1 ] , where the pdf s of the student and the normal distributions are shown .it can be seen from the figure that the areas under the two pdf s to the left of the -1.65 line are roughly equal .we also analyzed the empirical frequencies of riskmetrics 95% var violations which correspond to returns , where is the riskmetrics volatility estimate obtained on day .for the 30 stocks of the dow jones industrial average ( djia ) , which are among the most liquid stocks traded on the new york stock exchange , it was found that the var violations frequency was , i.e. close to 5% .this explains why riskmetrics is usually found so successful in evaluating risk ( which it _ defines _ as the var at 95% confidence ) .it is evident , however , that for higher significance levels ( e.g. 99% ) the effect of fat tails becomes much stronger , and therefore the var will be seriously underestimated if one assumes normality . for example , the 1% quantile of the student distribution considered above is -2.54 , significantly larger than under the normality assumption ( -2.33 ) , while the percentile corresponding to -2.33 is 1.43% .this can also be seen from fig .[ fig : fig1 ] .furthermore , for the djia data considered above , the rejection frequencies were , significantly larger than 1% .these effects are even more pronounced for a truncated lvy distribution ( tld ) . for example , a koponen - like tld with lvy exponent , scale parameter and truncation parameter , which provides an excellent fit to the budapest stock exchange index ( bux ) data , has a 5% quantile equal to whereas the 1% quantile is already ! therefore , it can be concluded that the satisfactory performance of riskmetrics in estimating var is mainly the artifact of the choice of the significance level of 95% .however , existing capital adequacy regulations require 99% confidence , and at this level riskmetrics systematically underestimates risk .in this paper we analyzed the performance of riskmetrics , perhaps the most widely used methodology for measuring market risk .the riskmetrics model is based on the unrealistic assumption of normally distributed returns , and completely ignores the presence of fat tails in the probability distribution , a most important feature of financial data . 
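The quantile comparison just made is easy to verify directly. The following lines compute the 5% and 1% quantiles of a Student-t distribution with 7 degrees of freedom, rescaled to unit variance, against the normal quantiles; only scipy is assumed.

```python
from scipy import stats

# Standardized Student-t(7) quantiles versus the normal quantiles used by RiskMetrics.
nu = 7
scale = (nu / (nu - 2)) ** 0.5            # standard deviation of a t_7 variable

for p in (0.05, 0.01):
    q_t = stats.t.ppf(p, df=nu) / scale   # quantile in units of the standard deviation
    q_n = stats.norm.ppf(p)               # normal quantile (about -1.65 and -2.33)
    print(p, round(q_t, 2), round(q_n, 2))

# tail probability below -1.65 standard deviations under t_7 (about 4.6%)
print(stats.t.cdf(-1.65 * scale, df=nu))
```

The 5% quantiles nearly coincide (about -1.60 versus -1.65, with roughly 4.6% of the mass below -1.65 standard deviations), while at 1% the Student-t quantile is noticeably larger in magnitude, which is exactly the asymmetry exploited in the argument that follows.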
for this reason, one would expect the model to seriously underestimate risk .however , it was commonly found by market participants that riskmetrics performed satisfactorily well and this helped the method to quickly become a standard in risk measurement .nevertheless , we found that the success of riskmetrics is actually the artifact of the choice of the risk measure : the effect of fat tails is minor when one calculates value - at - risk at 95% , however , for higher significance levels fat tails in the distribution of returns will make the simple riskmetrics rule of calculating var to underestimate risk .riskmetrics has played and continues to play an extremely useful role in disseminating risk management ideas and techniques , even if oversimplified .it is available free of charge and , coming with a careful documentation , it is completely transparent and amenable for a study like the present one : its limitations can be explored and , given sufficient resources , overcome .this is far from being the case with the overwhelming majority of the commercially available risk management systems which incorporate at least as strong simplifications as riskmetrics , but coming in the the form of `` black boxes , '' are completely impossible to modify .the continuing dominance of the gaussian paradigm in risk management software packages represents an industry - wide model risk .it is a pleasure to thank b. janecsk for useful interactions .this work has been supported by the hungarian national science found otka grant no .t 034835 .alexander c. ( 1996 ) .`` evaluating the use of riskmetrics as a risk measurement tool for your operation : what are its advantages and limitations ? '' _ derivatives : use trading and regulation _ , * 2 * , 277285 plerou v. , p. gopikrishnan , b. rosenow , l. a. n. amaral , h. e.stanley ( 1999 ) . `` universal and nonuniversal properties of cross correlations in financial time series , '' _ physical review letters _ , * 83 * , 14711474
We analyze the performance of RiskMetrics, a widely used methodology for measuring market risk. Based on the assumption of normally distributed returns, the RiskMetrics model completely ignores the presence of fat tails in the distribution function, which is an important feature of financial data. Nevertheless, it was commonly found that RiskMetrics performs satisfactorily well, and therefore the technique has become widely used in the financial industry. We find, however, that the success of RiskMetrics is an artifact of the choice of the risk measure. First, the outstanding performance of volatility estimates is basically due to the choice of a very short (one-period-ahead) forecasting horizon. Second, the satisfactory performance in obtaining value-at-risk by simply multiplying volatility with a constant factor is mainly due to the choice of the particular significance level.

*Keywords:* RiskMetrics, market risk, risk measurement, volatility, value-at-risk
in this paper , we develop interest rate models that offer consistent dynamics in the short , medium , and long term . often interest rate models have valid dynamics in the short term , that is to say , over days or perhaps a few weeks .such models may be appropriate for the pricing of securities with short time - to - maturity . for financial assets with long - term maturities ,one requires interest rate models with plausible long - term dynamics , which retain their validity over years .thus the question arises as to how one can create interest rate models which are sensitive to market changes over both short and long time intervals , so that they remain useful for the pricing of securities of various tenors . ideally , one would have at one s disposal interest rate models that allow for consistent pricing of financial instruments expiring within a range of a few minutes up to years , and if necessary over decades .one can imagine an investor holding a portfolio of securities maturing over various periods of time , perhaps spanning several years .another situation requiring interest rate models that are valid over short and long terms , is where illiquid long - term fixed - income assets need to be replicated with ( rolled - over ) liquid shorter - term derivatives .here it is central that the underlying interest rate model possesses consistent dynamics over all periods of time in order to avoid substantial hedging inaccuracy .insurance companies , or pension funds , holding liabilities over decades might have no other means but to invest in shorter - term derivatives , possibly with maturities of months or a few years , in order to secure enough collateral for their long - term liabilities reserves .furthermore , such hedges might in turn need second - order liquid short - term protection , and so forth . applying different interest rate models validated for the various investment periods , which frequently do not guarantee price and hedging consistency, seems undesirable .instead , we propose a family of pricing kernel models which may generate interest rate dynamics sufficiently flexible to allow for diverse behaviour over short , medium and long periods of time .we imagine economies , and their associated financial markets , that are exposed to a variety of uncertainties , such as economic , social , political , environmental , or demographic ones .we model the degree of impact of these underlying factors on an economy ( and financial markets ) at each point in time by combinations of continuous - time stochastic processes of different probability laws .when designing interest rate models that are sensitive to the states an economy may take , subject to its response to the underlying uncertainty factors , one may wonder a ) how many stochastic factor processes ought to be considered , and b ) what is the combination , or mixture , of factor processes determining the dynamics of an economy and its associated financial market .it is plausible to assume that the number of stochastic factors and their combined impact on a financial market continuously changes over time , and thus that any interest rate model designed in such a set - up is by nature time - inhomogeneous .the recipe used to construct interest - rate models within the framework proposed in this paper can be summarised as follows : 1 .assume that the response of a financial market to uncertainty is modelled by a family of stochastic processes , e.g. markov processes .2 . 
consider a mixture of such stochastic processes as the basic driver of the resulting interest rate models .3 . in order to explicitly design interest rate models ,apply a method for the modelling of the pricing kernel associated with the economy , which underlies the considered financial market .4 . derive the interest rate dynamics directly from the pricing kernel models , or , if more convenient , deduce the interest rate model from the bond price process associated with the constructed pricing kernel .the set of stochastic processes chosen to model an economy s response to uncertainty , the particular mixture of those , and the pricing kernel model jointly characterize the dynamics of the derived interest rate model .we welcome these degrees of freedom , for any one of them may abate the shortcoming ( or may amplify the virtues ) of another . for example , one might be constrained to choose lvy processes to model the impact of uncertainty on markets .the fact that lvy processes are time - homogeneous processes with independent increments , might be seen as a disadvantage for modelling interest rates for long time spans .however , a time - dependent pricing kernel function may later introduce time - inhomogeneity in the resulting interest rate model .the choice of a certain set of stochastic processes implicitly determines a particular joint law of the modelled market response to the uncertainty sources .although the resulting multivariate law may not coincide well with the law of the combined uncertainty impact , the fact that we can directly model a particular mixture of stochastic processes provides the desirable degree of freedom in order to control the dynamical law of the market s response to uncertainty . in this paper, we consider `` randomised mixing functions '' for the construction of multivariate interest rate models with distinct response patterns to short- , medium- , and long - term uncertainties .having a randomised mixing function enables us to introduce the concept of `` partially - observable mixtures '' of stochastic processes .we take the view that market agents can not fully observe the actual combination of processes underlying the market .instead they form best estimates of the randomised mixture given the information they possess ; these estimates are continuously updated as time elapses .this feature introduces a feedback effect in the constructed pricing models .the reason why we prefer to propose pricing kernel models in order to generate the dynamics of interest rates , as opposed to modelling the interest rates directly , is that the modelling of the pricing kernel offers an integrated approach to equilibrium asset pricing in general ( see cochrane , duffie ) , including risk management and thus the quantification of risk involved in an investment .the pricing kernel includes the _ quantified _ total response to the uncertainties affecting an economy or , in other words , the risk premium asked by an investor as an incentive for investing in risky assets . in this workwe first consider a particular family of pricing kernel models , namely the flesaker - hughston class ( see flesaker & hughston , hunt & kennedy , cairns , brigo & mercurio ) . 
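Steps 3 and 4 of this recipe rest on standard pricing-kernel relations, which can be sketched as follows; the notation is assumed here (pi denotes the pricing kernel, S a non-dividend-paying asset price, P_{tT} the discount bond, r the short rate), and the Flesaker-Hughston representation in the second display is the form referred to above rather than a restoration of any particular displayed equation.

% Generic pricing-kernel relations underlying steps 3 and 4:
S_t = \frac{1}{\pi_t}\,\mathbb{E}\!\left[\pi_T S_T \,\middle|\, \mathcal{F}_t\right],
\qquad
P_{tT} = \frac{1}{\pi_t}\,\mathbb{E}\!\left[\pi_T \,\middle|\, \mathcal{F}_t\right],
\qquad
r_t = -\left.\frac{\partial}{\partial T}\ln P_{tT}\right|_{T=t}.

% Flesaker--Hughston class: the kernel is built from a weight function rho and a family
% of positive unit-initialised martingales M_{tu}:
\pi_t = \int_t^\infty \rho(u)\, M_{tu}\,\mathrm{d}u,
\qquad
P_{tT} = \frac{\int_T^\infty \rho(u)\, M_{tu}\,\mathrm{d}u}{\int_t^\infty \rho(u)\, M_{tu}\,\mathrm{d}u}.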
since our goal in this paper is to primarily introduce a framework capable of addressing issues arising in interest rate modelling over short to long term time intervals, we apply our ideas first to the flesaker - hughston class of pricing kernels .we conclude the paper by introducing randomised weighted heat kernel models , along the lines of akahori _et al_. and akahori & macrina , which extend the class of pricing kernels developed in the first part of this paper .we begin by introducing the mathematical tools that we shall use to construct pricing kernel models based on randomised mixtures of lvy processes .we fix a probability space where denotes the real probability measure .[ randessmart ] let be an -dimensional lvy process with independent components , and let be an independent , -dimensional vector of random variables .for , the process is defined by },\ ] ] where the function is chosen such that <\infty ] be finite for all is ensured by definition .it remains to be shown that =m_{su}(x)\ ] ] for all .we observe that the denominator in ( [ martingale ] ) is -measurable so that we can write = \frac{{\mathbb{e}}\left[\exp{\left(h(u , x)l_t\right)}\,|\,{\mathcal{h}}_s\right]}{{\mathbb{e}}\left[\exp{\left(h(u , x)l_t\right)}\,|\,x\right]}.\ ] ] next we expand the right - hand - side of the above equation to obtain \exp\left[h(u , x)l_s\right]\,\vert\,{\mathcal{h}}_s\right]}{{\mathbb{e}}\left[\exp\left[h(u , x)(l_t - l_s)\right]\exp\left[h(u , x)l_s\right]\,\vert\,x\right]}.\ ] ] given , the expectation in the denominator factorizes since is independent of .in addition , the factor ] and = \kappa^2 m t ] , where is given by ( [ martingale ] ) .then , for , is an -martingale family .recall that for all . for , we have & = & { \mathbb{e}}\left[{\mathbb{e}}\left[m_{tu}(x)\,|\,{\mathcal{f}}_t\right]\,|\,{\mathcal{f}}_s\right],\nonumber\\ & = & { \mathbb{e}}\left[m_{tu}(x)\,|\,{\mathcal{f}}_s\right],\nonumber\\ & = & { \mathbb{e}}\left[{\mathbb{e}}\left[m_{tu}(x)\,|\,{\mathcal{g}}_s\right]\,|\,{\mathcal{f}}_s\right],\nonumber\\ & = & { \mathbb{e}}\left[m_{su}(x)\,|\,{\mathcal{f}}_s\right ] , \nonumber\\ & = & { \widehat{m}}_{su},\end{aligned}\ ] ] where we make use of the tower property of the conditional expectation , and the fact that is a -martingale since and is independent of and .* filtered brownian martingales .* we consider example [ bem ] , in which the total impact of uncertainties is modelled by a brownian motion .the corresponding filtered esscher martingale is where the density process , given in ( [ densityprocess ] ) , is driven by the information process defined by ( [ infoproc ] ) .the filtered brownian models have dynamics {\textup{d}}x,\ ] ] where ,\ ] ] ,\ ] ] \,{\textup{d}}s,\ ] ] and is defined in ( [ densityprocess ] ) .we first show that in filipovi _et al_. 
it is proved that \right){\textup{d}}z_t,\ ] ] where is an -brownian motion , defined by \,{\textup{d}}s.\ ] ] thus by the ito product rule , we get = f_t(x){\textup{d}}m_{tu}(x ) + m_{tu}(x){\textup{d}}f_t(x)\ ] ] since .this simplifies to = m_{tu}(x)f_t(x)\left[h(u , x){\textup{d}}w_t + \left(\ell(t , x)-\int_{-\infty}^\infty \ell(t , y)f_t(y){\textup{d}}y\right){\textup{d}}z_t\right],\ ] ] and we obtain {\textup{d}}x\ ] ] where we define the dynamics of can be written in the following form : {\textup{d}}w_t + { \mathbb{e}}\left[m_{tu}(x)v_t(x)\,|\,{\mathcal{f}}_t\right]{\textup{d}}z_t.\ ] ] * filtered gamma martingales .* let us suppose that the total impact of uncertainties on an economy is modelled by a gamma process with density }\,{\textup{d}}y,\ ] ] where and are the rate and the scale parameter , respectively .the associated randomised esscher martingale is given in example [ gem ] , where .the corresponding filtered process takes the form ^{mt}\exp{\left[h(u , x)\,\gamma_t\right]}\right){\textup{d}}x\ ] ] for , and where the density is given by ( [ densityprocess ] ) . +* filtered compound poisson and gamma martingales .* we now construct a model based on two independent lvy processes : a gamma process ( as defined previously ) and a compound poisson process .the idea here is to use the infinite activity gamma process to represent small frequently - occurring jumps , and to use the compound poisson process to model jumps , which are potentially much larger in magnitude , and may occur sporadically .let denote a compound poisson process given by where is a poisson process with rate .the independent and identically distributed random variables are independent of .the moment generating function is given by = \exp{\left[\lambda t\left(m_y(\varrho)-1\right)\right]}\ ] ] where is the moment generating function of .for , we have }\nonumber\\\nonumber\\ & = & \frac{\exp{\left(h_1(u , x)\gamma_t\right)}}{{\mathbb{e}}\left[\exp{\left(h_1(u , x)\gamma_t\right)}\,|\,x\right]}\cdot \frac{\exp{\left(h_2(u , x)c_t\right)}}{{\mathbb{e}}\left[\exp{\left(h_2(u , x)c_t\right)}\,|\,x\right]}\nonumber\\\nonumber\\ & = & m^{(\gamma)}_{tu}(x)\;m^{(c)}_{tu}(x),\end{aligned}\ ] ] where , conditional on , and are independent .furthermore , }.\end{aligned}\ ] ] then , the filtered process takes the form ^{mt}{\nonumber}\\ & \times \exp{\left[h_1(u , x)\,\gamma_t + h_2(u , x)c_t-\lambda t\left(m_y(h_2(u , x))-1\right)\right ] } { \textup{d}}x,\end{aligned}\ ] ] where is given by ( [ densityprocess ] ) .up to this point , we have considered a brownian information process given by equation ( [ infoproc ] ) .however , the noise component in the information process may be modelled by a lvy process with randomly sized jumps , that is independent of the lvy process used to construct the randomised esscher martingale . in what follows ,we give an example of continuously - observed information , which is distorted by gamma - distributed pure noise .let be a gamma process with rate and scale parameters and , respectively .we define the gamma information process by brody & friedman consider such an observation process in a similar situation .we define the filtration by and by where is given by ( [ gammai ] ) . 
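As a concrete illustration of the filtered gamma martingale described above, the sketch below simulates a gamma driver, evaluates the randomised Esscher martingale M_{tu}(x) = (1 - kappa h(u,x))^{mt} exp(h(u,x) gamma_t) for a mixer X taking finitely many values, and averages it against a Bayesian posterior for X. The information process is assumed to be of the additive Brownian form I_t = sigma X t + B_t, consistent with the conditional density used in the Brownian-information examples; the mixer h and all parameter values are illustrative, not taken from the text.

import numpy as np

rng = np.random.default_rng(1)

m, kappa = 1.0, 1.0                    # rate and scale of the gamma process (illustrative)
sigma_info = 0.3                       # information-flow rate (illustrative)
x_vals = np.array([0.2, 0.5, 0.9])     # possible outcomes of the mixer X (illustrative)
f0 = np.array([0.3, 0.4, 0.3])         # a priori probabilities

def h(u, x):
    # illustrative random mixer; any bounded choice with kappa*h < 1 would do here
    return -x * np.exp(-0.05 * u)

def esscher_gamma(t, u, x, gamma_t):
    # M_{tu}(x) = exp(h*gamma_t) / E[exp(h*gamma_t) | X=x] for the gamma driver
    hx = h(u, x)
    return (1.0 - kappa * hx) ** (m * t) * np.exp(hx * gamma_t)

def posterior(t, info_t):
    # Bayesian density of X given the Brownian information I_t = sigma*X*t + B_t
    w = f0 * np.exp(sigma_info * x_vals * info_t - 0.5 * sigma_info**2 * x_vals**2 * t)
    return w / w.sum()

def filtered_martingale(t, u, gamma_t, info_t):
    f_t = posterior(t, info_t)
    return np.sum(f_t * esscher_gamma(t, u, x_vals, gamma_t))

# one joint path of (gamma_t, I_t) on a grid
dt, n = 0.01, 500
x_true = rng.choice(x_vals, p=f0)
gamma_path = np.cumsum(rng.gamma(shape=m * dt, scale=kappa, size=n))
info_path = sigma_info * x_true * dt * np.arange(1, n + 1) + \
            np.cumsum(np.sqrt(dt) * rng.standard_normal(n))
t_grid = dt * np.arange(1, n + 1)
M_hat = [filtered_martingale(t, u=10.0, gamma_t=g, info_t=i)
         for t, g, i in zip(t_grid, gamma_path, info_path)]
print("filtered martingale at t = 1, 3, 5:", [round(M_hat[k], 3) for k in (99, 299, 499)])

Averaging the filtered quantity over many independent scenarios returns values close to one, reflecting the martingale property established above.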
to derive the conditional density of given ,we first show that is a markov process with respect to its own filtration .that is , for , = \mathbb{p}\left[i_t < a\,|\ , i_s\right]\end{aligned}\ ] ] for all and for all .it follows that & = & \mathbb{p}\left[i_t < a\,\bigg|\ , i_s , \frac{i_{s_1}}{i_s } , \ldots , \frac{i_{s_n}}{i_{s_{n-1}}}\right]{\nonumber}\\ & = & \mathbb{p}\left[x\,\widetilde{\gamma}_t < a\,\bigg|\ , x\,\widetilde{\gamma}_s , \frac{\widetilde{\gamma}_{s_1}}{\widetilde{\gamma}_s } , \ldots , \frac{\widetilde{\gamma}_{s_n}}{\widetilde{\gamma}_{s_{n-1}}}\right].\end{aligned}\ ] ] it can be proven that are independent of and ( see brody _et al_. ) .furthermore , are independent of .thus we have = \mathbb{p}\left[i_t< a\,|\ , i_s\right].\end{aligned}\ ] ] we assume that the random variable has a continuous _ a priori _ density . then the conditional density of , ,\ ] ] is given by }}{\int_{-\infty}^\infty f_0(y)y^{-\widetilde{m}t}\exp{\left[-i_t/(\widetilde{\kappa } y)\right]}{\textup{d}}y},\label{densitygamma}\end{aligned}\ ] ] where we have used the bayes formula .the filtered esscher martingale is thus obtained by .\ ] ] the result is : }}{\int_{-\infty}^\infty f_0(y)y^{-\widetilde{m}t}\exp{\left[-i_t/(\widetilde{\kappa } y)\right]}{\textup{d}}y}\ ; { \textup{d}}x.\ ] ]the absence of arbitrage in a financial market is ensured by the existence of a pricing kernel satisfying almost surely for all .we consider , in general , an incomplete market and let denote the price process of a non - dividend paying asset .the price of such an asset at time is given by the following pricing formula : .\ ] ] the price of a discount bond system with price process and payoff is given by .\ ] ] the specification of a model for the pricing kernel is equivalent to choosing a model for the discount bond system , and thus also for the term structure of interest rates , and the excess rate of return .a sufficient condition for positive interest rates is that be an -supermartingale .if , in addition , the value of a discount bond should vanish in the limit of infinite maturity , then must satisfy = 0.\ ] ] a positive right - continuous supermartingale with this property is called a potential .let be an -adapted process with right - continuous non - decreasing paths , where almost surely , and let be integrable , that is , < \infty ] , and = \kappa^2 mt ] where with , , , , and .we let , and .,title="fig : " ] , and associated short rate .we use the brownian - gamma model with } ] where with , , , , and .we let , and .,title="fig : " ] , and associated short rate .we use the brownian - gamma model with } ] where with , , , , and .we let , and .,title="fig : " ] , and associated short rate .we use the brownian - gamma model with } ] , and }\right)^{mt} ] and .we choose , , , , and .we set , and .,title="fig : " ] } ] and .we choose , , , , and .we set , and .,title="fig : " ] [ figgbsigma ] } ] and .we set , , , , and .we choose , and .,title="fig : " ] } ] and .we let , , , , and .we choose , and .,title="fig : " ] } ] and .we let , , , , and .we choose , and .,title="fig : " ] [ figgbb ] compared to example [ brownianmotionexampleone ] , this model is more robust to variation in the values of the parameters .an analysis of the sample trajectories suggests that for large , the short rate reverts to the initial level .we let denote a variance - gamma process . we define the variance - gamma process asa time - changed brownian motion with drift ( see carr _ et al . _ ) , that is with parameters , and . 
here is a gamma process with rate and scale parameters and respectively , and is a subordinated brownian motion .the randomised esscher martingale is expressed by }\left(1-\theta\nu h(u , x)-\tfrac{1}{2}\sigma^2\nu h^2(u , x)\right)^{t/\nu},\ ] ] and the associated filtered esscher martingale is of the form }\left(1-\theta\nu h(u , x)-\tfrac{1}{2}\sigma^2\nu h^2(u , x)\right)^{t/\nu}{\textup{d}}x,\ ] ] where may be given for example by ( [ densityprocess ] ) or a special case thereof , or by ( [ densitygamma ] ) depending on the type of information used to filter knowledge about .this leads to the following expression for the discount bond price process : }\left(1-\theta\nu h(u , x)-\tfrac{1}{2}\sigma^2\nu h^2(u , x)\right)^{t/\nu}\,{\textup{d}}x\,{\textup{d}}u}{\int_t^\infty \rho(v)\ , \int_{-\infty}^\infty f_t(y)\,\exp{\left[h(v , y)l_t\right]}\left(1-\theta\nu h(v , y)-\tfrac{1}{2}\sigma^2\nu h^2(v , y)\right)^{t/\nu}\,{\textup{d}}y\,{\textup{d}}v}.\ ] ] we can also obtain an expression for the short rate of interest by substituting ( [ hatmvg ] ) into ( [ fhshort ] ) .we now present another explicit bond pricing model .[ vgmod ] we assume that is a random time , and hence a positive random variable taking discrete values with _ a priori _ probabilities . we suppose that the information process is independent of , and that it is defined by we take the random mixer to be }\ ] ] where and .we see in figure [ hwaves ] that the random mixer , and thus the weight of the variance - gamma process , increases ( in absolute value ) until the random time , and decreases ( in absolute value ) thereafter . for , , and , where and ( left ) and ( right ) .] the associated bond price and interest rate processes have the following sample paths : and the short rate .we use the variance - gamma model with } ] .we let , and .we set , , , and , , , .we choose , , and the initial term structure is .,title="fig : " ] we observe that over time the sample paths of the interest rate process revert to the initial level . however , some paths may revert to at a later time than others , depending on the realized value of the random variable .the functional form of the random mixer strongly influences the interest rate dynamics .the choice of also affects the robustness of the model : there are choices in which the numerical integration in the calculation of the pricing kernel does not converge .so far , we have constructed examples based on an exponential - type random mixer .however , one may wish to introduce other functional forms for for which we can observe different behaviour in the interest rate dynamics , while maintaining robustness .for instance we may consider a random piecewise function of the form where for .the random mixer now has a `` chameleon form '' : initially appearing to be , and switching its form to at .this results in the martingale , and the resulting interest rate sample paths , exhibiting different hues over time , depending on the choices of we can extend this idea further by considering ( i ) multiple , or ( ii ) a multivariate random mixer of the form where , and are independent random variables with associated information processes . in this case , the are themselves random - valued functions . 
here can be regarded as the primary mixer which determines the timing of the regime switch .the variables can then be interpreted as the secondary mixers determining the weights of the lvy processes over two distinct time intervals .[ gammamodelwithbrownianinfochameleon ] we now present what may be called the `` brownian - gamma chameleon model '' .we consider the filtered gamma martingale family ( [ gammamhat ] ) in the situation where the random mixer has the form where and .the information process associated with is taken to be of the form we assume that is a positive discrete random variable taking values with _ a priori _ probabilities , .that is , the function will switch once from sine to exponential behaviour at one of the finitely many random times . inserting ( [ gammamhat ] ) , with the specification ( [ chamh ] ) , in the expression for the bond price ( [ fhbp ] ) , we obtain ^{mt}\exp{\left[h(u , x_i)\,\gamma_t\right]}\,{\textup{d}}u}{\int_t^\infty \rho(v)\ , \sum_{i=1}^n f_t(y_i ) \left[1-\kappa\ , h(v , y_i)\right]^{mt}\exp{\left[h(v , y_i)\,\gamma_t\right]}\,{\textup{d}}v},\ ] ] where is given by ( [ chamh ] ) for , and }{\sum_{i=1}^n f_0(y_i)\,\exp\left[\sigma y_i i_t -\frac{1}{2}\sigma^2 y_i^2 t\right]}.\ ] ] since the sine function oscillates periodically within the interval ] .we let , and .we set , , , and , , , .we choose , , and the initial term structure is . ] } ] is not an -propagator when is not a markov process .nevertheless , is a valid model for the pricing kernel , subject to regularity conditions .in this section , we generate term structure models by using markov processes with dependent increments .we emphasize that such models can not be constructed based on the filtered esscher martingales .let us suppose that is an ornstein - uhlenbeck process with dynamics where is the speed of reversion , is the long - run equilibrium value of the process and is the volatility .then , for , the conditional mean and conditional variance are given by & = & y_s\,\exp{\left[-\delta(t - s)\right ] } + \beta\left(1-\exp{\left[-\delta(t - s)\right]}\right).\label{condmean}\\ \textup{var}\left[y_t\,|\,y_s\right ] & = & \frac{\upsilon^2}{2\delta}\left(1-\exp{\left[-2\delta(t - s)\right]}\right ) .\label{condvar}\end{aligned}\ ] ] let us suppose , for a well - defined positive function , that since is -measurable , and by applying ( [ condmean ] ) and ( [ condvar ] ) , it follows that ,{\nonumber}\\ & = & h(t+u , x ) \,{\mathbb{e}}\left[\left(y_{t+u}-{\mathbb{e}}\left[y_{t+u}\,|\,y_t\right ] + { \mathbb{e}}\left[y_{t+u}\,|\,y_t\right]\right)^2\,|\,y_t\right],{\nonumber}\\ & = & h(t+u , x)\,\left[\textup{var}\left[y_{t+u}\,|\,y_t\right ] + { \mathbb{e}}\left[y_{t+u}\,|\,y_t\right]^2\right],{\nonumber}\\ & = & h(t+u , x)\,\left[\frac{\upsilon^2}{2\delta}\left(1-\textup{e}^{-2\delta u}\right ) + \left[y_t\,\textup{e}^{-\delta u } + \beta\left(1-\textup{e}^{-\delta u}\right)\right]^2\right].{\nonumber}\\\end{aligned}\ ] ] the pricing kernel is then given by ( [ pkwhkafil ] ) , and we obtain ^ 2\right]{\nonumber}\\ \times \int_{-\infty}^\infty h(t+u , x ) f_t(x ) \,{\textup{d}}x\,{\textup{d}}u.\end{aligned}\ ] ] it follows that the price of a discount bond is expressed by \,{\textup{d}}v\;\bigg|\,{\mathcal{f}}_t\right],\end{aligned}\ ] ] where is given in ( [ whkapiexpanded ] ) , and the conditional expectation can be computed to obtain ^ 2\right]{\nonumber}\\ \times \int_{-\infty}^\infty h(t+v , x ) f_t(x ) \,{\textup{d}}x\,{\textup{d}}v.\end{aligned}\ ] ] we assume that is a 
positive random variable that takes discrete values with _ a priori _ probabilities . we suppose that the information flow is governed by we choose the random mixer to be }(t+u),\ ] ] where and , and we assume that the weight function is \ ] ] for .later , in proposition [ fhwhka ] , we show that this model belongs to the flesaker - hughston class .therefore , the short rate of interest takes the form }{\int_0^\infty e^{-j(t+v)}\ , { \mathbb{e}}\left[g(h(t+v , x),y_{t+v})\,|\,{\mathcal{f}}_t\right]\,{\textup{d}}v}.\end{aligned}\ ] ] next we simulate the trajectories of the discount bond and the short rate process .we refer to iacus for the simulation of the ornstein - uhlenbeck process using an euler scheme .we observe oscillations in the sample paths owing to the mean - reversion in the markov process . and the short rate for the quadratic ou - brownian model with with and .we let , , and . we let and where and and .the weight function is given by } ] ., title="fig : " ] the model - generated yield curves follow . in this example, we mostly observe changes of slope and shifts .however , it should be possible to produce changes of shape in the yield curve by varying the choices of and . with and .we let , , and .we let and where and and .the weight function is given by } ] . ]in what follows , we show that , under certain conditions , the constructed pricing kernels based on weighted heat kernel models belong to the flesaker - hughston class . [ fhwhka ]let be a markov process , and let the weight function be given by with such that \ , { \textup{d}}s \;<\,\infty.\ ] ] then , the pricing kernel \,{\textup{d}}v\ ] ] is a potential generated by that is , a potential of class ( d ) .thus , the pricing kernel is of the flesaker - hughston type . the function satisfies ( [ weightineq ] ) , and thus is a weight function .then we see that \ , m_{tu}\,{\textup{d}}u{\nonumber}\\ & = & \pi_0\,\int_t^\infty \rho(u)\,m_{tu}\,{\textup{d}}u,\end{aligned}\ ] ] where }{{\mathbb{e}}\left[g\left(h(u , x),l_u\right)\right]}\ ] ] is a positive unit - initialized -martingale for each fixed .the constant is a scaling factor .we note that , for instance , the potential models of rogers which can be generated by the weighted heat kernel approach with } ] as .+ let us suppose that is a markov process with independent increments. then the class of esscher - type randomised mixture models presented in this paper , for which }}{{\mathbb{e}}\left[\exp{\left[h(u , x)l_{t}\right]}\,|\,x\right]},\ ] ] can not be constructed by using the weighted heat kernel approach .we see this by setting }}{{\mathbb{e}}\left[\exp{\left[h(v , x)l_{t+v}\right]}\,|\,x\right]},\ ] ] and by observing that $ ] is not a -propagator . as we mentioned earlier , the class of models introduced by brody _et al . _ is included in the class of esscher - type randomised mixture models .similarly , models based on kernel functions of the form can produce other esscher - type models by use of the weighted heat kernel approach .the following is a diagrammatic representation of the considered classes of positive interest rate models : we conclude with the following observations .the pricing kernel models proposed in this paper are versatile by construction , and potentially allow for many more investigations .for instance , we can think of applications to the modelling of foreign exchange rates where two pricing kernel models are selected perhaps of different types to reflect idiosyncrasies of the considered domestic and foreign economies . 
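As a brief numerical aside on the Ornstein-Uhlenbeck example above, the following sketch implements the Euler scheme mentioned there and checks it against the quoted conditional mean and variance; the parameter values are illustrative only.

import numpy as np

rng = np.random.default_rng(3)

delta, beta, upsilon = 1.5, 0.05, 0.2    # reversion speed, long-run level, volatility (illustrative)
y0, T, n_steps, n_paths = 0.02, 2.0, 400, 100_000
dt = T / n_steps

# Euler scheme: y_{k+1} = y_k + delta*(beta - y_k)*dt + upsilon*sqrt(dt)*Z
y = np.full(n_paths, y0)
for _ in range(n_steps):
    y += delta * (beta - y) * dt + upsilon * np.sqrt(dt) * rng.standard_normal(n_paths)

mean_exact = y0 * np.exp(-delta * T) + beta * (1.0 - np.exp(-delta * T))
var_exact = upsilon**2 / (2.0 * delta) * (1.0 - np.exp(-2.0 * delta * T))
print(f"mean: Euler {y.mean():.4f}  exact {mean_exact:.4f}")
print(f"var : Euler {y.var():.5f}  exact {var_exact:.5f}")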
in this context , it might be of particular interest to investigate dependence structures among several pricing kernel models for all the foreign economies involved in a polyhedron of fx rates .we expect the mixing function to play a central role in the construction of dependence models .furthermore , a recent application by crisafi of the randomised mixtures models to the pricing of inflation - linked securities may be developed further .* acknowledgments . *the authors are grateful to j. akahori , d. brigo , d. c. brody , c. buescu , m. a. crisafi , m. grasselli , l. p. hughston , s. jaimungal , a. kohatsu - higa , o. menoukeu pamen , j. sekine , w. t. shaw and d. r. taylor for useful comments .we would also like to thank participants at : the actuarial science and mathematical finance group meetings at the fields institute , toronto , july 2011 ; the fourth international mif conference , kruger national park , south africa , august 2011 ; and the mathematical finance seminars , department of mathematics , ritsumeikan university , japan , november 2011 , for helpful remarks .a. parbhoo acknowledges financial support from the commonwealth scholarship commission in the united kingdom ( csc ) , the national research foundation of south africa ( nrf ) and the programme in advanced mathematics of finance at the university of the witwatersrand .j. akahori , y. hishida , j. teichmann and t. tsuchiya . a heat kernel approach to interest rate models .http://arxiv.org/abs/0910.5033v1 , 2009 .j. akahori and a. macrina .heat kernel interest rate models with time - inhomogeneous markov processes . to appear in : _ international journal of theoretical and applied finance _1 , special issue on financial derivatives and risk management , 2012 .d. applebaum ._ lvy processes and stochastic calculus. _ cambridge university press , 2004 .a. bain and d. crisan ._ fundamentals of stochastic filtering_. springer , 2008 .p. boyle , m. broadie and p. glasserman .monte carlo methods for security pricing ._ journal of economic dynamics and control _ 21 , 1267 - 1321 , 1997 .d. brigo and f. mercurio ._ interest rate models - theory and practice : with smile , inflation and credit_. springer , 2006 .d. c. brody and r. l. friedman .information of interest . risk magazine , february 2010 , 35 - 40 .d. c. brody , l. p. hughston and e. mackie .rational term structure models with geometric lvy martingales .http://arxiv.org/abs/1012.1793 , 2010 .d. c. brody , l. p. hughston and a. macrina .dam rain and cumulative gain ._ proceedings of the royal society london _ a 464(2095 ) , 1801 - 1822 , 2008 .d. c. brody , l. p. hughston and a. macrina .credit risk , market sentiment and randomly - timed default ._ stochastic analysis 2010_. d. crisan ( ed ) .springer verlag , 2010 .a. j. g. cairns ._ interest rate models : an introduction_. princeton university press , 2004 .p. p. carr , e. c. chang and d. b. madan .the variance gamma process and option pricing ._ european finance review _ 2 , 79 - 105 , 1998 . j. h. cochrane . _ asset pricing . _ princeton university press , 2005 .m. a. crisafi .pricing of inflation - linked assets with randomised lvy mixtures .king s college london msc thesis , 2011 .d. duffie ._ dynamic asset pricing theory . _ princeton university press , 2001 .d. filipovi , l. p. hughston and a. macrina .conditional density models for asset pricing . to appear in : _ international journal of theoretical and applied finance _ vol .1 , special issue on financial derivatives and risk management , 2012 .b. 
flesaker and l. p. hughston. positive interest . _ risk magazine _ 9 , 46 - 49 , 1996 ; reprinted in _ vasicek and beyond : approaches to building and applying interest rate models _ , l. p. hughston ( ed ) .risk books , 1997 .h. fllmer and p. protter .local martingales and filtration shrinkage ._ esaim : probability and statistics _ 15 , s25-s38 , 2011 .variance - gamma and monte carlo ._ advances in mathematical finance_. festschrift volume in honour of dilip madan .r. j. elliott , m. c. fu , r. a. jarrow and j .- y .yen ( ed ) .birkhuser , 2007 .h. u. gerber and e. s. w. shiu .option pricing by esscher transforms ._ transactions of society of actuaries _ 46 , 99 - 191 , 1994 .d. heath , r. jarrow & a. morton .bond pricing and the term structure of interest rates : a new methodology for contingent claims valuation ._ econometrica _60(1 ) , 77 - 105 , 1992 . p. j. hunt and j. e. kennedy ._ financial derivatives in theory and practice_. john wiley and sons , 2004. s. m. iacus . _simulation and inference for stochastic differential equations : with r examples . _ springer series in statistics , 2008 .p. a. meyer . a decomposition theorem for supermartingales ._ illinois journal of mathematics _ 6 , 193 - 205 , 1962 .l. c. g. rogers .the potential approach to the term structure of interest rates and foreign exchange rates ._ mathematical finance _7(2 ) , 157 - 176 , 1997 .state price density , esscher transforms and pricing options on stocks , bonds and foreign exchange rates . _ north american actuarial journal _5(3 ) , 104 - 117 , 2001 .
Numerous kinds of uncertainties may affect an economy, e.g. economic, political, and environmental ones. We model the aggregate impact of the uncertainties on an economy and its associated financial market by randomised mixtures of Lévy processes. We assume that market participants observe the randomised mixtures only through best estimates based on noisy market information. The concept of incomplete information introduces an element of stochastic filtering theory in constructing what we term "filtered Esscher martingales". We make use of this family of martingales to develop pricing kernel models. Examples of bond price models are examined, and we show that the choice of the random mixture has a significant effect on the model dynamics and the types of movements observed in the associated yield curves. Parameter sensitivity is analysed and option price processes are derived. We extend the class of pricing kernel models by considering a weighted heat kernel approach, and develop models driven by mixtures of Markov processes.

*Keywords:* pricing kernel, asset pricing, interest rate modelling, yield curve, randomised mixtures, Lévy processes, Esscher martingales, weighted heat kernel, Markov processes.
a unique feature of missions using fresnel lenses for gamma - ray astronomy is the large distances over which precise station - keeping of two spacecraft is required .the concept of a mission using a fresnel lens for ultra - high angular resolution gamma - ray astronomy has been studied by the nasa gsfc integrated mission design center ( imdc ) .the imdc is a resource dedicated to developing space mission concepts into advanced mission designs by providing system engineering analysis of all the flight systems and subsystems .the imdc process involves the mission scientists working with engineers of all the major engineering disciplines , e.g. mechanical , electrical , propulsion , flight dynamics , communications , mission operations , etc ., in a highly interactive environment which naturally allows for inter - discipline communications and trade studies .for the fresnel mission studies , an additional formation - flying flight dynamics group also participated .two configurations of the mission were studied , a definitive fresnel mission with a km spacecraft separation and a smaller , pathfinder mission with a km focal length .each mission configuration completed a dedicated , one week imdc study .after discussing the requirements and assumptions used as input for the imdc studies , the mission profiles and key findings are summarized along with a re - analysis of the propulsion requirement to include the implications of recent advances in ion thruster technology .lccl + + & pathfinder & full & + + focal length & & & km + angular resolution & & & + fresnel lens diameter & 1 & 4.5 & meter + fresnel lens mass & 30 & 600 & kg + czt detector size & 1 & 1 & meter + detector mass & 200 & 200 & kg + detector power & 200 & 200 & watts + czt pixel size & & & mm + pixel angular resolution & 4 & 0.4 & + telescope field - of - view & 2000 & 200 & + time on target & 21 & 21 & days + mean repointing time & _ minimize _ & _ minimize _ & + + + + the overarching philosophy inherent in both of these imdc studies was that only technologies already existing or anticipated in the near future were considered as available resources for the spacecraft and mission development .thus the results indicate if the missions are feasible in the near term , while identifying areas that need to be improved or developed .table 1 details the parameters that were used to define the science payloads .starting with the full fresnel mission , two 4.5 meter diameter aluminum fresnel lenses in an orthogonal arrangement were incorporated into the lens - craft .the lenses would be designed for different photon energies and require a simple , rotation to exchange one lens for the other .the size of the fresnel lenses were reduced to 1 meter in diameter for the pathfinder configuration .the detector , located on a separate detector - craft , was chosen to be a 1 meter czt array , segmented into mm pixels .this allowed for the experience with the swift mission to easily define the detector system parameters , e.g. mass and power . 
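The pixel angular resolutions quoted in Table 1 follow from simple geometry, theta_pix ~ pixel size / focal length. The sketch below reproduces the 4 and 0.4 micro-arcsecond entries under the assumption of a 2 mm pixel (the value quoted later for the CZT array) and focal lengths of 1e5 and 1e6 km; the focal lengths are illustrative assumptions rather than values read from the table.

import math

RAD_TO_MICROARCSEC = 180.0 / math.pi * 3600.0 * 1e6

def pixel_resolution_microarcsec(pixel_m, focal_km):
    # angular size of one detector pixel: theta ~ pixel / focal length
    return pixel_m / (focal_km * 1e3) * RAD_TO_MICROARCSEC

# illustrative values: a 2 mm CZT pixel at focal lengths of 1e5 km and 1e6 km
for focal in (1e5, 1e6):
    print(f"f = {focal:.0e} km -> {pixel_resolution_microarcsec(2e-3, focal):.1f} micro-arcsec per pixel")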
a single lens - craft and a single detector - craftwere initially chosen for each mission configuration along with the desire to accommodate these in a single launch vehicle with an existing defined dual - manifest capability .the choices of orbits and and spacecraft alignment orientations are dictated by the objective of minimizing the propulsion requirements while allowing for the potential viewing of astrophysical objects in any direction , albeit at different times of the orbit cycle .the on - target time was chosen _ a priori _ to be 3 weeks with the goal of minimizing re - orientation times between observational targets .a representative list of potential astronomical sources of gamma - rays determined the average re - orientation to be 20 .the spacecraft control and alignment requirements are determined by the performance of the telescope .the gamma - ray source measurements are insensitive to modest ( ) tilts of either the lens or detector and to mild changes in the separation of the two spacecraft ( a 1000 km displacement leads to a for a km baseline ) . however , position control and knowledge are essential in the lateral direction , especially in a narrow field - of - view system .attitude control must ensure that the image produced by the optics falls onto the detector , and the knowledge of this positioning must be at the level of the size of a detector pixel . using a meter size scale for the detector with 2 mm size pixels , the control of the spacecraft needs to be accomplished at the cm level with knowledge of mm .the challenging aspects of the fresnel mission are the flight dynamics and satisfying the propulsion requirements .flight dynamic analysis led to the selection of an earth - trailing heliocentric orbit at 1 au for both missions .the lens - craft would be in a true orbit while the detector - craft would be in an appropriately offset pseudo - orbit and thus require station - keeping propulsion .the pseudo - orbit can be within or outside the ecliptic plane depending on the direction of the gamma - ray source .other orbits , such as those around lagrange points , were considered but the analysis was determined to be too complex for these studies .table 2 details the spacecraft and flight dynamic parameters developed from the imdc studies . 
for the full mission , a delta - iv - heavy was identified as the launch vehicle .the availability of a 19.2 m tall , 5 meter diameter dual - manifest fairing naturally fits the full fresnel mission profile .the delta - iv - h can also throw kg outside the earth s gravitational potential .for the pathfinder mission , a smaller delta - iv - m has sufficient capabilities to achieve the launch although a mission specific dual - payload - attachment - fairing ( dpaf ) would have to be developed .ion thrusters were identified as the propulsion technology to accomplish both the station - keeping and the re - pointing between observational targets .the ion thrusters would be incorporated into the detector - craft while both spacecraft would have cold - gas propulsion for fine attitude control .solar sail propulsion was considered but dismissed due to the large ( km ) sail size required and the mission constraints imposed by the inherent directional asymmetry in using the solar flux .the propulsion requirements were developed from the analysis of the mission flight dynamics .the results are summarized by detailing the accelerations , , and thrust requirements , which were evaluated in the context of the available ion thruster performance data , to form each mission profile .as there is some freedom in the propulsion parameterization , the pathfinder mission will be described using the propulsion performance assumed for the imdc study while the full fresnel mission will be considered taking into account recent improvements in ion thruster performance .the power available for the propulsion is limited to that which can be delivered by the nearly 50 m solar arrays assumed in the imdc design of the detector - craft and corresponds to 19.8 kwatts peak power . the flight dynamic analysis for the fresnel pathfinder mission determined that the accelerations needed for station keeping in an earth - trailing heliocentric pseudo - orbit varied between m / s over the orbit cycle . conservatively assuming a mean acceleration of m / s translates into a m / s / week and an mnewton thrust requirement for a 1200 kg detector - craft . assuming a two week time to repoint 20 between targets , an acceleration of m / s and a m / s are needed .the value of includes the effects of having to start and stop the detector - craft and using continuous versus impulsive repointing maneuvers .the full fresnel mission requires the spacecraft separation distance to be increased by a factor of 10 to a focal length of km .thus the required station - keeping acceleration and also increases by 10 over that needed for the pathfinder mission . assuming a 3 week repoint time and 20 between targets , an acceleration of m / s and a m / s are required . an interesting result from the flight dynamics analysisis that the effects of solar gravity do not significantly increase the repointing acceleration requirement for times less than weeks .table 2 details the flight dynamical requirements and the propulsion parameters for specific configurations of the pathfinder and full fresnel missions . for the pathfinder mission ,the propulsion parameters for the ion thrusters assumed in the imdc study are used .the station - keeping is performed via a single rit-10 thruster while the repointing maneuvers require the addition of an nstar ion thruster .both of these thrusters have flight heritage and have demonstrated hours of operation with specific impulses ( isp s ) s. 
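The repointing budget can be estimated with a bang-bang translation: the detector-craft must traverse a lateral arc s ~ f*theta, accelerating for half the allotted time and decelerating for the rest, so a = 4s/T^2 and delta-v = a*T. In the sketch below, the 20 degree slew, the two- and three-week repoint times, and the 1200 kg craft mass are taken from the text, while the focal lengths of 1e5 and 1e6 km are assumptions chosen for illustration.

import math

def repoint_budget(focal_km, angle_deg, t_days):
    # bang-bang lateral translation: accelerate half-way, then decelerate
    s = focal_km * 1e3 * math.radians(angle_deg)   # lateral distance to cover, m
    T = t_days * 86400.0
    a = 4.0 * s / T**2                             # required constant acceleration, m/s^2
    return a, a * T                                # (acceleration, delta-v)

# pathfinder-like numbers: 1e5 km focal length, 20 deg repoint in 2 weeks
a, dv = repoint_budget(1e5, 20.0, 14.0)
print(f"pathfinder: a ~ {a:.1e} m/s^2, delta-v ~ {dv:.0f} m/s, "
      f"thrust for a 1200 kg craft ~ {1200 * a * 1e3:.0f} mN")

# full-mission-like numbers: 1e6 km focal length, 20 deg in 3 weeks
a, dv = repoint_budget(1e6, 20.0, 21.0)
print(f"full mission: a ~ {a:.1e} m/s^2, delta-v ~ {dv:.0f} m/s")

The pathfinder case yields roughly 115 m/s and 115 mN, consistent with the 120 m/s and 110-145 mN entries of Table 2; the full-mission case yields roughly 770 m/s, in the same range as the tabulated 830 m/s.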
the propulsion power requirements assumed 650 watts for the rit-10 and 4 kwatts for the nstar thruster .it was noted by the imdc that this configuration allows for the size of the solar arrays to be reduced to 16 meter .for the full fresnel mission , a re - analysis of the propulsion has been performed , and verified by the imdc , incorporating recent advances in ion thruster technology .the rit-22 thruster has demonstrated an isp s with thrusts up to 250 mn in the laboratory . from the data in table 2, the station keeping can be accommodated with a single rit-22 while the repointing could be accomplished with 4 rit-22 , assuming the thrust could be increased to 300 mn per thruster .the power requirement assumed 5.5 kwatts per thruster which would require increasing the solar array size by from the 50 m assumed in the original imdc detector - craft definition .although not presented in table 2 , employing rit-22 thrusters in the pathfinder configuration would allow the repointing to be accomplished on a week timescale .thus 60 targets could be viewed in a 5 year mission using 335 kg of fuel with a power requirement of 16.5 kwatts .a single rit-10 thruster would still be employed for station - keeping propulsion .the availability of the improved ion thruster performance allows mass savings such that one could consider including a second detector craft in the full fresnel mission profile .the addition of a third spacecraft would allow observations with one detector - craft while the other is maneuvering for the following observation . using the data presented in table 2 , the total mass for a single lens - craft and two detector - craft leads to a total of 6150 kg , which is well below the capability of a delta - iv - h to launch a payload outside the earth s gravitational potential , even with the customary 20% contingency .however , the feasibility of integrating the three spacecraft with a custom payload attachment structure needs to be assessed .lccl + + & pathfinder & full & + + lens - craft dry mass & 370 & 940 & kg + lens - craft power & 260 & 260 & watts + detector - craft dry mass & 1115 & 1285 & kg + detector - craft power & 4.65 & 22 & kwatts + station - keeping & 4.2 & 42 & m / s / week + on - target time & 21 & 21 & days + station keeping thruster isp & 3000 & 6000 & s + station keeping trust & 7.8 - 10 & 90 - 185 & mn + mean repointing time & 7 & 21 & days + mean repointing & 120 & 830 & m / s + repointing thruster isp & 3000 & 6000 & s + repointing thrust & 110 - 145 & 580 - 1200 & mn + total propellant mass & 320 & 1320 & kg + # of targets in 5 years & 52 & 43 & + + + + + + + the orientation of the lens spacecraft is not critical ( 60 `` control and 10 '' knowledge were assumed , though even though these requirements could be further relaxed ) .commercially available laser gyro , star - tracker and reaction wheel units having the necessary capabilities were identified in the imdc studies .the intrinsic requirements for the detector- and lens - craft alignment in the two fresnel configurations are similar. 
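Propellant masses follow from the ideal rocket equation. The sketch below applies it to the pathfinder figures quoted in Table 2 (52 repointings at about 120 m/s each, 4.2 m/s per week of station keeping over five years, an 1115 kg dry detector-craft, and a 3000 s specific impulse); it is an order-of-magnitude check rather than the IMDC bookkeeping.

import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(m_dry_kg, dv_total, isp_s):
    # ideal rocket equation: m_prop = m_dry * (exp(dv / (g0 * Isp)) - 1)
    return m_dry_kg * (math.exp(dv_total / (G0 * isp_s)) - 1.0)

# pathfinder-like 5-year budget using the Table 2 figures
dv_total = 52 * 120.0 + 4.2 * 52 * 5        # repointings plus station keeping, m/s
m_prop = propellant_mass(1115.0, dv_total, isp_s=3000.0)
print(f"total delta-v ~ {dv_total:.0f} m/s, propellant ~ {m_prop:.0f} kg")

The result, roughly 316 kg, is close to the 320 kg quoted in Table 2. Applying the same formula to the full-mission figures (43 repointings at 830 m/s, 42 m/s per week, 6000 s specific impulse) gives a mass of the same order as the tabulated 1320 kg, the exact value depending on how the station-keeping and slewing phases are assumed to overlap.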
the formation flying alignment could be accomplished by employing a laser beacon on the lens - craft , which then would be observed by a sensor on the detector - craft .this would require a sensor working at the micro- arcsecond level for the full fresnel mission , with perhaps a 100 microarcsecond field of view .one of the options considered was to place the sensor on a gimbal to allow fine pointing without placing undue constraints on the spacecraft orientation .a key finding of the imdc study was the need for further development of a metrology system to determine the absolute position of the gamma - ray sources in the narrow field - of - view of the telescope .the employment of a micro - arcsecond star tracker was determined to be problematic as it required meter - sized apertures and excessive integration times or a star of sufficiently large magnitude in the field - of - view .a gimbaled micro - arcsecond star tracker of more modest size viewing bright , off - axis stars operating with precise gyros was identified as a possible solution . however, this technology needs to be further developed .other missions , such as maxim , have similar metrology requirements and could offer a solution to this problem .another intriguing possibility considered by the imdc was to include the deployment of another detector - craft at a much shorter separation , from the lens - craft , to act as a finder - craft .the widened field - of - view of this telescopic configuration relaxes the source - finding problem .the alignment of the three spacecraft places the astrophysical source in the field - of - view of the longer , baseline configuration , once the finder - craft drifts out of the field - of - view .however , this bootstrapping technique requires the addition of a third , maneuverable spacecraft , albeit with modest propulsion requirements .other aspects of the imdc study , such as spacecraft mechanical and electrical design , thermal control and management , telemetry , etc . , were found by the imdc studies to be straightforward .communications could be handles as an s - band link between the two spacecraft for ranging , with the data from the lens - craft being relayed through the detector - craft .a 1.5 m gimbaled antenna then would allow all data from both spacecraft to be sent to the ground during a single daily 15 minute dsn contact .it was noted that halting the earth - drift - away orbit at au would preclude any obscuring of the spacecraft by the sun .two gamma - ray astronomy missions employing fresnel lenses were developed at the nasa gsfc integrated mission design center ; a pathfinder mission with a km spacecraft separation and 10 micro - arcsecond ( ) imaging ability and a definitive mission with a km separation and with a 1 angular resolution .the development of the spacecraft was determined to be straightforward .while the studies determined that the flight dynamics and propulsion requirements are challenging , they can be accomplished with current ion propulsion technology if a re - pointing time scale of a week is used .furthermore , recent advances in ion thrusters have relaxed the propulsion requirements potentially allowing the incorporation of an additional detector - craft into a mission profile , thus doubling the number of viewed gamma - ray sources in a 5 year mission lifetime . this work was supported by a grant from the nasa revolutionary aerospace systems concepts ( rasc ) program .g. 
Skinner, Astronomy and Astrophysics, *375*, 691 (2001).
For information on the IMDC see http://imdc.nasa.gov/
Barthelmy, Proc. SPIE, *4140*, 50 (2000).
http://cs.space.eads.net/sp/spacecraftpropulsion/rita/rit-10.html
http://www.inspacepropulsion.com/tech/next_gen.html
http://cs.space.eads.net/sp/spacecraftpropulsion/rita/rit-22.html
The employment of a large-area phase Fresnel lens (PFL) in a gamma-ray telescope offers the potential to image astrophysical phenomena with micro-arcsecond angular resolution. In order to assess the feasibility of this concept, two detailed studies have been conducted of formation-flying missions in which a Fresnel lens capable of focussing gamma-rays and the associated detector are carried on two spacecraft separated by up to 10 km. These studies were performed at the NASA Goddard Space Flight Center Integrated Mission Design Center (IMDC), which developed spacecraft, orbital dynamics, and mission profiles. The results of the studies indicated that the missions are challenging but could be accomplished with technologies available currently or in the near term. The findings of the original studies have been updated taking account of recent advances in ion thruster propulsion technology.

_Space Research Association, Goddard Space Flight Center, Greenbelt, Maryland 20771 USA; 9, avenue du Colonel-Roche, 31028 Toulouse, France_
let be a set of _ agents _ that must share a _ reward _ .we are interested in settings where the share of that an agent receives depends on evaluations that its peers make concerning the agent s contribution to the group .hence , each agent is asked to provide _ evaluations _ for all peers .such evaluations can be either _direct evaluations _ or _ predictions _ of absolute frequencies of received evaluations . for avoiding a biased self - evaluation , an agent is not requested to provide evaluations for itself . given a positive integer parameter ,the direct evaluations made by an agent are formally represented by the vector , where represents agent s evaluation given to agent , and .hence , the parameter represents the top possible evaluation that an agent can receive and an explicit constraint that bounds the sum of direct evaluations .the predictions made by agent are formally represented by the vector , where represents the agent s prediction for the absolute frequency of evaluations given to agent , _i.e. _ for , and .the evaluations are submitted to a central entity called _ mechanism _ , which is responsible for sharing the reward .this entity relies only on reported evaluations when determining agents shares .we assume that evaluations are independent across agents , that evaluations provided by an agent for its peers are independent among themselves , and that agents act to maximize their expected shares .this implies that agents may deliberately lie when providing evaluations for others .therefore , we distinguish between the true evaluations made by agent , for direct evaluations and for predictions , and the evaluations that it reports to the mechanism , .we overload the notation using to denote both direct evaluations and predictions , but we make clear its meaning when necessary .we call the _ strategy _ of agent and a _strategy profile_. we define .thus , we can represent a strategy profile as . if the reported evaluation of agent is equal to its true evaluation , _i.e. _ for direct evaluations or for predictions , then we say that agent s strategy is _ truthful _ , and represent it by .we say that is _ collectively truthful _ if all reported strategies are truthful .we denote the share of given to agent when all the reported evaluations are by . the most important propertywe wish our mechanisms to have is that the share assigned to each agent should reflect the reported evaluations for that agent .in addition to this requirement , we would like our mechanisms to be _ budget - balanced _, _ individually rational _ , _ incentive - compatible _ ( or _ strategy - proof _ ) , and _ collusion - resistant _ .we consider that a _ collusion _ between agents and occurs when agent changes its truthful evaluation for agent , resulting in the report , and , for doing this , it receives a side - payment , , so that : 1 . + p > \mathbb{e }\left [ \gamma_i(\accentset{*}{\textbf{x}}_i , \textbf{x}_{-i})\right] ] .collusions with more than two agents can be decomposed into a union of collusions between two agents ( a liar and a beneficiary ) .we say that a mechanism is _ collusion - resistant _ when , for all agents and strategies , where for direct evaluations and for predictions , we have \leq \mathbb{e } \left [ \gamma_i(\accentset{*}{\textbf{x}}_i,\textbf{x}_{-i } ) + \gamma_j(\accentset{*}{\textbf{x}}_i,\textbf{x}_{-i } ) \right]$ ] . 
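The displayed conditions can be written out as follows; this is a plausible reading of the definition rather than a verbatim restoration (in particular, the direction of the side-payment p and the second collusion condition are inferred from the surrounding prose).

% Collusion between a liar i and a beneficiary j, with side-payment p received by i
% (the second condition is inferred):
\mathbb{E}\big[\gamma_i(\textbf{x}_i,\textbf{x}_{-i})\big] + p \;>\; \mathbb{E}\big[\gamma_i(\accentset{*}{\textbf{x}}_i,\textbf{x}_{-i})\big],
\qquad
\mathbb{E}\big[\gamma_j(\textbf{x}_i,\textbf{x}_{-i})\big] - p \;\geq\; \mathbb{E}\big[\gamma_j(\accentset{*}{\textbf{x}}_i,\textbf{x}_{-i})\big].

% Collusion-resistance: for all pairs (i, j) and all reports x_i,
\mathbb{E}\big[\gamma_i(\textbf{x}_i,\textbf{x}_{-i}) + \gamma_j(\textbf{x}_i,\textbf{x}_{-i})\big] \;\leq\; \mathbb{E}\big[\gamma_i(\accentset{*}{\textbf{x}}_i,\textbf{x}_{-i}) + \gamma_j(\accentset{*}{\textbf{x}}_i,\textbf{x}_{-i})\big].

Adding the two collusion conditions shows that a profitable collusion would require the summed expected shares under the misreport to exceed those under truthful reporting, which is exactly what the last inequality rules out.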
to provide incentives for truth - telling , we use the following strictly proper scoring rule : \ ] ] where is a probability distribution and is the observed event among possible outcomes .the peer - evaluation mechanism announces the parameter and requests agents to submit direct evaluations .the sharing scheme is presented in algorithm 1 . the share received by each agent computed by aggregating its received evaluations into a variable , and multiplying it by a normalizing factor . due to the constraint imposed on direct evaluations , _i.e. _ , it is clear that after this operation .consequently , the mechanism is budget - balanced . because the evaluations are greater than or equal to zero , an agent can not receive a negative share . then , the mechanism is individually rational .the following theorem states our main result concerning the properties of the peer - evaluation mechanism .the peer - evaluation mechanism is strategy - proof . the main drawback of the peer - evaluation mechanism is that agents do not have direct incentives for lying , but they also do not have incentives for telling the truth . this characteristic makes the mechanism extremely susceptible to collusions .the peer - prediction mechanism announces the parameter and requests agents to submit predictions .we can see this game as if each agent was answering the following question about each other agent : if agents were to evaluate agent , what would be the absolute frequency of the evaluations received by it ? .the sharing scheme is presented in algorithm 2 .the main idea of the peer - prediction mechanism is to compute agents shares using _ grades _ , which are aggregations of the expected evaluations calculated from predictions , and using _ scoring rules _ to generate_ scores _ and enforce truth - telling . for using scoring rules ,it is necessary to have a realityto score an assessment .our solution considers grades as observed events of an uncertain quantity , with possible outcomes inside the set , and scores the reported predictions as if they were assessments the sharing process has essentially four steps .the first one transforms all the predictions about the evaluations for an agent to a positive real number , , by creating a probability distribution from each prediction , and summing the expected value of each distribution . in the second step ,the score of each agent is calculated as follows : first , for each agent , a probability distribution is created from the prediction .second , a temporary grade for agent is calculated as the arithmetic mean of the expected evaluations received by it , without taking into consideration the expected value from agent s prediction .the function ( nearest integer function ) rounds this temporary grade to an integer number inside the set .finally , the mechanism applies the strictly proper scoring rule represented by equation 1 on the probability distribution ( assessment ) and the temporary grade ( observed event ) . in the end , the score of agent is the arithmetic mean of results provided by the scoring rule for each prediction submitted by agent . in the third step ,agents grades are computed as the arithmetic mean of the expected evaluations received by them . 
finally , their shares are computed in the last step . each agent s score is multiplied by a constant and added to its grade , and the result is then multiplied by a weight to form the agent s share . the constant fine - tunes the weight given to scores . because the highest grade that an agent can receive is equal to , and the highest score is equal to , using the weight guarantees that the mechanism will not make a loss in the case where every agent receives the highest grade and score . an obvious consequence of such an approach is that when at least one agent does not receive the highest grade or score , the mechanism makes a profit . this implies that the mechanism is not always budget - balanced . given that scores and grades are always greater than or equal to zero , the mechanism is individually rational . the following theorem states our main result related to the peer - prediction mechanism .
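to make the two sharing schemes concrete, here is a minimal sketch of how they might be implemented. it is not the paper's algorithm 1/2: the normalizing factors, the particular strictly proper scoring rule (a shifted quadratic/brier rule stands in for the paper's equation, which is lost in extraction), and all names and parameters are assumptions made purely for illustration.

```python
def peer_evaluation_shares(R, evals, E):
    """evals[i][j] in {0,...,E} is agent i's direct evaluation of agent j.
    Assuming each agent's evaluations sum to at most E*(n-1), the chosen
    normalization keeps the total of the shares at or below the reward R."""
    agents = list(evals)
    n = len(agents)
    received = {j: sum(evals[i][j] for i in agents if i != j) for j in agents}
    norm = R / (n * E * (n - 1))                  # assumed normalizing factor
    return {j: received[j] * norm for j in agents}


def shifted_quadratic_score(dist, outcome):
    """Stand-in strictly proper scoring rule, affinely shifted into [0, 1]."""
    return (1.0 + 2.0 * dist[outcome] - sum(p * p for p in dist)) / 2.0


def peer_prediction_shares(R, preds, E, alpha):
    """preds[i][j] is agent i's predicted frequency vector (length E+1) of the
    evaluations received by agent j; alpha weights the scores. Requires n >= 3."""
    agents = list(preds)
    n = len(agents)

    def dist(freq):                               # step 1: frequencies -> distribution
        total = sum(freq)
        return [f / total for f in freq] if total else [1.0 / (E + 1)] * (E + 1)

    def expected(freq):                           # expected evaluation under dist(freq)
        d = dist(freq)
        return sum(v * d[v] for v in range(E + 1))

    grades = {j: sum(expected(preds[i][j]) for i in agents if i != j) / (n - 1)
              for j in agents}                    # step 3: mean expected evaluation

    scores = {}
    for i in agents:                              # step 2: score agent i's predictions
        per_target = []
        for j in agents:
            if j == i:
                continue
            others = [expected(preds[k][j]) for k in agents if k not in (i, j)]
            event = round(sum(others) / len(others))       # rounded temporary grade
            per_target.append(shifted_quadratic_score(dist(preds[i][j]), event))
        scores[i] = sum(per_target) / len(per_target)

    weight = R / (n * (E + alpha))                # assumed weight: highest grade E,
    return {i: weight * (grades[i] + alpha * scores[i])    # highest score 1
            for i in agents}
```

the assumed weight makes the worst-case payout (every agent at the top grade and top score) sum to exactly R, matching the no-loss argument given above; any other bound on the maximal score would change this constant accordingly.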
we study a problem where a group of agents has to decide how some fixed value should be shared among them . we are interested in settings where the share that each agent receives is based on how that agent is evaluated by the other members of the group , so that highly regarded agents receive a greater share than agents that are not well regarded . we introduce two mechanisms for determining agents shares : the _ peer - evaluation mechanism _ , in which each agent gives a direct evaluation of every other member of the group , and the _ peer - prediction mechanism _ , in which each agent is asked to report how it believes the group members will evaluate each agent . the sharing is based on the reported information . both mechanisms are individually rational . the first mechanism is strategy - proof and budget - balanced , but it can be prone to collusion . the second mechanism is collusion - resistant and incentive - compatible .
for the purpose of diagnosis , the use of explanatory multivariate classification tools is essential to efficiently characterize groups of patients with a high risk of developing the disease . two off - the - shelf classifiers are linear logistic regression and decision trees . based on two different model learning strategies , both methods investigate the relationship between binary response variables and a set of heterogeneous explanatory variables . to remove non - relevant variables and/or limit the complexity of the solution , both methods allow the integration of a penalization term in their objective functions . the choice of the penalization for logistic regression aims to reduce the risk of overfitting induced by potential co - linearity and the combinatorial exploration of all possible two - way interactions . logistic regression uses linear combinations of explanatory variables to learn a single decision boundary and build an easily understandable linear model . it selects a subset of discriminant features and assesses the predictive contribution of each of them in the model . however , it only considers linear interactions between features and their global variation related to the binary outcome . moreover , it does not take into account missing data . the decision tree approach , non - parametric and non - linear , is particularly helpful to explore which feature subspaces are predictive of a class of subjects . it learns multiple decision boundaries parallel to feature axes and builds easy - to - interpret models under the form of a set of if - then - else decision rules . it also handles data as complex as missing values , numerical and categorical data , multi - colinear variables , outliers and local relationships among variables . however , decision trees can generate over - complex and locally optimal solutions , increasing the overfitting risk and yielding unstable decision trees under small variations in the data . pruning is generally applied to avoid the overfitting phenomenon . some studies have compared these two approaches , showing that their relative performances and stability depend on the nature of the signal ( e.g. , the signal - to - noise ratio ) and the characteristics of the dataset ( size ) . therefore , recent studies have tried to combine them , particularly by applying a logistic regression model at the leaves of the decision trees in order to smooth the final model as an alternative to the standard pruning . another popular and efficient way to produce more accurate and stable classifiers is feature selection . in this study , we propose and test a novel approach , called - tree , that combines the two previous strategies . the objective of this novel approach is to fit more robust and simpler decision tree learners by first applying a feature selection resulting from the logistic regression model . we carried out this methodological study in the particular context of a thorough understanding of the mechanisms of severe forms of imported malaria .
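as a concrete illustration of the combined strategy just outlined ( an l1 - penalized logistic regression used as a feature - selection step before fitting a decision tree ) , the following sketch uses scikit - learn . it is not the authors implementation ( their references point rather to r / rpart ) , and the hyper - parameter choices ( number of candidate penalties , cross - validation folds , minimum leaf size ) are placeholders only .

```python
# Hypothetical sketch of an "L1LR -> tree" pipeline with scikit-learn.
# Not the authors' code; hyper-parameters are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.tree import DecisionTreeClassifier

def l1lr_tree(X, y, cv=10, min_leaf=10):
    # Step 1: L1-penalized logistic regression, penalty chosen by cross-validation.
    l1lr = LogisticRegressionCV(
        Cs=20, cv=cv, penalty="l1", solver="liblinear", scoring="neg_log_loss"
    ).fit(X, y)
    selected = np.flatnonzero(l1lr.coef_.ravel() != 0.0)
    if selected.size == 0:
        selected = np.arange(X.shape[1])   # fall back if nothing survives the L1 step

    # Step 2: decision tree restricted to the features kept by the L1 step,
    # with a minimum leaf size to limit over-fitting (as in the text).
    tree = DecisionTreeClassifier(min_samples_leaf=min_leaf).fit(X[:, selected], y)
    return selected, tree
```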
despite the decrease of malaria cases in endemic areas since 2000 , the increasing number of travelers between endemic regions and western countries promotes imported malaria cases in non - endemic areas .metropolitan france is the most concerned european country and the mortality rate of imported malaria is strongly related to severe malaria form favored by delays in access to health care .the world health organization ( who ) defined the different clinical and biological criteria for severe malaria in order to speed up the diagnosis and health care of patients that require urgent and intensive care units .this clinico - biological picture inferring the diagnosis is multi - criteria and complex .it also does not take into account epidemiological information which could provide further insight .indeed , contrary to endemic regions in africa , the populations of patients with imported malaria is heterogeneous and composed of first generation migrants ( born in endemic regions and living in france ) , second generation migrants ( children of first generation migrants born and living in france ) and african or european travelers , adults or children with a different history of malaria and genetic background .we also observed an epidemiological evolution in the clinical presentation of severe forms of imported malaria with an increase of older patients from a migrant background and a decrease in the number of patients having neurological disorders .therefore , in this applied research work , we explored the influence of factors ( demographic , epidemiological , clinical , biological and transcriptomic ) dismissed in the current clinico - biological picture on both the diagnosis of severe forms of imported malaria and some clinical observations of acute malaria attacks ( hematological syndrome , visceral failure , neurological disorders and parasitaemia level ) . risk factors for developing severe malaria have been investigated in this context of heterogeneous populations of patients with classical univariate statistical methods .however , these methods revealed their limits as they only assess the statistical associations between each factor and the severe criteria of imported malaria independently of each other and without prediction assumptions .hence , the use of explanatory multivariate classification tools is essential to efficiently characterize groups with a high risk of developing complicated imported malaria and to reduce the mortality of _ plasmodium falciparum _ infection in france based on multi - source data .our comparative study of the different learning strategies is especially interesting when considering the complexity of the data .indeed , the available datasets corresponding to the different case studies present several difficulties that need to be overcome : heterogeneous populations of patients with under - represented subgroups , local phenomena with a weak signal - to - noise ratio , missing data , bias in the current classification , small dataset , etc . in section [ sec : matmeth ] , we first present the data and the 6 case studies grouped into two experiments , and then , we briefly explain : the three classification methods , the model selection and the evaluation methodologies . 
in section [ sec : results ] , we describe the different performance results and the learned models .finally , in section [ sec : concl ] , we conclude on the benefit of the combined method - tree , the clinical aspects and the perspectives of this work .the french national reference center of malaria ( fnrcm ) monitors imported malaria for epidemiological purposes through a national network of correspondents in hospital centers . in a prospective manner , demographic ( age , sex , ethnic origin , medical history , history of malaria , chemoprophylaxis taken ) , epidemiological ( native country , country of residence , visited area ) , clinical ( history of the disease , severity criteria , management of the patient , treatment ) , biological ( severity criteria , biochemical parameters , hematological parameters , diagnostic tools , serological status ) and transcriptomic ( parasite genome ) data have been collected in a secured database .the objective of this monitoring is to identify high - risk groups for the development of severe malaria .|x|x| + * data type * & + demographic & age , sex , caucasian ( dichotomous ) , african ( dichotomous ) , chemoprophylaxis taken ( dichotomous ) + epidemiological & vis west africa ( visit in west africa , dichotomous ) , vis central africa ( visit in central africa , dichotomous ) , vis other ( visit in an other endemic country , dichotomous ) , res france or other non - endemic country ( resident in france or in another non - endemic country , dichotomous ) + clinical & atcd ( history of the disease ) , delay 2 ( days from symptoms to recovery ) , immunodependency ( dichotomous ) + biological & gb ( white blood cells count ) , platelets ( platelets count ) , serology , serological interpretation , titration + transcriptomic & a1 , a2 , a3 , b1 , b2 , c1,c2 , bc1 , bc2 , var1 , var2csa , var3 ( parasite genome , i.e. expression of _ var _ genes ) + we defined six case studies , called _ cs _ in this paper , grouped into two experiments . the first experiment , composed of two case studies , explores the influence of demographic , epidemiological , clinical biological and transcriptomic factors dismissed in the current clinico - biological picture ( see table [ tab : var ] ) on the diagnosis of severe forms of imported malaria .the second experiment is composed of four case studies and explores the influence of the same previous factors on four clinical observations of acute malaria attacks : hematological syndrome , visceral failure , neurological disorders and parasitaemia level .note that for these two experiments , we have removed from the input variables those that are directly used to infer the target , which is the malaria severity degree , such as organ or metabolic dysfunctions and blood smear measures .[ [ first - experiment ] ] first experiment + + + + + + + + + + + + + + + + the first case study focuses on the current diagnosis of severe imported malaria . for the second case study , we distinguished two subgroups of patients among those with severe malaria according to the existence of neurological and multi - organ clinical dysfunctions .the first one is called serious imported malaria and the second one is called critical imported malaria because this last form of the disease has a high probability of being fatal . finally , the studied dataset is composed of 353 patients diagnosed with three severity levels of imported malaria : moderate , serious or critical . 
for each patient, we have a total of 29 features .12 of them concern the parasite s genome , giving information of different nature and sources .we define two case studies , each one comparing two groups of subjects : 1 .the first case study includes the whole dataset by comparing subjects having moderate imported malaria to those having a severe form of the disease ( i.e. serious and critical ) .353 subjects are included in this experiment with 202 patients having a moderate form and 151 having a severe form .the objective of this experiment is to identify risk factors predictive of severe malaria in a heterogeneous population of patients .2 . the second case study compares subjects suffering from a serious imported malaria to those having a critical form of the disease .151 subjects are included in this experiment with 88 having a serious form and 63 displaying the critical form .the objective of this experiment is to characterize and to validate the relevance of these two subgroups among patients suffering from severe malaria .[ [ second - experiment ] ] second experiment + + + + + + + + + + + + + + + + + the 4 different case studies consist in discriminating between two clinical states used for the definition of severe forms of imported malaria among a set of 343 patients . 1 .the third case study compares patients suffering from hematological syndrome with people not affected by this condition .49 patients suffer from this syndrome in the dataset .2 . the fourth case study compares patients suffering from visceral failure with people not affected by this condition .271 patients suffer from this failure in the dataset .3 . the fifth case study compares patients suffering from neurological disorders with people not affected by this condition .32 patients suffer from these disorders in the dataset .4 . the sixth case study compares patients displaying parasitaemia greater than 4% with people not affected by this condition .113 patients display this condition in the dataset .for the classification step , we used an regularized logistic regression ( i.e. ) modeling the class membership probability as a linear combination of explanatory variable .standard logistic regression ( i.e. non penalized ) estimates a binary decision function by assuming that the logit can be modeled as a linear function of features : is the binary target , are the explicative variables , is the intercept and is the regressor vector .the penalization parameter is introduced in the model to shrink the estimates of the regression coefficients towards zero and set some of them to zero relative to the maximum likelihood estimates : where is the log - likelihood function : note that the parasite genome data have not been included in method as it requires not - empty features and many subjects have missing values for these features .+ decision tree analysis successively splits the dataset into increasingly homogeneous subsets by binary recursive partitioning of multidimensional covariate space until it is stratified to meet a splitting criterion .the splitting criterion used is , where is the sum of squares for the node and are the sums of squares for the right and left children respectively .this is equivalent to choosing a split that maximizes the between - groups sum of squares in an analysis of variance .we constrained a minimum number of 10 observations in the leaf nodes in order to avoid over - fitting .+ the combined model consists of two steps : * select a subset of features by fitting a logistic regression . 
* build a decision tree on the selected features .the purpose of this combined approach is the same as the pruning tree approach : to limit the appareance of an over - complex solution and lower the over - fitting risk .however , the penalization of these two approaches occurs in two different ways . instead of pruning the learned decision tree to limit its size, the -tree approach prior constrains the model by reducing the dimension of input features .+ for both methods , we applied a model selection to limit the complexity of the solution with a -fold cross - validation .we chose to both ensure a biais - variance trade - off of the test error estimates and a sufficient representation of the two groups within the test sets . in each experimental dataset, we kept the original proportion of the two classes within each fold . for logistic regression , we optimized the penalization coefficient so that we capture two levels of model complexity .therefore , we selected two values of : such that - is the best model minimizing the -folds cross - validation mean squared error and such that - corresponds to the simplest model which is no more than one standard error worse than the best model according to the one standard error rule . for decision trees, we applied a cost - complexity minimization to limit the size of the tree and we called this simplified tree , the pruned tree : the parameter defines the cost of adding another split to the model .this parameter is optimized within the -fold cross - validation .+ as the output is dichotomous and the two regression methods , and decision trees , estimate the class membership probability , we define the following decision function to classify our samples : we fixed the threshold of the decision function with respect to the distribution of the two classes in the six case studies of the two experiments in order to avoid biased models due to imbalanced classes .for all the case studies `` cs 1 to 6 '' , the threshold is equal to the proportion of symptomatic patients .hence , the patients are less frequently classified in the predominant class in a way that more strongly penalizes the misclassication of this class .we checked the predictive power of our constructed classifiers , based on the , the regression trees and the -tree methods , through three performance indicators : * _ recall _ : * _ specificity _ : * _ accuracy _ : the _ recall _ ( resp . 
_ specificity _ ) score aims to quantify the overall rate of samples correctly classified for the second ( resp .first ) class .they give two complementary insights about the quality of classification performances of the different methods .indeed , from a medical point of view , we aim to discriminate and well classify the two groups of patients and not only the predominant class .we also assessed the statistical significance of the recall and specificity scores with a binomial test and we defined three significance levels : * , * * , * * * .these performance indicators are computed with leave - one - out validation , classifying each patient one time , in order to generate stable learning models ( highly correlated ) .this choice is due to the high heterogeneity of the patients .in a first part ( _ first experiment _ ) we presented the results of our methodological objective , that is the comparative study of the different learning strategies , applied to the two case studies of the first experiment ( moderate _ vs _ severe malaria and serious _critical ) .we compared the results between the standard methods , i.e. lr and classification trees , and their sparse form , - _ vs _ - and tree _ vs _ prune .then , we selected the best forms of each standard methods in term of performance scores and sparsity and we combined them to build a - tree model . in a second part ( _ second experiment _ )we applied the combined model to the four case studies of the second experiment to validate this approach and point out some clinical insights given by the selected variables of the models .for all the case studies of the two experiments , we reported and explained the models obtained with the combined methods . [ [ performance - scores ] ] performance scores + + + + + + + + + + + + + + + + + + classification trees - based models ( i.e. tree and prune ) have a better accuracy than -based models ( i.e. - and - ) for both case studies `` cs 1 and 2 '' ( see fig . [fig : stdperf4methods ] and [ fig : gtgperf4methods ] ) .the tree method outperforms the other methods with an accuracy score of ( resp . ) to discriminate moderate and severe ( resp .serious and critical ) forms of imported malaria .it is also the only method to have both highly significant recall and specificity scores ( pvalue 0.001 ) for the two case studies . indeed , -based models tend to well classify the second class of the case studies , that is the least represented one composed of the patients with severe ( resp .critical ) imported malaria in the first ( resp .second ) case study , displaying significant recall scores . on the other hand , they tend to fail to correctly classify the first class .conversely , the prune models well classify the first class of the case studies , the most represented class , composed of the patients with moderate ( resp .serious ) imported malaria in the first ( resp .second ) case study , having the best significant specificity scores . on the other hand, they fail to classify the second class correctly .we can assume that the tree method is more robust to unbalanced groups of samples and therefore extracts discriminant decision rules that generalize well to predict both classes . for both case studies , we also observed that the simpler forms of the standard methods , namely - and prune models , achieve the best significant recall and specificity scores , respectively .conversely , they achieve the worst non - significant specificity and recall scores , respectively . 
therefore , the sparsest approaches seem to be more sensitive to unbalanced groups of samples tending to over - classify a class more than the other . [[ selected - variables ] ] selected variables + + + + + + + + + + + + + + + + + + as expected , the simpler models , namely - and prune , include less features than the standard models , namely - and tree . for both the first case studies , and particularly the first one , the tree - based models are on average sparser but less stable than the -based models ( see fig . [ fig:4_methods_std ] and [ fig:4_methods_gtg ] ) .indeed , they capture on average less features but some of them are selected only few times corresponding probably to locally optimal solutions . a common pattern of selected stable features ( i.e. almost selected systematically over the leave - one - out models ) for all the methods is composed of white blood cells count ( gb ) , platelets count , serological status and titration variables for `` cs 1 '' .we observed the same common pattern of selected stable features plus the age for `` cs 2 '' .note that the serological status is a discrete feature deriving from the titration values and so they are considered similar features . in the following , we focused on the tree method , since it is the most powerful approach and it gives meaningful information on the models through the learned classification rules .indeed , these latter characterize the different discriminant subregions of the feature space specific to subgroups of subjects .in addition to the common pattern , the three models capture the following stable features : the immunodependency and sex variables for `` cs 1 '' , and the log - transformed of the expression of sub - group of _ var _ gene family a and visit in west africa variables for `` cs 2 '' . some of these results confirmed the observations of previous studies on the potential interactions of _ plasmodium falciparum _ during acute malaria with negative hematological changes like an increase of gb and a decrease of platelets count and at the same time with an immunological protection represented by serological status on the development of the different severity forms of imported malaria .furthermore , being older is a well - known risk factor for developing the acute form of imported malaria . in , a statistical relationship has been reported between visiting west africa , especially gambia , and the risk of fatal malaria . concerning the impact of gender on the discrimination between moderate and severe malaria, no statistical relation has been proven between gender and malaria severity .nevertheless , one study showed that women are more susceptible to cerebral complications than men .concerning the expression of group a _ var _ gene , some studies have highlighted the role of the _ var _ gene family in cerebral malaria . 
to effectively penalize the tree method with a prior -based feature selection step, we used the method .[ [ performance - scores-1 ] ] performance scores + + + + + + + + + + + + + + + + + + the combined -tree method achieves similar or higher performances than the tree ones , except for the recall score of the first case study which is inferior ( see fig .[ fig : stdperf3methods ] and [ fig : gtgperf3methods ] ) .[ [ selected - variables-1 ] ] selected variables + + + + + + + + + + + + + + + + + + as the combined method builds the classification tree based on the features selected with - , it efficiently reduces the set of input features .the set of stable variables selected by the combined models corresponds to the previously observed common patterns for both case studies : gb , platelets count and serological status / titration ( resp . plus age ) variables for `` cs 1 '' ( resp . ``cs 2 '' ) ( see fig . [fig : stdselvar3 ] and [ fig : gtgselvar3 ] ) . note that for the second case study the 151 combined models have selected either serology or titration leading to a total frequency of 103 for both variables .therefore , the combined method led to sparser , more stable and discriminant ( in terms of accuracy performances ) models than those achieved by the tree method .tables [ tab:1 ] and [ tab:2 ] show examples of rule sets derived from stable - tree models for each case study . from these classification rules, we can easily point out the subregions of the feature space predictive of the severe forms of imported malaria . given the results of the methodological comparative study obtained on the two case studies of the first experiment, we applied the tree to the four case studies of the second experiment .[ [ performance - scores-2 ] ] performance scores + + + + + + + + + + + + + + + + + + for all the case studies , the combined method discriminates with good accuracy scores between the two clinical states of the four clinical severity criteria ( figure [ fig : perf_clinical ] ) : hematological syndrome ( ) , visceral failure ( ) , neurological disorders ( ) and parasitaemia level ( ) .we can also conclude that for all these case studies , we significantly classify the two classes , except for the recall score of `` cs 5 '' which can be explained by the low frequency of the class `` neurological disorders '' ( i.e. ) .[ [ selected - variables-2 ] ] selected variables + + + + + + + + + + + + + + + + + + the selected variables presented on figure [ fig : var_clinical ] and the classification rules ( see tables [ tab:3 ] to [ tab:6 ] ) give medical insights on the influence of unused factors ( demographic , epidemiological , clinical , biological and transcriptomic ) on some clinical observations of acute malaria attacks . as previously observed in the first experiment , the variables platelets count and white blood cells count are strongly involved in the prediction of neurological disorders , hyper - parasitaemia and hematological syndrome .this could reflect the parasite sequestration in _plasmodium falciparum _ malaria .concerning `` cs 5 '' , the models showed that caucasian patients seem to be more affected by neurological disorders .moreover , the corresponding classification rule set pointed out an interesting insight , that is the patients probably not previously affected by malaria ( caucasian , low titration / negative serology ) are more sensitive to neuro - malaria which indicates the presence of more severe forms of malaria . 
on the other hand , patients with a history of malaria indicated by a positive serology display visceral failures .we currently observed more frequently the moderate malaria form with visceral failures and without neurological disorders .furthermore , the trees models of `` cs 4 '' captured only the variable serology as a predictive factor of visceral failures .this can be explained by the fact that these symptoms may arise more from an inflammatory or immunological response than from a parasite sequestration .the gender seems also to have an impact on the presence of the hematological syndrome and the hyper - parasitaemia .indeed , the rule sets ( see tables [ tab:3 ] and [ tab:6 ] ) show that for a given range of platelets count and a given gb threshold , male patients develop these clinical symptoms while women do not. this may be due to the fact that men travel more frequently than women in endemic areas .among the standard approaches , i.e. logistic regression and regression trees , only the tree method efficiently well classifies the two classes of patients for both the first and the second case studies .however , the tree models are not sparse and stable enough providing locally optimal solutions reflecting the intrinsic heterogeneity of the studied dataset .the pruning method drastically simplifies the tree models while leading to poor , non - significant recall scores .this phenomenon could be explained by the fact that pruning tends to eliminate unstable branches , corresponding to variables with a great variance on threshold values and positions across cross - validation trees .therefore , a pre - selection of the input features can be a good alternative solution to pruning in order to constrain the complexity and to increase the robustness to small data variations of the decision trees by removing under - represented phenomena in the studied population .our new method , called -tree , significantly discriminates the two classes for both experiments and we show that it outperforms all the other methods in terms of accuracy for the two first case studies. moreover , it efficiently leads to sparser and more stable models than the tree ones .we can conclude that our combined method is a relevant sparse tree - based method for classification problems even when the classes are strongly unbalanced as it is the case for the classes of the second experiment .concerning the prediction of the severe criteria of imported malaria , the combined method classifies around of the patients ( until for the visceral failures ) for both studied experiments . 
hence, concerning the case study 2 , we can conclude that the subclassification of severe imported malaria in serious and critical classes is valid .moreover , the combined method produces explanatory and easily understandable models which can be represented under the form of rule sets .these rule sets confirm the predictive power of epidemiological and biological variables discarded from the current classification , such as platelets count , age , gender , white blood cells count and serology .they also provide meaningful information about the discriminant subregions of the selected features specifying for example the threshold or range of values of the selected biological measures .however , these models did not capture some local phenomena in a stable way ( cf variables captured with a low frequency over the leave - one - out models ) , probably due to their low representation in the dataset .this may explain a part of the misclassification of patients. a solution would be to expand the sample size , while ensuring the diversity of the population surveyed , in order to increase the statistical reliability of these phenomena .for the first experiment , a part of the classification error may also result from a bias in the definition of the classes based on the current clinico - biological picture .indeed , as explained in the introduction , the diagnosis of severe imported malaria is multi - criteria , complex and does not take into account the heterogeneity of the individual profiles .it is also important to mention that the use of the as a feature selection step prior to fitting the decision tree may be challenged to overcome the limitations of the method ( linear interactions , no missing data , etc . ) . in future work, it would be interesting to investigate other penalized approaches .24 [ 1]#1 [ 1]`#1 ` urlstyle [ 1]doi : # 1 argy , n and houz , s. paludisme grave : de la physiopathologie aux nouveauts thrapeutiques . _ journal des anti - infectieux _ , 160 ( 1):0 1317 , 2014 .austin , p c et al .using methods from the data - mining and machine - learning literature for disease classification and prediction : a case study examining classification of heart failure subtypes . _ journal of clinical epidemiology _, 660 ( 4):0 398407 , 2013 . berens - riha , n et al .evidence for significant influence of host immunity on changes in differential blood count during malaria . _ malaria journal _ , 130 ( 1):0 19 , 2014 .bouchaud , o et al .do african immigrants living in france have long - term malarial immunity ? _ the american journal of tropical medicine and hygiene _ , 720 ( 1):0 2125 , 2005 .breiman , l et al ._ classification and regression trees_. chapman and hall / crc , 1984 .checkley , a m et al .risk factors for mortality from imported falciparum malaria in the united kingdom over 20 years : an observational study . _ bmj _ , 344 , 2012 .rapport annuel dactivit ._ centre national de rfrence du paludisme _, 2015 .friedman , j h et al . _ the elements of statistical learning , 2nd edition_. springer series in statistics , 2009 .gupta , s et al .immunity to non - cerebral severe malaria is acquired after one or two infections . _ nature medicine _ , 50 ( 3):0 340343 , 1999 .guyon , i and elisseeff , a. an introduction to variable and feature selection ._ the journal of machine learning research _ , 3:0 11571182 , 2003 .hosmer jr , dw et al . _ applied logistic regression_. wiley series in probability and mathematical statistics , 2013 .kajungu , d k et al . 
using classification tree modelling to investigate drug prescription practices at health facilities in rural tanzania . _ malaria journal _ , 110 ( 1):0 311 , 2012 .kohavi , r et al .a study of cross - validation and bootstrap for accuracy estimation and model selection . _ 14th international joint conference on artificial intelligence _ , 2:0 11371145 , 1995 .lampah , da et al .severe malarial thrombocytopenia : a risk factor for mortality in papua , indonesia ._ journal of infectious diseases _ , 2110 ( 4):0 62334 , 2015. landwehr , n et al .logistic model trees ._ machine learning _, 590 ( 1):0 161205 , 2005 .mhlberger , n et al .age as a risk factor for severe manifestations and fatal outcome of falciparum malaria in european patients : observations from tropneteurop and simpid surveillance data ._ clinical infectious diseases _ , 360 ( 8):0 990995 , 2003 .muwonge , h et al .how reliable are hematological parameters in predicting uncomplicated plasmodium falciparum malaria in an endemic region ? _ isrn tropical medicine _ , 2013directives pour le traitment du paludisme ._ organisation mondiale de la sant _ , 2011 . park , s y and liu , y. robust penalized logistic regression with truncated loss functions ._ canadian journal of statistics _ , 390 ( 2):0 300323 , 2011 .perlich , c et al .tree induction vs. logistic regression : a learning - curve analysis . _ the journal of machine learning research _ , 4:0 211255 , 2003 .seringe , e et al .severe imported plasmodium falciparum malaria , france , 19962003 ._ emerging infectious diseases _ , 170 ( 5):0 807 , 2011 .prise en charge et prvention du paludisme dimportation plasmodium falciparum : recommendations pour la pratique clinique ._ socit de pathologie infectieuse de langue franaise _ , 2007 .therneau , t m et al .an introduction to recursive partitioning using the rpart routines . _ technical report cran r - project _ , 2015 .who . world malaria report ._ world health organization _ , 2014 .|x|x| & ,\,serology = positive ] & ,\,serology = negative ] & ,\,gb\geq 4.9,\,serology = positive ] + & + ,\,platelets\geq30,\,serology = negative ] & ,\,platelets\geq30,\,serology = positive,\,gb\geq 5.9 ] + & + |x|x| + * parasitology 4% * & + , \ , gb\in[5.5;6.8] ] + & + & + ,\ , gb<6.9,\ , sex = female ] + & ,\ , atcd = false,\ , serology = positive ] +
multivariate classification methods using explanatory and predictive models are necessary for characterizing subgroups of patients according to their risk profiles . popular methods include logistic regression and classification trees with performances that vary according to the nature and the characteristics of the dataset . in the context of imported malaria , we aimed at classifying severity criteria based on a heterogeneous patient population . we investigated these approaches by implementing two different strategies : _ l1 _ logistic regression ( _ l1lr _ ) that models a single global solution and classification trees that model multiple local solutions corresponding to discriminant subregions of the feature space . for each strategy , we built a standard model , and a sparser version of it . as an alternative to pruning , we explore a promising approach that first constrains the tree model with an _ l1lr_-based feature selection , an approach we called _ l1lr_-tree . the objective is to decrease its vulnerability to small data variations by removing variables corresponding to unstable local phenomena . our study is twofold : i ) from a methodological perspective comparing the performances and the stability of the three previous methods , i.e _ l1lr _ , classification trees and _ l1lr_-tree , for the classification of severe forms of imported malaria , and ii ) from an applied perspective improving the actual classification of severe forms of imported malaria by identifying more personalized profiles predictive of several clinical criteria based on variables dismissed for the clinical definition of the disease . the main methodological results show that the combined method _ l1lr_-tree builds sparse and stable models that significantly predicts the different severity criteria and outperforms all the other methods in terms of accuracy . the study shows that new biological and epidemiological factors may be integrated in the current clinico - biological picture to improve diagnosis and patient treatment .
is one of the most popular 2p ( p2p ) protocols used today for content replication .however , to this day , the privacy threats of the type explored in this paper have been largely overlooked .specifically , we show that contrary to common wisdom , it is not impractical to monitor large collections of contents and peers over a continuous period of time .the ability to do so has obvious implications for the privacy of bittorrent users , and so our goal in this work is to raise awareness of how easy it is to identify not only content provider that are peers who are the initial source of the content , but also big downloaders that are peers who subscribe to a large number of contents . to provide empirical results that underscore our assertion that one can routinely collect the ip - to - content mapping on most bittorrent users , we report on a study spanning 103 days that was conducted from a single machine . during the course of this study , we collected 148 million ip addresses downloading billions copies of contents .we argue that this is a serious privacy threat for bittorrent users .our key contributions are the following .\i ) we design an exploit that identify the ip address of the content providers for of the new contents injected in bittorrent .\ii ) we profile content providers and show that a few of them inject most of the contents in bittorrent . in particular , the most active injects more than 6 new contents every day and are located in hosting centers .\iii ) we design an exploit to continuously retrieve with time the ip - to - content mapping for any peer .\iv ) we show that a naive exploitation of the large amount of data generated by our exploit would lead to erroneous results .in particular , we design a methodology to filter out false positives when looking for big downloaders that can be due to nats , http and socks proxies , tor exit nodes , monitors , and vpns . whereas piracy is the visible part of the lack of privacy in bittorrent , privacy issues are not limited to piracy .indeed , bittorrent is provably a very efficient and widely used p2p content replication protocol .therefore , it is expected to see an increasing adoption of bittorrent for legal use . however, a lack of privacy might be a major impediment to the legal adoption of bittorrent .the goal of this paper is to raise attention on this overlooked issue , and to show how easy it would be for a knowledgeable adversary to compromise the privacy of most bittorrent users of the internet .in this section , we describe the bittorrent infrastructure and the sources of public information that we exploit to identify and profile bittorrent content providers and the big downloaders . at a high level ,the bittorrent infrastructure is composed of three components : the websites , the trackers , and the peers .the websites distribute the files containing the meta - data of the contents , i.e. 
, .torrent file .the .torrent file contains , for instance , the hostname of the server , called tracker , that should be contacted to obtain a subset of the peers downloading that content .the trackers are servers that maintain the content - to - peers - ip - address mapping for all the contents they are tracking .once a peer has downloaded the .torrent file from a website , it contacts the tracker to subscribe for that content and the tracker returns a subset of peers that have previously subscribed for that content .each peer typically requests peers from the tracker every minutes .essentially all the large bittorrent trackers run the opentracker software so designing an exploit for this software puts the whole bittorrent community at risk .finally , the peers distribute the content , exchange control messages , and maintain the dht that is a distributed implementation of the trackers .bittorrent content providers are the peers who insert first a content in bittorrent .they have a central role because without a content provide no distribution is possible .we consider that we identify a content provider when we retrieve its ip address .one approach for identifying a content provider would be to quickly join a newly created torrent and to mark the only one peer with an entire copy of the content as the content provider for this torrent .however , most bittorrent clients support the superseeding algorithm in which a content provider announces to have only a partial copy of the content .hence , this naive approach can not be used . in what follows, we show how we exploit two public sources of information to aide in identifying the content providers .the first source of public information that we exploit to identify the ip address of the content providers are the websites that list the content that have just been injected into bittorrent .popular websites such as thepiratebay and isohunt have a webpage dedicated to the newly injected contents .a peculiarity of the content provider in a p2p content distribution network is that he has to be the first one to subscribe to the tracker in order to distribute a first copy of the content .the webpage of the newly injected contents may betray that peculiarity because it signals an adversary that a new content has been injected ._ an adversary can exploit the newly injected contents to contact the tracker at the very beginning of the content distribution and if he is alone with a peer , conclude that this peer is the content provider ._ to exploit this information , every minute , we download the webpage of newly injected contents from thepiratebay website , determine the contents that have been added since the last minute , contact the tracker , and monitor the distribution of each content for hours .if there is a single peer when we join the torrent , we conclude that this peer is the content provider .we repeated this procedure for contents for a period of days from july to august , .sometimes , a content is distributed first among a private community of users .therefore , when the content appears in the public community there will be more than one peer subscribed to the tracker within its first minute of injection on the website . in that case , exploiting the newly injected contents is useless and an adversary needs another source of public information to identify the content provider . the second source that we exploit are the _ logins _ of the content providers on the website . 
indeed , content providers need to log into web sites using a personal login to announce new contents .those logins are public information. moreover , a content provider will often be the only one peer distributing all the contents uploaded by his login .the login of a content provider betrays which contents have been injected by that peer because it is possible to group all the contents uploaded by the same login on the website ._ an adversary can exploit the login of a content provider to see whether a given ip address is distributing most of the contents injected by that login . _ to exploit this information , every minute , we store the login of the content provider that has uploaded the .torrent file on the webpage of the newly injected contents .we then group the contents per login and keep those logins that have uploaded at least new contents . finally , we consider the ip address that is distributing the largest number of contents uploaded by a given login as the content provider of those contents .we collected the logins of content providers who have injected contents for a period of days from july to august , .we verified that we did not identify the same ip address for many logins which would indicate that we mistakenly identify an adversary as content provider .in particular , on such ip addresses , we identified only as the content provider for more than login , and only for more than logins .we performed additional checks that we extensively describe in le blond et al . .we validate the accuracy of those two exploits in section [ sec : providers_valid ] and present their efficiency to identify the content providers in section [ sec : providers_quantify ] . for now , we define the big downloaders as the ip addresses that subscribe to the tracker for the largest number of unique contents . it is believed to be impractical to identify them because it requires to spy on a considerable number of bittorrent users .we now describe the two sources of public information that we exploit to compromise the privacy of any peer and to identify the big downloaders .most trackers support _ scrape - all _ requests for which they return the identifiers of all the content they track and for each content , the number of peers that have downloaded a full copy of the content , the number of peers currently subscribed to the tracker with a full copy of the content , i.e. , seeds , and with a partial copy of the content , i.e. , leechers .a content identifier is a cryptographic hash derived from .torrent file of a content . whereas they are not strictly necessary to the operation of the bittorrent protocol , scrape - all requests are used to provide high level statistics on torrents ._ by exploiting the scrape - all requests , an adversary can learn the identifiers of all the contents for which he can then collect the peers using the announce requests described in section [ sec : info_big_announce ] ._ to exploit this information , every hours , we send a scrape - all request to all thepiratebay trackers and download about million identifiers , which represents mb of data per tracker .we then filter out the contents with less than one leecher and one seed which leaves us with between and contents depending on the day .we repeated this procedure for days from may to august , 2009 .thepiratebay tracker is by far the largest tracker with an order of magnitude more peers and contents than the second biggest tracker , and it runs the opentracker software therefore we limited ourselves to that tracker . 
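a minimal sketch of the two content - provider heuristics described above is given below . fetch_new_torrents() and tracker_peers() are hypothetical placeholders for the website polling and tracker announce steps ; only the selection logic is shown , and the minimum - uploads threshold is a parameter because the paper's exact value is elided in the extracted text .

```python
# Illustrative sketch, not the paper's measurement code.
from collections import Counter

def first_peer_heuristic(fetch_new_torrents, tracker_peers):
    """Exploit 1: if a torrent that just appeared on the 'new contents' page
    has exactly one peer registered at the tracker, treat that peer as the
    likely content provider."""
    providers = {}
    for torrent in fetch_new_torrents():          # torrents added in the last minute
        peers = tracker_peers(torrent.info_hash)
        if len(peers) == 1:
            providers[torrent.info_hash] = peers[0]
    return providers

def login_heuristic(uploads_by_login, peers_by_torrent, min_uploads=10):
    """Exploit 2: group torrents by uploader login and pick, for each login,
    the IP address observed distributing the largest number of its torrents."""
    providers = {}
    for login, torrents in uploads_by_login.items():
        if len(torrents) < min_uploads:           # assumed threshold
            continue
        seen = Counter(ip for t in torrents for ip in peers_by_torrent.get(t, ()))
        if seen:
            providers[login] = seen.most_common(1)[0][0]
    return providers
```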
the _ announce started / stopped _ requests are sent when a peer starts / stops distributing a content . upon receivingan announce started request , the tracker records the peer as distributing the content , returns a subset of peers , and the number of seeds and leechers distributing that content .when a peer stops distributing a content , he sends an announce stopped requests and the tracker decrements a counter telling how many contents that peer is distributing .we have observed that trackers generally blacklist a peer when he distributes around contents .so an adversary should send an announce stopped request after each announce started requests not to get blacklisted ._ by exploiting announce started / stopped requests for all the identifiers he has collected , an adversary can spy on a considerable number of users . _ to exploit this information , every hours , we repeatedly send announce started and stopped requests for all the contents of thepiratebay trackers so that we collect the ip address for at least of the peers distributing each content .we do this by sending announce started and stopped requests until we have collected a number of unique ip addresses equal to of the number of seeds and leechers returned by the tracker .this procedure takes around minutes for between and contents . by repeating this procedure for days from may to august , ,we collected million ip addresses downloading billion copies of contents .we will see in section [ sec : down_middle ] that once an adversary has collected the ip - to - content mappings for a considerable number of bittorrent users , it is still complex to identify the big downloaders because it requires to filter out the false positives due to middleboxes such as nats , ipv6 gateways , proxies , etc .we will also discuss how an adversary could possibly reduce the number of false negatives by identifying the big downloaders with dynamic ip addresses .finally , we will see that an adversary can also exploit the dht to collect the ip - to - content mappings in section [ sec : conclusion ] . once we have identified the ip address for the content providers and big downloaders , we use the .torrent files to profile them .a .torrent file contains the hostname of the tracker , the content name , its size , the hash of the pieces , etc . without .torrentfile , a content identifier is an opaque hash therefore , an adversary must collect as many .torrent files as possible to profile bittorrent users .for instance , an adversary can use the .torrent files , to determine if the content is likely to be copyrighted , the volume of unique contents distributed by a content provider , or the type of content he is distributing . clearly , .torrent files must be public for the peers to distribute contents however , it is surprisingly easy to collect millions of .torrent files within hours and from a single machine . _ by exploiting the .torrent files , an adversary can focus his spying on specific keywords and profile bittorrent users . 
_ to exploit this information , we collected all the .torrent files available on mininova and thepiratebay websites on may , .we discovered unique .torrent files on mininova and on thepiratebay .the overlap between both website was only files .then , from may , to august , , we collected the new .torrent files uploaded on the mininova , thepiratebay , and isohunt websites .those three websites are the most popular and as there is generally a lot of redundancy among the .torrent files hosted by different websites , we limit ourselves to those three. we will discuss the reasons why our measurement was previously thought as impractical by the related work in section [ sec : work ] .in this section , we run the exploits from section [ sec : info_provider ] in the wild , quantify the content providers that we identify , and present the results of their profiling ..cross - validation of the two exploits .this table shows the accuracy of the two exploits to identify the same content provider for the same content ._ alone login _ is the number of contents for which both sources identified a content provider . _accuracy _ is the percentage of such contents for which both sources identified the same content provider . [cols="^,^,^,^",options="header " , ] focusing on the top content providers in table [ tab : rank - top20-ips ] , we observe that half of them are using a machine whose ip address is located in a french and a german hosting center , i.e. , ovh and keyweb .those hosting centers provide cheap offers of dedicated servers with unlimited traffic and a / s connection .however , we observed that the users injecting contents from those servers are unlikely to be be french or german .indeed , on contents injected by the content providers from ovh , only contained the keyword _ fr _ ( french ) in their name whereas contained the keyword _spanish_. similarly , on contents injected from keyweb , we found contents with the keyword _ spanish _ in their name and none contained the keywords _ fr _ , _ ge _ ( german ) , or _ de _ ( deutsche ) . in conclusion , one can not easily guess the nationality of a content provider based on the geolocalization of the ip address of the machine he is using to inject contents .in this section , we focus on the identification and the profiling of the big downloaders , i.e. , the ip addresses that subscribed in the largest number of contents .once we have collected the information described in section [ sec : info_big ] , it is challenging to identify and profile the big downloaders because of the volume of information .indeed , we collected 148 m ip addresses and more than 510 m endpoints ( ip : port ) during a period of 103 days . ordering the ip addresses according to the total number of unique contents for which they subscribed , we observe a long tail distribution .in particular , the top ip addresses subscribed for at least contents and the top ip addresses subscribed for at least contents . in the remaining of this section , we focus on the top ip addresses . in the following ,we show that for many ip addresses , there is a linear relation between their number of contents and their number of ports suggesting that those ips are middleboxes with multiple peers behind them .however , we will also see that some ip addresses significantly deviate from this middlebox behavior and we will identify some of those players with deviant behavior . 
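the keyword check used above to probe the nationality of a content provider can be sketched as follows ; the keyword lists and the source of the content names are illustrative assumptions rather than the paper's exact procedure .

```python
# Minimal sketch: count language/region keywords in the names of the contents
# injected by a given provider.  Keyword lists are illustrative only.
import re

LANGUAGE_KEYWORDS = {
    "french": {"fr", "french"},
    "spanish": {"spanish"},
    "german": {"ge", "de", "german"},
}

def keyword_profile(content_names):
    """Count how many content names contain each keyword group
    (matched as whole, case-insensitive tokens)."""
    counts = {lang: 0 for lang in LANGUAGE_KEYWORDS}
    for name in content_names:
        tokens = set(re.split(r"[\W_]+", name.lower()))
        for lang, keywords in LANGUAGE_KEYWORDS.items():
            if tokens & keywords:
                counts[lang] += 1
    return counts
```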
finally , we will profile those players .it is sometimes complex to identify a user based on its ip address or its endpoint , because the meaning of this information is different depending on his internet connectivity. a user can connect through a large variety of middleboxes such as nats , ipv6 gateways , proxies , etc .in all those cases , many users can use the same ip address and the same user can use a different ip address or endpoints .so an adversary using the ip addresses or endpoints to identify big downloaders may erroneously identify a middlebox as a big downloader . in the following ,we aim to filter out those false positives to identify the big downloaders .we do not consider false negatives due , for instance , to a big downloader with a dynamic ip address .it may be possible to identify big downloaders with a dynamic ip address but it would require a complex methodology using the port number as the identifier of a user within an as ; most bittorrent clients pick a random port number when they are first executed and then use that port number statically .the validation of such a methodology is beyond the scope of this paper and we leave this improvement for future work .however , we will see that we already find a large variety of big downloaders using public ip addresses as identifiers .ip addresses .each dot represents an ip address .the solid line is the average number of contents on the m ip addresses computed per interval of ports . ]we confirm the complexity of using an ip address or endpoint to identify a user in fig .[ fig : out - corr - port - torrents ] .indeed , we see that for most of the ip addresses the number of contents increases linearly with the number of ports .moreover , the slope of this increase corresponds to the slope of the average number of contents per ip over all m ip addresses ( solid line ) .each new port corresponds to between and additional contents per ip address .therefore , it is likely that those ip addresses correspond to middleboxes with a large number of users behind them .there are also many ip addresses that significantly deviate from this middlebox behavior .[ [ conclusions-1 ] ] conclusions + + + + + + + + + + + a large number of ip addresses that a naive adversary would classify as big downloaders actually corresponds to middleboxes such as nats , ipv6 gateways , or proxies .however , we also observe many ip addresses whose behavior significantly deviates from a typical middlebox behavior .to understand the role of the ip addresses that deviate from middlebox behavior , we identify categories of big players .[ [ http - and - socks - public - proxies ] ] http and socks public proxies + + + + + + + + + + + + + + + + + + + + + + + + + + + + + the two first categories are http and socks public proxies that can be used by bittorrent users to hide their ip address from anti - piracy groups .we retrieved a list of ip addresses of such proxies from the sites __ and _ proxy.org_. we found http proxies and socks proxies within the top ip addresses .[ [ tor - exit - nodes ] ] tor exit nodes + + + + + + + + + + + + + + the third category is composed of tor exit nodes that are the outgoing public interfaces of the tor anonymity network . to find , the ip address of the tor exit nodes , we performed a reverse dns lookup for the top ip addresses and extracted all names containing the _ tor _ keyword and manually filtered the results to make sure they are indeed tor exit nodes .we also retrieved a list of nodes on the web site _proxy.org_. 
we found tor exit nodes within the top ip addresses .[ [ monitors ] ] monitors + + + + + + + + the fourth category is composed of monitors that are peers spying on a large number of contents without participating in the content distribution .we identified two ases , corresponding to hosting centers located in the us and uk , containing a large number of ip addresses within the top with the same behavior . indeed , these ip addresses always used a single port and we were never able to download content from them .therefore , they look like a dedicated monitoring infrastructure instead of regular peers .we found such ip addresses within only two ases in the top ip addresses [ [ vpns ] ] vpns + + + + the fifth category is composed of vpns that are socks proxies requiring authentication and whose communication with bittorrent users is encrypted . to find vpns , we performed a reverse dns lookup for the top ip addresses and extracted all names containing the _ itshidden _ , _ cyberghostvpn _ , _ peer2me _ , _ ipredate _ , _ mullvad _ , and _ perfect - privacy _ keywords and manually filtered the results to make sure they are indeed the corresponding vpns .those keywords correspond to well - known vpn services .we found vpns within the top ip addresses .[ [ big - downloaders ] ] big downloaders + + + + + + + + + + + + + + + the last category is composed of big downloaders that we redefine as the ip addresses that _ distribute _ the largest number of contents and that are used by a few users .we selected the ip addresses we could download content from and that used fewer than ports .hence , those ip addresses can not be a monitors as we downloaded content from them and they can not be large middleboxes due to the small number of ports .we found such big downloaders .[ [ conclusions-2 ] ] conclusions + + + + + + + + + + + we have identified categories of big players including the big downloaders .we do not claim that we have identified all categories of players nor found all the ip addresses that belong to one of those categories .instead , we have identified few ip addresses in each category within the top peers that we use in the following to profile the big players .we see in fig .[ fig : out - actors - footprint ] that for http and socks proxies the number of contents per ip address is much larger than for middleboxes ( solid line ) . considering the huge number of contents these ip addresses subscribed to ,it is likely that the proxies are used by anti - piracy groups .indeed , we see in fig . [fig : out - actors - benef - cumul - crawl ] that our measurement system suddenly stops seeing the ip addresses of monitors after day . in fact , by that date , thepiratebay tracker changed its blacklisting strategy to reject ip addresses that are subscribed to a large number of contents . 
whereas it was not a problem for our measurement system because it uses announce stopped requests as described in section [ sec : info_big_announce ] ,monitors got blacklisted .however , we observe on day that the number of http and socks proxies suddenly increased , probably corresponding to anti - piracy groups migrating their monitoring infrastructure from dedicated hosting centers to proxies .considering , the synchronization we observe in fig .[ fig : out - actors - benef - cumul - crawl ] in the activity of the http and socks proxies , it is likely that those proxies were used in a coordinated effort .the correlation for monitors and big downloaders in fig .[ fig : out - actors - footprint ] does not show any striking result , therefore we do not discuss it further .however , we observe in fig . [fig : out - actors - footprint ] that for tor exit nodes and vpns the number of contents per ip address is close to the ip addresses of the middleboxes ( solid line ) . for large number of ports ,tor exit nodes deviate from the standard middlebox behavior .in fact , we found that just a few ip addresses are responsible of this deviation , all other tor exit nodes following the trend of the solid line .we believe that those few ip addresses responsible for the deviation are used by either big downloaders or anti - piracy groups .ip addresses of a given snapshot that belongs to the top ip addresses on all snapshots .the solid line represents , for each category , the fraction of the top ip addresses on all previous snapshots that belongs to the top ip addresses on all snapshots . ][ [ conclusions-3 ] ] conclusions + + + + + + + + + + + we have shown that many peers do not correspond to a bittorrent user but to monitors or to middleboxes with multiple users behind them .these peers introduce a lot of noise for an adversary who would like to spy on bittorrent users and in particular on the big downloaders .however , we have shown that it is possible to filter out that noise to identify the ip address and profile the big downloaders .as far as we know , no related work has explored the identification of the content providers in bittorrent so both the data and the results concerning these players are entirely new .some related work has measured bittorrent at a moderate scale but none at a large - enough scale to identify the big downloaders .this is because most of the measurements inherited two problems from using existing bittorrent clients .the first problem is that existing clients introduce a huge computational overhead on the measurement .for instance , each announce started request takes one fork and one exec .therefore , the measurement is hard to efficiently parallelize .the second problem is that regular bittorrent clients do not exploit all the public sources of information that we have presented in section [ sec : info_big ] and [ sec : info_torrent ] .a content identifier is essentially the hash of a .torrent file .so not exploiting scrape - all requests limits the number of spied contents to the number of .torrent files an adversary has collected . in addition , clients may not be stopped properly and so not send the announce stopped request , making the measurement prone to blacklisting . 
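the role of the content identifier mentioned above can be made concrete. in the bittorrent protocol the identifier of a torrent (the info_hash) is conventionally the sha-1 digest of the bencoded `info` dictionary of the .torrent file. the sketch below computes it for a toy metainfo dictionary; the tiny bencode encoder is self-contained, and the sample field names and values are illustrative placeholders, not data from the measurement described in this paper.

# minimal sketch: derive a bittorrent content identifier (info_hash) as the
# sha-1 of the bencoded "info" dictionary.  the sample dictionary below is a
# hypothetical single-file torrent, not one of the collected .torrent files.
import hashlib

def bencode(obj):
    """encode ints, byte strings, lists and dicts in bencode form."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, str):
        return bencode(obj.encode("utf-8"))
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        keys = sorted(obj)   # bencode requires keys sorted as raw byte strings
        return b"d" + b"".join(bencode(k) + bencode(obj[k]) for k in keys) + b"e"
    raise TypeError("unsupported type: %r" % type(obj))

# hypothetical "info" dictionary (field names follow the metainfo convention)
info = {b"name": b"example.iso", b"length": 123456,
        b"piece length": 262144, b"pieces": b"\x00" * 20}

info_hash = hashlib.sha1(bencode(info)).hexdigest()
print("info_hash:", info_hash)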
in the following , we describe how the scale of previous measurements differs from ours according to the sources of public information that they exploit .we split the related work not exploiting scrape - all requests into two families : a first family spying on few contents and a second one using a large infrastructure to spy on more contents .siganos et al .measured the top contents from the web site during days collecting million ip addresses .using only the top contents does not allow an adversary to identify the big downloaders .the same remark holds for choffnes et al . who monitored peers and did not record information identifying contents therefore they can not either identify the big downloaders .the second family spied on more contents but using a large infrastructure .piatek et al .used a cluster of workstations to collect million ip addresses distributing contents in total .it is unclear how many simultaneous contents they spied as they reported being blacklisted when being too aggressive , suggesting that they did not properly send announce stopped requests .finally , zhang et al . is the work that is the closest to ours in scale however , they used an infrastructure of machines to collect million ip addresses within a hours window . in comparison ,our customized measurement system used machine to collect around million ip addresses within the same time window , making it about times more efficient .in addition , that we performed our measurement from a single machine demonstrates that virtually _anyone _ can spy on bittorrent users , which is a serious privacy issue .dan et al . measured million torrents with million peers , but used a different terminology .indeed , they performed _ only _ scrape - all requests so they knew the number of peers per torrent but not the ip addresses of those peers .this data is much easier to get and completely different in focus .we have shown that enough information is available publicly in bittorrent for an adversary to spy on most bittorrent users of the internet from a single machine . at any moment in time for days , we were spying on the distribution of between and contents . in total , we collected m of ip addresses distributing m contents , which represents billion copies of content . leveraging on this measurement, we were able to identify the ip address of the content providers for of the new contents injected into bittorrent and to profile them .in particular , we have shown that a few content providers inject most of the contents into bittorrent making us wonder why anti - piracy groups targeted random users instead .we also showed that an adversary can compromise the privacy of any peer in bittorrent and identify the ip address of the big downloaders .we have seen that it was complex to filter out false positives of big downloaders such as monitors and middleboxes and proposed a methodology to do so .we argue that this privacy threat is a fundamental problem of open p2p infrastructures .even though we did not present it in this paper , we have also exploited the dht to collect ip - to - content mappings using a similar methodology as for the trackers . that we were also able to collect the ip - to - content mappings on a completely different infrastructure reinforces our claim that the problem of privacy is inherent to open p2p infrastructures . 
a solution to protect the privacy of bittorrent users might be to use proxies or anonymity networks such as tor , however a recent work shows that it is even possible to collect the ip - to - content mappings of bittorrent users on tor .therefore , the degree to which it is possible to protect the ip - to - content mappings of p2p filesharing users remains an open question .d. choffnes , j. duch , d. malmgren , r. guierm , f. e. bustamante , and l. amaral .swarmscreen : privacy through plausible deniability in p2p systems . technical report , northwestern university , march 2009 .m. piatek , t. kohno , and a. krishnamurthy .challenges and directions for monitoring p2p file sharing networks or why my printer received a dmca takedown notice . in _ hotsec08 _ ,san jose , ca , usa , july 2008 .
this paper presents a set of exploits that an adversary can use to continuously spy on most bittorrent users of the internet from a single machine and over a long period of time. using these exploits for a period of days, we collected million ips downloading billion copies of content. we identify the ip address of the content providers for of the bittorrent contents we spied on. we show that a few content providers inject most contents into bittorrent and that those content providers are located in foreign data centers. we also show that an adversary can compromise the privacy of any peer in bittorrent and identify the big downloaders, whom we define as the peers who subscribe to a large number of contents. this infringement on users' privacy poses a significant impediment to the legal adoption of bittorrent.
consider a control system where is the state and is the control .the domain contains the trivial equilibrium point .we treat all vectors as columns and denote the transpose with a prime .the vector fields are assumed to be mappings of class from to .it is a well - known fact due to r.w .brockett that system is not stabilizable by a smooth feedback law such that , provided that and , , .... , are linearly independent vectors .note that brockett s condition remains necessary for the stabilizability in a class of discontinuous feedback laws provided that the solutions of the closed - loop system are defined in the sense of a.f .filippov . to overcome this obstruction ,two main strategies can be used for the stabilization of general controllable systems .the first strategy is based on the use of a time - varying continuous feedback law to stabilize the origin of a small - time locally controllable system . in the other strategy, the equilibrium of an asymptotically controllable system can be stabilized by means of a discontinuous feedback law , provided that the solutions ( `` -trajectories '' ) are defined in the sense of sampling .an approach for the practical stabilization of nonholomomic systems based on transverse functions is proposed by p. morin and c. samson .a survey of feedback design techniques is presented in the book by j .-coron .despite the rich literature in this area and to the best of our knowledge , there is no universal procedure available for the stabilizing control design for an arbitrary nonlinear system of form .the paper is devoted to the control design for a kinematic cart model with two inputs .a coordinate transformation from the three - dimensional state space to a two - dimensional manifold ( parameterized by the arc length and the orientation error ) plays a crucial role in the analysis .based on this representation , a discontinuous feedback law is proposed such that any solution of the closed - loop system exponentially converges to an equilibrium point .the orientation angle is defined modulo in such equilibria .applications of sinusoidal controls to the steering problem for systems of form are considered in the paper . a combination of constant controls and sinusoids at integrally related frequencies is used to steer the first - order canonical system to an arbitrary configuration .some modifications of this algorithm are presented for chained systems .an overview of algorithms for the motion planning of nonholonomic systems is presented in the book . in the paper ,the controllability and trajectory tracking problems are considered for a kinematic car model with nonholonomic constraints . 
a result on the solvability of the motion planning problemis established for such a model by using trigonometric controls .the error dynamics in a neighborhood of the reference trajectory is studied to solve the tracking problem .it is shown that the error dynamics is stabilizable by using a quadratic lyapunov function .the controller design scheme proposed is illustrated by examples of a state - to - state control and tracking a circle with time scheduling at selected points .the stabilization problem for a nonholonomic system in power form with bounded inputs is considered in the paper .the receding - horizon principle is used to solve an open - loop optimization problem and to derive a sampling control .it is proved that the family of controls obtained can be used to stabilize the destination state in finite time with any chosen precision .the numerical implementation of this algorithm is shown for a five dimensional system .the paper is devoted to the stabilization problem of nonholonomic systems about a feasible trajectory , instead of a point . for such kind of problem ,a time - varying feedback law is obtained by using the linearization around a feasible trajectory . the heisenberg system and a mobile robot modelare considered as examples for stabilizing a straight line trajectory in the three - dimensional space .this approach is shown to be applicable for the trajectory stabilization of a front wheel drive car .assume that and that , , ... , together with a fixed set of the first order lie brackets span the whole tangent space for system , i.e. (x)\ , | \,i=1,2 , ... , m,\ ; ( j , l)\in s \ } = \mathbb r^n , \label{rank}\ ] ] for each , where , (x ) = \frac{\partial f_l(x)}{\partial x}f_j(x ) - \frac{\partial f_j(x)}{\partial x}f_l(x)\ ] ] and is the jacobi matrix . without loss of generality , we assume that each pair is ordered with .following the idea of , we introduce an extended system for : (x)\equiv \bar f(x,\bar u ) \label{sigma_ext}\ ] ] with the control . because of the rank condition , every smooth curve is a trajectory of system .as subspaces spanned by the lie brackets of vector fields play a crucial role in the dynamics study of system , we note that harmonic inputs naturally appear as optimal controls implementing the motion along a lie bracket .a result on the convergence of solutions of system to a solution of is established by h.j .sussmann and w. liu .it is shown in the paper that if a sequence of input functions of class satisfies certain boundedness condition and converges to an extented input in the iterated integrals sense , then solutions of system with initial data converge to a solution of system , uniformly with respect to ] , such that condition holds and then there exist positive numbers and such that , for any ] , there is a : for each -solution of system with the control of form . * proof . *the assertion of theorem [ thm_sampling2 ] is a straightforward consequence of theorem [ thm_sampling ] .to ensure condition , we use inequalities from lemma [ lemma_solvability ] . the next section provides some technical results for the control design and stability analysis .then the proof of theorem [ thm_sampling ] will be given in section [ section_proof ] .any solution of system with initial data and controls , ] .if the vector fields , , ... 
, satisfy assumptions in with some constants , , then the remainder of the volterra expansion satisfies the following estimate : here the proof of lemma [ lemma_residual_volterra ] is given in section [ section_proof ] . in order to use the control strategy, we consider a family of open - loop controls depending on parameters , , , and . here is the kronecker delta . by computing the integrals in for functions given by and exploiting assumption , we get (x^0)\sum_{(q , l)\ins}\frac{a_{ql}}{k_{ql}}\left\{\delta_{jl}(a_{ql}\delta_{iq}-2v_i)-\delta_{il}(a_{ql}\delta_{jq}-2v_j)\right\}+r(\varepsilon ) . \label{volterra_epsilon}\ ] ] to estimate the decay rate of the function , we use the following lemma .[ lyapunov_decay ] let be a function of class such that inequalities hold with some constants , , , , in a convex domain . if \to d ] , and let then , \label{apriori_est}\ ] ] where * proof .* by differentiating the function along the trajectory of system , we get so , we solve the comparison equation for differential inequality to obtain the following estimate ( cf .iii ) ) : .\ ] ] this proves estimate . now we use lemma [ lemma_apriori ] to prove lemma [ lemma_residual_volterra ] ._ proof of lemma [ lemma_residual_volterra ] ._ for a solution of differential equation with the initial condition and control ] for each ] of inequality is not empty .let be such a solution , then from inequality and lemma [ lemma_apriori ] it follows that , \label{ineq_d2}\ ] ] for each solution of system with and , ] .let be a function that satisfies conditions .we introduce level sets and define it is easy to see that and are positive numbers as is positive definite . by the construction , for each .the next step is to show that , if is small enough , then there exists a positive such that for any solution of system with the initial data and the control given by .as is positive definite then , and taylor s theorem implies the following inequality : where is finite by weierstrass s theorem due to the compactness of . by applying similar argumentation to the function , we conclude that with some positive constant .because of conditions , it follows that where .inequalities , , and imply that all conditions of lemma [ lyapunov_decay ] are satisfied in if ( ) is a solution of system with the control , . in order to satisfy condition , it suffices to assume that because of lemma [ lyapunov_decay ] . herethe remainder of the volterra series can be estimated by lemma [ lemma_residual_volterra ] as follows : here is a positive constant , }\sum_{i=1}^m |u_i^{\varepsilon}(t , x^0)| , \label{w_def}\ ] ] and is given by .condition together with representation implies that with some positive constant for all .estimates and imply that condition holds if here is a positive number . as is positive ,we conclude that there exist and such that inequality holds for all ] with . let us define , where is a positive solution of inequality .then inequality holds for any ] and is given by formula , then the corresponding -solution of system is well - defined : and for all because of inequality . by iterating inequality for , we conclude that where and is an arbitrary positive number if . for an arbitrary , we denote the integer part of as $ ] and denote .then we apply inequality together with lemma [ lemma_apriori ] to estimate : where , , and are defined in , , and , respectively .estimates and imply the following asymptotic representation : then if follows from inequality that with . 
by using formulas and ,we conclude that the control system known as the brockett integrator : where is the state and is the control .the stabilization problem for system has been has been the subject of many publications over the past three decades ( see , e.g. , the book and references therein ) . in particular, it is shown that system can be exponentially stabilized by a time - invariant feedback law for the initial values in some open and dense set , . in this section ,we construct a time - varying feedback law explicitly in order to stabilize system exponentially for all initial data .system satisfies the rank condition of form with , (x)\}={\mathbb r}^3\quad \text{for each}\;\ ; x\in\mathbb r^3,\ ] ] where the vector fields are , , (x ) = \frac{\partial{f_2(x)}}{\partial{x}}f_1(x ) - \frac{\partial{f_1(x)}}{\partial{x}}f_2(x ) = ( 0,0,-2)'.\ ] ] the family of controls takes the form for an arbitrary initial value at , the solution of system with controls is represented by as follows : note that representation is exact ( i.e. the higher order terms in the volterra expansion vanish ) as system is nilpotent .this implies the following lemma . for arbitrary , , and ,define the controls and by formulas with then the corresponding solution of system with initial data satisfies the condition . to solve the stabilization problem for system ,consider a lyapunov function candidate following the approach of theorem [ thm_sampling ] , we define a time - varying feedback control to approximate the gradient flow of by trajectories of system : where without loss of generality , we may assume any integer value for if . by theorem [ thm_sampling ], the feedback control ensures exponential stability of the equilibrium in the sense of -solutions , provided that is small enough .in this section , we perform numerical integration of the closed - loop system with the feedback law of form . trajectories of this system are shown in fig .[ fig:1 ] and [ fig:2 ] for and the following initial conditions : , . ] . ]these simulation results show that the feedback law steers the brockett integrator to the origin not only in the sense of -solutions ( as stated in theorem [ thm_sampling ] ) , but also in the sense of classical solutions .in this paper , a family of time - dependent trigonometric polynomials with coefficients depending on the state has been constructed to stabilize the equilibrium of a nonholonomic system .these coefficients are obtained by solving an auxiliary system of quadratic algebraic equations involving the gradient of a lyapunov function .an important feature of this work relies on the proof of the solvability of such a system for an arbitrary dimension of the state space provided that the lie algebra rank condition is satisfied with first order lie brackets .it should be emphasized that this result is heavily based on the degree principle as the implicit function theorem is not applicable for a non - differentiable function in lemma [ lemma_solvability ]. 
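the closed-loop simulation reported above can be explored qualitatively with a short script. the sketch below is only an illustration: it assumes the commonly used form of the brockett integrator that is consistent with the bracket value (0,0,-2)' quoted earlier, namely \dot x_1 = u_1, \dot x_2 = u_2, \dot x_3 = x_2 u_1 - x_1 u_2, and it uses a generic trigonometric feedback with a state-dependent amplitude as a stand-in for the feedback law of the paper, whose coefficients are not reproduced here; the gains, the frequency, and the initial condition are arbitrary choices.

# illustrative simulation of the brockett (nonholonomic) integrator under a
# time-varying trigonometric feedback.  the feedback is a generic placeholder
# in the spirit of sinusoidal stabilizers, NOT the paper's exact control law.
import numpy as np

def feedback(t, x, omega=5.0):
    a = np.sqrt(abs(x[2]))            # amplitude tied to the "bracket" state x3
    s = np.sign(x[2])
    u1 = -x[0] + a * np.cos(omega * t)
    u2 = -x[1] + s * a * np.sin(omega * t)
    return np.array([u1, u2])

def step(t, x, dt):
    u = feedback(t, x)
    dx = np.array([u[0], u[1], x[1] * u[0] - x[0] * u[1]])
    return x + dt * dx                # explicit euler step

x = np.array([1.0, -0.5, 2.0])        # illustrative initial condition
dt, T = 1e-3, 80.0
for k in range(int(T / dt)):
    x = step(k * dt, x, dt)
print("state at t = %.0f:" % T, x)

with this placeholder feedback the averaged dynamics of x_3 decay roughly like -x_3/omega, so the state drifts toward the origin, although the decay rate and transient behaviour depend on the chosen frequency and gains and need not reproduce the exponential rate established in the paper.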
another important remark is that our design scheme produces small controls for small values of , and the frequencies of the sine and cosine functions are constant for each fixed . this feature differs from the approach to the motion planning problem that uses a sequence of high-amplitude, highly oscillating open-loop controls (see ). the proof of theorem [thm_sampling] can be viewed as an extension of lyapunov's direct method, in which the decay condition for a lyapunov function is guaranteed by exploiting the volterra expansion instead of the time derivative along the trajectories. although the exponential stability result is established for -solutions under a sampling strategy, the simulation results demonstrate the convergence of classical solutions of the closed-loop system to its equilibrium. thus, the question of the limit behavior of classical (or carathéodory) solutions of system with the feedback control remains open for further theoretical studies. the author is grateful to prof. bronisław jakubczyk for valuable discussions.
this paper is devoted to the stabilization problem for nonlinear driftless control systems by means of a time-varying feedback control. it is assumed that the vector fields of the system, together with their first-order lie brackets, span the whole tangent space at the equilibrium. a family of trigonometric open-loop controls is constructed to approximate the gradient flow associated with a lyapunov function. these controls are then used to derive a time-varying feedback law under the sampling strategy. by using lyapunov's direct method, we prove that the proposed controller ensures exponential stability of the equilibrium. as an example, the control design procedure is applied to stabilize the brockett integrator.
all physical phenomena take place in space and time . the theory of space and time ( in the absence of gravity ) is called the special theory of relativity .we do not get bogged down with the philosophical problems related to the concepts of space and time .we simply acknowledge the fact that in physics the notions of space and time are regarded as basic and can not be reduced to something more elementary or fundamental .we therefore stick to pragmatic operational definitions : _ time is what clocks measure .space is what measuring rods measure . _ in order to study and make conclusions about the properties of space and time we need an observer .a natural choice is an observer who moves freely ( the one who is free from any external influences ) .an observer is not a single person sitting at the origin of a rectangular coordinate grid .rather , it is a bunch of friends ( call it team ) equipped with identical clocks distributed throughout the grid who record the events happening at their respective locations .how do we know that this bunch of friends is free from any external influences ?we look around and make sure that nothing is pulling or pushing on any member of the bunch ; no strings , no springs , no ropes are attached to them .an even better way is to use a collection of `` floating - ball detectors '' ( fig .[ fig:0 ] ) distributed throughout the grid .when detector balls are released , they should remain at rest inside their respective capsules . if any ball touches the touch - sensitive surface of the capsule , the frame is not inertial . in the reference frame associated with a freely moving observer ( our rectangular coordinate grid ) , galileo s law of inertia is satisfied : a point mass , itself free from any external influences , moves with constant velocity . to be able to say what `` constant velocity '' really means , and thus to verify the law of inertia , we need to be able to measure distances and time intervals between events happening at _ different _ grid locations .( color online . ) a floating - ball inertial detector .after . ]it is pretty clear how to measure distances : the team simply uses its rectangular grid of rods .it is also clear how to measure time intervals at a particular location : the team member situated at that location simply looks at his respective clock .what s not so clear , however , is how the team measures time intervals between events that are _ spacially _ separated . a confusion about measuring this kind of time intervals was going on for two hundred years or so , until one day einstein said : `` we need the notion of synchronized clocks !clock synchronization must be operationally defined . ''the idea that clock synchronization and , consequently , the notion of simultaneity of spacially separated events , has to be _ defined _ ( and not assumed _ apriori _ ) is the single most important idea of einstein s , the heart of special relativity .einstein proposed to use light pulses .the procedure then went like this : in frame , consider two identical clocks equipped with light detectors , sitting some distance apart , at and .consider another clock equipped with a light emitter at location which is half way between and ( we can verify that is indeed half way between and with the help of the grid of rods that had already been put in place when we constructed our frame ) .then , at some instant , emit two pulses from in opposite directions , and let those pulses arrive at and . 
if the clocks at and show same time when the pulses arrive then the clocks there are synchronized , _ by definition_. the light pulses used in the synchronization procedure can be replaced with two identical balls initially sitting at and connected by a compressed spring .the spring is released ( say , the thread holding the spring is cut in the middle ) , the balls fly off in opposite directions towards and , respectively. how do we know that the balls are identical ? because team made them in accordance with a specific manufacturing procedure .how do we know that all clocks at are identical ? because team made all of them in accordance with a specific manufacturing procedure .how do we know that a tic - toc of any clock sitting in frame corresponds to 1 second ?because team called a tic - toc of a clock made in accordance with the manufacturing procedure `` a second '' . similarly , clocks in are regarded as identical and tick - tocking at 1 second intervals because in that frame all of the clocks were made in accordance with the same manufacturing procedure .now , how do we know that the manufacturing procedures in and are the same ?( say , how do we know that a swiss shop in frame makes watches the same way as its counterpart in frame ? ) hmm .that s an interesting question to ponder about . when studying spacetime from the point of view of inertial frames of reference discussed above , people discovered the following .* properties of space and time : * 1 .at least one inertial reference frame exists .( geocentric is ok for crude experiments ; geliocentric is better ; the frame in which microwave background radiation is uniform is probably closest to ideal ) .space is uniform ( translations ; 3 parameters ) .space is isotropic ( rotations ; 3 parameters ) .4 . time is uniform ( translation ; 1 parameter ) . 5 .space is continuous ( down to [ m ] ) .time is continuous ( down to [ s ] ) .space is euclidean ( apart from local distortions , which we ignore ; cosmological observations put the limit at [ m ] , the size of the visible universe ; this property is what makes rectangular grids of rods possible ) .relativity principle ( boosts ; 3 parameters ) .einstein constructed his theory of relativity on the basis of ( 1 ) the principle of relativity ( laws of nature are the same in all inertial reference frames ) , and ( 2 ) the postulate of the constancy of the speed of light ( the speed of light measured by _ any _ inertial observer is independent of the state of motion of the emitting body ) .[ note : this is _ not _ the same as saying that the speed of light emitted _ and _ measured in is the same as the speed of light emitted _ and _ measured in .this latter type of constancy of the speed of light is already implied by the principle of relativity . ]here we want to stick to mechanics and push the derivation of the coordinate transformation as far as possible without the use of the highly counterintuitive einstein s second postulate .the method that achieves this will be presented below and was originally due to vladimir ignatowsky .[ * disclaimer : * i never read ignatowsky s original papers , but the idea is well - known within the community , often mentioned and discussed .anyone with time to burn can reproduce the steps without much difficulty . the derivation below consists of 14 steps .if you can reduce that number , let me know . 
] implies the linearity of the coordinate transformation between and ( see fig .[ fig:1 ] ) , x&=&_11(v)x+_12(v)y+_13(v)z+_14(v)t , + y&=&_21(v)x+_22(v)y+_23(v)z+_24(v)t , + z&=&_31(v)x+_32(v)y+_33(v)z+_34(v)t , + t&=&_41(v)x+_42(v)y+_43(v)z+_44(v)t .here we assumed that the origins of the two coordinate systems coincide , that is event in has coordinates in .( color online . ) two inertial reference frames ( orthogonal grids of rods equipped with synchronized clocks ) in relative motion along the -axis . ] imply that ( ) is independent of and , ( ) is independent of , , and ; ( ) is independent of , , and ; ( ) is independent of and , so x&=&_11(v)x+ _ 14(v)t , + y&=&_22(v)y , + z&=&_33(v)z , + [ eq:8 ] t&=&_41(v)x+_44(v)t .note : the fact that and are independent of and follows from the requirement that the -axis ( the line ) always coincides with the -axis ( the line ) ; this would not be possible if and depended on and .important : eq .( [ eq:8 ] ) indicates that it is possible to have two spacially separated events and that are simultaneous in frame and , yet , non - simultaneous in frame , that is [ eq:9 ] t_ab = 0 , x_ab 0 : t_ab = _ 41x_ab0 .this is not as obvious as might seem : for example , before einstein it was assumed that whenever is zero , must also be zero .so keeping in ( [ eq:8 ] ) is a significant departure from classical newtonian mechanics .once the standard method of clock synchronization is adopted , it is , however , relatively easy to give an example of two events satisfying ( [ eq:9 ] ) .try that on your own ! also implies that and are physically equivalent , so that , and thus x&=&_11(v)x+ _ 14(v)t , + y&=&k(v)y , + z&=&k(v)z , + t&=&_41(v)x+_44(v)t . as seen from gives , or . on the other hand , as seen from , . for this to be possible, we must have , and thus x&=&(v)(x - vt ) , + y&=&k(v)y , + z&=&k(v)z , + t&=&(v)x+(v)t , where we have re - labeled and .note : the just introduced will soon become the celebrated gamma factor .( color online . )`` inverted '' frames in relative motion . ] which is just a relabeling of coordinate marks , preserves the right - handedness of the coordinate systems and is physically equivalent to ( inverted ) frame moving with velocity relative to ( inverted ) frame ( see fig .[ fig:2 ] ) , so that &=&(-v ) ( - vt ) , + &=&k(-v ) , + z&=&k(-v)z , + t&=&(-v)+(-v)t , or , -x&=&(-v)(-x - vt ) , + -y&=&-k(-v)y , + z&=&k(-v)z , + t&=&-(-v)x+(-v)t , which gives ( -v)&=&(v ) , + k(-v)&=&k(v ) , + ( -v)&=&-(v ) , + ( -v)&=&(v ) . tell us that the velocity of relative to , as measured by using primed coordinates , is equal to . reminder : the velocity of relative to , as measured by using unprimed coordinates , is .i justify this by considering two local observers co - moving with and , respectively , and firing identical spring guns in opposite directions at the moment when they pass each other ( for a more formal approach , see ) .if the ball shot in the direction by stays next to then , by the relativity principle and isotropy of space , the ball shot in the direction by should stay next to .this means that the velocity of relative to as measured by is negative of the velocity of relative to as measured by .thus , & = & ( -v)(x + vt ) , + y&=&k(-v)y , + z&=&k(-v)z , + t&=&(-v)x+(-v)t , and since = k(-v)y=k(-v)k(v)y = k^2(v)y , we get k(v ) = 1 . 
choosing , which corresponds to parallel relative orientation of and ( as well as of and ) , gives , for the direct transformation , x&=&(v)(x - vt ) , + y&=&y , + z&=&z , + t&=&(v)x+(v)t , and , for the inverse transformation , & = & ( v)(x + vt ) , + y&=&y , + z&=&z , + t&=&-(v)x+(v)t. as seen from gives ; also , as seen from , it gives and . from this , . but , which gives ( v ) = ( v ) . as a result , x&=&(v)(x - vt ) , + y&=&y , + z&=&z , + t&=&(v)x+(v)t , and & = & ( v)(x + vt ) , + y&=&y , + z&=&z , + t&=&-(v)x+(v)t , or , in matrix notation , x + t = \(v ) & -v(v ) +( v ) & ( v ) x + t , and x + t = \(v ) & + v(v ) + -(v ) & ( v ) x + t . can be written as ( v ) = -vf(v^2)(v ) , since is even . [note : the newly introduced function of will turn out to be a constant !actually , one of the goals of the remaining steps of this derivation is to show that is a constant .it will later be identifies with .] therefore , x + t = 1 & -v + -vf & 1 x + t , and x + t = 1 & v + vf & 1 x + t .this seems physically reasonable .we have , x + t & = & 1 & -v + -vf & 1 x + t + & = & ^2 1 & -v + -vf & 1 1 & v + vf & 1 x + t + & = & ^2 1-v^2f & 0 + 0 & 1-v^2f x + t , and thus ^2(1-v^2f)=1 , from where = . to preserve the parallel orientation of the and axes we have to choose the plus sign ( as can be seen by taking the limit ) , so that = .thus , x + t = 1 & -v + -vf & 1 x + t , and x + t = 1 & v + vf & 1 x + t , where , we recall , .this step is crucial for everything that we ve been doing so far , for it shows that is a constant , which will be identified with , where is nature s limiting speed .we have a sequence of two transformations : from to , and then from to , [ eq:100 ] x + t & = & 1 & -v + -vf & 1 x + t + & = & 1 & -v + -vf & 1 1 & -v + -vf & 1 x + t + & = & 1+vvf & -(v+v ) + -(vf+vf ) & 1+vvf x + t , where is the velocity of relative to ( as measured in using the coordinates ) , and is the velocity of relative to ( as measured in using the coordinates ) . butthis could also be written as a single transformation from to , [ eq:101 ] x + t & = & 1 & -v + -vf & 1 x + t , with being the velocity of relative to ( as measured in using the coordinates ) .this shows that the ( 1 , 1 ) and ( 2 , 2 ) elements of the transformation matrix ( [ eq:100 ] ) must be equal to each other and , thus , f = f , which means that is a constant that has units of inverse speed squared , $ ] .to derive the velocity addition formula ( along the -axis ) we use eqs .( [ eq:100 ] ) and ( [ eq:101 ] ) to get (v+v ) = v . squaring and rearranging give ( v)^2=()^2 , or , v=. choosing the plus sign ( to make sure that when ), we get [ eq:102 ] v=. because in that case the conclusions of relativistic dynamics would violate experimental observations ! for example , the force law , [ eq : forceequation ] = * f * , where is the velocity of a particle , would get messed up . in particular, such law would violate the observed fact that it requires an infinite amount of work ( and , thus , energy ) to accelerate a material particle from rest to speeds approaching [ m / s ] ( this argument is due to terletskii ) .in fact , it would become `` easier '' to accelerate the particle , the faster it is moving .incidentally , this _fact is what `` replaces '' einstein s second postulate in the present derivation !thus , eq . 
( [ eq:102 ] ) is the limit to which our ( actually , ignatowski s ) derivation can be pushed .remark : relativistic dynamics has to be discussed separately .we wo nt do that here , but maybe you can suggest a different reason for _ not _ to be negative ? see and for possible approaches .denoting f , we get the _ velocity addition formula _ , [ eq:103 ] v=.if we begin with and attempt to take the limit , we ll get [ eq:104 ] v=c , which tells us that is the limiting speed that a material object can attain .( notice that `` material '' here means `` the one with which an inertial frame can be associated '' .the photons do not fall into this category , as will be discussed shortly ! ) the possibilities therefore are : 1 . ( newtonian mechanics ; contradicts ( [ eq : forceequation ] ) ) ; 2 . and finite ( special relativity ) ; 3 . ( contradics observations . )so we stick with option 2 .what if an object were created to have from the start ( a so - called tachyon ) , like in the recent superluminal neutrino controversy ?we d get some strange results . for example , if we take and , we get [ eq:105 ] v== c , so in the object would move to the right at a slower speed than relative to , while itself is moving to the right relative to .bizarre , but ok , the two speeds are measured by different observers , so maybe it s not a big deal .however , if we consider the resulting lorentz transformation , x + t = 1 & -v + - & 1 x + t , or , [ eq:77 ] t&= & , + [ eq:78 ] x&= & , + y&=&y , + z&=&z , we notice that in a reference frame associated with hypothetical tachyons moving with relative to ( imagine a whole fleet of them , forming a grid which makes up ) , the spacetime coordinates of any event would be imaginary ! in order for the spacetime measurements to givevalues for , the reference frame made of tachyons must be rejected .what about a reference frame made of photons ? in that case , coordinates would be infinite and should also be rejected .so a fleet of photons can not form a `` legitimate '' reference frame .nevertheless , we know that photons exist .similarly , tachyons may also exist and , like photons , ( a ) should be created instantaneously ( that is , ca nt be created at rest , and then accelerated ) , and ( b ) should not be allowed to form a `` legitimate '' inertial reference frame . what about violation of causality ?( color online . )violation of causality by a hypothetical tachyon . ] indeed , the lorentz transformation shows that tachyons violate causality . if we consider two events , ( tachyon creation ) and ( tachyon annihilation ) with such that tachyon s speed , , is greater than as measured in ( see fig .[ fig:3 ] ) , then in frame moving with velocity relative to we ll have from ( [ eq:77 ] ) and ( [ eq:78 ] ) , t_b - t_a&= & + & = & ( t_b - t_a ) + & = & ( 1 - ) ( t_b - t_a ) , + x_b - x_a&= & + & = & ( x_b - x_a ) + & = & ( 1- ) ( x_b - x_a ) , where , which shows that it is possible to find such that ; that is , in event happens before event .this seems to indicate that tachyons are impossible .however , causality is a consequence of the second law of thermodynamics , which is a statistical law , applicable to macroscopic systems ; it does not apply to processes involving individual elementary particles . 
as a result, the existence of tachyons can not be so easily ruled out .finally , returning to eq .( [ eq:104 ] ) , we see that if something moves with relative to , it also moves with relative to any other frame .that is : the limiting speed is the same in all inertial reference frames . andthere is no mentioning of any emitter .also , as follows from ( [ eq:103 ] ) , is the only speed that has this property ( of being the same in all inertial frames ) . we know that light has this property ( ala michelson - morley experiment ) ,so the speed of light _ is _ the limiting speed for material objects . since neutrinos have mass , they can not move faster than light , and thus superluminal neutrinos are not possible .here we have a rod of ( proper ) length sitting at rest in frame , see fig . [ fig:4 ] .its speed relative to frame is .the two events , and , represent the meetings of the two clocks at the ends of the rod with the corresponding clocks in the frame at .we have from ( [ eq:77 ] ) and ( [ eq:78 ] ) , t_b - t_a&= & ( -)(x_b - x_a ) , + x_b - x_a&= & ( x_b - x_a ) , or [ eq:85 ] t_b - t_a&= & ( -)(x_b - x_a ) , + [ eq:86 ] x_b - x_a&=&. eq .( [ eq:85 ] ) says that , that is , the meeting events are not simultaneous in ( _ relativity of simultaneity _ ) .( [ eq:86 ] ) says that the length of the rod , , as measured in is smaller than its proper length by the gamma factor , = _ 0/ , the phenomenon of _ length contraction_. this time a single clock belonging to , fig .[ fig:5 ] , passes by two different clocks in .the corresponding two events , and , have , and are related to each other by t_b - t_a&= & + & = & ( t_b - t_a ) + & = & ( 1-)(t_b - t_a ) + & = & .this means that upon arrival at the moving clock will read less time than the -clock sitting at that location .this phenomenon is called _ time dilation _ ( moving clocks run slower ) .
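for reference, the standard forms of the results obtained above, with c denoting the limiting speed and \gamma(v) = 1/\sqrt{1 - v^{2}/c^{2}}, are

\begin{aligned}
x' &= \gamma(v)\,(x - v t), \qquad y' = y, \qquad z' = z, \qquad
t' = \gamma(v)\Bigl(t - \frac{v x}{c^{2}}\Bigr),\\
V &= \frac{v + v'}{1 + v v'/c^{2}} \quad \text{(composition of collinear velocities)},\\
\ell &= \ell_{0}/\gamma(v) \quad \text{(length contraction)}, \qquad
\Delta t = \gamma(v)\,\Delta t_{0} \quad \text{(time dilation)}.
\end{aligned}

these are the textbook expressions and agree with the identification f = 1/c^{2} made in the derivation.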
a derivation of the lorentz transformation without the use of einstein's second postulate is provided along the lines of ignatowsky, terletskii, and others. this is a write-up of a lecture first delivered in the phys 4202 e&m class during the spring semester of 2014 at the university of georgia. the main motivation for pursuing this approach was to develop a better understanding of why the faster-than-light neutrino controversy (opera experiment, 2011) was much ado about nothing.
the wide - spread use of highly active antiretroviral therapies ( haart ) against hiv in the united states has resulted in reducing the burden of hiv - related morbidity and mortality [ ] .hiv dynamics following haart are usually studied in short - term clinical trials with frequent data collection design .for example , in viral dynamics studies of aids clinical trials the elimination process of plasma virus after treatment is closely monitored with daily measurements , which has led to a new understanding of the pathogenesis of hiv infection and provides guidance for treating aids patients and evaluating antiviral therapies [ ] . here in this articlehiv dynamics refer to a two - part response to haart : viral suppression and concurrent or subsequent immune reconstitution . in clinical practice ,the virus is considered suppressed when plasma hiv rna ( viral load ) is below a lower limit of detection ; the degree of immune reconstitution is commonly measured by the change of cd4 lymphocyte cell count ( cd4 count ) after haart initiation .it is well known that cd4 lymphocyte cells are targets of hiv and their abundance declines after hiv infection .investigators have studied the association between viral load and cd4 count during haart treatment and , in general , they are negatively correlated [ ; ] .longitudinal data on these markers have been analyzed separately , particularly by using random - effects models .recently , bivariate linear mixed models were proposed to jointly model viral load and cd4 count by incorporating correlated random effects .these models were specified in terms of concurrent association between viral load and cd4 count [ ; ] . however , a natural time ordering for virologic and immunologic response to haart ( or any antiviral therapy ) is often observed : when a patient begins a successful haart regimen , viral replication is usually inhibited first , leading to a decrease in viral load ; then , cd4 count often increases as the immune system begins to recover .consequently , increase in cd4 count is thought to depend on the degree of viral suppression ; it may be slower to respond than viral load or it may not increase at all if the virus is not suppressed [ ] . therefore , it would be advantageous to acknowledge these common sequential changes of viral load and cd4 count when modeling post - haart hiv dynamics .hepatitis c virus ( hcv ) coinfection is estimated to occur in of hiv - infected patients in the united states and has become one of the most challenging clinical situations to manage in hiv - infected patients [ ] .several studies have suggested that hcv serostatus is not associated with the virologic response to haart [ ; ] .however , the evidence for immunologic response is conflicting .some studies have shown that hiv hcv coinfected patients have a blunted immunologic response to haart , compared to those with hiv infection alone , although others have found comparable degrees of immune reconstitution in persons with hiv hcv coinfection [ ; ; ; ] .a primary motivation of our model is to investigate the effect of hcv coinfection on post - haart hiv dynamics using cohort data from natural history studies .we focus on two important questions : ( ) do hcv - negative patients have shorter time from haart initiation to viral suppression ? ( )do hcv - negative patients have better immune reconstitution at the time of viral suppression ?note that in the second question the sequential nature of the virologic and immunologic response to haart is emphasized . 
because the incidence of clinical progression to aids fell rapidly following the widespread introduction of haart in 1997 , long - term clinical trials in patients with hiv become time - consuming and expensive [ ] .currently , natural history studies are the major source of knowledge about the hiv epidemic and the full treatment effect of haart over the long term .for example , studies such as multicenter aids cohort study ( macs ) , women s interagency hiv study ( wihs ) and swiss hiv cohort study ( shcs ) have played important roles in understanding the science of hiv , the aids epidemic and the effects of therapy [ ; ; ] . in hiv natural history studies , hiv viral load and cd4 count are usually measured with wide intervals ( e.g. , every months approximately ) . therefore , for some event time of scientific interest , for example , the time between haart initiation and viral suppression , both the time origin ( haart initiation ) and the failure event ( viral suppression ) could be interval - censored .this situation is referred to as ` doubly interval - censored data ' in the literature . in fact , the statistical research on doubly interval - censored data was primarily motivated by scientific questions in hiv research , for example , modeling ` aids incubation time ' between hiv infection and the onset of aids [ ; ] . both nonparametric and semiparametricmethods have been proposed for the estimation of the distribution function of the aids incubation time and its regression analysis . a comprehensive review on the analysis of doubly interval - censored data can be found in .the hiv epidemiology research study ( hers ) is a multi - site longitudinal cohort study of hiv natural history in women between and [ ] . at baseline between and the study enrolled hiv - seropositive women and hiv - seronegative women at high risk for hiv infection .participants were scheduled for approximately a -year follow - up , where a variety of clinical , behavioral and sociologic outcomes were recorded approximately every months and measurements correspond to dates .the top part of table [ hersbaseline ] gives selected baseline characteristics of the 1310 study participants ; more details can be found in .quantification of hiv rna viral load was performed using a branched - dna ( b - dna ) signal amplification assay with the detection limit at copies / ml and flow cytometry from whole blood was used to determine cd4 counts at each visit .all participants were haart - naive at baseline . during the study reported haart use based on information gathered during in - person interviews . 
because assessments were scheduled to be carried out every months and participants were only asked about whether they were on haart during the last 6 months , exact dates for haart initiation are not available .the analysis in section [ analysis2 ] includes 374 women with haart use who had hiv sero - conversion before baseline and baseline hcv coinfection information .some characteristics of these 374 women are presented at the bottom of table [ hersbaseline ] ..6d2.6c@ & & + & & + median age at enrollment & 35.0 & 34.5 + age range at enrollment & 16.455.2 & 16.656.0 + injection drug user at enrollment ( ) & 25.1 & 26.4 + cd4 count at enrollment ( ) & & + & 17.1 & 0.0 + 200499 & 50.7 & 1.7 + & 32.2 & 98.3 + hcv antibody test at enrollment ( ) & & + positive & 60.3 & 47.8 + negative & 38.8 & 50.8 + missing & 0.9 & 1.4 + [ 6pt ] & & + & & + [ 3pt ] median follow - up time ( months ) & 67.3 & 71.0 + median age at enrollment & 36.7 & 33.1 + age range at enrollment & 21.255.0 & 19.055.2 + injection drug user at enrollment ( ) & 29.8 & 2.4 + ever on antiviral treatment before 1996 ( ) & 57.2 & 62.1 + cd4 count before first reported haart use ( ) & & + & 34.6 & 36.8 + 200499 & 52.9 & 45.8 + & 12.5 & 17.5 + 50 copies / ml ) since reported haart initiation in the hers cohort ; solid lines : smoothing spline fits ; dashed line : derivative curves of the smoothing spline fits ; black dots : maximum of the increasing rate for average cd4 count and maximum of the decreasing rate of viral load prevalence . ]figure [ empinew ] shows smoothing spline fits and the corresponding derivative ( change rate ) curves for average cd4 count and the prevalence of detectable viral load for the 374 hers women , where the measurement times are centered such that time represents the earliest visit with haart information reported .the left panels indicate that the increasing trend for average cd4 count started later than the decreasing trend for viral load prevalence , but this phenomenon is probably not related to haart because the starting times for these trends are 12 years before the reported haart initiation time. it might be more useful to examine the change rates for average cd4 count and viral load prevalence to assess the effectiveness of haart .in fact , the right panels of figure [ empinew ] show that the maximum decreasing rate for viral load prevalence occurred earlier ( around 4 months before reported haart initiation ) than the maximum increasing rate for average cd4 count ( around the reported haart initiation ) , which suggests the possible sequential relationship in post - haart hiv dynamics discussed in section [ hivdynamicsintro ] .our objective is to develop a model for the joint distribution of the time from haart initiation to viral suppression , and the longitudinal cd4 counts relative to the viral suppression time following haart . as discussed in section [ hersintro ] ,the time from haart initiation to viral suppression is doubly interval - censored .specifically , considering the reporting bias for haart initiation , we define the right endpoint of its corresponding censoring interval to be the first visit of reported haart use and the definition for the left endpoint is based on assumptions about the earliest possible time of haart initiation in the hers cohort .further , viral suppression following haart can be either interval - censored or right - censored .details can be found in section [ analysis2 ]. 
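the descriptive analysis behind figure [empinew], smoothing-spline fits, their derivative (change-rate) curves, and the location of the maximum change rate, can be reproduced with standard tools. the sketch below is a minimal illustration using scipy; the input values are synthetic placeholders rather than the hers measurements, and the smoothing parameter is arbitrary.

# minimal sketch of the smoothing-spline / change-rate analysis: fit a
# smoothing spline to per-visit averages, take its derivative, and locate
# the time of the maximum change rate.  the data are synthetic placeholders.
import numpy as np
from scipy.interpolate import UnivariateSpline

# months relative to reported haart initiation and a toy mean cd4 trend
months = np.arange(-24, 25, 6, dtype=float)
mean_cd4 = 350 + 120 / (1 + np.exp(-(months - 2) / 4))   # s-shaped toy curve

spline = UnivariateSpline(months, mean_cd4, k=3, s=50.0)  # smoothing spline
rate = spline.derivative()                                # change-rate curve

grid = np.linspace(months.min(), months.max(), 500)
t_max_rate = grid[np.argmax(rate(grid))]
print("maximum increase rate of mean cd4 near month %.1f" % t_max_rate)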
represents enrollment , indexes the time since enrollment , is haart initiation time , is viral suppression time following haart , is the time from haart initiation to viral suppression , and are cd4 count measurements with their expectations represented by the curve . ]figure [ selectedsub ] shows cd4 counts and corresponding censoring intervals of haart initiation and viral suppression following haart for selected hers women . as seen in the top left panel of figure [ selectedsub ] , viral suppression after haart can be right - censored due to participant dropout , death and/or study end .similarly , participants could have incomplete cd4 count measurements for scheduled follow - up visits .however , because we focus on the subpopulation of haart users in the hers cohort , the missingness rate is relatively low compared to the whole hers population ; of the women in our analysis had at least visits .therefore , for the hers analysis in section [ analysis2 ] , we assume that the missing data mechanism is missingness at random [ ] .given that the parameters for modeling the missing data mechanism and the outcomes are distinct and they have independent priors , the missing data are then ignorable when making posterior inference about the outcomes .the remainder of the article is organized as follows . in section [ model2 ]we specify the joint model for doubly interval - censored event time and longitudinal cd4 count data .section [ estimation2 ] describes the posterior inference and gives full conditional distributions for gibbs steps .we use the model to analyze the hers data for investigating the hcv coinfection problem introduced in section [ hcvintro ] , and present the results in section [ analysis2 ] .the conclusion and some discussion are given in section [ conclusion2 ] .our goal is to model the joint distribution of the time from haart initiation to viral suppression and the longitudinal cd4 counts .figure [ scheme ] is a schematic illustration of the variables of interest under an idealized situation .let denote the time since enrollment and let and represent the time from enrollment to haart initiation and the time from enrollment to viral suppression after haart , respectively. by definition , and is the time from haart initiation to viral suppression .further , are cd4 count measurements taken at time points . throughout this article ,the time points are assumed to be noninformative and fixed by study design .let denote covariates , for example , the baseline hcv serostatus .the joint density of and given , and can be written as \\[-8pt ] & & \quad \qquad = p(w\vert{\mathbf{x } } , h)p\ { y_1 , y_2 , \ldots , y_n\vert{\mathbf{x } } , t_1-(h+w ) , \ldots , t_n-(h+w)\}. { \nonumber}\end{aligned}\ ] ] the conditioning on is because we are not interested in the marginal distribution of and the observed is only used as the time origin for .the factorization in ( [ simple2 ] ) is based on the sequential relationship in post - haart dynamics .when haart regimen is successful in suppressing the virus , we are able to obtain , the time from haart initiation to viral suppression . as mentioned in section [ hivdynamicsintro ] , there is a time ordering of virologic response and immunologic response to haart . 
because of this sequential relationship of virologic and immunologic response as well as the large between - individual heterogeneity in terms of the ability to suppress viral replication , the time to suppression and the durability of suppression , we believe that the mean cd4 count profiles from different individuals are more comparable after realigning measurement times by their individual viral suppression times following haart . therefore , we assume that the distribution of given depends on and only through a change in the time origin for the measurement times .this is similar to _ curve registration _ , a method originated in the functional data analysis literature [ ] for dealing with the situations where the rigid metric of physical time for real life systems is not directly relevant to internal dynamics .for example , the timing variation of salient features of individual puberty growth curves ( e.g. , time of puberty growth onset , time of peak velocity of puberty growth ) can result in the distortion of population growth curves [ ] .likewise , in our case , simply averaging individual cd4 count profiles along the time since enrollment ( ) or the time since haart initiation ( ) can attenuate the true population immunologic response profile following haart . because viral suppression is the main driving force of immune reconstitution [ ], it is sensible to center the time scale at individual viral suppression times ( ) in order to describe the trends in immune reconstitution at the population level .however , as mentioned in section [ hersintro ] , can be doubly interval - censored in hiv natural history studies , which presents a challenge in making inferences about the density in ( [ simple2 ] ) .in fact , for , we are faced with a situation similar to the missing or interval - censored covariate problem in generalized linear model literature [ ; ] .to accommodate this situation , we will extend the semiparametric bayesian approach in by modeling and simultaneously . note that here we model the observed only for taking into account the uncertainty in the time origin of ; we do not intend to make inference about the marginal distribution for haart initiation time , which requests the right - censored data from those participants who did not initiate haart during the study .this is different from the aids incubation time problem which motivated the research in doubly interval - censored data , where both hiv infection time and aids incubation time are of interest and hiv infection time can be right - censored [ ] .moreover , for the hers cohort , haart was not available before 1996 ; therefore , when haart initiation time is of scientific interest , it is not valid to use enrollment as the time origin because all hers women were not at risk for haart initiation between enrollment and 1996 .however , for the purpose of accommodating uncertainty for the time origin of , we can still use the observed censoring intervals for with enrollment as their time origin . in the following sections , we present the details of the proposed joint model for the hers data .recall that all hers women were haart - naive at baseline . 
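a minimal sketch of the realignment just described (the curve-registration step): each woman's measurement times are re-expressed relative to her viral suppression time h + w before profiles are compared or averaged. the numbers below are purely illustrative; in the model h and w are imputed rather than observed.

import numpy as np

t = np.array([0.0, 6.0, 12.0, 18.0, 24.0])   # months since enrollment for one woman
h, w = 10.0, 3.5                             # haart initiation and time to viral suppression
realigned = t - (h + w)                      # time origin moved to viral suppression
print(realigned)                             # negative values precede suppression, positive follow it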
for those who initiated haart during follow - up ,let be a positive random variable representing the time from enrollment to haart initiation .participants were monitored only periodically , and at each follow - up visit they only reported whether they were on haart treatment since the last visit .hence , the true value for is only known to lie within an interval ] , where and are defined similarly as and . for those whose viral load was not suppressed during follow - up , , which corresponds to right censoring of .because right censoring can be treated as a special case of interval censoring with , we simply write ] and ] .further , we observe cd4 counts at time points , which can be different across individuals and is the covariate that includes baseline hcv coinfection status , where indicates positivity of hcv antibody . in summary , the observed data for a haart user in the hers cohort consist of the observed cd4 counts ,the covariate , the observation times and the intervals ] that respectively include haart initiation time and viral suppression time . the joint density for the above observed data and the unobserved and can be written as \\[-8pt ] & & \qquad \quad { } \times p_2 ( w \vert{\mathbf{x } } , h , l^h , r^h , l^v , r^v ) \nonumber\\ & & \qquad \quad { } \times p_3\ { \mathbf{y } \vert{\mathbf{x } } , t_{1}-(h+w),\ldots , t_{n}-(h+w ) , l^h , r^h , l^v , r^v\}. \nonumber\end{aligned}\ ] ] denote the cumulative distribution function ( cdf ) of given by , and the cdf of given by . the corresponding probability density functions ( pdf ) are and , respectively .we assume that the censoring of and occurs noninformatively [ ; ] , in the following sense : a. provide no additional information about when and are exactly observed .that is , the conditional density of given and does not depend on : b. the only information about and provided by the observed censoring intervals is that ] contain and , respectively .that is , the conditional density of given and ] .similarly , the conditional density of given , and ] .we denote ( [ truncatedh ] ) by and ( [ truncatedw ] ) by , where the subscript stands for ` truncated ' density . given these noninformative conditions , the joint density in ( [ jointpdf ] ) can be simplified as \\[-8pt ] & & \qquad \quad { } \times g_t^w(w\vert{\mathbf{x } } , h , l^v , r^v;{\bolds\lambda}^w ) { \nonumber}\\ & & \qquad \quad { } \times p_3\{\mathbf{y}\vert{\mathbf{x } } , t_{1}-(h+w),\ldots , t_{n}-(h+w ) ; { \bolds\theta}\ } .\nonumber\end{aligned}\ ] ] to construct the observed data likelihood , we index each individual s data by and let be the number of observations for the individual , ( , , , , , , are observed .if we denote by ] , by the iterations as follows : first , and are imputed by using corresponding conditional distributions ; second , the parameter is updated using the complete data set obtained from the first step and current values of the rest of parameters ; last , the parameters are updated using distinct values of imputed and .details on priors and full conditional posterior distributions are given in the .in this section we apply the joint model to the hers data introduced in section [ hersintro ] .two different definitions are used for censoring intervals of haart initiation and the results are compared . the first one is explicitly based on reported haart use information , and we refer to them as ` narrow ' intervals for . 
here is the first visit with reported haart use ; is the immediate previous visit without haart use .there are ( hcv seropositive , hcv seronegative ) patients with right - censored viral suppression time in this case .however , we find that some patients had viral suppression immediately before , which could be due to the possible reporting bias regarding haart initiation . as a result, we might miss the true viral suppression time following haart and artificially create some cases with right - censored viral suppression time ( or viral suppression that occurred long after haart initiation ) . to reduce its impact in a conservative manner , we redefine all left endpoints of haart initiation intervals to be march , , which is the left endpoint of the censoring interval for the patient who was the first reporting haart use in the hers cohort . because censoring intervals for haart initiation are wider under this new definition , we refer to them as ` wide ' intervals for and here the number of patients with right - censored viral suppression time is reduced to ( hcv seropositive , hcv seronegative ) .figure [ intervals ] shows the cd4 count data and censoring intervals under two definitions of haart initiation time intervals for two selected women in the hers cohort . in the left panel ,the ` wide ' definition for also changes the interval for viral suppression time , while in the right panel the intervals for remain the same . and under two definitions of haart initiation time intervals for two selected women in the hers cohort ; censoring intervals under ` narrow ' definition are represented by dashed lines , censoring intervals under ` wide ' definition are represented by solid lines ; censoring intervals of and are on the top and bottom of panels , respectively .] for cd4 counts , square - root transformation is used because it is more appropriate for the assumptions of normality and homogeneous variance as shown by exploratory analysis .in addition to baseline hcv serostatus , two other covariates are included in the cd4 model : the observed cd4 count ( scaled by ) immediately before reported haart initiation ( pretreatment cd4 level ) and the indicator of baseline injection drug use ( idu ) . 
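a small sketch of how an unobserved event time can be drawn from a density restricted to its censoring interval [l, r], in the spirit of the truncated densities g_t^h and g_t^w above; a normal density is used to match the normal base measures adopted later, and all parameter values and interval endpoints are illustrative assumptions.

from scipy.stats import truncnorm

def sample_truncated_normal(mu, sigma, l, r, size=1):
    # standardize the truncation bounds and draw from the normal restricted to [l, r]
    a, b = (l - mu) / sigma, (r - mu) / sigma
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, size=size)

# e.g., a haart initiation time known only to lie between months 18 and 30 since enrollment
print(sample_truncated_normal(mu=24.0, sigma=8.0, l=18.0, r=30.0, size=3))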
for penalized splines approximating population - level smooth functions ,we use truncated quadratic bases with 20 knots , allowing sufficient flexibility for capturing cd4 count changes at viral suppression times .these knots are placed at viral suppression times as well as at the sample quantiles of the realigned measurement times using midpoints of the observed censoring intervals for viral suppression .because data for individual women are sparse over time and the maximum number of data points for individual women is , we use truncated quadratic bases with one knot at the viral suppression times for estimating individual - level smooth functions .since the first derivatives ( velocities ) of the population - level smooth functions can be computed in analytic form when truncated quadratic bases are used , we also examine the posterior inference for these derivatives ..0d3.0d3.0d4.0c@ & & & & & & & & + ` narrow ' & &marginal&hcv & 15 & 37 & 126 & 654 & 1339 + & & & hcv & 13 & 39 & 118 & 625 & 1384 + & & & & & & & & + & & joint&hcv & 13 & 28 & 88 & 291 & 906 + & & & hcv & 13 & 31 & 82 & 356 & 959 + & & & & & & & & + & & & & & & & & + ` wide'&&marginal&hcv & 3 & 145 & 582 & 1129 & 1497 + & & & hcv & 1 & 120 & 436 & 1021 & 1521 + & & & & & & & & + & & joint&hcv & 1 & 122 & 350 & 793 & 1232 + & & & hcv & 1 & 91 & 322 & 768 & 1315 + .2cd2.2@ & & & & + ` narrow ' & & & & + marginal&hcv & 0.75 & 0.42 & 0.56 + & hcv & 0.72 & 0.43 & 0.56 + & difference & -0.03 & 0.02 & -0.01 + & & & & + & & & & + joint&hcv & 0.63 & 0.48 & 0.66 + & hcv & 0.64 & 0.52 & 0.62 + & difference & 0.01 & 0.05 & -0.04 + & & & & + & & & & + ` wide ' & & & & + marginal&hcv & 0.85 & 0.13 & 0.29 + & hcv & 0.78 & 0.22 & 0.33 + & difference & -0.07 & 0.08 & 0.04 + & & & & + & & & & + joint&hcv & 0.68 & 0.17 & 0.36 + & hcv & 0.68 & 0.24 & 0.38 + & difference & 0.01 & 0.07 & 0.01 + & & & & + the prior specifications are as described in section [ estimation2 ] and the . for assessing sensitivity in estimating and , precision parameters of the dirichlet process are taken to be equal to and , which indicate different levels of faith in the prior normal base measures for and .we run two mcmc chains with iterations , the first of which are discarded .convergence is established graphically using history plots ; pooled posterior samples are then used for inference . 
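the truncated quadratic basis described above has a simple closed form, which is why the derivative (velocity) curves are available analytically; the sketch below builds such a basis and its first derivative. the knot locations here are placeholders, whereas in the analysis they sit at the viral suppression time (time 0 on the realigned scale) and at sample quantiles of the realigned measurement times.

import numpy as np

def trunc_quad_basis(t, knots):
    t = np.asarray(t, float)[:, None]
    plus = np.clip(t - knots, 0.0, None)                     # (t - kappa)_+
    return np.hstack([np.ones_like(t), t, t**2, plus**2])    # 1, t, t^2, (t - kappa)_+^2

def trunc_quad_deriv(t, knots):
    t = np.asarray(t, float)[:, None]
    plus = np.clip(t - knots, 0.0, None)
    return np.hstack([np.zeros_like(t), np.ones_like(t), 2.0 * t, 2.0 * plus])

knots = np.linspace(-24.0, 24.0, 20)      # 20 knots on the realigned time axis (illustrative)
tgrid = np.linspace(-30.0, 30.0, 5)
print(trunc_quad_basis(tgrid, knots).shape, trunc_quad_deriv(tgrid, knots).shape)   # (5, 23) each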
the results at both values of , are similar ; here we present those with .mcmc is implemented in matlab programs [ ] .for the purpose of modeling doubly interval - censored event time only , marginal models can be used by excluding the part for cd4 counts from ( [ semimodel ] ) .we will compare the results from our joint model with those from marginal models , and investigate the possible impact of joint modeling .table [ times ] presents the posterior mean estimates of the percentiles of the time between haart initiation and viral suppression for the haart responder group .the results based on ` wide ' intervals for suggest that the hcv negative group might have shorter time to achieve viral suppression than the hcv positive group , but this is not the case with ` narrow ' intervals for , where the hcv negative group has more right skewed distribution .further , the joint model tends to give smaller estimates than the marginal model .for example , in table [ times ] both location estimates and variability estimates from the joint model based on ` wide ' intervals for are smaller than those from the marginal model , which suggests that modeling cd4 counts affects the estimation for doubly interval - censored when the information from censoring intervals is limited .table [ times2 ] gives the estimated proportions of haart responders with time between haart initiation and viral suppression less than or equal to / days . in both cases of ` wide ' and narrow intervals for , the credible intervals for differences between proportions by hcv groups cover zero .thus , in the hers cohort , there is not sufficient evidence that baseline hcv serostatus is associated with virologic response to haart .this is also demonstrated in figure [ ht ] , where the hazard functions of viral suppression are plotted over grid points of days . herethe hazard is defined as , where , are grid points . with both ` narrow ' and ` wide ' intervals for ,the hazard functions of viral suppression are generally similar across the hcv groups .note that estimated proportions of haart responders are also similar for the hcv groups in all cases .days from the joint model ; left panel : ` narrow ' intervals for ; right panel : ` wide ' intervals for . ] from table [ times ] , median estimates for the time between haart initiation and viral suppression are approximately one year with ` wide ' intervals for and months with ` narrow ' intervals for in the joint model . compared to the clinically expected value, the estimates with ` wide ' intervals for might be overestimated due to the following reasons .first , data were collected approximately every six months in the hers , thus the immediate virologic response to haart were not available .second , haart information was self - reported and we set up the left endpoints of haart initiation time to be march 11th , 1996 for reducing reporting bias .consequently , censoring intervals for observed haart initiation times are wide .third , of the participants had right - censored viral suppression times , which might be related to the adherence of haart treatment and individual heterogeneity in virologic response . 
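one standard discrete form consistent with the hazard description above is h_j = [f(s_j) - f(s_{j-1})] / [1 - f(s_{j-1})] for grid points s_j, where f is the estimated cdf of the time between haart initiation and viral suppression; the sketch below computes it from a placeholder cdf, since the posterior cdf itself is not reproduced here.

import numpy as np
from scipy.stats import gamma

grid = np.arange(0.0, 721.0, 90.0)         # grid points in days (illustrative spacing)
F = gamma(a=2.0, scale=180.0).cdf          # placeholder stand-in for the estimated cdf
cdf = F(grid)
hazard = np.diff(cdf) / (1.0 - cdf[:-1])   # conditional probability of suppression in each interval
print(np.round(hazard, 3))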
however , these situations do not differ by hcv serostatus , thus the corresponding comparison can still be useful .the results for cd4 counts are similar under both definitions of censoring intervals for haart initiation and we present those based on ` wide ' intervals for .we compute posterior mean estimates for all targets of inference .the coefficient estimate for pretreatment cd4 level is ( credible interval ] ) , suggesting that baseline idu status was not associated with current cd4 counts , given baseline hcv and pretreatment cd4 level .pointwise credible bands .derivatives for cd4 count profiles by hcv groups for haart responders ( in square root cd4 count scale ) in the joint model , after accounting for pretreatment cd4 level and baseline injection drug use .difference between derivatives for cd4 count profiles ( in square root cd4 count scale ) in the joint model .the ticks at the top and the bottom of the panels are the haart initiation times corresponding to the , and quantiles of the time between haart initiation and viral suppression in table [ times ] : solid line , hcv - positive group ; dotted line , hcv - negative group . ] for haart responders , mean cd4 count profiles ( after accounting for pretreatment cd4 level and baseline idu ) are plotted in the panel ( a ) of figure [ herspop ] . we transform the estimates back to the original cd4 count scale for illustration purposes .the estimated cd4 count profiles of both hcv groups were decreasing at years before viral suppression .cd4 counts started to increase before hiv virus was completely suppressed ( time point ) .this is consistent with findings from other studies , that is , cd4 cells may increase after haart for patients who do not fully suppress the virus , because the level of viral load is decreasing [ ] .however , figure [ herspop](a ) , also suggests that the decreasing trend for hcv - negative patients ends earlier than hcv - positive patients when haart started to be initiated .in addition , the average cd4 level after viral suppression achieved by hcv - negative patients is higher than hcv - positive patients .for example , at viral suppression time the difference of average cd4 count for hcv groups is approximately ( credible interval ] .note that does not get involved in ( [ postu ] ) because conditioning on , and are independent . since only provides information on the range of , is simply the truncated , base measure of given .furthermore , and .thus , a new value of is equal either to with probability , or to a sampled value from the distribution with probability .also , we assume that depending on the value of , the base measure are normal distributions with distinct parameters or . for ,the full conditional distribution follows : { \nonumber}\\ & & \qquad \sim q_0 \cdot g_{0t}^w(w_i\vert\mathbf{y}_i , t_i , { \mathbf{x}}_i , h_i , l_i^v , r_i^v , { \bolds\lambda}^w ) + \sum_{j \ne i } q_j \cdot i(w_j = w_i ) , \nonumber\end{aligned}\ ] ] where is the truncated posterior distribution of in $ ] .furthermore , and .thus , a new value of is equal either to with probability , or to a sampled value from the distribution with probability , where is the full conditional distribution of that would be obtained if the completely parametric hierarchical model ( [ fullmodel ] ) is used and is the prior distribution ( base measure ) for given .we again assume that are normal distributions with distinct parameters , . 
because is based on the model in ( [ cd4model ] ) , there is no closed form for and the metropolis step [ ] is used for sampling .the integral in is approximated by the gauss legendre quadrature with nodes .we use bayesian penalized splines [ ] with a truncated polynomial basis for approximating cd4 count profiles at both population level and individual level. following , , , , , and ( ) in ( [ cd4model ] ) can be approximated by where , , and are truncated polynomial bases ; is an integer and . , , and are the corresponding knots ; ( , , , ) are the number of knots .let and , then the proposed model in ( [ cd4model ] ) can be rewritten as we use the standard prior distributions for all parameters in the cd4 count model as follows : , for , , , , , , , and ; for , and ; for , and ; for , ; for , ; , , , , , all follow distribution . note that , , , are smoothing parameters for the population penalized splines ; and are smoothing parameters for individual penalized splines ; , ( ) are variance component parameters for random effects .further , we assume for all observations and .thus , the parameter vector includes ( ) and ( , , , , , , , , ). since all conditional posterior distributions for are in closed form , the gibbs steps are straightforward .the parameters and are updated from their full conditional distributions : { \nonumber}\\ & & \qquad \sim\prod_{i \in{\mathbf{i}}^h } g_0^h(h_i\vert z_i , v_i , l_i^h , r_i^h ; { \bolds\lambda}^h)f({\bolds\lambda}^h ) , \nonumber \\ & & { \bolds\lambda}^w \vert{\mathbf{y}}_1 , \ldots , { \mathbf{y}}_n , { \mathbf{x}}_1 , \ldots , { \mathbf{x}}_n , { \mathbf{t } } , { \mathbf{h } } , { \mathbf{w } } , { \mathbf{l}}^h , { \mathbf{r}}^h , { \mathbf{l}}^v , { \mathbf{r}}^v , { \bolds\theta } , { \bolds\lambda}^h ] { \nonumber}\\ & & \qquad \sim \prod _ { i \in{\mathbf{i}}^w } g_0^w(w_i\vert z_i , h_i , l_i^v , r_i^v ; { \bolds\lambda}^w)f({\bolds\lambda}^w ) , \nonumber\end{aligned}\ ] ] where and are the subsets of indexes corresponding to the distinct and because the distinct and are random samples from and , respectively [ ] . in our case , and for the normal base measures ;we assume and .the conditional posterior distributions of and are both in closed forms .we are grateful to jeffrey blume , mike daniels , constantine gatsonis , patrick heagerty , tony lancaster and the referees for helpful comments .data for hers were collected under cdc cooperative agreements u64-ccu106795 , u64-ccu206798 , u64-ccu306802 and u64-ccu506831 .
hepatitis c virus (hcv) coinfection has become one of the most challenging clinical situations to manage in hiv-infected patients. recently, the effect of hcv coinfection on hiv dynamics following initiation of highly active antiretroviral therapy (haart) has drawn considerable attention. post-haart hiv dynamics are commonly studied in short-term clinical trials with frequent data-collection designs; for example, the elimination of plasma virus during treatment is closely monitored with daily assessments in viral dynamics studies of aids clinical trials. in this article, we instead use infrequent cohort data from long-term natural history studies and develop a model for characterizing post-haart hiv dynamics and their associations with hcv coinfection. specifically, we propose a joint model for the doubly interval-censored time between haart initiation and viral suppression and for the longitudinal cd4 count measurements relative to viral suppression. inference is accomplished with a fully bayesian approach: the doubly interval-censored data are modeled semiparametrically with dirichlet process priors, and bayesian penalized splines are used for the population-level and individual-level mean cd4 count profiles. we apply the proposed methods to data from the hiv epidemiology research study (hers) to investigate the effect of hcv coinfection on the response to haart.
cloud infrastructures are widely deployed to support various emerging applications such as : google app engine , microsoft window live service , ibm blue cloud , and apple mobile me .large - scale data centers ( _ _ dc__s ) , which are the fundamental engines for data processing , are the essential elements in cloud computing .information and communication technology ( _ ict _ ) is estimated to be responsible for about of the worldwide energy consumption by 2020 .the energy consumption of dcs accounts for nearly 25% of the total ict energy consumption .hence , the energy consumption of dcs becomes an imperative problem .renewable energy , which includes solar energy and wind power , produces domestic electricity of the united states in 2011 .renewable energy will be widely adopted to reduce the brown energy consumption of ict .for example , parasol is a solar - powered dc . in parasol ,greenswitch , a management system , is designed to manage the work loads and the power supplies .the availability of renewable energy varies in different areas and changes over time .the work loads of dcs also vary in different areas and at different time . as a result ,the renewable energy availability and energy demands in dcs usually mismatch with each other .this mismatch leads to inefficient renewable energy usage in dcs . to solve this problem , it is desirable to balance the work loads among dcs according to their green energy availability . although the current cloud computing solutions such as cloud bursting , vmware and f5 support inter - datacenter ( _ inter - dc _ ) virtual machine ( _ vm _ ) migration , it is not clear how to migrate vm among renewable energy powered dcs to minimize their brown energy consumption .elastic optical networks ( _ eons _ ) , by employing orthogonal frequency division multiplexing ( _ ofdm _ ) techniques , not only provide a high network capacity but also enhance the spectrum efficiency because of the low spectrum granularity .the granularity in eons can be 12.5 ghz or even much smaller .therefore , eons are one of the promising networking technologies for inter - dc networks . powering dcs with renewable energycan effectively reduce the brown energy consumption , and thus alleviate green house gas emissions .dcs are usually co - located with the renewable energy generation facilities such as solar and wind farms . since transmitting renewable energy via the power grid may introduce a significant power loss , it is desirable to maximize the utilization of renewable energy in the dc rather than transmitting the energy back to the power grid . in this paper , we investigate the _ _r__enewable _ _ e__nergy-__a__ware _ _ i__nter - dc vm _ _ m__igration ( _ re - aim _ ) problem that optimizes the renewable energy utilization by migrating vms among dcs .[ fig : six - cloud ] shows the architecture of an inter - dc network .the vertices in the graph stand for the optical switches in eons .dcs are connected to the optical switches via ip routers .these dcs are powered by hybrid energy including brown energy , solar energy , and wind energy . in migrating vms among dcs , the background traffic from other applicationsare also considered in the network .for example , assume that dc lacks renewable energy while dc and dc have superfluous renewable energy .some vms can be migrated out of dc in order to save brown energy . 
because of the background traffic and the limited network resource , migrating vms using different paths ( path or path ) has different impacts on the network in terms of the probability of congesting the network .it is desirable to select a migration path with minimal impact on the network .the rest of this paper is organized as follows .section [ sec : related_work ] describes the related work .section [ sec : problem ] formulates the re - aim problem .section [ sec : analysis - algorithms ] briefly analyzes the property of the re - aim problem and proposes two heuristic algorithms to solve the problem .section [ sec : evaluations ] demonstrates the viability of the proposed algorithms via extensive simulation results .section [ conclusion ] concludes the paper .owing to the energy demands of dcs , many techniques and algorithms have been proposed to minimize the energy consumption of dcs .fang _ et al . _ presented a novel power management strategy for the dcs , and their target was to minimize the energy consumption of switches in a dc .cavdar and alagoz surveyed the energy consumption of server and network devices of intra - dc networks , and showed that both computing resources and network elements should be designed with energy proportionality .in other words , it is better if the computing and networking devices can be designed with multiple sleeping states .a few green metrics are also provided by this survey , such as power usage effectiveness ( pue ) and carbon usage effectiveness ( cue ) .deng _ et al . _ presented five aspects of applying renewable energy in the dcs : the renewable energy generation model , the renewable energy prediction model , the planning of green dcs ( i.e. , various renewable options , avalabity of energy sources , different energy storage devices ) , the intra - dc work loads scheduling , and the inter - dc load balancing .they also discussed the research challenges of powering dcs with renewable energy .ghamkhari and mohsenian - rad developed a mathematical model to capture the trade - off between the energy consumption of a data center and its revenue of offering internet services .they proposed an algorithm to maximize the revenue of a dc by adapting the number of active servers according to the traffic profile . proposed algorithms to reduce emissions in dcs by balancing the loads according to the renewable energy generation .these algorithms optimize renewable energy utilization while maintaining a relatively low blocking probability .mandal _ et al . _ studied green energy aware vm migration techniques to reduce the energy consumption of dcs .they proposed an algorithm to enhance the green energy utilization by migrating vms according to the available green energy in dcs .however , they did not consider the network constraints while migrating vms among dcs . in the optical networks ,the available spectrum is limited .the large amount of traffic generated by the vm migration may congest the optical networks and increase the blocking rate of the network .therefore , it is important to consider the network constraints in migrating vms . in this paper , we propose algorithms to solve the green energy aware inter - dc vm migration problem with network constraints .in this section , we present the network model , the energy model , and the formulation of the re - aim problem .the key notations are summarized in table [ tab : notations ] . 
& the capacity of a link in terms of spectrum slots .+ & the capacity of a spectrum slot .+ & the maximum number of servers in the dc .+ & the maximum number of vms can be supported in a server .+ & the amount of renewable energy in the dc .+ & the number of vms in the dc. + & per unit energy cost for the dc .+ & the required bandwidth for migrating the vm in the dc .+ & the set of the migration requests .+ & the set of vms migrated in the migration .+ & the migration granularity .+ & the used spectrum slot ratio of the path in the migration from the dc .+ & the maximum network congestion ratio .+ & the maximum energy consumption of a server .+ & the power usage efficiency .+ we model the inter - dc network by a graph , . here , , and are the node set , the link set and the spectrum slot set , respectively .the set of dc nodes is denoted as .we assume that all dcs are powered by hybrid energy .we denote as the set of dcs that does not have sufficient renewable energy to support their work loads and as the set of dcs that has surplus renewable energy . during the migration , and to the two sets of dcs acting as the sources and destinations , respectively .we define as the migration granularity , which determines the maximum routing resource that can be used in one migration to each dc .we assume that there are servers in the dc and each server can support up to vms .the energy consumption of a server is when it is active .a server is active as long as it hosts at least one active vm ; otherwise , the server is in the idle state . here , we assume that an idling server will be turned off and its energy consumption is zero .then , is the number of active servers required in the dc .we denote as the power usage effectiveness , which is defined as the ratio of a dc s total energy consumption ( which includes the facility energy consumption for cooling , lighting , etc . ) to that of the servers in the dc . given , a dc s total energy consumption is .we denote as the brown energy consumption in the dc .then , in the problem formulation , is a binary variable . indicates that the vm in the dc is migrated using the path with the spectrum slot as the starting spectrum slot .the objective of the re - aim problem is to minimize the total brown energy cost in all dcs with the vm service constraints and the network resource constraints .the problem is formulated as : \cdot\\ & f_{max } , \quad\forall i \neq j \end{aligned}\\ &\begin{aligned}\label{eq : c10 } & f(j)+b(j)-f(i)\leq [ 1+\delta_{i , j}-y(i , j ) ] \cdot\\ & f_{max } , \quad\forall i \neq j \end{aligned}\end{aligned}\ ] ] here , eqs .- are the vm service constraints .constrains that all the vms should be hosted in the dcs , while eqs . -constrain that the total number of vms in a dc should not exceed the dcs capacity .the network resource constraints are shown in eqs . - .constrains the network congestion ratio to be less than , which is the maximum network congestion ratio allowed for routing in the network . in eq ., is the spectrum slot ratio of the path in the migration from the dc , which is defined as the ratio of the number of occupied spectrum slots in the path to the total number of spectrum slots of this path . is defined as the number of spectrum slots used in the path for the migration from the dc .is a link capacity constraint of the network ; it constrains the bandwidth used in migrating vms not to exceed the capacity of the network resource . 
here , is the bandwidth requirement in terms of spectrum slots , and is the index of the starting spectrum slot of a path .for example , represents the starting spectrum slot index of the path , which is used by .is the spectrum non - overlapping constraint of a path used by two different vms in one migration .this constraint must be met for each vm in every migration ; if two vms use the same spectrum slot in one migration , the total bandwidth allocated to the two vms should not exceed the capacity of a spectrum slot ; otherwise , each vm must use a unique spectrum slot . in the migration , the vmsare sorted in ascending order based on their bandwidth requirement .we assume the vms are migrated according to an ascending order ; for example , the vm is moved after the vm is migrated .eqs . - are the spectrum non - overlapping and the continuity constraints .this spectrum non - overlapping constraint is used for different paths . in these constraints , and represent two different paths used in the migration . here , is the upper bound of the total bandwidth requirement in terms of spectrum slots . is a boolean variable defined in eq . ,which equals if the starting spectrum slot index of the path is smaller than that of the path ; otherwise , it is .we define as a boolean indicator , which equals if the path and the path in the migration have at least one common link ; otherwise , it is .we give an example to illustrate these equations . if and , eq .becomes eq ., which ensures the bandwidth non - overlapping constraint .is automatically satisfied in this case . when we provision spectrum slots for requests in the eons , the path continuity constraint , spectrum continuity constraint and non - overlapping constraint must be considered .for the path continuity constraint , a lightpath must use the same subcarriers in the whole path for a request . for the spectrum continuity constraint, the used subcarriers must be continuous if a request needs more than one subcarriers .for the non - overlapping constraint , two different lightpaths must be assigned with different subcarriers if they have one or more common links .since we use a path based method to formulate the re - aim problem , the path continuity constraint of the network is already taken into account .the main contribution of this paper is considering the network influence on the migration when we minimize the brown energy consumption of the dcs .in other words , we want to impose a controllable effect on the network in the migration that leads to less network congestion .to solve the re - aim problem , both the energy costs in dcs and the network resource required for the migration should be considered . for example , when a dc consumes brown energy , it is desirable to migrate some vms to other dcs .the vm migration will introduce additional traffic to the network . to avoid congesting the network, we have to optimize the number of vms that will be migrated and select the routing path for the migration .therefore , it is challenging to solve the re - aim , which is proven to be np - hard .the re - aim problem is np - hard we prove that the re - aim problem is np - hard by reducing any instance of the multi - processor scheduling problem ( _ mps _ ) into the re - aim problem . 
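to make the placement side of the formulation concrete, the sketch below solves a much-simplified relative of the re-aim problem: it chooses how many vms each dc hosts so that brown-energy cost is minimized subject to hosting all vms and per-dc capacity, and it ignores the routing, spectrum-slot and congestion constraints discussed above. it assumes the pulp package; all data values are illustrative, and the decision variables are aggregated counts rather than the per-vm, per-path binaries of the full formulation.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger

dcs = ["dc1", "dc2", "dc3"]
cap = {"dc1": 500, "dc2": 500, "dc3": 500}             # vms a dc can host (servers times vms per server)
renewable_vms = {"dc1": 100, "dc2": 250, "dc3": 300}   # vms coverable by the renewable supply
cost = {"dc1": 0.9, "dc2": 0.7, "dc3": 0.8}            # per-vm brown energy cost
total_vms = 800

m = LpVariable.dicts("hosted", dcs, lowBound=0, cat=LpInteger)
b = LpVariable.dicts("brown", dcs, lowBound=0)         # vms that must run on brown energy

prob = LpProblem("re_aim_placement_sketch", LpMinimize)
prob += lpSum(cost[d] * b[d] for d in dcs)             # total brown-energy cost
prob += lpSum(m[d] for d in dcs) == total_vms          # all vms must be hosted
for d in dcs:
    prob += m[d] <= cap[d]                             # dc capacity
    prob += b[d] >= m[d] - renewable_vms[d]            # brown share of the load

prob.solve()
print({d: (m[d].value(), b[d].value()) for d in dcs})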
in the re - aim problem , without considering the network constraints , the optimal number of vms hosted in the dcs can be derived according to the availability of the renewable energy .however , with the consideration of the network constraints and the background traffic , it is difficult and impossible to solve the re - aim problem online . for the re - aim problem , many vmsare migrated from a set of dcs ( source dcs ) to another set of dcs ( destination dcs ) .therefore , we can model the vm migration problem as a manycast problem .since the re - aim problem is np - hard , we propose heuristic algorithms to solve this problem .these algorithms determine which vm should be migrated to which dc and select a proper routing path in the network to avoid congesting the network .we consider two network scenarios .the first one is a network with light traffic load . under this network scenario ,we design manycast with shortest path routing ( _ manycast - spr _ ) algorithm for vm migrations .the second network scenario is a network with heavy traffic load . in this case, we propose manycast least - weight path routing ( _ manycast - lpr _ ) for migrating vms among dcs .when the network load is light , there are more available spectrum slots .it is easy to find a path with available spectrum slots for the migration requests .then , a lower computing complexity algorithm is preferred .manycast - spr only uses the shortest path , and thus it is a very simple algorithm .hence , manycast - spr is expected to provision the inter - dc vm migration requests in a network with light work loads .the manycast - spr algorithm , as shown in alg .[ manycast - spr ] , is to find the shortest routing path that satisfies the vm migration requirement and the network resource constraints . in the beginning, we input , and , and then calculate the optimal work loads distribution .afterward , we get , and . then , we collect the migration requests . here , our algorithm splits the manycast requests into many anycast requests .now , we start to find a source dc and a destination dc for the request .the migration will try to use the shortest path from to ; the request is carried out if the network congestion constraint is satisfied ; otherwise , the request is denied .then , we update and for the next request . after many rounds of migration , if or is empty , or eq . ( [ eq : c4 ] )is not satisfied , the migration is completed . details of the manycast - spr algorithmis described in _ algorithm _ [ manycast - spr ] . here, is a function which targets to get the path for the migration .the complexity of manycast - spr is .here , is the complexity to determine the optimal work loads , is the complexity to determine , and is the complexity in building the vm set for the migration . is the complexity of determining the path for manycast - spr .when the work load of the network is heavy , the number of available spectrum slots in the network is limited . since manycast - spr only uses the shortest path ( one path ) for routing , it is impossible for manycast - spr to find an available path and spectrum slots in this scenario . 
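the core path-selection step of manycast-spr can be sketched as follows; this is not the paper's implementation, since the ordering of source-destination pairs and the actual spectrum-slot assignment are omitted, and the networkx package is assumed. for each source dc the shortest path to a candidate destination is taken, and the migration is accepted only if the resulting congestion ratio stays below the threshold.

import networkx as nx

def manycast_spr_step(G, sources, destinations, demand_slots, theta):
    accepted = []
    for s in sources:
        for d in destinations:
            path = nx.shortest_path(G, s, d, weight="weight")
            links = list(zip(path, path[1:]))
            load = max((G[u][v]["used"] + demand_slots) / G[u][v]["capacity"] for u, v in links)
            if load <= theta:                          # congestion constraint satisfied
                for u, v in links:
                    G[u][v]["used"] += demand_slots
                accepted.append((s, d, path))
                break                                  # this source is served; otherwise the request is denied
    return accepted

G = nx.Graph()
for u, v in [(1, 2), (2, 3), (1, 4), (4, 3)]:
    G.add_edge(u, v, weight=1.0, capacity=300, used=120)   # illustrative background usage
print(manycast_spr_step(G, sources=[1], destinations=[3], demand_slots=40, theta=0.9))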
then , manycast - spr may block the migration request , and leads to high brown energy consumption of dcs .hence , we propose another algorithm manycast - lpr to achieve better routing performance , that results in low brown energy consumption .manycast - lpr checks -shortest paths from the source node to the destination node , and picks up the idlest path to serve the requests .the requests will be provisioned with a higher probability by manycast - lpr as compared to manycast - spr . in summary, manycast - lpr is expected to provision the inter - dc vm migration requests under a heavy work load .it targets to find a path with more available spectrum slots at the expense of a higher complexity .manycast - lpr , as shown in alg .[ manycast - lpr ] , is to find the least weight routing path that satisfies the vm migration requirement and the network resource constraints .the main difference between manycast - lpr and manycast - spr is using different ways to find a path . for manycast - spr, it first determines the source node and the destination node .manycast - lpr , however , finds the path first , then uses the path to find the source node and the destination node .the other steps are almost the same . since manycast - lpr should calculate the weights for all node pairs to find a path , it increases the complexity .details of the manycast - lpr algorithms is described in _ algorithm _[ manycast - lpr ] .the complexity for manycast - lpr is .here , is a function which targets to get the path for the migration . is the complexity to determine the optimal work loads , is the complexity to determine , and is the complexity in building the vm set for the migration . is the complexity of determining the path for manycast - lpr .the most complex part is to determine the set of vms for the migration .build , and by the the optimal work loads allocation collect manycast requests build , and by the the optimal work loads allocation collect manycast requests evaluate the proposed algorithms for the re - aim problem in this section . in order to make the re - aim problem simple ,we assume migratory vms can be completed in one time slot .the nsfnet topology , shown in fig .[ fig : nsf - green ] , is used for the simulation .there are 14 nodes , and the dcs are located at . the dcs are assumed to be equipped with wind turbines and solar panels , which provide the dcs with renewable energy , as shown in fig [ fig : nsf - green ] .the constant is randomly generated from ] , which is convenient for the simulation .the migration requests are generated by the optimal work loads distribution which is calculated based on and .the background traffic is randomly generated between node pairs in the network .the background traffic load is counted as an average of , where is an average arrival rate of the requests and is the holding period of each request . here , the background traffic arriving process is a poisson process , and the holding time is a negative exponential distribution .parameters which are used for the evaluation are summarized in table [ tab : simulation - parameters ] . & \{3 , 5 , 8 , 10 , 12 } + & vms + & servers + & + & ] + & units , unit for vm in average + & $ ] gb / s + & spectrum slots + & gbps + & spectrum slots + & + we run the simulation for 150 times , and exclude the scenario with empty vm requests traffic load . fig . 
[ fig : compare_energy ] shows the total cost of brown energy consumption of the strategy without using renewable energy , manycast - spr and manycast - lpr .apparently , manycast - spr and manycast - lpr can save brown energy substantially .manycast - spr saves about cost of brown energy as compared with the strategy without migration .manycast - lpr reduces up to cost of brown energy as compared with the strategy without migration .manycast - lpr has better performance because manycast - lpr employs the least weight path of all node pairs for routing , while manycast - lpr engages only the short path of one node pair . in order to obtain a better analysis , the running time of manycast - spr and manycast - lpr shown in fig .[ fig : compare_time ] .manycast - spr spends less time than manycast - lpr , implying that manycast - spr has a lower complexity and manycast - lpr has a higher computing complexity .it also illustrates that the time and the final cost value is a trade - off in the evaluation .manycast - lpr is more complex and hence incurs a lower brown energy cost .the results of manycast - spr for various are described in fig .[ fig : cm1000ks1 ] .the cost of brown energy consumption keeps increasing when the background traffic increases , because high background traffic tends to congest the network links and leads to more migration failures .apparently , a small brings more benefits than a big in reducing the energy cost . fig .[ fig : cm1000ks2 ] shows the results of manycast - lpr for various , almost the same results as shown in fig . [fig : cm1000ks1 ] , but the cost of the brown energy consumption is much less than that in fig . [fig : cm1000ks1 ] , because manycast - lpr can easily find a path which has available bandwidth for migration .obviously , manycast - lpr with achieves the best result with the lowest cost of consumed brown energy .all these results illustrate that a small leads to a lower cost of the brown energy consumption and a big induces a higher cost of the brown energy consumption .this is because it is difficult to find a path with enough bandwidth for a big , when the network has background traffic .a smaller achieves a lower energy cost at the cost of higher complexity . figs . [ fig : spr_time ] and [ fig : lpr_time ] show the running time of manycast - spr and that of manycast - lpr with different , respectively .we can observe that the computing time is decreased when the traffic load increases .for the same with a given background traffic load , manycast - spr consumes more time than manycast - lpr does . for either of the two algorithms under a specific background traffic load, we can see that the running time is nearly halved when is doubled .hence , a smaller brings a better performance but takes longer time , and a larger has worse performance with a shorter running time .datacenters are widely deployed for the increasing demands of data processing and cloud computing .the energy consumption of dcs will take up of the total ict energy consumption by 2020 . 
powering dcs with renewable energy can help save brown energy .however , the availability of renewable energy varies by locations and changes over time , and dcs work loads demands also vary by locations and time , thus leading to the mismatch between the renewable energy supplies and the work loads demands in dcs .inter - dc vm migration brings additional traffic to the network , and the vm mitigation is constrained by the network capacity , rendering inter - dc vm migration a great challenge .this paper addresses the emerging renewable energy - aware inter - dc vm migration problem .the main contribution of this paper is to reduce the network influence on the migration while minimizing the brown energy consumption of the dcs .the re - aim problem is formulated and proven to be np - hard .two heuristic algorithms , manycast - spr and manycast - lpr , have been proposed to solve the re - aim problem .our results show that manycast - spr saves about cost of brown energy as compared with the strategy without migration , while manycast - lpr saves about cost of brown energy as compared with the strategy without migration .the computing time of manycast - lpr is longer than that of manycast - spr because the complexity of manycast - lpr is higher than manycast - spr . in conclusion, we have demonstrated the viability of the proposed algorithms in minimizing brown energy consumption in inter - dc migration without congesting the network .m. sadiku , s. musa , and o. momoh , `` cloud computing : opportunities and challenges '' , _ ieee potentials _ ,1 , pp . 34-36 , jan . 2014 .y. zhang and n. ansari , `` hero : hierarchical energy optimization for data center networks '' , _ ieee systems journal _ , vol .9 , pp . 406-415 , jun . 2015 .y. zhang and n. ansari , " on architecture design , congestion notification , tcp incast and power consumption in data centers , _ ieee communications surveys tutorials _ , vol .1 , pp . 39-64 , jan .m. pickavet , _ et al ._ , `` worldwide energy needs for ict : the rise of poweraware networking '' , in proc . _ ants 2008 _ , pp .1-3 , dec . 2008 .u. mandal , _ et al ._ , `` greening the cloud using renewable - energy - aware service migration '' , _ ieee network _ , vol .36-43 , nov .t. han and n. ansari , `` powering mobile networks with green energy '' , _ ieee wireless communications _ , vol .1 , pp . 90-96 , feb . 2014 .i. goiri , _ et al ._ , `` designing and managing data centers powered by renewable energy '' , _ ieee micro _ , vol .3 , pp . 8-16 , may 2014 .t. wood , _ et al ._ , `` cloudnet : dynamic pooling of cloud resources by live wan migration of virtual machines '' , _ ieee / acm transactions on networking _ , vol .1-16 , aug .2014 . enabling long distance live migration with f5 and vmware vmotion.[online ] .available : https://f5.com/resources/white-papers/enablinglong-distance-live-migration-with-f5-and-vmware-vmotion .w. shieh , x. yi , and y. tang , `` transmission experiment of multi - gigabit coherent optical ofdm systems over 1000 km ssmf fibre '' , _ electronics letters _183-184 , feb .2007 . j. armstrong , `` ofdm for optical communications '' , _ j. lightw ._ , vol . 27 , pp . 189-204 , feb .2009 . c. develder , _ et al ._ , `` optical networks for grid and cloud computing applications '' , _ proceedings of the ieee _ , vol . 100 , pp . 1149-1167 , may 2012 . s. 
figuerola , _ et al ._ , `` converged optical network infrastructures in support of future internet and grid services using iaas to reduce ghg emissions '' , journal of lightwave technology , vol .12 , pp . 1941-1946 , jun . 2009 .m. ghamkhari and h. mohsenian - rad , `` energy and performance management of green data centers : a profit maximization approach '' , _ ieee transactions on smart grid _ , vol .2 , pp . 1017-1025 , jun . 2013 .s. fang , _ et al ._ , `` energy optimizations for data center network : formulation and its solution '' , in proc . _ globecom _ , pp .3256-3261 , dec . 2012 .d. cavdar and f. alagoz , `` a survey of research on greening data centers '' , in proc ._ globecom _ , pp .3237-3242 , dec . 2012 .w. deng , _ et al ._ , `` harnessing renewable energy in cloud datacenters : opportunities and challenges '' , _ ieee network _ , vol .1 , pp . 48-55 , jan .2014 m. gattulli , m. tornatore , r. fiandra , and a. pattavina , `` low - carbon routing algorithms for cloud computing services in ip - over - wdm networks '' , in proc ._ icc _ , pp .2999-3003 , jun . 2012 .a. kiani and n. ansari , `` toward low - cost workload distribution for integrated green data centers '' , _ ieee communications letters _ ,1 , pp . 26-29 , jan .k. christodoulopoulos , i. tomkos , and e. varvarigos , `` elastic bandwidth allocation in flexible ofdm - based optical networks '' , _j. lightw .1354-1366 , may 2011 .l. zhang and z. zhu , `` dynamic anycast in inter - datacenter networks over elastic optical infrastructure '' , in proc ._ icnc _ , pp . 491-495 , feb .l. zhang and z. zhu , `` spectrum - efficient anycast in elastic optical inter - datacenter networks '' , _ elsevier optical switching and networking ( osn ) _ , vol .250-259 , aug .
datacenters (dcs) are deployed on a large scale to support the ever-increasing demand for data processing in various applications, and their energy consumption has become a critical issue. powering dcs with renewable energy can effectively reduce brown energy consumption and thus alleviate this problem. owing to the geographical distribution of dcs, renewable energy generation and data-processing demands usually differ across dcs. migrating virtual machines (vms) among dcs according to the availability of renewable energy helps match the energy demands with the renewable energy generation in each dc, and thus maximizes the utilization of renewable energy. since migrating vms incurs additional traffic in the network, vm migration is constrained by the network capacity. the inter-datacenter (inter-dc) vm migration problem with network capacity constraints is np-hard. in this paper, we propose two heuristic algorithms that approximate the optimal vm migration solution. through extensive simulations, we show that the proposed algorithms, by migrating vms among dcs, can substantially reduce brown energy consumption.

renewable energy-aware inter-datacenter virtual machine migration over elastic optical networks
liang zhang, tao han, nirwan ansari
tr-anl-2015-005, august 26, 2015
keywords: manycast, cloud computing, elastic optical networks
we present a quantum algorithm for the satisfiability problem ( and other combinatorial search problems ) that works on the principle of quantum adiabatic evolution .an -bit instance of satisfiability is a formula where each clause is true or false depending on the values of some subset of the bits . for a single clause , involving only a few bits , it is easy to imagine constructing a quantum device that evolves to a state that encodes the satisfying assignments of the clause .the real difficulty , of course , lies in constructing a device that produces an assignment that satisfies all clauses .our algorithm is specified by an initial state in an -qubit hilbert space and a time - dependent hamiltonian that governs the state s evolution according to the schrdinger equation .the hamiltonian takes the form where each depends only on clause and acts only on the bits in . is defined for between and and is slowly varying .the initial state , which is always the same and easy to construct , is the ground state of . for each , the ground state of encodes the satisfying assignments of clause .the ground state of encodes the satisfying assignments of the intersection of all the clauses . according to the adiabatic theorem , if the evolution time is big enough , the state of the system at time will be very close to the ground state of , thus producing the desired solution . for this algorithm to be considered successfulwe require that grow only polynomially in , the number of bits . in this paperwe analyze three examples where grows only polynomially in .we are unable to estimate the required running time in general . the quantum adiabatic evolution that we are usingshould not be confused with cooling .for example , simulated annealing is a classical algorithm that attempts to find the lowest energy configuration of what we have called by generating the stochastic distribution proportional to , where is the inverse temperature , and gradually lowering the temperature to zero .in contrast , quantum adiabatic evolution forces the state of the system to remain in the ground state of the slowly varying . in section [ sec:1 ]we present the building blocks of our algorithm in detail .this includes some discussion of the adiabatic theorem and level crossings . in section [ sec:2 ]we illustrate the method on a small example that has three clauses , each acting on 2 bits .each 2-bit clause has more than one satisfying assignment but adiabatic evolution using of the form ( [ eq:0.2 ] ) produces the unique common satisfying assignment . in section [ sec:3 ]we look at examples that grow with the number of bits in order to study the dependence of the required running time on the number of bits .we give three examples of 2-sat problems , each of which has a regular structure , which allows us to analyze the quantum evolution . in these three cases the required evolution time is only polynomially big in the number of bits .we also look at a version of the grover problem that can be viewed as a relativized satisfiability problem . in this caseour algorithm requires exponential time to produce a solution .this had to be so , as explained in section [ sec : grover ] . 
in section [ sec:4 ]we show that our algorithm can be recast within the conventional paradigm of quantum computing , involving sequences of few - bit unitary operators .in this section we present a quantum algorithm for solving satisfiability problems .a quantum system evolves according to the schrdinger equation and the adiabatic theorem0 tells us how to follow this evolution in the case that is slowly varying .consider a smooth one - parameter family of hamiltonians , and take so that controls the rate at which varies .define the instantaneous eigenstates and eigenvalues of by with where is the dimension of the hilbert space .suppose is the ground state of , that is , according to the adiabatic theorem , if the gap between the two lowest levels , , is strictly greater than zero for all , then this means that the existence of a nonzero gap guarantees that obeying ( [ eq:1.1 ] ) remains very close to the instantaneous ground state of of the form ( [ eq:1.2 ] ) for all from to if is big enough .let us define the minimum gap by a closer look at the adiabatic theorem tells us that taking where can make arbitrarily close to 1 .for all of the problems that we study is of order a typical eigenvalue of and is not too big , so the size of is governed by .many computationally interesting problems can be recast into an equivalent problem of finding a variable assignment that minimizes an energy " function . as a specific example , consider 3-sat .an -bit instance of 3-sat is a boolean formula , ( [ eq:0.1 ] ) , that is specified by a collection of boolean clauses , each of which involves ( at most ) 3 of the bits .each bit can take the value or and the label runs from to .clause is associated with the 3 bits labeled , and . for each clause we define an energy function we then define the total energy as the sum of the individual s , clearly and if and only if satisfies all of the clauses . thus findingthe minimum energy configuration of tells us if the formula has a satisfying assignment .we will not distinguish between conventional clauses , which compute the or function of each constituent variable or negated variable , and generalized clauses , which are permitted to compute an arbitrary boolean function of the constituent variables .in some of our examples it will be more convenient to consider generalized clauses .if we go from classical to quantum computation we replace the bit by a spin- qubit labeled by where .the states are eigenstates of the component of the -th spin , so the hilbert space is spanned by the basis vectors .clause is now associated with the operator , the hamiltonian associated with all of the clauses , which we call , is the sum of hamiltonians each of which acts on a fixed number of bits . by construction , is nonnegative , that is , for all and if and only if is a superposition of states of the form where satisfy all of the clauses . in this context , solving a 3-sat problem is equivalent to finding the ground state of a hamiltonian .clearly many other computationally interesting problems can be recast in this form . 
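a short sketch of the clause energies just defined: the clause energy is 0 on the satisfying assignments of the clause and 1 otherwise, and the total energy counts violated clauses. clauses are represented here as sets of satisfying bit-tuples, which also covers the generalized clauses mentioned above; the instance is illustrative.

def clause_energy(z, bits, satisfying):
    return 0 if tuple(z[i] for i in bits) in satisfying else 1

def total_energy(z, clauses):
    return sum(clause_energy(z, bits, sat) for bits, sat in clauses)

# one 3-bit or-clause on bits (0, 1, 2): only (0, 0, 0) violates it
or_sat = {(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)} - {(0, 0, 0)}
clauses = [((0, 1, 2), or_sat)]
print(total_energy([0, 0, 0, 1], clauses), total_energy([1, 0, 0, 1], clauses))   # 1 and 0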
for a given problem ,specifying is straightforward but finding its ground state may be difficult .we now consider an -bit hamiltonian that is also straightforward to construct but whose ground state is simple to find .let be the 1-bit hamiltonian acting on the -th bit so continuing to take 3-sat as our working example , clause is associated with the bits , , and .now define and the ground state of is .this state , written in the basis , is a superposition of all basis vectors , note that we can also write where is the number of clauses in which bit appears in the instance of 3-sat being considered .the key feature of is that its ground state is easy to construct .the choice we made here will lead to an that is of the form ( [ eq:0.2 ] ) , that is , a sum of hamiltonians associated with each clause. we will now use adiabatic evolution to go from the known ground state of to the unknown ground state of .assume for now that the ground state of is unique .consider so from ( [ eq:1.2 ] ) , prepare the system so that it begins at in the ground state of . according to the adiabatic theorem , if is not zero and the system evolves according to ( [ eq:1.1 ] ) , then for big enough will be very close to the ground state of , that is , the solution to the computational problem . using the explicit form of( [ eq:1.14 ] ) and ( [ eq:2.20 ] ) we see that and are sums of individual terms associated with each clause . for each clause and accordingly then we have and this gives the explicit form of described in the introduction as a sum of hamiltonians associated with individual clauses .typically is not zero . to see this , note from ( [ eq:1.7 ] ) that vanishing is equivalent to there being some value of for which . consider a general hamiltonian whose coefficients are functions of where , , , and are all real .the two eigenvalues of this matrix are equal for some if and only if , , and . the curve in will typically not intersect the line unless the hamiltonian has special symmetry properties .for example , suppose the hamiltonian ( [ eq:1.20 ] ) commutes with some operator , say for concreteness .this implies that and .now for the two eigenvalues to be equal at some we only require to vanish at some .as varies from 0 to 1 it would not be surprising to find cross zero so we see that the existence of a symmetry , that is , an operator which commutes with the hamiltonian makes level crossing more commonplace .these arguments can be generalized to hamiltonians and we conclude that in the absence of symmetry , levels typically do not cross. we will expand on this point after we do some examples . in order for our method to be conceivably useful ,it is not enough for to be nonzero .we must be sure that is not so small that the evolution time is impractically large ; see ( [ eq:1.8 ] ) . for an -bit problemwe would say that adiabatic evolution can be used to solve the problem if is less than for some fixed whereas the method does not work if is of order for some .returning to ( [ eq:1.8 ] ) we see that the required running time also depends on given in ( [ eq:1.9new ] ) . using ( [ eq:1.19 ] )we have .therefore can be no larger than the maximum eigenvalue of . from ( [ eq:1.14 ] )we see that the spectrum of is contained in where is the number of terms in ( [ eq:1.14 ] ) , that is , the number of clauses in the problem . from ( [ eq:2.21 ] )we see that the spectrum of is contained in where . 
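The beginning Hamiltonian and the spectral bounds used in the running-time discussion can be checked numerically for small instances. In the sketch below the one-bit term is taken as (1 - sigma_x)/2 weighted by d_i, the number of clauses containing bit i; this explicit matrix form is an assumption (the symbols are elided above) chosen to be consistent with the uniform-superposition ground state described in the text, and the clause instance is illustrative only.

```python
# Hedged sketch: H_B as a sum of weighted one-qubit terms, H_P as the diagonal
# clause-violation operator, and a check of the spectral bounds quoted above.
import numpy as np
from itertools import product

n = 3
clauses = [((0, 1), lambda a, b: a == b), ((1, 2), lambda a, b: a != b), ((0,), lambda a: a == 1)]
dim = 2 ** n
sx, id2 = np.array([[0., 1.], [1., 0.]]), np.eye(2)

def embed(i, op):
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else id2)
    return out

d = [sum(i in bits for bits, _ in clauses) for i in range(n)]          # d_i = clauses containing bit i
H_B = sum(d[i] * 0.5 * (np.eye(dim) - embed(i, sx)) for i in range(n))
H_P = np.diag([sum(0.0 if pred(*(z[i] for i in bits)) else 1.0 for bits, pred in clauses)
               for z in product([0, 1], repeat=n)])

# The ground state of H_B is the uniform superposition over all basis states.
vals, vecs = np.linalg.eigh(H_B)
uniform = np.ones(dim) / np.sqrt(dim)
print("H_B ground energy:", vals[0], " overlap with uniform state:", abs(uniform @ vecs[:, 0]))

# Both spectra are bounded by a small multiple of the number of clauses, which is
# what keeps the running-time prefactor polynomial in n.
print("max eigenvalue of H_P:", float(np.diag(H_P).max()), " max eigenvalue of H_B:", vals[-1])
```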
for 3-sat , is no bigger than .we are interested in problems for which the number of clauses grows only as a polynomial in , the number of bits .thus grows at most like a polynomial in and the distinction between polynomial and exponential running time depends entirely on .we make no claims about the size of for any problems other than the examples given in section [ sec:3 ] . we will give three examples where is of order so the evolution time is polynomial in .each of these problems has a regular structure that made calculating possible .however , the regularity of these problems also makes them classically computationally simple . the question of whether there are computationally difficult problems that could be solved by quantum adiabatic evolution we must leave to future investigation .we have presented a general quantum algorithm for solving sat problems .it consists of : 1 .an easily constructible initial state ( [ eq:1.18new ] ) , which is the ground state of in ( [ eq:2.20 ] ) .2 . a time - dependent hamiltonian , , given by ( [ eq:1.18 ] ) that is easily constructible from the given instance of the problem ; see ( [ eq:1.14 ] ) and ( [ eq:2.20 ] ) .an evolution time that also appears in ( [ eq:1.18 ] ) .schrdinger evolution according to ( [ eq:1.1 ] ) for time .5 . the final state that for big enough will be ( very nearly ) the ground state of . 6 . a measurement of in the state .the result of this measurement will be a satisfying assignment of formula ( [ eq:0.1 ] ) , if it has one ( or more ) .if the formula ( [ eq:0.1 ] ) has no satisfying assignment , the result will still minimize the number of violated clauses .again , the crucial question about this quantum algorithm is how big must be in order to solve an interesting problem .it is not clear what the relationship is , if any , between the required size of and the classical complexity of the underlying problem .the best we have been able to do is explore examples , which is the main subject of the rest of this paper .here we give some one- , two- , and three - qubit examples that illustrate some of the ideas of the introduction .the two - qubit examples have clauses with more than one satisfying assignment and serve as building blocks for the three - qubit example and for the more complicated examples of the next section .consider a one - bit problem where the single clause is satisfied if and only if .we then take which has as its ground state . for the beginning hamiltonian we take ( [ eq:2.21 ] ) with and , the smooth interpolating hamiltonian given by ( [ eq:1.19 ] ) has eigenvalues , which are plotted in fig.[fig:1 ] . we see that is not small and we could adiabatically evolve from to with a modest value of . 
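The level diagram of the one-bit example just described can be reproduced with a few lines of code. The explicit 2x2 matrices below are assumed reconstructions, since the symbols are elided in the text: the clause is taken to be satisfied only when the bit equals 1 (by symmetry the gap is the same for either choice), and the beginning Hamiltonian is (1 - sigma_x)/2.

```python
# Minimal numerical check of the one-bit example: H(s) = (1-s) H_B + s H_P.
import numpy as np

H_P = np.diag([1.0, 0.0])                                  # penalty 1 for z = 0, 0 for z = 1
H_B = 0.5 * (np.eye(2) - np.array([[0., 1.], [1., 0.]]))   # (1 - sigma_x)/2

s_grid = np.linspace(0.0, 1.0, 401)
levels = np.array([np.linalg.eigvalsh((1 - s) * H_B + s * H_P) for s in s_grid])
gap = levels[:, 1] - levels[:, 0]
print("minimum gap %.3f at s = %.2f" % (gap.min(), s_grid[gap.argmin()]))
# For these matrices the minimum comes out near 0.71 at s = 0.5: the gap stays of
# order one, so a modest evolution time suffices, as stated above.
```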
at this pointwe can illustrate why we picked the beginning hamiltonian , , to be diagonal in a basis that is _ not _ the basis that diagonalizes the final problem hamiltonian .suppose we replace by keeping as in ( [ eq:2.1 ] ) .now is diagonal in the -basis for all values of .the two eigenvalues are and , which are plotted in fig.[fig:2 ] .the levels cross so is zero .in fact there is a symmetry , commutes with for all , so the appearance of the level cross is not surprising .adiabatically evolving , starting at , we would end up at , which is _ not _ the ground state of .however , if we add to any small term that is not diagonal in the basis , we break the symmetry , and will have a nonzero gap for all .for example , the hamiltonian { { \epsilon}}(1-s ) & 1-s\end{bmatrix}\ ] ] has for small and the eigenvalues are plotted in fig.[fig:3 ] for a small value of .this `` level repulsion '' is typically seen in more complicated systems whereas level crossing is not . a simple two - qubit example has a single two - bit clause that allows the bit values and but not and .we call this clause `` 2-bit disagree . ''we take of the form ( [ eq:2.21 ] ) with and , and we take of the form ( [ eq:1.14 ] ) with the single 2-bit disagree clause .the instantaneous eigenvalues of of the form ( [ eq:1.19 ] ) are shown in fig.[fig:4 ] .there are two ground states of , and .the starting state , which is the ground state of , is ( [ eq:1.18new ] ) with .there is a bit - exchange operation that commutes with .since the starting state is invariant under the bit - exchange operation , the state corresponding to the end of the lowest level in fig.[fig:4 ] is the symmetric state .the next level , , begins at the antisymmetric state and ends at the antisymmetric state .because commutes with the bit - exchange operation there can be no transitions from the symmetric to the antisymmetric states .therefore the curve in fig.[fig:4 ] is irrelevant to the adiabatic evolution of the ground state and the relevant gap is .closely related to 2-bit disagree is the `` 2-bit agree clause , '' which has and as satisfying assignments .we can obtain for this problem by taking for 2-bit disagree and acting with the operator that takes .note that is invariant under this transformation as is the starting state given in ( [ eq:1.18new ] ) .this implies that the levels of corresponding to 2-bit agree are the same as those for 2-bit disagree and that beginning with the ground state of , adiabatic evolution brings you to .another two - bit example that we will use later is the clause `` imply '' . herethe satisfying assignments are , , and .the relevant level diagram is shown in fig.[fig:5 ] . next we present a three - bit example that is built up from two - bit clauses so we have an instance of 2-sat with three bits .we take the 2-bit imply clause acting on bits 1 and 2 , the 2-bit disagree clause acting on bits 1 and 3 , and the 2-bit agree clause acting on bits 2 and 3 .although each two - bit clause has more than one satisfying assignment , the full problem has the unique satisfying assignment .the corresponding quantum hamiltonian , , we write as the sum of hamiltonians each of which acts on two bits , h_{{\mathrm{b}}}&= ( h_{{\mathrm{b}}}^{(1 ) } + h_{{\mathrm{b}}}^{(2 ) } ) + ( h_{{\mathrm{b}}}^{(1 ) } + h_{{\mathrm{b}}}^{(3 ) } ) + ( h_{{\mathrm{b}}}^{(2 ) } + h_{{\mathrm{b}}}^{(3 ) } ) \ .\label{eq:3.5add}\end{aligned}\ ] ] the eigenvalues of are shown in fig.[fig:6 ] . we see that is not zero . 
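This can be checked directly. The sketch below (ours, not the authors' code) builds the three-bit instance just described -- "imply" on bits 1 and 2, "disagree" on bits 1 and 3, "agree" on bits 2 and 3 -- brute-forces the satisfying assignments, and scans the gap of H(s). "Imply" is taken to exclude only the assignment in which its first bit is 1 and its second is 0; this is an assumption, since the explicit satisfying assignments are elided in the text.

```python
# Three-bit 2-SAT example: unique satisfying assignment and a nonzero minimum gap.
import numpy as np
from itertools import product

n = 3
clauses = [((0, 1), lambda a, b: not (a == 1 and b == 0)),   # imply
           ((0, 2), lambda a, b: a != b),                    # disagree
           ((1, 2), lambda a, b: a == b)]                    # agree

assignments = list(product([0, 1], repeat=n))
diag = np.array([sum(0.0 if pred(*(z[i] for i in bits)) else 1.0 for bits, pred in clauses)
                 for z in assignments])
print("satisfying assignments:", [z for z, e in zip(assignments, diag) if e == 0])

sx, id2 = np.array([[0., 1.], [1., 0.]]), np.eye(2)
def embed(i, op):
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else id2)
    return out

d = [sum(i in bits for bits, _ in clauses) for i in range(n)]
H_B = sum(d[i] * 0.5 * (np.eye(2 ** n) - embed(i, sx)) for i in range(n))
H_P = np.diag(diag)

gaps = []
for s in np.linspace(0.0, 1.0, 201):
    e = np.linalg.eigvalsh((1 - s) * H_B + s * H_P)
    gaps.append(e[1] - e[0])
print("minimum gap over s: %.3f" % min(gaps))   # strictly positive, consistent with fig.[fig:6]
```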
starting in the ground state of , and evolving according to ( [ eq:1.1 ] ) with system will end up in the ground state of for .this example illustrates how our algorithm evolves to the unique satisfying assignment of several overlapping clauses even when each separate clause has more than one satisfying assignment .the alert reader may have noticed that two of the levels in 6 cross .this can be understood in terms of a symmetry .the hamiltonian of ( [ eq:3.5add ] ) is invariant under the unitary transformation , as is .now the three states with energy equal to 4 at are , , and .the transformation in the basis is , so the states are invariant under , whereas goes to minus itself .we call these two different transformation properties `` invariant '' and `` odd '' .thus at there are two invariant states and one odd state with energy 4 .we see from 6 that one combination of these states ends up at energy 2 when .the energy-2 state at is , which is invariant so the level moving across from energy 4 to energy 2 is invariant .this means that one of the two levels that start at energy 4 and end at energy 1 is invariant and the other is odd .since the hilbert space can be decomposed into a direct sum of the invariant and odd subspaces and accordingly is block diagonal , the invariant and odd states are decoupled , and their crossing is not an unlikely occurrence . since ,in this simple 3-bit example , we do see levels cross you may wonder if we should expect to sometimes see the two lowest levels cross in more complicated examples .we now argue that we do not expect this to happen and even if it does occur it will not effect the evolution of the ground state .first note that the transformation which is a symmetry of ( [ eq:3.5add ] ) is not a symmetry of the individual terms in the sum .thus it is unlikely that such symmetries will typically be present in more complicated -bit examples .however , it is inevitable that certain instances of problems will give rise to hamiltonians that are invariant under some transformation .imagine that the transformation consists of bit interchange and negation ( in the basis ) as in the example just discussed .then the starting state given by ( [ eq:1.18new ] ) is invariant .assume that has a unique ground state .since is invariant this state must transform into itself , up to a phase .however , from the explicit form of the ground state we see that it transforms without a phase , that is , it is invariant .thus , following the evolution of the ground state we can restrict our attention to invariant states .the gap that matters is the smallest energy difference between the two lowest invariant states .here we discuss four examples of -bit instances of satisfiability . in three of the examplesthe problems are classically computationally simple to solve .these problems also have structure that we exploit to calculate in the corresponding quantum version . 
in each case goes like , so these problems can be solved in polynomial time by adiabatic quantum evolution .the other example is the `` grover problem''1 , which has a single ( generalized ) -bit clause with a unique satisfying assignment .if we assume that we treat the clause as an oracle , which may be queried but not analyzed , it takes classical queries to find the satisfying assignment .our quantum version has of order , so the time required for quantum adiabatic evolution scales like , which means that there is no quantum speedup .nonetheless , it is instructive to see how it is possible to evaluate for the grover problem . [sec : ring ] consider an -bit problem with clauses , each of which acts only on adjacent bits , that is , clause acts on bits and where runs from 1 to and bit is identified with bit .furthermore we restrict each clause to be either `` agree '' , which means that and are satisfying assignments or `` disagree '' , which means that and are satisfying assignments .suppose there are an even number of disagree clauses so that a satisfying assignment on the ring exists .clearly given the list of clauses it is trivial to construct the satisfying assignment . also , if is a satisfying assignment , so is , so there are always exactly two satisfying assignments .the quantum version of the problem has where each is either agree or disagree .the ground states of are and all in the basis .define the unitary transformation z_j'=z_j & \mbox{if .}}\ ] ] under this transformation becomes and the symmetric ground state of is we take to be ( [ eq:2.21 ] ) with bits and each . is invariant under the transformation just given .this implies that the spectrum of , with given by ( [ eq:3.1 ] ) , is identical to the spectrum of with given by ( [ eq:3.2 ] ) .thus when we find using ( [ eq:3.2 ] ) we will have found for all of the -bit agree - disagree problems initially described .we can write using ( [ eq:3.2 ] ) for as we denote the ground state given by ( [ eq:1.18new ] ) as .define the operator that negates the value of each bit in the basis , that is , .this can be written as since and =0 ] clauses , and obviously the collection of clauses is highly redundant in determining the satisfying assignments .we chose this example to explore whether this redundancy could lead to an extremely small .in fact , we will give numerical evidence that goes like for this problem , whose symmetry simplifies the analysis .as with the problem discussed in section [ sec:3.1 ] , at the quantum level we can restrict our attention to the case of all agree clauses , and we have each bit participates in clauses , so when constructing using ( [ eq:2.21 ] ) we take for all .we can write explicitly for this problem which in terms of the total spin operators and is + s \bigl[\frac{n^2}{4 } - s_zs_z \bigr ] \ .\ ] ] as in section [ sec:3.3 ] , it is enough to consider the symmetric states . using ( [ eq:3.47 ] ), we can find the matrix elements and numerically find the eigenvalues of this -dimensional matrix . actually there are two ground states of , and , corresponding to all bits having the value 0 or all bits having the value 1 .the hamiltonian is invariant under the operation of negating all bits ( in the basis ) as is the initial state given by ( [ eq:1.18new ] ) .therefore we can restrict our attention to invariant states . in fig.[fig:11 ] we show the two lowest invariant states for 33 bits .the gap is clearly visible .( because the invariant states all have an even number of 1 s in the -basis . 
) in fig.[fig:12 ] we plot against .the straight line shows that with . for this problemthe maximum eigenvalues of and are both of order so appearing in ( [ eq:1.8 ] ) is no larger than . adiabatic evolution with only as big as will succeed in finding the satisfying assignment for this set of problems .the algorithm described in this paper envisages continuous - time evolution of a quantum system , governed by a smoothly - varying time - dependent hamiltonian . without further development of quantum computing hardware ,it is not clear whether this is more or less realistic than conventional quantum algorithms , which are described as sequences of unitary operators each acting on a small number of qubits . in any case, our algorithm can be recast within the conventional quantum computing paradigm using the technique introduced by lloyd 4 .the schrdinger equation ( [ eq:1.1 ] ) can be rewritten for the unitary time evolution operator , and then to bring our algorithm within the conventional quantum computing paradigm we need to approximate by a product of few - qubit unitary operators .we do this by first discretizing the interval $ ] and then applying the trotter formula at each discrete time .the unitary operator can be written as a product of factors where .we use the approximation which is valid in ( [ eq:4.3 ] ) if \ .\ ] ] using ( [ eq:1.18 ] ) this becomes we previously showed ( in the paragraph after eq .( [ eq:1.20 ] ) ) that grows no faster than the number of clauses , which we always take to be at most polynomial in .thus we conclude that the number of factors must be of order times a polynomial in .each of the terms in ( [ eq:4.3 ] ) we approximate as in ( [ eq:4.4 ] ) .now where and are numerical coefficients each of which is between 0 and 1 . to use the trotter formula for each , ,we need .since and are at most a small multiple of the number of clauses , we see that need not be larger than times a polynomial in .now ( [ eq:4.7 ] ) is a product of terms each of which is or . from ( [ eq:2.21 ] )we see that is a sum of commuting one - bit operators. therefore can be written ( exactly ) as a product of one - qubit unitary operators .the operator is a sum of commuting operators , one for each clause. therefore can be written ( exactly ) as a product of unitary operators , one for each clause acting only on the qubits involved in the clause .all together can be well approximated as a product of unitary operators each of which acts on a few qubits .the number of factors in the product is proportional to times a polynomial in .thus if the required for adiabatic evolution is polynomial in , so is the number of few - qubit unitary operators in the associated conventional quantum computing version of the algorithm .we have presented a continuous - time quantum algorithm for solving satisfiability problems , though we are unable to determine , in general , the required running time .the hamiltonian that governs the system s evolution is constructed directly from the clauses of the formula .each clause corresponds to a single term in the operator sum that is .we have given several examples of special cases of the satisfiability problem where our algorithm runs in polynomial time .even though these cases are easily seen to be classically solvable in polynomial time , our algorithm operates in an entirely different way from the classical one , and these examples may provide a small bit of evidence that our algorithm may run quickly on other , more interesting cases .
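As a companion to the construction in section [sec:4] above, the following sketch discretizes and Trotterizes the evolution for the three-bit example of section [sec:2]. Each step applies the diagonal factor exp(-i dt s H_P) followed by exp(-i dt (1-s) H_B), which factorizes exactly into one-qubit unitaries because H_B is a sum of commuting one-bit terms. The run time T, the step count, and the closed-form one-qubit exponential are illustrative choices, not parameters taken from the paper.

```python
# Hedged sketch of the Trotterized product of few-qubit unitaries approximating U(T).
import numpy as np
from itertools import product

n = 3
clauses = [((0, 1), lambda a, b: not (a == 1 and b == 0)),
           ((0, 2), lambda a, b: a != b),
           ((1, 2), lambda a, b: a == b)]
diag = np.array([sum(0.0 if pred(*(z[i] for i in bits)) else 1.0 for bits, pred in clauses)
                 for z in product([0, 1], repeat=n)])
d = [sum(i in bits for bits, _ in clauses) for i in range(n)]

id2, sx = np.eye(2), np.array([[0., 1.], [1., 0.]])
def one_qubit_u(theta):
    # exp(-i * theta * (1 - sigma_x)/2), written in closed form
    return np.exp(-1j * theta / 2) * (np.cos(theta / 2) * id2 + 1j * np.sin(theta / 2) * sx)

def u_beginning(theta):
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, one_qubit_u(d[i] * theta))
    return out

T, steps = 50.0, 2000                                        # hypothetical run time and discretization
dt = T / steps
psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)       # ground state of H_B
for m in range(steps):
    s = (m + 0.5) / steps
    psi = u_beginning(dt * (1 - s)) @ (np.exp(-1j * dt * s * diag) * psi)

success = np.sum(np.abs(psi[diag == diag.min()]) ** 2)
print("probability of measuring the satisfying assignment: %.3f" % success)
# For large enough T this probability approaches 1, in line with the adiabatic theorem.
```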
We give a quantum algorithm for solving instances of the satisfiability problem, based on adiabatic evolution. The evolution of the quantum state is governed by a time-dependent Hamiltonian that interpolates between an initial Hamiltonian, whose ground state is easy to construct, and a final Hamiltonian, whose ground state encodes the satisfying assignment. To ensure that the system evolves to the desired final ground state, the evolution time must be big enough. The time required depends on the minimum energy difference between the two lowest states of the interpolating Hamiltonian. We are unable to estimate this gap in general. We give some special symmetric cases of the satisfiability problem where the symmetry allows us to estimate the gap and we show that, in these cases, our algorithm runs in polynomial time.
many protein functions fundamentally depend on structural flexibility . complex conformational transitions , induced by ligand binding for example , are often essential to proteins participating in regulatory networks or enzyme catalysis . more generally , a protein s ability to sample a variety of conformational sub - states implies that proteins have an intrinsic flexibility and mobility that influences their function. while experimental measurement can offer direct dynamical information about specific residues , uncovering the detailed mechanisms controlling conformational transitions between two meta - stable states is often elusive . in this paperwe present an analytic model that aims to clarify the relationship between main - chain dynamics and the mechanisms controlling conformational transitions of flexible proteins .in particular , we examine the mechanism for the open / closed transition of the n - terminal domain of calmodulin ( ncam ) to explore how calcium binding and target recognition can be understood by changes in the mobility and the degree of partial order of the protein backbone .calmodulin ( cam ) may be an ideal model system to illustrate how conformational flexibility is a major determinant of biological function .cam is found in all eucaryotic cells and functions as a multipurpose intracellular ca receptor , mediating many ca-regulated processes .cam is a small ( 148 amino acid ) dumbbell shaped protein with two domains connected by a flexible linker .each domain of cam contains a pair of helix - loop - helix ca -binding motifs called ef - hands ( helices a / b and c / d in the n - terminal domain ) .these two ef - hands are connected by a flexible b / c helix - linker ( see fig . [ fig:2nd_3d ] ) . in each domainthe four helices of apo - cam are directed in a somewhat antiparallel fashion giving the domains a relatively compact structure while leaving the ca-binding loops exposed .the conformational change induced by binding ca can be described as a change in ef - hand interhelical angle ( between helices a / b and c / d ) from nearly antiparallel ( apo , closed conformation ) to nearly perpendicular ( holo , open conformation ) orientation .further this domain opening mechanism in ncam indicates that binding of ca occurs almost exclusively within ef - hands , not between them. the structural rearrangement from closed to open exposes a large hydrophobic surface rich in methionine residues responisble for molecular recognition of various cellular targets such as myosin light chain kinase .the high flexibility of cam is essential to its function .the flexibility of the central helix linking the two domains allows the activated domains to simultaneously interact with target peptides .the conformational flexibility of the domains themselves allow for considerable binding promiscuity of target peptides , a property essential to its function as a primary messenger in ca signal transduction. while similar in structure and fold , the two domains of cam are quite different in terms of their flexibility , melting temperatures , and ca-binding affinities. the conformational dynamics of ca-loaded and ca-free cam are well characterized by solution nmr. site specific internal dynamics monitored by model free order parameters , indicate that the helices of the apo - cam domains are well - folded on the picosecond to nanosecond timescale , while the ca-binding loops , helix - linker and termini are more flexible. 
on the other hand , spin - spin relaxation ( or transverse auto - relaxation ) rates , , indicate that the free and bound forms of the regulatory protein exchange on the millisecond timescale. akke and coworkers have investigated the rate of conformational exchange between the open and closed conformational substrates of c - terminal cam ( ccam ) domain by nmr spin relaxation experiments. comparison of exchange rates as a function of ca concentration have established that the conformational exchange in apo - ccam involves an equilibrium switching between the closed and open states that is independent of ca concentration. x - ray crystallography temperature factors give additional insight into the conformational freedom and internal flexibility of cam in the open and closed state .recently , grabarek proposed a detailed mechanism of ca driven conformational change in ef - hand proteins based on the analysis of a trapped intermediate x - ray structure of ca-bound cam mutant. this two - step ca-binding mechanism is based on the hypothesis that ca-binding and the resultant conformational change in all two ef - hand domains is determined by a segment of the structure that remains fixed as the domain opens .this segment , called the ef - hand--scaffold , refers to the bond network that connects the two ca ions .it includes the backbone and the two hydrogen bonds formed by the residues in the 8 position of binding loops ( ile27 and ile63 ) and the c = o groups of the residues in the 7 position of the binding loops ( thr26 and thr62). indeed , in the absence of ca , the n - terminal end of the binding loop is found to be poorly structured and very dynamic from nmr structures and x - ray temperature factors. functional distinction between the two ends of the binding loops in the domain opening mechanism is buttressed by the great variability of the amino acid sequences of the n - terminal ends of the ca-binding loops compared with the more conserved c - terminal ends across a variety of different ef - hand ca-binding proteins. in this paper , we study the role of flexibility in the conformational transition of cam through an extension of a coarse - grained variational model developed to characterize protein folding. this model accommodates two meta - stable folded conformations as minima of the calculated free energy surface .the natural order parameters of this model , discussed in detail in the methods section , is well suited to describe partially ordered ensembles essential to the conformational dynamics of flexible proteins .transition routes and conformational changes of the protein are determined by constrained minimization of a variational free energy surface parameterized by the degree of localization of each residue about its mean position . the computational time to calculatethe transition route for ncam is on the order of several minutes on a typical single - processor pc .in addition to extensive experimental work characterizing the inherent flexibility of cam , our results also benefit from all atom molecular dynamics simulations as well as recent coarse - grained simulations inspired by models developed to characterize protein folding. 
although subject to systematic errors due to approximations , analytic models have the important advantage that the results are free of statistical noise that can obscure simulation results ( particularly troublesome when characterizing low probability states ) .a configuration of a protein is expressed by the position vectors of the -carbons of the polypepetide backbone .we are interested in describing transitions between two known structures denoted by and .partially ordered ensembles of polymer configurations are described by a reference hamiltonian ^ 2.\ ] ] where is the temperature and is boltzmann s constant . here , the first term enforces chain connectivity , in which the connectivity matrix , , corresponds to a freely rotating chain with mean bond length valance angle between successive bond vectors set to by . the variational parameters , , control the magnitude of the fluctuations about -carbon position vectors .the variational parameters , ( ) , specify residue positions as an interpolation between to .the boltzmann weight for a constrained chain described by is proportional to \ ] ] where denotes the correlations of monomers and relative to the mean locations , with . here, the correlations are given by the matrix inverse , and the mean positions of each monomer interpolate between the coordinates in each native structure , .\ ] ] the statistical properties of a structural ensemble can be described in terms of the first two moments and since is harmonic . in this model ,the probability for a particular configurational ensemble at temperature is given by the variational free energy .here , is the entropy loss due to localizing the residues around the mean postions the energy is derived from two - body interactions between native contacts , } \epsilon_{ij } u_{ij}$ ] , where is the average of the pair potential over , and is the strength of a fully formed contact between residues and given by miyazawa - jernigan interaction parameters. the sum is restricted to a set of contacts determined by pairs of residues in the proximity in each of the meta - stable conformations .the pair potential between two monomers is developed by a sum of three gaussians .the parameters are chosen so that has a minimum at with value formed by the long - range attractive interactions and intermediate - range repulsive interaction as in ref . . excluded volume interactionsare represented by a short - range repulsive potential with and is chosen so that each contact has , where is the basic energy unit of the miyazawa - jernigan scaled contacts. the energy of a contact between residues and in a partially ordered chain is given by .\end{aligned}\ ] ] in this work , we consider a two - state model in which the contacts are separated into three sets : contacts that occur in reference structure ( 1 ) only , contacts that occur in reference structure ( 2 ) only , and contacts in common from both reference structures .then , we consider that each contact involved exclusively with only one structure is in equilibrium with energy from the other state ( which is zero ) .that is , we replace the pair energy for contacts in sets and according to % = 1 + \exp[-\epsilon_{ij}\langle u(r_{ij } ) \rangle_0/k_{\mathrm{b } } t]\\ \epsilon_{ij } u_{ij } = -k_b t \log \left [ 1 + \exp(-\epsilon_{ij}\langle u(r_{ij})\rangle_0/k_bt ) \right].\ ] ] this form is analogous to coupling between conformational basins in folding - inspired molecular dynamics simulation. 
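The two-state combination of contact energies in the last equation is simple to implement. The sketch below (function and variable names are ours; energies are expressed in units of kT) shows the limiting behaviour: a strongly formed contact keeps its basin energy, while a contact whose basin energy is weak or unfavorable relaxes toward the zero-energy alternative, exactly the equilibrium between basins described above.

```python
# Hedged sketch of the two-state contact energy:
#   eps_ij * u_ij  ->  -kT * log(1 + exp(-eps_ij * <u(r_ij)>_0 / kT))
import numpy as np

def two_state_contact_energy(basin_energy, kT=1.0):
    """basin_energy = eps_ij * <u(r_ij)>_0 evaluated in the variational ensemble (units of kT)."""
    return -kT * np.log1p(np.exp(-basin_energy / kT))

# Strongly formed contact: the combined energy tracks the basin energy.
print(two_state_contact_energy(-5.0))   # ~ -5.0 kT
# Weak or broken contact: the combined energy relaxes to the zero-energy alternative.
print(two_state_contact_energy(+5.0))   # ~ 0
# The function vectorizes over an array of contacts, e.g. all contacts unique to one basin:
print(two_state_contact_energy(np.array([-4.0, -1.0, 2.0])))
```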
contacts described by eq .[ eq : contact ] independently switch on or off depending on the conformational density characterized by a set of constraints .analysis of the free energy surface parameterized by follows the program developed to describe folding: the ensemble of structures controlling the transition is characterized by the monomer density at the saddlepoints of the free energy . at this point , we simplify our model and restrict the interpolation parameter to be the same for all residues , following kim et al .. then , the numerical problem simplifies to minimizing the free energy with respect to rather than finding saddlepoints in . to explore the nature of conformational dynamics in detail, we apply this model to the n - terminal domain of cam ( ncam ) . in particular, we use residues numbered 4 - 75 of unbound ncam ( apo , 1cfd ) and bound ncam ( holo , 1cll ) ( see fig . [ fig:2nd_3d ] ) . in our model , we have defined closed ncam ( 1cfd ) as structure ( 1 ) and open ncam ( 1cll ) as structure ( 2 ) .thus , the interpolation parameter corresponds to the closed state , and corresponds to the open state .the coordinates of the open / closed structure was rotated to minimize the rmsd of -carbons between the two structures. we note global alignment has the risk of possibly obscuring or averaging out some local structural differences .the temperature for the open / closed transition is taken to be the folding temperature ( ) of the open ( holo , 1cll ) structure with . for comparison ,the folding temperature for closed ( apo , 1cfd ) structure is . for a given set of constraints , ,the monomer density of a partially ordered ensemble can be characterized by the gaussian measure of similarity to conformation described by & = & \left\langle \exp\left [ -\frac{3\alpha^n}{2a^2}(\mathbf{r}_i - \mathbf{r}_i^{n_1})^2\right ] \right\rangle_0 \nonumber\\ \label{eq : density 1 } & = & ( 1 + \alpha^ng_{ii})^{-3/2}\exp\left [ -\frac{3\alpha^n}{2a^2 } \frac{(\mathbf{s}_i - \mathbf{r}_i^{n_1})^2}{1 + \alpha^ng_{ii}}\right].\end{aligned}\ ] ] similarly , the structural similarity to the conformation described by is defined as = ( 1 + \alpha^ng_{ii})^{-3/2}\exp\left [ -\frac{3\alpha^n}{2a^2}\frac{(\mathbf{s}_i - \mathbf{r}_i^{n_2})^2}{1 + \alpha^ng_{ii}}\right].\ ] ] the structural similarity relative to the native structures given by and local order parameters suitable to describing conformational transitions between metastable states in proteins . to investigate the detailed main - chain dynamics controlling the structural change in cam, we characterize the relative similarity to the closed structure along the transition route through the normalized measure where is the monomer density of the residue with respect to the closed conformation ( eq . [ eq : density 1 ] ) .similarly , we represent the relative structural similarity to the open conformation as where is the monomer density of the residue with respect to the open conformation ( eq . [ eq : density 2 ] ) . in the open state , and , while in the closed state and . to represent the structural changes more clearly , it is convenient to consider the difference , for each residue. 
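A small numerical sketch of these order parameters is given below: for a residue with mean position s_i and fluctuation G_ii, it evaluates the Gaussian similarity to the closed and open reference coordinates and a normalized difference. The sign convention of the difference, the value of the bond length a, and the toy coordinates are assumptions on our part (the corresponding symbols are elided above); alpha = 0.5 follows the value quoted in the figure captions.

```python
# Hedged sketch of the per-residue native-density order parameters defined above.
import numpy as np

def native_density(s_i, r_ref, G_ii, alpha, a):
    # (1 + alpha*G_ii)^(-3/2) * exp[ -(3*alpha / 2*a^2) * |s_i - r_ref|^2 / (1 + alpha*G_ii) ]
    pref = (1.0 + alpha * G_ii) ** -1.5
    dist2 = np.sum((s_i - r_ref) ** 2)
    return pref * np.exp(-1.5 * alpha * dist2 / (a ** 2 * (1.0 + alpha * G_ii)))

def delta_rho(s_i, r_closed, r_open, G_ii, alpha=0.5, a=3.8):
    rho1 = native_density(s_i, r_closed, G_ii, alpha, a)   # similarity to closed (1cfd)
    rho2 = native_density(s_i, r_open, G_ii, alpha, a)     # similarity to open (1cll)
    # one plausible normalization: +1 near the closed reference, -1 near the open one
    return (rho1 - rho2) / (rho1 + rho2)

# Toy residue lying between the two reference positions (placeholder coordinates, angstroms).
s_i = np.array([1.0, 0.0, 0.0])
print(delta_rho(s_i, r_closed=np.array([0., 0., 0.]),
                r_open=np.array([4., 0., 0.]), G_ii=0.2))
```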
this difference shifts the relative degree of localization to be between and corresponding to the open and closed conformations , respectively .the local mean square fluctuations of -carbon positions ( related to the temperature factors from x - ray crystallography ) are a natural set of order parameters for the reference hamiltonian in our model .this parameter , , contains information about the degree of structural order and conformational flexibility of each residue . in fig .[ fig : gii ] we have plotted versus sequence number at different values of , the parameter that controls the uniform interpolation between the open structure ( ) and the closed structure ( ) .[ fig : b_3d ] shows the corresponding 3d structures of ncam domain with the residues colored according to . aside from the very flexible ends of two terminal helicesa and d , the ca-binding loops and the helix linker possess the highest flexibility .the calculated fluctuations from our model exhibit very good qualitative agreement with x - ray temperature factors and simulation results of cam . * binding loops .* each ef - hand in cam coordinates ca through a 12-residue loop : asp20-glu31 in loop i and asp56-glu67 in loop ii .the c - terminal ends of the loops contain a short -sheet ( residues 26 - 28 in loop i and residues 62 - 64 in loop ii ) adjacent the last three residues that are part of the exiting helices b and d , respectively . as shown in fig .[ fig : gii ] , the loops remain relatively flexible even in the open conformation .the highest flexibility is near the two glycines in position 4 of the ca-binding loops i ( gly23 ) and ii ( gly59 ) .this invariable gly residue provides a sharp turn required for the proper geometry of the ca-binding sites. the linker between helices b and c is also very mobile , with the highest flexibility near residue glu45 . taken together , the mobility of the loops and b / c linker indicates that the domain opening depends entirely on a set of inherent dynamics , or `` intrinsic plasticity '' , of cam. a closer look at the fluctuations of the ca-binding loops reveals that the n - terminal part of each loop is more flexible than the c - terminal part .this agrees with nmr data characterizing the flexibility of the n - terminal and c - terminal part of loop iii and iv of the c - terminal domain. in the transition route ( from closed open ) , the n - terminal ends of the loops stiffen gradually . on the other hand ,in the c - terminal part of the loops the short -sheet structure ( residues 26 - 28 in loop i and 62 - 64 in loop ii ) remain rigid ( see fig .[ fig : gii ] and [ fig : b_3d ] ) .also the last three residues of the loops ( residues 29 - 31 in loop i / helix b and residues 65 - 67 in loop ii / helix d ) remain relatively rigid , stabilized by the exiting helices b and d respectively. this immobile region , the ef - hand -scaffold , is central to a recent proposed mechanism for cam and other ef - hand domains. fig .[ fig : gii ] shows that residues thr26 and ile27 ( in -sheet of loop i ) and thr62 and ile63 ( in -sheet of loop ii ) remain very rigid during the domain opening . 
it is also interesting to compare the relative flexibility of binding loop i and ii .it is clear that binding loop ii is more flexible than loop i in the both conformations ( see fig .[ fig : gii ] and [ fig : b_3d](a ) ) .in particular , the connection between helix a and the binding loop i is much more rigid than the connection between helix c and the binding loop ii .this large difference in flexibility suggests that binding loop ii of ncam is more dominate in the mechanism for the structural transition .a similar mechanism in c - terminal cam domain was also observed from nmr studies , where the ca-dependent exchange contribution is dominated by binding loop iv with lower ( higher flexibility ) than loop iii. * helices b and c and the b / c linker . * fig .[ fig : gii ] and fig .[ fig : b_3d ] also shows that the bottom part of helix c ( close to b / c helix linker ) is very flexible in apo ncam . upon opening , the flexibility of helix c decreases significantly .[ see the change in color from blue to white ( fig .[ fig : b_3d](a)-(c ) ) at the bottom part ( close to b / c helix - linker ) of helix c and from white to red at the middle part of helix c. ] in contrast , the top part of helix b ( close to binding loop i ; residues 2931 ) becomes more flexible than the bottom part of helix b ( close to b / c helix - linker ; residues 3237 ) during closed to open transition ( see fig .[ fig : gii ] ) .we also note that residues 3742 of the b / c helix - linker shows significant increase in flexibility during opening of the domain .this change in flexibility of the b / c helix - linker helps facilitate the concerted reorientation of helices b and c during the closed open transition .similar behavior was also observed in molecular dynamics simulation of cam for this six - residue ( residues 3742 ) segment .the results discussed in the previous section gives a picture of the closed to open transition with good overall agreement with experiment and simulation results on an isolated apo - cam domain .nevertheless , the analysis has focused primarily on the difference in the magnitude of fluctuations of the two meta - stable states .we now turn our attention to the predicted transition mechanism and qualitative nature of structural changes along the transition route .such a description includes : along the transition route from closed to open , what structural changes are predicted to occur early / late , and which are predicted to happen gradually / cooperatively .while such details have yet to be revealed directly through measurement , in principle , site - directed mutagenesis experiments can be used to identify kinetically important structural regions of ncam . to clarify the transtition route ,we introduce a structural order parameter that measures the similarity to the open or closed state , given in eq .[ eq : natdens_diff ] .this order parameter is defined so that corresponds to the closed conformation and corresponds to the open conformation of ncam domain .[ fig : drhon ] illustrates the conformational transition in ncam domain in terms of for each residue .an alternative representation of the same data is shown in fig .[ fig : drhon_3d ] ; here , the value of is represented as colors ranging from red ( ) to white ( = 0 ) to blue ( ) superimposed on the interpolated structure for selected values of .we first notice that an early transition in the binding loops and in the central region of helix c evident in fig . 
[fig : drhon ] .[ see also the gradual change in color from blue to red in the structures of fig .[ fig : drhon_3d](a)-(d ) .] we also note the concerted structural change of parts of helices b and c and flexible b / c helix - linker ( residues 3149 ) .in particular , the flexible b / c helix - linker ( residues 38 - 44 ) in fig .[ fig : drhon ] exhibits a cooperative transition .residue gln41 which located in this linker region is highly mobile according to nmr data. the change in color from red to blue in the b / c helix linker in fig .[ fig : drhon_3d](a ) and ( b ) indicates that the structural transition of the n - terminal part ( close to helix b ) of this linker occurs earlier its c - terminal part ( close to helix c ) .[ fig : drhon ] and fig .[ fig : drhon_3d ] also show a delayed initiation of structural change in residues 47 of helix a , residues 2730 of binding loop i and n - terminal part of helix b. specifically , the residues near the top part of helix b ( close to binding loop i ) and in binding loop i , have very little structural change at the beginning of domain opening , with a sharp , cooperative transition near the end .[ see the relatively slow color change ( from red to blue ) in this part of helix b and binding loop i in fig .[ fig : drhon_3d](a)-(d ) . ]although , the middle part of helix c ( residues 5052 ) has some limited structural change early in the transition , it remains quite immobile after that .[ see fig .[ fig : drhon ] and the early color change from red to blue in fig .[ fig : drhon_3d ] . ] * binding loops i and ii . * because of the central importance of the interactions between the binding loops in the recently proposed two - step ca-binding mechanism , this ef-scaffold region is highlighted in fig .[ fig : loops_3d ] . in the first step of this binding mechanism ,the ca is immobilized by the structural rigidity in the plane of -sheet and the ligands from n - terminal part of the binding loops . in the second step , the backbone torsional flexibility of the ef-scaffold enables repositioning of the c - terminal part of the binding loop together with the exiting helix ( helix b in loop i and helix d in loop ii). since the ca ions are not included in our model and we can not characterize backbone torsional flexibility of the ef-scaffold , our analysis is independent of that developed in ref . .the closed to open conformational transition of each binding loop is quite different in fig .[ fig : loops_3d ] .we predict that the structural changes in binding loop ii occur before binding loop i upon domain opening ( see the relatively slow color change from red to blue in binding loop i than loop ii in fig .[ fig : loops_3d ] ) .since the flexibility of binding loop ii is also greater , this suggests that during ca-binding process the loop ii is more dominates the overall conformational change between the closed and open state .this agrees with results based on the all atom molecular dynamics simulations of ncam discussed by vigil et al .. fig .[ fig : loops_3d ] also shows that the n - terminal ends of the loops have relatively an early transition compared to the c - terminal ends .furthermore , the conformation change of the c - terminal end of binding loop i is more cooperative , presumably relying on the earlier structural change in binding loop ii .specifically , the closed state structure residue in position 9 ( thr28 ) of the loop i is very stable as shown in fig .[ fig : residues](a ) .this is due to a hydrogen bonding between thr28 and glu31 . 
fig .[ fig : residues](a ) also suggests that the structural change of glu31 occurs before thr28 upon domain opening , and proceeds through the transition much more gradually .similar hydrogen bonding is also present between asn64 and glu67 in binding loop ii .nevertheless , compared to the corresponding residues in loop i , the structural change of these two residues is quite gradual [ see fig . [fig : residues](a ) ] . nevertheless , asn64 does seem to have a somewhat sharper transition than glu67 .finally , residues gly61 and thr62 in binding loop ii exhibit little structural change in fig. [ fig : loops_3d ] as the domain begins to open .* methionine residues . * the large hydrophobic binding surfaces that open in both domains of cam are especially rich in methionine residues , with four methionines in each domain occupying nearly 46% of the total hydrophobic surface area. these side chains as well as other aliphatic residues , such as valine , isoleucine and leucine , which make up the rest of the hydrophobic binding surface are highly dynamic in solution. the flexibility of the residues composing hydrophobic binding surface for target peptides explains cam s high degree of binding promiscuity .here we consider the main - chain flexibility .the four methionine residues in ncam are situated in position 36 , 51 , 71 and 72 .the closed to open structural transition of residues met36 and met71 are similar and relatively sharp compared to residue met72 which is quite gradual as shown in fig .[ fig : residues](b ) .this suggests that residues met36 and met71 remains relatively buried in the beginning of the domain opening .curiously , from fig .[ fig : residues](b ) residue met51 in the middle part of helix c at , shows sudden increase in during closed to open conformational change . the one dimensional free energy profile parameterized by the interpolation parameter is shown in fig .[ fig : f_q ] .the minimum corresponding to the open state is very shallow and unstable compared to the closed state .combined molecular dynamics simulations and small angle x - ray scattering studies on apo ncam and ca-bound ncam by vigil et al . have also shown that in aqueous solution the closed state dominates the population .the equilibrium populations for the closed and open state from our model are found to be 94% and 6% respectively . for comparison ,the nmr measurement of apo ccam indicate a minor population of 510%. these results suggest that on average , the residues in the hydrophobic surface of cam are well protected from solvent .the maximum of the free energy occurs quite close to the open state at , though the barrier is very broad in terms of this reaction coordinate .we also consider the free energy of the global structural parameter where is given in in eq .[ eq : natdens_diff ] .[ fig : f_q ] shows that is also a reasonable reaction coordinate for the transition .the barrier broadens somewhat , with the maximum free energy occurring around . in terms of the global structure ,this roughly corresponds to 60%75% of ncam being similar to open state configuration in the transition state ensemble .even though the open state minimum is not well isolated , we estimate the conformational transition rate from closed to open using the arrhenius form , where is the free energy difference between the closed conformation and transition - state ensemble . 
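The arithmetic behind the population and rate estimates of this section is sketched below. A free-energy difference of about 2.75 kT between the open and closed minima reproduces the 94%/6% split quoted above; the barrier height and the Arrhenius prefactor, by contrast, are placeholder values inserted only to show the form of the calculation (the prefactor assumption is taken up in the next sentence of the text).

```python
# Hedged sketch: two-state populations and an Arrhenius-type rate estimate.
import numpy as np

kT = 1.0                        # work in units of k_B * T
dF_open_minus_closed = 2.75     # reproduces the 94%/6% closed/open populations quoted above
dF_barrier = 6.0                # hypothetical barrier measured from the closed minimum (kT)
prefactor = 1.0e7               # hypothetical attempt frequency (1/s)

p_closed = 1.0 / (1.0 + np.exp(-dF_open_minus_closed / kT))
p_open = 1.0 - p_closed
rate_closed_to_open = prefactor * np.exp(-dF_barrier / kT)

print("closed/open populations: %.2f / %.2f" % (p_closed, p_open))
print("closed -> open rate: %.2e 1/s" % rate_closed_to_open)
```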
assuming the prefactor gives the estimate .this value is in reasonable agreement with the transition rate estimate of based on nmr exchange rate data of ccam. primary motivation for the work presented in this paper is to understand protein functions that involve large scale ( main - chain ) dynamics and flexibility. proteins with relatively large conformational freedom include those in which folding and binding are coupled. , as well as hinge bending motions or proteins with high plasticity such as ion binding sites, and proteins with allosteric transitions. while not nearly as developed as the energy landscape theory of protein folding, a general thermodynamic framework for the energy landscape theory of protein - protein binding, large conformational transitions, and the coupling between folding and binding is beginning to emerge . aside from somenoted exceptions, relatively little theoretical work has focused on detailed analysis of transition mechanisms of flexible proteins in terms of specific ensembles of kinetic pathways .the dynamics of conformational transitions between well - defined conformational basins are generally controlled by relatively low probability partially ordered ensembles .the main challenge is to describe the transition state ensembles at the residue level giving a site - specific description of the transition mechanism .modern nmr relaxation experiments have provided a wealth of data about internal dynamics and conformational sub - states quantitatively on fast ( nanosecond ) and slow ( micro- to millisecond ) timescales. such studies are very useful in identifying residues with high flexibility upon target binding , not only through movements of surface loops and side chains , but also by global motions of the core structure. these experiments , however , provide only a few local structural changes and have not been able to capture the molecular details necessary to fully understand the mechanism of conformational transitions .whereas atomistic simulations can potentially bridge the gap on time scale up to microsecond , this timescale falls orders of magnitude short for slow protein dynamics ( millisecond to second ) . also , the use of atomistic approaches becomes computationally inefficient with the increased size of a system . to overcome the problems associated with all - atom simulations , many studieshas demonstrated the use of coarse - grained protein models with simplified representations , such as , only -carbons as point masses and simplified energy functions. such models require much less computational cost making them practical to describe the conformational transitions of even large proteins. analyzing the fluctuations about a single minimum has been surprisingly successful in identifying relevant cooperative motions in a wide range of proteins .the commonly used tirion potential ( which can be viewed as a harmonic go - model ) gives a simple one parameter model in which the relevant motions for the transition is identified as one of many low frequency normal modes. while this approach can provide considerable insight , it offers a limited description of the transition because it is based only on the fluctuations about one structure .the tirion potential has recently been extended to include two conformations in which the contact map defining the potential and normal modes is updated as the protein is moved along a known reaction coordinate. 
local unfolding and flexibility is accommodated by relieving regions of high stress , `` cracking '' , which modifies the contact map .coarse - grained simulations in which the potential interpolates between two folded - state biased contact maps have also been introduced recently. for example , in the plastic network model of margakis and karplus the individual basins are approximated by the tirion potential and are then smoothly connected by a secular equation formulation .a similar interpolation was considered by okazaki et al . alternatively , best et al .developed a two - state approximation analogous to eq .[ eq : contact ] .these advances are similar in spirt to our approach , albeit with distinct approximations for the basic description of partially ordered ensembles .in this paper , we study the intrinsic flexibility and structural change in the n - terminal domain of cam ( ncam ) during open to close transition .the predicted transition route from our model gives a detailed picture of the interplay between structural transition , conformational flexibility and function of n - terminal calmodulin ( ncam ) domain .the results from our model are largely consistent with the important role that the immobile ef-scaffold region plays in the transition mechanism . dissection of the transition route of this region further suggests that it is the early structural change of loop ii that drives the cooperative completion of the interactions between the loops in the open structure .the strong qualitative agreement with available experimental measurements of flexibility is an encouraging validation of the model .recently , the folding dynamics of zinc - metallated protein ( azurin ) was studied using a similar variational model and compared with experiments for the detail coordination reaction coupled with the entatic state. a similar future study of detail coordination reaction for the complete description of conformational change stabilized by ion binding in cam seems very promising .ultimately , we wish to extend this model to investigate the binding mechanism and kinetic paths of several peptides to ca-loaded cam .since large conformational changes coupled to binding depends fundamentally on the fluctuations of partially folded conformations, this polymer based variational formalism can accommodate coupled folding and binding very naturally .we thank zenon grabarek for helpful suggestions and critically reading the manuscript . this work was supported in part by grant awarded by the ohio board of regents research challenge program . *references * 53 natexlab#1#1bibnamefont # 1#1bibfnamefont # 1#1citenamefont # 1#1url # 1`#1`urlprefix[2]#2 [ 2][]#2 , , , * * , ( ) ., , , * * , ( ) . , * * , ( ) . , , , , * * , ( ) . , , , ,* * , ( ) . ,* * , ( ) . , , , * * , ( ) . , , , * * , ( ) ., , , , * * , ( ) . ,* * , ( ) . , , , , * * , ( ) . ,* * , ( ) . , * * , ( ) . , , , , , , * * , ( ) . , , ,* * , ( ) ., , , * * , ( ) . , , , * * , ( ) ., , , * * , ( ) . , , , , , * * , ( ) . , , , ,* * , ( ) . ,* * , ( ) . , * * , ( ) . , * * , ( ) . , , , , ,* * , ( ) . , , , * * , ( ) . ,* * , ( ) . , * * , ( ) . , , , * * , ( ) . , * * , ( ) ., , , * * , ( ) . , * * , ( ) . , , , , * * , ( ) . , , , * * , ( ) . , , , * * , ( ) . , * * , ( ) . , , , * * , ( ) . , , , , * * , ( ) . , , , , , ,* * , ( ) . , , , , * * , ( ) . ,* * , ( ) . , , , , , , * * , ( ) . , , , * * , ( ) . , , , * * , ( ) ., , , , , , , * * , ( ) . , , , , * * , ( ) . ,* * , ( ) . , * * , ( ) . , * * , ( ) ., , , , * * , ( ) . 
,* * , ( ) ., , , * * , ( ) . , , , , , ,* * , ( ) . , , , * * , ( ) .the n - terminal domain of calmodulin ( ncam ) . ( a ) the ca-free ( apo , closed ) structure , pdb code 1cfd .( b ) the ca-bound ( holo , open ) structure , pdb code 1cll .( c ) the secondary structure of ncam is shown with one letter amino acid sequence code for residues 4 - 75 .the secondary structure of ncam is as follows : helix a ( 519 ) , ca-binding loop i ( 2031 ) , helix b ( 2937 ) , b / c helix - linker ( 3844 ) , helix c ( 4555 ) , ca-binding loop ii ( 5667 ) , helix d ( 6575 ) .note that , the last three residues of the binding loops i and ii are also part of the exiting helices b and d. there are short -sheet structures in binding loop i ( residues 2628 ) and loop ii ( residues 6264 ) .this , and other three - dimensional illustrations were made using visual molecular dynamics ( vmd). fluctuations vs sequence index of ncam for selected values of the interpolation parameter in the conformational transition route between open and closed . here the distance between successive monomers .different are denoted by , red ( ) open ; green ( ) ; blue ( ) ; pink ( ) ; orange ( ) and black ( ) closed .the secondary structure is indicated below the plot .change in fluctuations in ncam domain during the closed to open conformational transition .the 3d structure in ( a ) corresponds to the interpolation parameter , ( closed state ) ; ( b ) corresponds to ( intermediate state ) and ( c ) corresponds to ( open state ) .red corresponds to low fluctuations and blue corresponds to high . here, is the distance between successive monomers .difference between the normalized native density ( a measure of structural similarity ) of each residue for different .the change in color from red to blue is showing the closed open conformational transition of ncam .this is normalized to be at the open state minimum ( ; blue ) and 1 at the closed state minimum ( ; red ) . below the secondary structure of ncamis shown . here , in eq .[ eq : density 1 ] and eq .[ eq : density 2 ] is 0.5 .closed to open conformational transition in ncam with different interpolation parameter .the 3d structure in ( a ) corresponds to the interpolation parameter , ; ( b ) corresponds to ; ( c ) corresponds to and ( d ) corresponds to .the change in color from red to blue corresponds to different values of normalized native density ( a measure of structural similarity ) of each residue for different .red corresponds to ( closed conformation ) and blue ( open conformation ) corresponds to .comparison of structural change in binding loops i ( in bottom ) and ii ( in top ) in terms of the order parameter .the 3d structures in ( a)-(i ) corresponds to the interpolation parameter , -0.1 during the closed to open transition .the change in color from red to blue corresponds to different values of ( a measure of structural similarity ) of each residue .red corresponds to ( closed conformation ) and blue ( open conformation ) corresponds to .dynamical behavior of residues during conformational transition of ncam .the normalized native density difference vs are shown for four different group of residues .structural transition of ( a ) residues in position 9 ( thr28 and asn64 ) and position 12 ( glu31 and glu67 ) of the two binding loops ; ( b ) four hydrophobic methionine residues in positions 36 , 51 , 71 and 72 .free energy along the transition route . in the lower curvethe abscissa is the interpolation parameter . 
in the upper curve the abscissa is the global structural order parameter . the entropy across the transition is relatively constant , so that the free energy barrier is largely energetic .
the key to understanding a protein's function often lies in its conformational dynamics . we develop a coarse-grained variational model to investigate the interplay between structural transitions , conformational flexibility and function of the n-terminal calmodulin ( ncam ) domain . in this model , two energy basins corresponding to the `` closed '' apo conformation and the `` open '' holo conformation of the ncam domain are connected by a uniform interpolation parameter . the resulting detailed transition route from our model is largely consistent with the recently proposed ef-scaffold mechanism in ef-hand family proteins . we find that the n-terminal part of calcium-binding loops i and ii shows higher flexibility than the c-terminal part , which forms this ef-scaffold structure . the structural transitions of binding loops i and ii are compared in detail . our model predicts that binding loop ii , with higher flexibility and earlier structural change than binding loop i , dominates the conformational transition of the ncam domain .
the evaluation of the total causal effect of a given point exposure , treatment or intervention on an outcome of interest is arguably the most common objective of experimental and observational studies in the fields of epidemiology , biostatistics and in the social sciences .however , in recent years , investigators in these various fields have become increasingly interested in making inferences about the direct or indirect pathways of the exposure effect , through a mediator variable or not , that occurs subsequently to the exposure and prior to the outcome .recently , the counterfactual language of causal inference has proven particularly useful for formalizing mediation analysis . indeed , causal inference offers a formal mathematical framework for defining varieties of direct and indirect effects , and for establishing necessary and sufficient identifying conditions of these effects .a notable contribution of causal inference to the literature on mediation analysis is the key distinction drawn between so - called controlled direct effects versus natural direct effects . in words, the controlled direct effect refers to the exposure effect that arises upon intervening to set the mediator to a fixed level that may differ from its actual observed value [ , , ] .in contrast , the natural ( also known as pure ) direct effect captures the effect of the exposure when one intervenes to set the mediator to the ( random ) level it would have been in the absence of exposure [ , ] . as noted by ,controlled direct and indirect effects are particularly relevant for policy making , whereas natural direct and indirect effects are more useful for understanding the underlying mechanism by which the exposure operates .in fact , natural direct and indirect effects combine to produce the exposure total effect . to formally define natural direct and indirect effectsfirst requires defining counterfactuals .we assume that for each level of a binary exposure , and of a mediator variable , there exist a counterfactual variable corresponding to the outcome had possibly contrary to fact the exposure and mediator variables taken the value .similarly , for , we assume there exists a counterfactual variable corresponding to the mediator variable had possibly contrary to fact the exposure variable taken the value .the current paper concerns the decomposition of the total effect of on , in terms of natural direct and natural indirect effects , which , expressed on the mean difference scale , is given by\\[-8pt ] \qquad&=&\overbrace{\mathbb{e } ( y_{e=1,m_{e=1}}-y_{e=1,m_{e=0 } } ) } ^{\mathrm{natural\ indirect\ effect}}+ \overbrace{\mathbb{e } ( y_{e=1,m_{e=0}}-y_{e=0,m_{e=0 } } ) } % ^{\mathrm{natural\ direct\ effect } } , \nonumber\hspace*{-25pt}\end{aligned}\ ] ] where stands for expectation . in an effort to account for confounding bias when estimating causal effects , such as the average total effect ( [ total_effect ] ) from nonexperimental data , investigators routinely collect and adjust for in data analysis , a large number of confounding factors . 
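as a quick sanity check on this decomposition ( total effect = natural indirect effect + natural direct effect ) , one can simulate counterfactuals from a toy data-generating process and verify numerically that the two sides agree ; the process below , including the coefficients and the exposure-mediator interaction , is entirely of our own choosing and is not taken from any study cited here .

```python
# toy monte carlo check that E(Y_{1,M_1} - Y_{0,M_0}) equals the sum of the
# natural indirect effect E(Y_{1,M_1} - Y_{1,M_0}) and the natural direct
# effect E(Y_{1,M_0} - Y_{0,M_0}); data-generating process is illustrative only
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def mediator(e):               # counterfactual mediator M_e
    return 0.5 * e + rng.normal(size=n)

def outcome(e, m):             # counterfactual outcome Y_{e,m}, with interaction
    return 1.0 * e + 0.8 * m + 0.3 * e * m + rng.normal(size=n)

m0, m1 = mediator(0), mediator(1)
total    = np.mean(outcome(1, m1) - outcome(0, m0))
indirect = np.mean(outcome(1, m1) - outcome(1, m0))
direct   = np.mean(outcome(1, m0) - outcome(0, m0))
print(round(total, 3), round(indirect + direct, 3))   # agree up to monte carlo error
```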
because of the curse of dimensionality , nonparametric methods of estimation are typically not practical in such settings , and one usually resorts to one of two dimension - reduction strategies ; either one relies on a model for the outcome given exposure and counfounders , or alternately one relies on a model for the exposure , that is , the propensity score .recently , powerful semiparametric methods have been developed to analyze observational studies that produce so - called double robust and highly efficient estimates of the exposure total causal effect [ robins ( ) , , , ] and similar methods have also been developed to estimate controlled direct effects [ ] .an important advantage of a double robust method is that it carefully combines both of the aforementioned dimension reduction strategies for confounding adjustment , to produce an estimator of the causal effect that remains consistent and asymptotically normal , provided at least one of the two strategies is correct , without necessarily knowing which strategy is indeed correct [ ] .unfortunately , similar methods for making semiparametric inferences about marginal natural direct and indirect effects are currently lacking .thus , this paper develops a general semiparametric framework for obtaining inferences about marginal natural direct and indirect effects on the mean of an outcome , while appropriately accounting for a large number of confounding factors for the exposure and the mediator variables .our semiparametric framework is particularly appealing , as it gives new insight on issues of efficiency and robustness in the context of mediation analysis .specifically , in section [ sec2 ] , we adopt the sequential ignorability assumption of imai , keele and tingley ( ) under which , in conjunction with the standard consistency and positivity assumptions , we derive the efficient influence function and thus obtain the semiparametric efficiency bound for the natural direct and natural indirect marginal mean causal effects , in the nonparametric model in which the observed data likelihood is left unrestricted .we further show that in order to conduct mediation inferences in , one must estimate at least a subset of the following quantities : the conditional expectation of the outcome given the mediator , exposure and confounding factors ; the density of the mediator given the exposure and the confounders ; the density of the exposure given the confounders . ideally , to minimize the possibility of modeling bias, one may wish to estimate each of these quantities nonparametrically ; however , as previously argued , when as we assume throughout , we wish to account for numerous confounders , such nonparametric estimates will likely perform poorly in finite samples .thus , in section [ sec2.3 ] we develop an alternative multiply robust strategy . 
to do so , we propose to model ( i ) , ( ii ) and ( iii ) parametrically ( or semiparametrically ) , but rather than obtaining mediation inferences that rely on the correct specification of a specific subset of these models , instead we carefully combine these three models to produce estimators of the marginal mean direct and indirect effects that remain consistent and asymptotically normal ( can ) in a union model , where at least one but not necessarily all of the following conditions hold : the parametric or semi - parametric models for the conditional expectation of the outcome ( i ) and for the conditional density of the mediator ( ii ) are correctly specified ; the parametric or semiparametric models for the conditional expectation of the outcome ( i ) and for the conditional density of the exposure ( iii ) are correctly specified ; the parametric or semiparametric models for the conditional densities of the exposure and the mediator ( ii ) and ( iii ) are correctly specified .accordingly , we define submodels , and of corresponding to models ( a ) , ( b ) and ( c ) respectively .thus , the proposed approach is triply robust as it produces valid inferences about natural direct and indirect effects in the union model .furthermore , as we later show in section [ sec2.3 ] , the proposed estimators are also locally semiparametric efficient in the sense that they achieve the respective efficiency bounds for estimating the natural direct and indirect effects in , at the intersection submodel .section [ sec3 ] summarizes a simulation study illustrating the finite sample performance of the various estimators described in section [ sec2 ] , and section [ sec4 ] gives a real data application of these methods .section [ sec5 ] describes a strategy to improve the stability of the proposed multiply robust estimator which directly depends on inverse exposure and mediator density weights , when such weights are highly variable , and section [ sec6 ] demonstrates the favorable performance of two modified multiply robust estimators in the context of such highly variable weights . in section [ sec7 ] , we compare the proposed methodology to the prevailing estimators in the literature . based on this comparison , we conclude that the new approach should generally be preferred because an inference under the proposed method is guaranteed to remain valid under many more data generating laws than an inference based on each of the other existing approaches .in particular , as we argue below the approach of is not entirely satisfactory because , despite producing a can estimator of the marginal direct effect under the union model ( and therefore an estimator that is double robust ) , their estimator requires a correct model for the density of the mediator .thus , unlike the direct effect estimator developed in this paper , the van der laan estimator fails to be consistent under the submodel .nonetheless , the estimator of van der laan is in fact locally efficient in model , provided the model for the mediator s conditional density is either known , or can be efficiently estimated . this property is confirmed in a supplementary online appendix [ ] , where we also provide a general map that relates the efficient influence function for model to the corresponding efficient influence function for model , assuming an arbitrary parametric or semiparametric model for the mediator conditional density is correctly specified . 
in section [ sec8 ] , we describe a novel double robust sensitivity analysis framework to assess the impact on inferences about the natural direct effect , of a departure from the ignorability assumption of the mediator variable .we conclude with a brief discussion .suppose i.i.d .data on is collected for subjects . recall that is an outcome of interest , is a binary exposure variable , is a mediator variable with support , known to occur subsequently to and prior to and is a vector of pre - exposure variables with support that confound the association between and .the overarching goal of this paper is to provide some theory of inference about the fundamental functional of mediation analysis which judea pearl calls `` the mediation causal formula '' [ pearl ( ) ] and which , expressed on the mean scale , is \\[-8pt ] & & \hspace*{19pt}{}\times f_{m|e , x } ( m|e=0,x = x ) f_{x}(x)\,d\mu(m , x),\nonumber\end{aligned}\ ] ] and are respectively the conditional density of the mediator given and the density of , and is a dominating measure for the distribution of .hereafter , to keep with standard statistical parlance , we shall simply refer to as the `` mediation functional '' or `` m - functional '' since it is formally a functional on the nonparametric statistical model of all regular laws of the observed data that satisfy the positivity assumption given below ; that is , , with the real line .the functional is of keen interest here because it arises in the estimation of natural direct and indirect effects as we describe next .to do so , we make the consistency assumption ._ consistency : _ in addition , we adopt the sequential ignorability assumption of imai , keele and tingley ( ) which states that for ._ _ sequential ignorability:__ where states that is independent of given ; paired with the following : _ positivity : _ then , under the consistency , sequential ignorability and positivity assumptions , imai , keele and tingley ( ) showed that so that and , , are identified from the observed data , and so is the mean natural direct effect and the mean natural indirect effect . for binary , one might alternatively consider the natural direct effect on the risk ratio scale or on the odds ratio scale and similarly defined natural indirect effects on the risk ratio and odds ratio scales .it is instructive to contrast the expression ( [ main_functional ] ) for with the expression ( [ ate functional ] ) for corresponding to , and to note that the two expressions bare a striking resemblance except the density of the mediator in the first expression conditions on the unexposed ( with ) , whereas in the second expression , the mediator density is conditional on the exposed ( with ) .as we demonstrate below , this subtle difference has remarkable implications for inference . was the first to derive the m - functional under a different set of assumptions .others have since contributed alternative sets of identifying assumptions . 
in this paper , we have chosen to work under the sequential ignorability assumption of , , but note that alternative related assumptions exist in the literature [ , , , hafeman and vanderweele ( ) ] ; however , we note that disagree with the label `` sequential ignorability '' because its terminology has previously carried a different interpretation in the literature .nonetheless , the assumption entails two ignorability - like assumptions that are made sequentially .first , given the observed pre - exposure confounders , the exposure assignment is assumed to be ignorable , that is , statistically independent of potential outcomes and potential mediators .the second part of the assumption states that the mediator is ignorable given the observed exposure and pre - exposure confounders .specifically , the second part of the sequential ignorability assumption is conditional on the observed value of the ignorable treatment and the observed pretreatment confounders .we note that the second part of the sequential ignorability assumption is particularly strong and must be made with care .this is partly because it is always possible that there might be unobserved variables that confound the relationship between the outcome and the mediator variables , even upon conditioning on the observed exposure and covariates .furthermore , the confounders must all be pre - exposure variables ; that is , they must precede .in fact , proved that without additional assumptions , one can not identify natural direct and indirect effects if there are confounding variables that are affected by the exposure , even if such variables are observed by the investigator [ also see tchetgen tchetgen and vanderweele ( ) ] .this implies that , similarly to the ignorability of the exposure in observational studies , ignorability of the mediator can not be established with certainty , even after collecting as many pre - exposure confounders as possible .furthermore , as point out , whereas the first part of the sequential ignorability assumption could , in principle , be enforced in a randomized study , by randomizing within levels of the second part of the sequential ignorability assumption can not similarly be enforced experimentally , even by randomization . andthus , for this latter assumption to hold , one must entirely rely on expert knowledge about the mechanism under study .for this reason , it will be crucial in practice to supplement mediation analyses with a sensitivity analysis that accurately quantifies the degree to which results are robust to a potential violation of the sequential ignorability assumption .later in the paper , we develop a variety of sensitivity analysis techniques that allow the analyst to quantify the degree to which his or her mediation analysis results are robust to a potential violation of the sequential ignorability assumption . 
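before turning to efficiency theory , note that the identification result above already suggests a naive plug-in estimator of the m-functional : regress the outcome on the mediator and confounders among the exposed , fit a model for the mediator among the unexposed , and average the fitted outcome regression over mediator draws from the fitted unexposed law and over the empirical distribution of the confounders . the sketch below uses simple linear / gaussian working models purely for illustration ; these modeling choices ( and the function name ) are ours , not the paper's .

```python
import numpy as np

def mediation_plugin(Y, E, M, X, n_draws=200, rng=None):
    """naive plug-in for theta_0 = E_X[ E{ E(Y | M, E=1, X) | E=0, X } ]."""
    rng = rng or np.random.default_rng(0)
    X1 = np.column_stack([np.ones(len(Y)), X])             # design matrix with intercept

    # outcome working model E(Y | M, E=1, X): ols among the exposed
    D1 = np.column_stack([X1[E == 1], M[E == 1]])
    b_y, *_ = np.linalg.lstsq(D1, Y[E == 1], rcond=None)

    # mediator working model f(M | E=0, X): linear-gaussian fit among the unexposed
    b_m, *_ = np.linalg.lstsq(X1[E == 0], M[E == 0], rcond=None)
    sigma = np.std(M[E == 0] - X1[E == 0] @ b_m)

    # average the fitted outcome regression over mediator draws from the
    # fitted "unexposed" mediator law, then over the empirical law of X
    theta = 0.0
    for _ in range(n_draws):
        m_draw = X1 @ b_m + rng.normal(scale=sigma, size=len(Y))
        theta += np.mean(np.column_stack([X1, m_draw]) @ b_y)
    return theta / n_draws
```

a natural direct effect estimate on the mean-difference scale would then subtract from this quantity the analogous plug-in for the mean of the outcome under no exposure , i.e. the outcome regression fitted among the unexposed and averaged over the confounders .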
in this section ,we derive the efficient influence function for the m - functional in .this result is then combined with the efficient influence function for the functional [ , ] to obtain the efficient influence function for the natural direct and indirect effects on the mean difference scale .thus , in the following , we shall use the efficient influence function of which is well known to be where for , we define so that , .the following theorem is proved in the .[ teo1 ] under the consistency , sequential ignorability and positivity assumptions , the efficient influence function of the m - functional in model is given by and the efficient influence function of the natural direct and indirect effects on the mean difference scale in model are respectively given by and thus , the semiparametric efficiency bound for estimating the natural direct and the natural indirect effects in are respectively given by and .although not presented here , theorem [ teo1 ] is easily extended to obtain the efficient influence functions and the respective semiparametric efficiency bounds for the direct and indirect effects on the risk ratio and the odds ratio scales by a straightforward application of the delta method .an important implication of the theorem is that all regular and asymptotically linear ( ral ) estimators of , and in model share the common influence functions and , respectively .specifically , any ral estimator of the m - functional in model , shares a common asymptotic expansion , where = n^{-1}\sum_{i } [ \cdot ] _ { i} ] with , solving = % \mathbb{p}_{n } \biggl [ \frac{\partial}{\partial\beta_{m}}\log f_{m|e , x}^{\,\mathrm{par } } ( m|e , x;\widehat{\beta}_{m } ) \biggr ] , \ ] ] and we set for , a parametric model for the density of ] ; 2 .\sim -0.024 - 0.4x_{1}+0.4x_{2}+n(0,1) ] ; 4 . \sim \mathit{bernoulli}([1+\exp\{-(0.5-x_{1}+0.5x_{2} ] 6 .\sim1 + 0.2x_{1}+0.3x_{2}+1.4x_{3} ] was known by design not to depend on covariates , and therefore its estimation is not prone to modeling error . the continuous outcome andmediator variables were modeled using linear regression models with gaussian error , with main effects for included in the outcome regression and main effects for included in the mediator regression .table [ tab3 ] summarizes results obtained using , , and together with , , to estimate the direct and indirect effects of the treatment ..4d2.4d2.4d2.4@ & & & & & + direct effect & estimate & -0.0310 & -0.0310 & 0.0280 & -0.0409 + & s.e . & 0.0124 & 0.0620 & 0.0465 & 0.021 7 + indirect effect & estimate & -0.0160 & -0.0160 & -0.0750 & -0.0070 + & s.e . & 0.0372 & 0.0620 & 0.0434 & 0.021 7 + point estimates of both natural direct and indirect effects closely agreed under models and , and also agreed with the results of .we should note that inferences under our choice of are actually robust to the normality assumption and , as in , only require that the mean structure of ] is correct .in contrast , inferences under model require a correct model for the mediator density .this distinction may partly explain the apparent disagreement in the estimated direct effect under when compared to the other methods , also suggesting that the gaussian error model for is not entirely appropriate . 
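to make the role of the three working models more concrete , the schematic below assembles a multiply robust estimate of the m-functional from fitted objects of the kind just described ( outcome regression , mediator density , propensity score , and the implied eta(1,0,x) ) . the helper names are placeholders of our own , and the expression should be read as a sketch of the influence-function-based construction , not as a literal transcription of equation ( [ triply ] ) .

```python
import numpy as np

def m_functional_mr(Y, E, M, X, outcome_reg, med_dens, prop_score, eta):
    # outcome_reg(m, x) : fitted E(Y | M=m, E=1, X=x)
    # med_dens(m, e, x) : fitted density f(M=m | E=e, X=x)
    # prop_score(x)     : fitted P(E=1 | X=x)
    # eta(x)            : fitted integral of outcome_reg(m, x) over f(m | E=0, x)
    mu1 = outcome_reg(M, X)
    pi1 = prop_score(X)
    ratio = med_dens(M, 0, X) / med_dens(M, 1, X)        # mediator density ratio
    arm1 = (E == 1) / pi1 * ratio * (Y - mu1)            # residual correction, exposed arm
    arm0 = (E == 0) / (1.0 - pi1) * (mu1 - eta(X))       # residual correction, unexposed arm
    return np.mean(arm1 + arm0 + eta(X))
```

a multiply robust estimate of the natural direct effect then follows by subtracting a doubly robust estimate of the mean outcome under no exposure , mirroring the decomposition used throughout the paper .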
the multiply robust estimate of the natural direct effect is consistent with estimates obtained under models and , and is statistically significant , suggesting that the intervention may have beneficial direct effects on participants mental health ; while the multiply robust approach suggests a much smaller indirect effect than all other estimators although none achieved statistical significance .the triply robust estimator which involves inverse probability weights for the exposure and mediator variables , clearly relies on the positivity assumption , for good finite sample performance . but as recently shown by in the context of missing outcome data , a practical violation of positivity in data analysis can severely compromise inferences based on such methodology ; although their analysis did not directly concern the m - functional .thus , it is crucial to critically examine , as we do below in a simulation study , the extent to which the various estimators discussed in this paper are susceptible to a practical violation of the positivity assumption , and to consider possible approaches to improve the finite sample performance of these estimators in the context of highly variable empirical weights .methodology to enhance the finite sample behavior of is well studied in the literature and is not considered here ; see , for example , , and tan ( ) .we first describe an approach to enhance the finite sample performance of , particularly in the presence of highly variable empirical weights . to focus the exposition , we only consider the case of a continuous and a binary , but in principle , the approach could be generalized to a more general setting .the proposed enhancement involves two modifications .the first modification adapts to the mediation context , an approach developed for the missing data context ( and for the estimation of total effects ) in .the basic guiding principle of the approach is to carefully modify the estimation of the outcome and mediator models in order to ensure that the triply robust estimator given by equation ( [ triply ] ) has the simple m - functional representation where is carefully estimated to ensure multiple robustness .the reason for favoring an estimator with the above representation is that it is expected to be more robust to practical positivity violation because it does not directly depend on inverse probability weights .however , as we show next , to ensure multiple robustness , estimation of involves inverse probability weights , and therefore , indirectly depends on such weights .our strategy involves a second step to minimize the potential impact of this indirect dependence on weights . in the following ,we assume , to simplify the exposition , that a simple linear model is used : \beta_{y}.\ ] ] then , similar to , one can verify that the above m - functional representation of a triply robust estimator is obtained by estimating with obtained via weighted logistic regression in the unexposed - only , with weight ; and by estimating using weighted ols of on in the exposed - only , with weight provided that both working models include an intercept . the second enhancement to minimize undue influence of variable weights on the m - functional estimator , entails using in the previous step instead of , where with \bigr).\ ] ] this second modification ensures a certain boundedness property of inverse propensity score - weighting . 
specifically , for any bounded function of and ; consider for a moment the goal of estimating the counterfactual mean ; then it is well known that even though is bounded , the simple inverse - probability weighting estimator could easily be unbounded , particularly if positivity is practically violated .in contrast , as we show next , the estimator is generally bounded . to see why , note that }\bigl(1-\mathbb{p}_{n}(e)\bigr ) \biggr\}\\ & & { } + \mathbb{p}% _ { n } \ { r \}\end{aligned}\ ] ] which is bounded since the second term is bounded , and the first term is a convex combination of bounded variables , and therefore is also bounded .furthermore , ] 5 .^{-1 } ) ] 7 .^{-1}) ] 9 . .correctly specified working models were thus achieved when an additive linear regression of on , a logistic regression of with linear predictor additive in and and a logistic regression of with linear predictor additive in the , respectively .incorrect specification involved fitting these models with replacing , which produces higly variable weights . for instance , an estimated propensity score as small as occurred in the simulation study reflecting an effective violation of positivity ; similarly , a mediator predicted probability as small as also occured in the simulation study ..3d3.3d4.3d3.4d2.4d2.8@ & & & & & & & + all correct & bias & 0.001 & -0.207 & 0.498 & 0.003 & -0.08 & -0.079 + & s.e . & 2 .615 1 & 2 .615 5 & 2 .615 3 + wrong & bias & -9 .221 & 0.498 & -0.147 & -0.502 & -0.202 + & s.e . & 3 .141 + wrong & bias & -0.033 & -0.207 & -9 .497 & 0.001 & 0.046 & 0.046 + & s.e . & 2 .614 + wrong & bias & -0.001 & 0.132 & 210 .450 & 0.066 & -0.089 & -0.087 + & s.e .& 2.614 & 4.373 & 2336.92 & 4.891 & 2.619 & 2.615 + wrong & bias & -9.869 & -13.535 & 210.454 & -33.090 & -1.4609 & -2.487 + & s.e .& 3.322 & 5.256 & 2336.92 & 375.334 & 5.187 & 4.245 + wrong & bias & -9.355 & -10.220 & -9.496 & -4.346 & -3.579 & -3.579 + & s.e .& 3.224 & 10.539 & 15.376 & 3.912 & 3.480 & 3.441 + wrong & bias & -0.032 & 0.132 & 205.060 & 0.088 & -0.001 & -3.7710 ^ -5 + & s.e .& 2.614 & 4.373 & 2289.788 & 4.763 & 2.623 & 2.618 + wrong & bias & -9.355 & -13.535 & 205.060 & -37.757 & -4.223 & -5.253 + & s.e .& 3.224 & 5.356 & 2289.78 & 379.122 & 5.835 & 4.828 + .8d3.3d2.6d5.6d2.4d2.4@ & & & & & & & + all correct & bias & 0.0324 & 0.004 & -0.106 & 0.034 & -0.047 & -0.047 + & s.e . &1.136 & 3.06 & 6.490 & 1.136 & 1.137 & 1.137 + wrong & bias & -10.256 & -10.305 & -0.106 & 0.063 & -0.147 & -0.148 + & s.e . & 1.675 & 4.005 & 6.490 & 1.769 & 1.419 & 1.407 + wrong & bias & & 0.004 & -9.706 & 0.033 & 0.076 & 0.076 + & s.e . & 1.136 & 3.060 & 5.395 & 1.137 & 1.137 & 1.135 + wrong & bias & 0.032 & 0.135 & 2.410 ^ 6 & 1908.76 & -0.038 & -0.030 + & s.e . & 1.136 & 1.794 & 4.310 ^ 7 & 53911.63 & 1.400 & 1.242 + wrong & bias & -10.256 & -14.011 & 2.410 ^ 6 & -1.110 ^ 6 & 6.201 & 1.024 + & s.e . & 1.675 & 2.386 & 4.310 ^ 7 & 2.110 ^ 7 & 9.406 & 5.097 + wrong & bias & -9.705 & -10.305 & -9.706 & -4.216 & -3.555 & -3.557 + & s.e . & 1.626 & 4.004 & 5.395 & 1.667 & 1.527 & 1.510 + wrong & bias & 5.710 ^ -4 & 0.135 & 2.510 ^ 6 & 2034.83 & 0.0539 & 0.0599 + & s.e .& 1.136 & 1.794 & 4.610 ^ 7 & 56090.10 & 1.429 & 1.272 + wrong & bias & -9.075 & -14.011 & 2.510 ^ 6 & -1.210 ^ 6 & 4.659 & -0.755 + & s.e .& 1.626 & 2.386 & 4.610 ^ 7 & 2.210 ^ 7 & 10.121 & 5.910 + tables [ tab4 ] and [ tab5 ] summarize simulation results for , , , , and . 
when all three working models are correct , all estimators perform well in terms of bias , but there are clear differences between the estimators in terms of efficiency .in fact , , , and have comparable efficiency for , but , is far more variable .moreover , under mis - specification of a single model , , and remain nearly unbiased , and for the most part substantially more efficient than the corresponding consistent estimator in , , .when at least two models are mis - specified , the multiply robust estimators , and generally outperform the other estimators , although occasionally succumbs to the unstable weights resulting in disastrous mean squared error ; see table [ tab5 ] when model m and model e are both incorrect .in contrast , generally improves on which generally outperforms and for the most part and appear to eliminate any possible deleterious impact of highly variable weights .in this section , we briefly compare the proposed approach to some existing estimators in the literature .perhaps the most common approach for estimating direct and indirect effects when is continuous uses a system of linear structural equations ; whereby , a linear structural equation for the outcome , given the exposure , the mediator and the confounders , is combined with a linear structural equation for the mediator , given the exposure and confounders , to produce an estimator of natural direct and indirect effects .the classical approach of is a particular instance of this approach . in recent work , mainly motivated by pearl s mediation functional , several authors [ imai , keele and tingley ( ) , imai , keele and yamamoto ( ) , pearl( ) , , ] have demonstrated how the simple linear structural equation approach generalizes to accommodate both , the presence of an interaction between exposure and mediator variables , and a nonlinear link function , either in the regression model for the outcome , or in the regression model for the mediator , or both .in fact , when the effect of confounders is also modeled in such structural equations , inferences based on the latter can be viewed as special instances of inferences obtained under a particular specification of model for the outcome and the mediator densities . andthus , as previously shown in the simulations , an estimator obtained under a system of structural equations will generally fail to produce a consistent estimator of natural direct and indirect effects when model is incorrect , whereas , by using the proposed multiply robust estimator , valid inferences can be recovered under the union model , even if fails .a notable improvement on the system of structural equations approach is the double robust estimator of a natural direct effect due to .their estimator solves the estimating equation constructed using an empirical version of given in the online appendix .they show their estimator remains can in the larger submodel and therefore , they can recover valid inferences even when the outcome model is incorrect , provided both the exposure and mediator models are correct .unfortunately , the van der laan estimator is still not entirely satisfactory because unlike the proposed multiply robust estimator , it requires that the model for the mediator density is correct . 
nonetheless ,if the mediator model is correct , the authors establish that their estimator achieves the efficiency bound for model at the intersection submodel where all models are correct ; and thus it is locally semiparametric efficient in .interestingly , as we report in the online supplement , the semiparametric efficiency bounds for models and are distinct , because the density of the mediator variable is not ancillary for inferences about the m - functional .thus , any restriction placed on the mediator s conditional density can , when correct , produce improvements in efficiency .this is in stark contrast with the role played by the density of the exposure variable , which as in the estimation of the marginal causal effect , remains ancillary for inferences about the m - functional and thus the efficiency bound for the latter is unaltered by any additional information on the former [ ] . in the online appendix, we provide a general functional map that relates the efficient influence function for the larger model to the efficient influence for the smaller model where the model for the mediator is either parametric or semiparametric .our map is instructive because it makes explicit using simple geometric arguments , the information that is gained from increasing restrictions on the law of the mediator . in the online appendix, we illustrate the map by recovering the efficient influence function of van der laan and petersen in the case of a singleton model ( i.e. , a known conditional density ) for the mediator and in the case of a parametric model for the mediator .we describe a semiparametric sensitivity analysis framework to assess the extent to which a violation of the ignorability assumption for the mediator might alter inferences about natural direct and indirect effects .although only results for the natural direct effect are given here , the extension for the indirect effect is easily deduced from the presentation .let -\mathbb{e}% [ y_{1,m}|e = e , m\neq m , x = x ] , \ ] ] then that is , a violation of the ignorability assumption for the mediator variable , generally implies that thus , we proceed as in , and propose to recover inferences by assuming the selection bias function is known , which encodes the magnitude and direction of the unmeasured confounding for the mediator . in the following , the support of , is assumed to be finite . to motivate the proposed approach , suppose for the moment that is known ; then under the assumption that the exposure is ignorable given , we show in the that \\ & & \qquad = \mathbb{e } [ y_{1,m}|e=0,m = m , x = x ] \\ & & \qquad=\mathbb{e } [ y|e=1,m = m , x = x ] -t ( 1,m , x ) \bigl ( 1-f_{m|e , x } ( m|e=1,x = x ) \bigr ) \\ & & \quad\qquad{}+t ( 0,m , x ) \bigl ( 1-f_{m|e , x } ( m|e=0,x = x ) \bigr ) , \end{aligned}\ ] ] and therefore the m - functional is identified by -\mbox { } t ( 1,m , x ) \bigl ( 1-f_{m|e , x } ( m|e=1,x ) \bigr ) \nonumber\\ & & \hspace*{137pt } { } + t ( 0,m , x ) \bigl ( 1-f_{m|e , x } ( m|e=0,x ) \bigr ) \bigr\}\\ & & \qquad{}\times f_{m|e , x } ( m|e=0,x ) , \nonumber\end{aligned}\ ] ] which is equivalently represented as . \nonumber\end{aligned}\ ] ] below , these two equivalent representations , ( [ rep1 ] ) and ( [ rep2 ] ) , are carefully combined to obtain a double robust estimator of the m - functional , assuming is known . 
a sensitivity analysisis then obtained by repeating this process and reporting inferences for each choice of in a finite set of user - specified functions indexed by a finite dimensional parameter with corresponding to the unmeasured confounding assumption , that is , . throughout, the model for the probability mass function of is assumed to be correct .thus , to implement the sensitivity analysis , we develop a semiparametric estimator of the natural direct effect in the union model , assuming = for a fixed .the proposed doubly robust estimator of the natural direct effect is then given by where is as previously described , and ,\end{aligned}\ ] ] with & & \hspace*{-5pt}\quad=\sum_{m\in \mathcal{s } } \bigl\ { \widehat{\mathbb{e}}^{\mathrm{par } } ( y|x ,m = m , e=1 ) + t_{\lambda^{\ast } } ( 0,m , x ) \bigl ( 1-\widehat{f}% _ { m|e , x}^{\,\mathrm{par } } ( m|e=0,x ) \bigr ) \\[-2pt ] & & \hspace*{-5pt}\hspace*{165pt } { } -t_{\lambda^{\ast } } ( 1,m , x ) \bigl ( 1-\widehat{f}% _ { m|e , x}^{\,\mathrm{par } } ( m|e=1,x ) \bigr ) \bigr\}\\[-2pt ] & & \hspace*{-5pt}\hspace*{17pt}\qquad{}\times \widehat{f}% _ { m|e , x}^{\,\mathrm{par } } ( m|e=0,x ) .\end{aligned}\ ] ] our sensitivity analysis then entails reporting the set ( and the associated confidence intervals ) , which summarizes how sensitive inferences are to a deviation from the ignorability assumption .a theoretical justification for the approach is given by the following formal result , which is proved in the supplemental appendix .[ teo4 ] suppose ; then under the consistency , positivity assumptions and the ignorability assumption for the exposure, is a can estimator of the natural direct effect in .the influence function of is provided in the , and can be used to construct a corresponding confidence interval .it is important to note that the sensitivity analysis technique presented here differs in crucial ways from previous techniques developed by , and .first , the methodology of postulates the existence of an unmeasured confounder ( possibly vector valued ) which , when included in , recovers the sequential ignorability assumption .the sensitivity analysis then requires specification of a sensitivity parameter encoding the effect of the unmeasured confounder on the outcome within levels of , and another parameter for the effect of the exposure on the density of the unmeasured confounder given .this is a daunting task which renders the approach generally impractical , except perhaps in the simple setting where it is reasonable to postulate a single binary confounder is unobserved , and one is willing to make further simplifying assumptions about the required sensitivity parameters [ ] . in comparison, the proposed approach circumvents this difficulty by concisely encoding a violation of the ignorability assumption for the mediator through the selection bias function .thus the approach makes no reference and thus is agnostic about the existence , dimension and nature of unmeasured confounders .furthermore , in our proposal , the ignorability violation can arise due to an unmeasured confounder of the mediator - outcome relationship that is also an effect of the exposure variable , a setting not handled by the technique of . 
the method of which is restricted to binary data , shares some of the limitations given above .finally , in contrast with our proposed double robust approach , a coherent implementation of the sensitivity analysis techniques of , and rely on correct specification of all posited models .we refer the reader to for further discussion of and .the main contribution of the current paper is a theoretically rigorous yet practically relevant semiparametric framework for making inferences about natural direct and indirect causal effects in the presence of a large number of confounding factors .semiparametric efficiency bounds are given for the nonparametric model , and multiply robust locally efficient estimators are developed that can be used when nonparametric estimation is not possible .although the paper focuses on a binary exposure , we note that the extension to a polytomous exposure is trivial . in future work, we shall extend our results for marginal effects by considering conditional natural direct and indirect effects , given a subset of pre - exposure variables [ tchetgen tchetgen and shpitser ( ) ] .these models are particularly important in making inferences about so - called moderated mediation effects , a topic of growing interest , particularly in the field of psychology [ ] . in related work , we have recently extended our results to a survival analysis setting [ ] .a major limitation of the current paper is that it assumes that the mediator is measured without error , an assumption that may be unrealistic in practice and , if incorrect , may result in biased inferences about mediated effects .we note that much of the recent literature on causal mediation analysis makes a similar assumption . in future work, it will be important to build on the results derived in the current paper to appropriately account for a mis - measured mediator [ tchetgen tchetgen and lin ( ) ] .proof of theorem [ teo1 ] let denote a one - dimensional regular parametric submodel of , with , and let the efficient influence function is the unique random variable to satisfy the following equation : for the score of at , and denoting differentiation w.r.t . at .we observe that considering the first term , it is straightforward to verify that .\end{aligned}\ ] ] similarly , one can easily verify that ,\end{aligned}\ ] ] and finally , one can also verify that .\end{aligned}\ ] ] thus we obtain given , the results for the direct and indirect effect follow from the fact that the influence function of a difference of two functionals equals the difference of the respective influence functions . because the model is nonparametric , there is a unique influence function for each functional , and it is efficient in the model , leading to the efficiency bound results .proof of theorem [ teo2 ] we begin by showing that \\[-8pt ] & & \qquad = 0 \nonumber\end{aligned}\ ] ] under model . 
first note that under model .equality ( [ appzero ] ) now follows because and = \eta ( 1,0,x ) $ ] : } ^{\mathrm{=0 } } } \biggr ] \\ & & \quad\qquad { } + \mathbb{e } [ \eta ( 1,0,x;\beta_{y},\beta_{m } ) ] -\theta_{0 } \\ & & \qquad = 0.\end{aligned}\ ] ] second , under model .equality ( appzero ) now follows because and : \\ & & \quad\qquad{}+\mathbb{e } \biggl [ \frac{i(e=0)}{f_{e|x}^{\,\mathrm{par}}(1|x;\beta_{e})}\\ & & \hspace*{56pt}{}\times\mathbb{e } [ \ { \mathbb{e}^{\mathrm{par } } ( y|x , m , e=1;\beta_{y } ) -\eta ( 1,0,x;\beta_{y},\beta_{m}^{\ast } ) \ } |e=0,x ] \biggr ] \\ & & \quad\qquad{}+\mathbb{e } [ \eta ( 1,0,x;\beta_{y},\beta_{m}^{\ast } ) ] -\theta_{0 } \\ & & \qquad=\mathbb{e } \bigl [ \mathbb{e } [ \ { \mathbb{e}^{\mathrm{par } } ( y|x , m , e=1;\beta_{y } ) \ } |e=0,x ] \bigr ] -\theta_{0}=0.\end{aligned}\ ] ] third , equality ( [ appzero ] ) holds under model because \\ & & \quad\qquad{}+\mathbb{e } \biggl [ \frac{i(e=0)}{f_{e|x}^{\,\mathrm{par}}(1|x;\beta_{e})}\\ & & \hspace*{57pt}{}\times\mathbb{e } [ \ { \mathbb{e}^{\mathrm{par } } ( y|x , m , e=1;\beta_{y}^{\ast } ) -\eta ( 1,0,x;\beta_{y}^{\ast},\beta_{m } ) \ } |e=0,x ] \biggr ] \\ & & \quad\qquad{}+\mathbb{e } [ \eta ( 1,0,x;\beta_{y}^{\ast},\beta_{m } ) ] -\theta_{0 } \\ & & \qquad=\mathbb{e } \bigl [ \mathbb{e } [ \ { \mathbb{e } ( y|x , m , e=1 ) \ } |e=0,x ] \bigr]\\ & & \quad\qquad { } -\mathbb{e } \bigl [ \mathbb{e } [ \mathbb{e}^{\mathrm{par } } ( y|x , m , e=1;\beta_{y}^{\ast } ) |e=0,x ] \bigr ] \\ & & \quad\qquad{}+\mathbb{e } \bigl [ \mathbb{e } [ \mathbb{e}^{\mathrm{par } } ( y|x , m , e=1;\beta _ { y}^{\ast } ) |e=0,x ] \bigr ] -\mathbb{e } [ \eta ( 1,0,x;\beta_{y}^{\ast},\beta_{m } ) ] \\ & & \quad\qquad{}+\mathbb{e } [ \eta ( 1,0,x;\beta_{y}^{\ast},\beta_{m } ) ] -\theta_{0 } \\ & & \qquad=\mathbb{e } \bigl [ \mathbb{e } [ \ { \mathbb{e } ( y|x , m , e=1 ) \ } |e=0,x ] \bigr ] -\theta_{0}.\end{aligned}\ ] ] assuming that the regularity conditions of theorem 1a in hold for , the expression for follows by standard taylor expansion arguments , and it now follows that the asymptotic distribution of under model follows from the previous equation by slutsky s theorem and the central limit theorem. we note that is can in the union model since it is can in the larger model where either the density for the exposure is correct , or the density of the mediator and the outcome regression are both correct and thus .this gives the multiply robust result for direct and indirect effects .the asymptotic distribution of direct and indirect effect estimates then follows from similar arguments as above . at the intersection submodel hence the semiparametric efficiency claim then follows for , and a similar argument gives the result for direct and indirect effects .proofs of theorems 3 and [ teo4 ] the proofs are given in the online appendix .the authors would like to acknowledge andrea rotnitzky who provided invaluable comments that improved the presentation of the results given in section [ sec7 ] .the authors also thank james robins and tyler vanderweele for useful comments that significantly improved the presentation of this article .
while estimation of the marginal ( total ) causal effect of a point exposure on an outcome is arguably the most common objective of experimental and observational studies in the health and social sciences , in recent years , investigators have also become increasingly interested in mediation analysis . specifically , upon evaluating the total effect of the exposure , investigators routinely wish to make inferences about the direct or indirect pathways of the effect of the exposure , through a mediator variable or not , that occurs subsequently to the exposure and prior to the outcome . although powerful semiparametric methodologies have been developed to analyze observational studies that produce double robust and highly efficient estimates of the marginal total causal effect , similar methods for mediation analysis are currently lacking . thus , this paper develops a general semiparametric framework for obtaining inferences about so - called marginal natural direct and indirect causal effects , while appropriately accounting for a large number of pre - exposure confounding factors for the exposure and the mediator variables . our analytic framework is particularly appealing , because it gives new insights on issues of efficiency and robustness in the context of mediation analysis . in particular , we propose new multiply robust locally efficient estimators of the marginal natural indirect and direct causal effects , and develop a novel double robust sensitivity analysis framework for the assumption of ignorability of the mediator variable . .
the financial crises of the last decade have revealed the inherent structural fragilities of the financial system . in this context , the scientific community put substantial efforts in understanding the complex patterns of interconnections characterizing financial markets and how financial distress spreads among financial institutions through direct exposures to bilateral contracts and indirect exposures through common assets ownership .various techniques have thus been developed to study how _ local _ events may trigger a _ global _ instability through amplification effects like default cascades , and to quantify the resulting systemic risk in capital markets . at the same time , regulators were pushed to introduce more stringent rules on capital and liquidity requirements which , coupled with the adoption of micro - prudential policies by commercial and investment banks , the promotion of central counterparties ( ccps ) as contract intermediaries , and the quantitative easing monetary policy by central banks , eventually increased the robustness of the financial system . in this work we focus on ccps , corporate entities that guarantee the terms of a trade between two _ clearing members _( cms)_i.e ._ , financial institutions participating in the market cleared by the ccp in case of insolvency of one of the parties .this is achieved by collecting guarantees from each cm for covering potential losses stemming from a missed fulfillment of their clearing obligations , resulting in the ccp replacing the trade at the current market price . in particular ,a ccp collects two different types of guarantees from its cms : _ margins _ and _ default fund _ amounts .margins are called on a daily basis to cover the theoretical liquidation costs that the ccp would incur in the event of default of a cm , in order to close the open positions under severe market scenarios .the default fund instead is a mutualized guarantee fund that aims at covering market risks above and beyond those covered by margins , under the assumption of default of one or more cms .the default fund is calibrated through regularly performed stress tests .in particular , according to emir ( the european market infrastructures regulation ) , these stress tests should determine whether the ccp has sufficient resources to cover losses resulting from the default of at least the two cms to which it has the largest exposure under `` extreme but plausible '' market conditions , the so - called _ cover 2 _ requirement . given the significant role ccps play in the stability of the european financial system , the european securities and markets authority ( esma ) has recently coordinated the first eu - wide assessment of the resilience of ccps to adverse market developments , putting a specific focus on the interconnections between the numerous participants in the eu financial system .this stress exercise involved 17 ccps , and focused on the counterparty credit risk that ccps would face as a result of multiple cms defaults and simultaneous market price shocks .the assessment also included an analysis of the potential spill - over effects to non - defaulting cms .indeed , ccps manage defaults by means of the so - called `` default waterfall '' , defined in article 45 of emir . 
according to this mechanism , in case of a default all the guarantees posted by the defaulting member ( both margins and contribution to the default fund )is used first in order to cover potential losses the ccp is facing .however , in case of severe losses this may not be enough , so that a dedicated amount of the ccp s capital is used and , at last , the default fund of the non - defaulting cms is used too , resulting in spillover losses for non - defaulting cms .additionally , losses may also derive from the fact that the various ccps are highly interconnected through common cms , so that the default of one of the top members or groups of a ccp could potentially impact other ccps as well .the esma exercise acknowledged that the system of eu ccps is overall resilient to the scenarios used to model extreme yet plausible market developments .however , the report also highlighted that a significant part of the protection ccps are equipped with is given by the resources provided by non - defaulting cms , which are in turn at risk of facing significant losses .in severe scenarios , this could trigger second round effects via additional losses at ccp level and the default of additional cms .this is why esma recommended ccps to carefully evaluate the creditworthiness of cms , as well as their potential exposures due to their participation to other ccps . in this workwe address this request by developing a network - based framework for ccps stress tests that allows to assess contagion effects triggered by different initial shocks that propagate through credit and liquidity contagion channels according to the dynamics proposed by and later developed by .the model we propose aims at overcoming the current definition of ccps stress tests used to determine the size of their default funds , by considering spillover and contagion effects amongst cms that indeed should be taken into account when putting aside default resources .we quantitatively challenge the _ cover 2 _ rule to assess the systemic losses it may generate , and whether they are comparable to a distributed shock .specifically , instead of fixing _ ex - ante _ a number of cms that might default at the same time , we follow an _ ex - post _ approach and determine how many cms would be affected by an initial stress hitting the system and reverberating within cms . indeed, the propagation of the initial distress can lead to total losses that might be larger than those estimated by the _ cover 2 _ requirement .our stress test methodology is based on the assessment of the financial positions and of the interconnections between cms participating in the market cleared by a given ccp . in particular, we propose a network - based approach for modeling the links between cms , in order to assess the resilience of the considered market to a number of possible financial shocks ( namely , idiosyncratic , macroeconomic , price , credit and liquidity shocks ) .given the difficulties , at ccp level , to collect the data needed to assess the exposures of its cms to other ccps , here we focus on a single cleared market .thus , without loss of generality , we apply the proposed approach to cc&g , the only clearing house authorized in italy that operates on several markets and asset classes ( _ e.g. 
_ , fixed income , equities and equity derivatives ) .note that in order to avoid spill - over effects between cms trading in different asset classes , cc&g has a separate default fund for each of these classes .thus we can consider asset classes separately , and in what follows we focus on the fixed income class , which is the most significant in terms of cleared volumes and systemic importance in the italian financial system .importantly , cc&g s fixed income default fund is generally gauged on a _ cover 4 _ basis , _i.e. _ , in order to cover the 4 members to which the largest exposure is recorded , and is thus more conservative than what prescribed by the _ cover 2 _ requirement . in a nutshell, we find that network effects are relevant and lead to losses comparable to , or even bigger than those resulting from an initial macroeconomic and idiosyncratic shock . in general, cc&g s fixed income default fund turns out to be wide enough to cover the uncovered exposures of all defaulting cms . in the scenario corresponding to the default of the two most exposed cms , initial shocks trigger additional default events , challenging the effectiveness of the _ cover 2 _ requirement .we support our findings through specific examples , as well as with an exhaustive analysis in the space of the economic parameters of the model .the paper is organized as follows . in section [ model ]we give a step - by - step description of the methodology used in our stress test framework . in section [ default ]we describe how to assess the _ cover 2 _ adequacy under the hypothesis of extreme stress conditions . in section [ results ]we present and discuss the results of the stress test simulation , and in section [ conclusion ] we conclude and outline future perspectives .as stated in the introduction , without loss of generality we consider a single market cleared by a single ccp : cc&g s fixed income asset class .the market is composed of cms , that are mainly banks but can be financial institutions of different kinds . for a generic cm belonging to this set , its financial position at each date summarized by the balance sheet identity : -[l_i^{\mbox{\tiny{int}}}(t)+l_i^{\mbox{\tiny{oth}}}(t)],\ ] ] where and represent , respectively , total assets and liabilities of the cm .we then split into inter - cms assets , given by bilateral credits to other cms , and other assets , given both by assets to other cms collateralized by cc&g and other assets to the rest of the financial system .analogously , we separate into bilateral debits from other cms , , and other liabilities , . as clarified in section [ reverberation ] ,inter - cms bilateral contracts allow for the propagation of financial distress within cms .the balance sheet identity of eq .( [ eq : balancesheet ] ) defines the equity of cm as the difference between total assets and liabilities , and is considered solvent as long as its equity is positive . here , following the literature on financial contagion ,we take the insolvency condition as a proxy for default . given these basic definitions , the operative steps of our stress - test framework are the following : 1 .use a merton - like model to obtain daily balance sheet information of cms from periodic data reports ; 2 . reconstruct the network of inter - cms bilateral exposures ; 3 .apply a set of initial shocks to the market ( idiosyncratic , macroeconomic and on margins posted by cc&g ) ; 4 . 
reverberate the initial shock on the network via credit and liquidity channels , and quantify the final equity loss . in order to compute the financial position of each cm at each date , given by eq .( [ eq : balancesheet ] ) , we have to obtain daily `` dynamic '' values for both total assets and liabilities starting from the information disclosed periodically . to this end, we use a merton - like model that estimates the value of a firm s equity according to black and scholes option pricing theory .the main insight of this approach is that the equity of a firm can be modeled as the price of a call option on the assets of the firm , with a strike price equal to the notional amount of debt issued by the company .indeed , shareholders are the residual owners of a company : the value of the assets above the debt will be paid out to them , otherwise they will get nothing .here we resort to a variation of the original merton model where we remove the assumption that default ( or insolvency ) can only occur at the maturity of the debt .we suppose instead that default occurs the first time the firm s total assets fall below the default point , _ i.e. _ , the notional value of debt . as suggested by , we approximate the face value of the firm s debt with the book value of the firm s total liabilities .pricing techniques for barrier options , whose payoff depends on whether the underlying asset price reaches a certain level during a specified time interval , can be used for our purpose . in particular, we consider down - and - out call options , _i.e. _ , knock - out call options that cease to exist if the asset price decreases falls below the barrier . in our framework , the barrier is set equal to the firm s total liabilities and the maturity can be set to year , following .our approach is based on the assumption that the total assets of a generic firm follow a geometric brownian motion where is the expected continuously compounded return on , is the volatility of and is a standard wiener process . according to the black and scholes pricing model, the price of the considered down - and - out call option is given by : (t ) - n[d_-]l_i e^{-rt}-n[y]a_i(t ) \biggl(\frac{l_i}{a_i(t)}\biggr)^{2\lambda } + n[\tilde{y}]l_i e^{-rt}\biggl(\frac{l_i}{a_i(t)}\biggr)^{2\lambda-2 } , \ ] ] where = \frac{1}{\sigma_i^{(a ) } \sqrt{t}}\biggl[\ln\bigl(\frac{a_i(t)}{l_i}\bigr)+\bigl(r\pm\frac{1}{2}(\sigma_i^{(a)})^2\bigr)t\biggr ] , \qquad \lambda[\sigma_i^{(a)}]=\frac{r}{(\sigma_i^{(a)})^2}+\frac{1}{2},\ ] ] =\frac{1}{\sigma_i^{(a ) } \sqrt{t}}\ln\bigl(\frac{l_i}{a_i(t)}\bigr)+\lambda\sigma_i^{(a)}\sqrt{t},\qquad \tilde{y}[a_i(t),\sigma_i^{(a)}]=y-\sigma_i^{(a)}\sqrt{t},\ ] ] indicates the cumulative function of the standard normal distribution and is the risk - free rate .moreover , it can be shown that the following relation holds between the equity volatility and the assets volatility : (t)\sigma_i^{(a ) } + n[y]\biggl [ ( 2\lambda-1)a_i(t)\sigma_i^{(a)}\biggl(\frac{l_i}{a_i(t)}\biggr)^{2\lambda}\biggr ] + n[\tilde{y}]\biggl [ ( 2 - 2\lambda)a_i(t)\sigma_i^{(a ) } e^{-rt}\biggl(\frac{l_i}{a_i(t)}\biggr)^{2\lambda-1}\biggr]\ . \end{split}\ ] ] in the model we adopt , the option value is observed on the market as the total current value of the firm s equity , while its volatility can be easily estimated . on the other hand ,the unknown variables are the current value of assets and its volatility .they can be estimated by inverting the two nonlinear equations ( [ eq_mert_1 ] ) and ( [ eq_mert_2 ] ) . 
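as an illustration of this inversion step , the sketch below prices the down-and-out call of eq. ( [ eq_mert_1 ] ) and solves the resulting two-equation system for the unobserved asset value and asset volatility . for the volatility link we use the generic relation sigma_E * E = (dE/dA) * A * sigma_A with a numerical delta , an implementation shortcut of ours in place of the closed form in eq. ( [ eq_mert_2 ] ) ; the numbers in the example are arbitrary .

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import fsolve

def dao_call(A, L, r, sigma, T=1.0):
    """down-and-out call with strike and barrier both equal to L (cf. eq_mert_1)."""
    sT = sigma * np.sqrt(T)
    d_plus = (np.log(A / L) + (r + 0.5 * sigma ** 2) * T) / sT
    d_minus = d_plus - sT
    lam = r / sigma ** 2 + 0.5
    y = np.log(L / A) / sT + lam * sT
    return (A * norm.cdf(d_plus) - L * np.exp(-r * T) * norm.cdf(d_minus)
            - A * (L / A) ** (2 * lam) * norm.cdf(y)
            + L * np.exp(-r * T) * (L / A) ** (2 * lam - 2) * norm.cdf(y - sT))

def invert_merton(E_obs, sigma_E, L, r, T=1.0, h=1e-4):
    """solve for (A, sigma_A) given the observed equity value and equity volatility."""
    def system(z):
        A, sigma_A = z
        E_model = dao_call(A, L, r, sigma_A, T)
        delta = (dao_call(A * (1 + h), L, r, sigma_A, T) - E_model) / (A * h)  # numerical dE/dA
        return [E_model - E_obs, delta * A * sigma_A - sigma_E * E_obs]
    return fsolve(system, x0=[E_obs + L, sigma_E * E_obs / (E_obs + L)])

# arbitrary example: equity 5, equity volatility 40%, total liabilities 95, r = 1%
A_hat, sigma_A_hat = invert_merton(E_obs=5.0, sigma_E=0.40, L=95.0, r=0.01)
```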
in the implementation of the merton model , the following input parameters have been used : has been approximated as the firm s market capitalization ( if the firm is a listed company ) or equity ( as reported in the last available balance sheet ) ; has been proxied as the volatility of the market capitalization ( if the company is listed ) , otherwise as the volatility of a reference index , as suggested by ; has been represented as the total liabilities of the firm as reported in the last available balance sheet ; has been estimated as the annual return on assets of the company .the model produces the following outputs : , an estimate of the firm s assets at time ; , the volatility of ; , an estimate of the firm s liabilities at time ; , the default probability of the firm .once we have obtained daily balance sheet entries for each cm , we can also determine daily values for their inter - cms assets and liabilities .given that most of the cms are banks , we can proxy and with interbank assets and liabilities as reported on the balance sheet. then we use the approach of where , for each cm , the proportion of interbank assets ( liabilities ) over total assets ( liabilities ) remains constant over time .] parameter values for the above equation are obtained by fitting quarterly balance sheet data of a pool of banking sector firms , for which both total and interbank assets / liabilities are available .fit results in our case read : , , ; , , . ]we now want to build the market of inter - cms bilateral exposures .because of its structure , this market can be properly represented as a directed weighted network , whose nodes are the cms and whose links correspond to the direct credits and debits between the cms which will constitute the ground for the propagation of financial distress . indeed , for each cm at date , its bilateral inter - cms assets and liabilities are the aggregates of the individual loans to and borrowings from other cms .thus where is the amount of the loan granted by to at , which represents an asset for and a liability for : .these amounts represent the weighted links of the interbank network , for which however we have no information . to overcome this limitation , we resort to the two - step inference procedure introduced by to reconstruct the network .we have : ^{-1}+1\}^{-1}\\ 0\quad\mbox{otherwise } \end{cases}\ ] ] where denotes the presence of the link , is the total volume of the inter - cms market , and is a parameter that controls for the density of the network .here we set to have a network density of 5% like the one observed in the italian interbank market e - mid at a daily aggregation scale .note that contracts can be established not only amongst cms but also with external firms , and in principle should be included in this network as links pointing out of the system . here , since we proxied inter - cms trades with interbank trades , and most of cc&g cms are italian banks , we use bis locational banking statistics to confirm that the foreign positions of italian banks are rather small ( amounting to roughly 10% of the total exposure ) and can thus be neglected in our analysis .we now model the initial shock to be applied to the system . 
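before specifying the shocks , the snippet below sketches the reconstruction step just described . it is an assumption-laden sketch rather than the exact procedure : the precise functional form of the link probabilities is not fully recoverable from the text above , so we use the common fitness-model form $p_{ij}= z\,a_i \ell_j/(1+z\,a_i \ell_j)$ with $z$ calibrated to the target 5% density , and we allocate weights so that the reconstructed strengths match the observed inter-cm assets and liabilities in expectation ; all names ( `reconstruct_network` , `a_int` , `l_int` ) are illustrative .

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def reconstruct_network(a_int, l_int, target_density=0.05):
    """Two-step reconstruction sketch: (1) sample a directed link i -> j with
    probability p_ij = z*a_i*l_j / (1 + z*a_i*l_j), with z calibrated so the
    expected off-diagonal density equals `target_density`; (2) distribute the
    total inter-CM volume W over the sampled links proportionally to
    a_i*l_j / p_ij, so that strengths are matched in expectation."""
    a, l = np.asarray(a_int, float), np.asarray(l_int, float)
    n = len(a)
    off = ~np.eye(n, dtype=bool)                    # no self-loops
    W = a.sum()                                     # total inter-CM volume

    def expected_density(z):
        p = z * np.outer(a, l) / (1.0 + z * np.outer(a, l))
        return p[off].mean()

    # calibrate z by root-finding on the expected density
    z = brentq(lambda z: expected_density(z) - target_density, 1e-16, 1e6)
    p = z * np.outer(a, l) / (1.0 + z * np.outer(a, l))
    p[~off] = 0.0

    links = rng.random((n, n)) < p                              # step 1: topology
    weights = np.zeros((n, n))
    weights[links] = (np.outer(a, l)[links] / W) / p[links]     # step 2: weights
    return weights                                  # weights[i, j] = loan i -> j

# toy example: 20 CMs with heterogeneous inter-CM assets/liabilities
a_int = rng.lognormal(mean=3.0, sigma=1.0, size=20)
l_int = a_int * rng.uniform(0.5, 1.5, size=20)
Lambda = reconstruct_network(a_int, l_int)
print("density:", (Lambda > 0).mean(), " total volume:", Lambda.sum())
```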
for each cm at date , we decrease its equity by : \chi ( t ) e_i ( t)+\psi_i ( t ) \max \ { m_i^{\mbox{\tiny{str}}}(t)-m_i(t),0\}.\ ] ] the first term of the right - hand side of eq .( [ eq : shock_shape ] ) is the _ exogenous _ shock on the assets of cm which we model , in line with , with an idiosyncratic and a macroeconomic component : given that ] , where is the average total contribution of the external shock over the total amount of assets in the system.\chi(t ) e_i ( t ) \rangle = x \sum_i a_i ( t) ] , which rescales the stress due to the increase of margins to values comparable to cms equities .overall , note that we only consider stresses that are positive and thus cause equity decreases : for each cm having , we impose .the financial distress resulting from the initial equity losses can propagate within the network of inter - cms exposures , eventually becoming amplified and causing additional losses .two main mechanisms are responsible for such a reverberation : _ credit _ and _ liquidity _ shocks . indeed ,if a generic cm defaults , two kinds of events occur : * _ credit shock _ : cm fails to meet its obligations , resulting in effective losses for its creditors ; * _ liquidity shock _ :other cms are unable to replace all the liquidity previously granted by , which in turn triggers a fire sale of assets causing effective losses as illiquid assets trade at a discount .if another cm defaults because of these losses , a new wave of credit - funding shocks propagates through the market , eventually resulting in a cascade of failures. however , these shocks may propagate even if no default has occurred , as equity losses experienced by a cm do imply both a decreasing value of its obligations as well as a decreasing ability to lend money to the market , because the cm is now `` closer '' to default .this results in potential equity losses for other cms . in order to quantify these potential losses we use the approach of , which builds on , to obtain probabilities of cms default ( that is , the equity at risk ) by iteratively spreading the individual cms distress levels weighted by the potential wealth affected . in a nutshell, the method works as follows .assuming that relative changes of equity translate linearly into relative changes of claim values , the resulting _ impact _ of on reads : where is the parameter setting the amount of loss given default , sets the fraction of lost liquidity that has to be replenished by asset sales and quantifies asset depricing during fire sales .the dynamics of shock propagation then consists of several rounds , and the variables involved are the levels of financial distress of each cm at each iteration , given by the relative changes of equity }(t)=1-e_i^{[n]}(t)/e_i^{[0]}(t).\ ] ] by definition , when no equity losses occurres for , when defaults , and in general . * at step there is no distress in the system , hence }(t)\equiv e_i(t)\rightarrow h_i^{[0]}(t)=0 ] ; * subsequent values of are obtained by spreading this shock on the system and writing up the equation for the evolution of cms equity : }(t)=\min\left\{1,\ ; h_i^{[n+1]}(t)+\sum_{j\in\mathcal{a}[n+1]}[\lambda\lambda_{ij}(t)+\rho\gamma^{[n]}(t)\upsilon_{ij}(t)]\,[h_j^{[n+1]}(t)-h_j^{[n]}(t)]\,e^{-(n - n_j)/\tau}\right\}.\ ] ] in the above expression , =\{j : h_j^{[n]}(t)<1\} ] is the iteration when first becomes distressed and is the damping scale setting the mean lifetime of the shocks . 
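a minimal sketch of the distress-propagation dynamics just described is given below . it assumes the credit-exposure matrix and the funding matrix are given ( e.g. from the reconstruction sketched earlier ) , treats the fire-sale devaluation factor as a fixed parameter instead of the state-dependent expression discussed next , and by default uses an infinite damping scale ( worst-case reverberation ) ; parameter names and values are illustrative .

```python
import numpy as np

def propagate_distress(Lambda, Upsilon, E0, shock,
                       lgd=0.6, rho=0.3, gamma=0.05, tau=np.inf, max_rounds=100):
    """Iterate the relative equity losses h_i in [0, 1].
    Lambda[i, j]: credit exposure of CM i towards CM j (credit channel),
    Upsilon[i, j]: funding that CM i receives from CM j (liquidity channel),
    E0: initial equities, shock: initial equity losses (same units as E0).
    A CM with h_i = 1 is in default; defaulted CMs stop propagating."""
    E0 = np.asarray(E0, float)
    h = np.minimum(1.0, np.asarray(shock, float) / E0)   # h^[1]
    h_prev = np.zeros_like(h)                            # h^[0] = 0
    first_hit = np.where(h > 0, 1.0, np.inf)             # round of first distress
    # per-unit impact of j's distress on i through the two channels
    W = lgd * Lambda / E0[:, None] + rho * gamma * Upsilon / E0[:, None]
    history = [h.copy()]
    for it in range(2, max_rounds + 2):
        dh = np.where(h_prev < 1.0, h - h_prev, 0.0)     # only non-defaulted spread
        if not np.any(dh > 0):
            break                                        # nobody left to propagate
        age = (it - 1) - np.where(np.isfinite(first_hit), first_hit, it - 1)
        h_new = np.minimum(1.0, h + W @ (dh * np.exp(-age / tau)))
        first_hit[(h_new > 0) & ~np.isfinite(first_hit)] = it
        h_prev, h = h, h_new
        history.append(h.copy())
    return h, history            # h is h^[*]; history[k] is h^[k+1]

# toy usage: 20 CMs, random exposures, initial loss equal to 30% of each equity
rng = np.random.default_rng(2)
n = 20
E0 = rng.uniform(1.0, 5.0, n)
Lambda = rng.uniform(0, 0.3, (n, n)) * (1 - np.eye(n))
h_star, hist = propagate_distress(Lambda, Lambda.T, E0, shock=0.3 * E0)
print("defaults:", int((h_star >= 1).sum()), " rounds:", len(hist))
```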
for instance , depending on the value of the damping scale , a cm may spread its distress only the first time it becomes distressed , or keep propagating the shocks it receives until it defaults . the fire sale devaluation factor is $\gamma^{[n]}(t)=\{c^{\mbox{\tiny{int}}}(t)/[\rho\,q^{[n]}(t)]-1\}^{-1}$ . the above described dynamics stops when no more cms can propagate their distress . the resulting set of _ vulnerabilities _ summarizes the state of the system ; in what follows we focus in particular on : * $h_i^{[1]}$ , the vulnerability ( relative equity loss ) given by the initial shock ; * $h_i^{[2]}$ , the vulnerability after the first reverberation of shocks on the network ; * $h_i^{[*]}$ , the vulnerability after the propagation of shocks on the network is exhausted . here we use the worst-case reverberation ; however , we observe that the dynamics takes very few iterations to get very close to the stationary configuration ( this also means that the model is not highly sensitive to the value of the damping scale , unless it becomes very close to zero ) . obviously , $h_i^{[1]}\le h_i^{[2]}\le h_i^{[*]}$ . [ figure [ fig1 ] : results for the distributed initial shock of eq . ( [ eq : shock_shape ] ) . the histogram in the top panel shows , for each cm , the triplet $h_i^{[1]},h_i^{[2]},h_i^{[*]}$ , ordered by final equity loss ; the bottom panels show scatter plots of $h_i^{[1]}$ ( left ) and $h_i^{[*]}$ ( right ) as a function of the inter-cm leverage ( each bubble is a cm , colored according to its initial equity ) . ] figure [ fig2 ] shows instead results for the configuration of _ cover 2 _ initial shocks given by eq . ( [ eq : shock_whatif ] ) . note that the total initial loss in this case ( which stems from the default of the two most exposed cms ) translates into an equivalent average initial equity loss for each cm , hence the two scenarios reported in figures [ fig1 ] and [ fig2 ] are comparable in terms of initial shock magnitude . however , the very different initial conditions give rise to very different configurations ( and magnitudes of losses ) at the early stages of the shock propagation dynamics . then , as shocks keep propagating , the system falls into a similar stationary configuration , which basically corresponds to the maximum allowed losses for each cm . in this scenario , the total uncovered exposure of all defaulted cms is 3.2 billion , again comparable to the one obtained in the distributed initial shock case . thus even in this case cc&g's default fund , which is gauged far more conservatively than what the _ cover 2 _ rule prescribes , is capable of covering the total uncovered exposure of the system . however , the fact that the default of two cms can lead to additional defaults suggests that gauging the default fund solely on a _ cover 2 _ basis might not be conservative enough . [ figure [ fig2 ] : results for the _ cover 2 _ initial shock of eq . ( [ eq : shock_whatif ] ) . the histogram in the top panel shows , for each cm , the triplet $h_i^{[1]},h_i^{[2]},h_i^{[*]}$ , in the same order as the histogram in figure [ fig1 ] ; the bottom panels show scatter plots of $h_i^{[1]}$ ( left ) and $h_i^{[*]}$ ( right ) as a function of the inter-cm leverage ( each bubble is a cm , colored according to its initial equity ) . ] to quantify losses at the aggregate level we consider two quantities . the first is the fraction of the default fund which remains after subtracting the fixed income exposures of the cms defaulted at reverberation round $n$ ( _ i.e. _ , those with $h_i^{[n]}=1$ ) .
the second is the ratio of equity which remains in the system after $n$ rounds of shock reverberation . note that the first quantity considers overall losses but accounts only for the cms that actually default , whereas the second quantity takes into account only network losses ( by discounting the initial shock ) but accounts for each cm irrespective of its final vulnerability . we thus expect a noisier behavior of the former compared to the latter , because of the strict thresholding used in its computation . [ figure [ fig3 ] : systemic losses as a function of the magnitude of the initial shock and of the reverberation round , measured by the two aggregate quantities defined above ( left and right panels ) . by varying the shock magnitude we consider only the scenario of distributed initial shocks of eq . ( [ eq : shock_shape ] ) ; however , as seen in the analysis of figure [ fig2 ] , for large initial shocks systemic losses are also representative of the _ cover 2 _ scenario with a similar magnitude of initial shocks . a small fixed round is taken as a proxy for the stationary configuration , as the dynamics of shock reverberation takes only a few iterations to converge . ] figure [ fig3 ] shows the color maps of these two quantities as a function of the iteration step of the dynamics and of the magnitude of the initial shock as per eq . ( [ eq : shock_shape ] ) . we observe a region of small initial shocks and early reverberation rounds for which systemic losses are rather restrained , and a sharp transition to a region of high losses in which the dependence of losses on the initial shock magnitude dominates . overall , cc&g's default fund always succeeds in covering all exposures , except for unreasonably high values of the initial shock . indeed , the default fund becomes insufficient only above a certain shock magnitude ; such values , however , do not seem conceivable . to obtain a quantitative estimate of the range of plausible values , we can assume that losses arising from non-performing loans ( npls ) recorded in the market are representative of the asset losses caused by the initial shock . in particular , the yearly increase ( for 2014/2015 ) of losses from npls for italian banks ranges from 1.37% ( considering also banks reducing these losses ) to 2.5% ( considering only increases of losses ) . using these values as initial vulnerabilities and inverting the first term of the right-hand side of eq . ( [ eq : shock_shape ] ) , we obtain the corresponding range of plausible magnitudes for the exogenous shock .
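for completeness , the two aggregate indicators used in the color maps can be computed directly from the vulnerabilities produced by the propagation sketch given earlier . the exact definitions are only partially recoverable from the text , so the snippet below is an assumption : `exposures` stands for each cm's uncovered fixed-income exposure towards the ccp ( net of posted collateral ) , a quantity that is not modelled here .

```python
import numpy as np

def aggregate_indicators(h, E0, exposures, default_fund):
    """Residual default fund (share left after covering the exposures of the
    CMs with h_i = 1) and residual equity (share of initial equity left)."""
    h, E0, exposures = map(np.asarray, (h, E0, exposures))
    defaulted = h >= 1.0
    residual_fund = 1.0 - exposures[defaulted].sum() / default_fund
    residual_equity = 1.0 - (h * E0).sum() / E0.sum()
    return residual_fund, residual_equity
```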
[ figures [ fig4 ] and [ fig5 ] : systemic losses as a function of the loss given default ( $\lambda$ ) and of the lost funding to be replenished ( $\rho$ ) , measured by the two aggregate quantities defined above ( left and right panels ) , at two different reverberation rounds ( top and bottom panels ) , for the distributed and the _ cover 2 _ initial shocks respectively . ] we finally study the stability regime of the system with respect to the parameters $\lambda$ and $\rho$ , which set the intensity of the credit and liquidity shocks respectively . here we consider both kinds of initial shocks , _ i.e. _ , the distributed and the _ cover 2 _ scenarios , reported in figures [ fig4 ] and [ fig5 ] respectively . as expected , in general we observe higher losses for greater values of $\lambda$ and $\rho$ , and a rather smooth transition between the regimes of low and high losses . in the case of distributed shocks ( figure [ fig4 ] , obtained with the same magnitude of the initial shock as above ) , the first network reverberation of shocks never leads to severe consequences . however , if shocks keep propagating , the default fund deteriorates significantly , becoming insufficient in the upper region of the $(\lambda,\rho)$ plane : in the worst case , the total uncovered exposure is 18% bigger than the total default fund . these values of $\lambda$ and $\rho$ correspond , however , to very intense and rather unrealistic credit and liquidity shocks , for which cms lose the whole amount of a loan to a defaulted counterpart and have no other means to obtain liquidity than fire selling their assets . results for the _ cover 2 _ scenario ( figure [ fig5 ] ) differ only at early reverberation rounds . indeed , at the early rounds the default fund , already halved by covering the two most exposed cms , almost dries out for much lower values of $\lambda$ and $\rho$ as compared to the distributed shock scenario . yet , total depletion of the default fund is again observed only for unrealistically high $\lambda$ and $\rho$ , and for late shock propagation steps . overall , we can again conclude that cc&g's default fund is robust in a wide range of model parameters ( _ i.e.
_ , of economic scenarios ) , provided it is more conservatively gauged than prescribed by the _ cover 2 _ requirement .in this work we propose a new stress test methodology for central counterparties ( ccps ) aimed at assessing the _ vulnerability _ ( or equity at risk ) of their clearing members ( cms ) .the model is based on a network characterization of cms , whose balance sheets represent their financial situation and a cm is considered solvent as long as its equity is positive . in order to calculate daily financial position of each cm , we use a merton - like model whose input is publicly available information .cms are linked to each other through direct interbank credits and debits , that constitute the ground for the propagation of financial distress .an initial shock is applied to the system to reduce the equity of each cm .the shock is made up of two components : an _ exogenous _ component , with a stochastic poissonian shock and a deterministic shock , and an _ endogenous _ component , represented by an increase in margins to be posted to the ccp .the dynamics of financial distress propagation then combines two contagion channels : credit and liquidity shocks .credit losses are related to counterparty risk and are faced by lender cms when their borrower cms get closer to default and struggle to fulfill their obligations .these losses can thus affect lenders , resulting in another wave of credit shocks .liquidity shocks concern cms that are unable to replace all the liquidity previously granted to them so they start fire selling their assets , which implies effective losses as illiquid assets trade at a discount .these shocks then reverberate throughout the market , turning into equity losses for other cms .note that in our model fire sales happen because banks are in need to recover lost fundings .however , there are other ways fire sales can originate from , such as correlated sales spirals due to common assets holdings , the leverage targeting policy adopted by banks or liquidity hoarding behavior by cms , which can further exacerbate the effect of liquidity shocks .we remark that the stress propagation model builds on the assumption that equity losses experienced by a cm do imply both a decreasing value of its obligations and a decreasing ability to lend money to the market even if no default has occurred .the model then assesses _ potential _ losses for cms resulting from a virtual dynamics of shocks propagation and thus , in a conservative way , does not include the explicit possibility for cms to rearrange their balance sheet positions .however , we do model exogenous effects : a loss given default corresponds to cms being able to recover part of their loans to distressed institutions , whereas , a lost funding recovery allows banks to replace a fraction of lost liquidity with own cash reserves or from central banks before liquidating assets . 
additionally , we model a potential external regulatory intervention by halting the contagion process at early propagation steps .the model here described constitutes an advancement in the existing stress testing methodologies and responds to esma s call for an innovative modeling of interconnections in the financial system .we have applied this methodology to the fixed income asset class of cc&g , the italy - based clearing house , whose main cleared securities are italian government bonds .however , the model can be easily extended to cover multiple asset classes within the same ccp , as well as cms belonging to multiple ccps .this would allow to build a fully comprehensive stress testing framework , that considers the whole financial landscape as a single system made up of interconnected ccps and cms .numerical results obtained in this paper show that cc&g s default fund , which is generally gauged on a _ cover 4 _ basis , is adequate to cover losses recorded in the system after the introduction of a diversified set of initial shocks and their propagation in the network : even after an unlimited reverberation of distress within the system , the default fund is still able to cover losses stemming from all the defaulted cms .we also test the _ cover 2 _ requirement by supposing that the two cms to which cc&g has the largest exposure under `` extreme but plausible '' market conditions default simultaneously and spread their distress over the network .we observe that if we let shocks propagate unlimitedly ( _ i.e. _ , we suppose that no authorities intervention take place ) , the systemic impact of a _ cover 2 _ initial shock is very similar to the one obtained with a distributed initial shock .however the _ cover 2 _ initial shock produces a more severe impact on the system for early reverberations of financial distress , with higher cms vulnerabilities and a bigger number of defaults .this result suggests that gauging the default fund on a _ cover 2 _ basis might not be conservative enough , as additional defaults could be triggered .indeed , measuring the effective losses a defaulting cm causes through its complex patterns of financial interconnections to the overall market secured by a ccp can lead to a more efficient definition of the default fund , as well as to a fairer default fund amount asked by the ccp to its cms .this work was supported by the eu projects growthcom ( fp7-ict , grant n. 611272 ) , multiplex ( fp7-ict , grant n. 317532 ) , dolfins ( h2020-eu.1.2.2 . , grant n. 640772 ) and the italian pnr project crisis - lab .the funders had no role in study design , data collection and analysis , decision to publish , or preparation of the manuscript .this work was possible thanks to the support provided by paolo cittadini and colleagues of cassa di compensazione e garanzia ( cc&g ) .battiston , s. , farmer , j. d. , flache , a. , garlaschelli , d. , haldane , a. g. , heesterbeek , h. , hommes , c. , jaeger , c. , may , r. , and scheffer , m. ( 2016 ) .complexity theory and financial regulation ., 351(6275):818819 .chan - lau , j. a. , espinosa , m. , giesecke , k. , and sol , j. a. ( 2009 ) . assessing the systemic implications of financial linkages .global financial stability report ( chapter 2 ) , imf monetary and capital markets development . finger , k. , fricke , d. , and lux , t. ( 2013 ) .network analysis of the e - mid overnight money market : the informational value of different aggregation levels for intrinsic dynamic processes ., 10(2 - 3):187211 .montagna , m. 
and lux , t. ( 2014 ) .contagion risk in the interbank market : a probabilistic approach to cope with incomplete structural information .working paper series 1937 , kiel institute for the world economy .
in the last years , increasing efforts have been put into the development of effective stress tests to quantify the resilience of financial institutions . here we propose a stress test methodology for central counterparties based on a network characterization of clearing members , whose links correspond to direct credits and debits . this network constitutes the ground for the propagation of financial distress : equity losses caused by an initial shock with both _ exogenous _ and _ endogenous _ components reverberate within the network and are amplified through _ credit _ and _ liquidity _ contagion channels . at the end of the dynamics , we determine the _ vulnerability _ of each clearing member , which represents its potential equity loss . we apply the proposed framework to the fixed income asset class of cc&g , the central counterparty operating in italy whose main cleared securities are italian government bonds . we consider two different scenarios : a distributed , plausible initial shock , as well as a shock corresponding to the _ cover 2 _ regulatory requirement ( _ i.e. _ , the simultaneous default of the two most exposed clearing members ) . although the two situations lead to similar results after an unlimited reverberation of shocks on the network , the distress propagation is much more hasty in the latter case , with a large number of additional defaults triggered at early stages of the dynamics . our results thus show that setting a default fund to cover insolvencies only on a _ cover 2 _ basis may not be adequate for taming systemic events , and only very conservative default funds such as cc&g s one can face total losses due to the shock propagation . overall , our network - based stress test represents a refined tool for calibrating default fund amounts .
statisticians are nowadays frequently confronted with massive data sets from various frontiers of scientific research .fields such as genomics , neuroscience , finance and earth sciences have different concerns on their subject matters , but nevertheless share a common theme : they rely heavily on extracting useful information from massive data and the number of covariates can be huge in comparison with the sample size .in such a situation , the parameters are identifiable only when the number of the predictors that are relevant to the response is small , namely , the vector of regression coefficients is sparse .this sparsity assumption has a nice interpretation that only a limited number of variables have a prediction power on the response . to explore the sparsity ,variable selection techniques are needed . over the last ten years, there has been many exciting developments in statistics and machine learning on variable selection techniques for ultrahigh dimensional feature space .they can basically be classified into two classes : penalized likelihood and screening .penalized likelihood techniques are well known in statistics : bridge regression ( ) , lasso ( ) , scad or other folded concave regularization methods ( ) , and dantzig selector ( ) , among others . these techniques select variables and estimate parameters simultaneously by solving a high - dimensional optimization problem .see and for an overview of the field . despite the fact that various efficient algorithms have been proposed ( ) ,statisticians and machine learners still face huge computational challenges when the number of variables is in tens of thousands of dimensions or higher .this is particularly the case as we are entering the era of `` big data '' in which both sample size and dimensionality are large . with this background , propose a two - scale approach , called iterative sure independence screening ( isis ) , which screens and selects variables iteratively .the approach is further developed by in the context of generalized linear models .theoretical properties of sure independence screening for generalized linear models have been thoroughly studied by .other marginal screening methods include tilting methods ( ) , generalized correlation screening ( ) , nonparametric screening ( ) , and robust rank correlation based screening ( ) , among others .the merits of screening include expediences in distributed computation and implementation . by ranking marginal utility such as marginal correlation with the response ,variables with weak marginal utilities are screened out by a simple thresholding .the simple marginal screening faces a number of challenges . as pointed out in , it can screen out those hidden signature variables : those who have a big impact on response but are weakly correlated with the response. it can have large false positives too , namely recruiting those variables who have strong marginal utilities but are conditionally independent with the response given other variables . and use a residual based approach to circumvent the problem but the idea of conditional screening has never been formally developed .conditional marginal screening is a natural extension of simple independent screening . 
in many applications ,researchers know from previous investigations that certain variables are responsible for the outcomes .this knowledge should be taken into account when applying a variable selection technique in order not to remove these predictors from the model and to improve the selection process .conditional screening recruits additional variables to strengthen the prediction power of , via ranking conditional marginal utility of each variable in presence of . in absence of such a prior knowledge, one can take those variables that survive the screening and selection as in .conditional screening has several advantages .first of all , it makes it possible to recover the hidden significant variables .this can be seen by considering the following linear regression model with .the marginal covariance between and is given by where is equal to 0 , except for its element which equals to 1 .this shows that the marginal covariance between and is zero if , where is the element of , with . yet, can be far away from zero .in other words , under the conditions listed above , is a hidden signature variable . to demonstrate that , let us consider the case in which , with true regression coefficients , and all variables follow the standard normal distribution with equal correlation 0.5 , and follows the standard normal distribution . by design , is a hidden signature variable , which is marginally uncorrelated with the response .based on a random sample of size from the model , we fit marginal regression and obtain the marginal estimates .the magnitudes of these estimates are summarized by their averages over three groups : indices 1 to 5 ( denoted by ) , 6 and indices 7 to 2000 .clearly , the magnitude on the first group should be the largest , followed by the third group .figure [ fig1a ] depicts the distributions of those marginal magnitudes based on 10000 simulations .clearly variable can not be selected by marginal screening .+ adapting the conditional screening approach gives a very different result .conditioning upon the first five variables , conditional correlation between and has a large magnitude . with the same simulated data as in the above example, the regression coefficient of in the joint model with the first five variables is computed .this measures the conditional contribution of variable in presence of the first five variables .again , the magnitudes are summarized into two values : and the average of .the distributions of those over 10000 simulations are also depicted in figure [ fig1b ] . clearly , the variable has higher marginal contributions than others .that is , conditioning helps recruiting the hidden signature variable .furthermore , conditioning is fairly robust to extra elements . to demonstratethat , we have repeated the previous experiment with conditioning on five more randomly chosen features .the distribution of the magnitudes are given in figure [ fig1c ] .it is seen that the important hidden variable again has a large magnitude .the benefits of conditioning are observed even if the conditioned variables are not in the active set . to demonstrate that , the regression coefficient of has been computed while conditioning on five randomly chosen inactive variables .that is , contribution of variable is calculated in the presence of these five randomly chosen inactive variables .the magnitudes of are summarized in three groups : the average of the first five important variables , i.e. 
, and the average of .the distributions for these variables over 10000 simulations are given in figure [ fig1d ] .it is observed that the magnitude of the hidden signature variable increases significantly and hence it will surely not be missed during the screening . in other words , conditioning can help to recruit the important variables , even when the conditional set is not ideally chosen .+ secondly , conditional screening helps for reducing the number of false negatives .marginal screening can fail when there are covariates in the non - active set that are highly correlated with active variables . to appreciate this ,consider the linear model ( [ eq1 ] ) again with sparse regression coefficients , equi - correlation 0.9 among all covariates except , which is independent of the rest of the covariates .this setting gives in this case , marginal utilities for all nonactive variables are higher than that for the active variable .a summary similar to figure [ fig1 ] is shown in the upper left panel of figure [ fig2 ] .therefore , based on sis ( sure independence screening ) in fan and lv ( 2008 ) , the active variable has the least priority to be included . by using the conditional screening approach in which the covariate is conditioned upon ( used in the joint fit ) ,marginal utilities of the spurious variables are significantly reduced .the distributions of the average of the magnitude of the conditional fitted coefficients and are shown in the middle panel of figure [ fig2 ] . clearly , the nonactive variables are significantly demoted by conditioning . to observe effects of conditioning on extra variables and randomly chosen variables , a similar experiment to the first case is also done .figure [ fig2c ] depicts the distribution of the conditioned marginal fits when five extra variables are conditioned on .the contributions of variables in the presence of ten randomly chosen variables are given in figure [ fig2d ] .it is seen that , the relative magnitude of the hidden active variable is considerably larger and hence it is more likely that it is recruited during screening .finally , as shown by and , for a given threshold of marginal utility , the size of the selected variables depends on the correlation among covariates , as measured by the largest eigenvalue of : .the larger the quantity , the more variables have to be selected in order to have a sure screening property . by using conditional screening, the relevant quantity now becomes , where refers to the covariates that we will condition upon and is the rest of the variables .conditioning helps reducing correlation among covariates .this is particularly the case when covariates share some common factors , as in many biological ( e.g. treatment effects ) and financial studies ( e.g. market risk factors ) . to illustrate the benefits we consider the case where is given by equally correlated normal random variables .simple calculations yield that where is the common correlation and . as has a normal distribution ,the conditional covariance matrix can be calculated easily and it can be shown that note that when , the formula reduces to the unconditional one .it is clear that conditioning helps reducing the correlation among the variables . to quantify the degree of de - correlation , figure [ fig3 ] depicts the ratio as a function of for various choices of when .the reduction is dramatic , in particular when is large or is large .the benefits of conditioning are clearly evidenced . 
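the two illustrative phenomena above ( a hidden signature variable and the de-correlation induced by conditioning ) are easy to reproduce numerically . the sketch below is a minimal version of the first example : the regression coefficients , the sample size and the marginal utility used for ranking are illustrative choices rather than the exact values behind the figures ; the coefficient of the sixth variable is chosen so that its marginal covariance with the response is exactly zero .

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, rho = 200, 2000, 0.5
beta = np.zeros(p)
beta[:5] = 1.0
beta[5] = -rho * beta[:5].sum()          # makes Cov(X_6, Y) = 0 by construction

# equi-correlated standard normal design: X = sqrt(rho)*Z + sqrt(1-rho)*noise
Z = rng.standard_normal(n)[:, None]
X = np.sqrt(rho) * Z + np.sqrt(1 - rho) * rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

# marginal (unconditional) fits: the hidden variable X_6 looks irrelevant
# (|X_j'y|/n is proportional to the marginal slope since Var(X_j) = 1)
marg = np.abs(X.T @ y) / n
print("rank of X_6 by |marginal fit|:", int((marg > marg[5]).sum()) + 1)

# conditional fits given C = {X_1, ..., X_5}: X_6 stands out again
C = X[:, :5]
def conditional_coef(j):
    D = np.column_stack([np.ones(n), C, X[:, j]])
    return np.linalg.lstsq(D, y, rcond=None)[0][-1]

cond = np.abs([conditional_coef(j) for j in range(5, p)])
print("rank of X_6 by |conditional fit|:",
      int((cond > cond[0]).sum()) + 1)   # cond[0] corresponds to X_6
```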
in this paper, we propose the conditional screening technique and formally establish the conditions under which it has a sure screening property .we also give an upper bound for the number of selected variables for each given threshold value .two data - driven methods for choosing the thresholding parameter are proposed to facilitate the practical use of the conditional screening technique .the rest of the paper is organized as follows . in section 2, we introduce the conditional sure independence screening procedure .the sure independence screening property and the uniform convergence of the conditional marginal maximum likelihood estimator are presented in section 3 . in section 4 ,two approaches are proposed to choose the thresholding parameter for csis .finally , we examine the performance of our procedure in section 5 on simulated and real data .the details of the proofs are deferred to the appendix .generalized linear models assume that the conditional probability density of the random variable given belongs to an exponential family where and are specific known functions in the canonical parameter .note that we ignore the dispersion parameter , since the interest only focuses on estimation of the mean regression function .however , it is easy to include a dispersion parameter . under model ( [ eq3 ] ), we have the regression function the canonical parameter is further parameterized as namely the canonical link is used in modeling the mean regression function .well known distributions in this exponential family include the normal , binomial , poisson , and gamma distributions . in the ultrahigh dimensional sparse linear model , we assume that the true parameter is sparse .namely , the set is small .our aim is to estimate the set and coefficient vector , as well as predicting the outcome .this is a more challenging task than just predicting as in many machine learning problems . when the dimensionality is ultrahigh , one often employs a screening technique first to reduce the model size .it is particularly effective in distributed computation for dealing with `` big data '' .conditional screening assumes that there is a set of variables that are known to be related to the response and we wish to recruit additional variables from the rest of variables , given by , to better explain the response variable . for simplicity of notation , we assume without loss of generality that is the set of first variables and is the remaining set of variables .we will use the notation and similar notation for and . assume without loss of generality that the covariates have been standardized so that given a random sample from the generalized linear model ( [ eq3 ] ) with the canonical link, the conditional maximum marginal likelihood estimator for is defined as the minimizer of the ( negative ) marginal log - likelihood where and is the empirical measure .denote from now on by the last element of .it measures the strength of the conditional contribution of given . 
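to fix ideas , here is a minimal sketch of the procedure for the logistic ( bernoulli ) case : for every candidate variable outside the conditioning set , the glm of the response on the conditioning variables plus that candidate is fitted by newton-raphson , and candidates are ranked by the absolute value of the last fitted coefficient , i.e. the quantity just defined ; the maximized log-likelihood is also returned , so a likelihood-based ranking ( the cmlr variant discussed below ) could be used instead . the plain newton iteration , the helper names and the thresholding interface are illustrative simplifications , not the authors' implementation .

```python
import numpy as np

def logistic_mle(D, y, n_iter=25):
    """Newton-Raphson MLE for logistic regression (canonical-link GLM);
    returns the fitted coefficients and the maximized log-likelihood.
    No step-halving: adequate for the small, well-conditioned fits below."""
    beta = np.zeros(D.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(D @ beta)))
        w = mu * (1.0 - mu) + 1e-10
        beta = beta + np.linalg.solve(D.T @ (w[:, None] * D), D.T @ (y - mu))
    eta = D @ beta
    return beta, np.sum(y * eta - np.logaddexp(0.0, eta))

def csis(X, y, cond_idx, top_k=None, threshold=None):
    """Conditional SIS: for each candidate j outside the conditioning set,
    fit the GLM of y on (intercept, X_C, X_j) and rank the candidates by the
    absolute value of the last fitted coefficient."""
    n, p = X.shape
    base = np.column_stack([np.ones(n), X[:, cond_idx]])
    candidates = [j for j in range(p) if j not in set(cond_idx)]
    util = {}
    for j in candidates:
        coef, _ = logistic_mle(np.column_stack([base, X[:, j]]), y)
        util[j] = abs(coef[-1])
    ranked = sorted(candidates, key=lambda j: -util[j])
    if threshold is not None:
        return [j for j in ranked if util[j] > threshold], util
    return ranked[:top_k], util

# toy usage: 2 conditioning variables, 200 candidates, 3 of them active
rng = np.random.default_rng(3)
n, p = 300, 202
X = rng.standard_normal((n, p))
logits = X[:, 0] + X[:, 1] + 1.5 * X[:, 50] - 1.5 * X[:, 120] + X[:, 180]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))
print(csis(X, y, cond_idx=[0, 1], top_k=5)[0])   # should surface 50, 120, 180
```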
in the above notation, we assume that the intercept is used and is incorporated in the vector .conditional marginal screening based on the estimated marginal magnitude is to keep the variables for a given thresholding parameter .namely , we recruit variables with large additional contribution given .this method will be referred to as conditional sure independence screening ( csis ) .it depends , however , on the scale of and to be defined in section 3.1 .a scale - free method is to use the likelihood reduction of the variable given , which is equivalent to computing after ignoring the common constant .the smaller , the more the variable contributes in presence of .this leads to an alternative method based on the likelihood ratio statistics : recruit additional variables according to where is a thresholding parameter .this method will be referred to as conditional maximum likelihood ratio screening ( cmlr ) .we emphasize that , the set of variables does not necessarily have to contain active variables .conditional screening only makes use of the fact that the effects of important variables are more visible in the presence of and the correlations of variables are weakened upon conditioning .this is commonly the case in many applications such as finance and biostatistics , where the variables share some common factors .it gives hidden signature variables a chance to survive .in fact , it was demonstrated in the introduction that conditioning can be beneficial even if the set is chosen randomly .our theoretical study gives a formal justifications of the iterated method proposed in fan and lv ( 2008 ) and fan _ et .in order to prove the sure screening property of our method , we first need some properties on the population level .let , , and with the expectation taken under the true model .then , is the population version of . to establish the sure screening property , we need to show that the marginal regression coefficient , the last component of , provides useful probes for the variables in the joint model andits sample version is uniformly close to the population counterpart .therefore , the vector of marginal fitted regression coefficients is useful for finding the variables in .since we are fitting marginal regressions , that is we are using only out of the original predictors , we need to introduce model misspecifications .thus , we do not expect that the marginal regression coefficient is equal to the joint regression parameter .however , we hope that when the joint regression coefficient exceeds a certain threshold , exceeds another threshold in most cases .therefore , the marginal conditional regression coefficients provide useful probes for the joint regression . by ( [ eq8 ] ) , the marginal regression coefficients the score equation where the second equality follows from the fact that . 
without using the additional variable ,the baseline parameter is given by and satisfies the equation we assume that the problems at marginal level are fully identifiable , namely , the solutions and are unique .to understand the conditional contribution , we introduce the concept of the conditional linear expectation .we use the notation which is the best linearly fitted regression within the class of linear functions .similarly , we use the notation to denote the best linear regression fit of by using .then , equation ( [ eq11 ] ) can be more intuitively expressed as note that the conditioning in this paper is really a conditioning linear fit and the conditional expectation is really the conditional linear expectation .this facilitates the implementation of the conditional ( linear ) screening in high - dimensional , but adds some technical challenges in the proof .let us examine the implication marginal signal , i.e. .when , by ( [ eq9 ] ) , the first components of , denoted by , should be equal to by uniqueness of equation ( [ eq11 ] ) . then , equation ( [ eq9 ] ) on the component entails using ( [ eq13 ] ) , the above condition can be more comprehensively expressed as this proves the necessary condition of the following theorem .[ thm1 ] for , the marginal regression parameters if and only if . proof of the sufficient part is given in appendix [ app thm1 ] . in order to have the sure screening property at the population level of equation ( [ eq8 ] ), the important variables should be conditionally correlated with the response , where . moreover ,if ( with ) is conditionally correlated with the response , the regression coefficient is non - vanishing .the sure screening property of conditional mle ( cmle ) , given by equation , will be guaranteed if the minimum marginal signal strength is stronger than the estimation error .this will be shown in theorem [ thm2 ] and requires condition [ cond1 ] .the details of the proof are relegated to appendix [ app thm2 ] .[ cond1 ] 1 . for ,there exists a positive constant and such that .2 . let be the random variable defined by then , uniformly in .note that , by strict convexity of , almost surely .when we are dealing with linear models , i.e. , then and condition [ cond1](ii ) requires that is bounded uniformly , which is automatically satisfied by the normalization condition .[ thm2 ] if condition [ cond1 ] holds , then there exists a such that in this section , we prove the uniform convergence of the conditional marginal maximum likelihood estimator and the sure screening property of the conditional sure independence screening method .in addition we provide an upper bound on the size of the set of selected variables .since the log - likelihood of a generalized linear model with the canonical link is concave , has a unique minimizer over at an interior point , where is the set over which the marginal likelihood is maximized . to obtain the uniform convergence result at the sample level ,a few more conditions on the conditional marginal likelihood are needed .[ cond2 ] 1 . for the fisher information , its operator norm , is bounded , where and is the euclidian norm .2 . there exists some positive constants and such that for sufficiently large and that 3 .the second derivative of is continuous and positive .there exists an such that for all : where is the indicator function and is an arbitrarily large constant such that for a given in , the function is lipschitz for all in with .4 . 
for all , we have for some positive , bounded from below uniformly over .the first three conditions given in condition [ cond2 ] are satisfied for almost all of the commonly used generalized linear models .examples include linear regression , logistic regression , and poisson regression .the first part of condition [ cond2](ii ) puts an exponential bound on the tails of . in the following theorem ,the uniform convergence of our conditional marginal maximum likelihood estimator is stated as well as the sure screening property of the procedure .the proof of this theorem is deferred to appendix [ app thm3 ] .[ thm3 ] suppose that condition [ cond2 ] holds .let , with given in condition [ cond2 ] . 1 . if , then for any , there exists a positive constant such that where .2 . if in addition , condition [ cond1 ] holds , then by taking with , we have for some constant , where the size of the set of nonsparse elements . note that the sure screening property , stated in the second conclusion of theorem 3 , depends only on the size of the set of nonsparse elements and not on the dimensionality or .this can be seen in the second conclusion above .this result is understandable since we only need the elements in to pass the threshold , and this only requires the uniform convergence of over .the truncation parameter appears on both terms of the upper bound of the probability .there is a trade - off on this choice . for the bernoulli model with logistic link , is bounded and the optimal order for is . in this case , the conditional sure independence screening method can handle the dimensionality which guarantees that the upper bound in theorem [ thm3 ] converges to zero .a similar result for unconditional screening is shown in .in particular , when the covariates are bounded , we can take , and when covariates are normal , we have that . for the normal linear model , following the same argument as in ,the optimal choice is where .then , conditional sure independence screening can handle dimensionality which is of order when .we have just stated the sure screening property of our csis method , that is .however , a good screening method does not only possess sure screening , but also retains a small set of variables after thresholding .below , we give a bound on the size of the selected set of variables , under the following additional conditions .[ cond3 ] 1 .the variance and are bounded .2 . the minimum eigenvalue of the matrix \beta \beta ] . as noted above , for the normal linear model , .condition [ cond3 ] ( ii ) requires that the minimum eigenvalue of be bounded away from zero . in general , by strict convexity of , almost surely .thus , condition [ cond3](ii ) is mild . for the linear model with , by ( [ eq11 ] ) , andhence since ] .note that for , .the selected variables are then in our numerical implementations , we do coupling five times , i.e. , and take .a similar idea for unconditional sis appears already in for additive models .in this section , we demonstrate the performance of csis on simulated data and two empirical datasets . we compare csis versus sure independence screening and penalized least squares methods in a variety of settings . 
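as a complement to the sketch given earlier for the csis fits , the snippet below implements our reading of the random-decoupling threshold described above . the description in the text is partly garbled , so the permutation scheme , the use of the maximum spurious statistic and the aggregation over the five replications are assumptions ; it also assumes the `csis` helper from the earlier sketch is in scope .

```python
import numpy as np

def decoupling_threshold(X, y, cond_idx, q=5, seed=0):
    """Data-driven threshold by random decoupling: permute the rows of the
    candidate covariates so that they carry no true signal about y given X_C,
    run the conditional marginal fits on the decoupled data, and record the
    largest spurious statistic; repeat q times and aggregate by the maximum
    (our reading of the garbled description above)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    cand = [j for j in range(p) if j not in set(cond_idx)]
    spurious_max = []
    for _ in range(q):
        X_dec = X.copy()
        X_dec[:, cand] = X[rng.permutation(n)][:, cand]   # break the coupling
        _, util = csis(X_dec, y, cond_idx, top_k=0)
        spurious_max.append(max(util.values()))
    return max(spurious_max)

# keep only candidates whose conditional contribution exceeds the threshold:
# selected, util = csis(X, y, [0, 1], threshold=decoupling_threshold(X, y, [0, 1]))
```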
in the simulation study, we compare the performance of the proposed csis with lasso ( ) and unconditional sis ( ) , in terms of variable screening .we vary the sample size from to for different scenarios and the number of predictors range from to .we present results with both the linear regression and the logistic regression .we evaluate different screening methods on simulated data sets based on the following criteria : 1 .mmms : median minimum model size of the selected models that are required to have a sure screening .the sampling variability of minimum model size ( mms ) is measured by the robust standard deviation ( rsd ) , which is defined as the associated interquartile range of mms divided by across 200 simulations .fp : average number of false positives across the 200 simulations , 3 .fn : average number of false negatives across 200 simulations .we consider two different methods for selecting thresholding parameters : controlling fdr and random decoupling as outlined in the previous section , and we present false negatives and false positives for each method .number of average false positives and false negatives are denoted by and for the random decoupling method and and for the fdr method . for the fdr method, we have chosen the number of tolerated false positives as .for the experiments with and , we do not report the corresponding results for lasso , since it is not proposed for variable screening , and the data - driven choice of regularization parameter for model selection is not necessarily optimal for variable screening .the first two simulated examples concern linear models introduced in the introduction , regarding the false positives and false negatives of unconditional sis .we report the simulation results in table [ tab1 ] in which the column labeled `` * example 1 * '' refers to the first setting and column labeled `` * example 2 * '' referred to the second setting .these examples are designed to fail the unconditional sis .not surprisingly , sis performs poorly in sure screening the variables , and conditional sis easily resolves the problem . also , we note that csis needs only one additional variable to have sure screening , whereas lasso needs 15 additional variables .both the fdr and the random decoupling methods return no false negatives under almost all of the simulations .in other words , both of the data - driven thresholding methods ensured the sure screening property . however , they tend to be conservative , as the numbers of the false positives are high .the fdr approach has a relatively small number of false positives when used for conditional sure independent screening . for these settings ,fdr method was found to be less conservative than the random decoupling method ..the mmms , its rsd ( in parentheses ) , the `` false negative '' and `` false positive '' for the linear model with and . 
[ cols="^,^,^,^,^,^ " , ] ccccccc + & & & & & & + 0.00 & 300 & 215 ( 312 ) & 0.19 & 5.78 & 23.06 & 1.77 + 0.20 & 300 & 27 ( 14 ) & 73.22 & 0.02 & 109.56 & 0.00 + 0.40 & 300 & 49 ( 21 ) & 88.19 & 0.00 & 110.15 & 0.00 + 0.60 & 300 & 56 ( 20 ) & 88.17 & 0.00 & 110.00 & 0.00 + 0.80 & 300 & 68 ( 19 ) & 88.20 & 0.00 & 110.34 & 0.00 + + & & & & & & + 0.00 & 300 & 87 ( 173 ) & 20.15 & 1.24 & 24.03 & 1.11 + 0.20 & 300 & 19 ( 13 ) & 49.25 & 0.14 & 53.87 & 0.11 + 0.40 & 300 & 34 ( 23 ) & 67.82 & 0.17 & 61.72 & 0.31 + 0.60 & 300 & 43 ( 24 ) & 77.36 & 0.21 & 53.83 & 1.01 + 0.80 & 300 & 66 ( 55 ) & 78.33 & 0.51 & 36.16 & 3.42 + ccccc + & & & & + 0.00 & 300 & 210 ( 312 ) & 20.18 & 0.08 + 0.20 & 300 & 28 ( 17 ) & 107.08 & 0.00 + 0.40 & 300 & 47 ( 24 ) & 107.82 & 0.00 + 0.60 & 300 & 60 ( 22 ) & 107.47 & 0.00 + 0.80 & 300 & 67 ( 19 ) & 107.30 & 0.00 + + & & & & + 0.00 & 300 & 83 ( 173 ) & 20.18 & 1.21 + 0.20 & 300 & 20 ( 14 ) & 45.27 & 0.20 + 0.40 & 300 & 39 ( 30 ) & 53.48 & 0.49 + 0.60 & 300 & 71 ( 87 ) & 49.47 & 1.15 + 0.80 & 300 & 402 ( 561 ) & 35.42 & 3.43 + ccccccc + & & & & & & + 0.00 & 500 & 318 ( 7038 ) & 12.04 & 1.22 & 51.32 & 0.79 + 0.20 & 500 & 38 ( 428 ) & 32.47 & 0.57 & 68.46 & 0.38 + 0.40 & 500 & 38 ( 12 ) & 38.66 & 0.27 & 73.42 & 0.19 + 0.60 & 500 & 38 ( 12 ) & 41.99 & 0.16 & 76.11 & 0.10 + 0.80 & 500 & 35 ( 12 ) & 43.84 & 0.03 & 77.38 & 0.02 + + & & & & & & + 0.00 & 500 & 13 ( 354 ) & 5.96 & 0.66 & 42.51 & 0.49 + 0.20 & 500 & 15 ( 16 ) & 14.51 & 0.39 & 49.79 & 0.27 + 0.40 & 500 & 16 ( 13 ) & 19.11 & 0.24 & 51.68 & 0.22 + 0.60 & 500 & 19 ( 10 ) & 22.80 & 0.21 & 51.78 & 0.24 + 0.80 & 500 & 19 ( 10 ) & 26.39 & 0.14 & 46.49 & 0.64 + ccccc + & & & & + 0.00 & 500 & 309 ( 7030 ) & 14.06 & 0.22 + 0.20 & 500 & 37 ( 255 ) & 34.10 & 0.09 + 0.40 & 500 & 35.5 ( 11 ) & 40.50 & 0.05 + 0.60 & 500 & 35.5 ( 12 ) & 42.89 & 0.03 + 0.80 & 500 & 33.5 ( 14 ) & 44.39 & 0.00 + + & & & & + 0.00 & 500 & 25 ( 892 ) & 5.96 & 0.14 + 0.20 & 500 & 13 ( 62 ) & 12.38 & 0.09 + 0.40 & 500 & 13 ( 22 ) & 14.17 & 0.08 + 0.60 & 500 & 15.5 ( 17 ) & 13.75 & 0.11 + 0.80 & 500 & 22 ( 72 ) & 9.30 & 0.28 + in this section ,we evaluate the performance of csis under three different conditioning sets : the set consists of ( i ) only active variables , ( ii ) both active and inactive variables and ( iii ) only ( randomly chosen ) inactive variables .we consider a different correlation structure where the number of correlated variables is significantly large . for this experiment ,example 5 , we set and .we generate covariates from equation and choose the constants such that the correlation and among the first 2000 variables and .we fix .the following three conditioning sets are considered ( i ) ; ( ii ) and ( iii ) \{random choice of 4 inactive variables}. more precisely , consists of 3 randomly chosen variables from the first two thousand variables which are correlated and 1 randomly chosen inactive variable from the rest .note that variables 1 and 2 are active variables whereas variables 5 and 2001 are inactive .we have simulation results using both the conditional mle ( [ eq5 ] ) and conditional mlr ( [ eq6 ] ) . 
to save the space , we only present the results using the conditional mle for the normal model in table [ tabi1 ] and for the binomial model in table [ tabi3 ] .the results show clearly that the benefits of conditional screening are significant even when variables are wrongly chosen .csis reduces the minimum model size at least by half , and for most of the cases it uses 10 times as less variables as the unconditioning one .csis performs well even if some of the conditioned variables are inactive or even all are randomly selected inactive variables . for the worst cases , mis - conditioning " forced csis to recruit twice as many variables , and for most of the cases , the difference is not excessive . in all cases , csis performs significantly better than the unconditioning case .ccccccc + & & & & & & + 0.00 & 200 & 35 ( 80 ) & 98.20 & 0.28 & 20.16 & 0.63 + 0.20 & 200 & 1601 ( 812 ) & 1854.75 & 0.34 & 1537.35 & 0.51 + 0.40 & 200 & 2038 ( 267 ) & 2083.30 & 0.45 & 2010.73 & 0.63 + 0.60 & 200 & 2108 ( 470 ) & 2088.11 & 0.52 & 2010.59 & 0.73 + 0.80 & 200 & 2193 ( 663 ) & 2092.08 & 0.58 & 2010.59 & 0.83 + + & & & & & & + 0.00 & 200 & 6 ( 8) & 98.17 & 0.07 & 23.51 & 4.00 + 0.20 & 200 & 13 ( 47 ) & 440.33 & 0.04 & 143.85 & 3.90 + 0.40 & 200 & 75 ( 215 ) & 1001.84 & 0.03 & 336.05 & 3.67 + 0.60 & 200 & 216 ( 358 ) & 1372.48 & 0.01 & 379.81 & 3.64 + 0.80 & 200 & 423 ( 429 ) & 1518.04 & 0.00 & 234.19 & 3.79 + + & & & & & & + 0.00 & 200 & 6 ( 7 ) & 98.29 & 0.08 & 23.44 & 4.00 + 0.20 & 200 & 21 ( 75 ) & 565.76 & 0.03 & 212.80 & 3.75 + 0.40 & 200 & 152 ( 413 ) & 1367.95 & 0.03 & 642.06 & 3.33 + 0.60 & 200 & 443 ( 676 ) & 1766.88 & 0.01 & 830.50 & 3.12 + 0.80 & 200 & 868 ( 643 ) & 1860.01 & 0.00 & 594.86 & 3.40 + + & & & & & & + 0.00 & 200 & 44 ( 90 ) & 100.33 & 0.30 & 23.23 & 2.31 + 0.20 & 200 & 481 ( 687 ) & 1022.85 & 0.24 & 499.31 & 1.50 + 0.40 & 200 & 1322 ( 752 ) & 1806.40 & 0.20 & 1147.03 & 0.86 + 0.60 & 200 & 1652 ( 462 ) & 2003.43 & 0.10 & 1345.32 & 0.63 + 0.80 & 200 & 1716 ( 297 ) & 2037.08 & 0.03 & 1103.83 & 0.94 + ccccccc + & & & & & & + 0.00 & 400 & 24 ( 59 ) & 97.39 & 0.21 & 27.29 & 0.48 + 0.20 & 400 & 1606 ( 776 ) & 1933.60 & 0.20 & 1725.60 & 0.39 + 0.40 & 400 & 2029 ( 101 ) & 2082.82 & 0.30 & 2016.35 & 0.52 + 0.60 & 400 & 2070 ( 258 ) & 2087.22 & 0.45 & 2015.59 & 0.64 + 0.80 & 400 & 2096 ( 429 ) & 2090.86 & 0.51 & 2015.07 & 0.66 + + & & & & & & + 0.00 & 400 & 8 ( 16 ) & 98.20 & 0.10 & 31.98 & 4.00 + 0.20 & 400 & 22 ( 75 ) & 361.04 & 0.10 & 138.73 & 3.85 + 0.40 & 400 & 107 ( 223 ) & 743.80 & 0.08 & 247.20 & 3.74 + 0.60 & 400 & 289 ( 439 ) & 1022.71 & 0.10 & 246.67 & 3.75 + 0.80 & 400 & 637 ( 528 ) & 1142.79 & 0.16 & 133.97 & 3.82 + + & & & & & & + 0.00 & 400 & 7 ( 17 ) & 98.33 & 0.11 & 31.31 & 4.00 + 0.20 & 400 & 27 ( 114 ) & 460.60 & 0.11 & 196.27 & 3.83 + 0.40 & 400 & 176 ( 429 ) & 1045.28 & 0.08 & 456.86 & 3.52 + 0.60 & 400 & 578 ( 759 ) & 1394.61 & 0.10 & 508.52 & 3.55 + 0.80 & 400 & 910 ( 673 ) & 1480.91 & 0.10 & 291.69 & 3.71+ + & & & & & & + 0.00 & 400 & 309 ( 919 ) & 100.00 & 0.89 & 14.83 & 2.69 + 0.20 & 400 & 777 ( 1129 ) & 529.20 & 0.66 & 149.64 & 2.12 + 0.40 & 400 & 1285 ( 1075 ) & 1087.79 & 0.56 & 333.27 & 1.96 + 0.60 & 400 & 1572 ( 977 ) & 1383.80 & 0.58 & 336.54 & 2.06 + 0.80 & 400 & 1629 ( 892 ) & 1485.02 & 0.57 & 178.37 & 2.79 + in this section , we demonstrate how csis can be used to do variable selection with an empirical dataset .we consider the leukemia dataset which was first studied by and is available at http://www.broad.mit.edu/cgi-bin/cancer/datasets.cgi .the data 
come from a study of gene expression in two types of acute leukemias , acute lymphoblastic leukemia ( all ) and acute myeloid leukemia ( aml ) .gene expression levels were measured using affymetrix oligonucleotide arrays containing 7129 genes and 72 samples coming from two classes , namely 47 in class all and 25 in class aml . among these 72 samples , 38 ( 27 all and 11 aml )are set to be training samples and 34 ( 20 all and 14 aml ) are set as test samples . for this dataset we want to select the relevant genes , and based on the selected genes estimate whether the patient has all or aml .aml progresses very fast and has a poor prognosis .therefore , a consistent classification method that relies on gene expression levels would be very beneficial for the diagnosis . in order to choose the conditioning genes ,we take a pair of genes described in that result in low test errors .first is zyxin and the second one is transcriptional activator hsnf2b .both genes have empirically high correlations for the difference between people with aml and all . after conditioning on the aforementioned genes ,we implement our conditional selection procedure using logistic regression .using the random decoupling method , we select a single gene , tcrd ( t - cell receptor delta locus ) .although this gene has not been discovered by the all / aml studies so far , it is known to have a relation with t - cell all , a subgroup of all ( ) . by using only these three genes , we are able to obtain a training error of 0 out of 38 , and a test error of 1 out of 34 . similar studies in the past using sparse linear discriminant analysis ornearest shrunken centroids methods have obtained test errors of 1 by using more than 10 variables .we conjecture that this is due to the high correlation between the zyxin gene and others , and that this correlation masks the information contained in the tcrd gene . in this sectionwe illustrate the advantages of conditional sure independence screening on a factor model with financial data . from the website http://mba.tuck.dartmouth.edu/pages /facultywe obtain 30 portfolios formed with respect to their industries .the returns for each portfolio are denoted by ( for ) .the fama - french three - factor model suggests that these returns follow the following equation where is the excess return of the proxy market portfolio ( given by the difference of the one - month t - bill yield and the value weighted return of all stocks on nyse , amex and nasdaq ) , is the difference between the return of small and big companies ( measured by the difference of returns of two portfolios , one with companies that have small market cap and one with companies with large market cap ) and finally is the difference of return from value companies and growth companies .this model was first proposed by and has been extensively analyzed since then . since this seminal work ,many other factors have been considered . in our numerical example , we used screening with the permutation test to detect if other factors are necessary .besides the three factors mentioned above , we consider the momentum factor as an additional factor .this gives us 4 factors that are conditioned upon in csis .for each given industrial portfolio , we also consider the returns from the other 29 portfolios as potential prediction factors .we use daily returns data from 1/3/2002 to 12/31/2007 . for each portfolio( 30 in total ) , we first consider the marginal screening without conditioning . 
on average , for each portfolio , marginal screening picks 25.3 among 29 other industrial portfolios as predictors .this is mainly due to correlations between the returns of different portfolios .we next consider conditional marginal screening , in which the three fama - french factors and the momentum factor are conditioned upon .as expected , the number of the selected variables decreases significantly to an average of 4.8 .that is , about 4.8 portfolios on average can still have some potential prediction power in presence of the aforementioned four major factors .the marginal and conditional fits of the values are given in figure [ fig factors ] .the black parts indicate the variables which are not included .it is seen from these results that , conditional screening is more advantageous compared to marginal screening if few of the factors are known to be important .furthermore , when there is significant correlation between some of the factors , as shown in the introduction , marginal screening considers most of the factors as relevant . in almost all financial models ,stock returns are correlated with the return of the market portfolio .therefore , in variable selection for financial factor models with many variables , one should always consider the returns conditional on the main driving forces of the market .the necessary part has already been proven in section 3.1 . to prove the sufficient condition , we first note that condition is equivalent to as shown in section 3.1 .this and ( [ eq11 ] ) imply that is a solution to equation ( [ eq9 ] ) . by the uniqueness, it follows that , namely .this completes the proof .we denote the matrix as and partition it as = \left [ \begin{array}{ccc } \omega_{\mc,\mc } & \omega_{\mc , j}\\ \omega_{\mc , j}^t & \omega_{j , j } \end{array } \right].\ ] ] from the score equations , i.e. equations ( [ eq9 ] ) and ( [ eq11 ] ) , we have that using the definition of , the above equation can be written as by letting , we have that or equivalently furthermore , by ( [ eq13 ] ) , we can express as it follows from ( [ eq12 ] ) that using the definition of again , we have by ( [ eqa2 ] ) , we conclude that 1 .the fisher information \left[\frac{\partial}{\partial{\mbox{\boldmath }}}l\left({\mbox{\bf x}}^{t}{\mbox{\boldmath }},y\right)\right]^{t}\right\ } , \ ] ] is finite and positive definite at .furthermore , exists .2 . the function is lipschitz with a positive constant for any in , and in with and arbitrarily large constants .furthermore , there exists a constant such that \left(1-i_{n}\left({\mbox{\bf x}},y\right)\right)\right|\leq o\left(p / n\right),\ ] ] where with constant defined below .the function is convex in and \right|\geq v_{n}\left\vert { \mbox{\boldmath }}-{\mbox{\boldmath }}_{0}\right\vert ^{2},\ ] ] for some positive constants , and all . 
by lemma 1 of ,condition [ cond2](ii ) gives the bound hence , we have using this and theorem [ thm quasi ] , letting , we have for some positive constant .then , by bonferroni s inequality , we obtain this proves the first conclusion .the second statement can be shown by considering the event on the event , by theorem [ thm2 ] , it holds that for all by letting , on the event we have the sure screening property , that is .the probability bound can be shown by using the first result along with bonferroni s inequality over all chosen , which gives .\ ] ] this completes the proof .the first part of the proof is similar to that of theorem 5 of .the idea of this proof is to show that if this holds , the size of the set can not exceed for any .thus on the event the set is a subset of the set , whose size is bounded by . if we take , we obtain that finally , by theorem [ thm3 ] , we obtain that and therefore the statement of the theorem follows .we now prove by using and ( [ eqa5 ] ) . by condition [ cond3](ii ) ,the schur s complement is uniformly bounded from below .therefore , by ( [ eqa5 ] ) , we have for a positive constant .hence , we need only to bound the conditional covariance . by ( [ eqa4 ] ) , ( [ eq9 ] ) and lipschitz continuity of , we have \bigr |.\end{aligned}\ ] ] where .writing the last term in the vector form , we need to bound from the property of the least - squares , we have = \operatorname{\mathbb{e } } [ { \mbox{\bf x}}_{{\mathcal{d } } } { \mbox{\bf x}}_{{\mathcal{c}}}^t ] \beta \beta \beta \sigma \beta ] due to condition [ cond3 ] .therefore , we have that { \right)}{\right)},\ ] ] and that gives us the desired result . with the given conditions , by theorem 1 , we have . since includes the intercept term , .it is known that ( for ) has an asymptotically standard normal distribution ( gao et al . , 2008 , heyde , 1997 ) .then , it follows that for a fama , e.f . , and french , k.r .( 1993 ) , common risk factors in the returns on stocks and bonds , " 33 , 356 .fan , j. , feng , y. , and song , r. ( 2011 ) , nonparametric independence screening in sparse ultra - high - dimensional additive models , " 106 , 544557 .golub , t. , slonim , d. , tamayo , p. , huard , c. , gaasenbeek , m. , mesirov , j. , coller , h. , loh , m. , downing , j. , caligiuri , m. , bloomfield , c. , and lander , e. ( 1999 ) , molecular classification of cancer : class discovery and class prediction by gene expression monitoring , " 286 , 531537 .szczepaski , t. , van der velden , v.h . ,raff , t. , jacobs , d.c . ,van wering , e.r . ,brggemann , m. , kneba , m. , and van dongen , j.j .( 2003 ) , comparative analysis of t - cell receptor gene rearrangements at diagnosis and relapse of t - cell acute lymphoblastic leukemia ( t - all ) shows high stability of clonal markers for monitoring of minimal residual disease and reveals the occurrence of second t - all , " 17 , 21492156 .
independence screening is a powerful method of variable selection for 'big data' when the number of variables is massive. commonly used independence screening methods are based on marginal correlations or variations of them. in many applications, researchers often have prior knowledge that a certain set of variables is related to the response. in such a situation, a natural assessment of the relative importance of the other predictors is the conditional contribution of each individual predictor in the presence of the known set of variables. this leads to conditional sure independence screening (csis). conditioning helps reduce both the false positive and the false negative rates in the variable selection process. in this paper, we propose and study csis in the context of generalized linear models. for ultrahigh-dimensional statistical problems, we give conditions under which sure screening is possible and derive an upper bound on the number of selected variables. we also spell out the situation under which csis yields model selection consistency. moreover, we provide two data-driven methods to select the thresholding parameter of conditional screening. the utility of the procedure is illustrated by simulation studies and the analysis of two real data sets. _keywords and phrases_: false selection rate; generalized linear models; sparsity; sure screening; variable selection.
in this article, we study the last passage time to a specific state and the time reversal of linear diffusions, and consider their applications to credit risk management. more specifically, we are interested in a certain threshold, denoted by , of a company's leverage ratio, an exit from which means an entry into the danger zone and would lead to insolvency without a return to that point. in other words, we study the last passage time to before bankruptcy. it is often the case that companies in financial distress cannot recover once the leverage ratio deteriorates to a certain level: the lack of creditworthiness makes it next to impossible to continue usual business relations with their contractors, suppliers, customers, creditors, and investors, which in turn further endangers the company's solvency. such companies then go bankrupt, unable to improve the leverage ratio back to . in this sense, this can be considered a precautionary level, the passage of which triggers an alarm. in this paper we assume the leverage ratio of the company is a function of a linear diffusion; thus, is equivalent to a certain threshold for this linear diffusion. below, we will continue the discussion using . + we address three problems. first, we fix an arbitrary level as , an entrance point to the danger zone, and study the distribution of the last passage time to this point. second, we derive the distribution of the time until insolvency after the last passage time to this fixed danger-zone entrance point. finally, we suggest how company managers can choose this level as the solution to an optimization problem. as will be illustrated below with an empirical analysis, the information provided by the last passage time to is rich, offering a useful risk management tool. the leverage process of the company is a transient linear diffusion in our setting. in solving the above three problems, we first derive the formulas for a _general transient diffusion_ with certain characteristics (propositions [prop:1], [prop:reversal], and [prop:3]), and then apply these results to the leverage process of our interest. specifically, we will be dealing with a transient diffusion process that has an attracting left boundary with killing and a natural right boundary that can be attracting or non-attracting. to our knowledge, this article is the first to present a framework that provides an analytical tool for credit management as an application of time reversal and last passage times of general linear diffusions. as to our first problem (discussed in section [sec:last-passage-time]), last passage times of standard markov processes are studied in . they study the joint distribution of the process at the last exit time from a transient set and the last exit time itself, using a potential kernel of the additive functional associated with the set .
study the density of last exit time of transient linear diffusions on positive axis with scale function satisfying and using tanaka s formula .they apply the result to bessel processes .last passage time of a firm s value is discussed in .they analyze a value of defaultable claims which involve rebate payments at some random time .the random time when the payment is made is assumed to be last passage time of the value of the firm to a fixed level .we shall employ some techniques for time reversal of diffusions in that uses -transform methods because the technique is a fast and easy way to obtain an explicit formula , compared to other studies .we shall provide a comprehensive treatment of various cases in proposition [ prop:1 ] .see also sharpe for a specific example of time reversal and a recent account by chung and walsh .we can use the last passage time density for a premonition of imminent absorption : _ how dangerous would it be if the process hits that level ? _ specifically , based on the density where is the last time to visit level , we compute , the probability that the premonition occurs within one year . we demonstrate this by using actual company data . + after deriving the transition density of the last passage time to a certain level , we analyze the time left until bankruptcy after the last passage time in section [ sec : mean - time - after - passage ] .in contrast to previous literature , we study the time after the last passage time to level but not the total time spent in a danger zone .this is one of the novel features of our paper .nevertheless , we are citing some articles that analyze the occupation time of the danger zone since this is somehow related to our paper . models the surplus of the company using brownian motion with positive drift and uses omega model to analyze occupation time in red " .this model assumes that there is a time interval between surplus of the company becoming negative for the first time and the bankruptcy . study the total time brownian motion spends below zero and the relation between this laplace transform of this occupation time and the probability of going bankrupt in finite time . model the firm value by time - homogeneous diffusion with level dependent drift and volatility parameters and set as a reorganization barrier , as liquidation barrier , and as the duration of the grace period decided by the court .they study the probability of the event where denoted the first time the first has stayed below for time period .other studies concerning total time in red are and .in contrast to these studies , our focus is not on the occupation time of the danger zone but rather on knowing the distribution of the lifetime left after the last passage time to a certain level .this is a more complicated task ( involving two random times ) , but it is essential information for risk management .finally , in section [ sec : endogenizing ] , we will suggest a methodology how the specific level for the last passage time should be chosen as appropriate one .for this , we present an optimization problem using occupation time distribution and excursion theory , which we believe is new .( see egami and yamazaki which is a different optimization problem for this purpose but does not use last passage argument . 
)let us consider the probability space , where is the set of all possible realizations of the stochastic economy , and is a probability measure defined on .we denote by the filtration satisfying the usual condition and consider the diffusion process adapted to .the state space of is and we adjoint an isolated point .we call a sample path from with coordinates . the lifetime of is defined by .suppose that a firm has asset with its market value .we assume the asset process follows geometric brownian motion with parameter and , and the debt process grows at the rate of : where we set initial values and , respectively . by assuming , we define the leverage process as .then we set the insolvency time of the firm as since and , and implies which means that the insolvency time is the first passage time of brownian motion with drift to state .consequently , our study about the leverage process can be reduced to the study of the brownian motion with drift on the state space . since the stopping time is predictable , it is possible and may be a good idea to set a threshold level for the leverage process , so that when it passes this point from above , the firm should prepare and start precautionary measures to avoid possible subsequent insolvency . means and we can again study the passage time to this arbitrary for the brownian motion with drift . while we are interested in this specific problem , we rather discuss and prove our results for a generic diffusion .we wish to stress that our assertions below hold in a general setting and thus are applicable to other problems .we are interested in the last passage time of brownian motion with drift to the state : which is if the set in the brace is empty .the _ scale function _ for brownian motion with drift is for which is the state space in our case ( ) .the left boundary is attracting since .the right boundary can be attracting ( when ) or non - attracting ( when ) . before finding the distribution of the last passage time ,let us introduce some objects that are needed .let on with and ( or ) be a transient diffusion process with lifetime .the left boundary is regular with killing and the right boundary is natural .we have the following proposition . [ prop:1 ] for any satisfying , the distribution of when the company goes bankrupt in finite time is where is the transition density of the brownian motion with drift to be killed at : with respect to the speed measure . 
for , the distribution of when the company goes bankrupt in finite time has atom at .the continuous part is given by .we will work with a general transient diffusion with scale function satisfying and .the latter is true when is a brownian motion with a negative drift ( ) .this case includes that is a regular point with killing and is a natural boundary .hence , we so assume .the functions are minimal excessive .the green function for is where is the transition density .let us consider the minimal excessive function -transform of is a regular diffusion with transition density function for a borel set .the -transform ( or -diffusion ) is identical in law with when conditioned to hit and killed at its last exit time from .that is , for all , .this is because the density of such conditioned diffusion with respect to the speed measure satisfies }{{\mathbb{p}}_u(\lambda_x>0)}=\frac{{\mathbb{e}}_u[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{x_t\in { { \rm d}}v}{\mbox{1}\hspace{-0.25em}\mbox{l}}_{t<\lambda_x}]}{{\mathbb{p}}_u(\lambda_x>0)}\\ & = \frac{{\mathbb{e}}_u[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{x_t\in { { \rm d}}v}{\mathbb{e}}_u[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{t<\lambda_x}\mid \mathcal{f}_t]]}{{\mathbb{p}}_u(\lambda_x>0)}=\frac{{\mathbb{e}}_u[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{x_t\in { { \rm d}}v}{\mathbb{e}}_u[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{\lambda_x\circ \theta_t>0}\mid \mathcal{f}_t]]}{{\mathbb{p}}_u(\lambda_x>0)}\\ & = \frac{{\mathbb{e}}_u[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{x_t\in { { \rm d}}v}]e_v[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{\lambda_x>0}]}{{\mathbb{p}}_u(\lambda_x>0)}=\frac{p(t;u , v)m({{\rm d}}v)k_x(v)}{k_x(u)}\end{aligned}\ ] ] where denotes speed measure . first , we are interested in the case when starts at . then , will hit a.s . andthe last exit time distribution from and the lifetime distribution of the abovementioned -transform coincide .thus , we wish to compute the lifetime distribution of -diffusion .the arguments below follow proposition 4 in salminen .let us consider the diffusion in space - time .choose a point as a reference point in time and space .we set the martin function ( i.e. , the minimal space - time excessive function ) with support at a point as we claim that , for , if we integrate the minimal space - time excessive function along the line with respect to the right - hand side of , we should have a space - time excessive function such that .indeed , by the representation of the excessive function where is a set of the borel -algebra of the martin compactification of the state space and is the unique spectral measure at .] of , we must have . in our present case ,we note that the original diffusion is killed at and thereby read and . is given in .we obtain remember that is the lifetime of -transform .now let .we shall show that the lifetime of transform and the last exit time from for -transform have the same distribution in this case ( ) .note that and the transition density function of -diffusion with respect to its speed measure is given by then , the scale function and the speed measure of the are written as and the transition density takes the form in fact , -transform of and -transform of starting from are the same .this is because converges to a.s .and visits level a.s .. hence , we can argue as in the previous case ( i.e. , ) with replaced by . 
in particular , the last passage time to the distribution this is the distribution of the lifetime of .the last passage time from for our original has atom at , since may not hit at all .the continuous part is given by let us assume that the scale function for the diffusion satisfies and . the latter is true when is a brownian motion with a positive drift .the left boundary is regular with killing and the right boundary is natural .then , the minimal excessive functions are and with ( for example , see theorem 2.10 in ) .the green function for is where is the transition density .we observe that . for the diffusion in this case , the condition that the bankruptcy occurs in a finite time ( ) is equivalent to . by the markov property , the transition density of this conditioned diffusion is }{{\mathbb{p}}_u[t_l < t_r]}\\ & = p(t;u , v)m(v){{\rm d}}v\dfrac{s(r)-s(v)}{s(r)-s(l)}\div\dfrac{s(r)-s(u)}{s(r)-s(l)}=p(t;u , v)m(v){{\rm d}}v\dfrac{s(r)-s(v)}{s(r)-s(u)}\end{aligned}\ ] ] where denotes speed density .so , the transition density of the conditioned diffusion with respect to speed measure is given by using a taylor expansion of scale functions ( see ) , we obtain the scale function and the speed density for the conditioned diffusion : and we will consider the -transform of . again , this transform is a regular diffusion with transition density function for a borel set and is identical in law with when conditioned to hit and killed at its last exit time from . and and is similar to the diffusion in case 1 . when , will a.s .hit and the lifetime distribution of the -transform and the last exit time ( ) distribution from for coincide . from the result of case 1, we have now , and the last exit time distribution when the company almost surely goes bankrupt is given by and the result is the same as in case 1 .next , let .we consider -transform of the conditioned diffusion .this conditioned diffusion is similar to the diffusion with in case 1 .its last passage time from has atom at and we get the density of the continuous part from : finally , and it is the same as in case 1 . is the equation in , where . when , the company may not go bankrupt in finite time and the probability of this event is where we have used . belowwe illustrate how the last passage time can be useful .we analyze companies that actually went bankrupt .we choose american apparel inc .that filed for bankruptcy in october , 2015 .we define a dangerous threshold as a level when debt makes up of assets , i.e. . the graph displays the probability that starting at the end of the indicated month , the last passage time of this threshold will be less than or equal to 1 year . for this purpose , we estimate the necessary parameters by the method in , which assumes the company equity is a european call option written on company assets with a strike price equal to a certain level of debt .let us take an example . at the end of september 2013 ,the estimated drift and volatility parameters ( and ) of the company s asset process are and , respectively .we employed the method in , , and by using the equity and debt data of the previous 6 months ( see table [ tbl : american - apparel ] ) .then we computed the probability shown in figure [ fig : american - apparel ] by setting the end of each period as a starting point , i.e. 
brownian motion starts from 0 .we set debt level as a sum of revolving credit facilities and current " , cash overdraft " , current portion of long - term debt " , subordinated notes payable " , and one half of total long term debt " .we use 1 year treasury yield as risk - free rate .we note that there is a rise in the graph in 2013 , which is consistent with the fact that american apparel had problems with a new distribution center in 2013 .let us emphasize that by varying the level , one can obtain more detailed information about credit conditions ( see table [ tbl : american - apparel - alpha ] and figure [ fig : last - passage - time - alpha ] ) .reviewing these numbers , the management may finetune the company s strategy , investment , and operations . to the contrary ,the default probability within 1 year , i.e. , is just one number .hence , the last passage time provides information supplementary to default probability and important in its own right . for american apparel inc . for suitable , the graph displays for brownian motion with drift . ] for american apparel inc . using the end of december 2013 as starting point ( see table [ tbl : american - apparel ] ) .the horizontal axis displays and the vertical axis displays ) ] from by noting .similarly , using , we get =1 ] with be the running maximum .it is well known that the process is the local time of at point since means that updates ( see bertoin ) .it follows that see avram et al. . indeed ,when is updated , we have .when and if is true , then .when the maximum process is updated , hits the boundary .hence the insolvency ( i.e. , ) is equivalent to that is , the height of excursion becomes for the first time .let us denote that occasion by note also that , the first passage time of to state .by the strong markov property of we have & = \int^{\infty}_y{\mathbb{e}}_{y , y}\left[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{\ { s_{\tau_a}\in{{\rm d}}m\}}e^{-q\tau_a}\right]\\ & = \int^{\infty}_y{\mathbb{e}}_{y , y}\left[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{\{t_m\leq \tau_a\}}e^{-qt_m}{\mathbb{e}}_{m , m}\left[e^{-q \tau_a}{\mbox{1}\hspace{-0.25em}\mbox{l}}_{\ { s_{{\tau_a}}\in { { \rm d}}m\}}\right]\right]\nonumber\\ & = \int^{\infty}_y{\mathbb{e}}_{y , y}\left[{\mbox{1}\hspace{-0.25em}\mbox{l}}_{\{s_{\tau_a}\geq m\}}e^{-qt_m}\right]{\mathbb{e}}_{m , m}\left[e^{-q\tau_a}{\mbox{1}\hspace{-0.25em}\mbox{l}}_{\{s_{\tau_a}\in { { \rm d}}m\}}\right]\nonumber.\end{aligned}\ ] ] for writing the above expectations in an explicit way , we refer the reader to egami and oryu : in particular the explanation that leads to proposition 3.1 . then we have =&\int_y^\infty\frac{\varphi(y)}{\varphi(m-(\alpha - c))}\exp\left ( -\int^m_y\frac{f'(u){{\rm d}}u}{f(u)-f(u-(\alpha - c ) ) } \right)\\ & & \times\frac{f'(m)}{f(m)-f(m-(\alpha - c))}{{\rm d}}m \nonumber\end{aligned}\ ] ] where and are two fundamental solutions of the o.d.e . and the continuous and strictly increasing function is defined by . for the case of our original diffusion , we have with with in , while and with in . a straightforward algebra completes the proof .f. avram , z. palmowski , and m. r. pistorius . on the optimal dividend problem for a spectrally negative lvy process ._ , 17:0 156180 , 2007 .issn 1050 - 5164 .doi : 10.1214/105051606000000709 .url http://dx.doi.org/10.1214/105051606000000709 .
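As a rough numerical cross-check of the kind of probability plotted above for American Apparel, the following Monte Carlo sketch estimates the probability that the last passage time of a Brownian motion with drift to a given threshold falls within one year, conditional on the process being killed at 0 (insolvency) before a large time cap. This is a crude discretised approximation, not the closed-form expression of proposition [prop:1] used in the paper; all parameter names and numerical values are illustrative.

```python
import numpy as np

def last_passage_within(x0, level, mu, sigma, horizon=1.0,
                        dt=0.02, t_max=30.0, n_paths=5000, seed=0):
    """Estimate P(last passage time to `level` <= horizon) for a Brownian
    motion with drift `mu` and volatility `sigma`, started at x0 and killed
    when it first hits 0, conditional on being killed before t_max.
    A path that never reaches the threshold is counted as last passage time 0."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    sqrt_dt = np.sqrt(dt)
    hits, absorbed = 0, 0
    for _ in range(n_paths):
        x, t, last_visit = x0, 0.0, 0.0
        for _ in range(n_steps):
            x_new = x + mu * dt + sigma * sqrt_dt * rng.standard_normal()
            t += dt
            if (x - level) * (x_new - level) <= 0.0:   # path crosses the threshold
                last_visit = t
            x = x_new
            if x <= 0.0:                               # killed: insolvency time reached
                absorbed += 1
                hits += last_visit <= horizon
                break
    return hits / max(absorbed, 1)

print(last_passage_within(x0=1.0, level=0.6, mu=-0.1, sigma=0.4))
```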
this paper develops a risk management framework for companies, based on the leverage process (a ratio of the company's asset value to its debt), by analyzing the characteristics of general linear diffusions with killing. we approach this problem via time reversal, last passage times, and the h-transform of linear diffusions. for such processes, we derive the probability density of the last passage time to a certain alarming level and the distribution of the time left until killing after that last passage time. we apply these results to the leverage process of the company. finally, we suggest how a company should specify the abovementioned alarming level for its leverage process by solving an optimization problem.
the python language is increasingly popular in the scientific and , more specifically , in the astronomical community . for instance, it has been already adopted by very large communities such as the space telescope science institute ( stsci ) or by the alma project .it is also very strong among the various tools used and developed in the frame of the virtual observatory endeavour ( see http://www.ivoa.net/ for instance ) . in combination with a small number of specific libraries , we explain briefly here how it can provide a very powerful interactive data language which can be run on a number of operating system platforms , such as linux , mac os x or windows .the set of python libraries we shall describe herafter does not present an interest limited to data reduction since it can also be perfectly used for any kind of post - processing like , for instance , the one of output from numerical simulations . for the spectropolarimetric data reduction we performed here , we used a combination of tools from the libraries pyfits , numarray ( now numpy / scipy ) and matplotlib .using data collected by g. molodij and f. paletou in may 2000 at the thmis solar telescope , we demonstrate the capabilities of such tools by extracting the so - called 2nd spectrum of the sri spectral line at 460.7 nm close to the solar limb , as an illustrative example ( e.g. , * ? ? ?* ) . by reprocessing such data with our python - based tools, we could easily reproduce the results published elsewhere .image of the fractional linear polarization in the spectral domain around 460.7 nm.,width=529 ]in order to deal with fits format files , the stsci developed and maintains the pyfits library .it is very easy to install and to use since it is very well documented .it allows for both the reading of such files , entirely or by slices , and for the generation of new fits files ; working with headers is also very easy .however its use implies that another library such as numarray or numpy ( see below ) for handling multi - dimensional arrays is already available .indeed , for our purpose vector calculations with multi - dimensional arrays have been performed with the numarray library . with the lattercomes along also a number of high - level numerical tools allowing for linear algebra , statistical analysis , fast fourier transforms , convolutions or interpolations for instance .however , since numarray will _ not _ be supported anymore after 2007 , we wish to warn the reader to use instead , from now on , the numpy package for such scientific calculations ( see also * ? ? ?an on - line cookbook and very useful documentation can be found at http://www.scipy.org/. finally , for graphical output and figures saving , we used the matplotlib library for 2d plots .the default gui is very convenient , allowing for a posteriori interactive work on the image , such as area selection and zooming . unlike other software , it is also very easy with matplotlib to export images into the most useful formats .the quality of the output is perfectly suitable for publication ( see e.g. , the figures in * ? ? ? 
* ) .the freeware set we adopted was installed and used by us without any difficulties together with several linux distributions as well as mac os x.the raw data consisted in a time - sequence of 200 frames taken with the slit parallel to the solar limb at while modulating in time through a sequence of 4 independent polarization states in order to perform full - stokes measurements .demodulation involving inverse or pseudo - inverse matrix calculation was very easily coded with the linear algebra package of numpy .our older routines ( written in idl ) for aligning spectral lines were also very quickly re - coded using convolution with a finite width kernel for bisector search , and cubic spline or fourier interpolations for shifting the profiles line by line .the adequate functions are in the fftpack and signal packages of scipy. can also be noticed.,width=453 ] in fig . 1, we plot the ( ) image corresponding to the fractional linear polarization obtained by recombining the two polarized beams of the mtr taken at 460.7 nm .the strong peak in the sri spectral line corresponds to .2 makes this reading easier since it is the mean profile obtained by averaging the polarized signal over the rows of the previous image .it is in agreement with previous values obtained at very high polarimetric sensitivity with the zimpol i polarimeter attached to the nso / kitt peak macmath - pierce facility . as with other data language, these packages permit to build one s own collection of specific functions and , therefore makes it possible for any user to constitute its own library .the use of these public and private resources can be made both from scripts and/or interactively , using a command line .one of the advantages of using python is flexibility .several other high - level plotting libraries exist such as dislin , or vtk for 3d graphics , for instance . andshould one be unsatisfied of some of the functions of matplotlib , it would be very easy to use instead a number of functions from such other libraries .the python language and the numerous scientific and graphic libraries which are being developed for , already provide very valuable and powerful tools for data analysis in astrophysics .it is well supported by increasingly larger communities so , shifting to such freeware tools appear to us as both reasonable and economical ( not to say legal too ... ) options yet .our experience with the reduction of thmis spectropolarimetric data since 1999 made rather fast , on the timescale of a few weeks only , the conversion of our former software written in idl to the above - mentioned python - based resources .we could reprocess without any difficulties whatsoever old data which lead to published results , and we now reduce our new data with those numerical tools . andfinally , we definitely adopt a proselytizing attitude in favour of them .lger , l. , chevallier , l. & paletou , f. 2007 , , 470 , 1 oliphant , t.e .2006 , guide to numpy ( over - the - web : trelgol publishing ) paletou , f. , & molodij , g. 2001 , in asp conf .236 , advanced solar polarimetry , ed . m. sigwarth , ( san francisco : asp ) , 9 stenflo , j.o . ,bianda , m. , keller , c.u . , & solanki , s. 1997 , , 322 , 985 trujillo bueno , j. , collados , m. , paletou , f. , & molodij , g. 2001 , in asp conf .236 , advanced solar polarimetry , ed . m. sigwarth , ( san francisco : asp ) , 141 van rossum , g. 1990 , http://www.python.org/
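A minimal sketch of the reduction and plotting steps described above, written against the modern successors of the libraries mentioned (astropy.io.fits in place of pyfits, numpy in place of numarray). The file name, the modulation matrix, and the assumption that the frames are stored cycle by cycle are placeholders, and the beam recombination and line-alignment steps are omitted; this is not the thmis pipeline itself.

```python
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits   # modern successor of the pyfits package

# read a time sequence of frames (placeholder file name)
frames = fits.getdata("thmis_sequence.fits")       # assumed shape: (n_frames, ny, nx)

# demodulation: recover the Stokes images from the 4 modulation states.
# M is the 4x4 modulation matrix of the polarimeter (placeholder values here).
M = np.array([[1,  1,  0,  0],
              [1, -1,  0,  0],
              [1,  0,  1,  0],
              [1,  0,  0,  1]], dtype=float)
D = np.linalg.pinv(M)                              # pseudo-inverse demodulation matrix

n_cycles = frames.shape[0] // 4                    # assumes frames ordered cycle by cycle
states = frames[:4 * n_cycles].reshape(n_cycles, 4, *frames.shape[1:]).mean(axis=0)
stokes = np.tensordot(D, states, axes=([1], [0]))  # I, Q, U, V images

# fractional linear polarization Q/I and its spatially averaged profile
q_over_i = stokes[1] / stokes[0]
plt.imshow(q_over_i, aspect="auto", origin="lower")
plt.colorbar(label="Q/I")
plt.savefig("qi_image.png", dpi=150)

plt.figure()
plt.plot(q_over_i.mean(axis=0))
plt.xlabel("wavelength pixel")
plt.ylabel("mean Q/I")
plt.savefig("qi_profile.png", dpi=150)
```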
most solar physicists use very expensive software for data reduction and visualization. we present hereafter a reliable freeware solution based on the python language. this is made possible by combining the latter with a small set of additional libraries developed in the scientific community. it then provides a very powerful and economical alternative to other interactive data languages. although it can also be used for any kind of post-processing of data, we demonstrate the capabilities of such a set of freeware tools using thmis observations of the second solar spectrum.
for a number of random constraint satisfaction problems ( csp ) , by now very good estimates are available for the largest constraint density ( ratio of constraints to variables ) for which typical problems have solutions .for example , a random graph of average degree is with high probability occurs with high probability ( w.h.p . ) if = 1 ] with and .recall that , by the results in , is w.h.p .satisfiable and , thus , excluding the possibility of satisfying pairs at certain distances is a non - vacuous statement .we see that for \cup [ 0.68,1]].,scaledwidth=50.0% ] establishing that there exists a distance such that there are no pairs of assignments at distance immediately implies an upper bound on the diameter of every cluster .this is because if a cluster has diameter , then it must contain pairs of solutions at every distance . to see this ,take any pair that have distance , any path from to in , and observe that the sequence of distances from along the vertices of the path must contain every integer in .therefore , if , then w.h.p .every cluster in has diameter at most .if we can further prove that in an interval , then we can immediately partition the set of satisfying assignments into well - separated regions , as follows .start with any satisfying assignment , let be its cluster , and consider the set of truth assignments that have distance at most from and the set of truth assignments that have distance at most from .observe now that the set can not contain any satisfying truth assignments , as any such assignment would be at distance from some assignment in .thus , the set of satisfying assignments in is a union of clusters ( cluster - region ) , all of which have distance at least from any cluster not in the region .repeating this process until all satisfying assignments have been assigned to a cluster region gives us exactly the subsets of theorems [ basic ] and [ sharp_basic ] .moreover , note that this arguments bounds the diameter of each entire cluster - region , not only of each cluster , by .the arguments above remains valid even if assignments are deemed adjacent whenever their distance is bounded by , for any . as a result , theorems [ basic ] and [ sharp_basic ] remain valid as stated for any definition of clusters in which assignments are deemed to belong in the same cluster if their distance is . proving the existence of exponentially many non - empty cluster regions requires greater sophistication and leverages in a strong way the results of .this is because having for some does _ not _ imply that pairs of satisfying assignments exist for such : in principle , the behavior of could be determined by a tiny minority of solution - rich formulas .hence the need for the second moment method .specifically , say that a satisfying assignment is _ balanced _ if its number of satisfied literal occurrences is in the range , and let be the number of balanced assignments in . in , it was shown that ^ 2 = \lambda_b(1/2,k , r)^n ] . by the payley - zigmund inequality, this last fact implies that for any ], implies that is within a polynomial factor of its expectation , also with constant probability . since the property has more than satisfying assignments " has a sharp threshold , this assertion implies that for every , has at least satisfying assignments w.h.p . 
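For reference, the second-moment (Paley–Zygmund) inequality invoked above reads as follows when applied to the nonnegative count of balanced satisfying assignments; a bound of the form E[X^2] <= C (E X)^2 then yields a constant lower bound on the probability that at least one balanced assignment exists.

```latex
% second-moment bound for a nonnegative random variable X
% (here X = number of balanced satisfying assignments)
\[
  \Pr[X > 0] \;\ge\; \frac{\left(\mathbb{E}[X]\right)^{2}}{\mathbb{E}\!\left[X^{2}\right]} .
\]
```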
to prove that there are exponentially many clusters , we divide the above lower bound for the total number of satisfying assignments with the following upper bound for the number of truth assignments in each cluster - region . recall that and let } \lambda(\alpha , k , r ) \enspace .\ ] ] if is the expected number of pairs of truth assignments with distance at most in , it follows that , since the expected number of pairs at each distance is at most and there are no more than possible distances . by markov s inequality, this implies that w.h.p .the number of pairs of truth assignments in that have distance at most is .recall now that w.h.p .every cluster - region in has diameter at most .therefore , w.h.p .the total number of pairs of truth assignments in each cluster - region is at most .thus , if , we can conclude that has at least cluster - regions .indeed , the higher of the two horizontal lines in figure [ ploo ] highlights that . from the discussions in this section we see that to establish theorem[ basic ] it suffices to prove the following .[ thm : est ] for every , there exists a value of and constants and such that for all and > \epsilon_k \enspace .\ ] ] in particular , for any and all , if , we can take specifically , in section [ sec : ab ] we will prove the claims in theorem [ thm : est ] regarding , while in section [ sec : e ] we prove the claims regarding . the observation that if then w.h.p . has no pairs of satisfying assignments at distance was first made and used in .moreover , in the authors gave an expression for the expected number of _ locally maximal _ pairs of satisfying assignments at each distance , where a pair is locally maximal if there is no variable which has value 0 in and flipping its value in both and yields a new pair of satisfying assignments .( if a formula has a pair of satisfying assignments at distance , then it always has a locally maximal pair at distance ) .clearly , always , but for large and ) the difference is minuscule for all .the connection between in an interval and clustering " was also first made in .unfortunately , in no concrete definition of clusters was given and , certainly , no scheme for grouping clusters into well - separated cluster regions ( clusters need not be well separated themselves ) .besides these simple clarifications , our minor contribution regarding clustering lies in giving rigorous bounds on the diameter and distance of the cluster regions .the novel one , as we discuss below , lies in establishing the existence of exponentially many cluster regions .additionally , in the authors derive an expression for the second moment of the number of _ pairs of _ balanced assignments at distance , for each ] , w.h.p .there is a pair of truth assignments that have distance .we note that even if the maximizer in the second moment computation was determined rigorously and coincided with the heuristic guess of , the strongest statement that can be inferred from the above two assertions in terms of establishing clustering " is : for every , there is , such that w.h.p. has at least two clusters .in contrast , our theorem [ basic ] establishes that w.h.p . 
consists of _ exponentially _ many , well - separated cluster regions , each region containing at least one cluster .additionally , theorem [ sharp_basic ] establishes that as grows and approaches the threshold , these regions grow maximally far apart and their diameter vanishes .for a cluster , the string is the * projection * of and we will use the convention , so that .imagine for a moment that given a formula we could compute the marginal of each variable over the cluster projections , i.e. , that for each variable we could compute the fraction of clusters in which its projection is , and . then , as long as we never assigned to a variable which in every cluster was frozen to the value , we are guaranteed to find a satisfying assignment : after each step there is at least one cluster consistent with our choices so far .being able to perform the above marginalization seems quite far fetched given that even if we are handed a truth assignment in a cluster , it is not at all clear how to compute in time less than .survey propagation ( sp ) is an attempt to compute marginals over cluster projections by making a number of approximations .one fundamental assumption underlying sp is that , unlike the marginals over truth assignments , the marginals over cluster projections essentially factorize , i.e. , if two variables are far apart in the formula , then their joint distribution over cluster projections is essentially the product of their cluster projection marginals .determining the validity of this assumption remains an outstanding open problem . the other fundamental assumption underlyingsp is that _ approximate _ cluster projections can be encoded as the solutions of a csp whose factor graph can be syntactically derived from the input formula .our results are closely related to this second assumption and establish that , indeed , the approximate cluster projections used in sp retain a significant amount of information from the cluster projections . to make this last notion concrete and enhance intuition, we give below a self - contained , brisk discussion of survey propagation .for the sake of presentation this discussion is historically inaccurate .we attempt to restore history in section [ sec : sp_related ] . as we said above ,even if we are given a satisfying assignment , it is not obvious how to determine the projection of its cluster . to get around this problem sp sacrifices information in the following manner . given a string , a variable is * free * in if in every clause containing or , at least one of the other literals in is assigned true or .we will refer to the following as a * coarsening - step : * if a variable is free , assign it .given say that is dominated by , written , if for every , either or .consider now the following process : [ lem : uni ] for every formula and truth assignment , there is a unique coarsening fixed point .if belong to the same cluster , then .trivially , applying a coarsening step to a string produces a string such that . moreover , if was free in , then will be free in . as a result ,if both are reachable from by coarsening steps , so is the string that results by starting at , concatenating the two sequences of operations and removing all but the first occurrence of each coarsening step .this implies that there is a unique fixed point for each under coarsening .observe now that if differ only in the -th coordinate , then the -th variable is free in both and coarsening it in both yields the same string . 
by our earlier argument , , where is the cluster containing .considering all adjacent pairs in , we see that .[ core_def ] the * core * of a cluster is the unique coarsening fixed point of the truth assignments in . by lemma [ lem : uni ] , if a variable takes either the value 0 or the value 1 in the core of a cluster , then it is frozen to that value in . to prove theorem [ gen_c ] we prove that the core of every cluster has many non- variables .[ gen_w ] for any , let and be as in theorem [ gen_c ] .if and , then w.h.p .the coarsening fixed point of * every * contains fewer than variables that take the value . to prove theorem [ gen_w ] ( which implies theorem [ gen_c ] ) we derive sharp bounds for the large deviations rate function of the coarsening process applied to a fixed satisfying assignment . as a result, we also prove that in the planted - assignment model the cluster containing the planted assignment already contains frozen variables at .also , we will see that our proof gives a strong hint that for small values of , such as , for all densities in the corresponding satisfiable regime , most satisfying assignments _ do _ converge to upon coarsening .we can think of coarsening as an attempt to estimate the projection of by starting at and being somewhat reckless . to see this , consider a parallel version of coarsening in which given we coarsen all free variables in it simultaneously .clearly , the first round of such a process will only assign to variables whose projection in is indeed .subsequent rounds , though , might not : a variable is deemed free , if in every clause containing it there is some other variable satisfying the clause , _ or _ a variable assigned .this second possibility is equivalent to assuming that the -variables in the clauses containing , call them , can take joint values that allow to not contribute in the satisfaction of any clause . in general formulasthis is , of course , not a valid assumption . on the other hand , the belief that in random formulas there are no long - range correlations _ among the non - frozen _ variables of each cluster makes this is a reasonable statistical assumption : since the formula is random , the variables in are probably far apart from one another in the factor graph that results after removing the clauses containing .thus , indeed , any subset of variables of that do not co - occur in a clause should be able to take _ any _ set of joint values .our results can be seen as evidence of the utility of this line of reasoning , since we prove that for sufficiently large densities , the coarseningfixed point of a satisfying assignment is _ never _ . indeed , as we approach the satisfiability threshold , the fraction of frozen variables in it tends to 1 . of course, while the core of a cluster can be easily derived given some , such a is still hard to come by .the last leap of approximation underlying sp is to define a set that includes all cluster cores , yet is such that membership in is locally checkable " , akin to membership in .specifically , a string is a * cover * of a cnf formula if : ( i ) under , every clause in contains a satisfied literal or at least two , and ( ii ) every free variable in is assigned , i.e. , is .cores trivially satisfy ( ii ) as fixed points of coarsening ; it is also easy to see , by induction , that any string that results by applying coarsening steps to a satisfying assignment satisfies ( i ) .thus , a core is always a cover . 
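The coarsening procedure behind definition [core_def] is straightforward to state in code. The sketch below takes a formula as a list of clauses (non-zero integers, negative for negated literals, as in the DIMACS convention) and a satisfying assignment, and applies coarsening steps until the fixed point, i.e. the core, is reached. The representation and the function name are illustrative and not tied to any existing solver.

```python
def coarsen(clauses, assignment):
    """Return the coarsening fixed point (the core) of `assignment`.

    clauses    : list of clauses, each a list of non-zero ints (DIMACS style)
    assignment : dict var -> True/False, a satisfying assignment
    The returned dict maps each variable to True, False or '*'.
    """
    sigma = dict(assignment)

    def value(lit):                       # truth value of a literal under sigma
        v = sigma[abs(lit)]
        if v == '*':
            return '*'
        return v if lit > 0 else not v

    def is_free(var):
        # var is free if every clause containing it (or its negation) has
        # another literal that is satisfied or starred
        for clause in clauses:
            if var in map(abs, clause):
                others = [value(l) for l in clause if abs(l) != var]
                if not any(o is True or o == '*' for o in others):
                    return False
        return True

    changed = True
    while changed:                        # apply coarsening steps until fixed point
        changed = False
        for var in sigma:
            if sigma[var] != '*' and is_free(var):
                sigma[var] = '*'
                changed = True
    return sigma

# toy example: (x1 or x2 or x3) and (not x1 or x2 or not x3)
core = coarsen([[1, 2, 3], [-1, 2, -3]], {1: True, 2: True, 3: False})
print(core)   # this tiny, underconstrained formula coarsens to the all-'*' string,
              # i.e. a trivial core, echoing the discussion of small densities above
```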
at the same time , checking whether satisfies ( i ) can be done trivially by examining each clause in isolation . for ( ii )it is enough to check that for each variable assigned or in , there is at least one clause satisfied by and dissatisfied by all other variables in it .again , this amounts to simple checks , each check done in isolation by considering the clauses containing the corresponding variable .the price we pay for dealing with locally - checkable objects is that the set of all covers can be potentially much bigger than the set of all cores .for example , is always a cover , even if is unsatisfiable .the survey propagation algorithm can now be stated as follows .* repeat until all variables are set : 1 .compute the marginals of variables over covers .2 . select a variable with least mass on and assign it the 0/1 value on which it puts most mass .3 . simplify the formula .the computation of marginals over covers in the original derivation of sp was , in fact , done via a message passing procedure that runs on the factor graph of the original formula rather than a factor graph encoding covers ( more on this in section [ sec : sp_related ] ) .also , in , if a configuration is reached in which all variables put ( nearly ) all their mass on , the loop is stopped and a local search algorithm is invoked .the idea is that when such a configuration is reached , the algorithm has arrived " at a cluster and finding a solution inside that cluster is easy since only non - frozen variables remain unset .the original presentation of survey propagation motivated the algorithm in terms of a number of physical notions ( cavities , magnetic fields , etc . ) .specifically , the algorithm was derived by applying the cavity method " within a 1-step replica symmetry breaking " scheme , with no reference whatsoever to notions such as cluster projections , cores , or covers ( in fact , even clusters where only specified as the connected components that result when satisfying assignments at finite hamming distance " are considered adjacent ) . on the other hand ,a very definitive message - passing procedure was specified on the factor graph of the original formula and the computer code accompanying the paper and implementing that procedure worked spectacularly well . moreover , a notion foreshadowing cores was included in the authors discussion of warning propagation " .casting sp as an attempt to compute marginals over cores was done independently by braunstein and zecchina in and maneva , mossel , and wainwright in . in particular , in both papers it is shown that the messages exchanged by sp over the factor graph of the input formula are the messages implied by the belief propagation formalism applied to a factor graph encoding the set of all covers . the first author and thorpe additionally shown that for every formula , there is a factor graph encoding the set of s covers which inherits the cycle structure of s factor graph , so that if the latter is locally tree - like so is . in ,the authors give a number of formal correspondences between sp , markov random fields and gibbs sampling and note that a cover can also be thought of as partial truth assignment in which every unsatisfied clause has length at least 2 , and in which every variable assigned or has some clause for which it is essential in , i.e. 
, satisfies but all other variables in are set opposite to their sign in .this last view motivates a generalization of sp in which marginals are computed not only over covers , but over all partial assignments in which every unsatisfied clause has length at least 2 , weighted exponentially in the number of non - essential 0/1 variables and the number of -variables .one particular motivation for this generalization is that while sp appears to work very well on random 3-cnf formulas , gives experimental evidence that such formulas do not have non - trivial cores , i.e. , upon coarsening truth assignments end up as .this apparent contradiction is reconciled by attributing the success of sp to the existence of near - core " strings allowed under the proposed generalization .while provided a framework for studying sp by connecting it to concrete mathematical objects such as cores and markov random fields , it did not provide results on the actual structure of the solution space of random -cnf formulas .indeed , motivated by the experimental absence of cores for , the authors asked whether random formulas have non - trivial cores for any .our results , establish a positive answer to this question for all .theorem [ gen_c ] follows from theorem [ gen_w ] and lemma [ lem : uni ] . to prove theorem [ gen_w ] we say that a satisfying assignment is if its coarsening fixed point has at least -variables .let be the random variable equal to the number of -coreless satisfying assignments in a random -cnf formula . by symmetry , & = & \sum_{\sigma \in \{0,1,\}^n } \pr[\sigma \mbox { is -coreless is satisfying } ] \cdot \pr[\sigma \mbox { is satisfying}]\\ & = & 2^n\cdot \left(1-\frac{1}{2^k}\right)^{rn } \cdot \ ; \pr[\mbox{ is -coreless is satisfying}]\enspace.\label{conditional}\end{aligned}\ ] ] observe that conditioning on is satisfying " is exactly the same as planting " the solution , and amounts to selecting the random clauses in our formula , uniformly and independently from amongst all clauses having at least one negative literal .we will see that for every , there exists such that = \begin{cases}\label{tk } 1-o(1 ) & { \mbox{if \enspace , } } \cr o(1 ) & { \mbox{if \enspace . } } \end{cases}\ ] ] in particular , we will see that .we find it interesting ( and speculate that it s not an accident ) that all algorithms that have been analyzed so far work for densities below .more precisely , all analyzed algorithms set each variable by considering only a subset of the not yet satisfied clauses containing and succeed for some , where depends on the choice of subset . to prove =o(1) ] for a function such that for all , by , for all such we have = o(1) ] we consider a random -cnf formula with clauses chosen uniformly among those satisfying . to determine , by our discussion above, it suffices to consider the clauses in our formula that have precisely one satisfied ( negative ) literal .the number of such clauses is distributed as it will be convenient to work in a model where each of these clauses is formed by choosing 1 negative literal and positive literals , uniformly , independently _ and with replacement_. ( since , by standard arguments , our results then apply when replacement is not allowed and the original number of clauses is . 
) we think of the literals in each clause as balls ; we paint the single satisfied literal of each clause red , and the unsatisfied literals blue .we also have one bin for each of the variables and we place each literal in the bin of its underlying variable .we will use the term blue bin " to refer to a bin that has at least one blue ball and no red balls . with this picture in mind, we see that the -variables in correspond precisely to the set of empty bins when the following process terminates : 1 .[ qlegal ] let be any blue bin ; if none exists exit .2 . remove any ball from .[ random ] remove random blue balls. 4 . [ or : red ] remove a random red ball .note that the above process removes exactly one clause ( 1 red ball and blue balls ) in each step and , therefore , if we pass the condition in step 1 , there are always suitable balls to remove . to give a lower bound on the probability that the process exits before steps ( thus , reaching a non - trivial fixed point ), we will give a lower bound on the probability that it exits within the first steps , for some carefully chosen .in particular , observe that for the process to not exit within the first steps it must be that : to bound the probability of the event in we will bound the probability it occurs in the following simplified process .the point is that this modified process is significantly easier to analyze , while the event in is only ( slightly ) more likely ( for the values of of interest to us ) . 1 .let be any blue bin ; if none exists go to step ( c ) .2 . remove any ball from .3 . remove a random red ball .[ ui ] the event in is no less likely in the modified process than in the original process .we prove lemma [ ui ] below . to bound the probability of the event in in the modified process we argue as follows .let be the number of bins which do not contain any red ball after steps and let be the original number of blue balls in these bins .if , then after steps of the modified process every non - empty bin will contain at least one red ball , since up to that point we remove precisely one blue ball per step . therefore , the probability of the event in is bounded above by the probability that . to bound this last probabilitywe observe that the red balls in the modified process evolve completely independently of the blue balls .moreover , since we remove exactly one red ball in each step , the state of the red balls after steps is distributed exactly as if we had simply thrown red balls into the bins .so , all in all , given a random -cnf formula with clauses and a fixed , conditional on satisfying , the probability that the coarsening process started at fails to reach a fixed point within steps is bounded by the probability that , where where is the distribution of the number of empty bins when we throw balls into bins . as a result , given ,our goal is to determine a value for that minimizes ] , .}\end{aligned}\ ] ] we begin by bounding from above as follows , \\ & < & 2 \ln 2 - 2 \left(1/2 - { \alpha}\right)^2 - \gamma \ln 2 \big[2 - ( 1-{\alpha})^k\big ] \\ & \equiv & w({\alpha } , k , \gamma)\enspace .\end{aligned}\ ] ] we note that for any fixed , the function is non - increasing in and decreasing in .moreover , implying that for any fixed , the equation can have at most three roots for . to bound the location of these rootswe observe that for any and , \ln 2 > 0 \enspace , \\w(99/100,k,\gamma ) & < & w(99/100 , 8 , 2/3 ) = -0.0181019 ... 
< 0 \enspace , \label{alkis}\end{aligned}\ ] ] where the inequality in relies on the mononicity of in . therefore ,from we can conclude that for every and , if there exist such that and , then for all ] , using lemma [ fede1 ] to pass from to , we see that for every and ] and since , it follows that also for such . using, it is straightforward to check that for $ ] , the derivative of is negative both when i ) and , and when ii ) and , thus concluding the proof . recalling the definition of from we have \enspace , \ ] ] where satisfies we note for later use that , as shown in , if satisfies then since all coefficients in the binomial expansion of are positive , to get a lower bound for the numerator inside the logarithm in we consider the binomial expansion of .we observe that the sum of a pair of successive terms where the lower term corresponds to an even power equals \enspace .\ ] ] for and the expression in is positive .moreover , when is even the last term in the binomial expansion has a positive coefficient and can be safely discarded .therefore , for all and , substituting and into we get a lower bound of the form .it is not hard to check directly that for all .similarly , using the upper bound for from , it is not hard to check that for , we have for all .therefore , we can conclude \nonumber \\ & \ge & 2 \ln 2 + r \ln\left[1 - 2^{1-k } + 2^{-2k } - k 2^{-k } ( 1 - 2^{-k } ) ( 2^{1-k } + 3 k 2^{-2k } ) \right ] \enspace , \label{fola}\end{aligned}\ ] ] where in we have replaced with its upper bound from .the argument of the logarithm in is increasing in for all ( a fact that can be easily established by considering its derivative ) . as a result, we have that for all , it is at least equal to its value for which is . thus , using the inequality valid for all , we can finally write \enspace , \ ] ] where work has been partially supported by the ec through the fp6 ist integrated project `` evergrow '' .a. kaporis , l. m. kirousis , and e. g. lalas , _ the probabilistic analysis of a greedy satisfiability algorithm _ , in proc .10th annual european symposium on algorithms , volume 2461 of _ lecture notes in computer science _ , springer ( 2002 ) , 574585 .
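to make the coarsening and balls - in - bins picture above concrete, the following python sketch plants the all - zeros assignment (matching the conditioning on clauses with at least one negative literal), draws a random k - cnf around it and runs a whitening - style coarsening to its fixed point. the freezing rule used here -- a clause freezes its unique satisfying variable only while none of its other variables has been starred -- is one standard formulation and should be read as our interpretation of the (partly stripped) definitions; the parameters are purely illustrative and no claim about the location of the core threshold is made.

    import random

    def plant_formula(n, m, k, seed=0):
        """random k-cnf conditioned on the all-zeros assignment being satisfying:
        every clause must contain at least one negative (i.e. satisfied) literal."""
        rng = random.Random(seed)
        clauses = []
        while len(clauses) < m:
            variables = rng.sample(range(n), k)
            lits = [(v, rng.random() < 0.5) for v in variables]   # (variable, is_positive)
            if any(not pos for (_, pos) in lits):                 # keep only clauses with a negative literal
                clauses.append(lits)
        return clauses

    def coarsen(clauses, n):
        """star every variable that is not the unique satisfying variable of some clause
        whose other variables are all still assigned; iterate to the fixed point."""
        star = [False] * n
        changed = True
        while changed:
            changed = False
            frozen = set()
            for c in clauses:
                if any(star[v] for (v, _) in c):        # a starred variable in the clause: it freezes nobody
                    continue
                sat = [v for (v, pos) in c if not pos]  # literals satisfied by the planted all-zeros assignment
                if len(sat) == 1:
                    frozen.add(sat[0])
            for v in range(n):
                if not star[v] and v not in frozen:
                    star[v] = True
                    changed = True
        return star

    n, k, r = 500, 5, 15.0                        # density r = m / n, illustrative only
    stars = coarsen(plant_formula(n, int(r * n), k), n)
    print("fraction of *-variables at the coarsening fixed point:", sum(stars) / n)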
for a large number of random constraint satisfaction problems , such as random k - sat and random graph and hypergraph coloring , there are very good estimates of the largest constraint density for which solutions exist . yet , all known polynomial - time algorithms for these problems fail to find solutions even at much lower densities . to understand the origin of this gap we study how the structure of the space of solutions evolves in such problems as constraints are added . in particular , we prove that much before solutions disappear , they organize into an exponential number of clusters , each of which is relatively small and far apart from all other clusters . moreover , inside each cluster most variables are frozen , i.e. , take only one value . the existence of such frozen variables gives a satisfying intuitive explanation for the failure of the polynomial - time algorithms analyzed so far . at the same time , our results establish rigorously one of the two main hypotheses underlying survey propagation , a heuristic introduced by physicists in recent years that appears to perform extraordinarily well on random constraint satisfaction problems .
being a major subject of probability theory , measure - valued diffusions , or superprocesses , such as super - brownian motion and the fleming - viot process have been well studied during the last three decades .important properties of superprocesses have been proved and connections to other areas of mathematics such as partial differential equations have been established . for a detailed exposition of the subject the readeris referred to , and . herewe introduce briefly super - random walks - the spatially discrete analogues of super - brownian motion . studyingthese processes gave a strong motivation to investigate spatial branching processes with interactions , and in particular , mutually catalytic branching processes on discrete space - the main theme of this article .to introduce super - random walks , we start with the following approximating particle system .assume that an initial configuration of a large number ( of order ) of particles distributed over is given .the particles move as independent simple random walk in and each particle independently of the others dies after an exponential time of rate , with , and at the place of death it leaves a random number of offspring particles , drawn from a fixed integer valued law .the particles of the updated population continue their motion and reproduction according to the same rules .this process is usually referred to as a branching random walk with the branching law and we will assume in the sequel that has expectation ( this means criticality ) and finite variance .the process is then defined to be the finite atomic measure which loosely speaking gives measure of mass to each particle alive at time . to be more precise , where is a position of the -th particle alive at time .assume that , as tends to infinity , converges weakly in the space of finite measures on to a measure .then one can show that the measure - valued process converges weakly to a limiting measure - valued process which is called super - random walk and is uniquely characterized via the following martingale problem : for bounded test - functions is a square - integrable martingale with quadratic variation process here , denotes the discrete laplace operator as defined in theorem [ 0 ] .an interesting observation is the following invariance property : irrespectively of , the finite variance assumption for the branching mechanism leads to a universal limit depending only on the variance and the parameter which is also called the branching rate . in what follows ,we assume .it is worth mentioning that if we ignore the spatial motion and count just the total number of particles , the scaling procedure is nothing else but the scaling of critical and finite variance galton - watson processes which leads towards classical feller s branching diffusion where .note that super - random walks can be characterized as solutions to stochastic differential equations .abbreviating , the super - random walk is a weak solution to following system of stochastic differential equations ( which is , in fact , a discrete version of a stochastic heat equation ) where is a collection of independent brownian motions .next , we proceed to a more recent development : measure - valued processes with interactions . one way to introduce interaction into the model is to replace the constant branching rate in the particle approximation by a random , adapted and space - time varying branching rate , also called the catalyst. 
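since ( [ e ] ) is just a countable system of one - dimensional sdes, the super - random walk lends itself to a direct euler - maruyama sketch. in the python snippet below the lattice is replaced by a finite cycle and the state is truncated at zero after each step; both are simulation conveniences and not part of the model, and the step size and parameters are illustrative.

    import numpy as np

    def super_random_walk(n_sites=200, gamma=1.0, dt=1e-3, n_steps=5000, seed=1):
        """crude euler-maruyama scheme for du(i) = (laplace u)(i) dt + sqrt(gamma u(i)) dB(i)
        on a cycle of n_sites sites with the nearest-neighbour laplacian."""
        rng = np.random.default_rng(seed)
        u = np.ones(n_sites)                                   # flat initial condition
        for _ in range(n_steps):
            lap = 0.5 * (np.roll(u, 1) + np.roll(u, -1)) - u   # discrete laplacian, periodic boundary
            noise = rng.normal(size=n_sites) * np.sqrt(dt)
            u = u + lap * dt + np.sqrt(gamma * np.maximum(u, 0.0)) * noise
            u = np.maximum(u, 0.0)                             # truncation keeps the scheme non-negative
        return u

    u_T = super_random_walk()
    print("total mass at the final time (a martingale in the model):", float(u_T.sum()))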
some particular choices of branching environments and related models over continuous space have been discussed in the literature ( see for instance , , ) .for example , one can consider a super - random walk on in a super - random walk environment .building upon ( [ e ] ) , this model can be described as a solution to the following system of stochastic differential equations : driven by independent families of independent brownian motions .a solution is called super - random walk in the catalytic super - random walk environment .note that ( [ f ] ) describes the so - called one - way interaction model : the -population catalyzes the -population .then the natural extension of ( [ f ] ) to two - way interaction is the following mutually catalytic model . in the following , weak solutions , on a stochastic basis , to the infinite system of stochastic differential equations ( [ ss ] ) driven by independent brownian motionswill be called mutually catalytic branching processes with initial conditions and branching rate . to abbreviate, solutions will be denoted by * *. in the sequel will also denote a mutually catalytic branching process defined on a more general state space instead of and with -matrix instead of .it is easy to see that the branching property fails for .hence , many of the classical tools developed for superprocesses also fail .nonetheless , the simple symmetric choice of the interaction between and makes this mutually catalytic system tractable . in order to stress the underlying branching processes ,the two components will be called types . as an example for the convention , if for all we will say that the first type died out .interestingly , the study of mutually catalytic branching processes can also be motivated by the study of interacting diffusion processes . given a family of independent brownian motions and some function to be specified below , discrete - space parabolic stochastic partial differential equations have been studied extensively in the literature .some prominent examples will be briefly discussed in the sequel .[ ex3 ] for solutions of ( [ int ] ) are super - random walks .this example has already been dealt with in detail in the previous subsection .[ ex1 ] for , equation ( [ int ] ) is called stepping stone model . in fact, the stepping stone model is the spatial generalization of the one - dimensional wright - fisher diffusion that arises as a scaling limit of the moran model in population genetics similarly as the feller diffusion arises as a scaling limit of critical galton - watson processes .in contrast to the galton - watson model , the moran model is not used to model the total number of individuals but instead counts the proportion of one allele in a diploid population for a fixed number of individuals . in particular, this interpretation corresponds to the solution of ( [ wf ] ) taking values in ] .+ to abbreviate , the system of equations ( [ ss ] ) and their solutions will be denoted by * * or just * *. the name symbiotic branching model was used in in order to stress the biological interpretation of the mutually catalytic behavior ; the solution processes and might be considered as the distribution in space of two types . for later uselet us capture the correlation structure used for symbiotic branching in a name .we will say that two brownian motions satisfying =\varrho t ] be a finite subset of . 
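the same crude euler scheme extends to the symbiotic branching system ( [ ss ] ): the only changes are the joint branching term sqrt(gamma u v) and the rho - correlation of the two driving families at each site. the snippet below is again only a sketch (finite cycle, truncation at zero, illustrative step size); for rho = 0 it discretizes the dawson - perkins mutually catalytic model, while for rho = -1 the sum u + v evolves by the deterministic heat flow, in line with the stepping stone picture mentioned above.

    import numpy as np

    def symbiotic_branching(n_sites=200, gamma=1.0, rho=-0.5, dt=1e-3, n_steps=5000, seed=2):
        """euler scheme for du = (lap u) dt + sqrt(gamma u v) dB1, dv = (lap v) dt + sqrt(gamma u v) dB2
        on a cycle, with corr(B1(i), B2(i)) = rho at every site and independence across sites."""
        rng = np.random.default_rng(seed)
        u = np.ones(n_sites)
        v = np.ones(n_sites)
        for _ in range(n_steps):
            lap_u = 0.5 * (np.roll(u, 1) + np.roll(u, -1)) - u
            lap_v = 0.5 * (np.roll(v, 1) + np.roll(v, -1)) - v
            w1 = rng.normal(size=n_sites)
            w2 = rho * w1 + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n_sites)   # rho-correlated increments
            amp = np.sqrt(gamma * np.maximum(u * v, 0.0) * dt)
            u = np.maximum(u + lap_u * dt + amp * w1, 0.0)
            v = np.maximum(v + lap_v * dt + amp * w2, 0.0)
        return u, v

    u_T, v_T = symbiotic_branching()
    print("site-averaged product u*v at the final time:", float(np.mean(u_T * v_T)))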
to define the approximating system , we consider the following system of finite - dimensional stochastic differential equations which we denote by : the correlation structure of the brownian motions remains as in definition [ defsol ] . since this is a system of finite - dimensional stochastic differential equations existence of weak solutions follows from finite - dimensional diffusion theory for sufficiently good " coefficients ( see for instance theorem 5.3.10 of ) . to prove non - negativity of solutions, one shows that the semimartingale s local time at zero equals to zero ( see for instance page 1127 of ) .solutions can be extended to the entire lattice by setting for . due to the choice of the initial conditions , the contained in for all .the main ingredients , to prove convergence of , are the following estimates . it suffices to show that for , and &\rightarrow 0 , \quad \text { as } k\rightarrow \infty,\label{e1}\\ \sup_{n\in{\mathbb{n}}}\sup_{|t - s|\leq h , 0\leq t , s\leq t}{\mathbb{p}}\big[|u^n_t(k)-u_s^n(k)|>\epsilon\big]&\rightarrow 0 , \quad\text { as } h\rightarrow 0,\label{e2 } \end{aligned}\ ] ] and analogously for .the desired convergence in ( [ e1 ] ) , ( [ e2 ] ) is analogous to ( 2.9 ) and ( 2.10 ) of . in order to ensure that all stochastic integrals are martingales we introduce a sequence of stopping times : .this sequence , almost surely , converges to infinity , as tends to infinity , since solutions do not explode . using only the definition of we estimate & = { \mathbb{e}}\big[\sup_{t\leq t\wedge t_n^n}\sum_{i\in { \mathbb{z}}^d}u_t^n(i)\beta(i)\big]\\ \nonumber & \leq\langle u_0,\beta\rangle+{\mathbb{e}}\bigg[\sup_{t\leq t\wedge t_n^n}\sum_{i\in s_n}\beta(i)\int_0^t \sum_{\underset{|j - i|=1}{j\in s_n}}\frac{1}{2d}(u^n_s(j)-u^n_s(i ) ) \,ds\bigg]\\ \nonumber & \quad+{\mathbb{e}}\bigg[\sup_{t\leq t\wedge t_n^n}\sum_{i\in s_n}\beta(i)\int_0^t\sqrt{\gamma u_s^n(i)v_s^n(i)}\,db^{1,n}_s(i)\bigg]\\\begin{split } & \leq\langle u_0,\beta\rangle+{\mathbb{e}}\bigg[\sum_{i\in s_n}\beta(i)\int_0^{t\wedge t_n^n } \sum_{\underset{|i - j|=1}{j\in s_n}}\frac{1}{2d}u_s^n(j)\,ds\bigg ] \\\label{2807_1 } & \quad + { \mathbb{e}}\bigg[\sup_{t\leq t\wedge t_n^n}\sum_{i\in s_n}\beta(i)\int_0^t\sqrt{\gamma u_s^n(i)v_s^n(i)}\,db^{1,n}_s(i)\bigg].\end{split } \end{aligned}\ ] ] using the burkholder - davis - gundy inequality and then fubini s theorem we obtain the following upper bound for the above expressions \,ds+\gamma\sum_{i\in s_n}\beta(i)\int_0^{t}{\mathbb{e}}\big [ u_s^n(i)v_s^n(i)\big]\,ds .\end{aligned}\ ] ] so far , this procedure is fairly standard for interacting diffusions of type ( [ int ] ) where instead of the mixed moments , the expectations ] and ] is bounded uniformly in .this can be done similarly as before , using the same bounds on the moments .+ following the arguments on page 399 of , the bounds ( [ e1 ] ) , ( [ e2 ] ) suffice to ensure convergence ( in a sufficiently strong sense ) of the sequences to a limiting process solving the equation defining . from the very definition ,interacting diffusion processes are parabolic equations with random potential functions . in the spirit of the deterministic theory one can equally ask for representations that are easier to work with in some situations .we will use the weak - solution representation and the variation of constant form . in the following we use the semigroup generated by on , i.e. 
the family of linear operators where is the transition kernel of a simple random walk on .the constant function on taking value is abbreviated by . for a continuum analogue of the following two representations we refer to corollary 19 of and for a very detailed proof on the lattice for to theorem 2.2 of .[ prop : mild ] suppose that is a solution of with summable , then and are summable and the total - mass processes satisfy where the infinite sums converge in . if , then the point wise representation holds .the covariation structure of the brownian motions is as in the definition of .the weak solution representation can be obtained for also for more general test - functions . instead of sketching a proof we give an important application leading the way from to the exit - law defined in ( [ exitd ] ) .a property that is shared by many particle systems is that started at summable initial conditions the total - mass process is a martingale .a natural question for the two - type model is how the two total - mass martingales relate to each other if both types are started at summable initial conditions . in order to avoid confusion with denote in the following the cross - variations of square - integrable martingales by ] .it was proved in that where ( resp . ) denotes the dirac distribution concentrated on the constant function ( resp . ) .this can be reformulated in terms of perfectly anti - correlated brownian motions : for , the pair takes values only on the straight - line connecting and , and stops at the boundaries .hence , the law of is a mixture of and and the probability of hitting is equal to the probability of a one - dimensional brownian motion started at ] can now be readily deduced by considering the cases and .[ fig ] as a function of all results discussed above can equally be shown for the continuum space analogue model in low dimensions .we will briefly discuss this setting as it serves as an important motivation for the study of .+ let us first introduce the model for .the continuum space symbiotic branching model is defined by the pair of stochastic heat equations where now denotes the typical laplace operator on .the driving noises are standard gaussian white noises on with correlation parameter ] .uniqueness for can be obtained via the self - duality as in the proof of corollary [ uniq ] .the moment duality also holds with particles moving as brownian motions and collision times replaced by collision local times .stochastic heat equations typically have function - valued solutions only in spatial dimension .the particular symmetric nature of changes this property : it was shown in and that do exist in the continuous setting in dimension for small enough .existence of solutions in dimensions is unknown .the results on the longtime behavior will not be repeated here ; those are similar to the results discussed for the discrete spatial case for .instead , we include a result of refining a theorem of .to explain this , the notion of the interface of continuous - space symbiotic branching processes is needed .[ def : ifc ] the interface at time of a solution of the symbiotic branching model with ] there exists a constant and a finite random - time so that almost surely for all .\end{aligned}\ ] ] for the stochastic heat equation with wright - fisher noise corresponding to , it was shown in that the correct propagation of the interface is of order so that one might ask whether ( [ ef ] ) is sharp for .here is a refinement of ( [ ef ] ) , proved in , for which the critical moment 
curve was originally developed .[ cor : wavespeed ] suppose is chosen sufficiently small such that and , then there is a constant and a finite random - time such that almost surely for all . .\end{aligned}\ ] ] the strong restriction on is probably not necessary and is only caused by the technique of the proof which is based on the dyadic grid technique utilized for the proof of . to circumvent the boundedness of all moments that holds only for , moments have to be bounded in time .+ though the assumption forces the result is still interesting .it shows that sub - linear speed of propagation is not restricted to situations in which solutions are uniformly bounded as they are for .finally , let us motivate the construction and the study of in section 3 .the scaling property for symbiotic branching on the continuum ( see lemma 8 of ) states that if is a solution started at heavyside initial conditions , then is a solution of with heavyside initial condition .hence , propagation of the interface of order will be intimately related to the behavior of with tending to infinity .+ unfortunately , the constructions in section 3 can only be seen as a first step towards the correct order of interface propagation : the construction for the limiting process could so far be carried out only for discrete spatial symbiotic branching processes .it is still an open question how to extend the characterizations and constructions of to the continuum analogue .in section [ sec : voter1 ] we discussed how the standard voter processes can be viewed as an infinite rate stepping stone model , or , in other words , for .it is not at all clear if and how that motivation extends to as the coalescing particles duality seems to have no extension to .taking into account the colored particles dual instead , it is by no means clear whether sending to infinity leads to a non - trivial process : for the changes of color occur instantaneously but at the same time the exponent is multiplied by , so that the moment expression only makes sense if the exponent is almost surely non - positive .+ nonetheless , using the self - duality instead of the moment - duality , it can be shown that sending the branching rate to infinity makes sense . to understand the effect in a nutshell ,let us take a closer look at the non - spatial system of symbiotic branching sdes with non - negative initial conditions . due to the symmetric structure , we got in lemma [ la : eds ] that are -correlated brownian motions if we use the time - change .caused by the product structure of the time - change the boundary of the first quadrant is absorbing .hence , the brownian motions stop at the first hitting - time of .increasing only has the effect that follows the brownian paths with different speed so that corresponds to at once picking a point in according to the exit - measure on and freeze thereafter ( recall ( [ exitd ] ) ) .+ to make this argument precise one has to be slightly more careful as the parameter does not only occur as multiple in the time - change but also effects the solution itself . 
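the gamma -> infinity heuristic just described -- run the pair of rho - correlated brownian motions until they hit the boundary of the first quadrant and freeze at the exit point -- can be sampled directly. the monte carlo sketch below (with an assumed time discretisation and a safety cap on the number of steps) produces an empirical approximation of the exit law that later drives the jumps of the infinite rate process.

    import numpy as np

    def exit_points(u0=1.0, v0=1.0, rho=-0.5, dt=1e-3, n_paths=500, max_steps=200000, seed=3):
        """empirical exit law from the first quadrant for a pair of rho-correlated
        brownian motions started at (u0, v0); exit points lie on one of the two axes."""
        rng = np.random.default_rng(seed)
        pts = []
        for _ in range(n_paths):
            x, y = u0, v0
            for _ in range(max_steps):          # safety cap; almost every path exits far earlier
                w1 = rng.normal() * np.sqrt(dt)
                w2 = rho * w1 + np.sqrt(1.0 - rho ** 2) * rng.normal() * np.sqrt(dt)
                x, y = x + w1, y + w2
                if x <= 0.0 or y <= 0.0:
                    pts.append((max(x, 0.0), max(y, 0.0)))   # clip the small discretisation overshoot
                    break
        return np.array(pts)

    pts = exit_points()
    print("fraction of paths in which the first coordinate is absorbed at 0:",
          float(np.mean(pts[:, 0] == 0.0)))
    print("mean strength of the surviving coordinate:", float(np.mean(pts.max(axis=1))))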
to circumvent this obstacleone has to take into account the structure of the equations .let us label the solutions by their fixed branching rate .it can be shown that the sequence converges in the so - called me yer - zheng pseudo - path " topology ( for which we refer to and ) to a limit .stochastic boundedness in and of the square - function by implies that hence , the limiting process takes values in .the only possible limit is the constant process , where is distributed according to because the prelimiting processes are eventually trapped at at a point distributed according to .incorporating space , a second effect occurs : both types change their mass on according to a heatflow .this smoothing effect immediately tries to lift a zero coordinate if it was pushed by the exit - measure to zero .interestingly , none of the two effects dominates and a non - trivial limiting process ( with values in for each site ) can be obtained when letting the branching rate tend to infinity .in contrast to section [ sec:2 ] we do not restrict to the discrete laplacian here and instead replace by as in section [ sec : uni ] .accordingly , is replaced by a general countable set .the aim of this section is to explain how the results of and on the infinite rate mutually catalytic branching process can be generalized to . after introducing more notation for the state - spaces , different approaches to infinite rate symbiotic branching processesare presented : a characterization via an abstract martingale problem , two limiting constructions and a more hands - on representation via poissonian integral equations .the finite rate symbiotic branching processes were studied on subspaces of , i.e. at each site of the countable set the solution processes consist of a pair of non - negative values . according to the heuristic reasoning above , at each site infinite rate processes take values on the boundary of the first quadrant so that we can expect to find an -valued process . as usual , certain growth restrictions need to be imposed to find a tractable subspace of . in accordance with the state - space for finite rate symbiotic branching processes we stick to the analogue subspace of : equipped with the same norm as . furthermore , we will use subspaces of compactly supported and summable initial conditions that will be denoted by and .in contrast to , the infinite rate processes are not continuous so that solutions have paths in , the set of functions that are right - continuous with limits from the left . in order to define infinite rate processes rigorously , in a martingale problem characterizationwas proposed for infinite rate mutually catalytic branching processes .this formulation uniquely determines the process but is not very useful for understanding properties of the process .crucial properties of the process , such as non - continuity of sample paths , are not clear from this formulation . nonetheless , it seems to be the most convenient way to introduce the process as it directly reveals the connection to the finite rate processes . in what follows we are going to extend the results of to . to define the characterizing martingale problem one crucially uses the self - duality function defined in ( [ sd ] ) .we include the next two simple ( stochastic ) calculus lemmas in order to clarify the appearance of in the definition of .[ l6 ] suppose and are compactly supported , then for all and (x_1,x_2,y_1,y_2 ) = 4(1-\varrho^2)f(x_1,x_2,y_1,y_2)y_1(k)y_2(k ) , \end{aligned}\ ] ] where ( resp . 
) denotes the partial derivative with respect to the coordinate of the first ( resp .second ) entry .first note that all appearing infinite sums are actually finite as and are compactly supported .we leave the simple derivations of the first derivatives to the reader as it does not clarify the influence of .+ abbreviating and , by the chain rule we obtain (x_1,x_2,y_1,y_2)\\ & = f(x_1,x_2,y_1,y_2 ) \bigg[\frac{1}{2}\big(-\sqrt{1-\varrho}c(k)+i\sqrt{1+\varrho}d(k)\big)^2+\frac{1}{2}\big(-\sqrt{1-\varrho}c(k)-i\sqrt{1+\varrho}d(k)\big)^2\\ & \quad+\varrho\big(-\sqrt{1-\varrho}c(k)+i\sqrt{1+\varrho}d(k)\big)\big(-\sqrt{1-\varrho}c(k)-i\sqrt{1+\varrho}d(k)\big)\bigg ] \end{aligned}\ ] ] which is equal to \\ & = f(x_1,x_2,y_1,y_2)4(1-\varrho^2)y_1(k)y_2(k ) .\end{aligned}\ ] ] the intrinsic need for the particular choice of can now be revealed : the additional square - roots involving are chosen in such a way that the cross - variations caused by the correlated driving noises cancel .[ prop : mart1 ] suppose , and is a symbiotic branching process with finite branching rate and correlation parameter ] are locally bounded .the latter follows for instance from the moment duality of lemma [ la : mdual ]. it would be desirable to uniquely define solutions of finite rate symbiotic branching processes via this martingale property which unfortunately is impossible : the corresponding martingale problem does not involve and it is satisfied by for arbitrary . as symbiotic branching processes for different branching rates do not coincide in law , the martingale problem has infinitely many solutions .+ however , the class of processes on the restricted state - space is less rich so that the small class of test - functions suffices here for the martingale problem to be well - posed . in particular, the restriction rules out all solutions of .here is the generalization from to of proposition 4.1 of .[ pro:1 ] let , then there is a unique solution to the following martingale problem : for all initial conditions , there exists a process with paths in such that for all test - sequences the process is a martingale null at zero .the induced law on constitutes a strong markov family and the corresponding strong markov process will be called infinite rate symbiotic branching .we postpone a sketch of a proof to section [ sec : jumpsde ] where solutions are constructed by means of the poissonian equations already mentioned in theorem [ 0 ] .since we discussed extensively the longtime behavior of finite rate symbiotic branching processes we say a few words about the longtime behavior of infinite rate symbiotic branching processes .the case of has been studied in and some sufficient conditions for coexistence and impossibility of coexistence have been derived there . for full recurrence / transience dichotomy has been established in in the spirit of the results presented in section [ longl ] .[ prop:110 ] let , then coexistence of types for is possible if and only if a markov process on with -matrix is transient .note that this proposition extends proposition [ prop:101 ] to on a general countable site space and an arbitrary symmetric markov process with -matrix .for the proof we refer the reader to .so far we have discussed the finite rate symbiotic branching processes and introduced the well - posed martingale problem from which one can define the family of processes , . 
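returning briefly to the computation in lemma [ l6 ]: the cancellation there is purely algebraic. writing c = y1 + y2 and d = y1 - y2 for the abbreviations c(k) and d(k) -- this identification is our reading of the stripped formulas and should be treated as an assumption -- the bracket in the proof collapses to 4 (1 - rho^2) y1 y2, which can be checked symbolically:

    import sympy as sp

    rho, y1, y2 = sp.symbols('rho y1 y2', real=True)
    c, d = y1 + y2, y1 - y2                      # assumed abbreviations c(k) and d(k)
    a_plus = -sp.sqrt(1 - rho) * c + sp.I * sp.sqrt(1 + rho) * d
    a_minus = -sp.sqrt(1 - rho) * c - sp.I * sp.sqrt(1 + rho) * d
    bracket = (sp.Rational(1, 2) * a_plus ** 2 + sp.Rational(1, 2) * a_minus ** 2
               + rho * a_plus * a_minus)
    print(sp.simplify(sp.expand(bracket) - 4 * (1 - rho ** 2) * y1 * y2))   # prints 0

the cross terms carrying sqrt(1 - rho) sqrt(1 + rho) cancel pairwise, which is exactly why the mixed covariations of the correlated noises drop out of the martingale problem.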
to get the link between the two , we sketch in this section how to show that converges in some weak sense to the solution of the martingale problem ( [ mp ] ) as goes to infinity .this , in fact , justifies to call the processes of theorem [ pro:1 ] infinite rate symbiotic branching processes .unfortunately , the convergence of to will not hold in the convenient skorohod topology in which continuous processes converge to continuous processes . as a solution of the system of brownian equations ( [ ss ] ) , is continuous , whereas is non - continuous as solution to the system of poissonian equations .even though the convergence can not hold in the skorohod topology , it holds in some weaker sense .the suitable pseudo - path " topology on the skorohod space of rcll functions was introduced in .the topology is much weaker than the skorohod topology and is , in fact , equivalent to convergence in measure ( see lemma 1 of and also results in ) .sufficient ( but not necessary ) tightness conditions for this pseudo - path " topology were given in .in particular , these conditions are convenient to check the tightness of semimartingales .here is the extension of theorem 1.5 of to .[ thm:2 ] fix any .suppose that for any , solves and the initial conditions do not depend on .then , for any sequence tending to infinity , we have the convergence in law in equipped with the meyer - zheng pseudo - path " topology . here, is the unique solution of the martingale problem of theorem [ pro:1 ] .the proof consists of three steps : + * step 1 : * tightness in the meyer - zheng pseudo - path " topology follows from the tightness criteria of . to carry this out, one has to show tightness for the drift and the martingale terms in the definition of : by standard estimates the drift terms are , in fact , tight in the stronger skorohod topology : this follows from <\infty,\quad p\in ( 1,p(\varrho ) ) .\end{aligned}\ ] ] apart from the facts that and is not assumed to be summable this is close to the moment bounds for the total - mass processes that we obtain from lemma [ lem:11 ] and theorem [ thm : curve ] . with the same trick as in lemma 6.1 of , the lefthand side of ( [ amo ] ) can be bounded uniformly in by a multiple of ] we now briefly show that the choice ( [ intensity ] ) indeed does the job : because =\lim_{\epsilon\to 0}\frac{1}{\epsilon}\lim_{t\to\infty}e^{1,\epsilon}\big[w^2_{t\wedge \tau}\big]=\lim_{\epsilon\to 0}\frac{1}{\epsilon}\epsilon=1 .\end{aligned}\ ] ] note that here the superscript in refers to the second coordinate of the pair of brownian motions and not to the second moment .the first equality follows from the definition of ; the second follows from the martingale convergence theorem for which the uniform integrability is ensured by the upper bound \leq e^{1,\epsilon}\big[\tau^{\frac{p(\varrho)-\mu}{2}}\big]<\infty , \end{aligned}\ ] ] where the positive constant is chosen sufficiently small such that ( existence of is ensured by the exit - time exit - point equivalence of lemma [ ete ] ) .+ with the poissonian construction of in hand we now sketch a proof of theorem [ pro:1 ] .existence of solutions to the martingale problem follows from theorem [ exis ] and proposition [ thm : s ] .+ the uniqueness proof is inspired by the proof of lemma [ uniq ] for based on self - duality . here , we sketch the chain of arguments of section 4 in which can be copied line by line while replacing the duality function in by the -dependent duality function defined in ( [ f ] ) . 
+ * step 1 : * for compactly supported initial conditions solutions to the martingale problem are constructed via the poissonian equations ( [ eqn : st ] ) . from the first momentestimates one obtains that solutions decay sufficiently fast at infinity . + * step 2 : * first moment bounds for arbitrary solutions of the martingale problem are derived by differentiating the laplace transform part ( see lemma 4.2 of for ) . + * step 3 : * the crucial part is to derive the self - duality relation ={\mathbb{e}}\big[f(u_0,v_0,\tilde u_t,\tilde v_t)\big ] \end{aligned}\ ] ] between the two independent solutions and starting at and .now , as in the proof of corollary [ uniq ] , self - duality determines the one - dimensional laws along the lines of the proof of proposition 4.7 in for .standard theory ( see theorem 4.4.2 of ) allows us to extend the uniqueness of -dimensional distributions to uniqueness of finite dimensional distributions .finally , the strong markov property for follows from measurability in the initial condition which is inherited from the finite jump rate approximation processes .combining theorems [ pro:1 ] , [ exis ] and proposition [ thm : s ] we immediately get the following theorem .[ thm:111 ] let and .then there exists unique weak solution to ( [ eqn : st ] ) which is the unique solution to the martingale problem from theorem [ pro:1 ] .the infinite rate symbiotic branching processes were characterized in previous subsections via various approaches . in this final sectionwe describe from the viewpoint of the standard voter process which is closely related to symbiotic branching with as we have already seen in the section [ sec : voter1 ] . for the rest of this section we stick to on for convenience .we start with restating theorem [ thm:2 ] for the case .however , note that we additionally have to assume since we can not use the self - duality anymore as for it does not carry enough information to characterize the full law of the limiting process . under this additional assumptionwe can rely on the folklore results mentioned at the very end of section [ sec : voter1 ] whereas for general initial conditions a different approach should be developed .[ thm:12 ] suppose and for any , solves and the initial condition do not depend on .if furthermore we suppose then , for any sequence tending to infinity , we have the convergence in law in equipped with the meyer - zheng pseudo - path " topology . here , is a standard voter process and its reciprocal voter process ( i.e. opinions and are interchanged ) .[ 2907_1 ] in what follows the pair of voter processes constructed in the above theorem will be called . as discussed in the end of section [ sec : inter ] , with the additional assumption on the initial conditions , is a solution to the stepping stone model of example [ ex1 ] and . for tending to infinity ,a well - known result ( see for instance section 10.3.1 of ) states that the finite dimensional distributions of solutions to the stepping stone model converge to those of the standard voter process ; solutions are bounded and the moments converge as discussed in section [ sec : voter1 ] .tightness in the meyer - zheng pseudo - path " topology follows as for . to understand and the voter process in a unified frameworklet us first summarize .the infinite rate symbiotic branching processes are the weak limits of , as , * for , by theorem [ thm:2 ] , * for and , by theorem [ thm:12 ] . 
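for readers less familiar with the voter process appearing in theorem [ thm:12 ], a minimal continuous - time simulation on a cycle is given below; the nearest - neighbour kernel with uniformly chosen neighbours is assumed, and the pair (xi, 1 - xi) plays the role of the limiting pair of a voter process and its reciprocal.

    import numpy as np

    def voter_model(n_sites=200, t_max=50.0, seed=4):
        """standard voter model on a cycle: each site, at rate 1, copies the opinion
        of a uniformly chosen nearest neighbour (gillespie-style simulation)."""
        rng = np.random.default_rng(seed)
        xi = rng.integers(0, 2, size=n_sites)        # random 0/1 initial opinions
        t = 0.0
        while t < t_max:
            t += rng.exponential(1.0 / n_sites)      # waiting time for the next update (total rate n_sites)
            k = rng.integers(n_sites)                # site to update
            nb = (k + rng.choice([-1, 1])) % n_sites
            xi[k] = xi[nb]
        return xi

    xi = voter_model()
    print("fraction of sites holding opinion 1 at time t_max:", float(xi.mean()))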
a unified representation can be given with the poissonian approach developed above if is extended to as with the intensities defined in ( [ intensity ] ) and the poisson point processes with intensity measure as in ( [ n ] ) we can extend theorem [ thm:111 ] as follows : suppose , and for assume additionally .then the infinite rate symbiotic branching process with initial condition coincides in law with the unique weak solution to ( [ eqn : st ] ) . note that the additional assumption on the initial condition is not necessary for equation ( [ eqn : st ] ) to have weak solutions .we believe that also the convergence of to the solutions of ( [ eqn : st ] ) holds without the restriction . for the case the theorem is nothing else but theorem [ thm:111 ] so that we only need to discuss the extension to .+ existence of a weak solution to ( [ eqn : st ] ) , for , can be verified as sketched in the proof of theorem [ exis ] for ; since the jump measure is finite the proof is simpler since no truncation procedure for is needed .+ to identify the weak solutions to ( [ eqn : st ] ) with it suffices , by theorem [ thm:12 ] , to show that , for any weak solution to ( [ eqn : st ] ) , is a voter process and .we use two facts : first , the jumps preserve the property for all and , secondly , the drift and the compensator integral cancel each other . to establish the first , note that the choice of implies that always and so that the only transitions are ( compare with ( [ x ] ) ) or , simply , the latter follows from the simple computation for which we used and hence , canceling the compensator integral with the drift shows that equation ( [ eqn : st ] ) can be written equivalently in the simplified form since the configurations only change by a jump and the jumps only switch to and vice versa one can already guess that both coordinates are reciprocal voter processes . to make this precise we apply it s formula to functions of andderive that satisfies the martingale problem for the standard voter process .it suffices to carry this out for since we already know that for all .+ let us fix a test - function that only depends on finitely many coordinates , , and apply it s formula to to obtain \mathcal n(\{k\},d(y_1,y_2),dr , ds ) .\end{aligned}\ ] ] we denoted again by the configuration that is obtained from the configuration flipping only the opinion at site . adding and subtracting the compensated integral leads to \mathcal{(n - n')}(\{k\},d(y_1,y_2),dr , ds)\\ & \quad+\sum_{k\in k}\int_0^ti_s(k)\left[f\big ( ( u_{s-})^{(k)}\big)-f\big ( u_{s-}\big)\right]\,ds . \end{aligned}\ ] ] next, we use that for all we have to obtain plugging - in , we proved that \end{aligned}\ ] ] is a local martingale and since everything is bounded it is , in fact , a martingale .this shows that has the generator ( [ gener ] ) of the voter process .+ well - posedness for this martingale problem implies the weak uniqueness statement of the theorem for .finally , we want to explain that the extended choice of is more natural than it appears on first view .there are two good reasons .first , going back to definitions [ defj ] and [ jumpmeasure ] let us see what we get for : since for completely negatively correlated brownian motions started at the exit - measure from the first quadrant is . 
secondly , a more careful look at the density of for shows that the mass accumulates at and since explodes for tending to .more precisely , converges in the vague topology ( extended to the completion of ) to .unfortunately , both justifications lead to with an additional infinite atom at .luckily , the infinite atom at has no impact on the poissonian equations since the integrand of ( [ eqn : st ] ) vanishes if and .we believe that some rigorous work on this observation might lead to some interesting results . + * this brief discussion explains the natural unification of the family with the voter process at its boundary and justifies our interpretation of as generalized voter process , given below theorem [ 0 ] . *ld acknowledges an esf grant random geometry of large interacting systems and statistical physics " and hospitality of the technion .lm acknowledges hospitality of the universit paris 6 .symbiotic branching models are by definition solutions of ( possibly infinite ) systems of ordinary stochastic differential equations interestingly , the infinite rate analogues that have been defined so far as solutions to exponential martingale problems can be represented as solutions to jump - type stochastic differential equations. the most straight - forward generalization of ( [ dif ] ) is for a lvy process .the modeling drawback of ( [ dif2 ] ) is that once has a jump , then has a jump .if the jumps of the solution process are meant to depend on the jumps of the jump - measure in a non - linear way , other concepts are needed .one way to model such processes is to replace the jump noise by a general compensated random measure : this notion of jump - type stochastic differential equation is needed for our purposes .unfortunately , the basic jump measure of theorem [ 0 ] has a second order singularity at and a polynomial decreasing tail which for prevents existence of second moments .this causes the general second moment integration theory to collapse here and the abstract martingale integration theory with respect to compensated random measures comes into play . to guide the reader unfamiliar with those concepts we briefly recall some core definitions and concepts .first , suppose is a poisson point measure on with compensator measure on a stochastic basis , i.e. for all measurable sets with , ,a) ] are independent . defining ,a)=t\lambda(a) ] . as , by assumption , the jump measure is integer valued it should come as no surprise that may be regarded as counting measure for the jumps of an auxiliary -valued optional process , i.e. 
\times a)(\omega)=\sum_{s\leq t } \mathbf 1_{a}(\delta\beta_s(\omega ) ) .\end{aligned}\ ] ] with this notation in hand we can proceed with the abstract definition of the stochastic integral ( see definition ii.1.27b ) of ) .absolute continuity in time of the compensator implies that almost surely so that the quantity in vanishes .the set of possible integrands is changed to ^{1/2}<\infty\bigg\ } \end{aligned}\ ] ] and the stochastic integral is defined to be the unique ( up to indistinguishable ) purely discontinuous local martingale such that hence , if has an atom at , the stochastic integral has a jump .recall that by definition a purely discontinuous local martingale is required to be orthogonal to all continuous martingales but not to be pathwise everywhere discontinuous .for example , if is a standard poisson process , the compensated process is purely discontinuous but far from being pathwise everywhere discontinuous .+ the integrability condition for class is rather unsatisfactory as it involves the jump measure itself rather than only its compensator which might be more easy to handle .a characterization of the set is given in theorem ii.1.33 of : it suffices to show that ( recall that in our setting of vanishes ) <\infty,\\ { \mathbb{e}}\left [ \int_0^t \int _ e |h(s , x)|{\mathbf{1}}_{\{|h(s , x)|\geq 1\ } } \mathcal n'(ds , dx)\right]<\infty,\end{split } \end{aligned}\ ] ] showing in particular that . finally , to motivate the naming `` stochastic integral '' for the abstract local martingale , the following property should be mentioned .if the integrand is nice , that is , additionally <\infty$ ] , then both integrals against and the compensator measure can be defined pathwise and e. perkins ( 2002 ) .watanabe superprocesses and measure - valued diffusions ._ lectures on probability theory and statistics , saint - flour 1999 _ , lecture notes in mathematics , * 1781 * , springer , berlin , 132329 .j. rebholz `` a skew - product representation for the generator of a two sex population model . ''stochastic partial differential equations ( edinburgh , 1994 ) , 230240 , london math .lecture note ser ., 216 , cambridge univ . press , 1995 .
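as a small illustration of the compensated integrals discussed in this appendix, the toy example below uses a finite intensity measure, for which everything can be computed pathwise; the point of the abstract martingale construction above is precisely that the jump measure nu of the infinite rate process is not of this benign type near the origin. the intensity, mark distribution and test function chosen here are arbitrary.

    import numpy as np

    def compensated_integral(h, T=1.0, lam=3.0, n_runs=20000, seed=5):
        """monte carlo check that int_0^T int h(x) (N - N')(ds, dx) has mean zero when
        N is a poisson point measure with intensity lam * dt * (standard normal mark law)."""
        rng = np.random.default_rng(seed)
        comp = lam * T * h(rng.normal(size=200000)).mean()   # compensator contribution lam * T * E[h(X)]
        vals = np.empty(n_runs)
        for i in range(n_runs):
            n_atoms = rng.poisson(lam * T)                   # number of atoms of N in [0, T]
            marks = rng.normal(size=n_atoms)                 # their marks
            vals[i] = h(marks).sum() - comp                  # int h dN minus int h dN'
        return vals

    vals = compensated_integral(lambda x: x ** 2)
    print("sample mean of the compensated integral (should be close to 0):", float(vals.mean()))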
since the seminal work of dawson and perkins , mutually catalytic versions of superprocesses have been studied frequently . in this article we combine two approaches extending their ideas : the approach of adding correlations to the driving noise of the system is combined with the approach of obtaining new processes by letting the branching rate tend to infinity . the processes are considered on a countable site space . + we introduce infinite rate symbiotic branching processes which surprisingly can be interpreted as generalized voter processes with additional strength of opinions . since many of the arguments go along the lines of known proofs this article is written in the style of a review article . [ multiblock footnote omitted ] going back to the seminal work of watanabe and dawson , the subject of measure - valued diffusion processes arising as scaling limits of branching particle systems has attracted the interest of many probabilists . many tools had to be developed to study the fascinating properties of the dawson / watanabe process ( also called superprocess or super - brownian motion ) and its relatives . characterizations and constructions of the process via a laplace transform duality to non - linear parabolic partial differential equations , infinitesimal generator and corresponding martingale problem , the pathwise lookdown construction of donelly / kurtz , or le gall s brownian snake construction based on the ray - knight theorems ( see the overview ) led to many deep results . much of the analysis is based on the branching property , i.e. the sum of two independent super - brownian motions is equal in distribution to a single super - brownian motion started at . in the early 90s further directions became popular . super - brownian motion was found to be a universal scaling limit not only of branching systems but also of interacting particle systems such as voter process and its modifications ( see for instance , ) . furthermore , instead of considering plainly super - brownian motion , interactions were introduced . tools such as dawson s generalized girsanov theorem have been successfully applied in various contexts . here , we will be mostly interested in variants of catalytic super - brownian motion , i.e. super - brownian motion with underlying branching mechanism depending on a catalytic random environment . as long as the environment is fixed , a good deal of the analysis can still be performed with techniques developed for the super - brownian motion . more delicately , taking into account connections to stochastic heat equations , dawson / perkins introduced a mutually catalytic superprocess ( see ) . their mutually catalytic branching model on the continuous site space consists of two super - brownian motions each being the catalyst of branching for the other . the model was described via stochastic heat equations . they considered driven by two independent white noises on . here , denotes the one dimensional laplacian . the mutually catalytic interaction of two super - brownian motions has one particular drawback : the branching property is destroyed so that many of the previously known tools collapse . fortunately , some ideas borrowed from the study of interacting particle systems and interacting diffusion models could be applied successfully due to the symmetric nature of the model . 
in particular , a self - duality that extends the linear system duality known for interacting particle systems could be established and utilized to prove uniqueness and longterm properties . besides the above continuous model , the mutually catalytic model on the lattice was constructed and studied by dawson and perkins as well . this article , which is focused on spatial branching models on discrete space , is motivated by two recent developments . first , in the series of papers , , the effect of sending the branching rate to infinity was studied in the discrete space mutually catalytic branching model . the resulting infinite rate mutually catalytic branching model is one of the rare tractable spatial models with finite moments but infinite moment forcing the system to have critical scaling behavior . + secondly , etheridge / fleischmann introduced the following generalization of the dawson - perkins model . they considered the mutually catalytic branching model with correlated driving noises which , on the level of a branching system approximation , corresponds to a two type system of branching particles with correlated branching mechanism . they called their model symbiotic branching model in contrast to the mutually catalytic branching model of dawson / perkins that appears as a special case for zero correlations . we will use equally the name symbiotic branching and mutually catalytic branching with correlations . correlating the branching mechanism might seem artificial on first view . on second view one observes that the extremal correlations lead to well - known models from the theory of interacting diffusion models : the stepping stone model with applications in theoretical biology and a parabolic anderson model with applications in statistical physics . as those models have very different path behavior one could expect phase - transitions occurring when changing the correlations . on the level of moments those phase transitions have been revealed recently in : there is a precise transition for moments when the correlation parameter changes from negative to positive . the main result of this article , formulated here in a slightly simplified version , is the following theorem which should be viewed as the natural combination of the two aforementioned developments . in particular , the theorem below extends results from to the case of `` correlated ( symbiotic ) branching '' . [ 0 ] suppose is a parameter and is the unique non - negative weak solution to the symbiotic branching model on the lattice defined by here , denotes the discrete laplacian on and the driving gaussian process has correlation structure &=\delta_0(k - j)t,\\ { \mathbb{e}}\big[b^2_{t}(k)b^2_{t}(j)\big]&=\delta_0(k - j)t,\\ { \mathbb{e}}\big[b^1_{t}(k)b^2_{t}(j)\big]&=\varrho\delta_0(k - j)t.\end{split } \end{aligned}\ ] ] additionally , assume that the non - negative initial conditions do not depend on , satisfy a minor growth condition ( for the precise definitions see ( [ 111 ] ) and section [ sec:2.1.1 ] ) and also for all . then converges , as tends to infinity , weakly in the meyer - zheng pseudo - path " topology ( introduced in ) , to a limiting rcll process taking values in which is the unique weak solution to the system of poissonian integral equations for and . here , is a poisson point measure on with intensity measure where and , for any , theorem [ 0 ] will be proved in section [ sec:3 ] for more general countable state - space instead of and -matrix instead of . 
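the correlation structure in theorem [ 0 ] is easy to realise numerically: per site one draws two independent gaussian increments and mixes them with weights rho and sqrt(1 - rho^2) (a standard cholesky construction, not claimed to be the construction used in the proofs). the snippet below also checks the three covariance identities of the theorem empirically.

    import numpy as np

    def correlated_increments(n_sites, rho, dt, rng):
        """one time step of the driving noise of theorem [0]: independent across sites,
        cross-correlation rho between the two families at the same site."""
        w = rng.normal(size=(2, n_sites)) * np.sqrt(dt)
        return w[0], rho * w[0] + np.sqrt(1.0 - rho ** 2) * w[1]

    rng = np.random.default_rng(7)
    rho, dt, n = -0.4, 1.0, 200000
    b1, b2 = correlated_increments(n, rho, dt, rng)
    print("var B1(k)              ~", float(b1.var()))                      # close to dt
    print("cov B1(k), B2(k)       ~", float(np.mean(b1 * b2)))              # close to rho * dt
    print("cov B1(k), B2(j), k!=j ~", float(np.mean(b1 * np.roll(b2, 1))))  # close to 0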
the proof of the theorem follows from theorems [ pro:1 ] and [ thm:111 ] . the parameter only occurs in the measure so that it does not surprise that proofs go along the lines of replacing in their poissonian equations by some . the striking fact of the generalization to is that it allows to understand as a family of generalized voter processes with the standard voter process appearing for . the generalized voter process interpretation goes as follows : suppose at each site lives a voter with one of two possible * opinions*. their opinions additionally have a non - negative * strength*. mathematically speaking , the type of opinion is determined by the non - zero coordinate of the opinion - vector ( recall the definition of ) and the strength is determined by the absolute value , i.e. * codes opinion of strength , * codes opinion of strength . formulated like this , the standard voter process only takes values and since all opinions do have a fixed strength , say . if ( resp . ) is large , we say the opinion is strong , otherwise weak . + voters change dynamically their opinions and their strength according to the next two possibilities : * * change of opinion strength only * : suppose has an atom at . then , by definition of the two integrands , the poissonian integrals produce two - dimensional jumps of the form so that , added to the current state of the system , the state of the system at site changes according to if before the jump the voter had opinion of strength , the change is and if the voter had opinion before . hence , if is chosen by the basic jump measure , only the strength of the opinion changes but not the type . * * change of opinion and its strength * : suppose has an atom at . then , by definition of the integrands , the poissonian integrals produce jumps of the form so that , added to the current state of the system , the state of the system at site changes according to if before the jump the voter had opinion of strength , the change is and if the voter had opinion before . hence , if is chosen by the basic jump measure , the voter changes strength and type of opinion . we show in section [ sec : voter2 ] that theorem [ 0 ] extends naturally to when is replaced by . if additionally , then solutions to ( [ uv ] ) give standard voter processes . note that in this case only the second type of changes occurs since only has atoms at . hence , the strength of the opinion does not change . in particular , we only see opinion changes from to and vice versa . finally , we should also give an interpretation to the rates : due to the definition of and , the rate of change for the voter at site is high if the strength of the opinions of his neighbors of different opinion is high compared to his opinion . in particular , voters with weak conviction tend to change quicker their opinions than voters with strong conviction . the result of theorem [ 0 ] might look frightening to the reader not familiar with interacting diffusion processes and/or jump diffusions . however , once the connection to the results of and is understood , the proofs of the theorem go along the lines of . therefore , we decided to write this article in the form of a review article explaining in depth the background . we do not give many detailed proofs but instead give more detailed calculations to explain the origins of ( [ uv ] ) . 
in the following we explain carefully * the background of catalytic branching processes , * definitions , existence , uniqueness and tools for ( [ ss ] ) , * what is known on the longtime behavior of ( [ ss ] ) to motivate the choice of in the theorem via planar brownian motions exiting a cone , * more details on ( [ uv ] ) and ( alternative ) constructions of , * concepts and definitions for jump diffusions . the background and connections to well - known stochastic processes from the literature will be explained exhaustively in * section [ sec:1]*. two different routes from known models to mutually catalytic branching models are disclosed : the original motivation of dawson / perkins originating from catalytic super - brownian motion and symbiotic branching as unifying model for some interacting diffusions . as a final motivation the connection of stepping stone processes and voter processes is recalled . * section [ sec:2 ] * is devoted to an overview of precise definitions , existence and uniqueness results and longtime properties for finite rate symbiotic branching processes . in particular , the second moment transitions are discussed in detail . proofs are cooked down to the main ingredients . finally , in * section [ sec:3 ] * the infinite rate symbiotic branching processes are introduced and reinterpreted as generalized voter processes in the very end . additionally , a brief summary of jump diffusions is included to the appendix .
this paper provides a description of a new technique for studying magnetic fields using gradients of synchrotron intensity .gradients of synchrotron polarization have been successfully used before ( see ) . however , in this letter we explore theoretically and numerically a more simple measure , namely , synchrotron intensity gradients ( sigs ) and evaluate its utility for observational study of magnetic fields and accounting for the foreground contamination induced by the interstellar media within the cmb polarization studies .galactic and extragalactic synchrotron emission arises from relativistic electrons moving in astrophysical magnetic field ( see ) . in terms of cmb and high redshift hi studies ,the most important is galactic synchrotron emission .however , diffuse synchrotron emission is observed throughout the interstellar medium ( ism ) , the intracluster medium ( icm ) , as well as in the lobes of radio galaxies ( e.g. ) .thus synchroton emission provides the largest range of scales for studying magnetic fields .astrophysical magnetic fields are turbulent as observations testify that turbulence is ubiquitous in astrophysics . asrelativistic electrons are present in most cases , the turbulence results in synchrotron fluctuations , which may provide detailed information about magnetic fields at different scales , but , at the same time , impede measures of cmb and high redshift hi .the latter has recently become a topic of intensive discussion ( see ) .the statistics of synchrotron intensity has been studied recently in ( , hereafter lp12 ) , where it was shown how fluctuations of synchrotron intensity can be related to the fluctuations of magnetic field for an arbitrary index of cosmic rays spectrum .there it was shown that the turbulence imprints its anisotropy on synchrotron intensity and this provides a way of determining the direction of the mean magnetic field using synchrotron intensities only .the current paper explores whether on the basis of our present - day understanding of the nature of mhd turbulence , _ synchrotron intensities _ can provide more detailed information about magnetic fields . 
inwhat follows 2 we discuss the theoretical motivation of this work routed in the modern theory of the of mhd turbulence , the properties of synchrotron intensity gradients ( sigs ) , their calculation as well as the influence of noise and sonic mach number are discussed in 3 .the comparison of the sigs technique with the technique based on the anisotropy of the correlation functions of intensity is presented in 4 , the synergy with other techniques of magnetic field studies is outlined in 5 .we present our summary in 6 .while the original studies of alfvenic turbulence done by and were based a hypothetical model of isotropic mhd turbulence , the later studies ( see ) uncovered the anisotropic nature of the mhd cascade.the modern theory of mhd turbulence arises from the prophetic work by , henceforth gs95 ) .further theoretical and numerical studies ( , henceforth lv99 , , see for a a review ) extended the theory and augmented it with new concepts .our theoretical motivation for the present work is based on the modern understanding of the nature of mhd turbulence that we briefly summarize below .the gs95 theory treats the alfvenic turbulence .the numerical simulations in testify that for non - relativistic mhd turbulence the coupling between different types of fundamental modes is the effect that can be frequently neglected .therefore , in realistic compressible astrophysical media one can consider three distinct cascades , namely , the cascade of alfven , slow and fast modes .alfven modes initially evolve by increasing the perpendicular wavenumber in the subalfvenic regime , i.e. for the injection velocity being less than the alfven velocity , of weak turbulence , while the parallel wavenumber stays the same ( see lv99 , ) .this is not yet the regime of gs95 turbulence , but , nevertheless , the increase of the perpendicular wave number means the modes get more and more perpendicular to magnetic field . in alfvenic turbulence the magnetic field and velocityare symmetric and therefore the aforementioned situation means that both the gradients of magnetic field and velocity are getting aligned perpendicular to the direction of the magnetic field .the weak alfvenic turbulence can be viewed as the interaction of wave packets with a fraction of the energy cascading as a result of such an interaction . as the perpendicular wavenumber increases ,this fraction gets larger and eventually becomes ( see ) .this is the maximal fraction of energy that can be transferred during the wavepacket interaction .however , the equations dictate the necessity of further increase of perpendicular wavenumber as the result of the interaction of the oppositely moving wavepackets .this can only be accomplished through the simultaneous increase of the parallel wavenumber .this happens at the transition scale , where is the turbulence injection scale and is the alfven mach number ( see lv99 , ) .this is the stage of the transfer to the strong or gs95 regime of turbulence . 
At this stage the so-called critical balance condition should be satisfied, which states that the time of the interaction of the oppositely moving wavepackets, $\sim \ell_\parallel / V_A$, where $\ell_\parallel$ is the parallel (to the magnetic field) scale of the wavepacket, is equal to the perpendicular shearing time of the wavepacket, $\sim \ell_\perp / v_\ell$, where $\ell_\perp$ is the perpendicular scale of the wavepacket and $v_\ell$ is the turbulent velocity associated with this scale. This is how the cascade proceeds in the strong regime, with the wavepackets getting more and more elongated according to (LV99) $\ell_\parallel \approx L (\ell_\perp/L)^{2/3} M_A^{-4/3}$, which testifies that for $\ell_\perp \ll L$ the parallel scale of the wavepackets gets much larger than the perpendicular scale, i.e. the wavepackets get more and more elongated as the perpendicular scale decreases. This means that both the velocity and magnetic field gradients get even more aligned perpendicular to the magnetic-field direction. This increase of the disparity of parallel and perpendicular scales continues until the energy reaches the dissipation scale. The magnetic field direction is changing in the turbulent flow. Therefore an important question that arises is what direction of the magnetic field should be used in the arguments above. Most of the earlier work assumed the perturbative approach, and thus the mean-field direction was used. This also appears to be an implicit assumption of the GS95 study. However, in the works that followed the groundbreaking GS95 paper (namely, LV99, ) it was shown that it is not correct to use the mean magnetic field: one should use the _local_ magnetic field that is sampled by the wavepacket. Therefore the aforementioned gradients of the velocities and magnetic fields are defined with respect to the _local_ magnetic field and therefore sample the local direction of the magnetic-field flux. This point is very important for the technique that we are going to propose. Indeed, by measuring the velocity/magnetic field gradients one can trace the magnetic field in the 3D volume. A more complicated situation emerges in the case of super-Alfvénic turbulence, i.e. for $M_A > 1$. In this regime the turbulence at large scales is dominated by hydrodynamic motions, with the magnetic field lines following the streamlines of the flow within large turbulent eddies. Therefore both the velocity gradients and the magnetic field gradients are expected to be the largest when they are perpendicular to the streamlines. This means that both the velocity and magnetic field gradients are going to be perpendicular to the magnetic field in the flow. As the turbulence proceeds along the Kolmogorov cascade, i.e. $v_\ell \propto \ell^{1/3}$, the kinetic energy decreases. At the scale $l_A = L M_A^{-3}$ the kinetic energy gets into equipartition with the magnetic energy, and thus the GS95 picture of trans-Alfvénic turbulence is applicable from the scale $l_A$ downward. Naturally, our earlier considerations for the alignment of magnetic and velocity gradients with the local magnetic-field directions are applicable for eddies smaller than $l_A$. At the same time, the large-scale hydrodynamic eddies can also compress the magnetic field, providing the alignment of magnetic field gradients perpendicular to the velocity gradients. We explore this numerically in the paper. We also note that in both sub-Alfvénic and super-Alfvénic turbulence the magnitude of the velocity gradients increases with the decrease of the scale. Our considerations above testify that the same should happen with the magnetic field gradients as well.
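For reference, the anisotropy relation quoted above follows from critical balance in two lines. This is a standard derivation added here for the reader, not part of the original text, and it uses the strong-cascade velocity scaling $v_\ell \approx V_L (\ell_\perp/L)^{1/3} M_A^{1/3}$ that holds below $\ell_{\rm trans}$:
\[
\frac{\ell_\parallel}{V_A} \sim \frac{\ell_\perp}{v_\ell}
\quad\Longrightarrow\quad
\ell_\parallel \approx L \left(\frac{\ell_\perp}{L}\right)^{2/3} M_A^{-4/3},
\]
so that the aspect ratio $\ell_\parallel/\ell_\perp \propto \ell_\perp^{-1/3}$ grows as $\ell_\perp$ decreases, which is the progressive elongation of wavepackets invoked in the text.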
In other words, magnetic field gradients are expected to reflect the smallest eddies, which are well aligned with the magnetic-field direction at the scale of these small eddies. The MHD cascade also contains slow and fast modes. The slow modes are slaved by the Alfvénic modes, which shear them both in the case of magnetically dominated (low-$\beta$) and gas-pressure dominated (high-$\beta$) plasmas (GS95, ). Thus we expect that the slow modes will also show properties of the magnetic gradients similar to those of the Alfvén waves. The fast modes are different, however. They create an acoustic-type cascade that is only marginally sensitive to the magnetic-field direction. However, numerical simulations show that the fast modes are subdominant even for supersonic driving. Therefore we expect to see the alignment of magnetic gradients perpendicular to the local magnetic-field direction. This is the conclusion that we use for the study below. We may add that for weakly compressible flows the density associated with the slow waves will mimic the GS95 scalings. However, for supersonic flows the production of shocks significantly disturbs the statistics of density. As a result, for subsonic flows density gradients are also expected to be aligned perpendicular to the magnetic field, which explains the empirical results in as well as our numerical experiments with density gradients in (, henceforth GL16) and (, henceforth YL17). In terms of magnetic field tracing, the density gradients are expected to be inferior to the velocity and magnetic field gradients, but the misalignment of density gradients and magnetic-field directions can be informative in terms of shocked gas and supersonic flows. Synchrotron emission arises from relativistic electrons spiraling about magnetic fields (see and references therein). Careful study of the formation of the synchrotron signal (see ) revealed that the signal is essentially non-linear in the magnetic field, with the origin of the nonlinearity arising from relativistic effects. For a power-law distribution of electrons, $N(E)\,dE \propto E^{-p}\,dE$, the synchrotron emissivity is $I_{\rm sync} \propto \int dz\, B_\perp^{\gamma}$ with $\gamma = (p+1)/2$, where $B_\perp$ corresponds to the magnetic field component perpendicular to the line of sight, the latter given by the z-axis. The fractional power of the index was a complication for many statistical studies. However, the problem of the magnetic field dependence on the fractional power was dealt with in , where it was shown that the correlation functions and spectra of $B_\perp^{\gamma}$ can be expressed as a product of a known function of $\gamma$ times the statistics of the $\gamma = 2$ case. Although we do not use the correlation function approach explicitly, our approach is based on the statistical properties of turbulence, and we expect that, similar to the case considered in , the gradients calculated with $\gamma = 2$ will correctly represent the results for other $\gamma$. Thus the fractional power of the magnetic field in eq. ([synch]) will not be considered as an issue within the present study aimed at determining magnetic field gradients. It is evident from eq. ([synch]) that gradients of the synchrotron intensity transfer into the integral of the magnetic field gradients. Our considerations in [sec:2] suggest that the latter should be aligned perpendicular to the magnetic field. As the synchrotron polarization is also directed perpendicular to the magnetic field, we expect that the directions of the gradients and the synchrotron polarization should coincide.
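To make eq. ([synch]) concrete, the sketch below builds a synthetic synchrotron intensity map from a magnetic-field cube by integrating a power of the plane-of-sky field along the line of sight. This is a minimal illustration rather than the production pipeline used for the simulations discussed below; the array names, the default $\gamma = 2$ and the implicitly uniform relativistic-electron density are assumptions.

```python
import numpy as np

def synchrotron_intensity(bx, by, gamma=2.0):
    """Synthetic synchrotron intensity map from a 3D magnetic-field cube.

    bx, by : field components perpendicular to the line of sight (the z-axis),
             arrays of shape (nx, ny, nz).
    gamma  : emissivity index, B_perp**gamma, with gamma = (p + 1) / 2 for an
             electron spectrum N(E) ~ E**(-p).
    """
    b_perp = np.sqrt(bx**2 + by**2)       # plane-of-sky field strength
    return (b_perp**gamma).sum(axis=2)    # integrate the emissivity along z

# toy usage with a random cube standing in for an MHD snapshot:
rng = np.random.default_rng(0)
bx, by = rng.standard_normal((2, 64, 64, 64))
intensity_map = synchrotron_intensity(bx, by)   # 64 x 64 map
```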
To test this assumption we use the results of numerical simulations obtained with our 3D compressible MHD code (see more details in ) and, following the procedures described in , we create both maps of synchrotron polarization and synchrotron intensity. To calculate synchrotron intensity gradients (SIGs), we use the recipe of gradient calculation that we introduced in YL17. The recipe is composed of three steps. We first pre-process our synchrotron intensity maps with an appropriate noise-removal Gaussian filter. We then interpolate the map to ten times its original resolution, and determine the gradient field by computing the maximum gradient direction in the interpolated synchrotron intensity maps. By probing the peak in the gradient orientation distributions in the sub-blocks of the gradient map, we obtain an estimate of the _sub-block averaged_ gradient vector as in YL17. That allows us to compare our magnetic field predictions to synthetic polarization vectors. Figure [fig:1] demonstrates that the recipe can deliver SIGs in a robust way, and the magnetic-field directions that are obtained with the SIGs provide a good representation of the magnetic field. To demonstrate the latter point, in Figure [fig:1] we also show the magnetic field directions as traced by the synchrotron polarization in the synthetic observations. The latter is also directed perpendicular to the magnetic field, and therefore we observe a good alignment of the two types of vectors in Figure [fig:1]. For most of the environments of spiral galaxies the areas dominating the synchrotron emission may correspond to hot gas with low sonic Mach numbers. Nevertheless, it is interesting to explore to what extent the effects of compressibility can affect the SIG technique. We also test how the SIGs trace magnetic field in systems with different sonic Mach numbers. The upper panels of Figure [fig:2] show the relative alignment of the polarization and the SIGs. We observe that the alignment decreases with the increase of the sonic Mach number, but the SIGs still trace the magnetic field, which is also supported by the lower panels of Figure [fig:2], where the distribution of the SIGs about the polarization direction is shown. We note that there exist different ways of measuring the turbulence sonic Mach number, and studies like those illustrated in Figure [fig:2] allow one to evaluate the accuracy of magnetic field tracing using the SIGs.
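A minimal sketch of the three-step recipe described above (Gaussian pre-filtering, interpolation, per-pixel gradient directions, and sub-block averaging via the peak of the orientation histogram) is given below. It assumes the numpy/scipy stack and uses illustrative values for the filter width, interpolation factor and block size; it is not the exact YL17 implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def sig_block_angles(intensity, sigma=2.0, upsample=10, block=64, nbins=36):
    """Sub-block averaged synchrotron intensity gradient (SIG) orientations."""
    smoothed = gaussian_filter(intensity, sigma)   # step 1: noise-removal filter
    fine = zoom(smoothed, upsample, order=3)       # step 2: interpolate the map
    gy, gx = np.gradient(fine)                     # step 3: per-pixel gradients
    angles = np.arctan2(gy, gx) % np.pi            # gradient orientation in [0, pi)
    fine_block = block * upsample
    ny, nx = (s // fine_block for s in angles.shape)
    out = np.zeros((ny, nx))
    for i in range(ny):
        for j in range(nx):
            sub = angles[i * fine_block:(i + 1) * fine_block,
                         j * fine_block:(j + 1) * fine_block]
            hist, edges = np.histogram(sub, bins=nbins, range=(0.0, np.pi))
            k = np.argmax(hist)
            out[i, j] = 0.5 * (edges[k] + edges[k + 1])   # histogram-peak angle
    return out

# the per-block SIG angle, rotated by 90 degrees, predicts the plane-of-sky
# magnetic-field direction and can be compared with polarization vectors:
# sig_angles = sig_block_angles(intensity_map)
```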
(Figure caption) The alignment measure of the SIGs as a function of the noise level, for maps with a block size of 64, which corresponds to the middle panels of Figure [fig:1]; the quoted filter width, in units of pixels, refers to the Gaussian filters applied in the pre-processing step. Real observations are associated with noise in the data. Therefore we test to what extent the alignment persists in a noisy environment. We calculate the SIGs after adding white noise to our synthetic maps and use the alignment measure (see GL16) to quantify how well the SIGs trace the synchrotron polarization that represents the projected magnetic field: $AM = 2\langle \cos^2\theta \rangle - 1$, where $\theta$ is the angle between the SIG direction and that of the polarization. The noise is introduced into the data in the following way. We generate white noise such that the noise amplitude is Gaussian with zero mean value. The noise level is defined as the standard deviation of the noise distribution. The resultant noise is added to the original map. The noise level is selected to be a multiple of a fixed fraction of the mean synchrotron intensity, extending up to a maximum equal to the mean synchrotron intensity. This ensures that maps with both weak and strong noise levels are produced. We treat the synthetic data as if it were real observational data. For this purpose, we analyze our noisy data using pre-processing Gaussian filters, which is a procedure frequently used as a noise reduction tool in observations. The smoothing effect of the Gaussian filter enables us to compute the per-pixel gradient information more accurately. The strength of the filter is controlled by its width, which characterizes how many pixels are averaged to give the information of one pixel in the filtered map. A larger width will suppress noise and produce a smoother map, while sacrificing the accuracy on small scales. To see the effect of the filter on the alignment, we perform a test with several filter widths on maps with different noise levels, and measure the AM of the resultant map. The results of the alignment measures under different noise levels and Gaussian filters are shown in Figure [fig:3]. Without the Gaussian pre-filtering, the alignment is expected to be strongly contaminated, which we also observed. Applying Gaussian filters significantly improves the alignment. While for a small Gaussian filter the alignment decreases with the increase of the noise, a filter with a larger width preserves the alignment even in a strong-noise environment. This experiment demonstrates that the SIGs can trace magnetic fields in the presence of noise. Detailed maps of synchrotron radiation can be obtained with interferometers. Interferometers measure the spatial Fourier components of the image, and by changing the baseline of the interferometer one gets different spatial frequencies. For interferometric observations the single-dish measurements deliver the low spatial frequencies; when single-dish observations are not available, these low spatial frequencies are missing. It is then important to understand how this can affect the accuracy of the SIGs. Synchrotron polarization gradients were used in , and one of the motivations for their use was the possibility of using gradients with interferometric data obtained without single-dish observations. Below we test how the accuracy of the SIG tracing of magnetic field depends on the missing spatial frequencies.
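The noise experiment just described, and the missing-spatial-frequency test announced above, both reduce to a few array operations. The sketch below collects them: the alignment measure of eq. ([am]), Gaussian white-noise injection followed by the Gaussian pre-filter, and a sharp Fourier-space cut of the low spatial frequencies. The noise fractions, filter widths and the top-hat cutoff are illustrative assumptions rather than the exact choices behind Figures [fig:3] and [fig:4].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def alignment_measure(theta1, theta2):
    """AM = 2<cos^2(theta)> - 1 for the angle between two orientation fields
    (AM = 1: perfect alignment, AM = -1: perpendicular orientations)."""
    return 2.0 * np.mean(np.cos(theta1 - theta2) ** 2) - 1.0

def add_noise_and_filter(intensity, noise_fraction, sigma, rng=None):
    """Add zero-mean Gaussian white noise with standard deviation equal to
    noise_fraction times the mean intensity, then apply the Gaussian pre-filter."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(0.0, noise_fraction * intensity.mean(), intensity.shape)
    return gaussian_filter(intensity + noise, sigma)

def remove_low_spatial_frequencies(image, k_min):
    """Zero out Fourier modes with |k| < k_min (in units of the map's fundamental
    frequency), mimicking interferometric data without short-baseline coverage."""
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny) * ny
    kx = np.fft.fftfreq(nx) * nx
    kk = np.sqrt(kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2)
    ft = np.fft.fft2(image)
    ft[kk < k_min] = 0.0
    return np.fft.ifft2(ft).real

# e.g. AM between SIGs from the clean map and from a degraded map
# (sig_block_angles as in the previous sketch):
# degraded = remove_low_spatial_frequencies(
#     add_noise_and_filter(intensity_map, noise_fraction=0.5, sigma=4), k_min=4)
# am = alignment_measure(sig_block_angles(intensity_map), sig_block_angles(degraded))
```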
In Figure [fig:4] we show the alignment measure given by eq. ([am]) using the same data as in Figure [fig:3], but gradually removing spatial frequencies, starting with the lowest spatial frequencies of the inertial range of our data. We observe a gradual decrease of the AM. When the block size increases, removing more of the low spatial frequencies still preserves the alignment. Our results confirm that the high spatial frequencies are the most important for determining the SIGs. (Figure caption) The alignment measure of the SIGs as a function of the removed low spatial frequencies for the same map, using block sizes of 16, 32, 64 and 128, and with a Gaussian pre-filter of width 4 applied; by default we remove the injection range of the simulation. The encouraging results above stimulated us to apply the SIG technique to the Planck synchrotron data. For our test we picked a part of the Planck foreground synchrotron intensity map and compared the magnetic-field directions that we obtained with the SIGs with the magnetic-field directions as determined by the Planck synchrotron polarization data. To simplify the interpretation we used only the part of the data corresponding to high Galactic latitudes. We projected the data onto a Cartesian frame. As shown in Section [subsec:gf], a suitably chosen Gaussian pre-filter can already preserve the alignment in a strong-noise environment; we followed this procedure and reduced the noise using a Gaussian pre-filter. Figure [fig:5] shows a part of the sky overplotted with the SIGs and the synchrotron polarization, which shows a good alignment in this region. The alignment measure computed for this region confirms this. (Figure caption) SIGs (yellow) and synchrotron polarization vectors (cyan) on a piece of Planck synchrotron data, overplotted on the synchrotron intensity. Our test shows that the SIGs can be applied to observational data at high latitudes. This test corresponds to our simulations of low sonic Mach number turbulence. For the Galactic plane region the magnetic field is much more complicated and the Faraday rotation may be important. Therefore it is more challenging to obtain a polarization map that would represent the true structure of the magnetic field. Naturally, more tests of the SIGs in the presence of complex magnetic-field morphology are needed. Therefore moving from our demonstration here to studies of magnetic fields in the Galactic disk requires a more detailed study and will be performed elsewhere. SIGs are not the only way to trace magnetic field with synchrotron intensity maps. For instance, anisotropic MHD turbulence also results in synchrotron anisotropies that are quantified in LP12. There the quadrupole moment of the synchrotron intensity correlation functions was shown to be aligned with the magnetic field. Therefore, by measuring the longer direction of the isocorrelation contours (see LP12) one can approximate the magnetic-field direction over the sky. The calculations of the correlation functions require averaging, which for astrophysical situations means volume averaging. Therefore one may expect that, compared to the SIGs, the LP12-type anisotropies are a significantly more coarse-grained measure.
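For comparison purposes, a crude version of such a correlation-function anisotropy (CFA) estimate can be coded in a few lines: compute the autocorrelation of the intensity map and take the major axis of its central part. The moment-based orientation used below is a simple stand-in for the quadrupole-moment analysis of LP12, and applying it per sub-block (as done for Figure [fig:6]) rather than to the whole map is left implicit; all parameter values are illustrative.

```python
import numpy as np

def cfa_direction(intensity, max_lag=16):
    """Orientation of the longer axis of the central autocorrelation of a map,
    used as a correlation-function-anisotropy (CFA) estimate of the field direction."""
    img = intensity - intensity.mean()
    corr = np.fft.ifft2(np.abs(np.fft.fft2(img)) ** 2).real
    corr = np.fft.fftshift(corr)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    win = corr[cy - max_lag:cy + max_lag + 1, cx - max_lag:cx + max_lag + 1]
    win = np.clip(win, 0.0, None)                      # keep positive correlations
    y, x = np.mgrid[-max_lag:max_lag + 1, -max_lag:max_lag + 1]
    w = win / win.sum()
    ixx, iyy, ixy = (w * x * x).sum(), (w * y * y).sum(), (w * x * y).sum()
    return 0.5 * np.arctan2(2.0 * ixy, ixx - iyy)      # major-axis orientation

# theta_cfa = cfa_direction(intensity_map)   # to be compared (after the 90-degree
# rotation convention used in the text) with sub-block averaged SIG directions
```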
To test this statement, we provide in Figure [fig:6] the AM for the SIGs and the similarly defined alignment measure for the correlation function anisotropies (CFAs). The directions of the longer CFA axes of anisotropy are rotated by 90 degrees in order to be compared with the directions of the SIGs and of the synchrotron polarization. We compare the sub-block averaged SIGs with the CFAs obtained in the same blocks. Figure [fig:6] clearly shows that the SIGs have a great advantage over the CFAs in tracing magnetic field at smaller block sizes. In terms of the alignment measure, the CFAs can trace magnetic field only for a sufficiently coarse block size. Comparatively, the SIGs can work on smaller scales without losing much of the alignment. The ability of the CFAs for the same purpose is highly limited. We, however, suggest that the SIGs and the CFAs are complementary measures in a number of ways. The correspondence between the coarse-grained magnetic-field directions measured by the two techniques makes the tracing of magnetic field more trustworthy. Their correspondence also indicates that the performed averaging may be sufficient to use the studies of the CFA anisotropies for the purpose of separating the contributions from the fundamental MHD modes, i.e. Alfvén, fast and slow, as described in LP12. It is always good to have yet another way of studying astrophysical magnetic fields. However, the advantages of the SIGs are not limited to this. Synchrotron polarization is a generally accepted way of studying magnetic fields in our galaxy, in external galaxies and in galaxy clusters. One of the difficulties of using synchrotron polarization is that the polarized radiation is subject to the Faraday rotation effect. To account for this effect, multifrequency observations are performed and the Faraday rotation is compensated. This is a significant complication. In addition, different regions may be responsible for the emission at different frequencies, which may be a source of error. Moreover, recent analytical studies in have demonstrated that the separation of the effects of the Faraday rotation in the presence of turbulent magnetic fields is far from trivial (see also ). In this situation the possibility of obtaining the magnetic field direction using the SIGs is very advantageous. Combining the SIGs and the polarization measurements can be very synergetic. By measuring the actual direction of the magnetic field using the SIGs and comparing it with the direction of polarization, one can get a measure of the Faraday rotation of the medium between us and the synchrotron-emitting region. The SIG technique is similar to the velocity centroid gradient (VCG) technique that was introduced in GL16 and applied to studies of magnetic fields in atomic hydrogen in YL17. Within the VCG technique the calculation of gradients is performed using 2D maps of velocity centroids. The latter are readily available from Doppler-shifted spectroscopic data. Compared to the VCGs, the calculation of the SIGs is simpler, as it requires only synchrotron intensities rather than full spectroscopic data. In this sense the SIG technique is similar to tracing magnetic fields using intensity gradients (IGs). The synergy of these techniques will be further explored in future publications. It is clear that, in general, SIGs, VCGs and IGs trace magnetic field in different environments. For instance, cold and warm diffuse HI, as well as line emission, e.g.
CO emission from molecular clouds, present the natural environments for studies using the VCG technique. Combining that with the IGs, one can study shocks and self-gravitating regions (YL17, Yuen & Lazarian 2017b), and by measuring the relative alignment of the directions defined by the VCGs and the IGs one can characterize the sonic Mach number of the medium. At the same time, synchrotron radiation in the Milky Way mostly originates in the large expanses of the Galactic halo, and this is the domain of the SIG technique. For some regions, however, e.g. for supernova shocks, it seems very advantageous to apply all three techniques. We would also like to note that there are several significant advantages of the SIGs compared to the IGs. First of all, numerical studies show that in MHD turbulence the magnetic and velocity statistics are more robust and predictable measures compared to densities. This was confirmed for the gradient techniques in GL16 and YL17. In addition, it is clear from the discussion in [sec:2] that the most reliable magnetic-field tracing is expected in nearly incompressible turbulence in the absence of self-gravity. These are the conditions for the warm and hot phases of the ISM (see for the list of the idealized ISM phases). These are exactly the media that are responsible for the bulk of the synchrotron radiation. In fact, earlier studies (e.g. ) indicated that the sonic Mach number of the synchrotron-emitting warm medium is around unity. It is expected to be much less than unity for the hot coronal gas of the Galactic halo. Therefore we expect that the SIGs can trace magnetic fields well and be less affected by the distortions that arise from compressibility effects. Compared to the VCGs, the SIGs are also more robust, as the VCGs are influenced by the density distribution (see ) and the density is not a robust tracer of MHD turbulence statistics. At the same time, the synchrotron intensity fluctuations are produced by uniformly distributed electrons and thus are expected to better reflect the magnetic-field statistics. In fact, rather than confronting the different techniques, it is more advantageous to search for their synergy. For instance, the SIGs and the VCGs can trace magnetic fields in different phases of the ISM: while the VCGs are convenient for studying magnetic fields in the cold and warm phases, the SIGs can study magnetic fields in the warm and hot phases. Thus, combining the measurements, one can investigate the relative distribution of magnetic fields in the different ISM phases along the line of sight. Such studies are essential for understanding the complex dynamics of the magnetized multiphase ISM. We can add that the ways of studying the VCGs and the SIGs are similar. For instance, within our present study we successfully used the way of calculating gradients first suggested in YL17. In addition, our present study also shows that the SIGs, similar to the VCGs, can be obtained using interferometric data with missing low spatial frequencies, e.g.
interferometric data obtained without the corresponding single-dish observations. This opens prospects for using the two techniques to study extragalactic magnetic fields. Faraday rotation is an important way of studying the magnetic field component parallel to the line of sight (see ). One can define the rotation measure (RM), which is proportional to the integral of the product of the line-of-sight component of the magnetic field and the thermal electron density; this RM can be obtained if the original magnetic-field direction at the source is known. The SIGs can be used to define this direction, which has advantages over the currently used Faraday-rotation measurements that employ multifrequency polarization measurements. Moreover, the SIGs can help to distinguish the Faraday rotation that arises at the source of the polarized radiation from that of the medium intervening between the source and the observer. Indeed, at the source the SIGs measure the actual magnetic-field direction. The alignment of interstellar dust is a well-accepted way of tracing magnetic field. Both theoretical considerations and observational testing (see and references therein) indicate that the alignment of dust is very efficient in the diffuse media where radiative torques are strong. The alignment can trace magnetic fields in self-gravitating regions, but it may fail in starless molecular cloud cores. The polarization arising from grain alignment is complementary to the VCGs, as discussed in YL17. Combining the SIGs, the VCGs and polarimetry, one can study how magnetic fields connect the hot, warm and cold ISM phases with molecular clouds. A promising possibility is presented by tracing of magnetic field using aligned atoms or ions ( and references therein). This alignment happens for atoms/ions with fine or hyperfine structure and is induced by radiation. The Larmor precession realigns the atoms/ions, and thus the resulting polarization becomes dependent on the magnetic field direction. This type of alignment can potentially trace extremely weak fields in diffuse rarefied media, and we expect that this can be complementary to the SIG technique. Using the theory of MHD turbulence, we predicted that in magnetized flows the synchrotron intensity gradients (SIGs) are expected to be aligned perpendicular to the magnetic field. We successfully tested this prediction using synthetic synchrotron maps obtained with 3D compressible MHD simulations as well as with the Planck synchrotron intensity and polarization data. The new technique is complementary to the other ways of tracing magnetic field, which include traditional techniques using synchrotron and dust polarization as well as new techniques that employ velocity centroid gradients (VCGs) and atomic/ionic alignment. The SIGs give the direction of the magnetic field in the synchrotron-emitting volume that is not distorted by the Faraday rotation effect. Therefore, combining the SIGs with synchrotron polarimetry measurements, one can determine the Faraday rotation measure. This is useful for studying the line-of-sight component of the magnetic field. We have demonstrated that the SIGs are a robust measure in the presence of Gaussian noise. * Acknowledgements. *
AL acknowledges the support of the NSF grant AST 1212096 and NASA grant NNX14AJ53G, as well as a distinguished visitor PVE/CAPES appointment at the Physics Graduate Program of the Federal University of Rio Grande do Norte, the INCT INEspaço and the Physics Graduate Program/UFRN. The stay of KHY at UW-Madison is supported by the Fulbright-Lee Fellowship. HL is supported by the Research Fellowship at the Department of Physics, CUHK.
Beresnyak, A., Lazarian, A., & Cho, J. 2005, http://dx.doi.org/10.1086/430702 [ApJ, 624, L93]
Beck, R. 2015, Magnetic Fields in Diffuse Media, 407, 507
Brandenburg, A., & Lazarian, A. 2013, http://dx.doi.org/10.1007/s11214-013-0009-3 [Space Sci. Rev., 178, 163]
, A., & Lazarian, A. 2005, http://dx.doi.org/10.1086/432458 [ApJ, 631, 320]
Fernandez, E. R., Zaroubi, S., Iliev, I. T., Mellema, G., & Jelić, V. 2014, , 440, 298
Gaensler, B. M., Haverkorn, M., Burkhart, B., et al. 2011, http://dx.doi.org/10.1038/nature10446 [Nature, 478, 214]
Galtier, S., Pouquet, A., & Mangeney, A. 2005, Physics of Plasmas, 12, 092310
Lazarian, A., & Vishniac, E. T. 1999, http://dx.doi.org/10.1086/307233 [ApJ, 517, 700]
Lee, H., Lazarian, A., & Cho, J. 2016, , 831, 77
Lithwick, Y., & Goldreich, P. 2001, http://dx.doi.org/10.1086/323470 [ApJ, 562, 279]
On the basis of the modern understanding of MHD turbulence, we propose a new way of using synchrotron radiation, namely employing synchrotron intensity gradients for tracing astrophysical magnetic fields. We successfully test the new technique using synthetic data obtained with 3D MHD simulations and provide a demonstration of the use of the technique with the Planck intensity and polarization data. We show that the synchrotron intensity gradients (SIGs) can reliably trace magnetic field in the presence of shocks and noise, and can provide more detailed maps of magnetic-field directions compared to the technique employing the anisotropy of synchrotron intensity correlation functions. We also show that the SIGs remain relatively robust tracers of magnetic fields when the low spatial frequencies of the synchrotron image are removed. This makes the SIGs applicable to tracing magnetic fields using interferometric data with single-dish measurements absent. We discuss the synergy of using the SIGs together with synchrotron polarization in order to find the actual direction of the magnetic field and quantify the effects of Faraday rotation, as well as with other ways of studying astrophysical magnetic fields. We stress the complementary nature of the studies using the SIG technique and those employing the recently introduced velocity centroid gradient technique that traces magnetic fields using spectroscopic data.
The explosive growth of large-scale wireless applications motivates people to study the fundamental limits of wireless networks. Consider a randomly distributed wireless network with density over a unit area, where the nodes are randomly grouped into one-to-one source-destination (S-D) pairs. Initiated by the seminal work in , the throughput scaling laws for such a network have been studied extensively in the literature - . For static networks, it is shown that the traditional multi-hop transmission strategy can achieve a throughput scaling of (here i) $f(n) = O(g(n))$ means that there exists a constant $c$ and an integer $N$ such that $f(n) \le c\,g(n)$ for $n > N$; ii) $f(n) = \Omega(g(n))$ means that $g(n) = O(f(n))$; iii) $f(n) = \Theta(g(n))$ means that $f(n) = O(g(n))$ and $g(n) = O(f(n))$; iv) $f(n) = o(g(n))$ means that $f(n)/g(n) \to 0$ as $n \to \infty$) per S-D pair. Such a throughput scaling can be improved when the nodes are able to move. It is shown in that a per-node throughput scaling of is achievable in mobile networks by exploiting two-hop transmission schemes. Unfortunately, the throughput improvement in mobile networks incurs a large packet delay, which is another important performance metric in wireless networks. In particular, it is shown in that the constant per-node throughput is achieved at the cost of a delay scaling of . The delay-throughput tradeoffs for static and mobile networks have been investigated in - . Specifically, for static networks, it is shown in that the optimal delay-throughput tradeoff is given by for , where and are the delay and throughput per S-D pair, respectively. The aforementioned literature mainly focuses on the delay and throughput scaling laws of a single network. Recently, the emergence of cognitive radio networks motivates people to extend the results from a single network to overlaid networks. Consider a licensed primary network and a cognitive secondary network coexisting in a unit area. The primary network has the absolute priority to use the spectrum, while the secondary network can only access the spectrum opportunistically to limit the interference to the primary network. Based on such assumptions, it is shown in that both networks can achieve the same throughput and delay scaling laws as a stand-alone network. However, such results are obtained without considering possible positive interactions between the primary network and the secondary network. In practice, the secondary network, which is usually deployed after the existence of the primary network for opportunistic spectrum access, can transport data packets not only for itself but also for the primary network due to its cognitive nature. As such, it is meaningful to investigate whether the throughput and/or delay performance of the primary network (whose protocol was fixed before the deployment of the secondary tier) can be improved with the opportunistic aid of the secondary network, while the secondary network remains capable of keeping the same throughput and delay scaling laws as in the case where no supportive actions are taken between the two networks. In this paper, we define a _supportive_ two-tier network with a primary tier and a secondary tier as follows: the secondary tier is allowed to supportively relay the data packets for the primary tier in an opportunistic way (i.e.
, the secondary users only utilize empty spectrum holes in between primary transmissions even when they are relaying the primary packets ) , whereas the primary tier is only required to transport its own data .let and denote the node densities of the primary tier and the secondary tier , respectively .we investigate the throughput and delay scaling laws for such a supportive two - tier network with in the following two scenarios : i ) the primary and secondary nodes are all static ; ii ) the primary nodes are static while the secondary nodes are mobile . with specialized protocols for the secondary tier ,we show that the primary tier can achieve a per - node throughput scaling of in the above two scenarios with a classic time - slotted multi - hop transmission protocol similar to the one in . in the associated delay analysis for the first scenario ,we show that the primary tier can achieve a delay scaling of with . in the second scenario , with two mobility models considered for the secondary nodes : an i.i.d .mobility model and a random walk model , we show that the primary tier can achieve delay scaling laws of and , respectively , where is the random walk step size .the throughput and delay scaling laws for the secondary tier are also established , which are the same as those for a stand - alone network .based on the fact that an opportunistic supportive secondary tier improves the performance of the primary tier , we make the following observation : the classic time - slotted multi - hop primary protocol does not fully utilize the spatial / temporal resource such that a cognitive secondary tier with denser nodes could explore the under - utilized segments to conduct nontrivial networking duties . note that in , the authors also pointed out that adding a large amount of extra pure relay nodes ( which only relay traffic for other nodes ) , the throughput scaling can be improved at the cost of excessive network deployment .however , there are two key differences between such a statement in and our results . first , in this paper , the added extra relays ( the secondary nodes ) only access spectrum opportunistically( i.e. , they need not to be allocated with primary spectrum resource , given their cognitive nature ) , while the extra relay nodes mentioned in are regular primary nodes ( just without generating their own traffic ) who need to be assigned with certain primary spectrum resource in the same way as other primary nodes . as such ,based on the cognitive features of the secondary nodes considered in this paper , the primary throughput improvement could be achieved in an existing primary network without the need to change its current protocol ; while in , the extra relay deployment has to be considered in the initial primary network design phase for its protocol to utilize the relays . in other words ,the problem considered in this paper is how to improve the throughput scaling over an existing primary network by adding another supportive network tier ( the secondary cognitive tier ) , where the primary network is already running a certain protocol as we will discuss later in the paper , which is different from the networking scenario considered in .second , in this paper , the extra relays are also source nodes on their own ( i.e. , they also initiate and support their own traffic within the secondary tier ) , and as one of the main results , we will show that even with their help to improve the primary - tier throughput , these extra relays ( i.e. 
, the secondary tier ) could also achieve the same throughput scaling for their own traffic as a stand - alone network considered in .the rest of the paper is organized as follows .the system model is described and the main results are summarized in section ii .the proposed protocols for the primary and secondary tiers are described in section iii . the delay and throughput scaling laws for the primary tierare derived in section iv . the delay and throughput scaling laws for the secondary tierare studied in section v. finally , section vi summarizes our conclusions .consider a two - tier network with a static primary tier and a denser secondary tier over a unit square .we assume that the nodes of the primary tier , so - called primary nodes , are static , and consider the following two scenarios : i ) the nodes of the secondary tier , so - called secondary nodes , are also static ; ii ) the secondary nodes are mobile .we first describe the network model , the interaction model between the two tiers , the mobility models for the mobile secondary nodes in the second scenario , and the definitions of throughput and delay .then we summarize the main results in terms of the delay and throughput scaling laws for the proposed two - tier network .the primary nodes are distributed according to a poisson point process ( ppp ) of density and randomly grouped into one - to - one source - destination ( s - d ) pairs .likewise , the secondary nodes are distributed according to a ppp of density and randomly grouped into s - d pairs .we assume that the density of the secondary tier is higher than that of the primary tier , i.e. , where we consider the case with . the primary tier and the secondary tier share the same time , frequency , and space , but with different priorities to access the spectrum : the former one is the licensed user of the spectrum and thus has a higher priority ; and the latter one can only opportunistically access the spectrum to limit the resulting interference to the primary tier , even when it helps with relaying the primary packets . for the wireless channel, we only consider the large - scale pathloss and ignore the effects of shadowing and small - scale multipath fading . as such, the channel power gain is given as where is the distance between the transmitter ( tx ) and the corresponding receiver ( rx ) , and denotes the pathloss exponent .the ambient noise is assumed to be additive white gaussian noise ( awgn ) with an average power . during each time slot , we assume that each tx - rx pair utilizes a capacity - achieving scheme with the data rate of the primary tx - rx pair given by where the channel bandwidth is normalized to be unity for simplicity , denotes the norm operation , is the transmit power of the primary pair , and are the tx and rx locations of primary pair , respectively , is the sum interference from all other primary txs , is the sum interference from all the secondary txs . likewise ,the data rate of the secondary tx - rx pair is given by where is the transmit power of the secondary pair , and are the tx and rx locations of the secondary pair , respectively , is the sum interference from all other secondary txs to the rx of the secondary pair , and is the sum interference from all primary txs . 
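For concreteness, the per-link rates described above (unit bandwidth, a path-loss-only channel with gain proportional to the distance raised to the power of minus the path-loss exponent, ambient AWGN, and the two interference sums) can be evaluated with a Shannon-type expression. The sketch below is a plausible rendering under those assumptions and is not a verbatim reconstruction of the stripped formulas; all names and default values are illustrative.

```python
import numpy as np

def received_interference(tx_powers, distances, alpha=4.0):
    """Aggregate interference power at a receiver from a list of interfering TXs."""
    return sum(p * d ** (-alpha) for p, d in zip(tx_powers, distances))

def link_rate(p_tx, d_link, interference, alpha=4.0, noise=1.0):
    """Rate of a TX-RX pair with unit bandwidth:
    log2(1 + P * d^(-alpha) / (N0 + total interference power))."""
    sinr = p_tx * d_link ** (-alpha) / (noise + interference)
    return np.log2(1.0 + sinr)

# example: a primary pair at distance 0.05 with two secondary interferers
i_s = received_interference([0.1, 0.1], [0.3, 0.4])
r_p = link_rate(1.0, 0.05, i_s)
```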
as shown in the previous work , although the opportunistic data transmission in the secondary tier does not degrade the scaling law of the primary tier , it may reduce the throughput in the primary tier by a constant factor due to the fact that the interference from the secondary tier to the primary tier can not be reduced to zero . to completely compensate the throughput degradation or even improve the throughput scaling law of the primary tier in the two - tier setup , we could allow certain positive interactions between the two tiers .specifically , we assume that the secondary nodes are willing to act as relay nodes for the primary tier , while the primary nodes are not assumed to do so .when a primary source node transmits packets , the surrounding secondary nodes could pretend to be primary nodes to relay the packets ( which is feasible since they are software - programmable cognitive radios ) . in the scenario where the primary and secondary nodes are all static ,the secondary nodes chop the received primary packets into smaller pieces suitable for secondary - tier transmissions .the small data pieces will be reassembled before they are delivered to the primary destination nodes . in the scenario where the secondary nodes are mobile , the received packets are stored in the secondary nodes and delivered to the corresponding primary destination node only when the secondary nodes move into the neighboring area of the primary destination node . as such , the primary tier is expected to achieve better throughput and/or delay scaling laws .more details can be found in the secondary protocols proposed in section iii .note that , these `` fake '' primary nodes do not have the same priority as the real primary nodes in terms of spectrum access , i.e. , they can only use the spectrum opportunistically in the same way as a regular secondary node .the assumption that the secondary tier is allowed to relay the primary packets is the essential difference between our model and the models in . in the scenario where the secondary nodes are mobile , we assume that the positions of the primary nodes are fixed whereas the secondary nodes stay static in one primary time slot and change their positions at the next slot .in particular , we consider the following two mobility models for the secondary nodes . *two - dimensional i.i.d . mobility model * : the secondary nodes are uniformly and randomly distributed in the unit area at each primary time slot .the node locations are independent of each other , and independent from time slot to time slot , i.e. , the nodes are totally reshuffled over each primary time slot . * two - dimensional random walk ( rw ) model * : we divide the unit square into small - square rw - cells , each of them with size . the rw - cells are indexed by , where .a secondary node that stays in a rw - cell at a particular primary time slot will move to one of its eight neighboring rw - cells at the next slot with equal probability ( i.e. , 1/8 ) . for the convenience of analysis ,when a secondary node hits the boundary of the unit square , we assume that it jumps over the opposite edge to eliminate the edge effect .the nodes within a rw - cell are uniformly and randomly distributed .note that the unit square are also divided into primary cells and secondary cells in the proposed protocols as discussed in section iii , which are different from the rw - cells defined above . 
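The RW mobility model just described (each secondary node jumps to one of its eight neighboring RW-cells per primary time slot, with wrap-around at the boundary of the unit square) can be simulated directly. The sketch below is a toy implementation under those assumptions; the grid size and node count in the example are placeholders.

```python
import numpy as np

def random_walk_step(cells, n_side, rng):
    """One primary-slot update of the RW mobility model.

    cells  : integer array of shape (num_nodes, 2) holding RW-cell indices.
    n_side : number of RW-cells per side (unit square split into n_side**2 cells).
    Each node moves to one of its 8 neighboring cells with probability 1/8,
    wrapping around the edges (torus) to eliminate the edge effect.
    """
    moves = np.array([(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)])
    picks = rng.integers(0, len(moves), size=len(cells))
    return (cells + moves[picks]) % n_side

rng = np.random.default_rng(1)
cells = rng.integers(0, 16, size=(1000, 2))   # 1000 secondary nodes, 16x16 RW-cells
cells = random_walk_step(cells, 16, rng)
```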
in this paper, we only consider the case where the size of the rw - cell is greater than or equal to that of the primary cell .the _ throughput per s - d pair _ ( per - node throughput ) is defined as the average data rate that each source node can transmit to its chosen destination as in , which is asymptotically determined by the network density .besides , the _ sum throughput _ is defined as the product between the throughput per s - d pair and the number of s - d pairs in the network .in the following , we use and to denote the throughputs per s - d pair for the primary tier and the secondary tier , respectively ; and we use and to denote the sum throughputs for the primary tier and the secondary tier , respectively . the delay of a primary packet is defined as the average number of primary time slots that it takes to reach the primary destination node after the departure from the primary source node .similarly , we define the delay of a secondary packet as the average number of secondary time slots for the packet to travel from the secondary source node to the secondary destination node .we use and to denote packet delays for the primary tier and the secondary tier , respectively .for simplicity , we use a fluid model for the delay analysis , in which we divide each time slot to multiple packet slots and the size of the data packets can be scaled down with the increase of network density .we summarize the main results in terms of the throughput and delay scaling laws for the supportive two - tier network here .we first present the results for the scenario where the primary and secondary nodes are all static and then describe the results for the scenario with mobile secondary nodes .i ) : : the primary and secondary nodes are all static .+ * it is shown that the primary tier can achieve a per - node throughput scaling of and a delay scaling of for .* it is shown that the secondary tier can achieve a per - node throughput scaling of and a delay scaling of .ii ) : : the primary nodes are static and the secondary nodes are mobile . + * it is shown that the primary tier can achieve a per - node throughput scaling of , and delay scaling laws of and with the i.i.d . mobility model and the rw mobility model , respectively .* it is shown that the secondary tier can achieve a per - node throughput scaling of , and delay scaling laws of and with the i.i.d .mobility model and the rw mobility model , respectively .in this section , we describe the proposed protocols for the primary tier and the secondary tier , respectively .the primary tier deploys a modified time - slotted multi - hop transmission scheme from those for the primary network in , while the secondary tier chooses its protocol according to the given primary transmission scheme . in the following , we use to represent the probability of event , and claim that an event occurs with high probability ( w.h.p . ) if as .the main sketch of the primary protocol is given as follows : \i ) divide the unit square into small - square primary cells with size . 
in order to maintain the full connectivity within the primary tier even without the aid of the secondary tier and enable the possible support from the secondary tier( see _ theorem 1 _ for details ) , we have such that each cell has at least one primary node w.h.p ..\ii ) group every primary cells into a primary cluster .the cells in each primary cluster take turns to be active in a round - robin fashion .we divide the transmission time into tdma frames , where each frame has primary time slots that correspond to the number of cells in each primary cluster .note that the number of primary cells in a primary cluster has to satisfy such that we can appropriately arrange the preservation regions and the collection regions , which will be formally defined later in the secondary protocol . for convenience , we take throughout the paper .\iii ) define the s - d data path along which the packets are routed from the source node to the destination node : the data path follows a horizontal line and a vertical line connecting the source node and the destination node , which is the same as that defined in .pick an arbitrary node within a primary cell as the designated relay node , which is responsible for relaying the packets of all the data paths passing through the cell .\iv ) when a primary cell is active , each primary source node in it takes turns to transmit one of its own packets with probability .afterwards , the designated relay node transmits one packet for each of the s - d paths passing through the cell .the above packet transmissions follow a time - slotted pattern within the active primary time slot , which is divided into packet slots .each source node reserves a packet slot no matter it transmits or not .if the designated relay node has no packets to transmit , it does not reserve any packet slots . for each packet , if the destination node is found in the adjacent cell , the packet will be directly delivered to the destination .otherwise , the packet is forwarded to the designated relay node in the adjacent cell along the data path . at each packet transmission ,the tx node transmits with power of , where is a constant .\v ) we assume that all the packets for each s - d pair are labelled with serial numbers ( sns ) .the following handshake mechanism is used when a tx node is scheduled to transmit a packet to a destination node : the tx sends a request message to initiate the process ; the destination node replies with the desired sn ; if the tx has the packet with the desired sn , it will send the packet to the destination node ; otherwise , it stays idle . as we will see in the proposed secondary protocol for the scenario with mobile secondary nodes, the helping secondary relay nodes will take advantage of the above handshake mechanism to remove the outdated ( already - delivered ) primary packets from their queues .we assume that the length of the handshake message is negligible compared to that of the primary data packet in the throughput analysis for the primary tier as discussed in section iv .note that running of the above protocol for the primary tier is independent of whether the secondary tier is present or not .when the secondary tier is absent , the primary tier can achieve the throughput scaling law as a stand - alone network discussed in .when the secondary tier is present as shown in section iv , the primary tier can achieve a better throughput scaling law with the aid of the secondary tier . 
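The cell and cluster bookkeeping used by the primary protocol (square primary cells, clusters of cells activated in a round-robin TDMA fashion) can be sketched as follows. The cluster size is kept as a parameter because its value is not recoverable from the text above, and the helper names are assumptions.

```python
def primary_cell_index(x, y, cell_size):
    """Map a node position in the unit square to its primary-cell (row, col) index."""
    return int(y // cell_size), int(x // cell_size)

def active_in_slot(cell_index, slot, cluster_side):
    """Round-robin TDMA within clusters of cluster_side**2 cells: exactly one
    cell of each cluster is active in every primary time slot."""
    row, col = cell_index
    local = (row % cluster_side) * cluster_side + (col % cluster_side)
    return local == slot % (cluster_side ** 2)

# example with cells of side 0.1 and clusters of 3 x 3 cells (illustrative only):
print(active_in_slot(primary_cell_index(0.42, 0.17, 0.1), slot=5, cluster_side=3))
```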
in the following , we first present the proposed secondary protocol for the scenario with static secondary nodes , and then describe the one for the scenario with mobile secondary nodes .* protocol for static secondary tier * we assume that the secondary nodes have the necessary cognitive features such as software - programmability to `` pretend '' as primary nodes such that they could be chosen as the designated primary relay nodes within a particular primary cell . as later shown by _lemma [ lemma2 ] _ in section iv , a randomly selected designated relay node for the primary packet in each primary cell is a secondary node w.h.p .. once a secondary node is chosen to be a designated primary relay node for primary packets , it keeps silent and receives broadcasted primary packets during active primary time slots when only primary source nodes transmit their packets .furthermore , we use the time - sharing technique to guarantee successful packet deliveries from the secondary nodes to the primary destination nodes as follows .we divide each secondary frame into three equal - length subframes , such that each of them has the same length as one primary time slot as shown in fig .[ frame ] .the first subframe is used to transmit the secondary packets within the secondary tier .the second subframe is used to relay the primary packets to the next relay nodes .accordingly , the third subframe of each secondary frame is used to deliver the primary packets from the intermediate destination nodes in the secondary tier to their final destination nodes in the primary tier .specifically , for the first subframe , we use the following protocol : * divide the unit area into square secondary cells with size . in order to maintain the full connectivity within the secondary tier, we have to guarantee with a similar argument to that in the primary tier .* group the secondary cells into secondary clusters , with each secondary cluster of 64 cells .each secondary cluster also follows a 64-tdma pattern to communicate , which means that the first subframe is divided into 64 secondary time slots . *define a preservation region as nine primary cells centered at an active primary tx and a layer of secondary cells around them , shown as the square with dashed edges in fig .[ preservation ] . only the secondary txs in an active secondary cell outsideall the preservation regions can transmit data packets ; otherwise , they buffer the packets until the particular preservation region is cleared . when an active secondary cell is outside the preservation regions in the first subframe , it allows the transmission of one packet for each secondary source node and for each s - d path passing through the cell in a time - slotted pattern within the active secondary time slot w.h.p .. the routing of secondary packets follows similarly defined data paths as those in the primary tier . * at each transmission , the active secondary tx node can only transmit to a node in its adjacent cells with power of . 
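A preservation region, as defined above, consists of the nine primary cells centered at an active primary transmitter plus one layer of secondary cells around them, and a secondary cell may transmit only if it lies outside every such region. A minimal membership test under those assumptions might look as follows; the coordinates and cell sizes in the example are illustrative.

```python
def in_preservation_region(sec_cell_center, primary_tx_positions, a_p, a_s):
    """True if a secondary cell (given by its center) falls inside any preservation
    region: the 3x3 primary cells (side a_p) around an active primary TX, padded
    by one layer of secondary cells of side a_s."""
    x, y = sec_cell_center
    half_width = 1.5 * a_p + a_s          # half-side of the padded square region
    for tx_x, tx_y in primary_tx_positions:
        # center of the primary cell containing the active primary TX
        cx = (int(tx_x // a_p) + 0.5) * a_p
        cy = (int(tx_y // a_p) + 0.5) * a_p
        if abs(x - cx) <= half_width and abs(y - cy) <= half_width:
            return True
    return False

# a secondary cell outside all preservation regions may transmit in its slot:
allowed = not in_preservation_region((0.62, 0.31), [(0.1, 0.1), (0.5, 0.5)], 0.1, 0.02)
```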
in the second subframe ,only secondary nodes who carry primary packets take the time resource to transmit .note that each primary packet is broadcasted from the primary source node to its neighboring primary cells where we assume that there are secondary nodes in the neighboring cell along the primary data path successfully decode the packet and ready to relay .in particular , each secondary node relays portion of the primary packet to the intermediate destination node in a multi - hop fashion , and the value of is set as from _ lemma 1 _ in section iv , we can guarantee that there are more than secondary nodes in each primary cell w.h.p .when .when , the number of the secondary nodes in each primary cell is less than w.h.p .. in this regime , the proposed protocols could be modified by using the maximum number of the secondary nodes in the neighboring primary cell of a primary tx along the s - d data path .we leave this issue in our future work .the specific transmission scheme in the second subframe is the same as that in the first subframe , where the subframe is divided into 64 time slots and all the traffic is for primary packets . at the intermediate destination nodes ,the received primary packet segments are reassembled into the original primary packets .then in the third subframe , we use the following protocol to deliver the packets to the primary destination nodes : * define a collection region as nine primary cells and a layer of secondary cells around them , shown as the square with dotted edges in fig .[ preservation ] , where the collection region is located between two preservation regions along the horizontal line and they are not overlapped with each other . *deliver the primary packets from the intermediate destination nodes in the secondary tier to the corresponding primary destination nodes in the sink cell , which is defined as the center primary cell of the collection region .the primary destination nodes in the sink cell take turns to receive data by following a time - slotted pattern , where the corresponding intermediate destination node in the collection region transmits by pretending as a primary tx node .given that the third subframe is of an equal length to one primary slot , each primary destination node in the sink cell can receive one primary packet from the corresponding intermediate destination node .* at each transmission , the intermediate destination node transmits with the same power as that for a primary node , i.e. , .* protocol for mobile secondary tier * like in the scenario with static secondary nodes , we assume that the secondary nodes have the necessary cognitive features to `` pretend '' as primary nodes such that they could be chosen as the designated primary relay nodes within a particular primary cell . divide the transmission time into tdma frames , where the secondary frame has the same length as that of one primary time slot as shown in fig .[ frame ] . 
to limit the interference to primary transmissions , we define preservation regions in a similar way to that in the scenario with static secondary nodes .to faciliate the description of the secondary protocol , we define the _ separation threshold time _ of random walk as where measures the separation from the stationary distribution at time , which is given by where denotes the probability that a secondary node hits rw - cell at time starting from rw - cell at time , and is the probability of staying at rw - cell at the stationary state .we have .the secondary nodes perform the following two operations according to whether they are in the preservation regions or not : \i ) if a secondary node is in a preservation region , it is not allowed to transmit packets . instead , it receives the packets from the active primary transmitters and store them in the buffer for future deliveries .each secondary node maintains separate queues for each primary s - d pair . for the i.i.d .mobility model , we take , i.e. , only one queue is needed for each primary s - d pair . for the rw model , takes the value of given by ( [ separation time ] ) .the packet received at time slot is considered to be ` type ' and stored in the queue , if , where denotes the flooring operation .ii ) if a secondary node is not in a preservation region , it transmits the primary and secondary packets in the buffer . in order to guarantee successful deliveries for both primary and secondary packets ,we evenly and randomly divide the secondary s - d pairs into two classes : class i and class ii .define a collection region in a similar way to that in the scenario with static secondary nodes .in the following , we describe the operations of the secondary nodes of class i based on whether they are in the collection regions or not .the secondary nodes of class ii perform a similar task over switched timing relationships with the odd and even primary time slots .* if the secondary nodes are in the collection regions , they keep silent at the odd primary time slots and deliver the primary packets at the even primary time slots to the primary destination nodes in the sink cell , which is defined as the center primary cell of the collection region . in a particular primary time slot , the primary destination nodes in the sink cell take turns to receive packets following a time - slotted pattern . for a particular primary destination node at time , we choose an arbitrary secondary node in the sink cell to send a request message to the destination node .the destination node replies with the desired sn , which will be heard by all secondary nodes within the nine primary cells of the collection region .these secondary nodes remove all outdated packets for the destination node , whose sns are lower than the desired one . for the i.i.d .mobility model , if one of these secondary nodes has the packet with the desired sn and it is in the sink cell , it sends the packet to the destination node . for the rw model ,if one of these secondary nodes has the desired packet in the queue with and it is in the sink cell , it sends the packet to the destination node . at each transmission, the secondary node transmits with the same power as that for a primary node , i.e. , . 
* if the secondary nodes are not in the collection regions , they keep silent at the even primary time slots and transmit secondary packets at the odd primary time slots as follows .divide the unit square into small - square secondary cells with size and group every 64 secondary cells into a secondary cluster .the cells in each secondary cluster take turns to be active in a round - robin fashion . in a particular active secondary cell , we could use scheme 2 in to transmit secondary packets with power of within the secondary tier .in the following , we first present the throughput and delay scaling laws for the primary tier in the scenario where the primary and secondary nodes are all static , and then discuss the scenario where the secondary nodes are mobile .we first give the throughput and delay scaling laws for the primary tier , followed by the delay - throughput tradeoff .* throughput analysis * in order to obtain the throughput scaling law , we first give the following lemmas .[ lemma1 ] the numbers of the primary nodes and secondary nodes in each primary cell are and w.h.p .. the proof can be found in appendix i. [ lemma2 ] if the secondary nodes compete to be the designated relay nodes for the primary tier by pretending as primary nodes , a randomly selected designated relay node for the primary packet in each primary cell is a secondary node w.h.p .. let denote the probability that a randomly selected designated relay node for the primary packet in a particular primary cell is a secondary node .we have from _ lemma 1 _ , which approaches one as .this completes the proof .[ lemma3 ] with the protocols given in section iii , an active primary cell can support a constant data rate of , where independent of and .the proof can be found in appendix ii .[ lemma4 ] with the protocols given in section iii , the secondary tier can deliver the primary packets to the intended primary destination node at a constant data rate of , where independent of and .the proof can be found in appendix ii .based on _ lemmas 1 - 4 _ , we have the following theorem . [ pthroughput ] with the protocols given in section iii, the primary tier can achieve the following throughput per s - d pair and sum throughput w.h.p .when : and where and . from _ lemma 3 _ and _ lemma 4 _, we know that the primary tx can pour its packets into the secondary tier at a constant rate . since the primary nodes take turns to be active in each active primary cell , and the number of the primary nodes in each primary cell is of as shown in _lemma 1 _ , the theoretically maximum throughput per s - d pair is of .next , we show that with the proposed protocols , the maximum throughput scaling is achievable . in the proposed protocols , each primary source node pours all its packets into the secondary tier w.h.p .( from _ lemma 2 _ ) by splitting data into secondary data paths , each of them at a rate of .set , which satisfies .as such , each primary source node achieves a throughput scaling law of . since the total number of primary nodes in the unit square is of w.h.p . , we have w.h.p .. this completes the proof . by setting , the primary tier can achieve the following throughput per s - d pair and sum throughput w.h.p .: and * delay analysis * we now analyze the delay performance of the primary tier with the aid of a static secondary tier . in the proposed protocols , we know that the primary tier pours all the primary packets into the secondary tier w.h.p . based on _ lemma 2_. 
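before moving on to the delay analysis , the cell - occupancy concentration behind _ lemma 1 _ can also be checked numerically . the sketch below is not the chernoff / union - bound proof given in appendix i ; the constant c , the cell - area choice and the function names are illustrative assumptions .

```python
import numpy as np

def occupancy_range(n, c=1.0, trials=20, seed=0):
    """Drop n nodes uniformly in the unit square, split it into square
    cells of area roughly c*log(n)/n, and report the smallest and largest
    cell count over all trials, normalised by the average count per cell."""
    rng = np.random.default_rng(seed)
    area = c * np.log(n) / n
    m = max(1, int(1.0 / np.sqrt(area)))        # cells per side
    lo, hi = np.inf, 0.0
    for _ in range(trials):
        idx = np.minimum((rng.random((n, 2)) * m).astype(int), m - 1)
        counts = np.zeros((m, m), dtype=int)
        np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
        mean = n / m ** 2                       # expected nodes per cell
        lo, hi = min(lo, counts.min() / mean), max(hi, counts.max() / mean)
    return lo, hi

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    print(n, occupancy_range(n))                # the min/max ratios tighten as n grows
```

with the uniform node placement assumed here , the normalised minimum and maximum cell counts stay within a constant band as n grows , which is the behaviour the lemma formalizes .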
in order to analyze the delay of the primary tier , we have to calculate the traveling time for the segments of a primary packet to reach the corresponding intermediate destination node within the secondary tier .since the data paths for the segments are along the route and an active secondary cell ( outside all the preservation regions ) transmits one packet for each data path passing through it within a secondary time slot , we can guarantee that the segments depart from the nodes , move hop by hop along the data paths , and finally reach the corresponding intermediate destination node in a synchronized fashion . according to the definition of packet delay , the segments experience the same delay later given in ( [ sdelay ] ) within the secondary tier , and all the segments arrive the intermediate destination node within one secondary slot .let and denote the durations of the primary and secondary time slots , respectively . according to the proposed protocols , we have since we split the secondary time frame into three fractions and use one of them for the primary packet relaying , each primary packet suffers from the following delay : where the secondary - tier delay is later derived in ( [ sdelay ] ) , denotes the average time for a primary packet to travel from the primary source node to the secondary relay nodes plus that from the intermediate destination node to the final destination node , which is a constant .we see from ( [ pdelay ] ) that the delay of the primary tier is only determined by the size of the secondary cell . in order to obtain a better delay performance ,we should make as large as possible .however , a larger results in a decreased throughput per s - d pair in the secondary tier and hence a decreased throughput for the primary tier , for the primary traffic traverses over the secondary tier w.h.p .. in appendix iv , we derive the relationship between and in our supportive two - tier setup as where we have when . substituting ( [ cellsize ] ) into ( [ pdelay ] ), we have the following theorem . according to the proposed protocols in section iii, the primary tier can achieve the following delay w.h.p .when . * delay - throughput tradeoff * combining the results in ( [ pthroughput1 ] ) and ( [ pdelayfinal ] ) , the delay - throughput tradeoff for the primary tier is given by the following theorem .[ ptradeoff ] with the protocols given in section iii , the delay - throughput tradeoff in the primary tier is given by * throughput analysis * in order to obtain the throughput scaling law , we first give the following lemmas .[ lemma5 ] with the protocols given in section iii , an active primary cell can support a constant data rate of , where independent of and .the proof can be found in appendix iii .[ lemma6 ] with the protocols given in section iii , the secondary tier can deliver the primary packets to the intended primary destination node in a sink cell at a constant data rate of , where independent of and .the proof can be found in appendix iii .based on _ lemmas 1 - 2 _ and _ lemmas 5 - 6 _ , we have the following theorem . with the protocols given in section iii, the primary tier can achieve the following throughput per s - d pair and sum throughput w.h.p .: and when and . from _ lemma [ lemma5 ] _ and_ lemma [ lemma6 ] _ , we know that a primary tx can pour its packets into the secondary tier at rate w.h.p .. since the primary nodes take turns to be active in each active primary cell , and the number of primary source nodes in each primary cell is of w.h.p . 
as shown in _ lemma [ lemma1 ] _ , the maximum throughput per s - d pair is of w.h.p. next , we show that with the proposed protocols , the above maximum throughput scaling is achievable . in the proposed protocols , from _ lemma [ lemma2 ] _ we know that a randomly selected designated relay node for the primary packet in each primary cell is a secondary node w.h.p. as such , when a primary cell is active , the current primary time slot is just used for the primary source nodes in the primary cell to transmit their own packets w.h.p. therefore , the achievable throughput per s - d pair is of , and thus an achievable sum throughput of for the primary tier w.h.p. this completes the proof . by setting , the primary tier can achieve the following throughput per s - d pair and sum throughput w.h.p. : and

* delay analysis * based on the proposed supportive protocols , we know that the delay for each primary packet has two components : i ) the hop delay , which is the transmission time for two hops ( from the primary source node to a secondary relay node and from the secondary relay node to the primary destination node ) ; ii ) the queueing delay , which is the time a packet spends in the relay - queue at the secondary node until it is delivered to its destination . the hop delay is two primary time slots , which can be considered a constant independent of and . next , we quantify the primary - tier delay performance by focusing on the expected queueing delay at the relay based on the two mobility models described in section ii.c . we have the following theorem regarding the delay of the primary tier .

[ pdelaym1 ] with the protocols given in section iii , the primary tier can achieve the following delay w.h.p. when : according to the secondary protocol , within the secondary tier we have secondary nodes acting as relays for the primary tier , each of them with a separate queue for each of the primary s - d pairs . therefore , the queueing delay is the expected delay at a given relay - queue . by symmetry , all such relay - queues incur the same delay w.h.p. for convenience , we fix one primary s - d pair and consider the secondary nodes together as a virtual relay node as shown in fig . [ virtualnode ] , without identifying which secondary node is used as the relay . as such , we can calculate the expected delay at a relay - queue by analyzing the expected delay at the virtual relay node . denote the selected primary source node , the selected primary destination node , and the virtual relay node as s , d , and r , respectively . to calculate the expected delay at node r , we first have to characterize the arrival and departure processes . a packet arrives at r when a ) the primary cell containing s is active , and b ) s transmits a packet . according to the primary protocol in section iii , the primary cell containing s becomes active every primary time slots . therefore , we consider primary time slots as an observation period , and treat the arrival process as a bernoulli process with rate ( ) . similarly , a packet departure occurs when a ) d is in a sink cell , and b ) at least one of the relay nodes that have the desired packets for d is in the sink cell containing d. let denote the probability that event b ) occurs , which can be expressed as , where means that and have the same limit when , and denotes the number of the secondary nodes that have desired packets for d in the sink cell containing d and belong to class i ( class ii ) if d is in a sink cell at even ( odd ) time slots .
as such, the departure process is an asymptotically deterministic process with departure rate .let denote the delay of the queue at the virtual relay node based on the i.i.d . model .thus , the queue at the virtual relay node is an asymptotically bernoulli / deterministic queue , with the expected queueing delay given by where denotes the expectation and the factor is the length of one observation period .note that the queueing length of this asymptotically bernoulli / deterministic queue is at most one primary packet length w.h.p .. next we need to verify that the relay - queue at each of the secondary nodes is stable over time .note that based on the proposed protocol every secondary node removes the outdated packets that have the sns lower than the desired one for d when it jumps into the sink cell containing d. since the queueing length at r can be upper - bounded by one , by considering the effect of storing outdated packets , the length of the relay - queue at each secondary node can be upper - bounded by where can be considered as an upper - bound for the inter - visit time of the primary cell containing d , since as .thus , the relay - queues at all secondary nodes are stable over time for each given , which completes the proof . for the rw model , we have the following theorem regarding the delay of the primary tier . [ pdelaym2 ] with the protocols given in section iii, the primary tier can achieve the following delay w.h.p .when : where . like the proof in the i.i.d .mobility case , we fix a primary s - d pair and consider the secondary nodes together as a virtual relay node . denote the selected primary source node , the selected primary destination node , and the virtual relay node as s , d , and r , respectively .based on the proposed secondary protocol in section iii , each secondary node maintains queues for each primary s - d pair .equivalently , r also maintains queues for each primary s - d pair where each queue is a concatenated one from small ones , and the packet that arrives at time is stored in the queue , where . by symmetry ,all such queues incur the same expected delay . without loss of generality ,we analyze the expected delay of the queue by characterizing its arrival and departure processes .a packet that arrives at time enters the queue when a ) the primary cell containing s is active , b ) s transmits a packet , and c ) .consider primary time slots as an observation period .the arrival process is a bernoulli process with arrival rate .similarly , a packet departure occurs at time when a ) d is in a sink cell , b ) at least one of the relay nodes that have the desired packets for d is in the sink cell containing d , and c ) .let denote the probability that event b ) occurs during one observation period , which can be expressed as where denotes the set of the secondary nodes that have the desired packets for d and belong to class i ( class ii ) if d is in a sink cell at even ( odd ) time slots ; represents the index of the rw - cell , in which the secondary node in is located when s sends the desired packet ; is the index of the rw - cell , in which d is located ; stands for the difference between the arrival time and the departure time for the desired packet , which can be lower - bounded by ; and denotes the probability that a secondary node is within the sink cell containing when it moves into rw - cell , which is given by . 
as such , the departure process is an asymptotically deterministic process with departure rate .let denote the delay of the queue at node r based on the rw model .thus , the queue at node r is an asymptotically bernoulli / deterministic queue , with the queueing delay given by where the factor is the length of one observation period . since , we have . using the similar argument as in the i.i.d .case , we can upper - bound the length of the relay - queue at any secondary node by ( [ queueing length ] ) for any .thus , the relay - queues at all secondary nodes are stable , which completes the proof .* delay - throughput tradeoff * for the rw model , we have the following delay - throughput tradeoff for the primary tier by combining ( [ pthroughput1 ] ) and ( [ result2 ] ) . we see that the delay - throughput tradeoff for the primary tier with the aid of the secondary tier is even better than the optimal delay - throughput tradeoff given in for a static stand - alone network .note that the above throughput and delay analysis is based on the assumption , and we leave the case with in our future work .* throughput analysis * in this section , we discuss the delay and throughput scaling laws for the secondary tier .according to the protocol for the secondary tier , we split the time frame into three equal - length fractions and use one of them for the secondary packet transmissions . since the above time - sharing strategy only incurs a constant penalty ( i.e. , 1/3 ) on the achievable throughput and delay within the secondary tier , the throughput and delay scaling laws are the same as those given in , which are summarized by the following theorems . with the secondary protocol defined in section iii, the secondary tier can achieve the following throughput per s - d pair and sum throughput w.h.p .: and where and the specific value of is determined by as shown in appendix iv .* delay analysis * with the secondary protocol defined in section iii , the packet delay is given by * delay - throughput tradeoff * combining the results in ( [ sthroughput1s ] ) and ( [ sdelay ] ) , the delay - throughput tradeoff for the secondary tier is given by the following theorem . with the secondary protocol defined in section iii ,the delay - throughput tradeoff is for detailed proofs of the above theorems , please refer to .when a secondary rx receives its own packets , it suffers from two interference terms from all active primary txs and all active secondary txs .we can use a similar method as in the proof of _ lemma 5 _ to prove that each of the two interference terms can be upper - bounded by a constant independent of and .thus , the asymptotic results for a stand - alone network in hold in this scenario . in the following , we summarize these results for completeness .* throughput analysis * we have the following theorem regarding the throughput scaling law for the secondary tier . with the protocols given in section iii , the secondary tier can achieve the following throughput per s - d pair and sum throughput w.h.p .: and * delay analysis * next , we provide the delay scaling laws of the secondary tier for the two mobility models as discussed in section ii.c . with the protocols given in section iii , the secondary tier can achieve the following delay w.h.p .based on the i.i.d .mobility model : with the protocols given in section iii , the secondary tier can achieve the following delay w.h.p .based on the rw model : note that ( [ sdelay2 m ] ) is a generalized result for . 
when , the delay is the same as that in .

in this paper , we studied the throughput and delay scaling laws for a supportive two - tier network , where the secondary tier is willing to relay packets for the primary tier . when the secondary tier has a much higher density , the primary tier can achieve a better throughput scaling law compared to non - interactive overlaid networks . the delay scaling law for the primary tier can also be improved when the secondary nodes are mobile . meanwhile , the secondary tier can still achieve the same delay and throughput tradeoff as in a stand - alone network . based on the fact that an opportunistic supportive secondary tier improves the performance of the primary tier , we make the following observation : the classic time - slotted multi - hop primary protocol does not fully utilize the spatial / temporal resource , such that a cognitive secondary tier with denser nodes could exploit the under - utilized segments to conduct nontrivial networking duties .

let denote the number of the primary nodes in a particular primary cell , which is a poisson random variable with parameter . by the chernoff bound , the probability that a particular primary cell has no more than primary nodes is given by , where , , and we use the fact that . let denote the event that at least one primary cell has no more than primary nodes . by the union bound , we have as . therefore , each primary cell has more than primary nodes w.h.p. furthermore , given , we have . let denote the event that at least one primary cell has no less than primary nodes . by the union bound , we have as . thus , each primary cell has less than primary nodes w.h.p. combining ( [ eventa ] ) and ( [ eventb ] ) completes the proof for the case of primary nodes . the proof for the case of secondary nodes follows in a similar way , with replaced by .

assume that at a given moment , there are active primary cells . the rate of the active primary cell is given by , where denotes the rate loss due to the 64-tdma transmission of primary cells . in the surrounding of the primary cell , there are 8 primary interferers with a distance of at least and 16 primary interferers with a distance of at least , and so on . as such , the is upper - bounded by . next , we discuss the interference from secondary transmitting interferers to the primary rx . we consider the following two cases :

case i : the secondary tier either transmits the primary packets to the next secondary relay nodes or transmits the secondary packets to the next hop , i.e. , in the first or second subframes .

case ii : the secondary tier delivers the data packets to the primary destination nodes , i.e. , in the third secondary subframe .

in case i , assume that there are active secondary cells , which means that the number of the active secondary txs is also . since a minimum distance can be guaranteed from all secondary transmitting interferers to the primary rxs in the preservation regions , is upper - bounded by . in case ii , there are collection regions and thus active secondary txs .
in the surrounding of the primary cell ,there are 2 secondary interferers with a distance of at least and 4 secondary interferers with a distance of at least , and so on .then , is upper - bounded by given and , we have since converges to a constant for , there exists a constant such that .this completes the proof .the proof is similar to that for _ lemma _ 3 .when a primary rx receives packets from its surrounding secondary nodes , it suffers from two interference terms from all active primary txs and all active secondary txs , either of which can be upper - bounded by a constant independent of and .thus there is a constant rate , at which the secondary tier can deliver packets to the intended primary destination node .assume that at a given moment , there are active primary cells .the supported rate of the active primary cell is given by where denotes the rate loss due to the 64-tdma transmission of primary cells .in the surrounding of the primary cell , there are 8 primary interferers with a distance of at least and 16 primary interferers with a distance of at least , and so on . as such, the is upper - bounded by next , we discuss the interference from secondary transmitting interferers to the primary rx . according to the proposed secondary protocol , the secondary nodes are divided into two classes : class i and class ii , which operate over the switched timing relationships with the odd and the even time slots . without the loss of generality , we consider the interference from secondary transmitting interferers to the primary rx at the odd time slots .assume that there are active secondary cells , which means that the number of the active secondary txs of class i is . since a minimum distance can be guaranteed from all secondary transmitting interferers of class i to the primary rxs in the preservation regions , the interference from the active secondary txs of class i , , is upper - bounded by furthermore , there are collection regions , which means that the number of the active secondary txs of class ii is .since a minimum distance can be guaranteed from all secondary transmitting interferes of class ii to the primary rxs in the preservation regions , the interference from the active secondary txs of class ii , , is upper - bounded by the proof is similar to that for _ lemma _ 5 . when a primary rx receives packets from its surrounding secondary nodes , it suffers from three interference terms from all active primary txs , all active secondary txs of class i , and all active secondary txs of class ii , each of which can be upper - bounded by a constant independent of and .thus , there is a constant rate , at which the secondary tier can deliver packets to the intended primary destination node .we know that given , the maximum throughput per s - d pair for the primary tier is . since a primary packet is divided into segments and then routed by parallel s - d paths within the secondary tier , the supported rate for each secondary s - d pair is required to be .as such , based on ( [ sthroughput1s ] ) , the corresponding secondary cell size needs to be set as where we have when .p. gupta and p. r. kumar , `` the capacity of wireless networks , '' _ ieee transactions on information theory _388 - 404 , mar .m. francheschetti , o. dousse , d. tse , and p. thiran , `` closing the gap in the capacity of random wireless networks via percolation theory , '' _ ieee transactions on information theory _ , vol .1009 - 1018 , mar .a. josan , m. liu , d. l. neuhoff , and s. s. 
pradhan , `` throughput scaling in random wireless networks : a non - hierarchical multipath routing strategy , '' preprint . oct . 2007 .[ online ] .available : http://arxiv.org/pdf/0710.1626 .a. ozgur , o. leveque , and d. tse , `` hierarchical cooperation achieves optimal capacity scaling in ad hoc networks , '' _ ieee transactions on information theory _ , vol 53 , no .3549 - 3572 , oct .m. grossglauser and d. n. c. tse , `` mobility increases the capacity of ad hoc wireless network , '' _ ieee / acm transaction on networking _ , vol .477 - 486 , aug . 2002 .m. j. neely and e. modiano , `` capacity and delay tradeoffs for ad - hoc mobile networks , '' _ ieee transactions on information theory _ ,51 , no . 6 , pp .1917 - 1936 , june 2005 .a. e. gamal , j. mammen , b. prabhakar , and d. shah , `` optimal throughput - delay scaling in wirless networks part i : the fluid model , '' _ ieee transaction on information theory _ , vol .2568 - 2592 , june 2006 .l. ying , s. yang , and r. srikant , `` optimal delay - throughput trade - offs in mobile ad hoc networks , '' _ ieee trans .9 , sept . 2008 . n. bansal and z. liu , `` capacity , delay and mobility in wireless ad - hoc networks , '' _ ieee infocom 2003 _ , vol .1553 - 1563 , mar .s. jeon , n. devroye , m. vu , s. chung , and v. tarokh , `` cognitive networks achieve throughput scaling of a homogeneous network , '' preprint .jan . 2008 .[ online ] .available : http://arxiv.org/pdf/0801:0938 . c. yin , l. gao , and s. cui , `` scaling laws for overlaid wireless networks : a cognitive radio network vs. a primary network , '' preprint .[ online ] .available : http://arxiv.org/pdf/0805:1209 .h. daduna , _ queueing networks with discrete time scale _ , springer , 2001 .d. aldous and j. fill , `` reversible markov chain and random walks on graph , '' [ online ] .available : http://www.stat.berkeley.edu/users/aldous/rwg/book.html .
consider a wireless network that has two tiers with different priorities : a primary tier vs. a secondary tier , which is an emerging network scenario with the advancement of cognitive radio technologies . the primary tier consists of randomly distributed legacy nodes of density , which have an absolute priority to access the spectrum . the secondary tier consists of randomly distributed cognitive nodes of density with , which can only access the spectrum opportunistically to limit the interference to the primary tier . based on the assumption that the secondary tier is allowed to route the packets for the primary tier , we investigate the throughput and delay scaling laws of the two tiers in the following two scenarios : i ) the primary and secondary nodes are all static ; ii ) the primary nodes are static while the secondary nodes are mobile . with the proposed protocols for the two tiers , we show that the primary tier can achieve a per - node throughput scaling of in the above two scenarios . in the associated delay analysis for the first scenario , we show that the primary tier can achieve a delay scaling of with . in the second scenario , with two mobility models considered for the secondary nodes : an i.i.d . mobility model and a random walk model , we show that the primary tier can achieve delay scaling laws of and , respectively , where is the random walk step size . the throughput and delay scaling laws for the secondary tier are also established , which are the same as those for a stand - alone network .
computer - aided drug discovery ( cadd ) is an area of research that is concerned with the identification of chemical compounds that are likely to possess specific biological activity , that is , the ability to bind certain target biomolecules such as proteins .cadd approaches are employed in order to prioritize molecules in commercially available chemical libraries for experimental biological screening .the prioritization of molecules is critical since these libraries frequently contain many millions of molecules making experimental testing intractable .the process of using computational methods to filter out those compounds which are not expected to exhibit strong biological activity is called virtual screening .computational methods have been used extensively to assist in experimental drug discovery studies . in general, there are two major computational drug discovery approaches , ligand based and structure based .the former is used when the three - dimensional structure of the drug target is unknown but the information about a reasonably large number of organic molecules active against a specific set of targets is available . in this case, the available data can be studied using cheminfomatic approaches such as quantitative structure - activity relationship ( qsar ) modeling [ for a review of qsar methods see a. tropsha , in ] .in contrast , the structure - based methods rely on the knowledge of three - dimensional structure of the target protein , especially its active site ; this data can be obtained from experimental structure elucidation methods such as x - ray or nuclear magnetic resonance ( nmr ) or from modeling of protein three - dimensional structure .virtual screening is one of the most popular structure - based cadd approaches where , typically , three - dimensional protein structures are used to discover small molecules that fit into the active site ( a process referred to as docking ) and have high predicted binding affinity ( scoring ) .traditional docking protocols and scoring functions rely on explicitly defined three - dimensional coordinates and standard definitions of atom types of both receptors and ligands . albeit reasonably accurate in some cases , structure - based virtual screening approaches are for the most part computationally inefficient [ ] . as a result of computational inefficiencythere is a limit to the number of compounds which can reasonably be screened by these methods .furthermore , recent extensive studies into the comparative accuracy of multiple available scoring functions suggest that accurate prediction of binding orientations and affinities of receptor ligand pairs remains a formidable challenge [ ] . yetmillions of compounds in available chemical databases and billions of compounds in synthetically feasible chemical libraries are available for virtual screening calling for the development of approaches that are both fast and accurate in their ability to identify a small number of viable and experimentally testable computational hits .recently , we introduced a novel structure - based cheminformatic workflow to search for complimentary ligands based on receptor information ( colibri ) [ ] .this novel computational drug discovery strategy combines the strengths of both structure - based and ligand - based approaches while attempting to surpass their individual shortcomings . 
in this approach, we extract the structure of the binding pocket from the protein and then represent both the receptor active site and its corresponding ligand in the same universal , multidimensional chemical descriptor space ( note that in principle , the descriptors used for receptors and ligands do not have to be the same , and we will be exploring the use of different descriptor types in future studies ) . we reasoned that mapping of both binding pockets and corresponding ligands onto the same multidimensional chemistry space would preserve the complementary relationships between binding sites and their respective ligands .thus , we expect that ligands binding to similar active sites are also similar . in cheminformatics applications ,the similarity is described quantitatively using one of the conventional metrics , such as manhattan or euclidean distance in multidimensional descriptor space .thus , the chief hypothesis in colibri is that the relative location of a novel binding site with respect to other binding sites in multidimensional chemistry space could be used to predict the location of the ligand(s ) complementary to this site in the ligand chemistry space .after generation of descriptors , the dataset is split into training and test sets and then variable selection is carried out to generate models optimizing this complementarity between the binding pocket and ligand spaces .these models are then applied to a binding pocket in a protein of interest to generate a predicted virtual ligand point which is used as a query in chemical similarity searches to identify putative ligands of the receptor in available chemical databases . in this paper , we build upon the work of to develop a substantially more advanced and efficient version of colibri .the problem can be generally stated as follows : for a set of known protein ligand pairs , with and descriptors , respectively , given a new protein we want to be able to predict what ligand(s ) will bind to it .two virtual drug screens will be used as a benchmark for testing the methods discussed and developed here : a set of 800 chemically and functionally diverse protein ligand pairs obtained from the protein data bank ( pdb ) database on experimentally measured binding affinity ( pdbbind ) [ ] .these compounds are described by a set of 150 chemical descriptors .these descriptors include information related to the electronic attributes , hydrophobicity and steric properties of the compounds . for a more detailed discussion on the different types of chemical descriptors , see ( ). 
we will refer to this data set as the 800 receptor ligand pairs ( rlp800 ) data .results and further details on this and two additional data sets can be found in section [ secresults ] .the world drug index ( wdi ) [ ] database which contains approximately 54,000 drug candidates ( ligands ) .each compound in the wdi is described by the same set of 150 chemical descriptors as the rlp800 data .the accuracy of our prediction is based on how close , in euclidean distance , our prediction is to the actual ligand .this is then compared against the distances of all of the ligands in the space to the actual ligand .a standard measure of predictive accuracy used in the qsar literature [ , ] is based on ranking these distances , from smallest to largest .defining to be the rank of our prediction of test ligand , model performance is defined as the average rank over each of the new points we are trying to predict , .this criterion reflects the average size of the search space needed to find each compound . here denotes the number of new ( i.e. , test ) ligands we are predicting . the effectiveness of the methods studied and developed here is illustrated in figure [ figikcca ] .figure [ figikcca](a ) is a histogram of the ranks , for our novel method which is a variant of canonical correlation analysis ( cca ) we call _ indefinite kernel cca _ ( ikcca ) ( section [ secikcca ] ) , on the rlp800 data .the previous state of the art for these data sets is ( the vertical line furthest to the right labelled oloff et al . ) which are larger by a factor of 5 to 10 as compared to cca ( section [ seccca ] ) and its improvements , kcca ( section [ seckcca ] ) and ikcca .as we were primarily interested in comparing our results against those of we did not look into other performance metrics other than mean rank .however , it would be interesting to pursue other , potentially more relevant measures of binding affinity such as kd , ki and ic50 as was done by , where cca is linked to these performance measures . , resulting from prediction on the test data from the rlp800 dataset .performance on the wdi data . ] while not discussed in this paper , an important unresolved issue in this cheminformatic - based approach to the prediction of protein ligand binding is the selection of meaningful chemical descriptors .the type of chemical descriptors used can have a drastic effect on the predictive accuracy of an algorithm .one possible approach to addressing this issue would be to use a recently developed method called sparse cca ( scca ) , , , and .scca uses a lasso - like approach to identify sparse linear combinations of two sets of variables that are highly correlated with each other .an approach based on scca to the prediction of protein ligand binding may prove to be quite useful in resolving some of the issues arising from chemical descriptor selection . in section [ secresults ] we present results and details on the rlp800 data set as well as on two additional data sets .in sections [ seccca ] and [ seckcca ] we outline cca and kcca , respectively . in section [ secikcca ]we propose a new method , ikcca , which encompasses nonpositive semi - definite ( psd ) kernels ( i.e. 
, indefinite kernels ) ; specifically , we consider a class of kernels related to the normalized graph laplacian used in spectral clustering . finally , in section [ secprediction ] we show how prediction of a new ligand is done using cca ( and its variants ) .

in addition to the real data results discussed in section [ secintroduction ] , we also tested our method on two additional data sets [ which we refer to as experimental settings ( es ) , the reason for which will become clearer in what follows ] . these data ( including the rlp800 data ) are subsets of a collection of 1300 complexes taken from pdbbind [ ] . these 1300 complexes are referred to as the _ refined set _ ( rs ) , a set of entries that meet a defined set of criteria regarding crystal structure quality . a representative subsample of 195 of the complexes is called the _ core set _ ( cs ) . this is a collection of complexes selected by clustering the rs into 65 groups using protein sequence similarity and retaining only 3 complexes from each cluster . the three experimental settings considered are denoted by es i [ this experimental setting was used in ] , es ii and es iii . in each of these experimental settings the rs and cs complexes are separated into training and testing sets in such a way as to test different aspects of our model . in es i the 637 training and 163 test complexes are randomly sampled from the rs . es i is meant to provide a general test of our model's performance . in es ii the training ( 153 complexes ) and testing ( 36 complexes ) sets are sampled from the cs in such a way that the various protein families in the cs are well represented in both . this separation is meant to test the performance of our cca - based methods when the sample size is small . finally , in es iii the testing set ( 162 complexes ) is composed of proteins which are under - represented in the training set ( 1006 complexes ) . this is meant to test our method's ability to correctly identify novel complexes .

setting |    | train | test | embed | oloff et al. | cca   | kcca  | ikcca
es i    | rs | 637   | 163  | 800   | 18.1         | 10    | 7.5   | 4.5
        | rs |       |      | 54121 | 310          | 67    | 56    | 30
es ii   | rs | 153   | 36   | 189   |              | 8     | 13.75 | 3.5
        | rs |       |      | 53994 |              | 275.1 |       | 92.9
es iii  | rs | 1006  | 162  | 1168  |              | 11.9  | 7.4   | 4.4
        | rs |       |      | 54120 |              | 53    | 24.3  | 18.2

a note on how we use the training and testing sets : the tuning parameters for our model are selected , as discussed in section [ subsectuneparams ] , using only the training set . once tuning parameters have been selected , prediction on the testing set is then performed . this is meant to test the model's performance on as - yet unobserved complexes . the results for each of these experimental settings are summarized in table [ tableresults ] . the columns labeled `` train '' and `` test '' correspond to the size of the training / testing sets for each particular experimental setting . the column labeled `` embed '' corresponds to the total number of ligands against which our prediction is to be ranked . the remaining columns correspond to the method used and the average rank performance ( defined in section [ secintroduction ] ) of that method . the second row in each setting , with `` rs '' in the second column , shows the results for each method on the refined set plus the world drug index . these results are meant to more accurately mimic an actual drug screen by having a larger test set to search against .
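the average - rank criterion reported in the table can be written down in a few lines . this is a minimal sketch following the description in section [ secintroduction ] , assuming the predicted ligands , the true test ligands and the search library are stored as numpy arrays ; the array names and the strict - inequality tie convention are assumptions , not taken from the original implementation .

```python
import numpy as np

def average_rank(pred, truth, library):
    """pred, truth: (m, d) arrays of predicted / true ligand descriptors.
    library: (L, d) array of every ligand searched against (the `embed' set).

    For each test case, all library ligands are ranked by Euclidean distance
    to the true ligand; we record the rank the prediction would occupy in
    that ordering and return the mean rank over the m test cases."""
    ranks = []
    for p, t in zip(pred, truth):
        d_lib = np.linalg.norm(library - t, axis=1)   # library-to-truth distances
        d_pred = np.linalg.norm(p - t)                # prediction-to-truth distance
        ranks.append(1 + int(np.sum(d_lib < d_pred))) # 1-based rank of the prediction
    return float(np.mean(ranks))
```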
as the method used in failed to provide useful results for the es ii and esiii experimental settings , no results are reported here .generally speaking , in all cases ikcca , using the ngl kernel , outperformed the other methods .all the cca - based methods provide considerable improvement over the previous approach . looking a bit closer at the results it is interesting to note that while all three cca - based methods performed worse on the es ii data , kcca had the largest drop in performance .this can be seen by comparing the average rank performance against the total number of ligands we are searching against .the decrease in performance in all cases more than likely has to do with the small size of the training set . in the case of kcca , its considerable decrease in performance ,we suspect , may have to do with not having a large enough training sample to reliably select the bandwidth parameter .for ikcca the adaptive nature of the local kernel is probably what allows it to perform well in the low sample size setting .cca [ ] naturally lends itself to the problem of predicting the binding between proteins and ligands .this can be understood for the following reasons : first , traditional methods of prediction , for example , regression , assume a direction of dependence between the variables to be predicted and the predictive variables .here we have a symmetric , not causal , type of relationship : the binding between a protein and its ligand is inherently co - dependent .second , in addition to capturing the dependence structure we are looking to model , cca is well suited to the type of prediction we are interested in performing . to understand this ,consider the following ( see also section [ sectoyex1 ] for a more detailed discussion ) .the objective of cca is to find directions in one space , and directions in a second space such that the correlation between the projections of these spaces onto their respective directions is maximized .these directions are commonly referred to as canonical vectors .let us assume that a set of directions are found so that the corresponding projections of proteins and of ligands are strongly correlated .predicting a new ligand given a new protein would begin with projecting the new protein into canonical correlation space .then , assuming the same correlation structure holds for this new point , prediction of the new ligand would amount to interpolating its location in ligand space based on the location of the protein in protein space .this will be discussed in greater detail in section [ secprediction ] .next we provide a brief discussion on the details of cca and kcca .let and , denote a protein ligand pair .the sample of pairs is collected in matrices and with and as the descriptors for a row .the objective of cca is to find the linear combinations of the columns of ( proteins ) , say and the linear combinations of the columns of ( ligands ) , say such that the correlation , is maximized . without loss of generalityassume that the matrices and have been mean centered .letting , and the cca optimization problem is subsequent directions are found by imposing the additional constraints for and , , . 
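to make the optimization problem above concrete , the following numpy sketch computes plain cca by whitening each block and taking an svd . it is only a sketch of the standard construction : the small ridge eps is there purely for numerical stability , the explicitly regularized variant is the one introduced in the next paragraph , and the function and variable names are not from the paper .

```python
import numpy as np

def cca(X, Y, k=2, eps=1e-8):
    """X: (n, p) protein descriptors, Y: (n, q) ligand descriptors.
    Returns canonical vectors A (p, k), B (q, k) and the first k
    canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # whiten each block, then an SVD yields the directions of maximal correlation
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx)).T    # satisfies Wx' Sxx Wx = I
    Wy = np.linalg.inv(np.linalg.cholesky(Syy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Sxy @ Wy)
    A = Wx @ U[:, :k]                                # canonical vectors for X
    B = Wy @ Vt.T[:, :k]                             # canonical vectors for Y
    return A, B, s[:k]
```

the canonical variates are then the projections of the data onto these vectors , i.e. X @ A and Y @ B , and it is in this projected space that the nearest - neighbor prediction described later is carried out .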
in order to avoid issues arising from multicollinearity and singularity of the covariance matriceswe impose a penalty [ ] on the directions and so that the constraints in ( [ eqcca ] ) are modified to be where is a regularization parameter .the predictive accuracy of this approach was discussed in section [ secintroduction ] , with results summarized in figure [ figikcca ] . recall that the lines in these figures labeled cca correspond to the average predicted rank using cca , which improved upon shown by the lines labeled a such . an appealing aspect of cca is its intuitive geometric interpretation [ and ] .a geometric perspective lends itself to a better understanding of the general behavior of cca , and provides further evidence of its applicability to the protein ligand matching problem .taking a closer look at the canonical correlation , , ( ) , in the optimization problem shown in ( [ eqcca ] ) , it can be seen that this quantity is in fact equal to the cosine of the angle between and ( and are commonly referred to as canonical variates ) . with this in mind maximizing the cosine ( i.e. , correlation ) can equivalently be thought of as minimizing the angle between and .furthermore , it can be shown that minimizing the angle is equivalent to minimizing the distance between pairs of canonical variates , subject to the constraints described in ( [ eqcca ] ) .note that viewed in this way , in canonical correlation space , this amounts to finding a system of coordinates such that the distance between coordinates is minimized .this is a sense in which cca is an appropriate approach to the protein ligand matching problem .as will be seen in sections [ seckcca ] and [ secikcca ] , this geometric interpretation of cca extends naturally to kcca and ikcca .note that the regularized variant of cca does not have the same geometric interpretation , nonetheless viewing regularized cca in this manner still provides useful insight into its behavior .consider the protein ligand problem as outlined above .for this toy example we set and .suppose the descriptors for this toy example are molecular weight ( mw ) and surface area ( sa ) of the molecule .recall that each row of and each row of corresponds to an observation , a protein or a ligand , respectively , and the columns correspond to the descriptors mw and sa .the pairs are identified by a unique label , corresponding to ids from the protein data bank ( pdb ) ( http://www.pdb.org[www.pdb.org ] ) .figure [ plotex1 ] shows the two toy data sets . in the ligand spacecorresponds to a weighted average ( discussed in section [ secprediction ] ) of the cyan points and the purple point , that is , of the nearest neighbors of 11gs in the protein space . 
] from figure [ plotex1 ] it can be seen that the distribution of points in the two spaces are quite similar in the sense that the location of corresponding points in the two spaces are close .the points connected to 11gs ( red ) by dashed black lines are its three nearest neighbors .the cyan points are neighbors shared in both spaces and the blue and purple points are mismatched .two of three neighbors are shared in common ( in the euclidean sense ) .consider the case where the red point in ligand space is not observed and the task is to predict its value .using the weighted average ( see section [ secprediction ] for details on the derivation of the weights ) of the points in ligand space that correspond to the nearest neighbors of the point 11gs in the protein space ( points highlighted in cyan and purple in ligand space ) would yield a relatively poor prediction despite the strong apparent similarity between the two distributions of points .next suppose that instead of carrying out the prediction of a new ligand in the original data space we carry out our prediction in canonical correlation space .solving for and in ( [ eqcca ] ) , gives us the canonical vectors shown in figure [ ccaprojdir ] .what is important to notice is how the distribution of points along the first and second canonical directions in both protein and ligand space are quite similar .this is due to the property of alignment that arises naturally from maximizing the correlation .figure [ motivateprojplot ] shows the projections of the data onto the first two canonical vectors ( note that separate directions are found in protein and ligand space ) .we can see that with the slight modification in alignment that has resulted from the cca projections , the point 11gs now shares the same neighbors in both spaces .in particular note that now the predicted value in the projected ligand space is closer to the actual value ( again using the weighted average ) . onto the first and second canonical vectors .in contrast to figure [ plotex1 ] , the point 11gs now shares the same neighbors in both spaces and the predicted value in green is much closer to the actual value . ]this example was deliberately chosen to illustrate the case where cca is effective .however , in most cases the relationship between points in different spaces may be far more complicated , as we now illustrate .we now consider an example where the relationship between spaces is more complex .suppose that we have the same general framework as in section [ sectoyex1 ] but rather than having both protein and ligand space characterized by mw and sa , we now have that the space of proteins has descriptors and and that the space of ligands has descriptors and , shown in figure [ datasimplekernel ] . asbefore the observation highlighted in red , 1a94 , corresponds to a new protein whose corresponding ligand we are trying to predict .the point highlighted in cyan is one of the 3-nearest neighbors of 1a94 in both spaces .those points highlighted in purple ( and blue ) are nearest neighbors in only the protein ( and ligand ) spaces , respectively .the point in the ligand space , highlighted in green is a weighted average of the nearest neighbors of the point 1a94 in protein space .using as a prediction of the new ligand would not provide a particularly accurate prediction. in ligand space corresponds to a weighted average of the points 1a08 , 1a09 and 1a1b , that is , the nearest neighbors of the point 1a94 in protein space . 
as before , we use cca to try to find a linear combination of the descriptors which best aligns the two spaces . figure [ ccsimplekernel ] is a plot of the projections onto the first and second canonical variates in protein and ligand space . the color scheme is the same as in figure [ datasimplekernel ] . as can be seen , standard cca does not seem to be able to find a good alignment between the two spaces , which is confirmed by the relatively low values of the canonical correlations , 0.79 and 0.54 , respectively , for the first and second directions . in section [ seckcca ] we show how mappings into a kernel induced feature space can be used to improve prediction . this will lead to our discussion of kcca .

returning to the example in section [ sectoyex2 ] , suppose it is believed that some type of functional relationship exists between the descriptors across spaces that is best characterized by looking at the second order polynomials of the descriptors within each space , that is ,

\phi_x \colon ( d_x^1 , d_x^2 ) \rightarrow ( ( d_x^1 )^2 , ( d_x^2 )^2 , d_x^1 d_x^2 ) , \qquad \phi_y \colon ( d_y^1 , d_y^2 ) \rightarrow ( ( d_y^1 )^2 , ( d_y^2 )^2 , d_y^1 d_y^2 ) . ( [ simplefeatmap ] )

figure [ kernsimplekernelreceptor ] shows plots of proteins and ligands embedded into this three dimensional space . as can be seen , there are now two neighbors shared in common between spaces ( colored in cyan ) . furthermore , the prediction of the new observation ( in green ) by a weighted average of its three nearest neighbors in feature space is , by comparison , much closer to the actual value than the corresponding prediction in object space . ( figure caption : looking at the plots on the top and bottom , corresponding to protein and ligand space respectively , the overall correspondence between points in protein space and ligand space is much better than in the original ( object ) space ; this improved mapping will allow cca to do a better job aligning the two spaces . )

as before , cca is used on this transformed data , now in feature space , to align the space of proteins and ligands . figure [ kernccsimplekernelreceptor ] shows a plot of the projected data . note that now both the new protein and its ligand ( highlighted in red ) share three neighbors and that the distribution of points within each of the spaces is quite similar . the quality of the alignment is further confirmed by looking at the canonical correlation values , which are near 1 for each of the first two directions . since the value of the third canonical correlation is considerably smaller ( approximately 0.2 ) , we only project onto the first two directions . it is worth noting that , as a result of overfitting , the kernel canonical correlation values can sometimes be artificially large due to strong correlation between features in kernel space . regularization methods for helping to control these effects in the kernel case will be discussed in section [ subseckcca ] . ( figure caption : the prediction highlighted in green on the plot on the right is close to the actual value of 1a94 . )

in general , finding explicit mappings such as those in ( [ simplefeatmap ] ) is impractical or simply not possible , as in some cases this would require an infinite dimensional feature space .
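for the two - descriptor toy example above , however , the explicit degree - 2 map in ( [ simplefeatmap ] ) is simple enough to write out directly ; the sketch below ( with illustrative names ) just applies it to a descriptor matrix and can be combined with the cca sketch given earlier .

```python
import numpy as np

def quad_features(D):
    """Explicit degree-2 feature map of ([simplefeatmap]): each row
    (d1, d2) is sent to (d1**2, d2**2, d1*d2)."""
    d1, d2 = D[:, 0], D[:, 1]
    return np.column_stack([d1 ** 2, d2 ** 2, d1 * d2])

# e.g. run ordinary CCA on the mapped toy data:
# A, B, rho = cca(quad_features(X), quad_features(Y), k=2)
```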
as we will see in the following section , kernels allow us to avoid such issues . kcca [ bach and jordan ( ) , , ] extends cca by finding directions of maximum correlation in a kernel induced feature space . let and be the feature space maps for proteins and ligands , respectively . the sample of pairs , now mapped into feature space , is collected in matrices and with and as their respective row elements . the objective , as before , is to find linear combinations , and , such that the correlation , , , is maximized . note that because and lie in the span of and , these can be re - expressed by the linear transformations and . letting and , with and being the associated kernel functions for each space , respectively , the cca optimization problem in ( [ eqcca ] ) now becomes . here the subscript in is included to emphasize the fact that the space of functions we are considering is an rkhs . subsequent directions are found by including the additional constraints that for , and , . in order to avoid trivial solutions , we penalize the directions and , modifying the constraints in ( [ eqkcca ] ) to be , where is a regularization parameter . note that the geometric interpretation of ( unregularized ) kcca , provided that the data have been centered in feature space , is the same as for cca . the only difference lies in the fact that the space in which this geometry is observed is feature space rather than object space . in order for kcca to be understood as maximizing correlation in feature space , centering must be performed in feature space . centering in feature space can be done as follows . let , where is an matrix of ones ; then we assume throughout that the kernel matrices are centered .

the predictive accuracy of this approach was discussed in section [ secintroduction ] , with results summarized in figure [ figikcca ] . recall that the cyan line in figure [ figikcca ] corresponds to the average predicted rank using kcca , which is an improvement over both and cca . we saw in section [ exfeatmap ] that kcca was able to overcome some of the obstacles encountered by standard cca . where kcca begins to encounter problems is when the distribution of points within a space is nonstandard and/or heterogeneous . to illustrate this , consider the example shown in figure [ smileydata ] ; as with the protein ligand matching problem , there is a one - to - one correspondence between points in the two spaces . the underlying structure between these spaces is illustrated in figure [ smileystructure ] . the top row of plots tells us about how the distribution of points on the right ( cluster space ) relates to the distribution of points on the left ( smiley face space ) . the bottom set of plots tells us about how the distribution of points on the left is related to the distribution of points on the right . if we were to look at the two spaces as marginal distributions , there is a distinct impression of three clusters on the left , and two on the right .
the joint distribution , however , has six distinct groups .looking at the plots on the left in figure [ smileystructure ] , each of the three clusters is in fact composed of two subclusters .likewise , each of the two clusters in the plots on the right are composed of three subclusters .ideally , the projections onto the kcca directions would identify each of these six groups , shown in figure [ figsmiley6cols ] .using an rbf kernel with we look at the first five canonical directions .ideally , what we would see is a separation of each of the groups as well as a strong alignment between each of the spaces .what we find looking at figure [ smileyrbf ] , a scatter plot matrix of the first five kernel canonical variates ( kcv ) , is that while the leading correlations are large ( 0.98 , 0.97 , 0.95 , 0.80 , 0.75 ) , we are not able to find the structure in the data we were looking for , that is , separating out the six groups ( with each of the colors corresponding to one of the six groups ) .note that only the projections in the smiley face space are shown since the cluster space projections look essentially the same . .each of the colors in this plot corresponds to one of the six underlying subpopulation in the data ( see figure [ smileystructure ] for details ) . ] in the context of the protein ligand matching problem this type of situation presents a potential problem . suppose a new point , say in the space with the smiley face , is projected into kcca space .as can be seen in figure [ smileyrbf ] , there is a great deal of overlap between each of the six subgroups in the projected space .in particular note that each of the overlapped groups is composed of , respectively , the left eye , right eye and mouth .the reason this type of behavior presents a problem is that each of the eyes and the mouth are actually composed of two different subpopulations where each of the populations correspond to very different groups in the space with the two clusters .so while we may be able to accurately predict the location of a new point in kcca space the interpretation of its surrounding neighbors may not be so meaningful .a potential shortcoming of standard kcca , which was illustrated in the example presented in figure [ smileydata ] , is that standard positive definite kernels can be limited in their ability to capture nonstandard heterogeneous behavior in the data .a more general class of kernels which is better suited to handle this type of behavior takes the form here denotes some neighborhood of the observation , such as a nearest neighborhood or a fixed radius -neighborhood .kernels of this form restrict attention to the local structure of the data and allow for a flexible definition of similarity .our motivation for considering this class of kernels in the context of the protein ligand matching problem is the following . 
in the rlp800 datasetthere are approximately 150 important subgroups in the data .these subgroups correspond to unique proteins , or more specifically their binding pockets , which typically have three or four different conformations specific to a particular ligand .exploitation of this group structure in the data can help improve prediction .this can be accomplished by using a `` local kernel '' function that allows us to capture these groups more readily than , say , the rbf kernel .the intuition here follows from the example presented in section [ sectoyexnonstd ] where we saw that the type of groups that an rbf kernel will be able to find will be dictated by the choice of the bandwidth parameter .the local kernel overcomes this by adjusting locally to the data . by adjusting to the datalocally it is better able to exploit this group structure.=1 in summary , given a new protein , its projection in this local kernel cca space will be more likely to fall into a group of similar proteins . then, as before , the goal is that the ligands associated with this group of proteins provide an accurate representation of the ligand we are trying to predict .this improved performance exploiting group structure in the data comes at some price .in particular , the problem encountered with this class of kernels is that they are frequently indefinite ( see the discussion following definition [ innprod ] ) . as a result of the indefiniteness ,many of the properties and optimality guarantees no longer hold .indefinite kernels have recently gained increased interest [ , , , ] where , rather than defining to be a function defined in a rkhs , is defined in a space characterized by an _ indefinite inner product _ called a _ krein _ space . in section [ secindefkern ]we provide an overview of some of the definitions and theoretical results about krein spaces [ following the discussion of ] . before discussing ikcca, we will need to provide some definitions and theorems related to indefinite inner product spaces , that is , krein spaces [ more details can be found in ] .[ innprod ] let be a vector space on the scalar field .an inner product on is a bilinear form where for all , : * ; * ; * for all implies .the importance of being a vector space on a _ scalar field _ is that it allows for a flexible definition of an inner product ( i.e. , the scalar in one of the dimensions could be complex or negative as we will see below ) .an inner product is said to be _ positive _ if for all , .it is called a _ negative _ inner product , if for all , .an inner product is called indefinite if it is neither strictly positive nor strictly negative .[ rem1 ] to illustrate how indefinite inner products arise in the context of our problem , consider the following .suppose we have a symmetric kernel function , which is indefinite , the implication of this is that the resulting kernel matrix is indefinite and that it therefore contains positive _ and _ negative eigenvalues .let be the eigendecomposition of , where are the eigenvectors and is the diagonal matrix of eigenvalues starting with the positive eigenvalues , followed by the negative ones and the eigenvalues equal to 0 . 
to see how can be interpreted as a matrix composed of inner products in this indefinite inner product space consider the following representation of its eigendecomposition : let and be equal to the first columns of .define the row of to be equal to we then have a kernel matrix composed of elements \\[-8pt ] & = & \langle\phi_i , \phi_j \rangle_{\mathcal{h}_{+ } } - \langle\phi _ i , \phi_j \rangle_{\mathcal{h}_{- } } \nonumber\\ & = & \langle\phi_i , \phi_j \rangle_{\mathcal{k}}. \nonumber\end{aligned}\ ] ] from ( [ eqik ] ) we can see that unlike psd kernels where for any , with indefinite kernels can take on any value , making optimization over such a quantity challenging . despite this difference , many of the properties that hold for reproducing kernel hilbert spaces ( rkhs ) , such as ( and perhaps most importantly ) the reproducing property [ ] , also hold for these indefinite inner product spaces [ see for details ] . the key difference lies in the fact that rather than minimizing ( maximizing ) a regularized risk functional , as in the rkhs setting , the corresponding optimization problem becomes that of finding a stationary point of a similar risk functional .section [ secindefkern ] provided some insight into the challenges that arise from dealing with indefinite kernels .in particular , remark [ rem1 ] points to the fact that the solution that we find may not be globally , or even locally , optimal ( as it may be a saddle point ) .the form of the ikcca problem we present in this section is motivated by the discussion of the previous section and the works of and .in particular , the addition of a stabilizing function on the indefinite inner product as discussed in led us to consider introducing a constraint on the behavior on the indefinite kernels matrix itself . in the following ,let denote the frobenius norm .define to mean that the matrix is positive semi - definite and let be tuning parameters ( discussed in more detail later this section ) . here and are the ( potentially ) indefinite kernels and and will be the positive semi - definite approximations of these kernels . with this notation in mind , we now define the ikcca optimization problem : where and . note that this optimization problem and the kcca optimization problem are only equivalent when the kernel matrices and are positive semi - definite ( see the [ ] for details on the equivalency between the optimization problem in ( [ sccaoptim ] ) and ( [ eqkcca ] ) and a proof of theorem [ thmikccakoptim ] ) .[ thmikccacavvex ] letting , the optimization problem in is concave in and , and convex in and . see the for a proof .let denote the positive part of the matrix , that is , , where and are eigenvalue eigenvector pair of the matrix . with this in mind , we have the following theorem .[ thmikccakoptim ] letting , and given the optimization problem in the optimal values for and are given by \\[-8pt ] \mathbf{k}_y & = & ( \mathbf{k}_y^0)_{+}. 
\nonumber\end{aligned}\ ] ] the proof of theorem [ thmikccakoptim ] makes use of the following lemma .let be a known , square , not necessarily positive - definite matrix , and a square , unknown matrix , then : [ lemfroboptim ] the solution to the optimization problem is the proofs of theorem [ thmikccakoptim ] and lemma [ lemfroboptim ] can be found in the .points and are projected onto their first canonical directions as follows : first compute their kernelization , using the indefinite kernel functions and , then calculate where and .we now return to the example in section [ sectoyexnonstd ] using the kernel defined in ( [ localkernel ] ) with weights ( [ eqnglweightsrbf ] ) .note that this kernel is closely related to the normalized graph laplacian ( ngl ) kernel used in spectral clustering ; see for an overview of spectral clustering methods . from figure [ glsmiley ], it can be seen that we are now able to capture the underlying structure of the data , identifying each of the six subpopulations : and here is the symmetric -neighborhood of the point [ i.e. , if then and where is the neighbor of the point . .this is a scatter plot matrix of the projections onto the first five ikcca variates ( ikcv ) using the kernel in ( [ localkernel ] ) with weights ( [ eqnglweightsrbf ] ) .unlike the projections shown in figure [ smileyrbf ] , here we are able to separate out the six groups . ] looking at plots of the first four eigenvectors ( figures [ eigensmile ] and [ eigencluster ] ) in both the smiley face space and the cluster space , we can see how the behavior of the eigenvectors causes the segmentation of the data that we observe in figure [ glsmiley ] .first , we discuss how these figures are generated and then what it is they are telling us .generate an equally spaced dimensional grid spanning the range of values in each space .calculate the kernel representation and projection of each grid point into ikcca space .use the projected values to assign color intensities to each point in the grid of each space ( darker for negative values , lighter for positive values ) .plot the grid and for each point using the colors calculated from the previous step .the important thing to note in both of these figures is the distribution of positive and negative projected values and how these are driving the segmentation , which we observe in figure [ glsmiley ] .for example , in figure [ eigensmile ] the first canonical variate segments out one of the faces ( red ) from the other ( blue ) . using the kernel in ( [ localkernel ] ) with weights ( [ eqnglweightsrbf ] ) .these plots allow us to visualize how the canonical vectors separate out each of the clusters . ] using the kernel in ( [ localkernel ] ) with weights ( [ eqnglweightsrbf ] ) . 
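A small sketch of the two ingredients just described: a neighborhood-restricted ("local") kernel built from RBF weights on a symmetric k-nearest-neighbor graph, and the positive-part operation (K)+ that Theorem [thmikccakoptim] uses as the positive semi-definite surrogate of an indefinite kernel matrix. The exact weighting and symmetrization rules of (localkernel) and (eqnglweightsrbf) are not reproduced here; this is only an assumed NGL-flavored construction for illustration.

```python
import numpy as np

def local_rbf_kernel(X, k=10, gamma=1.0):
    """Neighborhood-restricted kernel: RBF weights kept only on a symmetric k-NN graph."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-gamma * sq)
    # symmetric k-NN mask: keep (i, j) if j is among i's neighbors or i among j's
    idx = np.argsort(sq, axis=1)[:, 1:k + 1]
    mask = np.zeros((n, n), dtype=bool)
    mask[np.repeat(np.arange(n), k), idx.ravel()] = True
    mask = mask | mask.T
    np.fill_diagonal(mask, True)
    return W * mask            # generally an indefinite matrix

def positive_part(K):
    """(K)_+ : keep only the non-negative eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(K)
    return (V * np.clip(w, 0.0, None)) @ V.T
```

In an IKCCA-style computation one would form the (generally indefinite) local kernels for both spaces and then, per the theorem above, replace each by its positive part before running the usual KCCA machinery on the resulting PSD matrices.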
][ secprediction ] let us define the projected values of the observations in protein and ligand space onto their first canonical vectors as , , and , , .the predicted value of is calculated as follows [ using a modification of the lle algorithm of ] : compute the neighbors of the data point ( the projected value of into canonical correlation space ) .define to be the nearest neighbors of the point .recall that cca finds directions which best align two spaces .thus , assuming that directions and , , have been found such that the correlation between spaces is strong , using the weights found in protein space should provide a reliable estimate of .values for the tuning parameters , ( the regularization parameter ) , ( the number of dimensions we are projecting into ) , ( the neighborhood for the lle - based prediction ) , ( for the rbf kernel ) and ( for the ngl kernel ) are found by searching over a suitable grid for each .the final set of parameters are selected based on which produces the lowest average rank ( discussed in section [ secintroduction ] ) .
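A sketch of this prediction step, under the assumption that the LLE-style weights are obtained from a small regularized least-squares problem among the k nearest projected neighbors (the paper's exact modification of the LLE algorithm is not reproduced here):

```python
import numpy as np

def lle_predict(p_new, P_train, L_train, k=5, ridge=1e-6):
    """Predict the ligand-space projection paired with a new protein projection p_new.

    P_train, L_train: (n, d) arrays of paired protein / ligand canonical projections.
    """
    # 1. k nearest neighbors of p_new among the projected training proteins
    d2 = np.sum((P_train - p_new) ** 2, axis=1)
    nbrs = np.argsort(d2)[:k]

    # 2. LLE-style weights: reconstruct p_new from its neighbors, weights summing to one
    Z = P_train[nbrs] - p_new                  # neighbors shifted to the query point
    G = Z @ Z.T + ridge * np.eye(k)            # local Gram matrix, lightly regularized
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()

    # 3. apply the same weights to the paired ligand projections
    return w @ L_train[nbrs]
```

The returned vector is the predicted position of the new ligand in canonical space; ranking the candidate ligands by distance to this prediction gives the average-rank criterion used above for tuning.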
Drug discovery is the process of identifying compounds with potentially meaningful biological activity. A major challenge is that the number of compounds to search over can be quite large, sometimes numbering in the millions, making exhaustive experimental testing intractable. For this reason, computational methods are employed to filter out compounds that do not exhibit strong biological activity. This filtering step, also called virtual screening, reduces the search space and allows the remaining compounds to be tested experimentally. In this paper we propose several novel approaches to virtual screening based on canonical correlation analysis (CCA) and on a kernel-based extension. Spectral learning ideas motivate our proposed new method, called indefinite kernel CCA (IKCCA). We show the strong performance of this approach both on a toy problem and on real-world data, with dramatic improvements in the predictive accuracy of virtual screening over an existing methodology.
a signal of interest , which is a random vector taking values in with ( prior ) distribution ( i.e. , is gaussian distributed with mean and covariance matrix ) .the signal is carried over a noisy channel to a sensor , according to the model where is a full rank channel matrix . for simplicity , in this paperwe focus on the case where , though analogous results are obtained when .the problem is to compress realizations of ( , ) with measurements ( where is specified upfront ) .but the implementation of each compression has a noise penalty .so , the compressed measurement is where the compression matrix is .consequently , the measurement takes values in .assume that the measurement noise has distribution and channel noise has distribution .the measurement and channel noise sequences are independent over and independent of each other .equivalently , we can rewrite ( [ on ] ) as and consider as the total noise with distribution .we consider the following adaptive ( sequential ) compression problem .for each , we are allowed to choose the compression matrix ( possibly subject to some constraint ) . moreover ,our choice is allowed to depend on the entire history of measurements up to that point : .let the posterior distribution of given be .more specifically , can be written recursively for as where and .if this expression seems a little unwieldy , by the woodbury identity a simpler version is assuming that and are nonsingular .also define the _ entropy _ of the posterior distribution of given : the first term is actually proportional to the volume of the error concentration ellipse for ] sequentially , one at a time . in the special case ,the measurement model is where is called the measurement vector , and is a white gaussian noise vector . in this context , the construction of a `` good '' compression matrix to convey information about is also a topic of interest . when , this is a problem of greedy adaptive noisy compressive sensing .our solution is a more general solution than this for the more general problem ( [ ssm1 ] ) . in this more general problem ,the uncompressed measurement is a noisy version of the filtered state , and compression by introduces measurement noise and colors the channel noise .the concept of sequential scalar measurements in a closed - loop fashion has been discussed in a number of recent papers ; e.g. , .the objective function for the optimization here can take a number of possible forms , besides the net information gain .for example , in , the objective is to maximize the posterior variance of the expected measurement .if the can only be chosen from a prescribed _ finite _ set , the optimal design of is essentially a sensor selection problem ( see , ) , where the greedy policy has been shown to perform well .for example , in the problem of sensor selection under a submodular objective function subject to a uniform matroid constraint , the greedy policy is suboptimal with a provable bound on its performance , using bounds from optimization of submodularity functions , . consider a constraint of the form for ( where is the euclidean norm in ) , which is much more relaxed than a prescribed finite set .the constraint that has unit - norm columns is a standard setting for compressive sensing .the expression in ( [ eqn : ig ] ) simplifies to this expression further reduces ( see ( * ? ? 
?* lemma 1.1 ) ) to combining ( [ ig ] ) and ( [ ig2 ] ) , the information gain at the step is it is obvious that the greedy policy maximizes to obtain the maximal information gain in the step .clearly , the measurement may be written as then ( [ ratio ] ) is simply the ratio of variance components : the numerator is , ] ( the standard basis for ) a particular choice that minimizes the complexity of compression .so compressed measurements will consist of the noisy measurements . after picking ,the eigenvalues of are , .analogously , after picking , the eigenvalues of are , , and so on .if , then after iterations of the greedy policy the eigenvalues of are , . in the first iterations ,the per - step information gain is . if , after iterations of the greedy policy , .we now simply encounter a similar situation as in the very beginning .we update and .the analysis above then applies again , leading to a round - robin selection of measurements .in this subsection we consider the problem of maximizing the net information gain , subject to the unit - norm constraint : the policy that maximizes ( [ un ] ) is called the _ optimal policy_. the objective function can be written as where :=\left[\frac{{\mathbf{a}}_1}{\sqrt{\|{\mathbf{a}}_1\|^2\sigma_n^2+\sigma_w^2}},\ldots,\frac{{\mathbf{a}}_m}{\sqrt{\|{\mathbf{a}}_m\|^2\sigma_n^2+\sigma_w^2}}\right].\ ] ] assume that the eigenvalue decomposition , where and ] for the relaxed constraint problem ( [ aun ] ) by using ( [ gtg ] ) , ( [ ge ] ) , and ( [ ea ] ) . our main motivation to relax the constraint to an _ average_ unit - norm constraint is our knowledge of the relaxed optimal solution .specifically , for the multivariate gaussian signal the maximal net information gain under the relaxed constraint is given by the water - filling solution .this helps us to identify cases where the greedy policy is in fact optimal , as discussed in the next section .in the preceding sections , we have discussed three types of policies : the greedy policy , the optimal policy , and the relaxed optimal policy .denote by , , and the net information gains associated with these three policies respectively .clearly , in the rest of this section , we characterize , , and . in general, we do not expect to have ; in other words , in general , greedy is not optimal .however , it is interesting to explore cases where greedy _ is _ optimal . in the rest of this section , we provide sufficient conditions for the greedy policy to be optimal . before proceeding , we make the following observation on the net information gain . in ( [ max ] ) denote ; then the determinant in the objective function becomes under the unit - norm constraint , [ ra ] in the maximization problem ( [ un ] ) , if the were only picked from , by ( [ v ] ) where each is an integer multiple of and .this integer would be determined by the multiplicity of appearances of among .thus the net information gain would be where we use the fact that . clearly , to maximize the net information gain by selecting compressors from ,we should never pick from , because ( [ objequal ] ) is not a function of .in particular , the greedy policy picks from .after iterations of the greedy policy , the net information gain can be computed by the right hand side of ( [ objequal ] ) .we now provide two sufficient conditions ( in theorems [ mr2 ] and [ thm1 ] ) under which holds for the sequential scalar measurements problem ( [ ssm1 ] ) . 
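Before stating these conditions, a brief sketch of the greedy scalar-measurement step they refer to may help fix the notation. The sketch follows one reading of the extract: each unit-norm compressor is chosen as a top eigenvector of H Sigma_{k-1} H^T, the per-step information gain is one half the log of the variance ratio, and the posterior covariance is updated by the usual rank-one (Woodbury-type) formula. The symbols and normalizations are assumptions made for illustration, not a transcription of the paper's equations.

```python
import numpy as np

def greedy_scalar_measurements(Sigma0, H, sigma_n2, sigma_w2, m):
    """Greedy selection of m unit-norm compressors for y_k = a_k'(H x + n_k) + w_k."""
    Sigma = Sigma0.copy()
    compressors, gains = [], []
    for _ in range(m):
        C = H @ Sigma @ H.T                         # prior covariance of the filtered state H x
        w, V = np.linalg.eigh(C)
        a = V[:, -1]                                # unit eigenvector of the largest eigenvalue
        num = a @ C @ a + sigma_n2 + sigma_w2       # variance of the scalar measurement
        den = sigma_n2 + sigma_w2                   # variance of the total noise (||a|| = 1)
        gains.append(0.5 * np.log(num / den))       # per-step information gain
        m_vec = H.T @ a                             # effective measurement vector acting on x
        Sigma = Sigma - np.outer(Sigma @ m_vec, m_vec @ Sigma) / (m_vec @ Sigma @ m_vec + den)
        compressors.append(a)
    return np.array(compressors), np.array(gains), Sigma
```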
[ mr2 ] suppose that , , can only be picked from the prescribed set , which is a subset of the orthonormal eigenvectors of . if , then the greedy policy is optimal , i.e. , .see appendix [ app1 ] .next , assume that we can pick to be any arbitrary vector with unit norm . in this much more complicated situation , we show by directly showing that , which implies that in light of ( [ order ] ) .[ thm1 ] assume that , , can be selected to be any vector with .if , where is some nonnegative integer , for , and divides , then the greedy policy is optimal , i.e. , see appendix [ app2 ] the two theorems above furnish conditions under which greedy is optimal . however , these conditions are quite restrictive .indeed , as pointed out earlier , in general the greedy policy is not optimal .the restrictiveness of the sufficient conditions above help to highlight this fact . in the next section ,we provide examples of cases where greedy is _ not _ optimal .in this subsection we give an example where the greedy policy is not optimal for the scenario and .suppose that we are restricted to a set of only three choices for : note that . in this case , . moreover , set , , and .let us see what the greedy policy would do in this case . for , it would pick to maximize a quick calculation shows that for or , we have whereas for , so the greedy policy picks , which leads to . for , we go through the same calculations : for or , we have whereas for , so , this time the greedy policy picks ( or ) , after which .consider the alternative policy that picks and . in this case , and so , which is clearly provides greater net information gain than the greedy policy . call this alternative policy the _ alternating policy _ ( because it alternates between and ) . in conclusion , for this example the greedy policy is not optimal with respect to the objective of maximizing the net information gain . how much worse is the objective function of the greedy policy relative to that of the optimal policy ? on the face of it , this question seems easy to answer in light of the well - known fact that the net information gain is a submodular function .as mentioned before , in this case we would expect to be able to bound the suboptimality of the greedy policy compared to the optimal policy ( though we do not explicitly do that here ) . nonetheless , it is worthwhile exploring this question a little further .suppose that we set and let the third choice in be , where is some small number .( note that the numerical example above is a special case with . ) in this case , it is straightforward to check that the greedy policy picks and ( or ) if is sufficiently small , resulting in which increases unboundedly as .however , the alternating policy results in which converges to as . hence ,letting get arbitrarily small , the ratio of for the greedy policy to that of the alternating policy can be made arbitrarily large .insofar as we accept minimizing to be an equivalent objective to maximizing the net information gain ( which differs by the normalizing factor and taking ) , this means that _ the greedy policy is arbitrarily worse than the alternating policy_. 
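The specific vectors and noise levels of this example are elided in the extract, so the sketch below only illustrates the kind of numerical check involved: compare the greedy policy restricted to a finite candidate set against an exhaustive search over all two-step policies. All numbers are made up, and with these particular numbers the two policies may or may not differ; the point is the mechanics of the comparison.

```python
import numpy as np
from itertools import product

def net_info_gain(Sigma0, picks, H, sigma_n2, sigma_w2):
    """Net information gain 0.5*log(det(Sigma0)/det(Sigma_final)) for a fixed pick sequence."""
    Sigma = Sigma0.copy()
    for a in picks:
        m_vec = H.T @ a
        noise = sigma_n2 * (a @ a) + sigma_w2
        Sigma = Sigma - np.outer(Sigma @ m_vec, m_vec @ Sigma) / (m_vec @ Sigma @ m_vec + noise)
    return 0.5 * (np.linalg.slogdet(Sigma0)[1] - np.linalg.slogdet(Sigma)[1])

def greedy_from_set(Sigma0, candidates, steps, H, sigma_n2, sigma_w2):
    """Greedy policy restricted to a finite candidate set of compressors."""
    picks = []
    for _ in range(steps):
        picks.append(max(candidates,
                         key=lambda a: net_info_gain(Sigma0, picks + [a], H, sigma_n2, sigma_w2)))
    return picks

# purely illustrative numbers -- not the example's actual choices
H = np.eye(2)
Sigma0 = np.diag([2.0, 1.0])
sigma_n2, sigma_w2 = 0.0, 0.1
eps = 0.05
candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
              np.array([np.sqrt(1.0 - eps**2), eps])]

greedy_picks = greedy_from_set(Sigma0, candidates, 2, H, sigma_n2, sigma_w2)
best_picks = max(product(candidates, repeat=2),
                 key=lambda p: net_info_gain(Sigma0, list(p), H, sigma_n2, sigma_w2))
print("greedy two-step gain :", net_info_gain(Sigma0, greedy_picks, H, sigma_n2, sigma_w2))
print("optimal two-step gain:", net_info_gain(Sigma0, list(best_picks), H, sigma_n2, sigma_w2))
```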
what went wrong ?the greedy policy was `` fooled '' into picking at the first stage , because this choice maximizes the per - stage information gain in the first stage .but once it does that , it is stuck with its resulting covariance matrix .the alternating policy trades off the per - stage information gain in the first stage for the sake of better net information gain over two stages .the first measurement matrix `` sets up '' the covariance matrix so that the second measurement matrix can take advantage of it to obtain a superior covariance matrix after the second stage , embodying a form of `` delayed gratification . ''interestingly , the argument above depends on the value of being sufficiently small . for example , if , then the greedy policy has the same net information gain as the alternating policy , and is in fact optimal .an interesting observation to be made here is that the submodularity of the net information gain as an objective function depends crucially on including the function .in other words , although for the purpose of optimization we can dispense with the function in the objective function in view of its monotonicity , bounding the suboptimality of the greedy policy with respect to the optimal policy turns on submodularity , which relies on the presence of the function in the objective function . in particular , if we adopt the volume of the error concentration ellipse as an equivalent objective function , we can no longer bound the suboptimality of the greedy policy relative to the optimal policy the greedy policy is provably _ arbitrarily worse _ in some scenarios , as our example above shows .consider the channel model and scalar measurements .assume that ,\ ] ] , and set .our goal is to find such that , maximize the net information gain : by simple computation , we know that the eigenvalues of are and . if we follow the greedy policy , the eigenvalues of are and . by ( [ greedy ] ) ,the net information gain for the greedy policy is next we solve for the optimal solution .let ^t$ ] . by ( [ pp ] ), we have .\ ] ] we compute that when we choose in the second stage , we can simply maximize the information gain in that stage . in this special casewhen , the second stage is actually the last one .if is given , maximizing the net information gain is equivalent to maximizing the information gain in the second stage .therefore , the second step is equivalent to a greedy step . by ( [ greedy ] ) , by ( [ infogain ] ) , we know using , we simplify ( [ i1 ] ) and ( [ i2 ] ) to obtain this expression reaches its maximal value when .so the optimal net information gain is , when ^t \ ] ] and ^t . \ ] ] this implies that the greedy policy is not optimal .if , , can only be picked from , then by ( [ objequal ] ) the net information gain is .we can simply manage in each channel to maximize the net information gain . 
rewrite as we claimed before , where , , is an integer multiple of .inspired by the water - filling algorithm , we can consider as an allocation of blocks ( each with size ) into channels .in contrast to water - filling , we refer to this problem as _ block - filling _ ( or , to be more evocative , _ ice - cube - filling _ ) .the original heights of these channels are .finally , the net information gain is determined by the product of the final heights .the optimal solution can be extracted from an optimal allocation that maximizes ( [ heights ] ) .because , to maximize we should allocate nonzero values of in the first channels .accordingly , there exists an optimal solution such that assume that we pick , , using the greedy policy . by ( [ v1 ] ) and ( [ v2 ] ), we see that the iteration of the greedy algorithm only changes into , which is equivalent to changing into .consider this greedy policy in the viewpoint of block - filling .the greedy policy fills blocks to the lowest channel one by one .if there are more than one channel having the same lowest height , it adds to the channel with the smallest index .likewise , since the original heights of the channels are , the greedy policy only fills blocks to the first channels , i.e. , greedy solution also satisfies we now provide a necessary condition for both optimal and greedy solutions .[ necess ] assume that an allocation is determined by either an optimal solution or a greedy solution .if is nonzero , then is bounded in the interval .moreover , it suffices for the optimal and greedy solutions to pick from the set .first , assume that is given by an optimal solution .recall that is the final height of the channel . by examining the total volumes of water and blocks , we deduce the following .if and for some , where is the water level defined in ( [ waterlevel ] ) , then there exists some channel such that . for the purpose of proof by contradiction , let us assume that .we move the top block of the channel to the channel to get another allocation .clearly , and have the same entries except the and components .the argument in this paragraph is illustrated in figure [ fig : etagamma ] . from .,width=321 ] for simplicity , denote for .so because .thus gives a better allocation , which contradicts the optimality of . by a similar argument, we obtain that for any optimal solution , there also does not exist such that and . in conclusion ,the final height , , in each channel in the optimal solution is bounded in the interval .additionally , in both cases when and , .this means that it suffices for the optimal solution to pick from the set .next , we assume that is determined by a greedy solution .if and , for some , then there exists a channel with index such that . for the purpose of proof by contradiction , let us assume that .this implies that when the greedy algorithm fills the top block to the channel , it does not add that block to the channel with a lower height .this contradicts how the the greedy policy actually behaves . by a similar argument, there does not exist some channel such that and . in conclusion ,the final height , , in each channel in the greedy solution is bounded in the interval . moreover , .this means that it suffices for the greedy solution to pick from the set .we now proceed to the equivalence between the optimal solution and the greedy solution . 
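Before the equivalence argument, a small illustrative sketch of the block-filling picture itself: channels start at some initial heights (in the paper these are fixed by the eigenvalues), and each of the m blocks of size sigma_w^2 is added greedily to the currently lowest channel, ties going to the smallest index. The function and variable names are mine, chosen only as a reading aid.

```python
def block_fill_greedy(init_heights, m, block):
    """Greedily drop m blocks of the given size into the channel that is currently lowest."""
    heights = list(init_heights)
    for _ in range(m):
        j = min(range(len(heights)), key=lambda i: (heights[i], i))  # lowest channel, smallest index
        heights[j] += block
    return heights

# arbitrary illustration: three channels, six blocks of size sigma_w^2 = 0.25
print(block_fill_greedy([0.5, 1.0, 2.0], m=6, block=0.25))
# per the discussion above, the net information gain is determined by the product of the final heights
```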
to show this equivalence ,let be an arbitrary allocation of blocks satisfying the necessary condition in lemma [ necess ] .next , we will show how to modify to obtain an optimal allocation .after that , we will also show how to modify to obtain an allocation that is generated by the greedy policy .it will then be evident that these two resulting allocations have the same information gain . to obtain an optimal allocation from ,we first remove the top block from each channel whose height is above to get an auxiliary allocation .assume that the total number of removed blocks is .this auxiliary is unique , because each is simply the maximal number of blocks can be filled in the channel to obtain a height not above the water level : this number is uniquely determined by , , and .we now show how to re - allocate the removed blocks , so that , together with , we have an optimal allocation of all blocks .note that by lemma [ necess ] , to obtain an optimal solution we can not allocate more than one block to any channel , because that would make the height of that channel above .we claim that the optimal allocation simply re - allocates the removed blocks to the lowest channels in .we can show this by contradiction .assume that the optimal allocation adds one block to the channel instead of a lower channel in .this means that , , and . by an argument similar to ( [ move ] ) ,if we move the top block in the channel to the channel , we would obtain a better allocation ( which gives a larger net information gain ) .this contradiction verifies our claim .next , we concentrate on the allocation provided by the greedy policy .first , we recall that at each step of the greedy algorithm it never fills a block to some higher channel instead of a lower one .so after the greedy algorithm fills one block to some channel , its height can not differ from a lower channel by more than .if we apply the greedy policy for picking , , then we obtain the same allocation as .this is because any other allocation of blocks would result in a channel , after its top block filled , with a height deviating by more than from some other channel .this allocation contradicts the behavior of the greedy policy . continuing with ,the greedy policy simply allocates the remaining blocks to the lowest channels one by one .so the greedy policy gives the same final heights as the optimal allocation .the only possible difference is the order of these heights .therefore , the greedy solution is equivalent to the optimal solution in the sense of giving the same net information gain , i.e. , .this completes the proof of theorem [ mr2 ] .we have studied the performance of the greedy policy in the viewpoint of block - filling in the proof of theorem [ mr2 ] . for the purpose of simplicity ,we rewrite as where .after iterations of the greedy policy , the heights in the first channels give a flat top , which is illustrated in figure [ fig : integer ] .there are blocks remaining after iterations .if divides , the final heights of the first channels still give a flat top coinciding with in each channel .therefore . from ( [ order ] ), we conclude that . 9 a. ashok , j. l. huang , and m. a. neifeld , `` information - optimal adaptive compressive imaging , '' _ proc . of the asilomar conf . on signals , systems , and computers _ , pacific grove , ca , nov .2011 , pp .12551259 . s. boyd and l. vandenberghe , _convex optimization_. cambridge , ma : cambridge university press , 2004 .g. calinescu , c. chekuri , m. pal , and j. 
vondrak , `` maximizing a monotone submodular function subject to a matroid constraint , '' _ the 20th sicomp conf ._ , 2009 . w. r. carson , m. chen , m. r. d. rodrigues , r. calderbank , and l. carin , `` communications - inspired projection design with application to compressive sensing , '' preprint .r. castro , j. haupt , r. nowak , and g. raz , `` finding needles in noisy haystacks , '' _ proc .ieee intl .conf . on acoustics , speech and signal processing _, las vegas , nv , apr . 2008 , pp . 51335136 . j. ding and a. zhou , `` eigenvalues of rank - one updated matrices with some applications , '' _ applied mathematics letters _ , vol .20 , no . 12 , pp . 12231226 , 2007 .d. l. donoho , `` compressed sensing , '' _ ieee trans .inf . theory _ ,52 , no . 4 , pp . 12891306 , 2006 . m. elad , `` optimized projections for compressed sensing , '' _ ieee trans .signal process ._ , vol .55 , no . 12 , pp . 56955702 , 2007 .r. g. gallager , _ information theory and reliable communication_. new york : john wiley & sons , inc . , 1968. j. haupt , r. castro , and r. nowak , `` distilled sensing : adaptive sampling for sparse detection and estimation , '' preprint , jan .2010 [ online ] .available : http://www.ece.umn.edu//publications/sub10_ds.pdf j. haupt , r. castro , and r. nowak , `` improved bounds for sparse recovery from adaptive measurements , '' _ isit 2010 _ , austin , tx , jun .r. a. horn and c. r. johnson , _ matrix analysis_. cambridge , ma : cambridge university press , 1985 .s. ji , d. dunson , and l. carin , `` multitask compressive sensing , '' _ ieee trans .signal process .57 , no . 1 ,pp . 92106 , 2009 .s. ji , y. xue and l. carin , `` bayesian compressive sensing , '' _ ieee trans .signal process ._ , vol .56 , no . 6 , pp .23462356 , 2008 . s. joshi and s. boyd , `` sensor selection via convex optimization , '' _ ieee trans .signal process ._ , vol .57 , no . 2 , pp . 451462 , 2009 . j. ke , a. ashok , and m. a. neifeld , `` object reconstruction from adaptive compressive measurements in feature - specific imaging '' , _ applied optics _49 , no .34 , pp . h27-h39 , 2010 .e. liu and e. k. p. chong , `` on greedy adaptive measurements , '' _ proc .ciss _ , 2012 .e. liu , e. k. p. chong , and l. l. scharf `` greedy adaptive measurements with signal and measurement noise , '' submitted to asilomar conf . on signals , systems , and computers , mar .g. l. nemhauser and l. a. wolsey , `` best algorithms for approximating the maximum of a submodular set function , '' _ math .oper . research _ ,vol . 3 , no . 3 , pp . 177188 , 1978 .f. prez - cruz , m. r. rodrigues , and s. verd , `` mimo gaussian channels with arbitrary inputs : optimal precoding and power allocation , '' _ ieee trans .inf . theory _ ,56 , no . 3 , pp .10701084 , 2010 . h. rowaihy , s. eswaran , m. johnson , d. verma , a. bar - noy , t. brown , and t. l. portal , `` a survey of sensor selection schemes in wireless sensor networks , '' _ proc .spie _ , 2007 , vol .m. shamaiah , s. banerjee and h. vikalo , `` greedy sensor selection : leveraging submodularity , '' _ proc . of the 49th ieee conf . on decision and control_ , atlanta , ga , dec . 2010 .d. p. wipf , j. a. palmer , and b. d. rao , `` perspectives on sparse bayesian learning , '' _ neural information processing systems ( nips ) _ , vancouver , canada , dec .h. s. witsenhausen , `` a determinant maximization problem occurring in the theory of data communication , '' _ siam j. appl .math _ , vol .29 , no . 3 , pp .515522 , 1975 .
The purpose of this article is to examine greedy adaptive measurement policies in the context of a linear Gaussian measurement model with an optimization criterion based on information gain. In the special case of sequential scalar measurements, we provide sufficient conditions under which the greedy policy actually is optimal in the sense of maximizing the net information gain. We also discuss cases where the greedy policy is provably not optimal. Keywords: entropy, information gain, compressive sensing, compressed sensing, greedy policy, optimal policy.
uncovering cause - and - effect relationships remains an exciting challenge in many fields of applied science .for instance , identifying the causes of a disease in order to prescribe effective treatments is of primary importance in medical diagnosis ; locating the defects that could cause abrupt changes of the connectivity structure and adversely affect the performance of the system is a main objective in structural health monitoring .consequently , the problem of inferring causal relationships from observational data has attracted much attention in recent years .identifying causal relationships in large - scale complex systems turns out to be a highly nontrivial task . as a matter of fact, a reliable test of causal relationships requires the effective determination of whether the cause - and - effect is real or is due to the secondary influence of other variables in the system .this , in principle , can be achieved by testing the relative independence between the potential cause and effect conditioned on all other variables in the system .such a method essentially demands the estimation of joint probabilities for ( very ) high dimensional variables from limited available data and suffers the curse of dimensionality . in practice , there are various approaches in statistics and information theory that aim at accomplishing the proper conditioning without the need of testing upon all remaining variables of the system at once .the basic idea behind many such approaches originates from the classical pc - algorithm , which repeatedly measures the relative independence between the cause and effect conditioned on combinations of the other variables . as an alternative, we recently developed a new entropy - based computational approach that infers the causal structure via a two - stage process , by first aggregatively discovering potential causal relationships and then progressively removing those ( from the stage ) that are redundant .in almost all computational approaches for inferring causal structure , it is necessary to estimate the joint probabilities underlying the given process .large - scale data sets are commonly analyzed via discretization procedures , for instance using binning , ranking , and/or permutation methods .these methods generally require fine - tuning of parameters and can be sensitive to noise . on the other hand , the time - evolution of a physical system can only be measured and recorded to a finite precision , resembling an approximation of the true underlying process .this finite resolution can be characterized by means of a finite set of symbols , yielding a discretization of the phase space .regardless of the nature and motivation of discretization , the precise impacts on the causal structure of the system is essentially unexplored . here, we investigate the symbolic description of a dynamical system and how it affects the resulting markov order and causal structures. 
such description , based on partitioning the phase space of the system , is also commonly known as symbolization .symbolization converts the original dynamics into a stochastic process supported on a finite sample space .focusing on the tent map for the simplicity , clarity and completeness of computation it allows , we introduce numerical procedures to compute the joint probabilities of the stochastic process resulting from arbitrary partitioning of the phase space .furthermore , we develop causation entropy , an information - theoretic measure based on conditional mutual information as a mean to determine the markov order and ( temporal ) causal structure of such processes .we uncovered that a partitioning that maintains dynamic invariants of the system does not necessarily preserve its causal structure .on the other hand , both the markov order and causal structure depend nonmonotonically and , indeed , sensitively on the partitioning .a powerful method of analyzing nonlinear dynamical systems is to study their symbolic dynamics through some topological partition of the phase space .the main idea characterizing symbolic dynamics is to represent the state of the system using symbols from a finite alphabet defined by the partition , rather than using a continuous variable of the original phase space . for more details ,we refer to .the issue of partitioning was shown to affect entropic computations in a nontrivial manner and , as we will highlight in the paper , is also intricate and central to a general information - theoretic description of the system .consider a discrete dynamical system given by where represents the state of the system at time and the vector field governs the dynamic evolution of the states .a ( _ topological _ ) _ partition _ of the phase space is a finite collection of disjoint open sets whose closures cover , i.e. , the partition leads to the corresponding _symbolic dynamics_. in particular , for any trajectory of the original dynamics contained in the union of s , the partition yields a _ symbol sequence _ given by where is the indicator function defined as in other words ,the symbolic state is determined by the open set that contains the state .see fig .[ fig1 ] for a schematic illustration ., the trajectory leads to a symbol sequence ., scaledwidth=39.0% ] in general , the same symbol sequence may result from distinct trajectories .if the partition is _ generating _ , then every symbol sequence corresponds to a unique trajectory .a special case is the so - called markov partition , for which the transition from one symbolic state to another is independent of past states , analogous to a markov process . on the other hand , a generating partition is not necessarily markov .the precise effects of partitioning on the symbolic dynamics remains an interesting and challenging problem , with recent progress in a few directions .focusing on the equivalence between the original and symbolic dynamics , bollt _ et ._ studied the consequence of misplaced partitions on dynamical invariants , while teramoto and komatsuzaki investigated topological change in the symbolic dynamics upon different choices of markov partitions . 
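As a concrete illustration of the symbolization just defined, the sketch below iterates the tent map and records the binary symbol sequence induced by the threshold partition {[0, alpha), (alpha, 1]} used later in the paper. The tent map form T(x) = 2x for x <= 1/2 and 2(1 - x) otherwise is standard; the remaining choices (initial condition, alpha, sequence length) are illustrative.

```python
import numpy as np

def tent(x):
    """Tent map on [0, 1]."""
    return 2.0 * x if x <= 0.5 else 2.0 * (1.0 - x)

def symbolize(x0, alpha, n_steps):
    """Binary symbol sequence s_t = 1{x_t > alpha} along a tent-map trajectory."""
    # note: long double-precision iterations of the tent map eventually collapse to 0
    # because of binary round-off, so keep n_steps modest or add tiny noise for long runs
    x = x0
    symbols = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        symbols[t] = 1 if x > alpha else 0
        x = tent(x)
    return symbols

print(symbolize(x0=np.sqrt(2) - 1, alpha=0.47, n_steps=20))
```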
on the other hand ,the degree of self - sufficiency of the symbolic dynamics , irrespective of the equivalence to the original dynamics , has started to gain increasing interest , focusing on information - theoretical measures such as information closure and prediction efficiency .we here adopt a different perspective and study how causal structures emerge and/or change under different choices of partitioning .the symbolic description of a dynamical system leads naturally to an interpretation of such systems as stochastic processes .let be a measure space with borel field and probability measure such that ] is defined as specifically , we shall discuss the manner in which different choices of the partitioning lead to ( qualitatively and quantitatively ) different symbolizations of the original dynamics with specific markov orders and causal structures . for the time being , we limit our investigation to a binary symbolic description of the dynamical map . consider a general binary partitioning of the phase space defined by the parameter , so that \}.\ ] ] such partitioning allows us to represent a continuous trajectory by a sequence of binary symbols ( bits ) .we remark that * * * * the choice of leads to a generating partition which gives rise to a symbolic dynamics that is topologically equivalent to the original system .the unique ergodic invariant measure of the tent map can be found by solving the first equation in ( also called a continuity equation ) for each subinterval of ] as : t^{l}\left ( x\right ) \in\left [ 0\text { , } \alpha\right ) \right\ } , \\i_{l}^{\left ( 1\right ) } \overset{\text{def}}{=}\left\ { x\in\left [ 0,1\right ] : t^{l}\left ( x\right ) \in\left ( \alpha\text { , } 1\right ] \right\ } .\end{cases } \label{eq : preimage}\ ] ] in other words , the initial conditions corresponding to a specific symbolic string of length are formed by a finite disjoint union of intervals .figure [ fig4 ] shows an example of these intervals for and four levels .( green ) and ( red ) are defined by eq . and are shown for levels for the choice of . in general , at each level , the subintervals start from and then alternate in between and .the relative ordering of the subintervals across levels can change for different values of , although they remain the same as shown in the picture for all .,scaledwidth=59.0% ] this offers a computationally feasible description with which joint probabilities can be calculated . from eq ., we obtain that for , , giving and as expected .for , we have .this gives the probabilities , , , and for all ( see also fig .[ fig4 ] ) . for general values of , we proceed as follows .first , we define the level- preimages of to be ( ) , which are the roots of the equation for convenience , we sort in the ascending order of and , additionally , define and .then , the preimages sets of and ] in a uniform manner : , using a threshold value of for the causation entropy at the given .the results are shown in fig .in particular , we found several examples for which the markov order satisfies while the number of causal parents is strictly less than ( i.e. , certain markov time indices are skipped in the causal structure ) . 
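The determination of the Markov order from such probabilities can be sketched as a sequence of conditional-mutual-information (causation entropy) tests: estimate I(s_t ; s_{t-l} | s_{t-1}, ..., s_{t-l+1}) and declare the order to be the largest lag whose value exceeds a small threshold. The plug-in estimator below works on an observed symbol sequence rather than on the exact preimage-interval probabilities computed above, so it is only a rough empirical stand-in for the paper's procedure.

```python
import numpy as np
from collections import Counter

def plugin_entropy(blocks):
    """Shannon entropy (nats) of the empirical distribution of the given tuples."""
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def causation_entropy(s, lag):
    """Plug-in estimate of I(s_t ; s_{t-lag} | s_{t-1}, ..., s_{t-lag+1}) from a symbol sequence."""
    n = len(s)
    full = [tuple(s[t - lag:t + 1]) for t in range(lag, n)]   # (s_{t-lag}, ..., s_t)
    past = [b[:-1] for b in full]                             # (s_{t-lag}, ..., s_{t-1})
    mid = [b[1:-1] for b in full]                             # (s_{t-lag+1}, ..., s_{t-1})
    mid_now = [b[1:] for b in full]                           # (s_{t-lag+1}, ..., s_t)
    # I(X;Y|Z) = H(Y,Z) + H(X,Z) - H(Z) - H(X,Y,Z)
    return (plugin_entropy(past) + plugin_entropy(mid_now)
            - plugin_entropy(mid) - plugin_entropy(full))

def markov_order(s, max_lag=8, threshold=1e-3):
    """Largest lag whose causation entropy exceeds the threshold (0 if none)."""
    cse = [causation_entropy(s, l) for l in range(1, max_lag + 1)]
    above = [l for l, c in zip(range(1, max_lag + 1), cse) if c > threshold]
    return (max(above) if above else 0), cse
```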
for values of , for the entire range of (a ) and a subrange (b ) .vertical dashed lines in both panels mark four specific choices of : 0.444 , 0.47 , 0.5 , and 0.516 , respectively ., scaledwidth=85.0% ] is chosen from .for each we distinguish the first causal parent computed from the forward ( aggregative discovery ) step of the ocse algorithm ( light red ) , all causal parents of from the set ( gray ) , and noncausal components ( black ) . in all computations we used a threshold under which causation entropy is regarded as zero ., scaledwidth=90.0% ]symbolization is a common practice in data analysis : in the field of dynamical systems , it bridges topological dynamics and stochastic processes through partitioning / symbolization of the phase space ; in causality inference , it allows for the description of continuous random variables by discrete ones . symbolized data , in turn ,are not as demanding in terms of precision and are often considered more robust with respect to parameters and noise . motivated by the problem of uncovering causal structures from finite , discrete data , we investigated the symbolization of outputs from a simple dynamical system , namely the tent map .we provided a full description of the joint probabilities occurring from partitioning / symbolization of the phase space and investigated how markov order and causal structure can be determined from these probabilities in terms of causation entropy , an information - theoretical measure .we found that in general , partitioning of the phase space strongly influences the markov order and causal structure of the resulting stochastic process in an irregular manner which is difficult to classify and predict .in particular , a small change in the partition can lead to relatively large and unexpected changes in the resulting markov order and causal structure . to the best of our knowledge , this is the first attempt in the literature that aims at unravelling the intricate dependence of inferred causal structures of dynamical systems on their different symbolic descriptions analyzed in an information - theoretic setting .furthermore , although the effects of map refinements are well understood , it remains a main challenge to discover the exact consequences of arbitrary refinements . especially for this reason , we have left the application of our approach to more complex dynamical systems and/or experimental time - series data to future investigations . on a different perspective, we note that although * * * * finding partitions that preserve dynamical invariants ( i.e. 
, generating partitions ) are known to be a real challenge especially for * * * * high - dimensional systems , it is yet unclear whether or not such challenge remains when considering partitions that maintain markov order and/or causal structure .this venue of research can be especially interesting to explore given recent advances in many different perspectives on partitioning the phase space including adaptive binning , ranking and permutation of variables , and nearest - neighbor statistics .finally , we remark that the non - uniqueness of symbolic descriptions of a system implies that important concepts such as the markov order and causal structure are not necessarily absolute concepts : rather , they unavoidably depend on the observational process , just like classical relativity of motion and quantum entanglement .this , in turn , suggests the possibility of the causal structure of the very same system to be perceived differently , even given unlimited amount of data .the concept of causality , therefore , is observer - dependent .we thank dr .samuel stanton from the united states army research office ( aro ) complex dynamics and systems program for his ongoing and continuous support .this work was funded by aro grant no .w911nf-12 - 1 - 0276 .we will prove that for a transformation that has a uniquely ergodic invariant probability measure , the markov order of the stochastic process resulting from a partition of the phase space decreases strictly by one under a map refinement of the partition unless the original markov order is less or equal to one .* definition : markov order of a partition . * consider a measure - preserving transformation on a compact metric space with a uniquely ergodic invariant probability measure .let be a measurable partition of the phase space that yields a stochastic process with time - invariant joint probabilities if such a process is markov of order , we define the markov order of the partition to be . *definition : map refinement . *consider a measure - preserving transformation with a probability measure .the map refinement of a given measurable partition is defined as the partition * theorem ( markov order upon map refinement . ) * consider a measure - preserving transformation on a compact metric space with a uniquely ergodic invariant probability measure .let be a partition of and be its map refinement .suppose that the markov order of and are and , respectively .it follows that for , and when ._proof ._ we shall denote the probabilities resulting from the map refinement of as where .since every sequence is determined by some orbit of under the partition , it follows that if and only if . on the other hand, implies that .therefore in eq . and for all sequences with nonvanishing probability .then , the theorem follows from applying eq . to the definition of markov order given in eq . rewritten using the product rule ( chain rule ) of conditional probability. j. runge , j. heitzig , n. marwan , and j. kurths , _ quantifying causal coupling strength : a lag - specific measure for multivariate time series related to transfer entropy _ , phys .e * 86 * , 061121 ( 2012 ) .a. porta , l. faes , v. bari , a. marchi , t. bassani _ et ._ , _ effect of age on complexity and causality of the cardiovascular control : comparison between model - based and model - free approaches _ , plos one * 9 * e89463 ( 2014 ) .t. haruna and k. 
nakajima , _ symbolic transfer entropy rate is equal to transfer entropy rate for bivariate finite - alphabet stationary ergodic markov processes _ , the european physical journal b * 86 * , 1 ( 2013 ) .e. m. bollt , t. stanford , y .- c .lai , and k. zyczkowski , _ what symbolic dynamics do we get with a misplaced partition ? on the validity of threshold crossing analysis of chaotic time - series _ , physica * d154 * , 259 ( 2001 ) .a. porta , p. castiglioni , v. bari , t. bassani , a. marchi , a. cividjian , l. quintin , and m. di rienzo , _-nearest - neighbor conditional entropy approach for the assessment of the short - term complexity of cardiovascular control _34 * , 17 ( 2013 ) .
Identification of causal structures and quantification of direct information flows in complex systems is a challenging yet important task, with practical applications in many fields. Data generated by dynamical processes or large-scale systems are often symbolized, either because of the finite resolution of the measurement apparatus or because of the need for statistical estimation. By algorithmic application of causation entropy, we investigated the effects of symbolization on important concepts such as the Markov order and causal structure of the tent map. We uncovered that these quantities depend nonmonotonically and, most of all, sensitively on the choice of symbolization. Indeed, we show that the Markov order and causal structure do not necessarily converge to their original analog counterparts as the resolution of the partitioning becomes finer. Although based on a simple mathematical model, our results shed new light on the challenging nature of causality inference.
this paper is a formal corollary to the recent ref . , where the truncated wigner representation of quantum optics was extended to multitime problems .( for a discussion of operator orderings , such as normal , time - normal and symmetric , we refer the reader to the classic treatise of mandel and wolf . )the goal of ref . was to develop a practical computational tool , while formal analyses were reduced to the bare necessities . in this paperwe give proper justification to the formal techniques underlying ref .the functional techniques used here are to a large extent borrowed from vasilev ; they were first outlined in preprint . however , in a number of important results ( notably , continuity of time - symmetrically ordered operator products ) were overlooked .important for the putting the results of this paper in perspective is the connection between phase - space techniques and the so - called _ real - time quantum field theory _ ( for details and references see ) . according to , the _ keldysh rotation _underlying the latter is a generalisation of weyl s ordering to heisenberg operators .this generalisation is nothing but the aforementioned time - symmetric ordering .this paper could thus equally be titled , `` phase space approach to real - time quantum field theory '' .for physical motivation and literature we refer the reader to the introduction to . herewe only briefly touch upon the more recent developments .the truncated wigner representation has found its main utility for the investigation and numerical modelling of bose - einstein condensates ( bec ) , where the presence of a significant third - order nonlinearity makes the exact positive - p method unstable except for very short times , although it is still also used in quantum optics . over the last few yearsit has been used for an increasing number of investigations , with the most relevant mentioned in what follows . the ability to include the effects of initial quantum states other than coherentwas first shown numerically for trapped bec molecular photoassociation in three papers by olsen and plimak , olsen , and olsen , bradley , and cavalcanti , using methods later published by olsen and bradley to sample the required quantum states .johnsson and hope calculated the multimode quantum limits to the linewidth of an atom laser .the quantum dynamics of superflows in a toroidal trapping geometry were treated by jain _ investigated dynamical instabilities of bec at the band edge in one - dimensional optical lattices , while hoffmann , corney , and drummond made an attempt to combine the truncated wigner and positive - p representations into a hybrid method for bose fields . used the truncated wigner to simulate polarisation squeezing in optical fibres , a system which is mathematically analogous to bec . the mode entanglement and einstein - podolsky paradoxwere numerically investigated in the process of degenerate four - waving mixing of bec in an optical lattice by olsen and davis and ferris , olsen , and davis . also analysing bec in an optical lattice , shrestha , javanainen , and rusteokoski looked at the quantum dynamics of instability - induced pulsations .et al . 
_ performed a comparison of the truncated wigner with the exact positive - p representation and a hartree - fock - bogoliubov approximate method for the simulation of molecular bec dissociation , concluding that the truncated wigner representation was the most useful in practical terms .this practical usefulness has been demonstrated in studies of bec interferometry , domain formation in inhomogeneous ferromagnetic dipolar condensates , vortex unbinding following a quantum quench , a reverse kibble - zurek mechanism in two - dimensional superfluids , the quantum and thermal effects of dark solitons in one - dimensional bose gases , the quantum dynamics of multi - well bose - hubbard models , and analysis of a method to produce einstein - podolsky - rosen states in two - well bec . along with its continuing use in quantum optics ,the above examples demonstrate that the truncated wigner representation is an extremely useful approximation method , allowing for the numerical simulation of a number of processes for which nothing else is known to be as effective .the task in was split naturally in two .firstly , we constructed a path - integral approach in phase space , which we called _ multitime wigner representation_. within this approach , truncated wigner equations emerged as an approximation to exact quasi - stochastic equations for paths . secondly , we developed a way of bringing time - symmetric products of heisenberg operators , expressed by the path integral , to time - normal order .the respective relations , called _ generalised phase - space correspondences _ , originate in kubo s formula for the linear response function .however , the properties of time - symmetric operator products were formulated without proof .details of the limiting procedure defining the path integral were ignored as irrelevant within the truncated wigner approximation .the logic of the present paper is best understood by drawing an analogy with our ref . . in ,our goal was to extend _ normal - ordering - based _ approaches of quantum stochastics beyond quartic hamiltonians .this paper is an attempt to extend the techniques of ref . further , this time to the _ weyl - ordering - based _ method of ref .particulars aside , in we applied _ causal _ , or _ response _ , transformation to perel - keldysh diargam series for the system in question .the result was a wyld - type diagram series , which we called _causal series_. these kind of diagram techniques are well known in the theory of classical stochastic processes .it is therefore not surprising that we could reverse engineer the causal series resulting in a stochastic differential equation ( sde ) for which this series was a formal solution . for quartic ( collisional ) interactionsthe result is the well - known positive - p representation of quantum optics . for other interactions , finding an sde in the true meaning of the termis not always possible because of the pawula theorem . in such casesthe causal series may be only approximated by a stochastic _ difference _ equation ( s ) in discretised time .attempts to simulate emerging s numerically have not been encouraging .this is the primary reason why in ref . 
we resorted to approximate methods . the key point of is a formal link existing between conventional quantum field theory ( schwinger - perel - keldysh's closed - time - loop formalism ) and conventional quantum optics ( time - normal operator ordering and the positive - p representation ) . the existence of such a link was first pointed out by one of the present authors in . for the purposes of this paper , both backgrounds are inadequate or missing and have to be built from scratch . on the quantum - field - theoretical side , we develop a formal framework of `` symmetric wick theorems '' expressing time - ordered operator products by symmetrically - ordered ones . similar to wick's theorem proper , this allows us to construct a `` symmetric '' variety of perel - keldysh diagram series with corresponding `` symmetric '' propagators . on the quantum - optical side , we introduce what we call a multitime wigner representation generalising weyl's ordering to multitime averages of heisenberg operators . the necessary `` flavour '' of causal transformation can then be borrowed from . the link to time - normal ordering then rests on a fundamental connection between commutators of heisenberg operators at different times and the response properties of quantum fields . all our results apply to nonlinear quantum systems with , to a large extent , arbitrary interaction hamiltonians . the existence of symmetric wick theorems for the change of time ordering to symmetric ordering may be of independent interest for a wider audience than that for which this paper is primarily intended . it is for this wider audience that in the appendix we present the generalisation of symmetric wick theorems to arbitrary quantised fields . the glue that holds the paper together is the concept of time - symmetric ordering of heisenberg operators . continuing the analogy with , this ordering replaces the time - normal operator ordering on which was built . on the quantum - field - theoretical side , we express `` symmetric '' propagators by the retarded green's function . the induced restructuring of `` symmetric '' perel - keldysh series _ automatically _ turns them into perturbative series for quantum averages of time - symmetrically ordered products of heisenberg operators . on the quantum - optical side , time - symmetric ordering appears as a natural generalisation of the conventional symmetric ordering to heisenberg operators . quantum field theory and quantum optics are not equal here . time - symmetric ordering emerges in quantum field theory as a fully specified formal concept . at the same time , there does not seem to be any way of guessing , save deriving , this concept from within conventional phase - space techniques without a reference to schwinger's closed - time - loop formalism . the paper is organised as follows . in sections [ ch : symm ] and [ ch : wick ] , two formal concepts are prepared for later use . in section [ ch : symm ] we define the time - symmetric ordering of heisenberg operators and prove its most important properties : reality , continuity , and the fact that for free fields it reduces to conventional symmetric ordering . in section [ ch : wick ] we prove `` symmetric wick theorems '' generalising wick's theorem proper from normal to symmetric ordering . in section [ ch : keldysh ] , we introduce a functional framework and derive closed perturbative relations with symmetric ordering of operators . the multitime wigner representation of an arbitrary bosonic system is formulated in section [ ch : commresp ] .
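before turning to the formalism , it may help to see what a truncated - wigner calculation of the kind surveyed in the introduction looks like in practice . the following is a minimal sketch of our own ( not taken from ref . ) : it assumes a single kerr mode with hamiltonian h = w0 a^dag a + ( kappa / 2 ) a^dag^2 a^2 , the standard truncated - wigner drift i d(alpha)/dt = [ w0 + kappa ( |alpha|^2 - 1 ) ] alpha , a coherent initial state sampled from its wigner function , and plain euler integration ; the weyl correction of one half converts the sampled moments into photon numbers .

```python
import numpy as np

# Truncated-Wigner sketch for a single Kerr mode (our illustration):
#   H = w0 * adag a + (kappa/2) * adag^2 a^2,   hbar = 1.
# Assumed drift (standard truncated-Wigner result for this H):
#   i d(alpha)/dt = [w0 + kappa * (|alpha|^2 - 1)] * alpha.

rng = np.random.default_rng(1)
w0, kappa, alpha0 = 1.0, 0.05, 2.0
ntraj, dt, nsteps = 20000, 1e-3, 2000

# Sample the Wigner function of the coherent state |alpha0>:
# a Gaussian with Var(Re alpha) = Var(Im alpha) = 1/4.
alpha = alpha0 + 0.5 * (rng.standard_normal(ntraj) + 1j * rng.standard_normal(ntraj))

for _ in range(nsteps):                      # plain Euler step, enough for a demo
    alpha += -1j * dt * (w0 + kappa * (np.abs(alpha) ** 2 - 1.0)) * alpha

# Trajectory moments are symmetric (Weyl) averages; subtract the Weyl correction:
print("<adag a> ~", np.mean(np.abs(alpha) ** 2) - 0.5)    # ~ |alpha0|^2 = 4
print("|<a>|    ~", abs(np.mean(alpha)))                   # reduced by Kerr phase diffusion
```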
in section [ ch : dynaph ]we construct a representation of time - symmetric operator averages by phase - space path integrals .section [ ch : seccaus ] presents a discussion of causal regularisation needed to make our analyses mathematically defined .the problem of reordering heisenberg operators is discussed in section [ ch : order ] . in the appendix, we extend symmetric wick theorems to arbitrary quantised fields .we introduce the common pair of bosonic creation and annihilation operators , =1 , \end{aligned}% % \nonumber \label{eq : osccomm } \end{gathered}\]]and the usual free oscillator hamiltonian , use units where .the symmetric ( weyl ) ordering of the creation and annihilation operators is conveniently defined in terms of the operator - valued characteristic function that , by definition simplicity , we consider a single - mode case ; multi - mode cases are recovered formally by attaching mode index to all quantities , cf .section [ ch : mod ] and the appendix .operator orderings act mode - wise , so that extension to many modes is straightforward . the time - dependent creation and annihilation operators are defined as , , they are heisenberg field operators with respect to the free hamiltonian , e.g. , , \end{aligned } \label{eq:60a } % \nonumber % \z \end{aligned}\]]where is in the interaction picture , shall term free - field , or interaction - picture , operators , because this is the role they play in real problems with interactions .symmetric ordering is extended to free fields postulating that , , adding an arbitrary , and , in general , time - dependent , interaction term to , us to introduce the pair of the heisenberg fields operators proper , the evolution operator is defined through the schrdinger equation , picture is introduced in the usual way by splitting the evolution operator in two factors , is the evolution operator with respect to , obeys the equation , being in the interaction picture , the reader should have noticed that we adhere to certain notational conventions .calligraphic letters are reserved for evolution and heisenberg operators .plain letters denote schrdinger and interaction - picture operators , which are in turn distinguished by the absence or presence of the time argument , respectively .the schrdinger operator becomes in the interaction picture and in the heisenberg picture , similarly for other operators .we note that , with unspecified interaction , heisenberg operators are in essence placeholders . by specifying interaction one _ipso facto _ specifies all heisenberg operators . throughout the paperwe make extensive use of the concept of _ time - symmetric _ product of the heisenberg operators . we define it by the following recursive procedure . for a single operatorthe ordering is irrelevant , , if } ] stands for the anticommutator , + = { \hat{\mathcal x}}{\hat{\mathcal y}}+ { \hat{\mathcal y}}{\hat{\mathcal x } } .\end{aligned }\label{eq:3a } % \nonumber % \z\end{aligned}\]]this allows one to built a time - symmetric product of any number of factors , by applying ( [ eq : twrec ] ) in the order of _ decreasing time arguments _ ( _ increasing _ in is a typo ) . 
the resultare nested anticommutators , ( with ) + , { \hat{\mathcal x}}_3(t_3 ) \big ] _ + , \cdots,{\hat{\mathcal x}}_n(t_n ) \big ] _ + , \\t_1>t_2>\cdots > t_n .\hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq:78js } % \nonumber % \z \end{gathered}\]]equations ( [ eq : twrec ] ) and ( [ eq:78js ] ) imply that all time arguments in a time - symmetric product are different .extension to coinciding time arguments may be given by continuity , see below .for two factors we find plain symmetrised combinations , , \\{ \cal t}^w\!{\hat{\mathcal a}}(t_1){\hat{\mathcal a}}^{\dag}(t_2 ) & = \frac{1}{2}\big [ { \hat{\mathcal a}}(t_1){\hat{\mathcal a}}^{\dag}(t_2)+{\hat{\mathcal a}}^{\dag}(t_2){\hat{\mathcal a}}(t_1 ) \big ] , \\{ \cal t}^w\!{\hat{\mathcal a}}^{\dag}(t_1){\hat{\mathcal a}}^{\dag}(t_2 ) & = \frac{1}{2}\big [ { \hat{\mathcal a}}^{\dag}(t_1){\hat{\mathcal a}}^{\dag}(t_2)+{\hat{\mathcal a}}^{\dag}(t_2){\hat{\mathcal a}}^{\dag}(t_1 ) \big ] , \end{aligned}% % \nonumber \label{eq:12 } \end{gathered}\]]where the order of times does not matter . for three and more factors it already does ,e.g. , + , { { \hat{\mathcal a}}}^{\dag}(t_3 ) \big ] _ + \\ = \frac{1}{4 } \big [ { { \hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}^{\dag}(t_3 ) + \settowidth{\crossitoutwidth}{\ensuremath{{{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_2)}}% \settoheight{\crossitoutheight}{\ensuremath{{{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_2)}}% \ensuremath{{{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_2)}\hspace{-1\crossitoutwidth}% \rule[0.253\crossitoutheight]{1\crossitoutwidth}{0.03em } + { { \hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}(t_2)+ { { \hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}^{\dag}(t_3)\\ + \settowidth{\crossitoutwidth}{\ensuremath{{{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_1)}}% \settoheight{\crossitoutheight}{\ensuremath{{{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_1)}}% \ensuremath{{{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_1)}\hspace{-1\crossitoutwidth}% \rule[0.253\crossitoutheight]{1\crossitoutwidth}{0.03em } + { { \hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}(t_1 ) \big ] , \ \t_1>t_2>t_3 , \label{eq:123 } % \nonumber % \z \end{gathered}\ ] ] as opposed to + , { { \hat{\mathcal a}}}(t_2 ) \big ] _ + \\ = \frac{1}{4 } \big [ \settowidth{\crossitoutwidth}{\ensuremath{{{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}^{\dag}(t_3)}}% \settoheight{\crossitoutheight}{\ensuremath{{{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}^{\dag}(t_3)}}% \ensuremath{{{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}^{\dag}(t_3)}\hspace{-1\crossitoutwidth}% \rule[0.253\crossitoutheight]{1\crossitoutwidth}{0.03em } + { { \hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_2 ) + { { \hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}(t_2 ) + { { \hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}(t_1){{\hat{\mathcal a}}}^{\dag}(t_3 ) \\ + { { \hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_1 ) + \settowidth{\crossitoutwidth}{\ensuremath{{{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}(t_1)}}% 
\settoheight{\crossitoutheight}{\ensuremath{{{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}(t_1)}}% \ensuremath{{{\hat{\mathcal a}}}^{\dag}(t_3){{\hat{\mathcal a}}}(t_2){{\hat{\mathcal a}}}(t_1)}\hspace{-1\crossitoutwidth}% \rule[0.253\crossitoutheight]{1\crossitoutwidth}{0.03em } \big ] , \ \ t_1>t_3>t_2 .\label{eq:132 } % \nonumber % \z \end{gathered}\ ] ] in these formulae , the crossed - out terms are those absent in time - symmetric operator products but occuring in symmetrised ones .equation ( [ eq:78js ] ) expresses a time - symmetrically ordered product of factors as a sum of products of field operators which differ in the order of factors . in all these productsthe time arguments first increase then decrease ( whereas in crossed - out terms in ( [ eq:123 ] ) and ( [ eq:132 ] ) the earliest operator is in the middle ) .this kind of time - ordered structure is characteristic of schwinger s closed - time - loop formalism ( see refs . and sections [ ch : swschwinger ] and [ ch : keldysh ] below ) , so we shall talk of _ schwinger ( time ) sequences _ and of the _ schwinger order _ of factors in _schwinger products_. it is easy to see that all possible schwinger products appear in the sum .indeed , the earliest time in a schwinger sequence can only occur either on the left or on the right .that recursions ( [ eq : twrec ] ) generate all schwinger products can then be shown by induction , starting from pair products for which this is obvious .we may thus give an alternative definition of the time - symmetric product : this definition of the time - symmetric product is illustrated in fig .[ fig : tw ] , where the schwinger products comprising it are visualised as distinct ways of placing the operators on the so - called c - contour .the latter travels from to ( the forward branch ) and then back to ( reverse branch ) .the operators are imagined as positioned on the c - contour ; each operator may be either on the forward or reverse branch , cf .[ fig : tw ] .the order of operators on the c - contour determines the order of factors in a particular schwinger product ( from right to left , to match eq .( [ eq : tptm ] ) below ) . definition of a time - symmetric product as a sum of all schwinger products makes obvious the reality property , ^{\dag}= { \cal t}^w\!(\cdots)^{\dag } .\end{aligned}% % \nonumber \label{eq : twconj } \end{gathered}\]]this follows from the trivial fact that if a particular sequence of times is in a schwinger order , the reverse sequence is also in a schwinger order . ) .this particular example implies operators , , with .the operators are imagined as positioned on the c - contour travelling from to ( the forward branch ) and then back to ( reverse branch ) .each operator may be either on the forward or on the reverse branch .the order of operators on the c - contour determines the order of factors in a particular schwinger product ( from right to left , to match eq .( [ eq : tptm ] ) ) .so , the way of placing operators shown by dark circles corresponds to the product , where time arguments are omitted for brevity .placing of the latest time ( ) does not affect the order of operators , so we put it arbitrarily on the forward branch .with all alternative placings of operators ( light circles ) we recover distinct schwinger products .for the operator placing shown by dark circles , the quantities occuring in equation ( [ eq : quadr ] ) are , , , , and . 
] furthermore , visualisation in fig .[ fig : tw ] helps us to prove the most important property of the time - symmetric products : their continuity at coinciding time arguments .assume that a pair of times can change their mutual order but both stay either earlier or later in respect of all other times , cf .[ fig : tw ] . assume also that placing of all times on the c - contour except is fixed .if are not the latest times , we are left with four ways of placing them on the c - contour resulting , up to the overall coefficient , in four terms ( with ) , the operator product comprises all operator factors with time arguments larger than , comprises operators with time arguments less than placed on the forward branch of the c - contour , and comprises operators with time arguments less than placed on the reverse branch of the c - contour ( see fig .[ fig : tw ] ) .then , { { \hat{\mathcal p } } } _ > { { \hat{\mathcal p}}}_{\text{r } } - % \\ - { { \hat{\mathcal p}}}_{\text{l } } { { \hat{\mathcal p } } } _ >\big [ { \hat{\mathcal x}}(t),{\hat{\mathcal y}}(t ) \big ] { { \hat{\mathcal p}}}_{\text{r } } = 0 , \end{aligned}\]]because the commutator here may only be a c - number ( or zero ) ; this holds also in a multimode case . if are the latest times , then and the freedom of placing reduces to two terms , { { \hat{\mathcal p}}}_{\text{r } } , \end{aligned}% % \nonumber % \eqlabel { } \end{gathered}\]]which is independent of the order of the times . continuity of the time - symmetric products has thus been proven .note that this spares us the necessity to specify their values at coinciding time arguments .we now prove that , when applied to free fields , the time - symmetric ordering is just a fancy way of redefining conventional symmetric ordering defined by eqs .( [ eq : wdef ] ) and ( [ eq:53hq ] ) , that the order of times here in fact does not matter . at first glance , equations ( [ eq:123 ] ) and ( [ eq:132 ] )serve as counter - examples . from eq .( [ eq:123 ] ) we get , , \ \t_1>t_2>t_3 , \label{eq : aaad } \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}\end{gathered}\]]as contrasted by the formula obtained from eq .( [ eq:132 ] ) , ,\ \ t_1>t_3>t_2 .\label{eq : aada } \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}\end{gathered}\]]neither result coincides with the symmetrically - ordered product following from ( [ eq : wdef ] ) , .% \eqlabel { } \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}\end{gathered}\]]however by using the commutational relation ( [ eq : osccomm ] ) it is easy to verify that = \frac{1}{4 } \left [ \hat a^2\hat a^{\dag}+\hat a^{\dag}\hat a^2 + 2\hat a\hat a^{\dag}\hat a \right]\\ = \frac{1}{3 } \left [ \hat a^2\hat a^{\dag}+\hat a^{\dag}\hat a^2 + \hat a\hat a^{\dag}\hat a \right ] = w\hat a^2\hat a^{\dag}. % \eqlabel { } \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}\end{gathered}\ ] ] to prove ( [ eq : twasw ] ) in general we note that the recursion procedure defining the time - symmetric product may be started from the unity operator , , + = { \hat{\mathcal a}}(t ) , \\ { \cal t}^w\!{\hat{\mathcal a}}^{\dag}(t ) & = \frac{1}{2 } \big [ \hat\openone,{\hat{\mathcal a}}^{\dag}(t ) \big]_+ = { \hat{\mathcal a}}^{\dag}(t ) , \end{aligned}% % \nonumber \label{eq:1a } \end{gathered}\]]in obvious agreement with ( [ eq : tw1 ] ) .furthermore , we can replace in ( [ eq : twone ] ) by given by eq .( [ eq : chidef ] ) and then apply the recursion ( [ eq : twrec ] ) starting from the latest time . 
with the end limit this replacement does not affect the resulting time - symmetric products .so , assuming that , +|_{\eta = 0 } , \\ { \cal t}^w\!\hat a(t)\hat a^{\dag}(t ' ) = \frac{1}{4}\big [ \big [ \chi ( \eta,\eta^*),\hat a(t ) \big ] _+ , \hat a^{\dag}(t ' ) \big ] _ + |_{\eta = 0 } . \end{gathered } \label{eq:4a } % \nonumber % \z\end{gathered}\]]similar relations hold for larger number of factors .we now transform them using the standard phase - space correspondences . recalling that find , combining these relations and multiplying them by suitable time exponents we obtain + = \text{e}^{-i\omega_0t}\frac{\partial \chi ( \eta,\eta^*)}{\partial \eta^ * } , \\ \frac{1}{2}\left [ \chi ( \eta,\eta^*),\hat a^{\dag}(t ) \right]_+ = \text{e}^{i\omega_0t}\frac{\partial \chi ( \eta,\eta^*)}{\partial \eta } .\end{gathered}% % \nonumber % \eqlabel { } \end{gathered}\]]this allows us to rewrite eqs .( [ eq:4a ] ) as an arbitrary number of factors we have , virtue of ( [ eq : wdef ] ) this is another form of ( [ eq : twasw ] ) . to conclude this paragraph we note that eq .( [ eq : twasw ] ) is not as straightforward as the corresponding relation for the normal ordering : setting the free - field operators in the normal order _ ipso facto _ sets the creation and annihilation operators in the normal order , while in eq .( [ eq : twasw ] ) the symmetric order is only recoved on rearranging the operators using ( [ eq : osccomm ] ) .since the commutator contains dynamical information one may say that relation ( [ eq : tnasn ] ) is purely kinematical while equation ( [ eq : twasw ] ) is dynamical .the time - symmetric ordering of free - field operators delivers us `` the best of both worlds . '' on the one hand , it is just symmetric ordering in disguise . on the other hand , the schwinger products of which itis build are consistent with such `` big guns '' as schwinger s closed - time - loop formalism .the time - symmetric ordering of the free - field operators correctly `` guesses '' certain fundamental structures underlying the interacting quantum field theory , making it an irreplaceable bridging concept when deriving perturbative relations with the symmetric ordering . in practice , it is convenient to eliminate symmetric ordering altogether by reexpressing all information about the initial state of the system directly in terms of time - symmetric averages .we introduce the wigner distribution in the standard way by the quantum averaging is over the initial state of the system .the symmetric averages characterising the inital state may then be written as making use of this relation , the result of averaging eq .( [ eq : twasw ] ) takes the form , this is nothing but stochastic moments of the random c - number field , which is specified by its value at ( initial condition ) being distributed in accordance with the probability distribution . for nonpositive interpretation holds by replacing probability by quasi - probability .the time ordering places operators from right to left in the order of increasing time arguments , e.g. , similarly for a larger number of factors . by definition ,bosonic operators under the time ordering commute .the notation as distinct from used in emphasises that the -ordering implies a different specification for coinciding time arguments . in specification was by normal ordering , while , not quite unexpectedly , in this paper we imply symmetric ordering . 
in ,such specification was enforced by the _causal regularisation _ which was part of the dynamical approach .a similar ( but different ) regularisation scheme is part of the dynamical approach in this paper , cf .section [ ch : seccaus ] . till that sectionwe suppress all specifications related to operator orderings for coinciding time arguments .the notation should thus be regarded a placeholder for the concept full meaning of which will be made clear in section [ ch : seccaus ] .we now prove a generalisation of wick s theorem to the symmetric ordering . as a major simplification ,results of the previous section allow us to formulate and prove the _ symmetric wick theorem _ as a relation between the time - ordered and time - symmetrically ordered operators products : the symmetric contraction is defined , in obvious analogy to wick s theorem proper , as can use here the symmetric ordering in place of the time - symmetric one , but we follow our general principle of eliminating the -ordering from considerations .the cumbersome notation we use for the symmetric contraction will be explained in section [ ch : swschwinger ] below . for the oscillator , = \varepsilon ( t ) \text{e}^{-i\omega_0 t } , \end{aligned}% % \nonumber \label{eq : scont } \end{gathered}\]]where is the odd stepfunction , .\end{aligned}% % \nonumber % \eqlabel { } \end{gathered}\ ] ] the symmetric wick theorem obviously holds for one and two operators ; in the latter case it coincides with eq .( [ eq : gwpp ] ) .a general proof follows by induction .assume the symmetric wick theorem has been proven up to a certain number of factors and consider a time - ordered product with one `` spare '' factor .we choose this spare factor as the earliest one , which is thus on the right of the time ordered product .let it be . the whole time - ordered productmay then be written as we have introduced a notation for the product without the `` spare '' factor wish to have the spare factor symmetrically on either side of so we write + + \frac{1}{2 } \big [ { \hat{\mathcal p}},\hat a(t ) \big ] .\end{aligned}% % \nonumber \label{eq : ca } \end{gathered}\]]the commutator is easily calculated = \frac{1}{2 } \sum_{k=1}^m{\hat{\mathcal p}}'_k \big [ \hat a^{\dag}(t_k),\hat a(t ) \big ] , \end{aligned}% % \nonumber \label{eq : comm } \end{gathered}\]]where is without the factor ; note that remain time - ordered . for , = -i g^w_{++}(t - t_k ) , \end{aligned}% % \nonumber \label{eq : contr } \end{gathered}\]]and we find + -i \sum_{k=1}^m{\hat{\mathcal p}}'_k g^w_{++}(t - t_k ) .\label{eq : indstep } \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}\end{gathered}\]]we now use the induction assumption and expand and all according to the symmetric wick theorem . by virtue of ( [ eq : twrec ] ) the anticommutator in ( [ eq : indstep ] )is then a sum of time - symmetric products .it gives us the sum of terms where contractions exclude .the sum in ( [ eq : indstep ] ) deliveres the terms where is involved in a contraction .it is straightforward to verify that all terms reguired by the symmetric wick theorem are recovered this way , each occuring only once .the case when the earliest term is is treated by simply swapping in the above .equation ( [ eq : contr ] ) is replaced by & = -i g^w_{++}(t_k - t ) , & t < t_k \end{aligned}% % \nonumber % \eqlabel { } \end{gathered}\]]which indeed holds , cf .( [ eq : scont ] ) . 
the symmetric wick theorem has thus been proven .we note that the regularisation we mentioned after eq .( [ eq : tp ] ) results , in particular , in without mathematical ambiguity .hence the caveat of wick s theorem , that `` no contractions should occur between operators with equal time arguments , '' also applies to the symmetric wick theorem . ) . ]if the initial state of the system is not vacuum , closed equations of motion can not be written in terms of averages of the time - ordered operator products only , cf ., e.g. , ref .the necessary type of ordering was introduced , rather indirectly , in schwinger s seminal paper , and later applied to developing diagram techniques for nonequilibrium nonrelativistic quantum problems by konstantinov and perel .extension to relativistic problems was given by keldysh under whose name the approach became known .we note that the concept of the _ closed time loop _ and the related operator ordering is much more general than the keldysh diagram techniques commonly associated with it .as was shown in paper , it underlies such a phase - space concept as the time - normal ordering of glauber and kelly and kleiner . in this paperwe investigate the relation between the closed - time - loop and the symmetric orderings .the closed - time - loop operator ordering is commonly defined as an ordering on the so - called c - contour which can be seen in figs .[ fig : tw ] and [ fig : c ] .the c - contour travels from to ( forward branch ) and then back to ( reverse branch ) .the operators are formally assigned an additional binary argument which distinguishes operators on the forward branch from those on the reverse branch .`` earlier '' and `` later '' are then generalised to match the travelling rule along the c - contour .all operators on the forward branch are by definition `` earlier '' than those on the reverse branch and go to the right . among themselves , the operators on the forward branch are set from right to left in the order of increasing time arguments ( -ordered ) , while those on the reverse branch are set from right to left in the order of decreasing time arguments ( -ordered ) . using that hermitianconjugation inverts the time order of factors , the -ordering may be defined as ^{\dag}. \end{aligned}% % \nonumber \label{eq : tpconj } \end{gathered}\]]the closed - time - loop - ordered operator product may thus be alternatively defined as a _double - time - ordered _ product , are two independent operator products . in terms of the c - contour , the operators in are visualised as positioned on the forward branch of the c - contour , and those in the reverse branch .factors in a double - time - ordered product are always arranged in a schwinger sequence , cf .the opening paragraph of section [ ch : gentw ] ; for specifications pertaining to coinciding time arguments we refer the reader to section [ ch : seccaus ] . 
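as a quick sanity check of two statements made above — the free - field identity ( 1/4 ) [ a^2 a^dag + a^dag a^2 + 2 a a^dag a ] = ( 1/3 ) [ a^2 a^dag + a^dag a^2 + a a^dag a ] , and the fact that the two - operator symmetric contraction , i.e. the difference between the time - ordered and the time - symmetric pair product , is a c - number — the following sketch works in a truncated fock space . it is our own illustration ( numpy , generic operator names ) ; the normalisation of the odd step function used below , 1/2 sgn ( t ) , is our convention and need not coincide with the paper's .

```python
import numpy as np

# Numerical sanity checks in a truncated Fock space (our illustration).
N = 30
a = np.diag(np.sqrt(np.arange(1, N)), 1)    # annihilation operator, <n|a|n+1> = sqrt(n+1)
ad = a.conj().T
I = np.eye(N)
M = N - 5           # compare matrix elements well below the cutoff, where [a, adag] = 1 holds

# (i) free-field identity: nested anticommutators reduce to the plain Weyl symmetrisation
nested = 0.25 * (a @ a @ ad + ad @ a @ a + 2 * a @ ad @ a)
weyl = (a @ a @ ad + ad @ a @ a + a @ ad @ a) / 3.0
print(np.allclose(nested[:M, :M], weyl[:M, :M]))           # True

# (ii) the symmetric contraction is a c-number: with a(t) = a * exp(-i w0 t),
#      T a(t) adag(t') - T^W a(t) adag(t') = (1/2) sgn(t - t') exp(-i w0 (t - t')) * identity
w0, t, tp = 1.0, 0.7, 0.2                                   # here t > t'
phase = np.exp(-1j * w0 * (t - tp))
t_ord = phase * (a @ ad)                                    # time-ordered (later operator left)
t_sym = phase * 0.5 * (a @ ad + ad @ a)                     # time-symmetric pair product
print(np.allclose((t_ord - t_sym)[:M, :M], 0.5 * phase * I[:M, :M]))   # True
```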
in this paperwe employ eq .( [ eq : tptm ] ) as a primary definition and use the concept of closed time loop only for illustration purposes .extension of the symmetric wick theorem to the double time ordering employs the linear order of the c - contour .a formal generalisation of the symmetric wick theorem to an arbitrary linearly ordered set may be found in the appendix .hori s form of the symmetric wick theorem given by eq .( [ eq : dwc1 ] ) below , of which the derivation is our goal , is in fact a particular case of the general structural relation ( [ eq:52hp ] ) .nonetheless we believe that the direct proof we present here will benefit the reader .the symmetric wick theorem is generalised to the double - time ordering by making the symmetric contraction dependent on the positioning of the contracted pair in the double - time - ordered product ( or , which is the same , on the c - contour , cf .[ fig : c ] ) . as a resultwe recover four contractions : in ref . , there are four nonzero symmetric contractions , not three . to make the genesis of the contractions clearer we retained the orderings also where they are redundant , e.g. , .the contraction is exactly that defined by ( [ eq : gwpp ] ) explaining the notation . as c - number kernels , and coupled by hermitian conjugation , while and are hermitian : ^ * , \\ -i g^w_{+-}(t - t ' ) & = \big[-i g^{w}_{+-}(t'-t)\big]^ * , \\-i g^w_{-+}(t - t ' ) & = \big[-i g^{w}_{-+}(t'-t)\big]^ * .\end{aligned}% % \nonumber \label{eq : gpmconj } \end{gathered}\]]in obtaining this we used ( [ eq : twconj ] ) as well as the similar relation for the double - time ordering , ^{\dag}= t^w_- { \hat{\mathcal p}}_+^{\dag}\,t^w_+ { \hat{\mathcal p}}_-^{\dag } , \end{aligned}% % \nonumber \label{eq : tptmconj } \end{gathered}\]]cf .( [ eq : tpconj ] ) . for the oscillator , .( [ eq : scont ] ). with the contractions defined by ( [ eq : gwpm ] ) the symmetric wick theorem reads : this includes the time - ordered products as a special case .importantly , the generalisation to the double - time - ordering makes the symmetric wick theorem invariant under hermitian conjugation .this follows from eqs .( [ eq : twconj ] ) and ( [ eq : tptmconj ] ) , supplemented by the observation that hermitian conjugation turns every schwinger time sequence into a reversed sequence , so that arguments of all contractions must change sign . equations ( [ eq : gpmconj ] ) show that this is exactly the effect the compex conjugation has on the contractions , including the replacement .it is therefore only necessary to generalise the inductive step in the above proof to the case when the `` spare '' operator is the earliest one under the ordering .again , let us firstly assume that this operator is . 
equation ( [ eq : ca ] ) holds with now being a double - time - ordered product , ( [ eq : comm ] ) applies as well , by assuming that `` knows '' whether originates from or from .the critical observation is that the contractions depend in fact only on the visual order of times , so that , remembering that , = -i g^w_{++}(t - t_k ) = -i g^w_{+-}(t - t_k ) .\end{aligned}% % \nonumber \label{eq:2contr } \end{gathered}\]]with this observation equation ( [ eq : indstep ] ) is replaced by + \\-i \sum_{t_k\in{\hat{\mathcal p}}_- } { \hat{\mathcal p}}'_k g^w_{+-}(t - t_k ) -i \sum_{t_k\in{\hat{\mathcal p}}_+ } { \hat{\mathcal p}}'_k g^w_{++}(t - t_k ) , \label{eq : indstepd } \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}\end{gathered}\]]where means summation over all such that originates from , and similarlly for .the case when the earliest operator under the -ordering is is treated similarly , the `` critical observation '' in this case being that , for , = -i g^w_{++}(t_k - t ) = -i g^w_{-+}(t_k - t ) .\end{aligned}% % \nonumber \label{eq:2contrx } \end{gathered}\]]this completes the inductive step of the proof .the symmetric wick theorem has thus also been proven for the double time ordering .one technical tool in ref . was hori s form of wick s theorem expressing it as an application of a functional differential operator , cf .18 ) in . except for the redefinition of contractions , the symmetric wick theorem coincides with wick s theorem properthis makes the functional form of wick s theorem equally applicable to the symmetric wick theorem .all we need is to redefine the differential operator given by eq .( 19 ) in as \\ = -i \sum_{c , c'=\pm } \int dt dt ' g^w_{cc'}(t - t ' ) \frac{\delta^2}{\delta a_{c}(t)\delta { \bar a}_{c'}(t ' ) } , \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq : dwc1 } % \nonumber % \z \end{gathered}\]]where are four independent c - number fields .square brackets signify functionals .the symmetric contractions were defined by eqs .( [ eq : gwpm ] ) . with these replacementswe find hori s form of the symmetric wick theorem , \,t_+p_+\big [ \hat a,\hat a^{\dag } \big ] \\ = { \cal t}^w\!\bigg \ { \exp % \rbracket{\bigg } { \delta ^w_c\bigg [ \frac{\delta}{\delta a_+ } , \frac{\delta}{\delta \bar a_+ } , \frac{\delta}{\delta a_- } , \frac{\delta}{\delta \bar a_- } \bigg ] % } \\\times p_-\big [ a_- , \bar a_- \big ] p_+\big [ a_+ , \bar a_+ \big ] \big| _ { a \to { \hat a } } \bigg \}\ .\hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq : horiw } % \nonumber % \z \end{gathered}\]]here , ] .the indices are understood modulo , ; in other words , we consider a ring rather than a chain . as usualwe break hamiltonian ( [ eq : bhh ] ) into the `` free '' and `` interaction '' hamiltonians , free - field and the heisenberg operators , and are defined in detailed analogy to section [ ch : bas ] .we exclude hopping from the free hamiltonian making our reasoning most easlily adaptable to arbitrary multi - mode bosonic systems . 
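for concreteness , here is a minimal matrix realisation of the bose - hubbard ring just described — our own sketch , not part of the formalism . it assumes the standard form h = sum_k [ w0 a_k^dag a_k + ( kappa / 2 ) a_k^dag^2 a_k^2 - j ( a_k^dag a_{k+1} + a_{k+1}^dag a_k ) ] with the site index understood modulo the ring length , and a per - site fock cutoff ; it is intended only to make the mode - wise operator algebra explicit .

```python
import numpy as np
from functools import reduce

# Heavily truncated matrix realisation of a Bose-Hubbard ring (our sketch, assuming
# the standard Hamiltonian stated in the lead-in; parameters are arbitrary).

nsites, nmax = 3, 3                                 # ring length and per-site Fock cutoff
w0, kappa, J = 1.0, 0.1, 0.05

a1 = np.diag(np.sqrt(np.arange(1, nmax + 1)), 1)    # single-site annihilation operator
id1 = np.eye(nmax + 1)

def site_op(op, k):
    """Embed a single-site operator op at site k of the ring."""
    ops = [id1] * nsites
    ops[k] = op
    return reduce(np.kron, ops)

A = [site_op(a1, k) for k in range(nsites)]         # a_k on the full Hilbert space

H = sum(w0 * Ak.conj().T @ Ak
        + 0.5 * kappa * Ak.conj().T @ Ak.conj().T @ Ak @ Ak
        for Ak in A)
for k in range(nsites):                             # hopping on the ring (periodic)
    kp = (k + 1) % nsites
    H -= J * (A[k].conj().T @ A[kp] + A[kp].conj().T @ A[k])

print(H.shape, np.allclose(H, H.conj().T))          # ((nmax+1)**nsites, ...), Hermitian: True
```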
in particularthis allows the analyses in the previous two sections to be generalised simply by applying them mode - wise .we now introduce the formal framework that will serve us throughout the rest of the paper .a brief historical introduction into the schwinger - perel - keldysh closed - time - loop formalism was given at the beginning of section [ ch : swschwinger ] .the basic quantity in this formalism is a double - time - ordered green function are arbitrary integers .specifications of eq .( [ eq : gf ] ) for equal time arguments are postponed till section [ ch : seccaus ] , cf . the remarks at the end of section [ ch : swdyson ] . a convenient interface to the whole assemblage of functions ( [ eq : gf ] )is their generating , or characteristic , functional = \big \langle t^w_-\exp \big ( -i\bar{{\mbox{\rm\boldmath}}}_- { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}- i{\mbox{\rm\boldmath}}_- { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\dag } \big ) \\\times t^w_+\exp \big ( i\bar{{\mbox{\rm\boldmath}}}_+ { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength } + i{\mbox{\rm\boldmath}}_+ { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\dag } \big ) \big \rangle , \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq : xiw } % \nonumber % \z \end{gathered}\]]where are four arbitrary c - number functions per mode .notationally we treat them , as well as the mode operators , as vectors .we follow the sign conventions of ref . . to emphasise the structural side of our analyseswe employ a condensed notation , , and are arbitrary functions and is an arbitrary kernel ; for q - numbers the order of factors matters .the main advantage of the functional framework is that the _ universal structural _part of the perturbative calculations can be expressed as a small set of _ closed perturbative relations_. this was demonstrated in for the normal ordering , and will be demonstrated in this paper for the symmetric ordering .we wish to construct a perturbative formula the generating functional ( [ eq : xiw ] ) . by the same means as equation ( 24 )was found in ref . 
we rewrite eq .( [ eq : xiw ] ) in the interaction picture as \\ = \big\langle t_-^w \exp\left ( -i\bar{{\mbox{\rm\boldmath}}}_- \hat{{\mbox{\rm\boldmath } } } -i{\mbox{\rm\boldmath}}_- \hat{{\mbox{\rm\boldmath}}}^{\dag } + il_{\text{i}}^{w } \big [ \hat{{\mbox{\rm\boldmath}}},\hat{{\mbox{\rm\boldmath}}}^{\dag } \big ] \right ) \\\times t_+^w \exp\left ( i\bar{{\mbox{\rm\boldmath}}}_+ \hat{{\mbox{\rm\boldmath}}}+ i{\mbox{\rm\boldmath}}_+ \hat{{\mbox{\rm\boldmath}}}^{\dag } - i l_{\text{i}}^w \big [ \hat{{\mbox{\rm\boldmath}}},\hat{{\mbox{\rm\boldmath}}}^{\dag } \big ] \right ) \big\rangle .\hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq : xipert } \end{gathered}\]]here , is a functional of two arguments , = \int dt\ , h_{\text{i}}^w \big ( \hat{{\mbox{\rm\boldmath}}}(t),\hat{{\mbox{\rm\boldmath}}}^{\dag}(t ) \big ) , \end{aligned}% % \nonumber \label{eq : sm } \end{gathered}\]]where the c - number _ function _ is the _ symmetric form _ of the interaction hamiltonian , last equation is written in the schrdinger picture .note that \big ) \end{aligned } \label{eq:33a } % \nonumber % \z\end{aligned}\]]is the interaction - picture s - matrix .equation ( [ eq : hwi ] ) is general and holds for any interaction . for the bose - hubbard chain , \\= w\sum_{k=1}^n \bigg [ \frac{\kappa } { 2 } \hat a^{\dag 2}_k\hat a_k^2 - \kappa\hat a^{\dag}_k\hat a_k + \frac { \kappa } { 4 } \\ - j\big ( \hat a_{k}^{\dag}\hat a_{k+1 } + \hat a_{k+1}^{\dag}\hat a_{k } \big ) \bigg ] \equiv \hat h^w_{\text{i } } , \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq : hwdef } % \nonumber % \z\end{gathered}\]]cf .( [ eq : bhh ] ) . in deriving thiswe have used that , for the harmonic oscillator , reason why the symmetric form of the interaction must be used is exactly that why in we had to use the normal form .the key property of the -ordering in ref . 
is that it does not affect the single - time normally ordered operator products , so that , in particular , have the same consistency between and : the order of operators under is _ fully _ decided by this ordering , the only way to prevent it from redefining the interaction is to put the latter into symmetric form .operators may be eliminated from eq .( [ eq : xipert ] ) altogether by , firstly , applying the symmetric wick theorem ( [ eq : horiw ] ) so as to bring the operator construct under the quantum averaging to a ( time-)symmetrically ordered form , and , secondly , using a multimode generalisation of eq .( [ eq : wmom ] ) to express the average .the result of this transformation reads = \int \frac{d^2 \alpha_{1 } } { \pi } \cdots \frac{d^2 \alpha_{n } } { \pi } w\big ( { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath}}^ * \big ) % \\\times \bigg \ { \exp%\sbracket{\bigg } { \delta ^w_c\bigg [ \frac{\delta}{\delta a_+ } , \frac{\delta}{\delta \bar a_+ } , \frac{\delta}{\delta a_- } , \frac{\delta}{\delta \bar a_- } % } \bigg ] \\\times \exp\big ( -i\bar{{\mbox{\rm\boldmath}}}_- { { \mbox{\rm\boldmath}}}_- -i{\mbox{\rm\boldmath}}_- \bar{{\mbox{\rm\boldmath}}}_- + i l_{\text{i}}^{w } \big [ { { \mbox{\rm\boldmath}}}_-,\bar{{\mbox{\rm\boldmath}}}_- \big ] + i\bar{{\mbox{\rm\boldmath}}}_+ { { \mbox{\rm\boldmath}}}_+ + i{\mbox{\rm\boldmath}}_+ \bar{{\mbox{\rm\boldmath}}}_+ - i l_{\text{i}}^w \big [ { { \mbox{\rm\boldmath}}}_+,\bar{{\mbox{\rm\boldmath}}}_+ \big ] \big ) \bigg \ } \big|_{a\to\alpha } .\label{eq:32a } % \eqlabel { } % \preprintmargin\end{gathered}\ ] ] in the above , is the multimode wigner function ( cf .( [ eq : wmom ] ) ) , and stands for the replacement , not to be confused by eq .( [ eq:32a ] ) , note that the functional differential operation within the curly brackets is applied under the condition that are four arbitrary c - number vector functions .these functions are then replaced pairwise by , which depend only on the initial condition .this replacement turns the _ functional expression _ in curly brackets into a _ function _ of the _ initial condition _ , making averaging over the wigner _ function _ meaningful .following kubo , we add a source term to the hamiltonian ( [ eq : bhh ] ) , .\end{aligned}% % \nonumber \label{eq : hpr } \end{gathered}\]]the heisenberg operators corresponding to will be denoted as .similar to ( [ eq : xiw ] ) , we introduce a characteristic functional for the double - time - ordered averages of : } \\ = \big \langle t_-^w\exp \big ( -i\bar{{\mbox{\rm\boldmath}}}_- { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength } ' -i { \mbox{\rm\boldmath}}_- { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\prime\dag } \big ) \\\times t_+^w\exp \big ( i\bar{{\mbox{\rm\boldmath}}}_+ { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength } ' + i{\mbox{\rm\boldmath}}_+ { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\prime\dag } \big ) \big \rangle .\hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq : xiws } \end{gathered}\]]the condensed notation we use here was defined by ( [ eq : cond ] ) . 
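to make the role of the c - number source concrete — and to anticipate the response argument of the next section — the following sketch checks numerically , for the free oscillator , that the linear response of < a ( t ) > to a weak source is fixed by the free - field commutator [ a ( t ) , a^dag ( t' ) ] = e^{ -i w0 ( t - t' ) } . the coupling convention - f ( t ) ( a^dag + a ) , the vacuum initial state and the fock cutoff are our own choices , made only for this illustration .

```python
import numpy as np

# Linear-response check for the free oscillator (our conventions):
#   H(t) = w0 * adag a - f(t) * (adag + a),  with f(t) a weak real c-number source.
# Commutator-based prediction (vacuum initial state):
#   <a(t)> = i * integral_0^t dt' exp(-i w0 (t - t')) f(t').

N, w0 = 15, 1.0
a = np.diag(np.sqrt(np.arange(1, N)), 1)
ad = a.conj().T
H0 = w0 * ad @ a

def step(H, psi, dt):
    """Exact step exp(-i H dt) |psi> via diagonalisation (H is Hermitian)."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

dt, nsteps, eps = 2e-3, 3000, 1e-3
f = lambda t: eps * np.exp(-((t - 2.0) / 0.5) ** 2)     # weak Gaussian pulse

psi = np.zeros(N, complex)
psi[0] = 1.0                                            # vacuum
resp, pred, acc = [], [], 0j
for n in range(nsteps):
    t = n * dt
    psi = step(H0 - f(t) * (ad + a), psi, dt)           # driven quantum evolution
    resp.append(psi.conj() @ a @ psi)                   # <a> at t + dt
    acc = np.exp(-1j * w0 * dt) * (acc + 1j * f(t) * dt)
    pred.append(acc)                                    # discretised commutator prediction

err = np.max(np.abs(np.array(resp) - np.array(pred)))
print(err)       # small compared with the response itself (~eps): pure discretisation error
```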
with a taste for the paradoxical ,the message of this section may be formulated as , _ the quantum response problem does not exist , because the information on the response properties of the system is already present in the commutators of the field operator _ .formally , this is expressed by the following relation between the characteristic functionals , \\= \xi^w \big [ { \mbox{\rm\boldmath}}_- + { \mbox{\rm\boldmath } } , \bar{{\mbox{\rm\boldmath}}}_- + { \mbox{\rm\boldmath}}^ * , { \mbox{\rm\boldmath}}_+ + { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath}}_+^{\dag}+ { \mbox{\rm\boldmath}}^ * \big ] .\hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq : xisxi } \end{gathered}\]]this formula is a trivial consequence of the closed perturbative relation ( [ eq : xipert ] ) ; it suffices to note that adding the source term to the hamiltonian results in the following replacement in eq .( [ eq:32a ] ) , \to % l_{\text{i}}^{w\prime } % \sbracket{\big } { % { \bm{a}},\bar{\bm{a } } % } = l_{\text{i}}^w \big [ { { \mbox{\rm\boldmath}}},\bar{{\mbox{\rm\boldmath } } } \big ] - \bar{{\mbox{\rm\boldmath}}}{\mbox{\rm\boldmath } } - { { \mbox{\rm\boldmath}}}{\mbox{\rm\boldmath}}^ * .\end{aligned } \label{eq:28a } % \nonumber % \z\end{aligned}\]]equation ( [ eq : xisxi ] ) shows that all information one needs in order to predict response of a quantum system to an external source is already present in the heisenberg field operators defined without the source . a closer inspection of eq .( [ eq : xisxi ] ) reveals that this information is `` stored '' in the commutators of the heisenberg operators .for details see refs . .similar to , the starting point of the ensuing derivation is understanding the causal structure of the symmetric contractions ( [ eq : gwpm ] ) .we begin with the observation that characterises the symmetric contraction as a green function of the c - number equation spare us future redefinitions we wrote ( [ eq : eqa ] ) as an equation for the phase - space trajectories .we also broke the total source into the given _external _ source and the _ self - action _ source accounting for the interactions . in general, is quasi - stochastic and field - dependent .note that here we work with a quasi - stochastic equation in the classical phase space , unlike in ref . where stochastic equations in the phase space of doubled dimension were considered . as a result , here the reader encounters complex - conjugate field pairs instead of independent field pairs as in ref . . while is obviously one of green functions of eq .( [ eq : eqa ] ) , it is wrong as far as natural causality is concerned . in phase space as well as in classical mechanics physics are associated with _ causal response _ defined through the _ retarded _ green function , quantity also obeys equation ( [ eq : gppasfg ] ) but , unlike , is retarded .causal solutions of eq .( [ eq : eqa ] ) are specified by replacing it by the integral equation , \end{aligned}% % \nonumber \label{eq : eqi } \end{gathered}\]]with the condition as . by analogy with ref . we expect the in - field to be defined by the multimode generalisation of eq .( [ eq : wmom ] ) .that is , the initial condition for eqs .( [ eq : eqa ] ) is defined by the initial state of the system .consistency of these assumptions and the way eq .( [ eq : eqa ] ) is linked to quantum averages remain subject to verification . as in ref . we proceed by noticing that all contractions may be expressed by . 
simply by trial and errorit is easy to get , \\g^w_{-+}(t ) & = - g^w_{+-}(t ) = \frac{1}{2 } \big[g_{\text{r}}(t ) - g_{\text{r}}^*(-t)\big ] .\end{aligned}% % \nonumber \label{eq : gwbyg } \end{gathered}\]]similar to , we are looking for variables which would bring given by ( [ eq : dwc1 ] ) to the form use here condensed notation defined by eq .( [ eq : cond ] ) . equation ( [ eq : causdwc ] ) takes us to the substitution relations imply that this condition the symmetric wick theorem ( [ eq : horiw ] ) remains valid .it is also consistent with the substitution ( [ eq:85jz ] ) , which in variables becomes , in the below we assume that eq . ( [ eq:86ka ] ) holds .continuing the analogy with ref . , we impose the condition , the linear form in eq .( [ eq:32a ] ) .this results in another substitution , this time in functionals ( [ eq : xiw ] ) and ( [ eq : xiws ] ) : inverse substitution reads , , & { { \mbox{\rm\boldmath}}}^ * ( t ) & = i\big [ \bar{{\mbox{\rm\boldmath}}}_+(t ) - \bar{{\mbox{\rm\boldmath}}}_-(t ) \big ] , \\ { { \mbox{\rm\boldmath } } } ( t ) & = \frac{1}{2 } \big [ { { \mbox{\rm\boldmath}}}_+(t ) + { { \mbox{\rm\boldmath}}}_-(t ) \big ] , & { { \mbox{\rm\boldmath}}}^ * ( t ) & = \frac{1}{2 } \big [ \bar{{\mbox{\rm\boldmath}}}^*_+(t ) + \bar{{\mbox{\rm\boldmath}}}^*_-(t ) \big ] .\end{aligned}% \nonumber \label{eq : causzw } \end{aligned}\]]showing that ( [ eq : suba ] ) is a genuine change of functional variables .similar to eqs .( [ eq : causaw ] ) and ( [ eq:86ka ] ) , these relations impose conditions on the functional arguments , conditions do not interfere with ( [ eq : causaw ] ) and ( [ eq:86ka ] ) serving as characteristic functionals for the corresponding green function , nor with ( [ eq : suba ] ) being a one - to - one substitution . by definition ,the _ generalised multitime wigner representation _ emerges by applying substitution ( [ eq : suba ] ) to functionals ( [ eq : causaw ] ) and ( [ eq:86ka ] ) . to start with, we note that the replacement , .( [ eq : xisxi ] ) , in variables becomes simply , variable unaffected . in variables functionals `` with and without the sources '' are naturally expressed by a single functional , } & = \xi^w\bigg [ { \mbox{\rm\boldmath } } + \frac{i{\mbox{\rm\boldmath}}}{2 } , { \mbox{\rm\boldmath}}^*+ \frac{i{\mbox{\rm\boldmath}}^*}{2 } , { \mbox{\rm\boldmath } } - \frac{i{\mbox{\rm\boldmath}}}{2 } , { \mbox{\rm\boldmath}}^ * - \frac{i{\mbox{\rm\boldmath}}^*}{2 } \bigg ] , \\ \phi^w { \big [ { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath } } ^ * \big| { \mbox{\rm\boldmath } } + { \mbox{\rm\boldmath}},{\mbox{\rm\boldmath}}^*+{\mbox{\rm\boldmath}}^ * \big ] } & = \xi^w{\bigg [ { \mbox{\rm\boldmath } } + \frac{i{\mbox{\rm\boldmath}}}{2 } , { \mbox{\rm\boldmath}}^*+ \frac{i{\mbox{\rm\boldmath}}^*}{2 } , { \mbox{\rm\boldmath } } - \frac{i{\mbox{\rm\boldmath}}}{2 } , { \mbox{\rm\boldmath}}^ * - \frac{i{\mbox{\rm\boldmath}}^*}{2 } \bigg| { \mbox{\rm\boldmath}},{\mbox{\rm\boldmath}}^ * \bigg ] } .\end{aligned } \label{eq:90ke } % \nonumber % \z \end{aligned}\ ] ] we see that the functional variable corresponds to the _ formal input _ of the system , which in turn is specified by the _formal c - number source _ added to the hamiltonian .we emphasise formality of both concepts . 
under macroscopic conditions ,an external source may become a good approximation for a laser ( say ) .importantly , even in this case , it remains a phenomenological model for a complex quantum device .if defines an input of the system , what does the output defined by variable stand for ? following , we _ postulate _ that , in the wigner representation , the formal output of a system is expressed by time - symmetric averages of the heisenberg field operators , } = \xi^w { \bigg [ \frac{i{\mbox{\rm\boldmath}}}{2 } , \frac{i{\mbox{\rm\boldmath}}^*}{2 } , \frac{-i{\mbox{\rm\boldmath}}}{2 } , \frac{-i{\mbox{\rm\boldmath } } ^*}{2 } \bigg| { \mbox{\rm\boldmath}},{\mbox{\rm\boldmath}}^ * \bigg ] } .\end{aligned } \label{eq : xisw } \end{aligned}\]]for the operator without the source , } = \xi^w \bigg [ \frac{i{\mbox{\rm\boldmath}}}{2 } , \frac{i{\mbox{\rm\boldmath}}^*}{2 } , \frac{-i{\mbox{\rm\boldmath}}}{2 } , \frac{-i{\mbox{\rm\boldmath } } ^*}{2 } \bigg ] .\end{aligned } \label{eq:94kk } % \nonumber % \z \end{aligned}\]]then , } { \delta\zeta^*_{k_1}(t_1 ) \cdots \delta\zeta^*_{k_{m}}(t_m ) \delta\zeta_{k_{m+1}}(t_{m+1 } ) \cdots \delta\zeta_{k_{m+\bar m}}(t_{m+\bar m } ) } \bigg | _ { { \mbox{\rm\boldmath } } = 0}\ , % \nonumber \label{eq : fwdiff } \end{gathered}\]]and similarly for the primed operator .it is easy to prove that eq .( [ eq : fwdiff ] ) agrees with the recursive definition of section [ ch : twdef ] .explicitly , = % \\ \bigg \langle t^w_- \exp\bigg(\frac{1}{2 } { \mbox{\rm\boldmath } } { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\dag}+ \frac{1}{2 } { \mbox{\rm\boldmath}}^ * { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}\bigg ) \,%\\ \times t^w_+ \exp \bigg(\frac{1}{2 } { \mbox{\rm\boldmath } } { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\dag}+ \frac{1}{2 } { \mbox{\rm\boldmath}}^ * { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}\bigg ) \bigg \rangle , \end{gathered}% \nonumber \label{eq : fwdef } \end{gathered}\]]for simplicity we assume that all times on the lhs of ( [ eq : fwdiff ] ) are different .we can then also assume that is nonzero only in close vicinity of , and that different do not overlap .furthermore , if is the smallest time in ( [ eq : fwdiff ] ) , isolating in ( [ eq : fwdef ] ) the contribution linear in we can write \big |_{\text{linear in \zeta } } \\= \bigg \langle \bigg(\frac{1}{2 } { \mbox{\rm\boldmath}}^*_1 { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}\bigg ) \bigg [ t^w_- \exp\bigg(\frac{1}{2 } { \mbox{\rm\boldmath } } ' { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\dag}+ \frac{1}{2 } { \mbox{\rm\boldmath}}^{\prime * } { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}\bigg ) \,%\\ \times t^w_+ \exp \bigg(\frac{1}{2 } { \mbox{\rm\boldmath } } ' { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\dag}+ \frac{1}{2 } { \mbox{\rm\boldmath}}^{\prime * } { 
\mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}\bigg ) \bigg ] \bigg \rangle \\ + \bigg \langle \bigg [ t^w_- \exp\bigg(\frac{1}{2 } { \mbox{\rm\boldmath } } ' { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\dag}+ \frac{1}{2 } { \mbox{\rm\boldmath}}^{\prime * } { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}\bigg )\,%\\ \times t^w_+ \exp \bigg(\frac{1}{2 } { \mbox{\rm\boldmath } } ' { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}^{\dag}+ \frac{1}{2 } { \mbox{\rm\boldmath}}^{\prime * } { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}\bigg ) \bigg ] \bigg(\frac{1}{2 } { \mbox{\rm\boldmath}}^*_1 { \mbox{\rm\boldmath}}\hspace{-1\valength } \hspace{0.27\valength}\hat{\phantom{{\mbox{\rm\boldmath } } } } \hspace{-0.27\valength}\bigg ) \bigg \rangle .% \nonumber \label{eq : twrecf } \end{gathered}\ ] ] the square brackets of the rhs of this formula delineate the range to which the remaining time orderings are applied . clearly equation ( [ eq : twrecf ] ) is nothing but an elaborate form of the recursive definition ( [ eq : twrec ] ) .the case when the `` earliest '' operator is one of the follows by complex conjugation of ( [ eq : twrecf ] ) .physically , the most natural way of looking at the system is through time - symmetric averages of the field operator in the presence of the source .their characteristic functional }a\alpha \xia\xia\xia\xia\xia\xi\xiaa ] is a characteristic functional for the _ cumulants _ of the random source . for the bose - hubbard modelwe have , \\ + j\big [ a_{k+1}(t ) + a_{k-1}(t ) \big ] + \bar s^{(3)}_k(t ) , \label{eq : sbar } \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}\end{gathered}\]]where the actual random contribution comes from the _ third - order noise _ .it is specified by the conjugate pair of nonzero cumulants , all other cumulants of are zero ( this allowed us to specify the average in place of the cumulant ) . while the wigner function responsible for the initial condition in ( [ eq : eqa ] ) may be positive thus affording a statistical interpretation , the cubic noise is a purely pseudo - stochastic object . droppingthe third - order noise turns the full wigner representation into the so - called _ truncated wigner representation_. in more traditional phase - space techniques , the truncated wigner is found by dropping the third - order derivatives in the generalised fokker - planck equation for the single - time wigner distribution . in the absence of losses , the corresponding langevin equation is non - stochastic . for the hamiltonian ( [ eq : bhh ] ) ,it coincides with eq .( [ eq : eqa ] ) , where the source is given by ( [ eq : sbar ] ) without .however simple and straightforward , the conventional way of deriving the truncated wigner leaves it unclear if it can be applied to any _ multitime _ quantum averages . in our approach here as well as in ref . 
the truncated - wigner equations emerge as an approximation within rigorous techniques intended for calculation of the time - symmetric averages of the heisenberg operators .the generalisation associated with extending the truncated - wigner equations to multitime averages is thus highly nontrivial and requires a new concept : the time - symmetric ordering of the heisenberg operators .there does not seem to be a way of guessing this concept from within the conventional phase - space techniques .strictly speaking , all relations derived so far are leading considerations that require specifications .two things need to be warranted : that the functional form of the symmetric wick theorem conforms with the specification of as symmetric ordering for coinciding time arguments , and that the stochastic integral equation ( [ eq : eqi ] ) is defined mathematically . in ref . , both problems were taken care of `` in a single blow '' by the causal regularisation of the retarded green function , cf .( [ eq:24a ] ) .namely , should be replaced by a sufficiently smooth function while preserving its causal nature , .one may assume , for instance , that the limit is implied .the regularised green function is zero at and has zero derivatives .equation ( [ eq : gr ] ) is a toy version of the pauli - villars regularisation used in the quantum field theory as part of the common renormalisation procedure ( cf .also ) . in this paper as well as in the causal regularisation of a two - fold effect .firstly , it assigns mathematical sense to equations ( [ eq : eqa ] ) , ( [ eq : eqi ] ) , defining them as ito equations . in ,this was in agreement with the more traditional approach based on pseudo - distributions and generalised fokker - planck equations .furthermore , with regularised , equation ( [ eq : gwbyg ] ) assures that the kernels are smooth functions and any mathematical ambiguity . as a result ,the symmetric wick theorem ( [ eq : horiw ] ) leaves alone any product of operators with equal time arguments . since the final expression in ( [ eq : horiw ] )is ordered symmetrically , the symmetric ordering is also enforced for any same - time product on the lhs of ( [ eq : horiw ] ) .regularisation ( [ eq : gr ] ) applied to the symmetric wick theorem ( [ eq : horiw ] ) thus indeed specifies the double - time - ordered product ( [ eq : tptm ] ) so that operators with equal time arguments are ordered symmetrically .an unwanted effect of regularisation is that symmetric ordering is also enforced for same - time groups of operators split between the and orderings .for example , with regularisation , problem was also encountered in , and the solution to it remains the same : the limit should always preceed .then , expected .the general recipe is to keep time arguments of operators under the and orderings slightly different , which is equivalent to ignoring regularisation of and .this recipe implies that the double - time - ordered averages in the limit are continuous functions of all time differences , where and are time arguments of an operator pair split between the and orderings .this is obviously consistent with continuity properties of the and kernels , and are unregularised kernels . 
with this amendment all specifications enforced by regularisation only apply to operators under the time orderings .in fact , we only need to specify the -ordering , defining it as time ordering for different times and symmetric ordering for equal times .the -ordering remains defined by ( [ eq : tpconj ] ) .we return to the problem of `` unwanted effects '' of regularisation in the next section .assume we know a way of calculating time - symmetric averages of heisenberg operators while the physics demand time - normal ones , or vice versa . for two - time averagesthis problem is solved by kubo s formula for the linear response function , cf . refs . . herewe consider a general approach to reordering heisenberg operators . among other resultswe show that this always reduces to considering a response problem of sorts .it is instructive to compare the formulae relating the time - symmetric and time - normal averages to the perel - keldysh green functions ( [ eq : gf ] ) : } & = \xi^w \bigg [ \frac{i{\mbox{\rm\boldmath}}}{2 } + { { \mbox{\rm\boldmath } } } , \frac{i{\mbox{\rm\boldmath}}^*}{2 } + { { \mbox{\rm\boldmath}}^ * } , \frac{-i{\mbox{\rm\boldmath}}}{2 } + { { \mbox{\rm\boldmath } } } , \frac{-i{\mbox{\rm\boldmath}}^*}{2 } + { { \mbox{\rm\boldmath}}^ * } \bigg ] , \label{eq:43a } % \nonumber % \z \\ \phi^n { \big [ { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath } } ^ * \big| { \mbox{\rm\boldmath}},{\mbox{\rm\boldmath}}^ * \big ] } & = \xi^n \big [ { i{\mbox{\rm\boldmath } } } + { { \mbox{\rm\boldmath } } } , { { \mbox{\rm\boldmath}}^ * } , { { \mbox{\rm\boldmath } } } , { -i{\mbox{\rm\boldmath}}^ * } + { { \mbox{\rm\boldmath}}^ * } \big ] . \label{eq:44a } % \nonumber % \z\end{aligned}\ ] ] cf . section [ ch : commresp ] . the functionals and both characteristic ones for green functions ( [ eq : gf ] ) but imply different specifications for coinciding time arguments : ( [ eq:43a ] ) implies symmetric ordering while ( [ eq:44a ] ) normal ordering .such specifications only become of importance if green functions ( or their linear combinations ) are considered for coinciding time arguments , and otherwise can be disregarded .mathematically , and coincide up to a _singular part_. hence , up to singular parts , the functionals and differ only in a functional substitution applied to eq .( [ eq : xiw ] ) .time - symmetric averages appear with the time - normal ones require eqs .( [ eq:45a ] ) and ( [ eq:46a ] ) may be inverted , and it is straightforward to show that } = \phi ^n { \bigg [ { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath } } ^ * \bigg| { \mbox{\rm\boldmath } } - \frac{i { \mbox{\rm\boldmath } } } { 2 } , { \mbox{\rm\boldmath}}^ * + \frac{i { \mbox{\rm\boldmath}}^ * } { 2 } \bigg ] } , \\ \phi ^n { \big [ { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath } } ^ * \big| { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath}}^ * \big ] } = \phi ^w { \bigg [ { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath } } ^ * \bigg| { \mbox{\rm\boldmath } } + \frac{i { \mbox{\rm\boldmath } } } { 2 } , { \mbox{\rm\boldmath}}^ * - \frac{i { \mbox{\rm\boldmath}}^ * } { 2 } \bigg ] } . 
\end{aligned } \label{eq:47a } % \nonumber % \z\end{aligned}\]]in particular , for the time - normal and time - symmetric averages defined without the source we have } = \phi ^n { \bigg [ { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath } } ^ * \bigg| - \frac{i { \mbox{\rm\boldmath } } } { 2 } , \frac{i { \mbox{\rm\boldmath}}^ * } { 2 } \bigg ] } , \\ \phi ^n { \big [ { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath } } ^ * \big| 0 , 0 \big ] } = \phi ^w { \bigg [ { \mbox{\rm\boldmath } } , { \mbox{\rm\boldmath } } ^ * \bigg| \frac{i { \mbox{\rm\boldmath } } } { 2 } , - \frac{i { \mbox{\rm\boldmath}}^ * } { 2 } \bigg ] } .\end{aligned } \label{eq:48a } % \nonumber % \z\end{aligned}\]]these relations make it evident that the reordering problem for the heisenberg operators is indeed equivalent to the response problem .we remind the reader that eqs .( [ eq:47a ] ) and ( [ eq:48a ] ) hold only up to a singular part .all formulae for quantum averages following from them are only valid for different time arguments and otherwise require specifications .we now consider an example which also illustrates how to approach coinciding time arguments .we wish to express the time - normal average } } { \delta\zeta_{k}(t)\delta\zeta_{k'}^*(t ' ) } \big|_{\zeta = 0 } \end{aligned } \label{eq:49a } % \nonumber % \z\end{aligned}\]]by the time - symmetric ones . combining ( [ eq:49a ] ) with the second of eqs .( [ eq:48a ] ) we have \bigg|_{s=0 } .\hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq:50a } % \nonumber % \z\end{gathered}\]]in deriving this we used that } = 1 , \end{aligned } \label{eq:51a } % \nonumber % \z\end{aligned}\]]so that the derivatives purely by input arguments always disappear . in ,( [ eq:50a ] ) was derived by an _ad hoc _ method ; here we show that the same relation follows from the general equations ( [ eq:48a ] ) . for details on numerical implementation of eq .( [ eq:50a ] ) see .equation ( [ eq:50a ] ) is well behaved in the limit .for instance , let . then , by causality , ( [ eq:12 ] ) we rewrite ( [ eq:50a ] ) as \big \rangle , & t'>t .\end{aligned } \label{eq:53a } % \nonumber % \z\end{aligned}\]]we have thus rederived kubo s famous relation for the linear response function .both sides here have a well - defined limit for .similarly , \big \rangle , & t > t ' , \end{aligned } \label{eq:54a } % \nonumber % \z\end{aligned}\]]which is nothing but complex conjugation of ( [ eq:53a ] ) with the replacement .at the same time , eq . ( [ eq:50a ] ) resists any attempt to extend it to equal time arguments . within the causal reqularisation , rise to a patently wrong result , , making the response correction in ( [ eq:50a ] ) nonzero for inevitably causes troubles with causality under regularisation .this shows , in particular , that the `` unwanted effects '' of regularisation ( see section [ ch : seccaus ] ) are unavoidable and can not be eliminated by a better regularisation scheme .for this reason we regard it as `` good conduct '' to treat all quantum averages as generalised functions , making the question of their values at coinciding time arguments meaningless .we have developed a generalisation of the symmetric ordering to multitime problems with nonlinear interactions .this includes generalisation of the symmetric ( weyl ) ordering to time - symmetric ordering of heisenberg operators , and of the renowned wigner function to a path integral in phase - space . 
among other results ,continuity of time - symmetric operator products has been proven .a way of calculating time - normally - orderied operator products within the time - symmetric - ordering - based techniques has also been developed .the authors are grateful to s. stenholm , w. schleich , m. fleischhauer , and a. polkovnikov for their comments on the manuscript .thanks the institut fr quantenphysik at the universitt ulm for generous hospitality . l.p . is grateful to arc centre of excellence for quantum - atom optics at the university of queensland for hospitality and for meeting the cost of his visit to brisbane .this work was supported by the program atomoptik of the landesstiftung baden - wrttemberg and sfb / tr 21 `` control of quantum correlations in tailored matter '' funded by the deutsche forschungsgemeinschaft ( dfg ) , australian research council ( grant i d : ft100100515 ) and a university of queensland new staff grant .in this appendix , we formulate an ultimate generalisation of the symmetric wick theorem .it appears to cover all imaginable cases of interest and , if necessary , can be extended to fermionic fields .firstly , let be a generalised time variable which belongs to some linearly ordered set with the succession symbol . of practical relevance are the time axis and the c - contour .secondly , let , be a pair of free - field operators defined in some fock space , ( in this appendix ) , \\\hat { \bar x}(\xi , \tau ) = \sqrt{\hbar } \sum_{\kappa } \big [ \bar u_{\kappa } ( \xi ) \hat a_{\kappa } ^{\dag}(\tau ) + \bar v_{\kappa } ( \xi ) \hat b_{\kappa } ( \tau ) \big ] . \end{aligned } \label{eq:60hx } % \nonumber % \z \end{aligned}\]]here , symbolises arguments of field operators except time ; can contain coordinate , momentum , spin indices , and so on .the index enumerates the modes of which the fock space is built . with extentions to relativity or solid state in mind , for each mode we define two pairs of creation and annihilation operators , phases are c - number functions of ` time;' note that eq .( [ eq:61hy ] ) employs one phase for `` particles and antiparticles '' in each mode .the stationary operators obey standard commutational relations , = \big [ \hat b_{\kappa } , \hat b_{\kappa ' } ^{\dag } \big ] = \delta_{\kappa \kappa ' } , \end{aligned } \label{eq:62hz } % \nonumber % \z \end{aligned}\]]otherwise mode operators pairwise commute . the c - number functions , , , and replaced in particular problems by suitable solutions of a single - body equation . in nonrelativistic problems , . for our purposes here all c - number functions in eqs .( [ eq:60hx ] ) and ( [ eq:61hy ] ) are regarded as arbitrary .specification of the quantities occuring in eqs .( [ eq:60hx])([eq:62hz ] ) belongs to a particular problem .it is implied that such specification should meet certain conditions of algebraic consistency .for example , summation in ( [ eq:60hx ] ) should be consistent with the `` kronecker symbol '' in ( [ eq:62hz ] ) , ( with being an arbitrary function of the mode index ) consistencies should exist between integrations and corresponding delta - functions , delta - functions and kronecker symbols are symmetric .these consistencies extend to functional differentiations by c - number functions of and .so , for the c - number functions occuring in eqs .( [ eq:52hp ] ) , ( [ eq:54hr ] ) , similarly for the c- number pair in ( [ eq:63ja ] ) , ( [ eq:64jb ] ) . 
conditions ( [ eq:74jn])([eq:57hu ] ) make all relations below algebraically defined .linear order of the generalised time axis allows one to define the step - function , the `` time '' ordering , .( [ eq : tp ] ) .similar definitions apply to a larger number of factors .the time - symmetric ordering is defined replacing `` larger than '' by `` succeed '' in eq .( [ eq:78js ] ) : + , { \hat{\mathcal x}}_3(\tau_3 ) \big ] _ + , \cdots,{\hat{\mathcal x}}_n(\tau_n ) \big ] _ + ,\\ \tau_1\succ \tau_2\succ\cdots\succ \tau_n .\hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq:51hn } % \nonumber % \z \end{gathered}\]]the proof of section [ ch : eqtwtow ] that for the free operators symmetric and time - symmetric orderings are the same thing generalises literally with and . the _ generalised structural symmetric wick theorem _ that we now wish to prove reads , = { \cal t}^w\!\bigg \ { \exp \mathcal{z } \bigg [ \frac{\delta } { \delta x } , \frac{\delta } { \delta \bar x } \bigg ] \\\times p[x,\bar x ] |_{x\to\hat x , \bar x\to\hat{\bar x } } \bigg \ } , \end{aligned } \label{eq:63ja } % \nonumber % \z \end{aligned}\]]where are a pair of c - number functions and , = -i\hbar \int d\tau d\tau ' d\xi d\xi ' \\ \times g^w(\xi , \tau ; \xi ' , \tau ' ) \frac{\delta^2}{\delta x(\xi , \tau ) \delta \bar x(\xi ' , \tau ' ) }. \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq:64jb } % \nonumber % \z \end{gathered}\]]applying ( [ eq:63ja ] ) to we find that coincides with the symmetric contraction of the pair ( [ eq:60hx ] ) , since all operator reorderings occur mode - wise , it suffices to verify ( [ eq:63ja ] ) for one mode ; the general formula then follows by direct calculation . in the one - mode case , \\= \frac{i}{2}\text{e}^{-i\varphi_{\kappa } ( \tau ) + i\varphi_{\kappa } ( \tau ' ) } \big [ \theta(\tau , \tau ' ) - \theta(\tau ' , \tau ) \big ] , \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq:77jr } % \nonumber % \z \end{gathered}\]]and eq .( [ eq:63ja ] ) becomes , = { \cal t}^w\!\bigg \ { \exp \mathcal{z}_{\kappa } \bigg [ \frac{\delta } { \delta a } , \frac{\delta } { \delta \bar a } \bigg ] \\ \times p[a,\bar a ] |_{a\to\hat a_{\kappa } , \bar a\to\hat a_{\kappa } ^{\dag } } \bigg \ } , \hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq:52hp } % \nonumber % \z \end{gathered}\]]where are a pair of arbitrary c - number functions of `` time . ''the `` per mode '' reordering exponent reads , = -i\int d\tau d\tau ' g_{\kappa } ^w(\tau , \tau ' ) \frac{\delta ^2}{\delta a(\tau ) \delta\bar a(\tau ' ) } . \end{aligned } \label{eq:54hr } % \nonumber % \z \end{aligned}\]]verification of eq .( [ eq:52hp ] ) goes in two steps .firstly , we note that the equivalence between the verbal and algebraic forms of wick s theorem proper does not depend on details of the contraction and hence equally applies to the symmetric wick theorem .secondly , the proof of the symmetric wick theorem for the time axis in section [ ch : swdyson ] may be literally adjusted to generalised time replacing time by , the symbol by , and the contraction by .we therefore regard eq .( [ eq:52hp ] ) proven . 
= { \cal t}^w\!\bigg ( \exp\bigg \ { \sum_{\kappa } \bigg ( \mathcal{z}_{\kappa } \bigg [ \frac{\delta } { \delta a_{\kappa } } , \frac{\delta } { \delta \bar a_{\kappa } } \bigg ] + \mathcal{z}_{\kappa } \bigg [ \frac{\delta } { \delta b_{\kappa } } , \frac{\delta } { \delta \bar b_{\kappa } } \bigg ] \bigg ) \bigg \ } p[x,\bar x ] |_{a\to\hat a } \bigg ) .\end{aligned } \label{eq:66jd } % \nonumber % \z \end{aligned}\ ] ] in this formula , the c - number fields , are given by eq .( [ eq:60hx ] ) without hats , , \\ { \bar x}(\xi , \tau ) = \sqrt{\hbar } \sum_{\kappa } \big [ \bar u_{\kappa } ( \xi ) \bar a_{\kappa } ( \tau ) + \bar v_{\kappa } ( \xi ) b_{\kappa } ( \tau ) \big ] , \end{aligned } \label{eq:67je } % \nonumber % \z \end{aligned}\]]where , , , and are four independent c - number fields per mode , and is a shorthand notation for the substitution , `` per mode '' reordering exponents are given by ( [ eq:54hr ] ) . applying the chain rule to differentiations in ( [ eq:66jd ] ) we get , = \sqrt{\hbar}\int d\xi\ , u_{\kappa } ( \xi ) \frac{\delta } { \delta x(\xi , \tau ) } p[x,\bar x ] , \\\frac{\delta } { \delta \bar a(\tau ) } p[x,\bar x ] = \sqrt{\hbar}\int d\xi\ , \bar u_{\kappa } ( \xi ) \frac{\delta } { \delta \bar x(\xi , \tau ) } p[x,\bar x ] , \\\frac{\delta } { \delta b(\tau ) } p[x,\bar x ] = \sqrt{\hbar}\int d\xi\ , \bar v_{\kappa } ( \xi ) \frac{\delta } { \delta \bar x(\xi , \tau ) } p[x,\bar x ] , \\\frac{\delta } { \delta \bar b(\tau ) } p[x,\bar x ] = \sqrt{\hbar}\int d\xi\ , v_{\kappa } ( \xi ) \frac{\delta } { \delta x(\xi , \tau ) } p[x,\bar x ] .\end{aligned } \label{eq:71jk } % \nonumber % \z \end{aligned}\]]substituting these relations into eq .( [ eq:66jd ] ) we recover eq .( [ eq:63ja ] ) with .\hspace{0.4\columnwidth}\hspace{-0.4\twocolumnwidth}% \label{eq:72jl } % \nonumber % \z \end{gathered}\]]it is readily verified that this definition of agrees with ( [ eq:65jc ] ) .the only remaining artifact of the underlying mode expansion is then the substitution ( [ eq:70jj ] ) .however , with everything expressed in terms of , it may be replaced by concludes the proof of eq .( [ eq:63ja ] ) .we conclude this appendix with a remark on scaling of fields and contractions in the classical limit ._ commutators of physical quantities must scale as _ . in this sense , quantised mode amplitudes are not physical .the physical amplitude is , which in the limit becomes the classical mode amplitude .for this reason the factor is explicitly present in the definitions of `` physical fields '' ( [ eq:60hx ] ) .the scaling is then removed from the contraction ( [ eq:65jc ] ) by the factor on the lhs .the result is that , as was demonstrated in refs . , green functions ( propagators ) of bosonic fields are to a large extent classical quantities . in particular, they are all expressed by the _ response transformation _ in terms of the retarded green function of the corresponding classical field . for more detailssee .
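As a minimal numerical sanity check of the one-mode relations used in the proof above: for a single free mode the difference between the time-ordered and the symmetrically ordered product of â(τ)â†(τ′), with τ succeeding τ′, should equal the c-number contraction, i.e. one half times the free-evolution phase factor. The truncation dimension and the phases below are arbitrary choices, and the identity necessarily fails on the highest Fock state of the truncated space.

```python
import numpy as np

# Check that  a(tau) adag(tau')  -  (1/2)[a(tau) adag(tau') + adag(tau') a(tau)]
# equals the c-number contraction (1/2) exp(-i phi(tau) + i phi(tau')) for tau after tau'.
N = 12                                     # Fock-space truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # lowering operator in the number basis
adag = a.conj().T

phi_tau, phi_taup = 0.7, 0.3               # free-evolution phases phi(tau), phi(tau') (arbitrary)
a_t = a * np.exp(-1j * phi_tau)            # a(tau)     = a     e^{-i phi(tau)}
adag_tp = adag * np.exp(1j * phi_taup)     # adag(tau') = adag  e^{+i phi(tau')}

time_ordered = a_t @ adag_tp                         # later operator to the left
weyl_ordered = 0.5 * (a_t @ adag_tp + adag_tp @ a_t)
contraction = 0.5 * np.exp(-1j * (phi_tau - phi_taup))

diff = time_ordered - weyl_ordered
# diff equals contraction times the identity, except on the highest Fock state,
# which is an artefact of truncating the commutator [a, adag].
ok = np.allclose(np.diag(diff)[:-1], contraction)
print("contraction reproduced on the untruncated subspace:", ok)
```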
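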
In this work we present the formal background behind the methods used in earlier works to extend the truncated Wigner representation of quantum and atom optics to multi-time problems. The truncated Wigner representation has proven to be of great practical use, especially in the numerical study of the quantum dynamics of Bose-condensed gases. In these cases it allows for the simulation of effects that are missed entirely by other approximations, such as the Gross-Pitaevskii equation, while not suffering from the severe instabilities of more exact methods. The numerical treatment of interacting many-body quantum systems is an extremely difficult task, and the ability to extend the truncated Wigner beyond single-time situations adds another powerful technique to the available toolbox. This article gives the formal mathematics behind our ``time-Wigner ordering'', which allows for the calculation of the multi-time averages required for quantities such as the Glauber correlation functions of bosonic fields.
trapped ions are one of the most promising platforms for implementing atomic control and quantum information processing .varying quasi - static potentials on trap electrodes can transport the ions while preserving information stored in the atomic state .laser fields and oscillatory magnetic fields can manipulate the atomic states of single ions or couple the information between two or more ions . in the transport architecture for a quantum information processor , large numbers of ions ,each representing one quantum bit of information , can be transported between many interaction zones where pairs of ions can interact . the advent of microfabricated ion traps and , in particular , surface - electrode structures that can be fabricated on single substrates allows the complexity of ion traps to grow to unprecedented levels .larger traps require nearly one hundred applied potentials , each typically supplied by a digital - to - analog converter ( dac ) , to fully control the positions of the ions .however , providing the wiring for these complex traps is unwieldy , as traditional through - vacuum feedthroughs are bulky and can introduce noise via stray inductance and capacitance .recent efforts have reduced the wiring complexity using vacuum chambers that incorporate the ceramic trap carriers as part of the vacuum housing or use in - vacuum circuit boards to distribute and filter signals . both techniquessimplify the wiring of the dac signals , but they do not reduce the total number of vacuum - to - air connections needed to supply these signals . a novel alternative presented here is to place the dacs inside the vacuum system , replacing bundles of analog wires with a few digital serial lines that can address the individual dac channels .integration with surface - electrode ion traps places strict requirements on such an in - vacuum dac system .the dacs must not introduce excess noise on the ion trap electrodes , since this can cause ion heating .voltage updates must be sufficiently fast and smooth for ion transport operations .physical layout of the in - vacuum dacs and their support components ( power , filters , etc . ) must not obstruct laser access to the surface trap circuit board , as required for ion manipulation .finally , the in - vacuum dac materials , including the die packaging , circuit boards , and component solders , must meet the ultra - high vacuum ( uhv ) requirements for ion trapping ( torr or lower ) .here we describe an integrated ion trapping system ( fig .[ fig : overview ] ) consisting of a 78-electrode trap with direct current ( dc ) potentials provided by two 40-channel in - vacuum dac chips .the dacs are programmed via a four line serial bus ( 1 clock , 1 bit align , and 2 serial data lines ) .the dacs are decapsulated commercial components , packaged together with the ion trap into a compact assembly with two three - layer circuit boards .the assembly installs by insertion into an in - vacuum edge connector socket built into the vacuum chamber .we demonstrate ion trapping and transport and measure the ion heating rate and axial mode stability . 
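To make the four-line serial bus concrete, the sketch below shows the kind of bit-banged update the air-side controller could perform: a shared clock and frame line drive the two serial data lines in parallel, one per 40-channel DAC. The 24-bit word length, the bit order and the gpio helper are assumptions for illustration; the actual register format is fixed by the DAC datasheet and is not reproduced here, and in the system described below the LDAC strobe is supplied by the external timing system rather than by the controller.

```python
# Illustrative bit-banged update of two DACs sharing SCLK and SYNC, with separate
# serial-data lines (A_SDI, B_SDI). The gpio_write helper is a placeholder for
# whatever register writes the real microcontroller firmware uses.
WORD_BITS = 24  # assumed word length per channel update

def gpio_write(line: str, level: int) -> None:
    """Placeholder for a real GPIO register write."""
    pass

def shift_out_pair(word_a: int, word_b: int) -> None:
    """Clock one word into each DAC simultaneously, MSB first (assumed bit order)."""
    gpio_write("SYNC", 0)                       # open the frame on both DACs
    for bit in reversed(range(WORD_BITS)):
        gpio_write("A_SDI", (word_a >> bit) & 1)
        gpio_write("B_SDI", (word_b >> bit) & 1)
        gpio_write("SCLK", 1)                   # data sampled on the clock edge
        gpio_write("SCLK", 0)
    gpio_write("SYNC", 1)                       # close the frame; DACs latch the words

def apply_packet(packet_a: list, packet_b: list) -> None:
    """Stream one packet of per-channel words to both DACs, then latch the outputs."""
    for word_a, word_b in zip(packet_a, packet_b):
        shift_out_pair(word_a, word_b)
    # In the system described in the text the LDAC strobe comes from the external
    # timing system; it is shown here only to mark where the synchronous update happens.
    gpio_write("LDAC", 0)
    gpio_write("LDAC", 1)
```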
this paper is organized as follows .design , specifications , and fabrication of the in - vacuum electronics are described in sec .[ sec : electronics ] .control software and timing considerations are presented in sec .[ sec : software ] .integration with the microfabricated ion trap is described in sec .[ sec : integration ] .section [ sec : testing ] presents results from testing and characterization of the integrated system , with performance comparisons to standard air - side dacs .architecture overview for the integrated system . ]considerations for the in - vacuum control electronics include update speed , voltage noise , number of electrical feedthroughs , and flexibility to address channels individually rather than through a global update . the chosen architecture ( fig .[ fig : overview ] ) locates two 40 channel dacs and a set of low - pass rc filters near the ion trap in the vacuum system .the selected dacs , analog devices ad5730 s ( 40-channel , 16-bit ) , are capable of v operation using a + 5 vdc stable reference and vdc supplies .we measure an unfiltered spectral noise density of 15 nv at 1 mhz .the architecture reduces the required number of uhv feedthroughs to 9 essential control lines plus a separate feedthrough for radio frequency ( rf ) power .the design scales well to larger numbers of electrodes ; each additional serial data line controls an additional 40 dac channels .board layouts : ( a ) trap board bottom ; ( b ) trap board top ; ( c ) regulator board bottom ; ( d ) regulator board top . ]the in - vacuum circuitry is configured in a dual board layout . the trap board ( fig .[ fig : iemitlayout]a - b ) contains the microfabricated ion trap as the sole component on its top surface , with the dac chips and rc filters on its bottom surface .this arrangement keeps the critical active electronics on the backside of the board , avoiding direct exposure to laser light which could modify semiconductor components due to carrier generation or cause unwanted photon scattering during ion trap operations .the trap rf line is a 100 grounded coplanar transmission line , routed to minimize radiative crosstalk . the regulator board ( fig .[ fig : iemitlayout]c - d ) takes vdc and generates the vdc dac supply voltages , a + 3.3 vdc logic supply , and a + 5 vdc ultra - stable reference .all connections to in - vacuum cabling are made through a pin card edge connector on the regulator board ; placement of this connector on the trap board would over - restrict laser access to the trap surface .the trap rf , bipolar power supply , ground , and serial lines are supplied through the edge connector .digital and analog sections are segregated to minimize cross - coupling .the digital lines are supplied as twisted pairs terminated with 100 resistors on the regulator board .decoupling 1 nf capacitors are added to several supply lines to further suppress rf pickup .the regulator board is connected to the trap board with spring - loaded pins and polyether - ether - ketone ( peek ) board stackers ( see sec . 
[sec : integration ] ) .the trap and regulator circuit board material is a multilayer rogers 4350b patterned substrate .this material is a uhv - compatible circuit board that has been used successfully for mounting operational ion traps .the multilayer structure is constructed with 4450 prepreg , an uncured form of the 4350b board .eight plies of 4450 prepreg are used to form a 3-metal - layer stack with a single internal ground plane .the multilayer construction adds stiffness and provides an internal ground plane to reduce crosstalk between the top and bottom layers . to reduce outgassing ,no soldermask is applied .the boards are 0.065 thick with soft bondable gold finish over 1 oz copper .the entire soldering process is kept at or below the 4350b glass transition temperature of 280 .the ad5370 s and regulators are decapsulated from the packaged component to bare die ; this is required to prevent outgassing of the molding plastic in the uhv ion trap environment .a jet etching technique with fuming nitric acid ( hno ) and sulfuric acid ( h ) dissolves the encapsulant on the package , leaving the die , bond pads , and wire bonds intact .details of a similar decapsulation process are available in ref . .the rc low - pass components are second - order filters , obtained from semiconwell as a custom order of their thin - film tapped emi / rfi filter product line .each 0.10.12 die contains 12 two - pole rc filters .the resistors are 35 k thin film tantalum - nitride and the capacitors are 220 pf metal - nitride - oxide - silicon .the two rc stages are identical ; filter performance could in principle be improved by choosing different impedance values to minimize loading of the second stage by the first . during probing and wirebonding ,delicate handling of the filter pads is required to avoid damage to the thin silicon nitride beneath the pad .the filter function is found to depend on the bias voltage in certain regimes , due to a metal - oxide - semiconductor capacitance effect caused by insufficient doping of the substrate . a constant rc response ( khz )is obtained by back - biasing the substrate at v , leaving the device in the accumulation state over the full range of applied bias voltages v to v. in this regime the overall filter capacitance is given by pf .the trap and regulator boards are ultrasonically cleaned ( in acetone , isopropyl alcohol , and deionized water ) prior to soldering components .passive capacitors and resistors are pre - tinned with sac305 eutectic ( 96.5% sn , 0.5% cu , 3% ag ) and hand soldered to the boards using a gold - plated soldering needle . to minimize the use of high - outgassing materials , no flux or soldermask are used .the soldering process is facilitated by placing the board on a 170 hot stage , allowing the use of an ultra fine needle tip which otherwise can not transfer sufficient heat to the solder ( eutectic point 217 ) .each connection is inspected for brittleness ; fig .[ fig : bondingjoints]a shows an example solder joint .once fully soldered , the boards are again ultrasonically cleaned to remove tin oxide particulate .( a ) 0402 component soldered with a flux free process ; ( b ) decapsulated dac die bonded with a thermocompression process ; ( c ) filter dies bonded with a gold ball process . 
] on the regulator board , all integrated circuit ( ic ) die are adhered with low outgassing silver - filled epoxy ( epo - tek h21d ) .the epoxy is cured at 120 for 15 minutes .on the trap board the two dac die and 8 filter banks are adhered with the h21d epoxy .the decapsulated dac die are wirebonded using a thermocompression bonder ( fig . [fig : bondingjoints]b ) ; gold ball bonding is used to wirebond the filter die ( fig . [ fig : bondingjoints]c ) . with all passives and icsconnected , the boards are cleaned in isopropyl alcohol and deionized water rinse .the microfabricated ion trap is adhered with h21d epoxy , and its electrodes are wirebonded to the trap board electronics .figure [ fig : boardphotos ] shows photos of the completed trap and regulator boards . section [ sec : integration ] describes assembly of the boards into in - vacuum housing .( a ) regulator and ( b ) trap boards with all passives and decapsulated ics attached . ]for ion trap control , communication functions with the in - vacuum dacs are integrated into a data acquisition system as shown in fig .[ fig : iemitintegration ] .the microcontroller ( nxp 204mhz lpc4350 ) programs the in - vacuum dacs via a serial bus .nine signal and power lines run from the controller box through vacuum feedthroughs to the in - vacuum electronics : serial - digital interface ( sdi ) programming lines for each dac ( a_sdi and b_sdi ) , serial clock ( sclk ) , bit align ( sync ) , load data ( ldac ) , dac busy ( busy ) , vdc supply lines , and gnd .the controller utilizes general purpose input / output ( gpio ) lines on the microcontroller instead of the built - in serial peripheral interface ( spi ) lines , enabling a single common clock to be shared between the two 40 channel dac chips .integrated data acquisition system . ] during operation , the dac system pulls approximately 33 ma from each of the two supply lines for a total of 1 w dissipation .the system pulls an additional 8 ma from the + 14 v supply during dac updates , adding 0.1 w during ion transport .the exterior housing temperature increases to 47 during normal operation in vacuum .the controller receives data via rs232 at 115200 baud .each waveform consists of a series of `` packets '' to be sent to the in - vacuum dacs ; each packet contains a list of dac channels and voltages to update .the first packet is sent to the dacs once the data for the entire waveform is loaded into the microcontroller .externally supplied ldac pulses from the field - programmable gate array ( fpga ) timing system are used to trigger subsequent voltage updates on the dac chips ( fig .[ fig : transporttiming ] ) .each ldac pulse triggers the controller to send the next packet to the dacs after a short delay to allow the dacs to latch the previous packet data .a shared busy line from both dacs indicates their status ( idle or receiving data ) . to speed up the update rate for the dacs, data transfer is divided into an initial packet that covers all 40 channel pairs ( dacs a and b are programmed in parallel ) followed by smaller packets that only cover those channels that have changed since the last update .it takes roughly 60 for the controller to upload 40 channel pairs to the dacs , so the timing system must provide a 60 delay before sending the ldac that triggers the dacs to use this initial packet . for ion transport ,subsequent packets need to change only 8 channel pairs and thus require only 25 to upload . 
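The timing and filter figures given above, together with the update rate stated just below, can be reproduced with a short script. The microsecond units are assumptions (the unit symbols did not survive extraction), as is the reading of the two-stage filter as an unbuffered cascade of identical RC sections with R = 35 kΩ and C = 220 pF.

```python
import numpy as np

# DAC update timing: a small transport packet takes ~25 us to upload and is latched
# by a ~1 us LDAC pulse, so the sustained update rate is roughly 1 / 26 us.
t_small_packet = 25e-6     # upload time for an 8-channel-pair packet (units assumed: s)
t_full_packet = 60e-6      # upload time for a full 40-channel-pair packet
t_ldac = 1e-6              # LDAC pulse length (assumed 1 us)
print("transport update rate: %.1f kHz" % (1e-3 / (t_small_packet + t_ldac)))
print("full-refresh rate:     %.1f kHz" % (1e-3 / (t_full_packet + t_ldac)))

# Two identical unbuffered RC sections (R-C-R-C ladder): H(s) = 1 / (1 + 3 sRC + (sRC)^2),
# where the factor 3 reflects the loading of the first stage by the second.
R, C = 35e3, 220e-12
f = np.logspace(2, 7, 20000)               # 100 Hz .. 10 MHz
s = 2j * np.pi * f
H = 1.0 / (1.0 + 3 * s * R * C + (s * R * C) ** 2)
f_3db = f[np.argmin(np.abs(np.abs(H) - 1 / np.sqrt(2)))]
print("single-stage corner 1/(2 pi R C): %.1f kHz" % (1e-3 / (2 * np.pi * R * C)))
print("two-stage -3 dB frequency:        %.1f kHz" % (1e-3 * f_3db))
```

The first number reproduces the 38 kHz transport update rate quoted in the text; the filter numbers land in the tens-of-kilohertz range consistent with the stated kHz-scale response.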
in combination with the 1 ldac pulse, this corresponds to a 38 khz update rate .typical data transfer to in - vacuum dacs for ion transport : ( a ) sequence of ldacs sent by the timing system ; ( b ) resulting dac busy signals ( low when receiving data ) ; ( c ) output from dac b26 , a diagnostic channel not connected to any trap electrode . ]a new linear ion trap , the georgia tech research institute ( gtri ) gen v ( fig . [ fig : genvtrap ] ) , was fabricated to demonstrate the in - vacuum dac system .fabrication processes for the gen v trap are similar to those described for the gtri gen ii trap in ref . .the trap as designed contains 86 dc electrodes ; for this work a number of non - critical electrodes on the ends of the trap are shorted together so that only 78 dac channels are required .these are supplied entirely by the two in - vacuum ad5730 dacs , with two channels reserved for diagnostics .the trap housing is machined from 316 stainless steel .figure [ fig : iemitassembly ] shows an exploded view of the in - vacuum package .the two - board design ( see sec . [sec : config ] ) introduces the additional complexity of providing interconnects between the trap and regulator boards .spring - loaded contact pins ( interconnect devices inc ., part 100785 - 002 ) are chosen for their vacuum compatible materials , an overall length that allows placement of the regulator board ( with attached edge connector ) well below the trap board , and a high pin density that permits multiple ground connections between the boards .the pins are supported and spaced by two peek plastic spacers that are glued with h21d epoxy into rectangular slots in the housing .assembly view of housing with peek edge connector . ]the ion trap package is designed to fit into a 4.5 spherical octagon ( kimball physics mcf450-sphoct - e2a8 ) .the peek edge connector ( sullins wmc10dteh ) is mounted to grooves in the octagon using custom `` groove grabbers '' ( kimball physics ) .the groove grabbers also support a frame to hold the neutral oven and a shield to protect the trap from overspray from the oven .the chamber is pumped by a 20 l / s ion pump and a titanium sublimation pump .the in - vacuum edge connector is a configuration with ten contacts for each side of the inserted regulator board .the contacts on the edge connector are soldered to kapton - coated uhv vacuum wire with ausn eutectic solder .each signal wire is bundled with a ground wire into a twisted pair .this provides a roughly 100 characteristic impedance that is matched by resistive termination on the regulator board .the other end of the wiring is connected to a 25-pin peek d - sub connector that mates with a vacuum feedthrough .prior to installation of the trap and regulator boards , the vacuum chamber along with peek spacers and peek edge connector are prebaked in vacuum at 240 for 470 hours .after installation in the vacuum chamber , the package is baked in vacuum at 200 for 50 hours .rf crosstalk may introduce bit errors in communication with the in - vacuum dacs .we place bounds on this effect by observing two diagnostic dac channels not connected to any electrode .a voltage waveform is sent repeatedly to the diagnostic channels and the resulting dac potentials are recorded on a digital oscilloscope ( fig .[ fig : envelopetest ] ) .after recorded traces while applying rf ( 300 vpp , 37 mhz ) to the trap , no measurable update error is observed .this corresponds to a bit error rate of or lower .the rf noise pickup observed in fig .[ fig : envelopetest 
] is not expected to appear on dac lines going to the trap . traces of a discretized sine and cosine signal generated by the in - vacuum dacs .the noise on the traces is due mostly to rf pickup on the diagnostic lines at the edge connector . ] ions are loaded and confined in the trap using standard techniques described in greater detail elsewhere .neutral ca atoms are supplied by an oven located below the trap board .the atoms enter through the loading slot ( fig .[ fig : genvtrap ] ) and are photoionized 60 m above the trap surface .ions are confined radially via a ponderomotive pseudopotential generated by applying rf to the rf electrodes .potentials applied to the dc electrodes by the dacs confine the ion axially at the desired location along the trap .ions are transported by varying the dc trapping potentials to move the location of the potential minimum .trapped ions are detected and cooled with the cycling transition at 397 nm .an additional beam at 866 nm repumps ions from the metastable level .all laser beams are positioned parallel to the plane of the ion trap .ion fluorescence emitted perpendicular to the trap plane passes through the mesh shield ( fig .[ fig : iemitassembly ] ) and is focused by a lens assembly onto a photomultiplier tube and a charge - coupled device ( ccd ) camera . the storage lifetime with doppler cooling is as long as several hours .ion dark lifetimes ( survival rates without doppler cooling ) are measured over the non - slotted region of the trap .an acousto - optic modulator ( aom ) is normally used for switching the 397 nm cooling beam on and off . to ensure that leakage through the aom does not affect the measured lifetimes ,a mechanical shutter is used during the dark time of this measurement . as shown in fig .[ fig : darklifetime ] , the ion dark lifetime ( 50% survival fraction ) is around 100 s. survival fraction for stationary ions ( black circles ) and ions transported ( red triangles ) at 1 m / s during the experiment dark time .the trap rf is 300 vpp at 53.17 mhz with axial well depth mev . ] in previous surface traps , ions were transported at a speed of 1 m / s using standard air - side dacs with 500 khz update rate . by contrast , even after optimizing packet sizes as described in sec .[ sec : software ] , update rates for the in - vacuum dacs are limited to 40 khz .a potential concern with the slower update rate is distortion of the transport waveforms , which must now contain fewer voltage update steps to achieve 1 m / s transport . nevertheless , transport tests using the in - vacuum dacs demonstrate effective ion transport for many meters ( multiple round trips over a 1 mm region of the trap ) without significant loss .in particular , when the ion is transported at 1 m / s for the duration of the dark time , the measured dark lifetime is 70 s ( fig .[ fig : darklifetime ] ) , corresponding to 70 meters of transport in the dark .fluorescence measurements confirm that the ion is indeed transporting out of range of the cooling beam during each round - trip .carrier transition and adjacent motional sidebands for one trapped ion as a function of the 729 nm laser frequency .the axial mode frequency is 1.5 mhz with trap rf 300 vpp at 53.17 mhz ( mathieu parameter ) . 
]the electric quadrupole transition at 729 nm ( ) is used to measure ion motional sidebands around a carrier ( motion independent ) transition .a weak magnetic field of gauss lifts the degeneracy of zeeman sublevels .figure [ fig : oneionmodes ] shows the carrier transition with resolved axial and radial sidebands along with cross terms and higher order sidebands .the measured ion axial frequency is 1.5 mhz , with radial secular modes at 5.5 mhz and 6.4 mhz .axial mode frequency stability .measurements just after ion reloads are indicated by solid squares .the linear fit gives a 100(50 ) hz / hr drift .] mode frequency stability is a critical factor for gate fidelity in quantum information processing .the ion axial frequency is measured over the unslotted region of the trap ( m , 660 m from the load zone center ) , repeatedly over a two hour period which includes several ion reloads .the resulting set of measurements is shown in fig .[ fig : modestability ] .the frequency drift is roughly 100(50 ) hz / hr , and ion loading does not noticeably shift the mode frequency , as would result from laser charging of the trap surface or drifts in dc trapping potentials .radial mode stabilities are not measured , as these are set mainly by the rf stability , which is independent of the in - vacuum dac system .the heating rate of the ion axial mode ( 1.5 mhz ) is measured over the unslotted region at m .the ion is sideband cooled to an average phonon occupation number of in the axial mode .it is then allowed to sit in the dark for a controlled delay time between 0 and 3 ms . following the delay ,the red and blue sidebands are measured and is calculated from where is the ratio of sideband strengths .the results are shown in fig .[ fig : heatingrate ] . a linear fit to the data gives a heating rate of 0.8(1 ) quanta / ms .heating rate measurement showing the number of quanta in the axial mode as a function of time in the dark . the resulting heating rate is 0.8(1 )quanta / ms .the ion height is 60 m above the trap surface . ] to test connectedness of all electrodes to their corresponding dac channels and to probe for possible stray electric fields introduced by the in - vacuum dacs , the ion position and stray axial field strength are mapped out over the length of the trap structure .nonzero is indicated by a shift in the ion s axial position as the harmonic trapping potential is scaled in strength .results are shown in fig .[ fig : axialscan ] .the ion position is measured via fluorescence imaging , with ccd camera pixel size calibrated to trap features of known length .electrode failure would result in strong excursions of the ion from the calculated positions ; no such excursions are observed .stray axial field measurements over the trap structure .gray box indicates positions within the loading slot ( edge m ) .field magnitudes over the unslotted region are 500 v m or smaller . 
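The phonon-number extraction in the heating-rate measurement above presumably uses the standard sideband-ratio relation, which is the formula lost from the text: n̄ = r/(1 − r), with r the ratio of red to blue sideband excitation. The sketch below applies it to an illustrative delay scan (made-up numbers, not the measured data) and extracts a heating rate from a linear fit.

```python
import numpy as np

def nbar_from_sidebands(p_red, p_blue):
    """Mean phonon number from the sideband ratio r = p_red / p_blue via nbar = r / (1 - r),
    valid for a thermal motional state after resolved-sideband cooling."""
    r = np.asarray(p_red, dtype=float) / np.asarray(p_blue, dtype=float)
    return r / (1.0 - r)

# Illustrative delay scan (made-up numbers, not the measured data).
delays_ms = np.array([0.0, 1.0, 2.0, 3.0])          # wait time in the dark
p_red = np.array([0.045, 0.237, 0.315, 0.357])      # red-sideband excitation probability
p_blue = np.array([0.50, 0.50, 0.50, 0.50])         # blue-sideband excitation probability
nbars = nbar_from_sidebands(p_red, p_blue)

heating_rate, nbar0 = np.polyfit(delays_ms, nbars, 1)
print("nbar(t) =", np.round(nbars, 3))
print("heating rate ~ %.2f quanta/ms, nbar(0) ~ %.2f" % (heating_rate, nbar0))
```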
] quantum information processing requires storage and manipulation of multiple trapped ions .basic multi - ion capability is demonstrated with the in - vacuum dac system : loading of up to five ions in a single trapping well , and co - transporting of two ions to the non - slotted region of the trap .figure [ fig : multipleions ] shows ccd camera images of multiple trapped ions above the load slot .we regularly transport two ions out of the load zone and resolve motional sidebands corresponding to center - of - mass and stretch axial modes ( frequencies and , respectively ) .fluorescence images in the load slot of one to five trapped ions . ]in - vacuum electronics reduce the number of electrical feedthroughs required to control a 78-electrode microfabricated ion trap by nearly an order of magnitude . the design scales favorably to more complex trap geometries , with each additional 40-channel dac requiring only one additional pair of vacuum feedthrough lines . commercially available integrated circuitsare decapsulated from original packaging to produce fully uhv - compatible trap and regulator circuit boards .a serial interface allows communication with an air - side computer and controller board .the in - vacuum electronics are used successfully to control loading and manipulation of ions in a gtri gen v surface - electrode ion trap .trap performance is characterized by axial mode stability , heating rate , stray axial electric fields , and ion dark lifetime .the integrated system performs comparably to earlier traps with standard air - side electronics , demonstrating the potential of this approach to simplify hardware requirements for the increasingly complex schemes of trapped - ion quantum computing .the large die area of the commercial dacs with respect to the active region of the ion trap will place an eventual limit on the number of channels that can be driven in a single electronics package .increasing trap complexity would require reducing the dac die area and developing a higher connection - density alternative to wirebonds for the trap chip .scaling could also be improved by incorporating multiplexers in the in - vacuum circuitry .higher - voltage operation in the 10 to 100 v range could be achieved by following the in - vacuum dacs with decapsulated high - voltage amplifiers supplied by an additional pair of power feedthroughs . however , the die footprint and power dissipation of the high voltage amplifiers might limit the number of channels that could be amplified . 
Integration with cryogenic ion trapping systems would pose the additional challenge of thermally isolating the in-vacuum electronics from the trap structure while keeping the components warm enough to operate.

This material is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under Space and Naval Warfare Systems Command (SPAWAR) contract number N6600112C2007. All statements of fact, opinion, or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government.

References:
doi:10.1038/nature00784
doi:10.1103/physrevlett.96.253003
http://stacks.iop.org/1367-2630/15/i=3/a=033004
doi:10.1007/s00340-011-4788-5
doi:10.1103/physreva.84.032314
http://stacks.iop.org/1367-2630/13/i=7/a=075018
http://stacks.iop.org/1367-2630/15/i=8/a=083053
http://stacks.iop.org/0953-4075/42/i=15/a=154006
doi:10.1109/tepm.2006.882499
doi:10.1016/j.physrep.2008.09.003
doi:10.1063/1.4802948
doi:10.1103/physreva.61.063418
doi:10.1063/1.3058605
the advent of microfabricated ion traps for the quantum information community has allowed research groups to build traps that incorporate an unprecedented number of trapping zones . however , as device complexity has grown , the number of digital - to - analog converter ( dac ) channels needed to control these devices has grown as well , with some of the largest trap assemblies now requiring nearly one hundred dac channels . providing electrical connections for these channels into a vacuum chamber can be bulky and difficult to scale beyond the current numbers of trap electrodes . this paper reports on the development and testing of an in - vacuum dac system that uses only 9 vacuum feedthrough connections to control a 78-electrode microfabricated ion trap . the system is characterized by trapping single and multiple ions . the measured axial mode stability , ion heating rates , and transport fidelities for a trapped ion are comparable to systems with external ( air - side ) commercial dacs .
in recent years , we faced rapidly growing interest in analysing systems that obey the constraints of impossibility of instant transmission of messages .the constraints , called _ non - signaling _ , are satisfied by quantum mechanics , hence any limitations they pose are also present in quantum mechanics .however , in so - called non - signaling theories , there are objects that exhibit behaviour forbidden by quantum mechanics .one of the basic blocks of non - signaling theories is the so called popescu - rohrlich box ( pr - box ) a device that possesses much stronger correlations than those allowed by quantum mechanics .it has a remarkable property of being able to simulate a _ random access code _( rac ) with the support of only one bit of communication .suppose that alice wants to send to bob one of two bits , so that bob has the choice which bit he wants to learn about .suppose further that the following conditions are met : first , when bob gets perfect knowledge about bit , he must have no knowledge about the other bit , second , no communication from bob to alice is allowed , i.e. , bob should not tell alice which bit he wants to learn , as well as after the execution of the protocol alice should still not know which bit he learned .such a scenario is called random access code .this task is impossible , when alice and bob share either classical or quantum states .however , if alice and bob share the pr - box , they can implement it by sending just _one _ bit .this peculiar feature was used to formulate the principle of _ information causality _ . in this context in the notion of a _ racbox _ was introduced .it is a box which can implement a random access code with the support of one bit of communication .it was shown that any non - singaling racbox is equivalent to pr - box .a natural question is whether one can have a quantum analogue of this phenomenon .namely , we consider _ quantum random access code _, where alice has two qubits , and bob wants to learn about the qubit of his choice . again ,communication from bob to alice is not allowed , and bob should not learn about the other qubit .let us emphasize that this is a different concept from quantum random access code introduced in and further considered in where qubits are used to simulate the standard random access code the one with classical inputs and outputs by encoding input classical bits into the quantum system , and then decoding the chosen classical bit by measurement . in our case ,both inputs and outputs of the quantum random access code are quantum states .one can now ask , whether such functionality can be achieved by means of a _ quantum non - signaling box _ , i.e. , the non - signaling box that accepts qubits as inputs .such a box can be viewed as a quantum channel , with two inputs and two outputs , with property , that the statistics of the output at one site do not depend on the input at the other site . 
in this paper, we propose a quantum non - signaling box which , if supported by two bits of classical communication , implements the above quantum version of rac .the box is built out of two pr - boxes and two maximally entangled quantum states .we also prove that two bits of communication are necessary , by using analogy with quantum teleportation .we then show , that no quantum non - signaling box can give rise to a fully quantum rac .namely , if bob inputs _ superposition _ of decisions on which qubit he wants to learn about , the output must be a mixture of states of alice s qubits rather than superposition .this resembles the question of whether quantum computer can be fully quantum asked in , where the superposition of halt times was impossible .in this section we describe the standard random access code , and a closely related object called `` racbox '' .namely , suppose that alice has two bits and , and bob wants to learn one of them .we want bob to have a choice , which bit he would like to learn , but if he learns one of the bits , then the other should be lost .moreover at any time alice can not know bob s choice . as already mentioned in introduction , such a task is called random access code. there does not exist classical or quantum communication protocol , that can perform this task , which is easy to see at least in classical case .indeed , the only thing alice can do , so that bob can read the bit of his choice , is to send both bits .but in such case the condition that he should not learn the other bit is not met .let us also note , that if we weaken the definition of random access code and will not assume , that bob can not learn two bits , then such a weaker version of random access code needs two bits of classical communication .the situation changes if alice and bob share so called pr - box .pr - box is a bipartite device shared by two distant parties alice and bob .each of the parties can choose one of two inputs : alice and bob .the parties have two binary outputs .the box is defined by a family of joint probability distributions which satisfy p(ab|xy)= \ { 12 ab = xy , + 0 . .the pr box can be interpreted in two ways .on one hand it can be considered as a `` super - quantum '' resource , as it allows for correlations , that can not be obtained from measuring bipartite quantum state . on the other hand , it can be treated as a classical channel , with two remote inputs and two remote outputs .the channel has a special property : its implementation requires 1 bit of communication , but if it works as a `` black box '' - i.e. if the parties can only use the box through the inputs and outputs , it can not be used for communication - we say it is non - signaling .now , in it is shown , that if alice and bob share a pr box , they can implement random access code by means of just one bit of communication . in a converse question was answered : namely , an object was defined called _racbox_. it is a box that implements rac if supported by one bit of communication from alice to bob ( see fig .[ fig : racbox ] ) .it was then shown that a _non - sinaling _ racbox is equivalent to pr box .in this section we define non - signaling quantum random access code box ( qrac - box , cf . 
) , which performs a quantum version of random access code if supplemented with 2 bits of communication .qrac - box is a bipartite device shared by alice and bob .alice has a two - qubit input and a two - bit classical output ( later we show that this is the smallest possible size of alice s classical output .bob has two inputs : a one - qubit input and a two - bit classical input .he also has a one - qubit output ( see fig .[ fig : qrac ] ) .we assume that the device obeys quantum mechanical laws , i.e. , it is trace preserving completely positive map .we further assume that device can not signal from one party to the other party , i.e. , one party s output can not depend on the other party s input .now , such a device will be called qrac - box , if it possesses the following property .suppose that alice inputs the first qubit in a state and the second qubit in a state .she then obtains as her output .when bob s classical input is equal to alice s classical output and his input qubit is in a state then we require that he obtains a state as his output . on the other hand , when bob s input is equal to alice s output and his input qubit is in a state , then we require that he obtains a state as his output . as a result ,if alice sends her output to bob , then bob can obtain alice s qubit of his choice , by simply inputing .let us note , that from the fact that device is non - signaling , i.e. , in particular , alice s output does not depend on bob s input , the above definition of qrac - box is consistent , i.e. , the classical output of alice can be fed as bob s input without causing a contradiction .if , on the contrary , the output of alice would depend on input of bob , then it might happen that whenever bob wants to input , then this changed output of alice , so that it were no longer and we would obtain a contradiction , i.e. , bob would not be able to input alice s output . finally , let us note , that the above properties of qrac - box imply , that the box obtained from feeding alice s classical input as bob s classical input is a quantum channel too ( which may not be no - signaling anymore ) with three inputs , and one output , see fig .[ fig : lambda ] .we shall call the channel qrac .it is a sum of subchannels ( which are completely positive trace non - increasing maps ) labeled by alice s outputs = _ a_a .[ eq : lambda ] where by we denote the channel representing qrac .as mentioned above , we assumed that qrac - box obeys the laws of quantum mechanics , i.e. , it is a quantum channel ( with several inputs and outputs ) of some particular features .a possible way to implement such a channel in lab is to send inputs from alice and bob to a joint place , perform the quantum operation , e.g. , by means of a circuit composed of quantum gates ( nowadays more and more complicated circuits are possible to implement in labs ) , and resend the outputs of the channel back to alice and bob . 
In such a scenario, quantum communication is required to implement the channel. However, from the point of view of Alice and Bob, the channel is a black box; hence it cannot be used to signal from Alice to Bob or vice versa. The situation is thus analogous to that of the PR-box: the latter is a classical channel which can be implemented by means of classical communication, yet considered as a black box it cannot itself be used to communicate. One can also consider another way of implementing such channels, through pre- and post-selection, as previously proposed and realised experimentally. The two ways are strictly connected: since the channel requires communication to implement it, implementing it without communication requires some pre- or post-selection (which is a hidden form of communication).

Let us now suppose that, instead of preparing his qubit in the state |0⟩ or |1⟩ and decoding the first or the second of Alice's qubits, Bob prepares his qubit in a superposition α|0⟩ + β|1⟩. What will his output state be when his classical input is equal to Alice's output? Will he obtain a superposition of the states |φ⟩ and |ψ⟩? Below we answer these questions. First we show that the channel QRAC defined in the previous section produces a mixture of those states rather than a superposition. Then we argue that each of the subchannels also produces such a mixture (now subnormalized).

Consider then Λ of eq. ([eq:lambda]). We extend this trace-preserving completely positive map to a unitary operation U acting on the system and an environment. Let us check how it acts when Bob prepares his qubit in the state |0⟩ or |1⟩ and his classical input is equal to Alice's output. We have

U(|φ⟩_{A1} |ψ⟩_{A2} |0⟩_R |E⟩_E) = |φ⟩_B |Φ^{(0)}⟩_{ARE},
U(|φ⟩_{A1} |ψ⟩_{A2} |1⟩_R |E⟩_E) = |ψ⟩_B |Φ^{(1)}⟩_{ARE},

where we renamed Alice's first input register as Bob's output register B, |E⟩_E is the initial state of the environment, while |Φ^{(0)}⟩_{ARE} and |Φ^{(1)}⟩_{ARE} are the final states of Alice's second register, Bob's input register and the environment. Note that the two input states above are orthogonal; hence either |φ⟩ is orthogonal to |ψ⟩, or |Φ^{(0)}⟩ is orthogonal to |Φ^{(1)}⟩. Since for all |ψ⟩ non-orthogonal to |φ⟩ the state |Φ^{(1)}⟩ is orthogonal to |Φ^{(0)}⟩, by continuity |Φ^{(1)}⟩ has to be orthogonal to |Φ^{(0)}⟩ also for |ψ⟩ orthogonal to |φ⟩. When Bob prepares his qubit in the state α|0⟩ + β|1⟩, then by linearity

U(|φ⟩_{A1} |ψ⟩_{A2} (α|0⟩ + β|1⟩)_R |E⟩_E) = α |φ⟩_B |Φ^{(0)}⟩_{ARE} + β |ψ⟩_B |Φ^{(1)}⟩_{ARE}.

Tracing out Alice's second register, Bob's input register and the environment, and using the orthogonality of |Φ^{(0)}⟩ and |Φ^{(1)}⟩, we obtain Bob's output state

ρ_B = |α|² |φ⟩⟨φ|_B + |β|² |ψ⟩⟨ψ|_B. [eq:mixture]

We see that Bob obtains a mixture rather than a superposition of the states |φ⟩ and |ψ⟩. Now consider the subchannels Λ_a and suppose, by contradiction, that some of them produces a state which is not equal to such a mixture. Denote the (subnormalized) outputs by ρ_B^a. One then easily sees that there exist unitaries U_a such that

Σ_a U_a ρ_B^a U_a^† ≠ |α|² |φ⟩⟨φ|_B + |β|² |ψ⟩⟨ψ|_B. [eq:non_mixture]

Thus, starting from the QRAC-box with the above subchannels, we construct a new QRAC-box as follows: Bob, when his classical input equals a, applies the transformation U_a to his output. One checks that this defines a valid QRAC-box, with subchannels Λ'_a = U_a Λ_a U_a^†. Therefore, due to ([eq:non_mixture]), the resulting QRAC given by Σ_a Λ'_a produces a state which is not a mixture. However, this contradicts our first result, that the QRAC resulting from an arbitrary QRAC-box necessarily produces a mixture.

Let us now find a lower bound on the minimal amount of classical information which Alice has to send to Bob so that he can retrieve the qubit of his choice.
in the next section we present a box which achieves this bound .let us assume that bob prepares his input qubit in a state and tries to obtain alice s first qubit ( similar analysis applies when bob prepares his input qubit in a state and tries to obtain alice s second qubit ) .we know from the previous section that there is no need to consider the case when bob prepares his qubit in a state as he can simply measure it and depending on a result of the measurement input a state or . in the case when bob prepares his input qubit in a state qrac - box acts just like quantum teleportation .indeed , if alice sends her classical output to bob and bob uses it as his classical input , then he obtains alice s first qubit . now , since we require that qrac - box is non - signaling , we just need to argue , that if alice and bob have non - signaling resources , then they need at least two bits to perform teleportation .however , this was already proven in .namely it is argued that by combining quantum teleportation with dense coding , one would obtain instantaneous communication , thereby violating causality .we show how one can simulate qrac - box with two maximally entangled pairs and two pr - boxes .the protocol is based on quantum teleportation and implementation of classical rac with pr - boxes ( see fig . [fig : eprpr ] ) .let us suppose that alice and bob apart from qubits which they input into the box share two pairs of qubits and two pr - boxes .each pair of qubits is in the maximally entangled state latexmath:[\[\begin{aligned } pr - box has two inputs and two outputs one input and output is on alice s side and one input and output is on bob s side . when alice and bob input and into the box they obtain outputs and with probability in order to implement qrac alice performs measurement in the bell basis x_a^a_0z_a^a_1|^+_aa ( a_0,a_1 \{0,1 } ) on the first qubit and her qubit from the first maximally entangled pair and obtains two - bit result .after the measurement bob s qubit from the first maximally entangled pair is in the state . similarly , alice performs measurement in the bell basis on the second qubit her qubit from the second maximally entangled pair and obtains two - bit result .now alice inputs into the first pr - box , and into the second pr - box and obtains outputs and .alice s output bits of the qrac - box will be and .next alice sends two - bit message to bob .if bob wants to obtain alice s first qubit ( corresponding to the state of his qubit input ) he inputs both into the first pr - box and into the second pr - box .he obtains outputs and respectively .he then calculates and .the last equality in each expression follows from eq .[ pr - box ] .finally he applies unitary operation to his qubit from the first maximally entangled pair .if bob wants to obtain alice s second qubit ( corresponding to state of his qubit input ) he inputs into both the first pr - box and the second pr - box , obtains outputs and , calculates and ( now and ) and applies unitary operation to his qubit from the second maximally entangled pair .after application of unitary operation the qubit will be in a state equal to the initial state of the first ( second ) of alice s qubits .bob also discards his qubit from the second ( first ) maximally entangled pair . 
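the classical building block used above -- simulating a classical random access code with one pr - box and a single bit of communication -- can be written down in a few lines . the sketch below simulates only that building block , not the full two - epr - pair protocol described in this section .

```python
import random

def pr_box(x, y):
    # Non-signalling PR box: outputs a, b are locally uniform with a XOR b = x AND y.
    a = random.randint(0, 1)
    return a, a ^ (x & y)

def classical_rac(a0, a1, i):
    """Bob recovers Alice's bit a_i from one PR box plus one communicated bit."""
    alice_out, bob_out = pr_box(a0 ^ a1, i)   # Alice inputs a0 XOR a1, Bob inputs his choice i
    message = a0 ^ alice_out                  # the single classical bit Alice sends
    return message ^ bob_out                  # = a0 XOR (a0 XOR a1)*i = a_i

for a0 in (0, 1):
    for a1 in (0, 1):
        assert classical_rac(a0, a1, 0) == a0
        assert classical_rac(a0, a1, 1) == a1
```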
in a general case when bob prepared his qubit input in a state he first performs a measurement on it in computational basis and then conditioned on the result of the measurement he decodes one of alice s qubits .let us note , that the constructed box is non - signaling , since it is obtained by local operations on non - signaling resources such as pr boxes and maximally entangled states .also our construction satisfies the condition that given bob s classical input and alice s classical output the transformation from alice s input quantum state to bob s output quantum state is a trace preserving completely positive map .indeed , the transformation results from some local quantum operations and classical communication where communication is used to implement pr boxes. we also note , that by applying dense coding , we can change the proposed qrac - box into one that operates solely with qubits , i.e. , instead of alice s two - bit output , and bob s two - bit output , they will have 1 qubit output and input , respectively . in more detail , our`` qubit - only '' qrac - box will consist of the original qrac - box , supplemented by maximally entangled pair .the two bits of outputs will be sent by means of this pair .note that the pair will be treated as a part of the qubit - only qrac - box , and will not be seen by users of the box , who will only see inputs and outputs , now all of them quantum .thus , we obtain that a quantum random access code can be performed by use of a quantum non - signaling box supplemented by one qubit of communication .we introduced a non - signaling quantum random access code box a device which enables bob to obtain one of two of alice s qubits when alice sends bob two bits of classical information .it is important that bob can choose which qubit he wants to obtain .we investigated properties of such a box and showed that two bits is minimum amount of classical information which alice has to send to bob , i.e. , if there was less communication , the box must be signaling .we also showed how the box can be implemented with entanglement and pr - boxes .we thank w. kobus and m. piani for valuable comments on the manuscript .this work is supported by the erc advanced grant qolaps , the national centre for research and development grant quasar and national science centre project maestro dec-2011/02/a / st2/00305 .part of this work was done in national quantum information centre of gdansk ( kcik ) .
a well known cryptographic primitive is the so-called _ random access code_. namely, alice is to send bob one of two bits, in such a way that bob can choose which bit he wants to learn. however, at no point should alice learn bob's choice, and bob should learn only the bit of his choice. the task is impossible to accomplish by means of either classical or quantum communication. on the other hand, a concept of correlations stronger than quantum ones, exhibited by the so-called _ popescu - rohrlich box _, was introduced and widely studied. in particular, it is known that the popescu-rohrlich box enables simulation of the random access code with the support of one bit of communication. here, we propose a quantum analogue of this phenomenon. namely, we define an analogue of a random access code where, instead of classical bits, one encodes qubits. we provide a quantum non-signaling box that, if supported with two classical bits, makes it possible to simulate a quantum version of the random access code. we point out that two bits are necessary. we also show that a quantum random access code cannot be fully quantum: when bob inputs a _ superposition _ of two choices, the output will be in a mixed state rather than in a superposition of the required states.
the increasing use of engineered nanomaterials ( enm ) in hundreds of consumer products has recently raised concern about their potential effect on the environment and human health in particular . in nanotoxicology , in vitro dose - escalation assays describe how cell lines or simple organisms are affected by increased exposure to nanoparticles .these assays help determine materials and exposure levels .standard dose - escalation studies are sometimes completed by more general exposure escalation protocols , where a biological outcome is measured against both increasing concentrations and durations of exposure .cost and timing issues usually only allow for a small number of nanoparticles to be comprehensively screened in any study . therefore, both one- and two - dimensional escalation experiments are often characterized by small sample sizes .furthermore , data exhibits natural clusters related to varying levels of nanoparticles bio - activity .the two case studies presented in section [ sectionappli ] provide an overview of the structure of typical data sets obtained with both experimental protocols . beyond dose - response analysis ,nanomaterial libraries are also designed to investigate how a range of physical and chemical properties ( size , shape , composition , surface characteristics ) may influence enm s interactions with biological systems .the nano - informatics literature reports several quantitative structure activity relationship ( qsar ) models .this exercise is conceived as a framework for predictive toxicology , under the assumption that nanoparticles with similar properties are likely to have similar effects .most of existing qsar models summarize or integrate experimental data across times , doses and replicates as a preprocessing step , before applying traditional data mining or statistical algorithms for regression .for example , use a modified student s -statistic to discretize outputs in two classes ( toxic or nontoxic ) and a logistic regression model to relate toxicity to physico - chemical variables . use the area under the dose - response curve as a global summary of toxicity and they model dependence on predictors via a regression tree . both approaches , while reasonably sensible , ignore the uncertainty associated with data summaries and can lead to unwarranted conclusions as well as unnecessary loss of information . summarize toxicity profiles using a new definition of toxicity , called _ the probability of toxicity _ , which is defined as a linear function of nanoparticle physical and chemical properties .while this last approach solves the issue of uncertainty propagation , it still makes it impossible to predict full dose - response curves from nanoparticle characteristics .moreover , the use of regression trees is inherently appealing , as they are able to model nonlinear effects and interactions without compromising interpretation .we aim to extend regression tree models to account for structured multivariate outcomes , defined as toxicity profiles of nanoparticles , measured over a general exposure escalation domain .multivariate extensions of the regression tree methodology have been proposed by . in this paper ,the original tree - building algorithm of is modified to handle multivariate responses for commonly used covariance matrices , such as independence or autoregressive structures . proposes a similar method for an independent covariance structure . 
develop regression tree models for functional data , by representing each individual response as a linear combination of spline basis functions and using the estimated splines coefficients in multivariate regression trees .an alternative for longitudinal responses consists of combining a tree model and a linear model : replace the fixed effects of the traditional linear mixed effects model by a regression tree .the linear random effects are unchanged . fit a semi - parametric model , containing a linear part and a tree part , for multivariate outcomes in genetics .the linear part is used to model main effects of some genetic or environmental exposures .the nonparametric tree part approximates the joint effect of these exposures .finally , develop regression tree models for longitudinal data with time - dependent covariates . in thissetting , measures for the same individual can belong to different terminal nodes .other extensions of standard regression trees include bayesian approaches , where tree parameters become random variables . introduce a bayesian regression tree model for univariate responses .the method is based on a prior distribution and a metropolis hastings algorithm which generates candidate trees and identifies the most promising ones .this methodology has since been extended to so - called _ treed _ models , where a parametric model is fitted in each terminal node [ ] , to a sum - of - trees model [ ] , and to incorporate spatial random effects for merging data sets [ ] . model nonstationary spatial data by combining bayesian regression trees and gaussian processes in the leaves .this approach is extended to the multivariate gaussian process with separable covariance structure in .building on previous contributions , we propose a new method to analyze the relationship between nanoparticles physico - chemical properties and their toxicity in exposure escalation experiments .we extend the bayesian methodology of to allow for dose- and time - response kinetics in terminal nodes .our work is closely related to the methodology introduced in .however , our model is specifically adapted to exposure escalation experiments , as observations for the same nanoparticle at different doses and times can not fall in separate leaves of the tree .therefore , the binary splits of the tree only capture structure activity relationships instead of the general increase of toxicity with exposure .a global covariance structure accounts for correlation between measurements at different doses and times for the same nanoparticle .our approach is able to model nonlinear effects and potential interactions of physico - chemical properties without making parametric assumptions about toxicity profiles .it also addresses the issues associated with conventional qsar models by combining evidence across measurements for all doses and times in a general experimental design .the proposed model is particularly versatile , as it provides scores of importance for physico - chemical properties and visual assessment of the marginal effect of these properties on toxicity .the rest of this paper is organized as follows : section [ sectionmodel ] describes the regression model for dose - response data and section [ sectionprior ] describes the corresponding prior model . 
the resulting posterior distribution and the associated mcmc algorithm are presented in section [ sectionalgo ] .the model is extended to the case of dose- and time - response surfaces in section [ sectiongeneralcase ] .the method is applied to a library of 24 metal oxides in section [ sectionappli ] and section [ sectiondiscu ] concludes this paper with a discussion .we first consider the case of a typical dose escalation experiment , where a biological outcome is measured over a protocol of increased nanoparticle concentration .this case will be expanded in section [ sectiongeneralcase ] to include more general exposure escalation designs .let denote a real - valued response associated with exposure to nanoparticle and replicate at dose , for , and ] . in thissetting , two outcomes associated with the same nanoparticle at similar doses are assumed to be more correlated than measurements taken at distant doses , for any replicate .the major advantage of this assumption is related to a reduced representation of a high - dimensional covariance matrix , which is now fully characterized in terms of a -dimensional variance parameter and a -dimensional correlation .the binary tree recursively splits the predictor space into two subspaces , according to criteria of the form vs , for and .each split defines two new nodes of the tree , corresponding to two newly created subspaces of predictors .let be the set of terminal nodes of tree .we model the dose - response curves in each terminal node as a linear combination of spline basis functions .unlike parametric models such as log - logistic , spline functions do not assume a particular shape for the curve .this makes our model fully applicable to sub - lethal biological assays , which are not expected to follow a sigmoidal dose - response dynamic .however , if needed , the spline model can easily allow for possible shape constraints , such as monotonicity , by using a modified basis [ ] .this flexibility makes the use of spline basis representations potentially preferable to gaussian process priors or similar smoothers .a formal comparison is , however , outside the scope of this manuscript .our chosen functional representation is easily extended to two - dimensional response surfaces ( section [ sectiongeneralcase ] ) .let denote uniform b - spline basis functions of order on ] .let be the dose - escalation design sequence : } , \label{distphi}\end{aligned}\ ] ] where is the indicator function , is a hyperparameter matrix , and are defined through its diagonal , subdiagonal , and superdiagonal elements as follows : , , . in practice , we choose , the identity matrix of size , to put more weight on low values of and assume weak prior correlations between responses at different doses .this last distribution completes the prior model .we now turn to posterior inference on parameters , given the observations .we are interested in the posterior distribution .the rest of this section describes a markov chain monte carlo algorithm for sampling from this distribution , as the number of potential trees prevents direct calculations .our gibbs sampler is adapted from the algorithms of and , with changes due to the specific structure of our model . 
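before detailing the sampler , note that the b - spline design matrix used in the leaf model above is straightforward to construct . the sketch below builds it with scipy ; the spline order and the knot placement are left as arguments , since the exact choices used in the paper are given later in the case study .

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design_matrix(doses, interior_knots, degree=3):
    """Evaluate all B-spline basis functions at the design doses.

    Returns an (n_doses, n_basis) matrix B with B[k, j] = B_j(dose_k);
    doses are assumed to lie within [min(doses), max(doses)].
    """
    lo, hi = float(min(doses)), float(max(doses))
    t = np.r_[[lo] * (degree + 1), list(interior_knots), [hi] * (degree + 1)]
    n_basis = len(t) - degree - 1
    B = np.zeros((len(doses), n_basis))
    for j in range(n_basis):
        c = np.zeros(n_basis)
        c[j] = 1.0                        # pick out the j-th basis function
        B[:, j] = BSpline(t, c, degree)(doses)
    return B
```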
at each iteration ,the algorithm performs a joint update of , conditionally on the rest of the parameters , followed by standard gibbs component - wise updates of each variance parameter .the joint tree and terminal nodes spline coefficients update is decomposed into the draw of in ( [ eqtr ] ) is performed by the metropolis hastings algorithm of , which simulates a markov chain of trees that converges to the posterior distribution .the proposal density suggests a new tree based on four moves : grow a terminal node , prune a pair of terminal nodes , change the split rule of an internal node , and swap the splits of an internal node and one of its children s .the target distribution can be decomposed as follows : \\[-8pt]\nonumber & & \qquad \propto p({\mathcal{t } } ) \int p \bigl(\mathbf{y } { |}\bolds{\beta } , { \mathcal{t } } , \sigma^2 , \varphi_d , \tau^2 \bigr ) p \bigl ( \bolds{\beta } { |}{\mathcal{t } } , \sigma^2 , \varphi_d , \tau^2 \bigr ) \,d \bolds{\beta}.\end{aligned}\ ] ] the expression for the integral above is given in , in a closed form by conjugacy of the prior on .therefore , the draw of in ( [ eqtr ] ) does not require a reversible - jump procedure for spaces of varying dimensions , even if nodes are added or deleted . the proposal density of the metropolis hastings algorithm can be conveniently coupled with to simplify calculations [ ] .full conditional distributions for in ( [ eqbeta ] ) and variance parameters , and are available in . given posterior samples , predictive statistics are easily obtained via monte carlo simulation of , for .more precisely , let .at each iteration , the mcmc algorithm performs a draw from , followed by a draw of from the multivariate normal distribution . in our case studies ( section [ sectionappli ] ) , for example , we compare posterior summaries from the predictive distribution to observed dose - response data .we perform two series of posterior predictive checks : in the first one , the generated predictive samples are conditioned on the full set of dose - response curves , via the tree .the objective is to assess model adequacy and calibration .the second series studies model prediction accuracy using a leave - a - curve - out validation scheme , where each data curve is compared to the corresponding predictive sample obtained by fitting the tree on the remaining curves .posterior inference based on monte carlo samples is also used to derive inferential summaries about nontrivial functionals of the parameter / model space .the marginal effect of a physico - chemical property on the response can be represented by the partial dependence function of : let be a grid of new values for .then the partial dependence function is , where is the observation of in the data .for all doses , plotting the average of this function over monte carlo draws provides a visualization of the marginal effect of .this partial dependence function can also be extended to account for the joint marginal effect of two variables .similarly , posterior realizations can be used to report importance scores for each variable . for all , and the _ first - order _ and _ total _ sensitivity indices for variable , and represent the main and total influence , respectively , of this variable on the response [ ] . 
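the partial dependence summaries above reduce , for a fixed dose and a fixed posterior draw , to averaging model predictions over the observed covariates while the property of interest is forced to each grid value . a minimal sketch ( with a stand - in `predict` callable playing the role of the tree predictor ; in the paper this would additionally be averaged over monte carlo draws ) is :

```python
import numpy as np

def partial_dependence(predict, X, j, grid):
    """Average prediction when covariate j is forced to each grid value.

    predict : callable mapping an (n, p) covariate matrix to predictions
              (assumed available, e.g. a posterior-mean tree predictor)
    X       : observed covariate matrix, shape (n, p)
    j       : index of the physico-chemical property of interest
    grid    : 1-d array of new values for covariate j
    """
    pd = np.empty(len(grid))
    for k, v in enumerate(grid):
        Xk = X.copy()
        Xk[:, j] = v
        pd[k] = predict(Xk).mean()
    return pd
```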
unlike other metrics such as the variance reduction attributed to splits on the variable , sensitivity indices are robust to leaf model specifications and are therefore adapted for a dose - response leaf model .both indices are defined given an uncertainty distribution on the inputs , usually the uniform distribution on the covariates space .we follow and use a monte carlo scheme to approximate and , that accounts for unknown responses by using predicted values for a latin hypercube sampling design .more general exposure escalation protocols involve the observation of a biological outcome in association with a prescription of dose escalation ] . letting , be a replication index , we define as the outcome of interest , evaluated at dose , time and extend the model in ( [ eqmodel1 ] ) : , where is a random mean response surface and . to account for dependence between doses and durations of exposure , for each nanoparticle , we assume , where ] are autocorrelation parameters .the response surface in the terminal nodes of is modeled by a tensor product of two one - dimensional p - splines [ ] .let defined as in section 2.2 and denote b - spline basis functions of order on $ ] , with fixed knots . then , if is in the subset corresponding to the terminal node of , , where is a vector of spline coefficients associated to the terminal node .the prior model has the same global dependence structure as in section [ sectionprior ] , but now includes an additional independent term for time - covariance .let be the sequence of exposure times when toxicity was measured .we adapt prior ( [ distphi ] ) to preserve conjugacy and introduce a similar distribution for : } , \\p(\varphi_t ) & \propto&\bigl ( 1 - \varphi_t^2 \bigr)^{-(n_d(n_t-1))/2 } \exp\biggl ( - \frac{\gamma_{01 } - \varphi_t \gamma_{02 } + \varphi_t^2 \gamma_{03}}{2 ( 1 - \varphi_t^2 ) } \biggr ) \mathbb{i}_{\varphi_t \in[0,1]},\end{aligned}\ ] ] where , and are obtained by summing elements of the diagonal , subdiagonal , and superdiagonal of matrix parameter prior , constructed following the guidelines introduced in section [ sec3.3 ] .for the terminal nodes spline coefficient priors , we use a spatial extension of , a first order random walk prior based on the four nearest neighbours of splines coefficients , with appropriate changes for corners and edges : , where is a penalty band matrix of size , which extends matrix ( [ penmatrix ] ) to the two - dimensional case . for posterior inference , we add a step to generate in the gibbs sampler of section [ sectionalgo ] .a simulation study to assess model performance is described in . in the rest of this sectionwe illustrate our approach with experimental results from a case study reported by , measuring toxicity of 24 metal oxides on human bronchial epithelial ( beas-2b ) cells . after 24 h, lactate dehydrogenase ( ldh ) release was used to measure the death rate of cells exposed to eleven doses of metal oxides ( from 0 to 200 g ) , evenly spaced on the logarithmic scale .cell death is commonly used to screen for enm cytotoxicity without reference to a specific mechanism .figure [ fig1 ] shows the ldh dose - responses curves for the 24 metal oxide nanoparticles . 
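returning briefly to the first - order and total sensitivity indices discussed above , a generic saltelli - type monte carlo estimator is sketched below ; it uses plain random sampling of the inputs and a stand - in model `f` , whereas the paper evaluates predicted responses on a latin hypercube design .

```python
import numpy as np

def sobol_indices(f, sampler, n, p):
    """Monte Carlo estimates of first-order (S) and total (ST) Sobol indices.

    f       : callable evaluating the model/emulator on an (m, p) matrix
    sampler : callable returning an (n, p) matrix drawn from the chosen
              input distribution (e.g. uniform over the covariate ranges)
    """
    A, B = sampler(n, p), sampler(n, p)
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S, ST = np.empty(p), np.empty(p)
    for i in range(p):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # resample only variable i
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var        # first-order index
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total index
    return S, ST
```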
in a second assay , propidium iodide ( pi ) fluorescencewas used to indicate the percentage of cells experiencing oxidative stress through cellular surface membrane permeability , across the same ten doses and after six times of exposure ( from 1 to 6 h , at every hour ) .figure [ fig2 ] shows a heatmap representation for the pi assay , for all metal oxides , doses , times , and replicates , where responses are color - coded from light ( low ) to dark ( high ) . in both assays , seven metal oxides ( co , coo , cr , cuo , mn , ni and zno ) display a notable rise for the higher doses , suggesting toxicity .ml ) , arranged from bottom to top . ]all metal oxides are characterized by six physico - chemical properties of potential interest to explain toxicity profiles : nanoparticle size in media , a measure of the crystalline structure ( b ( ) ) , lattice energy ( ) , which measures the strength of the bonds in the nanoparticles , the enthalpy of formation ( ) , which is a combined measure of the energy required to convert a solid to a gas and the energy required to remove electrons from that gas , metal dissolution rate , and conduction band energy ( the energy to free electrons from binding with atoms ) . in our analysis , we use cubic splines , that is , , and place interior knots at each intermediate dose from 0.39 to 100 g .therefore , and . for the treeprior , we adopt the default choice of , , which puts more weight on trees of size 2 or 3 .we place relatively diffuse priors on precision parameters and .we choose and , assuming no prior correlations between measurements at different doses and times .finally , moves `` grow , '' `` prune , '' `` change '' and `` swap '' of the metropolis hastings tree - generating algorithm have probabilities , , and , respectively .we used a total of 160,000 iterations . after discarding 80,000 iterations for burn - in ,the remaining samples for estimation were thinned to save computer storage .the rest of this section shows the results obtained on ldh and pi assays .figure [ fig4 ] ( top ) shows both sensitivity indices described in section [ sectionalgo ] for the six physico - chemical properties .figure [ fig4 ] ( bottom ) shows the combined marginal effect of conduction band energy and dissolution on ldh , obtained with the partial dependence function of , and color - coded from light ( low ) to dark ( high ) , for dose 200 g .the tree isolates a first region of high toxicity , corresponding to enm with high dissolution rates ( zno and cuo ) .this region corresponds to the first mechanism of toxicity identified by : highly soluble metal oxides , such as zno and cuo , are more likely to release metal ions and disturb the cellular state .a second region of toxicity on figure [ fig4 ] ( left ) includes metal oxides co , coo , cr , mn and ni , with ec values ranging from .33 ev for mn to .59 ev for ni .this region matches the second mechanism for toxicity described by : the overlap of the conduction band energy of the metal oxides with the biological redox potential of cells , ranging from .12 to .84 ev . 
when these two energy levels are alike , transfer of electrons from metal oxides to cells is facilitated , disturbing the intracellular state .note that figure [ fig4 ] ( bottom ) also shows an additional split that isolates mn , whose toxicity for the ldh assay is more comparable to zno and cuo ( see figure [ fig1 ] ) .similar figures for other doses are included in .the ldh assay illustrates how threshold effects and interactions of physico - chemical properties are accurately captured by a tree structure .the toxicity response is color - coded from light ( low ) to dark ( high ) .the figure also shows the projections of the 24 metaloxides in this subspace . ]we perform posterior predictive checks for model fitting .figure [ fig3 ] shows the expected posterior predictive dose - response curves for two nontoxic metal oxides ( ceo and fe ) and two toxic ones ( cr and zno ) , with the associated intervals .all four intervals provide good coverage for the original data .the other 20 curves exhibit similar behavior and can be found in .we also study the prediction accuracy of the model using a leave - a - curve - out validation framework .results for ceo , fe , cr and zno are presented in .while leave - one - out predictions recover general trends , in some cases we observe suboptimal coverage , especially in sparse areas of the physico - chemical spectrum .for example , nanoparticles zno and cuo alone determine tree splits on the metal dissolution parameter and , once removed , can not be accurately predicted by the model . ,fe , cr and zno .the points are the observed replicates and the dashed line is the average observed response .the expected posterior predictive curve and interval are in solid lines .] finally , the proposed methodology is compared for validation to the bayesian additive regression trees ( bart ) method of , a sum - of - tree extension of , with the r package `` bayestree '' [ ] . as bart modelone - dimensional responses , we use the area under the ldh curves ( auc ) as the dependent variable . in ,the proportion of all splitting rules attributed to a variable at each draw on all trees , averaged over all iterations , is proposed as a measure of variable importance , when the number of trees is small .results are presented in .variable importance scores and marginal effects from bart are similar to those obtained with our method and confirm that the auc is an accurate summary for toxicity for the ldh assay .the first advantage of using a dose - response leaf model instead of the auc is that we avoid preliminary assessment of the data for choosing a summary over another : toxicologists usually report several toxicity parameters ( ec50 , slope ) , as they may convey different information .the second advantage is better understood from a predictive perspective , as our model allows for full dose - response dynamics instead of the auc .a comparison with the treed gaussian process , using the ` r ` package ` tgp ` [ ] , is also included in .after tuning ` tgp ` to forbid splitting on dose ( ` basemax ` , ` splitmin ` ) , we can indeed reproduce the essential structure of our model using this well - tested ` r ` library .our findings proved to be robust to differing details in the prior specification , as the model fit with ` tgp ` also captures the marginal effects of the predictors metal dissolution and conduction band energy on toxicity .ml and 6 h. 
the toxicity response is color - coded from light ( low ) to dark ( high ) .the figure also shows the projections of the 24 metaloxides in this subspace . ]o , tio , co and cuo .the solid line is the expected posterior predictive surface with the associated interval .the points are the observed data replicates . ]figure [ fig9 ] ( top ) shows the variable sensitivity indices of the six physico - chemical properties .figure [ fig9 ] ( bottom ) illustrates the marginal effect of both conduction band energy and dissolution on membrane damage , calculated with the partial dependence function , and color - coded from light to dark , for dose 200 g and time 6 h.the tree model for pi assay also identifies the two areas of toxicity indicated in , corresponding to highly soluble metal oxides and nanoparticles whose conduction band energy overlaps with cellular redox potential range .additional figures for marginal effect of conduction band energy and metal dissolution , for all doses and times , are included in .the similarity of variable importance scores and marginal effect of conduction band energy and dissolution obtained for ldh and pi assays indicates a strong correlation between these assays for nanoparticle toxicity assessment , as noted by .figure [ fig8 ] illustrates the posterior predictive surface intervals for two nontoxic metal oxides ( la and tio ) and two toxic ones ( co and cuo ) , showing good posterior coverage over all doses and times of exposure .similar surfaces for the other 20 metal oxides are plotted in .leave - a - surface - out predictions for la , tio , co , and cuo are presented in the appendix and show the limitations of the model for prediction when extrapolating to sparse areas of the covariate space , similar to what we observed in the ldh assay .we propose a bayesian regression tree model to define relationships between physico - chemical properties of engineered nanomaterials and their functional toxicity profiles in dose - escalation assays . as demonstrated by the case studies , the tree structure is adapted to account for flexible models of structure - activity relationships , such as threshold effects and interactions .the proposed model integrates information across all doses and replicates , and therefore is adapted to small sample sizes usually found in nanotoxicology data sets .monte carlo integration over the model space provides straightforward inference on nontrivial functionals of parameters of interest and prediction of full dose - response curves from nanoparticle characteristics .the smoothing splines representation allows for easy extension of the model to two - dimensional toxicity profiles of general exposure escalation assays as well as for modeling sub - lethal outcomes .the convergence of bayesian tree models should be carefully assessed for all applications of the proposed methodology .the four moves of the metropolis hastings algorithm of work well in our simulations and case studies , however , other applications might require additional moves to move faster through the tree space and improve convergence [ see , e.g. , ] . 
as illustrated in section [ sectionappli ] , another potential pitfall of the model is its predictive performance for sparsely explored nanoparticle characteristics .this issue is not specific to our model and possible improvements would be obtained by combining multiple studies in a meta - analysis framework , with the appropriate adjustments for data heterogeneity or formalizing explicit prior knowledge about hazardous nanoparticle properties . as seen in the case study for cell death and cellular membrane permeability , different toxicity mechanisms can be closely related .therefore , an important opportunity for model extensions would be to combine different biological assays in a single analysis , the final goal being that of understanding if nanoparticles physical and chemical properties have a differential effect on different cellular injury pathways .this would require more sophisticated modeling strategies that will be more likely to be useful if technological advances will allow for feasible screening of much larger nanomaterial libraries .any opinions , findings , conclusions or recommendations expressed herein are those of the author(s ) and do not necessarily reflect the views of the national science foundation or the environmental protection agency .this work has not been subjected to an epa peer and policy review .
we introduce a bayesian multiple regression tree model to characterize relationships between the physico-chemical properties of nanoparticles and their in-vitro toxicity over multiple doses and times of exposure. unlike conventional models that rely on data summaries, our model mitigates the low-sample-size issue and avoids arbitrary loss of information by combining all measurements from a general exposure experiment across doses, times of exposure, and replicates. the proposed technique integrates bayesian trees, for modeling threshold effects and interactions, with penalized b-splines, for dose- and time-response surface smoothing. the resulting posterior distribution is sampled by markov chain monte carlo. the method allows for inference on a number of quantities of potential interest to substantive nanotoxicology, such as the importance of physico-chemical properties and their marginal effect on toxicity. we illustrate the application of our method with the analysis of a library of 24 nano metal oxides.

multi-frequency electrical impedance tomography (mfeit) can be applied to the non-invasive assessment of abdominal obesity, which is a predictor of health risk. mfeit data of the boundary current-voltage relationship at various frequencies of 1 mhz reflect the regional distribution of body fat, which is less conductive than water and tissues such as muscle, and can therefore be used to estimate the thicknesses of visceral and subcutaneous adipose tissue. this diagnostic information can be used to assess abdominal obesity, which is considered a cause of metabolic syndrome as well as a risk factor for various other health conditions. the spatial resolution of computed tomography (ct) and magnetic resonance (mr) images is high enough for the assessment of abdominal obesity; however, there are concerns and limitations regarding their use for this purpose; e.g., ct exposes the subject to ionizing radiation, while mr imaging has poor temporal resolution. electrical impedance tomography (eit) is a noninvasive, low-cost imaging technique that provides real-time data without using ionizing radiation. however, experience over the past three decades has not succeeded in making eit robust against forward modeling errors such as those associated with uncertainties in boundary geometry and electrode positions. in time-difference eit, which images changes in the conductivity distribution with time, forward modeling errors are effectively handled and largely eliminated because the data are subtracted from reference data acquired at a predetermined fixed time. in static eit, however, there are no reference data that can be used to eliminate the forward modeling error. creating reference-like data is therefore the key issue in static eit. this paper proposes a new reconstruction method that uses prior anatomical information, at the expense of spatial resolution, to compensate for this fundamental drawback of static eit and improve its reproducibility. in the case of abdominal eit, it is possible to use a spatial prior to handle its inherent ill-posed nature. the proposed method employs a depth-based reconstruction algorithm that takes into account the background region, boundary geometry, electrode configuration, and current patterns. here, we can take advantage of recent advances in 3d scanner technology to minimize the forward modeling error by extracting accurate boundary geometry and electrode positions. the proposed method uses a specially chosen current pattern to obtain a depth-dependent data set, generating reference-like data that are used to outline the borders between fat and muscle.
from the relation between a current injected through one pair of electrodes and the induced voltage drop though the other pair of electrodes , we obtain the transadmittance , which is the ratio of the current to the voltage .hence , the transadmittance depends on the positions of two pairs of electrodes , body geometry , and admittivity distribution .we can extract the corresponding apparent admittivity in term of two pairs of electrodes from the transadmittance divided by a factor involving electrode positions and the body geometry .this apparent admittivity changes with the choice of pairs of electrodes .( in the special case when the subject is homogeneous , the apparent admittivity does not depend on electrode positions . )noting that the change in apparent admittivity in the depth direction can be generated by varying the distance between electrodes , we could probe the admittivity distribution by developing a proper and efficient algorithm based on a least - squares minimization .the performance of the proposed least - squares approach is demonstrated using numerical simulations with a 32-channel eit system and human like domain .future research study is to adopt an mfeit technique to exploit the frequency - dependent behavior of human tissue .the distribution of visceral fat in abdominal region can then be estimated from the linear relation between the data and the admittivity spectra , and thus obtain a clinically useful absolute conductivity image .let an imaging object occupy three ( or two ) dimensional domain with its admittivity distribution where is the conductivity , the permittivity , and the angular frequency .the domain can be divide by 4 subregions ; subcutaneous fat region , muscle region , bone region , and remaining region as shown in figure [ fig : illustrate - abdomen - tissue - spectroscopy ] ( a ) . [cols="^,^ " , ] + ( a ) & ( b ) as shown in figure [ fig : recon - mesh ] for numerical tests , we use the inject - measure set in by choosing eight electrodes depicted in red and generating the corresponding triangular mesh . for computing geometry - free data ,the is essential which can be obtained numerically by using the fem with homogeneous admittivity in . to give conditions similar to real situations , the fem uses optimally generated meshescorresponds to each admittivity distributions for and .we tested all inject - measure index set with corresponding triangular mesh in the reconstructed region corresponds to the choice of .the image reconstruction result of applying the method using index set with 1063 triangular mesh ( figure [ fig : recon - mesh ] ) is represented in figure [ fig : recon - humanbody-4methods ] .index set.,width=188 ] now , we add gaussian random noise with snr 15db to the data .the reconstruction results using index sets of applying the linearized method with tikhonov regularization using the noisy data are represented in figure [ fig : recon - humanbody - linearized-32 ] . for visual comfort ,we merge the 32 result images to 1 image as shown in figure [ fig : recon - humanbody - linearized-32 ] . 
with noisy data.,width=491 ] it is worth pointing out that the column vectors of are highly correlated .the correlation function between the column vectors can be defined by for , where , are column vectors of .we compute the correlation , , for which correspond to the mesh elements near the boundary , in the middle of , and far from the boundary .as shown in figure [ fig : correlation - smat ] , the column vectors are highly correlated .cc , , with mesh 1 , 2 , and 3 , respectively.,width=158 ] & , , with mesh 1 , 2 , and 3 , respectively.,width=158 ] , , with mesh 1 , 2 , and 3 , respectively.,width=26 ] + ( a ) & ( b ) + + , , with mesh 1 , 2 , and 3 , respectively.,width=158 ] , , with mesh 1 , 2 , and 3 , respectively.,width=26 ] & , , with mesh 1 , 2 , and 3 , respectively.,width=158 ] , , with mesh 1 , 2 , and 3 , respectively.,width=26 ] + ( c ) & ( d ) the success of the proposed least - squares method arises from taking advantage of the tikhonov regularization .the tikhonov regularization minimizes both and as follows : the component of ^t$ ] can be considered as a coefficient of the linear combination of column vectors to generate . when minimizing , high correlation of causes uncertainty in finding . however , minimizing can compensate the mismatch of caused by high correlations between the for , since and tend to have a similar value when and are highly correlated .it is also worth emphasizing that other imaging algorithms such as music can not perform well because precisely of the high correlation of column vectors of the sensitivity matrix .in this work , static eit image reconstruction algorithm of human abdomen for identifying subcutaneous fat region is developed .the proposed depth - based reconstruction method relies on theorem [ thm : main ] which shows that subcutaneous fat influence in the data can be eliminated by using a reference - like data and geometry information ( domain shape and electrode configuration ) to overcome a fundamental drawback in static eit ; lack of reference data for handling the forward modeling errors .we suggest a linearized method with tikhonov regularization which uses the subcutaneous influence eliminated data with a specially chosen current pattern .numerical simulations show that the reconstruction result of identifying the subcutaneous fat region is quite satisfactory .the suggested way of eliminating influence of homogeneous background admittivity can be applied in other static eit area , for instance , ground detection .the knowledge of subcutaneous fat region can be a useful information of developing an algorithm of estimating visceral fat occupation . for clinical use , estimating visceral fat occupation is required to provide useful information in abdominal obesity .the following result holds .let be a lower half - space of .let and be circular electrodes centered at points and on , respectively , with radius .let satisfy where and are constants .define . for given , there exists a constant such that , for , where .[ lem : cemispem ] let be the solution of then can be represented by (x ) \quad\mbox{for}~x\in\omega .\ ] ] for , we get it follows from mean - value theorem that , for , since for , we have therefore for , and can be represented by (x)-2\ms\left[\frac{\partial \widetilde{u}^h}{\partial n}\chi_{\me^h_-}\right](x ) , \\v^h(x ) & = & -2\ms\left[\frac{\partial v^h}{\partial n}\chi_{\me^h_+}\right](x)-2\ms\left[\frac{\partial v^h}{\partial n}\chi_{\me^h_-}\right](x).\end{aligned}\ ] ] then , for , where on . 
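both quantities used in the analysis above -- the correlation between columns of the sensitivity matrix and the tikhonov-regularized solution -- are short to compute. in the sketch below the column correlation is taken as a normalized inner product of two columns (an assumption, since the exact formula is only given implicitly in the extracted text), and the regularized solution is obtained from the augmented least-squares system.

```python
import numpy as np

def column_correlation(S, i, j):
    """Normalised inner product of columns i and j of the sensitivity matrix."""
    si, sj = S[:, i], S[:, j]
    return abs(np.vdot(si, sj)) / (np.linalg.norm(si) * np.linalg.norm(sj))

def tikhonov_solve(S, b, lam):
    """Minimise ||S x - b||^2 + lam * ||x||^2 via the augmented system."""
    n = S.shape[1]
    A = np.vstack([S, np.sqrt(lam) * np.eye(n)])
    rhs = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return x
```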
since , it follows from mean - value theorem that for a similar argument shows that let be a simply connected domain in .let and be circular electrodes centered at points and on , respectively , with radius .let satisfy where and are constants .let satisfy for given , there exists a constant such that , for , where .[ thm : cemispem ] let be the solution of the following equation then and can be represented by and hence , for , therefore , from mean - value theorem it follows that , for , according to the decay estimation for the neumann function in and the fact that for , there exists a positive constant such that consequently , the functions and can be represented by then , for , where on . since , again , from the mean - value theorem it follows that , for , a similar argument shows that therefore which completes the proof of the theorem .
this paper presents a static electrical impedance tomography (eit) technique that evaluates abdominal obesity by estimating the thickness of subcutaneous fat. eit has a fundamental drawback for absolute admittivity imaging because of its lack of reference data for handling forward modeling errors. to reduce the effect of boundary geometry errors in imaging abdominal fat, we develop a depth-based reconstruction method that uses a specially chosen current pattern to construct reference-like data, which are then used to identify the border between subcutaneous fat and muscle. the performance of the proposed method is demonstrated by numerical simulations using a 32-channel eit system and a human-like domain. abdominal electrical impedance tomography, reference-like data, outermost region estimation, sensitivity matrix. 35r30, 49n45, 65n21.
the correlation structure of financial asset returns is informative for stock return time series ( for a recent review see ) , market index returns of stock exchanges located worldwide and currency exchange rates .the correlation based clustering procedures allow also to associate correlation based networks with the correlation matrix .useful examples of correlation based networks are the minimum spanning tree , graphs obtained by using thresholding procedures and the planar maximally filtered graph ( pmfg ) . in this paperwe investigate the daily correlation present among indices of stock exchanges located all over the world .the study is performed by using the index time series of 57 different stock markets monitored during the time period jan 1996 - jul 2009 . by investigating this set of stock market indiceswe discover that the correlation among world indices has both short term and long term dynamics .the long term dynamics is a slow monotonic growth associated with the development and consolidation of globalization while the short term dynamics is associated with events originated in a specific part of the world and rapidly affecting the entire system .examples are the 1997 asian crisis , the 1998 russian crisis , the 2007 development of the subprime crisis and the onset of the 2008 global financial crisis .the presence of both short term and long term timescales in the dynamics of the correlations among world indices make difficult their analysis .in fact an estimation of the empirical correlation matrix minimizing the unavoidable statistical uncertainty associated with the evaluation needs a large number of records to be used in the time evaluation period .however , an extended time evaluation period reduces the ability to resolve the fast dynamics of correlation . in the present study, we first perform our analyses by using different time evaluation periods and we then analyze the dynamics of the correlation at the shortest time scale accessible by ensuring that the correlation matrix is invertible ( the correlation matrix is no more invertible when the number of records in the time evaluation period is less then the number of elements in the investigated set ) .we provide empirical evidence that the short timescale of correlation among world indices can be less than 3 trading months ( about 60 trading days ) and that there are quite stable factors driving the dynamics of stock market indices located in specific regions of the world .we also show that the interrelation between stock market indices can be efficiently described by using correlation based networks and principal component analysis .unsupervised cluster detection is performed on a correlation based network obtained by using the correlation matrix estimated using all daily records available .the cluster detection is done by applying a community detection algorithm to the correlation based network .we show that the characteristics of fast dynamics of the interrelations among stock indices are well described by the pmfgs and by the two largest eigenvalues and eigenvectors of the correlation matrix .abrupt short term alterations are detected at the onset of several financial crisis but the changes detected in the structure of graphs and in the principal component analysis profile are of difficult economic interpretation due to the high level of statistical uncertainty associated with the correlation estimation and because different events might be crisis specific and therefore specific only to each single event . 
to quantify in an efficient way successive changes of correlation based networks estimated with the shortest evaluation time periodwe introduce a new way to compute a mutual information measure between two networks based on link co - occurrence . the paper is organized as follows . in section [ s1:data ] ,we briefly present the set of investigated data and we discuss the time scales of the dynamics of correlations of market indices . in section [ s1:corrgraph ] , we analyze the unconditional correlation based graph associated with index returns and we perform a community detection on it . in section [ s1:events ] , we discuss the short term dynamics of correlations of stock indices . in section [ s1:mutualinfo ] , we discuss the time evolution of pmfgs computed with the shortest evaluation time period and we compare successive networks by using a newly introduced mutual information measure based on link overlap . in section [ s1:spectral ] , we investigate the dynamics of the largest eigenvalues and eigenvectors associated with correlation matrices computed at the shortest timescale . in the last section we present our conclusions .in this study , we investigate a set of 57 stock market indices of 57 different exchanges located in several continents .the complete list of stock market indices is given in the appendix .data are sampled daily .we have selected these 57 stock market indices because we have access for them to a long time period ranging from january 1996 to july 2009 .we perform our analysis on the daily logarithmic return , which , for each index , is defined as : where is the price of index on day . starting from the return time serieswe compute the correlation matrix of this multivariate set of data at time by using past return records sampled during evaluation time periods of different length ranging from 3 calendar months ( , approximately 60 trading days ) up to 5 calendar years ( , approximately 1250 trading days ) . for each month ( converted to in unit of years ) and for each different evaluation time interval , we compute the pearson correlation coefficient [r_j(k)-\mu_j]\rangle}{\sigma_i \sigma_j},\ ] ] where and are the sample means and and are the standard deviations of the two stock index time series and respectively . and of the evaluation time period .] and of the evaluation time period .the white region is the region where the past records are not enough to estimate the correlation matrix with the same statistical accuracy of other values for the same . ] in fig .[ fig1 ] we plot the average correlation value of the non diagonal elements of the correlation matrix computed at time using a set of past daily records spanning a interval . in fig .[ fig2 ] we show the contour plot of the average correlation value of the correlation matrix .the results summarized in figs .[ fig1 ] and [ fig2 ] shows that a dynamics is present in the time evolution of the correlation among the indices of different stock exchanges .important aspects to be investigated concern both the fast and the slow dynamics of the correlations .ideally one would like to estimate correlation among indices by using a short estimation interval .unfortunately by using a short estimation interval the level of the statistical uncertainty in the estimation is increased and eventually one ends up with not well - characterized correlation matrices when the number of time records used in the estimation are less or close to the number of investigated market indices . 
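the average correlation over sliding evaluation windows described above can be reproduced directly from the index level series. the sketch below (pandas, with a hypothetical dataframe of daily closing levels) computes the mean off-diagonal pearson correlation of daily log returns for a rolling window of roughly three trading months.

```python
import numpy as np
import pandas as pd

def average_rolling_correlation(prices, window=63):
    """Mean off-diagonal Pearson correlation of daily log returns.

    prices : DataFrame of daily index levels (one column per market index,
             hypothetical input); window ~ 63 trading days is about 3 months.
    """
    returns = np.log(prices).diff().dropna()
    n = returns.shape[1]
    out = {}
    for end in range(window, len(returns) + 1):
        corr = returns.iloc[end - window:end].corr().to_numpy()
        out[returns.index[end - 1]] = (corr.sum() - n) / (n * (n - 1))
    return pd.Series(out)
```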
on the other hand , when a long estimation interval is used successive estimations of the correlation are not independent and therefore localized jumps of the average correlation are smeared out over a long time period .in fact , by looking at figs .[ fig1 ] and [ fig2 ] , we notice that both a short term and a long term dynamics is present in the evolution of correlation .we also conclude that to detect properly the short time scale of correlation dynamics we need to use a short evaluation time period because the contour plot of fig .[ fig2 ] shows that the localization of the onset of big sizable changes are affected by the length of the evaluation time period .for example the onset of the asian 1997 crisis , the russian 1998 crisis , the 2007 subprime crisis and the 2008 global crisis are quite clearly detected when an evaluation time period of 3 months is used whereas the onset is smeared out and postponed when longer evaluation time periods are used .correlation based graphs provide a powerful tool to detect , analyze and visualize part of the most robust information which is present in the correlation matrix . herewe first investigate the pmfg of the 57 selected market indices obtained from the correlation matrix estimated by using all the daily records of the selected time period ( 1/1/1996 - 31/7/2009 ) .the unconditional pmfg is shown in fig .[ figpmfg ] .as already observed in previous studies , the relationship between market indices pointed out by the pmfg is primarily of geographical origin . on the top left of the graphwe recognize the market indices of american stock exchanges ( blue circles ) , market indices of european stock exchanges ( green circles ) are found in the central part of the graph and the bottom part of the graph links primarily market indices of asian ( yellow ) , oceanian ( magenta ) , middle east ( cyan ) and african ( maroon ) stock exchanges .an unsupervised cluster analysis of the indices can be performed on the pmfg by applying a community detection algorithm used to find community of elements in networks ( for a recent review on this topic see ref . ) .specifically , we investigate the community of elements of the pmfg by using the infomap method proposed by rosvall and bergstrom .this algorithm is considered one of the best algorithms of community detection in networks .the method uses the probability flow of random walks to identify the community structure of the system in the investigated network .this approach implies that two independent applications of the method to the same network may produce ( typically slightly ) different partitions of vertices .we repeat the application of the method 100 times to detect a minimum value of the fitness parameter estimating the goodness of the partition .the result of the application of the method to the unconditional pmfg is given in fig .[ figinfomap ] .the method identifies four distinct clusters .the bottom right cluster of the figure is the cluster of american stock exchanges .two other clusters ( top right and bottom left in the figure ) are clusters of primarily european stock exchanges , whereas the forth cluster ( top left in the figure ) is primarily composed by asian and oceanian stock exchanges . 
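a correlation based graph similar in spirit to the one analyzed above can be obtained with standard tools; the sketch below builds the simpler minimum spanning tree from the usual distance d = sqrt(2(1 - rho)) and, as a stand-in for the infomap partition, applies greedy modularity community detection from networkx. the pmfg itself additionally requires a planarity-constrained filtering step not shown here.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def correlation_mst(corr, labels):
    """Minimum spanning tree of the correlation-distance graph."""
    d = np.sqrt(2.0 * (1.0 - np.asarray(corr)))
    G = nx.Graph()
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            G.add_edge(labels[i], labels[j], weight=float(d[i, j]))
    return nx.minimum_spanning_tree(G)

def mst_clusters(corr, labels):
    """Communities of the filtered graph (modularity-based stand-in for Infomap)."""
    tree = correlation_mst(corr, labels)
    return [set(c) for c in greedy_modularity_communities(tree)]
```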
in the following , we will use this unsupervised classification when we present the results of our analysis on the fast dynamics of correlations .in fact , the remaining part of this paper is devoted to an analysis of the properties of the correlation matrices and of the correlation based graphs estimated by using short evaluation time periods with a limited number of daily records so that they might provide information about the fast dynamics of correlation as a function of time . .the four detected clusters correspond primarily to different geographical regions .top left : asia and oceania .top right : north and east europe .bottom left : central and southern europe .bottom right : america . ]in fig . [ fig3m6 m ] we show the average correlation of the non diagonal elements of correlation matrices estimated for the 57 selected indices by using the evaluation time period of 6 months ( ) and the shortest evaluation time period of 3 months ( ) which is the shortest accessible for this set of indices by requiring that the correlation matrix is invertible ( in fact a 3 month window typically presents in average 63.7 daily records a number which is only 1.12 times the number of indices of the investigated set ) .the figure clearly shows that the time scale of the average correlation among indices is certainly shorter than six trading months . by using an evaluation time window of 6 monthswe already observe the smearing out of the correlation dynamics .a detailed analysis of some prominent financial crises clearly supports our conclusion .in fact in fig .[ fig3m6 m ] the analysis of the 1997 asian crisis ( see arrow labeled as a in the figure ) and of the 1998 russian crisis ( labeled as b in the figure ) is quite resolved only when the 3 month evaluation time period is used .similarly , the september 11 , 2001 shock is visible as a sharp increase of the average correlation ( labeled as c in the figure ) only when the 3 month evaluation time period is used . the onset of the subprime crisis ( labeled as d in the figure ) , the lehman s failure ( labeled as e in the figure ) and the peak of onset of the recent global financial crisis ( labeled as f in the figure ) are much more resolved again when the evaluation time period is 3 months .we therefore conclude that a short time scale of less than 3 trading months is present in the time evolution of the dynamics of correlation coefficient of market indices of stock exchanges located all over the world . in the following sections we will focus our attention on the changes of the correlation matrix estimated by using a 3 month evaluation time period .for each month of the investigated time period ranging from march 1996 to jul 2009 we estimate a correlation matrix by using a 3 month interval comprising in average 63.7 daily records ( for example the first record is computed by using the daily records of the period 1/1/1996 - 31/3/1996 ) . 
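A quick way to check, for a candidate evaluation time period, whether the estimated correlation matrix is still invertible is to inspect its smallest eigenvalue, as in the sketch below (a return frame as in the previous snippet is assumed). With T records and N indices the matrix is necessarily singular whenever T does not exceed N, which is why roughly 63 records for 57 indices is close to the shortest usable window here.

```python
import numpy as np

def smallest_correlation_eigenvalue(returns_window) -> float:
    """Smallest eigenvalue of the Pearson correlation matrix estimated from a
    window of daily returns; a value that is zero up to numerical noise
    signals a singular, non-invertible matrix."""
    corr = np.corrcoef(np.asarray(returns_window, dtype=float), rowvar=False)
    return float(np.linalg.eigvalsh(corr).min())

# e.g. a ~63-record (3-month) window for 57 indices is barely invertible,
# while a ~126-record (6-month) window is better conditioned:
# print(smallest_correlation_eigenvalue(returns.iloc[-63:]))
# print(smallest_correlation_eigenvalue(returns.iloc[-126:]))
```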
from each correlation matrixwe construct the pmfg and we investigate how links changes from month to month .specifically , we consider the mutual information of links computed between two successive pmfgs .the way we compute the mutual information of links is explained in the following subsection .we consider two networks with the same vertices , but in general with different sets of links .let be the number of vertices in both networks .let us indicate the number of links in the first network with , and the number of links in the second network with .we associate a binary random variable with all pair of vertices in the first network and a binary random variable with all pair of vertices in the second network .the variable takes the value 1 if two vertices are linked in the first network and it is 0 otherwise .similarly describes links between vertices of the second network .the probability ( ) is the probability that a randomly selected pair of vertices is linked in the first ( second ) network .this definition implies that : the joined probability of the two variables and is given by : where is the number of same links which are present in both networks .the mutual information of the random variables and is given by : the mutual information can be suitably normalized by dividing it for the geometric mean of the entropies and : where is the entropy of variable and is the entropy of variable : it is to notice that the normalized mutual information between identical networks is equal to 1 . and the pmfg estimated at the successive month .vertical lines indicate events a to f highlighted in fig .[ fig3m6 m ] . ] in fig .[ figmd ] we show the mutual information between the pmfg at month and the pmfg at the successive month . in the figure we also highlight for reference the months when events a to f described in fig .[ fig3m6 m ] occurs . from the figure we notice that the structure of the pmfg is significantly altered during these months of big events .in fact , the correlation based graphs carry relevant information about the correlation profile of the indices .we now move to the analysis of ( i ) the time evolution of the degree profile of the different market indices in the pmfgs and ( ii ) the assessment of the statistical differences observed between the set of links defined by different correlation based graphs . . to clusters are the clusters shown in fig .[ figinfomap ] . ] the result of the first investigation is summarized in fig .[ figdte ] . in the figurewe show a color code representation of the time evolution of the degree of each market index observed in the pmfgs computed for all the 161 investigated months .different market indices are ordered accordingly to the rank of the four clusters obtained by the infomap partitioning of the unconditional pmfg computed by using the correlation matrix estimated using all daily records ( see fig .[ figpmfg ] and fig .[ figinfomap ] of section [ s1:corrgraph ] ) . specifically , cluster 1 ( ) is the cluster of american market indices , cluster 2 ( ) is a cluster of primarily european indices with market indices from continental and mediterranean countries , cluster 3 ( ) is a cluster of primarily european indices with market indices from uk , ireland and continental and north european countries , and cluster 4 ( ) is a cluster of oceanian and asian market indices . by analyzing fig .[ figdte ] we notice an overall persistence of the level of degree specific to each index . 
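Returning for a moment to the link-overlap mutual information defined above, it admits a direct implementation. The sketch below follows the construction in the text: binary variables over vertex pairs, joint counts obtained from the number of links common to the two graphs, and normalization by the geometric mean of the two entropies. Edge containers and names are illustrative, and degenerate cases (an empty or complete graph, for which one of the entropies vanishes) are not handled.

```python
import math

def normalized_link_mutual_information(edges1, edges2, n_vertices):
    """Normalized mutual information of link co-occurrence between two graphs
    on the same vertex set: X (Y) is the indicator that a randomly chosen
    vertex pair is linked in the first (second) graph."""
    pairs = n_vertices * (n_vertices - 1) // 2
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    l1, l2, l12 = len(e1), len(e2), len(e1 & e2)

    # joint counts over the four (x, y) outcomes
    joint = {(1, 1): l12, (1, 0): l1 - l12,
             (0, 1): l2 - l12, (0, 0): pairs - l1 - l2 + l12}
    px = {1: l1 / pairs, 0: (pairs - l1) / pairs}
    py = {1: l2 / pairs, 0: (pairs - l2) / pairs}

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    mi = sum((c / pairs) * math.log2((c / pairs) / (px[x] * py[y]))
             for (x, y), c in joint.items() if c > 0)
    # equals 1 for identical graphs; both entropies are assumed non-zero
    return mi / math.sqrt(entropy(px) * entropy(py))
```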
in particular the indices of highest rank within each cluster ( rank is given within each cluster by the infomap algorithm and reflects the role played by the element in the cluster detection ) , which are located at the left side of each cluster area are characterized by higher degree .examples are market indices of france ( label 8) , netherlands ( label 9 ) , germany ( label 11 ) and uk ( label 21 ) in europe and indices of australia ( label 38 ) and hong kong ( label 39 ) in the pacific - asian region .the second of our investigations shows that an alteration of the structure of the correlation based graphs is present around specific months .specifically , a t - test for difference in mean is used to compare the correlation values associated with the links of two basic correlation based graphs , namely the pmfg so far discussed and the minimum spanning tree ( mst ) . the p - value provided by the testis reported for each month of the investigated period in fig .[ figpvalue ] . the p - value is larger than 5% for all the considered 5 crises , indicating that the average correlations of pmfg s links and mst s links are statistically consistent .a different behavior is observed in those periods of time not characterized by widely spread financial crises . whereas the horizontal line indicates a 0.01 threshold . ]we interpret this second result as a manifestation of a significant alteration of the overall structure of the correlation matrix .the nature of these changes in the pmfg structure seems not to be of topological nature ( in fact the degree profile of fig .[ figdte ] is quite stable during time evolution ) but rather might involve specific links .we have not been able so far to interpret in a simple and convincing way these changes , mainly due to the high level of statistical uncertainty associated with the need of a short evaluation time period .in other words , we are able to see that useful information is there but it is dressed with a relevant level of noise unavoidably reflecting the statistical uncertainty associated with the correlation matrix estimation .specific alterations associated with specific crises can not be reliably detected without a procedure assessing the statistical robustness of each link .we lastly complement our analysis of the correlation based graphs with a spectral analysis of the correlation matrices . in our analysiswe mainly focus on the time dynamics of the largest eigenvalues and of their corresponding eigenvectors . in fig .[ figte ] we show the time evolution of the first , second and third eigenvalues of the correlation matrices computed with a 3 months evaluation time period .the time profile of the first eigenvalue is highly correlated with time profile of the average correlation .the second eigenvalue shows abrupt changes in the presence of , or immediately after in the case of event c , special events ( a - f ) such as the ones highlighted in the figure .the third eigenvalue has a more limited excursion and it is unclear whether it carries information .in fact , the average number of eigenvalues above the random matrix theory threshold determined as suggested in ref . 
is equal to 3.00 and its standard deviation is 0.65 .therefore the first two eigenvalues are the only large eigenvalues whose presence can not be consistent with a statistical uncertainty of the correlation matrix due to the finiteness of the market index time series .again we conclude that information not compatible with a random null hypothesis is therefore present in these correlation matrices in spite of the high degree of statistical uncertainty associated with their estimation . .] one way to investigate the nature of this information is to analyze the profile of the eigenvectors associated with the largest eigenvalues . in fig .[ figfe ] we show a color code representation of the components of the eigenvector associated with the first eigenvalue for all the 161 investigated months .the direction of the eigenvector is arbitrary . in the figure we select the direction associated with a positive component of the usa market index as the positive direction .the eigenvector components are mainly positive indicating the presence of a common factor driving a large number of market indices .this driving factor has high positive components in the majority of the european indices .american and asian indices also show medium to high positive components .negligible components or negative components are observed for some indices of emergent countries located in europe , middle east , africa and asia . in summarythe components of the first eigenvector reflect a common factor driving mature markets located in all continents . . to are the clusters shown in fig .[ figinfomap ] . ]similarly to the case of the first eigenvector , in fig .[ figse ] we show a color code representation of the components of the second eigenvector .also in this case , the positive direction of the eigenvector is associated with a positive component of the usa index .the components of the second eigenvector have a more complex structure than the ones of the first eigenvector .in fact we note that asian and oceanian market indices have components characterized by a sign opposite to the sign of american and some european indices . in other wordsthe factor associated with this second eigenvalue is affecting indices of different regions of the world in a different manner .differences are more pronounced between asia - oceania and europe - america but also differences between european and american indices are sometimes observed .the behavior of the european indices is not as homogeneous as it is in the case of the components of the first eigenvector . in summary, our analysis of the first two largest eigenvalues and eigenvectors of the correlation matrices shows that relevant information is present in them and in their dynamics .two global factors are present , the first affecting primarily mature markets and the second discriminating quite well between market indices of different world regions . . to are the clusters shown in fig .[ figinfomap ] . ]in this paper we investigate the daily correlation present among market indices of stock exchanges located all over the world .the study is performed by using the index time series of 57 different stock exchanges located all over the world and continuously monitored during the time period jan 1996 - jul 2009 . 
by investigating thisset of market indices we discover that the correlation among market indices presents both a fast and a slow dynamics .the slow dynamics is a gradual growth associated with the development and consolidation of globalization .we show that the fast dynamics is associated with events that originate in a specific part of the world and rapidly affect the global system . we provide evidence that the short term timescale of correlation among market indices is quite fast and less than 3 trading months ( about 60 trading days ) . by computing correlation matrices each trading month using a 3 months evaluation time period we show that correlation matrices contain information about the world global system that can be investigated by using average values of the correlation , correlation based graphs and the spectral properties of the largest eigenvalues and eigenvectors .the overall changes of the correlation based graphs are investigated by using a newly introduced mutual information of link co - occurrence in networks with the same number of elements .changes affecting specific links during prominent crises are of difficult interpretation due to the high level of statistical uncertainty associated with the correlation estimation and because successive rewiring of links might be crisis specific and therefore specific to each single event . in a future studywe aim to achieve a more robust statistical validation of the rewiring of links occurring in the presence of short term abrupt changes of correlation profile with method based on bootstrap .we thank ken bastiaensen for providing the stock index data .dms and wxz acknowledge financial support from the national natural science foundation of china ( 11075054 ) and the fundamental research funds for the central universities .rnm acknowledges financial support from the prin project 2007tkltsr `` indagine di fatti stilizzati e delle strategie risultanti di agenti e istituzioni osservate in mercati finanziari reali ed artificiali '' .we investigate the daily synchronous dynamics of 57 stock market indices located in 57 different countries .the countries and stock indices investigated are : argentina ( merval ) , australia ( as30 ) , austria ( atx ) , belgium ( bel20 ) , bermuda ( bsx ? ) , brazil ( ibov ) , canada ( sptsx ) , chile ( ipsa ) , china ( shashr ) , costa rica ( crsmbct ) , czech republic ( px ) , denmark ( omx copenhagen 20 ) , egypt ( hermes ) , spain ( ibex 35 ) , finland ( omx helsinki ) , france ( cac 40 ) , germany ( dax ) , greece ( athex composite ) , hong kong ( hang seng ) , hungary ( bux ) , indonesia ( jakarta composite ) , india ( sensex 30 ) , ireland ( iseq ) , iceland ( omx iceland all - share ) , israel ( ta-100 ) , italy ( it30 ) , jamaica ( jmsmx ) , japan ( tpx ) , kenya ( knsmidx ) , korea ( kospi ) , saudi arabia ( saseidx ) , morocco ( cfg25 ) , malaysia ( ftse bursa malaysia ) , mexico ( ipc ) , mauritius ( semdex ) , netherlands ( aex ) , norway ( obx ) , new zealand ( nzse10 ) , oman ( msm30 ) , pakistan ( kse100 ) , peru ( igbvl ) , philippines ( psei ) , poland ( wig ) , portugal ( psi general ) , south africa ( indi25 ) , russia ( rtsi ) , slovenia ( sbi20 ) , sri lanka ( cseall ) , switzerland ( ch30 ) , slovakia ( sksm ) , sweden ( omx stockholm ) , thailand ( set ) , turkey ( xu100 ) , taiwan ( taiex ) , uk ( ftse all - share ) , united states ( dow jones indus . ) , venezuela ( ibvc ) .
We investigate the daily correlation among market indices of stock exchanges located all over the world in the time period Jan 1996 - Jul 2009. We discover that the correlation among market indices exhibits both fast and slow dynamics. The slow dynamics reflects the development and consolidation of globalization. The fast dynamics is associated with critical events that originate in a specific country or region of the world and rapidly affect the global system. We provide evidence that the short-term timescale of correlation among market indices is less than 3 trading months (about 60 trading days). The average value of the non-diagonal elements of the correlation matrix, correlation-based graphs and the spectral properties of the largest eigenvalues and eigenvectors of the correlation matrix carry information about the fast and slow dynamics of the correlation of market indices. We introduce a measure of mutual information based on link co-occurrence in networks in order to detect, in a quantitative way, the fast dynamics of successive changes of correlation-based graphs.
in case space requirements of dynamic parsing often outweigh the benefit of not duplicating sub - computations .we propose a parser that avoids this drawback through combining the advantages of dynamic bottom - up and advanced top - down control .the underlying idea is to achieve faster parsing by avoiding tabling on sub - computations which are not expensive .the so - called _ selective magic parser _ allows the user to apply magic compilation to specific constraints in a grammar which as a result can be processed dynamically in a bottom - up and goal - directed fashion .state of the art top - down processing techniques are used to deal with the remaining constraints .magic is a compilation technique originally developed for goal - directed bottom - up processing of logic programs .see , among others , ( ramakrishnan et al .1992 ) . as shown in magicis an interesting technique with respect to natural language processing as it incorporates filtering into the logic underlying the grammar and enables elegant control independent filtering improvements . in this paperwe investigate the selective application of magic to _ typed feature grammars _ a type of constraint - logic grammar based on typed feature logic ( ; gtz , 1995 ) .typed feature grammars can be used as the basis for implementations of head - driven phrase structure grammar ( hpsg ; pollard and sag , 1994 ) as discussed in ( gtz and meurers , 1997a ) and ( meurers and minnen , 1997 ) .typed feature grammar constraints that are inexpensive to resolve are dealt with using the top - down interpreter of the controll grammar development system which uses an advanced search function , an advanced selection function and incorporates a coroutining mechanism which supports delayed interpretation .the proposed parser is related to the so - called _ lemma table _ deduction system which allows the user to specify whether top - down sub - computations are to be tabled .in contrast to johnson and drre s deduction system , though , the selective magic parsing approach combines top - down and bottom - up control strategies . as such it resembles the parser of the grammar development system attribute language engine ( ale ) of . unlike the ale parser ,though , the selective magic parser does not presuppose a phrase structure backbone and is more flexible as to which sub - computations are tabled / filtered .we describe typed feature grammars and discuss their use in implementing hpsg grammars .subsequently we present magic compilation of typed feature grammars on the basis of an example and introduce a dynamic bottom - up interpreter that can be used for goal - directed interpretation of magic - compiled typed feature grammars .a typed feature grammar consists of a signature and a set of definite clauses over the constraint language of equations of terms which we will refer to as definite clauses .equations over terms can be solved using ( graph ) unification provided they are in normal form . describes a normal form for terms , where typed feature structures are interpreted as satisfiable normal form terms .terms are merely syntactic objects . ]the signature consists of a type hierarchy and a set of appropriateness conditions .the signature specified in figure [ sig1 ] and [ sig2 ] and the definite clauses in figure [ dcs ] constitute an example of a typed feature grammar .we write terms in normal form , i. e. , as typed feature structures . 
in addition , uninformative feature specifications are ignored and typing is left implicit when immaterial to the example at hand .equations between typed feature structures are removed by simple substitution or tags indicating structure sharing .notice that we also use non - numerical tags such as and .in general all boxed items indicate structure sharing . for expository reasonswe represent the arg__n _ _ features of the append relation as separate arguments .typed feature grammars can be used as the basis for implementations of head - driven phrase structure grammar . for hpsg and a comparison with other feature logic approaches designed for hpsg . ] propose a compilation of lexical rules into definite clauses which are used to restrict lexical entries . describe a method for compiling implicational constraints into typed feature grammars and interleaving them with relational constraints . because of space limitations we have to refrain from an example .the controll grammar development system as described in implements the above mentioned techniques for compiling an hpsg theory into typed feature grammars .magic is a compilation technique for goal - directed bottom - up processing of logic programs .see , among others , ( ramakrishnan et al .because magic compilation does not refer to the specific constraint language adopted , its application is not limited to logic programs / grammars : it can be applied to relational extensions of other constraint languages such as typed feature grammars without further adaptions .due to space limitations we discuss magic compilation by example only .the interested reader is referred to for an introduction .we illustrate magic compilation of typed feature grammars with respect to definite clause 1 in figure [ dcs ] .consider the definite clause in figure [ magic - dcs ] . as a result of magic compilation a magic literalis added to the right - hand side of the original definite clause .intuitively understood , this magic literal `` guards '' the application of the definite clause .the clause is applied only when there exists a fact that unifies with this magic literal .definite clause without right - hand side literals , from the grammar or derived using the rules in the grammar . in the latter case one also speaks of a passive edge . ]the resulting definite clause is also referred to as the _ magic variant _ of the original definite clause . the definite clause in figure [ seed ]is the so - called _ seed _ which is used to make the bindings as provided by the initial goal available for bottom - up processing . in this casethe seed corresponds to the initial goal of parsing the string ` mary sleeps '. 
intuitively understood , the seed makes available the bindings of the initial goal to the magic variants of the definite clauses defining a particular initial goal ; in this case the magic variant of the definite clause defining a constituent of category ` s ' .only when their magic literal unifies with the seed are these clauses applied .the so - called _ magic rules _ in figure [ magic - rules ] are derived in order to be able to use the bindings provided by the seed to derive new facts that provide the bindings which allow for a goal - directed application of the definite clauses in the grammar not directly defining the initial goal .definite clause 3 , for example , can be used to derive a magic_append fact which percolates the relevant bindings of the seed / initial goal to restrict the application of the magic variant of definite clauses 4 and 5 in figure [ dcs ] ( which are not displayed ) .magic - compiled logic programs / grammars can be interpreted in a bottom - up fashion without losing any of the goal - directedness normally associated with top - down interpretation using a so - called _ semi - naive bottom - up _ interpreter : a dynamic interpreter that tables only complete intermediate results , i. e. , facts or passive edges , and uses an agenda to avoid redundant sub - computations .the prolog predicates in figure [ sbi ] implement a semi - naive bottom - up interpreter .in this interpreter both the table and the agenda are represented using lists .the agenda keeps track of the facts that have not yet been used to update the table .it is important to notice that in order to use the interpreter for typed feature grammars it has to be adapted to perform graph unification .we refrain from making the necessary adaptions to the code for expository reasons .the table is initialized with the facts from the grammar .facts are combined using a operation called _match_. the match operation unifies all but one of the right - hand side literals of a definite clause in the grammar with facts in the table .the remaining right - hand side literal is unified with a newly derived fact , i. e. , a fact from the agenda . by doing this ,repeated derivation of facts from the same earlier derived facts is avoided .in case of large grammars the huge space requirements of dynamic processing often nullify the benefit of tabling intermediate results . by combining control strategies and allowing the user to specify how to process particular constraints in the grammar the selective magic parser avoids this problem .this solution is based on the observation that there are sub - computations that are relatively cheap and as a result do not need tabling . combining control strategiesdepends on a way to differentiate between types of constraints .for example , the ale parser presupposes a phrase structure backbone which can be used to determine whether a constraint is to be interpreted bottom - up or top - down . in the case of selective magic parsingwe use so - called _ parse types _ which allow the user to specify how constraints in the grammar are to be interpreted .a literal ( goal ) is considered a _ parse type literal ( goal ) _ if it has as its single argument a typed feature structure of a type specified as a parse type .all types in the type hierarchy can be used as parse types . this way parse type specification supports a flexible filtering component which allows us to experiment with the role of filtering . 
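Since the Prolog predicates of the figure referred to above are not reproduced here, the following sketch shows only the control structure of a semi-naive bottom-up interpreter for ground, Datalog-like clauses: the table stores derived facts, the agenda stores facts not yet used, and a rule fires only when the newly dequeued fact occurs in its body, so that no fact is rederived from the same earlier facts. Unification over typed feature structures, which the actual interpreter performs, is deliberately replaced here by identity of ground atoms; the example at the end is invented.

```python
from collections import deque

def semi_naive(rules, facts):
    """rules: iterable of (head, [body atoms]) over ground atoms.
    facts: initial ground facts.  Returns the set of derivable facts."""
    table = set(facts)
    agenda = deque(table)
    while agenda:
        new = agenda.popleft()
        for head, body in rules:
            # the new fact must be usable for at least one body atom and the
            # remaining body atoms must already be present in the table
            if new in body and all(b in table for b in body) and head not in table:
                table.add(head)
                agenda.append(head)
    return table

# e.g. semi_naive([("s", ["np", "vp"])], {"np", "vp"}) == {"np", "vp", "s"}
```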
however , in the remainder we will concentrate on a specific class of parse types : we assume the specification of type _ sign _ and its sub - types as parse types .this choice is based on the observation that the constraints on type _ sign _ and its sub - types play an important guiding role in the parsing process and are best interpreted bottom - up given the lexical orientation of hpsg .the parsing process corresponding to such a parse type specification is represented schematically in figure [ schema - parse ] .[ schema - sel ] starting from the lexical entries , i. e. , the definite clauses that specify the word objects in the grammar , phrases are built bottom - up by matching the parse type literals of the definite clauses in the grammar against the edges in the table .the non - parse type literals are processed according to the top - down control strategy described in section [ advance ] . in order to process parse type goals according to a semi - naive magic control strategy, we apply magic compilation selectively . only the definite clauses in a typed feature grammar which define parse type goalsare subject to magic compilation .the compilation applied to these clauses is identical to the magic compilation illustrated in section [ sec2_1 ] except that we derive magic rules only for the right - hand side literals in a clause which are of a parse type .the definite clauses in the grammar defining non - parse type goals are not compiled as they will be processed using the top - down interpreter described in the next section .non - parse type goals are interpreted using the standard interpreter of the controll grammar development system as developed and implemented by thilo gtz .this advanced top - down interpreter uses a search function that allows the user to specify the information on which the definite clauses in the grammar are indexed .an important advantage of deep multiple indexing is that the linguist does not have to take into account of processing criteria with respect to the organization of her / his data as is the case with a standard prolog search function which indexes on the functor of the first argument .another important feature of the top - down interpreter is its use of a selection function that interprets deterministic goals , i. e. , goals which unify with the left - hand side literal of exactly one definite clause in the grammar , prior to non - deterministic goals .this is often referred to as incorporating _deterministic closure _ .deterministic closure accomplishes a reduction of the number of choice points that need to be set during processing to a minimum .furthermore , it leads to earlier failure detection .finally , the used top - down interpreter implements a powerful coroutining mechanism : at run time the processing of a goal is postponed in case it is insufficiently instantiated . whether or not a goal is sufficiently instantiated is determined on the basis of so - called _delay patterns_. these are specifications provided by the user that indicate which restricting information has to be available before a goal is processed .the definite clauses resulting from selective magic transformation are interpreted using a semi - naive bottom - up interpreter that is adapted in two respects .it ensures that non - parse type goals are interpreted using the advanced top - down interpreter , and it allows non - parse type goals that remain delayed locally to be passed in and out of sub - computations in a similar fashion as proposed by . 
in order to accommodate these changesthe adapted semi - naive interpreter enables the use of edges which specify delayed goals .figure [ adapcompletion ] illustrates the adapted match operation .the first defining clause of match/3 passes delayed and non - parse type goals of the definite clause under consideration to the advanced top - down interpreter via the call to advanced_td_interpret/2 as the list of goals topdown .the second defining clause of match/3 is added to ensure all right - hand side literals are directly passed to the advanced top - down interpreter if none of them are of a parse type .allowing edges which specify delayed goals necessitates the adaption of the definition of edges/3 .when a parse type literal is matched against an edge in the table , the delayed goals specified by that edge need to be passed to the top - down interpreter .consider the definition of the predicate edges in figure [ adapedges ] .the third argument of the definition of edges/4 is used to collect delayed goals .when there are no more parse type literals in the right - hand side of the definite clause under consideration , the second defining clause of edges/4 appends the collected delayed goals to the remaining non - parse type literals .subsequently , the resulting list of literals is passed up again for advanced top - down interpretation .the described parser was implemented as part of the controll grammar development system .figure [ setup1 ] shows the overall setup of the controll magic component .the controll magic component presupposes a parse type specification and a set of delay patterns to determine when non - parse type constraints are to be interpreted . at run - timethe goal - directedness of the selective magic parser is further increased by means of using the phonology of the natural language expression to be parsed as specified by the initial goal to restrict the number of facts that are added to the table during initialization . only those facts in the grammar corresponding to lexical entries that have a value for their phonology feature that appears as part of the input stringare used to initialize the table .the controll magic component was tested with a larger ( 5000 lines ) hpsg grammar of a sizeable fragment of german .this grammar provides an analysis for simple and complex verb - second , verb - first and verb - last sentences with scrambling in the mittelfeld , extraposition phenomena , wh - movement and topicalization , integrated verb - first parentheticals , and an interface to an illocution theory , as well as the three kinds of infinitive constructions , nominal phrases , and adverbials . as the test grammar combines sub - strings in a non - concatenative fashion , a preprocessoris used that chunks the input string into linearization domains . this way the standard controll interpreter ( as described in section [ advance ] ) achieves parsing times of around 1 - 5 seconds for 5 word sentences and 1060 seconds for 12 word sentences .the use of magic compilation on all grammar constraints , i.e. , tabling of all sub - computations , leads to an vast increase of parsing times .the selective magic hpsg parser , however , exhibits a significant speedup in many cases .for example , parsing with the module of the grammar implementing the analysis of nominal phrases is up to nine times faster . at the same timethough selective magic hpsg parsing is sometimes significantly slower . 
for example , parsing of particular sentences exhibiting adverbial subordinate clauses and long extraction is sometimes more than nine times slower .we conjecture that these ambiguous results are due to the use of coroutining : as the test grammar was implemented using the standard controll interpreter , the delay patterns used presuppose a data - flow corresponding to advanced top - down control and are not fine - tuned with respect to the data - flow corresponding to the selective magic parser .coroutining is a flexible and powerful facility used in many grammar development systems and it will probably remain indispensable in dealing with many control problems despite its various disadvantages . the test results discussed above indicate that the comparison of parsing strategies can be seriously hampered by fine - tuning parsing using delay patterns .we believe therefore that further research into the systematics underlying coroutining would be desirable .we described a selective magic parser for typed feature grammars implementing hpsg that combines the advantages of dynamic bottom - up and advanced top - down control . as a resultthe parser avoids the efficiency problems resulting from the huge space requirements of storing intermediate results in parsing with large grammars .the parser allows the user to apply magic compilation to specific constraints in a grammar which as a result can be processed dynamically in a bottom - up and goal - directed fashion .state of the art top - down processing techniques are used to deal with the remaining constraints .we discussed various aspects concerning the implementation of the parser which was developed as part of the grammar development system controll .the author gratefully acknowledges the support of the sfb 340 project b4 `` from constraints to rules : efficient compilation of hpsg '' funded by the german science foundation , and the project pset : practical simplification of english text " , a three - year project funded by the uk engineering and physical sciences research council ( gr / l53175 ) , and apple computer inc .. the author wishes to thank dale gerdemann and erhard hinrichs and the anonymous reviewers for comments and discussion .of course , the author is responsible for all remaining errors .jochen drre .generalizing earley deduction for constraint - based grammars . in jochen drre and michael dorna ( eds . ) , 1993 . _computational aspects of constraint - based linguistic description i_. dyana-2 , deliverable r1.2.a .thilo gtz and detmar meurers .the controll system as large grammar development platform . in _ proceedings of the acl workshop on computational environments for grammar development and linguistic engineering _ ,madrid , spain .thilo gtz .1995 . compiling hpsg constraint grammars into logic programs . in _ proceedings of the workshop on computational logic for natural language processing _ ,edinburgh , uk .erhard hinrichs , detmar meurers , frank richter , manfred sailer , and heike winhart .ein hpsg - fragment des deutschen , teil 1 : theorie .technical report sfb 340 95 , university of tbingen , germany .guido minnen.1996.magic for filter optimization in dynamic bottom - up processing.in _ acl proceedings _ , santa cruz , california , usa .guido minnen .thesis , university of tbingen , germany . technical reportsfb 340 nr .130 .raghu ramakrishnan , divesh srivastava , and s. sudarshan .efficient bottom - up evaluation of logic programs . in joosvandewalle ( ed . ) , 1992 . 
_The State of the Art in Computer Systems and Software Engineering_. Kluwer Academic Publishers.
We propose a parser for constraint-logic grammars implementing HPSG that combines the advantages of dynamic bottom-up and advanced top-down control. The parser allows the user to apply magic compilation to specific constraints in a grammar, which as a result can be processed dynamically in a bottom-up and goal-directed fashion. State-of-the-art top-down processing techniques are used to deal with the remaining constraints. We discuss various aspects concerning the implementation of the parser as part of a grammar development system.
data compression is already a fundamental and well developed branch of classical information theory .it has wide reaching implications on every aspect of information storage and transmission and its quantum analogue is of considerable interest in a wide range of applications . in quantum information theory the idea of quantum data compression , in its strictest sense , has still much to gain from the classical theory with only some of the more fundamental classical notions being translated .the basis for compression of classical data is shannon s noiseless coding theorem , which states that the limit to classical data compression is given by the shannon entropy .schumacher presented the quantum analog to this and proved that the minimum resources necessary to faithfully describe a quantum message in the asymptotic limit is the von neumann entropy of the message , given by .schumacher also demonstrated that by encoding only the typical subspaces this bound could be achieved using a fixed length block coding scheme , and how in the asymptotic limit the compressed message can be recovered with average fidelity arbitrarily close to unity .the fact that schumacher s scheme is only faithful in the asymptotic limit has led many to ask whether there is a scheme where we can losslessly compress in the finite case i.e. where we want to be able to compress without loss of information in the case where we have a finite ( i.e. more practical ) number of qubits .of course there are many reasons why such a scenario would be desirable , e.g. in a quantum key distribution ( qkd ) scheme where often very high fidelity of the finite received signal is crucial to the integrity of the scheme .it is in such cases that the asymptotically faithful schumacher compression would not be ideal .this has , directly or indirectly , inspired a number of other quantum schemes based on classical ideas of lossless coding , such as huffman , arithmetic and shannon fano coding .the primary consideration in lossless compression schemes is the efficient generation and manipulation of variable length codes ( i.e. codes of variable rather than fixed block length ) .this is because ( as proven in particular by bostrm and felbinger ) it is not possible to achieve truly lossless compression using block codes .the application of variable length codes to quantum data compression is however not quite so straightforward .the main issue seems to be that we are forbidden by quantum mechanics to measure the length of each signal without disturbing and irreversibly changing the state and resulting message . 
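Before turning to our scheme, note that the Schumacher limit mentioned above is easy to evaluate numerically for any given source. The sketch below computes the ensemble density matrix of a set of pure letter states with given prior probabilities and its von Neumann entropy S(rho) = -Tr(rho log2 rho); the example ensemble at the end is invented for illustration and is not taken from this paper.

```python
import numpy as np

def von_neumann_entropy(states, probs):
    """S(rho) = -Tr(rho log2 rho) for an ensemble of pure letter states,
    given as normalized state vectors with prior probabilities."""
    dim = len(states[0])
    rho = np.zeros((dim, dim), dtype=complex)
    for psi, p in zip(states, probs):
        psi = np.asarray(psi, dtype=complex).reshape(-1, 1)
        rho += p * (psi @ psi.conj().T)
    eigenvalues = np.linalg.eigvalsh(rho)
    return float(-sum(e * np.log2(e) for e in eigenvalues if e > 1e-12))

# illustrative non-orthogonal qubit ensemble: |0> and |+> with equal priors;
# the result, roughly 0.60 qubits per letter, is the asymptotic compression limit
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
print(von_neumann_entropy([np.array([1.0, 0.0]), plus], [0.5, 0.5]))
```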
in our schemehowever we show both compression and decompression to be unitary operations and there never be any need for a length measurement of the variable length states .the main point of this paper is two fold , by looking at quantum data compression in the second quantisation , we present an entirely new model of how we can generate and efficiently use variable length codes .the significance of this model is that we believe it is a more natural application of variable length coding in quantum information theory .more importantly still is that fact that any data compression ( lossless or lossy ) can be seen as the _ minimum energy _ required to faithfully represent or transmit classical information contained within a quantum state .this allows us to use energy and entropy arguments to give a deeper insight into the physical nature of quantum data compression and to suggest bounds on the efficiency of our lossless compression protocol in a novel and interesting way .the rest of this paper is broken down as follows ; section ii of our paper is dedicated to reviewing the second quantisation and introducing our notation for the description of quantum states . in this descriptionthe average length of the codeword is related to the number of modes " that are occupied .using this fact , we look at the average energy of the message instead of its average length and therefore represent the compression limit from an energy rather than length perspective . in section iiithis offers the possibility to interpret data compression in terms of the landauer erasure principle . in sectioniv we introduce a compression algorithm that uses the second quantisation to generate variable length codes and show how the need for prefix or uniquely decipherable codes is unnecessarily restrictive given the structure of quantum mechanics .the absence of this restraint leads us to the concept of one - to - one ( 1 - 1 ) codes .classical results are then used to present an analogous quantum 1 - 1 entropy bound which , when taking into account the classical side information , asymptotically tends towards the existing von neumann bound .finally , in section v , we give an experimental setup for a small example that could be used to demonstrate the legitimacy of this new compression algorithm .in this section we introduce our second quantisation notation and then show , initially using schumacher s scheme as an example , how data compression can be seen as the _ minimum energy _ required to faithfully represent or transmit classical information contained within a quantum state . the general scenario for data compressionis that a memoryless source , say alice , wants to send a message to a receiver bob , in the most efficient way possible .the efficiency of this communication in space or time may be described through the optimisation of any one of a number of parameters e.g. minimising the number of bits or the total energy required to represent the message ( the two are not necessarily equal ) .the scenario we use in this paper is similar to that employed by schumacher . in our protocol the source alice ,wishes to communicate a number , , of quantum systems ( which we call the _ letters _ ) prepared from a set of distinct ( but not necessarily mutually orthogonal ) states latexmath:[ ] , and $ ] . 
of course the states appear to be of different length but as we explained in section ii this is not the case , the missing modes are occupied by vacuum states ( which carry no information ) .the logic of our compression is that the state with the highest probability ( the one that appears most in the classical language ) is encoded in the shortest possible form .note that this is different to schumacher s strategy .schumacher only encodes the states in the typical subspace , and all the other states are deleted ( leading to unfaithful compression for the finite size message ) .the states in the typical subspace , on the other hand , are in schumacher s case all encoded into codewords of equal length ( asymptotically equal to the original length times the entropy of the signal ) . in our scheme the typical subspace does not have exclusive importance , with all the messages being encoded faithfully . note finally that the whole transformation is unitary and therefore can be implemented in quantum mechanics ( we show how to do so in section v ) . from this encoding , when say , , we can infer the entropy of the string as bits / symbol .this is better than our expected optimal of bits / symbol and is a significant improvement on schumacher codings bits / symbol .however , as we will see , it is not appropriate and is actually misleading to directly compare these results without the added the respective information required from the classical side channel .the main advantage of this compression method over that of schumacher s is that this is lossless in the finite case , i.e. signal can be completely recovered , unlike in schumacher s case where a certain loss in fidelity is inevitable .it is clear that our example with qubits can in fact be applied to any number of qubits ( or , more accurately , to quantum systems of any dimension ) by continuing with the principle of encoding less likely strings into states with more photons .this mapping is perfectly well defined and unique even given the case that we have messages of equal probability , where here we can arbitrarily choose which message to encode into the shorter word .an important point to make is that in this scheme we no longer need to use the classical notion of unique decipherability ( fig 3) for defining codeword mappings .this is because given the encoding technique any codeword set that represents a 1 - 1 map between codeword and letter state is sufficiently effective in being uniquely decipherable ( u.d . ) . therefore the quantum notion of u.d ., as directly applied in this case , is stronger and allows for shorter codewords than is classically possible , something that has has also been considered by bostrm . in terms of decompression , classically we make use of the fact that we have the length information of each codeword .however in quantum mechanics encoding the length information of each codeword along with the respective codeword is quite impractical , as a number of authors have pointed out .this is because in quantum mechanics in order to infer the individual length of a codeword would require there to be a measurement of some sort and performing any measurement would collapse this superposition onto any one of those codewords irreversibly , resulting in a disturbance to the state and therefore an unacceptable loss of information .it is therefore indeed fortunate that in order to faithfully decompress ( i.e. 
replace the redundancy that was removed by quantum compression ) we need only to know the total length of the message that was initially encoded ( i.e. total number of qubits transmitted ) rather than the individual lengths of each codeword ( i.e. length of each letter state ) . with having only this total length information , we then know the redundancy we need to add to the compressed signal ( i.e. the signal containing the statistical properties of the original message ) in order to restore the original message. clearly if this information is missing we can only probabilistically achieve faithful decompression by best guessing the original length of the message as also pointed out by bostrm .as can not be measured , it must be known by the sender and sent additional to the compressed quantum message ( see fig . 4)(via a classical or quantum side channel ) or perhaps agreed upon between sender and receiver prior to communication .it is worth clarifying that classically , is always available to us regardless of the coding scheme employed , as we can easily make a measurement on its length without any risk of disturbing the state . from landauers erasure principle , briefly discussed and applied in section iii , it is possible to derive an lower bound on the efficiency of this compression scheme .we use the fact that according to landauer when we erase n units of information we have to increase the entropy of the environment by n units . if the entropy increase of the environment is less than this , that then must imply that there is a suitable amount of information that was not deleted .the encoding we use to achieve compression is faithful for any finite length of message , only if , as mentioned before , together with the compressed message we send another piece of information. this could be the total length of the uncompressed message , or instead , slightly more efficiently , the entropy of the message .so , if the statistical properties of the message are represented by , we could send additional ( qu)bits along with the compressed message to represent the length of the total signal , or , at best send ( which is ) .therefore from landauer s principle we expect that the limit to compression in our scheme is bounded from below by : if we are sending log(n ) bits of length information or if we , more efficiently , only send the entropy of the total signal , from which it is possible to infer the length information. a more rigorous proof to these bounds and to the 1 - 1 quantum compression scheme can however be obtained using results from cover and prisco . 
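The codeword assignment underlying the scheme can also be sketched purely classically: message strings are ranked by probability and mapped, in that order, onto the shortest available binary strings of a 1-1 (not necessarily uniquely decipherable) code, and the resulting average length is compared with the Shannon entropy of the block distribution. The block length and source probabilities below are illustrative assumptions; the quantum scheme additionally carries these codewords in occupation-number states and relies on the separately transmitted total length for lossless decompression.

```python
import math
from itertools import count, product

def one_to_one_code(probabilities):
    """Map outcomes (most probable first) onto the shortest binary strings
    '', '0', '1', '00', ... ; returns a list of (probability, codeword)."""
    def codewords():
        yield ""
        for length in count(1):
            for bits in product("01", repeat=length):
                yield "".join(bits)
    ranked = sorted(probabilities, reverse=True)
    return list(zip(ranked, codewords()))

def average_length(code):
    return sum(p * len(w) for p, w in code)

def shannon_entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# e.g. all 3-letter blocks of a binary source with p(0) = 0.9:
probs = [0.9 ** s.count("0") * 0.1 ** s.count("1")
         for s in ("".join(b) for b in product("01", repeat=3))]
code = one_to_one_code(probs)
print(average_length(code), shannon_entropy(probs))
# the 1-1 average length falls below the block entropy, which is why the
# total length (or equivalent side information) must be sent separately
```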
from our encoding schemewe can see that the average length of the _ ith _codeword is : and therefore by definition , the average word length associated with this coding scheme , is : in a similar fashion to the shannon entropy and minimum average word length for u.d .codes , we define the lower bound of our 1 - 1 average word length as the corresponding 1 - 1 entropy , .this 1 - 1 entropy tells us the best that we can compress to using 1 - 1 codes and it is related to the shannon entropy in the following manner : and by then using the method of lagrange multipliers to maximize the right hand side of the expression as shown by we find that : this proof by was later refined by to given that the 1 - 1 part of our encoding scheme may be essentially considered to be classical ( since classical mechanics is a special case of quantum mechanics in the diagonal basis ) we can interchange the shannon for the von neumann entropy and obtain an exact lower bound for the compression of our quantum 1 - 1 coding : where is our quantum 1 - 1 entropy and is the entropy of the total ( unencoded ) message .so we see that for large this bound coincides with the one obtained independently and more physically through landauer s erasure .therefore from the 3 qubit example given earlier the total entropy of the state after compression ( i.e. including classical side channel ) is therefore bits / symbol , still an improvement on 1.88 bits / symbol by schumacher .it is the case however that regardless of the efficiency of this scheme in the finite limit , both schemes tend towards the same von neumann entropy . in summaryboth these methods could be equally useful in quantum data compression depending on the required accuracy , speed and convenience of the compression algorithm .our motivation was to optimise total energy which we achieve by having an even greater permissible codespace ( i.e. 1 - 1 rather than u.d .codespace ) and hence on average shorter codewords available to us .as mentioned it may be the case that different schemes build on these ideas to optimise resources other than energy e.g. compression time , circuit complexity , difficulty of implementation , equipment availability or cost .in this section we discuss practical issues related to realising a a very simple instance of our 1 - 1 quantum encoding scheme .we will be encoding two quantum bits in the state or where as in the earlier 3 qubit example with . as in the 3 qubit example , going into the basis where the density matrix is diagonal and then mapping the respective letter states to corresponding codewords in order of most probable to least probable ( again here assuming ) , we get : note that this operation just tells us to annihilate the second photon if it is the state and map to and to . so in order to implement this transformation we clearly need to be able to perform a conditional operation from the polarisation degrees of freedom to the spatial degrees of freedom .the subsequent transformation is then just a change of basis , from a basis to a basis , which is known as the hadamard transform and is easy to implement . 
we know that as we have an orthogonal set on the left hand side and an orthogonal set on the right hand side , according to quantum mechanics there must be a unitary transformation to implement this .since hadamard is a simple transformation to implement we only need to show how to implement the following two qubit transformation : latexmath:[\[\begin{aligned } deleting the second photon if its state is and otherwise leaving everything the same .note that , due to linearity of quantum mechanics , a superposition of states on the left hand side would be transformed into the corresponding superposition of the states on the right hand side .this means that we will have elements of unequal length ( different number of photons ) present in the superposition . while this may in practice be difficult to prepare ,there is nothing fundamental to suggest that in principle such states can not be prepared , as we show next . in order to implement this transformation , one possible method is presented in fig . 5 in the form of a simple quantum computational network . in this circuit , we first need to distinguish the two modes as we only want to delete a particle from the second ( and not the first ) mode .we can imagine that in practice if these are two light modes , then we actually need to distinguish their frequencies and , and this could be done by a prism splitting the two frequencies at * a*. therefore the two modes now occupy different spatial degrees of freedom .next we need to distinguish the two polarisations in the second mode , which in the case of a photon would involve a polarisation dependent beam splitter ( pdbs ) at * b*. now , after this beam - splitter we can distinguish both the frequency and the polarization in the second mode , and so we only need to remove a photon from the second mode if we have h polarization . herewe use the trick we mentioned in the landauer s erasure section , namely that we swap the photon in the second mode with an environmental vacuum state conditional on it being horizontally polarized ( and otherwise we do nothing ) .if initially we had a superposition of all states , and the state of the environment was , then after the swap , the state will be we now need to perform a simple hadamard transformation on the environment such that latexmath:[\[\begin{aligned } after which the total state can be written as latexmath:[\[\begin{aligned } ( & a & |1_h,0_v \rangle|1_h,0_v \rangle + \ldots d|0_h,1_v \rangle|0_h,1_v \rangle ) |0_h,0_v\rangle + \nonumber\\ ( & a & |1_h,0_v \rangle|1_h,0_v \rangle + \ldots - d|0_h,1_v \rangle|0_h,1_v \rangle ) measurement on the environment ( at * c * ) , if we obtain the outcome , then the resulting state of two photons is already our encoded state , while if the outcome is , then the state is the encoded state up to a negative phase shift in the last two elements of the superposition .this can be corrected by applying a simple phase shift conditional on the second photon being vertical .the whole operation at * c * can also be performed coherently without performing the measurement as indicated in fig 5 .we acknowledge that this operation may not be simple to execute in practice and may require a percent effective photo - detection scheme which is currently unavailable .however , this gate can certainly be implemented with some probability at present . 
at the end, we need to reverse the operation of the pdbs , and then reverse the operation of the prism , thus finally recombining the two modes into the same spatial degree of freedom .the resulting state is our encoded state and can then be sent as such .in this paper a new variable length quantum data compression scheme has been outlined . by looking at quantum data compression in the second quantisation framework, we can generate variable length codes in a natural and efficient manner without having the significant memory overhead common to other variable length schemes .the quantum part of our signal is compressed beyond the von neumann limit , but at the expense of having to communicate a certain amount of classical information . by sending the total length of the transmitted signal through a classical channelenables us to compress and decompress with perfect fidelity for any number of qubits .we have presented an argument based on landauer s erasure principle which provides us with a with a lower bound on the efficiency of our compression scheme .this is independently verified by classical results due to cover and prisco . as expected , the sum of the classical and the quantum parts of the compressed message still tends towards the limit given by the von neumman entropy .asymptotically , the quantum part dominates over the classical part and becomes equal to the von neumman entropy . the tightest compression bound for our schemeis not known .note that we assume that both the sender and receiver know exactly the properties of the source , i.e. they know the quantum states the source emits and the corresponding probabilities and modes within which they are emitted .this of course means that our scheme is not universal .it is unlikely however that in a universal scheme the sender and receiver would need less information than this to perform compression , e.g. just knowledge of the entropy of the source and length of message without knowledge of the density matrix ( or perhaps even this is unnecessary ) .but this is a separate issue that warrants further investigation .our encoding has a novel feature that it involves superpositions of different numbers of photons within the superposition states .we acknowledge that there may be a superselection " rule that prohibits the nature of this approach . however , we believe that , while these states might be difficult to prepare , they are certainly not impossible according to the basic rules of quantum mechanics . to support this we offer a general way of implementing our scheme in the simple case of encoding two quantum bits .whether the space - time complexity of our implementation is most efficient in practice remains an open question .as it is , our encoding is a unitary transformation and the receiver applies the decoding operation ( inverse of the unitary transformation ) to decompress the quantum message . in the case that we encode and send classical bits , the receiver may wish to infer the original classical string that was sent .the receiver can then perform measurements on the decompressed quantum states to infer the original classical letters .since the original classical letters are by definition fully distinguishable and if the transmitted quantum states are orthogonal only then can this final step be done with perfect efficiency ( this is of course a special case of our most general quantum scheme ) .it is also worth noting that it is on the sequence of these quantum states i.e. 
on the total message , that our compression scheme acts .this means that this scheme would not be so useful in an application where instantaneous lossless decompression is required , where one would have to wait for all the photons to arrive before beginning the decoding operation . in the event that the receiver starts the decompression operation in advance of the last photon arriving, he truncates the signal and hence will not be able to decode the original signal with perfect fidelity . in our scheme it is the average length of the message , or more appropriately the average energy required to represent the information within the quantum state , that tends in the asymptotic limit towards the von neumann entropy .we therefore decided instead to re - formulate compression from an energy perspective , as the measure is then more consistent as an optimal measure of a systems information carrying ability . as we are aware in quantum mechanics , energy and informationare intricately linked , far more so than photon number and information .we are interested in primarily reducing the energy required to represent the message , which we stress is not affected by the fact that we need to wait until the whole signal is received . in our framework the incorporation of any vacuum states to extend the variable length message to the same length as the longest component of the superposition , by definition , does not increase the energy total for the message .in reality of course we do not even have to wait for the whole signal , we can just truncate it at the average length of the signal and although we end up with a lossy compression scheme we still tends towards the von neumann entropy asymptotically .the issue of waiting until the whole signal arrives really is to do with the fact that we can not measure the length of the signal without collapsing it into a particular length , which is not what we want as we want to keep intact the superposition and consequently preserve the rest of the information within the system .our approach raises a number of interesting questions .firstly , it gives us a more physical model of data compression and relates the entropy to the minimum energy required to represent the information contained within a quantum state .this could be very useful from an energy saving perspective and gives a guideline as to the minimum temperature we could cool a system to before we begin to loose information .another benefit to this compression scheme is that it does not depend on the nature of particles , the scheme applies equally well to both bosonic and fermionic systems .the reason for this is that we never put more than one particle per state when we are encoding and therefore we never need to consider the pauli exclusion principle .whether this principle plays a more important role in data compression , i.e. whether there could be a fundamental difference between the bosonic and fermionic systems ability to store ( and in general process ) information is not yet known .the ultimate bound due to bekenstein suggests that the answer is no " , however , specific encodings may highlight differences between the two kinds of particles .finally , our scheme assumes that the encoding and the decoding processes as well as the possible channel in between the two are error free . 
in practice this is, of course, never true, and it is interesting to analyze to what extent our scheme suffers in the presence of noise and decoherence at its various stages. we hope that our work will stimulate more research into quantum data compression as well as experimental realization in the optical and the solid state domain.

we would like to acknowledge useful discussions with k. boström, d. bouwmeester, g. bowen, c. rogers and useful communication with t. cover and j. keiffer. l. r. acknowledges financial support from invensys plc. v. v. would like to thank hewlett-packard, elsag s.p.a. and the european union project equip (contract ist-1999-11053) for financial support.

k. bostroem and t. felbinger, "lossless quantum data compression and variable length coding", phys. rev. a 65, 032313 (2002).
v. vedral, rev. mod. phys. 74, 297 (2002).
r. cleve and d. p. divincenzo, "schumacher's quantum data compression as a quantum computation", phys. rev. a 54, 2636 (1996).
s. bose, l. rallan and v. vedral, "communication capacity of quantum computation", phys. rev. lett. 85, 5448 (2000).
b. schumacher, "quantum coding", phys. rev. a 51, 2738 (1995).
b. schumacher and m. d. westmoreland, "indeterminate length quantum coding", phys. rev. a 64, 042304 (2001).
r. jozsa and b. schumacher, "a new proof of the quantum noiseless coding theorem", j. mod. opt. 41, 2343 (1994).
i. l. chuang and d. s. modha, "reversible arithmetic coding for quantum data compression", ieee transactions on information theory 46 (2000).
s. braunstein, c. a. fuchs, d. gottesman, and h.-k. lo, "a quantum analog of huffman coding", ieee international symposium on information theory (1998); quant-ph/9805080.
c. e. shannon and w. weaver, "the mathematical theory of communication" (university of illinois press, urbana, il, 1949).
m. a. nielsen and i. l. chuang, "quantum computation and quantum information", cambridge university press (2001).
t. m. cover and j. a. thomas, "elements of information theory", wiley, new york (1991).
g. a. jones and j. m. jones, "information theory and coding", springer, london (2000).
r. p. feynman, "feynman lectures on computation", edited by a. j. g. hey and r. w. allen, addison-wesley (1996).
v. vedral, "landauer's erasure, error correction and entanglement", proc. r. soc. lond. a 456, 969-984 (2000).
r. landauer, ibm j. res. develop. 5, 183 (1961).
s. leung-yan-cheong and t. m. cover, "some equivalences between shannon entropy and kolmogorov complexity", ieee transactions on information theory 24, 331 (1978).
c. blundo and r. d. prisco, "new bounds on the expected length of 1-1 codes", ieee transactions on information theory 42, 246 (1996).
c. rogers, private communication.
j. d. bekenstein, "entropy content and information flow in systems with limited energy", phys. rev. d 30, 1669 (1984).

[ fig : schumcoding ] in schumacher's coding the first operation compresses a quantum source stored in qubits into qubits. this is then decompressed by an operation, and for a finite length message the output state is not in general the same as the input. in the asymptotic limit, on the other hand, the source is accurately recovered.

[ fig : codespace ] illustration of codespace: uniquely decipherable ( u.d .
) and prefix - free ( instantaneous ) codes are a subset of the 1 - 1 codes used in our data compression scheme .classically , 1 - 1 codes are not very useful for data compression as they usually require another symbol signaling the end of one letter and the beginning of another one .however , our presented quantum scheme enables us to make us of 1 - 1 codes in a way that is not classically practical .[ fig : algoverview ] this diagram presents the core of our quantum 1 - 1 compression algorithm .initially , is compressed into and sent together with the classical message , , containing the information about the total number of input qubits ( i.e. total length of the signal ) . on decompression , using the information in , the original signal is faithfully recovered for any number of qubits .[ fig : experiment ] a circuit for our quantum data compression scheme applied to qubits .the two photons and are initially split according to their frequency , after which the photon in frequency is further split according to its polarization .the h branch is then swapped with an environmental vacuum state , while nothing happens to the v branch of .the gate is the hadamard transform subsequently acting on the environment as defined in the text .the following gate is a conditional phase gate , , introducing a negative phase in the second photon , only if its polarization is v , and the environment has one photon . at the endthe two polarizations and then the two photons are recombined back into the single spatial degree of freedom .our circuit is completely general and could be applied to different kinds of particles , such as electrons for example .
by looking at quantum data compression in the second quantisation, we present a new model for the efficient generation and use of variable length codes. in this picture lossless data compression can be seen in terms of the _minimum energy_ required to faithfully represent or transmit the classical information contained within a quantum state. in order to represent information we create quanta in some predefined modes (i.e. frequencies) prepared in one of two possible internal states (the information carrying degrees of freedom). data compression is now seen as the selective annihilation of these quanta, whose energy is effectively dissipated into the environment. as any increase in the energy of the environment is intricately linked to information loss and is subject to landauer's erasure principle, we use this principle to distinguish lossless from lossy schemes and to suggest bounds on the efficiency of our lossless compression protocol. in line with the work of boström and felbinger, we also show that when using variable length codes the classical notions of prefix-free or uniquely decipherable codes are unnecessarily restrictive given the structure of quantum mechanics, and that a 1-1 mapping is sufficient. in the absence of this restriction we translate existing classical results on 1-1 coding to the quantum domain to derive a new upper bound on the compression of quantum information. finally, we present a simple quantum circuit to implement our scheme.
a rigid sphere falling through a viscous medium is a classic problem in fluid dynamics , which was first solved in the steady state for the limit of vanishingly small reynolds number in an infinite domain by g. g. stokes in 1851 .the time - dependent approach to the steady state allows the partial differential equation for the sphere and fluid to be reduced to an integrodifferential equation for the motion of the sphere in an infinite medium .the physical effects included in this equation are the buoyancy which drives the motion , the inertia of the sphere , the viscous drag , an added mass term , and a memory or history integral . in the case of a _newtonian fluid _ , the main effect of the memory integral on the dynamics is to modify the approach to steady state from exponential to algebraic .the integral also makes the equation effectively second order , though it is generally accepted that no oscillations occur as the sphere reaches its steady state value .physically it is clear that no oscillations can occur due to the absence of a restoring force against gravity , and a sphere released from rest in a newtonian fluid at low reynolds number is observed to reach its terminal velocity monotonically .however , it is not directly evident mathematically that oscillating solutions are precluded , particularly as the governing integrodifferential equation can be transformed to a nonautonomous second order ordinary differential equation which has the form of a harmonic oscillator . ina _ non - newtonian fluid _ , such as a polymer solution , a falling sphere is often observed to undergo transient oscillations before reaching its terminal velocity .these oscillations occur due to the elasticity of the fluid , which provides a restoring force .the steady state value is of primary concern in many applications , and much work focuses only on this aspect of the problem .the oscillations which occur during the approach to steady state have been reproduced in a linear viscoelastic model by king and waters .more recently , _nontransient _ oscillations of falling spheres ( and rising bubbles ) have been observed in specific aqueous solutions of surfactants ( wormlike micellar solutions ) .these observations were initially made for a bubble in the wormlike micellar fluid ctab / nasal , which showed oscillations in its position and shape .the shape oscillations included an apparent cusp which momentarily appears at the trailing end of the bubble .such a cusplike tail is a well known property of rising bubbles in non - newtonian fluids , which we initially believed to play an important role in the micellar oscillations .= 5.5 cm subsequent observations of solid spheres which also oscillate while falling through the same solutions made it clear that the cusp is not involved in the phenomenon , and that another explanation must be sought .unlike a sedimenting sphere in a conventional non - newtonian fluid , these oscillations do not appear to be transient .an example is given in figure [ f - expt ] , which shows the motion of a 1/8 teflon sphere falling through a tube ( cm , cm ) filled with a 6 mm 1:1 solution of ctab / nasal .our attempts to model this phenomenon brought to our attention the unusual aspects of the integrodifferential equation for a falling sphere . 
we prove here that the equation for sedimenting sphere in a newtonian fluid in the limit of zero reynolds number ( creeping flow ) does not admit oscillating solutions , despite some appearances that it does .this result is due to the special properties of the error function when multiplied by oscillating functions .it is ultimately related to the stability of nonautonomous ordinary differential equations with monotone secular terms , which is appropriately viewed as an initial value problem , and not in terms of linear stability analysis around the terminal velocity .we begin by reviewing some classical results for the equation of motion governing a falling sphere in a viscous newtonian fluid of infinite extent ( no boundaries ) .an incompressible fluid in the absence of body forces is described by the equations where is the density of the fluid , is the pressure , is the velocity field for the fluid , and is the extra stress tensor , which measures force per unit area ( other than pressure ) in the present configuration of the fluid .a newtonian fluid is a fluid for which the stress tensor is linearly related to the rate of strain tensor through the relation where is the viscosity of the fluid and is the symmetric part of the velocity gradient . from ( [ ce2]),([ce3 ] ) , and ( [ st ] ) one obtains the navier - stokes equation : non - newtonian fluids are fluids for which the assumption ( [ st ] ) is invalid . for instance ,polymeric and viscoelastic fluids often fail to conform to the instantaneous relation between stress and velocity gradients implicit in ( [ st ] ). in general will depend nonlinearly on and on the past history of stress in the fluid . by choosing a time scale and an appropriate length scale , ( [ ns ] ) can be written in a nondimensional form where , and are the nondimensional velocity and pressure .the dimensionless constant is called the reynolds number and it measures the relative importance of inertial effects to that of viscous effects .when the inertial effects are negligible ( ) , equation ( [ ns ] ) is called the stokes equation . in this paperwe restrict our analysis to this situation .stokes solved the steady version of ( [ ns2])-([incomp2 ] ) for the case of sphere falling though the fluid for vanishing reynolds number .the stokes solution gives the steady state drag on the sphere of radius falling through a fluid with a steady speed to be .however , in order to solve the transient problem of falling sphere , we first solve the problem of sphere oscillating with a frequency and compute the drag experienced by the sphere as a function of .the drag experienced by a sphere falling at a arbitrary speed can then be computed as a fourier integral of this drag .consider a sphere of radius and density in a newtonian fluid of density and viscosity .the exterior stokes flow driven by small oscillations of the sphere at a frequency can be solved exactly , which leads to a hydrodynamic force dependent on both and : where is a diffusive lengthscale common to stokes problems , and is the kinematic viscosity . using this, the general time - dependent problem of the motion of a falling sphere can be reduced from a partial to an ordinary differential equation for the speed of the sphere , an exact equation which takes into account the motion of the surrounding fluid . 
for a sphere moving with an arbitrary speed , the hydrodynamic drag it experiences can be calculated by representing as a fourier integral : the drag for each fourier component is then given by ( [ eq : fw - drag ] ) .the total hydrodynamic drag on the sphere is obtained by integrating over all fourier components , leading to where the first term represents the steady state drag on a sphere falling with a velocity , the second term represents the added mass term ( the force required to accelerate the surrounding fluid ) , the third term is the basset memory term , and is the volume of the sphere . if the sphere starts from rest , then the lower limit of the integral in ( [ eq : f - net ] ) starts from instead of .the expression for the unsteady drag force can then be substituted into the balance of force equation for the sphere : thus the equation of motion for the sphere is which can be rewritten in the simpler form where and is the density difference which drives the motion . in this approach the motion of the sphere is described by an integrodifferential equation whose integral term has the same singularity as abel s equation .note that this equation is only valid in the limit of zero reynolds number .physically one expects the solution to approach a terminal velocity .it is clear from ( [ eq : uprime ] ) that the only steady state solution ( ) possible is which is the classical result of balancing the stokes drag with the buoyancy .we first rewrite the integrodifferential equation ( ide ) for the sphere in a nondimensional form using as the velocity scale , and , the viscous diffusion time , as the time scale .the variables are with this rescaling ( [ eq : uprime ] ) becomes where the control parameter is given by thus the motion of the sphere depends only on the relative densities of the sphere and the fluid through the parameter .the density of the sphere can range from zero to infinity , which implies a parameter range . when the density of the sphere is equal to the density of the fluid , and . herewe will only be concerned with falling spheres , for which .although we will solve the ide ( [ eq : maineq ] ) directly , it is of interest to connect the problem to ordinary differential equations ( ode s ) and discuss some important consequences therein , especially with regard to the stability of the terminal velocity solution .following villat , we can rewrite ( [ eq : maineq ] ) as an ode using abel s theorem ( see appendix [ app ] ) : more general initial conditions and lead to a slightly different ode : note that it follows from equation ( [ eq : maineq ] ) that is prescribed in terms of , and thus the second order ode we have obtained requires only one initial condition . since we are investigating the possibility of steady - state oscillations of a sedimenting sphere ,we are primarily concerned with the asymptotic behavior of ( [ eq : non - dim1 ] ) . moreover , since the nonautonomous term tends to zero as , one might expect the stability of ( [ eq : non - dim1 ] ) to mimic the homogeneous problem . with this in mind , let and denote the roots of the characteristic equation itis readily verified that ( [ ce ] ) has complex roots for .moreover , the roots have positive real parts for .since the relevant range of for a falling sphere is , one sees that oscillations are not a priori precluded . 
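although the analysis below works with the exact solution, the ide ( [ eq : maineq ] ) can also be integrated directly by product integration: the velocity derivative is taken piecewise constant on each time step and the abel kernel is integrated exactly over each sub-interval. the python sketch below is ours and purely illustrative; the schematic form u'(t) = alpha*(1 - u(t)) - beta*(memory integral), with alpha and beta as placeholders for the coefficients fixed by the density ratio in the nondimensionalization above, stands in for the precise equation.

# product-integration sketch (ours) for an ide of the schematic form
#   u'(t) = alpha * (1 - u(t)) - beta * int_0^t u'(s) / sqrt(t - s) ds,  u(0) = 0,
# where alpha, beta are placeholders for the coefficients set by the density ratio.
import numpy as np

def falling_sphere(alpha=1.0, beta=1.0, h=0.005, t_max=10.0):
    n = int(t_max / h)
    t = np.linspace(0.0, t_max, n + 1)
    u = np.zeros(n + 1)                      # sphere released from rest
    w0 = 2.0 * np.sqrt(h)                    # exact kernel weight of the newest sub-interval
    for k in range(n):
        if k > 0:
            j = np.arange(k)
            du = (u[1:k + 1] - u[:k]) / h    # past slopes, piecewise constant
            w = 2.0 * (np.sqrt(t[k + 1] - t[j]) - np.sqrt(t[k + 1] - t[j + 1]))
            hist = np.dot(du, w)             # history part of the memory integral
        else:
            hist = 0.0
        # implicit update: solve the linearized balance at t[k+1] for u[k+1]
        denom = (1.0 + beta * w0) / h + alpha
        u[k + 1] = (u[k] * (1.0 + beta * w0) / h + alpha - beta * hist) / denom
    return t, u

if __name__ == "__main__":
    t, u = falling_sphere()
    print(u[::400])   # rises monotonically toward the terminal value 1 over this window

the scheme is only meant to show how the weakly singular kernel can be handled; the long-time behaviour and the delicate cancellations discussed next are better studied through the exact solution.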
in terms of the actual densities of the fluid and sphere the condition for complex roots corresponds to , which is true in the case of a heavy sphere falling through a lighter liquid ( ) .if additionally , then the complex roots have positive real parts . if we rewrite ( [ eq : non - dim1 ] ) in the asymptotic limit ( ) as a first order linear system where , and , then is the unique equilibrium point , which corresponds to the terminal velocity .the eigenvalues of this system are precisely and , whence the equilibrium point becomes unstable . nonetheless , as we will show , even in this range ( ) , the solution to the full equation ( [ eq : non - dim1 ] ) _ monotonically _ approaches the value 1 , corresponding to the monotonic approach to the steady stokes value ( [ e - stokes ] ) for the actual velocity .clearly the nonautonomous term continues to play a dominant role in the stability of ( [ eq : non - dim1 ] ) , despite its algebraic approach to zero . to solve for , we return to the ide ( [ eq : maineq ] ) and apply the laplace transform in the case : since and , we may express this last equation in the form .\ ] ] moreover , the identity implies .\label{eq : uprime2}\ ] ] finally , since we find the solution .\label{eq : soln } \end{aligned}\ ] ] this solution to the ide ( [ eq : maineq ] ) is also the solution to the ode ( [ eq : non - dim1])-([eq : non - dim2 ] ) . applying transform methods to the more general set of equations defined by ( [ eq : gen1])-([eq : gen2 ] )one finds the solution where is the solution defined by ( [ eq : soln ] ) .note that the solution for arbitrary initial velocity is a simple rescaling of the solution for the sphere initially at rest .it is not obvious from the form of in ( [ eq : soln ] ) that the values approach 1 monotonically .let us first investigate the asymptotic behavior of this solution .finding the asymptotic behavior of the solution is straightforward .we employ the asymptotic expansion of the error function \end{aligned}\ ] ] as , provided . since for , we see that asymptotically thus the product of the exponential term and the error function approaches zero in the limit . using this expansion in ( [ eq : soln ] ) we obtain as the solution for any initial condition is a rescaling of , we see this limit applies for all values of .although the asymptotics of this solution are clear , the transient solution has some unusual properties .numerical simulation of the ide ( [ eq : maineq ] ) , or even attempts to plot the analytic solution ( [ eq : soln ] ) , eventually blow up at large when is in the unstable range .clearly the cancellation between the exponentially growing and decreasing terms are quite sensitive to numerical errors .this is an indication that the product should be considered as a special function with its own properties .we begin with the transient solution to the ide or ode considered in the previous section . 
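before doing so, a brief practical remark: the sensitive cancellation just described can be avoided by evaluating the product of the growing exponential and the complementary error function as a single scaled function. the identity e^{z^2} erfc(z) = w(iz), with w the faddeeva function available in scipy, does this for complex arguments. the sketch below is ours; the root value used is hypothetical, and only the form of the product, terms like e^{k^2 t} erfc(k sqrt(t)) with k a complex root, is taken from the solution above.

# stable evaluation (our sketch) of exp(z**2) * erfc(z) via the faddeeva function,
# compared with the naive product that eventually overflows and returns nan.
import numpy as np
from scipy.special import erfc, wofz

def stable_product(z):
    z = np.asarray(z, dtype=complex)
    return wofz(1j * z)                  # equals exp(z**2) * erfc(z)

def naive_product(z):
    z = np.asarray(z, dtype=complex)
    return np.exp(z**2) * erfc(z)        # overflow-prone for large |z|

if __name__ == "__main__":
    k = 1.0 + 0.5j                       # hypothetical complex root in the first quadrant
    for t in [0.1, 1.0, 10.0, 1000.0, 2000.0]:
        z = k * np.sqrt(t)
        print(t, stable_product(z), naive_product(z))
    # the two columns agree for moderate t; for the largest t the naive form
    # overflows (a runtime warning is expected) while the stable one stays finite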
in addition to the insensitivity of the transient solution to the real part of the homogeneous roots, it is surprising that the nonzero complex part does not lead to _any_ oscillations in the velocity of the sphere, although there has sometimes been confusion on this point regarding transient oscillations. experimentally the sphere in a newtonian fluid has never been observed to oscillate, in contradistinction to most non-newtonian (particularly elastic) fluids. we will show that the solution defined by ( [ eq : soln ] ) is monotone as a function of time for all . although this may be well known, we have not yet found a reference to any proof other than the case , which corresponds to . let us define the function , which we shall refer to as the villat function, since this combination appeared in the explicit solution of the differential equation for the falling sphere problem by villat. closely related to it is the "plasma dispersion function", defined by ; in fact, the two are simply related. using the villat function we may now prove the main theorem.

for each , the function approaches the limit 1 monotonically.

proof. we have shown that the limit is 1, thus it remains to show that it is approached monotonically. we will demonstrate this by proving for all . to this end, fix , and recall that and denote the conjugate pair of roots of the characteristic polynomial. recall the expression ( [ eq : uprime2 ] ) for the derivative. since , it follows that , and since for each , it is evident from ( [ eq : uprimeform2 ] ) that the sign of is determined by the imaginary part of the function . for the plasma dispersion function introduced above, the real and imaginary parts are given by (see e.g., ) and . it is readily verified that , thus in polar form we have for some fixed , in which case , where . using this information we compute ; in the last step we have used the fact that . since lies in the first quadrant, the prefactor of the last integral above is positive and we may conclude that , provided . let us denote the integrand as , and note that for (recall is fixed). the proof is complete once the following two observations are made: 1. for ; 2. . to see the first, notice that for we have , thus . the second observation follows from a standard change of variables. the two observations above imply that the inequality ( [ negeq ] ) holds, in which case by ( [ eq : uprimeform2 ] ) and ( [ eq : poseq ] ) we see . since was arbitrary, the proof is complete.

the solution to the initial value problem ( [ eq : gen1 ] )-( [ eq : gen2 ] ) monotonically approaches its steady state value.

proof. the proof follows from applying theorem 3.1 to equation ( [ 220 ] ).

to investigate the generality of the above result, consider the nonautonomous linear damped harmonic oscillator equation as an initial value problem with arbitrary initial conditions and . we are specifically interested in the case where the forcing tends to zero as , as opposed to the often studied case where it is periodic (see e.g. ). the newtonian sphere problem ( [ eq : non - dim1 ] ) is a special case of ( [ eq : sho ] ), with and . making the change of variables, we may simplify the equation to ( [ eq : shov ] ), so that solves the homogeneous equation. note however that if , then is not a solution to ( [ eq : shov ] ) for any . we are interested in the following question: what conditions on the forcing and the initial data are necessary for the solution to remain monotone, even within the regime of instability for the homogeneous equation?
as a first step in this direction we consider the following initial value problem for where and are constants .the motivation for this form is to test the necessity of the singularity at in the monotonicity result of section [ monosection ] . to ensure complex roots , we assume .using variation of parameters one finds a particular solution of ( [ eq : odealg ] ) to be where and are the roots of the characteristic polynomial employing the villat function we may express equation ( [ up ] ) as where the function is defined by in section [ monosection ] we proved approaches 0 monotonically for all .the general solution to ( [ eq : odealg ] ) may be expressed as this equation clearly demonstrates how the long term dynamics of depend on the solution of the homogeneous problem . in particular , it shows that the solution will retain the stability properties of the homogeneous solution unless the coefficients and are chosen to zero out the first two terms in ( [ fsoln ] ) .the unique choice of and for this to happen are moreover , it is clear from ( [ fsoln ] ) that the solution in this case is , with and .thus , in this case , the solution is a translate of the monotone solution .the coefficients and are related to the initial conditions and via the values of and may also be obtained by solving equations ( [ ab ] ) and ( [ a0b0 ] ) . in summary , given , , and , for the equation there exists a unique choice of initial values , such that the solution remains monotone for all .therefore the presence of a singularity at in the nonhomogeneous term is not necessary to obtain a monotone solution . in light of the above analysis it becomes clear how the solution for the sedimenting sphere remains monotone in its approach to terminal velocity for _ all _ relevant values of ( i.e. , sphere densities ) . from equations ( [ eq : non - dim1])-([eq : non - dim2 ] ) we see that for each value of , equation ( [ gensho ] ) describes the dynamics for the dimensionless velocity , with , , and .moreover , for each we have demonstrated that equation ( [ gensho ] ) with , , and , has a unique initial value for which the solution remains monotone , namely , where and denote the roots of the polynomial .however , since lies in the first quadrant , , and the computation together with ( [ kd ] ) , implies in other words , the particular relation between the parameters and decouples from all parameters , so that one obtains a monotone solution for all values of the sphere density .we conclude this section with a geometric interpretation of the monotonicity result .in particular , we focus on the interesting case of ( [ eq : odealg ] ) when the parameter . 
for these parameter valuesthe solution of the homogeneous problem is unstable .we have shown that there is a unique set of initial conditions that defines a solution to the nonhomogeneous problem which approaches the unstable fixed point monotonically , despite the surrounding instabilities .thus we return to the fundamental puzzle posed in section [ asympt ] : how is it that the nonautonomous term in ( [ eq : odealg ] ) , which decays to zero as , can `` stabilize '' a trajectory for all , in the sense that this solution approaches 0 while all other trajectories diverge due to the instability of the linearized problem ?the following observation resolves the puzzle .first , recall that the unique initial conditions for which the nonhomogeneous problem remains monotone are defined by second , note that as the amplitude of the nonhomogeneous term tends to zero , the initial conditions approach .this corresponds to the initial condition starting on the unstable equilibrium point , which is the unique initial condition for the homogeneous problem whose solution does not diverge . in other wordswe have a correspondence between the trajectories of the homogeneous equation and the nonhomogeneous equation , which is continuous with respect to the parameter .the monotone solution is then the image of the unstable fixed point under this map .in this paper we have studied the ode model for a sphere falling through a newtonian fluid .we have proven that the equations do not admit oscillations , even in the transient , in agreement with general experimental observations . from our analysisit appears that the lack of oscillations is due to a delicate balance of terms .it is tempting to conclude that an oscillating motion could be produced with only a slight modification to the equations .however it is important that the solution still remain bounded , and as we have shown there is only one trajectory which is insensitive to the linear instability ( ) of the homogeneous equation .transient oscillations of a falling sphere have been successfully modeled by king & waters using an elastic constitutive model , for which a final steady state velocity is approached . in principle , however , one can not simply modify the differential equation ( [ eq : non - dim1 ] ) or even ( [ eq : maineq ] ) to address the oscillations of a sedimenting sphere in a micellar fluid ; one must return to the full time - dependent partial differential equation .this was indeed how king & waters obtained their result for a linear viscoelastic constitutive model , but it is not clear that this approach will continue to be fruitful as the complexity of the problem increases .self - assembling wormlike micellar solutions are thought to have a nonmonotonic stress / shear rate relation , based on the existence of an apparently inaccessible range of shear rates .it may be that the dynamics of such a nonlinear fluid requires the spatial information inherent in the pdes , and that the ode reduction discussed here is practically limited to linear models .the authors would like to thank j. p. keener , h. a. stone , b. ermentrout , and w. zhang for helpful discussions .a. b. acknowledges the support of the alfred p. 
sloan foundation .the equation describing the transient motion of a falling sphere is where is the velocity of the sphere and is a non - dimensional parameter which depends on the relative densities of the sphere and the fluid .this integro - differential equation can be converted to a second order ode through the following procedure . if then abel s theorem ( see e.g. , ) implies .\label{abthr}\ ] ] multiplying ( [ eq:1 ] ) by integrating , and using ( [ abthr ] ) yields the equation = \int_0^t \frac{1}{\sqrt{t-\tau } } \ , d\tau .\label{eq:2}\end{aligned}\ ] ] from ( [ eq:1 ] ) one observes substituting this into ( [ eq:2 ] ) and rewriting yields the desired second order differential equation is now obtained by differentiating ( [ eq:4 ] ) . in this regard , note that the substitution implies thus where again we have used ( [ eq : n ] ) . therefore differentiating ( [ eq:4 ] ) and using ( [ lst ] ) yields the second order equation .\ ] ] note that from ( [ eq:1 ] ) the initial value of is determined by the initial value of , i.e. , .therefore , the equation describing the transient motion of the sphere is if the sphere starts from rest ( i.e. , ) then the system reduces to which is precisely ( [ eq : non - dim1])-([eq : non - dim2 ] ) . , _error function and fresnel integrals _ , in handbook of mathematical functions with formulas , graphs , and mathematical tables , m. abramowitz and i. a. stegun , eds . ,dover publications , inc . , new york , ny , 1992 ( reprint of the 1972 edition ) , pp .295 - 329 . ,_ shear induced phase transitions in highly dilute aqueous detergent solutions _ , rheol .acta , 21 ( 1982 ) , p. 561 . , _ nonlinear rheology of wormlike micelles _ , phys .lett . , 71 , ( 1993 ) ,939 - 942 .
we study a class of integrodifferential equations and related ordinary differential equations for the initial value problem of a rigid sphere falling through an infinite fluid medium. we prove that for creeping _newtonian_ flow, the motion of the sphere is monotone in its approach to the steady state solution given by the stokes drag. we discuss this property in terms of a general nonautonomous second order differential equation, focusing on a decaying nonautonomous term motivated by the sedimenting sphere problem.
record matching ( or linkage ) , data cleansing and plagiarism detection are among the most frequent operations in many large - scale data processing systems over the web ._ minwise hashing _ ( or minhash ) is a popular technique deployed by big data industries for these tasks .minhash was originally developed for economically estimating the _ resemblance _ similarity between sets ( which can be equivalently viewed as binary vectors ) .later , because of its locality sensitive property , minhash became a widely used hash function for creating hash buckets leading to efficient algorithms for numerous applications including spam detection , collaborative filtering , news personalization , compressing social networks , graph sampling , record linkage , duplicate detection , all pair similarity , etc .binary representations for web documents are common , largely due to the wide adoption of the `` bag of words '' ( bow ) representations for documents and images . in bow representations , the word frequencies within a document follow power law .a significant number of words ( or combinations of words ) occur rarely in a document and most of the higher order shingles in the document occur only once .it is often the case that just the presence or absence information suffices in practice .leading search companies routinely use sparse binary representations in their large data systems .the underlying similarity measure of interest with minhash is the resemblance , also known as the jaccard similarity .the resemblance similarity between two sets , is sets can be equivalently viewed as binary vectors with each component indicating the presence or absence of an attribute .the cardinality ( e.g. , , ) is the number of nonzeros in the binary vector .+ while the resemblance similarity is convenient and useful in numerous applications , there are also many scenarios where the resemblance is not the desirable similarity measure . for instance , consider text descriptions of two restaurants : a. five guys burgers and fries brooklyn new york " b. five kitchen berkley " shingle based representations for strings are common in practice .typical ( first - order ) shingle based representations of these names will be ( i ) \{five , guys , burgers , and , fries , brooklyn , new , york } and ( ii ) \{five , kitchen , berkley}. now suppose the query is five guys " which in shingle representation is \{five , guys}. we would like to match and search the records , for this query five guys " , based on resemblance .observe that the resemblance between query and record ( i ) is = 0.25 , while that with record ( ii ) is = 0.33 .thus , simply based on resemblance , record ( ii ) is a better match for query five guys " than record ( i ) , which should not be correct in this content .clearly the issue here is that the resemblance penalizes the sizes of the sets involved .shorter sets are unnecessarily favored over longer ones , which hurts the performance in record matching and other applications .there are many other scenarios where such penalization is undesirable .for instance , in plagiarism detection , it is typically immaterial whether the text is plagiarized from a big or a small document . 
to counter the often unnecessary penalization of the sizes of the sets with resemblance , a modified measure ,the _ set containment _ ( or jaccard containment ) was adopted .jaccard containment of set and with respect to is defined as in the above example with query five guys " the jaccard containment with respect to query for record ( i ) will be and with respect to record ( ii ) it will be , leading to the desired ordering .it should be noted that for any fixed query , the ordering under jaccard containment with respect to the query , is the same as the ordering with respect to the intersection ( or binary inner product ) .thus , near neighbor search problem with respect to is equivalent to the near neighbor search problem with respect to .formally , we state our problem of interest .we are given a collection containing sets ( or binary vectors ) over universe with ( or binary vectors in ) .given a query , we are interested in the problem of finding such that where is the cardinality of the set .this is the so - called _ maximum inner product search ( mips ) _ problem . for binary data ,the mips problem is equivalent to searching with jaccard containment with respect to the query , because the cardinality of the query does not affect the ordering and hence the . which is also referred to as the _ maximum containment search ( mcs ) _ problem . owing to its practical significance, there have been many existing heuristics for solving the mips ( or mcs ) problem .a notable recent work among them made use of the inverted index based approach .inverted indexes might be suitable for problems when the sizes of documents are small and each record only contains few words .this situation , however , is not commonly observed in practice .the documents over the web are large with huge vocabulary. moreover , the vocabulary blows up very quickly once we start using higher - order shingles .in addition , there is an increasing interest in enriching the text with extra synonyms to make the search more effective and robust to semantic meanings , at the cost of a significant increase of the sizes of the documents . furthermore ,if the query contains many words then the inverted index is not very useful . 
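to make the ranking issue above concrete, the following tiny python sketch (ours) evaluates the jaccard containment with respect to the query for the two restaurant records of the earlier example; since the query is fixed, ranking by containment coincides with ranking by the raw intersection size, and record (i) is now correctly preferred.

# the two shingle sets and the query from the "five guys" example above
record_1 = {"five", "guys", "burgers", "and", "fries", "brooklyn", "new", "york"}
record_2 = {"five", "kitchen", "berkley"}
query = {"five", "guys"}

def containment(q, s):
    # jaccard containment of s with respect to the query q: |q & s| / |q|
    return len(q & s) / len(q)

for name, rec in [("record (i)", record_1), ("record (ii)", record_2)]:
    print(name, "intersection:", len(query & rec), "containment:", containment(query, rec))
# record (i): intersection 2, containment 1.0; record (ii): intersection 1, containment 0.5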
to mitigate this issue several additional heuristicswere proposed , for instance , the heuristic based on minimal infrequent sets .computing minimal infrequent sets is similar to the set cover problem which is hard in general and thus resorted to greedy heuristics .the number of minimal infrequent sets could be huge in general and so these heuristics can be very costly .also , such heuristics require the knowledge of the entire dataset before hand which is usually not practical in a dynamic environment like the web .in addition , inverted index based approaches do not have theoretical guarantees on the query time and their performance is very much dataset dependent .locality sensitive hashing ( lsh ) based randomized techniques are common and successful in industrial practice for efficiently solving nns ( _ near neighbor search _ ) .they are some of the few known techniques that do not suffer from the curse of dimensionality .hashing based indexing schemes provide provably sub - linear algorithms for search which is a boon in this era of big data where even linear search algorithms are impractical due to latency .hashing based indexing schemes are massively parallelizable and can be updated incrementally ( on data streams ) , which makes them ideal for modern distributed systems .the prime focus of this paper will be on efficient hashing based algorithms for binary inner products . despite the interest in jaccard containment and binary inner products, there were no hashing algorithms for these measures for a long time and minwise hashing is still a widely popular heuristic .very recently , it was shown that general inner products for real vectors can be efficiently solved by using asymmetric locality sensitive hashing schemes .the asymmetry is necessary for the general inner products and an impossibility of having a symmetric hash function can be easily shown using elementary arguments .thus , binary inner product ( or set intersection ) being a special case of general inner products also admits provable efficient search algorithms with these asymmetric hash functions which are based on random projections .however , it is known that random projections are suboptimal for retrieval in the sparse binary domain .hence , it is expected that the existing asymmetric locality sensitive hashing schemes for general inner products are likely to be suboptimal for retrieving with sparse high dimensional binary - like datasets , which are common over the web .we investigate hashing based indexing schemes for the problem of near neighbor search with binary inner products and jaccard containment .binary inner products are special .the impossibility of existence of lsh for general inner products shown in does not hold for the binary case . 
on the contrary, we provide an explicit construction of a provable lsh based on sampling , although our immediate investigation reveals that such an existential result is only good in theory and unlikely to be a useful hash function in practice .recent results on hashing algorithms for maximum inner product search have shown the usefulness of asymmetric transformations in constructing provable hash functions for new similarity measures , which were otherwise impossible .going further along this line , we provide a novel ( and still very simple ) asymmetric transformation for binary data , that corrects minhash and removes the undesirable bias of minhash towards the sizes of the sets involved .such an asymmetric correction eventually leads to a provable hashing scheme for binary inner products , which we call _ asymmetric minwise hashing ( mh - alsh _ ) .our theoretical comparisons show that for binary data , which are common over the web , the new hashing scheme is provably more efficient that the recently proposed asymmetric hash functions for general inner products .thus , we obtain a provable algorithmic improvement over the state - of - the - art hashing technique for binary inner products . the construction of our asymmetric transformation for minhash could be of independent interest in itself .the proposed asymmetric minhash significantly outperforms existing hashing schemes , in the tasks of ranking and near neighbor search with jaccard containment as the similarity measure , on four real - world high - dimensional datasets .our final proposed algorithm is simple and only requires very small modifications of the traditional minhash and hence it can be easily adopted in practice .past attempts of finding efficient algorithms , for exact near neighbor search based on space partitioning , often turned out to be a disappointment with the massive dimensionality of modern datasets . due to the curse of dimensionality ,theoretically it is hopeless to obtain an efficient algorithm for exact near neighbor search .approximate versions of near neighbor search problem were proposed to overcome the linear query time bottleneck .one commonly adopted such formulation is the -approximate near neighbor ( -nn ) .( -approximate near neighbor or -nn ) . given a set of points in a -dimensional space , and parameters , , construct a data structure which , given any query point q , does the following with probability : if there exist an -near neighbor of q in p , it reports some -near neighbor .the usual notion of -near neighbor is in terms of distance .since we are dealing with similarities , we define -near neighbor of point as a point with , where is the similarity function of interest .the popular technique , with near optimal guarantees for -nn in many interesting cases , uses the underlying theory of _ locality sensitive hashing _ ( lsh ) .lsh are family of functions , with the property that similar input objects in the domain of these functions have a higher probability of colliding in the range space than non - similar ones .more specifically , consider a family of hash functions mapping to some set .[ def : lsh](locality sensitive hashing ) a family is called sensitive if for any two point and chosen uniformly from satisfies the following : * if then * if then for approximate nearest neighbor search typically , and is needed . note , as we are defining neighbors in terms of similarity . 
to obtain distance analogy we can resort to [ fct ] given a family of -sensitive hash functions, one can construct a data structure for -nn with query time and space , lsh trades off query time with extra preprocessing time and space that can be accomplished off - line .it requires constructing a one time data structure which costs space and further any -approximate near neighbor queries can be answered in time in the worst case .+ a particularly interesting sufficient condition for existence of lsh is the monotonicity of the collision probability in .thus , if a hash function family satisfies , where is any strictly monotonically increasing function , then the conditions of definition [ def : lsh ] are automatically satisfied for all . the quantity is a property of the lsh family , and it is of particular interest because it determines the worst case query complexity of the -approximate near neighbor search. it should be further noted , that the complexity depends on which is the operating threshold and , the approximation ratio we are ready to tolerate . in case when we have two or more lsh families for a given similarity measure , then the lsh family with smaller value of , for given and , is preferred .minwise hashing is the lsh for the _ resemblance _ , also known as the _ jaccard similarity _ , between sets . in this paper , we focus on binary data vectors which can be equivalent viewed as sets . given a set , the minwise hashing family applies a random permutation on and stores only the minimum value after the permutation mapping .formally minwise hashing ( or minhash ) is defined as : given sets and , it can be shown that the probability of collision is the resemblance : where , , and .it follows from eq .( [ eq : minhash ] ) that minwise hashing is -sensitive family of hash function when the similarity function of interest is resemblance . +even though minhash was really meant for retrieval with resemblance similarity , it is nevertheless a popular hashing scheme used for retrieving set containment or intersection for binary data . in practice ,the ordering of inner product and the ordering or resemblance can be different because of the variation in the values of and , and as argued in section [ sec : intro ] , which may be undesirable and lead to suboptimal results .we show later that by exploiting asymmetric transformations we can get away with the undesirable dependency on the number of nonzeros leading to a better hashing scheme for indexing set intersection ( or binary inner products ) . presented a novel lsh family for all ( ] .the hash function is defined as : where is the floor operation .the collision probability under this scheme can be shown to be where is the cumulative density function ( cdf ) of standard normal distribution and is the euclidean distance between the vectors and .this collision probability is a monotonically decreasing function of the distance and hence is an lsh for distances .this scheme is also the part of lsh package . here is a parameter. signed random projections ( srp ) or _ simhash _ is another popular lsh for the cosine similarity measure , which originates from the concept of _ * signed random projections ( srp ) * _ . given a vector , srp utilizes a random vector with each component generated from i.i.d .normal , i.e. , , and only stores the sign of the projection .formally simhash is given by it was shown in the seminal work that collision under srp satisfies the following equation : where .the term is the popular * cosine similarity*. 
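returning to minwise hashing as defined above, the following short python sketch (ours) realizes each random permutation by a seeded pseudo-random ranking of the elements; the fraction of matching hash values over many independent hash functions then estimates the resemblance of eq. ( [ eq : minhash ] ).

# minimal minhash sketch (ours); md5-based scores stand in for random permutations
import hashlib

def rank(seed, element):
    return hashlib.md5(f"{seed}:{element}".encode()).hexdigest()

def minhash(s, seed):
    return min(s, key=lambda e: rank(seed, e))

if __name__ == "__main__":
    x = set(range(0, 60))            # f_x = 60
    y = set(range(40, 100))          # f_y = 60, intersection a = 20, union = 100
    trials = 5000
    hits = sum(minhash(x, s) == minhash(y, s) for s in range(trials))
    print("empirical collision rate:", hits / trials)   # should be close to
    print("resemblance a/(f_x+f_y-a):", 20 / 100)       # the resemblance 0.2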
for sets ( or equivalently binary vectors ) , the cosine similarity reduces to the recent work on _ coding for random projections _ has shown the advantage of srp ( and 2-bit random projections ) over l2lsh for both similarity estimation and near neighbor search .interestingly , another recent work has shown that for binary data ( actually even sparse non - binary data ) , minhash can significantly outperform srp for near neighbor search even as we evaluate both srp and minhash in terms of the cosine similarity ( although minhash is designed for resemblance ) .this motivates us to design asymmetric minhash for achieving better performance in retrieving set containments . butfirst , we provide an overview of asymmetric lsh for general inner products ( not restricted to binary data ) . the term `` alsh '' stands for _ asymmetric lsh _ , as used in a recent work . through an elementary argument, showed that it is not possible to have a locality sensitive hashing ( lsh ) family for general unnormalized inner products . for inner products between vectors and , it is possible to have .thus for any hashing scheme to be a valid lsh , we must have , which is an impossibility .it turns out that there is a simple fix , if we allow asymmetry in the hashing scheme .allowing asymmetry leads to an extended framework of asymmetric locality sensitive hashing ( alsh ) .the idea to is have a different hashing scheme for assigning buckets to the data point in the collection , and an altogether different hashing scheme while querying .+ * definition : * ( * _ asymmetric _ * locality sensitive hashing ( alsh ) ) a family , along with the two vector functions ( * query transformation * ) and ( * preprocessing transformation * ) , is called -sensitive if for a given -nn instance with query , and the hash function chosen uniformly from satisfies the following : * if then * if then here is any point in the collection .asymmetric lsh borrows all theoretical guarantees of the lsh .[ theo : extendedlsh ] given a family of hash function and the associated query and preprocessing transformations and respectively , which is -sensitive , one can construct a data structure for -nn with query time and space , where . showed that using asymmetric transformations , the problem of * maximum inner product search ( mips ) * can be reduced to the problem of approximate near neighbor search in .the algorithm first starts by scaling all by a constant large enough , such that .the proposed alsh family ( * l2-alsh * ) is the lsh family for distance with the preprocessing transformation andthe query transformation defined as follows : \\ \label{eq : l2q}q^{l2}(x ) & = [ x ; 1/2; ... ;1/2;||x||^2_2 ; .... ; ||x||^{2^m}_2],\end{aligned}\ ] ] where [ ; ] is the concatenation . appends scalers of the form followed by 1/2s " at the end of the vector , while first appends `` 1/2s '' to the end of the vector and then scalers of the form .it was shown that this leads to provably efficient algorithm for mips .[ fact : l2-alsh ] for the problem of -approximate mips in a bounded space , one can construct a data structure having + query time and space , where is the solution to constrained optimization ( [ eq : optrho ] ) . here the guarantees depends on the maximum norm of the space . 
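as a quick numerical sanity check of the l2-alsh construction just described (ours, not taken from the referenced work; the values of m and of the scaling bound u are arbitrary choices), the sketch below applies the two asymmetric transformations and verifies that the squared euclidean distance between the transformed vectors equals a constant minus twice the inner product, up to vanishing norm terms, so that an l2 near neighbor search on the transformed data answers the mips query.

# l2-alsh sanity check (ours): p appends the ||x||^{2^i} terms then 1/2s, q appends
# 1/2s then the norm terms, as described above; distances then track -2 * (inner product).
import numpy as np

def p_l2(x, m):
    norms = [np.linalg.norm(x) ** (2 ** i) for i in range(1, m + 1)]
    return np.concatenate([x, norms, 0.5 * np.ones(m)])

def q_l2(x, m):
    norms = [np.linalg.norm(x) ** (2 ** i) for i in range(1, m + 1)]
    return np.concatenate([x, 0.5 * np.ones(m), norms])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, u_bound = 3, 0.83
    data = rng.normal(size=(5, 10))
    q = rng.normal(size=10)
    data *= u_bound / np.max(np.linalg.norm(data, axis=1))   # rescale so all norms <= u_bound < 1
    q *= u_bound / np.linalg.norm(q)
    for x in data:
        d2 = np.linalg.norm(p_l2(x, m) - q_l2(q, m)) ** 2
        # d2 = m/2 - 2*x.q + ||x||^{2^(m+1)} + ||q||^{2^(m+1)}; the last two terms shrink with m
        print(round(x @ q, 4), round(d2, 4), round(m / 2 - 2 * (x @ q), 4))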
+ quickly , it was realized that a very similar idea can convert the mips problem in the problem of maximum cosine similarity search which can be efficiently solve by srp leading to a new and better alsh for mips * sign - alsh * which works as follows : the algorithm again first starts by scaling all by a constant large enough , such that . the proposed alsh family ( * sign - alsh * )is the srp family for cosine similarity with the preprocessing transformation and the query transformation defined as follows : \\ \label{eq : signq}q^{sign}(x ) & = [ x ; 0 ; ... ; 0 ; 1/2 - ||x||^2_2 ; ... ; 1/2 - ||x||^{2^m}_2],\end{aligned}\ ] ] where [ ; ] is the concatenation . appends scalers of the form followed by 0s " at the end of the vector ,while appends `` 0 '' followed by scalers of the form to the end of the vector .it was shown that this leads to provably efficient algorithm for mips . + as demonstrated by the recent work on _ coding for random projections _, there is a significant advantage of srp over l2lsh for near neighbor search .thus , it is not surprising that sign - alsh outperforms l2-alsh for the mips problem .+ similar to l2lsh , the runtime guarantees for sign - alsh can be shown as : [ theo : main ] for the problem of -approximate mips , one can construct a data structure having query time and space , where is the solution to constraint optimization problem ^{2^{-m } } % \nonumber s.t . \ \ \z^ * & = \max_{0 \le z \le \frac{cs_0u^2}{v^2 } } \frac{z}{\sqrt{\frac{m^2}{16 } + \frac{mz^{2^m}}{2 } + z^{2^{m+1}}}}\end{aligned}\ ] ] there is a similar asymmetric transformation which followed by signed random projection leads to another alsh having very similar performance to sign - alsh .the values , which were also very similar to the can be shown as both l2-alsh and sign - alsh work for any general inner products over . for sparse and high - dimensional binary dataset which are common over the web, it is known that minhash is typically the preferred choice of hashing over random projection based hash functions .we show later that the alsh derived from minhash , which we call asymmetric minwise hashing ( _ mh - alsh _ ) , is more suitable for indexing set intersection for sparse binary vectors than the existing alshs for general inner products .in , it was shown that there can not exist any lsh for general unnormalized inner product . the key argument used in the proof was the fact that it is possible to have and with . however, binary inner product ( or set intersection ) is special . for any two binary vectors and we always have .therefore , the argument used to show non - existence of lsh for general inner products does not hold true any more for this special case .in fact , there does exist an lsh for binary inner products ( although it is mainly for theoretical interest ) .we provide an explicit construction in this section . + our proposed lsh construction is based on sampling . simply sampling a random component leads to the popular lsh for hamming distance .the ordering of inner product is different from that of hamming distance .the hamming distance between and query is given by , while we want the collision probability to be monotonic in the inner product . makes it non - monotonic in .note that has no effect on ordering of because it is constant for every query . to construct an lsh monotonic in binary inner product , we need an extra trick . 
given a binary data vector, we sample a random co-ordinate (or attribute). if the value of this co-ordinate is 1 (in other words, if the attribute is present in the set), our hash value is a fixed number; if the randomly sampled co-ordinate has value 0 (or the attribute is absent), then we independently generate a random integer uniformly from a range of n values that also contains the fixed number. formally, given two binary vectors x and q with a common nonzeros in dimension d, we have pr( h(x) = h(q) ) = \left[\frac{n-1}{n}\right]\frac{a}{d} + \frac{1}{n}. the probability that the sampled co-ordinate has value 1 in both vectors is a/d, in which case the two hashes agree with certainty. in every other case at least one of the two hashes is an independently generated random number, and the two agree with probability 1/n. the total probability is therefore a/d + (1 - a/d)(1/n), which simplifies to the desired expression. the family is thus \left(s_0,\; c s_0,\; \left[\frac{n-1}{n}\right]\frac{s_0}{d} + \frac{1}{n},\; \left[\frac{n-1}{n}\right]\frac{c s_0}{d} + \frac{1}{n}\right)-sensitive. the above lsh for binary inner products is likely to be very inefficient for sparse and high dimensional datasets. for those datasets, typically the value of d is very high and the sparsity ensures that a is very small. for modern web datasets, we can have d running into billions or more while the sparsity is only in the few hundreds or perhaps thousands. therefore we have a/d close to zero, which essentially boils down to a collision probability of roughly 1/n. in other words, the hashing scheme becomes worthless in the sparse high dimensional domain. on the other hand, if we observe the collision probability of minhash, eq. ( [ eq : minhash ] ), the denominator is of the order of the sparsity and much less than the dimensionality d for sparse datasets. another way of realizing the problem with the above lsh is to note that it is informative only if a randomly sampled co-ordinate has value equal to 1; for a very sparse dataset, sampling a nonzero coordinate has a vanishing probability, and thus almost all of the hashes will be independent random numbers. in this section, we argue why retrieving inner products based on plain minhash is a reasonable thing to do. later, we will show a provable way to improve it using asymmetric transformations. the number of nonzeros in the query, i.e. f_q, does not change the identity of the argmax in eq. ( [ eq : prob ] ). let us assume that we have data of bounded sparsity and define the constant m as the maximum number of nonzeros (or maximum cardinality of sets) seen in the database. for sparse data seen in practice, m is likely to be small compared to d. outliers, if any, can be handled separately. by observing that , we also have . thus, given the bounded sparsity, if we assume that the number of nonzeros in the query is given, then we can show that minhash is an lsh for inner products, because the collision probability can be upper and lower bounded purely by functions of and . [ theo : minhash ] given bounded sparsity m and a query with f_q nonzeros, minhash is a sensitive family for inner products with . this explains why minhash might be a reasonable hashing approach for retrieving inner products or set intersection. here, if we remove the assumption that f_q is fixed, then in the worst case f_q can be as large as m and we get roughly 2m in the denominator. note that the above is a worst case analysis, and the assumption of a fixed query size is needed to obtain any meaningful bound with minhash. we show the power of alsh in the next section, by providing a better hashing scheme, and we do not even need the assumption of fixing the number of nonzeros in the query.
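the sampling-based construction described at the beginning of this section is easy to simulate. the sketch below (ours; placing the fixed symbol inside the random range, and all constants, are implementation choices) draws the shared random coordinate, emits the fixed symbol when the attribute is present and an independent random integer otherwise, and confirms that the collision rate matches [ (n-1)/n ] (a/d) + 1/n, which for sparse data is dominated by the uninformative 1/n term.

# simulation (ours) of the sampling-based lsh for binary inner products
import random

D, N = 10_000, 50       # dimensionality and size of the random-integer range
FIXED = 0               # fixed hash symbol, taken inside the range {0, ..., N-1}

def hash_pair(x, q, rng):
    i = rng.randrange(D)                          # shared sampled coordinate
    hx = FIXED if i in x else rng.randrange(N)    # independent random value when absent
    hq = FIXED if i in q else rng.randrange(N)
    return hx, hq

if __name__ == "__main__":
    x = set(range(0, 300))       # f_x = 300
    q = set(range(200, 500))     # f_q = 300, overlap a = 100
    a = len(x & q)
    rng = random.Random(0)
    trials = 200_000
    hits = sum(hx == hq for hx, hq in (hash_pair(x, q, rng) for _ in range(trials)))
    print("empirical:", hits / trials)
    print("formula  :", (N - 1) / N * a / D + 1 / N)   # about 0.0298, mostly the 1/N noise floor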
for sparse binary data , which is common in practice ,we later show that the proposed hashing scheme is superior ( both theoretically as well as empirically ) compared to the existing alsh schemes for inner product .we define the new preprocessing and query transformations ^d \rightarrow [ 0,1]^{d+m} ] as : \\ \label{eq : qamin } q'(x)&= [ x;0;0;0; ... ;0],\end{aligned}\ ] ] where [ ; ] is the concatenation to vector . for append 1s and rest zeros , while in we simply append zeros . at this pointwe can already see the power of asymmetric transformations .the original inner product between and is unchanged and its value is . given the query , the new resemblance between and is if we define our new similarity as , which is similar in nature to the containment , then the near neighbors in this new similarity are the same as near neighbors with respect to either set intersection or set containment .thus , we can instead compute near neighbors in which is also the resemblance between and .we can therefore use minhash on and . observe that now we have in the denominator , where is the maximum nonzeros seen in the dataset ( the cardinality of largest set ) , which for very sparse data is likely to be much smaller than .thus , asymmetric minhash is a better scheme than with collision probability roughly for very sparse datasets where we usually have .this is an interesting example where we do have an lsh scheme but an altogether different asymmetric lsh ( alsh ) improves over existing lsh .this is not surprising because asymmetric lsh families are more powerful . from theoretical perspective , to obtain an upper bound on the query and space complexity of -approximate near neighbor with binary inner products , we want the collision probability to be independent of the quantity .this is not difficult to achieve .the asymmetric transformation used to get rid of in the denominator can be reapplied to get rid of .+ formally , we can define ^d \rightarrow [ 0,1]^{d+2m} ] as : where in we append 1s and rest zeros , while in we append zeros , then 1s and rest zeros again the inner product is unaltered , and the new resemblance then becomes which is independent of and is monotonic in .this allows us to achieve a formal upper bound on the complexity of -approximate maximum inner product search with the new asymmetric minhash . from the collision probability expression ,i.e. , eq .( [ eq : collamin ] ) , we have minwise hashing along with query transformation and preprocessing transformation defined by equation [ eq : p ] is a sensitive asymmetric hashing family for set intersection .this leads to an important corollary .there exist an algorithm for -approximate set intersection ( or binary inner products ) , with bounded sparsity , that requires space and , where given query and any point , the collision probability under traditional minhash is .this penalizes sets with high , which in many scenarios not desirable . to balance this negative effect, asymmetric transformation penalizes sets with smaller .note , that ones added in the transformations gives additional chance in proportion to for minhash of not to match with the minhash of .this asymmetric probabilistic correction balances the penalization inherent in minhash .this is a simple way of correcting the probability of collision which could be of independent interest in itself .we will show in our evaluation section , that despite this simplicity such correction leads to significant improvement over plain minhash . 
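A small sketch of this correction, representing binary vectors as sets of nonzero indices. The indices chosen for the appended dummy "ones" and the resulting collision probability a/(M + f_q - a) are reconstructed from the description above rather than copied from the (stripped) equations.

```python
import numpy as np

def minhash(s, perm):
    """MinHash of a set of indices under a random permutation given as an array."""
    return min(perm[i] for i in s)

def preprocess_mh_alsh(x_set, d, M):
    """P': keep the original nonzeros and add (M - |x|) dummy 'one' coordinates,
    indexed d, d+1, ... (our indexing convention); remaining positions stay zero."""
    pad = range(d, d + (M - len(x_set)))
    return set(x_set) | set(pad)

def query_mh_alsh(q_set):
    """Q': only zeros are appended, so the query set is unchanged."""
    return set(q_set)

# toy usage ---------------------------------------------------------------
d, M = 20, 6                      # dimension and max nonzeros seen in the database
x = {1, 4, 7}                     # a database vector, as a set of nonzero indices
q = {4, 7, 9}                     # a query
rng = np.random.default_rng(1)
perm = rng.permutation(d + M)     # one random permutation == one minhash function
hx = minhash(preprocess_mh_alsh(x, d, M), perm)
hq = minhash(query_mh_alsh(q), perm)
# Pr[hx == hq] = |x ∩ q| / |P'(x) ∪ Q'(q)| = a / (M + f_q - a)  ≈  a / M for sparse q
print(hx, hq)
```

Since the dummy coordinates never overlap with the query's support, the intersection stays equal to a while the union is pushed up to roughly M for every data vector, which is exactly the bias correction described above.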
our transformations and always create sets with nonzeros .in case when is big , hashing might take a lot of time .we can use fast consistent weighted sampling for efficient generation of hashes .we can instead use transformations and that makes the data non - binary as follows \\\notag q'''(x ) & = [ x ; 0 ; m - { f_x}]\notag\end{aligned}\ ] ] it is not difficult to see that the weighted jaccard similarity ( or weighted resemblance ) between and for given query and any is therefore , we can use fast consistent weighted sampling for weighted jaccard similarity on and to compute the hash values in time constant per nonzero weights , rather than maximum sparsity . in practice we will need many hashes for which we can utilize the recent line of work that make minhash and weighted minhash significantly much faster .for solving the mips problem in general data types , we already know two asymmetric hashing schemes , _ l2-alsh _ and _ sign - alsh _ , as described in section [ sec : alsh ] . in this section, we provide theoretical comparisons of the two existing alsh methods with the proposed asymmetric minwise hashing ( _ mh - alsh _ ) . as argued , the lsh scheme described in section [ sec : lship ] is unlikely to be useful in practice because of its dependence on ; and hence we safely ignore it for simplicity of the discussion .before we formally compare various asymmetric lsh schemes for maximum inner product search , we argue why asymmetric minhash should be advantageous over traditional minhash for retrieving inner products .let be the binary query vector , and denotes the number of nonzeros in the query .the for asymmetric minhash in terms of and is straightforward from the collision probability eq.([eq : r ] ) : for minhash , we have from theorem [ theo : minhash ] . since is the upper bound on the sparsity and is some value of inner product , we have . using this fact , the following theorem immediately follows for any query q , we have .this result theoretically explains why asymmetric minhash is better for retrieval with binary inner products , compared to plain minhash . 
for comparing asymmetric minhash with alsh for general inner products ,we compare with the alsh for inner products based on signed random projections .note that it was shown that has better theoretical values as compared to l2-alsh .therefore , it suffices to show that asymmetric minhash outperforms signed random projection based alsh .both and can be rewritten in terms of ratio as follows .note that for binary data we have observe that is also the upper bound on any inner product .therefore , we have .we plot the values of and for with .the comparison is summarized in figure [ fig : rho_comp ] .note that here we use derived from instead of for convenience although the two schemes perform essentially identically .we can clearly see that irrespective of the choice of threshold or the approximation ratio , asymmetric minhash outperforms signed random projection based alsh in terms of the theoretical values .this is not surprising , because it is known that minwise hashing based methods are often significantly powerful for binary data compared to srp ( or simhash ) .therefore alsh based on minwise hashing outperforms alsh based on srp as shown by our theoretical comparisons .our proposal thus leads to an algorithmic improvement over state - of - the - art hashing techniques for retrieving binary inner products .in this section , we compare the different hashing schemes on the actual task of retrieving top - ranked elements based on set jaccard containment .the experiments are divided into two parts . in the first part, we show how the ranking based on various hash functions correlate with the ordering of jaccard containment . in the second part, we perform the actual lsh based bucketing experiment for retrieving top - ranked elements and compare the computational saving obtained by various hashing algorithms .we chose four publicly available high dimensional sparse datasets : _ep2006 _ , _ mnist _ , _ news20 _ , and _nytimes_. except mnist , the other three are high dimensional binary bow " representation of the corresponding text corpus .mnist is an image dataset consisting of 784 pixel image of handwritten digits .binarized versions of mnist are commonly used in literature .the pixel values in mnist were binarized to 0 or 1 values . for each of the four datasets , we generate two partitions .the bigger partition was used to create hash tables and is referred as the * training partition*. the small partition which we call the * query partition * is used for querying .the statistics of these datasets are summarized in table [ tab_data ] .the datasets cover a wide spectrum of sparsity and dimensionality .[ tab_data ] we consider the following hash functions for evaluations : 1 .* asymmetric minwise hashing ( proposed ) : * this is our proposal , the asymmetric minhash described in section [ sec : amin ] .* traditional minwise hashing ( minhash ) : * this is the usual minwise hashing , the popular heuristic described in section [ sec : minhash ] .this is a symmetric hash function , we use as define in eq.([eq : min ] ) for both query and the training set .* l2 based asymmetric lsh for inner products ( l2-alsh ) : * this is the asymmetric lsh of for general inner products based on lsh for l2 distance .4 . 
* srp based asymmetric lsh for inner products ( sign - alsh ) : * this is the asymmetric hash function of for general inner products based on srp .we are interested in knowing , how the orderings under different competing hash functions correlate with the ordering of the underlying similarity measure which in this case is the jaccard containment .for this task , given a query vector , we compute the top-100 gold standard elements from the training set based on the jaccard containment . note that this is the same as the top-100 elements based on binary inner products . give a query , we compute different hash codes of the vector and all the vectors in the training set .we then compute the number of times the hash values of a vector in the training set matches ( or collides ) with the hash values of query defined by where is the indicator function . subscript is used to distinguish independent draws of the underlying hash function . based on we rank all elements in the training set .this procedure generates a sorted list for every query for every hash function . for asymmetric hash functions , in computing total collisions , on the query vector we use the corresponding function ( query transformation ) followed by underlying hash function ,while for elements in the training set we use the function ( preprocessing transformation ) followed by the corresponding hash function .we compute the precision and the recall of the top-100 gold standard elements in the ranked list generated by different hash functions . to compute precision and recall , we start at the top of the ranked item list and walk down in order ,suppose we are at the ranked element , we check if this element belongs to the gold standard top-100 list .if it is one of the top 100 gold standard elements , then we increment the count of _ relevant seen _ by 1 , else we move to . by step ,we have already seen elements , so the _ total elements seen _ is .the precision and recall at that point is then computed as : it is important to balance both .methodology which obtains higher precision at a given recall is superior .higher precision indicates higher ranking of the relevant items .we finally average these values of precision and recall over all elements in the query set .the results for are summarized in figure [ fig : hashquality ] .we can clearly see , that the proposed hashing scheme always achieves better , often significantly , precision at any given recall compared to other hash functions .the two alsh schemes are usually always better than traditional minwise hashing .this confirms that fact that ranking based on collisions under minwise hashing can be different from the rankings under jaccard containment or inner products .this is expected , because minwise hashing in addition penalizes the number of nonzeros leading to a ranking very different from the ranking of inner products .sign - alsh usually performs better than l2-lsh , this is in line with the results obtained in .+ it should be noted that ranking experiments only validate the monotonicity of the collision probability .although , better ranking is definitely a very good indicator of good hash function , it does not always mean that we will achieve faster sub - linear lsh algorithm . for bucketing the probability sensitivity around a particular thresholdis the most important factor , see for more details . what matters is the * gap * between the collision probability of good and the bad points . 
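The ranking evaluation just described boils down to two small routines: count hash collisions between the (asymmetrically transformed) query and each training vector, then walk down the resulting ranked list accumulating precision and recall against the gold-standard top-100. A sketch:

```python
import numpy as np

def rank_by_collisions(query_hashes, train_hashes):
    """query_hashes: (K,) array; train_hashes: (n, K) array from the same K
    independent hash functions.  Matches_x = sum_t 1[h_t(Q(q)) == h_t(P(x))]."""
    matches = (train_hashes == query_hashes[None, :]).sum(axis=1)
    return np.argsort(-matches)              # training indices, best first

def precision_recall_curve(ranked_ids, gold_top100):
    """At step k of the ranked list: precision = relevant_seen / k,
    recall = relevant_seen / |gold| (here |gold| = 100), as described above."""
    gold = set(gold_top100)
    rel_seen, prec, rec = 0, [], []
    for k, idx in enumerate(ranked_ids, start=1):
        if idx in gold:
            rel_seen += 1
        prec.append(rel_seen / k)
        rec.append(rel_seen / len(gold))
    return np.array(prec), np.array(rec)
```

The per-query curves are then averaged over the whole query partition, which is how the plots in figure [fig:hashquality] are produced.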
in the next subsection ,we compare these schemes on the actual task of near neighbor retrieval with jaccard containment . in this section ,we evaluate the four hashing schemes on the standard -parameterized bucketing algorithm for sub - linear time retrieval of near neighbors based on jaccard containment . in -parameterized lsh algorithm , we generate different meta - hash functions .each of these meta - hash functions is formed by concatenating different hash values as ,\ ] ] where and , are different independent evaluations of the hash function under consideration. different competing scheme uses its own underlying randomized hash function .+ in general , the -parameterized lsh works in two phases : a. * preprocessing phase : * we construct hash tables from the data by storing element , in the training set , at location in the hash - table .note that for vanilla minhash which is a symmetric hashing scheme .for other asymmetric schemes , we use their corresponding functions .preprocessing is a one time operation , once the hash tables are created they are fixed .b. * query phase : * given a query , we report the union of all the points in the buckets , where the union is over hash tables . again here is the corresponding function of the asymmetric hashing scheme , for minhash . typically , the performance of a bucketing algorithm is sensitive to the choice of parameters and .ideally , to find best and , we need to know the operating threshold and the approximation ratio in advance .unfortunately , the data and the queries are very diverse and therefore for retrieving top - ranked near neighbors there are no common fixed threshold and approximation ratio that work for all the queries .+ our objective is to compare the four hashing schemes and minimize the effect of and , if any , on the evaluations . this is achieved by finding best and at every recall level .we run the bucketing experiment for all combinations of and for all the four hash functions independently .these choices include the recommended optimal combinations at various thresholds .we then compute , for every and , the mean recall of top- pairs and the mean number of points reported , per query , to achieve that recall .the best and at every recall level is chosen independently for different .the plot of the mean fraction of points scanned with respect to the recall of top- gold standard near neighbors , where , is summarized in figure [ fig : topk ] .the performance of a hashing based method varies with the variations in the similarity levels in the datasets .it can be seen that the proposed asymmetric minhash always retrieves much less number of points , and hence requires significantly less computations , compared to other hashing schemes at any recall level on all the four datasets .asymmetric minhash consistently outperforms other hash functions irrespective of the operating point .the plots clearly establish the superiority of the proposed scheme for indexing jaccard containment ( or inner products ) .l2-alsh and sign - alsh perform better than traditional minhash on ep2006 and news20 datasets while they are worse than plain minhash on nytimes and mnist datasets . 
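Returning to the bucketing procedure itself, the (K, L)-parameterized algorithm described at the start of this subsection can be sketched as follows; `hash_fn` stands for any of the four competing hash functions, and the `(table_id, t)` seeding convention is only illustrative.

```python
from collections import defaultdict

class LSHIndex:
    """(K, L)-parameterized bucketing sketch.  Asymmetry is obtained by passing
    P(x) when indexing and Q(q) when querying, exactly as described above."""
    def __init__(self, K, L, hash_fn):
        self.K, self.L, self.hash_fn = K, L, hash_fn
        self.tables = [defaultdict(list) for _ in range(L)]

    def _meta_hash(self, v, table_id):
        # B_j(v) = [h_{j,1}(v); ...; h_{j,K}(v)]  ->  one bucket key per table
        return tuple(self.hash_fn(v, seed=(table_id, t)) for t in range(self.K))

    def index(self, items):                  # preprocessing phase (one-time)
        for item_id, x in items:             # x is already P(x) for asymmetric schemes
            for j in range(self.L):
                self.tables[j][self._meta_hash(x, j)].append(item_id)

    def query(self, q):                      # query phase: union over the L buckets
        out = set()
        for j in range(self.L):
            out.update(self.tables[j].get(self._meta_hash(q, j), []))
        return out
```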
if we look at the statistics of the dataset from table [ tab_data ] , nytimes and mnist are precisely the datasets with less variations in the number of nonzeros and hence minhash performs better .in fact , for mnist dataset with very small variations in the number of nonzeros , the performance of plain minhash is very close to the performance of asymmetric minhash .this is of course expected because there is negligible effect of penalization on the ordering .ep2006 and news20 datasets have huge variations in their number of nonzeros and hence minhash performs very poorly on these datasets .what is exciting is that despite these variations in the nonzeros , asymmetric minhash always outperforms other alsh for general inner products .the difference in the performance of plain minhash and asymmetric minhash clearly establishes the utility of our proposal which is simple and does not require any major modification over traditional minhash implementation .given the fact that minhash is widely popular , we hope that our proposal will be adopted .minwise hashing ( minhash ) is a widely popular indexing scheme in practice for similarity search .minhash is originally designed for estimating set resemblance ( i.e. , normalized size of set intersections ) . in many applicationsthe performance of minhash is severely affected because minhash has a bias towards smaller sets . in this study, we propose asymmetric corrections ( asymmetric minwise hashing , or mh - alsh ) to minwise hashing that remove this often undesirable bias .our corrections lead to a provably superior algorithm for retrieving binary inner products in the literature .rigorous experimental evaluations on the task of retrieving maximum inner products clearly establish that the proposed approach can be significantly advantageous over the existing state - of - the - art hashing schemes in practice , when the desired similarity is the inner product ( or containment ) instead of the resemblance .our proposed method requires only minimal modification of the original minwise hashing algorithm and should be straightforward to implement in practice . + * future work * : one immediate future work would be _ asymmetric consistent weighted sampling _ for hashing weighted intersection : , where and are general real - valued vectors .one proposal of the new asymmetric transformation is the following: , \hspace{0.3 in } q(x ) = [ x ; 0 ; m - \sum_{i=1}^d x_i ] , \end{aligned}\ ] ] where .it is not difficult to show that the weighted jaccard similarity between and is monotonic in as desired . at this point, we can use existing methods for consistent weighted sampling on the new data after asymmetric transformations . + another potentially promising topic for future work might be asymmetric minwise hashing for 3-way ( or higher - order ) similarities y. bachrach , y. finkelstein , r. gilad - bachrach , l. katzir , n. koenigstein , n. nice , and u. paquet . speeding up the xbox recommender system using a euclidean transformation for inner - product spaces . in _ proceedings of the 8th acm conference on recommender systems_ , recsys 14 , 2014 .g. cormode and s. muthukrishnan .space efficient mining of multigraph streams . in _ proceedings of the twenty - fourth acm sigmod - sigact - sigart symposium on principles of database systems _ , pages 271282 .acm , 2005 .a. s. das , m. datar , a. garg , and s. rajaram .google news personalization : scalable online collaborative filtering . 
in _ proceedings of the 16th international conference on world wide web _ , pages 271280 .acm , 2007 .n. koudas , s. sarawagi , and d. srivastava .record linkage : similarity measures and algorithms . in _ proceedings of the 2006 acmsigmod international conference on management of data _ , pages 802803 .acm , 2006 .
minwise hashing ( minhash ) is a widely popular indexing scheme in practice . minhash is designed for estimating set resemblance and is known to be suboptimal in many applications where the desired measure is set overlap ( i.e. , inner product between binary vectors ) or set containment . minhash has inherent bias towards smaller sets , which adversely affects its performance in applications where such a penalization is not desirable . in this paper , we propose asymmetric minwise hashing ( _ mh - alsh _ ) , to provide a solution to this problem . the new scheme utilizes asymmetric transformations to cancel the bias of traditional minhash towards smaller sets , making the final `` collision probability '' monotonic in the inner product . our theoretical comparisons show that for the task of retrieving with binary inner products asymmetric minhash is provably better than traditional minhash and other recently proposed hashing algorithms for general inner products . thus , we obtain an algorithmic improvement over existing approaches in the literature . experimental evaluations on four publicly available high - dimensional datasets validate our claims and the proposed scheme outperforms , often significantly , other hashing algorithms on the task of near neighbor retrieval with set containment . our proposal is simple and easy to implement in practice .
the concept of the digital cavity that has been recently introduced mimics the functionality of an analog cavity .cavities , in general , use the interference effect of the electromagnetic ( em ) fields to generate comb - filters .the number of cycles that interfere determine the line - width of the cavity . the ratio of the line - width to the frequency of the signal is often termed the `` q - factor '' of the cavity . the losses in a cavity due to absorption and scattering limit the number of free oscillations of the em field in the cavity thereby limiting the number of cycles that interfere , and consequently the q - factor of the cavity .the highest q - factor achievable in an analog design is about 10 . the power loss in the cavity relates to the loss in the information .alternatively , one can design a detection system such that the complete information of the em field gets stored before it is lost using an analog to digital converter ( adc ) .this is possible for signals up - to a few tens of giga - hertz ( microwave frequencies ) using adcs on the market .once the signal has been digitized one can use a digital cavity to mimic the interference effect of an analog cavity . in this case , as no information is lost due to the power loss , a digital cavity with extremely narrow line - width can be designed .the algorithms of such filters have very low computational foot - print and can be parallelised , which opens up the possibility of implementing real - time phase sensitive detection ( psd ) schemes for microwave signals with extremely high q - factors .we discuss one of such possibilities in this article , viz .generalization of the lock - in amplifier ( lia ) .the response of a digital cavity is described by the following equation : where is the response of the cavity to the signal and is the digitization interval . according to the above equation ,the digitized signal is sectioned to waveforms of elements and those waveforms are then added to generate the response .this simple algorithm captures the interference effect of the sinusoidal signals , i.e. only the frequencies whose periods match the length of the cavity , , interfere constructively while the other frequencies interfere destructively . for large ,the algorithm generates a comb - pass filter with line - width given by where is the full - width at half maximum of the response of the cavity and is the fundamental resonance frequency of the cavity . according to equ. the fundamental resonance frequency can be tuned by either changing or .for example , if a signal is digitized at 100 ms / s ( mega samples per second ) and is chosen to be 100 then mhz , and if is chosen to be 90 then mhz . in this case .for high resolution signal analysis changing provides a coarse tuning while changing provides a fine tuning of the digital cavity . is defined by the clocking of the adcs .it can be changed continuously over a certain range by using a high precision voltage controlled oscillator .this design of the digital cavity is expected to achieve extremely high q - factors with reasonable memory requirements . 
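The cavity response defined above is simply the sum of P consecutive N-sample sections of the digitized signal; only frequencies whose period fits an integer number of times into the N-sample window (multiples of f0 = fs/N) add up coherently. A minimal sketch, with a toy check of the comb-filter behaviour:

```python
import numpy as np

def digital_cavity(signal, N, P=None):
    """Sum P consecutive N-sample sections of the digitized signal and return
    the N-sample cavity response."""
    if P is None:
        P = len(signal) // N          # use as many complete sections as available
    sections = np.reshape(signal[:P * N], (P, N))
    return sections.sum(axis=0)

# toy check: a tone at f0 = fs/N survives, a slightly detuned tone is suppressed
fs, N, P = 100e6, 100, 10_000         # 100 MS/s, f0 = 1 MHz
t = np.arange(N * P) / fs
resonant = digital_cavity(np.sin(2 * np.pi * (fs / N) * t), N, P)
detuned = digital_cavity(np.sin(2 * np.pi * (1.02 * fs / N) * t), N, P)
print(np.abs(resonant).max() / np.abs(detuned).max())   # ratio >> 1
```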
a drawback of such a design is that normal digitizing cards that usually have a fixed clock can not be used to build a finely tunable digital cavity .however , in applications for which a q - factor of less than suffices one can use digital tuning .digital tuning refers to the concept of shifting the frequency of the signal to the resonant frequency of the digital cavity by multiplying it with an appropriate sine or cosine function .similar technique is also used in the lia where it forms a part of the psd scheme . a common digital lia uses two references , one in - phase and other quadrature : to demodulate the signal .the multiplication of the signal by the reference signal(s ) produces side bands at the sum and difference frequencies , e.g. multiplication by the in - phase reference gives ,\ ] ] where is the in - phase signal after multiplication with the reference .when , the resulting signal contains a dc component , , and an ac component , .a sharp cut - off low - pass filters are used to select the dc components of the in - phase and the quadrature signals from which the phase and the amplitude of the signal are calculated . the reference signal and the low - pass filter play critical role in the lia . the algorithms for generating the in - phase and the quadrature references usually involve computationally costly digital phase - locked - loops ( pll ) , and the same is true for the design of digital low - pass filters with high q - factors . this limits the use of common digital lia for precision measurements of signals to frequencies lower than few hundred mhz .new techniques that avoid using reference signals and use computationally efficient narrow line - width digital filters with parallelisable algorithms for computation are necessary to precisely measure the high frequency signals that exceed the clock frequency of the computer or the digital signal processing ( dsp ) unit .digital cavities fulfill these requirements .when using a digital cavity for high precision signal analysis one uses the line filter at the fundamental resonant frequency .the signal is multiplied by a cosine ( or sine ) function , , to get the input function , , to the cavity : .\ ] ] is chosen such that one of the frequency components of is resonant with the cavity . in practice , it is better to chose such that . in this casethe frequency component at is always non - resonant with the cavity for non - zero .care must be taken when chosing as in this case can also be equal to the multiples of and be resonant with the cavity .the output of the digital cavity is the amplified waveform , , from which the amplitude and phase of the signal can be extracted .the waveform averaging generates the narrow line - pass filter , and it also reduces the white noise in the signal by a factor of . this test , a signal at 245 mhz was generated using the signal generator hp-8656b from hewlett - packard .an 8 bit digitizer ats9840 from alazartech was used to digitize the signal at the rate of 1 gs / s ( giga sample per second ) .a digital cavity was set with , for which the .the digitized signal was frequency up - shifted by multiplying with sine functions with around 5 mhz and filtered with the digital cavity to get the spectral information in the vicinity of mhz .[ fig1]a shows the signal acquired by the digitizer .[ fig1]b shows the spectrum of the signal around 245 mhz .the scanning of the spectrum is done by up - shifting the signal to the resonant frequency of the digital cavity at 250 mhz . 
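A sketch of the up-shift-and-filter procedure used here. The amplitude and phase are recovered by projecting the N-sample cavity response onto a quadrature pair at the resonance f0 = fs/N; the text only states that they are extracted from the response, so this projection step is our choice.

```python
import numpy as np

def cavity(signal, N):
    P = len(signal) // N
    return np.reshape(signal[:P * N], (P, N)).sum(axis=0), P

def generalized_lockin(signal, fs, N, f_signal):
    """Mix the signal up to the cavity resonance f0 = fs/N, apply the digital
    cavity, and read amplitude/phase from the N-sample response."""
    f0 = fs / N
    n = np.arange(len(signal))
    mixed = signal * np.cos(2 * np.pi * (f0 - f_signal) * n / fs)   # up-shift
    resp, P = cavity(mixed, N)
    k = np.arange(N)
    c, s = np.cos(2 * np.pi * k / N), np.sin(2 * np.pi * k / N)
    in_phase = 2 * np.dot(resp, c) / (N * P)
    quadrature = 2 * np.dot(resp, s) / (N * P)
    return np.hypot(in_phase, quadrature), np.arctan2(quadrature, in_phase)

# note: the mixing halves the amplitude (cos a * cos b = [cos(a-b) + cos(a+b)]/2),
# so an input A*cos(2*pi*f_signal*t + phi) is reported with amplitude ~A/2.
```

Scanning a spectrum, as done for the 245 MHz test, amounts to sweeping `f_signal` (equivalently, the mixing frequency) and recording the returned amplitude at each step.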
though the signal generator is set to 245 mhz fig .[ fig1]b shows that the signal measured by the digitizer is at 245.0029 mhz .this discrepancy is due to the slightly different clock speeds of the generator and the digitizer .the response of the digital cavity with up , shown by red and green curves in fig .[ fig1]b , is close to the response of the cavity to an ideal monochromatic signal . however , when is increased to , one observes wobbling of the maxima of the cavity response by few hz .[ fig1]c shows a 2 hz shift of the maxima of the cavity response in the signals that are acquired half a seconds apart .the wobbling of the frequency of the signal is due to the fluctuations in the signal generator caused by temperature drifts , vibrations , etc . to the upshifted signal and ( c )is the comparison of the cavity response with to the signals acquired at 0.5 s time delay . , width=480 ] in this test we recorded the signal from a photodiode using a 14 bit digitizer ats9440 from alazartech .the experimental setup is shown in fig .[ fig2 ] . a continuous - wave he - ne laser ( 25lhp121 - 230 ) from cvi melles griot was used as the light source . in the measurements ,the light from the source is attenuated using a neutral density filter ( nd , od ) and split into two beams using a beam splitter ( bs1 ) .the phase of each of the beam is modulated by using acousto - optic modulators ( aom1,2 ) from gooch and housego ( r35055 - 1-.8 ) .the two beams are then recombined using another beam splitter ( bs2 ) .the recombined beam is then detected by an amplified photo - diode ( pd , pda36a thorlabs ) .the phase modulation by aoms causes the intensity modulation of the combined beam at the difference frequency of the phase modulation .the phase modulation frequency of the aoms is varied in - between 40 - 70 mhz to get the difference frequency of up to 30 mhz .[ fig3]a - b show the photo - diode signals for the light modulated at 50 khz and 4 mhz , respectively .the signals are digitized at the rate of 50 ms / s .the response of a photo - diode depends on the modulation frequency of the light .the amplifier used in the photo - diode typically has a cut - off frequency beyond which the amplification of the photo - diode response falls down rapidly . since 4 mhz lies beyond the cut - off frequency of about 100 khz ( shown in fig .[ fig3]e ) the photo - diode signal is very noisy at these frequencies .we use a digital cavity set to 5 mhz ( and ) , and up - shift the signals recorded by the photo - diode to the resonant frequency of the digital cavity by multiplying it with sine functions having appropriate frequencies , ( e.g. for 50 khz signal and for 4 mhz . 
fig .[ fig3]c - d show the response of the digital cavity to the signal around 50 khz and 4 mhz , respectively .the scanning for the signals around 50 khz and 4 mhz is done by changing .the signal at 50 khz , fig .[ fig3]a has little noise , consequently the response of the digital cavity is close to the ideal noise free signal , while the response of the cavity to 4 mhz signal shows some contribution due to noise around the central frequency .the amplitudes of the response of the digital cavity for photo - diode signals at different modulation frequencies is shown in fig .the inset in fig .[ fig3]e shows the noise of the photo - diode at the different frequencies .the amplitudes of the signals at higher frequencies is negligible compared to the signals at the lower frequencies ( fig .[ fig3]e ) , nevertheless the signals at higher frequencies are easily discernible from the noise ( fig .[ fig3]d ) .as shown in fig .[ fig3]f the noise at the lower frequencies is about 3 orders of magnitude smaller than the signal after the application of digital cavity , while it is about 80 times smaller in the case of the higher frequencies .it is possible to use the generalized lock - in to retrieve signals even at higher frequencies . fig .[ fig4 ] shows an example of the signal retrieved at 30 mhz . fig .[ fig4]a shows a part of the actual signal digitized by the digitizer at the rate of 100 ms / s .the data is predominantly electronic noise .the signal contribution to the digitized data at such high frequency is so low that it can not be isolated from the noise even after applying a digital cavity set to 25 mhz and as shown in fig .the signal at 30 mhz is visible only when we use a digital cavity with very high ( fig .[ fig4]c ) .though we have tested signal recovery from extremely noisy data for signals up to 30 mhz , generalized lock - in can be used for even higher frequencies provided that one uses suitable digitizer and appropriate values of .30 mhz is the highest modulation frequency we can generate in our setup . and ( c )shows the amplitude of the signal around 30 mhz after applying a digital cavity with ., width=480 ] the development of a generalized lia is motivated by the need for precision analysis of high frequency signals .the generalized lia simplifies the algorithms for the line - filters and also eliminates computationally demanding plls . as the algorithms are parallelisable , this technique can be used to analyze frequencies beyond the clock cycles of the dsp processors . with the most advanced commercially available digitizers and on - board processors , the generalized lia could be developed to analyze the signals upto 60 - 70 ghz .a generalized lia does not use an external reference signal , which is an advantage as well as a limitation .the noise in the phase and the amplitude of the reference in an lia add to the noise in the output . this source of noise is eliminated in the generalized lia . as the generalized liais not locked to a reference , it is freely tunable .this is also an advantage because one can apply multiple cavities simultaneouly to the signal to extract different fourier components . moreover , for the applications like the software defined radio or the detection of faint radio / microwave frequencies the external reference is not available so normal lia can not be used . 
in the casewhen the generalized lia is used to measure only one fourier component in the signal , the phase of the fourier component is the phase when the digitization commences , so it is a random number .this is the limitation of not using the external reference .one of the ways to get a meaningful phase information uses a two channel digitizer , one of which digitizes the signal and the other digitizes the reference .the generalized lia is used to extract the phase and amplitude of the both .the phase difference between the signal and the reference then is independent of the starting time of the digitization .this implementation of the generalized lia is suited to analyze the phase and amplitude of the noisy signals in optical non - linear spectroscopy . the generalized lia uses a comb - pass filter .this means it selects not only a single fourier component of the signal at the frequency but all the other components at etc .this is an advantage in certain situations when the harmonics of the signal needs to be analyzed but is a disadvantage when a single fourier component needs to be filtered .the disadvantage , however , can be eliminated in different ways .one of the ways is to use the high - frequency cut - off of the digitizers themselves for signal pre - conditioning such that higher frequencies do not reach the adc .for example , when a digitizer with bandwidth 450 mhz is used for the generalized lia , one can set mhz .the generalized lia uses the computer system s built - in sine and cosine functions , which have finite numerical precision , and thus makes the accuracy of the algorithm system dependent .[ fig5]a shows the response of the digital cavity to sine functions calculated by the computer in the vicinity of 5 mhz for different values of .the response at shows that the signal amplitude is maximum for frequency at 5 mhz , which should be the case but the response at shows that the maximum is shifted slightly by about 1 hz towards lower frequency , which is an artifact .this artifact is related to the accuracy of the floating points in representing the actual numbers as well as the finite accuracy of the sine and cosine functions generated by the computer .note , that this artifact is inherent in all the dsp systems as they use finite accuracy floating point system .our tests show that using an intel(r ) core(tm ) i7 - 3770k cpu , gcc compiler ( version 4.7 ) and 32-bit floating number system , we can achieve an accuracy of about of ppm ( parts per million , q - factor of about ) ( fig . [ fig1]a ) , which is similar to the accuracy of the digital lia found in the market . as shown in fig .[ fig5]b , use of double precision ( 128-bit ) number system improves the accuracy to few tens of ppb ( parts per billion , q - factor of about 10 ) . in this case, we see the artifact due to inaccuracy of the number system and trigonometric function at when the cavity is formed by unsing . here too , the artifact shows as the shift in the maximum response of the cavity to lower frequencies by about 2 mhz as shown in the inset fig .[ fig5]b . 
to our knowledgethe q - factor of that can be achieved with the generalized lock - in is far better than commercially available digital lia .this makes the generalized lia not only attractive for the precision analysis of high frequency signals but for any signals that modern adcs are capable of digitizing .in this work , we present a technique that allows us to generalize the concept of lock - in detection using the digital cavities .we call this technique the generalized lock - in .we have also shown that the generalized lock - in can be used to analyze high frequency signals ( up to hundreds of mhz ) with few tens of ppb resolution , and they can also be used to extract the signals from an extremely noisy data set . as the technique uses little computational footprint and is alsoparallelisable , we speculate that it is suitable for precision analysis of microwave signals upto 60 - 65 ghz that currently available digitizers can digitize .kk thanks werner - gren foundation for the generous post - doctoral fellowship awarded to him . financial support from the knut and alice wallenberg foundation and lund university innovation systemis greatfully acknowledged .13ifxundefined [ 1 ] ifx#1 ifnum [ 1 ] # 1firstoftwo secondoftwo ifx [ 1 ] # 1firstoftwo secondoftwo `` `` # 1'''' [ 0]secondoftwosanitize [ 0 ] + 12$12 & 12#1212_12%12[1][0] * * , ( ) * * , ( ) _ _ , ed .( , , ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( ) * * , ( )
to our knowledge the q - factor that can be achieved with the generalized lock - in is far better than commercially available digital lia . this makes the generalized lia not only attractive for the precision analysis of high frequency signals but for any signals that modern adcs are capable of digitizing . in this work , we present a technique that allows us to generalize the concept of lock - in detection using the digital cavities . we call this technique the generalized lock - in . we have also shown that the generalized lock - in can be used to analyze high frequency signals ( up to hundreds of mhz ) with few tens of ppb resolution , and it can also be used to extract the signals from an extremely noisy data set . as the technique uses little computational footprint and is also parallelisable , we speculate that it is suitable for precision analysis of microwave signals up to 60 - 65 ghz that currently available digitizers can digitize . kk thanks the werner - gren foundation for the generous post - doctoral fellowship awarded to him . financial support from the knut and alice wallenberg foundation and lund university innovation system is gratefully acknowledged .
we herein formulate the concept of a generalized lock - in amplifier for the precision measurement of high frequency signals based on digital cavities . accurate measurement of signals higher than 200 mhz using the generalized lock - in is demonstrated . the technique is compared with a traditional lock - in and its advantages and limitations are discussed . we also briefly point out how the generalized lock - in can be used for precision measurement of giga - hertz signals by using parallel processing of the digitized signals .
during the last decade we have witnessed radical changes in the structure of electricity markets world - wide . for many yearsit was argued convincingly that the electricity industry was a natural monopoly and that strong vertical integration was an obvious and efficient model for the power sector .however , recently it has been recognized that competition in generation services and separation of it from transmission and distribution would be the optimal long - term solution .restructuring has been designed to foster competition and create incentives for efficient investment in generation assets . while the global restructuring process has achieved some significant successes , serious problems some predictable , others not have also arisen .the difficulties that have appeared were partly due to the flaws in regulation and partly to the complexity of the market .when dealing with the power market we have to bear in mind that electricity can not simply be manufactured , transported and delivered at the press of a button . moreover , electricity is non - storable , which causes demand and supply to be balanced on a knife - edge .relatively small changes in load or generation can cause large changes in price and all in a matter of hours , if not minutes . in this respectthere is no other market like it .californians are very well aware of this . in january 2001california s energy market was on the verge of collapse .wholesale electricity prices have soared since summer 2000 , see the top panel of fig .the state s largest utilities were threatening that they would be bankrupted unless they were allowed to raise consumer electricity rates by 30% ; the california power exchange suspended trading and filed for chapter 11 protection with the u.s . bankruptcy court .how could this have happened when deregulation was supposed to increase efficiency and bring down electricity prices ?it turns out that the difficulties that have appeared are intrinsic to the design of the market , in which demand exhibits virtually no price responsiveness and supply faces strict production constraints .another flaw of deregulation was the underestimation of the rising consumption of electricity in california .the soaring prices and san francisco blackouts clearly showed that there is a need for sophisticated tools for the analysis of market structures and modeling of electricity load dynamics . in this paperwe investigate whether electricity loads in the california power market can be modeled by arma models .the analyzed database was provided by the university of california energy institute ( ucei , www.ucei.org ) . among other datait contains system - wide loads supplied by california s independent ( transmission ) system operator .this is a time series containing the load for every hour of the period april 1st , 1998 december 31st , 2000 . due to a very strong daily cyclewe have created a 1006 days long sequence of daily loads .apart from the daily cycle , the time series exhibits weekly and annual seasonality , see the bottom panel of fig . 1 .because common trend and seasonality removal techniques do not work well when the time series is only a few ( and not complete , in our case ca .2.8 annual cycles ) cycles long , we restricted the analysis only to two full years of data , i.e. to the period january 1st , 1999 december 31st , 2000 , and applied a new seasonality reduction technique .the seasonality can be easily observed in the frequency domain by plotting a sample analogue of the spectral density , i.e. 
the periodogram where is the vector of observations , , ] denotes the largest integer less then or equal to . in the top panel of fig .2 we plotted the periodogram for the system - wide load .it shows well - defined peaks at frequencies corresponding to cycles with period 7 and 365 days .the smaller peaks close to and 0.4 indicate periods of 3.5 and 2.33 days , respectively .both peaks are the so called harmonics ( multiples of the 7-day period frequency ) and indicate that the data exhibits a 7-day period but is not sinusoidal .the weekly period was also observed in lagged autocorrelation plots . to remove the weekly cycle we used the moving average technique . for the vector of daily loads the trendwas first estimated by applying a moving average filter specially chosen to eliminate the weekly component and to dampen the noise : where .next , we estimated the seasonal component . for each , the average of the deviations was computed .since these average deviations do not necessarily sum to zero , we estimated the seasonal component as where and for .the deseasonalized ( with respect to the 7-day cycle ) data was then defined as finally we removed the trend from the deseasonalized data by taking logarithmic returns , . after removing the weekly seasonalitywe were left with the annual cycle .unfortunately , because of the short length of the time series ( only two years ) , the method applied to the 7-day cycle could not be used to remove the annual seasonality . to overcome this we applied a new method which consists of the following : ( i ) : : calculate a 25-day rolling volatility for the whole vector ; ( ii ) : : calculate the average volatility for one year, i.e. in our case ( iii ) : : smooth the volatility by taking a 25-day moving average of ; ( iv ) : : finally , rescale the returns by dividing them by the smoothed annual volatility . the obtained time series ( see the top panel of fig .3 ) showed no apparent trend and seasonality ( see the bottom panel of fig .therefore we treated it as a realization of a stationary process .moreover , the dependence structure exhibited only short - range correlations .both , the autocorrelation function ( acf ) and the partial autocorrelation function ( pacf ) rapidly tend to zero ( see the bottom panels of fig .3 ) , which suggests that the deseasonalized load returns can be modeled by an arma - type process .the mean - corrected ( i.e. after removing the sample mean=0.0010658 ) deseasonalized load returns were modeled by arma ( autoregressive moving average ) processes where denote the order of the model and is a sequence of independent , identically distributed variables with mean 0 and variance ( denoted by in the text ) .the maximum likelihood estimators , and of the parameters , and , respectively , were obtained after a preliminary estimation via the hannan - rissanen method using all 730 deseasonalized returns .the parameter estimates and the model size ( , ) were selected to be those that minimize the bias - corrected version of the akaike criterion , i.e. the aicc statistics where denotes the maximum likelihood function and .the optimization procedure led us to the following arma(1,6 ) model ( with ) where and iid .the value of the aicc criterion obtained for this model was aicc=1956.294 . 
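A compact sketch of the two-stage seasonality reduction described above. The exact weights of the 7-day moving-average filter and the edge handling could not be recovered from the text, so a plain centred 7-point average is assumed; steps (i)-(iv) follow the annual-volatility recipe directly.

```python
import numpy as np

def deseasonalize_weekly(x, period=7):
    """Moving-average decomposition: estimate the trend with a centred 7-point
    average (assumed form of the filter), estimate the day-of-week component as
    the mean-corrected average deviation, and subtract it."""
    x = np.asarray(x, dtype=float)
    trend = np.convolve(x, np.ones(period) / period, mode='same')
    dev = x - trend
    w = np.array([dev[k::period].mean() for k in range(period)])
    seasonal = np.resize(w - w.mean(), len(x))
    return x - seasonal

def rescale_annual(returns, window=25, year=365):
    """Steps (i)-(iv) of the annual-seasonality reduction described above."""
    r = np.asarray(returns, dtype=float)
    vol = np.array([r[max(0, t - window):t + 1].std()
                    for t in range(len(r))])                       # (i) rolling volatility
    annual = np.array([vol[k::year].mean() for k in range(year)])  # (ii) one-year average
    annual = np.convolve(annual, np.ones(window) / window, mode='same')  # (iii) smooth
    return r / np.resize(annual, len(r))                           # (iv) rescale

# usage: loads -> weekly deseasonalization -> log returns -> annual rescaling
# d = deseasonalize_weekly(loads); r = np.diff(np.log(d)); r = rescale_annual(r)
```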
in order to check the goodness of fit of the model to the set of data we compared the observed values with the corresponding predicted values obtained from the fitted model .if the fitted model was appropriate , then the residuals where denotes the predicted value of based on and , should behave in a manner that is consistent with the model . in our casethis means that the properties of the residuals should reflect those of an iid noise sequence with mean 0 and variance .the residuals obtained from the arma(1,6 ) model fitted to the mean - corrected deseasonalized load returns are displayed in the top panel of fig .4 . the graph gives no indication of a nonzero mean or nonconstant variance .the sample acf and pacf of the residuals fall between the bounds indicating that there is no correlation in the series , see the bottom panels of fig .4 . recall that for large sample size the sample autocorrelations of an iid sequence with finite variance are approximately iid with distribution .therefore there is no reason to reject the fitted model on the basis of the autocorrelation or partial autocorrelation function .however , we should not rely only on simple visual inspection techniques . for our results to be more statistically sound we performed several standard tests for randomness .the results of the portmanteau , turning point , difference - sign and rank tests are presented in table 1 .short descriptions of all applied tests can be found in the appendix ..test statistics and p - values for the residuals . [ cols="<,<,<",options="header " , ] as we can see from table 1 , if we carry out the tests at commonly used 5% level , the tests do not detect any deviation from the iid behavior .thus there is not sufficient evidence to reject the iid hypothesis .moreover , the order of the minimum aicc autoregressive model for the residuals also suggests the compatibility of the residuals with white noise , see the appendix .therefore we may conclude that the arma(1,6 ) model ( defined by eq .( [ arma ] ) ) fits the mean - corrected deseasonalized load returns very well .in the previous section we showed that the residuals are a realization of an iid(0, ) sequence .but what precisely is their distribution ?the answer to this question is important , because if the noise distribution is known then stronger conclusions can be drawn when a model is fitted to the data .there are simple visual inspection techniques that enable us to check whether it is reasonable to assume that observations from an iid sequence are also gaussian .the most widely used is the so - called normal probability plot , see fig .5 . if the residuals were gaussian then they would form a straight line .obviously they are not gaussian the deviation from the line is apparent .this deviation suggests that the residuals have heavier tails. however , we have to bear in mind that in order to comply with the arma model assumptions the distribution of the residuals must have a finite second moment . 
in the class of heavy - tailed laws with finite variance the hyperbolic distribution seems to be a natural candidate .the hyperbolic law was introduced by barndorff - nielsen for modeling the grain size distribution of windblown sand .it was also found to provide an excellent fit to the distributions of daily returns of stocks from a number of leading german enterprises .the name of the distribution is derived from the fact that its log - density forms a hyperbola .recall that the log - density of the normal distribution is a parabola .hence the hyperbolic distribution provides the possibility of modeling heavy tails .the hyperbolic distribution is defined as a normal variance - mean mixture where the mixing distribution is the inverse gaussian law .more precisely , a random variable has the hyperbolic distribution if its density is of the form where the normalizing constant , , is the modified bessel function with index 1 , the scale parameter , the location parameter and .the latter two parameters and determine the shape , with being responsible for the steepness and for the skewness . given a sample of independent observations all four parameters can be estimated by the maximum likelihood method . in our studieswe used the hyp program to obtain the following estimates the empirical probability density function ( pdf ) to be more precise : a kernel estimator of the density together with the estimated hyperbolic pdf are presented in fig .we can clearly see that , on the semi - logarithmic scale , the tails of the residuals density form straight lines , which justifies our choice of the theoretical distribution .the adjusted kolmogorov statistics , where is the theoretical and is the empirical cummulative distribution function , returns the value .this indicates that there is not sufficient evidence to reject the hypothesis of the hyberbolic distribution of the residuals at the 1% level .for comparison we fitted a gaussian law to the residuals as well . in this casethe adjusted kolmogorov statistics returned causing us to reject the gaussian hypothesis of the residuals at the same level .due to limited monitoring in a power distribution system its loads usually are not known in advance and can only be forecasted based on the available information . in this paperwe showed that it is possible to model deseasonalized loads via arma processes with heavy - tailed hyperbolic noise .this method could be used to forecast loads in a power market .its effectiveness , however , still has to be tested and will be the subject of our further research .the portmanteau test . : : instead of checking to see if each sample autocorrelation falls inside the bounds , where is the sample size , it is possible to consider a single statistic introduced by ljung and box , whose distribution can be approximated by the distribution with degrees of freedom .a large value of suggests that the sample autocorrelations of the observations are too large for the data to be a sample from an iid sequence .therefore we reject the iid hypothesis at level if , where is the quantile of the distribution with degrees of freedom . 
the turning point test .: : if is a sequence of observations , we say that there is a turning point at time ( ) if and or if and .in order to carry out a test of the iid hypothesis ( for large ) we denote the number of turning points by ( is approximately , where and ) and we reject this hypothesis at level if , where is the quantile of the standard normal distribution .the large value of indicates that the series is fluctuating more rapidly than expected for an iid sequence ; a value of much smaller than zero indicates a positive correlation between neighboring observations . the difference - sign test . : : for this test we count the number of values such that , . for an iid sequence and for large , is approximately , where and . a large positive ( or negative )value of indicates the presence of an increasing ( or decreasing ) trend in the data .we therefore reject the assumption of no trend in the data if . the rank test .: : the rank test is particularly useful for detecting a linear trend in the data .we define as the number of pairs such that and , . for an iid sequence and for large , is approximately , where and .a large positive ( negative ) value of indicates the presence of an increasing ( decreasing ) trend in data .the iid hypothesis is therefore rejected at level if . the minimum aicc ar model test .: : a simple test for whiteness of a time series is to fit autoregressive models of orders , for some large , and to record the value of for which the aicc value attains the minimum .compatibility of these observations with white noise is indicated by selection of the value .barndorff - nielsen , exponentially decreasing distributions for the logarithm of particle size , proc .royal soc .london a 353 ( 1977 ) 401 - 419 .r. bjorgan , c .- c .liu , j. lawarree , financial risk management in a competitive electricity market , ieee trans .power systems 14 ( 1999 ) 1285 - 1291 .p. blaesild , m. srensen , hyp a computer program for analyzing data by means of the hyperbolic distribution , research report no .248 , univ .aarhus , dept .. statist .s. borenstein , the trouble with electricity markets ( and some solutions ) , power working paper pwp-081 , ucei ( 2001 ) .brockwell , r.a .davis , introduction to time series and forecasting , springer - verlag , new york , 1996 .e. eberlein , u. keller , hyperbolic distributions in finance , bernoulli 1 ( 1995 ) 281 - 299 .international chamber of commerce , liberalization and privatization of the energy sector , paris , july 1998 . v. kaminski , the challenge of pricing and risk managing electricity derivatives , in `` the us power market '' , risk books , london , 1997 . v. kaminski , ed . , managing energy price risk , 2nd .ed . , risk books , london , 1999 .u. kchler , k. neumann , m. srensen , a. streller , stock returns and hyperbolic distributions , discussion paper no .23 , humbolt university , berlin ( 1994 ) .ljung , g.e.p .box , on a measure of lack of fit in time series models , biometrica 65 ( 1978 ) 297 - 303 .masson , competitive electricity markets around the world : approaches to price risk management , in , 1999 .s. stoft , the market flaw california overlooked , new york times , jan . 2 , 2001 .r. weron , energy price risk management , physica a 285 ( 2000 ) 127 - 134 .r. weron , b. kozowska , j. 
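The three order-based tests can be implemented directly from these descriptions. Since the means and variances were lost in extraction, the standard asymptotic moments (as given, e.g., by Brockwell and Davis) are assumed below; each function returns the test statistic and a two-sided p-value.

```python
import numpy as np
from scipy.stats import norm

def turning_point_test(y):
    y = np.asarray(y); n = len(y)
    T = np.sum(((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])) |
               ((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])))
    mu, var = 2 * (n - 2) / 3, (16 * n - 29) / 90        # assumed asymptotic moments
    z = (T - mu) / np.sqrt(var)
    return T, 2 * (1 - norm.cdf(abs(z)))

def difference_sign_test(y):
    y = np.asarray(y); n = len(y)
    S = np.sum(y[1:] > y[:-1])
    mu, var = (n - 1) / 2, (n + 1) / 12                  # assumed asymptotic moments
    z = (S - mu) / np.sqrt(var)
    return S, 2 * (1 - norm.cdf(abs(z)))

def rank_test(y):
    y = np.asarray(y); n = len(y)
    P = sum(int(np.sum(y[i + 1:] > y[i])) for i in range(n - 1))
    mu, var = n * (n - 1) / 4, n * (n - 1) * (2 * n + 5) / 72   # assumed asymptotic moments
    z = (P - mu) / np.sqrt(var)
    return P, 2 * (1 - norm.cdf(abs(z)))
```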
nowicka - zagrajek , modeling electricity loads in california : a continuous - time approach , arxiv : cond - mat/0103257 , to appear in physica a ( 2001 ) proceedings of the nato arw on application of physics in economic modelling .wolfram , electricity markets : should the rest of the world adopt the uk reforms ? , power working paper pwp-069 , ucei ( 1999 ) .
in this paper we address the issue of modeling electricity loads . after analyzing properties of the deseasonalized loads from the california power market we fit an arma(1,6) model to the data . the obtained residuals seem to be independent but with tails heavier than gaussian . it turns out that the hyperbolic distribution provides an excellent fit . keywords : electricity load , arma model , heavy tails , hyperbolic distribution
prompt detection and reporting of hazards in remote and/or hazardous regions are important applications of wireless sensor networks ( wsn ) . in general , in such applications , event occurrence rate ( eor ) is low and non - deterministic .therefore , most of the time , a sensor node remains in an idle state and most of its limited battery energy is wasted in idle listening . in addition , due to the generally unsafe nature of the monitored region in such applications , a wsn is expected to work after deployment without ( or with minimum ) human intervention for as long as possible .therefore , contention based synchronous mac protocols would be a better choice as the mac protocol in such applications , since these are typically easier to implement than contention free and hybrid macs and provide low e2etd ( end - to - end transmission delay ) compared to asynchronous macs . in sensor mac ( smac ) , ye and heidemann adopt a periodic sleep - wake strategy to reduce the idle listening .for this , smac uses a cycle structure which contains three time windows : synchronization window ( sw ) , data transmission scheduling window ( dw ) , and sleep window ( slpw ) . in sw , each node periodically broadcasts its current sleep - wake schedule to maintain the synchronization .while , in dw , after medium contention with the help of contention window ( cw ) , a node can schedule data packet transmission . in smac , after medium contention , a node can schedule only one hop transmission of its data packets , which results in large e2etd and low packet delivery ratio ( pdr ) in multi - hop scenarios . in the routing enhanced mac ( rmac ) protocol , du et al .utilize cross - layer ( routing layer ) information , so that , after medium contention , a node can schedule multi - hop transmission of its data packets in dw . for this , rmac introduces a new control packet , termed as pioneer ( pion ) .pion contains the address of the sender , the next - hop receiver , the previous hop receiver , and the hop distance of the sender from the source of the current flow . for the next and the previous hop receiver, a pion behaves as rts and cts , respectively . with this , in a multi - hop scenario rmac significantly improve the pdr and e2etd compared to smac .however , in rmac and other similar works , a node can schedule transmission of only one data packet in a cycle , even though the node has multiple data packet to be sent to the same destination .prmac enables a node to schedule transmission of multiple data packets in a cycle . for this, prmac assumes that on the demand of the mac layer , the routing layer can provide information about the number of data packets in the queue for the requested destination , and can send them irrespective of their order in the queue . with this , prmac significantly improves the e2etd and pdr compared to rmac .however , prmac s performance in a multi - hop scenario is restricted because 1 ) a node can transmit pion packet , only when the remaining time of dw ( or time remains in beginning of slpw ) is more than or equal to the , where and are the transmission duration of one pion packet and short inter frame space , respectively , and ; 2 ) it accommodates both request - to - send data and confirmation - to - send data process in the dw . 
in this paper, we propose a cross - layer contention based synchronous mac protocol to reduce the transmission delay for wsns in a multi - hop scenario .our proposed protocol uses a novel cycle structure which improves the e2etd and pdr by increasing the length of the multi - hop flow setup in dw compared to prmac , without increasing the duration of dw by putting the request - to - send data process in dw and the confirmation - to - send data process in slpw . the rest of the paper is organized as follows : section ii presents a brief overview of the prmac protocol . in section iii , we describe the design details of our proposed protocol , including its cycle structure and data transmission process . in section iv, we evaluate our protocol s performance through ns-2.35 simulations based and compare its performance to prmac .finally , we conclude our work in section v.[ htbp ! ] [ fig1 ] for the sake of completeness , we provide here a brief overview of the data transmission process of prmac . as in other existing cross - layer contention based synchronous mac protocols, prmac follows the cycle structure proposed by smac .data transmission process of prmac is shown in fig .1 , assuming that node a has two data packets to send to the two - hop distant sink s , and b is the next - hop receiver of a. in dw , after medium contention , the node a sends a pion packet to forward its data transmission request to its next - hop receiver b. pion transmitted by the node a contains following information : 1 ) a wants to send two data packets to the node b , and ; 2 ) a is the source of the current flow , i.e. , a lies at 0 hop distance from the source of the current flow .the node b transmits a pion to send the data transmission request to the node s , and to inform the node a that it can receive data packets .the pion transmitted by the b contains following information : 1 ) the number of data packets it can receive from the node a ; 2 ) the number of data packets it wants to send to s , and ; 3 ) hop distance of the node b from the source of current flow ( i.e. , the node a ) . after receiving confirmation, the node a goes in sleep .since the remaining time of dw is not sufficient to transmit a pion packet ( i.e. , time remains in the beginning of slpw is less than the ) , b and s also go in sleep . at the beginning of slpw ,a and b wake up to transmit and receive the first data packet , respectively . since, the node b does nt receive confirmation to its request from s , therefore , along with a , it also goes to sleep after transmitting acknowledgement ( ack ) packet .after ( retransmission period ) [ 7 ] duration from the beginning of slpw , the nodes a and b wake up to transmit and receive the second data packet , respectively , and then both go in sleep . in this way , both data packets travel one hop towards s in a cycle .[ htbp ! ] [ fig1 ] the cycle structure of our proposed protocol is shown in fig .cycle duration is divided into two windows : wake up window ( ww ) and sleep window ( slpw ) . in ww ,all the network nodes remains in active ( wake - up ) state .ww is further divided into synchronization window ( sw ) and data transmission scheduling window ( dw ) . as in , in sw, each node periodically broadcasts a sync packet , containing the sender s current sleep - wake schedule , to maintain synchronization . 
in dw , after medium contention with the help of cw , a node requests its next - hop receiver to receive its data packets and to forward the request further , so that , multi - hop transmission of data packets can be scheduled . for datatransmission request , a node transmits rtsd packet . in proposed protocol ,a rtsd contains the following fields : 1 ) sender s address ; 2 ) receiver s address ; 3 ) hop distance of the sender from the source of the current flow ( i.e. , the node who initiates the current flow ) ; 4 ) number of data packets the sender wants to send to its next - hop receiver , and ; 5 ) the address of final destination .unlike ww , in slpw , all the network nodes remain in sleep state .it is further divided into two windows : and . contains n confirmation window ( cfw ) , and one request window ( rqw ) . here, and are the transmission duration of rtsd and duration of dw , respectively , and sifs is short inter frame space . in fig .2 , cfw is represented by .size of rqw and a cfw are equal to the and , respectively , where is the transmission duration of a ctsd packet . in case ,a node receives rtsd containing as the hop distance of the sender from the source of the scheduled flow , it wakes up at the beginning of to transmit the ctsd to its upstream node as the response of the received rtsd .the transmitted ctsd contains the sender s address , receiver s address , and number of data packets the node can receive from its upstream node . on the other hand ,if a node transmits rtsd with as the hop distance of the sender from the source of the scheduled flow then it wakes up at the beginning of the to receive the ctsd from its next - hop receiver . in , corresponding to the setup flow , data packets transmission process occurs .[ htbp ! ] [ fig1 ] in this subsection , we illustrate the data transmission process of our proposed protocol , assuming that node has two data packets to send to the two - hop distant sink s , and is the next - hop receiver of . in its dw , after medium contention , sends a data transmission request to its next - hop receiver by transmitting a rtsd packet , and then goes to sleep .this rtsd contains the following : 1 ) address of the node as the sender ; 2 ) address of node as the receiver ; 3 ) address of node s as the final destination ; 4 ) number of data packets wants to send , and ; 5 ) 0 as the hop distance of from the source of the current flow .( note that is the source of the current flow in fig .3 ) . in case, the node can receive data packet from , sends a data transmission request to the node s , which is its next - hop receiver corresponding to the final destination in received rtsd .as done for the node , after transmitting rtsd goes in sleep .rtsd transmitted by contains following : 1 ) address of node as the sender ; 2 ) address of node s as the next - hop receiver ; 3 ) address of s as the final destination ; 4 ) number of data packets has to send to s , and ; 5 ) 1 as the hop distance of sender from the source of current flow ( i.e. , ) . after receiving rtsd, s goes in sleep . in case, it can receive data packets from , it wakes up at the beginning of , and send a ctsd to the node .the ctsd sent by s contains the number of data packets s can receive from .similarly , at the beginning of , node sends a ctsd to inform the node about the number of data packets it can receive . 
in this way , in proposed protocol , accommodation of request - to - send data ( rtsd ) and confirmation - to - send data ( ctsd ) process in two different time windows results in the increase in the scheduled flow length by one hop compared to prmac , without increasing .nodes and wake up at the beginning of to transmit and receive the first data packet , respectively . after receiving data packet from , forwards the received data packet to the node s. after ( retransmission period ) duration from the beginning of slpw2 , and wake up to transmit and receive second data packet , respectively .then forward the received data packet to the sink s. in this way , in the proposed protocol , data packets travel from source to destination s in one cycle .( for the same example , prmac would require two cycles as in figwe evaluate the performance of our proposed protocol using the ns-2.35 simulator .we compare our proposed protocol with prmac . for performance evaluation , we have randomly deployed sensor nodes in a uniform fashion to monitor an area of , and have placed a sink at the center . for performance analysis , we vary the hop distance between the source and sink . in our simulations , is varied from 1 to 6 hops . for each value of , we randomly choose a sensor node , among the sensor nodes lying at hop distance from the sink , as the source node .this chosen source node generates constant bit rate ( cbr ) traffic with packet arrival interval ( pai ) 1.0s .in addition to this , in our simulations , each node is assumed to have one omnidirectional antenna and uses the two ray ground reflection radio propagation model . as in , we assume that each node is aware of its next - hop receiver as per the shortest path and all the nodes are following the same sleep - wake schedule .frame size parameters are given in table i ; cycle duration related parameters of proposed and prmac protocol are given in table ii and iii , respectively .networking parameters are similar to that of prmac ..frame sizes [ cols= " < , < , < , < " , ] [ fig : subfigureexample ] fig .4 ( a ) , fig .4 ( b ) , and fig .4(c ) show comparison of e2etd , pdr , and average energy consumption ( aec ) of prmac and proposed protocol . in fig .4 ( a ) , fig . 4 ( b ) , and fig .4 ( c ) results are averaged over simulations with different seeds , each lasting for .e2etd is defined as the difference between the reception time of the first data packet at the sink and generation time of the same data packet at the source node . as shown in fig .4 ( a ) , with increasing , the difference between the e2etd of prmac and proposed protocol increases . this occurs because , unlike prmac , the proposed protocol accommodates the request - to - send and confirmation - to - send data processes in two separate time windows .moreover , the proposed protocol allows a node to send a data transmission request even if the remaining time of dw is less than the duration of the rtsd packet .this can increase the scheduled flow length by up to two hops in the same duration dw in comparison to prmac .as a results , the e2etd of the proposed protocol is less than that of prmac . in fig .4 ( a ) , in case of , e2etd of proposed protocol is almost less than the prmac protocol .pdr is defined as the ratio of the total number of packets successfully received at the sink to the total packets generated at the source during simulation time . 
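the delay and delivery metrics defined above can be computed directly from a packet trace. the sketch below is a minimal illustration in python, assuming a hypothetical list of (packet id, generation time, reception time) records rather than the actual ns-2.35 trace format; the record layout and field names are assumptions for illustration, not part of the simulation code.

```python
# Minimal sketch: computing E2ETD and PDR from a hypothetical packet log.
# Each record is (packet_id, t_generated_at_source, t_received_at_sink or None);
# this layout is illustrative only -- a real ns-2.35 trace would need parsing.

def e2etd_and_pdr(records):
    """records: list of (packet_id, t_generated, t_received_or_None), in generation order."""
    generated = len(records)
    delivered = [(pid, gen, rcv) for (pid, gen, rcv) in records if rcv is not None]
    # PDR: packets successfully received at the sink over packets generated at the source.
    pdr = len(delivered) / generated if generated else 0.0
    # E2ETD per the definition in the text: reception time of the first data packet
    # at the sink minus the generation time of that same packet at the source.
    e2etd = (delivered[0][2] - delivered[0][1]) if delivered else float("inf")
    return e2etd, pdr

if __name__ == "__main__":
    demo = [(1, 0.0, 3.2), (2, 1.0, 4.1), (3, 2.0, None)]   # hypothetical values
    delay, ratio = e2etd_and_pdr(demo)
    print(f"E2ETD = {delay:.2f} s, PDR = {ratio:.2f}")
```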
as shown in fig .4 ( b ) , the difference between the pdr of prmac and that of the proposed protocol increases with increasing . this happens because in the same duration dw , a node can schedule more number of hop for forwarding its data packets in the proposed protocol compared to prmac . in fig. 4 ( b ) , in case of , pdr of our proposed protocol is almost more than that of prmac .aec is defined as the ratio of total energy consumed during the simulation time divided by the number of nodes in the network . from fig .4 ( c ) , it is clear that our proposed protocol significantly improves e2etd and pdr at the cost of very small increase in aec . in comparison to prmac , in proposed protocol , aec is increased due to the accommodation of request - to - send data process and confirmation - to - send data process in separate time windows .in this paper , we proposed a new cross - layer contention based synchronous mac protocol for wsns .our proposed protocol uses a novel cycle structure , which accommodates the request - to - send and confirmation - to - send data processes in two separate time windows , to improve both e2etd and pdr .our ns-2.35 simulator based simulation result suggests that the proposed protocol is a better solution than prmac as a mac protocol for wsns deployed for monitoring the low eor delay - sensitive events in remote / hazardous regions .w. ye , j. heidemann , and d. estrin , medium access control with coordinated adaptive sleeping for wireless sensor networks , " _ ieee / acm trans .493 - 506 , june 2004 .s. du , a. saha , and d. johnson , rmac : a routing enhanced duty cycle mac protocol for wireless sensor networks , " _ in proc ._ , 2007 , pp .1478 - 1486 .y. sun , s. du , o. gurewitz , and d. b. johnson , dw - mac : a low latency energy efficient demand - wakeup mac protocol for wireless sensor networks , " _ in proc .mobihoc _ , 2008 , pp .g. liu , and g. yao , srmac : staggered routing - enhanced mac protocol for wireless sensor networks , " _ in proc .wicom _ , 2011 , pp . 1 - 6. k. t. cho , and s. bahk , he - mac : hop extended mac protocol for wireless sensor networks , " _ in proc ., 2009 , pp . 1 - 6. k. t. cho , and s. bahk , optimal hop extended mac protocol for wireless sensor networks , " _ comput .4 , pp . 1458 - 1469 , march 2012 .t. canli and a. khokhar , prmac : pipelined routing enhanced mac protocol for wireless sensor networks , " _ in proc .icc _ , 2009 , pp . 1 - 5 .r. singh , and s. chouhan , a cross - layer mac protocol for contention reduction and pipelined flow optimization in wireless sensor networks , " _ in proc .retis _ , 2015 , pp .. m. s. hefeida , t. canli , and a. khokhar , cl - mac : a cross - layer mac protocol for heterogeneous wireless sensor networks , " _ ad hoc netw .213 - 225 , jan . 2013
recently designed cross-layer contention based synchronous mac protocols for wireless sensor networks (wsns), such as prmac, enable a node to schedule multi-hop transmission of multiple data packets in a cycle. however, these protocols accommodate both the request-to-send data process and the confirmation-to-send data process in the same data transmission scheduling window (i.e., the data window). this shortens the multi-hop flow that can be set up in the data window and, in a multi-hop scenario, degrades both the packet delivery ratio (pdr) and the end-to-end transmission delay (e2etd). in this paper, we propose a cross-layer contention based synchronous mac protocol that accommodates the request-to-send data process in the data window and the confirmation-to-send data process in the sleep window. we evaluate the proposed protocol through ns-2.35 simulations and compare its performance with prmac. results suggest that, in a multi-hop scenario, the proposed protocol outperforms prmac in terms of both e2etd and pdr. end-to-end transmission delay, medium access control (mac) protocols, packet delivery ratio, synchronization, wireless sensor networks (wsns).
a bayesian data analysis specifies joint probability distributions to describe the relationship between the prior information , the model or hypotheses , and the data . using bayes theorem , the posterior distribution is uniquely determined from the conditional probability distribution of the unknowns given the observed data .the posterior probability is usually stated as follows : where is the marginal likelihood .the symbol denotes the assumption of a particular model and the parameter vector . for physical models , the sample space is most often a continuous space. in words , equation ( [ eq : bayes ] ) says : the probability of the model parameters given the data and the model is proportional to the prior probability of the model parameters and the probability of the data given the model parameters .the posterior may be used , for example , to infer the distribution of model parameters or to discriminate between competing hypotheses or models .the latter is particularly valuable given the wide variety of astronomical problems where diverse hypotheses describing heterogeneous physical systems is the norm ( see * ? ? ?* for a thorough discussion of bayesian data analysis ) . for parameter estimation ,one often considers to be an uninteresting normalization constant. however , equation ( [ eq : evidence ] ) clearly admits a meaningful interpretation : it is the support or _ evidence _ for a model given the data .this see this , assume that the prior probability of some model , say , is . then by bayes theorem ,the probability of the model given the data is .the posterior odds of model relative to model is then : if we have information about the ratio of prior odds , , we should use it , but more often than not our lack of knowledge forces a choice of .then , we estimate the relative probability of the models given over their prior odds by the bayes factor ( see * ? ? ?* for a discussion of additional concerns ) .when there is no ambiguity , we will omit the explicit dependence on of the prior distribution , likelihood function , and marginal likelihood for notational convenience .the bayes factor has a number of attractive advantages for model selection : ( 1 ) it is a consistent selector ; that is , the ratio will increasingly favor the true model in the limit of large data ; ( 2 ) bayes factors act as an occam s razor , preferring the simpler model if the `` fits '' are similar ; and ( 3 ) bayes factors do not require the models to be nested in any way ; that is , the models and their parameters need not be equivalent in any limit . there is a catch : direct computation of the marginal likelihood ( eq . [ eq : evidence ] ) is intractable for most problems of interest .however , recent advances in computing technology together with developments in markov chain monte carlo ( mcmc ) algorithms have the promise to compute the posterior distribution for problems that have been previously infeasible owing to dimensionality or complexity .the posterior distribution is central to bayesian inference : it summarizes all of our knowledge about the parameters of our model and is the basis for all subsequent inference and prediction for a given problem .for example , current astronomical datasets are very large , the proposed models may be high - dimensional , and therefore , the posterior sample is expensive to compute . however ,once obtained , the posterior sample may be exploited for a wide variety of tasks . 
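since the marginal likelihood and the bayes factor are central to what follows, a minimal numerical illustration may help. the sketch below evaluates equation ( [ eq : evidence ] ) by brute-force quadrature on a one-dimensional parameter grid for two hypothetical normal-mean models and forms their bayes factor under equal prior odds; the data set, the grid, and the uniform prior are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Minimal sketch: Z = ∫ L(D|θ,M) π(θ|M) dθ by grid quadrature in one dimension,
# and the Bayes factor B12 = Z1/Z2 under equal prior odds.  Data and models are
# illustrative assumptions only.

rng = np.random.default_rng(0)
data = rng.normal(loc=0.5, scale=1.0, size=50)          # hypothetical data set

def log_evidence(sigma, theta=np.linspace(-5.0, 5.0, 4001)):
    # i.i.d. normal likelihood with known sigma and unknown mean theta,
    # uniform prior on [-5, 5]; log-sum-exp keeps the quadrature stable.
    loglike = np.sum(
        -0.5 * ((data[None, :] - theta[:, None]) / sigma) ** 2
        - np.log(sigma * np.sqrt(2.0 * np.pi)), axis=1)
    logprior = -np.log(theta[-1] - theta[0])
    integrand = loglike + logprior
    m = integrand.max()
    dtheta = theta[1] - theta[0]
    return m + np.log(np.sum(np.exp(integrand - m)) * dtheta)

logZ1 = log_evidence(sigma=1.0)        # model 1: correct error scale
logZ2 = log_evidence(sigma=2.0)        # model 2: inflated error scale
print("log Z1 =", logZ1, "  log Z2 =", logZ2)
print("Bayes factor B12 =", np.exp(logZ1 - logZ2))      # > 1 favours model 1
```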
although dimension - switching algorithms , such as reversible - jump mcmc incorporate model selection automatically without need for bayes factors , these simulations appear slow to converge for some of our real - world applications .moreover , the marginal likelihood may be used for an endless variety of tests , ex post facto . presented a formula for estimating from a posterior distribution of parameters .they noted that a mcmc simulation of the posterior selects values of distributed as and , therefore , or {p({\boldsymbol{\theta}}|{{\bf d } } ) } , \label{eq : hmean}\ ] ] having suppressed the explicit dependence on for notational clarity .this latter equation says that the marginal likelihood is the harmonic mean of the likelihood with respect to the posterior distribution .it follows that the harmonic mean computed from a sampled posterior distribution is an estimator for the marginal likelihood , e.g. : ^{-1}. \label{eq : hsamp}\ ] ] unfortunately , this estimator is prone to domination by a few outlying terms with abnormally small values of ( e.g. see * ? ? ?* and references therein ) . describes convergence criteria for equation ( [ eq : hsamp ] ) and present augmented approaches with error estimates .alternative approaches to computing the marginal likelihood from the posterior distribution have been described at length by . of these, the laplace approximation , which approximates the posterior distribution by a multidimensional gaussian distribution and uses this approximation to compute equation ( [ eq : evidence ] ) directly , is the most widely used .this seems to be favored over equation ( [ eq : harmonic ] ) because of the problem with outliers and hence because of convergence and stability . in many cases , however , the laplace approximation is far from adequate in two ways .first , one must identify all the dominant modes , and second , modes may not be well - represented by a multidimensional gaussian distribution for problems of practical interest , although many promising improvements have been suggested ( e.g. * ? ? ? * ) . explored the use of the savage - dickey density ratio for cosmological model selection ( see also * ? ? ?* for a full review of the model selection problem for cosmology ) . finally , we may consider evaluation of equation ( [ eq : evidence ] ) directly .the mcmc simulation samples the posterior distribution by design , and therefore , can be used to construct volume elements in -dimensional parameter space , , e.g. when .although the volume will be sparsely sampled in regions of relatively low likelihood , these same volumes will make little contribution to equation ( [ eq : evidence ] ) . the often - used approach from computational geometry , delaney triangulation , maximizes the minimum angle of the facets and thereby yields the `` roundest '' volumes .unfortunately , the standard procedure scales as for a sample of points .this can be reduced to using the flip algorithm with iterative construction but this scaling is prohibitive for large and typical of many problems .rather , in this paper , we consider the less optimal but tractable kd - tree for space partitioning . in part ,the difficulty in computing the marginal likelihood from the sampled posterior has recently led ( * ? ? 
?* nesting sampling ) to suggest an algorithm to simulate the marginal likelihood rather than the posterior distribution .this idea has been adopted and extended by cosmological modelers .the core idea of nesting sampling follows by rewriting equation ( [ eq : zdef0 ] ) as a double integral and swapping the order of integration , e.g. the nested sampler is a monte carlo sampler for the likelihood function with respect to the prior distrbution so that .the generalization of the construction in equation ( [ eq : nest ] ) for general distributions and multiple dimensions is the lebesgue integral ( see [ sec : intgr ] ) .clearly , this procedure has no problems with outliers with small values of .of course , any algorithm implementing nested sampling must still thoroughly sample the multidimensional posterior distribution and so retains all of the intendant difficulties that mcmc has been designed to solve . in many ways ,the derivation of the nested sampler bears a strong resemblance to the derivation of the harmonic mean but without any obvious numerical difficulty .this led me to a careful study of equations ( [ eq : evidence ] ) and ( [ eq : zdef0 ] ) to see if the divergence for small value of likelihood could be addressed .indeed they can , and the following sections describe two algorithms based on each of these equations . these new algorithms retain the main advantage of the harmonic mean approximation ( hma ) : direct incorporation of the sampled posterior without any assumption of a functional form . in this sense , they are fully and automatically adaptive to any degree multimodality given a sufficiently large sample .we begin in [ sec : intgr ] with a background discussion of lebesgue integration applied to probability distributions and monte carlo ( mc ) estimates .we apply this in [ sec : evid ] to the marginal likelihood computation .this development both illuminates the arbitrariness in the hma from the numerical standpoint and leads to an improved approach outlined in [ sec : algo ] . in short ,the proposed approach is motivated by methods of numerical quadrature rather than sample statistics .examples in [ sec : examples ] compare the application of the new algorithms to the hma and the laplace approximation .the overall results are discussed and summarized in [ sec : summary ] .using riemann and lebesgue integration . for riemann integration , we sum thin vertical rectangles of height about the abscissa point for some width . for lebesgue integration , we sum thin horizontal rectangles of width about the ordinate point of height . in both cases , we sum the area under the curve . in the lebesgue case , we must add in the rectangle of width and height .,scaledwidth=50.0% ] assume that we have a mc generated sample of random variates with density .recall that any moment or expectation with respect to this density may be computed as a single sum , independent of the parameter - space dimension !this powerful and seemingly innocuous property follows from the power of lebesgue integration ( e.g. * ? ? ?* ) . to see this ,let us begin by considering a one - dimensional integral where is non - negative , finite and bounded , that is : for ] with measure as follows .we assume that is measurable over and , following , define the measure function . 
clearly , is monotonic with and .let be a partition of ] .an average of the sums in equation ( [ eq : lun ] ) gives us a trapezoidal rule analog : (y_{i } - y_{i-1 } ) .\label{eq : tn}\ ] ] further generalization is supported by the lebesgue theory of differentiation .a central result of the theory is that a continuous , monotonic function in some interval is differentiable almost everywhere in the interval .this applies to the measure function .this result may be intuitively motivated in the context of our marginal likelihood calculation as follows .our measure function describes the amount of density with likelihood smaller than some particular value .a typical likelihood surface for a physical model is smooth , continuous , and typically consists of several discrete modes .consider constructing by increasing the value of beginning at the point of maximum likelihood peak .since , this is equivalent to beginning at and decreasing . recall that decreases from 1 to 0 as increases from to .therefore , we construct from by finding the level set corresponding to some value of and subtracting off the area of the likelihood surface constructed from the perimeter of the set times .the function will decrease smoothly from unity at until reaches the peak of the second mode . at this point, there may be a discontinuity in the derivative of as another piece of perimeter joins the level set , but it will be smooth thereafter until the peak of the third mode is reached , and so on . since we expect the contribution to to be dominated by a few primary modes , this suggests that we can evaluate the integral numerically using the quadrature implied by equation ( [ eq : lebesgue2 ] ) and possibly even higher - order quadrature rules .these arguments further suggest that partitioning into separate domains supporting individual modes would improve the numerics by removing discontinuities in the derivative of and explicitly permitting the use of higher - order quadrature rules. this partition may be difficult to construct automatically , however . to better control the truncation error for this quadrature , we might like to choose a uniform partition in , , to evaluate the sums and . for mc integration ,this is not possible .rather , mc selects with irregular spacings and this induces a partition of .motivated by kernel density estimation , we may then approximate by where monotonically increases from 0 to 1 in the vicinity of .for example , we may choose to be the heaviside function which assigns a `` step '' to the upper value of the range in for each .alternatively , we may consider smoothing functions such as \label{eq : nummeas3}\ ] ] where denotes the error function .then , upon substituting equation ( [ eq : nummeas ] ) into equation ( [ eq : lebesgue2 ] ) for , we get : where and by construction and the final equality follows by gathering common factors of in the earlier summation . 
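before specializing to probability distributions, a small numerical sketch of the lower, upper, and trapezoid-rule sums may make the construction concrete. the integrand, interval, and ordinate partition below are illustrative choices; the closed-form measure function is available only because the example is deliberately simple.

```python
import numpy as np
from math import erf, pi, sqrt

# Minimal sketch of the Lebesgue ("layer-cake") quadrature for a 1-D integral,
#   I = ∫_a^b f(x) dx = ∫_0^{max f} M(y) dy,  M(y) = |{x in [a,b] : f(x) > y}|,
# using lower, upper and trapezoid-rule sums over a partition of the ordinate.

a, b = -3.0, 3.0                                   # f(x) = exp(-x^2) on [a, b]
fmin = np.exp(-max(a * a, b * b))                  # smallest value of f on the interval

def measure(y):
    # closed form for this example: M(y) = 2*sqrt(-ln y) on (fmin, 1), b-a below fmin
    y = np.asarray(y, dtype=float)
    safe = np.clip(y, fmin, 1.0)                   # keep log() away from zero
    return np.where(y >= 1.0, 0.0,
           np.where(y <= fmin, b - a, 2.0 * np.sqrt(-np.log(safe))))

y = np.linspace(0.0, 1.0, 2001)                    # partition of the ordinate axis
dy = np.diff(y)
M = measure(y)
lower = np.sum(M[1:] * dy)                         # M decreases, so M(y_i) gives the lower sum
upper = np.sum(M[:-1] * dy)                        # ... and M(y_{i-1}) the upper sum
trap = 0.5 * (lower + upper)                       # trapezoid-rule analogue

exact = sqrt(pi) * erf(b)                          # the familiar Riemann answer
print(f"lower={lower:.6f}  trapezoid={trap:.6f}  upper={upper:.6f}  exact={exact:.6f}")
```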
for integration over probability distributions, we desire distributed according some probability density , = \int_{\mathbb{r } } f(x ) g(x)\,dx , \label{eq : probint}\ ] ] which yields with the normalization , and therefore , ] and may be trivially computed from a mcmc - generated posterior distribution .using our finite mc - sampled distributed as the posterior probability , , and converting the integral to a sum , we have the following simple estimate for : } \equiv \frac1n \sum_{j=1}^n \mathbf{1}_{\{y_j > y_i\ } } , \qquad m_i^{[u ] } \equiv \frac1n \sum_{j=1}^n \mathbf{1}_{\{y_j\ge y_i\ } } , \qquad m_i \equiv \frac{m_i^{[l ] } + m_i^{[u]}}{2 } , \label{eq : measurex3}\ ] ] where we have defined the left and right end points from equation ( [ eq : nummeas2 ] ) and the mean separately so that } \le m_i \le m_i^{[u]} ] requires that decreases faster than for some . for and large , increases as . for , decreases at least as fast .figure [ fig : gauss_ex1 ] shows as a function of and suggests that is sufficient to obtain convergence for practical values of .qualitatively , a prior distribution that limits from above ( or , equivalently , from below ) will prevent the divergence .the integral is shown as a function of for various values of the ratio .for an uninformative prior distribution and and diverges with increasing . for an informative prior distribution and convergencesquickly with .,scaledwidth=60.0% ] similar asymptotic expressions for may be derived for multivariate normal distributions . for simplicity , assume that data is distributed identically in each of dimensions and , for ease of evaluation , take .then where is the upper incomplete gamma function and is the complete gamma function . using the standard asymptotic formula for in the limit of large , one finds that ^{k/2 - 1 } \left(\frac{y}{y_0}\right)^{-1-b } \qquad \mbox{for\ } y\gg k/2 .\label{eq : mulimygasymp}\ ] ] this expression reduces to equation ( [ eq : myg ] ) when , but more importantly , this shows that the tail magnitude of increases with dimension . 
figure [ fig : gauss_ex2 ] illustrates this for various values of and .the divergence of in the limit shows that normally distributed likelihood function with an uninformative prior is divergent .moreover , figures [ fig : gauss_ex1 ] and [ fig : gauss_ex2 ] further demonstrate that weakly informative prior with very small is likely to be numerically divergent even if formally converges .intuitively , the cause is clear : if the markov chain never samples the wings of the prior distribution that still make significant contribution to , then will increase with sample size .analytically , the failure is caused by the measure decreasing too slowly as increases ( as described at the beginning of [ sec : evid ] ) .empirically , the convergence of the integral may be tested by examining the run of for increasing .overall , [ sec : evid ] highlights the marginal likelihood as a numerical quadrature .we have considered the path of approximations leading from the quadrature to standard expression for the hma .we have also considered the intrinsic requirements on the prior distribution so that the meausure is convergent .this development suggests that there are two sources of error in evaluating equation ( [ eq : zdef ] ) using equations ( [ eq : evidencex ] ) and ( [ eq : numerick ] ) .the first is a truncation error of order .the second is a bias error resulting from specifying the computational grid by a monte carlo procedure .we will consider each in turn .thinking again in terms of numerical quadrature , we can trust the sum in equation ( [ eq : numerick ] ) when adjacent values of are close .the error in will be dominated by the extremely small values of that lead to .specifically , the error for such a term will be , and a small number of such terms can dominate the error for .numerical analysis is based on functional approximation of smooth , continuous , and differentiable functions .this truncation error estimate assumes that is such a function .although not true everywhere , we have argued in [ sec : intgr ] ) that this assumption should be valid over a countable number of intervals in practice .in addition , we expect the sample to be strongly clustered about the value of the posterior mode . by the central limit theorem for a likelihood dominated posterior , the distribution of tends toward a normal distribution. therefore , larger will yield more extremal values of and _ increase _ the divergence of .eventually , for proper prior distributions and sufficiently large , the smallest possible value of will be realized as long as ( see eq . [ eq : zdef ] and following discussion ) .further sampling will reduce the largest intervals , and this will lead to the decrease of large , and finally , convergence . the second source of error is closely related to the first . 
after eliminating the divergent samples with , the sampled domain will be a subset of the originally defined domain .that is , the mcmc sample will not cover all possible values of the parameter vector .this implies that the numerical quadrature of equation ( [ eq : zdef ] ) will yield .identification of these error sources immediately suggests solutions .note that this observation does not change the problem definition in some new way , but rather , allows us to exploit the mcmc - chosen domain to eliminate the divergence for small described in [ sec : gauss_ex ] .first , we may decrease the truncation error in by ordering the posterior sample by increasing values of and truncating the sequence at the point where for some choice .next , we need a consistent evaluation for . we may use the sampled posterior distribution itself to estimate the sampled volume in .this may be done straightforwardly using a space partitioning structure .a computationally efficient structure is a binary space partition ( bsp ) tree , which divides a region of parameter space into two subregions at each node .the most easily implemented tree of this type for arbitrary dimension is the kd - tree ( short for k - dimensional tree ) .the computational complexity for building the tree from the sampled points in parameter space scales as using the quicksort algorithm at each successive level ( this may be improved , see * ? ? ?each leaf has zero volume .each non - leaf node has the minimum volume enclosing the points in the node by coordinate planes . assigning the volume containing a fixed number of leaves ( e.g. or ) , and some representative value of the prior probability in each node ( such as a -quantile or mean value ) , one may immediately sum product of each volume and value to yield an estimate of . for modest values , we will almost certainly find that .since the mcmc chain provides the values of and , we may use the same tree to evaluate both and over the sampled volume .the example in [ sec : gauss_ex ] suggests that evaluation of may stymied by poor convergence unless the prior distribution is restrictive .therefore , if the value of is divergent or very slowly convergent , the evaluation of using will fail whether or not we use the improved truncation criteria .direct evaluation of the is free from this divergence and remains an practical option in this case .the advantage of a direct evaluation is clear : the converged markov chain samples the domain proportional to the integrand of equation ( [ eq : evidence ] ) , and therefore , we expect for large sample size by construction .we propose a hybrid of cubature and monte carlo integration .the bsp tree provides a _ tiling _ of multidimensional volume by using the posterior distribution to define volume elements , .we use a -quantile ( such as the median ) or mean value of the posterior probability or the prior probability to assign a probability value to each volume element .an approximation to the integrals and follow from summing the field values over the volume elements , analogous to a multidimensional riemann rule .although for a _ infinite _ posterior sample ,there are several sources of error in practice .first , the variance in the tessellated parameter - space volume will increase with increasing volume and decreasing posterior probability .this variance may be estimated by bootstrap .secondly , the truncation error of the cubature increases with the number of points per element . 
as usual , there is a variance bias trade off choosing the resolution of the tiling : the bias of the probability value estimate increases and the variance decreases as the number of sample points per volume element increases .the prior probability value will be slowing varying over the posterior sample for a typical likelihood - dominated posterior distribution , so the bias will be small .this suggests that larger numbers of points per cell will be better for the evaluation of and a smaller number will be better for .some practical examples suggest that the resulting estimates are not strongly sensitive to the number of points per cell ( or 32 appears to be a good compromise ) .almost certainly , there will be a bias toward larger volume and therefore larger values of and this bias will increase with dimension most likely . to summarize ,we have described two approaches for numerically computing from a mcmc posterior simulation .the first evaluates of the integral by numerical lebesgue integration , and the second evaluates directly by a parameter space partition obtained from the sampled posterior distribution .the first is closely related to the hma .it applies ideas of numerical analysis the integral that defines the hma .the second is more closely related to the laplace approximation .in some sense , laplace approximation is an integral of a parametric fit to the posterior distribution .the tree integration described above is , in essence , an integral of a non - parametric fit to the posterior distribution .the advantage of the first method its amenability to analysis .the disadvantage is the limitation on convergence as illustrated in [ sec : gauss_ex ] .the advantage of the second method is its guaranteed convergence .the disadvantage is its clear , intrinsic bias and variance .the variance could be decreased , presumably , using high - dimensional voronoi triangulation but not without dramatic computational cost .the hma is treated as an expectation value in the literature .one of the failures pointed out by and others is that the hma is particularly awful when the sample is a single datum . in the context of the numerical arguments here ,this is no surprise : one can not accurately evaluate a quadrature with a single point ! even for larger samples ,the hma is formally unstable in the limit of a thin - tailed likelihood function owing to the divergence of the variance of the hma . address this failure of the statistic directly , proposing methods for stabilizing the harmonic mean estimator by reducing the parameter space to yield heavier - tailed densities .this is a good alternative to the analysis presented here when such stabilization is feasible .as previously mentioned , presents conditions on the posterior distribution for the consistency of the hma .intuitively , there is a duality with the current approach . 
trimming the lebesgue quadrature sum so that the interval is equivalent to lopping off the poorly sampled tail of the posterior distribution .this truncation will be one - sided in the estimate of the marginal likelihood since it removes some of the sample space .however , this may be compensated by an appropriate estimation of .the exposition and development in [ sec : evid ] identifies the culprits in the failure of the hma : ( 1 ) truncation error in the evaluation of the measure ; and ( 2 ) the erroneous assumption that when in practice .we now present two new algorithms , the _ numerical lebesgue algorithm _ ( nla ) and the _ volume tessellation algorithm _ ( vta ) , that implement the strategies described in [ sec : newevid ] to diagnose and mitigate this error .nla computes and vta computes and , optionally , directly from equation ( [ eq : evidence ] ) . in the following sections , we assume that .we begin with a converged mcmc sample from the posterior distribution .after the initial sort in the values of , the nla computes the difference with to find the first value of satisfying .the algorithm then computes the for using equation ( [ eq : measurex3 ] ) . for completeness , we compute the using both the restriction and to obtain lower and upper estimate for .then , these may be combined with and from [ sec : intgr ] to riemann - like upper and lower bounds on .see the listing below for details . excepting the sort , the work required to implement this algorithm is only slightly harder than the hma .the vta uses a kd - tree to partition the samples from posterior distribution into a spatial regions .these tree algorithms split on planes perpendicular to one of the coordinate system axes .the implementation described here uses the median value along one of axes ( a _ balanced _ kd - tree ) .this differs from general bsp trees , in which arbitrary splitting planes can be used .there are , no doubt , better choices for space partitioning such as voronoi tessellation as previously discussed , but the kd - tree is fast , easy to implement , and published libraries for arbitrary dimensionality are available .traditionally , every node of a kd - tree , from the root to the leaves , stores a point . in the implementation used here , the points are stored in leaf nodes only , although each splitting plane still goes through one of the points .this choice facilitates the computation of the volume spanned by the points for each node as follows .let be the number of parameter - space points } , n=1,\ldots,{\bar m}_j ] denote the field quantities at each point } ] seem to be good choices for many applications .each node in the frontier is assigned a representative value .i use -quantiles with for tests here .the resulting estimate of the integrals and/or follow from summing the product of the frontier volumes with their values .both the nla and the vta begin with a sort of the likelihood sequence and this scales as . 
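a minimal python sketch of the nla, as i read the listing and the surrounding discussion, follows: sort the sampled log-likelihoods, trim the poorly sampled low-likelihood tail, evaluate the restricted harmonic-mean (lebesgue) integral, and combine it with an estimate of the prior mass over the sampled subdomain (for instance from the vta sketch given after the next passage). the trimming rule and its threshold are stand-ins for the partly garbled criterion in the text, and the plain harmonic-mean estimator is included only for comparison.

```python
import numpy as np

def nla_log_evidence(log_like, log_prior_volume, alpha=10.0):
    """
    Sketch of the numerical Lebesgue algorithm (NLA) as read from the listing.
    Restricted to the well-sampled subset Omega_s, the harmonic-mean identity gives
        ∫_{Omega_s} L^{-1} P(θ|D) dθ = (1/Z) ∫_{Omega_s} π(θ) dθ,
    so  log Z = log_prior_volume - log ∫_{Omega_s} L^{-1} dP(θ|D).
    `log_prior_volume` is log ∫_{Omega_s} π dθ (e.g. from a VTA-style estimate).
    The trimming rule (drop samples below the last gap in log-likelihood larger
    than `alpha`) is an illustrative stand-in for the threshold in the text.
    """
    y = np.sort(np.asarray(log_like, dtype=float))      # ascending log-likelihoods
    n_total = y.size
    gaps = np.diff(y)
    big = np.flatnonzero(gaps > alpha)
    keep = y[big[-1] + 1:] if big.size else y           # retain the well-sampled upper part
    # Monte Carlo / Lebesgue estimate of ∫_{Omega_s} L^{-1} dP, weight 1/n_total per sample
    t = -keep
    m = t.max()
    log_inv = m + np.log(np.sum(np.exp(t - m))) - np.log(n_total)
    return log_prior_volume - log_inv

def hma_log_evidence(log_like):
    """Plain harmonic-mean approximation (log domain), shown only for comparison."""
    t = -np.asarray(log_like, dtype=float)
    m = t.max()
    return -(m + np.log(np.mean(np.exp(t - m))))
```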
in the nla ,the computation of the followed by the computation of is .the sequence is useful also for diagnosis as we will explore in the next section .however , in many cases , we do not need the individual but only need the differential value to compute , which contains a single term .the values of likelihood may range over many orders of magnitude .owing to the finite mantissa , the differential value be necessary to achieve adequate precision for large , and the nla may be modified accordingly .the algorithm computes the lower , upper , and trapezoid - rule sums ( eqns .[ eq : lun][eq : tn ] ) for the final integral . for large posterior samples , e.g. ,the differences between and are small .indeed , a more useful error estimate may be obtained by a random partitioning and subsampling of the original sequence to estimate the distribution of ( see the examples in [ sec : examples ] ) . in practice ,computing the marginal likelihood from a posterior sample with takes 0.2 cpu seconds on a single 2ghz opteron processor .although nla could be easily parallelized over processors to reduce the total runtime by seems unnecessary .the kd - tree construction in vta scales as followed by a tree walk to sum over differential node volumes to obtain the final integral estimates that scales as .this scaling was confirmed empirically using the multidimensional example described in [ sec : highd ] with dimension ] .computing the marginal likelihood from a posterior sample with and takes 4.4 cpu seconds on a single 2ghz opteron processor , and , therefore , the computation is unlikely to be an analysis bottleneck , even when resampling to produce a variance estimate .the leading coefficient appears to vary quite weakly the distribution , although there may be undiscovered pathological cases that significantly degrade performance .the required value of increases with parameter dimension ; is barely sufficient for in tests below .subsampling recommends the use of even larger chains to mitigate dependence of the samples .therefore , the first practical hardware limitation is likely to be sufficient ram to keep the data in core .likelihood values from the simulated posterior distribution sort the sequence so that and/or save the values save the estimated marginal likelihood , and algorithmic error likelihood values from the simulated posterior distribution change variables sort the sequence so that create an empty point set add } , { \bf f}^{[n]}) ] among the points save the estimated value of the integrals buildkd( ) a set posterior points and values } , { \bf f}^{[n]})\in{\cal p} ] ( e.g. by quicksort ) split into two subsets by the hyperplane defined by : .to estimate the marginal likelihood using the methods of the previous section , we may use either the nla to estimate and the vta to estimate or use the vta alone to estimate .examples below explore the performance of these strategies .the mcmc posterior simulations are all computed using the umass bayesian inference engine ( bie , * ? ? ?* ) , a general - purpose parallel software platform for bayesian computation .all examples except that in [ sec : highd ] simulate the posterior distribution using the parallel tempering scheme with and 20 temperature levels .convergence was assessed using the subsampling algorithm described in , a generalization of the test . 
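a compact python sketch of the vta follows: a balanced median-split kd-tree tiles the sampled region, each frontier cell is assigned a p-quantile of the field values of its points, and the products of cell value and cell volume are summed. the bucket size, the quantile, and the use of each cell's point bounding box as its volume are illustrative simplifications of the construction described above.

```python
import numpy as np

def kd_leaves(points, bucket=32):
    """Balanced median-split kd-tree; returns a list of (index array, cell volume) leaves."""
    points = np.asarray(points, dtype=float)
    n, k = points.shape

    def recurse(ids, depth):
        if ids.size <= bucket:
            box = points[ids]
            vol = float(np.prod(box.max(axis=0) - box.min(axis=0)))  # bounding-box stand-in
            return [(ids, vol)]
        axis = depth % k                                # cycle through coordinate axes
        order = ids[np.argsort(points[ids, axis])]
        half = order.size // 2                          # split at the median point
        return recurse(order[:half], depth + 1) + recurse(order[half:], depth + 1)

    return recurse(np.arange(n), 0)

def vta_log_integral(points, log_field, bucket=32, quantile=0.5):
    """
    Sketch of the volume tessellation algorithm (VTA): sum a representative field
    value times the cell volume over the kd-tree frontier.  Passing log π gives an
    estimate of log ∫_{Omega_s} π dθ; passing log(L·π) gives log Z directly.
    """
    log_field = np.asarray(log_field, dtype=float)
    terms = []
    for ids, vol in kd_leaves(points, bucket=bucket):
        if vol <= 0.0:                                  # degenerate cells contribute nothing
            continue
        rep = np.quantile(log_field[ids], quantile)     # p-quantile as representative value
        terms.append(rep + np.log(vol))
    terms = np.asarray(terms)
    m = terms.max()
    return m + np.log(np.sum(np.exp(terms - m)))

# usage (names are illustrative): log_Z_direct = vta_log_integral(chain, log_like + log_prior)
# or log_Z = nla_log_evidence(log_like, vta_log_integral(chain, log_prior))
```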
fora simple initial example , let us compute the marginal likelihood for a data sample of 100 points modelled by with prior distribution for .the marginal likelihood can be computed analytically from for this simple example .the final 200,000 converged states of the mcmc - generated chain were retained .application of the nla for and the vta for gives a value of ( 95% confidence interval ) , close to but systematically smaller than the analytic result : .a value of seems appropriate from numerical considerations , although experiments suggest that the algorithm is not sensitive to this choice as long as is not so small to decimate the sample or so large that error - prone outliers are included .it is prudent to check a range of to determine the appropriate value of each problem .the vta yields , consistent with the analytic result .the bias in the first estimate appears to be caused by an overestimate of produced by the vta .this might be improved by a space partition whose cells have smaller surface to volumes ratios ( [ sec : highd ] for a graphical example ) .the bias is much less pronounced in the direct estimate of by the vta owing to smallness of the posterior probability in the extrema of the sample .these extrema result in anomalously small value of for the hma .figure [ fig : mk ] illustrates the details of the nla applied to this computation .panel ( a ) plots from equation ( [ eq : measurex3 ] ) .the run of with rises rapidly near the posterior mode and drops rapidly to zero for small likelihood values .the inset in this figure shows in linear scale .the measure function , and hence the integral , is dominated by large values of as long as decreases sufficiently fast ( see [ sec : gauss_ex ] ) .panel ( b ) plots the accumulating sum defining the quadrature of in equations ( [ eq : lun])([eq : tn ] ) , beginning with the largest values of likelihood first .the contribution to is dominated at the likelihood peak , corresponding to the steeply rising region of in panel ( a ) . in other words , the samples with small values of that degrade the hmamake a negligible contribution to the marginal likelihood computation as long as .in addition , nla provides upper and lower bounds , and thereby some warning when the estimate is poorly conditioned , e.g. owing to an inappropriate choice for . the plot in figure [ fig: mk]b will readily reveal such failures .a more realistic error assessment can be obtained by subsampling the sequence .the cpu time for these algorithms is sufficiently small that this procedure should be practical in general .consider the following experiment : ( 1 ) the posterior is simulated by mcmc ( as described above ) to obtain a chain of 400,000 states ; ( 2 ) the first half of the chain is discarded ; ( 3 ) the second - half is randomly subsampled with replacement to obtain 128 samples of 10,000 states ; ( 4 ) the marginal likelihood for each is computed using the nla , vta , the laplace approximation and the hma ( approximately 2 cpu minute in total ) . 
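the subsampling experiment can be scripted directly. the sketch below assumes the estimator sketches given earlier in this section ( hma_log_evidence , nla_log_evidence , vta_log_integral ) and a converged chain stored as an (n, k) array; these names and shapes are assumptions for illustration and are not bie api calls.

```python
import numpy as np

def subsample_study(chain, log_like, log_prior, n_rep=128, size=10_000, seed=1):
    """
    Discard the first half of a converged chain, resample the second half with
    replacement, and recompute the marginal-likelihood estimators on each
    subsample, mirroring the experiment described in the text.  `chain` is an
    (N, k) array; log_like and log_prior are length-N arrays.
    """
    rng = np.random.default_rng(seed)
    half = chain.shape[0] // 2
    chain, log_like, log_prior = chain[half:], log_like[half:], log_prior[half:]
    out = {"hma": [], "nla+vta": [], "vta": []}
    for _ in range(n_rep):
        pick = rng.integers(0, chain.shape[0], size=size)
        ll, lp, th = log_like[pick], log_prior[pick], chain[pick]
        out["hma"].append(hma_log_evidence(ll))
        log_pvol = vta_log_integral(th, lp)               # estimate of ∫_{Omega_s} π dθ
        out["nla+vta"].append(nla_log_evidence(ll, log_pvol))
        out["vta"].append(vta_log_integral(th, ll + lp))  # direct estimate of Z
    return {name: (float(np.mean(v)), float(np.std(v))) for name, v in out.items()}
```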
for all butthe hma , increasing the number of states decreases the variance for each distribution ; samples with 10,000 states best revealed the differences between the algorithms with a single scale .figure [ fig : subsample ] illustrates the relative performance with different prior distributions .figure [ fig : subsample]a is the model described at the beginning of this section ; the range of the prior distribution is much larger than the values sampled from the posterior distribution .the prior distributions for each successive panel have smaller ranges as indicated .the colors are composited over yields the new value . ] with ( e.g. hma over vta is brown , hma over nla is purple , laplace over hma is blue - grey , laplace over vta is blue - green ) . in panel ( d ) , the range is within the range of values sampled by the posterior in panel ( a ) .the overall trends are as follows : 1 ) the hma has unacceptably large variance unless the domain of the prior roughly coincides with the domain sampled by the mcmc algorithm ; 2 ) the vta and laplace approximation have the smallest variances , followed by hma ; 3 ) the nla is consistently biased below the analytic value ; and 4 ) the vta and laplace approximation are closed to the expected analytic value .indeed , the laplace approximation is an ideal match to and should do well for this simple unimodal model . in the final panel ,there are no outlier values of and the harmonic mean approximation is comparable to the others . these tests also demonstrate that the same outliers that wreck the hma have much less affect on nla and vta .further experimentation reveals that the results are very insensitive to the threshold value .in fact , one needs an absurdly large value of , , to produce failure . here , we test these algorithms on the radiata pine compressive strength data analyzed by han and carlin ( 2001 ) and a number of previous authors .we use the data tabled by han and carlin from williams ( 1959 ) .these data describe the maximum compressive strength parallel to the grain , the density , and the resin - adjusted density for specimens .carlin and chib ( 1995 ) use these data to compare the two linear regression models : with , and .we follow han and carlin ( 2001 ) and carlin and chib ( 1995 ) , adopting priors on and , and ^{-1}\right) ] is chosen to achieve 95% confidence intervals approximately 1% of or smaller .the 95% confidence intervals on are indicated as sub- and super - scripts .recall that the standard vta determines volume spanned samples and approximates the integral by multiplying the volume by the median value of the sample . to assess the variance inherent in this choice , i quote the results for two other p - quantiles , and .finally , for each algorithm , the table presents the relative error : .both the nla and vta results are very encouraging : the relative error is within a few percent for . for ,i computed with samples sizes of 400,000 states .both the nla and vta tend to slightly overestimate for large .the laplace approximation results are disappointing for small and improve for large , but still are less precise than either the nla or vta . 
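for reference, the laplace approximation used in these comparisons can be formed from the posterior sample alone by treating the highest-posterior sample as the mode and the sample covariance as a surrogate for the inverse hessian (a common sample-based variant); this is an assumption about the procedure, not necessarily the implementation behind the numbers quoted above.

```python
import numpy as np

def laplace_log_evidence(chain, log_like, log_prior):
    """
    Sample-based Laplace approximation (one common variant):
        log Z ≈ log L(θ*) + log π(θ*) + (k/2) log 2π + 0.5 log |Σ|,
    with θ* the highest-posterior sample and Σ the sample covariance used as a
    surrogate for the inverse Hessian at the mode.
    """
    chain = np.atleast_2d(np.asarray(chain, dtype=float))
    if chain.shape[0] == 1:                  # allow a 1-D chain for a one-parameter model
        chain = chain.T
    log_post = np.asarray(log_like, dtype=float) + np.asarray(log_prior, dtype=float)
    best = int(np.argmax(log_post))
    k = chain.shape[1]
    cov = np.cov(chain, rowvar=False).reshape(k, k)
    _sign, logdet = np.linalg.slogdet(cov)
    return log_post[best] + 0.5 * k * np.log(2.0 * np.pi) + 0.5 * logdet
```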
., scaledwidth=70.0% ] figure [ fig : k2d ] illustrates the kd - tree construction for a single sample .each two - dimensional cell is colored by the median value of the posterior probability for the points in each cell and scaled to the peak value of posterior probability for the entire sample .a careful by - eye examination of the cell shape reveals a preponderance of large axis - ratio rectangles ; this is a well - known artifact of the kd - tree algorithm . for large values of , the volume elements are small , and with a sufficiently large sample , the gradient in across the volume are small . for small values of ,the volume elements are large , the gradients are large , and the large - axis ratio rectangles distort the reconstructed shape of the true posterior .however , as described in [ sec : newevid ] , the values of for an infinite sample , so a small number of distorted rectangles will not compromise the end result .moreover , the values of at large volumes are smaller than those at small volume for these tests , and this further decreases the importance of the kd - tree cell - shape artifact .but for a sample from the cauchy distribution from fig . .the box shows the quantiles and median , the whisker shows the ( 10% , 90% ) intervals , followed by outlying points .the three distributions are ( 1 ) the hma ; ( 2 ) nla for and vta for ; and ( 3 ) vta for both and .,scaledwidth=50.0% ] as an example of model selection , we first compute the marginal likelihood for the same data as in the first example of [ sec : test ] but assuming a cauchy - lorentz distribution , ^{-1},\ ] ] as the model with unknown location and scale parameters . for prior distributions , we take and where is the weibull distribution with scale parameter and shape parameter .nla yields , vta yields and the hma yields .the data and fits are shown in figure [ fig : cauchy]a .there should be no surprise that the true model ( with )is strongly preferred .let us now repeat the experiment using 100 data points selected from the cauchy - lorentz distribution ( ) and compare the marginal likelihood values for a cauchy - lorentz distribution and a mixture of two normal distributions ( ) .nla and vta , respectively , yield and . the hma yields and .regardless of the algorithm performing the test , the bayes factor reveals strong evidence in favor of the true model .note from figure [ fig : cauchy]b that both models are reasonable fits `` by eye '' .however , the bayes factor overwhelmingly _ prefers _ the simpler ( in this case , true ) model . as expected , the distribution of for the heavy - tailed cauchy distribution is much better behaved ( see fig .[ fig : subsamplec ] ) .the results for nla and vta are consistent and the hma is systematically larger , but non enough to misguide a decision .in summary , much of the general measure - theoretic underpinning of probability and statistics naturally leads naturally to the evaluation of expectation values .for example , the harmonic mean approximation ( hma , * ? ? ?* ) for the marginal likelihood has large variance and is slow to converge ( e.g. * ? ? ?on the other hand , the use of analytic density functions for the likelihood and prior permits us to take advantage of less general but possibly more powerful computational techniques . 
in [ sec : intro][sec : algo ] we diagnose the numerical origin of the insufficiencies of the hma using lebesgue integrals .there are two culprits : 1 ) the integral on the left - hand side of equation ( [ eq : zdef0 ] ) may diverge if the measure function from equation ( [ eq : measurex ] ) decreases too slowly ; and 2 ) truncation error may dominate the quadrature of the left - hand side of equation ( [ eq : zdef0 ] ) unless the sample is appropriately truncated . using numerical quadrature for the marginal likelihood integral ( eqns . [eq : evidence ] and [ eq : measurex ] ) leads to improved algorithms : the _ numerical lebesgue algorithm _ ( nla ) and the _ volume tessellation algorithm _ ( vta ) .our proposed algorithms are a bit more difficult to implement and have higher computational complexity than the simple hma , but the overall cpu time is rather modest compared to the computational investment required to produce the mcmc - sampled posterior distribution itself . for a sample of size , the sorting required by nla and vta has computational complexity of and , respectively , rather than for the harmonic mean . nonetheless , the computational time is a fraction of second to minutes for typical values of ( see [ sec : algo ] ) .the geometric picture behind nla is exactly that for lebesgue integration .consider integrating a function over a two - dimensional domain .in standard riemann quadrature , one chops the domain into rectangles and adds up their area .the sum can be refined by subdividing the rectangles ; in the limit of infinitesimal area , the resulting sum is the desired integral . in the lebesgue approach ,one draws horizontal slices through the surface and adds up the area of the horizontal rectangles formed from the width of the slice and the vertical distance between slices .the sum can be refined by making the slices thinner when needed ; in the limit of slices of infinitesimal height , the resulting sum is the desired integral . in the riemann case ,we multiply the box area in the domain , , by the function height , . in the lebesgue, we multiply the slice height in the range , , by the domain area , ( see fig .[ fig : geom ] ) .both algorithms easily generalize to higher dimension . for the lebesgue integral , the slices become level sets on the hypersurface implied by the integrand. 
therefore the lebesgue approach always looks one - dimensional in the level - set value ; the dimensionality is ` hidden ' in the area of domain ( _ hypervolume _ for ) computed by the measure function .the level - set value for the nla is .once determined , nla applies the trapezoidal rule to the sum over slices and compute the upper and lower rectangle sums as bounds .clearly , the error control on this algorithm might be improved by using more of the information about the run of with .having realized that the practical failure of the harmonic mean approximation is a consequence of the sparsely sampled parameter - space domain , nla addresses the problem by determining a well - sampled subset from the mcmc sample , ex post facto .restricted to this subset , , the value of the integral on the right - hand side of equation ( [ eq : zdef0 ] ) is less than unity .we determine by a binary space partitioning ( bsp ) tree and compute from this partition .a bsp tree recursively partitions a the k - dimensional parameter space into convex subspaces .the vta is implemented with a kd - tree for simplicity .in addition , one may use vta by itself to compute equation ( [ eq : evidence ] ) directly. judged by bias and variance , the test examples do not suggest a strong preference for either the nla or the vta .however , both are clearly better than the hma or the laplace approximation .conversely , because these algorithms exploit the additional structure implied by smooth , well - behaved likelihood and prior distribution functions , the algorithms developed here will be inaccurately and possibly fail miserably for _ wild _ density functions .the nla and the vta are not completely independent since the nla uses the tessellation from the vta to estimate the integral .however , the value of the integral tends to dominate , that is , and the contributions are easily checked .based on current results , i tentatively recommend relying preferentially on vta for the following reasons : 1 ) there is no intrinsic divergence ; 2 ) it appears to do as well as vta even in a high - dimensional space ; and 3 ) there is no truncation threshold .figure [ fig : k2d ] illustrates the potential for volume artifacts that could lead to both bias and variance .this error source affects both the vta and nla ( through the computation of ) but the affect on the nla may be larger ( [ sec : test ] ) .additional real - world testing , especially on high - dimensional multimodal posteriors , will provide more insight .in test problems described in this paper , i explored the effects of varying the threshold and the kd - tree bucket size .these parameters interact the sample distribution , and therefore , are likely to vary for each problem .i also recommend implementing both the nla , vta , htm , laplace approximation and comparing the four for each problem .we are currently testing these algorithms for astronomical inference problems too complex for a simple example ; the results will be reported in future papers .an implementation of these algorithms will be provided in the next release of the umass bayesian inference engine ( bie , * ? ? ?there are several natural algorithmic extensions and improvements not explored here . [ sec : intgr ] describes a smoothed approximation to the computation of ( eqns .[ eq : nummeas][eq : itilden ] ) rather than the step function used in [ sec : algo ] .the direct integration of equation ( [ eq : evidence ] ) currently ignores the location of field values in each cell volume . 
At the expense of CPU time, the accuracy might be improved by fitting the sampled points with low-order multinomials and using the fits to derive a cubature algorithm for each cell. In addition, a more sophisticated tree structure may decrease the potential for bias by providing a tessellation with "rounder" cells. In conclusion, the marginal likelihood may be reliably computed from a Monte Carlo posterior sample through careful attention to the numerics. We have demonstrated that the error in the HMA is due to samples with very low likelihood values but significant prior probability. It follows that their posterior probability is also very low, and these states tend to be outliers. On the other hand, the converged posterior sample is a good representation of the posterior probability density by construction. The proposed algorithms define the subdomain dominated by and well-sampled by the posterior distribution and perform the integrals in equation ([eq:zdef0]) over rather than . Although more testing is needed, these new algorithms promise more reliable estimates for from an MCMC-simulated posterior distribution with more general models than previous algorithms can deliver. I thank Michael Lavine for thoughtful discussions and both Neal Katz and Michael Lavine for comments on the original manuscript. It is also a pleasure to acknowledge the thoughtful and helpful comments of two anonymous referees and the associate editor of the journal. This work was supported in part by the NSF IIS program through award 0611948 and by the NASA AISR program through award NNG06GF25G.
Computation of the marginal likelihood from a simulated posterior distribution is central to Bayesian model selection but is computationally difficult. The often-used harmonic mean approximation uses the posterior directly but is unstable: it is acutely sensitive to samples with anomalously small values of the likelihood. The Laplace approximation is stable but makes strong, and often inappropriate, assumptions about the shape of the posterior distribution. It is useful, but not general. We need algorithms that apply to general distributions, like the harmonic mean approximation, but do not suffer from its convergence and instability issues. Here, I argue that the marginal likelihood can be reliably computed from a posterior sample by careful attention to the numerics of the probability integral. Posing the expression for the marginal likelihood as a Lebesgue integral, we may convert the harmonic mean approximation from a sample statistic to a quadrature rule. As a quadrature, the harmonic mean approximation suffers from enormous truncation error as a consequence of poor coverage of the sample space; the posterior sample required for accurate computation of the marginal likelihood is much larger than that required to characterize the posterior distribution when using the harmonic mean approximation. In addition, I demonstrate that the integral expression for the harmonic mean approximation converges slowly at best for high-dimensional problems with uninformative prior distributions. These observations lead to two computationally modest families of quadrature algorithms that retain the full generality of the sampled posterior but without the instability. The first algorithm automatically eliminates the part of the sample that contributes large truncation error. The second algorithm uses the posterior sample to assign probability to a partition of the sample space and performs the marginal likelihood integral directly; this eliminates the convergence issues. The first algorithm is analogous to standard quadrature but can only be applied to convergent problems. The second is a hybrid of cubature: it uses the posterior to discover and tessellate the subset of the sample space that was explored and uses quantiles to compute a representative field value. Qualitatively, the first algorithm improves the harmonic mean approximation using numerical analysis, and the second algorithm is an adaptive version of the Laplace approximation. Neither algorithm makes strong assumptions about the shape of the posterior distribution and neither is sensitive to outliers. Based on numerical tests, we recommend a combined application of both algorithms as a consistency check to achieve a reliable estimate of the marginal likelihood from a simulated posterior distribution. *Keywords:* Bayesian computation, marginal likelihood, algorithm, Bayes factors, model selection
efficient inference in large complex systems is a major challenge with significant implications in science , engineering and computing .exact inference is computationally hard in complex systems and a range of approximation methods have been devised over the years , many of which have been originated in the physics literature . a recent review highlights the links between the various approximation methods and their applications. approximative bayesian inference techniques arguably offer the most principled approach to information extraction , by combining a rigorous statistical approach with a feasible but systematic approximation .although message passing techniques have existed for some time in the computer science community they have enjoyed growing popularity in recent years , mainly within the context of bayesian networks and the use of belief propagation ( bp ) for a range of inference applications , from signal extraction in telecommunication to machine learning .the main advantage of these techniques is their moderate growth in computational cost , with respect to the systems size , due to the local nature of the calculation when applied to sparse graphs . until recently ,message passing techniques were deemed unsuitable for inference in densely connected systems due to the inherently high number of short loops in the corresponding graphical representation , and the large number of connections per node , which results in a high computational cost .both properties are considered prohibitive to the use of conventional message passing techniques in such problems . a recently suggested method for message passing in densely connected systems relies on replacing individual messages by averages sampled from a gaussian distribution of some mean and variance that are modified iteratively .the method has been applied for the cdma signal detection inference problem ; it successfully finds optimal solutions where the space of solutions is contiguous but breaks down when the solution space becomes fragmented , for instance , when there is a mismatch between the true and assumed noise levels in the cdma detection problem .the emergence of competing solutions gives rise to conflicting messages that result in bungled average messages and suboptimal performance .in statistical physics terms , it corresponds to the replica symmetric solution in dense systems and gives poor estimates when more complex solution structures are required . in the current paper, we methodologically extend the approach of kabashima for inference in dense graphs by considering a large ( infinite ) number of replicated variable systems , exposed to the same evidential data ( received signals ) .each one of the systems represents a pure state and a possible solution .the pseudo posteriors , that form the basis for our estimates , are based on averages over the replicated systems .the method has been employed previously only in the non - critical regime , using the most basic ( rs - like ) ansatz for the solution structure . in the current paperwe study both critical and non - critical regimes and extend the solution structure considered to include step replica symmetry breaking ( 1rsb ) like structures . 
to demonstrate the potential of this approach and the performance obtained using the resulting algorithm we apply the method to two different but related problems : signal detection in code division multiple access ( cdma ) and learning in the ising linear perceptron ( ilp ) .we investigate both rs and 1rsb - like structures .the former is applied to both cdma and ilp problems and seems to be sufficient for obtaining optimal performances ; the latter is applied to a variant of the cdma signal detection problem with a more complex noise model that exhibits rsb - like behaviour , to demonstrate its efficacy for particularly difficult inference tasks . in section [ sec : models ] we will introduce the general models studied , followed by a brief review of message passing techniques for dense systems in section [ sec : message_passing ] .the general derivation of our approach , for both rs and rsb - like solution structures , will be presented in section [ sec : general - formalism ] ; numerical studies of both cdma signal detection and ilp learning will be reported in section [ sec : cdma ] . to demonstrate the method based on the more complex 1rsb solution structure , and to examine its efficacy to problems that require such structures, we will introduce a variant of the cdma signal detection problem and study it numerically in section [ sec : cdma2gauss ] .we will conclude the presentation with a summary and point to future research directions .details of the derivation will be provided in appendices [ app : rs]-[app : optimisation ] .before describing the inference method , the approach taken and the algorithms derived from it , it would be helpful to briefly describe the exemplar inference problems tackled in this paper .we apply the method to two different but related inference problems : signal detection in cdma and learning in the ising linear perceptron ( ilp ) .both correspond to inference problems where data points are noisy representations of sums of binary variables modulated by random binary values .multiple access communication refers to the transmission of multiple messages to a single receiver .the scenario we study here , described schematically in figure [ cdma - ilp](a ) , is that of users transmitting independent messages over an additive white gaussian noise ( awgn ) channel of zero mean and variance .various methods are in place for separating the messages , in particular time , frequency and code division multiple access .the latter , is based on spreading the signal by using individual random binary spreading codes of spreading factor .we consider the large - system limit , in which the number of users tends to infinity while the system load is kept to be .we focus on a cdma system using binary phase shift keying ( bpsk ) symbols and will assume the power is completely controlled to unit energy .the received aggregated , modulated and corrupted signal is of the form : where is the bit transmitted by user , is the spreading chip value , is the gaussian noise variable drawn from , and the received message .the task is to infer the original transmission from the set of received messages .this process is reminiscent of the learning task performed by a perceptron with binary weights and linear output , which is the next example considered in this paper .learning in neural networks has attracted considerable theoretical interest . 
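As an illustration of the CDMA model in equation ([eq:cdma]), the sketch below generates a synthetic BPSK instance in the unit-power, AWGN setting described above: K users, N = K/beta chips, random binary spreading codes s_{mu k}, and received chips formed from the scaled sum of modulated bits plus Gaussian noise. The 1/sqrt(N) normalisation is an assumption consistent with the large-system scaling used later; this is an illustrative generator, not the authors' code.

```python
import numpy as np

def make_cdma_instance(K=1000, beta=0.25, sigma0=0.25, rng=None):
    """Synthetic CDMA detection instance (BPSK, AWGN, unit power).

    Returns spreading codes S (N x K), true bits b (K,), received chips y (N,).
    """
    rng = np.random.default_rng(rng)
    N = int(K / beta)                        # spreading factor; load beta = K/N
    b = rng.choice([-1, 1], size=K)          # transmitted bits
    S = rng.choice([-1, 1], size=(N, K))     # binary spreading chips s_{mu k}
    noise = sigma0 * rng.standard_normal(N)  # AWGN of standard deviation sigma0
    y = S @ b / np.sqrt(N) + noise           # received signal, cf. (eq:cdma)
    return S, b, y
```

For example, `S, b, y = make_cdma_instance(K=1000, beta=0.25, sigma0=0.25, rng=0)` produces an instance with N = 4000 chips; the same generator, with the roles of codes and weights exchanged, serves as a toy training set for the Ising linear perceptron.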
in particularwe focus on supervised learning from examples , which relies on a training set consisting of examples of the target task .we consider a perceptron , described schematically in figure [ cdma - ilp](b ) , which is a network that sums a single layer of inputs with synaptic weights and passes the result through a transfer function where is typically a non - linear sigmoidal function . if the network is termed _ linear output perceptron_. if the weights the network is called _ ising perceptron_. learning is a search through the weight space for the perceptron that best approximates a target rule .the similarity between the linear perceptron of equation ( [ eq : perceptron ] ) and the cdma detection problem of eq.([eq : cdma ] ) allows for a direct relation between the two problems to be established .the main difference between the problems is the regime of interest . while cdma detection applications are of interest mainly for non - critical low load values , ilp studies focused on the critical regime .we consider both regimes in this paper .( 440,190 ) ( 0,-4)=98.5 mm ( 325,25)=50.5 mm ( 0,203)(a ) ( 305,203)(b )graphical models ( bayes belief networks ) provide a powerful framework for modelling statistical dependencies between variables .they play an essential role in devising a principled probabilistic framework for inference in a broad range of applications .message passing techniques are typically used for inference in graphical models that can be represented by a sparse graph with a few ( typically long ) loops .they are aimed at obtaining ( pseudo ) posterior estimates for the system s variables by iteratively passing messages ( locally calculated conditional probabilities ) between variables .iterative message passing of this type is guaranteed to converge to the globally correct estimate when the system is tree - like ; there are no such guarantees for systems with loops even in the case of large loops and a local tree - like structure ( although message passing techniques have been used successfully in loopy systems , supported by some limited theory ) . a clear link has been established between certain message passing algorithms and well known methods of statistical mechanics such as the bethe approximation . these inherent limitations seem to prevent the use of message passing techniques in densely connected systems due to their high connectivity , implying an exponentially growing cost , and an exponential number of loops .however , an exciting new approach has been recently suggested for extending bp techniques to densely connected systems . in this approach, messages are grouped together , giving rise to a macroscopic random variable , drawn from a gaussian distribution of varying mean and variance for each of the nodes .the technique has been successfully applied to cdma signal detection problems and the results reported are competitive with those of other state - of - the - art techniques . however , the current approach has some inherent limitations , presumably due to its similarity to the replica symmetric solution in the equivalent ising spin models . 
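As a quick numerical illustration of why grouped messages can be treated as a Gaussian macroscopic variable, the sketch below checks that the scaled sum over many random binary chips is close to a zero-mean, unit-variance Gaussian, as the central limit theorem suggests. This is only a sanity check of the premise, not part of any algorithm in the paper.

```python
import numpy as np

def macroscopic_field_moments(K=1000, trials=20000, rng=None):
    """Empirical mean and variance of (1/sqrt(K)) * sum_k s_k b_k over codes."""
    rng = np.random.default_rng(rng)
    b = rng.choice([-1, 1], size=K)              # fixed bit vector
    s = rng.choice([-1, 1], size=(trials, K))    # independent chip draws
    delta = s @ b / np.sqrt(K)
    return delta.mean(), delta.var()             # ~0 and ~1 by the CLT
```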
in a separate recent development , the replica - symmetric - equivalent bp has been extended to survey propagation ( sp ) , which corresponds to one - step replica symmetry breaking in diluted systems .this new algorithm , motivated by the theoretical physics interpretation of such problems , has been highly successful in solving hard computational problems , far beyond other existing approaches .in addition , the algorithm facilitated theoretical studies of the corresponding physical system and contributed to our understanding of it .the sp algorithm has recently been modified to handle ising and multilayer perceptrons .we recently presented a new approach for inference in densely connected systems , which was inspired by both the extension of bp to densely connected graphs and the introduction of sp .the systems we consider here are characterised by multiplicity of pure states and a possible fragmentation of the space of solutions . to address the inference problem in such cases we consider an ensemble of replicated systems where averages are taken over the ensemble of potential solutions .this amounts to the presentation of a new graph , where the observables are linked to variables in all replicated systems , namely ; where , as shown in figure .to estimate the variables given the data , in a bayesian framework , we have to maximise the posterior where we have considered independent data , and thus .the likelihood so defined is of a general form ; the explicit expression depends on the particular problem studied . here, we are interested in cases where is an unbiased vector and .the estimate we would like to obtain is the maximiser of the posterior marginal ( mpm ) which is expected to be a vector with equal entries for all replica .the number of operations required to obtain the full mpm estimator is of which is infeasible for large values . to obtain an approximate mpm estimatewe apply bp message passing technique .in particular we are interested here in the application of bp to densely connected graphs , similar to the one presented in .the latter is based on estimating a single solution and therefore does not converge , as has been observed , when the solution space becomes fragmented and multiple solutions emerge .this arguably corresponds to the replica symmetry breaking phenomena and occurs , for instance , when the noise level is unknown in the cdma signal detection case .a potential algorithmic improvement is achieved by the introduction of an sp - like approach , based on replicated variable systems , similar to the approach taken in problems that can be mapped onto sparsely connected graphs . 
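To make the MPM target concrete, the following brute-force sketch enumerates all bit vectors, weights them by a Gaussian likelihood with known noise level and a flat prior, marginalises each bit, and takes its sign. It is feasible only for very small K, since the cost grows exponentially with the number of users (presumably the elided complexity above); the known noise level is an assumption of this toy illustration, and the message-passing scheme developed below is precisely what replaces this enumeration.

```python
import itertools
import numpy as np

def mpm_bruteforce(S, y, sigma0):
    """Exact marginal-posterior-maximiser (MPM) bits for a tiny system."""
    N, K = S.shape
    marg = np.zeros(K)                        # accumulates sum_b b_k * P(y|b)
    norm = 0.0
    for bits in itertools.product([-1, 1], repeat=K):
        b = np.array(bits)
        resid = y - S @ b / np.sqrt(N)
        w = np.exp(-0.5 * resid @ resid / sigma0**2)   # flat prior on b
        marg += w * b
        norm += w
    return np.sign(marg / norm)               # sign of the posterior magnetisation
```

With `S, b, y` from `make_cdma_instance(K=12, beta=0.25)` this recovers most bits at a cost of 2^12 likelihood evaluations, which is exactly the scaling the approximate message-passing estimator avoids.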
given data.,width=245 ] using bayesrule one straightforwardly obtains the bp equations : for calculating the posterior we assume a dependency of the data on the parameters of the form , where is some general smooth function , are model parameters and are small enough to ensure that .we define the vector thus , using we can model the likelihood such that \left(y_{\mu}|\mathbf{{\delta}}_{\mu k};\mathbold{\gamma}\right)\ , p\left(\mathbf{{\delta}}_{\mu k}|\mathbf{b}\right)\,,\label{eq : likelihood}\end{aligned}\ ] ] where we have assumed that , due to the assumed dependence of the observed values on and .an explicit expression for inter - dependence between solutions is required for obtaining a closed set of update equations .we assume a dependence of the form where is a vector representing an external field and the matrix of cross - replica interaction .the form of depends upon the particular case considered .we assume one of the following symmetry relation between the replicated solutions : where is a block index that runs from 1 to and ` a ' is a intra - block replica index that runs form 1 to where is the number of variables per block .we also make the following reasonable assumption , as one expects correlations to gradually decrease between variables with non - identical replica and block indices , respectively . for both types of symmetries considered , the correlation matrix defined as : where is an index or a pair of indices for rs and 1rsb , respectively .the correlation matrix is assumed to be self - averaging , i.e. and preserves the symmetry of the matrix .an explicit derivation of the entries of is presented in appendices [ app : rs ] and [ app : rsb ] , for the rs and rsb - like correlation structures , respectively ; the matrices take following the general form : +\left(1-\delta^{\ell\ell^{\prime}}\right)\frac{1}{nl}\left(v^{t}-r^{t}\right)\,.\end{aligned}\ ] ] thus , for the appropriate centre of the distribution ( see equations ( [ eq : urs ] ) and ( [ eq : u1rsb ] ) ) , the probability of can be expressed as : }\right\ } \prod_{{\rm a}=1}^{n}\exp\left\ { -{\displaystyle \frac{\left(\delta_{\mu k}^{\ell{\rm a}}-\vartheta_{\mu k}^{0\ell t}\right)^{2}}{2\left(x^{t}-n^{-1}v^{t}\right)}}\right\ } & \mbox{(rsb)}\end{cases}}\nonumber \end{aligned}\ ] ] for the rs and rsb - like correlation matrices , respectively , where and having obtained the conditional probability distribution one can then derive explicit expressions for the messages ( magnetisation ) and that can be viewed as parameters in the corresponding marginalised binary distributions and . 
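The RS- and 1RSB-like correlation structures assumed above can be visualised with the schematic constructors below: an RS-like matrix has a common diagonal value and a single off-diagonal value, while a 1RSB-like matrix has L diagonal blocks of size n with one intra-block and one inter-block value. The specific scalings (for example the 1/(nL) factors in the garbled expression above) are not reproduced here; the placeholder values d, c_in and c_out are illustrative.

```python
import numpy as np

def rs_correlation(n, d, c):
    """RS-like structure: diagonal d, identical off-diagonal entries c."""
    return c * np.ones((n, n)) + (d - c) * np.eye(n)

def rsb1_correlation(L, n, d, c_in, c_out):
    """1RSB-like structure: L blocks of size n, intra-block c_in,
    inter-block c_out, diagonal d."""
    N = L * n
    C = c_out * np.ones((N, N))
    for l in range(L):
        block = slice(l * n, (l + 1) * n)
        C[block, block] = c_in
    C[np.diag_indices(N)] = d
    return C
```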
the messages from nodes to nodes , as derived in appendix [ app : messages ] , equations ( [ eq : mhat])-([eq : mm1rsb ] ) where , is defined in equation ( [ eq : gcal ] ) and is obtained from the saddle point equations given by equation ( [ eq : wrs ] ) in the rs case and by equation ( [ eq : w1rsb ] ) in the 1rsb case .the messages from nodes to are given in both cases by the expression for the gauged field where .the distribution of this field is well approximated by a gaussian as a result of the central limit theorem .the mean and variance of the gaussian are and respectively : \simeq\frac{1}{k}\sum_{k=1}^{k}\sum_{\mu=1}^{n}\left(\hat{m}_{\mu k}^{t}\right)^{2}\,.\nonumber \end{aligned}\ ] ] both and are assumed to be independent of the index by virtue of the self - averaging property .for the same reason we expect the macroscopic variables defined as and , where , to be independent of the index thus , these macroscopic variables can be evaluated by the following integrals where .the structure of the correlation matrix used introduces free variables in the form of the correlation terms between replicated solutions .these are used for optimising the estimation provided with respect to a given performance measure .since the mpm estimator is given by , the expression for the error per bit rate takes the form : which is minimised when the true message vector * * and the vector of messages are parallel .therefore , the error rate per bit decreases as the ratio increases .the optimal value is reached when and as derived in appendix [ app : optimisation ] .using this notation one defines for the cdma problem and for the ising perceptron . the goal is to get an accurate estimate of the vector for all users given the received message vector via a principled approximation of the posterior .an expression representing the likelihood is required and is easily derived from the noise model ( assuming zero mean and variance ) . if the arithmetic variance over replicas of the macroscopic message is finite and independent of the sub indexes and , i.e. , then can be expanded as \,,\label{supp}\end{aligned}\ ] ] where and .the function , defined in equation ( [ eq : pcal ] ) , and obtained from this distribution is linear in ; therefore , the second derivative used for calculating the messages in equation ( [ eq : m_hat ] ) and the corresponding structure of the correlation matrix is rs - like . to calculate correlations between replicawe expand in the large _limit in ( [ supp ] ) , as shown in equation ( [ eq : likelihood ] ) .according to the rs correlation assumption , the macroscopic variables satisfy the following relation : where for the cdma ( ilp ) system and for the cdma ( ilp ) systems , respectively , due to the change in scaling .the saddle point equation ( [ eq : hcal1rsb ] ) provides a dominant value for the variable the message from to at time is then given by : the main difference between equation ( [ mhat1 ] ) and the equivalent equation in is the dependence of the pre - factor on , reflecting correlations between different solutions groups ( replica ) . to determine this term we optimise the choice of by applying the condition . forcing this condition leads to a relation between the structure of the space of solutions , represented by , and the free parameter of the model . from equation ( [ mhat1 ] ) and using and one obtains : \left(e^{t+1}\right)^{2}\,,\end{aligned}\ ] ] which imply , after simplification , that for both cases . 
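Writing E for the mean and F for the variance of the gauged Gaussian field (the symbols are elided in the extracted text above), and assuming that a bit error corresponds to the gauged field falling below zero, the error rate per bit is a Gaussian tail probability that decreases as E/sqrt(F) grows. The sketch below evaluates it, together with the empirical counterpart used when comparing against finite simulations; it is a reading of the density-evolution picture, not the paper's code.

```python
import numpy as np
from scipy.special import erfc

def bit_error_rate(E, F):
    """P_b = Pr(gauged field < 0) for a field ~ N(E, F)."""
    return 0.5 * erfc(E / np.sqrt(2.0 * F))

def empirical_ber(b_true, b_hat):
    """Fraction of sign errors in a finite-size simulation."""
    return np.mean(b_true != b_hat)
```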
despite the simplicity of this result , the process from which we obtained it provides us with a practical way to estimate the true noise variance .notice that for calculating and we used the limits .so that , which appears in the expression for , can be obtained from the signal vector of with an infinite number of entries .thus using this expression we can finally express the message as: where no prior belief of is required .the steady state equations for the macroscopic variables and are obtained by taken the limit .let us define and .in the asymptotic regime the following relations hold : and from these expressions one can obtain the full expression for the error per bit rate : \,.\label{eq : epbbar}\ ] ]the inference algorithm requires an iterative update of equations ( [ eq : mmhatapprox],[mhatfin ] ) and converges to a reliable estimate of the signal , with no need for prior information of the noise level .the computational complexity of the algorithm is of .( 440,190 ) ( 0,0 ) ( 225,0 ) ( -10,193)(a ) ( 215,193)(b ) to test the performance of our algorithm we carried out a set of experiments of the cdma signal detection problem under typical conditions .error probability of the inferred signals was calculated for a system load of , where the true noise level is and the estimated noise is , as shown in figure [ fig2](a ) .the solid line represents the expected theoretical results ( density evolution ) , knowing the exact values of and , while circles represent simulation results obtained via the suggested _ practical _ algorithm , where no such knowledge is assumed .the results presented are based on trials per point and a system size and are superior to those obtained using the original algorithm .another performance measure one should consider is that provides an indication to the stability of the solutions obtained . in figure [ fig2](b ) we see that results obtained from our algorithm show convergence to a reliable solution in contrast to the original algorithm . the physical interpretation of the difference between the two results is assumed to be related to a replica symmetry breaking phenomenon .for the ilp , the regime of high interest as the system develops a critical behaviour for a range of values .we carried out a set of experiments for this system based on density evolution . in figure [ fig - crit-1 - 2](a )we present curves of the bit error probability , defined in equation ( [ eq : epbbar ] ) , as a function of the inverse load for different values of .three different regimes have been observed : for the curves exhibit a discontinuity at a value of that varies with ( first order phase transition - like behaviour ) . at curve becomes continuous but its slope diverges ( second order phase transition - like behaviour ) .the curves show analytical behaviour for noise values above 0.1025 .figure [ fig - crit-1 - 2](b ) exhibits a phase diagram of the ilp system ; it shows the dependency of the critical load as a function of the noise parameter .the first order transition line ends in a second order transition point marked by a circle .the results obtained , and in particular the critical value , are consistent with those derived using the replica symmetric statistical mechanics - based analysis of the problem .another indication for the critical behaviour is the number of steps required for the recursive update of equation ( [ eq : ebar ] ) to convergence . 
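The update equations ([eq:mmhatapprox], [mhatfin]) themselves are not reproduced here; as a stand-in that shows the structure of the iteration (an O(NK) sweep per step, soft bit estimates passed through a tanh, and an on-line residual-based noise estimate in place of prior knowledge of the noise level), here is a schematic soft parallel interference cancellation detector. It is a simpler algorithm than the one derived above and is included only to illustrate the loop, not as the authors' algorithm; the damping factor is an assumption added for numerical robustness.

```python
import numpy as np

def soft_pic_detector(S, y, iters=25, damp=0.5):
    """Schematic iterative detector: soft parallel interference cancellation."""
    N, K = S.shape
    m = np.zeros(K)                              # soft bit estimates in [-1, 1]
    for _ in range(iters):
        resid = y - S @ m / np.sqrt(N)           # cancel current estimates
        sigma2 = np.mean(resid**2)               # crude on-line noise + MAI estimate
        field = S.T @ resid / np.sqrt(N) + m     # add back each user's own term
        m_new = np.tanh(field / sigma2)          # MPM for a +/-1 bit in a Gaussian field
        m = damp * m + (1 - damp) * m_new        # damping aids convergence
    return np.sign(m), m
```

A typical usage, tying it to the generator above, is `b_hat, m = soft_pic_detector(S, y)` followed by `empirical_ber(b, b_hat)`; like the algorithm in the text, each sweep costs O(NK) operations and no prior estimate of the noise level is supplied.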
in figure [ fig - crit-3 - 4](a )we present the number of iterations required to reach a steady state as a function of when the noise parameter is set to .the number of iterations diverges when the critical value of is reached . finally , we wish to explore the efficiency of the algorithm as a function of the system size . in figure [ fig - crit-3 - 4](b )we present the result of iterating equations ( [ eq : mmhatapprox ] ) and ( [ mhatfin ] ) for a system size of _ _ k__=500 .the curve presents mean values and error bars over 1000 experiments .there is a strong dependency of the error per bit rate on the size of the system , which is expected to converge to the asymptotic limit ( infinite system size ) represented by the solid line .( 440,190 ) ( 0,0)=67.5 mm ( 225,0 ) ( 0,193)(a ) ( 225,193)(b ) ( 440,190 ) ( 0,-4 ) ( 225,-5 ) ( 0,193)(a ) ( 225,193)(b )to demonstrate the suitability of the method for more complex inference problems that require a system with 1rsb - like structures , we will consider the cdma signal of equation ( [ eq : cdma ] ) where the noise is drawn from a bi - gaussian distribution : where represents the bias and the positions of the gaussian peaks .we consider the particular case where , so that the gaussian peaks are slightly off centre . for this modelthe likelihood expression takes the form : +\frac{1+r}{2}\exp\left[-\frac{\left(y_{\mu}-\delta_{\mu}^{\ell{\rm a}}-\varepsilon\right)^{2}}{2\sigma^{2}}\right]\right\ } \,,\ ] ] where _ r _ , and are estimates of the true parameters , and . to derive the messages in this case we first calculate the function of equation ( [ eq : pcal ] ) , which has the form : where following the derivation of appendix [ app : messages ] , the saddle point equations ( [ eq : wrs ] ) and ( [ eq : w1rsb ] ) can be expressed as : where we denote for the rs case and for the 1rsb case , , , and .the solution of this equation provides , up to order , .\end{aligned}\ ] ] the function and its two first derivatives at the saddle point value are : \rho_{w}\,\varepsilon+\\ & & + \left[1-\left(1-r^{2}\right)\rho_{w}^{2}\,\varepsilon^{2}-\left(1-r^{2}\right)\left(1 - 3r^{2}\right)\updelta\rho_{w}\,\rho_{w}^{2}\,\varepsilon^{4}\right]\left(y_{\mu}-u_{\mu k}^{t}\right)+\\ & & + r\left(1-r^{2}\right)\rho_{w}^{3}\left(y_{\mu}-u_{\mu k}^{t}\right)^{2}\varepsilon^{3}+\frac{1}{3}\left(1-r^{2}\right)\left(1 - 3r^{2}\right)\rho_{w}^{4}\left(y_{\mu}-u_{\mu k}^{t}\right)^{3}\varepsilon^{4}\\ \mathcal{p}_{1 } & \simeq & -\rho_{0}+\mathcal{o}\left(\varepsilon^{2}\right)\\ \mathcal{p}_{2 } & = & 2\rho_{0}^{3}\left(1-r^{2}\right)\left[r\varepsilon^{3}+\left(1 - 3r^{2}\right)\rho_{w}\left(y_{\mu}-u_{\mu k}^{t}\right)\varepsilon^{4}\right]\,,\end{aligned}\ ] ] therefore , one can obtain the following expression , required for calculating the messages in the 1rsb case ( [ eq : mm1rsb ] ) \,,\ ] ] where .this straightforwardly leads to the following expression for the message : \,\varepsilon+\right.\nonumber \\ & & + \rho_{w}\left[1-\left(1-r^{2}\right)\rho_{w}\,\varepsilon^{2}-\left(1-r^{2}\right)\left(1 - 3r^{2}\right)\left(\upsilon_{n}-\rho_{w}^{2}\right)\varepsilon^{4}\right]\,\left(y_{\mu}-u_{\mu k}^{t}\right)+\nonumber \\ & & \left.+r\left(1-r^{2}\right)\rho_{w}^{3}\,\varepsilon^{3}\left(y_{\mu}-u_{\mu k}^{t}\right)^{2}+\frac{1}{3}\left(1-r^{2}\right)\left(1 - 3r^{2}\right)\rho_{w}^{4}\,\varepsilon^{4}\left(y_{\mu}-u_{\mu k}^{t}\right)^{3}\right\ } \,,\label{eq:2gauss1rsb}\end{aligned}\ ] ] where .the expression for the message in the rs case is recovered from 
equation ( [ eq:2gauss1rsb ] ) in the limit calculating the expressions for the macroscopic variables and , used in the optimisation process , requires performing the following sums , in the limit of with : where and . from the definition of the signal ( [ eq : cdma ] ) and the expression for the noise ( [ eq:2gruido ] ) we find that , , , , , and the explicit expressions derived for the macroscopic variables are : \rho_{w}\,\varepsilon^{4}\\ f^{t+1 } & = & b_{2}\rho_{w}^{2}-2rb_{1}\rho_{w}^{2}\,\varepsilon\\ & & + \left[r^{2}-2\left(1-r^{2}\right)b_{2}\rho_{w}\right]\rho_{w}^{2}\,\varepsilon^{2}-2r\left(1-r^{2}\right)b_{1}\left[\upsilon_{n}-\left(2 + 3b_{2}\rho_{w}\right)\rho_{w}^{2}\right]\rho_{w}\,\varepsilon^{3}+\\ & & + \left(1-r^{2}\right)\left[2r^{2}\left(\upsilon_{n}-\rho_{w}^{2}\right)\rho_{w}+\left(1 - 3r^{2}\right)b_{2}\left(3\rho_{w}^{2}+2b_{2}\rho_{w}^{3}-2\upsilon_{n}\right)\rho_{w}^{2}\right]\varepsilon^{4}\,.\end{aligned}\ ] ] applying the optimisation conditions of appendix [ app : optimisation ] , and , where one obtain the following conditions : in the 1rsb case one can further simplify these expressions by a suitable choice of and the number of replicas per block _n_. optimisation with respect to the latter results in which implies that by definition is larger than zero .this condition is satisfied if our estimate for the noise variance is smaller than the true parameter . in this casethe number of replicas per block has to satisfy the condition interestingly this ties the noise level mismatch to the number of replicas , thus giving further insight to the role played by the structure of the inter - replica correlation matrix . for , the minimum value of is reached at .it is also possible to prove that although and will not be explicitly used in the following expressions , the correct choice of the value for these parameters allows one to use equations ( [ eq : cond1 ] ) and ( [ eq : cond2 ] ) in order to find the final expression for the macroscopic variable , where no estimates are needed for the noise parameters : note that in the rs case we do not have the freedom to choose the number of replicas per block , given that this case is equivalent to take in the absence of the additional replica . for this reason equations ( [ eq : cond1 ] ) and ( [ eq : cond2 ] ) and ( [ eq : cond2 ] ) take the form : and the macroscopic variable which depends on both estimates of the noise variance and bias given that the algorithm deals with finite signal vectors , the quantities and have to be approximated by the correspondent finite sums .therefore , we have : where we used the fact that .observe that no information about the true noise has been used to derive these expressions .having the estimates ( [ eq : estb1b2 ] ) we can write down the messages explicitly : \,,\end{aligned}\ ] ] which can be now used recursively for obtaining the inferred solutions for this problem .notice that an estimate of _ both _ and in required in the rs case . 
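Based on the visible term of the likelihood above, the bi-Gaussian noise is read here as a mixture with weight (1+r)/2 centred at +epsilon and weight (1-r)/2 at -epsilon, both of width sigma; where the garbled formula leaves ambiguity, treat these weights and signs as assumptions. The sketch below samples such noise and evaluates the per-chip log-likelihood used in this section's detector.

```python
import numpy as np

def bigaussian_noise(size, r, eps, sigma, rng=None):
    """Sample the two-peak noise: weight (1+r)/2 at +eps, (1-r)/2 at -eps."""
    rng = np.random.default_rng(rng)
    centres = np.where(rng.random(size) < (1 + r) / 2, eps, -eps)
    return centres + sigma * rng.standard_normal(size)

def log_likelihood_chip(y, delta, r, eps, sigma):
    """Per-chip log-likelihood of received value y given noiseless value delta."""
    a = (1 - r) / 2 * np.exp(-((y - delta + eps) ** 2) / (2 * sigma**2))
    b = (1 + r) / 2 * np.exp(-((y - delta - eps) ** 2) / (2 * sigma**2))
    return np.log(a + b) - 0.5 * np.log(2 * np.pi * sigma**2)
```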
to test the performance of the 1rsb algorithm we carried out a set of experiments of the cdma signal detection problem with bi - gaussian noise .the results shown in figure [ fign](a ) describe the error probability of the inferred signals as a function of the number of iterations has been calculated using both rs and 1rsb - like correlation matrices for the case of parameters mismatch .the system load used in the simulations was , the true noise level , gaussian bias of and weight .6 .the estimated noise parameters are and .the circles represent simulation results obtained via the 1rsb algorithm while the squares correspond to the rs - like structure .the results presented are based on trials per point and a system size ; error - bars are also provided .the results obtained using the 1rsb - like structure are superior to those obtained using the rs algorithm . as shown in figure [ fign](b ) using the stability measure , both rs and 1rsb - based algorithms converge to reliable solutions ; the 1rsb - based algorithm is slightly slower to converge , presumably due to the more complex message passing scheme .( 440,190 ) ( 0,0 ) ( 225,0 ) ( -10,193)(a ) ( 215,193)(b )we present and methodologically develop a new algorithm for using bp in densely connected systems that enables one to obtain reliable solutions even when the solution space is fragmented .the algorithm relies on the introduction of a large number of replicated variable systems exposed to the same evidential nodes .messages are obtained by averaging over all replicated systems leading to pseudoposterior that is then used to infer the variable nodes most probable values .this is done with no actual replication , by introducing an assumption about correlations between the replicated variables and exploiting the high number of replicated systems .the algorithm was developed in a systematic manner to accommodate more complex correlation matrices .it was successfully applied to the cdma signal detection and ilp learning problems , using the rs - like correlation matrix , and to the cdma inference problem with bi - modal gaussian noise model in the 1rsb - like correlation matrix .the algorithm provides superior results to other existing algorithms and a systematic improvement where more complex correlation matrices are introduced , where required .further research is required to fully determine the potential of the new algorithm .two particular areas which we consider as particularly promising are inference problems characterised by discrete data variables and noise model and problems that can be mapped onto _ sparse _ graphs .both activities are currently underway .support from evergrow ip no .1935 of the eu fp-6 is gratefully acknowledged .10 m. mzard , g. parisi and m.a .virasoro , _ spin glass theory and beyond _ , world scientific , singapore ( 1987 ) .m. opper and d. saad , _ advanced mean field methods : theory and practice _ , mit press , cambridge , ma 2001 j. pearl , _ probabilistic reasoning in intelligent systems _ , morgan kaufmann publishers , san francisco , ca ( 1988 ) .jensen , _ an introduction to bayesian networks _ , ucl press , london ( 1996 ) .mackay , _ information theory , inference and learning algorithms _ , cambridge university press ( 2003 ) .y. kabashima , j. phys .a * 36 * , 11111 ( 2003 ) .h. nishimori , _ statistical physics of spin glasses and information processing _ , oxford university press , uk , ( 2001 ) .neirotti and d. saad , europhys .lett . 
* 71 * , 866 ( 2005 ) .although we will be using the terms rs and rsb , it should be clear that this is not directly related to the replica approach , but merely uses similar structures for the cross - replica correlations .s. verd , _ multiuser detection _ , cambridge university press uk ( 1998 ) .h. s. seung , h. sompolinsky and n. tishby , phys .a * 45 * , 6056 ( 1992 ) .y. weiss , _ neural computation _ * 12 * , 1 ( 2000 ) .y. kabashima , d. saad , europhys .lett . * 44 * , 668 ( 1998 ) .yedidia , w.t .freeman and y. weiss , in _ advances in neural information processing systems _ * 13 * , 698 ( 2000 ) .m. mzard , g. parisi and r. zecchina , science * 297 * , 812 ( 2002 ) .m. mzard and r. zecchina , phys .e * 66 * , 056126 ( 2002 ) . a. braunstein and r. zecchina , phys .lett . , * 96 * 030201 ( 2006 ) y. kabashima , jour . of the physical society of japan* 74 * 2133(2005 )within the rs setting , the interaction term in equation ( [ eq : ansatz ] ) is : a simplified expression for equation ( [ eq : ansatz ] ) immediately follows ^{-1}\exp\left\ { h_{\mu k}^{t}\sum_{\mathrm{a}=1}^{n}b_{k}^{\mathrm{a}}+\frac{1}{2}q_{1\mu k}^{t}\left(\sum_{\mathrm{a}=1}^{n}b_{k}^{\mathrm{a}}\right)^{2}\right\ } \\ & = & [ \mathcal{z}_{\mu k}^{t}]^{-1}{\displaystyle \int_{-\infty}^{\infty}\mathrm{d}x\,\exp\left\ { -\frac{x^{2}}{2q_{1\mu k}^{t}}+\left(x+h_{\mu k}^{t}\right)\sum_{\mathrm{a}=1}^{n}b_{k}^{\mathrm{a}}\right\ } } \end{aligned}\ ] ] where is a normalisation constant .the diagonal elements only affect the normalisation term and can therefore be taken to zero with no loss of generality .we expect the logarithm of the normalisation term ( linked to the free energy ) , obtained from the well behaved distribution , to be self - averaging .we therefore expect where and are the mean values of the parameters drawn for some suitable distributions and the over - line represents the mean value of the partition function over these distributions .in the following we will drop the upper - index _ t _ and the sub - indices and for brevity . to obtain the scaling behaviour of the various parametersone calculates explicitly , assuming the parameter is taken from a normal distribution .the partition function takes the form : thus , the mean value of the partition function over the set of parameters is : where the normalisation can be expressed as : \right\ } \\ & = & \mathcal{a}(n)\,(n+1)\binom{n}{n/2}\,\exp\left\ { n\left[\left|h\right|+n\frac{\hat{q}_{1}}{2}+n^{3}\frac{\sigma_{q_{1}}^{2}}{8}\right]\right\ } \\ & \simeq & \sqrt{\frac{2}{\pi}}\mathcal{a}(n)\,\exp\left\ { n\left[\ln(2)+\left|h\right|+n\frac{\hat{q}_{1}}{2}+n^{3}\frac{\sigma_{q_{1}}^{2}}{8}\right]\right\ } , \end{aligned}\ ] ] where .thus , , and .> from now on we will take the off - diagonal elements of the rs matrix equal to , where . the form of the marginalised posterior at time _t _ is then : where the function presents one or two minima according to the following table : + [ cols="^,^,^",options="header " , ] where ; the coefficient plays the role of the inverse temperature . below the critical value a spontaneous magnetisation appears .this results from analysing the equation : the case of two maxima is presented in figure [ 2peaks ] . 
) with two maxima and one minimum for a positive value of the field .[2peaks],width=245 ] we define the mean values from the distribution equation ( [ pp ] ) .if the field is not zero , as shown in figure [ 2peaks ] , ^{n} ] .we expect that , for large the following approximation to be valid : \left(x - x_{h}\right)^{2}\right\ } \right.\nonumber \\ & & \left.\qquad+{\rm e}^{-nmh}\exp\left\ { -\frac{n}{2}\left[g_{1}^{-1}-\beta_{-h}\left(m , g_{1}\right)\right]\left(x - x_{-h}\right)^{2}\right\ } \right\ } \,.\label{exph0}\end{aligned}\ ] ] using equation ( [ exph0 ] ) one can calculate the normalisation in equation ( [ eq : z ] ) \left(x - x_{h}\right)^{2}\right\ } \nonumber \\ & & + { \rm e}^{-n\left(\phi_{0}+mh\right)}\int{\rm d}x\,\exp\left\ { -\frac{n}{2}\left[g_{1}^{-1}-\beta_{-h}\left(m , g_{1}\right)\right]\left(x - x_{-h}\right)^{2}\right\ } \nonumber \\ & \simeq & \sqrt{\frac{2\pi g_{1}\xi\left(m , g_{1}\right)}{n}}\!\,{\rm e}^{-n\phi_{0}}\!\left\ { \,{\rm e}^{nmh}\!\left(1-g_{1}\left(1-m^{2}\right)\xi^{2}\left(m , g_{1}\right)\ , mh\right)\right.\nonumber \\ & & \qquad\qquad\qquad\left.+\,\,{\rm e}^{-nmh}\!\left(1+g_{1}\left(1-m^{2}\right)\xi^{2}\left(m , g_{1}\right)\ , mh\right)\right\ } \,\,.\label{eq : zzz}\end{aligned}\ ] ] the mean value of a given function with respect to the conditional probability distribution defined in equation ( [ pp ] ) is then:\!\left(x\!-\ ! x_{h}\right)^{2}\right\ } \\ & & \qquad\qquad\qquad\qquad\qquad\left[f\left(x_{h}\right)+\left(x - x_{h}\right)f^{\prime}\left(x_{h}\right)+\frac{1}{2}\left(x - x_{h}\right)^{2}f^{\prime\prime}\left(x_{h}\right)\right]\\ & & + \mathcal{z}^{-1}{\rm e}^{-n\left(\phi_{0}+mh\right)}\int\!\!{\rm d}x\,\exp\left\ { -\frac{n}{2}\left[g_{1}^{-1}\!-\!\left(1\!-\ !m^{2}\right)\!\left(1\!+\!2\xi\left(m , g_{1}\right)\ , mh\right)\right]\!\left(x\!-\ !x_{-h}\right)^{2}\right\ } \\ & & \qquad\qquad\qquad\qquad\qquad\left[f\left(x_{-h}\right)+\left(x - x_{-h}\right)f^{\prime}\left(x_{-h}\right)+\frac{1}{2}\left(x - x_{-h}\right)^{2}f^{\prime\prime}\left(x_{-h}\right)\right]\,,\end{aligned}\ ] ] which implies , considering that the integrals of the linear terms are zero and keeping only the leading terms in the expansions , that the expectation values takes the form : \left\ { f\left(x_{h}\right)+\frac{g_{1}}{2n}\xi\left(m , g_{1}\right)\ , f^{\prime\prime}\left(x_{h}\right)\right\ } \\ & & + { \rm e}^{-2nmh}\left(1 + 2\xi^{2}\left(m , g_{1}\right)\ , mh\right)f\left(x_{-h}\right)\,.\end{aligned}\ ] ] considering the expansion of and disregarding terms of , one can write : \!+\ !f^{\prime}\left(mg_{1}\right)\!\xi\left(m , g_{1}\right)h.\label{eq : meanrs}\ ] ] the resulting one and two variable expectation values become \xi\left(m_{\mu k}^{t},g_{1\mu k}^{t}\right)-2{\rm e}^{-2nm_{\muk}^{t}h_{\mu k}^{t}}\right]m_{\mu k}^{t}\\ & & \qquad\qquad+\xi\left(m_{\mu k}^{t},g_{1\mu k}^{t}\right)\,\left[1-\left(m_{\mu k}^{t}\right)^{2}\right]h_{\mu k}^{t}\end{aligned}\ ] ] and where \,\left\ { \frac{g_{1\mu k}^{t}}{n}\,\left[1 - 3\left(m_{\mu k}^{t}\right)^{2}\right]+2m_{\mu k}^{t}h_{\mu k}^{t}\right\ } \,,\ ] ] and thus , the leading terms for the covariance matrix of the replicated variables are : \\ & & + \left(1-\delta^{{\rm ab}}\right)\left\ { \frac{g_{1\mu k}^{t}}{n}\xi\left(m_{\mu k}^{t},g_{1\mu k}^{t}\right)\left[1-\left(m_{\mu k}^{t}\right)^{2}\right]^{2}+4{\rm e}^{-2nm_{\mu k}^{t}h_{\mu k}^{t}}\left(1-{\rm e}^{-2nm_{\mu k}^{t}h_{\mu k}^{t}}\right)\left(m_{\mu k}^{t}\right)^{2}\right\ } \,.\end{aligned}\ ] ] if one requires the 
non - diagonal elements of this covariance matrix to have the same scaling as the inter - replica interaction matrix , the field has to behave in such a way that the exponential term contributes at most in one thus expects the field to obey , where the are appropriate constants . with this asymptotic behaviour ,the expression for the entries in the covariance matrix is +\left(1-\delta^{{\rm ab}}\right)\,\frac{g_{1\mu k}^{t}\xi\left(m_{\mu k}^{t},g_{1\mu k}^{t}\right)}{n}\,\left[1-\left(m_{\mu k}^{t}\right)^{2}\right]^{2}\,,\ ] ] which serves to define the probability distribution for the macroscopic variable .under a solution correlation matrix that resembles the 1rsb structure , the system comprises variables , where both the number of blocks _l _ and the number of variables per block _n _ are considered large . asbefore we are interested in the regime where and with this setting , the interaction term in equation ( [ eq : ansatz ] ) is now: thus we have now squared sums in the exponent that can be replaced by integrals:^{-1}{\displaystyle \int\mathrm{d}\mathbold{x}}\,\exp\left\ { -\frac{x_{0}^{2}}{2q_{2\mu k}^{t}}-\sum_{\ell=1}^{l}\frac{x_{\ell}^{2}}{2\updelta q_{\mu k}^{t}}+\sum_{\ell=1}^{l}\left(x_{0}+x_{\ell}+h_{\mu k}^{t}\right)\sum_{\mathrm{a}=1}^{n}b_{k}^{\ell\mathrm{a}}\right\ } \,,\end{aligned}\ ] ] where and . also herewe expect the logarithm of the normalisation term ( linked to the free energy ) obtained from the well behaved distribution to be self - averaging , thus: which is satisfied if the entries behave like and where and .using this new scaled parameters , the expression for the normalisation is where\,.\ ] ] as before , we drop the indexes , _ k _ , and _t _ for brevity .the critical points of the function satisfy the following set of equations : which are satisfied for the following values : where .the second equation in the set , equation ( [ eq : set ] ) , has the same form for all and in the small field regime it has at most three different solutions . from the three possible solutions ,one is a local maximum ; of the other two , the one that has the same sign as _ h _ is dominant .thus we can expect , for all , .this reduces the set of equations to one where .with the substitution the equation has the same form as equation ( [ eq : derivada ] ) , i.e. if one considers again the field _h _ to be small , the solutions can be expressed as an expansion of the zero field solutions , where is given by equation ( [ eq : xih ] ) , and using these expansions the critical values are given by : ] for all as in the rs case , the expansion of around the critical points in the small field regime is .so the dominant solution is the one that shares the sign with the field . for a sufficiently large system with variables ,one expects the following expansion to be valid:\right.\nonumber \\ & & \left.\qquad\quad+{\rm e}^{-nlmh}\exp\left[-\frac{nl}{2}\left(\mathbf{x}\!-\!\mathbf{x}_{-h}^{*}\right)^{\sf t}\mathbf{h}_{\phi ,-h}\left(\mathbf{x}\!-\!\mathbf{x}_{-h}^{*}\right)\right]\right\ } \,,\label{eq : expmultivar}\end{aligned}\ ] ] where is the hessian of in .defining \ , mh\right\ } ] the linear transformation from the canonical basis to the basis of eigenvectors is then represented by a matrix with the entries \nonumber \\ & & + \frac{1}{\sqrt{l}}\,\delta_{1j}\left[\delta_{0i}\,\frac{\beta_{0}}{\alpha_{0}}+\left(1-\delta_{0i}\right)\right]-\frac{1}{l}\,\delta_{0j}\left(1-\delta_{0i}\right)\,\frac{\beta_{0}}{\alpha_{0}}\,,\label{eq : rot}\end{aligned}\ ] ] ignoring terms of . 
because this transformation is a rigid rotation , the following properties are satisfied : and second order terms in equation ( [ eq : expmultivar ] ) can be re - written using the diagonal representation of the hessian .therefore , keeping only terms of order we have that : , where and is the diagonal representation of the hessian , i.e. . using the diagonal representation in conjunction with equation ( [ eq : expmultivar ] ) one obtains an expression for the normalisation term \\ & & + { \rm e}^{-nl\left(\phi_{0}+mh\right)}\int{\rm d}\mathbf{y}\,\exp\left[-\frac{nl}{2}\left(\mathbf{y}-\mathbf{y}_{-h}^{*}\right)^{\sf t}\mathbf{h}_{\phi ,- h}^{\prime}\left(\mathbf{y}-\mathbf{y}_{-h}^{*}\right)\right]\\ & \simeq & { \rm e}^{-nl\phi_{0}}\left(\frac{2\pi}{nl}\right)^{\frac{l+1}{2}}\left[{\rm e}^{nlmh}\prod_{\ell=0}^{l}\lambda_{\ell , h}^{-\frac{1}{2}}+{\rm e}^{-nlmh}\prod_{\ell=0}^{l}\lambda_{\ell ,- h}^{-\frac{1}{2}}\right]\,.\end{aligned}\ ] ] for a small field , the product of the eigenvalues can be approximated by \left[\xi(m , g)+1\right]lmh\right\ } \prod_{\ell=0}^{l}\lambda_{\ell}^{-\frac{1}{2}}\,.\end{aligned}\ ] ] thus , the expression for reduces to \left[\xi(m , g)+1\right]lmh\right\ } \right.\\ & & \qquad\qquad\qquad+\left.{\rm e}^{-nlmh}\left\ { 1+\left[\xi(m , g_{2})-1\right]\left[\xi(m , g)+1\right]lmh\right\ } \right\ } \,.\end{aligned}\ ] ] the mean value of a given function is then given by \\ & & + \mathcal{z}^{-1}{\rm e}^{-nl\left(\phi_{0}+mh\right)}\int{\rm d}\mathbf{y}\,\exp\left\ { -\frac{nl}{2}\left(\mathbf{y}-\mathbf{y}_{-h}^{*}\right)^{\sf t}\mathbf{h}_{\phi ,- h}^{\prime}\left(\mathbf{y}-\mathbf{y}_{-h}^{*}\right)\right\ } \\ & & \qquad\qquad\qquad\qquad\qquad\left[f\left(\mathbf{x}_{-h}\right)+\frac{1}{2}\left(\mathbf{y}-\mathbf{y}_{-h}^{*}\right)^{\sf t}\mathbf{h}_{f ,- h}^{\prime}\left(\mathbf{y}-\mathbf{y}_{-h}^{*}\right)\right]\,,\end{aligned}\ ] ] where is the hessian of the function in the basis of eigenvectors of , evaluated at the critical points .the linear terms in the expansion of do not contribute to the expectation value .the gaussian integral of the cross products of the type with are zero , thus the gaussian integral of the second term in the expansion of becomes : \left(\mathbf{y}-\mathbf{y}_{\pm h}^{*}\right)^{\sf t}\mathbf{h}_{f,\pm h}^{\prime}\left(\mathbf{y}-\mathbf{y}_{\pm h}^{*}\right)\nonumber \\ i_{+ } & \simeq & \frac{1}{2}\,\left\ { 1-{\rm e}^{-2nlmh}\left\ { 1 + 2\left[\xi(m , g_{2})-1\right]\left[\xi(m , g)+1\right]lmh\right\ } \right\ } \;\frac{1}{nl}\,\sum_{\ell=0}^{l}\lambda_{\ell , h}^{-1}\left(\mathbf{h}_{f , h}^{\prime}\right)_{\ell\ell}\nonumber \\ i_{- } & \simeq & \frac{1}{2}\,{\rm e}^{-2nlmh}\left\ { 1 + 2\left[\xi(m , g_{2})-1\right]\left[\xi(m , g)+1\right]lmh\right\ } \;\frac{1}{nl}\,\sum_{\ell=0}^{l}\lambda_{\ell ,- h}^{-1}\left(\mathbf{h}_{f ,- h}^{\prime}\right)_{\ell\ell}\,.\label{b6}\end{aligned}\ ] ] using the expansion where and , the diagonal entries of the transformed hessian are : with defined by the second term in ( [ b7 ] ) . using the entries of the diagonalised hessian ,the last term in the integrals ( [ b6 ] ) becomes disregarding terms of the expectation value of an arbitrary function can then be approximated by \!+\!\delta\ !h,\label{eq : mean1rsb}\end{aligned}\ ] ] where we have disregarded terms of , and . 
by simple inspection ,equation ( [ eq : mean1rsb ] ) is equivalent to the rs mean value equation ( [ eq : meanrs ] ) .the single variable mean value is then : the expansion for is \left(1,\stackrel{\ell-1\;{\rm times}}{\overbrace{0,0,\dots,0}},1,\stackrel{l-\ell\;{\rm times}}{\overbrace{0,0,\dots,0}}\right)^{\sf t}\mathbold{\xi}\left(m_{\mu k}^{t},g_{\mu k}^{t}\right)h_{\mu k}^{t}\,,\ ] ] which results in the following expression for the single variable mean value {\mu k}^{t}\\ & & -m_{\mu k}^{t}\left[1-\left(m_{\mu k}^{t}\right)^{2}\right]\frac{1}{nl}\sum_{k=0}^{l}\lambda_{k}^{-1}\left(\mathbf{m^{\prime}}_{0\ell}\right)_{kk}\,,\end{aligned}\ ] ] where is a matrix such that \mathbf{m}_{0\ell} ] and =g_{\mu k}^{t}\xi\left(m_{\mu k}^{t},g_{\mu k}^{t}\right)-g_{2\mu k}^{t}\xi\left(m_{\mu k}^{t},g_{2\mu k}^{t}\right). ] , thus : \left[1 - 3\left(m_{\mu k}^{t}\right)^{2}\right]\nonumber \\ & & + \frac{g_{\mu k}^{t}\xi\left(m_{\mu k}^{t},g_{\mu k}^{t}\right)-g_{2\mu k}^{t}\xi\left(m_{\mu k}^{t},g_{2\mu k}^{t}\right)}{nl}\left[1-\left(m_{\mu k}^{t}\right)^{2}\right]\left[1 - 3\left(m_{\mu k}^{t}\right)^{2}\right]\nonumber \\ & & + 2\xi\left(m_{\mu k}^{t},g_{\mu k}^{t}\right)\left[1-\left(m_{\mu k}^{t}\right)^{2}\right]\ , m_{\mu k}^{t}h_{\mu k}^{t}.\label{eq : meanbblock1rsb}\end{aligned}\ ] ] finally , to calculate the expectation value for the product of two variables belonging to different blocks ( the sub - block index is insignificant in this case ) , .we set , thus the hessian matrix where \left[1 - 3\left(m_{\mu k}^{t}\right)^{2}\right] ] and ] is the hessian of , then the matrix has the same structure as , therefore , the eigenvalues and eigenvectors of can be obtained adapting equations ( [ eq : eigenvalues ] ) and ( [ eq : eigenvectors ] ) by the substitutions , and .expanding at the saddle point one obtains where the resulting messages are }}{{\displaystyle \int{\rm d}\mathbf{{\omega}}\,\exp\left\ { -\frac{nl}{2}\updelta\mathbf{\theta}^{\sf t}\mathbf{h}_{\mathcal{h}}\updelta\mathbf{\theta}\right\ } } } } \end{aligned}\ ] ] where the term proportional to vanishes for parity reasons . in the basis of eigenvectors of , i.e. 
where * u * is adapted from equation ( [ eq : rot ] ) , the message has the form: where are the eigenvalues of and is adapted from equation ( [ eq : diagm ] ) .the expression for the message is reduced to }\nonumber \\ & \simeq & \varepsilon_{\mu k}\frac{\tilde{\vartheta}_{\mu k}^{t}-u_{\mu k}^{t}}{{2v}^{t}-r^{t}}+\frac{\varepsilon_{\mu k}}{2n}\,\frac{\mathcal{p}_{2}v^{t}}{1-\mathcal{p}_{1}v^{t}}\,.\label{eq : mm1rsb}\end{aligned}\ ] ] the expression for the messages from * b*-nodes to * y*-nodes is : which can be approximated by }{{\displaystyle \sum_{\left\ { \mathbf{b}_{k}\right\ } } } \,\int\mathrm{d}\mathbf{\delta}_{\nu k}p\left(y_{\nu}|\mathbf{{\delta}}_{\nu k};\mathbold{\gamma}\right)p\left(\mathbf{{\delta}}_{\nu k}|\mathbf{b}\right)\left[1+\varepsilon_{\nu k}\mathbf{b}_{k}^{\sf t}\nabla_{\mathbf{{\delta}}_{\nu k}}\ln p\left(y_{\nu}|\mathbf{{\delta}}_{\nu k};\mathbold{\gamma}\right)\right]}\\ & = & \frac{{\displaystyle \sum_{b_{k}^{{\rm a}^{\prime}}=\pm1}}\ , b_{k}^{{\rm a}^{\prime}}\int\mathrm{d}\mathbf{\delta}_{\nu k}p\left(y_{\nu}|\mathbf{{\delta}}_{\nu k};\mathbold{\gamma}\right)p\left(\mathbf{{\delta}}_{\nu k}|b_{k}^{{\rm a}^{\prime}}\right)\left[1+\varepsilon_{\nu k}b_{k}^{{\rm a}^{\prime}}{\displaystyle \frac{\partial}{\partial\delta_{\mu k}^{{\rm a}^{\prime}}}}\ln p\left(y_{\nu}|\mathbf{{\delta}}_{\nu k};\mathbold{\gamma}\right)\right]}{{\displaystyle \sum_{b_{k}^{{\rm a}^{\prime}}=\pm1}}\,\int\mathrm{d}\mathbf{\delta}_{\nu k}p\left(y_{\nu}|\mathbf{{\delta}}_{\nuk};\mathbold{\gamma}\right)p\left(\mathbf{{\delta}}_{\nu k}|b_{k}^{{\rm a}^{\prime}}\right)\left[1+\varepsilon_{\nu k}b_{k}^{{\rm a}^{\prime}}{\displaystyle \frac{\partial}{\partial\delta_{\mu k}^{{\rm a}^{\prime}}}}\ln p\left(y_{\nu}|\mathbf{{\delta}}_{\nu k};\mathbold{\gamma}\right)\right]}\\ & = & \frac{{\displaystyle \sum_{b_{k}^{{\rm a}}=\pm1}b_{k}^{{\rm a}}}{\displaystyle \prod_{\nu\neq\mu}}{\displaystyle \frac{1+\widehat{m}_{\nu k}^{t}b_{k}^{{\rm a}}}{\mathscr n_{\nu k}^{t}}}}{{\displaystyle \sum_{b_{k}^{{\rm a}}=\pm1}}{\displaystyle \prod_{\nu\neq\mu}}{\displaystyle \frac{1+\widehat{m}_{\nu k}^{t}b_{k}^{{\rm a}}}{\mathscr n_{\nu k}^{t}}}}=\frac{{\displaystyle \prod_{\nu\neq\mu}}{\displaystyle \frac{1+\widehat{m}_{\nu k}^{t}}{\mathscr n_{\nu k}^{t}}}-{\displaystyle \prod_{\nu\neq\mu}}{\displaystyle \frac{1-\widehat{m}_{\nu k}^{t}}{\mathscr n_{\nu k}^{t}}}}{{\displaystyle \prod_{\nu\neq\mu}}{\displaystyle \frac{1+\widehat{m}_{\nu k}^{t}}{\mathscr n_{\nu k}^{t}}}+{\displaystyle \prod_{\nu\neq\mu}}{\displaystyle \frac{1-\widehat{m}_{\nu k}^{t}}{\mathscr n_{\nu k}^{t}}}}=\tanh\left(\sum_{\nu\neq\mu}{\rm arctanh}\left(\widehat{m}_{\nu k}^{t}\right)\right)\,,\end{aligned}\ ] ] but since we have that the rs case the equation to be solved is : thus , the equation to be satisfied is: for the 1rsb case we have that resulting in the set of equations: which is equivalent to: where . 
observed that equation ( [ eq : w1rsb ] ) is equivalent to equation ( [ eq : wrs ] ) and that the ground state is independent of the indices 0 and .our goal is to devise an algorithm that returns a better estimate of the message at each iteration ; we therefore apply a variational approach that optimises the free parameters of the model at each iteration .we expect to find a suitable set of parameters that maximises the drop in error per bit rate .the second term of the right hand side of equation ( [ eq : gg ] ) is an implicit function of the parameters through and , therefore where the partial derivatives with respect to and are \left[n^{t}-m^{t}\tanh\left(\sqrt{f^{t}}z+e^{t}\right)\right]\\ \frac{\partial}{\partial f^{t}}\left(\frac{m^{t}}{\sqrt{n^{t}}}\right ) & = & \left(n^{t}\right)^{-\frac{3}{2}}\int\mathcal{d}z\,\frac{z}{2\sqrt{f^{t}}}\,\left[1-\tanh^{2}\left(\sqrt{f^{t}}z+e^{t}\right)\right]\left[n^{t}-m^{t}\tanh\left(\sqrt{f^{t}}z+e^{t}\right)\right]\,.\end{aligned}\ ] ] by the definition of the field we have that . exploiting gaussian properties of the distribution of ( [ eq : ef ] ) and we suppose that and are both explicit functions of the parameters , therefore \left\ { \frac{\partial e^{t}}{\partial\gamma_{i}}-\frac{1}{2}\,\frac{e^{t}}{f^{t}}\,\frac{\partial f^{t}}{\partial\gamma_{i}}\right\ } \,.\ ] ] by differentiation equation ( [ eq : gg ] ) and using equation ( [ eq : gammadiff ] ) one obtains }\left(\frac{\partial e^{t}}{\partial\gamma_{i}}-\frac{1}{2}\,\frac{e^{t}}{f^{t}}\,\frac{\partial f^{t}}{\partial\gamma_{i}}\right)\nonumber \\ & & -\left(n^{t}\right)^{-\frac{3}{2}}\int\mathcal{d}z\,\frac{n^{t}-m^{t}\tanh\left(\sqrt{f^{t}}z+e^{t}\right)}{\cosh^{2}\left(\sqrt{f^{t}}z+e^{t}\right)}\left(\frac{\partial e^{t}}{\partial\gamma_{i}}+\frac{z}{2\sqrt{f^{t}}}\frac{\partial f^{t}}{\partial\gamma_{i}}\right)\nonumber \\ & = & -\left(f^{t}n^{t}\right)^{-\frac{3}{2}}\int\frac{{\rm d}u}{\sqrt{2\pi}}\exp{\textstyle \left[-{\displaystyle \frac{\left(u - e^{t}\right)^{2}}{2f^{t}}}\right]}\,\frac{u}{2}\,\frac{n^{t}-m^{t}\tanh\left(u\right)}{\cosh^{2}\left(u\right)}\nonumber \\ & & -\left(\frac{\partial e^{t}}{\partial\gamma_{i}}-\frac{1}{2}\,\frac{e^{t}}{f^{t}}\,\frac{\partial f^{t}}{\partial\gamma_{i}}\right)\nonumber \\ & & \;\times\left\ { \frac{\lambda^{2}}{\sqrt{2\pi}f^{t}}\exp{\textstyle \left[-\!{\displaystyle \frac{\left(e^{t}\right)^{2}}{2f^{t}}}\right]}+\int\!\!\frac{{\rm d}u}{\sqrt{2\pi f^{t}\left(n^{t}\right)^{3}}}\exp{\textstyle \left[-\!{\displaystyle \frac{\left(u - e^{t}\right)^{2}}{2f^{t}}}\right]}\,\frac{n^{t}\!-\! m^{t}\tanh\left(u\right)}{\cosh^{2}\left(u\right)}\right\ } \,.\label{eq : casi}\end{aligned}\ ] ] to optimise with respect to one requires .the first term of the right hand side of equation ( [ eq : casi ] ) is independent of the index _i _ and is zero if and only if the integrand is an odd function .this is true if .this condition is only satisfied if which automatically makes . by the application of this condition , the sum between curly brackets in the second term at the right hand side of eq.([eq : casi ] )is always positive , which implies . the conditions and imply that : therefore , if the critical point is a minimum , then the expansion has a second term that satisfy the conditions : and , validating the optimisation process .
an efficient bayesian inference method for problems that can be mapped onto dense graphs is presented . the approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes . an assumption about the symmetry of the solutions is required for carrying out the averages ; here we extend the previous derivation based on a replica symmetric ( rs ) like structure to include a more complex one - step replica symmetry breaking ( 1rsb)-like ansatz . to demonstrate the potential of the approach it is employed for studying critical properties of the ising linear perceptron and for multiuser detection in code division multiple access ( cdma ) under different noise models . results obtained under the rs assumption in the non - critical regime give rise to a highly efficient signal detection algorithm in the context of cdma ; while in the critical regime one observes a first order transition line that ends in a continuous phase transition point . finite size effects are also observed . while the 1rsb ansatz is not required for the original problems , it was applied to the cdma signal detection problem with a more complex noise model that exhibits rsb behaviour , resulting in an improvement in performance .
a multi - party contract signing ( mpcs ) protocol is a communication protocol that allows a number of parties to sign a digital contract .the need for mpcs protocols arises , for instance , in the context of service level agreements ( slas ) and in supply chain contracting . in these domains ( electronic ) contract negotiations and signingare still mainly bilateral . instead of negotiating and signing one multi - party contract , in practice ,multiple bilateral negotiations are conducted in parallel .because negotiations can fail , parties may end up with just a subset of the pursued bilateral contracts .if a party is missing contracts with providers or subcontractors , it faces an _ overcommitment _ problem .if contracts with customers are missing , it has an _ overpurchasing _problem .both problems can be prevented by using fair multi - party contract signing protocols .existing optimistic mpcs protocols come in two flavors ._ linear _ mpcs protocols require that at any point in time at most one signer has enough information to proceed in his role by sending messages to other signers ._ broadcast _ mpcs protocols specify a number of communication rounds in each of which all signers send or broadcast messages to each other . however , neither of the two kinds of protocols is suitable for slas or supply chain contracting .the reason is that in both domains , the set of contractors typically has a hierarchical structure , consisting of main contractors and levels of subcontractors .it is undesirable ( and perhaps even infeasible ) for the main contracting partners and their subcontractors to directly communicate with another partner s subcontractors .this restriction immediately excludes broadcast protocols as potential solutions and forces linear protocols to be impractically large . in this paperwe introduce mpcs protocol specifications that support arbitrary combinations of linear and parallel actions , even within a protocol role .the message flow of such protocols can be specified as a directed acyclic graph ( dag ) and we therefore refer to them as _ dag _ mpcs protocols . a central requirement for mpcs protocols is _fairness_. this means that either all honest signers get all signatures on the negotiated contract or nobody gets the honest signers signatures .it is well known that in asynchronous communication networks , a deterministic mpcs protocol requires a trusted third party ( ttp ) to achieve fairness . 
in order to prevent the ttp from becoming a bottleneck ,protocols have been designed in which the ttp is only involved to resolve conflicts .a conflict may occur if a message gets lost , if an external adversary interferes with the protocol , or if signers do not behave according to the protocol specification .if no conflicts occur , the ttp does not even have to be aware of the execution of the protocol .such protocols are called _ optimistic _we focus on optimistic protocols in this paper .dag mpcs protocols not only allow for better solutions to the subcontracting problem , but also have further advantages over linear and broadcast mpcs protocols and we design three novel mpcs protocols that demonstrate this .one such advantage concerns communication complexity .linear protocols can reach the minimal number of messages necessary to be exchanged in fair mpcs protocols at the cost of a high number of protocol `` rounds '' .we call this the _ parallel complexity _ , which is a generalization of the round complexity measure for broadcast protocols , and define it in section [ sec : complexity ] .conversely , broadcast protocols can attain the minimal number of protocol rounds necessary for fair mpcs , but at the cost of a high message complexity .we demonstrate that dag mpcs protocols can simultaneously attain best possible order of magnitude for both complexity measures .as discussed in our related work section , the design of fair mpcs protocols has proven to be non - trivial and error - prone .we therefore not only prove our three novel dag mpcs protocols to be fair , but we also derive necessary and sufficient conditions for fairness of any optimistic dag mpcs protocol .these conditions can be implemented and verified automatically , but they are still non - trivial . therefore , for a slightly restricted class of dag protocols , we additionally derive a fairness criterion that is easy to verify .* contributions . *our main contributions are ( i ) the definition of a syntax and interleaving semantics of dag mpcs protocols ( section [ sec : spec_exec_model ] ) ; ( ii ) the definition of the message complexity and parallel complexity of such protocols ( section [ sec : complexity ] ) ; ( iii ) a method to derive a full mpcs specification from a _ skeletal graph _ , including the ttp logic ( section [ sec : mpcs ] ) ; ( iv ) necessary and sufficient conditions for fairness of dag mpcs protocols ( section [ sec : fairness ] ) ; ( v ) minimal complexity bounds for dag mpcs protocols ( section [ sec : minimal_complexity ] ) ; ( vi ) novel fair mpcs protocols ( section [ sec : constructions ] ) ; ( vii ) a software tool that verifies whether a given mpcs protocol is fair ( described in appendix [ s : tool ] ) .we build on the body of work that has been published in the field of fair optimistic mpcs protocols in asynchronous networks .the first such protocols were proposed by baum - waidner and waidner , viz . a round - based broadcast protocol and a related round - based linear protocol .they showed subsequently that these protocols are round - optimal .this is a complexity measure that is related to , but less general than , parallel complexity defined in the present paper .garay et al . 
introduced the notion of _ abuse - free _ contract signing .they developed the technique of _ private contract signature _ and used it to create abuse - free two - party and three - party contract signing protocols .garay and mackenzie proposed mpcs protocols which were later shown to be unfair using the model checker mocha and improved by chadha et al .mukhamedov and ryan developed the notion of _ abort chaining attacks _ and used such attacks to show that chadha et al.s improved version does not satisfy fairness in cases where there are more than five signers .they introduced a new optimistic mpcs protocol and proved fairness for their protocol by hand and used the nusmv model checker to verify the case of five signers .zhang et al . have used mocha to analyze the protocols of mukhamedov and ryan and of mauw et al . .mauw et al . used the notion of abort chaining to establish a lower bound on the message complexity of linear fair mpcs protocols .this complexity measure is generalized in the present paper to dag mpcs protocols .kordy and radomirovi have shown an explicit construction for fair linear mpcs protocols .the construction covers in particular the protocols proposed by mukhamedov and ryan and the linear protocol of baum - waidner and waidner , but not the broadcast protocols .the dag mpcs protocol model and fairness results developed in the present paper encompass both types of protocols .they allow for arbitrary combinations of linear and parallel behaviour ( i.e. partial parallelism ) , and in addition allow for parallelism within signer roles .mpcs protocols combining linear and parallel behaviour have not been studied yet . apart from new theoretical insights to be gained from designing and studying dag mpcs protocols , we anticipate interesting application domains in which multiple parties establish a number of related contracts , such as slas .emerging business models like software as a service require a negotiation to balance a customer s requirements against a service provider s capabilities .the result of such a negotiation is often complicated by the dependencies between several contracts and multi - party protocols may serve to mitigate this problem .karaenke and kirn propose a multi - tier negotiation protocol to mitigate the problems of overcommitment and overpurchasing .they formally verify that the protocol solves the two observed problems , but do not consider the fairness problem .slas and negotiation protocols have also been studied in the multi - agent community .an example is the work of kraus who defines a multi - party negotiation protocol in which agreement is reached if all agents accept an offer . if the offer is rejected by at least one agent , a new offer will be negotiated .another interesting application area concerns _ supply chain contracting _ .a supply chain consists of a series of firms involved in the production of a product or service with potentially complex contractual relationships .most literature in this area focuses on economic aspects , like pricing strategies .an exception is the recent work of pavlov and katok in which fairness is studied from a game - theoretic point of view .the study of multi - party signing protocols and multi - contract protocols has only recently been identified as an interesting research topic in this application area .the purpose of a multi - party contract signing protocol is to allow a number of parties to sign a digital contract in a fair way . 
in this sectionwe recall the basic notions pertaining to mpcs protocols .we use to denote the set of signers involved in a protocol , to denote the contract , and to denote the ttp .a signer is considered _ honest _( cf.definition [ def : honestsigner ] ) if it faithfully executes the protocol specification .an mpcs protocol is said to be _optimistic _ if its execution in absence of adversarial behaviour and failures and with all honest signers results in signed contracts for all participants without any involvement of .optimistic mpcs protocols consist of two subprotocols : the _ main _ protocol that specifies the exchange of _ promises _ and _ signatures _ by the signers , and the _ resolve _ protocol that describes the interaction between the agents and in case of a failure in the main protocol .a promise made by a signer indicates the intent to sign .a promise can only be generated by signer .the content can be extracted from the promise and the promise can be verified by signer and by .a signature can only be generated by and by , if has a promise .the content can be extracted and the signature can be verified by anybody .cryptographic schemes that allow for the above properties are digital signature schemes and private contract signatures .mpcs protocols must satisfy at least two security requirements , namely _ fairness _ and _ timeliness_. an optimistic mpcs protocol for contract is said to be _fair _ for an honest signer if whenever some signer obtains a signature on from , then can obtain a signature on from all signers participating in the protocol .an optimistic mpcs protocol is said to satisfy _ timeliness _ , if each signer has a recourse to stop endless waiting for expected messages .the fairness requirement will be the guiding principle for our investigations and timeliness will be implied by the communication model together with the behaviour of the ttp .a formal definition of fairness is given in section [ sec : fairness ] .a further desirable property for mpcs protocols is abuse - freeness which was introduced in .an optimistic mpcs protocol is said to be _ abuse - free _ , if it is impossible for any set of signers at any point in the protocol to be able to prove to an outside party that they have the power to terminate or successfully complete the contract signing .abuse - freeness is outside the scope of this paper .let with be a directed acyclic graph .let be vertices .we say that _ causally precedes _ , denoted , if there is a directed path from to in the graph .we write for .we extend _ causal precedence _ to the set as follows . given two edges , we say that _ causally precedes _ and write , if . similarly , we write if and if .let .if causally precedes we also say that _ causally follows _we say that a set is _ causally closed _if it contains all causally preceding vertices and edges of its elements , i.e. , .by we denote the set of edges incoming to and by the set of edges outgoing from .formally , we have and .the communication between signers is asynchronous and messages can get lost or be delayed arbitrary long .the communication channels between signers and the ttp are assumed to be _ resilient_.this means that the messages sent over these channels are guaranteed to be delivered eventually . 
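The causal-precedence and causal-closure notions defined above are plain reachability computations on the protocol DAG. A minimal sketch, assuming the networkx library (our choice; the paper does not prescribe an implementation), and returning vertices only where the paper's closure also collects preceding edges:

```python
import networkx as nx

def causally_precedes(G, u, v):
    """True if there is a directed path from u to v in the DAG G (u != v)."""
    return u != v and nx.has_path(G, u, v)

def causal_closure(G, S):
    """Smallest causally closed superset of the vertex set S: S together with
    every vertex that causally precedes an element of S."""
    closed = set(S)
    for v in S:
        closed |= nx.ancestors(G, v)
    return closed

def in_edges(G, v):
    """Edges incoming to v."""
    return set(G.in_edges(v))

def out_edges(G, v):
    """Edges outgoing from v."""
    return set(G.out_edges(v))

# toy DAG: a -> b -> d and a -> c -> d
G = nx.DiGraph([("a", "b"), ("b", "d"), ("a", "c"), ("c", "d")])
print(causally_precedes(G, "a", "d"))   # True
print(causal_closure(G, {"d"}))          # {'a', 'b', 'c', 'd'}
```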
in order to simplify our reasoning, we assume that the channels between protocol participants are confidential and authentic .we consider the problem of delivering confidential and authentic messages in a dolev - yao intruder model to be orthogonal to the present problem setting .we assume that contains the contract text along with fresh values ( contributed by every signer ) which prevent different protocol executions from generating interchangeable protocol messages .furthermore we assume that contains all information that needs in order to reach a decision regarding the contract in case it is contacted by a signer .this information contains the protocol specification , an identifier for , identifiers for the signers involved in the protocol , and the assignment of signers to protocol roles in the protocol specification .we assume the existence of a designated resolution process per signer which coordinates the various resolution requests of the signer s parallel threads .it will ensure that is contacted at most once by the signer .after having received the first request from one of the signer s threads , this resolution process will contact on behalf of the signer and store s reply .this reply will be forwarded to all of the signer s threads whenever they request resolution .our dag protocol model is a multi - party protocol model in an asynchronous network with a ttp and an adversary that controls a subset of parties .a _ dag protocol specification _ ( or simply , a _ protocol specification _ ) is a directed acyclic graph in which the vertices represent the state of a signer and the edges represent either a causal dependency between two states ( an -edge ) or the sending of a message .a vertex outgoing edges can be executed in parallel .edges labeled with denote that a signer contacts .[ def : parallel_prot_spec ] let be a set of roles such that and a set of messages .let and be two symbols , such that . by and denote the sets and , respectively .a _ dag protocol specification _ is a labeled directed acyclic graph , where 1 . is a directed acyclic graph ; 2 . is a labeling function assigning roles to vertices ; 3 . is an edge - labeling function that satisfies 1 . , 2 . ; 4 . is a function associated with -labeled edges .a message edge specifies that is to be sent from role to role .an -edge represents internal progress of role and allows to specify a causal order in the role s events .an exit edge denotes that a role can contact the ttp .the ttp then uses the function to determine a reply to the requesting role , based on the sequence of messages that it has received . in section[ sec : mpcs ] exit messages and the function are used to model the resolve protocol of the ttp .we give three examples of dag protocols in figure [ fig : examples ] , represented as message sequence charts ( mscs ) .the dots denote the vertices , which we group vertically below their corresponding role names .vertical lines in the mscs correspond to -edges and horizontal or diagonal edges represent message edges .we mark edges labeled with signing messages with an `` s '' and we leave out the edge labels of promise messages .we do not display exit edges , they are implied by the mpcs protocol specification .a box represents the splitting of a role into two parallel threads , which join again at the end of the box .we revert to a traditional representation of labeled dags if it is more convenient ( see , e.g. 
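As a reading aid for the definition above, the following sketch bundles the ingredients of a DAG protocol specification — vertices, edges, the role labelling, the labelling of edges into send, internal-progress and exit edges, and message payloads — into one container. The field names and the partial well-formedness checks are ours; the formal conditions of the definition are only approximated here.

```python
from dataclasses import dataclass, field
from typing import Dict, Hashable, Set, Tuple

Vertex = Hashable
Edge = Tuple[Vertex, Vertex]

@dataclass
class DagProtocolSpec:
    """Minimal container for a DAG protocol specification: a directed acyclic
    graph with a role label per vertex, an edge label per edge ('send', 'seq'
    for the internal causal edges, or 'exit'), and a message payload for send
    edges.  Acyclicity and the TTP reply function are not modelled here; this
    is an illustrative skeleton, not the paper's exact formal object."""
    vertices: Set[Vertex]
    edges: Set[Edge]
    role: Dict[Vertex, str]                  # sigma: V -> roles
    edge_label: Dict[Edge, str]              # mu: E -> {'send', 'seq', 'exit'}
    message: Dict[Edge, str] = field(default_factory=dict)  # payload of send edges

    def check(self) -> None:
        for (u, v) in self.edges:
            assert u in self.vertices and v in self.vertices
        for e, lab in self.edge_label.items():
            assert lab in {"send", "seq", "exit"}
            if lab == "send":
                assert e in self.message, "send edges carry a message"
            if lab == "seq":
                u, v = e
                # internal-progress edges stay within a single role
                assert self.role[u] == self.role[v]
```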
, figure [ fig : exampledag ] ) .the first protocol in figure [ fig : examples ] is a classical linear 2-party contract signing protocol .it consists of one round of promises followed by a round of exchanging signatures .the second protocol is the classical broadcast protocol for two signers .it consists of two rounds of promises , followed by one round of signatures .the third protocol is a novel dag protocol , showing the power of in - role parallelism .it is derived from the broadcasting protocol by observing that its fairness does not depend on the causal order of the first two vertices of each of the roles .let be a protocol specification .the _ restriction _ of to role , denoted by , is the protocol specification , where the execution state of a protocol consists of the set of events , connected to vertices or edges , that have been executed .let be a protocol specification .a _ state _ of is a set .the set of states of is denoted by . the _ initial state _ of is defined as . in order to give dag protocols a semantics ,we first define the _ transition relation _ between states of a protocol .[ def : transition ] let be a protocol specification , the set of transition labels , and the states of .we say that transitions with label from state into , denoted by , iff and one of the following conditions holds 1 . and , such that and , 2 . and , such that , and , 3 . and , such that , and , 4 . and , such that and . in definition [ def : transition ] , receive events are represented by vertices , all other events by edges . by the first condition , a receive event can only occur if all events assigned to the incoming edges have occurred .in contrast , the sending of messages ( second condition ) can take place at any time .the third condition states that an -edge can be executed if the event on which it causally depends has been executed .finally , like send events , an exit event can occur at any time .every event may occur at most once , however .this is ensured by the condition .the transitions model all possible behavior of the system. the behavior of honest agents in the system will be restricted as detailed in the following subsection .we denote sequences by ] , , such that .the set of all executions of is denoted by .if ] for a state .2 . \cdot \sigma)_{p}= \begin{cases } [ { \mathit{s}}]_{p}\cdot \sigma_{p } & \text { if } [ { \mathit{s}}]_{p}= [ { \mathit{s}}']_{p}\\ [ { \mathit{s}}]_{p}\cdot[\alpha]\cdot ( [ { \mathit{s}}']\cdot \sigma)_{p } & \text { else . }\end{cases} ] , we say that is _ closed _ if the following three conditions are satisfied 1 . is causally closed , for every , 2 . , for every , 3 .\in\operatorname{\operatorname{exe}({\mathcal{p}})} ] be an execution of a protocol . by denote the number of labels , for , such that .[ lem : snd - count ] for any two closed executions and of a protocol we have . the proof is given in the appendixthe first measure expressing the complexity of a protocol is called _ message complexity_. it counts the overall number of messages that have been sent in a closed execution of a protocol .let be a protocol specification and let .the _ message complexity _ of , denoted by , is defined as .lemma [ lem : snd - count ] guarantees that the message complexity of a protocol is well defined .the second complexity measure is called _parallel complexity_. it represents the minimal time of a closed execution assuming that all events which can be executed in parallel are executed in parallel. 
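The transition relation and the message-complexity measure described above can be paraphrased in code. The sketch below is our reading of the four conditions (a receive event needs all its incoming edges executed, send and exit events may fire at any time, an internal edge needs its source vertex, and no event occurs twice), building on the DagProtocolSpec container sketched earlier; it is illustrative only.

```python
def enabled_events(spec, state):
    """Events (vertices or edges) that may be executed next from `state`,
    where `state` is the set of already executed events."""
    enabled = set()
    for v in spec.vertices:
        incoming = {e for e in spec.edges if e[1] == v}
        if v not in state and incoming <= state:
            enabled.add(v)                       # receive event
    for e in spec.edges:
        if e in state:
            continue                             # every event occurs at most once
        lab = spec.edge_label[e]
        if lab in ("send", "exit"):
            enabled.add(e)                       # may occur at any time
        elif lab == "seq" and e[0] in state:
            enabled.add(e)                       # internal progress
    return enabled

def message_complexity(spec, execution):
    """Number of send events in a (closed) execution, given as the sequence
    of executed events; cf. the definition above."""
    return sum(1 for ev in execution if spec.edge_label.get(ev) == "send")
```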
the parallel complexity of a protocol is defined as the length of a maximal chain of causally related send edges .the _ parallel complexity _ of a protocol , denoted by , is defined as \in { \ensuremath{e}}^ { * } } : \\\forall_{1\leq i \leq n } : { \mu}(e_{i } ) = { \mathit{send}}\wedge \forall_{1\leq i < n } : e_{i}\prec e_{i+1}\text{.}\ ] ] the message complexity of the first protocol in figure [ fig : examples ] is 4 , which is known to be optimal for two signers .its parallel complexity is 4 , too .the message complexity of the other two protocols is 6 , but their parallel complexity is 3 , which is optimal for broadcasting protocols with two signers .we now define a class of optimistic mpcs protocols in the dag protocol model .the key requirements we want our dag mpcs protocol specification to satisfy , stated formally in definition [ def : parallelmpcsspec ] , are as follows .the messages exchanged between signers in the protocol are of two types , promises , denoted by , and signatures , denoted by .every promise contains information about the vertex from which it is sent .this is done by concatenating the contract with the vertex the promise originates from and is denoted by .the signers can contact the ttp at any time .this is modeled with exit edges : every vertex such that ( the set of all signers ) is adjacent to a unique vertex , .the communication with is represented by .the set of vertices with outgoing signature messages is denoted by .[ def : parallelmpcsspec ] let be a protocol specification , be a finite set of signers , be a contract , and . is called a _dag mpcs protocol specification for , _ if for unique existential quantification . ] 1 .[ cond : ttp ] , 2 .[ cond : transitivity ] , 3 .[ cond : elabel ] 4 . , where denotes a list of signatures on , one by each signer .we write for the largest subset of which satisfies the set is called the _signing set_. we represent dag mpcs protocols as _ skeletal _ graphs as shown in figure [ fig : skeletal ] .the full graph , shown in figure [ fig : full ] , is obtained from the skeletal graph by adding all edges required by condition [ cond : transitivity ] of definition [ def : parallelmpcsspec ] and extending according to condition [ cond : elabel ] .the edges are dashed in the graphs .the shaded vertices in the graphs indicate the vertices that are in the signing set .+ we define the _ knowledge _ of a vertex to be the set of message edges causally preceding , and incoming to a vertex of the same role .the knowledge of a vertex represents the state right after its receive event .we define the _ pre - knowledge _ of a vertex to be .the pre - knowledge represents the state just _ before _ the vertex receive event has taken place .we extend both definitions to sets : we define the _ initial set _ of , denoted to be the set of vertices of the protocol specification for which the pre - knowledge of the same role does not contain an incoming edge by every other role .formally , the _ end set _ of , denoted , is the set of vertices of the protocol specification at which the corresponding signer possesses all signatures .let be a dag mpcs protocol specification .the resolve protocol is a two - message protocol between a signer and the ttp , initiated by the signer .the communication channels for this protocol are assumed to be resilient , confidential , and authentic . is assumed to respond immediately to the signer .this is modeled in via an exit edge from a vertex to the unique vertex . 
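Parallel complexity, as defined above, is the length of a longest chain of causally ordered send edges. A brute-force sketch over the (small) protocol graphs considered here, again assuming networkx and the earlier DagProtocolSpec container; the extension of causal precedence from vertices to edges follows our reading of the earlier definition (the head of one edge reaches the tail of the next).

```python
import networkx as nx
from functools import lru_cache

def parallel_complexity(spec):
    """Length of a longest chain e_1 < e_2 < ... < e_n of send edges in which
    each edge causally precedes the next, matching the definition above."""
    G = nx.DiGraph()
    G.add_nodes_from(spec.vertices)
    G.add_edges_from(spec.edges)
    send_edges = [e for e in spec.edges if spec.edge_label[e] == "send"]

    def precedes(e, f):
        # e's head coincides with, or reaches, f's tail
        return nx.has_path(G, e[1], f[0])

    @lru_cache(maxsize=None)
    def longest_from(i):
        best = 1
        for j in range(len(send_edges)):
            if j != i and precedes(send_edges[i], send_edges[j]):
                best = max(best, 1 + longest_from(j))
        return best

    return max((longest_from(i) for i in range(len(send_edges))), default=0)
```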
s response is given by the function , .if is the sequence of messages sent by the signers to , then is s response for the last signer in the sequence .the function will be stated formally in definition [ def : ttpfun ] .we denote the resolve protocol in the following by .the signer initiating is .he sends the list of messages assigned to the vertices in his pre - knowledge , prepended by , to .this demonstrates that has executed all receive events causally preceding .we denote s message for by : the promise , which is the first element of , is used by to extract the contract , to learn at which step in the protocol claims to be , and to create a signature on behalf of when necessary .all messages received from the signers are stored . performs a deterministic decision procedure , shown in algorithm [ algo : resolve ] , on the received message and existing stored messages and _ immediately _ sends back or the list of signatures .our decision procedure is based on .the input to the algorithm consists of a message received by the from a signer and state information which is maintained by . extracts the contract and the dag mpcs protocol specification from . for each contract , maintains the following state information .a list of all messages received from signers , a set of vertices the signers contacted from , a set of signers considered to be dishonest , and the last decision made .if has not been contacted by any signer regarding contract , then .else , is equal to or the list of signatures on , one by each signer . verifies that the request is legitimate in that the received message is valid and the requesting signer is not already considered to be dishonest .if these preliminary checks pass , the message is appended to .this is described in algorithm [ algo : resolve ] in lines [ line : rfirst ] through [ line : evidence ] .the main part of the algorithm , starting at line [ line : forcedabort ] , concerns the detection of signers who have continued the main protocol execution after executing the resolve protocol .if has not received a promise from every other signer in the protocol ( i.e. the if clause in line [ line : forcedabort ] is not satisfied ) , then sends back the last decision made ( line [ line : retdecision ] ) .this decision is an token unless has been contacted before and decided to send back a signed contract .if has received a promise from every other signer , computes the set of dishonest signers ( lines [ line : caughtrepeat1 ] through [ line : caughtrepeat3 ] ) by adding to it every signer which has carried out the resolve protocol , but can be seen to have continued the protocol execution ( line [ line : caughtrepeat2 ] ) based on the evidence the ttp has collected . if is the only honest signer that has contacted until this point in time , the decision is made to henceforth return a signed contract . 
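The following sketch paraphrases the per-contract TTP state and the decision flow described in prose above (Algorithm [algo:resolve]). It abstracts away message validation and promise verification, and the helper continued_after_resolving is a hypothetical stand-in for the evidence check performed by the TTP; treat it as a reading aid rather than the paper's actual algorithm.

```python
def continued_after_resolving(signer, resolve_vertex, evidence):
    """Hypothetical evidence check: does `evidence` show that `signer`
    executed a protocol step at or causally after `resolve_vertex`?
    A real implementation would inspect the pre-knowledge submitted to the
    TTP; this placeholder always answers False."""
    return False

class TTPState:
    """Per-contract state kept by the TTP: messages received so far, the
    signers that have resolved, the signers considered dishonest, and the
    last decision ('abort' or 'signed')."""
    def __init__(self, signers):
        self.signers = set(signers)
        self.received = []            # (signer, vertex, evidence) triples
        self.contacted = set()
        self.dishonest = set()
        self.decision = None          # None, 'abort', or 'signed'

    def resolve(self, signer, vertex, evidence, has_promise_from_all):
        # preliminary checks: requester not already dishonest, one resolve per signer
        if signer in self.dishonest or signer in self.contacted:
            return self.decision or "abort"
        self.contacted.add(signer)
        self.received.append((signer, vertex, evidence))
        if not has_promise_from_all:
            # not enough information: stick with (or initialise) the last decision
            self.decision = self.decision or "abort"
            return self.decision
        # mark earlier resolvers that, per the collected evidence, kept executing
        for q, w, _ in self.received[:-1]:
            if continued_after_resolving(q, w, evidence):
                self.dishonest.add(q)
        if self.contacted - self.dishonest == {signer}:
            # requester is the only honest signer to have contacted so far
            self.decision = "signed"
        return self.decision or "abort"
```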
[ def : ttpfun ]let be a dag mpcs protocol specification and the ttp decision procedure from algorithm [ algo : resolve ] .then is defined for by where is the projection to the first coordinate and is defined inductively by thus the function represents the response of the ttp in the protocol for all executions of .we say that a dag mpcs protocol execution is fair for signer if one of the following three conditions is true : ( i ) no signer has received a signature of ; ( ii ) has received signatures of all other signers ; ( iii ) has not received an token from the ttp .the last condition allows an execution to be fair as long as there is a possibility for the signer to receive signatures of all other signers .the key problem in formalizing these conditions is to capture under which circumstances the ttp responds with an token to a request by a signer .the ttp s response is dependent on the decision procedure which in turn depends on the order in which the ttp is contacted by the signers .since the decision procedure is deterministic , it follows that the function can be determined for every execution ] of is _ fair _ for signer if one of the following conditions is satisfied : 1 . has not sent a signature and no signer has received signatures from . 2 . has received signatures from all other signers . 3 . has not received an token from .)\neq{\text{``abort''}}\ ] ] if none of these conditions are satisfied , the execution is unfair for .[ def : fairness ] an mpcs protocol specification is said to be fair , if every execution of is fair for all signers that are honest in . by the ttp decision procedure, returns an token if contacted from a vertex .thus a necessary condition for fairness is that an honest signer executes all steps of the initial set causally before all steps of the signing set that are not in the end set : if contacts from a vertex , then responds with an token if it has already issued an token to another signer who is not in the set .this condition can be exploited by a group of dishonest signers in an _ abort chaining attack _the following definition states the requirements for a successful abort chaining attack .for ease of reading , we define the predicate .the predicate is true if there is no evidence ( pre - knowledge ) at the vertices in that the signer has sent a message at or causally after : this is precisely the criterion used by to verify honesty in algorithm [ algo : resolve ] , line [ line : caughtrepeat2 ] .[ def : pacseq ]let be a contract and .a sequence over is called an abort - chaining sequence ( ac sequence ) for if the following conditions hold : 1 .[ paccond : forcedabort ] signer receives an abort token : 2 . [ paccondition1 ] no signer contacts more than once : 3 .[ paccondition4 ] the present and previous signer to contact are considered honest by : 4 .[ pacconditionnew ] the last signer to contact has not previously received all signatures : 5 .[ paccondition3 ] the last signer to contact has sent a signature before contacting or in a parallel thread : the ac sequence represents the order in which signers execute the resolve protocol with .a vertex in the sequence implies an exit transition via the edge in the protocol execution .an abort chaining attack must start at a step in which has no choice but to respond with an abort token due to lack of information .condition [ paccond : forcedabort ] covers this .each signer may run the resolve protocol at most once .this is covered by condition [ paccondition1 ] . 
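The three per-execution fairness conditions stated above translate directly into a predicate over quantities extracted from an execution trace. The argument names below are ours, and the abort-chaining machinery is not reproduced here; this is only a compact restatement of the definition.

```python
def execution_fair_for(p, sent_signature, others_received_from_p,
                       received_signatures, received_abort, all_signers):
    """Fairness of a finished execution for signer p, as defined above:
    (i) p has not sent a signature and nobody has received a signature of p,
    (ii) p has received signatures from all other signers, or
    (iii) p has not received an abort token from the TTP."""
    cond1 = (not sent_signature) and (not others_received_from_p)
    cond2 = set(received_signatures) >= (set(all_signers) - {p})
    cond3 = not received_abort
    return cond1 or cond2 or cond3
```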
to ensure that continues to issue tokens , condition [ paccondition4 ]requires that there must always be a signer which according to s evidence has not continued protocol execution after contacting .to complete an abort chaining attack , there needs to be a signer which has issued a signature ( condition [ paccondition3 ] ) , but has not received a signature ( conditions [ pacconditionnew ] and [ paccondition3 ] ) and will not receive a signed contract from because there is an honest signer ( by condition [ paccondition4 ] ) which has received an token .it is not surprising ( but nevertheless proven in the appendix ) that a protocol with an ac sequence is unfair .however , the converse is true , too .[ th : fairness ] let be a dag mpcs protocol .then is fair if and only if it has no ac sequences . the proof of this and the following theorems is given in the appendix .theorem [ th : fairness ] reduces the verification of fairness from analyzing all executions to verifying that there is no ac - sequence ( definition [ def : pacseq ] ) .this , however , is still difficult to verify in general .the following two results are tools to quickly assess fairness of dag mpcs protocols .the first is an unfairness criterion and the second is a fairness criterion for a large class of dag mpcs protocols .the following theorem states that in a fair dag mpcs protocol , the union of paths from the initial set to every vertex must contain all permutations of all signers ( other than ) as subsequences . in the class of linear mpcs protocols , considered in ,this criterion was both necessary and sufficient .we show in example [ ex : insufficient ] below that this criterion is not sufficient for fairness of dag mpcs protocols . for , ,we denote by the set of all directed paths from a vertex in to .if is a sequence of vertices , we denote by the corresponding sequence of signers .the sequences of signers corresponding to the paths from to is denoted by .[ thm : unfairness ] let .let be an optimistic fair dag mpcs protocol , if , then for every permutation of signers in , there exists a sequence in which contains as a ( not necessarily consecutive ) subsequence .the converse of the theorem is not true as the following example shows . in particular, this example demonstrates that the addition of a vertex to a fair dag mpcs protocol does not necessarily preserve fairness .+ + [ ex : insufficient ] the protocol in figure [ fig : classical3top ] is fair by the results of . by theorem [ thm :unfairness ] , for every vertex every permutation of signers in occurs as a subsequence of a path in .the protocol in figure [ fig : classical3withb ] is obtained by adding the vertex as a parallel thread of signer .thus the permutation property on the set of paths is preserved , yet the protocol is not fair : an ac sequence is .the vertex is in , the evidence presented to the ttp at includes the vertices causally preceding , thus is considered to be honest .the evidence presented by signer at are the vertices causally preceding proving that is dishonest , but is honest .thus has sent a signature at but will not receive signatures from and .if a protocol has no in - role parallelism , then the converse of theorem [ thm : unfairness ] is true .thus we have a simple criterion for the fairness of such protocols .[ thm : fairness - causal ] let be an optimistic dag mpcs protocol without in - role parallelism .let if all paths from to contain all permutations of then is fair for . 
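The necessary condition of the theorem above — every permutation of the signers must occur as a subsequence of some path from the initial set to the vertex in question — and its sufficiency for protocols without in-role parallelism can be checked mechanically. A small sketch, assuming the paths are already given as sequences of signer labels:

```python
from itertools import permutations

def is_subsequence(sub, seq):
    """True if `sub` occurs in `seq` as a (not necessarily consecutive)
    subsequence."""
    it = iter(seq)
    return all(any(x == y for y in it) for x in sub)

def contains_all_permutations(paths_as_signers, signers):
    """Necessary condition of the unfairness theorem above: every permutation
    of `signers` must be a subsequence of some path.  For protocols without
    in-role parallelism the text states this is also sufficient for fairness."""
    return all(
        any(is_subsequence(perm, path) for path in paths_as_signers)
        for perm in permutations(signers)
    )

# toy check with two signers a, b
paths = [("a", "b", "a", "b"), ("b", "a", "b")]
print(contains_all_permutations(paths, ["a", "b"]))   # True
```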
by adding a causal edge between vertex and vertex of the protocol in figure [ fig : classical3withb ] , as shown in figure [ fig : classical3withbcausal ] , we obtain again a fair protocol .in this section we illustrate the theory and results obtained in the preceding sections by proving optimality results and constructing a variety of protocols .we prove lower bounds for the two complexity measures defined in our model , viz .parallel and message complexity . a minimal 4-party fair broadcasting protocol .] the minimal parallel complexity for an optimistic fair dag mpcs protocol is , where is the number of signers in the protocol . by theorem [ thm : unfairness ] , every permutation of signers in the protocol must occur as a subsequence in the set of paths from a causally last vertex in the initial set to a vertex in the signing set .since a last vertex in the initial set must have a non - empty knowledge , there must be a message edge causally preceding .there are at least edges in the path between the vertices associated with the signers in a permutation and there is at least one message edge outgoing from a vertex in the signing set .thus a minimal length path for such a protocol must contain edges . the minimal parallel complexity is attained by the broadcast protocols of baum - waidner and waidner .an example is shown in figure [ fig : butterfly ] .[ thm : minimal_message_complexity ] the minimal message complexity for an optimistic fair dag mpcs protocol is , where is the number of signers in the protocol and is the length of the shortest sequence which contains all permutations of elements of an -element set as subsequences .the minimal message complexities for are .the minimal message complexities for are smaller or equal to .note that while broadcasting protocols have a linear parallel complexity , they have a cubic message complexity , since in each of the rounds each of the signers sends a message to every other signer .linear protocols on the other hand have quadratic minimal message and parallel complexities . in the followingwe demonstrate that there are dag protocols which attain a linear parallel complexity while maintaining a quadratic message complexity .[ [ single - contractor - multiple - subcontractors . 
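The message-complexity bound above involves the length of a shortest sequence containing all permutations of an n-element set as subsequences. The sketch below verifies that property and finds the minimal length by exhaustive search for very small n; the paper relies on an explicit construction rather than search, so this is purely illustrative.

```python
from itertools import permutations, product

def is_supersequence(seq, n):
    """Does `seq` (over alphabet 0..n-1) contain every permutation of
    0..n-1 as a subsequence?"""
    def contains(perm):
        it = iter(seq)
        return all(any(x == y for y in it) for x in perm)
    return all(contains(p) for p in permutations(range(n)))

def shortest_supersequence_length(n, max_len=16):
    """Brute-force search for the length of a shortest sequence containing
    all permutations of an n-element set as subsequences -- the quantity
    entering the minimal message complexity bound above.  Exponential in the
    length, so only sensible for very small n."""
    for length in range(n, max_len + 1):
        for cand in product(range(n), repeat=length):
            if is_supersequence(cand, n):
                return length
    return None

print(shortest_supersequence_length(3))   # 7
```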
] ] single contractor , multiple subcontractors .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + a motivation for fair mpcs protocols given in is a scenario where a single entity , here referred to as a contractor , would like to sign contracts with independent companies , in the following referred to as subcontractors .the contractor has an interest in either having all contracts signed or to not be bound by any of the contracts .the subcontractors have no contractual obligations towards each other .it would therefore be advantageous if there is no need for the subcontractors to directly communicate with each other .the solutions proposed in are linear protocols .their message and parallel complexities are thus quadratic .linear protocols can satisfy the requirement that subcontractors do not directly communicate with each other only by greatly increasing the message and parallel complexities .the protocol we propose here is a dag , its message complexity is and its parallel complexity is for signers .it therefore combines the low parallel complexity typically attained by broadcasting protocols with the low message complexity of linear protocols .additionally , the protocol proposed does not require any direct communication between subcontractors .figure [ fig : contractor ] shows a single contractor with three subcontractors .the protocol can be subdivided into five rounds , one round consisting of the subcontractors sending a message to the contractor followed by the contractor sending a message to the subcontractors . in the first four rounds promisesare sent , in the final round signatures are sent .the protocol can be easily generalized to more than three subcontractors .for every subcontractor added , one extra round of promises needs to be included in the protocol specification .the protocol is fair by theorem [ thm : fairness - causal ] .the msc shown in figure [ fig : contractor ] resembles the skeletal graph from which it was built .the message contents can be derived by computing the full graph according to condition [ cond : transitivity ] of definition [ def : parallelmpcsspec ] .the result is as follows . in each round of the protocol ,each of the subcontractors sends to the contractor a promise for the contractor and for each of the other subcontractors .the contractor then sends to each of the subcontractors all of the promises received and his own promise .the final round is performed in the same manner , except that promises are replaced by signatures .[ [ two - contractors - with - joint - subcontractors . ] ] two contractors with joint subcontractors .+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + figure [ fig : two - twojoint ] shows a protocol where two contractors want to sign a contract involving two subcontractors .the subcontractors are independent of each other .after the initial step , where all signers send a promise to the first contractor , there are three protocol rounds , one round consisting of the contractor sending promises to the two subcontractors and in parallel which in turn send promises to the second contractor .a new round is started with the second contractor sending the promises received with his own promise to contractor .this protocol , too , can be generalized to several independent subcontractors . 
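A rough reconstruction of the single-contractor message flow described above: k + 2 rounds for k subcontractors (which gives the five rounds shown for k = 3), each round consisting of every subcontractor sending to the contractor followed by the contractor replying to every subcontractor, with the last round carrying signatures instead of promises. The function below is our reading of the figure, not the paper's formal skeletal graph, so counts derived from it should be checked against the actual specification.

```python
def contractor_skeleton(k):
    """Message flow for one contractor C and k subcontractors S1..Sk,
    returned as (round, sender, receiver, kind) tuples."""
    edges = []
    rounds = k + 2                      # k + 1 promise rounds, one signature round
    for r in range(1, rounds + 1):
        kind = "sig" if r == rounds else "promise"
        for i in range(1, k + 1):
            edges.append((r, f"S{i}", "C", kind))   # subcontractors -> contractor
        for i in range(1, k + 1):
            edges.append((r, "C", f"S{i}", kind))   # contractor -> subcontractors
    return edges

msgs = contractor_skeleton(3)
print(len(msgs))   # 30 message edges in this reconstruction for three subcontractors
```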
for every subcontractoradded , one extra protocol round needs to be included in the protocol specification and each protocol step of the subcontractors executed analogously . in - role parallelism . ][ [ parallelism - within - a - role . ] ] parallelism within a role .+ + + + + + + + + + + + + + + + + + + + + + + + + + figure [ fig : subthree - crossed ] shows an example of a subcontracting protocol with in - role parallelism for the contractor role .the contractor initiates the protocol . in the indicated parallel phase, the contractor may immediately forward a promise by one of the subcontractors along with his own promise to the other subcontractor without waiting for the latter subcontractor s promise .the same is true in the signing phase .the fairness property for this protocol has been verified with a tool ( described in appendix [ s : tool ] ) which tested fairness for each signer in all possible executions .we have identified fair subcontracting as a challenging new problem in the area of multi - party contract signing .we have made first steps towards solving this problem by introducing dag mpcs protocols and extending existing fairness results from linear protocols to dag protocols .for three typical subcontracting configurations we propose novel dag mpcs protocols that perform well in terms of message complexity and parallel complexity .fairness of our protocol schemes follows directly from our theoretical results and we have verified it for concrete protocols with our automatic tool .there are a number of open research questions related to fair subcontracting that we havent addressed .we mention two .the first concerns the implementation of multi - contracts . in our current approachwe consider a single abstract contract shared by all parties .however , in practice such a contract may consist of a number of subcontracts that are accessible to the relevant signers only . how to cryptographically construct such contracts and what information these contracts should share is not evident .second , a step needs to be made towards putting our results into practice .given the application domains identified in this paper , we must identify the relevant signing scenarios and topical boundary conditions in order to develop dedicated protocols for each application area .we thank barbara kordy for her many helpful comments on this paper .10 n. asokan . .thesis , univ . of waterloo , 1998 .b. baum - waidner and m. waidner .optimistic asynchronous multi - party contract signing .research report rz 3078 ( # 93124 ) , ibm zurich research laboratory , zurich , switzerland , november 1998 .b. baum - waidner and m. waidner .round - optimal and abuse free optimistic multi - party contract signing . in _automata , languages and programming icalp 2000 _ , volume 1853 of _ lncs _ , pages 524535 .springer , july 2000 .r. chadha , s. kremer , and a. scedrov .formal analysis of multi - party contract signing . in _ csfw04 _, page 266 , washington , dc , usa , 2004 .ieee . s. even and y. yacobi .relations among public key signature systems .technical report 175 , computer science dept . ,technion , haifa , isreal , march 1980 .j. garay , m. jakobsson , and p. mackenzie .abuse - free optimistic contract signing . in _ crypto99 _ ,volume 1666 of _ lncs _ , pages 449466 .springer , aug .j. a. garay and p. d. mackenzie .abuse - free multi - party contract signing . in_ 13th int .computing _ , volume 1693 of _ lncs _ , pages 151165 .springer , 1999 .p. karaenke and s. kirn . 
towards model checking & simulation of a multi - tier negotiation protocol for service chains . in _aamas 2010 _ , pages 15591560 .int . found . for autonomous agents and multiagent systems , 2010 .e. katok and v. pavlov .fairness in supply chain contracts : a laboratory study ., 31:129137 , 2013 .b. kordy and s. radomirovi .constructing optimistic multi - party contract signing protocols . in _csf 2012 _ , pages 215229 .ieee computer society , 2012 .s. kraus .automated negotiation and decision making in multi - agent environments . in _acm multi - agent systems and applications _ , pages 150172 , 2001 . h. krishnan and r. winter .the economic foundations of supply chain contracting . , 5(3 - 4):147309 , 2012 .k. lu , r. yahyapour , e. yaqub , and c. kotsokalis .structural optimisation of reduced ordered binary decision diagrams for sla negotiation in iaas of cloud computing . in _icsoc 2012 _ , volume 7636 of _ lncs _ , pages 268282 .springer , 2012 .s. mauw and s. radomirovi . .post 2015 _ , 2015 .s. mauw , s. radomirovi , and m. t. dashti . minimal message complexity of asynchronous multi - party contract signing . in _csf09 _ , pages 1325 .ieee , 2009 .a. mukhamedov and m. ryan .improved multi - party contract signing . infinancial cryptography _ ,volume 4886 of _ lncs _ , pages 179191 .springer , 2007 .a. mukhamedov and m. d. ryan .fair multi - party contract signing using private contract signatures ., 206(2 - 4):272290 , 2008 .s. radomirovi . a construction of short sequences containing all permutations of a set as subsequences . ,19(4):paper 31 , 11 pp . , 2012 .m. schunter . .phd thesis , universitt des saarlandes , 2000 .r. seifert , r. zequiera , and s. liao .a three - echelon supply chain with price - only contracts and sub - supply chain coordination ., 138:345353 , 2012 .e. yaqub , p. wieder , c. kotsokalis , v. mazza , l. pasquale , j. rueda , s.g.gmez , and a. chimeno . a generic platform for conducting sla negotiations .in _ service level agreements for cloud computing _ , pages 187206 .springer , 2011 .y. zhang , c. zhang , j. pang , and s. mauw .game - based verification of contract signing protocols with minimal messages ., 8(2):111124 , 2012 .we have developed a prototype tool in python 2 that model checks a skeletal protocol graph for the fairness property ( definition [ def : fair - execution ] ) in the execution model defined in section [ sec : spec_exec_model ] .the tool , along with specifications for the protocols presented in this paper , is available at http://people.inf.ethz.ch/rsasa/mpcs . the tool s verification procedure works directly on the execution model and the ttp decision procedure ( algorithm [ algo : resolve ] ) .it therefore provides evidence for the correctness of the protocols shown in section [ sec : protocols ] , independent of the fairness proofs given in this paper .the verification is performed as follows .for each specified signer , the tool analyzes a set of executions in which the signer is honest and all other signers dishonest .the tool does not analyze all possible executions .it starts the analysis from the state where all promises of dishonest signers have been sent , but no protocol step has been performed by the honest signer . by analyzing this type of executions only , we do not miss any attacks , because the honest signers fairness is not invalidated until he has sent a signature . 
in this reduced set of executions ,the dishonest signers retain the possibility to contact the ttp from any of their vertices and all these possibilities are explored by the tool .we note that the same type of verification could be achieved with an off - the - shelf model checker and we would expect better performance in such a case. however , the code complexity and room for error when encoding a given protocol and ttp decision procedure in a model checker s input language is comparable to the code complexity of this self - contained tool .the mpcs protocols designed in this work allow for parallelism during the execution of the protocol .the specification language allows even for parallel threads to occur within a signer role .this allows us to model the case where a signer role represents multiple branches of the same entity .a signature issued by any branch represents the signature of the entire entity .we expect that the signing processes across branches are not easily synchronizeable with each other .such parallelism can be implemented in multiple ways .we discuss the various options and explain the choices made for this paper . the first decision to be madeis whether parallel threads of a signer role should be assumed to have shared knowledge . in this paper, we choose the weaker assumption : memory for a signer s parallel threads is local to the threads .this is in accordance with our expectation that parallel - threads are not easily synchronizeable and allows us , for instance , to specify and analyze protocols in which representatives of a signing entity can independently carry out parallel protocol steps without the need to communicate and synchronize their combined knowledge .causal dependence between two actions of a signer is explicitly indicated in the protocol specification . 1. all threads of a signer immediately synchronize and stop executing whenever any of the threads intends to issue a resolve request to the ttp . a designated resolution process per signerwill be required to continuously schedule all threads and take care of the interaction with the ttp .2 . threads of a signer contact the signer s designated resolution process only when they intend to issue a resolve request .the resolution process will take of contacting the ttp ( only once per signer ) and distributing the ttp s reply upon request of the threads .3 . threads of a signer are considered fully independent . a signer s threads are not orchestrated .the ttp may take into account that two requests can originate from the same signer , but from different ( causally not related ) threads . in this paperwe adopt the second option , which keeps the middle between the fully synchronized and fully desynchronized model .this will on the one hand allow for independent parallel execution of the threads and on the other hand minimize the impact of the signer s threading on the ttp s logic . 
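Returning to the verification procedure of the prototype tool described at the beginning of this appendix: conceptually it is a search over the reduced execution space in which all promises of the dishonest signers have already been sent, with the dishonest signers free to contact the TTP from any of their vertices. A generic, bounded search skeleton in that spirit is sketched below; states are assumed hashable, and step and is_unfair encapsulate the protocol semantics and the fairness predicate. This is not the tool's actual code.

```python
from itertools import count

def explore_for_honest(initial_state, step, is_unfair, limit=10_000):
    """Bounded depth-first exploration of reachable execution states.

    `step(state)` yields successor states (all interleavings, including TTP
    contacts by dishonest signers); `is_unfair(state)` flags a fairness
    violation for the designated honest signer.  Returns a counterexample
    state, or None if no unfair state is reachable within the budget."""
    seen, stack, budget = set(), [initial_state], count()
    while stack:
        if next(budget) > limit:
            raise RuntimeError("state space larger than exploration budget")
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        if is_unfair(state):
            return state
        stack.extend(step(state))
    return None
```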
from an abstract point of view, one could even argue that the second and third option are equivalent if we consider the signer s designated resolution processes just as part of a distributed ttp .we assume that the communication between a thread and the designated resolution process is resilient .the class of dag mpcs protocols defined in section [ sec : mpcs ] is restricted by condition [ cond : transitivity ] of definition [ def : parallelmpcsspec ] .it requires that every signer sends a message to all subsequent , causally following signers occurring before signer s next step .while there are fair dag mpcs protocols which do not belong to this restricted class , such protocols are not going to have a lower communication complexity .the reason for this is that each message received by a signer serves as evidence for the ttp that the sender has executed the protocol up to a certain step .skipping such a message thus lengthens the protocol , because the evidence is available only at a later vertex .furthermore , the restriction simplifies the reasoning about fairness in that causal precedence between vertices is enough to guarantee that there is a message sent from signer to signer at some point between the execution of and the execution of .finally , it also permits one to design , characterize , and represent protocols using skeletal graphs rather than full graphs as displayed in figure [ fig : exampledag ] .the set is the set of vertices in which do not have any causally following vertices in and we will refer to it as the set of _ maximal vertices _ of . similarly , is the set of vertices in which do not have any causally preceding vertices in and will be referred to as the set of _ minimal vertices _ of .conditions [ paccond : forcedabort ] through [ paccondition4 ] imply that the ttp decision procedure leads to an token for the last signer to contact the ttp .the remaining two conditions imply that the last signer has sent a signature , but not received a signature . to complete the proof , we need to construct an execution in which the exit transitions occur in the order indicated by the ac sequence and signer is honest .let be an ac sequence . for each vertex in the ac sequence ,let be the causal closure of in .note that the union of causally closed sets is causally closed .let be the sequence of transitions without exit transitions and such that all states are causally closed . for and ] .that is , is equal to , except for an additional exit transition and additional exit edges in all states which stem from exit transitions added to . finally , for ] .then is an execution in which signer is honest , since the restricted execution is by construction causally closed in all states before the last state and the single exit transition occurs in the last transition .let ] , which contradicts the closedness of .suppose there exists a permutation of signers in which is not a subsequence of any sequence in .we construct an ac sequence as follows .let be the set of all vertices of in . for , let be the minset of all vertices of which causally follow a vertex of , i.e. . since for every signer there exists a vertex which causally follows , it follows that for some , there exists a vertex with .( else we have contradiction to the assumption that is a missing permutation in . 
)thus we obtain a sequence , where , , which is an ac sequence .it suffices to verify the statement for a subset of all vertices in by the following two facts : * fact 1 : * let be a causally earliest vertex of a signer from which a signature is sent , i.e. . if contains all permutations of signers in , then contains all such permutations of signers for all with .* fact 2 : * if such that for every signer there is a vertex for which contains all permutations of signers in , then contains all permutations of signers in .since is a causally earliest vertex of a signer from which a signature is sent , it follows by the fact that the protocol is optimistic that for every signer other than there exists a vertex which causally follows or that there exists another vertex of signer from which a signature is sent such that and .we consider these two cases separately . 1 . for every signer other than , there exists a vertex which causally follows .+ we split this case into two separate subcases depending on whether there exists a vertex of signer which causally follows .1 . .let be a permutation of signers in and suppose towards a contradiction that the permutation does not appear as a subsequence of any sequence in .we construct an ac sequence as follows .let be the set of all vertices of in .for , let be the minset of all vertices of which causally follow a vertex of , i.e. . + since for every signer there exists a vertex which causally follows , it follows that for some there exists a vertex with , else we have contradiction to the assumption that the permutation is not a subsequence of any sequence in .+ by construction , there exists a vertex in which causally precedes and thus we obtain a sequence which is an ac sequence .2 . .+ since the protocol is optimistic , there exists a vertex assigned to signer such that . since , it follows that is not causally related to . by the remark preceding lemma [ l : commonancestorperm ] , there exists a common ancestor or and and satisfy the hypothesis of the lemma .thus there exists a vertex causally preceding such that contains all permutations of signers in and therefore contains all such permutations .there are causally unrelated vertices of signer from which signatures are sent .+ let be such a vertex . by equation in section [ sec :suffnec ] , there is a vertex assigned to signer which causally precedes all vertices of which are in .let be a maximal such vertex , i.e. for any vertex assigned to signer , there exists a vertex in of signer which does not causally follow .+ since the protocol is optimistic , for every signer in the protocol , there exists a vertex , which causally follows .+ then the vertices satisfy the hypothesis of lemma [ l : commonancestorperm ] , thus there exists a vertex causally preceding such that contains all permutations of signers in and therefore contains all such permutations .suppose that the protocol is not fair .consider a shortest ac sequence , , . since the sequence is a shortest sequence , we have that , else would be a shorter ac sequence .consider the permutation of signers corresponding to the ac sequence , i.e. .let be the unique vertex in existence of a vertex in the set follows from the fact that the protocol is optimistic , uniqueness follows from the fact that there is no in - role parallelism , i.e. the vertices assigned to a particular signer are totally ordered . 
by hypothesis , the set of paths from to contains all permutations of signers .let be the vertices associated with one such permutation .note that either or we can find , and .thus we may assume .we have .we also have , else condition [ paccondition4 ] for being an ac sequence ( definition [ def : pacseq ] ) would be violated .thus we have .this forms the basis for the inductively constructed sequence : given , satisfying , let be the unique vertex in .existence of a vertex in the set follows from and uniqueness follows from the lack of in - role parallelism . by construction , .if , then we also have , else condition [ paccondition4 ] for being an ac sequence ( definition [ def : pacseq ] ) would be violated .[ l : singlecomponent ] let be the dag of a fair optimistic dag mpcs protocol for two or more signers .let , where and , be the dag obtained by removing the ttp vertex and corresponding edges . then is a single connected component .* .then is an ac sequence . *let be a vertex such that and .such a vertex exists , because the protocol is optimistic , thus there must be a vertex of signer receiving a signature .but such a vertex can not precede , because is a causally earliest vertex from which a signature is sent .+ consider two cases : * * : then is an ac sequence . ** : then and are in the same connected component and is in another connected component . if , then let be such that and . else let .+ then is an ac sequence .the minimal message complexity has been derived for optimistic fair linear protocols in .since these protocols are a subset of dag mpcs protocols we see that the same message complexity can be attained .we need to show that there are no optimistic dag mpcs protocols with lower message complexity . by theorem [ thm :unfairness ] , every permutation of signers in the protocol must occur as a subsequence in the set of paths from a maximal vertex of the set of vertices of a signer in the initial set to a vertex in the signing set .consider any fair optimistic dag mpcs protocol .construct a linear dag by choosing any topologically sorted list of the vertices in and setting . since all permutations of signers occur along the paths in the dag under the labelling , they also occur in the topologically sorted list under the same labelling . since the dag is a single connected component by lemma [ l : singlecomponent ] , the number of edges in is smaller or equal to the number of edges in .thus the message complexity of is greater than or equal to the message complexity of a protocol based on the linear dag .
multi - party contract signing ( mpcs ) protocols allow a group of signers to exchange signatures on a predefined contract . previous approaches considered either completely linear protocols or fully parallel broadcasting protocols . we introduce the new class of dag mpcs protocols which combines parallel and linear execution and allows for parallelism even within a signer role . this generalization is useful in practical applications where the set of signers has a hierarchical structure , such as chaining of service level agreements and subcontracting . our novel dag mpcs protocols are represented by directed acyclic graphs and equipped with a labeled transition system semantics . we define the notion of _ abort - chaining sequences _ and prove that a dag mpcs protocol satisfies fairness if and only if it does not have an abort - chaining sequence . we exhibit several examples of optimistic fair dag mpcs protocols . the fairness of these protocols follows from our theory and has additionally been verified with our automated tool . we define two complexity measures for dag mpcs protocols , related to execution time and total number of messages exchanged . we prove lower bounds for fair dag mpcs protocols in terms of these measures .
the inadequacy of treating coupled systems as finite dimensional lattices on one hand and fully random networks on the other , has become evident in recent times .various networks , ranging from collaborations of scientists to metabolic networks , have been studied and shown not to fit in either paradigm .some alternatives have been suggested , the most popular of which are small - world networks and scale free networks . in the small world model ,one starts with a structure on a lattice , for instance regular nearest neighbour connections .then each link from a site to its nearest neighbor is rewired randomly with probability , _ i.e. _ the site is connected to another randomly chosen lattice site .this model is proposed to mimic real life situations in which non - local connections exist along with predominantly local connections .the geometrical properties of these lattices have been extensively studied .many studies have observed the following : starting from a one dimensional chain at , one obtains long - range order at any finite rewiring probability with same critical exponents as in the mean - field case , namely in the thermodynamic limit the behavior for any is the same as the behavior for for these models .newman and moore recover critical exponents for percolation on small world lattices which are the same as for the bethe lattice , i.e. an infinite dimensional case .for the xy model , medvedyeva _ et al _ conjecture that the critical exponents are the same as for the mean field case .they have confirmed it for and there is good reason to believe that it is true for any ( the obvious difficulty is that one needs to simulate larger and larger lattices at small . ) similar conclusions are reached for the ising model on small world networks as well .so while there is much evidence that random nonlocal connections , even in a small fraction , makes a big difference to geometrical properties like characteristic path length , its implications for dynamics is still unclear and even conflicting .so the first question we will probe here is this : _ does one see dynamical changes at very low values of _ , namely does the behavior change as soon as non - local shortcuts are put in place ( as observed in equilibrium models ) .now , while the dynamics of coupled oscillators and coupled maps on regular lattices has been extensively investigated , there have been only a few studies on the dynamics on small world lattices .most of these have focussed on exact synchronization .for instance , in coupled map lattices ( cml ) , with add - on non - local links , it was observed that synchronization occurs in the thermodynamic limit for infinitesimal .likewise barahona _ et al _ investigated coupled oscillators with small world connections with add - on links , and observed that in cases where synchronization occurs , the fraction of nonlocal edges required to synchronize the system decreases monotonically with lattice size and is very small for large lattices .however , for cmls where non - local links were added at the cost of existing regular links , transition to exact synchronization was observed at finite .this finite transition was similar to the transition to self - sustained oscillations evident in a model of infection spreading .it was also shown in these cmls that the magnitude of lyapunov exponents and the coupling parameter range over which exact synchronization occurs , varies monotonically with .further , a study on coupled chate - manneville minimal maps indicates that the critical 
exponents for the transition to turbulence change monotonically as a function of rewiring probability . hereone can find situations in which the critical exponent drops to zero at some value of , thus changing the nature of the transition from second order to first order , though the change is still monotonic .another investigation on stochastic resonance on small world networks falls on similar lines .so all these studies indicate that dynamical features vary monotonically with , and interpolate between the limits of regular and random connections without in any sense being `` optimal '' or more pronounced at some intermediate value of . surprisingly however ,a few studies indicate a non - monotonic change in dynamics and special features at small values of .et al _ reported that the spirals which were unstable on regular networks get stabilized at very small .however , it is known that small spatial noise stabilizes spirals and small world connectivity could be playing the same role since the nonlocal connections will join sites well inside one phase with the other phase and vice versa .another interesting study by lago - fernandez _ et al _ demonstrated that the power spectra of the collective field of coupled hodgin - huxley elements shows a non - monotonic increase in strength of spectral peaks at low frequency .thus they argued that small world connectivity gives something special which is amiss in regular or random lattices .though this trend held true for their particular model system , it is not clear if small world connectivities will have similar consequences in general .numerical evidence from more varied sources is required , in the absence of analytical results , to settle this question . in view of the above, the second interesting question we wish to address through our case studies here is as follows : is there evidence for dynamical features at some intermediate value of , that , in some sense , does not interpolate between the random and regular limits .thus this paper will attempt to provide some more examples from coupled dynamical systems , including another prototypical neuronal model , in order to _ shed more light on the validity of the conjecture that small world connections yield special dynamical features that are absent in both the regular and the random limits_. note that the change in characteristic length scales under small world connectivities would make a significant change in the characteristic time scales in geometric models , such as those describing epidemics or rumor propagation . in these modelsthe initial disturbance is localized and the time taken for spreading is reduced considerably in presence of nonlocal connections . in coupled chaotic systems , however , the characteristic time scale will be related to largest lyapunov exponent which does not change drastically with nonlocal connections . 
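The rewiring procedure referred to throughout this discussion can be sketched as follows. The construction starts from a ring with k neighbours on each side and rewires each local link with probability p to a randomly chosen site, in the spirit of the Watts-Strogatz model; all parameter values are chosen purely for illustration, and the sketch assumes N is much larger than 2k so that a free target site can always be found.

```python
import random

def small_world_ring(N, k, p, seed=0):
    """Adjacency sets for a ring with k neighbours on each side, each local edge
    rewired with probability p to a random, not already connected, site."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(N)}
    for i in range(N):
        for j in range(1, k + 1):
            adj[i].add((i + j) % N)
            adj[(i + j) % N].add(i)
    for i in range(N):
        for j in range(1, k + 1):
            nbr = (i + j) % N
            if rng.random() < p and nbr in adj[i]:
                new = rng.randrange(N)
                while new == i or new in adj[i]:     # avoid self-loops and duplicates
                    new = rng.randrange(N)
                adj[i].remove(nbr); adj[nbr].remove(i)
                adj[i].add(new);    adj[new].add(i)
    return adj

net = small_world_ring(N=100, k=2, p=0.05)
print(sum(len(v) for v in net.values()) // 2, "edges")   # edge count is preserved by rewiring
```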
also note that in these spatiotemporally chaotic systems , the disturbances are neither localized nor few .this is another important distinction from epidemic models .our test systems are the following two networks : ( i ) coupled hindmarsh - rose neurons and ( ii ) coupled logistic maps .we will study both systems in the parameter regime that shows chaos .note that the constituents of the networks in the two case studies are very different .one of them is an excitable system , while other becomes chaotic via familiar period - doubling route to chaos .in particular , we will focus on the spectral properties of the collective field under varying degrees of random rewiring in the two networks .the mean field indicates the degree of independence of different elements in the network .if the elements are uncorrelated and individually chaotic , one would expect the mean field to approach a constant , namely the average value of the components , with the fluctuations decaying with system size . on the other hand if the elements are very coherent , we may see strong departures from this behaviour .thus we may look at signal - to - noise ratio ( snr ) of the peaks in the spectrum of the collective field as some kind of _ order parameter _ demonstrating the coherent oscillations in the mean field , if any .it is zero when there are no coherent oscillations in the mean field while it has a finite value when it oscillates with some chosen frequency . as mentioned above, we will explore two questions in this work .first , we will examine if dilute rewirings have any significant impact on the spectral properties .secondly we will try to discern whether or not any non - monotonic changes result in dynamical properties as rewiring fraction is varied , namely we will address the question : does there exist an optimal for which certain dynamical features become significantly more pronounced than in either the regular or random limits .the organization of the paper is as follows . in section 2we report on our first test case , namely a network of coupled hindmarsh - rose neurons . in section 3 ,we report our results for a network of coupled logistic maps in the chaotic regime .we summarize our results in section 4 .in light of the observation that the low frequency spectral peak of the collective field of a network of hodgkin - huxley neurons is more pronounced for low than for either the regular or the random case , we have chosen our first case study to be another prototypical neuronal model : the hindmarsh - rose model . on lines of lago - fernandez _ et al _ , let us cosider a lattice made up of non - identical hindmarsh - rose neurons .let each of them be in chaotic regime and be coupled to its neighbours this system is defined as where is a proportional to laplacian of variable calculated at site . for a regular square lattice , where , , , ( with periodic boundary conditions ) .however for a small world lattice could take random values with probability .the parameters in the above equations take values : and . the term where the is a random number between 0 and 1 .the collective ( mean ) field in the above system of coupled neurons can be defined as we study this quantity for different system sizes and different rewiring fractions . [ fig0 ] fig .1 displays the time evolution of the collective field for four values of rewiring fraction .it seems apparent that there is a monotonic change in the qualitative nature of the dynamics .the trend is as follows : as increases , i.e. 
as the network becomes more and more random , the oscillations of the mean field get larger in amplitude and follow the pattern of the individual neurons more closely .this indicates increasing synchronicity as , though exact synchronisation is never achieved here .the spectra of the collective field reveal the following trends : \(a ) for a regular lattice ( ) the peak in the power spectrum of the collective field occurs only in the low frequency region .\(b ) for fully nonlocal connections ( ) the power spectrum of the mean field shows peaks both in the low as well as the high frequency regimes .\(c ) as one changes the rewiring probability we see more and more peaks in high frequency regime . but the _ low frequency peak is neither destroyed nor decreased in strength_. in fact , it increases faster than background noise giving a slightly higher signal to noise ratio .\(d ) the background level of the power spectrum grows as is increased .this is keeping in with expectation , since for higher the sites will be more correlated . in fig .2 we illustrate the above points by showing the power spectra for the representative cases of , and .[ fig1 ] in light of the remarks made in introduction , we would like to compare the system with rewired nonlocal couplings with a system with regular couplings influenced by noise .so we simulate the above dynamics on a regular lattice under the influence of noise , i.e. we evolve the dynamics above with an additive noise term , where is a random number between -1 and 1 .the spectra under different noise strengths shows the following trends : \(a ) upto noise strengths the spectral peak remains the same , and arguably even increases a little .so the effect of very small noise is akin to low rewiring fractions ( ) .\(b ) for larger noise strengths , the spectral peak decreases significantly .so here higher noise strengths do not have the same effect as higher rewiring fractions ( ) .\(c ) as noise increases , the background gets cleaner and more pronounced .3 illustrates these trends in a lattice of size .[ fig2 ] we have also studied the system with the connections dynamically rewired , i.e. at very small intervals the connectivity matrix is updated keeping the probability of rewiring fixed at .dynamic rewiring yields results qualitatively similar to static rewiring for small . at larger however , dynamic rewiring is unlike static rewiring .in fact it s effects are rather similar to that of noise at larger .so in dynamically rewired networks , as increases , the rough oscillatory behaviour of the collective field is rapidly lost and the spectra is dominated by the background ( see fig . 4 ) .[ fig1 ] various other questions could be asked about dynamics on networks , like for instance the nature of dynamic phase transitions .such questions would involve concerns about the relevant order parameter(s ) , thermodynamic limit in space and asymptotic limit in time and delicate issues in approaching these limits .but here we will not concern ourselves with the asymptotic or thermodynamic limit , and we will draw no conclusions about infinite networks from our studies of finite ( albeit quite large ) lattices . 
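A minimal sketch of the kind of experiment described in this section: diffusively coupled Hindmarsh-Rose neurons on a graph, followed by the power spectrum of the mean field. The parameter values are commonly quoted ones for the chaotic bursting regime and are not claimed to match those of the study above; forward Euler with a small step is used only for brevity (a higher-order integrator would be preferable), and the spread in the injected current mimics the non-identical neurons mentioned in the text.

```python
import numpy as np

def hindmarsh_rose_mean_field(adj, T=40000, dt=0.01, eps=0.2, seed=1):
    """Euler-integrate diffusively coupled Hindmarsh-Rose neurons on the graph `adj`
    (dict: node -> iterable of neighbours) and return the mean field of the x-variables."""
    rng = np.random.default_rng(seed)
    N = len(adj)
    A = np.zeros((N, N))
    for i, nb in adj.items():
        for j in nb:
            A[i, j] = 1.0
    deg = A.sum(axis=1)
    I = 3.25 + 0.1 * (rng.random(N) - 0.5)          # slightly non-identical neurons
    x = rng.uniform(-1.5, 1.5, N)
    y = rng.uniform(-8.0, 0.0, N)
    z = rng.uniform(2.0, 3.5, N)
    h = np.empty(T)
    for t in range(T):
        lap = A @ x - deg * x                        # graph-Laplacian coupling on x
        dx = y + 3.0 * x**2 - x**3 - z + I + eps * lap
        dy = 1.0 - 5.0 * x**2 - y
        dz = 0.006 * (4.0 * (x + 1.6) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        h[t] = x.mean()
    return h

ring = {i: {(i - 1) % 50, (i + 1) % 50} for i in range(50)}   # regular ring; rewire for p > 0
mean_field = hindmarsh_rose_mean_field(ring)
power = np.abs(np.fft.rfft(mean_field - mean_field.mean()))**2
print("dominant non-zero frequency bin:", int(power[1:].argmax()) + 1)
```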
indeed from a practical point of view , whether it is a collection of neurons or electronic oscillators , our observations will still be very relevant .we have also studied coupled chaotic logistic maps on a 2-dimensional small world network .this system is given as again takes the nearest neighbour values : , , and with probability , and could take random values with probability .the network has periodic boundary conditions .we carried out simulations for function for , and .we obtained the mean field , again defined as , for the above system , with .we have the following observations , for both static and dynamic rewiring : [ fig3 ] \(a ) for , the spectum does not have sharp peaks and spectral features do not change much with rewiring of bonds .\(b ) for and , there are peaks for regular lattices , and as one increases the rewiring probability the spectral peaks reduce in strength and ultimately vanish .\(c ) the background level grows as is increased , as expected .\(d ) the snr displays monotonic decrease with respect to rewiring probability . in the left panel of fig . 5 we display the power spectrum of the above system with in the local maps , for , and .the figure clearly bears out the observations listed above . if the spectral peaks are not very pronounced to begin with at , even very small rewiring probability will destroy ithowever , if the spectral peaks are sharp at , one has to go up to larger values of to see them vanish . in fig .6 we display the signal - to - noise ratio ( snr ) of the spectra at different values of rewiring fraction , for the case of .the figure illustrates that the snr is a monotonic function of .so evidently no significant dynamical effect is discernable as .rather the snr shows a gradual decrease with increasing .[ fig1 ] we have also simulated the dynamics of a coupled logistic map network on a regular lattice under varying noise strengths , namely : where is a random number in the range $ ] and is the noise strength , and interestingly we observed a very similar diminishing of spectral peaks with increasing . in the right panel of fig .5 we display the power spectrum of the collective field in the above system with in the local maps , for noise strengths , and .clearly the noise destroys the spectral peaks much in the same way as increasing , as evident in the close similarity between the left and right panels of fig .this indicates that non - local connections act as spatial noise in this dynamical network , with rewiring fraction playing the role of noise strength .we have also studied the system with the connections dynamically rewired , i.e. 
at every iteration the connectivity matrix is updated keeping the probability of rewiring fixed at .dynamic rewiring yields results qualitatively similar to static rewiring for small .however in a dynamically rewired network the snr falls much more sharply than for static rewiring .for instance fig .7 displays the spectra for dynamic rewiring for the case of , .clearly the peaks have vanished for when the connections are dynamically rewired , while they disappear only around when the rewired connections are static .[ fig1 ] these results arising from a very different system further strengthens our conclusion that does not have special implications for dynamical properties .rather the dynamical characteristics appear to change smoothly and monotonically with respect to rewiring fraction .it had been observed in a study of the collective field of coupled hodgin - huxley neurons that in the small world region the low frequency spectral peak was more pronounced than it was in either the fully regular or fully random case .our first objective here was to check this feature in another prototypical neuronal model in order to gauge the generality and range of applicability of the above phenomena .so we chose a system of coupled hindmarsh - rose neurons as our first case study .the key results of the study on coupled hindmarsh - rose neurons showed that the change in the low frequency spectral peak as a function of random rewiring is _monotonic_. there was no evidence of any significant increase in spectral strength in the low regime .so the hindmarsh - rose neuron network , unlike the hodgin - huxley neuron network of ref . , does not yield special dynamical features in the dilute limit of small world links .in fact the nature of the dynamics of the mean field appears to vary quite smoothly and monotonically throughout the full range of . in our second case studywe studied a network of coupled logistic maps . here too the key result was that the spectral features changed monotonically with respect to and did not show any prominent change at any special value of intermediate .we also found that the dynamics at small rewiring was very similar to that of regular connections with additive noise .these observations then provide examples in support of the conjecture that in a large number of coupled dynamical systems the changes in collective behaviour is monotonic with respect to the degree of non - locality in connections , and is not in any way `` optimised '' at any particular .99 d. j. watts and s. h. strogatz , nature , * 393 * 440 ( 1998 ) . in this paperthey use terms such as _ small world value _ and _ small world regime _ to signify the value of at which one can see high clustering as in regular lattices , but low characteristic length scale as in random lattices .typical values of for which this holds true are very low , with for finite lattices of size . throughout our paperwe will use the term ` small world connectivities ' to signify this regime .s. a. pandit and r. e. amritkar , phys .e * 63 * 041104 ( 2001 ) , m. e. j. newman and d.j .watts , phys .e * 60 * 7332 ( 1999 ) ; a. barrat and m. weigt , eur .j. b * 13 * 547 ( 2000 ) ; s.c .manrubia , j. delgado , and b. luque , europhys . lett .[ 53 ] , 693 ( 2001 ) .certain non - equilibrium models like the majority - vote model , while displaying behavior different from equilibrium models , nevertheless behave monotonically as a function of within numerical accuracy . for example , see p. r. a. campos and v. m. 
de oliveira , phys .e * 67 * 026104 ( 2003 ) .
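For completeness, a sketch of the second case study and of the signal-to-noise ratio used as an order parameter: coupled logistic maps on a rewired ring (a one-dimensional stand-in for the two-dimensional lattice of the text), with the SNR taken as the dominant spectral peak of the mean field divided by the median background power. The map parameter, coupling strength and the precise SNR definition are illustrative assumptions, not those of the study.

```python
import numpy as np

def coupled_logistic_mean_field(N=256, mu=4.0, eps=0.4, p=0.1, T=4096, seed=0):
    """Coupled logistic maps on a ring whose two nearest-neighbour links per site are
    each redirected to a random site with probability p; returns the mean-field series."""
    rng = np.random.default_rng(seed)
    left = np.arange(-1, N - 1) % N
    right = np.arange(1, N + 1) % N
    rewire = rng.random((2, N)) < p
    left[rewire[0]] = rng.integers(0, N, rewire[0].sum())
    right[rewire[1]] = rng.integers(0, N, rewire[1].sum())
    x = rng.random(N)
    h = np.empty(T)
    for t in range(T):
        fx = mu * x * (1.0 - x)
        x = (1.0 - eps) * fx + 0.5 * eps * (fx[left] + fx[right])
        h[t] = x.mean()
    return h

def spectral_snr(h):
    """Ratio of the dominant mean-field spectral peak to the median background power."""
    P = np.abs(np.fft.rfft(h - h.mean()))**2
    P = P[1:]                     # discard the zero-frequency bin
    return P.max() / np.median(P)

for p in (0.0, 0.1, 1.0):
    print(p, round(spectral_snr(coupled_logistic_mean_field(p=p)), 2))
```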
we study the dynamical behaviour of the collective field of chaotic systems on small world lattices . coupled neuronal systems as well as coupled logistic maps are investigated . we observe that significant changes in dynamical properties occur only at a reasonably high strength of nonlocal coupling . further , spectral features , such as signal - to - noise ratio ( snr ) , change monotonically with respect to the fraction of random rewiring , i.e. there is no optimal value of the rewiring fraction for which spectral properties are most pronounced . we also observe that for small rewiring , results are similar to those obtained by adding small noise . + prashant m. gade and sudeshna sinha + _ the institute of mathematical sciences , taramani , chennai 600 113 , india _ + for modelling and simulation , university of pune , ganeshkhind , pune , 411 007 , india _ pacs : 05.45.-a , 05.45.ra , 02.50.ey
diffusion processes arise in many important fields , including finance , genetics , and engineering .there is great interest in simulation and inference using diffusions , but this is a difficult problem because the transition density function of a diffusion is rarely known .it is typical to work with a discretization of an intractable diffusion model so that monte carlo simulation can be applied .for example , the euler scheme is a discretization of time in which increments of the diffusion over each small time step are assumed to be gaussian .there has been much work into improving this and other discretized approaches , but a disadvantage is that they introduce two sources of error : a monte carlo error and a discretization error .the latter causes a bias , and it may be computationally expensive to make the grid spacing sufficiently fine to ensure this bias is negligible .however , for a certain class of diffusion processes described below , recent work based on _ retrospective _ sampling ideas has obviated the need for discretization , allowing realizations to be simulated _ exactly _ .the key idea is to use brownian motion as the proposal in a rejection sampling algorithm , in which it is possible to make the accept / reject decision without having to simulate a complete ( infinite - dimensional ) sample path .this idea has also been extended to perform parametric inference of discretely observed diffusions .where the algorithm has been developed for one - dimensional diffusions , it is assumed for the most part that the diffusion has state space with boundaries at .the goal of this paper is to extend the exact algorithm to diffusion models with a certain class of _ finite _ boundary .one approach would be a straightforward modification of the algorithm presented in using brownian motion as the candidate process .however , it is easy to find examples for which this approach exhibits unacceptably high rejection rates , essentially because the paths of brownian motion do not mimic the paths of the target diffusion sufficiently well near the boundary .an alternative approach developed in this paper is more drastic and based on a related idea in : we replace brownian motion in the proposal mechanism with _ another _ well - characterized diffusion , namely the bessel process . as a simple motivating example , suppose we wanted to simulate sample paths of the diffusion with generator \frac{d}{dx } + \frac{1}{2}\frac{d^2}{dx^2},\ ] ] for and , where is the modified bessel function of the first kind .this diffusion was introduced by and studied by , , and , among others .the diffusion has state space , and when the boundary at 0 is _ entrance _ in the terminology of feller ( see * ? ? 
?thus , unlike brownian motion which can wander close to 0 , this diffusion should experience a strong repulsive force whenever the diffusion approaches 0 from above , due to the singularity in the drift of asymptotic form as .hence , a rejection sampling algorithm using brownian motion as a candidate process will have high rejection rates whenever the sample path approaches the boundary .however , a _ bessel _ process of order sharesthis singularity and would make a more suitable candidate process .( in fact , in this example ( [ eq : besseldriftgen ] ) includes the bessel process itself as the special case .the diffusion with generator ( [ eq : besseldriftgen ] ) has been described as the _ bessel process in the wide sense _as will be shown , using a bessel process when the diffusion of interest has a certain type of boundary can substantially improve algorithmic efficiency .the structure of the paper is as follows . in section [ sec : ea ] i give an overview of the exact algorithms ea13 of , , and . after summarizing some useful properties of the bessel process in section [ sec : bessel ] , in section [ sec : bessel - ea ]i develop a new exact algorithm that can be applied to a diffusion with a finite entrance boundary .the algorithm is illustrated in section [ sec : applications ] by application to conditioned diffusions and to a general diffusion model of population growth . in section [ sec : twoboundaries ] i discuss the problem of _ two _ finite boundaries and extend ea3 to apply to this case .section [ sec : discussion ] discusses possible directions for future research .consider the scalar diffusion process satisfying the stochastic differential equation ( sde ) with drift coefficient and diffusion coefficient . in all that follows we are interested in the law of the diffusion only up to some fixed , finite time .we first apply the _ lamperti transform _ defined via for some fixed in the state space of .the process satisfies where the new drift is given by equation ( [ eq : ysde ] ) , with unit diffusion coefficient , will be our central object of study , and we assume it admits a unique weak solution .the diffusion coefficient , , of a one - dimensional diffusion can always be reduced to unity in this manner , but we note that the infinitesimal covariance of a multidimensional diffusion may or may not be reducible to the identity matrix ( * ? ? ?* gives an explicit way to check ) .the law of , to be denoted , is absolutely continuous with respect to the law of brownian motion commenced from , to be denoted .this motivates the latter s use for drawing candidates in a rejection sampling algorithm .the acceptance probability in such an algorithm would be proportional to the radon - nikodm derivative of with respect to , which is given by girsanov s formula ( e.g. * ? ? ?* ) : equation ( [ eq : girsanov ] ) makes it clear why such an algorithm is impossible : to compute the rejection probability one must evaluate integrals over the whole sample path .exact _ algorithms show how it is possible to make the accept / reject decision without this requirement , using only a finite amount of computation .i give here a brief overview ; for further details the reader is referred to . to proceed we make three further assumptions . * the radon - nikodm derivative of with respect to exists andis given by ( [ eq : girsanov ] ) .* , i.e. is continuously differentiable .* is bounded below . using ( a2 ) and it s lemma applied to ,we can rewrite ( [ eq : girsanov ] ) as\right\}. 
\ ] ] to simplify matters , rather than working directly with we work with the probability measure defined via the measure corresponds to _ biased _ brownian motion ; it is the law of \mid b_t \sim h) ] from and accept it with probability , then the accepted paths are distributed according to .it remains to construct an event , call it , occurring with the required probability .inspection of the form of ( [ eq : dqdz ] ) suggests one way to define : as the event that there are no points in the realization of a poisson process occurring at unit rate in the area under the graph of . a practical way to achievethis is to thin a poisson process occurring in a larger rectangle containing this graph .more precisely , suppose there exists a random variable and a positive function such that a.s .and }\phi(y_t ) \leq r(\upsilon) ] , and let ] occurs with probability , as required .the point of first finding an a.s .upper bound on is to ensure the poisson process has finite total rate , and hence that can be determined with a finite amount of computation .assuming such a bound can be found , an exact algorithm therefore proceeds as follows : + * exact algorithm ( ea ) * [ cols= " < , < " , ] + + given an accepted skeleton , further points can be filled in by simulation from bessel bridges between each skeleton point ( conditional on ) .no further reference to is necessary .it seems as though we have just replaced one proposal measure for another .the benefit of this operation is apparent when we notice that , for certain diffusions , the hypograph of may be dramatically smaller than the hypograph of , manifesting itself via a much lower rejection rate .for example , if our target drift is of the form as , then whereas remains finite .thus , we should expect the rejection rate to be improved most when the effect of the boundary is strong and/or the target process spends a great deal of time near the boundary , for example if one end of the bridge is close to .these observations are verified in an example application later .first , we must specify an appropriate choice of and , and how to simulate from , which requires further assumptions on .for simplicity we will focus on the bessel analogue of ea1 , to be denoted bessel - ea1 , for which we assume * the function is bounded above .under this assumption we are entitled to choose the non - random and to omit step 2 of bessel - ea .it remains to simulate skeleton points from the law of a bessel bridge .i now detail how this can be achieved . when it is well known that the bessel process is the radial part of a brownian motion in .it is possible to use this observation to simulate from a bessel bridge by transforming an underlying brownian bridge in sch : etal:2013 .however , it is in fact possible to simulate exactly from the bessel bridge for _ any _ real , as follows .we first need a definition . a random variable on said to be _bessel-distributed _ when this distribution is constructed by normalizing the coefficients of to sum to .we define as the continuous limit as ; then a.s . because the bessel distribution is discrete ,a realization of can be achieved easily by the usual method of simulating a uniform ] is distributed as where \} ] [ or ) ] with both and finite . 
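A minimal sketch of the EA1 recipe summarized above, for a hypothetical unit-diffusion target with drift alpha(y) = sin(y); this drift is chosen only because phi = (alpha^2 + alpha')/2 is then bounded on both sides, as EA1 requires, and it is not a diffusion considered in the text. The biased endpoint law h is sampled by rejection against a Gaussian, and acceptance is decided by Poisson thinning under the graph of phi - l.

```python
import math, random

T, x0 = 1.0, 0.0                                    # time horizon and starting point
A = lambda y: -math.cos(y)                          # antiderivative of the drift sin(y)
phi = lambda y: 0.5 * (math.sin(y)**2 + math.cos(y))    # (alpha^2 + alpha')/2
l, r = -0.5, 1.125                                  # phi - l takes values in [0, r]

def poisson(lam):
    """Knuth's method; adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def sample_endpoint():
    """Draw Y_T from h(y) proportional to exp(A(y) - (y-x0)^2/(2T)), by rejection
    against N(x0, T); here exp(A(y)) <= e, so the acceptance ratio is exp(A(y)-1)."""
    while True:
        y = random.gauss(x0, math.sqrt(T))
        if random.random() < math.exp(A(y) - 1.0):
            return y

def brownian_bridge(times, y_T):
    """Sequentially fill in a Brownian bridge from (0, x0) to (T, y_T) at sorted times."""
    vals, s, w = [], 0.0, x0
    for t in times:
        mean = w + (t - s) / (T - s) * (y_T - w)
        var = (t - s) * (T - t) / (T - s)
        w = random.gauss(mean, math.sqrt(var))
        vals.append(w)
        s = t
    return vals

def ea1_skeleton():
    """One EA1 draw: accept iff no Poisson point in [0,T]x[0,r] lies below phi(omega)-l."""
    while True:
        y_T = sample_endpoint()
        n = poisson(r * T)
        times = sorted(random.uniform(0.0, T) for _ in range(n))
        heights = [random.uniform(0.0, r) for _ in range(n)]
        path = brownian_bridge(times, y_T)
        if all(v > phi(w) - l for v, w in zip(heights, path)):
            return dict(zip([0.0] + times + [T], [x0] + path + [y_T]))

skeleton = ea1_skeleton()
print(len(skeleton), "skeleton points; Y_T =", skeleton[T])
```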
assuming to be an entrance boundary ,the theory developed in this paper will apply only in the exceptional case that remains finite as .this will not typically hold if is an exit or an entrance boundary , for example , where we would expect as ; see , for example , the wright - fisher diffusion studied by .while there is currently no solution available if we want to retain the bessel process to simulate our candidate paths , one can make progress provided we revert to using brownian motion . in this casewe can not make any assumptions on (y)$ ] either as or as ; nonetheless , we can still use ea3 ( see section [ sec : ea3 ] above and the discussion in * ? ? ?the main idea is to partition into layers in such a way that the layers converge to each boundary ( figure [ fig : layers ] ) . as a consequence, simulated paths can approach either boundary arbitrarily closely , as required , yet we can still compute a bound on whatever layer is actually simulated .recall we must first simulate a layer of ( step 2 of ea ) , and then simulate points of given its layer ( step 4 ) .a careful reading of the algorithm described by shows that we need only modify their step 4 to allow for unequal layers approaching the two boundaries , i.e. if ea3 is to be applied to two finite entrance boundaries , we can no longer choose to define the layers symmetrically , , as is done in . in the remainder of this section ,i generalize their step 4 to allow .although it is not possible to simulate from directly , it _ is _ possible to obtain points distributed according to this law by using a further rejection step , in which another , simpler process is used to sample candidate paths .let denote the probability measure corresponding to a brownian bridge from to in time , and let denote the probability measure obtained after restricting the brownian bridge to an event .define the events we sample candidate paths from the mixture the intuition behind this proposal measure is clear : it ensures that at least the first of the two sets constituting ( or ) in equation ( [ eq : ui , li ] ) occurs .it is possible to simulate paths from ( [ eq : pdi ] ) by first selecting from the mixture using a bernoulli random variable , and then simulating a brownian bridge conditioned on the chosen extremum falling within layer . provide details on how to simulate the maximum ( or minimum ) of a brownian bridge and then simulate the rest of the bridge given this extremum . the distribution function of the extremum of a brownian bridge is known in closed form , so it is a simple matter to condition the extremum to lie within a given interval .rejection from to obtain paths distributed according to can be achieved by the following result .[ thm : ea3 ] is absolutely continuous with respect to , and if we choose then the corresponding radon - nikodm derivative is use the unconditional brownian bridge as a reference measure to find : which simplifies to ( [ eq : thmea3 ] ) when is given by ( [ eq : lambda ] ) ( also noting that on the event ) . 
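The layer construction above relies on the closed-form law of the extremum of a Brownian bridge. Below is a sketch of sampling the bridge maximum conditioned to lie in a given layer, by inverting the standard reflection-principle formula P(max <= m) = 1 - exp(-2(m-a)(m-b)/T) for m >= max(a,b); the minimum case is symmetric. The interval endpoints are illustrative.

```python
import math, random

def max_cdf(m, a, b, T):
    """P(max of a Brownian bridge from a to b on [0,T] <= m), valid for m >= max(a, b)."""
    return 1.0 - math.exp(-2.0 * (m - a) * (m - b) / T)

def sample_bridge_max_in_layer(a, b, T, lo, hi):
    """Draw the bridge maximum conditioned to lie in [lo, hi] by inverse-CDF sampling;
    lo is clipped up to max(a, b), and hi > lo is assumed."""
    lo = max(lo, max(a, b))
    u = random.uniform(max_cdf(lo, a, b, T), max_cdf(hi, a, b, T))
    c = -0.5 * T * math.log(1.0 - u)            # solves (m - a)(m - b) = c
    return 0.5 * ((a + b) + math.sqrt((a - b) ** 2 + 4.0 * c))

# e.g. force the maximum of a bridge from 0 to 0.3 over [0,1] into the layer [1.0, 1.5]
print(sample_bridge_max_in_layer(0.0, 0.3, 1.0, 1.0, 1.5))
```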
note that choosing recovers theorem 4 of , in which .also notice that , because the distribution function of the extremum of a brownian bridge is known , the general choice of in ( [ eq : lambda ] ) can be computed exactly .it remains to simulate random indicators for the events ( given ) and ( given and ) , which proceeds as in the symmetric case .in this paper i have developed an efficient , exact algorithm for simulating from the law of a diffusion on with a finite entrance boundary at .the algorithm is applicable when the boundary behaviour is matched by that of a bessel process , which covers a number of interesting examples including conditioned diffusions [ equation ( [ eq : conditioned ] ) ] , the wide - sense bessel process [ equation ( [ eq : besseldriftgen ] ) ] , and a very general model of population growth [ equation ( [ eq : growthgen ] ) ] . in an application to the latter, it was shown that using the bessel process instead of brownian motion to generate candidate paths gives a striking improvement in efficiency . for a diffusion with two entrance boundaries, i developed a tractable exact algorithm which uses brownian motion as the candidate process .there are a number of directions for further research .perhaps the greatest restriction on the algorithm developed here is assumption ( b * ) , which does not apply if for example the diffusion also has an upper entrance boundary .we should like to relax ( b * ) to the following : * .this is the bessel analogue of ( * * ) , and makes no restrictions on the drift away from . for diffusions satisfying ( * * ) , tackled the analogous problem by first simulating the maximum of a brownian bridge path together with the time it is attained .this was possible because these distributions take on a simple form and are easy to simulate .there are grounds for optimism that we might take a similar approach using the bessel process .remarkably , the bessel process is one of few well - characterized diffusions for which we also have some results on the distribution of the maximum of its bridge and the time it is attained . herethough , the relevant distributions are rather more complicated , expressible only in infinite series form . exact simulation from these distributions will be the subject of a future paper . a further extension of this work would be to handle other types of boundary behaviour .much of the preceding argument , including proposition [ prop : mak : gle:2010 ] , continues to hold for , which could then be used in a rejection sampling algorithm for a target diffusion with an instantaneously reflecting boundary .however , great care must be taken in ensuring the assumptions of the algorithm are met .in particular we can no long write the radon - nikodm derivative in the form ( [ eq : rdchain ] ) ; moreover , for the bessel process is not even a semimartingale beyond .finally , the contributions in this paper illustrate an important concept : that it is possible to implement the exact algorithm using a _ non_-brownian candidate process .this raises the interesting question : what candidate diffusions _ other than _ brownian motion and the bessel process are available to use in the framework of the exact algorithm ( and when would they be useful ) ?this work benefitted from many helpful discussions : with steve evans , gareth roberts , joshua schraiber , and dario span .beskos , a. , papaspiliopoulos , o. , roberts , g. o. , and fearnhead , p. 
( 2006 ) .exact and computationally efficient likelihood - based estimation for discretely observed diffusion processes . _j. r. stat .soc . ser ._ , * 68 * , 333382 .pitman , j. and yor , m. ( 1981 ) .bessel processes and infinitely divisible laws . in williams ,d. , editor , _ stochastic integrals _ ,volume 851 of _ lecture notes in mathematics _ , pages 285370 .berlin : springer .
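Returning to the candidate paths used by the algorithm: the text notes that a Bessel bridge of integer order can be obtained by transforming an underlying Brownian bridge. A sketch of that construction follows, taking the Euclidean norm of a delta-dimensional Brownian bridge whose first coordinate runs from a to b (with a, b >= 0) and whose remaining coordinates run from 0 to 0; the values at the grid times are exact draws of the corresponding finite-dimensional distributions. The non-integer case via the Bessel distribution is not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

def bessel_bridge(a, b, T, delta, n_steps, seed=0):
    """BES(delta) bridge from a to b on [0, T] at n_steps+1 equispaced times,
    for integer delta >= 1, via the norm of a delta-dimensional Brownian bridge."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(scale=np.sqrt(T / n_steps), size=(delta, n_steps))
    W = np.concatenate([np.zeros((delta, 1)), dW.cumsum(axis=1)], axis=1)
    starts = np.zeros(delta); starts[0] = a
    ends = np.zeros(delta);   ends[0] = b - a
    # turn each coordinate into a bridge and pin the endpoints
    bridge = W - (t / T) * W[:, -1:] + starts[:, None] + (t / T) * ends[:, None]
    return t, np.linalg.norm(bridge, axis=0)

t, path = bessel_bridge(a=0.2, b=1.0, T=1.0, delta=3, n_steps=1000)
print(path[0], path[-1])        # starts at 0.2 and ends at 1.0
```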
diffusion processes arise in many fields , and so simulating the path of a diffusion is an important problem . it is usually necessary to make some sort of approximation via model - discretization , but a recently introduced class of algorithms , known as the _ exact _ algorithm and based on retrospective rejection sampling ideas , obviate the need for such discretization . in this paper i extend the exact algorithm to apply to a class of diffusions with a finite entrance boundary . the key innovation is that for these models the bessel process is a more suitable candidate process than the more usually chosen brownian motion . the algorithm is illustrated by an application to a general diffusion model of population growth , where it simulates paths efficiently , while previous algorithms are impracticable .
in distributed storage systems ( dss ) , it is desirable that data be reliably stored over a network of nodes in such a way that a user ( _ data collector _ ) can retrieve the stored data even if some nodes fail .to achieve such a resilience against node failures , dss introduce data redundancy based on different coding techniques .for example , erasures codes are widely used in such systems : when using an code , data to be stored is first divided into blocks ; subsequently , these information blocks are encoded into blocks stored on distinct nodes in the system .in addition , when a single node fails , the system reconstructs the data stored in the failed node to keep the required level of redundancy .this process of data reconstruction for a failed node is called _ node repair process _ . during a node repair process , the node which is added to the system to replace the failed node downloads data from a set of appropriate and accessible nodes .there are two important goals that guide the design of codes for dss : reducing the _ repair bandwidth _ ,i.e. the amount of data downloaded from system nodes during the node repair process , and achieving _ locality _ , i.e. reducing the number of nodes participating in the node repair process .these goals underpin the design of two families of codes for dss called _ regenerating codes _ ( see and references therein ) and _ locally repairable codes _ ( see ) , respectively .in this paper we focus on the locally repairable codes ( lrcs ) .recently , these codes have drawn significant attention within the research community .oggier et al . presents coding schemes which facilitate local node repair . in ,gopalan et al . establishes an upper bound on the minimum distance of scalar lrcs , which is analogous to the singleton bound .the paper also showes that pyramid codes , presented in , achieve this bound with information symbols locality .subsequently , the work by prakash et al .extends the bound to a more general definition of scalar lrcs .( han and lastras - montano provide a similar upper bound which is coincident with the one in for small minimum distances , and also present codes that attain this bound in the context of reliable memories . ) in , papailiopoulos and dimakis generalize the bound in to vector codes , and present locally repairable coding schemes which exhibits mds property at the cost of small amount of additional storage per node .the main contributions of our paper are as follows .first , in section [ sec : preliminaries ] , we generalize the definition of _ scalar _ locally repairable codes , presented in to _ vector _ locally repairable codes . for such codes , every node storing symbols from a given field , can be locally repaired by using data stored in at most other nodes from a group of nodes of size , which we call a _ local group _, where is the number of system nodes , and and are the given locality parameters .subsequently , in section [ sec : vectorlrc ] , we derive an upper bound on the minimum distance of the vector codes that satisfy a given locality constraint , which establishes a trade off between node failure resilience ( i.e. , ) and per node storage .the bound presented in can be considered as a special case of our bound with .further , we present an explicit construction for lrcs which attain this bound on minimum distance . 
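Before the rank-metric machinery, the locality idea itself can be illustrated with the simplest possible local code: each group of r data nodes is stored together with one XOR parity node, so a single failed node is rebuilt from the other r nodes of its group. This toy sketch is not the optimal construction of the paper; node contents and group sizes are arbitrary.

```python
import numpy as np

def encode_with_local_parities(data_nodes, r):
    """Split `data_nodes` (equal-length byte strings) into groups of r and append one
    XOR parity node per group; returns the list of stored node arrays."""
    stored = []
    for g in range(0, len(data_nodes), r):
        group = [np.frombuffer(d, dtype=np.uint8) for d in data_nodes[g:g + r]]
        parity = np.bitwise_xor.reduce(group)
        stored.extend(group + [parity])   # every node is the XOR of the other r in its group
    return stored

def repair(stored, failed_idx, r):
    """Rebuild node `failed_idx` by XOR-ing the other r nodes of its local group."""
    group_size = r + 1
    g0 = (failed_idx // group_size) * group_size
    helpers = [stored[i] for i in range(g0, g0 + group_size) if i != failed_idx]
    return np.bitwise_xor.reduce(helpers)

nodes = [bytes([i] * 4) for i in range(6)]        # six data nodes of 4 bytes each
stored = encode_with_local_parities(nodes, r=3)   # two local groups of 3 data + 1 parity
assert np.array_equal(repair(stored, 1, r=3), stored[1])
```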
this construction is based on maximum rank distance ( mrd ) gabidulin codes , which are a rank - metric analog of reed - solomon codes .scalar _ and _ vector _ lrcs that are obtained by this construction are the first explicit optimal locally repairable codes with . finally , in section [ sec : discussion ] , we discuss how the scalar and vector codes obtained by this construction can be used for constructions of repair bandwidth efficient lrcs .we conclude the paper with section [ sec : conclusions ] .let be the size of a file to be stored in a dss with nodes .all data symbols belong to a finite field .each node stores symbols over .we generalize the definition of _ scalar _ locally repairable codes , presented in to _ vector _ locally repairable codes , where each node , stores a vector of length over .first , we provide an alternate definition of the minimum distance of a vector code .[ def : dmin ] the minimum distance of a vector code of dimension is defined as : h({{\bf x}}_{{{\cal a } } } ) < \mathcal{m}}|{{\cal a}}|,\ ] ] where ] over , where each entry of is expanded as a column vector .the _ rank _ of a vector , denoted by , is defined as the rank of the matrix over .similarly , for two vectors , the _ rank distance _ is defined by .an {q^m} ] gabidulin code , , is defined as , where is a linearized polynomial over of with the coefficients given by the information message , and are linearly independent over .an mrd code with minimum distance can correct any erasures , which we will call _ rank erasures_. an algorithm for erasures correction of gabidulin codes can be found e.g. in . a linear ] gabidulin code .we assume in this construction that .we denote by the number of local groups in the system . * if then a codeword is first partitioned into disjoint groups , each of size , and each group is stored on a different set of nodes , symbols per node . in other words ,the output of the first encoding step generates the encoded data stored on nodes , each one containing symbols of a ( folded ) gabidulin codeword .second , we generate parity nodes per group by applying an ] mds array code over on each of the first local groups of nodes and by applying a ] gabidulin code .this codeword is partitioned into three groups , two of size and one of size , as follows : .then , by applying a ] mds code in the last group we add one parity to each group .the symbols of with three new parities are stored on 14 nodes as shown in fig [ fig : construction1 ] .lrc for and . ] by theorem [ thm : dmin ] , the minimum distance of this code is at most . by remark [ rm : linearized property ] , any node erasures correspond to at most rank erasures and then can be corrected by , hence .in addition , when a single node fails , it can be repaired by using the data stored on other nodes from the same group .next , we illustrate construction i for a vector lrc .we consider a dss with the following parameters : by ( [ eq : upp_bound ] ) we have .let and be a codeword of an {q^{36}} ] mds array code over is applied on each local group to obtain parity nodes per local group .the coding scheme is illustrated in fig .[ fig : construction ] .locally repairable code with and . ] by remark [ rm : linearized property ] , any node failures correspond to at most rank erasures in the corresponding codeword of .since the minimum rank distance of is , these node erasures can be corrected by , and thus the minimum distance of is exactly . 
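The rank metric used by the outer Gabidulin code can be made concrete by expanding each symbol of the extension field into a length-m column over the base field and computing the rank of the resulting matrix. A sketch for q = 2 follows, with field elements represented as m-bit integers; no field arithmetic is needed for the metric itself, and the example vectors are arbitrary.

```python
def gf2_matrix_rank(rows):
    """Rank over GF(2) of a matrix whose rows are given as Python ints (bit masks)."""
    rank = 0
    rows = list(rows)
    for bit in range(max(rows).bit_length() if rows else 0):
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> bit & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> bit & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def rank_distance(x, y, m):
    """Rank distance of two vectors over GF(2^m): expand each symbol of x - y
    (here x XOR y, since q = 2) into m bits and take the GF(2) rank of that matrix."""
    diff = [a ^ b for a, b in zip(x, y)]
    # row j of the m x n binary matrix, packed as a bit mask over the symbol index i
    rows = [sum(((sym >> j) & 1) << i for i, sym in enumerate(diff)) for j in range(m)]
    return gf2_matrix_rank(rows)

# two length-4 vectors over GF(2^4); their difference has rank 2 but Hamming distance 3
x = [0b0011, 0b0101, 0b0000, 0b0110]
y = [0b0000, 0b0000, 0b0000, 0b0000]
print(rank_distance(x, y, m=4))   # prints 2
```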
the efficiency of the decoding of the codes obtained by construction i depends on the efficiency of the decoding of the mds codes and the gabidulin codes .in this section , we discuss the hybrid codes which for a given locality parameters minimize the repair bandwidth . these codes are based on a combination of locally repairable codes with regenerating codes . in a nave repair process for a locally repairable code ,a newcomer contacts nodes in its local group and downloads all the data stored on these nodes . following the line of work of bandwidth efficient repair in dss given by , we allow a newcomer to contact nodes in its local group and to download only symbols stored in these nodes in order to repair the failed node .the motivation behind this is to lower the repair bandwidth for a lrc .so the idea here is to apply a regenerating code in each local group .( we note that , in a parallel and independent work , kamath et al . also proposed utilizing regenerating codes in the context of lrcs . ) in particular , by applying an msr code in each local group instead of an mds array code in the second step of construction i we obtain a code , denoted by msr - lrc , which has the maximal minimum distance ( since an msr code is also an mds array code ) , the local minimum storage per node , and the minimized repair bandwidth .( the details of this construction can be found in . ) in addition , the optimal scalar codes obtained by construction i can be used for construction of mbr - lrcs ( codes with an mbr code in each local group ) as it has been shown by kamath et al .we presented a novel construction for ( scalar and vector ) locally repairable codes .this construction is based on maximum rank distance codes .we derived an upper bound on minimum distance for vector lrcs and proved that our construction provides optimal codes for both scalar and vector cases .we also discussed how the codes obtained by this construction can be used to construct repair bandwidth efficient lrcs .99 [ 1]#1 url [ 2]#2 [ 2]l@#1=l@#1#2 a. g. dimakis , p. godfrey , m. wainwright and k. ramachandran , `` network coding for distributed storage system , '' _ ieee trans . on inform .56 , no . 9 , pp . 4539 - 4551 , sep . 2010 .y. wu and a. g. dimakis , reducing repair traffic for erasure coding - based storage via interference alignment " , in _ proc . of ieee isit _n. b. shah , k. v. rashmi , p. v.kumar and k. ramchandran , explicit codes minimizing repair bandwidth for distributed storage " , in _ proc . of ieee itw _ , jan. 2010 . c. suh and k. ramchandran , exact - repair mds codes for distributed storage using interference alignment " , in _ proc . of ieee isit _ , jul . 2010 .k. v. rashmi , n. b. shah and p. v. kumar , optimal exact - regenerating codes for distributed storage at the msr and mbr point via a product - matrix construction " , _ ieee trans . on inform .57 , no .57 , pp .5227 - 5239 , aug .i. tamo , z. wang , and j. bruck , zigzag codes : mds array codes with optimal rebuilding , " _ corr _ , vol .abs/1112.0371 , dec . 2011 .a. datta and f. oggier , an overview of codes tailor - made for networked distributed data storage , " _abs/1109.2317 , sep .a. g. dimakis , k. ramchandran , y. wu , and c. suh , a survey on network codes for distributed storage , " proceedings of the ieee , mar .p. gopalan , c. huang , h. simitchi and s. yekhanin , `` on the locality of codeword symbols , '' _ ieee trans . on inform .58 , no . 11 , pp . 6925 - 6934 , nov . 2012. d. s. papailiopoulos and a. g. 
dimakis , `` locally repairable codes , '' in _ proc . of ieee isit _ , jul. 2012 .n. prakash , g. m. kamath , v. lalitha , and p. v. kumar , `` optimal linear codes with a local - error - correction property , '' in _ proc . of ieee isit _ , jul .f. e. oggier and a. datta , `` homomorphic self - repairing codes for agile maintenance of distributed storage systems , '' _ corr _ , vol .abs/1107.3129 , jul .f. e. oggier and a. datta , `` self - repairing codes for distributed storage - a projective geometric construction , '' _ corr _ , vol .abs/1105.0379 , may 2011 . c. huang , h. simitci , y. xu , a. ogus , b. calder , p. gopalan , j. li , and s. yekhanin,erasure coding in windows azure storage , " _ in proc .usenix annual technical conference ( atc ) _ , apr . 2012 .m. sathiamoorthy , m. asteris , d. papailiopoulos , a. g. dimakis , r. vadali , s. chen , and d. borthakur , xoring elephants : novel erasure codes for big data , " _abs/1301.3791 , jan .2013 . c. huang , m. chen , and j. li , `` pyramid code : flexible schemes to trade space for access efficiency in reliable data storage systems , '' in _ proc . of 6th ieee nca _ , mar .j. han and l.a .lastras - montano , `` reliable memories with subline accesses , '' in _ proc . of ieee isit 2007 _ , jun .g m. kamath , n. prakash , v. lalitha , and p. v. kumar , `` codes with local regeneration,''_corr _ , vol .abs/1211.1932 , nov .h. d. l. hollmann , `` storage codes coding rate and repair locality,''_corr _ , vol .abs/1301.4300 , jan .e m. gabidulin , `` theory of codes with maximum rank distance , '' _ problems of information transmission _ , vol .21 , pp . 1 - 12 , jul .r. m. roth , `` maximum - rank array codes and their application to crisscross error correction , '' _ ieee trans . on inform .37 , pp .328 - 336 , mar .f. j. macwilliams and n. j. a. sloane , _ the theory of error - correcting codes _ , north - holland , 1978 .gabidulin and n.i .pilipchuk , `` error and erasure correcting algorithms for rank codes , '' _ designs , codes and cryptography _ , vol .105122 , 2008 .m. blaum , j. brady , j. bruck , and j. menon , `` evenodd : an efficient scheme for tolerating double disk failures in raid architectures , '' _ ieee trans . on computers _ ,2 , pp . 192202 , feb .m. blaum and r. m. roth , on lowest density mds codes " , _ ieee trans .inform . theory _45 , pp . 4659 , 1999 .y. cassuto and j. bruck,cyclic low - density mds array codes " , in _ proc . of ieee isit _ , jul .n. silberstein , a. s. rawat and s. vishwanath , `` error resilience in distributed storage via rank - metric codes , '' in _ proc . of 50th allerton _, available in _ http://arxiv.org/abs/1202.0800_ , oct .a. s. rawat , o. o. koyluoglu , n. silberstein , and s. vishwanath , optimal locally repairable and secure codes for distributed storage systems " , _ corr _ , vol .abs/1210.6954 , oct .to prove that attains the bound ( [ eq : upp_bound ] ) we need to show that any node erasures can be corrected by . for this purposewe will prove that any erasures of correspond to at most rank erasures of the underlying gabidulin code and thus can be corrected by the code .here , we point out the the worst case erasure pattern is when the erasures appear in the smallest possible number of groups and the number of erasures inside a local group is maximal . 1. let . 
then and .* if then and .in this case by ( [ eq : bound_rewrite ] ) we have .hence , in the worst case we have groups with all the erased nodes and one additional group with erased nodes , which by remark [ rm : linearized property ] corresponds to rank erasures in groups of the corresponding gabidulin codeword . since by ( [ eq : rank_erasures ] ) , ,this erasures can be corrected by the gabidulin code . * if , then and .then by ( [ eq : bound_rewrite ] ) we have .hence , in the worst case we have groups with all the erased nodes and one additional group with erased nodes , which by remark [ rm : linearized property ] corresponds to rank erasures that can be corrected by the gabidulin code . * if then and .then by ( [ eq : bound_rewrite ] ) we have .hence , in the worst case we have groups with all the erased nodes and one additional group with erased nodes , which corresponds to rank erasures that can be corrected by the gabidulin code .2 . let .then , and . * if then and .then by ( [ eq : bound_rewrite ] ) , we have .hence , in the worst case we have groups with all the erased nodes and one additional group with erased nodes ( or erased nodes in the smallest group , groups with all the erased nodes and one group with erased nodes ) .this by remark [ rm : linearized property ] corresponds to rank erasures that can be corrected by the gabidulin code . *if then and .then by ( [ eq : bound_rewrite ] ) we have .hence , in the worst case we have groups with all the erased nodes and one additional group with erased nodes ( or erased nodes in the smallest group , groups with all the erased nodes and one group with erased nodes ) .this by remark [ rm : linearized property ] corresponds to rank erasures that can be corrected by the gabidulin code .
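For completeness, a sketch of the first encoding step of Construction I in the case q = 2: the message symbols are the coefficients of a linearized polynomial f(x) = sum_j f_j x^(2^j), which is evaluated at points of GF(2^m) that are linearly independent over GF(2). The field GF(2^4) with the irreducible polynomial x^4 + x + 1 is an illustrative choice; only the outer MRD encoding is shown, not the subsequent grouping and local parities.

```python
def gf2m_mul(a, b, m=4, poly=0b10011):
    """Multiply in GF(2^m) with the given irreducible polynomial (shift-and-add with reduction)."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a >> m & 1:
            a ^= poly
    return res

def linearized_eval(coeffs, g, m=4):
    """Evaluate f(x) = sum_j coeffs[j] * x^(2^j) at the point g, all in GF(2^m)."""
    acc, frob = 0, g
    for c in coeffs:
        acc ^= gf2m_mul(c, frob, m)
        frob = gf2m_mul(frob, frob, m)      # next Frobenius power: g -> g^2
    return acc

def gabidulin_encode(message, points, m=4):
    """Gabidulin codeword: evaluations of the linearized message polynomial at
    points that are linearly independent over GF(2) (here n = m = 4)."""
    return [linearized_eval(message, g, m) for g in points]

msg = [0b0011, 0b0101]                        # K = 2 message symbols of GF(2^4)
points = [0b0001, 0b0010, 0b0100, 0b1000]     # 1, x, x^2, x^3: independent over GF(2)
print([format(c, "04b") for c in gabidulin_encode(msg, points)])
```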
this paper presents a new explicit construction for locally repairable codes ( lrcs ) for distributed storage systems which possess all - symbols locality and maximal possible minimum distance , or equivalently , can tolerate the maximal number of node failures . this construction , based on maximum rank distance ( mrd ) gabidulin codes , provides new optimal vector and scalar lrcs . in addition , the paper also discusses mechanisms by which codes obtained using this construction can be used to construct lrcs with efficient repair of failed nodes by combination of lrc with regenerating codes .
the notion of directed information introduced by massey in assesses the amount of information that causally `` flows '' from a given random and ordered sequence to another . for this reason, it has increasingly found use in diverse applications , from characterizing the capacity of channels with feedback , the rate distortion function under causality constraints , establishing some of the fundamental limitations in networked control , determining causal relationships in neural networks , to portfolio theory and hypothesis testing , to name a few . the directed information from a random ) for random variables , denoting a particular realization by the corresponding italic character , . ]sequence to a random sequence is defined as where the notation represents the sequence .the causality inherent in this definition becomes evident when comparing it with the mutual information between and , given by . in the latter sum ,what matters is the amount of information about the _sequence present in , given the past values .by contrast , in the conditional mutual informations in the sum of , only the past and current values of are considered , that is , .thus , represents the amount of information causally conveyed from to . thereexist several results characterizing the relationship between and .first , it is well known that , with equality if and only if is causally related to . a conservation law of mutual and directed information has been found in , which asserts that , where denotes the concatenation .given its prominence in settings involving feedback , it is perhaps in these scenarios where the directed information becomes most important .for instance , the directed information has been instrumental in characterizing the capacity of channels with feedback ( see , e.g. , and the references therein ) , as well as the rate - distortion function in setups involving feedback . in this paper , our focus is on the relationships ( inequalities and identities ) involving directed and mutual informations within feedback systems , as well as between directed informations involving different signals within the corresponding feedback loop . in order to discuss some of the existing results related to this problem , it is convenient to consider the general feedback system shown in fig .[ fig : diagramas]-(a ) . in this diagram, the blocks represent possibly non - linear and time - varying causal systems such that the total delay of the loop is at least one sample . in the same figure , are exogenous random signals ( scalars , vectors or sequences ) , which could represent , for example , any combination of disturbances , noises , random initial states or side informations .we note that any of these exogenous signals , in combination with its corresponding deterministic mapping , can also yield any desired stochastic causal mapping . for the simple case in which all the systems linear time invariant ( lti ) and stable , and assuming ( deterministically ) , it was shown in that does not depend on whether there is feedback from to or not .inequalities between mutual and directed informations in a less restricted setup , shown in fig .[ fig : diagramas]-(b ) , have been found in . 
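For finite alphabets and small n, the directed information can be computed by brute force directly from its definition as the sum over i of I(X^i; Y_i | Y^{i-1}). A sketch follows, assuming the joint pmf of the two sequences is available as a dictionary; logarithms are base 2, and the toy channel in the usage example is a hypothetical noiseless forward link.

```python
from collections import defaultdict
from itertools import product
from math import log2

def directed_information(joint, n):
    """I(X^n -> Y^n) for a joint pmf given as {(x_tuple, y_tuple): prob} over length-n sequences."""
    def marg(fn):
        m = defaultdict(float)
        for (x, y), p in joint.items():
            m[fn(x, y)] += p
        return m
    di = 0.0
    for i in range(1, n + 1):
        pxy  = marg(lambda x, y: (x[:i], y[:i]))         # p(x^i, y^i)
        pxyp = marg(lambda x, y: (x[:i], y[:i - 1]))     # p(x^i, y^{i-1})
        py   = marg(lambda x, y: y[:i])                  # p(y^i)
        pyp  = marg(lambda x, y: y[:i - 1])              # p(y^{i-1})
        for (xi, yi), p in pxy.items():
            if p > 0:
                di += p * log2(p * pyp[yi[:i - 1]] / (pxyp[(xi, yi[:i - 1])] * py[yi]))
    return di

# toy example: Y_i = X_i over a noiseless channel, X_i i.i.d. uniform bits, n = 2
n = 2
joint = {(x, x): 0.25 for x in product((0, 1), repeat=n)}
print(round(directed_information(joint, n), 6))   # 2.0 bits, equal to I(X^n; Y^n) here
```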
in that setting ( a networked - control system ) , is a strictly causal lti dynamic system having ( vector ) state sequence , with being the random initial state in its state - space representation .the external signal ( which could correspond to a disturbance ) is statistically independent of , the latter corresponding to , for example , side information or channel noise . both are also statistically independent of .the blocks labeled , and correspond to an encoder , a decoder and a channel , respectively , all of which are causal .the channel maps and to in a possibly time - varying manner , i.e. , similarly , the concatenation of the encoder , the channel and the decoder , maps and to as a possibly time - dependent function under these assumptions , the following fundamental result was shown in ( * ? ? ?* lemma 5.1 ) : by further assuming in that the decoder in fig .[ fig : diagramas]-(b ) is deterministic , the following markov chain naturally holds , leading directly to which is found in the proof of ( * ? ? ?* corollary 5.3 ) .the deterministic nature of the decoder played a crucial role in the proof of this result , since otherwise the markov chain does not hold , in general , due to the feedback from to .notice that both and provide lower bounds to the difference between two mutual informations , each of them relating a signal _ external _ to the loop ( such as ) to a signal _ internal _ to the loop ( such as or ) . instead , the inequality which holds for the system in fig. [ fig : diagramas]-(a ) and appears in ( * ? ? ? * theorem 3 ) ( and rediscovered later in ( * ? ? ?* lemma 4.8.1 ) ) , involves the directed information between two internal signals and the mutual information between the second of these and an external sequence .a related bound , similar to but involving information rates and with the leftmost mutual information replaced by the directed information from to ( which are two signals internal to the loop ) , has been obtained in ( * ? ? ? * lemma 4.1 ) : with and , provided .this result relies on three assumptions : a ) that the channel is memory - less and satisfies a `` conditional invertibility '' property , b ) a finite - memory condition , and c ) a fading - memory condition , these two related to the decoder ( see fig .[ fig : diagramas ] ) .it is worth noting that , as defined in , these assumptions upon exclude the use of side information by the decoder and/or the possibility of being affected by random noise or having a random internal state which is non - observable ( please see for a detailed description of these assumptions ) .the inequality has recently been extended in ( * ? ? ? * theorem 1 ) , for the case of discrete - valued random variables and assuming , as the following identity ( written in terms of the signals and setup shown in fig .[ fig : diagramas]-(a ) ) : letting in fig .[ fig : diagramas]-(a ) and with the additional assumption that , it was also shown in ( * ? ? ? * theorem 1 ) that for the cases in which ( i.e. , when the concatenation of and corresponds to a summing node ) . in , and play important roles in characterizing the capacity of channels with noisy feedback . 
to the best of our knowledge , , , , and are the only results available in the literature which lower bound the difference between an internal - to - internal directed information and an external - to - internal mutual information .there exist even fewer published results in relation to inequalities between two directed informations involving only signals internal to the loop . to the best of our knowledge ,the only inequality of this type in the literature is the one found in the proof of theorem 4.1 of .the latter takes the form of a ( quasi ) data - processing inequality for directed informations in closed - loop systems , and states that provided to mean `` is independent of '' . ] and if is such that is a function of ( i.e. , if is conditionally invertible ) . in, corresponds to the causally conditioned directed information defined in .inequality plays a crucial role , since it allowed lower bounding the average data rate across a digital error - free channel by a directed information .( in , corresponded to a random dither signal in an entropy - coded dithered quantizer . ) in this paper , we derive a set of information identities and inequalities involving pairs of sequences ( internal or external to the loop ) in feedback systems .the first of these is an identity which , under an independence condition , can be interpreted as a law of conservation of information flows . the latter identity is the starting point for most of the results which follow it . among other things , we extend and to the general setup depicted in fig . [fig : diagramas]-(a ) , where _ none of the assumptions made in ( except causality ) needs to hold_. moreover , we will prove the validity of without assuming the conditional invertibility of nor that .the latter result is one of four novel data - processing inequalities derived in section [ ssec : nested_directed ] , each involving two nested directed informations valid for the system depicted in fig .[ fig : diagramas]-(a ) .the last of these is a complete closed - loop counterpart of the traditional open - loop data - processing inequality .the remainder of this paper begins with a description of the systems under study and the extension of massey s directed information to the case in which each of the blocks in the loop may introduce an arbitrary , non - negative delay ( i.e. , we do not allow for anticipation ) .the information identities and inequalities are presented in section [ sec : results ] .for clarity of the exposition , all the proofs are deferred to section [ sec : proofs ] .a brief discussion of potential applications of our results is presented in section [ sec : possible_applications ] , which is followed by the conclusions in section [ sec : conclusions ] .we begin by providing a formal description of the systems labeled in fig . [fig : diagramas]-(a ) .their input - output relationships are given by the possibly - varying deterministic mappings [ eq : block_defs ] where are exogenous random signals and the ( possibly time - varying ) delays are such that that is , the concatenation of has a delay of at least one sample .for every , , i.e. , is a real random vector whose dimension is given by some function .the other sequences ( ) are defined likewise . as stated in , the directed information ( as defined in )is a more meaningful measure of the flow of information between and than the conventional mutual information when there exists causal feedback from to . 
in particular , if and are discrete - valued sequences , input and output , respectively , of a forward channel , and if there exists _ strictly causal _ , perfect feedback ,so that ( a scenario utilized in as part of an argument in favor of the directed information ) , then the mutual information becomes thus , when strictly causal feedback is present , fails to account for how much information about has been conveyed to through the forward channel that lies between them .it is important to note that , in ( as well as in many works concerned with communications ) , the forward channel is instantaneous , i.e. , it has no delay .therefore , if a feedback channel is utilized , then this feedback channel must have a delay of at least one sample , as in the example above .however , when studying the system in fig .[ fig : diagramas]-(a ) , we may need to evaluate the directed information between signals and which are , respectively , input and output of a _ strictly casual _forward channel ( i.e. , with a delay of at least one sample ) , whose output is instantaneously fed back to its input . in such case , if one further assumes perfect feedback and sets , then , in the same spirit as before , = h(\rvay^{k } ) .\end{aligned}\ ] ] as one can see , massey s definition of directed information ceases to be meaningful if instantaneous feedback is utilized .it is natural to solve this problem by recalling that , in the latter example , the forward channel had a delay , say , greater than one sample .therefore , if we are interested in measuring how much of the information in , not present in , was conveyed from through the forward channel , we should look at the mutual information , because only the input samples can have an influence on .for this reason , we introduce the following , modified notion of directed information _ in this paper , the directed information from to through a forward channel with a non - negative time varying delay of samples is defined as _ for a zero - delay forward channel ,the latter definition coincides with massey s .likewise , we adapt the definition of causally - conditioned directed information to the definition when the signals , and are related according to . before finishing this section ,it is convenient to recall the following identity ( a particular case of the chain rule of conditional mutual information ) , which will be extensively utilized in the proofs of our results : begin by stating a fundamental result , _ which relates the directed information between two signals within a feedback loop , say and , to the mutual information between an external set of signals and _ : [ thm : main ] _ in the system shown in fig .[ fig : diagramas]-(a ) , it holds that with equality achieved if is independent of ._ this fundamental result , which for the cases in which can be understood as a _ law of conservation of information flow _ , is illustrated in fig .[ fig : information_flow ] . for such cases ,the information causally conveyed from to equals the information flow from to .when are not independent of , part of the mutual information between and ( corresponding to the term ) can be thought of as being `` leaked '' through , thus bypassing the forward link from to .this provides an intuitive interpretation for .theorem [ thm : main ] implies that is only a part of ( or at most equal to ) the information `` flow '' between all the exogenous signals entering the loop outside the link ( namely ) , and . 
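as a hedged reconstruction ( the displayed definitions are missing in this copy ) , the delay-adapted directed information introduced above and the chain-rule identity recalled at the end of this passage would read , with $d_{i}\ge 0$ the forward-channel delay at time $i$ :

```latex
% plausible reading of the delay-adapted definition described in the text,
% together with the particular case of the chain rule it says will be used.
I\bigl(X^{k} \to Y^{k}\bigr) \;\triangleq\; \sum_{i=1}^{k} I\bigl(X^{i-d_{i}};Y_{i}\mid Y^{i-1}\bigr),
\qquad
I(X;Y,Z\mid W) \;=\; I(X;Z\mid W) + I(X;Y\mid Z,W).
```

setting $d_{i}=0$ recovers massey s definition , consistent with the remark above that the two coincide for a zero-delay forward channel .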
in particular , if were deterministic , then , regardless of the blocks and irrespective of the nature of . by using , .then , applying theorem [ thm : main ] , we recover , whenever . thus , ( * ? ? ?* theorem 3 ) and ( * ? ? ?* lemma 4.8.1 ) ) can be obtained as a corollary of theorem [ thm : main ] .the following result provides an inequality relating with the separate flows of information and .[ thm : from_splitting_more_precise ] _ for the system shown in fig .[ fig : diagramas]-(a ) , if and , then with equality if and only if the markov chain holds ._ theorem [ thm : from_splitting_more_precise ] shows that , provided , is lower bounded by the sum of the individual flows from all the subsets in any given partition of , to , provided these subsets are mutually independent .indeed , both theorems [ thm : main ] and [ thm : from_splitting_more_precise ] can be generalized for any appropriate choice of external and internal signals .more precisely , let be the set of all external signals in a feedback system .let and be two internal signals in the loop .define as the set of exogenous signals which are introduced to the loop at every subsystem that lies in the path going from to .thus , for any , if , we have that and become respectively .to finish this section , we present a stronger , non - asymptotic version of inequality : [ thm : three_full_loops ] _ in the system shown in fig .[ fig : diagramas]-(a ) , if are mutually independent , then _ as anticipated , theorem [ thm : three_full_loops ] can be seen as an extension of to the more general setup shown in fig .[ fig : diagramas]-(a ) , where the assumptions made in ( * ? ? ? * lemma 4.1 ) do not need to hold .in particular , letting the decoder and in fig .[ fig : diagramas]-(b ) correspond to and in fig . [fig : diagramas]-(a ) , respectively , we see that inequality holds even if and have dependent initial states , or if the internal state of is not observable .theorem [ thm : three_full_loops ] also admits an interpretation in terms of information flows . this can be appreciated in the diagram shown in fig .[ fig : flujos2 ] , which depicts the individual full - turn flows ( around the entire feedback loop ) stemming from , and . theorem [ thm : three_full_loops ] states that the sum of these individual flows is a lower bound for the directed information from to , provided are independent .this section presents three closed - loop versions of the data processing inequality _ relating two directed informations _ , both between pairs of signals _ internal _ to the loop . 
as already mentioned in section [ sec : intro ] , to the best of our knowledge ,the first inequality of this type to appear in the literature is the one in theorem 4.1 in ( see ) .recall that the latter result stated that , requiring to be such that is a deterministic function of and that .the following result presents another inequality which also relates two nested directed informations , namely , and , but requiring only that .[ thm : dpi_dir_dir ] _ for the closed - loop system in fig .[ fig : diagramas]-(b ) , if , then _ notice that theorem [ thm : dpi_dir_dir ] does not require to be independent of or .this may seem counter - intuitive upon noting that enters the loop between the link from to .the following theorem is an identity between two directed informations involving only internal signals .it can also be seen as a complement to theorem [ thm : dpi_dir_dir ] , since it can be directly applied to establish the relationship between and .[ thm : finally ] _ for the system shown in fig .[ fig : diagramas]-(a ) , if , then with equality if , in addition , . in the latter case , it holds that notice that , by requiring additional independence conditions upon the exogenous signals ( specifically , ) , theorem [ thm : finally ] ( and , in particular , ) yields which strengthens the inequality in ( * ? ? ?* theorem 4.1 ) ( stated above in ) .more precisely , does not require conditioning one of the directed informations and holds irrespective of the invertibility of the mappings in the loop .a closer counterpart of ( i.e. , of ( * ? ? ?* theorem 4.1 ) ) , involving , is presented next .[ thm : xtoycond ] _ for the system shown in fig .[ fig : diagramas]-(a ) , if , then _ _ where the equality labeled hods if , in addition , the markov chain is satisfied for all . _thus , provided , yields that holds regardless of the invertibility of , requiring instead that , for all , any statistical dependence between and resides only in ( i.e. , that markov chain holds ) .the results derived so far relate directed informations having either the same `` starting '' sequence or the same `` destination '' sequence .we finish this section with the following corollary , which follows directly by combining theorems [ thm : dpi_dir_dir ] and [ thm : finally ] and relates directed informations involving four different sequences internal to the loop .[ coro : full_d - dpi ] _ for the system shown in fig .[ fig : diagramas]-(a ) , if and , then equality holds in if , in addition , ( i.e. , if are mutually independent ) . _ to the best of our knowledge , corollary [ coro : full_d - dpi ] is the first result available in the literature providing a lower bound to the gap between two nested directed informations , involving four different signals inside the feedback loop .this result can be seen as the first full extension of the open - loop ( traditional ) data - processing inequality , to arbitrary closed - loop scenarios .( notice that there is no need to consider systems with more than four mappings , since all external signals entering the loop between a given pair of internal signals can be regarded as exogenous inputs to a single equivalent deterministic mapping . )we start with the proof of theorem [ thm : main ] .it is clear from fig .[ fig : diagramas]-(a ) and from that the relationship between , , , , and can be represented by the diagram shown in fig .[ fig : key ] . 
from this diagram and lemma [ lem : not_so_obvious ] ( in the appendix )it follows that if is independent of , then the following markov chain holds : denoting the triad of exogenous signals by we have the following [ eq : lanueva ] \nonumber\\ & \overset{(a)}{= } \sumfromto{i=1}{k } \left [ i(\theta^{i};\rvay(i)|\rvay^{i-1 } ) - i(\theta^{i};\rvay(i)|\rvax^{i - d_{3}(i)},\rvay^{i-1 } ) \right]\label{eq : solo_directeds } \\ & \overset{(b)}{\leq } \sumfromto{i=1}{k } i(\theta^{i};\rvay(i)|\rvay^{i-1 } ) \overset{(c)}{\leq } \sumfromto{i=1}{k } i(\theta^{k};\rvay(i)|\rvay^{i-1 } ) \\&= i(\theta^{k};\rvay^{k}).\end{aligned}\ ] ] in the above , follows from the fact that , if is known , then is a deterministic function of .the resulting sums on the right - hand side of correspond to , and thereby proving the first part of the theorem , i.e. , the equality in . in turn, stems from the non - negativity of mutual informations , turning into equality if , as a direct consequence of the markov chain in .finally , equality holds in if , since depends causally upon .this shows that equality in is achieved if , completing the proof .apply the chain - rule identity to the rhs of to obtain now , applying twice , one can express the term as follows : where the second equality follows since .the result then follows directly by combining with and . since , where is due to theorem [ thm : finally ], follows from theorem [ thm : main ] and the fact that and from the chain rule of mutual information . for the second term on the rhs of the last equation ,we have where holds since , , and stem from the chain rule of mutual information , and is a consequence of the markov chain which is due to the fact that .finally , is due to the markov chain , which holds because as a consequence of lemma [ lem : not_so_obvious ] in the appendix ( see also fig . [ fig : diagramas]-(a ) ) .substitution of into yields , thereby completing the proof . since , we can apply ( where now plays the role of ) , and obtain now , we apply theorem [ thm : main ] , which gives completing the proof . applying theorem [ thm : main ] , since , forthe other directed information , we have that where follows from theorem [ thm : main ] , which also states that equality is reached if and only if . in turn , is due to the fact that is a deterministic function of .equality holds if and only if . 
finally ,from lemma [ lem : not_so_obvious ] ( in the appendix ) , turns into equality if .substitution of into yields , completing the proof .we begin with the second part of the theorem , proving the validity of the equality in .we have the following : \\ & \overset{(a)}{\leq } \sumfromto{i=1}{k } i(\rvar^{i},\rvap^{i},\rvax^{i - d_{3}(i)};\rvay(i)|\rvay^{i-1},\rvaq^{i } ) \\ & \overset{(b)}{= } \sumfromto{i=1}{k } i(\rvar^{i},\rvap^{i};\rvay(i)|\rvay^{i-1},\rvaq^{i } ) \\ & \overset{\eqref{eq : chainrule_i}}{= } \sumfromto{i=1}{k } \left [ i(\rvar^{i},\rvap^{i},\rvaq_{i+1}^{k};\rvay(i)|\rvay^{i-1},\rvaq^{i } ) - i(\rvaq_{i+1}^{k};\rvay(i)|\rvay^{i-1},\rvaq^{i},\rvar^{i},\rvap^{i } ) \right ] \\ & \overset{(c)}{= } \sumfromto{i=1}{k } \left [ i(\rvar^{i},\rvap^{i},\rvaq_{i+1}^{k};\rvay(i)|\rvay^{i-1},\rvaq^{i } ) \right ] \\ & \overset{\eqref{eq : chainrule_i}}{= } \sumfromto{i=1}{k } \left [ i(\rvar^{i},\rvap^{i};\rvay(i)|\rvay^{i-1},\rvaq^{k } ) + i(\rvaq_{i+1}^{k};\rvay(i)|\rvay^{i-1},\rvaq^{i } ) \right ] \\ & \overset{(d)}{= } \sumfromto{i=1}{k } i(\rvar^{i},\rvap^{i};\rvay(i)|\rvay^{i-1},\rvaq^{k } ) \label{eq : directed_normal_cond } \\ & \overset{(e)}{\leq } \sumfromto{i=1}{k } i(\rvar^{k},\rvap^{k};\rvay(i)|\rvay^{i-1},\rvaq^{k } ) = i(\rvar^{k},\rvap^{k};\rvay^{i}|\rvaq^{k } ) \label{eq : irprgivenq}\end{aligned}\ ] ] where equality holds in if and only if the markov chain holds for all ( as a straightforward extension of lemma [ lem : not_so_obvious ] ) . in our case ,the latter markov chain holds since we are assuming . in turn, stems from the fact that , for all , is a function of . to prove , we resort to and write from the definitions of the blocks ( in ) , it can be seen that , given , the triad of random sequences is a deterministic function of ( at most ) . recalling that and that ( see ) , it readily follows that , and thus each of the mutual informations on the right - hand - side of is zero . to verify the validity of , we use and obtain where now follows since , where the last term in this chain of inequalities was shown to be zero in the proof of .equality holds in if and only if , a markov chain which is satisfied in our case from the fact that and from lemma [ lem : not_so_obvious ] . finally , since , we have that the chain of equalities from to holds , from which we conclude that inserting this result into and invoking theorem [ thm : main ] we arrive at equality in .to prove the first equality the , it suffices to notice that corresponds to the sum on the right - hand - side of , from where we proceed as with the first part .this completes the proof of the theorem .information inequalities and , in particular , the data - processing inequality , have played a fundamental role in information theory and its applications .it is perhaps the lack of a similar body of results associated with the directed information ( and with non - asymptotic , causal information transmission ) which has limited the extension of many important information - theoretic ideas and insights to situations involving feedback or causality constraints .two such areas , already mentioned in this paper , are the understanding of the fundamental limitations arising in networked control systems over noiseless digital channels , and causal rate distortion problems . 
in those contexts, causality is of paramount relevance an thus the directed information appears , naturally , as the appropriate measure of information flow ( see , for example , and ) .we believe that our results might help gaining insights into the fundamental trade - offs underpinning those problems , and might also allow for the solution of open problems such as , for instance , characterizing the minimal average data - rate that guarantees a given performance level ( an improved version of the latter paper , which extensively uses the results derived here , is currently under preparation by the authors ) . on a different vein , directed mutual information plays a role akin to that of ( standard ) mutual information when characterizing channel feedback capacity ( see , e.g. , and the references therein ) .our results may also play a role in expanding the understanding of communication problems over channels used with feedback , particularly when including in the analysis additional exogenous signals such as a random channel state , interference and , in general , any form of side information .thus , we hope that the inequalities and identities presented in section [ sec : results ] may help in extending results such as dirty - paper coding , watermarking , distributed source coding , multi - terminal coding , and data encryption , to scenarios involving causal feedback .in this paper , we have derived fundamental relations between mutual and directed informations in general discrete - time systems with feedback .the first of these is an inequality between the directed information between to signals inside the feedback loop and the mutual information involving a subset of all the exogenous incoming signals .the latter result can be interpreted as a law of conservation of information flows for closed - loop systems .crucial to establishing these bounds was the repeated use of chain rules for conditional mutual information as well as the development of new markov chains .the proof techniques do not rely upon properties of entropies or distributions , and the results hold in very general cases including non - linear , time - varying and stochastic systems with arbitrarily distributed signals .indeed , the only restriction is that all blocks within the system must be causal mappings , and that their combined delay must be at least one sample .a new generalized data processing inequality was also proved , which is valid for nested directed informations within the loop .a key insight to be gained from this inequality was that the further apart the signals are in the loop , the lower is the directed information between them .this closely resembles the behavior of mutual information in open loop systems , where it is well known that any independent processing of the signals can only reduce their mutual information .[ lem : not_so_obvious ] in the system shown in fig .[ fig:2systems ] , the exogenous signals are mutually independent and are deterministic ( possibly time - varying ) causal maps characterized by , , , for some . since and are deterministic functions , it follows that for every possible pair of sequences , the sets and are also deterministic .thus , and .this means that for every pair of borel sets of appropriate dimensions , where follows from the fact that . this completes the proof .m. s. derpich and j. stergaard , `` improved upper bounds to the causal quadratic rate - distortion function for gaussian stationary sources , '' _ ieee trans .inf . theory _ ,58 , no . 
5 , pp. 3131-3152 , may 2012 .
n. martins and m. dahleh , "feedback control in the presence of noisy channels : bode-like fundamental limitations of performance ," ieee trans. , vol. 53 , no. 7 , pp. 1604-1615 , aug.
e. i. silva , m. s. derpich , and j. østergaard , "a framework for control system design subject to average data-rate constraints ," ieee trans. control , vol. 56 , no. 8 , pp. 1886-1899 , june 2011 .
--- , "on the minimal average data-rate that guarantees a given closed loop performance level ," in proc. 2nd ifac workshop on distributed estimation and control in networked systems , necsys , annecy , france , 2010 , pp. 67-72 .
h. h. permuter , y.-h. kim , and t. weissman , "interpretations of directed information in portfolio theory , data compression , and hypothesis testing ," ieee trans. inf. theory , vol. 57 , pp. 3248-3259 , june 2011 .
h. zhang and y.-x. sun , "directed information and mutual information in linear feedback tracking systems ," in proc. 6th world congress on intelligent control and automation , june 2006 , pp. 723-727 .
j. zola , m. aluru , a. sarje , and s. aluru , "parallel information-theory-based construction of genome-wide gene regulatory networks ," ieee transactions on parallel and distributed systems , vol. 21 , no. 12 , pp. 1721-1733 , dec. 2010 .
n. merhav , "data-processing inequalities based on a certain structured class of information measures with application to estimation theory ," ieee trans. inf. theory , vol. 58 , pp. 5287-5301 , aug. 2012 .
r. zamir , s. shamai , and u. erez , "nested linear/lattice codes for structured multiterminal binning ," ieee trans. inf. theory , special a. d. wyner issue , pp. 1250-1276 , june 2002 .
we present several novel identities and inequalities relating the mutual information and the directed information in systems with feedback . the internal blocks within such systems are required only to be causal mappings , but are allowed to be non-linear , stochastic and time-varying . moreover , the signals involved can be arbitrarily distributed . we bound the directed information between signals inside the feedback loop by the mutual information between signals inside and outside the feedback loop . this fundamental result has an interesting interpretation as a law of conservation of information flow . building upon it , we derive several novel identities and inequalities , which allow us to prove some existing information inequalities under less restrictive assumptions . finally , we establish new relationships between nested directed informations inside a feedback loop , which yield a new and general data-processing inequality for systems with feedback .
laplacians arise in many different mathematical contexts ; three in particular that will interest us : manifolds , graphs and fractals .there are connections relating these different types of laplacians .manifold laplacians may be obtained as limits of graph laplacians for graphs arising from triangulations of the manifold ( ) .kigami s approach of construction laplacians on certain fractals , such as the sierpinski gasket ( sg ) , also involves taking limits of graph laplacians for graphs that approximate the fractal ( ) . in this paperwe present another connection , where we approximate the fractal from without by planar domains , and attempt to capture spectral information about the fractal laplacian from spectral information about the standard laplacian on the domains .thus we add an arrow to the diagram : \ar[dr ] & \\ \mbox{manifolds } \ar@{.>}[rr ] & & \mbox{fractals } } \ ] ] we should point out that the probabilistic approach to constructing laplacians on fractals also involves approximating from without , but in that case it is the stochastic process generated by the laplacian that is approximated , so it is not clear how to obtain spectral information .we may describe our method succinctly as follows .suppose we have a self - similar fractal in the plane , determined by the identity where is a finite set of contractive similarities ( called an _ iterated function system , _ ifs ) .choose a bounded open set whose closure contains , and form the sequence of domains consider the standard laplacian on with neumann boundary conditions ( recall that such conditions make sense even for domains with rough boundary ) .let denote the eigenvalues in increasing order ( repeated in case of nontrivial multiplicity ) with eigenfunctions ( normalized ) .so of course with constant .we then hope to find a renormalization factor such that exists and exists .( we have to be careful in cases of nontrivial multiplicity , and we may have to adjust by a minus sign in general ) .if this is the case then we may simply define a self - adjoint operator on by of course we would also like to identify with a previously defined laplacian , if such is possible , or at least show that is a local operator satisfying some sort of self - similarity .this may seem like wishful thinking , but it is not implausible .after all , many other types of structures on fractals can be obtained as limits of structures on , so why not a laplacian ?after reading this paper , we hope the reader will agree that there is a lot of evidence that this method should work in many cases .we leave to the future the challenge of describing exactly when it works , and why .we note one great advantage of our method : it not only approximates the laplacian , but it gives information about the spectrum .other methods of constructing laplacians on fractals do not yield spectral information directly .of course , not all spectral information is immediately available . in particular ,asymptotic information must be lost , since we know from weyl s law that for each fixed , but for fractals laplacians this is not the case .this means , in particular , that the limit ( [ one4 ] ) is not uniform in . to get information about for large requires taking a large value for . 
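as a concrete illustration of the renormalization step just described , the following sketch ( hypothetical helper names , not the authors code ) takes neumann eigenvalue lists computed on the approximating domains and estimates a factor $r$ such that $r^{m}\lambda_{n}^{(m)}$ settles for the low-lying part of the spectrum :

```python
import numpy as np

# minimal sketch, assuming the neumann eigenvalues of the approximating domains
# are already available as ascending arrays lam[m] (lam[m][0] = 0).  the choice
# of reference index and the averaging used to estimate r are illustrative only.

def estimate_renormalization(lam, n_ref=1):
    """estimate r from how a fixed low eigenvalue decays from level m to m + 1."""
    levels = sorted(lam)
    ratios = [lam[m][n_ref] / lam[m + 1][n_ref] for m in levels if m + 1 in lam]
    return float(np.mean(ratios))        # r such that r**m * lam[m][n] should settle

def renormalized_spectra(lam, r):
    """return {m: r**m * eigenvalues}; only the low-lying entries are meaningful."""
    return {m: (r ** m) * np.asarray(vals, dtype=float) for m, vals in lam.items()}

# intended usage, with eigenvalue lists produced elsewhere (e.g. by a fem solver):
#   r = estimate_renormalization(lam)
#   spectra = renormalized_spectra(lam, r)
```

in line with the warning above that the limit is not uniform in $n$ , only the bottom of each renormalized list should be trusted .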
in practice ,our numerical calculations get stuck around .so we only see an approximation to a segment at the bottom of the spectrum .but this is already enough to reveal aspects of the spectrum that are provable .briefly , if the fractal has a nontrivial finite group of symmetries , then every neumann eigenfunction can be miniaturized , and so there is an eigenvalue renormalization factor such that if is an eigenvalue then so is . the argument for this works for the approximating domains and also for a self - similar laplacian on the fractal .( in fact the argument could be presented on the fractal alone , so its validity is independent of the validity of the outer approximation method , but in fact it was discovered by examining the experimental data ! ) so what is the evidence for the validity of the outer approximation method ?first we show that it works for the case when is the unit interval ( embedded in the x - axis in the plane ) . in this casewe can take and .if we take to be the unit square , then we can compute the spectra of ( rectangles ) and verify everything by hand ( in this case ) .we do this in section [ secunit ] , where we also look at different choices of , producing sawtooth shaped domains , whose spectra are computed numerically . in section [ secgasket ]we look at the case of sg , where the spectrum is known exactly . herewe see numerically how the spectra of the approximating domains approaches the known spectra .this computation shows that the accuracy falls off rapidly as increases .we are also able to compare the eigenfunctions of the approximating domains with the known eigenfunctions on sg . in this caseit is natural to take to be a triangle containing sg in its interior since this yields connected domains .we examine how the size of the overlap influences the spectra . afterthe work reported in section [ secgasket ] was completed , a different approach to outer approximation on sg was studied in . in particular , different methods for choosing approximating domains are used , and a whole family of different laplacians are studied . in section [ secnonpcf ]we examine numerical data for some fractals for which very little had been known about the spectrum of the laplacian , and in some cases where even the existence of a laplacian is unknown .these examples fall outside of the postcritically finite ( pcf ) category defined in .the first example is the standard sierpinski carpet sc ( cut out the middle square in tic - tac - toe and iterate ) .here it is known that a self - similar laplacian exists , but the construction is indirect , and uniqueness is not known .( after this work was completed , uniqueness was established in . )but we also examine some nonsymmetric variants of sc for which the existence of a laplacian is unknown .we also examine a symmetric fractal , the octagasket , where existence of a laplacian is unknown . in all casesthe spectra of the approximating regions appear to converge when appropriately renormalized .we can identify features of the spectrum , such as multiple eigenvalues , and eigenvalue renormalization factors , and we produce rough graphs of eigenfunctions on the fractal . in particular , there is no discernible difference between the behavior in the case of the standard sc and the other examples . in section [ secmini ]we describe the miniaturization process that produces the eigenvalue renormalization factor . 
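before turning to the miniaturization argument , here is a minimal sketch of the approximating domains for sg mentioned above , assuming the standard maps $F_{i}(x)=(x+q_{i})/2$ with $q_{i}$ the vertices of an equilateral triangle , and taking the starting domain to be that triangle enlarged by a factor ` pad ` about its centroid ( our own stand-in for the overlap discussed above ) :

```python
import numpy as np
from itertools import product

# the m-th outer approximation is the union of the images f_w(omega) over all
# words w of length m; here omega is a slightly enlarged triangle (an assumption,
# standing in for the overlap discussed in the text).

Q = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])   # triangle vertices

def scaled_triangle(pad=1.1):
    c = Q.mean(axis=0)
    return c + pad * (Q - c)              # vertices of the enlarged triangle omega

def f(i, pts):
    return 0.5 * (pts + Q[i])             # the i-th sg similarity applied to points

def outer_approximation(m, pad=1.1):
    """return the triangles (3x2 vertex arrays) whose union is the m-th domain."""
    omega = scaled_triangle(pad)
    cells = []
    for word in product(range(3), repeat=m):
        pts = omega
        for i in word:                    # compose the maps along the word
            pts = f(i, pts)
        cells.append(pts)
    return cells
```

feeding these triangles to any mesh generator then produces the domains whose neumann spectra are computed numerically .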
for this to workwe need a dihedral group of symmetries of the fractal .we only deal with the examples at hand , but it is clear that it works quite generally ( we also explain how it works on the square ) . for the approximating regions ,this shows how shows up in the spectrum on ( the factor is not the same as ) .in section [ secranc ] we examine numerical data of randomly constructed variants of sc , where the existence of laplacians is unknown . to make these carpets , we modify the construction of sc .we fix the number of squares cut out at each recursive step , but we randomly determine which squares are removed . then , we achieve connected domains with a suitable change to the above algorithm and properly chosen parameters . herewe again see convergence of normalized eigenvalues .these random carpets are related to the mandelbrot percolation process .see and , for example . how do we compute the spectrum of the laplacian on the approximating domain ?we use a finite element method solver in matlab , matlab s own ` pdeeig ` function .to do this we only need to describe the geometry of the polygonal domain . then we either choose a triangulation ( exclusive to section [ secranc ] ) or let matlab s triangulation functions ` decsg ` and ` initmesh ` produce a triangulation and then use piecewise linear splines in the finite element method .note that it would be preferable to use higher - order splines , at least piecewise cubic , since these increase accuracy dramatically for a fixed memory space and running time . as a concession ,all of our triangulations may be further refined with the ` refinemesh ` function .the advantage of automating the triangulation is that it saves a tremendous amount of work ; in particular it chooses nonregular triangulations that increase accuracy .the disadvantage is that the program usually does not pick a triangulation with the same symmetry as the domain .this means that the eigenspaces that have nontrivial multiplicity in the domain end up being split into clusters of eigenspaces with eigenvalues close but not quite equal . since a lot of the structure of the spectrum we are trying to observe has to do with multiplicities ,this forces us to make ad hoc judgements as to when we have close but unequal eigenvalues , versus multiple eigenvalues .why do we deal exclusively with neumann spectra ?the main reason is that neumann boundary conditions on the approximating domains appears to lead to neumann boundary conditions for the laplacian on the fractal in the case of the interval and sg , while at the same time dirichlet boundary conditions on the approximating domains do not lead to dirichlet boundary conditions for the laplacian on the fractal .for example , in the case of the interval you would need to use a mix of dirichlet and neumann boundary conditions on different portions of the boundary .it is not at all clear what to do for other fractals . 
indeed for sc it is not even clear what to choose for the boundary . the advantage of neumann boundary conditions is that one can dispense with all notions of boundary , and define eigenfunctions simply as stationary points of the rayleigh quotient with no boundary restrictions . all our programs , as well as further numerical data , may be found on the websites ` www.math.cornell.edu/~thb9d ` [ and ` www.math.cornell.edu/~smh82 ` ] . finally , we note that similar outer approximations have been studied in the context of quantum graphs . for the unit interval with the second derivative as laplacian , the neumann eigenfunctions are $\cos ( n\pi x )$ , $n = 0 , 1 , 2 , \ldots$ , with eigenvalues $n^{2}\pi^{2}$ . if we take $\Omega$ to be the unit square , then $\Omega_{m}$ is the rectangle $[ 0 , 1 ] \times [ 0 , 2^{-m} ]$ .
[ figure / table residue : estimated eigenvalue scaling exponents $\alpha$ for refinement levels $m = 2 , 3 , 4$ ; the recoverable values range from about 0.80 to 0.85 . ]
bryant adams , s. alex smith , robert s. strichartz , and alexander teplyaev , the spectrum of the laplacian on the pentagasket , fractals in graz 2001 , trends math. , birkhäuser , basel , 2003 , pp. 1-24 . mr 2091699 ( 2006g:28017 )
martin t. barlow , diffusions on fractals , lectures on probability theory and statistics ( saint-flour , 1995 ) , lecture notes in math. , vol. 1690 , springer , berlin , 1998 , pp. 1-121 . mr 1668115 ( 2000a:60148 )
erik i. broman and federico camia , large-n limit of crossing probabilities , discontinuity , and asymptotic behavior of threshold values in mandelbrot s fractal percolation process , electron. j. probab. 13 ( 2008 ) , no. 33 , pp. 980-999 . mr 2413292
we present a new method to approximate the neumann spectrum of a laplacian on a fractal k in the plane as a renormalized limit of the neumann spectra of the standard laplacian on a sequence of domains that approximate k from the outside . the method allows a numerical approximation of eigenvalues and eigenfunctions for the lower portion of the spectrum . we present experimental evidence that the method works by looking at examples where the spectrum of the fractal laplacian is known ( the unit interval and the sierpinski gasket ( sg ) ) . we also present a speculative description of the spectrum on the standard sierpinski carpet ( sc ) , where the existence of a self-similar laplacian is known , and also on nonsymmetric and random carpets and the octagasket , where the existence of a self-similar laplacian is not known . at present we have no explanation as to why the method should work . nevertheless , we are able to prove some new results about the structure of the spectrum involving `` miniaturization '' of eigenfunctions that we discovered by examining the experimental results obtained using our method .
the complete characterization of quantum devices represents one of the fundamental tasks of quantum information science .the characterization of single- and two - qubit devices is particularly important since single - qubit and two - qubit controlled - not gates are the two building blocks of a quantum computer . furthermore , identifying an unknown quantum process acting on a quantum system is another key task for quantum dynamics control , in particular in presence of decoherence . in this contextany quantum process can be described by a linear map acting on density matrices associated to a hilbert space which transforms an input state into an output state ( fig .[ blackbox ] ) : the complete characterization of such a process can be realized by reconstructing the corresponding map .the action induced by a black box may be represented by a _process matrix _ which is experimentally reconstructed by quantum process tomography ( qpt ) ..,width=264 ] so far , several qpt experiments have been performed for trace - preserving processes , such as single - qubit transmission channels , optimal transpose map , gates for ensembles of two - qubit systems in nmr , a two - qubit quantum - state filter , a universal two - qubit gate , control - not ( cnot ) and control - z ( cz ) gates for photons .recently theoretical and experimental analyses of non trace - preserving processes have been carried out ._ evaluated the role of prior knowledge of the intrinsic feature of the experimental setup in order to obtain physical and easily understandable parameters for characterizing the gate and estimating its performance .furthermore quantum process tomography in presence of decoherence has been analyzed for a fast identification of the main decoherence mechanisms associated to an experimental map .here we address the characterization of non trace - preserving maps , focusing on the evaluation of an operator , representing the success probability of the process .in particular we carry out a quantum process tomography ( qpt ) approach for a set of non trace - preserving maps .then , we discuss possible errors occurring in presence of inappropriate approximations . the paper is organized as follows . in sectionii a brief review of the main theoretical aspects of qpt and of the process fidelity , both for trace - preserving and non trace - preserving maps , is presented . in section iiiwe report an example of qpt of a non trace - preserving process , corresponding to the transformation induced by a partially transmitting polarizing beam splitter . the qpt experimental realization and resultsare then presented together with a brief discussion on possible wrong approaches to the problem , when a non trace - preserving process is approximated with a trace - preserving one .finally , the conclusions are given in section iv .consider an unknown quantum process , i.e. a black box , acting on a physical quantum system described by a density matrix associated to a -dimensional hilbert space .a complete characterization of the process may be obtained by the kraus representation of quantum operations in an open system .a generic map acting on a generic state can be expressed by the kraus representation : where are operators acting on the system and satisfying the relation means that the eigenvalues of the hermitian operator are not positive . ] . if is a trace - preserving process the completeness relation holds .the quantum process tomography of consists of the experimental reconstruction of the operators . 
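for reference , since the displayed equations are garbled in this copy , the kraus representation and the ( in)completeness relations being referred to are the standard ones :

```latex
% standard kraus form; I denotes the identity on the system hilbert space.
\mathcal{E}(\rho) \;=\; \sum_{k} E_{k}\,\rho\,E_{k}^{\dagger},
\qquad
\sum_{k} E_{k}^{\dagger}E_{k} \;\le\; I
\;\;\text{(non trace-preserving)},
\qquad
\sum_{k} E_{k}^{\dagger}E_{k} \;=\; I
\;\;\text{(trace-preserving)}.
```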
in order to relate each operator with measurable parameters it is convenient to use a _ fixed _ basis of operators such that : by substituting this expression in ( [ e ] ) , the map reads as follows where . by construction, the matrix with elements is hermitian and semidefinite positive . to experimentally reconstruct each element we prepare input states forming a basis for the hilbert space of matrices .the output states can be written as where the coefficients are experimentally obtained by characterizing and expressing it in the basis . by defining the coefficients such that it is easy to obtain a relation between and : in order to find the matrix which completely describes the process , we need to operate a matrix inversion of .if is this generalized inverse matrix ( i.e. ) , the elements of read : for a non trace - preserving map , it is important to consider not only the transformation acting on a generic input state , but also the probability of success of the map . for a given input state , the probability of obtaining an output state from the black box is given by =\text{tr}\left[\sum_{mn}{\chi_{mn}a_m\rho a_n^{\dagger}}\right]=\text{tr}\left[{\mathcal p}\rho\right]\;,\ ] ] where is a semidefinite positive hermitian operator defined as : let s write in its diagonal form , , where are the eigenstates and the corresponding eigenvalues .different cases may occur : * , i.e. for a trace - preserving process ; * ( is proportional to the identity operator ) for a non trace - preserving process with state independent success probability ; * there is at least one eigenvalue different from the others in the case of a non trace - preserving process with state dependent success probability .the eigenvectors of coincide with the `` probability of success '' eigenstates of the transformation .+ we now describe how to compare two quantum processes .it is well known that a quantum state can be completely determined by a tomographic reconstruction and compared with the expected theoretical state by a variety of measures , such as quantum state fidelity .similarly , we know that the process matrix gives a convenient way of representing a general operation . a closely related but more abstract representation is provided by the jamiolkowski isomorphism , which relates a quantum operation to a quantum state , : where is a maximally entangled state associated to the _ d - dimensional _ system with another copy of itself , and is an orthonormal basis set . if is a trace - preserving process , then the quantum state is normalized , =1 ] .it is easy to demonstrate that , by choosing the set as kraus operators , we have , and , in general , if any complete set of operators satisfying =d\delta_{mn} ] , the process fidelity definition is generalized as follows .let be the ideal matrix associated to a non trace - preserving process in the kraus representation and the experimental one .the fidelity for such a process is written as ^ 2}{\text{tr}\left[\chi\right]\text{tr}\left[\chi_{id}\right]}. \label{fidntp}\ ] ] note that the physical meaning of this expression is the same of ( [ fidproc ] ) : indeed we can express it as ^ 2\ ] ] where } ] are well defined physical states ( =\text{tr}[\chi^{\prime}_{id}]=1 ] ) , and choose the set of the states to be measured , obtaining the following matrix : obviously , the explicit form of does not depend on the chosen set , but only on the fixed basis in the kraus representation .let s now write the explicit form of the operator for the ppbs . 
by using the matrix given in , we obtain this operator is proportional to the identity only when . in this subsectionwe report the experimental realization of qpt for a partially transmitting polarizing beam splitter . in the experimental setup shown in fig .[ setup ] the ppbs is implemented by a closed - loop scheme , similar to the one used in , operating with two half - waveplates ( hwp ) .a diagonally polarized light beam is splitted by a polarizing beam splitter ( pbs ) in two beams with equal intensity and orthogonal polarizations . precisely , the horizontal ( ) and vertical ( ) components travel along two parallel directions inside the interferometer , counterclockwise and clockwise , respectively .one half - waveplate intercepts the beam , while the other intercepts the beam ; by rotating the waveplates it is possible to vary the value of with respect to .the photons injected in this interferometric setup are generated by a spontaneous paramentric down conversion source realized with a nonlinear crystal cut for type ii non collinear phase matching .the crystal is pumped by a cw diode laser and pairs of degenerate photons are produced at wavelength .one of the photon is used as a trigger , while the other is delivered to the ppbs setup . .] and , for ( _ a - b _ ) and ( _ c - d _ ) .the imaginary part are negligible . ]we prepared six different input states , , , , , , associated to horizontal , vertical , diagonal , anti - diagonal , right - handed and left - handed polarization respectively , and measured the six output components for each input with a standard polarization analysis setup .we repeated this procedure for different values of the ratio and , for each value of , we reconstructed the experimental matrix of the process .we then performed an optimization of the process matrix following a _ maximum likelihood _approach ; in particular we minimized the following function ^ 2 \label{lhood } \end{aligned}\end{aligned}\ ] ] where are the measured coincidence counts for the input and the output , and indicates the input and the output state respectively , and are the pauli operators . since we are not interested into overall losses affecting the transformation ( even the adopted fidelity is independent of global losses ) we normalize the experimental matrix such that the maximum eigenvalue of is 1 . .solid lines represent expected behaviour . ]we determined the fidelity between the experimental map and the ideal one for several values of , as shown in fig .[ fidexp ] .we observe that the process fidelity approaches unity for each value of , and in general , we have with a good agreement between the experimental data and the theory . in fig .[ chiml ] two examples of ideal and experimental process matrices , corresponding to and , are shown .we also estimated the probability operator : the behaviour of its eigenvalues and as a function of is shown in fig .[ peigenv ] .we observe that for each value of ( by construction ) , while the other eigenvalue , , shows a decreasing behaviour as the ratio between the trasmittivities decreases , as expected from ( [ p ] ) .again , a very good agreement between experimental data and theory is obtained .the method above described can be usefully adopted even when the process under investigation is ideally trace - preserving .in fact , when a quantum process tomography is practically implemented , any interaction with the environment as well as experimental imperfections may cause the process to be non trace - preserving . 
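as an aside before returning to the trace-preserving approximation , the ideal probability operator of the ppbs can be checked with a short numerical sketch ; the kraus operator $K=\mathrm{diag}(1,t)$ , with $t\le 1$ the ratio of the two transmittivities and global losses dropped as in the normalization above , and the pauli operator basis are our own modelling choices here :

```python
import numpy as np

# probability operator P = sum_{mn} chi_{mn} A_n^dag A_m for an ideal ppbs with a
# single kraus operator K = diag(1, t); the basis {I, sx, sy, sz} is our choice.

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = [I2, sx, sy, sz]

def chi_from_kraus(K):
    """rank-one chi matrix of a single-kraus map: chi_{mn} = c_m * conj(c_n)."""
    c = np.array([np.trace(a.conj().T @ K) / 2 for a in A])    # K = sum_m c_m A_m
    return np.outer(c, c.conj())

def probability_operator(chi):
    return sum(chi[m, n] * A[n].conj().T @ A[m] for m in range(4) for n in range(4))

t = 0.5                                    # ratio of the two transmittivities
K = np.diag([1.0, t]).astype(complex)
P = probability_operator(chi_from_kraus(K))
print(np.linalg.eigvalsh(P))               # -> [t**2, 1]: state-dependent success
```

the printed eigenvalues reproduce the behaviour reported above : one eigenvalue is pinned at 1 while the other decreases with the transmittivity ratio , the signature of a state-dependent success probability .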
in practice , to approximate the process as a trace - preserving one corresponds to minimize the likelihood function ( [ lhood ] ) with the additional constraint . in this way we are imposing the probability of success to be independent of the input state .we carried out the minimization by taking into account the constraint of the mathematica5 program that allows to numerically minimizes subject to the constraints cons .note that the constraint imposes the normalization tr=1 $ ] . ] and evaluated the process fidelity between the obtained and the ideal matrix ( [ chimat ] ) for each value of .the results are shown in fig .[ fidvinc ] .( red open circles ) .fidelities obtained with the previous method are also reported ( black filled triangles ) . ]as expected , this method gives results similar to those obtained in section iiia for , while the fidelities values are different as decreases .in particular , the fidelities calculated by imposing the constraint decrease as goes to zero . it is evident that constraining the process to be trace - preserving does not allow to correctly reconstruct the associated map .a further scenario where probability of success must be taken into account may arise when measurements are performed in post - selection .the reconstruction of the output state density matrices ( which obviously are normalized physical states ) for several input state , leads to a trace - preserving process .even in this case we evaluated the fidelities between the resulting process matrix and the ideal one obtaining the results shown in fig .[ fidnorm ] . as in the previous casethe fidelity decreases as goes to zero .note that this approach is not correct even from a theoretical point of view : the process matrix obtained by normalizing the output states could be non - physical ( i.e. it could have negative eigenvalues ) and its expression depends on the chosen set of input states .this is due to the fact that normalization implies the process to be no longer a linear map and equation ( [ e2 ] ) is not valid anymore . in general, the output state normalization produces wrong process matrices for _ any _ non trace - preserving operation with state dependent success probability .a review on quantum process tomography of non trace - preserving maps has been reported .the experimental implementation of a simple non trace - preserving , state dependent process , i.e. the transformation induced by a partially polarizing beam splitter , provided process fidelities larger than for any value of the ratio between the transmittivities .particular attention has been addressed to the state dependence property of the process through evaluation of the operator ( [ pt ] ) .this operator has been calculated and measured in the case of a ppbs and its eigenvalues resulted to be different from unity ( see ( [ p ] ) ) , as expected for a non trace - preserving process . in order to stress the validity of the method a brief discussion about possible wrong approaches has been presented together with the explicit calculation of the ppbs process fidelities . 
the obtained results clearly show that approximation of a non trace - preserving , state dependent process with a trace - preserving one does not allow a correct reconstruction of the real process map .qpt of non trace preserving processes are relevant for linear optical logic gates with success probability .indeed , typically it is just assumed that the success probability of such gates is uniform across input states and hence it is crucial to check the validity of this assumption for any application .for example , it can be interesting to investigate whether losses in the planar integrated waveguide chips currently being used could affect different input states differently .
the ability to fully reconstruct quantum maps is fundamental in quantum information , in particular when coupling with the environment and experimental imperfections of the devices are taken into account . in this context we carry out a quantum process tomography ( qpt ) analysis for a set of non-trace-preserving maps . we introduce an operator that characterizes the state-dependent probability of success of the process under investigation . we also evaluate the consequences of approximating such a process with a trace-preserving one .
the classical cake - cutting problem of steinhaus asks how to cut a cake fairly among several players , that is , how to divide the cake and assign pieces to players in such a way that each person gets what he or she believes to be a fair share .the word _ cake _ refers to any divisible good , and the word _ fair _ can be interpreted in many ways . a strong notion of a fair division is an _ envy - free _ division ,one in which every player believes that their share is better than any other share .existence of envy - free cake divisions dates back to neyman , but constructive solutions are harder to come by ; the recent procedure of brams and taylor was the first -person envy - free solution .see and for surveys .it is natural to consider the cake - cutting question when there is more than one cake to divide .given several cakes , does an envy - free division of the cakes exist , and under what conditions can such a division be guaranteed? of course , if the player preferences over each cake are independent ( i.e. , additively separable ) , then the problem can be solved by one - cake division methods simply perform an envy - free division on each cake separately .so the question only becomes interesting when the players have _ linked _ preferences over the cakes , in which the portion of one cake that a player prefers is influenced by the portion of another cake that she might also obtain .consider the case of two players , alice and bob , dividing two cakes .suppose that each cake is to be cut into two pieces by a single cut , and players may choose one piece from each cake ( called a _ piece selection _ ) .note that there are 4 possible piece selections , only 2 of which will be chosen by alice and bob .see figure [ pieceselection ] .we would like the 2 chosen piece selections to be _ disjoint _ , i.e. , have no common piece on either cake , if bob and alice are to avoid a fight .also , we would like their piece selections to be _ envy - free _ , which means that each player would not want to trade their piece selection for any of other 3 possible piece selections .we remark that this notion of _ envy - free piece selection _ is stronger than the notion of an _ envy - free allocation _ , in which after pieces are allocated to players , a given player is only comparing their piece selection to what other players actually receive ( in this example , there is just 1 other piece selection that s been assigned to a player ) .our stronger notion of envy - freeness for piece selections ensures that the allocation is pareto - optimal over all possible piece selections in that division .does there always exist a division of the cakes ( by single cuts ) so that the players have disjoint envy - free piece selections ? and , respectively.,height=96 ] consider the following scenario as an example of this generalized cake - cutting question .a company employs two part - time employees alice and bob who work half - days ( either a morning or an afternoon shift ) two days each week . between them, they should cover the entire day on each of those two days .now , alice and bob may have certain preferences ( such as preferring a morning or afternoon shift ) and such preferences may be linked ( e.g. 
, alice might highly prefer the same schedule both days , whereas bob might prefer to have a morning shift on one day if he has the afternoon on another ) .their boss would like to account for both of their preferences .suppose that she has a certain fixed amount of salary money per day to divide up between the morning and afternoon shifts .is there always a way to split the daily salary pool among the shifts so that alice and bob will choose different shifts ?if so , is there a method to find it ? if not , why not ? here , the cakes are the salary pools for each day , and the pieces of cake are the shift assignments along with their salary .in this paper we examine what can be said about envy - free assignments in situations like this one .in particular , we show that for two cakes , single cuts on each cake are insufficient to guarantee an envy - free allocation of piece selections to players ; there exist preferences that alice and bob could have for which it is impossible to cut the cakes so that they would prefer disjoint piece selections . however , if one of those two cakes is divided into 3 pieces , then , somewhat surprisingly , there does exist an envy - free allocation of piece selections to players ( with one unassigned piece ) . similarly , for three or more cakes, cutting each cake into three pieces is not sufficient to guarantee the existence of an envy - free allocation of piece selections to two players .however , if each cake is cut into four pieces , we find that such an allocation is always possible .these results are reminiscent of the cake - cutting procedure of brams and taylor in which the authors show that for more than two players , the number of pieces required to ensure an envy - free division is strictly larger than the number of players .here , however , we find that extra pieces are necessary even in the two - player case .the key idea here is to view the space of possible divisions as a _polytope _ ; we triangulate this polytope and label each vertex in a way that reflects player preferences for piece selections in the division that the vertex represents .the labels satisfy the conditions of a generalization of sperner s lemma to polytopes ; its conclusion suggests a division and a disjoint allocation of piece selections to players that is envy - free , and pareto - optimal with respect to all possible piece selections in that division .sperner s lemma is a combinatorial analogue of the brouwer fixed point theorem , one of the most important theorems of mathematics and economics. constructive proofs of sperner s lemma can be used to prove the brouwer theorem in a constructive fashion ; such proofs have therefore found wide application in game theory and mathematical economics to locate fixed points as well as nash equilibria ( e.g. , see for surveys ) .more recently , sperner s lemma and related combinatorial theorems have been used to show the existence of envy - free divisions for a variety of _ fair division problems _ , including the classical cake - cutting question , e.g. , see and . in this paperwe will use a recent generalization of sperner s lemma to polytopes to address our generalized cake - cutting question . 
before describing sperner s lemma or its generalization ,we review some terminology .-simplex _ is a generalization of a triangle or tetrahedron to dimensions : it is the convex hull of affinely independent points in .a polytope in is the convex hull of points , called the _ vertices _ of the polytope .we call an -vertex , -dimensional polytope an _ -polytope_. a _ face _ of a polytope is the convex hull of any subset of the vertices of that polytope ; a -dimensional face of is called a _facet_. a _ triangulation _ of is a finite collection of distinct simplices such that : ( i ) the union of all the simplices in is , ( ii ) the intersection of any two simplices in is either empty or a face common to both simplices and ( iii ) every face of a simplex in is also in .the vertices of simplices in are called _ vertices of the triangulation _ . a _ sperner labeling _ of is a labeling of the vertices of that satisfies these conditions : ( 1 ) all vertices of have distinct labels and ( 2 ) the label of any vertex of which lies on a facet of matches the label of one of the vertices of that spans that facet .a _ full cell _ is any -dimensional simplex in whose vertices possess distinct labels. see figure [ sperner ] for examples of sperner - labeled polygons ( with full cells shaded ) .sperner s lemma states that any sperner - labeled triangulation of a simplex contains an odd number of full cells .the generalization which will be of use to us is : any sperner - labeled triangulation of an -polytope must contain at least full cells .[ polytopalsperner ] for instance , the sperner - labelled pentagon in figure [ sperner ] has at least full cells .-polytope ) .full cells are shaded.,height=240 ] the polytope that will interest us is the _ polytope of divisions _ , which we describe presently .suppose that we have cakes , each of length 1 , and that each cake is to be cut into pieces .let the length of the -th piece of the -th cake be denoted by .then for all we have each such division can be represented by an matrix : where each row sum is 1 . now, suppose that each cake is cut in such a way that one piece is the entire cake and all other pieces are empty .we will call such a division " of the cakes a _ pure division_. the matrix representation of a pure division is one in which each row has exactly one entry as a 1 and the rest 0 .notice that any division may be written as a convex linear combination of the pure divisions .thus , it is natural to view the space of divisions as a polytope with the pure divisions as its vertices .we call the _ polytope of divisions_. from the matrix representation of this polytope we see that the space of divisions is of dimension , since in each of the cakes , the length of the last piece is determined by the lengths of the first pieces .also has vertices , one for each pure division .thus is an -polytope with and .we now describe how to locate envy - free piece selections using the polytope of divisions .suppose that there are two players , and .let be a triangulation of .next , label with an _ owner - labeling _ : label each vertex in with either or .we will want this owner - labeling to satisfy the condition that each simplex in has roughly the same number of and labels . 
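The construction of the polytope of divisions can be made concrete with a short sketch. The Python snippet below (the helper names and the small two-cake example are mine, not the paper's) enumerates the k^m pure divisions of m cakes cut into k pieces each and checks that an arbitrary division matrix, whose rows sum to 1, is recovered as a convex combination of pure divisions using product weights.
....
from itertools import product
import numpy as np

def pure_divisions(m, k):
    """Yield (choice, matrix) for all k**m pure divisions of m cakes into k pieces."""
    for choice in product(range(k), repeat=m):
        P = np.zeros((m, k))
        for cake, piece in enumerate(choice):
            P[cake, piece] = 1.0
        yield choice, P

def as_convex_combination(D):
    """Product weights over pure divisions that reproduce a division matrix D."""
    m, k = D.shape
    return {choice: float(np.prod([D[i, j] for i, j in enumerate(choice)]))
            for choice, _ in pure_divisions(m, k)}

if __name__ == "__main__":
    D = np.array([[0.2, 0.8],            # cake 1: two pieces of length 0.2 and 0.8
                  [0.5, 0.5]])           # cake 2: two pieces of length 0.5 each
    w = as_convex_combination(D)
    recon = np.zeros_like(D)
    for choice, P in pure_divisions(*D.shape):
        recon += w[choice] * P
    print(abs(sum(w.values()) - 1.0) < 1e-12, np.allclose(recon, D))   # True True
....
The product weighting is just one convenient certificate that a division lies in the convex hull of the pure divisions; the argument above only needs the existence of some such combination.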
in general , for any number of players , call an owner - labeling _uniform _ if in each simplex , the number of labels for each player differs by at most one from any other player .so for two players and , if a simplex in has vertices , a uniform labeling would assign labels and to vertices each if is even , and to at least vertices each if is odd .we claim that any polytope has a triangulation of arbitrarily small mesh size that can be given a uniform owner - labeling .in particular , this may be accomplished by choosing a triangulation of the required mesh size , and _ barycentrically subdividing _barycentric subdivision _ takes a simplex of , and replaces it by smaller -simplices whose vertices are the barycenters of an increasing saturated chain of faces of .since each smaller -simplex contains exactly one barycenter in each dimension , we may assign the even - dimensional barycenters to player and the odd - dimensional barycenters to . this owner - labeling is uniform for players and .see figure [ owner ] for an example of a barycentric subdivision with a uniform owner - labeling .( if there are more than two players , then cyclically rotate player assignments by dimension ) . and .,height=144 ]now that the ownership has been assigned to the vertices of , we shall construct a _ preference - labeling _ of . givena division ( not necessarily pure ) of the cakes , a _ piece selection _ is a choice of one piece from each cake .we say that a player _ prefers _a certain piece selection if that player does not think that any other piece selection is better .we make the following three assumptions about preferences : \(1 ) _ independence of preferences_. a player s preferences depend only on that player and not on choices made by other players .\(2 ) _ the players are hungry_. a player will always prefer a nonempty piece in each cake to an empty piece in that cake ( assuming the pieces selected in the other cakes are fixed ) .hence a preferred piece selection will contain no empty pieces .\(3 ) _ preference sets are closed_. if a piece selection is preferred for a convergent sequence of divisions , then that piece selection will be preferred in the limiting division .note that with these assumptions about player preferences , a given player always prefers at least one piece selection in any division and may prefer more than one if that player is indifferent between piece selections .we now construct the preference - labeling of .notice that a vertex of is just a point in and so it represents a division of the cakes .ask the owner of ( who is either or ) : label by the answer given . since every piece selection corresponds to a pure division in a natural way ( namely , each may be thought of as a choice of one piece from each cake ), we obtain a preference - labeling of the vertices of by pure divisions .this new labeling is , in fact , a sperner labeling . 
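The barycentric-subdivision construction of a uniform owner-labeling is easy to state in code. The sketch below (a minimal illustration with my own function names) subdivides a single n-simplex: each permutation of its vertices gives an increasing chain of faces, the barycenters of that chain are the vertices of a smaller simplex, and owners are assigned by the parity of the barycenter's dimension.
....
from itertools import permutations
import numpy as np

def barycentric_subdivision(simplex):
    """simplex: (n+1) x d array of vertex coordinates.
    Yields (small_simplex_vertices, owner_labels) for each chain of faces."""
    verts = np.asarray(simplex, dtype=float)
    n_plus_1 = len(verts)
    for order in permutations(range(n_plus_1)):
        chain_barycenters, owners = [], []
        for depth in range(1, n_plus_1 + 1):
            face = verts[list(order[:depth])]
            chain_barycenters.append(face.mean(axis=0))          # barycenter of the face
            owners.append('A' if (depth - 1) % 2 == 0 else 'B')  # label by dimension parity
        yield np.array(chain_barycenters), owners

if __name__ == "__main__":
    triangle = [[0, 0], [1, 0], [0, 1]]          # a 2-simplex
    cells = list(barycentric_subdivision(triangle))
    print(len(cells))                            # 3! = 6 smaller triangles
    print(cells[0][1])                           # ['A', 'B', 'A'] -- uniform for A and B
....
For a triangle this yields six small triangles, each carrying two A labels and one B label, which is uniform in the sense defined above.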
to see why , note that each vertex of is a pure division , and condition ( 2 ) ensures that the player who owns that vertex will select the unique non - empty piece from each cake .so , every vertex of will be labeled by its corresponding pure division .a vertex of that lies on a facet of is a strict convex linear combination of the subset of vertices of that span that facet .thus , the division represented by is a strict convex linear combination of the subset of pure divisions that are represented by the vertices that span the facet .so , if the -th piece ( on any cake ) of the division represented by is empty , then each of those pure divisions must have an empty -th piece as well .so condition ( 2 ) guarantees that the owner of will never prefer a piece selection that corresponds to a pure division that is not on the same facet as .thus , the preference - labeling is a sperner labeling .the polytopal sperner lemma shows that there exist -dimensional , full cells in .the owner labeling of each of these cells is uniform .hence , a full cell represents similar divisions in which players and choose different piece selections .now , if we repeat this procedure for a sequence of finer and finer triangulations of , we would create a sequence of smaller and smaller full cells .since is compact , there must be a convergent subsequence of full cells that converges to a single point .since each full cell in the convergent subsequence also has a uniform owner labeling and since there are only finitely many ways to choose a piece selection , there must be an infinite subsequence of our convergent sequence for which the piece selections of each player remain unchanged .condition ( 3 ) guarantees that the selections will not change at the limit point of these full cells .so , at this limit point , the players choose different piece selections just as they did in the cells of the sequence .in this way , we may find a division for which both players choose _ different _ piece selections , that is , selections where the players choose different pieces on _ at least one _ cake .more generally , with any number of players , the same arguments ( using a uniform - labelling over multiple players ) show : [ notdisjoint ] given players and cakes , if each cake is cut into pieces each and , then there exists a division of the cakes for which each player chooses _ different _ ( though not necessarily disjoint ) piece selections .in particular , this theorem holds if each cake is cut into pieces .the bound on the number of pieces comes from the fact that the number of vertices of the simplex must be at least as large as the number of players so that each player owns at least one vertex of the full cell .the conclusion says that the piece selections chosen are _ envy - free _ : no one would trade their piece selection for any other .however , this theorem does not help us allocate the pieces , because two different players may have conflicting piece selections if they chose the same piece on a particular cake .so we wish to find divisions for which the players make _ disjoint _ envy - free piece selections , i.e. 
, selections where the players would choose non - conflicting pieces on _ every _ cake .we remark that such an allocation would be pareto - optimal with respect to all possible piece selections in that division .this is because each player would have chosen , over _ all piece selections _, what he / she most prefers .so there would be no other allocation of pieces of a given division in which players could do better .we explain this in contrast to an interesting example of brams , edelman , and fishburn , who consider a single cake cut into 6 pieces in which 3 players choose 2 pieces each .they exhibit a division and allocation of a pair of pieces to each player in such a way that the allocation is envy - free but not pareto optimal , because for the given 6-piece division there is another allocation in which some players are better off . although theirs was an _ envy - free allocation _ ( involving comparisons of 3 allocated pairs of pieces ) , those pairs were not _ envy - free piece selections _ ( involving comparisons of 30 possible pairs of pieces ) .an division into envy - free piece selections would by nature yield a pareto - optimal allocation with respect to the given division . in what follows we explore the existence of disjoint envy - free piece selections .[ sec2ck2pc ] we now explore what happens when two cakes , each cut into two pieces , are divided among two players : and . we show : given 2 players and 2 cakes , there does not necessarily exist a division of the cakes into 2 pieces each that contains disjoint envy - free piece selections for those players .[ thm:2cakes2pieces ] let be the polytope of divisions . in this case , is a -polytope , i.e. , a square .we label the pure divisions as follows : , , , , where , for instance , represents the pure division in which the left pieces of figure [ pieceselection ] are the entire cakes .recall that the same names , , , can be used to reference piece selections . also note that and represent disjoint piece selections , and so do and .player a s preferences can be described by a _ cover _ of the square by four sets , labeled by piece selections: , , , .this is accomplished by placing point in in the set if in the division of cake represented by , player would prefer piece selection .some points may be in multiple sets ( representing indifference between piece selections ) .assumption ( 3 ) about preferences ( see previous section ) ensures that the four sets are closed .the union of the four sets contains .assumption ( 2 ) about preferences ensures that the pure divisions are in the sets that bear their own name . in a similar fashion ,player also has a cover by four sets that describes his preferences . to find a division that allows for disjoint envy - free piece selections ,we seek a point in such that is in one player s set and the other s set , or in one player s set and the other s set .now consider the player preferences shown in figure [ 2ck2pc ] , where we assume that each set is labeled by the pure division that it contains .a visual inspection shows that there is no such point as described above .therefore , it is not necessarily possible to divide two cakes into two pieces in such a way that the two players will choose different pieces on each cake , i.e. 
, find disjoint piece selections .this result may seem somewhat unexpected , since if preferences are not linked , we can divide each individual cake in such a way that both players would be satisfied with their piece on each individual cake .an interpretation of figure [ 2ck2pc ] provides an example of linked preferences .player generally seems to either want the left hand or the right hand pieces of both cakes , and player generally wants both a left and a right hand piece ( in both cases , as long as the pieces are not too small ) . in our running example from the introduction , the preferences in figure [ 2ck2pc ] would correspond to alice strongly desiring to work either both morning or both afternoon shifts , and bob strongly desiring to work one morning and one afternoon shift .there may then be no division of the salary pool on each day that will induce them to take disjoint shifts .one may ask why the arguments of the previous section do not extend here .the answer is that while in each triangulation we can ensure an - edge ( in owner - labels ) that corresponds to endpoints with different preference - labels , we can not ensure that the corresponding piece selections are disjoint .indeed we see from figure [ 2ck2pc ] that if is triangulated very finely , there will be no - edge ( in owner - labels ) that could possibly be - or - ( in preference - labels ) since the corresponding players sets in the figure are not close .in contrast to theorem [ thm:2cakes2pieces ] , we may obtain a positive result for two players when there are three players involved . given 3 players and 2 cakes , there exists a division of the cakes , each cut into 2 pieces , such that some pair of players has disjoint envy - free piece selections . with three players ,consider an owner - labeling in which every triangle has vertices labeled by , , and , so that each vertex of the triangle belongs to a different player .note that a fully labeled triangle in the square of all divisions must contain a pair of disjoint piece selections .so , there is a pair of players who prefer disjoint piece selections for two very close divisions . using the limiting argument of section [ spernermethod ] ,there is a division in which some pair of players has disjoint envy - free piece selections .while there does not necessarily exist an envy - free allocation for two players with each cake cut into two pieces , we can satisfy both players with one cake cut into three pieces and the other cut in ( only ) two pieces .the proof of this fact , and later results for three cakes , will use the following theorem : let be an -polytope with sperner - labeled triangulation .let be the piecewise - linear map that takes each vertex of to the vertex of that shares the same label , and is linear on each -simplex of .the map is surjective , and thus the collection of full cells in forms a cover of under .[ cover ] given 2 players and 2 cakes , there is a division of the cakes one cut into 2 ( or more ) pieces , the other cut into 3 ( or more ) pieces so that the players have disjoint envy - free piece selections .[ thm:2cakes2p3p ] let be the ( 6,3)-polytope of divisions of the two cakes . 
in this case, is a triangular prism with vertices corresponding to the pure divisions , and .notice the 1-skeleton of can be interpreted as a graph in which piece selections that conflict appear as labels on adjacent vertices of ( see figure [ 2pieces3pieces ] ) .let be a triangulation of with a uniform owner - labeling .by theorems [ polytopalsperner ] and [ cover ] , there exists a fully - labeled 3-simplex whose image , under the map of theorem , is one of the simplices of the cover of .therefore , is non - degenerate and its four vertices do not lie on a common face of . in this case , one can verify that one vertex of must be non - adjacent , in the 1-skeleton of , to two other vertices , and .this means and correspond to piece selections that are disjoint from the piece selection of . without loss of generality ,if owns , must own at least one of and because the owner labeling is uniform .therefore , using the methods of section [ spernermethod ] , we can find an envy - free allocation .if the two cakes are divided into more than 2 or 3 pieces respectively , we can find an envy - free allocation simply by restricting our attention to divisions of the cake in which the extra pieces are empty , and using the above results .( but other envy - free divisions may exist as well . )as shown in section [ sec2ck2pc ] , dividing two cakes into two pieces each was insufficient to guarantee a division with envy - free piece selections for two players .we shall see that for two players , dividing three or more cakes into three pieces each is also insufficient , but cutting three cakes into four pieces each guarantees a division with envy - free piece selections .we begin by examining divisions of three cakes into three pieces each . given 2 players and 3 ( or more ) cakes ,there does not necessarily exist a division of the cakes into 3 pieces each that contains disjoint envy - free piece selections for the two players .[ preferences ] we will let and designate the leftmost , center , and rightmost piece of each cake , such that a choice of , for example , will refer to a player choosing the second piece in cakes one and two , and the first piece in cake three ( see figure [ bbaselection ] ) . ) .,height=168 ] we now construct preferences for two players , alice ( ) and bob ( ) , for which there does not exist a division with envy - free disjoint piece selections . fix some small . let prefer piece selections according to the following broad categories in descending order of preference : 1 .three pieces of the same type ( i.e. , , , ) , 2 .two pieces of the same type ( e.g. , , ) , 3 .three pieces all of different type ( e.g. 
, ) .let s preferences be the reverse : 1 .three pieces all of different type , 2 .two pieces of the same type , 3 .three pieces of the same type .neither player will accept a piece selection if any of its pieces are of size less than .if two or more piece selections are available in a given preference category , with all pieces greater than in size , the player chooses the option with the greatest total size ( if two choices have the same total size , choose the lexicographic first option ) .players only move to a lower ranked preference category if all options in a higher category contain a piece of cake with size less than .we show that for any set of pieces that player chooses , player prefers a piece selection which is not disjoint from s .suppose chooses three pieces of the same type ; without loss of generality , say chooses .thus , each piece has size greater than .if s piece selection were disjoint from , it would contain no , and hence would contain at least two s or two s .however , replacing one of those repeated letters with an would result in a piece selection more desirable to .next , suppose s piece selection consists of two pieces of one type and one piece of another ; for example , say chooses . in the third cake ,piece must have size less than , otherwise would have chosen . therefore , to not conflict with s choice , s piece selection must contain piece in the last cake .non - conflicting choices for are , or ; however , the pieces in the first two cakes have size greater than , so the piece selections , or would be preferred by to the previous options , respectively . these conflict with s piece selection in one of the first two cakes .finally , suppose chooses three different types of pieces , for example , .since would prefer or to , it must be that is the only piece of size greater than in the first cake .therefore will also choose piece from the first cake , and s and s piece selections will not be disjoint .we have shown that a division with disjoint envy - free piece selections for two players need not exist for three cakes divided into three pieces each .it is easy to see that the same is true for four or more cakes .simply have the players adopt the preferences described above on the first three cakes .we now show that if , instead , the three cakes are each divided into four or more pieces , we can always find a division with envy - free piece selections for two players .given 2 players and 3 cakes , there exists a division of the cakes into 4 ( or more ) pieces each that contains disjoint envy - free piece selections for both players .as before , the idea of the proof is to use the polytopal sperner lemma ( theorem [ polytopalsperner ] ) on the -dimensional space of divisions .however , our task is made much simpler by focusing on a particular full cell : the one that covers the center of .let be the ( 64,9)-polytope in of divisions of 3 cakes into 4 pieces each , and let be a triangulation of with a uniform owner labeling . by theorem[ cover ] , the fully - labeled cells of cover under the map .hence , for at least one fully - labeled 9-simplex , contains the center of .that is , there exist weights , such that where the are the matrices of the pure divisions corresponding to the vertices of , and is the matrix in which every entry is 1/4 ( this is the center of ) . 
,the pure divisions , arranged in a grid .these also may be thought of as piece selections , and two piece selections are disjoint if and only if they do not lie on the same grid plane.,height=216 ] it will be convenient to visualize the vertices of arranged on a grid , since there are four pieces that may be selected for each of three cakes ( see figure [ 12planes ] ) .note that there are four planes in each of the 3 orthogonal grid directions ; in the rest of the proof , a _ plane _ will refer to one of these 12 special planes .for example , the bottom horizontal plane contains all vertices of corresponding to pure divisions in which piece is picked in the 3rd cake . in fact , any given plane corresponds to pure divisions in which a particular piece in a particular cake is chosen .the matrices corresponding to such divisions all have a 1 in the same entry . since is a weighted average of - matrices that represent pure divisions , and matrices with a in a particular entry correspond to vertices in our grid that lie on a common plane , the total weight of vertices of that lie on a given plane of our grid must be .all of our arguments will be based on this fact .a vertex in our grid may also be thought of as a piece selection . by doing so, we see that two piece selections lie on a common plane if and only if they are not disjoint .recall that denotes the full cell whose image contains the center of .consider a graph whose nodes are the labels of , and in which two nodes are adjacent in if and only if they represent disjoint piece selections , i.e. , if and only if their labels do not lie on a common plane in figure [ 12planes ] ._ we now show , in all arguments that follow , that has a connected component of size at least ._ since the uniform owner labeling of a 9-simplex has exactly 5 vertices owned by and 5 by , this would imply the desired conclusion : there is an - edge of with disjoint preference - labels . call a configuration of 4 vertices in the cube a _ diagonal _ if each plane contains exactly one vertex .these corresponding piece selections are all mutually disjoint , and so they form a size-4 clique in . furthermore , any other vertex in the cube lies on exactly three planes , and so must be disjoint from one of the diagonal vertices .thus , a set of vertices that contains a diagonal is connected in .we now focus attention on the number of vertices of on each plane of the cube and their associated weights .consider first the case in which some plane contains only one vertex , ; hence .vertex is adjacent , in , to every other vertex with non - zero weight , since can not lie on a plane with any other weighted vertex ( otherwise the total weight of the plane exceeds ) .if there are at least five other vertices with positive weight , would have a connected component of size at least 6 .suppose there are five or fewer weighted vertices .since the four planes in any given direction must each contain at least one weighted vertex , there must be three planes that contain exactly one weighted vertex .these vertices each have weight , and each is the only vertex on any of the three orthogonal planes that contain it .therefore , any additional weighted vertex must lie in the unique intersection of the remaining planes , and this in turn implies a diagonal configuration of vertices . 
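The two facts about diagonals used in this argument, namely that a diagonal is a clique in G and that every other grid point is disjoint from at least one diagonal point, can be verified exhaustively. A short sketch (my own check, with an assumed coordinate encoding of piece selections):
....
from itertools import product, permutations

GRID = list(product(range(4), repeat=3))       # piece selections (a, b, c) for 3 cakes

def disjoint(u, v):
    return all(x != y for x, y in zip(u, v))   # disjoint iff no common plane

ok = True
for sigma in permutations(range(4)):
    for tau in permutations(range(4)):
        diagonal = [(i, sigma[i], tau[i]) for i in range(4)]
        # (a) the four diagonal selections are mutually disjoint
        ok &= all(disjoint(u, v) for u in diagonal for v in diagonal if u != v)
        # (b) any other grid point is disjoint from at least one diagonal point
        for w in GRID:
            if w in diagonal:
                continue
            ok &= any(disjoint(w, d) for d in diagonal)
print(ok)   # True: a set of vertices containing a diagonal is connected in G
....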
by our earlier argumentall 10 vertices of are connected in .if there is no such plane , then every plane contains at least two vertices .we look at the possible cases individually and show that there is some vertex that lies on planes with at most 4 other vertices , and is therefore disjoint from at least 5 vertices in ( resulting in a connected component of size at least 6 in ) .we will use the shorthand notation to indicate the number of vertices on the 4 planes in a given direction , ordered by the number of vertices they contain .case 1 : ( 4,2,2,2 ) , ( 4,2,2,2 ) , ( 4,2,2,2 ) .suppose that in every direction there is one plane containing 4 vertices and three planes containing 2 vertices each .we claim that there exists a vertex which does not lie on any plane containing 4 vertices . to see why this is true ,let the pairs of vertices on the three 2-vertex planes in one direction be , , and .the combined weight of each pair is .if each of these vertices appeared on a 4-vertex plane in at least one of the other two directions , the sum of the weights of those two planes would be at least the combined weight of the six vertices , or , and so at least one of the planes would have weight greater than , a contradiction .therefore , there is a vertex which lies only on 2-vertex planes in each direction .this vertex is not disjoint from at most 3 other vertices , and so is disjoint from at least 6 .case 2 : ( 4,2,2,2 ) , ( 4,2,2,2 ) , ( 3,3,2,2 ) . at most eight of the ten vertices of on a 4-vertex plane , so at least two do not lie on 4-vertex planes .these have at most 4 neighbors in ( it is four if they lie on planes with 2 , 2 , and 3 vertices ) .thus , they are disjoint from at least 5 vertices in .case 3 : ( 4,2,2,2 ) , ( 3,3,2,2 ) , ( 3,3,2,2 ) . by an argument similar to that of case 1, there exists a vertex which does not lie on the 4-vertex plane , and lies on at most one 3-vertex plane ; otherwise , the two 3-vertex planes in each direction must contain all 6 of the vertices not on the 4-vertex plane so the total weight of the two 3-vertex planes ( in either direction ) would be , which is too large . therefore , there exists a vertex that lies on at most one 3-vertex plane and no 4-vertex plane .this vertex has at most 4 neighbors in ( it is four if it lies on planes with 2 , 2 , and 3 vertices ) .thus it is disjoint in from at least 5 vertices .case 4 : ( 3,3,2,2 ) , ( 3,3,2,2 ) , ( 3,3,2,2 ) .there are twelve positions for vertices on 2-vertex planes and only 10 vertices of , so by the pigeonhole principle , one vertex must lie on at least two 2-vertex planes , hence it has at most 4 neighbors in , so it is disjoint in from at least 5 vertices .thus , given a fully - labeled 9-simplex whose image under contains the point , a uniform owner labeling produces an - edge with disjoint preference - labels . by using the methods of section [ spernermethod ] we are able to find a division with disjoint envy - free piece selections for players and . 
as in the proof of theorem [ thm:2cakes2p3p ] , if the cakes are cut into more than 4 pieces each , we can guarantee an envy - free allocation by restricting our attention to the case in which the extra pieces are empty .other envy - free allocations may exist as well .in this article , the polytopal sperner lemma has given us insight into how many pieces are necessary for envy - free multiple - cake division with disjoint piece selections .we have shown that it is possible to divide two ( respectively , three ) cakes among two players in an envy - free fashion with disjoint piece selections so long as each cake is cut into at least three ( respectively , four ) pieces .this suggests a natural question for 2 players and cakes : is it always possible to find disjoint envy - free piece selections if each cake is cut into at least pieces ? and can we get by with fewer pieces in some of the cakes ( as we did for 2 cakes , allowing one cake to have 2 instead of 3 pieces ) ?we have also shown that there may not exist envy - free divisions of two ( respectively , three ) cakes among two players when the cakes are only cut into two ( respectively , three ) pieces each .in fact , with preferences similar to those in the proof of theorem [ preferences ] , one can verify that 4 cakes cut in 4 pieces each is not sufficient to ensure an envy - free allocation among two players . in this case , a s preferences in order are : 4 of a kind , 3 of a kind , 2 pair , 1 pair , all different .b s preferences are the reverse . by extending these preferences to cakes each cut into pieces ,is it possible to show that disjoint envy - free piece selections may not exist for two players in this situation ? finally , how many additional pieces do we need in each cake if there are more players who demand disjoint envy - free piece selections ?recall ( see comments at the end of section [ spernermethod ] ) that disjoint , envy - free piece selections would by nature yield pareto - optimal allocations , even with more than two players .our intuition suggests that if we cut the cake into enough pieces , we should be able to satisfy all players , though there may be many extra pieces left over .we conclude with this open question : for given numbers of players and cakes , we ask for the minimum number of pieces to divide each cake that would ensure that some division has disjoint envy - free piece selections .
We introduce a generalized cake-cutting problem in which we seek to divide multiple cakes so that two players may get their most-preferred piece selections: a choice of one piece from each cake, allowing for the possibility of linked preferences over the cakes. For two players, we show that disjoint envy-free piece selections may not exist for two cakes cut into two pieces each, and that they may not exist for three cakes cut into three pieces each. However, such divisions do exist for two cakes cut into three pieces each, and for three cakes cut into four pieces each. The resulting allocations of pieces to players are Pareto-optimal with respect to the division. We use a generalization of Sperner's lemma on the polytope of divisions to locate solutions to our generalized cake-cutting problem.
a critical problem faced by participants in investment markets is the so - called optimal liquidation problem , viz . how best to trade a given block of shares at minimal costhere , cost can be interpreted as in perold s implementation shortfall ( ) , i.e. adverse deviations of actual transaction prices from an arrival price baseline when the investment decision is made .alternatively , cost can be measured as a deviation from the market volume - weighted trading price ( vwap ) over the trading period , effectively comparing the specific trader s performance to that of the average market trader . in each case , the primary problem faced by the trader / execution algorithm is the compromise between price impact and opportunity cost when executing an order .price impact here refers to adverse price moves due to a large trade size absorbing liquidity supply at available levels in the order book ( temporary price impact ) .as market participants begin to detect the total volume being traded , they may also adjust their bids / offers downward / upward to anticipate order matching ( permanent price impact ) . to avoid price impact , traders may split a large order into smaller child orders over a longer period . however , there may be exogenous market forces which result in execution at adverse prices ( opportunity cost ) .this behaviour of institutional investors was empirically demonstrated in , where they observed that typical trades of large investment management firms are almost always broken up into smaller trades and executed over the course of a day or several days .several authors have studied the problem of optimal liquidation , with a strong bias towards stochastic dynamic programming solutions .see , , , as examples . in this paper, we consider the application of a machine learning technique to the problem of optimal liquidation .specifically we consider a case where the popular almgren - chriss closed - form solution for a trading trajectory ( see ) can be enhanced by exploiting microstructure attributes over the trading horizon using a reinforcement learning technique .reinforcement learning in this context is essentially a calibrated policy mapping states to optimal actions .each state is a vector of observable attributes which describe the current configuration of the system .it proposes a simple , model - free mechanism for agents to learn how to act optimally in a controlled markovian domain , where the quality of action chosen is successively improved for a given state . for the optimal liquidation problem, the algorithm examines the salient features of the current order book and current state of execution in order to decide which action ( e.g. child order price or volume ) to select to service the ultimate goal of minimising cost .the first documented large - scale empirical application of reinforcement learning algorithms to the problem of optimised trade execution in modern financial markets was conducted by .they set up their problem as a minimisation of implementation shortfall for a buying / selling program over a fixed time horizon with discrete time periods . for actions ,the agent could choose a price to repost a limit order for the remaining shares in each discrete period .state attributes included elapsed time , remaining inventory , current spread , immediate cost and signed volume . 
in their results, they found that their reinforcement learning algorithm improved the execution efficiency by 50% or more over traditional submit - and - leave or market order policies . instead of a pure reinforcement learning solution to the problem , as in , we propose a hybrid approach which _ enhances _ a given analytical solution with attributes from the market microstructure . using the almgren - chriss ( ac ) model as a base , for a finite liquidation horizon with discrete trading periods , the algorithm determines the proportion of the ac - suggested trajectory to trade based on prevailing volume / spread attributes .one would expect , for example , that allowing the trajectory to be more aggressive when volumes are relatively high and spreads are tight may reduce the ultimate cost of the trade . in our implementation , a static volume trajectory is preserved for the duration of the trade , however the proportion traded is dynamic with respect to market dynamics . as in ,a market order is executed at the end of the trade horizon for the remaining volume , to ensure complete liquidation .an important consideration in our analysis is the specification of the problem as a finite - horizon markov decision process ( mdp ) and the consequences for optimal policy convergence of the reinforcement learning algorithm . in , they use an approximation in their framework to address this issue by incorporating elapsed time as a state attribute , however they do not explicitly discuss convergence .we will use the findings of in our model specification and demonstrate near - optimal policy convergence of the finite - horizon mdp problem .the model described above is compared with the base almgren - chriss model to determine whether it increases / decreases the cost of execution for different types of trades consistently and significantly .this study will help determine whether reinforcement learning is a viable technique which can be used to extend existing closed - form solutions to exploit the nuances of the microstructure where the algorithms are applied .this paper proceeds as follows : section 2 introduces the standard almgren - chriss model .section 3 describes the specific hybrid reinforcement learning technique proposed , along with a discussion regarding convergence to optimum action values .section 4 discusses the data used and results , comparing the 2 models for multiple trade types .section 5 concludes and proposes some extensions for further research .bertsimas and lo are pioneers in the area of optimal liquidation , treating the problem as a stochastic dynamic programming problem .they employed a dynamic optimisation procedure which finds an explicit closed - form best execution strategy , minimising trading costs over a fixed period of time for large transactions .almgren and chriss extended the work of to allow for risk aversion in their framework .they argue that incorporating the uncertainty of execution of an optimal solution is consistent with a trader s utility function .in particular , they employ a price process which permits linear permanent and temporary price impact functions to construct an efficient frontier of optimal execution .they define a trading strategy as being _ efficient _ if there is no strategy which has lower execution cost variance for the same or lower level of expected execution cost .the exposition of their solution is as follows : they assume that the security price evolves according to a discrete arithmetic random walk : where : here , permanent 
price impact refers to changes in the equilibrium price as a direct function of our trading , which persists for at least the remainder of the liquidation horizon .temporary price impact refers to adverse deviations as a result of absorbing available liquidity supply , but where the impact dissipates by the next trading period due to the resilience of the order book .almgren and chriss introduce a temporary price impact function to their model , where causes a temporary adverse move in the share price as a function of our trading rate .given this addition , the actual security transaction price at time is given by : assuming a _ sell _ program , we can then define the total trading revenue as : where for .the total cost of trading is thus given by , i.e. the difference between the target revenue value and the total actual revenue from the execution .this definition refers to perold s implementation shortfall measure ( see ) , and serves as the primary transaction cost metric which is minimised in order to maximise trading revenue .since implementation shortfall is a random variable , almgren and chriss compute the following : and the distribution of implementation shortfall is gaussian if the are gaussian .given the overall goal of minimising execution costs and the variance of execution costs , they specify their objective function as : where : the intuition of this objective function can be thought of as follows : consider a stock which exhibits high price volatility and thus a high risk of price movement away from the reference price . a risk averse trader would prefer to trade a large portion of the volume immediately , causing a ( known ) price impact , rather than risk trading in small increments at successively adverse prices . alternatively ,if the price is expected to be stable over the liquidation horizon , the trader would rather split the trade into smaller sizes to avoid price impact .this trade - off between speed of execution and risk of price movement is what governs the shape of the resulting trade trajectory in the ac framework .a detailed derivation of the general solution can be found in . here ,we state the general solution : the associated trade list is : where : this implies that for a program of selling an initially long position , the solution decreases monotonically from its initial value to zero at a rate determined by the parameter .if trading intervals are short , is essentially the ratio of the product of volatility and risk - intolerance to the temporary transaction cost parameter .we note here that a larger value of implies a _ more rapid _ trading program , again conceptually confirming the propositions of that an intolerance for execution risk leads to a larger concentration of quantity traded early in the trading program .another consequence of this analysis is that different sized baskets of the same securities will be liquidated in the same manner , barring scale differences and provided the risk aversion parameter is held constant .this may be counter - intuitive , since one would expect larger baskets to be effectively less liquid , and thus follow a _ less rapid _ trading program to minimise price impact costs .it should be noted that the ac solution yields a suggested volume trajectory over the liquidation horizon , however there is no discussion in as to the prescribed _ order type _ to execute the trade list . we have assumed that the trade list can be executed as a series of _ market orders_. 
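For concreteness, the discrete-time trajectory and trade list implied by the general solution can be computed directly from the closed-form holdings x_k = X sinh(kappa(T - t_k))/sinh(kappa T). The sketch below follows the standard Almgren-Chriss discrete-time formulas; the parameter names and values are purely illustrative and are not taken from the paper.
....
import numpy as np

def ac_trajectory(X, T, N, sigma, eta, gamma, lam):
    """X: shares to sell, T: horizon, N: number of periods, sigma: volatility,
    eta: temporary impact, gamma: permanent impact, lam: risk aversion.
    Returns (holdings x_0..x_N, trade list n_1..n_N)."""
    tau = T / N
    eta_tilde = eta * (1.0 - gamma * tau / (2.0 * eta))       # adjusted temporary impact
    kappa_tilde_sq = lam * sigma**2 / eta_tilde
    kappa = np.arccosh(0.5 * kappa_tilde_sq * tau**2 + 1.0) / tau
    t = np.arange(N + 1) * tau
    x = X * np.sinh(kappa * (T - t)) / np.sinh(kappa * T)     # remaining holdings
    n = -np.diff(x)                                           # shares sold each period
    return x, n

if __name__ == "__main__":
    x, n = ac_trajectory(X=100_000, T=1.0, N=12, sigma=0.95,
                         eta=2.5e-6, gamma=2.5e-7, lam=1e-5)
    print(np.round(n))       # monotonically decreasing trade list
    print(n.sum())           # ~100000: the full position is liquidated
....
The resulting trade list decreases monotonically, and increasing the risk-aversion parameter front-loads it further, consistent with the discussion of kappa above.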
given that this implies we are always crossing the spread, one needs to consider that traversing an order book with thin volumes and widely - spaced prices could have a significant transaction cost impact .we thus consider a reinforcement learning technique which learns _ when _ and _ how much _ to cross the spread , based on the current order book dynamics .the general solution outlined above assumes linear price impact functions , however the model was extended by almgren in to account for non - linear price impact .this extended model can be considered as an alternative base model in future research .the majority of reinforcement learning research is based on a formalism of markov decision processes ( mdps ) . in this context ,reinforcement learning is a technique used to numerically solve for a calibrated policy mapping states to optimal or near - optimal actions .it is a framework within which a learning agent repeatedly observes the state of its environment , and then performs a chosen action to service some ultimate goal .performance of the action has an immediate numeric reward or penalty and changes the state of the environment .the problem of solving for an optimal policy mapping states to actions is well - known in stochastic control theory , with a significant contribution by bellman .bellman showed that the computational burden of an mdp can be significantly reduced using what is now known as dynamic programming .it was however recognised that two significant drawbacks exist for classical dynamic programming : firstly , it assumes that a complete , known model of the environment exists , which is often not realistically obtainable .secondly , problems rapidly become computationally intractable as the number of state variables increases , and hence , the size of the state space for which the value function must be computed increases .this problem is referred to as the _ curse of dimensionality _ .reinforcement learning offers two advantages over classical dynamic programming : firstly , agents learn online and continually adapt while performing the given task . secondly , the methods can employ function approximation algorithms to represent their knowledge .this allows them to generalize across the state space so that the learning time scales much better .reinforcement learning algorithms do not require knowledge about the exact model governing an mdp and thus can be applied to mdp s where exact methods become infeasible .although a number of implementations of reinforcement learning exist , we will focus on _ q - learning_. this is a model - free technique first introduced by , which can be used to find the optimal , or near - optimal , action - selection policy for a given mdp . during _ q - learning _, an agent s learning takes place during sequential episodes .consider a discrete finite world where at each step , an agent is able to register the current state and can choose from a finite set of actions .the agent then receives a probabilistic reward , whose mean value depends only on the current state and action . according to , the state of the world changes probabilistically to according to : the agent is then tasked to learn the optimal policy mapping states to actions , i.e. one which maximises total discounted expected reward . 
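In symbols, the quantities used in the next few paragraphs are the standard ones from Watkins' Q-learning; the display below is only a restatement of those textbook definitions in my own notation, not a transcription of the paper's formulas.
....
% Standard Q-learning quantities (notation mine):
V^{\pi}(x) = R\big(x,\pi(x)\big) + \gamma \sum_{y} P_{xy}\!\big[\pi(x)\big]\, V^{\pi}(y),
\qquad 0 \le \gamma < 1,

V^{*}(x) = \max_{a}\Big\{ R(x,a) + \gamma \sum_{y} P_{xy}[a]\, V^{*}(y) \Big\},
\qquad
Q^{\pi}(x,a) = R(x,a) + \gamma \sum_{y} P_{xy}[a]\, V^{\pi}(y),

V^{*}(x) = \max_{a} Q^{*}(x,a),
\qquad
\pi^{*}(x) = \operatorname*{arg\,max}_{a} Q^{*}(x,a),

Q_{n}(x,a) \leftarrow Q_{n-1}(x,a)
  + \alpha_{n}\Big[r_{n} + \gamma \max_{b} Q_{n-1}(y,b) - Q_{n-1}(x,a)\Big].
....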
under some policy mapping and discount rate ( ) ,the value of state is given by : according to and , the theory of dynamic programming says there is at least one optimal stationary policy such that we also define as the expected discounted reward from choosing action in state , and then following policy thereafter , i.e. the task of the _ q - learning _agent is to determine , and where is unknown , using a combination of exploration and exploitation techniques over the given domain .it can be shown that and that an optimal policy can be formed such that .it thus follows that if the agent can find the optimal q - values , the optimal action can be inferred for a given state .it is shown in that an agent can learn q - values via experiential learning , which takes place during sequential episodes . in the episode , the agent : *observes its current state , * selects and performs an action , * observes the subsequent state as a result of performing action , * receives an immediate reward and * uses a learning factor , which decreases gradually over time . is updated as follows : provided each state - action pair is visited infinitely often , show that converges to for any exploration policy .singh et al .provide guidance as to specific exploration policies for asymptotic convergence to optimal actions and asymptotic exploitation under the _ q - learning _algorithm , which we incorporate in our analysis .the above exposition presents an algorithm which guarantees optimal policy convergence of a stationary infinite - horizon mdp .the stationarity assumption , and hence validity of the above result , needs to be questioned when considering a finite - horizon mdp , since states , actions and policies are time - dependent .in particular , we are considering a discrete period , finite trading horizon , which guarantees execution of a given volume of shares . at each decision step in the trading horizon , it is possible to have different state spaces , actions , transition probabilities and reward values .hence the above model needs revision .garcia and ndiaye consider this problem and provide a model specification which suits this purpose .they propose a slight modification to the bellman optimality equations shown above : for all , , , and .this optimality equation has a single solution that can be obtained using dynamic programming techniques .the equivalent discounted expected reward specification thus becomes : they propose a novel transformation of an -step non - stationary mdp into an infinite - horizon process ( ) .this is achieved by adding an artifical final reward - free absorbing state , such that all actions lead to with probability 1 .hence the revised _ q - learning _ update equation becomes : where if , , otherwise choose randomly in . 
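A tabular sketch of this finite-horizon update is given below. The conventions are my own: Q(s, a) is treated as an expected cost-to-go (implementation shortfall), so the backup takes a minimum over next actions and the greedy policy picks the smallest Q; the post-horizon state is absorbing and reward-free, so its Q-values stay at zero and gamma = 1 can be used, as in the construction above.
....
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(state, action)]; the state includes elapsed time
visits = defaultdict(int)

def q_update(state, action, cost, next_state, next_actions):
    """One Q-learning step; 'cost' is the shortfall contribution of this child order."""
    visits[(state, action)] += 1
    alpha = 1.0 / visits[(state, action)]          # decaying learning rate
    future = 0.0 if not next_actions else min(Q[(next_state, b)] for b in next_actions)
    Q[(state, action)] += alpha * (cost + future - Q[(state, action)])

def choose_action(state, actions, epsilon=0.1):
    """Epsilon-greedy so that every state-action pair keeps being visited."""
    if random.random() < epsilon:
        return random.choice(actions)
    return min(actions, key=lambda a: Q[(state, a)])   # exploit: lowest expected shortfall
....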
if , select .the learning rule for is thus equivalent to setting .garcia and ndiaye further show that the above specification ( in the case where ) will converge to the optimal policy with probability one , provided that each state - action pair is visited infinitely often , and .given the above description , we are able to discuss our specific choices for state attributes , actions and rewards in the context of the optimal liquidation problem .we need to consider a specification which adequately accounts for our state of execution and the current state of the limit order book , representing the opportunity set for our ultimate goal of executing a volume of shares over a fixed trading horizon .we acknowledge that the complexity of the financial system can not be distilled into a finite set of states and is not likely to evolve according to a markov process. however , we conjecture that the essential features of the system can be sufficiently captured with some simplifying assumptions such that meaningful insights can still be inferred . for simplicity ,we have chosen a look - up table representation of .function approximation variants may be explored in future research for more complex system configurations .as described above , each state represents a vector of observable attributes which describe the configuration of the system at time . as in , we use _ elapsed time _ and _ remaining inventory _ as private attributes which capture our state of execution over a finite liquidation horizon .since our goal is to modify a given volume trajectory based on favourable market conditions , we include _ spread _ and _ volume _ as candidate market attributes .the intuition here is that the agent will learn to increase ( decrease ) trading activity when _ spreads _ are narrow ( wide ) and _ volumes _ are high ( low ) .this would ensure that a more significant proportion of the total volume - to - trade would be secured at a favourable price and , similarly , less at an unfavourable price , ultimately reducing the post - trade implementation shortfall . given the look - up table implementation ,we have simplified each of the state attributes as follows : * = trading horizon , = total volume - to - trade , * = hour of day when trading will begin , * = number of remaining inventory states , * = number of spread states , * = number of volume states , * = % ile spread of the tuple , * = % ile bid / ask volume of the tuple , * * elapsed time * : , * * remaining inventory * : , * * spread state * : * * volume state * : thus , for the episode , the state attributes can be summarised as the following tuple : for and , we first construct a historical distribution of spreads and volumes based on the training set .it has been empirically observed that major equity markets exhibit _ u_-shaped trading intensity throughout the day , i.e. more activity in mornings and closer to closing auction. a further discussion of these insights can be found in and .in fact , empirically demonstrates that south african stocks exhibit similar characteristics .we thus consider simlulations where training volume / spread tuples are _h_-hour dependent , such that the optimal policy is further refined with respect to trading time ( _ h _ ) . based on the almgren - chriss ( ac ) model specified above, we calculate the ac volume trajectory ( ) for a given volume - to - trade ( ) , fixed time horizon ( ) and discrete trading periods ( ) . 
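A minimal sketch of the lookup-table state encoding just described is given below; the bucket counts, thresholds, and the synthetic stand-ins for the historical spread and volume distributions are illustrative assumptions, not values from the paper.
....
import numpy as np

def make_state(period, remaining, X, spread, volume,
               hist_spreads, hist_volumes, I=10, S=5, V=5):
    """Return the state tuple <t, i, s, v> used to index the Q-table."""
    t = period                                               # elapsed-time state, 1..T
    i = min(I, int(np.ceil(remaining / X * I)))              # remaining-inventory bucket
    s = min(S, 1 + int(np.searchsorted(                      # spread percentile bucket
            np.percentile(hist_spreads, np.linspace(0, 100, S + 1)[1:-1]), spread)))
    v = min(V, 1 + int(np.searchsorted(                      # volume percentile bucket
            np.percentile(hist_volumes, np.linspace(0, 100, V + 1)[1:-1]), volume)))
    return (t, i, s, v)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hist_spreads = rng.gamma(2.0, 5.0, 10_000)               # stand-ins for training data
    hist_volumes = rng.lognormal(8.0, 1.0, 10_000)
    print(make_state(period=3, remaining=40_000, X=100_000, spread=4.0, volume=5_000,
                     hist_spreads=hist_spreads, hist_volumes=hist_volumes))
....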
represents the proportion of to trade in period , such that .for the purposes of this study , we assume that each child order is executed as a _ market order _ based on the prevailing limit order book structure .we would like our learning agent to modify the ac volume trajectory based on prevailing volume and spread characteristics in the market .as such , the possible actions for our agent include : * = proportion of to trade , * = lower bound of volume proportion to trade , * = upper bound of volume proportion to trade , * * action * : , where + and the aim here is to train the learning agent to trade a higher ( lower ) proportion of the overall volume when conditions are favourable ( unfavourable ) , whilst still broadly preserving the volume trajectory suggested by the ac model . to ensure that the total volume - to - trade is executed over the given time horizon , we execute any residual volume at the end of the trading period with a _ market order_. each of the actions described above results in a volume to execute with a _ market order _ , based on the prevailing structure of the limit order book .the size of the child order volume will determine how deep we will need to traverse the order book .for example , suppose we have a _ buy _ order with a _ volume - to - trade _ of 20000 , split into child orders of 10000 in period and 10000 in period .if the structure of the limit order book at time is as follows : * _ level-1 ask price _ = 100.00 ; _level-1 ask volume _= 3000 * _ level-2 ask price _ = 100.50 ; _ level-2 ask volume _ = 4000 * _ level-3 ask price _ = 102.30 ; _ level-3 ask volume _ = 5000 * _ level-4 ask price _ = 103.00 ; _ level-4 ask volume _ = 6000 * _ level-5 ask price _ = 105.50 ; _ level-5 ask volume _= 2000 the volume - weighted execution price will be : trading more ( less ) given this limit order book structure will result in a higher ( lower ) volume - weighted execution price .if the following trading period has the following structure : * _ level-1 ask price _ = 99.80 ; _ level-1 ask volume _level-2 ask price _= 99.90 ; _level-2 ask volume _level-3 ask price _= 101.30 ; _level-3 ask volume _level-4 ask price _= 107.00 ; _level-4 ask volume _= 3000 * _ level-5 ask price _ = 108.50 ; _ level-5 ask volume _= 1000 the volume - weighted execution price for the second child order will be : if the reference price of the stock at is 99.5 , then the _ implementation shortfall _ from this trade is: since the conditions of the limit order book were more favourable for _ buy _ orders in period , if we had modified the child orders to , say 8000 in period and 12000 in period , the resulting _ implementation shortfall _ would be: in this example , increasing the child order volume when _ ask prices _ are lower and _ level-1 volumes _ are higher decreases the overal cost of the trade .it is for this reason that _ implementation shortfall _ is a natural candidate for the rewards matrix in our reinforcement learning system .each action implies a child order volume , which has an associated volume - weighted execution price .the agent will learn the consequences of each action over the trading horizon , with the ultimate goal of minimising the total trade s _implementation shortfall_. 
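The mechanics of executing a child order as a market order against the book can be sketched as follows. The order-book snapshots are hypothetical (the worked example above elides some level volumes), and the shortfall here is reported per share against the arrival price.
....
def walk_book(ask_levels, qty):
    """ask_levels: list of (price, volume) from best to worst; returns the VWAP paid."""
    remaining, cost = qty, 0.0
    for price, volume in ask_levels:
        take = min(remaining, volume)
        cost += take * price
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("order book too thin for this child order")
    return cost / qty

def implementation_shortfall(reference_price, fills):
    """fills: list of (qty, vwap) per child order; per-share cost vs the arrival price."""
    total_qty = sum(q for q, _ in fills)
    paid = sum(q * p for q, p in fills)
    return paid / total_qty - reference_price

if __name__ == "__main__":
    book_t1 = [(100.00, 3000), (100.50, 4000), (102.30, 5000)]   # hypothetical snapshot
    book_t2 = [(99.80, 5000), (99.90, 4000), (101.30, 5000)]     # hypothetical snapshot
    fills = [(10_000, walk_book(book_t1, 10_000)),
             (10_000, walk_book(book_t2, 10_000))]
    print(implementation_shortfall(99.50, fills))   # cost per share relative to arrival
....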
given the above specification , we followed the following steps to generate our results : * specify a stock ( ) , volume - to - trade ( ) , time horizon ( ) , and trading datetime ( from which the trading hour is inferred ) , * partition the dataset into independent _ training sets _ and _ testing sets _ to generate results ( the _ training set _always pre - dates the _ testing set _ ) , * calibrate the parameters for the almgren - chriss ( ac ) volume trajectory ( ) using the historical _ training set _ ; set , since we assume order book is resilient to trading activity ( see below ) , * generate the ac volume trajectory ( ) , * train the _ q - matrix _ based on the state - action tuples generated by the _ training set _ , * execute the ac volume trajectory at the specified trading datetime ( ) on each day in the _ testing set _ , recording the _ implementation shortfall _ , * use the trained _ q - matrix _ to modify the ac trajectory as we execute at the specified trading datetime , recording the _ implementation shortfall _ and * determine whether the reinforcement learning ( rl ) model improved / worsened realised _implementation shortfall_. in order to train the _ q - matrix _ to learn the optimal policy mapping , we need to traverse the training data set ( ) times , where is the total number of possible actions .the following pseudo - code illustrates the algorithm used to train the _ q - matrix _ : .... optimal_strategy < v , t , i , a > for ( episode 1 to n ) { record reference price at t=0 for t = t to 1 { for i = 1 to i calculate episode 's state attributes < s , v > for a = 1 to a { set x = < t , i , s , v > determine the action volume a calculate is from trade , r(x , a ) simulate transition x to y look up max_p q(y , p ) update q(x , a ) = q(x , a ) + alpha*u } } } select the lowest - is action max_p q(y , p ) for optimal policy .... an important assumption in this model specification is that our trading activity does not affect the market attributes .although temporary price impact is incorporated into execution prices via depth participation of the _ market order _ in the prevailing limit order book , we assume the limit order book is resilient with respect to our trading activity .market resiliency can be thought of as the number of quote updates before the market s spread reverts to its competitive level .degryse et al. 
showed that a pure limit order book market ( euronext paris ) is fairly resilient with respect to most order sizes , taking on average 50 quote updates for the spread to normalise following the most aggressive orders .since we are using 5-minute trading intervals and small trade sizes , we will assume that any permanent price impact effects dissipate by the next trading period .a preliminary analysis of south african stocks revealed that there were on average over 1000 quote updates during the 5-minute trading intervals and the pre - trade order book equilibrium is restored within 2 minutes for large trades .the validity of this assumption however will be tested in future research , as well as other model specifications explored which incorporate permanent effects in the system configuration .for this study , we collected 12 months of market depth tick data ( jan-2012 to dec-2012 ) from the thomson reuters tick history ( trth ) database , representing a universe of 166 stocks that make up the south african local benchmark index ( alsi ) as at 31-dec-2012 .this includes 5 levels of order book depth ( bid / ask prices and volumes ) at each tick .the raw data was imported into a mongodb database and aggregated into 5-minute intervals showing average level prices and volumes , which was used as the basis for the analysis . to test the robustness of the proposed model in the south african ( sa ) equity market we tested a variety of stock types , trade sizes and model parameters . due to space constraints, we will only show a representative set of results here that illustrate the insights gained from the analysis .the following summarises the stocks , parameters and assumptions used for the results that follow : * * stocks * * * sbk ( large cap , financials ) * * agl ( large cap , resources ) * * sab ( large cap , industrials ) * * model parameters * * * : 0 , : 2 , : 0.25 * * : 0.01 , : 5-min , : 1 , : 1 * * : 100 000 , 1000 000 * * : 4 ( 20-min ) , 8 ( 40-min ) , 12 ( 60-min ) * * : 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 * * : 5 , 10 * * buy / sell : buy * * assumptions * * * max volume participation rate in order book : 20% * * market is resilient to our trading activity note , we set since states that this is a necessary condition to ensure convergence to the optimal policy with probability one for a finite - horizon mdp .we also choose an arbitrary value for , although sensitivities to these parameters will be explored in future work .ac parameters are calibrated and _ q - matrix _ trained over a 6-month _ training set _ from 1-jan-2012 to 30-jun-2012 .the resultant ac and rl trading trajectories are then _ executed _ on each day at the specified trading time in the _ testing set _ from 1-jul-2012 to 20-dec-2012 .the _ implementation shortfall _ for both models is calculated and the difference recorded .this allows us to construct a distribution of _ implementation shortfall _ for each of the ac and rl models , and for all trading hours .table 1 shows the average % improvement in median _ implementation shortfall _ for the complete set of stocks and parameter values .these results suggest that the model is more effective for shorter trading horizons ( ) , with an average improvement of up to 10.3% over the base ac model .this result may be biased due to the assumption of order book resilience .indeed , the efficacy of the trained q - matrix may be less reliable for stocks which exhibit slow order book resilience , since permanent price effects would affect the state space transitions . 
in future work, we plan to relax this order book resilience assumption and incoporate permanent effects into state transitions .figure 1 illustrates the improvement in median post - trade _ implementation shortfall _ when executing the volume trajectories generated by each of the models , for each of the candidate stocks at the given trading times . in general ,the rl model is able to improve ( lower ) ex - post _ implementation shortfall _ , however the improvement seems more significant for early morning / late afternoon trading hours .this could be due to the increased trading activity at these times , resulting in more state - action visits in the _ training set _ to refine the associated q - matrix values .we also notice more dispersed performance between 10:00 and 11:00 .this time period coincides with the uk market open , where global events may drive local trading activity and skew results , particularly since certain sa stocks are dual - listed on the london stock exchange ( lse ) .the improvement in _ implementation shortfall _ ranges from 15 bps ( 85.3% ) for trading 1000 000 of sbk between 16:00 and 17:00 , to -7 bps ( -83.4% ) for trading 100 000 sab between 16:00 and 17:00 .overall , the rl model is able to improve _ implementation shortfall _ by 4.8% ..average % improvement in median _ implementation shortfall _ for various parameter values , using ac and rl models .training -dependent .[ cols="<,<,<,^,^,^,^,^,^,^,^ , > " , ] figure 2 shows the % of _ correct actions _implied by the q - matrix , as it evolves through the training process after each tuple visit . here ,correct action _ is defined as a reduction ( addition ) in the volume - to - trade based on the max q - value action , in the case where _ spreads _ are above ( below ) the 50%ile and _ volumes _ are below ( above ) the 50%ile level .this coincides with the intuitive behaviour we would like the rl agent to learn .these results suggest that finer state granularity ( ) improves the overall accuracy of the learning agent , as demonstrated by the higher % _ correct actions _ achieved .all model configurations seem to converge to some _ stationary _ accuracy level after approximately 1000 tuple visits , suggesting that a shorter training period may yield similar results .we do however note that improving the % of correct actions by increasing the granularity of the state space does not necessarily translate into better model performance .this can be seen by table 1 , where the results where do not show any significant improvement over those with .this suggests that the market dynamics may not be fully represented by _volume _ and _ spread _ state attributes , and alternative state attributes should be explored in future work to improve ex - post model efficacy .table 2 shows the average standard deviation of the resultant _ implementation shortfall _ when using each of the ac and rl models .since we have not explicitly accounted for _ variance of execution _ in the rl reward function , we see that the resultant trading trajectories generate a higher standard deviation compared to the base ac model .thus , although the rl model provides a performance improvement over the ac model , this is achieved with a higher degree of execution risk , which may not be acceptable for the trader .we do note that the rl model exhibits comparable risk for , thus validating the use of the rl model to reliably improve is over short trade horizons . 
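as an illustration of how the correct-action percentage in figure 2 can be computed from a trained q-matrix, consider the sketch below. the dictionary layout of the q-matrix, the helper name and the greedy-selection convention are assumptions of this sketch; the classification rule itself follows the definition above, with the 50%ile spread and volume buckets supplied as arguments.
....
def correct_action_rate(Q, states, neutral_action, s_median, v_median):
    """Fraction of visited states in which the greedy (max q-value) action
    moves the child order in the intuitively correct direction.

    Q[state]       : dict mapping candidate actions (volume multipliers) to q-values
    neutral_action : action that leaves the AC child order unchanged
    s_median, v_median : indices of the 50%ile spread and volume buckets
    """
    hits = total = 0
    for state in states:
        _, _, s, v = state                         # state = (t, i, s, v)
        greedy = max(Q[state], key=Q[state].get)   # assumes larger q-value = preferred
        if s > s_median and v <= v_median:         # wide spread, thin book: trade less
            hits += greedy < neutral_action
            total += 1
        elif s <= s_median and v > v_median:       # tight spread, deep book: trade more
            hits += greedy > neutral_action
            total += 1
    return hits / total if total else float("nan")
....
tracking this ratio after each tuple visit gives learning curves of the kind shown in figure 2.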
a future refinement of the rl model should incorporate _ variance of execution _ , such that it is consistent with the ac objective function. in this way, a true comparison of the techniques can be made, and one can conclude whether the rl model indeed outperforms the ac model at a statistically significant level. in this paper, we introduced reinforcement learning as a candidate machine learning technique to _ enhance _ a given optimal liquidation volume trajectory. nevmyvaka, feng and kearns showed that reinforcement learning delivers promising results where the learning agent is trained to choose the optimal limit order price at which to place the remaining inventory, at discrete periods over a fixed liquidation horizon. here, we show that reinforcement learning can also be used successfully to modify a given volume trajectory based on market attributes, executed via a sequence of _ market orders _ based on the prevailing limit order book. specifically, we showed that a simple look-up table _ q-learning _ technique can be used to train a learning agent to modify a static almgren-chriss volume trajectory based on prevailing spread and volume dynamics, assuming order book resiliency. using a sample of stocks and trade sizes in the south african equity market, we were able to reliably improve post-trade _ implementation shortfall _ by up to 10.3% on average for short trade horizons, demonstrating promising potential applications of this technique. further investigations include incorporating _ variance of execution _ in the rl reward function, relaxing the order book resiliency assumption and exploring alternative state attributes to govern market dynamics. the authors thank dr nicholas westray for his contribution in the initiation of this work, as well as the insightful comments from the anonymous reviewers. this work is based on research supported in part by the national research foundation of south africa (grant number cprr 70643).
reinforcement learning is explored as a candidate machine learning technique to enhance existing analytical solutions for optimal trade execution with elements from the market microstructure . given a volume - to - trade , fixed time horizon and discrete trading periods , the aim is to adapt a given volume trajectory such that it is dynamic with respect to favourable / unfavourable conditions during realtime execution , thereby improving overall cost of trading . we consider the standard almgren - chriss model with linear price impact as a candidate base model . this model is popular amongst sell - side institutions as a basis for arrival price benchmark execution algorithms . by training a learning agent to modify a volume trajectory based on the market s prevailing spread and volume dynamics , we are able to improve post - trade implementation shortfall by up to 10.3% on average compared to the base model , based on a sample of stocks and trade sizes in the south african equity market .
many recent survey articles on the challenges of achieving exascale computing identify three issues to be overcome : exploiting massive parallelism , reducing energy usage and , in particular , coping with run - time failures . faults are an issue at peta / exa - scale due to the increasing number of components in such systems .traditional checkpoint - restart based solutions become unfeasible at this scale as the decreasing mean time between failures approaches the time required to checkpoint and restart an application .algorithm based fault tolerance has been studied as a promising solution to this issue for many problems .sparse grids were introduced in the study of high dimensional problems as a way to reduce the _ curse of dimensionality_. they are based on the observation that when a solution on a regular grid is decomposed into its hierarchical bases the highest frequency components contribute the least to sufficiently smooth solutions .removing some of these high frequency components has a small impact on the accuracy of the solution whilst significantly reducing the computational complexity .the combination technique was introduced to approximate sparse grid solutions without the complications of computing with a hierarchical basis . in recent years these approacheshave been applied to a wide variety of applications from real time visualisation of complex datasets to solving high dimensional problems that were previously cumbersome .previously it has been described how the combination technique can be implemented within a _ map reduce _ framework .doing so allows one to exploit an extra layer of parallelism and fault tolerance can be achieved by recomputing failed map tasks as described in .also proposed was an alternative approach to fault tolerance in which recomputation can be avoided for a small trade off in solution error . in we demonstrated this approach for a simple two - dimensional problem showing that the average solution error after simulated faults was generally close to that without faults . in this paperwe develop and discuss this approach in much greater detail .in particular we develop a general theory for computing new combination coefficients and discuss a three - dimensional implementation based on mpi and openmp which scales well for relatively small problems .as has been done in the previous literature , we use the solution of the scalar advection pde for our numerical experiments .the remainder of the paper is organised as follows . in section[ sec : back ] we review the combination technique and provide some well - known results which are relevant to our analysis of the fault tolerant combination technique .we then develop the notion of a general combination technique. in section [ sec : faults ] we describe how the combination technique can be modified to be fault tolerant as an application of the general combination technique . using a simple model for faults on each node of a supercomputer we are able to model the failure of component grids in the combination technique and apply this to the simulation of faults in our code .we present bounds on the expected error and discuss in how faults affect the scalability of the algorithm as a whole . 
in section [ sec : implem ]we describe the details of our implementation .in particular we discuss the multi - layered approach and the way in which multiple components work together in order to harness the many levels of parallelism .we also discuss the scalability bottleneck caused by communications and several ways in which one may address this . finally , in section [ sec : numres ] we present numerical results obtained by running our implementation with simulated faults on a pde solver .we demonstrate that our approach scales well to a large number of faults and has a relatively small impact on the solution error .we introduce the combination technique and a classical result which will be used in our analysis of the fault tolerant algorithm . for a complete introduction of the combination technique one should refer to .we then go on to extend this to a more general notion of a combination technique building on existing work on adaptive sparse grids .let , then we define to be a discretisation of the unit interval .similarly for we define as a grid on the unit -cube . throughout the rest of this paperwe treat the variables as multi - indices in .we say if and only if for all , and similarly , if and only if and .now suppose we have a problem with solution ^{d}) ] .however , the worst case scenario is when which results in a classical combination of level which has the error bound this is consistent with the upper bound which is the desired result .note the assumption that each be computed on a different node is not necessary as we have bounded the probability of a failure during the computation of to be independent of the starting time . as a result our bound is independent of the number of nodes that are used during the computation and how the are distributed among them as long as each individual is not distributed across multiple nodes .the nice thing about this result is that the bound on the expected error is simply a multiple of the error bound for , i.e. the result in the absence of faults .if the bound on was tight then one might expect \lessapprox \|u - u^{c}_{n}\|_{2}\left(1 + 3\left(1-e^{-(t_{n}/\lambda)^{\kappa}}\right)\right ) \,.\ ] ] also note that can be expressed as \leq\|u - u^{c}_{n}\|_{2}+\pr(t\leq t_{n})\|u^{c}_{n-1}-u^{c}_{n}\|_{2 } \,.\ ] ] if the combination technique converges for then as . since the error due to faults diminishes as .we now prove an analogous result for the case where only solutions with are recomputed .given and let , and be as described in proposition [ prop : err1 ] with each computed on different nodes for which time between failures is iid having weibull distribution with and .additionally let .suppose we recompute any with which is interrupted by a fault , let be the set of all possible for which was successfully computed ( eventually ) iff .let be the function - valued random variable corresponding to the result of the ftct , then \leq\epsilon_{n}\cdot\min\left\{16,1 + 3\left(d+5-e^{-\left(\frac{t_{n}}{\lambda}\right)^{\kappa}}-(d+4 ) e^{-\left(\frac{t_{n-1}}{\lambda}\right)^{\kappa}}\right)\right\ } \,.\ ] ] this is much the same as the proof of proposition [ prop : err1 ] .the solution to the gcp for satisfies the property that if for then was not computed successfully , that is .however the converse does not hold in general . 
regardless ,if for then the worst case is that and for the possible satisfying .we therefore note that the error generated by faults affecting with is bounded by therefore we have & \leq\|u - u^{c}_{n}\|_{2}+\sum_{\|i\|_{1}=n}g(t_{n})\|h_{i}\|_{2 } \\ & \qquad+\sum_{\|i\|_{1}=n-1}g(t_{n-1})\sum_{j\in i,\,j\geq i}\|h_{j}\|_{2 } \\ & \leq\epsilon_{n}\left(1 + 3\left(1-e^{-(t_{n}/\lambda)^{\kappa}}\right)\right ) \\ & \qquad+\binom{n-1+d-1}{d-1}\left(1-e^{-(t_{n-1}/\lambda)^{\kappa}}\right)(d+4)3^{-d}2^{-2n}|u|_{h^{2}_{\text{mix } } } \\ & \leq\epsilon_{n}\left(1 + 3\left(1-e^{-(t_{n}/\lambda)^{\kappa}}\right ) + 3(d+4)\left(1-e^{-(t_{n-1}/\lambda)^{\kappa}}\right)\right ) \,.\end{aligned}\ ] ] now the expected error should be no more than the worse case which is for which we have . taking the minimum of the two estimates yields the desired result. to illustrate how this result may be used in practice , suppose we compute a level interpolation in dimensions on a machine whose mean time to failure can be modelled by the weibull distribution with a mean of seconds and shape parameter .further , suppose with are not recomputed if lost as a result of a fault and that is estimated to be seconds and is at most seconds .the expected error for our computation is bounded above by times the error bound if no faults were to occur . whilst this provides some theoretical validation of our approach , in practice we can numerically compute an improved estimate by enumerating all possible outcomes and the probability of each occurring .the reason for this is that equation is an overestimate in general , particularly for relatively small . in practice ,a fault on with will generally result in the loss of of the largest hierarchical spaces in which case equation overestimates by a factor of .we now repeat the above analysis , this time focusing on the mean time required for recomputations .the first issue to consider is that a failure may occur during a recomputation which will trigger another recomputation .given a solution with , the probability of having to recompute times is bounded by .hence the expected number of recomputations for such a is bounded by let the time required to compute each be bounded by for some fixed and all . for a given ,suppose we intend to recompute all with upon failure , then the expected time required for recomputations is bounded by and by bounding components of the sum with the case one obtains the time required to compute all with once is similarly bounded by and hence estimates the expected proportion of extra time spent on recomputations .we would generally expect that ( we assume the time to compute level grids is much less than the mean time to failure ) and therefore this quantity is small . as an example , if we again consider a level computation in dimensions for which and the time to failure is weibull distributed with mean seconds with shape parameter , the expected proportion of time spent recomputing solutions level or smaller is . in comparison ,if any of the which fail were to be recomputed then a proportion of additional time for would be expected for recomputations , almost times more . 
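the bounds above are cheap to evaluate, which makes them useful for deciding in advance whether the extra component solutions are worthwhile. the sketch below computes the weibull interruption probability g(t) = 1 - exp(-(t/lambda)^kappa) and the expected-error multiplier of the second bound above, written equivalently as min{16, 1 + 3(g(t_n) + (d+4) g(t_{n-1}))}; the parameter values in the final line are placeholders chosen for illustration only, not the values used in the text.
....
import math

def interruption_probability(t, lam, kappa):
    """Probability that a component computation of duration t is interrupted,
    assuming Weibull(scale=lam, shape=kappa) time between failures on the node."""
    return 1.0 - math.exp(-(t / lam) ** kappa)

def expected_error_multiplier(d, t_n, t_n1, lam, kappa):
    """Bound on E[error] relative to the fault-free error bound when solutions
    on the two finest levels are not recomputed after a fault:
        min{ 16 , 1 + 3*( g(t_n) + (d+4)*g(t_{n-1}) ) }."""
    g_n = interruption_probability(t_n, lam, kappa)
    g_n1 = interruption_probability(t_n1, lam, kappa)
    return min(16.0, 1.0 + 3.0 * (g_n + (d + 4) * g_n1))

# placeholder numbers: a 3d problem whose finest component solves take a few
# minutes, on nodes with a failure-time scale of the order of an hour
print(expected_error_multiplier(d=3, t_n=300.0, t_n1=150.0, lam=3600.0, kappa=0.7))
....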
whilst this is a somewhat crude estimateit clearly demonstrates the our approach will scale better than a traditional checkpoint restart when faults are relatively frequent .at the heart of our implementation is a very simple procedure : solve the problem on different grids , combine the solutions , and repeat .this section is broken up into different sub - sections based upon where different layers of parallelism can be implemented .we conclude by discussing some bottlenecks in the current implementation .the top layer is written in python .as in many other applications , we use python to glue together the different components of our implementation as well as providing some high level functions for performing the combination .it is based upon the development of numrf : intended to be a clean interface where different computation codes can be easily interchanged or added to the application .this layer can be further broken down into 4 main parts .the first is the loading of all dependencies including various python and _ numpy _ modules as well as any shared libraries that will be used to solve the given problem . in particular , bottomlayer components which have been compiled into shared libraries from various languages ( primarily c++ with c wrappers in our case ) are loaded into python using _ctypes_. the second part is the initialisation of data structures and construction / allocation of arrays which will hold the relevant data .this is achieved using pygraft which is a general class of grids and fields that allows us to handle data from the various components in a generic way .also in this part of the code is the building of a sparse grid data structure .this is done with our own c++ implementation which was loaded into python in the first part of the code .the third part consists of solving the given problem .this is broken into several `` combination steps '' .a `` combination step '' consists of a series of time steps of the underlying solver for each component solution , followed by a combination of the component solutions into a sparse grid solution , and finally a sampling of the component solutions from the sparse grid solution before repeating the procedure .the fourth and final part of the code involves checking the error of the computed solutions , reporting of various log data and finalise / cleanup .the top layer is primarily responsible for the coarsest grain parallelism , that is distributing the computation of different component solutions across different processes .this is achieved through mpi using _mpi4py_. there are two main tasks the top layer must perform in order to effectively handle this .the first is to determine an appropriate load balancing of the different component solutions across a given number of processes . for our simple problem thiscan be done statically on startup before initialising any data structures . 
for more complex problems thiscan be done dynamically by evaluating the load balancing and redistributing if necessary at start of each combination step based upon timings performed in the last combination step .re - distribution of the grids may require reallocating many of the data structures .the second task the top layer is responsible for is the communication between mpi processes during the combination step .this is achieved using two all_reduce calls .the first call is to establish which solutions have been successfully computed .this is required so that all processes are able to compute the correct combination coefficients .each process then does a partial sum of the component solutions it has computed .the second all_reduce call then completes the combination of all component solutions distributing the result to all processes . following this the component solutionsare then sampled from the complete sparse grid solution .the bottom layer is made up of several different components , many of which are specific to the problem that is intended to be solved .when solving our advection problem we have 2 main components , one is responsible for the sparse grid data structure and functions relating to the sparse grid ( e.g. interpolation and sampling of component solutions ) and the other component is the advection solver itself .both the sparse grid and advection solver components use openmp to achieve a fine grain level of parallelism .this is primarily achieved by distributing the work of large for loops within the code across different threads .the for loops have roughly constant time per iteration so the distribution of work amongst threads is done statically .the middle layer is currently being developed into the programming model .it is intended solely to handle various aspects relating to the computation of component solutions where domain decompositions are added as a third layer of parallelism .this will be achieved through an interface with a distributed array class of the numrf / pygraft framework at the top layer .this layer will need to interface with solver kernel from the bottom layer and then perform communication of data across domain boundaries .the combination of solutions onto the sparse grid in the top layer will also need to interface with this layer to handle the communication of different domains between mpi processes .it is intended that most of this will be transparent to the user . since interpolation of the sparse grid and the solver ( and any other time consuming operations ) each benefit from load balancing with mpi and work sharing with openmp , any major hurdles to scalability will be caused by the all_reduce communication and any serial operations in the code ( e.g. initialisation routines ) .ignoring initialisation parts of the code it becomes clear we need to either reduce the size of the data which is communicated , or reduce the frequency at which it is communicated .the first can be done if we apply some compression to the data before communicating , i.e. we trade - off smaller communications for additional cpu cycles .another way is to recognise that it is possible to do the all_reduce on a sparse grid of level if a partial hierarchisation is done to the largest component grids .this does nt improve the rate in which the complexity grows but can at least reduce it by a constant . 
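to make the combination step above concrete, the following sketch outlines the two all_reduce calls with mpi4py. it is a simplified illustration rather than the code described above: each rank is assumed to hold its component solutions already mapped onto a common sparse-grid coefficient vector, and the coefficients used are the classical combination coefficients restricted to the surviving index set, whereas the full algorithm obtains them from the gcp.
....
from mpi4py import MPI
import numpy as np
from math import comb

def classical_coefficients(indices, n, d):
    """Classical combination coefficients c_i = (-1)^q * C(d-1, q), q = n - |i|_1,
    restricted to the surviving indices (an illustrative stand-in for the GCP)."""
    return {i: (-1) ** (n - sum(i)) * comb(d - 1, n - sum(i))
            for i in indices if 0 <= n - sum(i) <= d - 1}

def combination_step(comm, owned, all_indices, n, d, sg_size):
    """owned       : {multi-index: component field on a common sparse-grid vector}
                     for the components computed successfully on this rank
    all_indices : globally agreed, ordered list of component indices"""
    # all_reduce 1: agree on which component solutions were computed successfully
    ok = np.array([int(i in owned) for i in all_indices], dtype=np.int32)
    comm.Allreduce(MPI.IN_PLACE, ok, op=MPI.MAX)
    survivors = [i for i, flag in zip(all_indices, ok) if flag]

    # every rank now derives the same combination coefficients
    coeff = classical_coefficients(survivors, n, d)

    # partial sum of locally owned components, then all_reduce 2 to combine
    u_sparse = np.zeros(sg_size)
    for i, field in owned.items():
        u_sparse += coeff.get(i, 0) * field
    comm.Allreduce(MPI.IN_PLACE, u_sparse, op=MPI.SUM)
    return u_sparse   # each rank then samples its component grids from this
....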
reducing the frequency of the all_reducecan be done by performing partial combinations in place of full combinations for some proportion of the steps .this trades off the time taken to combine with some accuracy of the approximation .a partial combination is where a grid combines only with its neighbouring grids .however , the only way to really address the bottleneck caused by communication is to perform a full hierarchisation of the component grids . by doing this one can significantly reduce the communication volume at the expense of increasing the number of messages .one can then reduce the number of messages by identifying those which are communicated to the same mpi processes .we currently have a first implementation of this which we intend to improve as development continues .in this section , we present some numerical results which validate our approach .the problem used to test our algorithm is the scalar advection equation on the domain ^{3}\subset\mathbb{r}^{3}$ ] for constant . for the results presented in this section we use , periodic boundary conditions and the initial condition the pdeis solved using a lax - wendroff finite difference scheme giving results which are second order in space and time .we compare numerical solutions against the exact solution to determine the solution error at the end of each computation .a truncated combination technique as in is used for our experiments . in order to apply the ftct we need to compute some additional grids .we define which is the set of indices for which we are required to compute solutions if the top two levels are not to be recomputed in the event of a fault .note that as the grid sizes vary between the so does the maximum stable time step size as determined by the cfl condition .we choose the same time step size for all component solutions to avoid instability that may otherwise arise from the extrapolation of time stepping errors during the combination . as a resultour timesteps must satisfy the cfl condition for all component grids . by choosing such that it satisfies the cfl condition for the numerical solution of it follows that the cfl condition is also satisfied for all with .all of our computations were performed on a fujitsu primergy cluster consisting of 36 nodes each with 2 intel xeon x5670 cpus ( 6 core , 2.934ghz ) with infiniband interconnect ..[tab : wr2 ] numerical results for runs for each using the weibull distribution with mean of seconds and shape parameter of for the fault simulation .the computation was performed on 2 nodes with 6 openmp threads on each . [ cols="^,^,^,^,^,^,^,^,^",options="header " , ] in table [ tab : wr6 ] we repeat this experiment with runs on 6 nodes with 6 openmp threads on each . 
whilst running with additional nodes leads to a decrease in computation time, we experience more faults on average because of the additional nodes. however, we can see that the effect of the increased average number of faults is quite small on both the average solution error and the average wall time. table [ tab : er2 ] again shows results for runs of the ftct with fault simulation on 2 nodes with 6 openmp threads on each. however, for this experiment the faults are exponentially distributed with a mean of seconds. we see that for this distribution the faults are a little less frequent on average, leading to a slightly smaller average error. similar behaviour is observed in table [ tab : er6 ], where we repeat the experiment with runs on 6 nodes with 6 openmp threads on each. here the average number of faults is substantially less than in table [ tab : wr6 ] and this is again reflected by a smaller average error in comparison. the large time in the 2nd row is due to a single outlier, the next largest time being . no simulated faults occurred for this outlier, so we suspect it was due to a system issue. in figure [ fig : scala2 ] we demonstrate the scalability and efficiency of our implementation when the fault simulation is disabled. noting from table [ tab : wr6 ] that faults have very little effect on the computation time, we expect similar results with fault simulation turned on. the advection problem was solved using a truncated combination. the component solutions were combined only once at the end of the computation. the _ solver _ time reported here is the timing of the core of the code, that is the repeated computation, combination and communication of the solution, which is highly scalable. the _ total _ time reported here includes the time for python to load modules and shared libraries, memory allocation and error checking. the error checking included in the _ total _ time is currently computed in serial and could benefit from openmp parallelism. ( figure [ fig : scala2 ] plots the measured speedup of the _ solver _ and _ total _ timings against the number of cores, together with the ideal linear speedup; at 72 cores the measured speedups are roughly 60 and 45 respectively. ) in figure [ fig : tvf ] we compare the computation time required for our approach to reach a solution with that of more traditional checkpointing approaches, in particular a local and a global checkpointing approach. with global checkpointing we keep a copy of the last combined solution. if a failure affects any of the component grids it is assumed that the entire application is killed and computations must be restarted from the most recent combined solution. we emulate this by checking for faults at each combination step and restarting from the last combination step if any faults have occurred. with local checkpointing each mpi process saves a copy of each component solution it computes.
in this case, when faults affect component solutions we need only recompute the affected component solutions from their saved state. in both checkpointing methods the extra component solutions used in our approach are not required and are hence not computed. as a result these approaches are slightly faster when no faults occur. however, as the number of faults increases, it can be seen from figure [ fig : tvf ] that the computation time for the local and global checkpointing methods begins to grow. a line of best fit has been added to the figure which makes it clear that the time for recovery with global checkpointing increases rapidly with the number of faults. local checkpointing is a significant improvement on this but still shows some growth. on the other hand, our approach is barely affected by the number of faults and beats both the local and global checkpointing approaches after only a few faults. for much larger numbers of faults our approach is significantly better. a generalisation of the sparse grid combination technique has been presented. from this generalisation a fault tolerant combination technique has been proposed which significantly reduces recovery times at the expense of some upfront overhead and reduced solution accuracy. theoretical bounds on the expected error and numerical experiments show that the reduction in solution accuracy is very small. the numerical experiments also demonstrate that the upfront overheads become negligible compared to the costs of recovery using checkpoint-restart techniques if several faults occur. there are some challenges associated with load balancing and efficient communication in the implementation of the combination technique. studying these aspects and improving the overall scalability of the initial implementation will be the subject of future work. as the ulfm specification continues to develop, the validation of the ftct on a system with real faults is also being investigated. , _ a robust combination technique _ , in s. mccue, t. moroney, d. mallet, and j. bunder, editors, proceedings of the 16th biennial computational techniques and applications conference, anziam journal, 54 (ctac2012), pp. c394-c411. , _ robust solutions to pdes with multiple grids _ , sparse grids and applications - munich 2012, j. garcke, d. pflüger (eds.), lecture notes in computational science and engineering 97, springer, 2014, to appear. , _ a parallel fault tolerant combination technique _ , m. bader, a. bode, h.-j. bungartz, m. gerndt, g.r. joubert, f. peters (eds.), parallel computing : accelerating computational science and engineering (cse), advances in parallel computing 25, ios press, 2014, pp. 584-592. , _ global communication schemes for the sparse grid combination technique _ , m. bader, a. bode, h.-j. bungartz, m. gerndt, g.r. joubert, f. peters (eds.), parallel computing : accelerating computational science and engineering (cse), advances in parallel computing 25, ios press, 2014, pp. 564-573. , _ reducibility among combinatorial problems _ , in complexity of computer computations : proc. of a symp. on the complexity of computer computations, r. e. miller and j. w. thatcher, eds., the ibm research symposia series, new york, ny : plenum press, 1972, pp. 85-103.
, _ fault - tolerant grid - based solvers : combining concepts from sparse grids and mapreduce _ , proceedings of 2013 international conference on computer science ( iccs ) , procedia computer science , elsevier , 2013 . , _ managing complexity in the parallel sparse grid combination technique _ , m. bader , a. bode , h .- j .bungartz , m. gerndt , g.r .joubert , f. peters ( eds . ) , parallel computing : accelerating computational science and engineering ( cse ) , advances in parallel computing 25 , ios press , 2014 , pp .
this paper continues to develop a fault tolerant extension of the sparse grid combination technique recently proposed in [ b. harding and m. hegland , _ anziam j. _ , 54 ( ctac2012 ) , pp . c394c411 ] . the approach is novel for two reasons , first it provides several levels in which one can exploit parallelism leading towards massively parallel implementations , and second , it provides algorithm - based fault tolerance so that solutions can still be recovered if failures occur during computation . we present a generalisation of the combination technique from which the fault tolerant algorithm is a consequence . using a model for the time between faults on each node of a high performance computer we provide bounds on the expected error for interpolation with this algorithm . numerical experiments on the scalar advection pde demonstrate that the algorithm is resilient to faults on a real application . it is observed that the trade - off of recovery time to decreased accuracy of the solution is suitably small . a comparison with traditional checkpoint - restart methods applied to the combination technique show that our approach is highly scalable with respect to the number of faults . exascale computing , algorithm - based fault tolerance , sparse grid combination technique , parallel algorithms 65y05 , 68w10
survival analysis is a class of statistical methods for studying the occurrence and timing of events with the ability to incorporate censored observations .our research is a follow up study of the paper _ a recurrent - events survival analysis on the duration of olympic records _ by gutirrez , gonzlez , and lozano ( 2011 ) , in which survival analysis methods were used to examine the duration of track and field olympic records .we expand on this previous research by elaborating on censoring as it pertains to this olympic data , adding many more covariates , and extending our analysis to canoeing , cycling , and swimming . in terms of modeling , we reproduce previous models estimated , as well as introduce a logistic model that has not yet been applied to olympic data .+ as this is a study of time to event data , we defined an event to be a new olympic record being set .our main goal was to gain insight about what determines how long olympic records last .censoring of a record occurred when we had incomplete information about when a record would have been or will be broken , and these are the two cases of censoring we worked with in this study .the first case is a result of any record that is currently standing .the information about these records is incomplete in the sense that we do not know when current records will be broken .another case of censoring comes from any record that spanned over the years in which the olympics were cancelled due to world war i ( wwi ) or world war ii ( wwii ) .the information concerning these records is also incomplete because we do not know if the standing record at that time would have been broken if those particular olympic games had not been cancelled .data for this study was collected from _www.olympic.org _ and _ www.databaseolympics.com_. given that absolute , as opposed to relative , record times , distances , heights , etc .are necessary for tracking the olympic record progression of a particular event , the olympic events analyzed were limited to individual events contained in track & field , canoeing , cycling , and swimming categories .record durations span from the first modern olympic games held in athens , greece in 1896 to the most recent games held in beijing , china in 2008 . in the total of 27 olympic games that have occurred ,there have been 690 distinct records in the 63 different olympic events that were considered .note that the way in which the event time data was collected did not allow for a record to be both set and broken within the same olympic games .only the time , distance , or height achieved by gold medalists were defined to be the record , if in fact it was better than all prior gold medal performances .this was done to avoid potentially including records which lasted only a very short time ( perhaps a few days ) , which would be assigned a duration of 0 in a discrete model where records can only last a multiple of 4 years .a _ covariate _ is a factor that has an effect on the length of time that a record lasts .a wide variety of covariates were taken into account in order to characterize a record .we considered many characteristics of specific records and specific olympic games .here is a detailed explanation of the covariates that were considered : * * gender * - 1 for males and 0 for females .+ * * category * - 1 for track , 2 for field , 3 for canoeing , 4 for cycling , and 5 for swimming . + * * sameathlete * - 1 if the previous record was set by the same person , 0 otherwise . 
+ * * samecountry * - 1 if the previous record was set by a person from the same country , 0 otherwise . + * * hostcountry * - 1 if the record setter is from the same country that the olympics are being held , 0 otherwise . + * * medal * - 1 if the person who set the record was an previous olympic medalist at the time of setting the record , 0 otherwise . + * * worldrecord * - 1 if the olympic record is the world record , 0 otherwise . +* * age * - the age of the record setter at the time the record was set . + * * percent change athletes ( pca ) * - the percent change of the number of participants . + * * percent change in countries ( pcc ) * - the percent change in the number of countries participating .+ * * dgdp * - growth rate of per capita gdp of the country of the record setter . +* * dpop * - growth rate of the population of the country of the record setter . + * * medal count ( mc ) * - the number medals won by the record setter s country through the 2008 olympics . + * * number of records ( nr ) * - the total number of records set at the olympics in the year a record is set . + * * marginal record improvement ( mri ) * - the percent difference between the new record and the previous record . + * * total record improvement ( tri ) * - the percent difference between the new record and the first record .+ * * world record improvement ( wri ) * - the percent difference between the olympic record and the contemporaneous world record .+ tables 1 - 4 show some simple statistics about our olympic data .table [ censored ] tells us the number of total observations , number of censored entries , and the number of records that failed , which is how many records have been broken .table [ freq ] tells us the frequency , or number of records , within each olympic sport category .table [ covariatefreq ] tells us the the number of records that are characterized by either a `` 1 '' or a `` 0 '' value for the categorical covariates that we included .table [ means ] gives us some simple summary statistics about our quantitative covariates .duration , which is the dependent variable , represents the number of years the olympic record lasted , which has a mean of 6.4 years ..censored and uncensored values [ cols="^,^,^,^",options="header " , ] since there are so few record durations of more than 20 years , we were unable to produce reliable estimates of the survival function for greater than 20 , as seen in table [ surv_est ] , so only records that were set after 1988 could be used to make predictions .of the 63 events that were analyzed , the current records for 51 of them were set in 1992 or later , so it was these 51 records that we were able to incorporate into our predictions found in table [ pred ] .therefore , of the 51 records included , the wang - chang , the generalized km , and the mle frailty predict that 31.14 , 25.74 , and 20.12 of them will be broken in 2012 respectively .the factors that go in to new olympic records being set may seem like a mystery to some and completely arbitrary to others . 
however, we ventured to shed some light on the issue using survival analysis methods and techniques. we first used the kaplan-meier estimator to determine which covariates significantly affected the duration of a record and also ran checks for dependence to inform our subsequent analysis. after these preliminary measures, we estimated five different models with significant covariates. four of these models were extensions of the cox proportional hazards model while the other was a discrete-time logistic model for repeated events. in the end, the logistic model was the best fit to our data based on the akaike information criterion, and it depended primarily on the following covariates: sameathlete, worldrecord, pcc, mri, and tri. we also used survival estimates from three different recurrent event survivor function estimators to determine the number of new records that will be set in the 2012 olympics. in 51 of the 63 events we considered, the estimated number of records that will be broken is somewhere between 20.12 and 31.14. future research might include considering more events in the analysis. this could mean adding different individual events, as well as team and winter events. we would also like to investigate covariates that were not included in our analysis, such as ethnicity, a measure of technological improvements, and the percentage change of males and females competing. we would like to thank california state university, fresno and the national science foundation for their financial support (nsf grant # dms-1156273), the california state university, fresno mathematics reu program, and dr. ke wu for his support during the completion of the project.
we use recurrent - events survival analysis techniques and methods to analyze the duration of olympic records . the kaplan - meier estimator is used to perform preliminary tests and recurrent event survivor function estimators proposed by wang & chang ( 1999 ) and pena _ et al . _ ( 2001 ) are used to estimate survival curves . extensions of the cox proportional hazards model are employed as well as a discrete - time logistic model for repeated events to estimate models and quantify parameter significance . the logistic model was the best fit to the data according to the akaike information criterion ( aic ) . we discuss , in detail , covariate significance for this model and make predictions of how many records will be set at the 2012 olympic games in london . + + _ keywords : _ survival analysis , recurrent events , kaplan - meier estimator , cox proportional hazards model , olympics .
ergodic monte carlo simulations of large dimensional systems having complicated topologies with many disconnected local minima are difficult computational tasks , though indispensable for many physical applications. among the various methods dealing with such problems , the parallel tempering method is one of the most successful , especially given the simplicity of the idea and the ease of implementation . for sure, the idea of coupling two independent markov chains characterized by different parameters in order to ensure transfer of information from one to the other has a long history . in physical sciences ,coupling strategies have been employed for the development of specialized sampling techniques such as replica monte carlo sampling of spin glasses, jump - walking, and simulated tempering, to give a few examples .how this coupling must be performed in the simplest , most general , and most efficient way is , however , a quite difficult problem . the parallel tempering method , as we utilize it in this article , addresses the question of coupling independent monte carlo chains that sample classical boltzmann distributions for different temperatures and which are usually generated by the metropolis _ et al _algorithm. the method has been formalized seemingly independently by geyer and thompson as well as by hukushima and nemoto. of course , it is not necessary that the distributions involved in swaps differ through their temperature .for instance , the controlling parameter may be the chemical potential , as in the hyperparallel tempering method, a delocalization parameter , as in the q - jumping monte carlo method, or suitable modifications of the potential , as in the hamiltonian replica exchange method. in parallel tempering , swaps involving two temperatures and are attempted from time to time in a cyclic or random fashion and accepted with the conditional probability }\right\},\ ] ] where is the potential of the physical system under consideration .this acceptance rule ensures that the detailed balance condition is satisfied and that the equilibrium distributions are the boltzmann distributions . as eq .( [ 01 ] ) suggests , the efficiency of the temperature swaps depends on the difference between the inverse temperatures and . in order to maintain high acceptance ratios , only swaps between neighboring temperatures in a given schedule attempted .an optimal schedule of temperatures has the property that the acceptance ratios between neighboring temperatures are equal to some predetermined value , value that is usually greater or equal to .the determination of the optimal schedule is complicated by the fact that the distributions of the coordinates and are also temperature dependent ( they are , of course , the boltzmann distributions at the temperatures and , respectively ) . in this article , we attempt to answer the following important question : what are the main properties of the physical system that control the acceptance ratio in the limit that the difference is small ?the answer to this question allows for a better understanding of the applicability as well as the limitations of the parallel tempering method .in addition , it allows for the development of optimal temperature schedules in a way that seems more direct and easier to implement than other adaptive strategies. 
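for reference, a single swap attempt following eq. ( [ 01 ] ) can be written in a few lines; the sketch below is generic and assumes the two configurations and a callable potential are at hand, with the energy caching of a real sampler omitted for brevity.
....
import math, random

def attempt_swap(beta, beta_prime, x, x_prime, potential):
    """One parallel-tempering swap attempt between the replica at inverse
    temperature beta (configuration x) and the one at beta_prime (x_prime).
    Accepts with probability min{1, exp[(beta_prime - beta)(V(x_prime) - V(x))]}."""
    delta = (beta_prime - beta) * (potential(x_prime) - potential(x))
    if delta >= 0.0 or random.random() < math.exp(delta):
        return x_prime, x, True    # configurations exchanged
    return x, x_prime, False       # swap rejected
....
in practice the two potential energies are already cached by the underlying metropolis walkers, so a swap attempt costs essentially nothing beyond the bookkeeping.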
in section ii , we demonstrate in a rigorous mathematical fashion that the acceptance probabilities for parallel tempering swaps are controlled , within an error , by the _ ratio _ of the two temperatures involved in swaps and by the _average potential fluctuations _ of the system , at the inverse temperature .the acceptance probabilities are well approximated by the so - called incomplete beta function law , which has the additional property that it is exact for harmonic oscillators . under the assumption that the relation between the average fluctuations and the average square fluctuations of the potential is roughly the one for harmonic oscillators, we develop an empirical version of the incomplete beta function law , version that connects the acceptance probabilities for parallel tempering swaps with the temperature ratios and the heat capacity of the system . in section iii.b, we show how the incomplete beta function laws can be employed for the determination of optimal temperature schedules .we also explain the empirical observation that a geometric progression is the best schedule for systems and ranges of temperatures for which the heat capacity is almost constant. in section iii.c , we demonstrate rigorously that the efficiency of the parallel tempering method for harmonic oscillators decreases naturally to zero at least as fast as the inverse square root of the dimensionality of the physical system .we then argue that the loss in efficiency is even greater for condensed - phase systems ( both solids and liquids ) .this result seems to be in contradiction with the findings of kofke, who suggested that such a curse of dimension does not appear for parallel tempering .however , the result is in agreement with the explanation of fukunishi , watanabe , and takada. it is for this reason that we insist that our findings be proven in a mathematically rigorous way .the rigorous form of the incomplete beta function law involves the average potential fluctuations at certain temperatures .an evaluation of this property by monte carlo simulations would require the computation of a double integral over the configuration space .for this reason , current monte carlo codes would have to be modified extensively in order to take advantage of the incomplete beta function law for the design of optimal temperature schedules . to circumvent this undesirable situation, we propose an empirical version of the incomplete beta function law , version that is derived under the assumption that the relation between the average fluctuations and the average square fluctuations of the potential is roughly the one for harmonic oscillators . in sectioniv , we illustrate the good applicability of the empirical law by performing a monte carlo simulation for a cluster made up of atoms of neon that interact through lennard - jones potentials .consider a -dimensional physical system described by the potential , which is assumed to be bounded from below . to simplify notation, we may also assume that the global minimum of the potential is , perhaps after addition of a constant .clearly , the addition of a constant does not change the acceptance probabilities for swaps . 
in the parallel tempering algorithm, swaps involving two temperatures and occur with the conditional probability }\right\}.\end{aligned}\ ] ] the joint probability distribution density of the points and in an equilibrated system is given by the formula where is the configuration integral .it follows that the acceptance probability for swaps between neighboring temperatures is given by the average }\right\}. \quad\end{aligned}\ ] ] by construction , is symmetrical under exchange of variables . without loss of generality, we may assume .then , }\right\ } = e^{(\beta ' - \beta)\min\{0 , v(\mathbf{x}')-v(\mathbf{x})\ } } \\ = e^{\frac{(\beta ' - \beta)}{2 } [ v(\mathbf{x}')-v(\mathbf{x } ) ] } e^{-\frac{(\beta ' - \beta)}{2 } |v(\mathbf{x}')-v(\mathbf{x})|}. \quad \end{aligned}\ ] ] replacing eq .( [ eq:2 ] ) in eq .( [ eq:1 ] ) , we obtain .\quad\end{aligned}\ ] ] in eq .( [ eq:3 ] ) , the variables and are defined by the equations respectively . due to the nature of the results we prove , it is more convenient to express the various formulas in terms of the new variables and . because , we need only consider the case . as announced in the introduction , we are interested in establishing asymptotic laws in the limit that for which the error is of order or , alternatively , . at this point ,let us see that the acceptance probability given by eq .( [ eq:3 ] ) is alternatively given by the formula \right\rangle_{\bar{\beta}},\ ] ] where , in general , denotes the statistical average in eq .( [ eq:6 ] ) , denotes the density of states . in these conditions, the following proposition holds true .we have where _ proof . _ a taylor expansion of the exponential function to the third order and the identity imply \right\rangle_{\bar{\beta } } \\ & & = 1 + \frac{1}{2}\left(\frac{r-1}{r+1}\right)^2 \bar{\beta}^2\left\langle |u ' - u|^2\right\rangle_{\bar{\beta } } + o(|r-1|^3 ) .\end{aligned}\ ] ] therefore , on the other hand , \right\rangle_{\bar{\beta } } = 1 - \frac{r-1}{r+1}m(\bar{\beta } ) \\ + \frac{1}{2}\left(\frac{r-1}{r+1}\right)^2 \bar{\beta}^2\left\langle |u ' - u|^2\right\rangle_{\bar{\beta } } + o(|r-1|^3).\end{aligned}\ ] ] eq .( [ eq:9 ] ) and ( [ eq:10 ] ) imply \right\rangle_{\bar{\beta}}\\ = 1 - \frac{r-1}{r+1}m(\bar{\beta } ) + o(|r-1|^3)\end{aligned}\ ] ] and the proof is concluded . proposition 1 , while a powerful asymptotic result , has the disadvantage that it may produce negative numbers for the acceptance probability in actual simulations .however , we can repair this in very straightforward fashion .notice that the fact that does not depend upon the temperature is _ not crucial _ for the proof of proposition 1 .thus , given any other well - behaved density of states ] . from eqs .( [ eq:7 ] ) and ( [ eq:12 ] ) , we deduce for all ] , such that the resulting approximation is exact for a certain class of physical systems .we take this class to be the harmonic oscillators , for which .we prove in appendix i that for any -dimensional harmonic oscillator , and here , denotes the respective euler s beta function . for an harmonic oscillator, is the dimension . for general systems , is just a fitting parameter chosen such that eq .( [ eq:11 ] ) is satisfied .in fact , eq . 
( [ eq:11 ] ) , which in our case reads has always a unique solution because increases strictly from to , as also increases from to .the following theorem is an immediate consequence of eq .( [ eq:13 ] ) and of the discussion above .consider an arbitrary thermodynamic system for which is given as a function of temperature .let be the unique solution of the equation then , } \int_0^{1/(1 + r ) } \theta^{d(\bar{\beta})/2 - 1}\\ \times ( 1 - \theta)^{d(\bar{\beta})/2 -1}d \theta + o\left(|r-1|^3\right ) , \quad\end{aligned}\ ] ] with the error term canceling for harmonic oscillators .there are several important consequences of theorem 1 .the first one concerns the development of an empirical law connecting the acceptance probabilities with the heat capacity and the ratio of the temperatures involved in the parallel tempering swap .the second one regards the optimal distribution of temperatures in parallel tempering monte carlo simulations .yet a third one is the statement that the efficiency of the swaps between neighboring temperatures decreases naturally as the inverse square root of the dimensionality of the system .we analyze these consequences in some detail in the remainder of the section .the property is not usually determined in simulations , nor is it measured in experiments .it is therefore necessary to relate it to other thermodynamic properties , more precisely to the heat capacity .in addition , it is desirable to develop a version of the incomplete beta function law involving the heat capacity rather then , even if the law has an empirical validity only . the quantity measures the statistical fluctuations of the potential and is connected to the heat capacity of the system .more precisely , the cauchy - schwartz inequality says that however , ^ 2\bigg\}.\end{aligned}\ ] ] the last term in the equation above is twice the potential contribution to the total heat capacity of the system .( in this article , the heat capacity is always expressed in units of the boltzmann constant . )the total heat capacity sums both the potential and the kinetic average square fluctuations and is given by the well - known formula then , the identity implies the inequality eq . ( [ eq:19 ] ) suggests that is an extensive property of the physical system , property that may be very large for systems for which the heat capacity is also large .in fact , sterling s formula shows that ^ 2 \approx \frac{2d}{\pi}\ ] ] for large dimensional harmonic oscillators .the relation has a linear scaling with the dimensionality of the system ( that is , with the number of particles ) .this scaling appears also for the heat capacity of harmonic oscillators for systems in condensed phase , for which an harmonic superposition is roughly a good approximation of the boltzmann distribution , one may safely assume that the functional relationship between and is not very far from the one for the harmonic oscillator .of course , this is always true in the low temperature limit . in this conditions ,the solution of the equation is approximately . 
replacing the last result in eq .( [ eq:17 ] ) , we obtain the following _ empirical incomplete beta function law _ : the good applicability of eq .( [ eq:20 ] ) for realistic physical systems will be illustrated in section iv for the case of a lennard - jones cluster .( [ eq:20 ] ) can be expressed in terms of the full heat capacity of the system with the help of eq .( [ eq:18 ] ) .the empirical incomplete beta function law is still exact for harmonic oscillators .it is an empirical observation that the optimal schedule ( i.e. , the schedule for which all the acceptance probabilities for swaps between neighboring temperatures are equal ) is given by a geometric progression of temperatures , if the heat capacity of the system is approximatively constant for the range ] using a geometric progression law .the results are then interpolated by cubic spline , for example .only a rough estimate is necessary . given a prescribed value for the acceptance probability and given the inverse temperature , one computes by solving the equation where .one starts with and continues the procedure until the current inverse temperature becomes greater or equal to .this way , one determines both the optimal distribution of temperatures and the minimal number of temperatures compatible with the prescribed acceptance probability . to ensure the validity of the approximation furnished by theorem 1, the value of should be large enough .in fact , values larger or equal to are necessary anyway in order to have good mixing between the monte carlo walkers running at neighboring temperatures .sure enough , one may use the full incomplete beta function law to find the optimal schedule .however , the computation of requires extensive changes to the existing codes .in addition , as illustrated by the example described in section iv , the empirical version of the incomplete beta function law may be sufficiently accurate for most systems of practical interest .the applicability of the law can also be tested during the computation of the heat capacity , by comparing the observed values for the acceptance probabilities with the ones predicted by the empirical incomplete beta function law . in this subsection, we show that the minimum number of intermediate temperatures that ensures an acceptance probability greater or equal to some preset value for a -dimensional system increases naturally at least as the square root of the dimensionality for condensed - phase systems ( both solids and liquids ) .this observation was first made by hukushima and nemoto. we begin with a rigorous mathematical analysis for harmonic oscillators . for them ,the optimal schedule is a geometric progression [ because is independent of temperature ] and the minimum value of is given by } \right ] + 2,\ ] ] where ] are counted ) .then , we have performed a second monte carlo simulation to verify the validity of the schedule .the plot in fig .[ fig:3 ] demonstrates that the computed schedule works very well .this explicit application illustrates the utility of the empirical incomplete beta function law for the determination of optimal temperature schedules in parallel tempering simulations . 
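the schedule-construction procedure just described can be made concrete in a few lines. the sketch below assumes that the empirical incomplete beta function law takes the form acc ~ 2 I_{1/(1+r)}(q, q), with r >= 1 the ratio of the two temperatures, I the regularized incomplete beta function and q the potential contribution to the heat capacity (in units of the boltzmann constant) at an intermediate temperature; this form is exact for harmonic oscillators, for which q = d/2, but the precise definitions of r, of the intermediate temperature and of eq. (20) should be taken from the text. the constant heat capacity, the temperature range and the target acceptance of 0.5 are placeholders, and python with scipy is an arbitrary implementation choice.

import numpy as np
from scipy.optimize import brentq
from scipy.special import betainc

def swap_acceptance(r, q):
    # assumed empirical incomplete beta function law: acc ~ 2 * I_{1/(1+r)}(q, q),
    # with r >= 1 the temperature ratio and q the potential part of the heat capacity
    # at an intermediate temperature (exact for harmonic oscillators, where q = d/2)
    return 2.0 * betainc(q, q, 1.0 / (1.0 + r))

def cv_pot(T):
    # placeholder heat-capacity model (units of k_B): constant, as for a harmonic solid
    return 150.0

def build_schedule(T_min, T_max, p_target):
    # grow the ladder T_0 = T_min < T_1 < ... until T_max is reached, requiring every
    # neighboring pair to have the prescribed swap acceptance p_target
    Ts = [T_min]
    while Ts[-1] < T_max:
        T_i = Ts[-1]
        def f(T_next):
            beta_mid = 0.5 * (1.0 / T_i + 1.0 / T_next)   # assumed intermediate inverse temperature
            return swap_acceptance(T_next / T_i, cv_pot(1.0 / beta_mid)) - p_target
        Ts.append(brentq(f, T_i * (1.0 + 1e-9), 10.0 * T_i))
    return np.array(Ts)

schedule = build_schedule(T_min=4.0, T_max=40.0, p_target=0.5)
print("number of temperatures:", len(schedule))
print("neighboring ratios    :", schedule[1:] / schedule[:-1])

for a constant heat capacity the printed ratios come out numerically constant, which is the geometric-progression behaviour discussed above; with a temperature-dependent heat capacity the ratios adapt automatically.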
observed acceptance ratios for the optimal schedule of temperatures determined with the help of the empirical incomplete beta function law .the deviations from the ideal result of are comparable to the statistical noise ., width=264 ]we have successfully and rigorously related the acceptance probabilities for parallel tempering swaps to the ratio between the temperatures involved in the swap and the average statistical fluctuations of the potential at some intermediate temperature .the respective law , called the incomplete beta function law , is exact for harmonic oscillators and of order for arbitrary systems .we have also demonstrated that there is a loss of efficiency in parallel tempering simulations of condensed - phase systems with the increase of dimensionality .the loss of efficiency is at least , the value computed for harmonic oscillators . motivated by the fact that the existent monte carlo codes do not allow for the computation of the average potential fluctuations without extensive reprogramming , we have developed and tested the empirical incomplete beta function law .this empirical law connects the acceptance probabilities of the parallel tempering swaps with the heat capacity of the system .the law has been extensively verified for the cluster , on a range of temperatures that spanned three thermodynamic phases .the empirical incomplete beta function law provides a direct justification of the observation that a geometric progression is the optimal schedule for systems and regions of temperatures where the heat capacity is almost constant .finally , the use of the empirical incomplete beta function law for the construction of optimal temperature schedules has been demonstrated .as opposed to its empirical version , the incomplete beta function law given by theorem 1 is an exact mathematical statement , valid for all systems asymptotically , for close enough temperatures . for strongly anharmonic systems ( for instance , systems for which the sampling is performed on a lattice , such as spin glasses and self - avoiding random walks ) , the empirical version of the incomplete beta function law may fail .in such cases , the rigorous incomplete beta function law should be used for the development of optimal temperature schedules . as discussed in the text ,the evaluation of the average requires however a double integral over the configuration space .this integral can be computed by doubling the number of temperatures in the parallel tempering schedule , as follows then , to compute , one collects the values of the differences any time a swap between equal neighboring temperatures and is attempted .of course , swaps between equal temperatures are always accepted .the values are interpolated by a cubic spline .the determination of the optimal temperature schedule then proceeds in a way similar to the approach utilized in section iv .while the reader may object that the introduction of an intercalated set of identical temperatures is an additional computational burden , many times , there are certain advantages in doing so . 
for large dimensional systems ,coupled independent replica running at identical temperatures constitute an elegant device for parallelizing the monte carlo code , whenever the number of available compute nodes is at least twice the number of temperatures in the parallel tempering schedule .nowadays , with the advent of inexpensive cluster computing , this is the case with many research groups .furthermore , in this setting , the computation of the heat capacity and other properties of the system can be done by means of unbiased estimators , as shown by eq .( [ eq:18a ] ) . on a more general level, we hope that a better understanding of the laws governing the acceptance probabilities for swaps in parallel tempering methods may lead to useful research in improving the efficiency of the methods . in the meantime , we recommend that the dimensionality of the systems be maintained as low as possible , for instance , by adiabatically reducing those degrees of freedom that do not lead to significant degradation in the quality of the final results .the first author acknowledges support from the national science foundation through awards no .che-0095053 and che-0131114 .cvc is supported by the mrsec program at brown university .the authors also wish to thank professors j. d. doll and d. l. freeman for useful suggestions and interesting discussions on the subject .for an arbitrary -dimensional harmonic oscillator whose global minimum is zero , the density of states is given by the formula . in these conditions , it is but a simple exercise to show that the acceptance probability for the parallel tempering swaps is given by the equation we also want to evaluate the quantity [ see eq .( [ eq:15 ] ) ] , which is given by the formula while for an harmonic oscillator the parameter is an integer , we compute the two integrals above under the assumption that is a strictly positive real number .we prove that the value of is given by the incomplete beta function where is euler s beta function and in addition , we prove that because the function is symmetrical in its arguments , we may assume without loss of generality that , so that . by decomposing the domain of the integral against in eq .( [ eq : a1 ] ) in two regions with and respectively , it is straightforward to see that where performing the substitution , we obtain performing the change of variables , we conclude where is euler s beta function . the value of is obtained by replacing with in the first expression of eq .( [ eq : a6 ] ) .we compute eq .( [ eq : a1p ] ) follows easily from eqs .( [ eq : a3 ] ) , ( [ eq : a6 ] ) , and ( [ eq : a7 ] ) , after easy simplifications .if applied to eq .( [ eq : a2 ] ) , the same decomposition and coordinate transformations used in the proof of eq .( [ eq : a1p ] ) lead to the equation , \qquad\end{aligned}\ ] ] where integrating by parts the last integral , we obtain where combining eq .( [ eq : a9 ] ) with the identity we obtain which , after replacement in eq .( [ eq : a8 ] ) , produces eq .( [ eq : a2p ] ) .let be a sequence of positive numbers convergent to . then , where is the error function .sterling s formula implies therefore , remembering that , we obtain the equality the fact that the above sequences are bounded by for all $ ] , and the dominated convergence theorem imply thus , and the theorem is proven . 99 r. h. swendsen and j .- s .wang , phys .lett . * 57 * , 2607 ( 1986 ) .m. c. tesi , e. j. janse van rensburg , e. orlandini , and s. g. whittington , j. stat .phys . * 82 * , 155 ( 1996 ) .u. 
h. e. hansmann , chem .281 * , 140 ( 1997 ) .m. g. wu and m. w. deem , mol .phys . * 97 * , 559 ( 1999 ) .m. falcioni and m. w. deem , j. chem .phys . * 110 * , 1754 ( 1999 ) .q. yan and j. j. de pablo , j. chem .phys . * 111 * , 9509 ( 1999 ) .j. p. neirotti , f. calvo , d. l. freeman , and j. d. doll , j. chem .phys . * 112 * , 10340 ( 2000 ) .f. calvo , j. p. neirotti , d. l. freeman , and j. d. doll , j. chem . phys . * 112 * , 10350 ( 2000 ) . q. yan and j. j. de pablo , j. chem .phys . * 113 * , 1276 ( 2000 ) .y. sugita , a. kitao , and y. okamoto , j. chem . phys . * 113 * , 6042 ( 2000 ) . q. yan and j. j. de pablo , j. chem. phys . * 114 * , 1727 ( 2001 ) .f. calvo and j. p. k. doye , phys .e * 63 * , 010902 ( 2001 ) .d. bedrov and g. d. smith , j. chem .phys . * 115 * , 1121 ( 2001 ) .d. gront , a. kolinski , and j. skolnick , j. chem . phys . * 115 * , 1569 ( 2001 ). y. ishikawa , y. sugita , t. nishikawa , and y. okamoto , chem .. lett . * 333 * , 199 ( 2001 ) .a. bunker and b. dunweg , phys .e * 63 * , 016701 ( 2001 ) .r. faller , q. yan , and j. j. de pablo , j. chem . phys .* 116 * , 5419 ( 2002 ) .h. fukunishi , o. watanabe , and s. takada , j. chem .phys . * 116 * , 9058 ( 2002 ) . c. j. geyer and e. a. thompson ,. assoc . *90 * , 909 ( 1995 ) .k. hukushima and k. nemoto , j. phys .. jpn . * 65 * , 1604 ( 1996 ) .d. d. frantz , d. l. freeman , and j. d. doll , j. chem .* 93 * , 2769 ( 1990 ) .t. w. whitfield and j. e. straub , phys .e * 64 * , 066115 ( 2001 ) .a. p. lyubartsev , a. a. martsinovski , s. v. shevkunov , and p. n. voronstsov - velyaminov , j. chem .phys . * 96 * , 1776 ( 1992 ) .e. marinari and g. parisi , europhys* 19 * , 451 ( 1992 ) .n. metropolis , a. w. rosenbluth , m. n. rosenbluth , a. m. teller , and e. teller , j. chem . phys . * 21 * , 1087 ( 1953 ) .m. kalos and p. whitlock , _ monte carlo methods _ ( wiley - interscience , new york , 1986 ) .i. andricioaei and j. e. straub , j. chem . phys .* 107 * , 9117 ( 1997 ) .d. a. kofke , j. chem .phys . * 117 * , 6911 ( 2002 ) .m. matsumoto and t. nishimura , _ dynamic creation of pseudorandom number generators _ , in _monte carlo and quasi - monte carlo methods 1998 _ , ( springer - verlag , new york , 2000 ) , pp 5669 .
we show that the acceptance probability for swaps in the parallel tempering monte carlo method for classical canonical systems is given by a universal function that depends on the average statistical fluctuations of the potential and on the ratio of the temperatures. the law, called the incomplete beta function law, is valid in the limit that the two temperatures involved in swaps are close to one another. an empirical version of the law, which involves the heat capacity of the system, is developed and tested on a lennard-jones cluster. we argue that the best initial guess for the distribution of intermediate temperatures for parallel tempering is a geometric progression, and we also propose a technique for the computation of optimal temperature schedules. finally, we demonstrate that the swap efficiency of the parallel tempering method for condensed-phase systems decreases naturally to zero at least as fast as the inverse square root of the dimensionality of the physical system.
the method of least squares is a powerful technique for the approximate solution of overdetermined systems and is often used for data fitting and statistical inference in applied science and engineering . in this paper, we will primarily consider the linear least squares problem where is dense and full - rank with , , , and is the euclidean norm .formally , the solution is given by where is the moore - penrose pseudoinverse of , and can be computed directly via the qr decomposition at a cost of operations .this can be prohibitive when and are large .if is structured so as to support fast multiplication , then iterative methods such as lsqr or gmres present an attractive alternative . however , such solvers still have several key disadvantages when compared with their direct counterparts : the convergence rate of an iterative solver can depend strongly on the conditioning of the system matrix , which , for least squares problems , can sometimes be very poor . in such cases ,the number of iterations required , and hence the computational cost , can be far greater than expected ( if the solver succeeds at all ) .direct methods , by contrast , are robust in that their performance does not degrade with conditioning .thus , they are often preferred in situations where reliability is critical. standard iterative schemes are inefficient for multiple right - hand sides . with direct solvers , on the other hand , following an expensive initial factorization ,the subsequent cost for each solve is typically much lower ( e.g. , only work to apply the pseudoinverse given precomputed qr factors ) .this is especially important in the context of updating and downdating as the least squares problem is modified by adding or deleting data , which can be viewed as low - rank updates of the original system matrix . in this paper, we present a fast semi - direct least squares solver for a class of structured dense matrices called hierarchically block separable ( hbs ) matrices .such matrices were introduced by gillman , young , and martinsson and possess a nested low - rank property that enables highly efficient data - sparse representations .the hbs matrix structure is closely related to that of - and -matrices and hierarchically semiseparable ( hss ) matrices , and can be considered a generalization of the matrix features utilized by multilevel summation algorithms like the fast multipole method ( fmm ) .many linear operators are of hbs form , notably integral transforms with asymptotically smooth radial kernels .this includes those based on the green s functions of non - oscillatory elliptic partial differential equations .some examples are shown in table [ tab : examples ] ; we highlight , in particular , the green s functions for the laplace and biharmonic equations , respectively , in 3d , and their regularizations , the inverse multiquadric and multiquadric kernels respectively ( for not too large ) ..examples of radial kernels admitting hbs representations : , zeroth order hankel function of the first kind ; , zeroth order modified bessel function of the second kind . 
[ cols="<,<,^,^,<",options="header " , ] furthermore , as we have solved an approximate , compressed system , we can not in general fit the data exactly ( with respect to the true operator ) .indeed , we see relative residuals of order as predicted by the compression tolerance .thus , our algorithm is especially suitable in the event that observations need to be matched only to a specified precision .our semi - direct method vastly outperformed both lapack / atlas and fmm / gmres , which required from up to iterations using as a right preconditioner .deferred correction was successful in all cases within two steps . in our final example, we demonstrate the efficiency of our updating and downdating methods in the typical setting of fitting additional observations to an already specified overdetermined system . for this, we employed the thin plate spline approximation problem of section [ sec : thin - plate - spline ] with and , followed by the addition of new random target points . from section [ sec : updating ], the perturbed system can be written as ( [ eqn : equality - least - squares - augmented ] ) with ( [ eqn : row - addition ] ) , i.e. , ] , which , since has full column rank , gives , hence the preconditioned system is \mathbf{x } \simeq \mathbf{b } \mathbf{f}. \label{eqn : update - setup}\ ] ] note that only _one _ application of is necessary , independent of the number of iterations required .solving this in matlab took iterations and a total of s , with s going towards setting up ( [ eqn : update - setup ] ) .the relative residual on the new data was .this should be compared with the roughly s required to solve the problem without updating , treating it instead as a new system via our compressed algorithm ( table [ tab : overdetermined ] ) .although this difference is perhaps not very dramatic , it is worth emphasizing that the complexity here scales as with updating versus without , as the former needs only to apply while the latter needs also to compress and factor .therefore , the asymptotics for updating are much improved .in this paper , we have presented a fast semi - direct algorithm for over- and underdetermined least squares problems involving hbs matrices , and exhibited its efficiency and practical performance in a variety of situations including rbf interpolation and dynamic updating . in 1d ( including boundary problems in 2d and problems withseparated data in all dimensions ) , the solver achieves optimal complexity and is extremely fast , but it falters somewhat in higher dimensions , due primarily to the growing ranks of the compressed matrices as expressed by ( [ eqn : rank - growth ] ) .developments for addressing this growth are now underway for square linear systems , and we expect these ideas to carry over to the present setting .significantly , the term involving the larger matrix dimension is linear in all complexities ( i.e. , only instead of as for classical direct methods ) , which makes our algorithm ideally suited to large , rectangular systems where both and increase with refinement . _remark_. 
if only _one _ dimension is large so that the matrix is strongly rectangular , then standard methods are usually sufficient ; see also .although we have not explicitly considered least squares problems with hbs equality constraints ( we have only done so implicitly through our treatment of underdetermined systems ) , it is evident that our methods generalize .however , our complexity estimates can depend on the structure of the system matrix .in particular , if it is sparse , e.g. , a diagonal weighting matrix , then our estimates are preserved .we can also , in principle , handle hbs least squares problems with hbs constraints simply by expanding out both matrices in sparse form .this flexibility is one of our method s main advantages , though it can also create some difficulties .in particular , the fundamental problem is no longer the unconstrained least squares system ( [ eqn : overdetermined ] ) but the more complicated equality - constrained system ( [ eqn : equality - least - squares ] ) .accordingly , more sophisticated iterative techniques are used , but these can fail if the problem is too ill - conditioned .this is perhaps the greatest drawback of the proposed scheme .still , our numerical results suggest that the algorithm remains effective for moderately ill - conditioned problems that are already quite challenging for standard iterative solvers . for severely ill - conditioned problems ,other methods may be preferred .finally , it is worth noting that fast direct solvers can be leveraged for other least squares techniques as well .this is straightforward for the normal equations , which are subject to well - known conditioning issues , and for the somewhat better behaved augmented system version : \left [ \begin{array}{c } r\\ x \end{array } \right ] = \left [ \begin{array}{c } b\\ 0 \end{array } \right].\ ] ] this approach has the advantage of being immediately amenable to fast inversion techniques but at the cost of `` squaring '' and enlarging the system .thus , all complexity estimates involve instead of and separately . in particular , the current generation of fast direct solvers would require , e.g. , instead of work . with the development of a next generation of linear or nearly linear time solvers ,this distinction may become less critical .memory usage and high - performance computing hardware issues will also play important roles in determining which methods are most competitive .we expect these issues to become settled in the near future .we would like to thank the anonymous referees for their careful reading and insightful remarks , which have improved the paper tremendously . 00 , _lapack users guide _ , siam , philadelphia , pa , 3rd ed . , 1999 ., _ on the augmented system approach to sparse least - squares problems _ ,, 55 ( 1989 ) , pp ., _ error analysis and implementation aspects of deferred correction for equality constrained least squares problems _ , siam j. numer .anal . , 25 ( 1988 ) , pp .13401358 . , _ the direct solution of weighted and equality constrained least - squares problems _ , siam j. scicomput . , 9 ( 1988 ) ,704716 . , _ a note on deferred correction for equality constrained least squares problems _ , siam j. numer ., 29 ( 1992 ) , pp .249256 . , _ a well - behaved electrostatic potential based method using charge restraints for deriving atomic charges : the resp model _ , j. phys ., 97 ( 1993 ) , pp .1026910280 . , _ on the qr decomposition of -matrices _ , computing , 88 ( 2010 ) , pp .111129 . 
, _ numerical methods for least squares problems _ , siam , philadelphia , pa , 1996 . , _ data - sparse approximation of non - local operators by -matrices _ , linear algebra appl. , 422 ( 2007 ) , pp .380403 . ,_ radial basis functions : theory and implementation _ , cambridge university press , cambridge , 2003 . , _generalized inverse formulas using the schur complement _ , siam j. appl ., 26 ( 1974 ) , pp .254259 . , _generalized inverses of linear transformations _ , pitman , london , 1979 . , _ reconstruction and representation of 3d objects with radial basis functions _ , in proceedings of the 28th annual conference on computer graphics and interactive techniques , los angeles , ca , 2001 , pp .6776 . , _ some fast algorithms for sequentially semiseparable representations _ , siam j. matrix anal .appl . , 27 ( 2005 ) , pp . 341364 . , _ a fast solver for hss representations via sparse matrices_ , siam j. matrix anal . appl . , 29 ( 2006 ) ,6781 . , _ a fast decomposition solver for hierarchically semiseparable representations _ , siam j. matrix anal .appl . , 28 ( 2006 ) , pp .603622 . , _ on the compression of low rank matrices _ ,siam j. sci .comput . , 26 ( 2005 ) , pp .13891404 . , _ an direct solver for integral equations on the plane _ ,, _ algorithm 832 : umfpack v4.3an unsymmetric - pattern multifrontal method _ ,acm trans . math .softw . , 30 ( 2004 ) , pp .196199 . , _ algorithm 915 , suitesparseqr : multifrontal multithreaded rank - revealing sparse qr factorization _ , acm trans .softw . , 38 ( 2011 ) , pp .8:18:22 . , _a hierarchical semi - separable moore - penrose equation solver _ , in wavelets , multiscale systems and hypergeometric analysis , operator theory : advances and applications , 167 , d. alpay , a. luger , and h. woracek , eds ., birkhuser , basel , 2006 , pp ._ splines minimizing rotation - invariant semi - norms in sobolev spaces _ , in constructive theory of functions of several variables , lecture notes in mathematics , 571 , w. schempp and k. zeller , eds . , springer - verlag , berlin , 1977 , pp ., _ charges fit to electrostatic potentials .ii . can atomic charges be unambiguously fit to electrostatic potentials ? _ , j. comput ., 17 ( 1996 ) , pp .367383 . , _ a direct solver with complexity for integral equations on one - dimensional domains _ , frontmath . china ., 7 ( 2012 ) , pp .217247 . , _ fmmlib : fast multipole methods for electrostatics , elastostatics , and low frequency acoustic modeling _, in preparation .software available from http://www.cims.nyu.edu/cmcl/software.html . , _ fast direct solvers for integral equations in complex three - dimensional domains _ , acta numer ., 18 ( 2009 ) , pp .243275 . , _ a fast algorithm for particle simulations _ , j. comput ., 73 ( 1987 ) , pp .325348 . ,_ a new version of the fast multipole method for the laplace equation in three dimensions _ , acta numer . ,6 ( 1997 ) , pp . 229269 . , _ some applications of the pseudoinverse of a matrix _ , siam rev ., 2 ( 1960 ) , pp .1522 . , _ new fast algorithms for structured linear least squares problems _ , siam j. matrix anal ., 20 ( 1998 ) , pp . 244269 . , _ partial differential equations of mathematical physics and integral equations _ , prentice - hall , englewood cliffs , nj , 1988 ., _ a sparse matrix arithmetic based on -matrices .part i : introduction to -matrices _ , computing , 62 ( 1999 ) , pp .89108 . , _ data - sparse approximation by adaptive -matrices _ , computing , 69 ( 2002 ) , pp . 135 . 
, _ a sparse -matrix arithmetic .part ii : application to multi - dimensional problems _ , computing , 64 ( 2000 ) , pp .2147 . , _ on -matrices _ ,in lectures on applied mathematics , h .- j .bungartz , r. w. hoppe , and c. zenger , eds . , springer - verlag , berlin , 2000 , pp .multiquadric equations of topography and other irregular surfaces _ , j. geophys ., 76 ( 1971 ) , pp .19051915 . ,_ gmres methods for least squares problems _ , siam j. matrix anal .appl . , 31 ( 2010 ) , pp .24002430 . ,_ accuracy and stability of numerical algorithms _, 2nd . ed . ,siam , philadelphia , pa , 2002 ., _ a fast direct solver for structured linear systems by recursive skeletonization _ , siam j. sci .34 ( 2012 ) , pp .a2507a2532 . , _ hierarchical interpolative factorization for elliptic operators : integral equations _ , arxiv:1307.2666 . , _ fast reliable algorithms for matrices with structure _ , siam , philadelphia , pa , 1999 . , _ solving least squares problems _, prentice - hall , englewood cliffs , nj , 1974 . , _randomized algorithms for the low - rank approximation of matrices _ , proc ., 104 ( 2007 ) , pp .2016720172 . , _ a fast direct solver for boundary integral equations in two dimensions _ , j. comput .phys . , 205 ( 2005 ) , pp . 123, _ lsrn : a parallel iterative solver for strongly over- or underdetermined systems _ , siam j. sci .comput . , 36 ( 2014 ) , pp .c95c118 . , _ lsqr : an algorithm for sparse linear equations and sparse least squares _ ,acm trans . math .8 ( 1982 ) , pp . 4371 . ,_ radial basis functions for multivariable interpolation : a review _ , in algorithms for approximation , j. c. mason and m. g. cox , eds . , clarendon press , oxford , 1987 , pp .143167 . , _ a fast randomized algorithm for overdetermined linear least - squares regression _ , proc .natl . acad ., 105 ( 2008 ) , pp .1321213217 . ,_ gmres : a generalized minimal residual algorithm for solving nonsymmetric linear systems _ , siam j. sci .7 ( 1986 ) , pp . 856869 . , _ the quadtree and related hierarchical data structures _ , acm comput, 16 ( 1984 ) , pp .187260 . , _ a fast algorithm for computing minimal - norm solutions to underdetermined systems of linear equations _ , arxiv:0905.4745 . , _ a superfast method for solving toeplitz linear least squares problems _ , linear algebra appl . ,366 ( 2003 ) , pp ., _ on the method of weighting for equality - constrained least - squares problems _ ,siam j. numer ., 22 ( 1985 ) , pp ., _ automated empirical optimization of software and the atlas project _ , parallel comput . , 27 ( 2001 ) , pp . 335 . , _ a fast randomized algorithm for the approximation of matrices_ , appl ., 25 ( 2008 ) , pp .335366 . ,_ fast algorithms for hierarchically semiseparable matrices _ , numer .linear algebra appl . , 17 ( 2010 ) , pp ., _ a superfast structured solver for toeplitz linear systems via randomized sampling _ , siam j. matrix anal .appl . , 33 ( 2012 ) , pp .837858 . , _ a kernel - independent adaptive fast multipole algorithm in two and three dimensions _ , j. comput ., 196 ( 2004 ) , pp .
we present a fast algorithm for linear least squares problems governed by hierarchically block separable ( hbs ) matrices . such matrices are generally dense but _ data - sparse _ and can describe many important operators including those derived from asymptotically smooth radial kernels that are not too oscillatory . the algorithm is based on a recursive skeletonization procedure that exposes this sparsity and solves the dense least squares problem as a larger , equality - constrained , sparse one . it relies on a sparse qr factorization coupled with iterative weighted least squares methods . in essence , our scheme consists of a direct component , comprised of matrix compression and factorization , followed by an iterative component to enforce certain equality constraints . at most two iterations are typically required for problems that are not too ill - conditioned . for an hbs matrix with having bounded off - diagonal block rank , the algorithm has optimal complexity . if the rank increases with the spatial dimension as is common for operators that are singular at the origin , then this becomes in 1d , in 2d , and in 3d . we illustrate the performance of the method on both over- and underdetermined systems in a variety of settings , with an emphasis on radial basis function approximation and efficient updating and downdating . fast algorithms , matrix compression , recursive skeletonization , sparse qr decomposition , weighted least squares , deferred correction , radial basis functions , updating / downdating 65f05 , 65f20 , 65f50 , 65y15
in the last decades , there has been an increasing interest in a geometrical construct called the voronoi diagram ( e.g. [ 1 ] , [ 2 ] , [ 3 ] and [ 4 ] ) . the voronoi diagram is a data structure extensively investigated in the domain of computational geometry ( e.g. [ 5 ] ) . given some number of points in the plane , their voronoi diagram divides the plane according to the nearest - neighbor rule : each point is associated with the region of the plane closest to it , so it is a tessellation of .we have already noted that the concept of the voronoi diagram is used extensively in a variety of disciplines and has independent roots in many of them ( e.g. [ 6 ] ) .the first extension of them was to the area of crystallography ( the area we are interesting in ) , works in this field are for example [ 7 ] and [ 8 ] ..4 cm since there is a large number of empirical structures which also involve tessellations of , one of the most direct applications of voronoi concepts is in the modelling of such structures and the processes that generate them . in these notes, we use the voronoi assignment model in the modelling of physical - chemical systems .such systems under study consist of a set of sites occupied by atoms , ions , molecules , etc .( depending on the specific application ) which are represented as equal - size spheres .our system is formed by sites regularly arranged in , they assume form of lattice ( the structure is said to be crystalline ) .thin metal films images with atomic force microscopy ( afm ) consist of small two - dimensional islands ( objects ) distributed on the substrate .the quantitative characterization of the object arrangement can bring information about internal processes in the studied system .we apply methods of mathematical morphology to thin metal films images with atomic force microscopy , to assign the model : the voronoi growth model .voronoi polygons has been employed for providing nanostructural information to these multi - particle assemblies .we analyze morphological algorithms applied to these tessellations , e.g. to restore the generators from a given voronoi diagram ..4 cm as a graphical user interface ( gui ) makes easier for the user to obtain information from algorithms , we present how to join all algorithms , we have studied , to design one .the graphical user interface provides as input for the system the afm image , and interprets the output in terms we are interesting in . we note that this work can easily be extended , if we have images from other fields like ecology , meteorology , epidemiology , linguistics , economics , archeology or astronomy , that we suspect are a voronoi diagram . .4 cm the structure of this paper is as follows . in section 2we recall the mathematical theoretical background about voronoi diagrams and we give an application of them , that it is called the voronoi growth models which we will use in the analysis of film nanographs . in section 3we explain the mathematical solution to the problem proposed here .next in section 4 , we give and analyze algorithms of the mathematical solution and in section 5 we finish with important concluding remarks and directions for further research .in this section we will review the basic notions we shall require for the sections to follow . 
for more details about them we refer to [ 9 ] and [ 10 ] for the first investigation of mathematical aspects of voronoi diagrams , [ 11 ] and [ 12 ] for papers that present surveys about voronoi diagrams and related topics , and [ 13 ] for a good introduction to all applications of voronoi diagrams to sciences .we will define the voronoi diagram and introduce properties and notations to be commonly used in this notes ..3 cm we work with a finite number , , of points in the euclidean plane , and assume that .the points are labeled by with the cartesian coordinates or location vectors .the points are distinct in the sense that for , .let be an arbitrary point in the euclidean plane with coordinates or location vector .then the euclidean distance from to is given by if is the nearest point from or is one of the nearest points from , we have the relation for , . in this case, is assigned to .therefore , let where and for , .we call the region given by the _ ( ordinary ) voronoi polygon _ associated with ( or the voronoi polygon of ) , and the set given by the _ ( planar ordinary ) voronoi diagram _ generated by ( or voronoi diagram of ). we can extend the above definition to the -dimensional euclidean space , but for our proposes we only need the euclidean plane .so , we shall often refer to a planar ordinary voronoi diagram simply as a _ voronoi diagram _ and an ordinary voronoi polygon as a voronoi polygon ..5 cm for a voronoi diagram we have the following definitions .we call the of the _ generator point _ or _ generator _ of the voronoi polygon , and the set the _ generator set _ of the voronoi diagram ( figure 1 ) . for brevitywe may write for . also we may use or when we want to emphasize the coordinates or location vector of the generator .in addition , we may use when we want to explicitly indicate the generator set of . given a voronoi diagram ,since a voronoi polygon is a closed set , it contains its boundary denoted by .the boundary of a voronoi polygon may consist of line segments , half lines or infinite lines , which we call _ voronoi edges_. noticing that is included in the relation of equation ( [ eqdefvor ] ) , we may alternatively define a voronoi edge as a line segment , a half line or an infinite line shared by two voronoi polygons with its end points .mathematically , if , the set gives a voronoi edge ( which may be degenerate into a point ) .we use for , which is read as the voronoi edge generated by and .note that may be empty .if is neither empty nor a point , we say that the voronoi polygons and are _adjacent_. an end point of a voronoi edgeis called a _voronoi vertex_. alternatively , a voronoi vertex may be defined as a point shared by three or more voronoi polygons .we denote a voronoi vertex by ( see figure 1 ) .when there exits at least one voronoi vertex at which four or more voronoi edges meet in the voronoi diagram , we say that is _ degenerate _( figure 2 ) ; otherwise , we say that is _ non - degenerate_. in the previous definitions of voronoi diagram , we have defined a voronoi diagram in an unbounded plane . in practical applications ,however , we often deal with a bounded region , where generators are placed . in this casewe consider the set given by we observed that an ordinary voronoi diagram consists of polygons , as a polygon can be defined in terms of half planes , we have the equality of proposition [ perpendicular2 ] . 
given a voronoi diagram , we consider the line perpendicularly bisecting the line segment joining two generators and .we call this line the _ bisector _ between and and denote it by .since a point on the bisector is equally distant from the generators and , is written as the bisector divides the plane into two half planes and gives we call the _ dominance region of _ over .[ perpendicular2 ] let , where and for , and .then where is the _ ( ordinary ) voronoi polygon _ associated with and set where is the _ ( planar ordinary ) voronoi diagram _ generated by .as a degenerate voronoi diagram requires special lengthy treatments which are not essential we avoid this difficulty and we often make the following assumption : [ asumir1 ] ( * the non - degeneracy assumption * ) .every voronoi vertex in a voronoi diagram is incident to exactly three voronoi edges . .* the largest empty circle in a voronoi diagram .* .4 cm for a given set of points , if a circle does not contain any points of in its interior , the circle is called an _ empty circle_. [ vdcircle ] let be the set of voronoi vertices of a voronoi diagram generated by .for every voronoi vertex , , there exists a unique empty circle centered at which passes through three or more generators . under the non - degeneracy assumption, passes through exactly three generators ( figure 3 ) . from this theorem ,the non - degeneracy assumption ( assumption [ asumir1 ] ) is equivalent to the following assumption .( * the non co - circularity assumption * ) given a set of points ( ) , there does not exist a circle , , such that , , are on , and all points in are outside .circle in theorem [ vdcircle ] is the largest empty circle among empty circles centered at the voronoi vertex ..5 cm here we have seen some properties of the voronoi diagram , but it has many more .for example , if one connects all the pairs of sites whose voronoi cells are adjacent then the resulting set of segments forms a triangulation of the point set , called the _ delaunay triangulation_. thin metal films deposited on a surface consist in their initial stage of growth of small islands .basic information about nucleation processes during the thin film growth can be derived by the morphological analysis of the film afm image .for very thin metal film or generally for systems consisting of small regular objects , the methods of mathematical morphology are well - suited to the study of spatial distribution of objects in images ( e.g. [ 14 ] ) .there is a large number of empirical structures which involves tessellations of ( and more generally in ) , one of the most obvious direct applications of voronoi concepts is in the modelling of such structures and the processes that generate them .these models produce spatial patterns as the result of a simple growth process with respect to a set of points ( nucleation sites ) , , at positions , respectively , in or a bounded region of .if we make the following assumptions , the resulting pattern will be equivalent to the ordinary voronoi diagram of : [ ass1 ] each point ( ) is located simultaneously .[ ass2 ] each point remains fixed at throughout the growth process .[ ass3 ] once is established , growth commences immediately and at the same rate in all directions from .[ ass4 ] is the same for all members of .[ ass5 ] growth ceases whenever and wherever the region growing from comes into contact with that growing from ( ) .together , assumptions [ ass1]-[ass5 ] define the _ voronoi growth model_. 
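assumptions [ass1]-[ass5] translate directly into a first-arrival computation, and the equivalence of the resulting pattern with the ordinary voronoi diagram can be checked numerically. the sketch below (python with numpy/scipy, an arbitrary choice; grid size, number of sites and growth increment are placeholders) grows all fronts at the same rate on a pixel grid, lets each pixel keep the label of the first front that reaches it, and compares the final pattern with a direct nearest-site assignment; the two can differ only at pixels lying on a boundary between cells that are reached by two fronts within the same growth increment.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
n_sites, n_grid = 20, 200
sites = rng.uniform(0.0, 1.0, size=(n_sites, 2))     # nucleation sites, all present from the start ([ass1], [ass2])

xs = (np.arange(n_grid) + 0.5) / n_grid
X, Y = np.meshgrid(xs, xs, indexing="ij")
pix = np.column_stack([X.ravel(), Y.ravel()])
dist = np.linalg.norm(pix[:, None, :] - sites[None, :, :], axis=2)   # pixel-to-site distances

# growth simulation: all fronts expand at the same rate ([ass3], [ass4]); a pixel keeps the
# label of the first front that reaches it and is never reassigned ([ass5])
label = np.full(len(pix), -1)
radius, d_radius = 0.0, 1e-3
while (label < 0).any():
    radius += d_radius
    for i in range(n_sites):
        label[(dist[:, i] <= radius) & (label < 0)] = i

# ordinary voronoi (nearest-site) assignment of the same pixels
voronoi_label = cKDTree(sites).query(pix)[1]
print("fraction of pixels where the grown pattern equals the voronoi diagram:",
      np.mean(label == voronoi_label))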
the figure 4 shows a series in stages in such a growth process ..4 cm .5 cm generally speaking , one obvious application is to model crystal growth about a set of nucleation sites . hereassumptions [ ass1]-[ass5 ] are equivalent to assuming an omni directional , uniform supply of crystallizing material to all faces of the grind crystal in the absence of any absorbable impurities .assumption [ ass3 ] also implies that the rate of growth of the volume of a crystal will be proportional to its surface area . also, growth models to modelling phase transitions in metallurgy involving transformation of an isotropic , one - component solid through nucleation , and isotropic growth of grains of a new or re - crystallized phase . in this context the voronoi growth model is sometimes referred to as the _ cell model _ or the _ site saturation model_. specific examples include the covering of a metallic surface by films or layers of corrosion product where the nucleation sites might be surface imperfections such as impurities , points of intersection with bulk defects and surface pits .another example is the growth of thin films of metal or semiconductors . in these examplesif the thickness of the film is small relative to the spacing between the nucleation sites or if the grain boundaries are perpendicular to the plain of the film , a two - dimensional representation is appropriate .if the voronoi assignment and growth models described in the previous section are appropriate for modelling a particular phenomenon , we would expect spacial patterns of the phenomenon to display characteristics of voronoi diagrams . in casewe have a tessellation , we have to consider ways of determining if it is a voronoi diagram based on some set ( this problem has been studied e.g. in [ 15 ] ) . recognizing a voronoi diagram is closely related with the next generator recognition problem ..4 cm * the generator recognition problem : * provided that the voronoi edges of a non - degenerate voronoi diagram are given , we recover the locations of generators ..4 cm the first problem we approach is to restore the generators from a given voronoi diagram , that is , the inverse problem of constructing the voronoi diagram from the given points . for this problem itself , we propose the following geometrical approach ( see e.g. [ 13 ] ) ..4 cm let be a voronoi vertex , be generators whose voronoi polygons share , and be the voronoi vertices of the voronoi edges incident to . from theorem [ vdcircle ] , is the center of the circle that passes through .since voronoi edges , and perpendicularly bisect line segments , , , respectively , we have the equations : hence , i.e. . 
from this equation , we obtain the following theorem .[ teoremasolucion ] let be a voronoi edge in a non - degenerate voronoi diagram , and , be the acute angles at and , respectively , where is indexed counterclockwise from at and clockwise at .let ( ) be the half line radiating from ( ) with angle ( ) with in the sector of ( ) , .then the intersection point made by and , and that by and give the generators of the voronoi diagram sharing .we develop this theorem into a more general theorem with which we can examine whether or not a given planar tessellation , is a voronoi diagram .we suppose that the tessellation consists of convex polygons and every vertex has exactly three edges .let be the vertices of a polygon indexed counterclockwise .let be the intersection point in obtained through the same procedure stated in the previous theorem , where is replaced by , ( should be read as ) .then we have the main theorem . a planar tessellation consisting of convex polygonswhose vertices are all degree three is a voronoi diagram if and only if holds for , where is defined in the above ..5 cm an alternative method to find the generators of a voronoi diagram proposed in [ 16 ] is the following algebraically method .it is based on the perpendicular bisector property i.e. the line segment joining the generators and of two adjacent voronoi polygons and is bisected perpendicularly by the common edge of and ( see proposition [ perpendicular2 ] ) .so , that means that and are subject to the following conditions : and lie on a line perpendicular to . and are equidistant from .these conditions can be formulated algebraically to form a linear system of equations which can be solved to find the locations and .let and be the locations of the end points of the common edge of two adjacent members and of .we search and the locations of the generators and , respectively .the segment lies on the line as is a voronoi diagram , condition 1 gives and condition 2 gives suppose that is non - degenerate and has interior edges .condition 1 gives a system of equations and unknows , and condition 2 gives another system of equations and unknows .taken jointly all equations we have enough constraints to provide a least squares solution for .specific methods for solving this equations are given in [ 16 ] .the point is that if is a voronoi diagram , then all equations will yield the same solution for .evans and jones in [ 16 ] outline three algorithms for its solution .unfortunately , the algorithms require the inversion of poorly - conditioned matrices and may thus be highly unstable .if the exact voronoi diagram were given , we could determine the position of the generators by the previous methods of subsection [ generadores ] .however , such a situation is unrealistic .recognizing that the recording of many types of empirical patterns often involve some measurement error , it is usual that given a pattern could not correspond to a voronoi diagram even when we suspect that the pattern was generated by processes such as those in the voronoi diagram .even if theoretical consideration tells us that the diagram which appears in a phenomenon should be a voronoi diagram , the error in observation process must perturb the original diagram .therefore , the geometrical method would always tell us that the diagram is not the voronoi , i.e. 
, it would give us no information in almost every case .methods proposed in [ 16 ] , [ 17 ] , [ 18 ] tell us at least approximate positions of the generators .geometric objects such as points , lines , and polygons are the basis of a broad variety of important applications and give rise to an interesting set of problems and algorithms .computers are being used to solve larger - scale geometric problems ._ computational geometry _ has been developed as a set of tools and techniques that takes advantage of the structure provided by geometry .now , we describe algorithms to solve the generator recognition problem ( see [ pfl ] ) .first of all we must store the tessellation of which we are seeking the generators . a tessellation is typically stored as a list of vertex coordinates and its associated contiguity lists : lists which provide , for each vertex , the indices of the other vertices to which it is connected . if a vertex lies on an infinite edge , we store both the vertex and an arbitrary other point on the infinite edge , where is labeled a dummy vertex and given no adjacency list . in the input of the algorithms we will describe ,we require that the number of ordinary vertices and the number of dummy vertices be specified , and that the dummy vertices be placed at the end of the vertex list ( we will consider them as degenerate ) .let be a tessellation of the euclidean plane and the set of vertices in which the last vertices lies in a infinite edge . by theorem [ teoremasolucion ] , if the tessellation is a voronoi diagram for each vertex of a given polygon we can define a half line of a giving direction radiating from into the interior of .the intersection of any two such half lines gives the location of the generator of .[ 19 ] gives the implementation of the following algorithms .the above introduction suggests the following naive algorithm for tessellations such that all of whose polygons contain at least two non degenerate vertices : .5 cm algorithm i step 1 . : : specify the polygons .: : for each polygon : + 2.1 ; ; find any two non degenerate vertices outlining , say . 2.2 ; ;for each vertex ( ) find the ray extending from ( ) through the generator in , as we described above .2.3 ; ; find the intersection of this two rays ..5 cm problems i ) : : the requirement that each cell contain at least two non degenerate vertices .ii ) : : only two rays are used to determine the generator in each polygon .iii ) : : if the two rays in a polygon are perfectly parallel ( a simple modification is to find an additional ray emanating from a different non degenerate vertex in the polygon ) .errors in the generator determination of the previous algorithm could be minimized by using all the available rays rather than just two .hence an alternative is the following algorithm : .5 cm algorithm ii step 1 .: : specify the polygons .: : for each polygon : + 2.1 ; ; find all non degenerate vertices outlining .2.2 ; ; for each vertex find the ray associated . 2.3 ; ; find the intersection of every possible pair of rays .2.4 ; ; average these intersection points ..5 cm problems i ) : : the generator location errors using the algorithm ii are in fact typically considerably larger than for algorithm i ! ! 
!this increase in error in the previous algorithm is attributable to the instability in intersecting certain select pairs of rays , one may modify step 2.4 of algorithm ii by computing a weighted average of the intersection points , weighting each point according to an estimate of its stability , as in the following algorithm : .5 cm algorithm iii step 1 .: : specify the polygons .: : for each polygon : + 2.1 ; ; find all non degenerate vertices outlining .2.2 ; ; for each vertex find the ray associated . 2.3 ;; find the intersection of every possible pair of rays in the polygon .2.4 ; ; for each pair of rays in the polygon , estimate the stability of its intersection by perturbing the slopes of each of the rays by a small amount in either direction and seeing how much the intersection point changes .record = the sum of the sizes of these changes . 2.5; ; compute a weighted average of the intersection points , giving the weight + ..5 cm note that a potential alternative to algorithms ii and iii is to find the point minimizing some penalty function such as the sum of squared perpendicular distances to the rays .the ( weighted ) averaging in algorithms ii and iii is equivalent to finding the location minimizing the ( weighted ) sum of squared distances to the intersection points of the rays .algorithms i , ii , and iii are all entirely local ; each polygon is determined solely based on its own vertices and their neighboring vertices .the accuracy of the algorithms can potentially be improved by incorporating information from neighboring polygons , e.g. by using the perpendicular bisector relation of proposition [ perpendicular2 ] .paik , ferguson and li suggest modifying algorithms i , ii , and iii to improve the results .+ all of the algorithms proposed are extremely fast , requiring just observations , where represents the number of generators to be determined .+ the errors in the inversion algorithms proposed are very small .however , in [ 19 ] they inquire about the size of the errors resulting when one of the vertices is recorded substantially in error .we start with an afm image , like one in the figures below that represent the growth of crystals with different velocities ( courtesy of pablo stoliar ) .they can be represented as a tessellation of a bounded region of the plane .we want to apply voronoi diagrams to pattern recognition to this branch of solid state physics .if we suspect that these structures are generated by spatial processes resulting in tessellations which can be constituted by voronoi diagrams , we would have to follow the next steps to analyze the images in order to obtain properties of the thin films they represent . 1 .we approximate the image to extract the vertices of the tessellation .2 . using one of the algorithms of subsection 3.2 we approximate this tessellation to a voronoi diagram .3 . in order to apply algorithms of section 4 , we store the vertices like we said in this section .the algorithms take as input this list of vertices and their adjacency lists , the output give us the generator points of the voronoi diagram .4 . 
the last step will be to measure the errors ( root - mean - squared errors in the vertices locations ) between the tessellation we obtain directly from the image , and the voronoi diagram we obtain with the outputs of the algorithms i , ii and iii .finally , the present work may be extended in the following way : every step uses a different computational algorithm , it is interesting to join all this steps to design a graphical user interface ( gui ) that provides as input for the system the afm image , and interprets the output of the system in terms of errors and generators coordinates .a user interface makes easier for the user to interact with the designed programs utilizing toolbar buttons and/or icons .every software package , the one for step 1 , step 2 , step 3 and step 4 , need a graphical user interface design that can be developed , we think it would be useful to do only one graphical user interface with all these algorithms inside , in that way we can obtain quickly the thin film information we need .this graphical user interface could be easily extended to another kind of images , i.e. we can provide as input for the system a variety of images that it could be represented by a voronoi diagram .so , this work will be valuable not only in the field of crystallography , but also in the fields such as ecology , meteorology , epidemiology , linguistics , economics , archeology and astronomy where voronoi diagrams are applied .
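as a concrete complement to the inversion step of the pipeline above, the sketch below sets up the perpendicular-bisector conditions of proposition [perpendicular2] (the algebraic approach attributed to [16]) as one global linear least squares problem and solves it for all generators at once. python with numpy/scipy is an assumed implementation choice, the synthetic diagram is built from known generators only so that the recovery error can be measured, and the sketch is an illustration of the idea rather than the implementation of [19]; as noted in the text, for noisy or degenerate input the resulting system can be poorly conditioned.

import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(3)
true_gen = rng.uniform(0.0, 1.0, size=(40, 2))   # known generators, used only to build the diagram and to check the recovery
vor = Voronoi(true_gen)
n_gen = len(true_gen)

rows, rhs = [], []
constrained = np.zeros(n_gen, dtype=bool)
for (i, j), (a, b) in zip(vor.ridge_points, vor.ridge_vertices):
    if a < 0 or b < 0:
        continue                                 # only bounded voronoi edges carry usable constraints here
    w1, w2 = vor.vertices[a], vor.vertices[b]
    t = w2 - w1                                  # direction of the shared edge
    n = np.array([-t[1], t[0]])                  # normal of the shared edge
    constrained[i] = constrained[j] = True
    # condition 1: the segment joining the two generators is perpendicular to the edge, (p_i - p_j) . t = 0
    row = np.zeros(2 * n_gen)
    row[2 * i:2 * i + 2], row[2 * j:2 * j + 2] = t, -t
    rows.append(row)
    rhs.append(0.0)
    # condition 2: the edge bisects that segment, i.e. its midpoint lies on the edge line, (p_i + p_j) . n = 2 w1 . n
    row = np.zeros(2 * n_gen)
    row[2 * i:2 * i + 2], row[2 * j:2 * j + 2] = n, n
    rows.append(row)
    rhs.append(2.0 * float(w1 @ n))

sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
recovered = sol.reshape(-1, 2)
err = np.linalg.norm(recovered[constrained] - true_gen[constrained], axis=1)
print("generators constrained by at least one bounded edge:", int(constrained.sum()), "of", n_gen)
print("maximum recovery error for those generators        :", err.max())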
for the analysis of systems consisting of small , regular objects , the methods of mathematical morphology applied to images of these systems are well suited . one of these methods is the use of voronoi polygons . the voronoi tessellation has proved to be a powerful tool for the analysis of thin film morphology and provides nanostructural information about many multi - particle assemblies . in these notes , several morphological algorithms are analyzed and we study how to join all of them into a graphical user interface ( gui ) that takes the afm image as input and interprets the output of the system in terms of errors and generator coordinates . * keywords * : voronoi diagram ; inversion problem ; graphical user interface . * 2000 mathematics subject classification * : 92b99 ; 68u05
modeling the circumstellar envelopes of o and b stars is a complex nonlinear problem .the non - lte level populations , the ( magneto- ) hydrodynamics , and the radiation field are strongly coupled .thus , an iterative procedure is needed to achieve a consistent solution .an essential constituent of this procedure is the availability of an accurate and fast radiative transfer code .progress in computer technology and the availability of fast numerical methods now allow the development of such codes for detailed study of 2d and 3d envelopes .there are several possible avenues to follow .the most straightforward is to solve the general radiative transfer equation \ ] ] and calculate the radiative transition rates using the solution . in the above is the direction of the radiation , is the specific intensity , is the opacity , and is the source function ( functional dependence is not indicated for clarity ) .alternatively , one could also use the moment equations , derived from eq .[ eq : rt ] , and solve directly for the moments which set the transition rates ; or use monte - carlo simulation to solve the radiative transfer equation and calculate estimators of the transition rates .we decided to use the first approach because of its simplicity and since it provides a reasonable compromise between numerical efficiency and flexibility .a simple iteration between the radiative transfer and the rate equations is not a wise choice for the iterative procedure .this is the so - called `` -iteration '' which is notorious for convergence difficulties when the optical depth is large .convergence is ensured , however , by using the approximate lambda iteration ( ali , see e.g. , * ? ? ?* ; * ? ? ?* ) which takes some coupling of the radiation and populations into account by using an invertible approximate lambda operator ( alo ) . in our 2d codewe use the local contribution at every spatial point to construct the alo because it is easy to calculate and has acceptable convergence characteristics .the actual implementation of the ali procedure into our full non - lte model atmosphere will be discussed in .the optical depth and the formal solution of eq .[ eq : rt ] at any position , , along a ray are respectively .the intensity can be calculated by specifying i at the up - stream end of the ray ( or characterisitic ) and by evaluating two integrals . for this purpose, we use the `` short characteristic '' ( sc ) method , first explored by and .this method requires the evaluation of the integrals only between the point of interest and the closest cell boundary and uses the calculated intensities at other grid points to interpolate . in the spherical coordinate system the directional variation of the intensity is normally described by the radiation coordinates and , which are defined by \cdot \left [ \underline{\bf r } \times \underline{\bf z } \right ] \ ; , \end{aligned}\ ] ] respectively ( see fig .[ fig1 ] for definitions ) .unfortunately , and vary along a characteristic so using the same and grid for all spatial points would require interpolations in these angles . 
to avoid this additional interpolationwe describe a characteristic with which we call the `` impact parameter vector '' ( see fig .[ fig1 ] ) .this vector describes all essential features of a characteristic and can be considered as an analog of the orbital momentum vector in two body problems .its absolute value p= *p* is the traditional impact parameter while its orientation defines the `` orbital plane '' of the radiation ( the plane that contains the characteristic and the origin ) .following the analogy one can define an `` inclination '' angle for this plane by in our code we set up a universal grid in impact parameters and in inclination angles and calculate the radiation coordinates by for each grid point ( see fig [ fig1 ] for definitions ) .we evaluate eq .[ eq : fs ] in the comoving frame of the point of interest which is the proper frame for solving the rate equations . to ensure that the spatial and frequency variations of the opacity and source function are mapped properly in the integrations , we add extra integration points to the characteristics .the number of the extra points ( at least one ) depends on the ratio of the line of sight velocity difference between the endpoints and a `` maximum allowed velocity difference '' which is a free parameter in the code .the opacities and source terms at every comoving frequency are then interpoleted onto the integration points by bi - linear approximations using the four closest spatial grid points .it is relatively easy to construct a diagonal alo in this evaluation scheme .with the exception of the intensity we interpolate all quantities in first order .however , the accuracy of this approximation is insufficient in many cases ; therefore , we introduced a rudimentary multi - grid approach . before evaluating eq .[ eq : fs ] , we calculate opacity and source function on a dense grid by using a sophisticated 3 order approximation . then , the transfer equation is solved on the original grid using the dense grid for opacity and source term interpolations .we tested our code by reproducing spherical symmetric cmfgen models , as well as the results of an accurate 2d long characteristic program ( see * ? ? ?the 2d models were static with schuster - type inner boundary conditions and included electron scattering iterations .we were able to reproduce all test cases within % accuracy . in figs .[ fig2 ] and [ fig3 ] we demonstrate the capabilities of our code by showing the results for a rotating stellar envelope .the model was produced by using the opacities and emissivities from the results of a realistic cmfgen simulation and by introducing a rotational velocity field .as expected the spectral lines show the rotational broadening .
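to make the short characteristic update concrete , the following python sketch evaluates the formal solution of eq . [ eq : fs ] over a single segment , assuming the source function varies linearly in optical depth between the upwind point and the local point . this is the generic textbook form of the sc step , not the implementation described above , which additionally interpolates opacities and source terms bi - linearly on a refined grid and works in the comoving frame .

    import numpy as np

    def sc_step(I_up, S_up, S_loc, dtau):
        # formal solution over one short characteristic:
        #   I_loc = I_up * exp(-dtau) + integral_0^dtau S(t) * exp(-(dtau - t)) dt,
        # with S(t) linear in optical depth; the weights are the exact integrals
        if dtau < 1e-4:
            # series expansion avoids cancellation at very small optical depth
            att = 1.0 - dtau + 0.5 * dtau**2
            w_up = 0.5 * dtau - dtau**2 / 3.0
            w_loc = 0.5 * dtau - dtau**2 / 6.0
        else:
            att = np.exp(-dtau)
            w_up = (1.0 - (1.0 + dtau) * att) / dtau
            w_loc = (1.0 - att) - w_up
        return I_up * att + w_up * S_up + w_loc * S_loc

    # sweeping a ray: start from the boundary intensity and apply the update
    # segment by segment, e.g.
    #   I = I_boundary
    #   for S_up, S_loc, dtau in segments:
    #       I = sc_step(I, S_up, S_loc, dtau)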
we discuss work toward developing a 2.5d non - lte radiative transfer code . our code uses the short characteristic method with modifications to handle realistic 3d wind velocities . we also designed this code for parallel computing facilities by representing the characteristics with an impact parameter vector * p * . this makes interpolation in the radiation angles unnecessary and allows an independent calculation for characteristics represented by different * p * vectors . the effects of the velocity field are allowed for by increasing , as needed , the number of grid points along a short characteristic . this allows us to accurately map the variation of the opacities and emissivities as a function of frequency and spatial coordinates . in the future we plan to use this transfer code with a 2d non - lte stellar atmosphere program to solve self - consistently for the level populations , the radiation field and the temperature structure of stars with winds and without spherical symmetry .
network representation has become an increasingly widespread methodology of analysis to gain insight into the behavior of complex systems , ranging from gene regulatory networks to human infrastructures such as the internet , power - grids and airline transportation , through metabolism , epidemics and social sciences .these studies are primarily data driven , where connectivity information is collected , and the structural properties of the resulting graphs are analyzed for modeling purposes .however , rather frequently , full connectivity data is unavailable , and the modeling has to resort to considerations on the _ class of graphs _ that obeys the available structural data . a rather typical situation is when the only information available about the network is the degree sequence of its nodes .for example , in epidemiology studies of sexually transmitted diseases , anonymous surveys may only collect the _ number _ of sexual partners of a person in a given period of time , not their identity .epidemiologists are then faced with constructing a _ typical _ contact graph having the observed degree sequence , on which disease spread scenarios can be tested .another reason for studying classes or _ensembles _ of graphs obeying constraints comes from the fact that the network structure of many large - scale real - world systems is not the result of a global design , but of complex dynamical processes with many stochastic elements .accordingly , a statistical mechanics approach can be employed to characterize the collective properties of the system emerging from its node level ( microscopic ) properties . in this approach ,statistical ensembles of graphs are defined , representing `` connectivity microstates '' from which macroscopic system level properties are inferred via averaging .here we focus on the degree as a node characteristic , which could represent , for example , the number of friends of a person , the valence of an atom in a chemical compound , the number of clients of a router , etc . in spite of its practical importance , finding a method to construct degree - based graphs in a way that allows the corresponding graph ensemble to be properly sampled has been a long - standing open problem in the network modeling community ( references using various approaches are given below ) . herewe present a solution to this problem , using a biased sampling approach .we consider degree - based graph ensembles on two levels : 1 ) sequence - level , where a specific sequence of degrees is given , and 2 ) distribution level , where the sequences are themselves drawn from a given degree distribution . in the remainder we will focus on the fundamental case of labeled , undirected simple graphs . in a simple graph any link connects a single pair of distinct nodes and self loops and multiple links between the same pair of nodes are not allowed .without loss of generality , consider a sequence of positive integers , arranged in non - increasing order : .if there is at least one simple graph with degree sequence , the sequence is called a _ graphical sequence _ and we say that _ realizes _ . note that not every sequence of positive integers can be realized by simple graphs .for example , there is no simple graph with degree sequence or , while the sequence can obviously be realized by a simple graph . 
in general ,if a sequence is graphical , then there can be several graphs having the same degree sequence .also note that given a graphical sequence , the careless or random placing of links between the nodes may not result in a simple graph . recently , a direct , swap - free method to systematically construct all the simple graphs realizing a given graphical sequence was presented .however , in general ( for exceptions see ref . ) , the number of elements of the set of all graphs that realize sequence , increases very quickly with : a simple upper bound is provided by the number of all graphs with sequence , allowing for multiple links and loops : .thus , typically , systematically constructing all graphs with a given sequence is practical only for short sequences , such as when determining the structural isomers of alkanes . for larger sequences , andin particular for modeling real - world complex networks , it becomes necessary to sample .accordingly , several variants based on the markov chain monte carlo ( mcmc ) method were developed .they use link - swaps ( `` switches '' ) to produce pseudo - random samples from .unfortunately , most of them are based on heuristics , and apart from some special sequences , little has been rigorously shown about the methods mixing time , and accordingly they are ill - controlled .the literature on such mcmc methods is simply too extensive to be reviewed here , instead , we refer the interested reader to refs and the references therein . finally , we recall the main swap - free method producing uniform random samples from , namely the configuration model ( cm ) .this method picks a pair of nodes uniformly at random and connects them , until a rejection occurs due to a double link or a self - loop , in which case it restarts from the very beginning .for this reason , the cm can become very slow , as shown in the discussion section .the cm has inspired approximation methods as well and methods that construct random graphs with given _ expected _ degrees . here , by developing new results from the theorems in ref ., we present an efficient algorithm that solves this fundamental graph sampling problem , and it is exact in the sense that it is not based on any heuristics . given a graphical sequence , the algorithm always finishes with a simple graph realization in polynomial time , and it is rejection free . while the samples obtained are not uniformly generated , the algorithm also provides the exact weight for each sample , which can then be used to produce averages of arbitrary graph observables measured uniformly , or following any given distribution over .before introducing the algorithm , we state some results that will be useful later on .we begin with the erds - gallai ( eg ) theorem , which is a fundamental result that allows us to determine whether a given sequence of non - negative integers , called `` degree sequence '' hereafter , is graphical .[ eg ] a non - increasing degree sequence is graphical if and only if their sum is even and , for all : a necessary and sufficient condition for the graphicality of a degree sequence , which is constrained from having links between some node and a forbidden set " of other nodes is given by the star - constrained graphicality theorem . in this casethe forbidden links are all incident on one node and thus form a `` star '' . 
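for reference , the inequality in theorem [ eg ] is the standard erdős - gallai condition , sum_{i=1}^{k} d_i <= k(k-1) + sum_{i=k+1}^{n} min(d_i , k) for every k , together with an even degree sum . the following python sketch is a direct transcription of this test ; the recurrence - based speedup described later in the text is omitted here for clarity .

    def is_graphical(seq):
        # Erdos-Gallai test: returns True iff some simple graph realizes `seq`
        d = sorted(seq, reverse=True)
        n = len(d)
        if sum(d) % 2 != 0:
            return False
        for k in range(1, n + 1):
            lhs = sum(d[:k])
            rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
            if lhs > rhs:
                return False
        return True

    # example: [3, 3, 3, 1] is not graphical, while [3, 3, 3, 3] is
    # (it is realized by the complete graph on four nodes)
    assert not is_graphical([3, 3, 3, 1]) and is_graphical([3, 3, 3, 3])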
to state the theorem, we first define the `` leftmost adjacency set '' of a node with degree in a degree sequence as the set consisting of the nodes with the largest degrees that are _ not in _ the forbidden set .if is non - increasing , then the nodes in the leftmost adjacency set are the first nodes in the sequence that are not in the forbidden set .the forbidden set could represent nodes that are either already connected to , and thus subsequent connections to them are forbidden , or just imposed arbitrarily . using this definition ,the theorem is : [ th6 ] let be a non - increasing graphical degree sequence. assume there is a set of forbidden links incident on a node .then a simple graph avoiding the forbidden links can be constructed if and only if a simple graph can be constructed where is connected to all the nodes in its leftmost adjacency set .a direct consequence of theorem [ th6 ] for the case of an empty forbidden set is the well - known havel - hakimi result , which in turn implies : [ firsthappy ] let be a non - increasing unconstrained graphical degree sequence .then , given any node , there is a realization of that includes a link between the first node and .another result we exploit here is lemma 3 of ref . , extended to star - constrained sequences : [ l3kim ] let be a graphical sequence , possibly with a star constraint incident on node .let and be distinct nodes not in the forbidden set and different from , such that .then is also a graphical sequence with the same star constraint .let denote the set of nodes forbidden to connect to node . since is star - constrained graphicalthere is a simple graph realizing the sequence with no connections between and . since , there is a node to which is connected but is not .note that could be in .now cut the edge of creating a stub at and another at .remove the stub at so that its degree becomes , and add a stub at so that its degree becoming .since there are no connections in between and , connect the two stubs at these nodes creating a simple graph thus realizing .clearly there are still no connections between and in , and thus is also star - constrained graphical . finally , using lemma [ l3kim ] and theorem [ th6 ] , we prove : [ fmt ] let be a degree sequence , possibly with a star - constraint incident on node , and let and be two nodes with degrees such that that are not constrained from linking to node .if the residual degree sequence obtained from by reducing the degrees at and by unity is not graphical , then the degree sequence obtained from by reducing the degrees at and by unity is also not graphical . by definition , for and , ; for and , .we consider , however , the proof is not affected by this assumption . by assumption , is not graphical . using proof by contradiction ,assume that is graphical .clearly , , and thus we can apply lemma [ l3kim ] on this sequence . 
as a result ,the sequence , that is exactly is graphical , a contradiction .note that if a sequence is non - graphical , then it is not star - constrained graphical either , and thus theorem [ fmt ] is in its strongest form .the sampling algorithm described below is ergodic in the sense that every possible simple graph with the given finite degree sequence is generated with non - zero probability .however , it does not generate the samples with uniform probability ; the sampling is biased .nevertheless , the algorithm can be used to compute network observables that are unbiased , by appropriately weighing the averages measured from the samples .according to a well known principle of biased sampling , if the relative probability of generating a particular sample is , then an unbiased estimator for an observable measured from a set of randomly generated samples is the weighted average where the weights are , and the denominator is a normalization factor .the key to this method is to find the appropriate weight to associate with each sample .note that in addition to uniform sampling , it is in fact possible to sample with any arbitrary distribution by choosing an appropriate set of sample weights .let be a non - increasing graphical sequence .we wish to sample the set of graphs that realize this sequence .the graphs can be systematically constructed by forming all the links involving each node .to do so , begin by choosing the first node in the sequence as the `` hub '' node and then build the set of the `` allowed nodes '' that can be connected to it . contains all the nodes that can be connected to the hub such that if a link is placed between the hub and a node from , then a simple graph can still be constructed , thus preserving graphicality .choose uniformly at random a node , and place a link between and the hub .if still has `` stubs '' , i.e. remaining links to be placed , then add it to the set of `` forbidden nodes '' that contains all the nodes which ca nt be linked anymore to the hub node and which initially contains only the hub ; otherwise , if has no more stubs to connect , then remove it from further consideration .repeat the construction of and link the hub with one of its randomly chosen elements until the stubs of the hub are exhausted .then remove the hub from further consideration , and repeat the whole procedure until all the links are made and the sample construction is complete .each time the procedure is repeated , the degree sequence considered is the `` residual degree sequence '' , that is the original degree sequence reduced by the links that have previously been made , and with any zero residual degree node removed from the sequence .then , choose a new hub , empty the set of forbidden nodes and add the new hub to it .it is convenient , but not necessary , to choose the new hub to be a node with maximum degree in the residual degree sequence .the sample weights needed to obtain unbiased estimates using eq . 
[ two ] are the inverse relative probabilities of generating the particular samples .if in the course of the construction of the sample different nodes are chosen as the hub and they have residual degrees when they are chosen , then this sample weight can be computed by first taking the product of the sizes of the allowed sets constructed , then dividing this quantity by a combinatorial factor which is the product of the factorials of the residual degrees of each hub : the weight accounts for the fact that at each step the hub node has nodes it can be linked to , which is the size of the allowed set at that point , and that the number of equivalent ways to connect the residual stubs of a new hub is .note that it is always true that , with occurring for sequences for which there is only one possible graph .the most difficult step in the sampling algorithm is to construct the set of allowed nodes . in order to do so first notethat theorem [ fmt ] implies that if a non - forbidden node , that is a node not in , can be added to , then all non - forbidden nodes with equal or higher degree can also be added to .conversely , if it is determined that a non - forbidden node can not be added to , then all nodes with equal or smaller degree also can not be added to . therefore , referring to the degrees of nodes that can not be added to as `` fail - degrees '' , the key to efficiently construct is to determine the maximum fail - degree , if fail - degrees exist . the first time is constructed for a new hub , according to corollary [ firsthappy ] , there is no fail - degree and consists of all the other nodes .however , constructing becomes more difficult once links have been placed from the hub to other nodes . in this case , to find the maximum fail - degree note that at any step during the construction of a sample the residual sequence being used is graphical .then , since according to theorem [ th6 ] any connection to the leftmost adjacency set of the hub preserves graphicality , it follows from theorem [ fmt ] that any fail - degree has to be strictly less than the degree of any node in the leftmost adjacency set of the hub .if there are non - forbidden nodes in the residual degree sequence that have degree less than any in its leftmost adjacency set , then the maximum fail - degree can be found with a procedure that exploits theorem [ th6 ] .in particular , if the hub is connected to a node with a fail - degree , then , by theorem [ th6 ] , even if all the remaining links from the hub were connected to the remaining nodes in the leftmost adjacency set , the residual sequence will not be graphical .our method to find fail - degrees , given below , is based on this argument .begin by constructing a new residual sequence by temporarily assuming that links exist between the hub and all the nodes in its leftmost adjacency set _ except for the last one _ , which has the lowest degree in the set .the nodes temporarily linked to the hub should also be temporarily added to the set of forbidden nodes .the nodes in should be ordered so that it is non - increasing , that forbidden nodes appear before non - forbidden nodes of the same degree , and that the hub , which now has residual degree 1 , is last . 
at this point , in principle one could find the maximum fail degree by systematically connecting the last link of the hub with non - forbidden nodes of decreasing degree , and testing each time for graphicality using theorem [ eg ] .if it is not graphical then the degree of the last node connected to the hub is a fail - degree , and the node with the largest degree for which this is true will have the maximum fail - degree .however , this procedure is inefficient because each time a new node is linked with the hub the residual sequence changes and every new sequence must be tested for graphicality .a more efficient procedure to find the maximum fail - degree instead involves only testing the sequence . to see how this can be done ,note that is a graphical sequence , by theorem [ th6 ] .thus , by theorem [ eg ] , for all relevant values of , the left hand side of inequality [ egeq ] , , and the right hand side of it , , satisfy .furthermore , for the purposes of finding fail - degrees it is sufficient to consider linking the final stub of the hub with only the last non - forbidden node of a given degree , if any exists .after any such link is made , the resulting degree - sequence will be non - increasing , and thus theorem 1 can be applied to test it for graphicality . therefore , if the degree of the node connected with the last stub of the hub is a fail - degree , then inequality [ egeq ] for must fail for some . for each ,the possible differences in and between and are as follows . is always reduced by 1 because the residual degree of the hub is reduced from 1 to 0 . may be reduced by an another factor of 1 if the last node connected to the hub , having index and degree , is such that and . is reduced by 1 if , otherwise it is unchanged . considering these conditions that can cause inequality [ egeq ] to fail for , the set of allowed nodes can be constructed with the following algorithm that requires only testing . starting with ,compute the values of and for .there are three possible cases : ( 1 ) , ( 2 ) , and ( 3 ) . in case( 1 ) fail - degrees occur whenever is unchanged by making the final link to the hub .thus , the degree of the first non - forbidden node whose index is greater than is the largest fail - degree found with this value of . in case ( 2 ) fail - degrees occur whenever is unchanged and is reduced by 2 by making the final link to the hub .thus , the degree of the first non - forbidden node whose index is greater than and whose degree is less than is the largest fail - degree found with this value of . in case( 3 ) no fail - degree can be found with this value of .repeat this process sequentially increasing , until all the relevant values have been considered , then retain the maximum fail - degree .it can be shown that the algorithm can be stopped either after a case ( 1 ) occurs , or after where is the lowest index of any node in with degree .once the maximum fail - degree is found , remove the nodes that were temporarily added to and construct by including all non - forbidden nodes of with a higher degree .if no fail - degree is ever found , then all non - forbidden nodes of are included in . will always include the leftmost adjacency set of the hub and any non - forbidden nodes of equal degree .note that after a link is placed in the sample construction process , the residual degree sequence changes , and therefore , has to be determined every time . 
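the construction just described can be condensed into a deliberately naive python sketch . it builds the allowed set by testing each candidate node with the star - constrained criterion of theorem [ th6 ] , i.e. by connecting the hub to its leftmost adjacency set and running the erdős - gallai test ( the is_graphical sketch given earlier ) , instead of using the much faster maximum - fail - degree search of this section , and it records the sample weight of eq . [ weight ] along the way . it illustrates the logic , not the efficiency , of the algorithm .

    import random
    from math import factorial

    def star_constrained_graphical(res, hub, forbidden):
        # theorem [th6]: the residual sequence `res` can be completed, with the
        # hub avoiding every node in `forbidden`, iff connecting the hub to its
        # leftmost adjacency set leaves an Erdos-Gallai-graphical sequence
        res = dict(res)
        k = res.pop(hub)
        cand = sorted((v for v in res if v not in forbidden and res[v] > 0),
                      key=lambda v: res[v], reverse=True)
        if len(cand) < k:
            return False
        for v in cand[:k]:
            res[v] -= 1
        return is_graphical([d for d in res.values() if d > 0])

    def sample_graph(degrees):
        # one biased sample of a graphical degree sequence; returns (edges, weight),
        # weight = product of allowed-set sizes / product of hub degree factorials
        residual = {v: d for v, d in enumerate(degrees) if d > 0}
        edges, weight = set(), 1.0
        while residual:
            hub = max(residual, key=residual.get)
            weight /= factorial(residual[hub])
            forbidden = {hub}
            while residual[hub] > 0:
                allowed = []
                for v in residual:
                    if v in forbidden or residual[v] == 0:
                        continue
                    trial = dict(residual)
                    trial[hub] -= 1
                    trial[v] -= 1
                    if star_constrained_graphical(trial, hub, forbidden | {v}):
                        allowed.append(v)
                weight *= len(allowed)        # never empty for a graphical input
                v = random.choice(allowed)
                edges.add((min(hub, v), max(hub, v)))
                residual[hub] -= 1
                residual[v] -= 1
                if residual[v] > 0:
                    forbidden.add(v)
            residual = {v: d for v, d in residual.items() if d > 0}
        return edges, weight

    # unbiased ensemble average of an observable Q over many samples (eq. [two]):
    #   samples = [sample_graph(seq) for _ in range(1000)]
    #   Q_mean = sum(w * Q(e) for e, w in samples) / sum(w for _, w in samples)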
finally , and should be calculated efficiently .calculating the sums that comprise them for each new value of can be computationally intensive , especially for long sequences . even computing them only for as many distinct terms as there are in the sequence , as suggested in ref ., can still become slow if the degree distribution is not quickly decreasing .instead , it is much more efficient to use recurrence relations to calculate them .a recurrence relation for is simply with .for non - increasing degree sequences , define the `` crossing - index '' for each as the index of first node that has degree less than , that is for which for all .if no such index exists , such as for since the minimum degree of any node in the sequence is 1 , then set .then , a recurrence relation for is where is a discrete equivalent of the heaviside function , defined to be 1 on positive integers and 0 otherwise , and . or , since the crossing - index can not increase with , that is for all , a value will exist for which for all , and so eq . [ recm1 ] can be written thus , there is no need to find for . using eqs .[ recs ] and [ recm ] , the mechanism of the calculation of and at sequential values of is shifted from a slow repeated calculation of sums of many terms to the much less computationally intensive task of calculating the recurrence relations . in order to perform the test efficiently ,a table of the values of crossing - index for each relevant can be created as is constructed .probability distribution of the logarithm of weights for an ensemble of power - law sequences with and .the ensemble contained graphical sequences , and for each sequence graph samples were produced .thus , the total number of samples produced was .the simulation data is given by the solid black line and a gaussian fit of the data is shown by the dashed red line that nearly obscures the black line.,scaledwidth=40.0% ] it should be noted that the usefulness of this method for calculating and is broader than its use for calculating fail - degrees in our sampling algorithm . in particular , it can be used in an erds - gallai test to efficiently determine whether a degree - sequence is graphical .as previously stated , the weight associated with a particular sample , given by eq .[ weight ] , is the product of the sizes of all the sets of allowed nodes that have been built for each hub node divided by the product of the factorials of the initial residual degrees of each hub node .the logarithm of this weight is \ : .\label{lw}\ ] ] generally , degree sequences with admit many graphical realizations . when this is true , each of the terms in square brackets in eq .[ lw ] are effectively random and independent , and , by virtue of the central limit theorem , their sum will be normally distributed .that is , the weight of graph samples generated from a given degree sequence with large is typically log - normally distributed .however , degree sequences with that have only a small number of realizations do exist , and is not expected to be log - normally distributed for those sequences .mean and standard deviation of the distributions of the logarithm of the weights vs. 
number of nodes of samples from an ensemble of power - law sequences with .the black circles correspond to , the red squares correspond to .the error bars are smaller than the symbols .the solid black line and the dashed red line show the outcomes of fits on the data .the linearity of the data on a logarithmic scale indicates that the and follow power - law scaling relations with : and .the slopes of the fit lines are an estimate of the value of the exponents : and .,scaledwidth=40.0% ] furthermore , one can consider not just samples of a particular graphical sequence , but of an ensemble of sequences . by a similar argument to that given above for individual sequences , the weight of graph samples generated from an ensemble of sequences will also typically be log - normally distributed in the limit of large . for example , consider an ensemble of sequences of randomly chosen power - law distributed degrees , that is , sequences of random integers chosen from a probability distribution .hereafter , we refer to such sequences as `` power - law sequences . ''figure [ lognorm ] shows the probability distribution of the logarithm of weights for realizations of power - law sequences with exponent and .note that this distribution is well approximated by a gaussian fit .we have also studied the behavior of the mean and the standard deviation of the probability distribution of the logarithm of the weights of such power - law sequences as a function of . as shown in fig .[ paramfit ] , they scale as a power - law .we have found qualitatively similar results , including power - law scaling of the growth of the mean and variance of the distribution of , for binomially distributed degree sequences that correspond to those of erds - renyi random graphs with node connection probability such that , and for uniformly distributed degree sequences , that is power - law sequences with , with an upper limit , or cutoff , of for the degree of a node .however , for uniformly distributed degree sequences without an imposed upper limit on node degrees , we find that the sample weights are not log - normally distributed .in this section we discuss the algorithm s computational complexity .we first provide an upper bound on the worst case complexity , given a degree sequence . then , using extreme value arguments , we conservatively estimate the average case complexity for degree sequences of random integers chosen from a distribution . the latter is useful for realistically estimating the computational costs for sampling graphs from ensembles of long sequences . to determine an upper bound on the worst case complexity for constructing a sample from a given degree sequence , recall that the algorithm connects all the stubs of the current hub node before it moves on to the hub node of the new residual sequence . forevery stub from the hub one must construct the allowed set . the algorithm for constructing , which includes constructing , performing the vs comparisons , and determining the maximum fail - degree ,can be completed in $ ] steps , where is the maximum possible number of nodes in the residual sequence after eliminating hubs from the process .therefore , an upper bound on the worst case complexity of the algorithm given a sequence is : where the sum involves at most terms .equivalently , , with being the number of links in the graph . for simple graphs ,the maximum possible number of links is , and the minimum possible number is . 
if , then , and if , then , which is an upper bound , independent of the sequence .the estimated computational complexity of the algorithm for power - law sequences .the leading order of the computational complexity of the algorithm as a power of , where is the number of nodes , is plotted as a function of the degree distribution power - law exponent .the black circles correspond to ensembles of sequences without cutoff , while the red squares correspond to ensembles of sequences with structural cutoff in the maximum degree of .the fits that yielded the data points were carried out considering sequences ranging in size from to .,scaledwidth=40.0% ] from eq .[ cw ] , the expected complexity for the algorithm to construct a sample for a degree sequence of random integers chosen from a distribution , normalized to unity , can be conservatively estimated as here is the expectation value for the degree of the node with index , which is the largest degree for which the expected number of nodes with equal or larger degree is at least .that is , notice that the sum in the above equation runs to the maximum allowed degree in the network , which is nominally , but a different value can be imposed .for example , in the case of power - law sequences , the so - called structural cutoff of is necessary if degree correlations are to be avoided .however , such a cutoff needs to be imposed only for , because the expected maximum degree in a power - law network grows like .thus , for , grows no faster than and no degree correlations exist for large . given a particular form of distribution , eq .[ compl ] can be computed for different values of .subsequent fits of the results to a power - law function allow the order of the complexity of the algorithm to be estimated .figure [ plfits ] shows the results of such calculations for power - law sequences with and without the structural cutoff of as a function of exponent .note that , in the absence of cutoff , the results indicate that the order of the complexity goes to a value of 3 for , that is , in the limit of a uniform degree distribution .however , if the structural cutoff is imposed the order of the complexity is only in this limit .both these results are easily verified analytically .we have tested the estimates shown in fig .[ plfits ] with our implementation of the sampling algorithm for power - law sequences with and without the structural cutoff for certain values of , including 0 , 2 , and 3 .this was done by measuring the actual execution times for generating samples for different and fitting the results to a power - law function . in every case , the actual order of the complexity of our implementation of the sampling algorithm was equal to or slightly less than its estimated value shown in fig .[ plfits ] .we have solved the long standing problem of how to efficiently and accurately sample the possible graphs of any graphical degree sequence , and of any ensemble of degree sequences .the algorithm we present for this purpose is ergodic and is guaranteed to produce an independent sample in , at most , steps .although the algorithm generates samples non - uniformly , and , thus , it is biased , the relative probability of generating each sample can be calculated explicitly permitting unbiased measurements to be made . 
furthermore , because the sample weights are known explicitly , the algorithm makes it possible to sample with any arbitrary distribution by appropriate re - weighting .it is important to note that the sampling algorithm is guaranteed to successfully and systematically proceed in constructing a graph .this behavior contrasts with that of other algorithms , such as the configuration model ( cm ) , which can run into dead ends that require back - tracking or restarting , leading to considerable losses of time and potentially introducing an uncontrollable bias into the results .while there are classes of sequences for which it is perhaps preferable to use the cm instead of our algorithm , in other cases its performance relative to ours can be remarkably poor .for example , a configuration model code failed to produce even a single sample of a uniformly distributed graphical sequence , , with , after running for more than 24 hours , while our algorithm produced samples of the very same sequence in 30 seconds .furthermore , each sample generated by our algorithm is independent .this behavior contrasts with that of algorithms based on mcmc methods .because our algorithm works for any graphical sequence and for any ensemble of random sequences , it allows arbitrary classes of graphs to be studied .one of the features of our algorithm that makes it efficient is a method of calculating the left and right sides of the inequality in the erds - gallai theorem using recursion relations . testing a sequence for graphicalitycan thus be accomplished without requiring repeated computations of long sums , and the method is efficient even when the sequence is nearly non - degenerate .the usefulness of this method is not limited to the algorithm presented for graph sampling , but can be used anytime a fast test of the graphicality of a sequence of integers is needed .there are now over 6000 publications focusing on complex networks . in many of these publications various processes , such as network growth , flow on networks , epidemics , etc ., are studied on toy network models used as `` graph representatives '' simply because they have become customary to study processes on .these include the erds - rnyi random graph model , the barabsi - albert preferential attachment model , the watts - strogatz small - world network model , random geometric graphs , etc . however , these toy models are based on specific processes that constrain their structure beyond their degree - distribution , which in turn might not actually correspond to the processes that have led to the structure of the networks investigated with them , thus potentially introducing dangerous biases in the conclusions of these studies .the algorithm presented here provides a way to study classes of simple graphs constrained solely by their degree sequence , and nothing else .however , additional constraints , such as connectedness , or any functional of the adjacency matrix of the graph being constructed , can in principle be added to the algorithm to further restrict the graph class built .cidg and keb are supported by the nsf through grant dmr-0908286 and by the norman hackerman advanced research program through grant 95921 .hk and zt are supported in part by the nsf bcs-0826958 and by dtra through hdtra 201473 - 35045 .the authors gratefully acknowledge y. sun , b. danila , m. m. ercsey ravasz , i. mikls , e. p. erds and l. a. szkely for fruitful comments , discussions and support .
uniform sampling from graphical realizations of a given degree sequence is a fundamental component in simulation - based measurements of network observables , with applications ranging from epidemics , through social networks , to internet modeling . existing graph sampling methods are either link - swap based ( markov - chain monte carlo algorithms ) or stub - matching based ( the configuration model ) . both types are ill - controlled , with typically unknown mixing times for link - swap methods and uncontrolled rejections for the configuration model . here we propose an efficient , polynomial time algorithm that generates statistically independent graph samples with a given , arbitrary , degree sequence . the algorithm provides a weight associated with each sample , allowing the observable to be measured either uniformly over the graph ensemble or , alternatively , with a desired distribution . unlike other algorithms , this method always produces a sample , without back - tracking or rejections . using a reasoning based on the central limit theorem , we argue that for large systems , and for degree sequences admitting many realizations , the sample weights are expected to have a lognormal distribution . as examples , we apply our algorithm to generate networks with degree sequences drawn from power - law distributions and from binomial distributions .
over the past decade , there has been a lot of research relating the structure and function of networks .the common picture is that real - world networks are to some degree random , but also have some regularities .these regularities , the network structure , affect dynamical processes taking place on the network .examples of such processes include epidemic spreading , synchronization , random walks and opinion formation .if follows that the network structure can influence how robust a dynamic system is to targeted attacks and random failures . since the robustness and stability of networks is relevant for the reliability and security of our modern infrastructures such as electricity systems , power - grids , sewage systems , cell - phone networks and the internet it is important to know how to generate ( or design ) robust networks .when a fraction of vertices in a network are malfunctioning due to either random failures or malicious attacks , the whole network may be broken into isolated parts . assuming the indirect connectivity is important for the system to function, we can take this fragmentation process as reflecting the breakdown of the system s functionality . in the context of percolation theory ,this fragmentation can be monitored by the critical occupancy threshold .e .the fraction of functioning vertices needed for a finite fraction of the network to be connected ( in the large - size limit of a network model ) .instead of considering this criterion for robustness , schneider _et al . _ , focused on the evolution of the largest component ( connected subgraph ) when one repeatedly remove the highest - degree vertices in the network .in particular , they introduced an index , , to weigh the robustness of network , which is defined as where is the number of vertices in the network and is the fraction of vertices in the largest connected cluster after removing vertices .the normalization factor makes it easier to compare the robustness of networks with different sizes .the value of lies strictly in the range $ ] , where the two limits correspond to a network with star structure and a fully connected network .this situation is similar in other types of optimization of conflicting objectives .an heuristic method for maximizing while keeping the degree sequence fixed is to pick random pairs of edges and swap these [ and to and whenever a swap increases .when no more swaps can increase , the procedure is terminated .the final networks , after this optimization procedure , will then have a conspicuous onion structure with a core of highly connected vertices , hierarchically surrounded by layers of vertices with decreasing degrees . although one can achieve a considerable enhancement of the robustness by this method , it is not so appropriate in practice for two reasons .assuming that there are edges in a network , since the swapping of two arbitrary edges can impact the value of , the computational complexity of the method of schneider _ et al ._ scales as . on top of this , it takes time for the correlations to propagate through the system so that the time for the greedy algorithm to converge also increase with the system size. all - in - all the running time is thus close to cubic , which makes the approach prohibitively slow for large systems . in this paper , we present an alternative way to generate networks with onion structures under the constraint of invariant degree value of each vertex , and with computational complexity of order . 
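the robustness index defined above can be computed directly ; the sketch below ( using networkx for the component bookkeeping ) removes at every step a vertex of currently highest degree , which is the attack protocol adopted later in the text , and accumulates the relative size of the largest remaining component . ties are broken arbitrarily , which is one reason the text averages over independent attacks .

    import networkx as nx

    def robustness_R(G):
        # R = (1/N) * sum over the attack sequence of the fraction of vertices
        # (relative to the original N) in the largest remaining component;
        # G is assumed to be an undirected networkx graph
        H = G.copy()
        N = G.number_of_nodes()
        acc = 0.0
        for _ in range(N - 1):
            v = max(H.degree, key=lambda nd: nd[1])[0]   # recalculated degree
            H.remove_node(v)
            acc += max(len(c) for c in nx.connected_components(H)) / N
        return acc / N   # the final term, after all vertices are removed, is zero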
since broad degree distribution are common in nature and society , we will focus our attention on generating scale - free networks with onion topology .we validate the efficiency of our algorithm by investigating the response of the generated networks to malicious attacks and random failures , and compare these to the networks obtained by the optimization procedure of ref . .it has been suggested that the resilience of networks depends strongly on their assortativity , i.e. , on how the vertices connect with each other . to be more specific , assortatively mixed networks ( i.e., high degree vertices are more likely linked with other vertices also with high degrees ) are considerably more robust against the removal of vertices than their disassortative counterparts ( i.e. , high degree vertices are more likely linked with other vertices with low degrees ) .thus , keeping invariant the degree of each vertex and varying the mixing pattern among the vertices to increase assortativity would improve the robustness of a network .however , as was pointed out in , onion and assortativity are distinct properties , and high assortative networks may be significantly fragile to malicious attacks due to the lack of onion topology .nonetheless , these two properties are highly relevant : not all assortative networks have onion structure , but all onion networks are assortative ( the vertices with similar degrees are connected more frequently , as we show below ) . the time consuming optimization in ref . calls for a quicker , heuristic method to generate robust networks with a prescribed degree distribution . to do this ,we first generate a set of random numbers drawn from a distribution .these numbers represent the degrees of the vertices in the networks .one can think of the numbers as `` stubs '' or half - edges , sticking out from their respective vertices .each vertex is then assigned a layer index according to its value .for the sake of convenience , we rank the vertices by degree , increasingly .we set the layer index for the vertices with lowest degree is , the index for the set of vertices with second lowest degree is , and so on until all vertices have been assigned an .then we connect the stubs by selecting a pair at random and joining these with a probability dependent on the layer difference of the two vertices according to where is the difference in layer index between and , and is a control parameter . according to eq .( [ prob ] ) , the vertices within a layer are connected with greater probability than vertices in different layers . with the increase of the layer index difference , approaches zero rapidly . the elementary stub - connection process is repeated until all the stubs have been used up .no duplicate connections between two vertices and self - loop connections are allowed during the construction of the network .it is easy to see that the networks generated in this way should be of onion property .vertices , , and a degree distribution . the lowest and highest degrees in this network are , respectively , and .the sizes of the vertices are proportional to their degree , and vertices with the same layer indices are marked by the same color , and edges between nodes with equal degree are highlighted.,title="fig : " ] + the parameter in eq . 
( [ prob ] ) is the only independent parameter of our model .if , our algorithm reduces to the well known configuration model of molloy and reed .this , we argue , means that the network has a minimum of onion structure .if the value of is too large , the connection probabilities among vertices with different degrees become so small that the networks get either stratified and one - dimensional or even fragmented in core where a layer typically consists of only one vertex . in sum, the optimal -value , with respect to robustness , is intermediate . in the present study , we use unless otherwise stated . for , , , and ,there is typically fraction of stubs ( about , and we have checked that for larger size , this fraction can be even decreased ) that can not be paired in the construction process . in practicethis is not much of a problem as it can easily be remedied by the following reshuffle procedure . 1 . for stubs that are unpaired after many trials , we randomly select two of them at each step. 2 . we randomly choose a connection already existing in the network , and simply cut it so that we get two `` new '' stubs . 3 .then we attach the two `` new '' stubs to the two selected ones to form two connections , and at the same time check if any duplicates and self - connections are produced .we accept the change if the resulting graph is simple ( has no multiple edges or self edges ) , otherwise we undo the change and go back to step 2 to make a new try .this procedure is repeatedly repeated until all the remaining stubs are paired .in addition to the graphs with the algorithm presented in this paper , we also create onion scale - free networks according to the method proposed in , which will serve as a benchmark for comparison .in particular , we first obtain a scale - free network by procedure of the configuration model with the same degree sequences . from this original network ,we swap pairs of randomly chosen edges if and only if such a move would increase the robustness .this is done as follows .before swapping the two randomly selected connections , we carry out independent attacks as will be described below .the average robustness value is called . then we swap the neighbors of the two connections , and implement another independent attacks to determine the robustness of the new network .the swap of the neighbors is accepted only if and only if it would increase the robustness , i.e. , .this procedure is repeated with another randomly chosen pair of connections until no further improvement is achieved for a given large number of consecutive swaps ( the last ten thousands steps ) . in fig .[ example ] we show a typical network generated by our algorithm .to check the efficiency of our algorithm , we attack networks generated by our algorithm and those obtained by the robustness - optimization algorithm of ref .the attack procedure proceeds by removing vertices one by one in order of the ( currently ) largest degree ( during the deletion process ) . to recalculate the degrees during the attack , rather than removing vertices by the degree of the original network ( as in ref . ) , is in line with the idea that the attacker has a relatively full picture of the system .if more is known about a specific system , one can of course model the attack procedure in greater detail .this attack - by - current - highest - degree was first proposed in ref . and proven to be more efficient than removing vertices by initial degree . throughout this deletion processwe record . 
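the generation step itself can be sketched as follows . the explicit form of the connection probability in eq . ( [ prob ] ) did not survive the text extraction , so the sketch uses an exponential decay exp(-a*|dl|) in the layer - index difference as a stand - in with the stated qualitative behaviour ( largest within a layer , rapidly approaching zero as the layer difference grows ) ; the reshuffle of leftover stubs ( steps 1 to 3 above ) is left out for brevity .

    import math
    import random

    def onion_graph(degrees, a=3.0, max_tries=10**6):
        # layered stub matching: vertices get a layer index from the rank of
        # their degree (equal degrees share a layer); two randomly chosen stubs
        # are joined with probability exp(-a*|dL|), a stand-in for eq. ([prob])
        layer_of_degree = {d: i + 1 for i, d in enumerate(sorted(set(degrees)))}
        layer = [layer_of_degree[d] for d in degrees]
        stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
        edges, tries = set(), 0
        while len(stubs) > 1 and tries < max_tries:
            tries += 1
            i, j = random.sample(range(len(stubs)), 2)
            u, v = stubs[i], stubs[j]
            if u == v or (min(u, v), max(u, v)) in edges:
                continue                      # no self-loops or multi-edges
            if random.random() < math.exp(-a * abs(layer[u] - layer[v])):
                edges.add((min(u, v), max(u, v)))
                for k in sorted((i, j), reverse=True):
                    stubs.pop(k)
        return edges, stubs   # leftover stubs go to the reshuffle procedure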
versus the fraction of removed vertices .vertices are removed according to their current degree during the removal to simulate an attack scenario where an adversary hits the fraction of the system s weakest points .we compare three types of model networks : the configuration model ( solid line ) , the robustness - optimized procedure ( dashed line ) , and our algorithm ( dashed - dot line ) .all networks have the same parameters as shown in fig .[ example].,title="fig : " ] + we report our simulation results in fig .[ robustness ] where the relative size of the largest component as a function of , the fraction of removed vertices .the solid , dashed , dashed - dot lines correspond , respectively , to the cases that attack is performed on scale - free networks generated by the configuration model , by the optimization procedure of ref . , and by our algorithm .all these networks have the same sizes and degree distributions . comparing these curves, we note that the robustness - optimized networks really are more robust .furthermore , the curve for our algorithm nearly collapse with the optimized ones .this means that our algorithm can generate networks almost as robust as the optimization algorithm , but much faster .we have calculated the degree assortativity proposed by newman roughly the pearson correlation coefficient of the degree at either side of an edge and found that robustness - optimized networks and our model networks are more assortative than the original configuration - model network ( not shown ) .this means that changing the mixing pattern among the vertices toward positive associativity can enhance the robustness of network against targeted attack .at the same time , assortativity and robustness are not necessarily correlated . as a function of ( a ) the assortativity , and ( b ) the ratio of the largest eigenvalue of the adjacency matrix to the second largest one of the networks .the plus and cross symbols are the results for networks generated , respectively , by the configuration model and our algorithm . independent degree attacks are carried out on each of them .the corresponding results for an optimized network , generated by the optimization procedure , are also shown for comparison ( the solid triangle ) .all networks have the same parameters as shown in fig .[ example].,title="fig : " ] + of vertices belonging to the largest connected cluster versus the fraction of removed vertices using the random attack strategy for scale - free networks generated by the configuration model ( solid line ) , and our algorithm ( dashed line ) .the two networks have the same system size , average degree , and degree sequences . the lowest and highest degrees in the networks are , respectively , and .each curve is obtained by averaging over independent trials.,title="fig : " ] + ( diamonds ) and assortativity coefficient ( circles ) as a function of for scale free networks generated by our algorithm .the error bars indicate the standard errors of the robustness and assortativity calculated for scale - free networks .all networks have the same parameters as shown in fig . [ percvalue ] ., title="fig : " ] + in order to understand how robustness and assortativity are correlated , we present in fig . 
[ eigenvalue ] the scatter plot of the robustness as a function of the degree assortativity for networks generated by the configuration model , and for another ones generated by our algorithm .one can see that the networks generated by our algorithm are always assortative ( ) , and they are also found to be more robust against malicious attack .it is well known that the spectral property of a network plays an important role in determining the evolution of dynamical processes , such as synchronization , random walks and diffusion , taking place on it .usually , the principle eigenvalue are of particularly important .it has been proven that networks with large spectral gap ( the difference between the first principle eigenvalue and the second one ) are very good expanders , which also is thought to imply robustness .the expander property of a network can by measured approximately by the ratio , where and here denote , respectively , the largest and the second largest eigenvalue of the adjacency matrix of the network , whose elements are ones or zeroes if the corresponding vertices are adjacent or not . to confirm the correlation between and we plot the values of these quantities in a scatter plot ( see fig . [ eigenvalue](b ) ) .this correlation means that the conclusion from fig .[ eigenvalue](a ) also holds if we use a good expander property as robustness criterion .as described so far , our algorithm can be used to design a network with a given degree sequence that is robust against malicious attacks . to further confirm the efficiency of our algorithm, we also simulated random failure process by site percolation , on the generated scale - free networks .the simulation results for the relative size of the largest component after a fraction of vertices has been randomly removed , are presented in fig .[ percvalue ] .we can see that the percolation threshold is close to one , which means that the spanning cluster persists up to nearly failure .this is in accordance with the results of .finally , we show in fig .[ rvsa ] the robustness and the assortative coefficient of the scale free networks generated by our algorithm as a function of the parameter . for each value of , the data are obtained from an average over scale - free networks , and for each network realization , independent attack - by - current - highest - degree processes are implemented .it is obvious that the preferential attachment mechanism among the nodes within the same layers indeed leads to robust networks than the lack of that . from the results in figs .[ robustness] [ rvsa ] , we conclude that robust scale - free networks with onion structures can be obtained from the very beginning and without the need of an explicit optimization .in summary , we have proposed an alternative method to generate networks that are both robust to malicious attacks and random failures . we started by generating the degree sequence of a scale - free network with prescribed power - law degree distribution . from the observation that robust networks have of onion structure , we rank the vertices in terms of their degree , and assign a layer index to each vertex . the connection probability of two stubs is assigned to be related to the layer index difference of the two host vertices in such a way that the vertices with similar layer indices are connected with greater probability than otherwise . by means of this way , we are able to generate scale - free networks of onion structure . 
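the two quantities compared in fig . [ eigenvalue ] can be obtained as follows ; dense eigenvalue routines are used for brevity , whereas large networks would call for sparse solvers such as scipy.sparse.linalg.eigsh .

    import networkx as nx
    import numpy as np

    def assortativity_and_gap(G):
        # degree assortativity (pearson correlation of the degrees at either
        # end of an edge) and the ratio of the largest to the second largest
        # eigenvalue of the adjacency matrix, used as an expander-type measure
        r = nx.degree_assortativity_coefficient(G)
        eig = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
        return r, eig[0] / eig[1]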
we validate our algorithm by testing the robustness of the obtained network against both a harmful attack , which progressively removes the vertex with the largest degree in the remaining network , and random failures , which are modeled by site percolation . in many systems there are different types of edges that contribute to different aspects of the system s functionality ; in , edges are divided into connectivity edges and dependency edges , the former sustaining the primary connectivity of the system , the latter maintaining the functionality of the former . in the present study , we have restricted ourselves to the case where these edges coincide . an obvious further step would be to generalize the onion - structure generation to such interdependent networks . in general , interdependency can make networks more fragile . our preliminary simulations show that this is indeed the case for both random failures and malicious attacks on our onion topologies . another issue is that an adversary typically does not have full information about the system , which would make the strategy of deleting vertices by degree hard to carry out . on the other hand , without information , one is not expected to do worse than the random failures that we simulate by percolation . to conclude , without interdependencies and with a fairly good knowledge of the graph , which is indeed the case for several vital infrastructures , onion networks are the best bet for constructing a network with a broad degree distribution that is robust to both errors and attacks . a. fabrikant , e. koutsoupias , c. h. papadimitriou , in _ proceedings of the 29th international conference on automata , languages , and programming _ , lecture notes in computer science * 2380 * ( springer , heidelberg ) , 110 ( 2002 ) . that we do not see the eventual decrease of in the limit of large in fig . [ rvsa ] is attributed to the fact that the scale - free networks we treated are somewhat `` dense '' . if `` sparser '' networks ( e.g. , with average degree 2.8 rather than 4.75 ) were considered , indeed displays a decline for sufficiently large , due to the fast fragmentation of the networks after removing just a few of the nodes with the highest degrees ( results not shown ) .
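for completeness, here is a minimal sketch (using networkx) of the attack-robustness measure used in the validation above: vertices are removed one by one in order of their current degree, and the normalized size of the largest remaining component is accumulated. the convention R = (1/N) * sum_Q s(Q) and the barabasi-albert test graph are assumptions made for illustration, and ties in degree are broken arbitrarily.

```python
import networkx as nx

def robustness_R(G):
    """Schneider-type robustness: average fraction of vertices in the largest
    connected component while vertices are removed in order of current degree."""
    H, N, total = G.copy(), G.number_of_nodes(), 0.0
    for _ in range(N):
        v = max(H.degree, key=lambda kv: kv[1])[0]   # currently highest degree
        H.remove_node(v)
        if H.number_of_nodes():
            total += max(len(c) for c in nx.connected_components(H)) / N
    return total / N

G = nx.barabasi_albert_graph(500, 2, seed=0)         # illustrative test graph
print("R =", robustness_R(G))
```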
in a recent work [ proc . natl . acad . sci . u.s.a . * 108 * , 3838 ( 2011 ) ] , schneider _ et al . _ proposed a new measure for network robustness and investigated optimal networks with respect to this quantity . for networks with a power - law degree distribution , the optimized networks have an onion structure : high - degree vertices forming a core with radially decreasing degrees and an overrepresentation of edges within the same radial layer . in this paper we relate the onion structure to graphs with good expander properties ( another characterization of robust networks ) and argue that networks with skewed degree distributions and large spectral gaps ( and thus good expander properties ) typically have an onion structure . furthermore , we propose a generative algorithm producing synthetic scale - free networks with onion structure , circumventing the optimization procedure of schneider _ et al . _ we validate the robustness of our generated networks against malicious attacks and random removals .
we often experience traffic jams during rush hours in cities . in urban networks ,traffic flows are controlled by traffic lights .ideally , the cycles of the traffic lights should be coordinated in a way that optimizes the travel times in the network or avoids deadlock situations .the motivation of this work is to explore systematically the optimization of traffic flow , by using a simple transport model . in traffic engineering excluded - volume effect and stochastic fluctuationsare usually not taken into account .the totally asymmetric simple exclusion process ( tasep ) is a minimal model that includes these features .the tasep is one of cellular - automaton models with stochastic time evolution , which are systems of interacting particles on lattices . in the tasep , each site of the lattice is either occupied by a particle or empty , and each particle stochastically hops to the right neighboring site , if this target site is empty . undoubtedly the tasep has played a prominent role as a paradigmatic model for describing many driven non - equilibrium systems , especially physics of transport phenomena . since its introduction for theoretical description of the kinetics of protein synthesis ,the tasep has been generalized in many ways , for e.g. describing biological transports , in particular the motion of molecular motors .one of the intriguing disciplines which owe much to the tasep is vehicular traffic flow .various features of traffic flow have been investigated in the framework of the tasep such as overtaking , intersection of streets , flow at junctions , queuing process , anticipation effect , time and spatial headway at intersections , on - ramp simulation , pedestrian - vehicles flow , roundabout and shortcut .models of traffic flow at intersections have been also investigated by other approaches than the tasep , mainly in discrete - time frameworks . in order to investigate the effects of traffic lights, one can introduce time - dependent hopping probabilities in some particular sites of lattice traffic models .for example in , discrete - time models were analyzed on regular square lattices and some traffic - light strategies applied to optimize the flow in the system . on a simpler geometry i.e. ring ,discrete - time models were also employed and fundamental diagrams ( the curve of flow _ vs _ the density of cars ) were found to become constant when the density is in a certain range . in this workwe focus on the control of traffic flows on a single main road of city networks , and analyze different strategies to optimize unidirectional flow by signalization .specifically we use the continuous - time tasep on a ring rather than more sophisticated discrete - time models , which have been originally introduced for modeling highway traffic .compared to these traffic models , in the continuous - time tasep the cars velocities fluctuate stronger . in our system , there is a traffic light which controls the conflicting flow of vehicles at each point intersected by a perpendicular street , see fig .[ fig : illust ] .the traffic lights are regarded as local defects . as a special case ,our model includes one of well - known inhomogeneous taseps , which was introduced by janowsky and lebowitz .we also remark that similar variants of the tasep were introduced , e.g. 
where the tasep with time - dependent exit rate has been investigated and where one site in the lattice can be blocked or unblocked stochastically for description of conformation changes in filamentous substrate .this work is organized as follows . we first ( sec .[ sec : one ] ) analyze the case where there is only one traffic light on the street .we explore extensively its basic properties , mainly the fundamental diagrams , in various parameter regimes .we also consider various types of density profiles , according to averaging procedure .in particular the density profile by sample average _ converges _ to a periodic function in time .( in appendix , we give a proof of the periodicity of physical quantities . ) on the other hand , we observe a shock in the density profile by time average .next ( sec .[ sec : many ] ) we investigate the case where there are more than one traffic light . for simplicity traffic lightsare equidistantly located .we treat two strategies for defining the difference between the offsets of two adjacent traffic lights : fixed and random ones .we examine the current , which depends on the strategies .we also measure the total waiting time of cars behind traffic lights , and we explore its average and distribution in the two strategies .it turns out that there is an interrelation between the total waiting time and the current .finally ( sec .[ sec : conclusions ] ) we summarize this work and mention possible future studies .illustration of our model . in the one dimensional lattice ,each car move to the next site if it is empty with rate 1 .if a car is on a site just before a traffic light in red phase , the movement is not allowed ., width=287 ]let us consider first the tasep with only one traffic light on a ring with sites .each site is either empty or occupied by at most one article .we denote the global density ( i.e. the ratio between the number of cars and ) by , and the occupation number of site at time by .the ` hopping ' rate of cars from site to is set to be 1 . without loss of generality, we put the light between sites and 1 .we assume that the light periodically changes its status from green to red and from red to green .we denote the cycle length ( period ) and the green phase ratio by and , respectively , which are basic parameters in our model .the signal is green for unit of time and red for the rest of the cycle , i.e. , .more precisely , the jump rate from site to 1 is given by the following time - dependent function : where the integer ] showing the cycle number . in other words ,the light periodically becomes green at and red at . in this workwe consider only the case where all the traffic lights have identical values of and .analogous to the argument for the case , the limit of the model corresponds to the tasep with static defective bonds where hopping rates are reduced to . in the opposite limit , the current becomes , where is the ratio between the period and the duration when all the lights are green , i.e. . on the other hand , in the limit , the current is expected to be for the low and high density cases ( ) , and flat when , with some critical density .but again we are interested in the case where and are in a same order rather than these limits . the current depends on the offset parameters as well as , and . in this workwe discuss two types of offsetting of traffic lights , i.e. fixed and random offset strategies ( see e.g. for other types ) . 
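before turning to the offset strategies, the single-light dynamics described above can be simulated with a few lines of python. the sketch below uses a kinetic monte carlo scheme in which hops across the light bond are proposed at rate 1 and accepted only while the light is green (a thinning step that reproduces the time-dependent 0/1 rate exactly). the system size, density, period and green fraction are illustrative values, and the measured current includes the initial transient.

```python
import random

def tasep_one_light(L=100, density=0.3, T=50.0, g=0.5, t_max=5000.0, seed=0):
    """Continuous-time TASEP on a ring with one traffic light on the bond L-1 -> 0;
    returns the time-averaged current through that bond."""
    rng = random.Random(seed)
    occ = [1] * int(density * L) + [0] * (L - int(density * L))
    rng.shuffle(occ)
    green = lambda t: (t % T) < g * T
    t, crossings = 0.0, 0
    while t < t_max:
        movable = [i for i in range(L) if occ[i] and not occ[(i + 1) % L]]
        if not movable:
            break
        t += rng.expovariate(len(movable))   # waiting time to the next proposed event
        i = rng.choice(movable)
        if i == L - 1 and not green(t):
            continue                          # null event: the light is red
        occ[i], occ[(i + 1) % L] = 0, 1
        if i == L - 1:
            crossings += 1
    return crossings / t

print("current through the light:", tasep_one_light())
```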
in the fixed offset strategy ,the difference of the offsets are set as for , with some .the choice of is restricted as ( ) , so that no inhomogeneity is caused to the light : ( modulo 1 ) .the car - hole symmetry is no longer true except for , but we have an extended symmetry . on the other hand , in the random offset strategy , for each randomly chosen from the unit interval . and ( b ) _ vs _ the global density and the difference of offset parameters in the fixed offset strategy .the parameters were set as , and , corresponding to the surfaces . for comparison we plotted data of by markers in the face .in order to emphasize the effect of in the low and high density cases , we divide the current by in ( b ) .( c ) optimal _ vs _ .the curves correspond to the green wave strategy .[ fig:3d - j ] , width=287 ] figure [ fig:3d - j ] ( a ) shows the fundamental diagram for , and in the fixed offset strategy . as in the single - light ( ) case , there are regimes and , where the current depends on the global density .when , the current is independent of .the dependence of this plateau current on is also weak , see fig .[ fig:3d - j ] ( a , b ) . on the other hand , in the cases of low and high densities ,the dependence on becomes significant . in fig .[ fig:3d - j ] ( c ) , we plot the value of which gives maximum of for given .one may naively think that the so - called green wave strategy , i.e. ( modulo 1 ) with , maximizes the current .however we observe that this guess fails ._ vs _ the global density and the difference of offset parameters .the parameters were set as , and , corresponding to the surfaces . for comparison we plotted data of by markers in the face .[ fig : w ] , width=287 ] let us turn to investigation on the total waiting time ( twt ) .for each car , the waiting time behind a traffic light is defined by the duration from joining a queue to moving again .the twt for the period of the light is then the summation of waiting times over all cars in the queue , see fig .[ fig : w ] ( a ) .this is also rephrased as the queue length integrated over time .we also denote by the average of twt over and .the twt is one of quantities which we want to minimize in real city traffic . in fig[ fig : w ] ( b ) the surface corresponds to simulation results of the average twt normalized by , which is equivalent to average waiting time per car per cycle .when the global density is large ( ) , a queue _ created _ behind the light in the red phase can reach the light , and/or it can last still in the red phase . in this casewe can not define a twt per cycle .therefore we show only meaningful data of in fig [ fig : w ] ( b ) .the dependence on is again strong in the low density case . in kymographs ,fig [ fig : kymos ] , we observe the following facts . for the low density case ( a , c ) , is fluctuating with respect to , but we partially see that packets are propagated . on the other hand in the plateau current region ( the intermediate density regime ) ( b ,d ) , highly depends on in a time window , say .this property is true for the fixed offset strategy even though there is no inhomogeneity among the lights , i.e. when is large ( as compared to other lights ) , is also large .( this dependence on disappear when we consider the average over in a much longer time window . 
) and ( b ) , and the random offset strategy with ( c ) and ( d ) .the other parameters are , , and .[ fig : kymos ] , width=287 ] in the low density regime , there is the following tendency ( agreeing with our intuition ) as varies : when the twt is minimized ( resp .maximized ) , the current is maximized ( resp . minimized ) ,[ fig : jtwt - twtdistribution ] ( a ) for example with .however the relation between and is not a perfect one - to - one correspondence . on the other hand , in fig .[ fig : jtwt - twtdistribution ] ( b ) , we provide plots for as an example in the intermediate density regime .we can not observe a clear tendency of the relationship between and , as their ranges are too small .for comparison , we plot of the random offset strategy in fig .[ fig : jtwt - twtdistribution ] ( a , b ) .we notice , in ( a ) , that the curve made by the fixed offset strategy encloses the samples of random offset strategy with randomly chosen parameters as well as the average over . for intermediate densities , there is no big difference in as varies , see ( b ) .let us investigate the probability distribution of the twt .one may naively expect that the distributions of the cases and with are similar to each other . however this guess fails ,see the insets of fig .[ fig : jtwt - twtdistribution ] ( c , d ) , where we observe very different curves . for , we observe various types of distributions in fig . [fig : jtwt - twtdistribution ] ( c ) . for except for the vicinity of , the distribution is almost flat , as compared to other values of . for ,we observe a strong oscillation , where peaks appear with a period slightly smaller than . for ,the distribution is exponential - like ( but with a peak at a positive ) . for , a simplegaussian fitting agrees with the simulation data . on the other hand , for ( with ), does not drastically change its form as we change the value of , see fig .[ fig : jtwt - twtdistribution ] ( d ) .plots of as varies for global density ( a ) and ( b ) .we have set the other parameters as , , , and .for the fixed offset strategy , we varied the value of as .the small dots ( and partially the markers with various shapes ) were obtained by averaging over .the lines are guides for eyes . for the random offset strategy ,each marker was obtained by averaging over of one simulation run with a randomly chosen set of parameters .furthermore we averaged 40 simulation runs of the random offset strategy .( c ) and ( d ) show probability distributions of the average twt for and 0.4 , respectively , in the fixed offset strategy , where the average values are indicated by markers with a vertical bar . in the insets, we compare the distributions in the cases of and with .the dashed line in ( c ) is the gaussian fitting for .,width=287 ]in this work we analyzed the tasep with dynamic defective bonds which correspond to the traffic lights at intersections . our model can be considered as an approach to the regulation of traffic flow on arterial roads in urban areas .we explored possible optimization strategies of the flow and waiting time behind a traffic light .our choice of the traffic model , i.e. the continuous - time tasep , leads to rather strong fluctuations of cars velocities .this implies that the ballistic motion of cars , which might be relevant at low densities , is not considered by our approach .these fluctuations limit the efficiency of the traffic - light optimization schedules based on the typical traveling time between two intersections . 
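as a small illustration of the waiting-time observable referred to above, the sketch below integrates a piecewise-constant queue-length record over one signal cycle, which is exactly the total waiting time accumulated by the cars in the queue during that cycle; the event list in the example is made up for illustration only.

```python
def total_waiting_time(events, t0, t1):
    """Time integral of the queue length over [t0, t1), assuming `events` is a
    time-sorted list of (time, queue_length) pairs lying in [t0, t1), that the
    queue length is piecewise constant between events, and that the queue is
    empty before the first event."""
    twt, t_prev, q_prev = 0.0, t0, 0
    for t, q in events:
        if t >= t1:
            break
        twt += q_prev * (t - t_prev)
        t_prev, q_prev = t, q
    return twt + q_prev * (t1 - t_prev)

# one cycle of length 10: the queue builds up during red and empties during green
events = [(1.0, 1), (2.0, 2), (3.0, 3), (6.0, 2), (7.0, 1), (8.0, 0)]
print(total_waiting_time(events, 0.0, 10.0))   # 1 + 2 + 9 + 2 + 1 = 15.0
```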
in our systematic approach, we started with the single - light problem .we saw that the fundamental diagrams quantitatively depend on the parameters of signalization as well as the system size .although these time - average fundamental diagrams are qualitatively similar to that of the jl model , the time - periodic rate at the intersection gives interesting phenomena in various types of density profiles .in particular the temporary density profile nicely reflects formation and relaxation of a queue behind the light , which enables us to optimize the flow by tuning the period of the traffic light .in contrast to deterministic discrete - time models , even at low densities it is not possible to avoid queueing of cars completely , but the impact of the signal period and system size is significant .another interesting observation is that the average of the time - periodic profile over one cycle can exhibit a shock in the periodic system . for the -light problem , we measured the total waiting time of cars behind traffic lights , and explored relations to the flow .we found that , for the low density regime , the distribution of the total waiting time takes various forms depending on the offset parameter of the fixed offset strategy .moreover the flow and the waiting times are correlated .when the density becomes larger , the flow and the waiting time can not be controlled by the offset parameters , and the correlation between the flow and the waiting time becomes weaker .our results refer to the steady state of a periodic system , which we characterized in some details .we believe that our approach sets a firm ground for the analysis of more sophisticated traffic models as well as in other geometries . the step toward more complex lattices has already been made , for example , at an intersection of two perpendicular segments or in more complicated networks . despite the relevance of these studies for city traffic , in these works traffic optimization has been understood as optimization of the flow . for realistic city traffic , however , it is also important to understand how the cars are distributed in the network as well as to optimize traffic flow with respect to the drivers waiting times .although we addressed these issues in a rather simple geometry , we observed that the density of cars is strongly varying at different sections of the roads .this observation is of great relevance for city traffic since a queue on a main road may block a whole section of the city network .here we prove that ( the ensemble average of ) any quantity converges to a periodic function with period , as .we consider only the single - light problem , but the proof can be generalized to the -light problem .the probability distribution at time is evolved by the master equation in continuous time where ], we find + \kappa } | p ( 0 ) \rangle & ( s ' \le g ) \\e^{m_\text{r } ( s ' -g ) t } e^{m_\text{p } g t } \mathcal m^ { [ s / t ] + \kappa } | p ( 0 ) \rangle & ( s ' > g ) \end{cases } \\ & \to \begin{cases } e^{m_\text{p } s ' t } |p_\text{st } \rangle & ( s ' \le g ) \\ e^{m_\text{r } ( s ' -g)t } e^{m_\text{p } g t } | p_\text{st } \rangle & ( s ' > g ) \end{cases } \label{eq : last - equation}\end{aligned}\ ] ] as ( ) .we denote eq . by satisfying .we find that any quantity converges to , which is also periodic , .assume that cars always want to jump from site to site 1 with rate 1 .but jumps are allowed only when the signal is green .when is very small , i.e. 
in the limit of very high switching frequency of the light , the acceptance of a jump becomes effectively stochastic with probability , showing the equivalence of our model to the jl model with .
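the periodicity argument sketched in the appendix above can be checked numerically on a very small system: the sketch below builds the master-equation generators of the tasep on a ring with and without the light bond, composes the one-period propagator (red phase applied after the green phase), and iterates it until the distribution at the start of the green phase stops changing. the tiny system size (4 sites, 2 particles) and the values of the period and green fraction are assumptions chosen only to keep the state space small.

```python
import itertools
import numpy as np
from scipy.linalg import expm

def generators(L, N):
    """Master-equation generators (columns index the source state) for the TASEP
    with N particles on an L-site ring: one with the light bond (L-1 -> 0) open,
    one with it blocked."""
    states = [s for s in itertools.product((0, 1), repeat=L) if sum(s) == N]
    index = {s: k for k, s in enumerate(states)}
    def build(light_open):
        M = np.zeros((len(states), len(states)))
        for s in states:
            for i in range(L):
                j = (i + 1) % L
                if s[i] and not s[j] and (light_open or i != L - 1):
                    t = list(s); t[i], t[j] = 0, 1
                    M[index[tuple(t)], index[s]] += 1.0   # gain term
                    M[index[s], index[s]] -= 1.0          # loss term
        return M
    return build(True), build(False), states

L, N, T, g = 4, 2, 3.0, 0.5
M_green, M_red, states = generators(L, N)
period_map = expm(M_red * (1 - g) * T) @ expm(M_green * g * T)

p = np.full(len(states), 1.0 / len(states))   # start from the uniform distribution
for _ in range(200):                          # iterate the one-period propagator
    p = period_map @ p
for s, prob in zip(states, p):
    print(s, round(float(prob), 4))
```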
we consider the exclusion process on a ring with time - dependent defective bonds at which the hopping rate periodically switches between zero and one . this system models main roads in city traffic , intersecting with perpendicular streets . we explore basic properties of the system , in particular the dependence of the vehicular flow on the parameters of signalization as well as on the system size and the car density . we investigate various types of the spatial distribution of the vehicular density , and show the existence of a shock profile . we also measure the waiting time behind traffic lights , and examine its relationship with the traffic flow .
in this section we prove some auxiliary results , that will be used to prove both theorem [ psiteorexist ] and theorem [ teorexist ] .we start by showing that , under suitable assumptions , an approximable quasistatic evolution is automatically an evolution of critical points .[ tre2 ] suppose that , ( j2 ) , and ( j3 ) are satisfied , and let ; \mathcal f) ] be an approximable quasistatic evolution with initial condition and constraint .then , is a critical point of on the affine space for every ] be fixed .for every , let be such that ( to ease the notation , we do not stress the dependence of on ) from the definition of approximate quasistatic evolution we have .then , by continuity of and we obtain .we thus need only to show that . by definition of constrained critical point, there exists , , and such that from ( j2 ) it follows that is locally bounded and therefore , by , we have on the other hand , thanks to and remark [ limir ] , we have therefore , thanks to , , and ( j3 ) thus , there exists such that , up to subsequences , from , , and we get , by the closure property of the subdifferential , that as required . in order to construct an approximate quasistatic evolution , we first introduce an auxiliary minimum problem .let be a fixed time step , and let with .set , and suppose that is a critical point of on the affine space . if property ( j2 ) is satisfied , we define the sequence , , by setting and with chosen such that ( j2 ) holds .note that ( j2 ) , ( ) and ( ) guarantee that mimimizers in are unique .the following lemma gives some properties of the sequence , .[ lemmajnew ] let , ( j1 ) , ( j2 ) , and ( j3 ) be satisfied , let ; \mathcal f) ] and .if , suppose in addition that ( ) , ( ) and ( ) hold true .let and let be a critical point of in the affine space . set and , for every with , let be defined by , and let be a limit point of .then , the function \to \mathcal{e} ] and .if , suppose in addition that ( ) , ( ) and ( ) hold true .let and let be a critical point of in the affine space .let \to \mathcal{e} ] , with as in remark [ limir ] , such that } } | v_{\delta } ( t ) |_{\mathcal{e } } \leq z_1.\ ] ] let with be fixed , and let be the sequence defined by . by property ( j2 ) and remark [ vbar ] ,the functional is strongly convex .therefore , whenever , we have in particular , choosing and recalling the definition of we have by , is the global minimizer of on .therefore , there exists such that so that , by , which gives * step 1 .* we show that there exist positive constants and with the following property : for every and with we have we start observing that therefore , thanks to condition ( j3 ) we have for some positive constant , where is given by remark [ limir ] .thus , from we also have \int_{(i-1 ) \delta}^{i \delta } | \dot{f } ( s ) | _ { \mathcal{f } } \ , \mathrm{d}s.\end{aligned}\ ] ] using last inequality , gives by the absolutely continuity of the integral , there exists a positive constant such that note now that , for every , by the minimality of we have where we also took into account that is nonincreasing . passing to the limit when , up to subsequences , we obtain combining last relation with , we get .* step 2 . 
*we conclude .we start by proving that } } | j ( v_{\delta } ( t))| \leq z_2,\ ] ] for some positive constant ;\mathcal{f})}) ] .let ; x ) \cap bv ( [ 0,t ] ; x) ] and a subsequence , independent of , with \setminus n.\ ] ] we set ) ] .let now be the ( at most countable ) set of discontinuity points of .for \setminus n ] .let and let be a critical point of in the affine space .let \to \mathcal{e} ] , \to \mathcal{e} ] ; * , with as in ( j2 ) ; * } | v_{\delta } ( t-\delta ) - \overline{v}_{\delta } ( t ) |_{\mathcal{e } } \leq r_1 ( \delta) ] with the following property .if are such that then we have by and we have ;\mathcal{f } ) } \quad \text { for every } \delta \in ( 0 , \overline{\delta } ) \text { and } i \in \mathbb{n } \text { with } i \delta \leq t,\ ] ] where does not depend on and . then , by ( j3 ) and the estimate follows . since by it also holds , while by remark [ limir ] , again by ( j3 ) and the claim is proved . ** we prove ( iii ) .first of all , we define for every , and let now with .we need to show that there exists with as such that by construction , we have that is a critical point of the functional on the affine space . therefore , there exists such that using the fact that we can also write note now that , by construction , minimizes the functional on the affine space . therefore , there exists such that now , and are both in the subdifferential of the strongly convex functional at .the constant of strong convexity is furthermore indepedendent of by remark [ vbar ] . with this and step 1we have then , setting by the absolute continuity of the integral we have that as and we conclude .* step 3 . *we prove ( i ) and ( ii ) .we start by defining the function \to \mathcal f' ] , and ; \mathcal{f}') ] such that ; \mathcal{e})\ ] ] and , in addition , \ ] ] up to extracting a further subsequence we can assume that lemma [ lemmadelta ] holds .furthermore , since by ( i ) in theorem [ erfd ] we also have ; \mathcal{f } ' ) } \leq c,\ ] ] without any loss of generality we can assume that for the same subsequence we have ; \mathcal{f}'),\ ] ] for some function ; \mathcal{f}') ] there exists such that in particular , since is bounded in ; \mathcal{f}') ] . without any loss of generality, we can assume that ; \mathcal{e}')\ ] ] for some ; \mathcal{e}') ] with , such that for every \setminus \lambda ] . since for every we have , and recalling that is convex , for every let now \setminus \lambda ] .let and let be a critical point of on the affine space .let \to \mathcal{e} ] and positive constants , and , independent of , such that * for every ] ; * for every ] as \cap [ i \delta , ( i + 1 ) \delta ) , \quad \hbox { for all } i \in \mathbb{n}_0 \text { with } i \delta \leq t,\ ] ] property ( i ) is satisfied . since is bounded by proposition [ z1 ] , with ( j3 ) and we obtain ( ii ) we now divide the remaining part of the proof into two steps. * step 1 .* we show that there exists a constant , depending only on and , such that for every with . to this aim ,let with be fixed , and let be the sequence defined by . by property ( j2 ) and remark [ vbar ] , the functional is strongly convex .therefore , whenever , we have in particular , choosing and recalling the definition of we have by , is the global minimizer of on .therefore , there exists such that so that , by , therefore , by the absolute continuity of , and recalling that is constant in the interval , we have where we used the fact that and . 
observe now that , by definition of , we have .thus , recalling that , we obtain thus , recalling that , by property ( j4 ) we obtain which , together with , gives using young s and hlder s inequality , we get for a suitable constant ( depending only on and ) , where we also used the fact that ;\mathcal f) ] , \to \mathbb{r} ] be measurable functions , for every .for every ] ; * for every open set the set : \mathcal{i}(t ) \cap u \neq \emptyset \} ] and \to \mathcal e ] be such that and is well defined for every \setminus \lambda ] we can extract a subsequence ( possibly depending on ) such that .\ ] ] by ( ii ) of theorem [ teor18 ] , we have .\ ] ] thus , for every ] , we can choose the subsequence in such a way that the maps \to \mathcal f' ] are measurable .let us denote by ( ) the closed ball of ( ) with center at the origin and radius ( ) .applying lemma [ dalgp ] with , and , we have that * is closed for all ] is measurable , where the set is given by thanks to ( * ? ? ?* theorem iii.6 ) , for every ] , let us now show that . since is the of measurable functions , we deduce that it is measurable .moreover , we have in order to get the energy inequality , recall that by ( iii ) of theorem [ teor18 ] ( for and ) we have , for every , taking the limsup in of the previous expression , using fatou s lemma so that ( c ) follows .in the remaining part of the paper , we show how to apply theorem [ teorexist ] to cohesive fracture evolution .we start this section by recalling the model introduced in , where a critical points evolution is obtained by following the scheme described in the first part of the introduction .we conclude the section performing a finite dimensional discretization . in the next sectionwe will then show how it is possible to pass to the limit , thus obtaining a different proof of the existence result in .let be a bounded open set with lipschitz boundary , with .we assume that the reference configuration is the infinite cylinder , and that the displacement has the special form where .this situation is referred to in the literature as _ generalized antiplanar shear_. we assume that the crack path in the reference configuration is contained in , where is a lipschitz closed set such that and , where and are disjoint open connected sets with lipschitz boundary .we will study the energy of a finite portion of the cylinder , obtained by intersection with two horizontal hyperplanes separated by a unit distance .given a time interval ] , imposed on a fixed portion of .we make the assumption that is well separated from , and that . in the framework of linearized elasticity ,the stored elastic energy associated with a displacement is given by the crack in the reference configuration can be identified with the set where denotes the trace on of the restriction of to . in order to take into account the cohesive forces acting between the lips of the crack , according to barenblatt s model we consider a fracture energy of the following form : | ) d \mathcal h^{d-1},\ ] ] where := u^+ - u^- ] , and let be a minimizer for the problem then ( * ? ? ?* proposition 3.1 ) , g'(|[u(t)]|)\operatorname{sign}([u(t)])1_{j_{u(t ) } } + |[\psi]| 1_{j_{u(t)}^c } \right ) d \mathcal h^{d-1 } \geq 0,\ ] ] for all with on .one can see ( * ? ? 
?* proposition 3.2 ) that satisfies if and only if it is a weak solution of |)\operatorname{sign}([u(t ) ] ) & \mbox { on } j_{u(t ) } , \end{cases}\ ] ] where denotes the sign function .any function satisfying or will be referred to as _ critical point of at time . in , a critical points evolution is obtained by following the general ideas given in the introduction , by setting and given a critical point of at time and , the author shows ( * ? ? ?* theorem 4.1 ) the existence of a function \to h^1 ( \omega \setminus \gamma) ] with such that the following properties are satisfied : * _ approximability _ : for every ] the function is a critical point of at time ; * _ energy inequality _ : for every ] , and let be a contrained critical point of at time , under the constraint on . by , there exists a sequence , h^1(\omega)) ] for every and , h^1(\omega)).\ ] ] for every ] such that * ; * is constant in \cap [ i \delta , ( i + 1 ) \delta) ] is an _ approximable quasistatic evolution _ with initial condition and constraint , if for every ] , while stationarity at will be recovered in the limit passage . our goal is now proving the following version of ( * ? ? ? * theorem 4.4 ) , which is the main result of this section .[ quasist ] let , h^1(\omega)) ] with such that the following properties are satisfied : * _ approximability _ : for every ] the function is a critical point of at time under the constraint on ; * _ energy inequality _ : for every ] such that * is an approximable quasistatic evolution with initial condition and constraint ; * stationarity : for every ] defined as \cap [ i \delta , ( i + 1 ) \delta ) , \quad \text { for every } i \in \mathbb{n}_0 \text { with } i \delta \leq t,\ ] ] is a discrete quasistatic evolution with time step , initial condition , and constraint , in the sense of definition [ newquasist ] . at this point, we need to check that theorem [ teor18 ] can still be proven .in particular , we want to define a function \to \mathcal{f}_h ] . since is a discrete quasistatic evolution , for every with there exists such that .then , we define \to \mathcal{f}_h ] such that .\ ] ] let now ] we have recalling that and that the linear operator is independent of time , we have by , for every we have . since for every , by remark [ remarkregular ] we have therefore , which gives ( c ) .finally , property ( d ) directly follows from remark [ boundevolution ] and remark [ dependence ] .we can now pass to the limit as .let ] with such that is well defined for every \setminus \lambda_2 ] be given by theorem [ theoremh ] . we define \setminus \lambda_2 , \vspace{.1 cm } \\ 0 & \text { for every } t \in \lambda_2 , \end{cases}\ ] ] and .\ ] ] by definition of , for every ] we can extract a further subsequence ( not relabelled ) such that for some with . by repeating what was done in the proof of theorem [ teorexist ] , we can show that the subsequence can be chosen in such a way that the map \to \mathcal h^1 ( \omega \setminus \gamma) ] , in order to prove that we first observe that is measurable , since it is the of a sequence of measurable functions .moreover , we have by ( c ) of theorem [ theoremh ] we have , for every and for every ] .let with on .then ( see for instance ) , we can find a sequence such that and with on , for every . 
note that , by , we have therefore , by g'(|[u_{h_{k_j}}(t)]| ) \operatorname{sign}([u_{h_{k_j}}(t)])1_{j_{u_{h_{k_j}}(t ) } } - |[\psi_{h_{k_l}}]| 1_{j_{u_{h_{k_j}}(t)}^c } \right ) d \mathcal h^{d-1 } , \label{eq : crtpconddiscr+2}\end{aligned}\ ] ] for every .by we have define now , for every ] g'(|[u(t)]| ) \operatorname{sign}([u(t)])1_{j_{u(t ) } } - |[\psi_{h_{k_l}}]| 1_{j_{u(t)}^c } \qquad \mathcal{h}^1\text{-a.e . in } \gamma.\ ] ] up to extracting a further subsequence , we can assume that and = [ u ( t ) ] \qquad \mathcal{h}^1\text{-a.e . in } \gamma.\ ] ] now , let us fix such that and hold true .then , for large enough we have ( x ) ) = \operatorname{sign}([u(t ) ] ( x)).\ ] ] therefore , ( x ) g'(|[u_{h_{k_j}}(t ) ] ( x)| ) \operatorname{sign}([u_{h_{k_j}}(t ) ] ( x ) ) 1_{j_{u_{h_{k_j}}(t ) } } ( x ) - |[\psi_{h_{k_l } } ] ( x)| 1_{j_{u_{h_{k_j}}(t)}^c } ( x ) \nonumber \\ & = - [ \psi_{h_{k_l } } ] ( x ) g'(|[u ( t ) ] ( x)| ) \operatorname{sign}([u ( t ) ] ( x ) ) 1_{j_{u ( t ) } } ( x ) - |[\psi_{h_{k_l } } ] ( x)| 1_{j_{u ( t)}^c } ( x ) \label{due}\end{aligned}\ ] ] for -a.e .if , instead , , then recalling that we have ( x ) g'(|[u_{h_{k_j}}(t ) ] ( x)| ) \operatorname{sign}([u_{h_{k_j}}(t ) ] ( x ) ) 1_{j_{u_{h_{k_j}}(t ) } } ( x ) - |[\psi_{h_{k_l } } ] ( x)| 1_{j_{u_{h_{k_j}}(t)}^c } ( x ) \nonumber \\ & \geq - |[\psi_{h_{k_l } } ] ( x)| = - |[\psi_{h_{k_l } } ] ( x)| 1_{j_{u ( t)}^c } ( x ) . \label{tre}\end{aligned}\ ] ] combining and we obtain . thanks to and we can pass to the limit in , obtaining g'(|[u_{h_{k_j}}(t)]| ) \operatorname{sign}([u_{h_{k_j}}(t)])1_{j_{u_{h_{k_j}}(t ) } } - |[\psi_{h_{k_l}}]| 1_{j_{u_{h_{k_j}}(t)}^c } \right ) d \mathcal h^{d-1 } \\ & \geq \int_\gamma \liminf_{j \to \infty } \left ( - [ \psi_{h_{k_l } } ] g'(|[u_{h_{k_j}}(t)]| ) \operatorname{sign}([u_{h_{k_j}}(t)])1_{j_{u_{h_{k_j}}(t ) } } - |[\psi_{h_{k_l}}]| 1_{j_{u_{h_{k_j}}(t)}^c } \right ) d \mathcal h^{d-1 } \\ & \geq \int_\gamma \left ( - [ \psi_{h_{k_l } } ] g'(|[u ( t)]| ) \operatorname{sign}([u ( t)])1_{j_{u ( t ) } } - |[\psi_{h_{k_l}}]| 1_{j_{u ( t)}^c } \right ) d \mathcal h^{d-1}.\end{aligned}\ ] ] finally , passing to the limit as we have g'(|[u ( t)]| ) \operatorname{sign}([u ( t)])1_{j_{u ( t ) } } - |[\psi]| 1_{j_{u ( t)}^c } \right ) d \mathcal h^{d-1 } , \end{aligned}\ ] ] and we conclude .the scope of this section is to practically show that the procedure illustrated in the previous sections can be effectively implemented and produces the desired quasistatic evolution , according to the one described in .we refer the reader to section [ appl ] for the notations used here .we first analyze the results obtained for a one - dimensional problem , when . despite its simplicity ,the one - dimensional setting allows us to give a detailed comparison between numerical results and analytic predictions , since in this case the explicit solutions of are known .we consider the following geometry : , \qquad \ell = 0.5 , \qquad \gamma = \{\ell\ } , \qquad \partial_d \omega = \{0,2\ell\}.\ ] ] we follow the evolution in the time interval = [ 0,1] ] the evolution follows the elastic deformation .after a fracture appears , since the elastic deformation is not any more a critical point of the energy functional ( see ( * ? ? ?* section 9 ) ) .then , the pre - fracture phase starts , showing a bridging force acting on the two lips of the crack . 
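the qualitative behaviour just described ( elastic phase, then a pre-fracture phase with a bridging force, followed by complete rupture ) can be reproduced by hand in this one-dimensional geometry, since a critical point is affine with a common slope (the stress) on both sides of the crack and the stress must balance the cohesive force g'(|[u]|) on the jump. the sketch below does this for an assumed linear-softening law g'(z) = sigma_c*(1 - z/z_c) for z < z_c and zero otherwise, with illustrative values of sigma_c and z_c; this is not the specific density g used in the paper, and the loading is taken monotone.

```python
sigma_c, z_c = 1.0, 2.0      # critical stress and critical opening (illustrative)

def critical_point(w, fractured):
    """Jump z, stress sigma, and updated fracture flag at load w for a bar of
    unit length with a single cohesive crack at the midpoint (monotone loading)."""
    if not fractured and w <= sigma_c:          # elastic phase: u linear, no jump
        return 0.0, w, False
    z = (w - sigma_c) / (1.0 - sigma_c / z_c)   # cohesive branch: g'(z) + z = w
    if z >= z_c:                                # cohesive force exhausted
        return w, 0.0, True                     # complete rupture: stress free
    return z, sigma_c * (1.0 - z / z_c), True   # pre-fracture (bridging force)

fractured = False
for w in [0.25 * k for k in range(11)]:         # boundary datum w(t) = t
    z, sigma, fractured = critical_point(w, fractured)
    print(f"w = {w:4.2f}   jump = {z:5.3f}   stress = {sigma:5.3f}")
```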
at time the cohesive energy reaches its maximum , and the body is completely fractured .it is worth observing that this evolution coincides with the one analytically calculated in ( * ? ? ?* section 9 ) . .] we can also investigate what happens from the energy point of view , see figure [ cohes1d_en ] .we have a smooth transition between the different phases , and the total energy has a nondecresing profile .the beginning of the pre - fractured phase can be observed at the time step ( i.e. at time ) , when the elastic energy ( in red ) starts decreasing and the crack energy ( in blue ) starts increasing . the final phase of complete rupture is then attained at the final time step .although we focused on the time interval ] the evolution follows again the elastic deformation , and a crack appears at .however , immediately after the body is completely fractured , and no cohesive forces appear .it is important to observe that in this case we actually observe an evolution along critical points that are _ not global minimizers_. indeed , the evolution is elastic until , although it would be energetically convenient to completely break the body at some earlier time ( see ( * ? ? ?* section 9 ) for a detailed description of all critical points ) .thus , we see that the algorithm chooses the critical point which is the closest to the initial configuration , even if other options are available , which are more convenient from an energetic point of view . this evolution is particularly supported by the idea that in nature a body does not completely change its configuration crossing high energetic barriers if a stable configuration can be found with less energetic effort . .] also in this case , we can observe the evolution from the energetic viewpoint , see figure [ nocohes1d_en ] . at time , when the elastic deformation ceases to be a critical point , the domain breaks and the total energy decreases up to the value of , so that no bridging force is keeping the two lips together .as we already observed , the evolution along global minimizers would instead lead to a fracture way before the critical load is reached .+ again , the evolution found with our numerical simulation coincides with that one given in ( * ? ? ?* section 9 ) .in particular , our simulations agree with the _ crack initiation criterion _ ( see ( * ? ? ? * theorem 4.6 ) ) , which states that a crack appears only when the maximum sustainable stress along is reached . in this case , this happens at , when the slope of the elastic evolution reaches the value .having a first analytical validation of the numerical minimization procedure , we can now challenge the algorithm in the simulation of two dimensional evolutions .we now consider the domain introduced in section [ sec : discrete ] setting , , and . 
within this choice ,the crack initiation time is reduced exactly of a factor , allowing us to speed up the failure process .since all the computations are performed on a macbook pro equipped with a 2.6ghz intel core i7 processor , 8 gb of ram , 1600mhz ddr3 , the two dimensional simulations are performed only for a qualitative purpose .indeed , we are mainly interested in showing that our algorithm produces physically sound evolutions also in dimension , and when the external displacement is non - trivial .the very sparse discretization of the domain is due to the fact that the minimization in requires a huge computational effort , both in terms of time and memory .indeed , in order to implement more realistic experiments , with a finer discretization , we would need to modify the architecture of the minimization algorithm , in such a way that it may run on parallel cores .we perform two different series of experiments , one with boundary datum \text { and } { { \textbf{x}}}\in \omega,\ ] ] see figure [ cohes2d_1 ] , and the other one with boundary datum \text { and } { { \textbf{x}}}\in \omega,\ ] ] see figure [ cohes2d_2 ] . here, we denoted by the generic point of .we now need to reduce the tolerance of the termination condition of the outer loop of the algorithm , setting it to .indeed , we experimented that for bigger values of this tolerance some instabilities in the solution were introduced , leading to an asymmetric evolution , also in the case of as external displacement , where we expect an invariant behavior with respect to the space variable . at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] * case . * in figure[ cohes2d_1 ] and [ cohes2d_2 ] we report 4 different instances of the evolution for the two different boundary data , when .when the external displacement is , which is constant with respect to the second coordinate , we observe that the evolution is also constant with respect to .for both boundary data , the failure of the body undergoes the three phases of deformation , as it happened in the one dimensional case . * case .* when the boundary datum is , see figure [ noncohes2d_1 ] , the specimen breaks in a brittle fashion , without showing any cohesive intermediate phase .this simulation is actually an evidence that the algorithm still characterizes the correct critical points , following the principle that the domain should not fracture as long as a non - fractured configuration is still a critical point .we conclude commenting the simulation where the boundary datum is with , see figure [ noncohes2d_2 ] . by setting a displacement highly varying with respect to the , we observe that the different phases of the fracture formation can cohexist . at time the domain still presents no fracture , as expected by the previous numerical experiments .then , at , a pre - fracture appeares , but only at those points where the external load is bigger , i.e. 
around .in fact , even at the final time , the domain is not completely fractured .note that , when the boundary datum is , the evolution coincides with the one obtained analytically ( * ? ? ?* section 9 ) . in particular, the fracture appears at , when the slope of the elastic evolution reaches the value and thus the _ crack initiation criterion is satisfied_. at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ] at time instances with external displacement .,title="fig : " ]marco artina and massimo fornasier acknowledge the financial support of the international research training group igdk 1754 optimization and numerical analysis for partial differential equation with nonsmooth structures " of the german science foundation .s. almi : energy release rate and quasistatic evolution via vanishing viscosity in a cohesive fracture model with an activation threshold ._ esaim : control optim .( 2016 ) , published online .doi : 10.1051/cocv/2016014 .f. cagnetti , r. toader : quasistatic crack evolution for a cohesive zone model with different response to loading and unloading : a young measures approach ._ esaim control optim .17 ( 2011 ) , 127_. g. dal maso , a. de simone , f. solombrino : quasistatic evolution for cam - clay plasticity : a weak formulation via viscoplastic regularization and time rescaling , _ calc .partial differential equations _ * 40 * ( 2011 ) , 125 - 181 .
we introduce a novel constructive approach to define time evolution of critical points of an energy functional . our procedure , which is different from other more established approaches based on viscosity approximations in infinite dimension , is prone to efficient and consistent numerical implementations , and allows for an existence proof under very general assumptions . we consider in particular rather nonsmooth and nonconvex energy functionals , provided the domain of the energy is finite dimensional . nevertheless , in the infinite dimensional case study of a cohesive fracture model , we prove a consistency theorem of a discrete - to - continuum limit . we show that a quasistatic evolution can be indeed recovered as a limit of evolutions of critical points of finite dimensional discretizations of the energy , constructed according to our scheme . to illustrate the results , we provide several numerical experiments both in one and two dimensions . these agree with the crack initiation criterion , which states that a fracture appears only when the stress overcomes a certain threshold , depending on the material . * keywords : * quasistatic evolution , cohesive fracture , numerical approximation . * 2000 mathematics subject classification : * 49j27 , 74h10 , 74r99 , 74s20 , 58e30 . introduction in this paper we introduce a novel model of time evolution of physical systems through linearly constrained critical points of the energy functional . in order to include all the applications we have in mind , we consider both dissipative and nondissipative systems . our approach is _ constructive _ , and it can be numerically implemented , as we show with an application to cohesive fracture evolution . since we are eventually interested in being able to perform reliable numerical simulations , we consider at first _ finite dimensional _ systems . we then also give a concrete case study showing how our results can be adapted to describe infinite dimensional systems . below we recall the general framework , to which we intend to contribute . then we present and comment our results , also in comparison with related contributions appeared in recent literature . when describing the behaviour of a physical system , one can try may want to describe it through the time evolution of absolute minimizers of the energy . this modeling has been pursued , for instance , in . from an abstract point of view , this amounts to requiring that a global stability condition is satisfied at every time . in this case , the notion of solution fits into the general scheme of energetic solutions to rate - independent systems ( see ) . however , it is not always realistic to expect the energy to be actually minimized at every fixed time . in fact , global minimization may lead the system to change instantaneously in a very drastic way , and this is something which is not very often observed in nature . for this reason , several authors recently introduced time evolutions only satisfying a _ local stability _ condition for the energy functional . more precisely , given a time - dependent functional \to \mathbb{r} ] which satisfies ,\ ] ] where denotes the subdifferential of with respect to . typically , the existence of such an evolution is proven by a singular perturbation method . more precisely one first considers , for every , the -gradient flow with an initial datum , where is a critical point of . then , passing to the limit as , converges to a function satisfying . 
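to visualize the singular-perturbation strategy just recalled, the following toy computation integrates the eps-gradient flow for a one-dimensional double-well energy tilted by the load t; for small eps the trajectory follows a branch of critical points and jumps only when that branch ceases to exist. the energy, the time horizon and the step sizes are assumptions chosen purely for illustration and do not reproduce the constrained, nonsmooth setting considered in the paper.

```python
import numpy as np

dJ = lambda t, x: x**3 - x - t        # J(t, x) = x^4/4 - x^2/2 - t*x

def viscous_trajectory(eps, t_final=1.0, dt=1e-4, x0=-1.0):
    """Explicit Euler scheme for eps * x'(t) = -dJ/dx(t, x(t))."""
    x = x0
    for t in np.arange(0.0, t_final, dt):
        x -= (dt / eps) * dJ(t, x)
    return x

for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps = {eps:g}: final state {viscous_trajectory(eps):.4f}")
```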
+ we now give a detailed description of our results and we comment on them . in this paper we consider the evolution of a system which is driven by a linear external constraint . this can model several of situations of interest , such as prescribed boundary data , integral constraints , or the coupling with a linear ( partial ) differential equation . we will state and discuss the problem in a discrete ( finite - dimensional ) setting , where our main results are obtained . later on we will also comment on how to possibly recover solutions to problems defined in infinite dimension . let and be two euclidean spaces of dimension and , respectively , with . we want to study the evolution in a time interval ] is imposed . more precisely , the only admissible states at each time ] and for a fixed , one can consider the problem if is a minimizer for , then where is the adjoint of , denotes the range of , and is the frchet subdifferential of at . a _ critical point of on the affine space _ is any vector satisfying where , for every , we set a _ discrete quasistatic evolution _ with time step , initial condition , and constraint , is a right - continuous function \to { { \mathcal{e}}} ] for all with ; * is a critical point of on the affine space for every with . moreover , we say that a measurable function \to { { \mathcal{e}}} ] there exists a sequence and a sequence of discrete quasistatic evolutions with time step , initial condition , and constraint , such that we are now ready to state the main results of the paper ( see theorem [ psiteorexist ] and theorem [ teorexist ] ) . * dissipative systems . * let be a critical point of on the affine space . under suitable assumptions on , , , and ( see theorem [ psiteorexist ] ) we prove that there exist ;\mathcal{e}) ] such that : * is an approximable quasistatic evolution with initial condition and constraint ; * local stability : ] is defined by and denotes the duality product in . any function ;\mathcal{e}) ] and \to \mathcal f' ] ; * energy inequality : the function belongs to and for every ] satisfying ( a ) , ( b ) , and ( c ) a _ weak potential type evolution_. evolutions of this kind ( although without constraints ) have been widely considered in literature , as limits for of gradient flows of the type as in , or of systems with vanishing inertia ( see ) . we observe that the term in ( c ) physically corresponds to the virtual power due to the external constraint . in the case where were smooth , thanks to ( b ) this term could indeed be rewritten as , for any smooth curve with . note also that in our definition the precise value of at every point matters . in particular , the initial condition has a meaning , and the energy inequalities ( c ) and ( c ) need to be satisfied at every time . thus , we are in general not identifying functions differing on null sets , as it is usual in spaces . we point out that the main novelty in our approach is not in the existence results per se , but rather in the constructive algorithmic procedure that we provide , see . notice that , differently from the vanishing viscosity approach , the parameter appearing in remains fixed , throughout all the algorithm . heuristically , our inner loop aims at finding the nearest critical point through a sort of discretized instantaneous generalized gradient flow , see figure [ graphj ] and figure [ graphalgorithm ] . this can be obtained by looking at the long time behavior of the minimizing movements of the functional , for a fixed time step . 
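the inner/outer loop structure described above can be mimicked, in an unconstrained scalar toy problem, by iterating proximal (minimizing-movement) steps at frozen time until they stagnate and only then advancing the load. the energy, the step tau and the tolerances below are illustrative assumptions, and the actual algorithm of the paper additionally enforces the linear constraint and allows a nonsmooth energy.

```python
import numpy as np
from scipy.optimize import minimize_scalar

J = lambda t, x: x**4 / 4 - x**2 / 2 - t * x      # same toy energy as above

def prox_step(t, x_prev, tau):
    """One minimizing movement: argmin_x J(t, x) + |x - x_prev|^2 / (2 tau)."""
    res = minimize_scalar(lambda x: J(t, x) + (x - x_prev) ** 2 / (2 * tau),
                          bounds=(-3.0, 3.0), method="bounded",
                          options={"xatol": 1e-10})
    return res.x

def critical_point_evolution(times, x0=-1.0, tau=0.05, tol=1e-7, max_inner=10_000):
    """Outer loop over the discrete times; the inner loop iterates proximal steps
    at frozen time until the increment (a proxy for the stationarity residual of
    the regularized problem) is negligible."""
    xs, x = [], x0
    for t in times:
        for _ in range(max_inner):
            x_new = prox_step(t, x, tau)
            done = abs(x_new - x) < tol
            x = x_new
            if done:
                break
        xs.append(x)
    return np.array(xs)

print(critical_point_evolution(np.linspace(0.0, 1.0, 21)).round(4))
```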
more details about this point are given in the next subsection . [ figure [ graphj ] and figure [ graphalgorithm ] : tikz source omitted . ] as explained below , we also show how it is possible to apply our results to a model of cohesive fracture evolution . in this specific application , we decided to work in the nondissipative setting , for several reasons . first of all , many results are available in the literature in this case ( see ) , and this allows us to test our methods . secondly , the notions of fracture evolution which are used to model the cohesive dissipative case are not easy to use in applications , since they require rather delicate tools of functional analysis ( a formulation in spaces of young measures , see ) . finally , this is a relevant application for the degenerate case , where estimates are not available . we are aware that a more realistic model should take into account the monotonicity of the crack growth , as done for instance in . this issue can be dealt with in the case of brittle fracture , thanks to the _ jump transfer lemma _ . unfortunately , to date , this tool is not available in the cohesive case ( see for details ) , and even if it existed , it would not help with the lack of estimates .
we would like to emphasize a few relevant and defining aspects of our results . * we prove the existence of an approximable quasistatic evolution for a large class of energy functions and linear operators ( see and conditions ( j1)(j4 ) in section [ setting ] ) . in particular , these include nonsmooth and nonconvex energy functionals . * we stress the fact that is supposed to visit at different times _ critical points _ of the functional over the affine space . this condition is rather general , if compared to the usual requirement of focusing on global minimizers of over . an evolution along critical points is in general more realistic and physically sound . moreover , it is important to notice that the viscous approximation usually does not provide an evolution along critical points , unless one lets the viscosity to vanish . hence , in contrast to this well - established approximation method , our approach starts with a truly consistent approximation from the very beginning . however , the analysis of such an evolution is usually very involved and , in absence of dissipation , may allow for solutions that are just measurable in time . while we shall be content with the generality of our results , we have to live with the fact that our solutions may not be regular in the nondissipative case . * our approach is _ constructive_. this is an important fact since , as we shall emphasize later , the functional may have multiple feasible critical points at the same time . hence , in order to promote uniqueness of evolutions , or even just their measurability , we need to design a proper selection principle . accordingly , we shall design our selection _ algorithm _ in such a way that the selected critical point is the closest - in terms of euclidean distance - to the one chosen at the previous instant of time , unless it is energetically convenient to perform a `` jump '' to another significantly different phase of the system ( see formula ) . this corresponds to a rather common and well - established behavior of several physical ( and non physical ) systems ( see , for instance , ( * ? ? ? * section 9 ) ) . moreover , this method can be easily implemented by means of a corresponding numerical method , as we will show with the above mentioned application to cohesive fracture evolution , see section [ sec : exp ] . thus , our evolution is the result of a constructive machinery ( an algorithm ) , which is designed to emulate physical principles , according to which a critical point is selected in terms of a balance between neighborliness ( accounting the euclidean path length between critical points ) and energy convenience . in our view , this feature is of great relevance as we provide a black box , whose outputs are solutions . * as already mentioned , the proof of the main results ( theorems [ psiteorexist ] and [ teorexist ] ) is given in a finite dimensional setting . this is due to the fact that , in the infinite dimensional case , the subdifferential is in general not closed with respect to the weak convergence in the domain of the energy . such a difficulty could be overcome by requiring that the energy functional has compact sublevels , an assumption which is quite common in literature , provided the domain of the energy is suitably chosen . on the other hand , the choice of a weaker topology for the domain may not always comply well with other conditions on the energy , that we need to prove the existence results . 
this is in particular the case of the key condition ( j3 ) ( see section [ setting ] ) , which allows to control the virtual power , due to the external constraint , in terms of the energy . while not representing a major hurdle in the uncostrained case with time - dependent energy functionals , this issue seems particularly relevant for the problem we study ( see remark [ no - infinitedim ] for further details ) . this motivates our choice of first dealing with a finite dimensional setting , and then extending the results to our model case ( see theorem [ quasist ] ) with a problem - specific technique . * as an important remark , we stress that in general all of the constants appearing in the technical assumptions in section [ setting ] could depend on the dimension of the considered euclidean spaces . thus , our results can be applied to physical systems that can assume ( a discrete or a continuum of ) infinitely many states , provided all the relevant estimates obtained are dimension free . for this reason , we state very clearly which are the parameters affecting the constants that come into play in the crucial proofs ( see remark [ bounds constants ] and remark [ dependence ] ) we give an important application to cohesive fracture evolution ( see section [ appl ] and section [ hto0 ] ) , showing how also infinite dimensional systems can be approached with our method . in particular , we eventually provide an alternative proof of the existence of evolutions of critical points for the cohesive fracture model firstly proven in . * the numerical simulations that we provide in section [ sec : exp ] for the cohesive fracture evolutions agree with physically relevant requirements , such as the _ crack initiation criterion _ ( see ( * ? ? ? * theorem 4.6 ) ) , which states that a crack appears only when the maximum sustainable stress of the material is reached . we also mention that numerical simulations , obtained instead with the vanishing viscosity approach , have already appeared in . to the best of our knowledge , this is the first time that an algorithm providing critical points evolution is introduced in such a generality , especially to treat consistently and stably also nondissipative models . however , it is worth mentioning that our approach , although derived independently , resembles similar methods which have appeared recently in the literature . in , a related scheme has been for instance investigated in order to obtain a general existence result in a nonconvex but smooth setting . the author also takes into account viscous dissipation effects , and provides a constructive time rescaling , where the evolutions have a continuous dependence on time . this idea , in particular , generalises previous approaches for systems driven by nonconvex energy functionals ( ) . moreover , the author shows an approximation result that in spirit is close to our theorem [ quasist ] , and to previous results in . however , the results in are obtained under the assumption of regularity of the energy functional , and in an _ unconstrained _ setting . in particular , stability of critical points after passing to the limit is recovered through a very strong assumption ( ( * ? ? ? * ( 8) , theorem 2.3 ) ) , which would seem quite unnatural for a constrained nonsmooth problem . 
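the selection rule sketched in the list above can be made concrete with a short toy implementation. in the python sketch below, the energy, the load, the margin `jump_gain` and the root-finding strategy are all illustrative placeholders: they are not the functional, the affine constraint or the jump criterion of the paper, only a minimal stand-in that keeps the critical point closest to the previous state unless another one is lower in energy by more than a fixed margin.

```python
# Illustrative sketch of an incremental critical-point selection scheme in the
# spirit described above.  The energy, the load, the margin `jump_gain` and
# the root-finding strategy are placeholders, not the paper's functional or
# its constraint.
import numpy as np
from scipy.optimize import brentq

def energy(u, t):                       # toy double-well energy with a moving load
    return 0.25 * u**4 - 0.5 * u**2 - t * u

def grad(u, t):
    return u**3 - u - t

def critical_points(t, grid=np.linspace(-3.0, 3.0, 200)):
    """Bracket sign changes of the gradient and refine them with brentq."""
    roots = []
    for a, b in zip(grid[:-1], grid[1:]):
        if grad(a, t) * grad(b, t) < 0:
            roots.append(brentq(grad, a, b, args=(t,)))
    return np.array(roots)

def select(prev, t, jump_gain=0.2):
    """Keep the critical point closest to `prev`, unless another candidate
    lowers the energy by more than `jump_gain` (a stand-in for the energetic
    convenience criterion of the selection rule)."""
    cands = critical_points(t)
    closest = cands[np.argmin(np.abs(cands - prev))]
    best = cands[np.argmin(energy(cands, t))]
    if energy(closest, t) - energy(best, t) > jump_gain:
        return best                     # energetically convenient "jump"
    return closest                      # otherwise stay on the same branch

u = -1.0                                # initial critical point at t = 0
for t in np.linspace(0.0, 1.0, 51):     # discrete quasistatic evolution
    u = select(u, t)
```

the loop at the bottom plays the role of a discrete quasistatic evolution on a uniform time grid.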
concerning other contributions , we also mention that a very general incremental minimization scheme , involving a quadratic correction with a fixed parameter , has been just proposed in , even in connection with abstract dissipation distances in metric spaces . another algorithm , showing some analogies to , has furthermore been recently considered for a case study of phase field fracture coupled with damage in . in this case , the energy is nonconvex , but separately convex in the two state variables . therefore , instead of adding a regularization , the authors define the discretized evolution through fixed points of an alternate minimization scheme . also in this case a time reparametrization , where a full energy - dissipation balance holds , is provided . the exploitation of similar techniques in connection with our problem is another interesting issue that we plan to pursue in the future . the plan of the paper is the following . in section [ setting ] we state the main results of the paper , theorem [ psiteorexist ] and theorem [ teorexist ] , whose proofs are given in section [ proofmain1 ] and section [ secproof ] , respectively . section [ appl ] is devoted to the description of the cohesive fracture model introduced in . in the same section we introduce a space mesh and spatially discretize the problem . in section [ hto0 ] we pass to the limit as the size of the mesh tends to , thus obtaining a new proof of the result in ( * ? ? ? * theorem 4.4 ) . finally , numerical simulations are given in section [ sec : exp ] . setting of the problem and main result [ setting ] throughout all the paper , we use the notation , and we denote by the standard lebesgue -dimensional measure in . let and be two euclidean spaces with dimension and , respectively , with . we consider an energy function , a linear operator , and a time dependent constraint \to \mathcal f ] of functions of bounded variation from a time interval ] , the -variation of is defined as )=\sup\left\{\sum_{i=0}^k \left(\psi(u(t_i))-\psi(u(t_{i-1}))\right ) : a = t_0<t_1<\dots < t_k = b,\ , k\in \mathbb{n}\right\}\,.\ ] ] if one takes in the above definition , one retrieves the usual definition for the pointwise variation of a function . for all the equality )=\mathrm{var}_{\psi}(u ; [ a , c])+\mathrm{var}_{\psi}(u ; [ c , b])\ ] ] immediately follows from the definition and the subadditivity of . if is additionally absolutely continuous , it is well known that )=\int_a^b \psi(\dot u(s))\,\mathrm{d}s\,.\ ] ] in the following , together with rate independent evolutions , in which dissipation is present , we will also consider _ weak potential type evolutions _ , where there is no dissipation . from the technical point of view , the absence of dissipation translates into a lack of compactness . for this reason , we need an additional assumption to treat this case . we shall assume * there exists a positive constant such that for every although condition ( j4 ) above might seem quite technical , it is automatically satisfied when . indeed , in this case is single valued at every , and coincides with the differential . then , denoting by the lipschitz constant of and using , one has at this point , simply follows by the cauchy - schwarz inequality . we will show in a concrete example that condition ( j4 ) can also be satisfied when ( see section [ conditions a0-a3 ] ) . before stating our main results , we give again and in more detail the notion of discrete and approximable quasistatic evolution , respectively . 
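as a small numerical check of the ψ-variation just introduced, the sketch below compares the partition sum with the integral formula stated for absolutely continuous functions, assuming (as is standard for dissipation potentials) that ψ is evaluated on the increments u(t_i) - u(t_{i-1}); the choice ψ(x) = |x| and the path u(t) = sin(t) are only examples.

```python
# Numerical check of Var_psi(u;[a,b]) = \int_a^b psi(du/ds) ds for an
# absolutely continuous path, with psi(x) = |x| (so the psi-variation is the
# usual pointwise variation).  The path u(t) = sin(t) is only an example.
import numpy as np

def psi(x):
    return np.abs(x)            # a positively homogeneous dissipation potential

t = np.linspace(0.0, 2.0 * np.pi, 20001)
u = np.sin(t)                   # an absolutely continuous example path

partition_sum = np.sum(psi(np.diff(u)))              # sup over partitions, approximated on a fine grid
integral = np.sum(psi(np.cos(t[:-1])) * np.diff(t))  # \int psi(u') dt, equal to 4 here

print(partition_sum, integral)                       # both close to 4.0
```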
when this is possible , in the following we treat at the same time the cases with and without dissipation . to this aim , we introduce a switching parameter , in such a way that corresponds to the situation without dissipation , while in the case dissipation is present . [ defdiscr ] let , let be a critical point of on the affine space , and let . a _ discrete quasistatic evolution _ with time step , initial condition , and constraint is a right - continuous function \to { { \mathcal{e}}} ] for all with ; * is a critical point of on the affine space for every with . [ evolution ] let and let be a critical point of on the affine space . a bounded measurable function \to { { \mathcal{e}}} ] , we are now ready to state our main results . the first one is an existence result for rate independent evolutions . [ psiteorexist ] let , and suppose that , ( j1 ) , ( j2 ) , and ( j3 ) are fulfilled , and that ( ) , ( ) , and ( ) hold true . let ; \mathcal f) ] and ;\mathcal{f}') ] ; * the function belongs to , and for every ) \leq j ( v ( t_1 ) ) + \int_{t_1}^{t_2 } \langle q ( s ) , \dot{f } ( s ) \rangle_{\mathcal{f } } \ , \mathrm{d}s.\ ] ] in the case without dissipation we need to add the additional assumption ( j4 ) , and we obtain measurability , but in general no further regularity , of the evolution . [ teorexist ] let , and suppose that , ( j1 ) , ( j2 ) , ( j3 ) , and ( j4 ) are satisfied . let ; \mathcal f) ] and \to \mathcal f' ] ; * the function belongs to , and for every ] of ] , assume additionally that is absolutely continuous in ] with . then clearly , only the -inequality in or has to be shown . we begin by noticing that , since is locally lipschitz by remark [ j - prop ] , under our assumption also the map is absolutely continuous . let now $ ] be a common differentiability point for , and . now , for we have by definition of subdifferential if , we can take we have and differentiating so that the conclusion follows by integration between and . if , we can take , with in to obtain differentiating we have for it holds and therefore therefore , by integration between and , thanks to , we get . now , holds as an equality , and so does . since , the inclusion and the equality together imply . since , by construction , with , this concludes the proof .
knowledge of chemical composition is an important aspect in the study of materials . as a surface analytical technique , secondary ion mass spectrometry ( sims ) ,is employed for composition profiling of materials in widely varying fields like semiconductor industry and nuclear technology .this technique has high sensitivity , surface specificity , and high dynamic range .however , it lacks quantification because of the dependence of yield of secondary ions on the composition ( matrix ) of the surface from which it is ejected. this artefact of the technique is called as matrix effect . in semiconductor industry , the number of commonly analyzed semiconductor materials is limited .hence , semiconductor research uses matrix - matched standards to quantify sims measurements. however , in a general case , like a compound multilayer or an alloy with oxidized surface , the composition is likely to vary over a vast range in the volume analyzed .such specimens would require a very large number of standards matching each of those compositions to quantify the data .hence , employing matrix - matched standards in such cases is near to impossible . the matrix effect in the intensities of xcs secondary ions measured with cs primary ions ( where x stands for an atom from the specimen and nis equal to 1 or 2 ) was shown by gao to be much lower ( in some cases , even by orders of magnitude ) than that in secondary elemental ions . however , there is considerable deviation of the composition computed from these xcs signals from the actual composition . in spite of developments in understanding the formation process of these species ( for example, ref ) , a gap remains in this approach in reaching complete quantification .the current work provides an incremental step in filling this gap .the xcs secondary ions are understood to be formed by the combination of a resputtered cs ion with a sputtered neutral atom from the specimen .since secondary neutrals are formed as different atomic clusters , the intensities of the cs complexes of all these clusters should be combined to enhance the quantitativeness of xcs sims .this was earlier verified with a limited number of cs complexes . however , testing this with all the cs complexes involves quantitative measurement of the intensities of the species constituting a mass spectrum and then computing the composition from those intensities .in this report , the details of this process are presented by analyzing a mass spectrum obtained from a sample of d9 steel , produced by m / s .valinox , france .d9 is the steel used in fast reactors as a construction material of core components because of its resistance to void swelling. it was selected as a material for test in this report , because it is a multi - component alloy with known composition .the implementation of the above theory involves setting up and solving systems of linear equations to know the composition of the mass spectrum .a mass spectrum is analyzed completely by considering its peaks one after another . 
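a toy-scale preview of the steps detailed in the next paragraphs is given below: a fingerprint matrix is built for two overlapping cs complexes (fecs+ and crcs+), noisy peak heights are generated, the species intensities are recovered by non-negative least squares, and a composition is computed from them. the isotopic abundances are tabulated values, but the species list, the intensities and the noise level are invented for the example and do not come from the d9 spectrum analysed later; nnls is used here only as one simple way of imposing the positivity requirement discussed below.

```python
# Toy illustration of the fingerprint / least-squares step described in the
# text.  The isotopic abundances are tabulated values; the "measured" peak
# heights, the species list and the noise level are invented for the example.
import numpy as np
from scipy.optimize import nnls

masses = np.arange(183, 192)                     # peaks considered (u)

# normalised isotopic patterns, combined with 133Cs into fingerprint columns
fe = {54: 0.05845, 56: 0.91754, 57: 0.02119, 58: 0.00282}
cr = {50: 0.04345, 52: 0.83789, 53: 0.09501, 54: 0.02365}

def fingerprint(isotopes, cs_mass=133):
    col = np.zeros_like(masses, dtype=float)
    for m, ab in isotopes.items():
        col[masses == m + cs_mass] += ab
    return col

A = np.column_stack([fingerprint(fe), fingerprint(cr)])   # species = FeCs+, CrCs+

true_I = np.array([9.0e5, 2.4e5])                # invented species intensities
b = A @ true_I + np.random.default_rng(0).normal(0.0, 500.0, masses.size)

# least-squares solution of A x = b with positivity imposed
I_hat, _ = nnls(A, b)

# composition from the Cs complexes (Cs excluded from the atom count); with
# one element atom per complex the general formula reduces to a simple ratio
conc = I_hat / I_hat.sum()
print(dict(zip(["Fe", "Cr"], conc.round(3))))
```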
with a peak chosen for analysis , a probable species with its mono - isotopic mass equal to the mass of that peakis first identified .this species could be mono atomic or a poly atomic cluster .its fingerprint mass spectrum has to be constructed and matched with the experimental mass spectrum .if this species contains number of elements , the number of different isotopic combinations forming this species is given by where , is the number of atoms of the element in the species and is the number of isotopes of the element . those isotopic combinations with differences in mass , which are not discernible by the resolving power of the mass analyzer , appear as a single peak in the mass spectrum .( for example , in the cluster species crfe , the combinations , and differ by a mass of 0.003u .a typical mass spectrometer with mass resolving power ( mrp ) of 500 can not resolve these two combinations since the mrp required to resolve these species is 35428 . ) hence , generally in a mass spectrum , the number of peaks corresponding to a cluster species is fewer than .this group of peaks from the mass spectrometer forms the fingerprint spectrum of the species after normalizing the sum of their intensities to unity .the fingerprint spectra of many species are likely to overlap with each other .the difficulty of identifying and measuring the intensities of such species in the measured spectrum depends on the number and complexity of their fingerprint spectra .the overlapping fingerprint spectra in the measured spectrum are mathematically represented by a system of linear equations .this system of equations contains one equation for every peak in the spectrum that is the resultant of overlap . for number of species ,overlapping with each other , constituting number of peaks , the system of equations is where runs from to representing the number of equations and is the measured intensity of the peak . is the intensity of the species , to be solved for .if the fingerprint mass spectrum of the species has a peak at the mass of the peak , is the intensity of that peak of the fingerprint mass spectrum .otherwise , is zero . is the noise that occurred while measuring , which is not known .this term was added for completeness of mathematical description .however , the above set of equations has to be solved without the knowledge of this term .the above set of equations can be represented in matrix form , after neglecting the noise term as , where is the matrix containing the elements of eqn.[eq : equations to be solved ] . is the solution vector and is the input vector containing and of eqn.[eq : equations to be solved ] as their elements respectively .the solution providing the least squared deviation between the l.h.s . and r.h.s .of the above equation is obtained by solving the normal form of the above equation , where is a matrix and is a vector . means transpose of . in a few circumstances ,the solution may turn out to be erroneous ( including negative values for the intensities of a few species ) due either to the intensity of noise or to wrong choice of species or both .if the error in the solution is due to the noise in the data , the solution can be optimized by following a regularization algorithm such as the iterative algorithm discussed by gautier _ _ et al__ to deconvolve instrumental broadening from sims depth profiles .the convolution matrix , solution of deconvolution and measured depth profile found in ref . 
are to be replaced by the isotope abundance matrix , solution for intensities and the measured mass spectrum respectively to regularize the solution of eqn [ eq : matrix form of equations to be solved ] . out of the two regularization conditions imposed in ref . , the condition of positivity has to be retained while rejecting the condition of smoothness since the intensities need not vary smoothly from species to species .if the choice of species is wrong , the intensities of one or more species in the solution may remain to be negative even after the regularization process . in such cases ,other probable species have to be tried out and the process has to be repeated until all the correct species are identified .the concentration of an element is computed as the fraction of the number of atoms of that element in the cs complexes to the total number of atoms in all of the cs complexes excluding atoms of cs and any other element like o that might originate from the instrument, where is the concentration of the element , is the number of species containing the element , is the number of atoms of element in the such species , is the intensity of that ( ) species , is the total number of species identified , is the number of elements ( excluding cs and any other element as mentioned above ) in the species , is the number of atoms of the element in the species and is the intensity of the species . the process of identifying and measuring the intensities of the species constituting a mass spectrum will be discussed here with a sims mass spectrum obtained from a d9 specimen .this mass spectrum was obtained using a sims ( cameca ims-4f ) system by employing a , primary ion beam for sputtering the specimen .the primary ion beam was rastered over a square area of width and the secondary ions were collected from the central circular area of diameter .the mass spectrometer was operated with a mass resolving power of and energy band - pass of for the secondary ions .eleven data points were recorded around each integral mass number to construct the peaks .a portion of the raw mass spectrum in addition to the peak values as calculated below is shown in figure [ fig : sample mass spectrum ] .the sims mass spectral peaks have the shape of the convolution of the ion - beam crossover with the exit slit of the mass analyzer .the data required for analysis are the heights of these peaks .since the data is subject to noise and the peaks have a sparse density of data , the highest raw data point of many of the peaks do not represent the apex position .hence , the spectrum was smoothened by 3-point weighted average with the central point receiving a weight of 50% and the neighboring points a weight of 25% .after smoothening , the heights of the peaks decreased proportionately and almost all of the peaks obtained a unique apex point as shown in the figure .the apex values of most of the smoothened peaks could be considered as the heights of the respective peaks .the heights of the remaining peaks were estimated manually .the peak values so estimated are shown as a bar graph in figure [ fig : full mass bargraph ] .a portion of raw mass spectrum showing extraction of peak values , scaledwidth=80.0% ] the mass spectrum over the complete mass range , constructed using the peak values picked up from the raw mass spectrum .the names of a few of the prominent species are labeled to the corresponding peaks.,scaledwidth=80.0% ] the two strongest peaks in this mass spectrum , at mass numbers 133u and 266u , are those of cs 
and cs respectively .the next highest peak is at mass number , 189u that is the mass of . setting up and solving eqn.[eq : matrix form of equations to be solved ] for fecs alone results in a slightly higher estimate for the intensity of fecs , to minimize the squared deviation from the measured intensity at mass number 187u that is the mass of as well as that of .the higher estimate for fecs compensates the absence of in the equation for mass number 187u . after including crcs in the equation ,this error in the estimate for fecs is corrected . in this manner ,the probable species contributing to all the peaks can be tried out one after the other until all the peaks are successfully characterized .the final solution is not affected by the order in which the the peaks of the mass spectrum are chosen for analysis .in the above mass spectrum , twenty four species were identified overlapping with each other spanning over the mass range from 177u to 208u , as shown in figure [ fig : solution for species around csfe ] . in this figure ,the measured spectrum is shown wider in the background and the constituent fingerprint spectra multiplied by their respective intensities are shown narrower in the foreground . with the perfect solution , the sum of the constituent spectra should be as equal to the measured spectrum as possible as shown in figure [ fig : solution for species around csfe ] .the analysis of the complete mass spectrum resulted in identification of 165 species that are tabulated in table [ table : species list ] .most of them are cs complexes that are required for the proposed quantification process . , labelled here as meas . ms " , and its composition computed using_ eqn _ [ eq : normal form of equations to be solved],scaledwidth=80.0% ] [ ! htbp ].species identified as constituents of the complete mass spectrum shown in figure [ fig : full mass bargraph ] [ cols="<,>,<,>,<,>",options="header " , ]the technique of combining all xcs complexes ( where x is any cluster ) to compute composition is verified to advance the current status of quantification with cs complexes a step further towards better quantification by two means .one is by the inclusion of atoms in the left out cs complexes .the other is by the tendency of the disproportionalities of the intensities of xcs and xcs species to the concentration of x to counter each other .delineation of species constituting a mass spectrum , which is a prerequisite for this quantification technique , is aided by the mass spectral analysis described here .the authors thank dr .r. ramaseshan for his help in proof reading and valuable suggestions. 99 takayuki aoyama , hiroko tashiro , and kunihiro suzuki , j. electrochem .soc . * 146 ( 5 ) * , 1879 ( 1999 ) .n. sivai bharasi , k. thyagarajan , h. shaikh , m. radhika , a.k .balamurugan , s. venugopal , a. moitra , s. kalavathy , s. chandramouli , a.k .tyagi , r.k .dayal , and k.k .rajan , metallurgical and materials transactions a * 43 * , 561 ( 2012 ) . c. david , b.k .panigrahi , s. balaji , a.k .balamurugan , k.g.m .nair , g. amarendra , c.s .sundar , baldev raj , journal of nuclear materials * 383 * , 132 ( 2008 ) .v. r. deline , william katz , c. a. evans jr . ,peter williams , appl .. lett . * 33 * , 832 ( 1978 ) . ming l. yu , wilhad reuter , j. vac .sci . technol .* 17 * , 36 ( 1980 ) .d. p. leta , g. h. morrison , anal .chem . * 52 ( 3 ) * , 514 ( 1980 ) .y. gao , j. appl .phys . * 64 * , 3760 ( 1988 ) .y. gao , y.marie , f. 
saldi , h.n .migeon , proceedings of the 9 international conference on secondary ion mass spectrometry - sims - ix , yokohama , japan , 7 - 12 november , 406 ( 1993 ) .charles w. magee , william l. harrington , ephraim m. botnick , international journal of mass spectrometry and ion processes * 103 * , 45 ( 1990 ) .h. gnaser , international journal of mass spectrometry and ion processes * 174 * 119 ( 1998 ) howard e. smith , bang - hung tsao , james scofield , materials science forum * 527 - 529 * , 629 ( 2006 ) .h. gnaser , h. oechsner , surface science letters * 302 * , l289 ( 1994 ) .t. mootz , f. adams , international journal of mass spectrometry and ion processes * 152 * 209 ( 1996 ) .k. wittmaack nuclear instruments and methods in physics research b * 64 * , 621 ( 1992 ) .t. wirtz , h .-migeon , h. scherrer , international journal of mass spectrometry * 225 * , 135 ( 2003 ) .kudriavtsev , a. villegas , a. godines , r. asomoza .sci . * 206 * , 187 ( 2003 ) .n. mine , b. douhard , l. houssiau , applied surface science * 255 * , 973 ( 2008 ) .j. goschnick , m. fichtner , m. lipp , j. schuricht , h.j .ache , applied surface science * 70/71 * , 63 ( 1993 ) .t. welzel , s. mandl , k. ellmer , j. phys .phys . * 47 * , 065204 ( 2014 ) .belykh , v.i .matveev , i.v .veryovkin , a. adriaens , f. adams , nuclear instruments and methods in physics research b * 155 * , 409 ( 1999 ) .balamurugan , s. rajagopalan , s. dash , a.k.tyagi , proceedings of the 23 international conference on surface modification technologies - smt - xxiii , mamallapuram , india , november , 537 ( 2009 ) b. gautier , r. prost , g. prudon , j. c. dupuy , surface and interface analysis , * 24 * 733 ( 1996 ) . b. gautier , j. c. dupuy , r. prost , g. prudon , surface and interface analysis , * 25 * , 464 ( 1997 ) .
this work highlights the possibility of improving the quantification aspect of cs - complex ions in sims ( secondary ion mass spectrometry ) by combining the intensities of all possible cs - complexes . identification of all possible cs - complexes requires quantitative analysis of the mass spectrum from the material of interest . the important steps of this mass spectral analysis include constructing fingerprint mass spectra of the constituent species from the table of isotopic abundances of the elements , constructing the system(s ) of linear equations that yield the intensities of those species , solving them , evaluating the solutions and employing a regularization process when required . these steps are comprehensively described and the results of their application to a sims mass spectrum obtained from d9 steel are presented . it is demonstrated that the results from the summation procedure , which covers the entire range of sputtered clusters , are superior to the results from a single cs - complex per element . the result of employing a regularization process in solving a mass spectrum from an ss316ln steel specimen is provided to demonstrate the necessity of regularization .
qr models have become increasingly popular since the seminal work of .in contrast to the mean regression model , qr belongs to a robust model family , which can give an overall assessment of the covariate effects at different quantiles of the outcome . in particular , we can model the lower or higher quantiles of the outcome to provide a natural assessment of covariate effects specific for those regression quantiles .unlike conventional models , which only address the conditional mean or the central effects of the covariates , qr models quantify the entire conditional distribution of the outcome variable .in addition , qr does not impose any distributional assumption on the error , except requiring the error to have a zero conditional quantile .the foundations of the methods for independent data are now consolidated , and some statistical methods for estimating and drawing inferences about conditional quantiles are provided by most of the available statistical programs ( e.g. , r , sas , matlab and stata ) .for instance , just to name a few , in the well - known r package ` quantreg ( ) ` is implemented a variant of the simplex ( br ) for linear programming problems described in , where the standard errors are computed by the rank inversion method . another method implemented in this popular package is lasso penalized quantile regression ( lpqr ) , introduced by , where a penalty parameter is specified to determine how much shrinkage occurs in the estimation process .qr can be implemented in a range of different ways . provided an overview of some commonly used quantile regression techniques from a `` classical '' framework . considered median regression from a bayesian point of view , which is a special case of quantile regression , and discussed non - parametric modeling for the error distribution based on either plya tree or dirichlet process priors .regarding general quantile regression , proposed a bayesian modeling approach by using the ald , developed bayesian semi - parametric models for quantile regression using dirichlet process mixtures for the error distribution , studied quantile regression for longitudinal data using the ald .recently , developed a simple and efficient gibbs sampling algorithm for fitting the quantile regression model based on a location - scale mixture representation of the ald .an interesting aspect to be considered in statistical modelling is the diagnostic analysis .this can be carried out by conducting an influence analysis for detecting influential observations .one of the most technique to detect influential observations is the case - deletion approach .the famous approach of cook ( 1977 ) has been applied extensively to assess the influence of an observation in fitting a statistical model ; see and the references therein .it is difficult to apply this approach directly to the qr model because the underlying observed - data likelihood function is not differentiable at zero . presents an approach to perform diagnostic analysis for general statistical models that is based on the q - displacement function .this approach has been applied successfully to perform influence analysis in several regression models , for example , considered in multivariate distribution , obtained case - deletion measures for mixed - effects models following the s approach and in we can see some results about local influence for mixed - effects models obtained by using the q - displacement function . 
taking advantage of the likelihood structure imposed by the ald , the hierarchical representation of the ald , we develop here an em - type algorithm for obtaining the ml estimates at the level , and by simulation studies our em algorithm outperformed the competing br and lpqr algorithms , where the standard error is obtained as a by - product .moreover , we obtain case - deletion measures for the qr model .since qr methods complement and improve established means regression models , we feel that the assessment of robustness aspects of the parameter estimates in qr is also an important concern at a given quantile level .the rest of the paper is organized as follows .section 2 introduces the connection between qr and ald as well as outlining the main results related to ald .section 3 presents an em - type algorithm to proceed with ml estimation for the parameters at the level .moreover , the observed information matrix is derived .section [ sec diagnostic ] provides a brief sketch of the case - deletion method for the model with incomplete data , and also develop a methodology pertinent to the ald .sections [ sec application ] and [ sec simulation study ] are dedicated to the analysis of real and simulated data sets , respectively .section 6 concludes with a short discussion of issues raised by our study and some possible directions for the future research .even though considerable amount of work has been done on regression models and their extensions , regression models by using asymmetric laplace distribution have received little attention in the literature . only recently, the a study on quantile regression model based on asymmetric laplace distribution was presented by tian et al .( 2014 ) who a derived several interesting and attractive properties and presented an em algorithm . before presenting our derivation ,let us recall firstly the definition of the asymmetric laplace distribution and after this , we will present the quantile regression model . as discussed in , we say that a random variable y is distributed as an ald with location parameter , scale parameter and skewness parameter , if its probability density function ( pdf ) is given by where is the so called check ( or loss ) function defined by , with denoting the usual indicator function .this distribution is denoted by .it is easy to see that follows an exponential distribution .a stochastic representation is useful to obtain some properties of the distribution , as for example , the moments , moment generating function ( mgf ) , and estimation algorithm . for the ald , and presented the following stochastic representation : let and be two independent random variables .then , can be represented as where and , and denotes equality in distribution .figure [ fig : ald ] shows how the skewness of the ald changes with altering values for .for example , for almost all the mass of the ald is situated in the right tail . for ,both tails of the ald have equal mass and the distribution then equals the more common double exponential distribution .in contrast to the normal distribution with a quadratic term in the exponent , the ald is linear in the exponent .this results in a more peaked mode for the ald together with thicker tails . on the other hand ,the normal distribution has heavier shoulders compared to the ald . 
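a quick way to see the mixture structure and the quantile property in action is to draw from the ald through the exponential-normal representation. in the sketch below the constants follow the standard parametrisation with a unit-mean exponential mixing variable; the paper's own display may differ by scale factors, so this should be read as a generic sketch rather than a transcript of the representation above.

```python
# Sampling from ALD(mu, sigma = 1, p) through the exponential-normal mixture
# and checking the quantile property P(Y <= mu) = p.  The constants theta and
# tau follow the standard parametrisation (z exponential with unit mean); the
# paper's own parametrisation may differ by scale factors.
import numpy as np

def rald(n, mu=0.0, p=0.5, rng=np.random.default_rng(1)):
    theta = (1.0 - 2.0 * p) / (p * (1.0 - p))
    tau2 = 2.0 / (p * (1.0 - p))
    z = rng.exponential(1.0, n)                  # mixing variable
    u = rng.normal(0.0, 1.0, n)
    return mu + theta * z + np.sqrt(tau2 * z) * u

for p in (0.1, 0.5, 0.9):
    y = rald(200_000, mu=0.0, p=p)
    print(p, np.mean(y <= 0.0))                  # close to p
```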
] from ( [ st - ald ] ) , we have the hierarchical representation of the ald , see , given by this representation will be useful for the implementation of the em algorithm .moreover , since , then one can derive easily the pdf of .that is , the pdf in ( [ pdfal ] ) can be expressed as where , and , with being the modified bessel function of the third kind .it easy to see that that the conditional distribution of , given , is . here, denotes the generalized inverse gaussian ( gig ) distribution ; see for more details .the pdf of gig distribution is given by the moments of can be expressed as =\left(\frac{a}{b}\right)^{k}\frac{k_{\nu+k}(ab)}{k_{\nu}(ab)},\,\,\ k\in \mathbb{r}.\ ] ] some properties of the bessel function of the third kind that will be useful for the developments here are : ( i ) ; ( ii ) ; ( iii ) for non - negative integer , .a special case is .+ let be a response variable and a vector of covariates for the observation , and let be the quantile regression function of given , .suppose that the relationship between and can be modeled as , where is a vector of unknown parameters of interest .then , we consider the quantile regression model given by where is the error term whose distribution ( with density , say , ) is restricted to have the quantile equal to zero , that is , .the error density is often left unspecified in the classical literature .thus , quantile regression estimation for proceeds by minimizing where is as in ( [ pdfal ] ) and is the quantile regression estimate for at the quantile .the special case corresponds to median regression .as the check function is not differentiable at zero , we can not derive explicit solutions to the minimization problem . therefore, linear programming methods are commonly applied to obtain quantile regression estimates for .a connection between the minimization of the sum in ( [ losseq ] ) and the maximum - likelihood theory is provided by the ald ; see .it is also true that under the quantile regression model , we have the above result is useful to check the model in practice , as will be seen in the application section . now , suppose are independent observations such as . then , from ( [ pdfals ] ) the log likelihood function for can be expressed as where , with is a constant does not depend on and with and . note that if we consider as a nuisance parameter , then the maximization of the likelihood in ( [ likel ] ) with respect to the parameter is equivalent to the minimization of the objective function in ( [ losseq ] ) . and hence the relationship between the check function and ald can be used to reformulate the qr method in the likelihood framework . the log likelihood function is not differentiable at zero .therefore , standard procedures the estimation can not be developed following the usual way .specifically , the standard errors for the maximum likelihood estimates is not based on the genuine information matrix . to overcome this problem we consider the empirical information matrix as will be described in the next subsection . in this section, we discuss an estimation method for qr based on the em algorithm to obtain ml estimates .also , we consider the method of moments ( mm ) estimators , which can be effectively used as starting values in the em algorithm . here, we show how to employ the em algorithm for ml estimation in qr model under the ald . 
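as a preview of what the resulting iteration looks like in practice, the sketch below implements the update of the regression coefficients with the scale parameter held fixed at 1, under the unit-mean exponential mixture parametrisation. for this case the bessel-function ratio in the e-step collapses (the order is 1/2), and the conditional expectation of the inverse mixing variable reduces to a weight proportional to the reciprocal of the absolute residual; this is a simplified stand-in, not the aldqr() implementation, and the updates of the scale parameter and the standard errors are omitted.

```python
# Stripped-down EM-type update for beta (sigma held fixed at 1), using the
# closed-form E-step weight E[1/z_i | y_i] that the GIG conditional gives when
# the order is 1/2.  A sketch, not the aldqr() implementation.
import numpy as np

def em_qr(X, y, p, iters=200, eps=1e-6):
    theta = (1.0 - 2.0 * p) / (p * (1.0 - p))
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        w = 1.0 / (p * (1.0 - p) * np.maximum(np.abs(r), eps))  # E[1/z_i | y_i]
        XtW = X.T * w                                    # X^T W
        beta = np.linalg.solve(XtW @ X, XtW @ y - theta * X.sum(axis=0))
    return beta

rng = np.random.default_rng(3)
n = 500
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
print(em_qr(X, y, p=0.5))          # close to the median-regression fit
```

at a fixed point the weighted normal equations reduce to the usual quantile-regression estimating equations, which is why the printed fit is close to the median regression of the simulated data.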
from the hierarchical representation ( [ hierar1])-([hierar2 ] ) , the qr model in ( [ qrmodel ] )can be presented as where and are as in ( [ st - ald ] ) .this hierarchical representation of the qr model is convenient to describe the steps of the em algorithm .let and be the observed data and the missing data , respectively .then , the complete data log - likelihood function of , given , ignoring additive constant terms , is given by , where for . in what follows the superscript indicates the estimate of the related parameter at the stage of the algorithm .the e - step of the em algorithm requires evaluation of the so - called q - function ] means that the expectation is being effected using for .observe that the expression of the q - function is completely determined by the knowledge of the expectations ,\,\,\ , s=-1,1,\end{aligned}\ ] ] that are obtained of properties of the distribution .let be the vector that contains all quantities defined in ( [ weith ] ) .thus , dropping unimportant constants , the q - function can be written in a synthetic form as , where this quite useful expression to implement the m - step , which consists of maximizing it over .so the em algorithm can be summarized as follows : + _ e - step _ : given , compute through of the relation =\left(\frac{\delta^{(k)}_i}{\gamma^{(k)}}\right)^{s}\frac{k_{1/2+s}\big(\lambda^{(k)}_i\big)}{k_{1/2}\big(\lambda^{(k)}_i\big ) } , s=-1,1,\ ] ] where , and ; + _ m - step _ : update by maximizing over , which leads to the following expressions where denotes the diagonal matrix , with the diagonal elements given by and .a similar expression for is obtained in .this process is iterated until some distance involving two successive evaluations of the actual log - likelihood , like or , is small enough .this algorithm is implemented as part of the r package ` aldqr ( ) ` , which can be downloaded at not cost from the repository cran .furthermore , following the results given in , the mm estimators for and are solutions of the following equations : where is as ( [ st - ald ] ) .note that the mm estimators do not have explicit closed form and numerical procedures are needed to solve these non - linear equations .they can be used as initial values in the iterative procedure for computing the ml estimates based on the em - algorithm .standard errors for the maximum likelihood estimates is based on the empirical information matrix , that according to formula , is defined as where .it is noted from the result of that the individual score can be determined as .asymptotic confidence intervals and tests of the parameters at the level can be obtained assuming that the ml estimator has approximately a normal multivariate distribution .+ from the em algorithm , we can see that is inversely proportional to .hence , can be interpreted as a type of weight for the case in the estimates of , which tends to be small for outlying observations .the behavior of these weights can be used as tools for identifying outlying observations as well as for showing that we are considering a robust approach , as will be seen in sections 4 and 5 .case - deletion is a classical approach to study the effects of dropping the case from the data set .let be the augmented data set , and a quantity with a subscript `` ] .let }=(\widehat{{\mbox{}}}^{\top}_{p[i ] } , \widehat{\sigma^2}_{[i]})^{\top} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} ] , where is the ml estimate of .to assess the influence of the case on , we compare the difference between } ] is far 
from in some sense , then the case is regarded as influential . as } ] of , ( see * ? ? ?* ) proposed the following one - step approximation based on the q - function , }= \widehat{{\mbox{}}}+ \big\ { -\ddot{q}(\widehat{{\mbox{}}}|\widehat{{\mbox{}}})\big\}^{-1 } \dot{q}_{[i]}(\widehat{{\mbox{}}}|\widehat{{\mbox{}}}),\end{aligned}\ ] ] where }(\widehat{{\mbox{}}}|\widehat{{\mbox{}}})=\displaystyle\frac{\partial{{q}_{[i]}({\mbox{}}|\widehat{{\mbox{}}})}}{\partial{{\mbox{}}}}\big\vert_{{\mbox{}}=\widehat{{\mbox{}}}},\end{aligned}\ ] ] are the hessian matrix and the gradient vector evaluated at , respectively .the hessian matrix is an essential element in the method developed by to obtain the measures for case - deletion diagnosis . for developing the case - deletion measures, we have to obtain the elements in ( [ theta1 ] ) , }(\widehat{{\mbox{}}}|\widehat{{\mbox{}}}) { \bm \theta} { \bm \theta} ] are 2 .the elements of the second order partial derivatives of evaluated at are \end{aligned}\ ] ] and . in the following result ,we will obtain the one - step approximation of }=(\widehat{{\mbox{}}}^{\top}_{p[i ] } , \widehat{\sigma}_{[i]})^{\top}{\bm \beta} { \bm \xi} { \bm \theta} { \bm \theta} ] and } ] and based on metrics , proposed by , for measuring the distance between } { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta}{\bm \beta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta}{\bm \beta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta} { \bm \theta}$}}})\big\}.\ ] ]we illustrate the proposed methods by applying them to the australian institute of sport ( ais ) data , analyzed by cook and weisberg ( 1994 ) in a normal regression setting .the data set consists of several variables measured in athletes ( 102 males and 100 females ) . here , we focus on body mass index ( bmi ) , which is assumed to be explained by lean body mass ( lbm ) and gender ( sex ) .thus , we consider the following qr model : where is a zero quantile . this model can be fitted in the r software by using the package ` quantreg ( ) ` , where one can arbitrarily use the br or the lpqr algorithms . in order to compare with our proposed em algorithm, we carry out quantile regression at three different quantiles , namely by using the ald distribution as described in section 2 .the ml estimates and associated standard errors were obtained by using the em algorithm and the observed information matrix described in subsections 2.3 , respectively .table [ table.application ] compares the results of our em , br and the lpqr estimates under the three selected quantiles .the standard error of the lpqr estimates are not provided in the r package ` quantreg ( ) ` and are not shown in table [ table.application ] . from this tablewe can see that estimates under the three methods only exhibit slight differences , as expected . 
however , the standard errors of our em estimates are smaller than those via the br algorithm .this suggests that the em algorithm seems to produce more accurate estimates of the regression parameters at the level .[ table.application ] confidence intervals for various values of .[ fig:2b],title="fig : " ] to obtain a more complete picture of the effects , a series of qr models over the grid is estimated .figure [ fig:2b ] gives a graphical summary of this analysis .the shaded area depicts the confidence interval from all the parameters . from figure [ fig:2b ]we can observe some interesting evidences which can not be detected by mean regression .for example , the effect of the two variables ( lbm and gender ) become stronger for the higher conditional quantiles , indicating that the bmi are positively correlated with the quantiles .the robustness of the median regression can be assessed by considering the influence of a single outlying observation on the em estimate of . in particular, we can assess how much the em estimate of is influenced by a change of units in a single observation .replacing by , where denotes the standard deviation .let be the em estimates of after contamination , .we are particularly interested in the relative changes . in this studywe contaminated the observation corresponding to individual and for between 0 and 10 .figure [ fig : change ] displays the results of the relative changes of the estimates for different values of . as expected , the estimates from the median regression model are less affected by variations on than those of the mean regression .moreover , figure [ fig:2c ] shows the q - q plot and envelopes for mean and median regression , which are obtained based on the distribution of , given in ( [ wi ] ) , that follows distribution . the lines in these figures represent the 5th percentile , the mean and the percentile of simulated points for each observation .these figures clearly show that the median regression distribution provides a better - fit than the standard mean regression to the ais data set .+ , and in comparison with the true value , for median and mean regression , for different contaminations .[ fig : change],title="fig : " ] , and in comparison with the true value , for median and mean regression , for different contaminations .[ fig : change],title="fig : " ] , and in comparison with the true value , for median and mean regression , for different contaminations . [fig : change],title="fig : " ] , title="fig : " ] as discussed at the end of section 2.3 the estimated distance can be used efficiently as a measure to identify possible outlying observations .figure [ fig : mahal](left panel ) displays the index plot of the distance for the median regression model .we see from this figure that observations # 75 , # 162 , # 178 and # 179 appear as possible outliers . from the em - algorithm ,the estimated weights for these observations are the smallest ones ( see right panel in figure [ fig : mahal ] ) , confirming the robustness aspects of the maximum likelihood estimates against outlying observations of the qr models .thus , larger implies a smaller , and the estimation of tends to give smaller weight to outlying observations in the sense of the distance .figure [ fig:1b ] shows the estimated quartiles of two levels of gender at each lbm point from our em algorithm along with the estimates obtained via mean regression . 
from this figurewe can see clear attenuation in due to the use of the median regression related to the mean regression .it is possible to observe in this figure some atypical individuals that could have an influence on the ml estimates for different values of quantiles . in this figure ,the individuals and were marked since they were detected as potentially influential . and the estimated weights .[fig : mahal],title="fig : " ] and the estimated weights .[fig : mahal],title="fig : " ] , title="fig : " ] , title="fig : " ] + .( second row ) .index plot of approximate likelihood displacement .the influential observations are numbered.[pert3 ] , title="fig : " ] .( second row ) .index plot of approximate likelihood displacement .the influential observations are numbered.[pert3 ] , title="fig : " ] .( second row ) .index plot of approximate likelihood displacement .the influential observations are numbered.[pert3 ] , title="fig : " ] + .( second row ) .index plot of approximate likelihood displacement .the influential observations are numbered.[pert3 ] , title="fig : " ] .( second row ) .index plot of approximate likelihood displacement .the influential observations are numbered.[pert3 ] , title="fig : " ] .( second row ) .index plot of approximate likelihood displacement .the influential observations are numbered.[pert3 ] , title="fig : " ] + in order to identify influential observations at different quantiles when some observation is eliminated , we can generate graphs of the generalized cook distance , as explained in section [ sec diagnostic ] .a high value for indicates that the observation has a high impact on the maximum likelihood estimate of the parameters .following , we can use as benchmark for the at different quantiles .figure [ pert3 ] ( first row ) presents the index plots of .we note from this figure that , only observation appears as influential in the ml estimates at and observations as influential at , whereas observations and appear as influential in the ml estimates at .figure [ pert3 ] ( second row ) presents the index plots of . from this figure, it can be noted that observations appear to be influential at , whereas observations and seem to be influential in the ml estimates at , and in addition observation appears to be influential at .in this section , the results from two simulation studies are presented to illustrate the performance of the proposed method .we conducted a simulation study to assess the performance of the proposed em algorithm , by mimicking the setting of the ais data by taking the sample size .we simulated data from the model where the are simulated from a uniform distribution ( u(0,1 ) ) and the errors are simulated from four different distributions : the standard normal distribution , a student - t distribution with three degrees of freedom , , a heteroscedastic normal distribution , and , a bimodal mixture distribution .the true values of the regression parameters were taken as . in this way, we had four settings and for each setting we generated data sets .once the simulated data were generated , we fit a qr model , with and , under barrodale and roberts ( br ) , lasso ( lasso ) and em algorithms by using the `` quantreg ( ) '' package and our ` aldqr ( ) ` package , from the r language , respectively . 
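one way to reproduce the flavour of this comparison outside r is sketched below in python: replicas are drawn from a simple version of the model, the quantile fit is obtained by direct minimisation of the check loss (a stand-in for the em, br and lasso fits), and the bias and root mean squared error of the estimates, in the sense defined next, are recorded. sample sizes, replica counts and the optimiser are arbitrary choices.

```python
# One replication-style sketch: draw replicas from y = 1 + 2 x + e with
# standard-normal errors, fit the p = 0.5 quantile by direct minimisation of
# the check loss, and record bias and RMSE of the estimates.  Sample sizes,
# replica counts and the optimiser are arbitrary stand-ins for the protocol
# described in the text.
import numpy as np
from scipy.optimize import minimize

def check_loss(beta, X, y, p):
    r = y - X @ beta
    return np.sum(r * (p - (r < 0.0)))

rng = np.random.default_rng(5)
p, n, reps, true = 0.5, 100, 500, np.array([1.0, 2.0])
est = np.empty((reps, 2))
for m in range(reps):
    X = np.column_stack([np.ones(n), rng.uniform(size=n)])
    y = X @ true + rng.standard_normal(n)
    est[m] = minimize(check_loss, x0=np.zeros(2), args=(X, y, p),
                      method="Nelder-Mead").x

bias = est.mean(axis=0) - true
rmse = np.sqrt(np.mean((est - true) ** 2, axis=0))
print(bias, rmse)
```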
for the four scenarios, we computed the bias and the square root of the mean square error ( rmse ) , for each parameter over the replicas .they are defined as : where and with or , is the estimate of obtained in replica and is the true value .table [ table.simul1 ] reports the simulation results for and .we observe that the em yields lower biases and rmse than the other two estimation methods under all the distributional scenarios .this finding suggests that the em would produce better results than other alternative methods typically used in the literature of qr models .[ table.simul1 ] we also conducted a simulation study to evaluate the finite - sample performance of the parameter estimates .we generated artificial samples from the regression model ( [ simulation_1 ] ) with and .we chose several distributions for the random term a little different than the simulation study 1 , say , normal distribution ( n1 ) , a student - t distribution ( t1 ) , a heteroscedastic normal distribution , ( n2 ) and , a bimodal mixture distribution ( t2 ) .finally , the sample sizes were fixed at and . for each combination of parameters and sample sizes , samples were generated under the four different situations of error distributions ( n1 , t1 , n2 , t2 ) .therefore , 36 different simulation runs are performed .once all the data were simulated , we fit the qr model with and the bias ( [ bias ] ) and the square root of the mean square error ( [ eqm ] ) were recorded .the results are shown in figure [ fig:77a ] .we can see a pattern of convergence to zero of the bias and mse when increases . as a general rule, we can say that bias and mse tend to approach to zero when the sample size increases , indicating that the estimates based on the proposed em - type algorithm do provide good asymptotic properties . this same pattern of convergence to zerois repeated considering different levels of the quantile . , , with ( median regression ) , where , , and .[fig:77a],title="fig : " ] , , with ( median regression ) , where , , and .[fig:77a],title="fig : " ] + , , with ( median regression ) , where , , and .[fig:77a],title="fig : " ] , , with ( median regression ) , where , , and .[fig:77a],title="fig : " ] + , , with ( median regression ) , where , , and .[fig:77a],title="fig : " ] , , with ( median regression ) , where , , and .[fig:77a],title="fig : " ] +we have studied a likelihood - based approach to the estimation of the qr based on the asymmetric laplace distribution ( ald ) . by utilizing the relationship between the qr check function and the ald, we cast the qr problem into the usual likelihood framework .the mixture representation of the ald allows us to express a qr model as a normal regression model , facilitating the implementation of an em algorithm , which naturally provides the ml estimates of the model parameters with the observed information matrix as a by product .the em algorithm was implemented as part of the r package _ aldqr()_. 
we hope that by making the code of our method available , we will lower the barrier for other researchers to use the em algorithm in their studies of quantile regression .further , we presented diagnostic analysis in qr models , which was based on the case - deletion technique suggested by and , which are the counterparts for missing data models of the well - known ones proposed by and .the simulation studies demonstrated the superiority of the proposed methods to the existing methods , implemented in the package ` quantreg ( ) ` .we applied our methods to a real data set ( freely downloadable from r ) in order to illustrate how the procedures can be used to identify outliers and to obtain robust ml parameter estimates . from these results , it is encouraging that the use of ald offers a better alternative in the analysis of qr models . finally , the proposed methods can be extended to a more general framework , such as , censored ( tobit ) regression models , measurement error models , nonlinear regression models , stochastic volatility models , etc and should yield satisfactory results at the expense of additional complexity in implementation .an in - depth investigation of such extensions is beyond the scope of the present paper , but these are interesting topics for further research .the research of v. h. lachos was supported by grant 305054/2011 - 2 from conselho nacional de desenvolvimento cientfico e tecnolgico ( cnpq - brazil ) and by grant 2014/02938 - 9 from fundao de amparo pesquisa do estado de so paulo ( fapesp - brazil ) .
to make inferences about the shape of a population distribution , the widely popular mean regression model , for example , is inadequate if the distribution is not approximately gaussian ( or symmetric ) . compared to conventional mean regression ( mr ) , quantile regression ( qr ) can characterize the entire conditional distribution of the outcome variable and is more robust to outliers and to misspecification of the error distribution . we present a likelihood - based approach to the estimation of the regression quantiles based on the asymmetric laplace distribution ( ald ) , which has a hierarchical representation that facilitates the implementation of the em algorithm for maximum - likelihood estimation . we develop a case - deletion diagnostic analysis for qr models based on the conditional expectation of the complete - data log - likelihood function related to the em algorithm . the techniques are illustrated with both simulated and real data sets , showing that our approach outperforms other common classical estimators . the proposed algorithm and methods are implemented in the r package ` aldqr ( ) ` . * keywords * : quantile regression model ; em algorithm ; case - deletion model ; asymmetric laplace distribution .
[ angle ] let and . for all ,there exists such that where denotes the group of unitary matrices . notethat the relation between and relies on properties of gamma distributions , and one can easily compute for a given the optimal value of .figure [ t16 ] shows the value of the optimal for a given outage probability ( i.e. , for a given ) when .the lowest value of the outage probability for which the number of active antennas is strictly less than ( i.e. , when the optimal has a zero entry ) is slightly above half .( the minimizer of ) for a given value of ( outage probability ) when .a tab indicates the first location at which the optimal decreases from one , in particular , before the first tab , the optimal is equal to .note that the locations of the tabs do not depend on ; however , for larger than 40 , more tabs will appear before the first tab plotted here . ]this conjecture can be interpreted in different ways .it characterizes the optimal power allocation over the antennas of a non - ergodic miso channel with gaussian fading , in order to minimize the outage probability .geometrically , it characterizes the best choice of a norm induced by a positive definite matrix , to minimize the probability of observing a short random vector which is gaussian distributed .observe that one can w.l.o.g .consider diagonal matrices ( since is unitary invariant ) , and the conjecture can also be expressed as follows .[ miso ] let , with and . for all ,there exists s.t . where denotes the group of permutation matrices .this formulation also relates to portfolio optimization problems . in ,the conjecture is stated in a slightly more general setting where is an random matrix and where is minimized . clearly , if , this is equivalent to the above conjecture , with .many works in the literature about mimo channels that investigate properties of the outage probability assume that the conjecture holds or pick a uniform power allocation without discussing its optimality . in , conjecture [ angle ]is proved for low values of .we complete here the proof for an arbitrary .recall that with .we denote by the density of a random variable .[ lemma_trick ] let be such that and are mutually independent .then , and , as ( only when there is a discontinuity at ) , we can use the fourier transform to write : therefore and is zero for negative values of .thus we get [ lemma_trick2 ] let be a random variable independent of , and let .we then have this is easily verified by using the fourier transform .[ unimodal ] for all and , we have and there exists a unique such that , , and , . the fact that can be verified by induction , knowing that the exponential density is in and using properties of convolution and differentiation .note that and for s.t .all s are different , which can be written without loss of generality as , we have this can also be verified by induction . moreover , we have that the function is continuous and converges when considering equal s .so we can restrict ourself to prove the lemma for s having all components different ( and we will consider such s in what follows ) . for , we have , ( this is a consequence of the first statement in the lemma ) .let us suppose that there exist such that and from , the assumption , in addition with for , implies that there exist and , all different and non - zero , such that but this is to say that there exists , non all equal to zero , and s.t . now , if and , we clearly need to ensure solutions in , which leads to a contradiction . 
if or , is equivalent to where is a real polynomial of degree .clearly , can have at most different solutions .hence we have a contradiction with . otherwise , we have and is equivalent to where and is a real polynomial of degree .hence , has at most different solutions , and we also have a contradiction .thus , . the existence of , as well as the sign of the derivatives around are easily verified .let . from lemma [ lemma_trick ] , for any , with independent of and .we thus conclude that we can replace by + using the kuhn - tucker theorem , if minimizes , then s.t . by lemma [ lemma_trick ] and [ lemma_trick2 ] , is equivalent to with mutually independent and .now , let us assume that ( this represent w.l.o.g . that at least two different non - zero values are in ) .then and using , we get we now assume that , the third component of , is non - zero . by successive use of and by ,we have but from lemma [ unimodal ] and , is strictly positive on , thus therefore , if is not equal to , or 0 , we must have , in order to satisfy the kt conditions , but this contradicts and .+ we have just shown that the kt conditions for minima can be satisfied only with points in that contains at most two different non - zero values , i.e. a minimizer has the following form ( up to permutations ) with , ] for a large enough , and get a counter - example for positive random variables .hence , one would have to find a more specific condition ( than i.i.d . )under which conjecture [ miso ] may hold . finally , note that conjecture [ angle ] reduces to a conjecture stated in terms of a weighted sum of random variables ( as in conjecture [ miso ] ) only when is unitary invariant . andthe only way to have unitary invariance and independence is to assume a gaussian distribution for , which means that , the previous counter - examples to conjecture [ miso ] for arbitrary i.i.d .random variables do not lead to counter - examples to conjecture [ angle ] when has an arbitrary unitary invariant distribution .
in telatar 1999 ( ) , it is conjectured that the covariance matrices minimizing the outage probability for mimo channels with gaussian fading are diagonal with either zeros or constant values on the diagonal . in the miso setting , this is equivalent to the conjecture that the gaussian quadratic forms having the largest tail probability correspond to such diagonal matrices . we prove here the conjecture in the miso setting .
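the monte carlo sketch below illustrates the structure asserted by the conjecture : for i.i.d. complex gaussian fading each |h_i|^2 is exponentially distributed , so the outage probability of a diagonal power allocation can be estimated by sampling . the number of antennas and the two threshold values are arbitrary choices made for the illustration , not quantities taken from the paper .

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_prob(p, threshold, n_samples=200_000):
    """Monte Carlo estimate of P( sum_i p_i |h_i|^2 < threshold )
    for i.i.d. complex Gaussian fading, where each |h_i|^2 ~ Exp(1)."""
    h2 = rng.exponential(scale=1.0, size=(n_samples, len(p)))
    return np.mean(h2 @ np.asarray(p) < threshold)

n_antennas = 4
uniform = np.full(n_antennas, 1.0 / n_antennas)   # spread unit power evenly
single = np.r_[1.0, np.zeros(n_antennas - 1)]     # concentrate all power on one antenna

for threshold in (0.05, 2.0):                     # low- and high-outage regimes
    print(f"threshold {threshold}: uniform {outage_prob(uniform, threshold):.4f}, "
          f"single antenna {outage_prob(single, threshold):.4f}")
```

with these ( arbitrary ) numbers , spreading the power gives the lower outage probability at the small threshold , while concentrating it on a single antenna wins at the large threshold , consistent with the claim that the minimizer is uniform over a subset of antennas whose size depends on the targeted outage level .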
i address the question , what is the appropriate stochastic equation of motion to use when modelling a driven steady state ( including chaotic and fluctuating steady states ) such as that of a fluid under continuous shear flow ? at equilibrium , the solution is well understood . to generate configurations consistent with the equilibrium ensemble, one may use any equation of motion that respects the principle of detailed balance , which is a constraint on ratios of forward and reverse transition rates .that condition ensures that every thermally - driven flux is balanced by an equal and opposite flux . for non - equilibrium systems incontinuously driven steady states , no such guidance is hitherto available in choosing an equation of motion consistent with the _ mechanically _ ( externally ) driven fluxes , so arbitrary choices are often made .the aim of this work is to eliminate arbitrariness , and determine what transition rates are implied by the macroscopic state of the non - equilibrium system , i.e. its mean energy and flux , combined with our knowledge of the microscopic laws of physics .the objective is to use _ only _ the information that is available , without unwittingly introducing any arbitrariness , deriving from personal prejudices .the method for keeping the amount of information constant throughout the calculation is jaynes information - theoretic method of maximum entropy inference (maxent ) , which is often misunderstood in the context of non - equilibrium thermodynamics , despite recent notable achievements .it has been successfully used to derive fluctuation theorems and linear transport theory , and to explain self - organised criticality .jaynes gives a nice explanation of maximum entropy inference in his original paper on the subject , where he uses the method to re - derive equilibrium statistical mechanics without the need for many microscopic details that had previously been considered necessary .the application of the method to equilibrium systems is uncontroversial .however , the history of non - equilibrium information theory can be confusing because it has been used in so many different ways , some of them exact , some only approximate .in fact , information theory itself is not a physical theory , but a mathematical method , providing a logical structure .some physical input is required if such a method is to make physical predictions .if one throws away too much relevant information about some non - equilibrium system before applying maxent , it will still provide answers , but they will be inaccurate .for instance , using the method to minimise the information content of the momentum distribution in a non - equilibrium gas , although efficacious , is not an exact method , as was recently shown . in fact , there is no justification for discarding all information content except for some averaged features .indeed , particles possess their individual velocities for a reason : they have each come from somewhere , and are going somewhere , and their journeys will affect the trajectories of other particles .these facts are relevant to the physics of a non - equilibrium system , and lead to temporal correlations . at the other extreme ,if one retains all the details of a system s phase - space trajectory , allowing no stochastic input ( e.g. 
from a reservoir ) , then maxent becomes a null procedure , since it is asked to choose the most likely distribution from a choice of only one physical scenario - a delta function distribution of trajectories .such a null procedure may be regarded as an extreme case where maxent can correctly predict " any and all physics .there is thus no reason in principle why maxent should be expected to fail in non - equilibrium situations , if we ask it the right questions .the choice of the prior set of options that is presented to maxent is of crucial importance .it should be a set of physical paths through phase space , that each obeys newton s laws , so that all physics ( the navier - stokes equation , long - range correlations , etc . )is respected _ a priori_. maxent then tells us which of these trajectories is most likely to be chosen , under the influence of a non - equilibrium reservoir that is coupled to the system but uncorrelated with it .this is the application of information theory that should be understood here and in refs .i derive its implications for transition rates . alternatively , as is often done in theoretical modelling, one can settle for a physically imprecise prior set of dynamical rules such as brownian particles , or a discrete state space , or discrete time steps so that things become easy to solve . then applying the methods below will , accordingly , yield only approximate physics , but at least one will know exactly what information went into the simplified model .such an application of the present theory would yield transition rates that are somewhat arbitrary , due to the arbitrariness of the prior rates that are chosen . however, it will provide strictly _ the least _ arbitrary model .such a model will be derived in section [ applications ] for a stochastically hopping particle , that demonstrates some features of the method .it is important to realise that the approximations introduced in that section are only for expediency in that particular model .the general derivation of the method for obtaining transition rates from prior dynamical rules combined with non - equilibrium macroscopic observables , presented in sections [ presentation ] and [ variants ] , remains exact .the conditions derived here for macroscopically driven steady states are analogous to the equilibrium principle of detailed balance . like detailed balance ,the conditions are not sufficient to completely determine the microscopic transition rates , but are necessary to be satisfied by any equation of motion that generates an unbiased ergodic driven steady - state ensemble .the derivation of detailed balance relies on two assumptions : time - reversal symmetry of the microscopic laws of motion , and the ergodic hypothesis which implies that a heat reservoir can be characterised by the boltzmann distribution with temperature as the only parameter .similarly , the non - equilibrium conditions assume the same microscopic laws that govern equilibrium motions ( therefore implicitly requiring microscopic time - reversal symmetry , broken only by imposition of the macroscopic flux ) , and rely also on a hypothesis of ergodicity implying that the driven reservoir is fully characterised by its macroscopic observables ( mean energy and flux ) . many quiescent systems ( those without fluxes )are at thermodynamic equilibrium , but exceptions include glasses , granular media , and certain cellular automata , in which the ergodic hypothesis and/or microscopic reversibility fails . 
boltzmann s law and the principle of detailed balance apply only to that class of quiescent systems that are , by definition , at equilibrium . that class of systems has of course proved to be large , significant and interesting .similarly , not every non - equilibrium steady state should be expected to respect the conditions presented here ; exceptions include traffic flow and fluids of molecular motors , in which the constituents violate time reversal symmetry .the ergodic hypothesis may also fail in some systems , implying that hidden information that is not apparent in the macroscopic observables is nonetheless significant . however , it is anticipated that the ergodicity criteria are respected by the transition rates of many macroscopically driven systems , defining a special and important class .the method outlined in section [ presentation ] was presented in a recent letter .it is explained here in more detail , and the analysis extended to an alternative non - equilibrium ensemble in section [ variants ] .the method is demonstrated in section [ model ] where rates are derived for the stochastic transitions of a particle hopping in a non - trivial energy landscape , subject to a driving force .applications to other models are also discussed in section [ applications ] .using jaynes interpretation of gibbs entropy , it is possible to make a maximum entropy inference " to assess the probability that a system , subject to random influences , ( whether at equilibrium or not ) takes a particular trajectory through its phase space , thus allowing us to assess the _reproducible part _ of the system s motion .the recipe for the probability of trajectory is to maximize the shannon entropy , or information entropy , subject to constraints that some averaged properties of the trajectories conform with our knowledge of the macroscopic features such as mean energy , volume , flux etc . in principle, this formalism gives us a full solution of the statistics of any ensemble , be it at equilibrium or not . in the absence of any macroscopic fluxes ( i.e. at equilibrium ), the prescription reduces to a maximization of the gibbs entropy with respect to a distribution of instantaneous states rather than trajectories , yielding boltzmann s law . in the non - equilibrium case, maxent gives us the probability of an entire trajectory .it would be more useful to have a formula for the probability of a short segment of the trajectory , a single transition from a state to a subsequent state .such a transition probability is what we require for designing a stochastic model or simulation .this would allow us to generate trajectories belonging to the non - equilibrium ensemble .let us now derive that formula .we begin with some trivial calculations to establish notation . at any instant , the entire state of a system is represented classically by its phase - space position vector .this is a high - dimensional vector specifying the positions and momenta of all the particles constituting the system .as time progresses from the beginning to the end of the experiment or simulation , traces out a trajectory through phase space .it will prove useful to label each probability distribution function with a subscript indicating the duration of the trajectories to which it applies thus : .for a deterministic system with definite initial conditions , only one trajectory is possible , so the probability distribution is a delta function . 
in the presence of randomness , such as coupling to a reservoir of systems with similar properties ,the distribution is finite for all trajectories that respect some prior dynamical rules , such as conservation of momentum for all internal degrees of freedom not directly coupled to the reservoir . in the _ absence _ of any posterior constraints other than normalization , all trajectories of a given duration have equal _ a priori _ probability .that is not an independent postulate , but is embodied in the maximum entropy principle of information theory , since the entropy - maximizing distribution is given by with a lagrange multiplier chosen for consistency with eq .( [ norm ] ) .equation ( [ prior ] ) is solved by , indicating that the unconstrained ( ` prior ' ) set of trajectories of a given duration have equal probability .we now impose a posterior constraint , and calculate the statistical properties of that sub - set of trajectories that respect the constraint .let us not necessarily conserve the energy of the system at each instant ( since we allow energy exchange with a reservoir ) , but rather demand that its time - average over the whole trajectory is fixed at .we shall use a bar to indicate time averages , so that let us divide the trajectory into shorter segments , each of duration .then the constraint on the time - averaged energy may be written assuming ergodicity , time - averages are equivalent to ensemble - averages in the limit .so this constraint , for a time - average on , defines the equilibrium canonical ensemble for .in other words , the conditional probability of encountering a particular trajectory segment of duration , given that the whole trajectory has a time - averaged energy , is found by maximizing the information entropy for subject to eq .( [ canonical ] ) .the maximization involves lagrange multipliers for this energy constraint , and for the normalization constraint , and yields where the condition is represented for brevity by . as expected , this is boltzmann s law , and we interpret the lagrange multipliers as the temperature parameter and partition function .a transition rate , for any transition between states say , can be written as a conditional probability . if we consider a trajectory segment of duration , representing the transition , then the transition rate at some time , which we may define without loss of generality to be , is this is the probability ( per unit time ) of encountering the trajectory , _ given _ that we begin at .equation ( [ priorrate ] ) gives the _ prior _ rate of the particular transition .the rate in the equilibrium ensemble is given by a probability subject to _ two _ conditions : where the condition is represented for brevity by .to re - cap , eq .( [ eqrate ] ) defines the probability of encountering trajectory segment ( a transition ) given that we are in state , and that the entire trajectory of duration will turn out to have a mean energy .we have looked so far at the prior phase - space trajectories , and those for systems in the equilibrium ensemble . 
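as a small numerical check of the statement that maximizing the information entropy subject only to normalization and a fixed mean energy reproduces boltzmann s law , the sketch below carries out the constrained maximization for a toy system with four energy levels ; the levels and the target mean energy are made - up values used only for illustration .

```python
import numpy as np
from scipy.optimize import minimize

E = np.array([0.0, 1.0, 2.0, 3.0])   # toy energy levels (illustrative)
E_target = 1.2                       # imposed mean energy (illustrative)

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))     # minus the Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},    # normalization
    {"type": "eq", "fun": lambda p: p @ E - E_target},   # fixed mean energy
]
res = minimize(neg_entropy, np.full(len(E), 0.25),
               bounds=[(0.0, 1.0)] * len(E), constraints=constraints)

p = res.x
beta = np.log(p[0] / p[1]) / (E[1] - E[0])               # read off an inverse temperature
boltzmann = np.exp(-beta * E) / np.sum(np.exp(-beta * E))
print("maxent weights   :", np.round(p, 4))
print("boltzmann weights:", np.round(boltzmann, 4))
```

the two rows agree up to the optimizer tolerance , which is the content of the lagrange - multiplier argument above : the multiplier conjugate to the energy constraint plays the role of an inverse temperature .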
our goal is to determine the transition rates appropriate to a non - equilibrium ensemble , for which there is an imposed flux .again , we should not over - constrain the dynamics .let us allow the flux to fluctuate , and demand only that the dynamics will result in some finite value of the flux time - averaged over the whole trajectory : we ask , what is the probability , in time , of encountering the transition , given that we begin in state , and that the dynamics will eventually conspire to produce a mean flux and energy ?again we relate this conditional probability to a transition rate : fig .[ trajectories ] depicts some of the trajectories that have been discussed .time is shown on the horizontal axis , and all trajectories have a total duration .the vertical axis represents the phase - space coordinates though , of course , this is a reduced representation of the vastly high - dimensional phase space , since it has been projected onto a single axis . for definiteness ,let us say that this axis represents integrated flux , i.e. the flux that the system has accrued since .we must imagine that all the other coordinates required to fully describe the state of the system , are on axes perpendicular to the page .a sample of trajectories representing the equilibrium distribution is shown ( in grey and black ) .these trajectories are concentrated close to the time axis ( zero flux ) . if another axis measuring instantaneous energy were constructed perpendicular to the page , then the density of trajectories would be exponentially distributed along that axis , by boltzmann s law .equation ( [ eqrate ] ) gives the frequency of observing a particular trajectory segment shown in fig .[ trajectories ] ( the single transition ) of microscopic duration , given that we are currently ( at ) in state , and that the whole trajectory belongs to this equilibrium set .equation ( [ drivenrate ] ) asks for the frequency with which that trajectory segment occurs in the sub - set of trajectories shown in black in fig .[ trajectories ] , for which a given integrated flux will be accumulated by time .this sub - set of trajectories is the driven ensemble .note that we shall not require to vanish .the discussion will cover discrete - time processes for which the microscopic time step is , as well as continuous - time dynamics for which . to mathematically manipulate conditional probabilities , we appeal to bayes theorem .it states that the joint probability of two outcomes and both occurring , given a third fact , may be written in two equivalent ways : where is simply used to mean ` probability ' for any event ( appropriately normalised ) , as opposed to a particular distribution function .we can now assign the following meanings : is the fact that the transition takes place within , represented by the trajectory ; says that the flux has a mean value averaged over the entire duration ; and is the combined statement that the initial state at is and that the trajectory s time - averaged energy will be .thus , eq . ( [ bayes ] ) expresses the probability of the transition taking place within _ and _ the flux averaged over being , for the given initial state and average energy .it is re - written thus : notice that it is redundant to specify the two conditions , since the trajectory segment is the transition which includes the initial state . 
substituting from eqs .( [ eqrate ] ) and ( [ drivenrate ] ) yields a theorem for transition rates in the driven steady - state ensemble , notice that all quantities on the rhs of eq .( [ relation ] ) are defined _ at equilibrium _, not on the driven ensemble .this is indicated by the superscript ` eq ' , which is equivalent to the condition fixing , the time - averaged energy .equation ( [ relation ] ) tells us that the transition rate in the driven ensemble is given by the transition rate in the equilibrium ensemble , multiplied by an enhancement or attenuation factor .we shall see below that the theorem makes intuitive sense .given that the dynamics must be consistent with the macroscopically observable mean energy and flux , and with the same microscopic laws of motion that hold sway in an equilibrium system , maxent yields an unbiased description ofthe dynamics , and thereby constrains the system the least .equation([relation ] ) specifies explicitly the dynamical rules implied bymaxent .how do we know that eq .( [ relation ] ) constrains the dynamics the least ? it does , because all the quantities on the rhs are defined for the maximum - entropy ensemble at equilibrium , i.e. without the extra constraint on the flux .given that we start with an unbiased set ( the equilibrium ensemble ) , bayes theorem gives us the least biased set subject to the extra posterior constraint .let us examine the enhancement factor in eq .( [ relation ] ) in detail .it is a ratio of conditional probabilities for encountering a flux in the equilibrium ensemble . of course, we do not expect a system at equilibrium to exhibit any net flux , averaged over its whole trajectory .the chance of such a flux arising spontaneously at equilibrium is vanishingly small ( as ) .so the rhs of eq .( [ relation ] ) is the ratio of two vanishingly small terms. 
however unlikely it may be for an equilibrium system to spontaneously exhibit the desired macroscopic flux , we ask , how much would that probability be enhanced as a result of the putative transition ?if the dynamics of the transition itself contributes some flux to the trajectory , it is favoured by the enhancement factor .the factor also favours transitions to configurations that give a greater than average probability of subsequently obtaining the desired flux , for the given starting point .if the new state is more likely to initiate high - flux trajectories , then the transition rate is boosted over and above the equilibrium rate .we shall examine the implications of eq .( [ relation ] ) in some examples , but firstly let us interpret the meaning of its derivation .imagine that a lazy physicist wishes to collect data from a driven steady state , such as continuous shear flow of a complex fluid .our physicist has a computer program that simulates the fluid at equilibrium ( with free or frictionless boundaries , say ) , and is too lazy to write a new program that simulates shear .instead , ( s)he runs the equilibrium simulation , in the hope that it will spontaneously exhibit shear flow .it does not .so the dilettante updates the program s random number generator and runs it again .having tenure and little imagination , the physicist repeats this process countless times until , one day , the fluid fluctuates into a state of sustained shear flow .the delighted simulator records this fluke , but continues the project for many more years until a large number of such accidents have been observed , exhibiting the same shear rate .finally , the researcher discards an enormous set of simulated trajectories , and publishes only that subset which happened to perform the desired shear . on analysing this subset of trajectories, one might expect to observe the equilibrium transition rates that were coded into the algorithm .but this is a biased data set , subject to an _ a posteriori _ constraint of shear flux .so this sub - set of the equilibrium ensemble exhibits exactly the transition rates specified by eq .( [ relation ] ) .although the programmer has published a biased account of the equilibrium simulations , there was no unwarranted or subjective bias other than the flux constraint , hence the project was a success in producing the physics of shear flow .note that , despite extracting a sub - ensemble from the equilibrium ensemble , the lazy physicist has _ not _ produced a near - equilibrium approximation .the sub - ensemble dynamics " of eq .( [ relation ] ) has features qualitatively different from the equilibrium dynamics . in section [ applications ] , i shall use some examples to demonstrate the correctness of the physics generated by sub - ensemble dynamics ( eq . ( [ relation ] ) ) . before doing so ,in section [ variants ] , i develop a useful variant of eq .( [ relation ] ) , analogous to an alternative thermodynamic ensemble .equation ( [ relation ] ) gives the frequency of observing a particular trajectory segment ( e.g. 
a single transition ) of microscopic duration , in the driven ensemble which is a sub - set ofall trajectories , shown in black in fig .[ trajectories ] .these trajectories lie in the extreme tails of the equilibrium distribution .note that they have common end points , since we have specified the exact net flux that must flow during the duration of the experiment .any nearby trajectories , that do not have _ exactly _ the specified flux , do not contribute to the quantities appearing in eq .( [ relation ] ) .even very nearby trajectories are completely discarded by eq .( [ relation ] ) .this can be seen by re - writing the probability of thespecified flux as a sum over trajectories with fluxes , so that eq . ( [ relation ] ) becomes here , the dirac delta functions kill all trajectories with anything but the exact net flux .this can be a disadvantage for practical applications of the formula .( the lazy physicist , discussed above , must discard data even from simulations that produce _ nearly _ the right flux . )an alternative expression is now derived , that samples trajectories with less stringent conditions on their eventual flux content . in equilibriumstatistical mechanics , the constraint of energy conservation is relaxed by dividing the isolated microcanonical system into a relatively small sub - section , defining the canonical system , and the large remainder , known as the reservoir . similarly , we shall relax the strict constraint on the time - averaged flux , by dividing the total trajectory of duration into a part ( see fig . [ trajectories ] ) of duration ( where ) , whose properties are examined in detail , and the large remaining part of duration , for which the system s motion is uncorrelated with the earlier trajectory segment .we may express the conditional probability of a net flux in the full duration , as an integral over all possible fluxes during the interval thus : where is the probability of an appropriate flux during interval given that the system began in state at , and then flowed with mean flux for the duration .the required flux is given by given that exceeds any correlation time , the probability becomes independent of , because the system has forgotten its initial state by the time at which the interval commences .in fact , at time , the system is in a state drawn at random from the driven steady - state ensemble , since the integral in eq .( [ workings1 ] ) is dominated by .so we may make the replacement where is the steady - state distribution of instantaneous microstates in the driven ensemble .not only is the above formula independent of the initial state , it actually takes a universal ( exponential ) form as a function of , as shown in appendix [ canonapp ] using the theory of large deviations .this is because the extremely unlikely value of the flux , , is the result of many unlikely realisations of the flux during the many uncorrelated intervals that comprise the large duration . 
as a result ,( [ relation ] ) can be re - cast , using eqs .( [ workings1 ] ) , ( [ fluxsum ] ) and ( [ replacement ] ) , and the derivation in appendix [ canonapp ] , as where the control parameter is conjugate to the time - averaged flux , and is fixed by the relation in terms of the function here , is an ensemble average with respect to the steady - state distribution of microstates .we have defined that is a property of an instantaneous state of the system .note that has non - trivial -dependence , containing transients for , and becoming linear in for , while is independent of .the above equations have a structure that is familiar from equilibrium thermodynamics . clearly , in eq .( [ q ] ) , plays the role of a thermodynamic potential , andits derivative is conjugate to the temperature - like parameter .the conditional probabilities in the integrands of eq .( [ dyncanonical ] ) describe the likelihood of any particular flux during the interval , given the initial state and/or transition .the exponential factor measures the change in the weight of the large remainder of the trajectory of duration , due to the initial part accepting a flux rather than postponing it until after .compare eqs .( [ dynmicro ] ) and ( [ dyncanonical ] ) .the expressions become very similar under a change of notation . the difference in the new formulation ( eq . ( [ dyncanonical ] ) )is that trajectories with the wrong flux are not eliminated by a delta function , but re - weighted by an exponential weight factor .the two alternative formulations are exactly akin to alternative ensembles in equilibrium statistical mechanics .we can regard the duration of a trajectory as being analogous to the size of a system at equilibrium , and the flux as analogous to energy - density .originally we demanded that the integrated flux was fixed exactly , just as energy is fixed in the mico - canonical ensemble , and we enquired , in eq .( [ dynmicro ] ) , about how the instantaneous ( ` local ' ) conditions are affected by correlations in the rest of the trajectory ( ` system ' ) . the formulation of eq .( [ dyncanonical ] ) is akin to using the canonical ensemble .again , we enquire about conditions at an instant ( ` locality ' ) , within a trajectory segment ( ` system ' ) of duration ( ` size ' ) . but now , the integrated flux ( ` energy ' ) is not strictly conserved , but can be exchanged with the rest of the trajectory ( ` a reservoir ' ) of duration ( ` size ' ) much longer ( ` larger ' ) than the initial trajectory segment ( ` system ' ) . since all important correlations are contained within the ` system ' , the nature of the interface between ` system ' and ` reservoir ' becomes unimportant , and the ` reservoir ' is characterised by a single parameter , .let us refer to this as the`canonical - flux ' ensemble .so long as the trajectory duration ( ` system size ' ) is much greater than any correlation time ( ` length ' ) , the properties at instant ( ` locality ' ) are unaffected by whether integrated flux ( ` energy ' ) is exactly conserved , and the ensembles are equivalent in the infinite - time ( ` thermodynamic ' ) limit .it is possible to derive eq .( [ dyncanonical ] ) [ via eqs .( [ eqrate ] ) and ( [ drivenrate ] ) ] by direct maximization of the information entropy of a set of trajectories , at fixed ensemble - averaged flux and energy . 
in that case , as with the above derivation , great care is required to compare the relevant time - scales with correlation times , to avoid unwittingly averaging over the correlations present in .that would produce a mean - field expression , in which the rate of each transition is simply boosted exponentially according to its immediate flux contribution .such a scheme is popular in simple models , but should not be mistaken for the exact theorem derived above .the expression for transition rates , eq .( [ dyncanonical ] ) , appears to depend on the arbitrary quantity .it can be re - written in an a much clearer form that is explicitly independent of , as we now show .although as , the difference , for two states and , has a finite asymptote , embodying the different transient influences that the two states have on the system , before it returns to a statistically steady state .so , let us define a function that contains that transient information , but is independent of the arbitraryquantity , thus : so that in the long - time limit .we require one further piece of notation .the dynamics is described by a set of transitions carrying integrated flux . for continuous dynamics , as , but for discrete transitions , remains finite whether or not time steps are made vanishingly small .as above , the following discussion applies to either case . in terms of these physically meaningful quantities , transition rates in the driven ensembleare given by .\ ] ] the derivation of eq .( [ clear ] ) from eq .( [ dyncanonical ] ) is given in appendix [ reformapp ] .it is now clear , in eq .( [ clear ] ) , that three distinct factors determine the rate of a transition in the driven steady - state ensemble .( 1 ) the rate is proportional to the rate at equilibrium .so , all else being equal , energetically expensive transitions are slow , while down - hill transitions take precedence .( 2 ) the rate is exponentially enhanced for transitions that contribute a favourable flux .( 3 ) the dependence on is overlooked by mean - field models .it says that a transition s likelihood depends also on the state in which it leaves the system .its rate is enhanced if it puts the system into a state that is more likely to exhibit flux in the future .the effect of this factor on the driven steady - state distribution of microstates is to increase ( relative to the boltzmann distribution ) the weight of states that are more - than - averagely willing to accept a future flux .we shall see an example of this effect in the next section . in the case of a shear flux , this means that low - viscosity states are favoured , as is often observed .we have a recipe for constructing a model of any given driven system , that is guaranteed to yield the desired flux , and to respect all the physical laws that are obeyed by the equilibrium version of the model , and that is guaranteed to have no artefacts from statistical bias . if we choose to provide this machinery with an equilibrium model that obeys all of newton s laws i.e. 
, a fluid whose internal interactions conserve momentum , angular momentum , and energy , while stochastic forces from the reservoir couple only to particles at the boundary then it will yield dynamical rules that also respect newton s laws for the boundary - driven fluid .in other words , the method has the capacity to produce exact physics if provided with an exactly physical prior .it provides a description of the reservoir , by characterising the stochastic part of the equations of motion .the way in which that reservoir couples to the system is up to the user to decide . in the above example, it is coupled only at the boundary , but we may instead consider a driven brownian system , for which the heat bath is more strongly and uniformly coupled , dominating all momentum variables .another alternative is to apply the method to a model whose prior ( equilibrium ) physical properties are simplified for the sake of clarity and analytical expediency . in that case, of course , the result of the recipe will be approximate and unreliable , but still the least arbitrary choice of transition rates for the given degree of simplification .the micro - canonical flux ensemble introduced in sections [ presentation ] and [ micro ] was first presented in ref . , where it was used analytically to construct a continuum model of driven diffusion , and heuristically to discuss the features of a lattice model of dimers under shear .the latter model had a much more complex energy landscape including jammed states .another analytically solvable model was constructed in ref . , using the micro - canonical flux method .it was another one - dimensional driven diffusion model , but this time with a discrete state space and discrete time step . in the following section, we shall analytically construct a model of a driven system with a non - trivial energy landscape , that demonstrates some features of more complex systems , such as sheared complex fluids , with states that are locally trapped so that they can not easily be driven .the model reduces to simple one - dimensional driven diffusion in a certain limit , and has a discrete state space but continuous time , to complement the earlier published models .we shall use the canonical - flux ensemble of sections [ canon ] and [ factors ] , to demonstrate the utility of this method . consider a particle that can hop stochastically among a set of discrete states that have the connectivity shown in fig .the particle will be driven by its non - equilibrium heat bath so that it has , on average , a drift velocity from left to right . at each location , it may occupy one of two states : state , from which it may escape to the left or right with rates and to other states at different locations , or downwards with rate into a lower energy trapped state ; once in a state , the particle can not exhibit any flux , i.e. 
, can not move left or right , but can only wait for a random excitation at rate back up to the state at the same location .= 7.5 cm note that , if we set , the model reduces to a continuous - time discrete - space linear hopping model , like the versions that were previously studied with both space and time continuous or discrete .the equilibrium version of this model ( with no mean drift ) is very straightforward .detailed balance requires that , where e is the energy difference between states and measured in units of , and that where we may measure all rates in units of so that without loss of generality .the occupancy of states is given by boltzmann as , and the only remaining parameter that we are free to choose is , which specifies the ratio of vertical to horizontal mobilities . when the model is not at equilibrium , but is driven at drift velocity , the nave expectation would be either that we a free to choose all four rates , since non - equilibrium models traditionally have no rules , or that detailed balance still governs the ratio . however , as discussed above , neither of these statements is true .there is , in fact , a least - arbitrary set of rates , that corresponds to driving by an uncorrelated heat bath that is characterised only by its temperature and velocity .we shall now calculate that set of rates , using the canonical - flux ensemble .the rates are given by eq .( [ clear ] ) .defining our unit of length to be one inter - site spacing , the integrated flux of a transition to the right ( left ) is , ( ) , while transitions between and states carry no flux as they leave the particle s displacement unchanged . since time is continuous , the time - step is infinitesimal , , so that the last term in the exponential of eq . ( [ clear ] ) vanishes , and it prescribes the following rates in the driven ensemble : [ rldu ] to complete the calculation of the rates , we require only , the difference in the willingness of the two states to admit a flux. this could be evaluted by brute force " using eq .( [ willingness ] ) , if we first calculate the green function for the equilibrium model , i.e. the probability that the particle travels a given distance in a given time , given the intial state or .however , that calculation can be avoided , using the derivation in appendix [ combapp ] to show that , for this comb " model , this is purely a result of the facts that state can only be quitted via state , and that escape times are distributed exponentially .we can now construct a differential equation for , as follows .due to the model s translational symmetry , the steady - state occupancy of states is just and , since displacements are allowed only from states , the mean drift velocity is now , using eq .( [ j ] ) , we obtain an ordinary differential equation , that can be integrated for as a function of .the constant of integration is fixed by which follows from normalization of the probability distribution in the definition of [ eqs .( [ q ] ) and ( [ m ] ) ] . 
finally , we obtain the required potential " , ^ 2 + \rho^2 e^{-e}}\ ] ] which , with eqs .( [ rldu ] ) , ( [ qq ] ) , ( [ occ ] ) and ( [ drift ] ) , leads to four constraints on the four transition rates in the driven system , from which the abstract quantities and have been eliminated : [ constraints ] one of these four equations is obvious ; the others are not .equation ( [ c1 ] ) is simply a re - statement of eq .( [ drift ] ) , and gives the drift velocity that results from any choice of the four transition rates .so , if we applied the usual _ ad hoc _ construction of non - equilibrium stochastic models , we would pluck four rates out of the air , use eq .( [ c1 ] ) to find the resulting drift velocity , and have no other constraints .the other three constraints have arisen from our demand that the design of the model incorporates the prior dynamics , the large - scale flux , and no other design features .note that eqs .( [ c2 ] ) and ( [ c3 ] ) express relations between forward and reverse transition rates that are generic to any continuous - time model with instantaneous transitions , that _ the product of the forward and reverse rates of a transition is equal in the driven and equilibrium ensembles_. this follows directly from eq .( [ clear ] ) with finite as .the four transition rates defined by eqs .( [ constraints ] ) have exactly the same number of free parameters as in an equilibrium model : for a given energy gap and flux , the four rates are defined up to one parameter , , that specifies the prior ratio of vertical to horizontal mobilities , as is the case in the equilibrium version of the model that was required to respect detailed balance . while the equilibrium occupancy , given by boltzmann s law , is independent of , the occupancy in the driven ensemble [ ( eq . ( [ occ ] ) ] does depend on this kinetic parameter .the transition rates prescribed by sub - ensemble dynamics are plotted as functions of velocity in fig .[ rates ] for an energy gap and mobility ratio . due to the symmetries of the comb structure ,the rates of transitions up and down ( , ) between and states are even functions of . at ,the rates take their equilibrium values , and .on increasing velocity , hops to the right ( ) become more frequent , while hops to the left ( ) are suppressed , as expected .also the particle becomes less likely to fall down ( ) into a trapped state , and is increasingly dragged out of traps ( ) by the driving force .= 7.5 cm the rates are re - plotted on log - log axes ( for positive ) in fig .[ logrates ] , using parameter values , , that were chosen to provide a separation of time - scales , emphasizing the features of the graphs .three regimes of drift velocity become apparent . on the left of the figure ( low ) is the near - equilibrium regime , where the rates , , of transitions that do not carry a flux , remain approximately constant , respecting detailed balance .this fulfils the nave expectation often applied to non - equilibrium models , that detailed balance continues to describe the physics of activated processes. meanwhile , the rate of hops to the right , , is enhanced and to the left , , is suppressed , so that the sparsely populated states exhibit the required drift velocity .= 7.5 cm the second regime of the driving velocity , is shaded grey in fig .[ logrates ] . 
in this regime, the flux constraint can no longer be satisfied by the small population of thermally - activated states .the states become mechanically activated , with particles in the immobile state promoted into the mobile state by the driving force .as increases through the shaded part of the figure , the unequal hopping rates to the right and left remain approximately constant , while the rate of activation increases and rate of trapping decreases , so that the drifting population increases .this is also apparent in fig .[ fafb ] , which shows the occupancies of the two states as a function of velocity , for this same set of parameters .once the mobile states are fully populated , and the trapped states have negligible occupancy , the bias on hops to the right can no longer remain constant while satisfying an increasing flux constraint .hence , a third regime exists at the highest values of ( fig .[ logrates ] ) , where rates and both become proportional to the flux , while the flux - impeding transitions have rates and inversely proportional to .= 7.5 cm the model we have studied here , with its simple comb - shaped state space , has some features that are generic to driven systems .we saw , in section [ factors ] , that the rate of a transition in a driven ensemble depends on three factors : its rate at equilibrium , the amount of flux that it contributes , and the difference in the willingness of the initial and final states to allow the required flux in the future .transitions that contribute a non - zero amount of flux were called type a " in ref . , while transitions between states with different promise for future flux where labelled type b " . in previous articles , the rates prescribed by sub - ensemble dynamics were calculated explicitly only for simple models , that exhibited only type a transitions due to the simplicity of their state spaces . the comb model , on the other hand , has both type a ( ) and type b ( ) transitions .another example of such a model , that was previously discussed only heuristically , is a set of dimers ( particles that occupy two adjacent lattice sites ) that perform random walks on a two - dimensional triangular lattice , while the lattice itself is driven into shear flow by sporadically cleaving and re - positioning its horizontal layers .certain arrangements of the dimers ( analogous to states of the comb model ) allow these quanta of shear , while other states ( analogous to states ) are prevented from shearing , due to the unbreakable dimers straddling two layers of the lattice , thus jamming the system .although any such many - particle system has a very complex state - space , its crucial features are reproduced in the comb model . 
when the comb model is stuck in a state , the driving force ( that derives from the statistics of the driven ensemble )pushes it into a more mobile state in order to flow .likewise , when the dimer model is in a state that will not admit a flux , it must first re - arrange its particles .the driving force achieves this by imposing a shear stress on the particles , causing them to re - orient mechanically ( as opposed to thermally , by brownian motion ) .the sub - ensemble rules prescribe ( for a given prior dynamics ) the rate of that mechanically imposed re - alignment , thereby specifying the constraints that must be met by a physically acceptable constitutive relation for the flowing system .there are certain constraints that must be satisfied by any candidate for a statistical mechanical theory of driven steady states : it must satisfy the known laws of motion , and it must give rise to the required macroscopic observables ( flux , energy etc . ) . in this article, we have assumed ; indeed , demanded ; that those are the _ only _ constraints , and derived the transition rates implied by that assumption .comparison with experimental observations will determine _ a posteriori _ whether a particular system belongs to the ergodic class that is well described by these unbiased rates , just as empirical comparison determines whether or not a static system is at thermodynamic equilibrium .if one is privy to prior information indicating that the driven system s motion is biased in some way that is not apparent in its macroscopic flux and conserved quantities , then the dynamical rules set out here should be disregarded . to violate the rules _ a priori _ without such a justification is to bias the model with arbitrary information derived from prejudice rather than from physics .such arbitrariness is not condoned for equilibrium models , and the same should be the case for macroscopically driven steady states .for example , consider how we design a stochastic model of an equilibrium system .the system is defined by some set of available states , and we must choose the rates of transitions between those states .canonical equilibrium is defined by a fixed volume , particle number , and mean energy of the system .we might choose any arbitrary set of rates , and then measure or calculate the mean energy that results when the system arrives at a steady state . certainly , that procedure would give rise to a well defined mean energy , volume and number , but that is not sufficient for us to say that the system is at equilibrium and that the transition rates are acceptable .there are constraints arising from the principle of detailed balance , which ensure that are the _ only _ parameters characterising the macroscopic state of the ensemble , beyond the definition of the system in terms of its accessible states and reversibility .we have found the generalisation of those constraints to non - equilibrium steady states .the prior is central to the formalism , and is often misinterpreted in non - equilibrium applications of information theory . 
in the present context , it is used to mean the complete set of _ physically valid _ paths that a system might take in response to the stochastic forces arising from a particular coupling to a non - equilibrium reservoir .if the reservoir can exchange energy with the system , then conservation of energy can be violated in the prior .if the coupling is only to particles at the system s boundary , then energy , momentum and angular momentum must be conserved by all internal interactions , so the prior does not include scenarios for which those laws are violated .this has not been the usual definition of the prior , in previous attempts at non - equilibrium applications of information theory .it is often assumed that our knowledge of microscopic dynamics can be discarded , and maxent will correctly reconstruct that missing information .such optimism can not be justified .for instance , maxent has been used to choose between phase - space paths that are characterised by their actions , discarding our knowledge of hamilton s principle of least action .the result is an exponential distribution in which the paths of least action are the most likely , but that result is incorrect .paths on which the action is extremized are not just _ likely _ ; they are the _ only _ paths of a classical system , and therefore the only paths that should appear in the prior if an exact calculation is wanted .the central results of this paper are the formulae for transition rates in a driven ensemble .these are formulated in two alternative ways . in the microcanonical - flux ensemble ", the flux is constrained to an exact value when time - averaged over the duration ( tending to infinity ) of each system s passage through phase - space , resulting in eq .( [ relation ] ) .the canonical - flux ensemble " , in which only the ensemble - averaged flux is constrained , leads to eq .( [ clear ] ) for the transition rates , which is exactly equivalent to the microcanonical - flux prescription .the canonical - flux equation ( [ clear ] ) makes explicit the three factors influencing a transition rate .as at equilibrium , energetics are important , making a system reluctant to take up - hill steps in its energy landscape . secondly , an exponential factor , that one might have guessed , favours transitions that impart the desired flux . the third factor prescribed by eq .( [ clear ] ) is more subtle .it describes the importance of correlations , and depends on a well - defined quantity ascribed to each microstate , that quantifies its promise for future flux .a transition is favoured if it takes the system to a state of higher promise , that is more amenable to future flux - carrying transitions .the sub - ensemble scheme has previously been demonstrated to produce the standard equations of motion for diffusion with drift , both for continuous and discrete random walks . in the present article ,the dynamical rules were evaluated for a more complex model .we have seen that , for thermally activated processes , that are governed by detailed balance at equilibrium , the sub - ensemble rules describe _ mechanical _ activation by the driving force , although detailed balance is recovered in the low flux regime . 
in the context of shear flow, mechanical activation corresponds to stress - induced re - arrangement .the fact that this statistical formalism describes the effects of non - equilibrium stresses in a natural way , makes it a promising approach for the study of shear - banding , jamming , and other shear - induced transitions of complex fluids . at the risk of repetition, we have a recipe for constructing a model of any given driven system , that is guaranteed to yield the desired flux , and to respect all the physical laws that are obeyed by the equilibrium version of the model .it is also guaranteed to have no artefacts from statistical bias .this machinery has the capacity to produce exact physics if provided with an exactly physical prior .otherwise , it will yield the least arbitrary model for the given degree of approximation .many thanks go to alistair bruce , michael cates , richard blythe , peter olmsted , tom mcleish , alexei likhtman , suzanne fielding and hal tasaki for informative discussions .rmle is grateful to the royal society for support .as stated in eq .( [ replacement ] ) , the distribution of flux during interval , that appears in eq .( [ workings1 ] ) , is uncorrelated with the initial state , and can therefore be written in terms of the instantaneous steady - state distribution of states .the distribution can be evaluate if we sub - divide the interval into sub - intervals of duration , where since .the system begins each of these sub - intervals in a state drawn randomly and independently from the steady - state distribution .these initial states are uncorrelated because .the overall flux in the interval is the mean of the fluxes in these independent sub - intervals , so that this limit distribution gives the likelihood ( under equilibrium dynamics , with a non - equilibrium initial state ) that the independent flux measurements have an improbably - large mean value .camr s theorem of large deviations states that the weight in the tail of the distribution of the mean of independent identically distributed random variables behaves as that is , the weight in the tail decays exponentially with , at a rate given by .\ ] ] dividing both sides of eq .( [ cramer ] ) by the constant gives since the lhs of eq .( [ limit ] ) is independent of the arbitrary choice of , we can infer that .let us define the function that is independent of the arbitrary quantity .writing the exponential decay law explicitly , with an unknown prefactor that varies only slowly with , \;\mbox { as } \ ; \tau/\hat{\tau}\to 0\ ] ] and differentiating with respect to gives \exp\left [ -\hat{\tau } h(\hat{j } ) \right].\ ] ] now , substituting for from eq .( [ fluxsum ] ) and taylor - expanding to first order in allows us to take the limit when substituting eqs .( [ workings1 ] ) , ( [ f ] ) and ( [ exp ] ) into ( [ relation ] ) , yielding the supremum in eq .( [ sup ] ) can be evaluated by defining the functions in eqs .( [ q ] ) and ( [ m ] ) . 
from eqs .( [ f ] ) , ( [ sup ] ) and ( [ defh ] ) , we have with the parameter [ equal to in eq .( [ sup ] ) ] given by eq .( [ j ] ) .thus the parameter in eq .( [ workings2 ] ) can be evaluated by differentiating eq .( [ h ] ) and substituting from eq .( [ j ] ) , to give resulting in eq .( [ dyncanonical ] ) .note that is a legendre transform of , and that and in eqs .( [ theta ] ) and ( [ j ] ) are conjugate variables .let us make a change of variable in eq .( [ dyncanonical ] ) , and replace the integration over _ average _ flux by one over _ total _ ( integrated ) flux . then , using now to represent the normalized probability of finding a total flux on an equilibrium trajectory of length , we can write now , the expression is the probability of accumulating an integrated flux during interval , given that the initial part of that interval is taken up with a transition from state to . since that transition carries an integrated flux , we can replace the expression by the probability of accumulating the remaining flux in the remaining time , starting from state , i.e. hence , after a change of variable , eq .( [ rewrite ] ) gives \\ \label{line2 } & = & \nu k_{ab } + \lim_{\tau\to\infty } \left [ m_b(\nu,\tau)-m_a(\nu,\tau ) \right ] -\zeta_b(\nu,\delta t)\end{aligned}\ ] ] where \ ] ] and , by substituting into eq .( [ line1 ] ) , we find , i.e. , is state - independent . given that the limit in eq .( [ zeta ] ) exists , we can write even for finite , since asymptotes to a linear function of . the state - independence of can now be used to factor out the time - derivate of from the ensemble average when differentiating eq .( [ q ] ) with respect to , yielding finally , substituting eq .( [ dmadt ] ) into ( [ line2 ] ) gives a very simple expression for the ratio of transition rates , \ ] ] from which eq .( [ clear ] ) follows .for the discrete states of the comb model of section [ model ] , with the integrated flux quantized into discrete values of the displacement , eq .( [ m ] ) becomes where is the equilibrium green function for states .that is the probability of attaining a displacement in time given that the particle initially occupies a state .an equivalent expression holds for . in the continuous - time model ,a particle occupying state at time will escape to the corresponding state at a time that is drawn stochastically from the exponential probability distribution .once excited to the state , the particle is governed by the corresponding green function , so that the green function for a particle occupying state is given by from which it follows that differentiating with respect to yields in the limit of large , the time derivative of is just , as given by eq .( [ dmadt ] ) , so that , with the definition of in eq . ( [ qm ] ) , the required result , eq . ( [ qq ] ) follows .w. gtze , in liquids , freezing and glass transition " , les houches session li , eds j .-hansen , d. levesque , j. zinn - justin ( elsevier science pub , amsterdam 1989 ) ; j .-bouchaud , j. phys .i france * 2 * , 1705 ( 1992 ) .m. e. cates , in soft and fragile matter " , eds m. e. cates and m. r. evans ( institute of physics , bristol 2000 ) ; l. e. silbert , r. s. farr , j. r. melrose and r. c. ball , j. chem .phys . * 111 * , 4780 ( 1999 ) ; m. e. cates , j. p. wittmer , j .-bouchaud and p. claudin , phys .lett . * 81 * , 1841 ( 1998 ) .
when modelling driven steady states of matter , it is common practice either to choose transition rates arbitrarily , or to assume that the principle of detailed balance remains valid away from equilibrium . neither of those practices is theoretically well founded . hypothesising ergodicity constrains the transition rates in driven steady states to respect relations analogous to , but different from the equilibrium principle of detailed balance . the constraints arise from demanding that the design of any model system contains no information extraneous to the microscopic laws of motion and the macroscopic observables . this prevents over - description of the non - equilibrium reservoir , and implies that not all stochastic equations of motion are equally valid . the resulting recipe for transition rates has many features in common with equilibrium statistical mechanics .
we investigate the application of general purpose graphics processing units ( gpus ) to solving large systems of polynomial equations with numerical methods .large systems not only lead to an increased number of operations , but also to more accumulation of numerical roundoff errors and therefore to the need to calculate in a precision that is higher than the common double precision . motivated by the need of higher numerical precision , we can formulate our goal more precisely .with massively parallel algorithms we aim to offset the extra cost of double double and quad double arithmetic and achieve quality up , a project we started in .* problem statement . *our problem is to accelerate newton s method for large polynomial systems , aiming to offset the overhead cost of double double and quad double complex arithmetic .we assume the input polynomials are given in their sparse distributed form : all polynomials are fully expanded and only those monomials that have a nonzero coefficient are stored . for accuracy and application to overdetermined systems ,we solve linear systems in the least squares sense and implement the method of gauss - newton .our original massively parallel algorithms for evaluation and differentiation of polynomials and for the modified gram - schmidt method were written with a fine granularity , making intensive use of the shared memory .the limitations on the capacity of the shared memory led to restrictions on the dimensions on the problems we could solve .these problems worsened for higher levels of precision , in contrast to the rising need for more precision in higher dimensions .* related work .* as the qr decomposition is of fundamental importance in applied linear algebra many parallel implementations have been investigated by many authors , see e.g. , .a high performance implementation of the qr algorithm on gpus is described in . in ,the performance of cpu and gpu implementations of the gram - schmidt were compared .a multicore qr factorization is compared to a gpu implementation in .gpu algorithms for approaches related to qr and gram - schmidt are for lattice basis reduction and singular value decomposition . in ,the left - looking scheme is dismissed because of its limited inherent parallelism and as in we also prefer the right - looking algorithm for more thread - level parallelism .the application of extended precision to blas is described in , see for least squares solutions .the implementation of blas routines on gpus in triple precision ( double + single float ) is discussed in . in , double double arithmetic is described under the section of error - free transformations .an implementation of interval arithmetic on cuda gpus is presented in .the other computationally intensive stage in the application of newton s method is the evaluation and differentiation of the system .parallel automatic differentiation techniques are described in , , and . concerning the gpu acceleration of polynomial systems solving , we mention two recent works . a subresultant method with a cuda implementation of the fft to solve systems of two variables is presented in . in , a cuda implementation for an nvidia gpu of a multidimensional bisection algorithm is discussed . 
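Since double double and quad double arithmetic recur throughout, here is a minimal Python sketch of the error-free transformations they are built from (Knuth's two-sum and a simplified double double addition). This only illustrates the flavour of the arithmetic; it is not the QD library nor the CUDA kernels used in this work, and the constants in the example are our own.

# minimal sketch of double double building blocks (illustrative only; the real
# QD/CUDA implementations also handle renormalization, multiplication, etc.)

def two_sum(a, b):
    """Error-free transformation: returns (s, e) with s = fl(a+b) and a+b = s+e exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def quick_two_sum(a, b):
    """Faster variant, valid when |a| >= |b|."""
    s = a + b
    e = b - (s - a)
    return s, e

def dd_add(x, y):
    """Add two double doubles x = (hi, lo), y = (hi, lo); simplified ('sloppy') version."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return quick_two_sum(s, e)

if __name__ == "__main__":
    one_third = (0.3333333333333333, 1.850371707708594e-17)  # ~1/3 to about 32 digits
    print(dd_add(one_third, one_third))  # hi/lo pair approximating 2/3 to ~twice double precision

Every double double operation therefore costs a fixed multiple of ordinary double operations, which is precisely the overhead the massively parallel algorithms aim to offset.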
* our contributions .* for the polynomial evaluation and differentiation we reformulate algorithms of algorithmic differentiation applying optimized parallel reduction to the products that appear in the reverse mode of differentiation .because our computations are geared towards extended precision arithmetic which carry a higher cost per operation , we can afford a fine granularity in our parallel algorithms . compared to our previous gpu implementations in , we have removed the restrictions on the dimensions and are now able to solve problems involving several thousands of variables .the performance investigation involves mixing the memory - bound polynomial evaluation and differentiation with the compute - bound linear system solving .we distinguish three tasks in the evaluation and differentiation of polynomials in several variables given in their sparse distributed form .first , we separate the high degree parts into common factors and then apply algorithmic differentiation to products of variables . in the third stage , monomials are multiplied with coefficients and the terms are added up .a monomial in variables is defined by a sequence of natural numbers , for .we decompose a monomial as follows : where is the product of all variables that have a nonzero exponent .the variables that appear with a positive exponent occur in with exponent , for .we call the monomial a _ common factor _ , as this factor is a factor in all partial derivatives of the monomial . using tables of pure powers of the variables , the values of the common factors are products of the proper entries in those tables . the cost of evaluating monomials of high degrees is thus deferred to computing powers of the variables .the table of pure powers is computed in shared memory by each block of threads .consider a product of variables : .the straightforward evaluation and the computation of the gradient takes multiplications .recognizing the product as the example of speelpenning in algorithmic differentiation , the number of multiplications to evaluate the product and compute all its derivatives drops to .the computation of the gradient requires in total extra memory locations .we need locations for the intermediate forward products , . for the backward products , only one extra temporary memory location is needed , as this location can be reused each time for the next backward product , if the computation of the backward products is interlaced with the multiplication of the forward with the corresponding backward product . for , figure [ figcircuit1 ] displays two arithmetic circuits , one to evaluate a product of variables and another to compute its gradient .the second circuit is executed after the first one , using the same tree structure that holds intermediate products . at a node in a circuit, we write if the multiplication happens at the node and we write if we use the value of the product . 
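The forward/backward scheme just described can be sketched in a few lines of sequential Python before returning to the circuit representation (the function name and the small test vector are ours; the paper's interest is in the GPU version of this computation):

def product_and_gradient(x):
    """Evaluate p = x[0]*...*x[n-1] and all partial derivatives dp/dx[k]
    with O(n) multiplications, using forward and backward partial products."""
    n = len(x)
    fwd = [x[0]] * n              # fwd[k] = x[0]*...*x[k]
    for k in range(1, n):
        fwd[k] = fwd[k - 1] * x[k]
    grad = [0.0] * n
    back = 1.0                    # running backward product x[k+1]*...*x[n-1]
    for k in range(n - 1, 0, -1):
        grad[k] = fwd[k - 1] * back
        back *= x[k]
    grad[0] = back
    return fwd[n - 1], grad

if __name__ == "__main__":
    value, grad = product_and_gradient([1.0, 2.0, 3.0, 4.0])
    print(value, grad)            # 24.0, [24.0, 12.0, 8.0, 6.0]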
at most one multiplication is performed at each node of the circuit .

[ figure [ figcircuit1 ] : two arithmetic circuits for a product of four variables -- one evaluating the product in a binary tree and one computing its gradient ; the layout coordinates of the original picture environment are omitted here . ]

denote by the product , for all natural numbers between and . figure [ figcircuit2 ] displays the arithmetic circuit to compute all derivatives of a product of 8 variables , after the evaluation of the product in a binary tree .

[ figure [ figcircuit2 ] : the arithmetic circuit computing all eight derivatives on top of the binary tree used to evaluate the product ; picture coordinates omitted . ]

to count the number of multiplications needed to evaluate the product , we restrict to the case of a complete binary tree , i.e. the number of variables is a power of two , and sum the multiplications over the levels of the tree . the circuit to compute all derivatives contains a tree of the same size , so the number of multiplications equals the number of nodes in that tree , minus 3 for the nodes closest to the root , which require no computations , plus the multiplications at the leaves . the total number of multiplications to evaluate a product of variables and compute its gradient with a binary tree therefore stays of the same order as before . while keeping the same operational cost as the original algorithm , the organization of the multiplications in a binary tree incurs less roundoff . in particular , the roundoff error for the evaluated product is proportional to the depth of the tree instead of to the number of factors , as in the straightforward multiplication ; for a large number of variables this reorganization improves the accuracy by two decimal places . the improved accuracy of the evaluated product does not cost more storage , as the size of the binary tree is unchanged . for the derivatives , the roundoff error is bounded by the number of levels in the arithmetic circuit . while this bound is still better than that of the straightforward scheme , the improved accuracy for the gradient comes at the extra cost of additional memory locations , needed as nodes in the arithmetic circuit for the gradient . in shared memory , the memory locations for the input variables are overwritten by the corresponding components of the gradient , e.g. the derivative with respect to a variable occupies the location previously holding that variable .
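The binary-tree reorganization can likewise be sketched recursively (a serial illustration of the arithmetic circuit; the CUDA version maps the tree levels onto collaborating threads in a block, which this sketch does not attempt to show):

def tree_product_and_gradient(x):
    """Evaluate the product of the entries of x and its gradient by splitting
    the factors into two halves: the gradient of the left half is the gradient
    of the left product scaled by the right product, and vice versa."""
    n = len(x)
    if n == 1:
        return x[0], [1.0]
    left_val, left_grad = tree_product_and_gradient(x[: n // 2])
    right_val, right_grad = tree_product_and_gradient(x[n // 2 :])
    grad = [g * right_val for g in left_grad] + [g * left_val for g in right_grad]
    return left_val * right_val, grad

if __name__ == "__main__":
    value, grad = tree_product_and_gradient([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
    print(value)   # 40320.0
    print(grad)    # grad[k] = product of all entries except x[k]

The recursion makes the reduced depth explicit: the value and every gradient entry are each produced by a chain of roughly log2(n) multiplications rather than a chain of length n.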
in the original formulation of the computation of the example of speelpenning ,only one thread performed all computation for one product and the parallelism consisted in having enough monomials in the system to occupy all threads working separately on different monomials .the reformulation of the evaluation and differentiation with a binary tree allows for several threads to collaborate on the computation of one large product .the reformulation refined the granularity of the parallel algorithm and we applied the techniques suggested in .if is not a power of 2 , then for some positive and , denote .the first threads load two variables and are in charge of the product of those two variables , while other threads load just one variable .the multiplication of values for variables of consecutive index , e.g. : will result in a bank conflict in shared memory as threads require data from an even and odd bank . to avoid bank conflicts , the computations are rearranged , e.g. as , so thread 0 operates on and thread 1 on .table [ tabmoneval ] shows the results on the evaluation and differentiation of products of variables in double arithmetic , applying the techniques of .the first gpu algorithm is the reverse mode algorithm that takes operations executed by one thread per monomial .when all threads in a block collaborate on one monomial in the second gpu algorithm we observe a significant speedup .speedups and memory bandwidth improve when resolving the bank conflicts in the third improvement .the best results are obtained adding unrolling techniques ..evaluation and differentiation of 65,024 monomials in 1,024 doubles .times on the k20c obtained with nvprof ( the nvidia profiler ) are in milliseconds ( ms ) . dividing the number of bytes read and written by the time gives the bandwidth .times on the cpu are on one 2.6ghz intel xeon e5 - 2670 , with code optimized with the -o2 flag .[ cols=">,^,>,>,>",options="header " , ] we end this paper with the application of newton s method on the cyclic -roots problem for .the setup is as follows .we generate a random complex vector and consider the system , for .for , we have that is a solution and for sufficiently close to 1 , newton s method will converge .this setup corresponds to the start in running a newton homotopy , for going from one to zero . in complex double double arithmetic , with seven iterationsnewton s method converges to the full precision .the cpu time is 78,055.71 seconds while the gpu accelerated time is 5,930.96 seconds , reducing 21 minutes to about 1.6 minutes , giving a speedup factor of about 13 .to accurately evaluate and differentiate polynomials in several variables given in sparse distributed form we reorganized the arithmetic circuits so all threads in block can contribute to the computation .this computation is memory bound for double arithmetic and the techniques to optimize a parallel reduction are beneficial also for real double double arithmetic , but for complex double double and quad double arithmetic the problem becomes compute bound .we illustrated our cuda implementation on two benchmark problems in polynomial system solving .for the first problem , the cost of evaluation and differentiation grows linearly in the dimension and then the cost of linear system solving dominates . 
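For orientation, the Gauss-Newton update used in these runs has the following shape in a toy serial setting. The two-equation system, the starting point and the use of numpy's least squares solver are illustrative choices of ours, not the paper's CUDA/quad double code:

import numpy as np

def f(z):
    # an arbitrary small polynomial system in two complex unknowns
    return np.array([z[0] ** 2 + z[1] ** 2 - 1.0,
                     z[0] * z[1] - 0.25])

def jac(z):
    return np.array([[2.0 * z[0], 2.0 * z[1]],
                     [z[1],       z[0]]])

def gauss_newton(z, max_iters=10, tol=1e-14):
    """Gauss-Newton iteration: solve J dz = -f in the least squares sense and update."""
    for _ in range(max_iters):
        residual = f(z)
        dz, *_ = np.linalg.lstsq(jac(z), -residual, rcond=None)
        z = z + dz
        if np.linalg.norm(dz) < tol:
            break
    return z

if __name__ == "__main__":
    root = gauss_newton(np.array([1.0 + 0.1j, 0.2 + 0.0j]))
    print(root, np.abs(f(root)).max())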
for systems with polynomials of high degree such as the cyclic -roots problem ,the implementation to evaluate the system and compute its jacobian matrix achieved double digits speedups , sufficiently large enough to compensate for one extra level of precision . with gpu accelerationwe obtain more accurate results faster , for larger dimensions .this material is based upon work supported by the national science foundation under grant no .the microway workstation with the nvidia tesla k20c was purchased through a uic las science award .d. adrovic and j. verschelde .polyhedral methods for space curves exploiting symmetry applied to the cyclic -roots problem . in v.p .gerdt , w. koepf , e.w .mayr , and e.v .vorozhtsov , editors , _ proceedings of casc 2013 _ , pages 1029 , 2013 .e. agullo , c. augonnet , j. dongarra , m. faverge , h. ltaief , s. thibault , and s. tomov .factorization on a multicore node enhanced with multiple gpu accelerators . in _ proceedings of the 2011 ieee international parallel distributed processing symposium ( ipdps 2011 ) _ , pages 932943 .ieee computer society , 2011 .m. anderson , g. ballard , j. demmel , and k. keutzer .communication - avoiding qr decomposition for gpus . in _ proceedings of the 2011 ieee international parallel distributed processing symposium ( ipdps 2011 ) _ , pages 4858 .ieee computer society , 2011 .t. bartkewitz and t. gneysu . full lattice basis reduction on graphics cards . in f.armknecht and s. lucks , editors , _weworc11 proceedings of the 4th western european conference on research in cryptology _ , volume 7242 of _ lecture notes in computer science _ , pages 3044 .springer - verlag , 2012 . c. bischof , n. guertler , a. kowartz , and a. walther .parallel reverse mode automatic differentiation for openmp programs with adol - c . in c.bischof , h.m .bcker , p. hovland , u. naumann , and j. utke , editors , _ advances in automatic differentiation _ , pages 163173 .springer - verlag , 2008 .g. bjrck and r. frberg .methods to `` divide out '' certain solutions from systems of algebraic equations , applied to find all cyclic 8-roots . in m. gyllenberg and l.e .persson , editors , _ analysis , algebra and computers in math .research _ , volume 564 of _ lecture notes in mathematics _, pages 5770 .dekker , 1994 .faugre . finding all the solutions of cyclic 9 using grbner basis techniques . in _computer mathematics - proceedings of the fifth asian symposium ( ascm 2001 ) _ , volume 9 of _ lecture notes series on computing _ , pages 112 .world scientific , 2001 .b. foster , s. mahadevan , and r. wang . a gpu - based approximate svd algorithm . in _ parallel processing and applied mathematics _ , volume 7203 of _ lecture notes in computer science volume _ ,pages 569578 .springer - verlag , 2012 .l. gonzalez - vega .some examples of problem solving by using the symbolic viewpoint when dealing with polynomial systems of equations . in j.fleischer , j. grabmeier , f.w .hehl , and w. kchlin , editors , _ computer algebra in science and engineering _ ,pages 102116 .world scientific , 1995 .m. grabner , t. pock , t. gross , and b. kainz .automatic differentiation for gpu - accelerated 2d/3d registration . in c. bischof , h.m .bcker , p. hovland , u. naumann , and j. utke , editors , _ advances in automatic differentiation _ , pages 259269 .springer - verlag , 2008 .y. hida , x.s .li , and d.h .algorithms for quad - double precision floating point arithmetic . 
in _15th ieee symposium on computer arithmetic ( arith-15 2001 ) , 11 - 17 june 2001 , vail , co , usa _ , pages 155162 .ieee computer society , 2001 . shortened version of technical report lbnl-46996 , software at http://crd.lbl.gov// mpdist .a. kerr , d. campbell , and m. richards .decomposition on gpus . in d.kaeli and m. leeser , editors , _ proceedings of 2nd workshop on general purpose processing on graphics processing units ( gpgpu09 ) _ , pages 7178 .acm , 2009 .klopotek and j. porter - sobieraj . solving systems of polynomial equations on a gpu . in m. ganzha , l. maciaszek , and m. paprzycki , editors , _ preprints of the federated conference on computer science and information systems , september 9 - 12 , 2012 , wroclaw , poland _ , pages 567572 , 2012 .x. li , j. demmel , d. bailey , g. henry , y. hida , j. iskandar , w. kahan , s. kang , a. kapur , m. martin , b. thompson , t. tung , and d. yoo .design , implementation and testing of extended and mixed precision blas ., 28(2):152205 , 2002 . this is a shortened version of technical report lbnl-45991 .m. lu , b. he , and q. luo .supporting extended precision on graphics processors . in a. ailamaki and p.a .boncz , editors , _ proceedings of the sixth international workshop on data management on new hardware ( damon 2010 ) , june 7 , 2010 , indianapolis , indiana _ , pages 1926 , 2010 .software at http://code.google.com / p / gpuprec/. d. mukunoki and d. takashashi . implementation and evaluation of triple precision blas subroutines on gpus . in _ proceedings of the 2012 ieee 26th international parallel and distributed processing symposium workshops .21 - 25 may 2012 , shanghai china _ , pages 13721380 .ieee computer society , 2012 .j. utke , l. hascot , p. heimbach , c. hill , p. hovland , and u. naumann . toward ajoinable mpi . in _ proceedings of the 10th ieee international workshop on parallel anddistributed scientific and engineering computing ( pdsec 2009 ) _ ,pages 18 , 2009 .j. verschelde and g. yoffe .polynomial homotopies on multicore workstations . in m.m .maza and j .- l .roch , editors , _ proceedings of the 4th international workshop on parallel symbolic computation ( pasco 2010 ) , july 21 - 23 2010 , grenoble , france _ , pages 131140 .acm , 2010 .j. verschelde and g. yoffe . evaluating polynomials in several variables and their derivatives on a gpu computing processor . in _ proceedings of the2012 ieee 26th international parallel and distributed processing symposium workshops ( pdsec 2012 ) _ , pages 13911399 .ieee computer society , 2012 .j. verschelde and g. yoffe .orthogonalization on a general purpose graphics processing unit with double double and quad double arithmetic . in _ proceedings of the 2013 ieee 27th international parallel and distributed processing symposium workshops ( pdsec 2013 ) _ , pages 13731380 .ieee computer society , 2013 .
in order to compensate for the higher cost of double double and quad double arithmetic when solving large polynomial systems , we investigate the application of an nvidia tesla k20c general purpose graphics processing unit . the focus of this paper is on newton's method , which requires the evaluation of the polynomials , their derivatives , and the solution of a linear system to compute the update to the current approximation of the solution . the reverse mode of algorithmic differentiation for a product of variables is rewritten in a binary tree fashion , so that all threads in a block can collaborate in the computation . for double arithmetic , the evaluation and differentiation problem is memory bound , whereas for complex quad double arithmetic the problem is compute bound . with acceleration we can double the dimension and obtain results that are twice as accurate in about the same time . * key words and phrases . * compute unified device architecture ( cuda ) , double double arithmetic , differentiation and evaluation , general purpose graphics processing unit ( gpu ) , newton's method , least squares , massively parallel algorithm , modified gram - schmidt method , polynomial evaluation , polynomial differentiation , polynomial system , qr decomposition , quad double arithmetic , quality up .
optically imaging the activity in a neuronal circuit is limited by the scanning speed of the imaging device .therefore , commonly , only a small fixed part of the circuit is observed during the entire experiment .however , in such an experiment it can be hard to infer from the observed activity patterns whether ( 1 ) a neuron a directly affects neuron b , or ( 2 ) another , unobserved neuron c affects both a and b. to deal with this issue , we propose a shotgun observation scheme , in which , at each time point , we randomly observe a small percentage of the neurons from the circuit . and so , no neuron remains completely hidden during the entire experiment and we can eventually distinguish between cases ( 1 ) and ( 2 ) .previous inference algorithms can not handle efficiently so many missing observations .we therefore develop a scalable algorithm , for data acquired using shotgun observation scheme - in which only a small fraction of the neurons are observed in each time bin . using this kind of simulated data , we show the algorithm is able to quickly infer connectivity in spiking recurrent networks with thousands of neurons .the simultaneous activity of hundreds - and even thousands - of neurons is now being routinely recorded in a wide range of preparations .the number of recorded neurons is expected to grow exponentially over the years .this , in principle , provides the opportunity to infer the `` functional connectivity '' of neuronal networks , _i.e. _ a statistical estimate of how neurons are affected by each other , and by stimulus .the ability to accurately estimate large , possibly time - varying , neural connectivity diagrams would open up an exciting new range of fundamental research questions in systems and computational neuroscience . therefore , the task of estimating connectivity from neural activity can be considered one of the central problems in statistical neuroscience - attracting much attention in recent years ( e.g. , see and references therein ) .perhaps the biggest challenge for inferring neural connectivity from activity and in more general network analysis is the presence of hidden nodes which are not observed directly . despite swift progress in simultaneously recording activity in massive populations of neurons , it is still beyond the reach of current technology to simultaneously monitor a complete network of spiking neurons at high temporal resolution ( though see for some impressive recent progress in that direction ) .since estimation of functional connectivity relies on the analysis of the inputs to target neurons in relation to their observed spiking activity , the inability to monitor all inputs can result in persistent errors in the connectivity estimation due to model miss - specification . more specifically , common input errors in which correlations due to shared inputs from unobserved neurons are mistaken for direct , causal connections plague most naive approaches to connectivity estimation . developing a robust approach for incorporating the latent effects of such unobserved neurons remains an area of active research in connectivity analysis .in this paper we propose an experimental design which greatly ameliorates these common - input problems .the idea is simple : if we can not observe all neurons in a network simultaneously , maybe we can instead observe many overlapping sub - networks in a serial manner over the course of a long experiment. 
then we use statistical techniques to patch the full estimated network back together , analogously to shotgun genetic sequencing .obviously , it is not feasible to purposefully sample from many distinct sub - networks at many different overlapping locations using multi - electrode recording arrays , since multiple re - insertions of the array would lead to tissue damage .however , fluorescence - based imaging of neuronal calcium or voltage dynamics make this approach experimentally feasible . in the ideal experiment, we would target our observations so they fully cover a neuronal circuit together with all its inputs ( figure [ fig : observation schemes ] ) .however , in each time step , we would only observe a random fraction of all targeted neurons . in this shotgun approach only a small fraction of the networkis observed at any single time .however , connectivity estimation with missing observations is particularly challenging since exact bayesian inference with unobserved spikes is generally intractable .therefore , markov chain monte - carlo ( mcmc ) methods have been previously used to infer the unobserved activity ( spikes ) by sampling . however , such methods typically do not scale well for large networks .mcmc methods are computationally expensive ( though , sometimes highly parallelizable ) , and usually take a long time to converge .variational approaches , may speed up inference , but have not been shown to be robust to missing observations .fortunately , as we show here , given a shotgun sampling scheme , it is not actually required to infer the unobserved spikes .we considerably simplify the loglikelihood using the expected loglikelihood approximation , and a generalized central limit theorem ( clt ) argument to approximate neuronal input as a gaussian variable when the size of the network is large .the simplified loglikelihood depends only on the empiric second order statistics of the spiking process .importantly , these sufficient statistics can be calculated even with partial observations , by simply `` ignoring '' any unobserved activity . in order to obtain an accurate estimation of the connectivity, this simplified loglikelihood can be very efficiently maximized together with various types of prior information .an abundance of such prior information is available on both connection probabilities and synaptic weight distributions as a function of cell location and identity . in addition , cutting edge labeling and tissue preparation methods such as brainbow and clarity are beginning to provide rich anatomical data about potential connectivity ( e.g. , the degree of coarse spatial overlap between a given set of dendrites and axons ) that can be incorporated into these priors . 
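Before turning to how this prior information enters, the earlier point about sufficient statistics under partial observation can be made concrete with a short Python sketch: generate a shotgun observation mask, then estimate each mean and each one-step-lagged second moment only from the time bins in which the relevant entries were actually observed. This is the naive "ignore the missing entries" estimator described above; all variable names are ours.

import numpy as np

rng = np.random.default_rng(1)

def shotgun_mask(T, N, p_obs):
    """O[t, i] = True if neuron i is observed in time bin t (each bin independently)."""
    return rng.random((T, N)) < p_obs

def partial_statistics(S, O):
    """Empirical means m_i and lagged second moments C_ij = <s_{i,t} s_{j,t-1}>,
    each averaged only over bins where the required entries are observed."""
    T, N = S.shape
    m = np.array([S[O[:, i], i].mean() for i in range(N)])
    C = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            ok = O[1:, i] & O[:-1, j]            # both entries observed
            if ok.any():
                C[i, j] = (S[1:, i][ok] * S[:-1, j][ok]).mean()
    return m, C

if __name__ == "__main__":
    T, N, p_obs = 5000, 30, 0.2
    S = rng.integers(0, 2, size=(T, N))          # placeholder spikes; use a GLM simulation in practice
    O = shotgun_mask(T, N, p_obs)
    m, C = partial_statistics(S, O)
    print(m[:5], C[0, :5])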
exploiting this prior information can greatly improve inference quality , as was already demonstrated in previous network inference papers .we present numerical results which demonstrate the effectiveness of our approach on a on a simulated recurrent network of spiking neurons .first , we demonstrate that the shotgun experimental design can largely eliminate the biases induced by common input effects , as originally intended ( figure [ fig : common input problem ] ) .specifically , we show that we can quickly infer the network connectivity for large networks , with a low fraction of neurons observed in each time bin .for example , our algorithm can be used to infer the connectivity of a sparse network with neurons and connections , given timebins of spike data in which only of the neurons are observed in each time bin , after running less than a minute on a standard laptop ( figure [ fig : sparsity-1 ] ) .this is achieved by assuming only a sparsity inducing prior on the weights .our parameter scans suggest that our method could be used for arbitrarily low observation ratios ( figure [ fig : parameter - scans p_obs t ] ) and arbitrarily high number of neurons ( figure [ fig : parameter - scans n - t ] ) , given long enough experiments .then , we demonstrate ( figure [ fig : prior results ] ) the usefulness of the following additional pieces of prior information : ( 1 ) dale s law - all outgoing synaptic connections from the same neuron have the same sign .( 2 ) neurons have several types - and connection strength between two types is affected by the identity of these types .( 3 ) the probability of connection between neurons is distance dependent .performance can also be improved by using external stimuli ( figure [ fig : the - effect - of - stimulus ] ) , similar to the stimulus that can be induced by persistent light sensitive ion channels .suppose we wish to perform an experiment in order to measure the functional connectivity in a neuronal circuit by simply observing its activity . in this experiment, we optically capture the neuronal spikes , visible through the use of genetically encoded calcium or voltage indicators .current imaging methods ( _ e.g. _ , two - photon or light sheet microscopy ) record this neuronal activity by scanning through the neuron tissue . the scanning protocol , and consequently , the identity of the observed neurons , have various constraints .an important constraint is the maximal scanning speed of the recording device .since the scanning speed is limited , we can not observe all the neurons in the circuit all the time with sufficient spatio - temporal resolution .we must decide where to focus our observations .commonly , observations are focused on a fixed subset of the neurons in the circuit . however , as we will show here , this procedure can generate persistent errors due to unobserved common inputs , that generate spurious correlations in the network activity . 
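Concretely, the simulated data used below are generated from a logistic spiking network of the following form; the precise model is specified in eqs. [eq:logistic]-[eq:u] later in the text, so this sketch (network size, sparsity, weight scale) should be read as a generic stand-in rather than the exact simulation protocol:

import numpy as np

rng = np.random.default_rng(0)

def simulate_glm(W, b, T):
    """Simulate spikes S[t, i] from p(s_{i,t}=1) = sigmoid(sum_j W[i,j] s_{j,t-1} + b[i])."""
    N = W.shape[0]
    S = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        u = W @ S[t - 1] + b
        p = 1.0 / (1.0 + np.exp(-u))
        S[t] = rng.random(N) < p
    return S

if __name__ == "__main__":
    N, T, sparsity = 50, 2000, 0.1
    mask = rng.random((N, N)) < sparsity
    W = mask * rng.normal(0.0, 1.0, (N, N)) - 2.0 * np.eye(N)  # negative self-coupling mimics refractoriness
    b = -2.0 * np.ones(N)
    S = simulate_glm(W, b, T)
    print("mean firing rate per bin:", S.mean())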
to prevent this , we propose a shotgun observation approach , in which we scan the network at a random order ( figure [ fig : observation schemes ] , _ top _ ) .thus , at each time bin of experimental data , only a random fraction of the neurons in the network are observed .intuitively , the main goal of such an approach is to evenly distribute our observations on all pairs of neurons , so that all the relevant spike correlations could be eventually estimated .note that simple serial scanning of the circuit in contiguous blocks does not accomplish this purpose ( figure [ fig : observation schemes ] , _ bottom _ ) . however , several other scanning schemes do ( random or deterministic ) - including the random shotgun scheme , on which we focus here .in this section we test the shotgun scheme on simulated -long spike data from a recurrent spiking neural network with neurons .specifically , we use a generalized linear network model ( glm , see eqs .[ eq : logistic]-[eq : u ] ) with synaptic connectivity matrix .typically , is sparse , so , the average probability that two neurons are directly connected to each other , is low . to infer we use the inference method described in section [ sec : bayesian inference ] .this approximate bayesian method can exploit various priors ( section [ sub : priors ] ) , such as the sparsity of , to improve estimation quality .we define as the fraction of neurons observed at each timebin , _i.e. _ , the observation probability in the shotgun scheme . for a general list of basic notation ,see table [ tab : basic - notation ] .we start ( section [ sub : dealing - with - common ] ) with a qualitative demonstration that the shotgun approach can be used without the usual persistent bias resulting from common inputs .then , in section [ sub : connectivity - estimation - with ] , we perform quantitative tests to show that our estimation method is effective for various network sizes , observation probabilities , and stimulus input amplitudes .this is done using only a sparsity prior .lastly , in section [ sub : additional - priors ] , we show how more advanced priors can improve estimation quality .we conclude ( section [ sec : discussion ] ) that the limited scanning speed of the any imaging device recording neuronal activity is not a fundamental barrier which prevents consistent estimation of functional connectivity .in this section we use a toy network with neurons to visualize the common input problem , and its suggested solution - the `` shotgun '' approach .errors caused by common inputs are particularly troublesome for connectivity estimation , since they can persist even when .therefore , for simplicity , in this case we work in a regime where the experiment is long and data is abundant ( ) . in this regime ,any prior information we have on the connectivity becomes unimportant so we simply use the maximum likelihood ( ml ) estimator ( eq . [ eq : w_mle ] ) .we chose the weight matrix to illustrate a worst - case common input condition ( figure [ fig : common input problem]a ) .note that the upper - left third of is diagonal ( figure [ fig : common input problem]b ) : _ i.e. 
_ , neurons share no connections to each other , other than the self - connection terms .however , we have seeded this with many common - input motifs , in which neurons and ( with ) both receive common input from neurons with .if we use a `` shotgun '' approach and observe the whole network with , we obtain a good ml estimate of the network connectivity , including the upper - left submatrix ( figure [ fig : common input problem]c ) .now suppose instead we concentrate all our observations on these neurons , so within that sub - network , but all the other neurons are unobserved .if common input was not a problem then our estimation quality should improve on that submatrix ( since we have more measurements per neuron ) .however , if common noise is problematic then we will hallucinate many nonexistent connections ( i.e. , off - diagonal terms ) in this submatrix .figure [ fig : common input problem]d illustrates exactly this phenomenon .in contrast to the shotgun case , the resulting estimates are significantly corrupted by the common input effects .next , we quantitatively test the performance of maximum a posteriori ( map ) estimate of the network connectivity matrix , using a sparsity prior ( section [ sub : sparse - prior ] ) on a simulated neural network model .the neurons are randomly placed on a 1d lattice ring in locations . to determinethe connection probability ( the probability that is not zero ) we used , where is the distance between two neurons , and was chosen so that the network sparsity is .for self connectivity , we simply used to account for the refractory period .all the non - zero off - diagonal weights are sampled uniformly from the range ] ( defined in eq .[ eq : u ] ) , ] is the maximal eigenvalue of .we wish to find that solves using eq .[ eq : grad ] , we obtain we define so we can write eq .[ eq : grad l = 00003d0 ] as which is solved by substituting this into eq .[ eq : q ] , we have so noticed empirically that using eq .[ eq : p(s|w , b * ) appendix ] as the simplified loglikelihood tends to cause some inaccuracy in our estimates of the weight gains and biases .to correct for this error , we re - estimate the gains and biases in the following way .suppose we obtain a map estimate ( eq . [ eq : ml ] ) after using the above profile likelihood .next , we examine again the original likelihood ( without any approximation ) +c\,.\ ] ] we assume that the map estimate is accurate , up to a scaling constant in each row , so , and we obtain +c\nonumber \\ & = & t\sum_{i=1}^{n}\left[a_{i}\sum_{j=1}^{n}\hat{w}_{i , j}\left\langle s_{i , t}s_{j , t-1}\right\rangle _ { t}+b_{j}\left\langle s_{i , t}\right\rangle _ { t}-\left\langle \ln\left(1+\exp\left(a_{i}\sum_{j=1}^{n}\hat{w}_{i , j}s_{j , t-1}+b_{j}\right)\right)\right\rangle _ { t}\right]\label{eq : l(a , b)}\\ & \approx & t\sum_{i=1}^{n}\left[a_{i}\sum_{j=1}^{n}\hat{w}_{i , j}\left(\tilde{\sigma}_{i , j}^{\left(1\right)}+\tilde{m}_{i}\tilde{m}_{j}\right)+b_{j}\tilde{m}_{i}-\left\langle \ln\left(1+\exp\left(a_{i}z_{i , t}+b_{j}\right)\right)\right\rangle _ { t}\right]+c\,,\end{aligned}\ ] ] where in the last line we used the expected loglikelihood approximation with clt again , and denoted ( recall eqs .[ eq : m]-[eq : sigma ] ) sampling from [ eq : z_i ] ( we found that about samples was usually enough ) , we can calculate the expectation in ( eq . [ eq : l(a , b ) ] ) .next , we can now maximize the likelihood in ( eq . 
[ eq : l(a , b ) ] ) for each gain and bias , separately , by solving an easy 2d unconstrained optimization problem .the new gains can now be used to adjust our estimation of .we define the proximal operator fista solves the following minimization problem \,.\ ] ] from ( * ? ? ?4.1 - 4.3 ) , we have the following algorithm [ alg : the - fista - algorithm . ] , where the component of the gradient are given by eq .[ eq : grad ] , and the lipschitz constant is given by eq .[ eq : lipshitz constant ] .input : : initial point , and ( lipschitz constant of ) .initialize : : , , .repeat : : for compute \\ t_{k+1 } & = & \left(1+\sqrt{1 + 4t_{k}^{2}}\right)/2\\ \mathbf{y}^{\left(k+1\right ) } & = & \mathbf{w}^{\left(k\right)}+\left(\frac{t_{k}-1}{t_{k+1}}\right)\left(\mathbf{w}^{\left(k\right)}-\mathbf{w}^{\left(k-1\right)}\right)\,.\end{aligned}\ ] ]as explained in section [ sub : sparse - prior ] , we use a sparsity promoting prior ( eqs .[ eq : l1 prior]-[eq : lambda mask ] ) , which depends on a regularization constant , to generate and estimate of the connectivity matrix .though this constant is unknown in advance , we can set it using the sparsity level of * * , defined as we aim for this to be approximately equal to some target sparsity level .this is done using a fast binary search algorithm ( algorithm [ alg : a - binary - search algorithm ] ) , that exploits the fact that is non - increasing in .this monotonic behavior can be observed from the fixed point of the fista algorithm \,,\ ] ] for which the number of zeros components is clearly non - decreasing with .input : : target sparsity level - , tolerance level - , measured sparsity - .initialize : : initial points and , where .repeat : : + ; ; if ; ; + return : : end ; ; if ; ; * * + : : else ; ; + : : end ; ;in this appendix we explain how greedy forward algorithms can be used to solve some of the optimization problems described in section [ sub : priors ] . in this section ,we use the notation . in this appendixwe explain in detail how to greedily maximize exactly for each row in the weight matrix by incrementally extending the support of non - zero weights .since the problem is separable ( and parallelizable ) over the rows ( eq . [ eq : posterior decomposition ] ) , we do this on a single row - . for that row ,the normalized profile loglikelihood is -h\left(m_{\alpha}\right)\sqrt{1+\frac{\pi}{8}\sum_{k , j}w_{\alpha , j}\sigma_{k , j}^{\left(0\right)}w_{\alpha , k}}\,,\label{eq : l(w_alpha)}\ ] ] first we define the support of our estimate ( the set of non - zero components ) to be , and we initialize .then , we define , the complement of the support ( the set of zero weights ) , and initialize .lastly , we define , the set of non - zero weights . since , then we also initialize . given and from the previous step , we extend the support , by finding for each potential new weight index the best weight where is simply under the constraint that , and substituting these constraints into eq .[ eq : l(w_alpha ) ] , we obtain we can find the maximizing values for exactly , as the equation is quadratic in : with once we have updated the support by finding , the weight that maximizes this log - likelihood , we can update the support then , we find the maximum likelihood estimate of weights , constrained to the new support , to be where andwe used similar derivations as in appendix [ sec : maximum - likelihood - estimator ] .we repeat this process until is within tolerance of our sparsity constraint ( ) . 
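The FISTA iteration of algorithm [alg:the-fista-algorithm.] is short enough to sketch generically in Python. Here the smooth gradient, its Lipschitz constant and the penalty weight are supplied by the caller (in the paper they come from the simplified loglikelihood, eq. [eq:grad] and eq. [eq:lipshitz constant]), and the toy least squares problem at the bottom is only a placeholder:

import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(grad_f, L, lam, w0, iters=200):
    """Generic FISTA for min_w f(w) + lam*||w||_1, with f smooth and L-Lipschitz gradient."""
    w_prev = w0.copy()
    y = w0.copy()
    t = 1.0
    for _ in range(iters):
        w = soft_threshold(y - grad_f(y) / L, lam / L)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = w + ((t - 1.0) / t_next) * (w - w_prev)
        w_prev, t = w, t_next
    return w_prev

if __name__ == "__main__":
    # toy sparse least squares: f(w) = 0.5*||A w - s||^2
    rng = np.random.default_rng(2)
    A = rng.normal(size=(200, 50))
    w_true = np.zeros(50); w_true[:5] = rng.normal(size=5)
    s = A @ w_true + 0.01 * rng.normal(size=200)
    grad_f = lambda w: A.T @ (A @ w - s)
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of grad f
    w_hat = fista(grad_f, L, lam=1.0, w0=np.zeros(50))
    print(np.count_nonzero(w_hat), np.linalg.norm(w_hat - w_true))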
in the stochastic block model ( section [ sub : stochastic - block - model ] ) , we add a quadratic penalty to the profile loglikelihood ( eq . [ eq : sbm penalty ] ) , which penalizes non - zero weights which are far from some block matrix , which represent the mean value of the non - zero connection strength between the different types of neurons .we do this by first replacing in eqs .[ eq : greedy step 1]-[eq : greedy step 2 ] with where represents the underlying block matrix .in order to maximize this , we again differentiate and equate to zero this equation , when expanded out , takes the form where we defined since the polynomial is quartic , we can find the optimal weight by running a standard polynomial solver ( the roots ( ) function in matlab ) .once the support for the row has the desired sparsity , we include this penalty in the calculation of the optimal set of weights , so instead of eq .[ eq : greedy step 3 ] , we have as explained in section [ sub : distance - dependence ] , we can also incorporate into the model a prior on , the probability of a connection between neurons as a function of the distance between them . to promote the selection of more probable connections using this prior, we subtract the penalty from in eqs .[ eq : greedy step 1]-[eq : greedy step 2 ] , or eq .[ eq : modified penalty ] if we want also to include the stochastic block model prior .while this penalty modifies the model selection step of extending the support , it has no effect on the values of the weights , determined by eq .[ eq : modified_w_est ] . in many situations , we will not know what is the true block model matrix or distance - dependent connectivity function .however , we can infer these parameters from estimates of the weight matrix , as we explain next .recall section [ sub : stochastic - block - model ] .we wish to estimate from by solving ( see eq .[ eq : sbm penalty ] and explanation below ) \left(\hat{w}_{i , j}-v_{i , j}\right)^{2}+\lambda_{*}\left\vert \mathbf{v}\right\vert _ { * } \,,\label{eq : low rank estimation}\ ] ] where the last nuclear norm penalty promotes a low rank solution .suppose we know the target , _i.e. _ , the minimal number of neuron types affecting neuronal connectivity . then we can use noisy low - rank matrix completion techniques to estimate from .specifically , we found that soft - impute ( , described here in algorithm [ alg : soft - impute ] ) works well in this estimation task , where all components for which , are considered unobserved . in the algorithm we iterate over different values of , starting with a small value ( so initially has a larger rank than desired ) and slowly incrementing , until is of the desired rank .recall section [ sub : distance - dependence ] .we assume the distance - dependent connectivity relationship to be of a sigmoidal form if is unknown , we can run logistic regression on the binarized inferred weight matrix to estimate , _(ad_{ij}+b))}{1+\exp((ad_{ij}+b))}\\ \hat{f } & = & f(d_{ij};\hat{a},\hat{b})\,.\end{aligned}\ ] ] finally , we can combine distance - dependent connectivity and a low - rank mean matrix and infer both and from at the same time . 
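Soft-impute itself is also compact: alternate between filling the unobserved entries with the current low-rank estimate and soft-thresholding the singular values of the filled matrix. The sketch below uses a single fixed threshold rather than the decreasing schedule of algorithm [alg:soft-impute], and the synthetic data and threshold value are illustrative only:

import numpy as np

def svd_soft_threshold(X, lam):
    """Shrink the singular values of X by lam (proximal operator of lam * nuclear norm)."""
    U, sing, Vt = np.linalg.svd(X, full_matrices=False)
    sing = np.maximum(sing - lam, 0.0)
    return (U * sing) @ Vt, int(np.count_nonzero(sing))

def soft_impute(W_hat, observed, lam, iters=200):
    """Noisy matrix completion: 'observed' is a boolean mask of the entries of W_hat
    treated as observed (e.g. the nonzero inferred weights)."""
    V = np.zeros_like(W_hat)
    rank = 0
    for _ in range(iters):
        filled = np.where(observed, W_hat, V)   # keep observed entries, impute the rest
        V, rank = svd_soft_threshold(filled, lam)
    return V, rank

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    true_V = rng.normal(size=(40, 3)) @ rng.normal(size=(3, 40))   # a rank-3 block-like mean
    observed = rng.random((40, 40)) < 0.3
    noisy = true_V + 0.1 * rng.normal(size=(40, 40))
    V_hat, rank = soft_impute(noisy, observed, lam=5.0)
    print("rank of estimate:", rank,
          "relative error:", np.linalg.norm(V_hat - true_V) / np.linalg.norm(true_V))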
furthermore ,once we have an estimate for these parameters , we can use them to find a new estimate which in turn , allows us to re - estimate both and .thus , we can iterate estimating and the penalty parameters and until we reach convergence ( algorithm [ alg : infer_w_unknown_params ] ) .if our model does not include a low - rank mean matrix or distance - dependent connectivity , we can just set the corresponding penalty coefficient to zero .the penalty coefficients and are selected by finding each penalty coefficient that maximizes the correlation between and while the other coefficient is set to 0 . on actual data , where the ground truth is unknown , one may instead maximize prediction of observed spikes ( figure [ fig : estimating - the - spikes ] ) or spike correlations .we found it is also possible to set the regularization parameters in the case and are inferred from the data , to their optimal value in the case where and are known - multiplied by to account for uncertainty .define as in eq .[ eq : l(w_a , b|w_a , q ) ] .initialize solution . * for * initialize support while * * find the optimal penalized weight index : \ ] ] update the support : update the weights : * return * define where being the singular value decomposition of , with , and with .initialize , and some decreasing set * .* * for * {i , j}\,. ] =soft - impute( ) * return *in this section we give the details of the mcmc approach for inferring the weights ( summarized in section [ sub : markov - chain - monte ] ) .to do this we alternate between sampling the spikes ( section [ sub : sampling - the - spikes ] ) , and sampling the weights ( section [ sub : sampling - the - weights ] ) . where we can neglect any additive constant that does not depend on . on the right hand side ,the first term is \\ & = & s_{i , t}u_{i , t}+c\,,\\ & = & s_{i , t}\left(\sum_{k=1}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)\,,\end{aligned}\ ] ] while the second term is \\ & = & \sum_{j}\left[s_{j , t+1}u_{j , t+1}-\ln\left(1+e^{u_{j , t+1}}\right)\right]\\ & = & \sum_{j}\left[s_{j , t+1}\left(\sum_{k=1}^{n}w_{j , k}s_{k , t}+b_{j}\right)-\ln\left(1+\exp\left(\sum_{k=1}^{n}w_{j , k}s_{k , t}+b_{j}\right)\right)\right]+c\\ & = & c+\sum_{j}s_{j , t+1}s_{i , t}w_{j , i}\\ & - & \sum_{j}\left[\ln\left(1+\exp\left(\sum_{k\neq i}^{n}w_{j , k}s_{k , t}+b_{j}+w_{j , i}\right)\right)-\ln\left(1+\exp\left(\sum_{k\neq i}^{n}w_{j , k}s_{k , t}+b_{j}\right)\right)\right]s_{i , t}\,.\end{aligned}\ ] ] therefore , we can sample the spikes from where .\end{aligned}\ ] ] note that , for a given , depends only on spikes from time and . therefore , samples generated at odd times are independent from samples generated at even times .therefore , we can sample simultaneously for all odd times , and then sample simultaneously at all even times . 
such a simple block - wise gibbs sampling scheme can be further accelerated by using the metropolized gibbs method , in which we propose a `` flip '' of our previous sample .so if is out previous sample and is our new sample , we propose that and then accept this proposal with probability if the proposal is not accepted , we keep our previous sample .we denote here to be all the components of without the component , and in order to do gibbs sampling , we need to calculate where , as before , we can neglect on the right hand side any additive constant that does not depend on .the first term on the right hand side is \\ & = & \sum_{i}\sum_{t}\left[s_{i , t}u_{i , t}-\ln\left(1+e^{u_{i , t}}\right)\right],\\ & = & \sum_{i}\sum_{t}\left[w_{i , j}s_{i , t}s_{j , t-1}-\ln\left(1+\exp\left(\sum_{k=1}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)\right)\right]+c\\ & = & \sum_{i}\sum_{t}\left[w_{i , j}s_{i , t}s_{j , t-1}-\ln\left(1+\exp\left(w_{i , j}s_{j , t-1}\right)\exp\left(\sum_{k\neq j}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)\right)\right]+c\\ & \approx & \sum_{i}\sum_{t}\left[w_{i , j}s_{i , t}s_{j , t-1}-\ln\left(1+\left(1+w_{i , j}s_{j , t-1}+\frac{1}{2}w_{i , j}^{2}s_{j , t-1}\right)\exp\left(\sum_{k\neq j}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)\right)\right]+c\\ & = & \sum_{i}\sum_{t}\left[w_{i , j}s_{i , t}s_{j , t-1}-\ln\left(1+f\left(\sum_{k\neq j}^{n}w_{i ,k}s_{k , t-1}+b_{i}\right)\left(w_{i , j}s_{j , t-1}+\frac{1}{2}w_{i , j}^{2}s_{j , t-1}\right)\right)\right]+c\\ & \approx & \sum_{i}\sum_{t}\left[w_{i , j}s_{j , t-1}\left[s_{i , t}-f\left(\sum_{k\neq j}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)\right]\right.\\ & - & \left.\frac{1}{2}w_{i , j}^{2}s_{j , t-1}\left[f\left(\sum_{k\neq j}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)-f^{2}\left(\sum_{k\neq j}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)\right]\right]\,,\end{aligned}\ ] ] where in both approximations we used the fact that a single weight is typically small . therefore , denoting {j , t-1}\\ \epsilon_{i , j } & \triangleq & \sum_{t}\left[f\left(\sum_{k\neq j}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)-f^{2}\left(\sum_{k\neq j}^{n}w_{i , k}s_{k , t-1}+b_{i}\right)\right]s_{j , t-1}\,,\end{aligned}\ ] ] with \\ h_{i , j } & \triangleq & h_{0}+\frac{1}{2}\ln\left(\frac{\sigma_{i , j}^{2}}{\sigma_{0}^{2}}\right)+\frac{\mu_{i , j}^{2}}{2\sigma_{i , j}^{2}}-\frac{\mu_{0}^{2}}{2\sigma_{0}^{2}}\,.\end{aligned}\ ] ] we can than proceed and sample from this spike - and - slab distribution ( in eq .[ eq : w_ij distribution for gibbs ] ) - sampling simultaneously for all .now , since we used the approximation that assuming the weights are weak , so this sampling is not exact . therefore , even if the approximation is very good , we can not use direct sampling , or the error will accumulate over time catastrophically . to correct his, we use this approximation as a proposal distribution in a metropolis hastings simulation scheme .
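The Metropolis-Hastings correction being invoked here is the standard one: draw a proposal from the approximate conditional and accept it with the usual ratio, so that errors in the approximation only slow the mixing without biasing the stationary distribution. A generic sketch follows; the Gaussian target and random-walk proposal are placeholders, not the spike-and-slab conditional of this appendix:

import numpy as np

rng = np.random.default_rng(4)

def metropolis_hastings_step(x, log_target, propose, log_q):
    """One MH step: propose x' ~ q(.|x), accept with min(1, pi(x') q(x|x') / (pi(x) q(x'|x)))."""
    x_new = propose(x)
    log_alpha = (log_target(x_new) + log_q(x, x_new)
                 - log_target(x) - log_q(x_new, x))
    if np.log(rng.random()) < log_alpha:
        return x_new
    return x

if __name__ == "__main__":
    # placeholder example: standard normal target, symmetric random-walk proposal
    log_target = lambda x: -0.5 * x * x
    propose = lambda x: x + 0.5 * rng.normal()
    log_q = lambda x_to, x_from: 0.0          # symmetric proposal, the q ratio cancels
    x, samples = 0.0, []
    for _ in range(5000):
        x = metropolis_hastings_step(x, log_target, propose, log_q)
        samples.append(x)
    print(np.mean(samples), np.var(samples))  # roughly 0 and 1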
inferring connectivity in neuronal networks remains a key challenge in statistical neuroscience . the `` common input '' problem presents the major roadblock : it is difficult to reliably distinguish causal connections between pairs of observed neurons from correlations induced by common input from unobserved neurons . since available recording techniques allow us to sample from only a small fraction of large networks simultaneously with sufficient temporal resolution , naive connectivity estimators that neglect these common input effects are highly biased . this work proposes a `` shotgun '' experimental design , in which we observe multiple sub - networks briefly , in a serial manner . thus , while the full network can not be observed simultaneously at any given time , we may be able to observe most of it during the entire experiment . using a generalized linear model for a spiking recurrent neural network , we develop scalable approximate bayesian methods to perform network inference given this type of data , in which only a small fraction of the network is observed in each time bin . we demonstrate in simulation that , using this method : ( 1 ) the shotgun experimental design can eliminate the biases induced by common input effects . ( 2 ) networks with thousands of neurons , in which only a small fraction of the neurons is observed in each time bin , could be quickly and accurately estimated . ( 3 ) performance can be improved if we exploit prior information about the probability of having a connection between two neurons , its dependence on neuronal cell types ( e.g. , dale s law ) , or its dependence on the distance between neurons .
the bilateral trade relationships existing between world countries form a complex network known as the international trade network ( itn ) .the observed complex structure of the network is at the same time the outcome and the determinant of a variety of underlying economic processes , including economic growth , integration and globalization .moreover , recent events such as the financia l crisis clearly pointed out that the interdependencies between financial markets can lead to cascading effects which , in turn , can severely affect the real economy .international trade plays a major role among the possible channels of interaction among countries , thereby possibly further propagating these cascading effects worldwide and adding one more layer of contagion . characterizingthe networked worldwide economy is therefore an important open problem and modelling the itn is a crucial step in this challenge , and has been studied extensively .+ historically , macroeconomic models have mainly focused on modelling the trade volumes between countries .the gravity model , which was introduced in the early 60 s by jan tinbergen , serves as a powerful empirical model that aims at inferring the volume of trade between any two ( trading ) countries from the knowledge of their gross domestic product ( gdp ) and mutual geographic distance . over the years, the model has been upgraded to include other possible factors of macroeconomic relevance , like common language and trade agreements , nevertheless gdp and distance remain the two factors with biggest explanatory power .the gravity model can reproduce the observed trade volume between trading countries satisfactorily .however , at least in its simplest and most popular implementation , the model does not generate zero volumes and therefore predicts a fully connected trade network .this outcome is totally inconsistent with the heterogeneous observed topology of the itn , which serves as the backbone on which trades are made .more sophisticated implementations of the gravity model that do a llow for zero trade flows succeed only in reproducing the number of missing links , but not their position in the trade network , thereby producing sparser but still non - realistic topologies .+ in conjunction with the traditional macroeconomic approach , in recent years the modelling of the itn has also been approached using tools from network theory , among which maximum - entropy techniques have been particularly successful .maximum - entropy models aim at reproducing higher - order structural properties of a real - world network from low - order , generally local information , which is taken as a fixed constraint .important examples of local properties that can be chosen as constraints are the _ degree _, i.e. the number of links , of a node ( for the itn , this is the number of trade partners of a country ) and the _ strength _ , i.e. the total weight of the links , of a node ( for the itn , this is the total trade volume of a country ) .examples of higher - order properties that the method aims at reproducing are _ clustering _ , which refers to the fraction of realised triangles around a node , and _ assortativity _ , which is a measure of the correlation between the degree of a node and the average degree of its neighbours .these studies have focused on both binary and weighted representations of the itn , i.e. the two representations defined by the _ existence _ and by the _magnitude _ of trade exchanges among countries , respec tively . 
in principle , depending on which local properties are chosen as constraints , maximum - entropy models can either fail or succeed in replicating the higher - order properties of the itn . as an example , it has been shown that inferring a network topology only from purely weighted properties such as the strength of all nodes ( i.e. the trade volumes of all countries ) results in a trivial , uniform structure ( almost fully connected and , thus , unrealistic ) .this limitation is similar to the one discussed above for the gravity models , which aim at reproducing the pair - specific traded volumes exclusively , while completely ignoring the underlying network topology .by contrast , the knowledge of purely topological properties such as the degrees of all nodes ( i.e. the number of trade partners of all countries ) , which are usually neglected in traditional macroeconomic models , turns out to be essential for reproducing the heterogeneous topology observed in the itn . a combination of weighed and topological local properties allows to reconstruct the higher - order properties of the itn with extremely high accuracy . + despite the ability of the appropriate maximum - entropy models to provide a better agreement with the data with respect to gravity models ,they do not in principle provide any hint on the underlying ( macro)economic factors shaping the structure of the network under consideration .these models , in fact , assign `` hidden variables '' or `` fitness parameters '' to each country .these quantities arise as lagrange multipliers involved in the constrained maximisation of the entropy and control the probability that a link is established and/or has a given weight .these parameters have , a priori , no economic interpretation .however , here we show that one can indeed find a macroeconomic iden tificatio n for the underlying variables defining the maximum - entropy models .this interpretation is supported by previous studies showing that both topological and weighted properties of the itn are strongly connected with purely macroeconomic quantities , in particular the gdp .+ in this paper we first focus on various empirical relations existing between the gdp and a range of country - specific properties .these properties convey basic but important local information from a network perspective .we also show that these relations are robust and very stable throughout different decades .we then illustrate how the gdp affects differently the binary and weighted representations of the itn , revealing alternative aspects of the structure of this network .these results suggest a justification for the use of gdp as an empirical fitness to be used in maximum - entropy models , thus providing a macroeconomic interpretation for the abstract mathematical parameters defining the model t hemselves . reversing the perspective, this result enables us to introduce a novel gdp - driven model that successfully reproduces the binary _ and _ the weighted properties of the itn simultaneously .the mathematical structure of the model explains the aforementioned puzzling asymmetry in the informativeness of binary and weighted constraints ( degree and strength ) .these results represent a promising step forward in the formulation of a unified model for modelling the structure of the itn .in this study we have used data from the gleditsch database which spans the years 1950 - 2000 , focusing only on the first year of each decade , i.e. 
six years in total .the data sets are available in the form of weighted matrices of bilateral trade flows , the associated adjacency matrices and vectors of gdps .there are approximately 200 countries in the data set covering the considered 51 years ; the gdp is measured in u.s .+ we have analysed this data set precisely because it has been the subject of many studies so far , focusing both on the binary and on the weighted representation of the itn .this will allow us to compare the performance of our gdp - driven ( two - steps ) method with other reconstruction algorithms already present in the literature .trade exchanges between countries play a crucial role in many macroeconomic phenomena . as a consequence ,it is fundamental to be able to characterize the observed structure of the itn and its properties .more specifically , the itn can be represented in two different ways , depending on the kind of information used to analyse the system : the first one concerns only the existence of trade relations and gives origin to the itn _ binary representation _ ; the second one also takes into account the volume of the trade exchanges and gives origin to the itn _ weighted representation_. while the binary representation describes the skeleton of the itn , relating exclusively to the presence of trade relations , the weighted representation also accounts for the volume of trade occurring `` over '' the links , i.e. the weight of the link once it is formed .the two representations convey very important information regarding the `` trade patterns '' of each country and , most importantly , correspond to different trade mechanisms .+ [ fig1 ] traditionally , macroeconomic models have mainly focused on the weighted representation , because economic theory perceives the latter as being genuinely more informative than the purely binary representation : such models make use of countries gross domestic product ( gdp ) , their geographic distance and any other possible quantity of ( supposed ) macroeconomic relevance to infer trading volumes between countries .the gdp is the most popular measure in the economic literature .although it is generally used as a proxy to infer the evolution of many macroeconomic properties describing the weighted representation of the itn ( as the countries trade exchanges ) , here we will show that the gdp plays a key role not only to explain the itn weighted structure , but also the emergence of its binary structure .+ let us start with an empirical analysis of the gdp .we first define new rescaled quantities of the gdp : and where is the average gdp for an observed year .the two quantities adjust the values of the countries gdps for both the size of the network and the growth , and are a connected by a simple relation .we use the two quantities of the rescaled gdp throughout our analysis , mainly using for the reason that the quantity is bounded which coincides with our model .[ fig1 ] we plot the cumulative distribution of the rescaled gdp with indexing the countries for the different decades collected into our data set .what emerges is that the distributions of the rescaled gdps can be described by log - normal distribution characterized by similar values of the parameters .the log - normal curve is fitted to all the values ( from the different decades ) .this suggests that the rescaled gdps are quantities which do not vary much with the evolution of the system , thus potentially representing the ( constant ) hidden macroeconomic fitness ruling the entire evolution of 
the system itself .this , in turn , implies understanding the functional dependence of the key topological quantities on the countries rescaled gdp . + as already pointed out by a number of results , the topological quantities which play a major role in determining the itn structure are the countries degrees ( i.e. the number of their trading partners ) and the countries strengths ( i.e. the total volume of their trading activity ) .thus , the first step to understand the role of the rescaled gdp in shaping the itn structure is quantifying the dependence of degrees and strengths on it . since we will now analyse each snapshot at a time ( correction for size is not needed ) , here we will use the bounded rescaled gdp .moreover , this form of the rescaled gdp coincides with a bounded macroeconomic fitness value , which is consistent with the models presented in the next sections . to this aim ,let us explicitly plot versus and versus for a particular decade , as shown in fig .the red points represent the relations between the two pairs of observed quantities for the 2000 snapshot .interestingly , the rescaled gdp is directly proportional to the strength ( in a log - log scale ) , thus indicating that _ the wealth of countries is strongly correlated to the total volume of trade they partecipate in_. such an evidence provides the empirical basis for the definition of the gravity model , stating that _ the trade between any two countries is directly proportional to the ( product of the ) countries gdp_. on the other hand , the functional dependence of the degrees on the values is less simple to decipher .generally speaking , the relation is monotonically increasing and this means that countries with high gdp have also an high degree , i.e. are strongly connected with the others ; coherently , countries characterized by a low value of the gdp have also a low degree , i.e. are less connected to the rest of the world .moreover , while for low values of the gdp there seems to exist a linear relation ( in a log - log scale ) between and , as the latter rises a saturation effect is observed ( in correspondence of the value ) , due to the finite size of the network under analysis .roughly speaking , richest countries lie on the vertical trait of the plot , while poorest countries lie on the linear trait of the same plot : in other words , _ the degree of countries represents a purely topological indicator of the countries wealth_. to sum up , fig .[ fig2 ] shows that countries gdp plays a double role in shaping the itn structure : first , it controls for the number of trading channels each country establishes ; second , it controls for the volume of trade each country participates in , via the established connections .[ fig2 ] the blue points in fig .[ fig2 ] , instead , represent the relation between versus and versus , where the quantities in brackets are the predicted values for degrees and strengths generated by our model , which we will discuss later .in order to formalize the evidences highlighted in the previous section , a theoretical framework is needed . to this aim, we can make use of the _ exponential random graph _ formalism ( erg in what follows ) . 
under this formalism ,one `` generates '' a ensemble of random networks by maximizing the entropy of the ensemble .however , the maximization is done under certain `` constraints '' which enforce certain properties of the random ensemble ( expectations ) to be equal specific observables that are measured in the real system .different maximum - entropy models enforce different constraints , different properties of the real network , and this corresponds to different probabilities and expectations of the models .+ here , we use the formulas defining the so - called _ enhanced configuration model _ ( ecm in what follows ) which has been recently proposed as an improved model for the itn reconstruction .the ecm aims at reconstructing weighted networks , by enforcing the degree and the strength sequences simultaneously .degrees and strengths , respectively defined as ,\:\forall\:i ] to highlight the random processes behind the formation of each link . as a first step ,one implements a bernoulli trial with probability in order to determine whether a link connecting and is created or not .the second part of our algorithm can be interpreted as a drawing from a geometric distribution , with parameter : if a link ( or , equivalently , a unitary weight ) is indeed established , a second random process determines whether the weight of the same link is increased by another unit ( with probability ) or whether the process stops ( with probability ) .iterating this procedure to determine the probability of obtaining weights of higher values leads precisely to eq.([prob ] ) . as a consistency check, one can explicitly calculate the expected weight for the nodes pair - through the formula , which correctly leads to eq.([eq_wts ] ) .+ in more economic terms , the analysis of the itn clearly proves that a substantial difference exists between establishing a new trade relation and reinforcing an existing one by rising the exchanged amount of goods of e.g. `` one unit '' of trade .these two processes are described , respectively , by the coefficients and . in order to understand which one is more probable, we can study the behavior of the ratio for each pair of countries .in fact , whenever countries and would probably establish a new trade relation quite easily , however experiencing a certain resistance to reinforce it . on the other hand , whenever countries and would experience a certain resistance to start trading ; however , in the case such a relation were established , it would represent a channel with relatively low `` friction '' , inducing the involved parteners to strengthen it . before analysing the case let us rewrite it as .the expression at the first member appears also in eq.([prob ] ) which , in fact , can be restated in the following way : ^{a_{ij}}\frac{(y_iy_j)^{w_{ij}}}{1+z_iz_j} ] . the _ degree _ and the _ strength _ of a given node , respectively defined as ,\:\forall\:i$ ] and , are first - order properties , describing the neighborhood of the node itself and , specifically , the number of its first neighbors ( i.e. the other nodes sharing a direct connection with it ) and its total volume . exploring the topological properties of more distant nodes ( i.e. the neighbors of the neighbors ) implies considering longer pathways starting from node .the simpler second - order properties that can be defined are the _ average nearest neighbors degree _ , , i.e. the arithmetic mean of the degrees of the neighbors of node and the _ average nearest neighbors strength _, , i.e. 
the arithmetic mean of the strengths of the neighbors of node .once plotted versus the corresponding node degree ( strength ) , the ( ) provides information on the tendency of nodes degrees ( strengths ) to be either positively or negatively correlated . in economic terms, the quantifies the tendency of strongly connected countries to trade with strongly connected partners as well .another important feature of complex networks concerns the tendency of nodes to cluster together .it can be quantified through the clustering coefficient , , which measures the percentage of closed triangles node is part of . in economic terms , the clustering coefficient quantifies the tendency of countries to form small communities and , at a more general level , the hierarchical character of the itn structure .+ the measured properties of the real network need to be compared with the different models predictions .the expected values can be obtained by simply replacing with the probability coefficients predicted by the different models ( e.g. for the ts , for the ecm , etc . ) and with ( e.g. for the ts , etc . ) . whenever considering the gdp - driven ts model , the mathematical expressions for and are the ones illustrated by eqs.(9 ) and ( [ twostep1 ] ) .m. cristelli , a. gabrielli , a. tacchella , g. caldarelli , l. pietronero , ( 2013 ) ` measuring the intangibles : a metrics for the economic complexity of countries and products ' , _ plos one _ , vol . 8 , pp.0070726 n. musmeci , s. battiston , g. caldarelli , m. puliga , a. gabrielli , ( 2013 ) ` bootstrapping topological properties and systemic risk of complex networks using the fitness model ' , _ j. stat .151 , pp.720
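to make the quantities above concrete , the following sketch ( an illustration under assumed parametrisations , not the authors' code ) samples a network with the two - step mechanism described earlier , a bernoulli trial for the existence of each link followed by geometric draws for the extra weight , and then computes the average nearest neighbour degree , the average nearest neighbour strength and the clustering coefficient used for the comparisons ; the fitness values and the choice of identical parameters for the two steps are placeholders .

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # placeholder "fitness" values
x /= x.mean()

# two-step sampling: connection probability p_ij and conditional weight
# distribution; the paper ties both to the (rescaled) gdp, here we simply
# use the product of the placeholder fitness values for both steps.
xx = np.outer(x, x)
p = xx / (1.0 + xx)          # probability that a link exists (Bernoulli step)
q = xx / (1.0 + xx)          # probability of adding one more unit of weight

a = (rng.random((n, n)) < p).astype(int)          # Bernoulli trial per pair
a = np.triu(a, 1); a = a + a.T                    # keep the network undirected
w = np.where(a == 1, rng.geometric(1.0 - q), 0)   # geometric number of weight units
w = np.triu(w, 1); w = w + w.T

deg = a.sum(axis=1)
strength = w.sum(axis=1)

# higher-order properties used in the text
with np.errstate(invalid="ignore", divide="ignore"):
    annd = a @ deg / deg                                 # average nearest neighbour degree
    anns = a @ strength / deg                            # average nearest neighbour strength
    triangles = np.diag(a @ a @ a) / 2.0
    clustering = 2.0 * triangles / (deg * (deg - 1.0))   # fraction of closed triangles
```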
the international trade network ( itn ) is the network formed by trade relationships between world countries . the complex structure of the itn impacts important economic processes such as globalization , competitiveness , and the propagation of instabilities . modeling the structure of the itn in terms of simple macroeconomic quantities is therefore of paramount importance . while traditional macroeconomics has mainly used the gravity model to characterize the magnitude of trade volumes , modern network theory has predominantly focused on modeling the topology of the itn . combining these two complementary approaches is still an open problem . here we review these approaches and emphasize the double role played by gdp in empirically determining both the existence and the volume of trade linkages . moreover , we discuss a unified model that exploits these patterns and uses only the gdp as the relevant macroeconomic factor for reproducing both the topology and the link weights of the itn .
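for the empirical part of the itn analysis above , a minimal sketch of the gdp rescaling and of a log - normal fit is given below ; the synthetic gdp values stand in for the gleditsch data , and the use of scipy's lognorm fit is an assumption about the fitting procedure rather than a reproduction of the authors' analysis .

```python
import numpy as np
from scipy import stats

def rescaled_gdp(gdp):
    """Divide each country's gdp by the yearly average, as described in the text."""
    gdp = np.asarray(gdp, dtype=float)
    return gdp / gdp.mean()

# synthetic stand-in for one yearly snapshot of roughly 200 countries
rng = np.random.default_rng(1)
gdp = rng.lognormal(mean=10.0, sigma=1.5, size=200)
g = rescaled_gdp(gdp)

# maximum-likelihood log-normal fit of the rescaled values
shape, loc, scale = stats.lognorm.fit(g, floc=0.0)
print(f"fitted sigma = {shape:.2f}, median = {scale:.2f}")

# empirical complementary cumulative distribution, for comparison with the fit
g_sorted = np.sort(g)
ccdf = 1.0 - np.arange(1, len(g) + 1) / len(g)
```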
in current real - time video applications , such as video teleconferencing , the ippp video encoding structure is widely used to satisfy stringent delay constraints . the first frame of the video sequence is intra - coded , and each of the other frames is encoded by using the immediately preceding frame as the reference for motion compensated prediction . when transmitted in a lossy channel , a packet loss affects not only the corresponding frame but also the subsequent frames , which is called error propagation . to deal with packet losses , macroblock ( mb ) intra refresh may be used , where some mbs of each frame are intra - coded . this can alleviate the error propagation at the expense of lower coding efficiency . in this paper , we consider that the video destination feeds back the packet loss information to the video encoder to trigger the insertion of an instantaneous decoding refresh ( idr ) frame , which is intra - coded , so that the subsequent frames are free of error propagation . this is one of the reactive packet loss mitigation methods used in webrtc . specifically , the packet loss information can be sent by the receiver via an rtp control protocol ( rtcp ) packet containing the index of the frame to which the lost packet belongs . after receiving this information , the video encoder decides whether the packet loss creates a new error propagation interval . if the index of the frame to which the lost packet belongs is less than the index of the last idr frame , the video encoder will do nothing . in this case , the packet loss occurs during an existing error propagation interval , and a new idr frame has already been generated which will stop the error propagation . otherwise , the packet loss creates a new error propagation interval , and the video encoder encodes the current frame in the intra mode to stop the error propagation . the duration of error propagation depends on the feedback delay , which is at least a round trip time ( rtt ) between the video encoder and decoder . another approach to alleviating error propagation is recurring idr frame insertion , where a frame is intra - coded after every fixed number of p frames , which is not considered in this paper . in the ieee 802.11 mac , when a transmission is not successful , a retransmission will be performed until the retry limit is exceeded . the retry limit is the maximum number of transmission attempts for a packet , and a packet that could not be transmitted after this many attempts is discarded by the mac . the ieee 802.11 standard defines two retry limits : the short retry limit for packets with a length less than or equal to the rts / cts threshold , and the long retry limit for packets with a length greater than the rts / cts threshold . in this paper , the use of rts / cts is disabled as is often seen in practice , and we only consider the short retry limit , which is denoted by . the importance of a video packet depends on not only the type of frames that a video packet carries but also the events that have happened in the network . network events include packet losses or excessive delays . for example , for a given p frame , being the second lost packet may not have as much impact as being the first lost packet . the dynamic nature of network events makes the importance of a video packet also dynamic . the impact of network events on the video quality is often overlooked in most of the existing resource allocation algorithms .
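as a side note to the encoder behaviour just described , here is a minimal sketch of the feedback - triggered idr insertion rule ; the class and method names are placeholders rather than webrtc or codec api calls , and the bookkeeping follows the comparison between the reported frame index and the index of the last idr frame .

```python
class FeedbackDrivenEncoder:
    """Sketch of the reactive IDR insertion rule described in the text."""

    def __init__(self):
        self.next_frame_index = 0
        self.last_idr_index = -1
        self.force_idr = True            # the first frame of the sequence is intra-coded

    def on_rtcp_loss_report(self, lost_frame_index):
        """Handle an RTCP report carrying the index of the frame a lost packet belongs to."""
        if lost_frame_index < self.last_idr_index:
            return                        # a later IDR frame already stops this interval
        self.force_idr = True             # new error propagation interval: intra-code

    def encode_next_frame(self):
        if self.force_idr:
            frame_type = "IDR"
            self.last_idr_index = self.next_frame_index
            self.force_idr = False
        else:
            frame_type = "P"              # predicted from the previous frame (ippp)
        self.next_frame_index += 1
        return frame_type
```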
as a result ,packet differentiation is static and depends only on the video encoding structure , such as recurring idr frame insertion and scalable video coding ( svc ) .for instance , svc separates video packets into different substreams based on which layer a video packet belongs to , and conveys the priority level information of the substreams to the network , which then allocates more resources to the substreams with higher priorities .the prioritization based on svc is static , as the priority of a video packet is determined at the time of video encoding and fixed throughout the lifetime of the video packet .we propose a mac layer optimization method that adapts the retry limit to both video frame types and the network events . according to the importance of the video packets , we dynamically assign retry limits ; less important video packets are assigned a lower retry limit , and the saved retransmission opportunities or the earned credit is shifted to the more important video packets without increasing the total contention to the competing traffic .this idea is reminiscent of the one in , where the secondary spectrum users earn credit by assisting the transmission of the primary spectrum users and consume the credit in accessing the spectrum at a later time .we also present an analytic model to evaluate the performance of our proposed method . in the literature ,model - based throughput analyses for the ieee 802.11 standard are proposed in - . in this paper , our focus is on the impact on the video quality resulting from our proposed mac layer optimization method . considering the transmission of cross traffic , a compatibility condition is also required to guarantee that cross traffic will not be negatively affected . by using simulations ,we show that the throughput of cross traffic remains almost the same compared to that for the scenario where mac layer optimization is not used .the proposed method and the analytic model are investigated under the assumption that no forward error correction ( fec ) is performed .however , as we show , it is not difficult to modify the proposed method to be compatible with fec .the remainder of this paper is organized as follows .section [ sec : motivation ] gives the motivation of our proposed method .section [ sec : prop_method ] describes our proposed method .section [ sec : model ] presents an analytic model .section [ sec : simu ] gives the simulation results .section [ sec : conl ] concludes this paper .in video teleconferencing , one of the most important communication networks is that the video sender connects through wifi to the core internet , and eventually connects to the video receiver , as illustrated in fig .[ fig : topo_s ] . consideringthat packet losses occur most likely in the wifi link , optimization in the wifi link may improve the video receiver s quality of experience ( qoe ) significantly .modification at the wifi access point ( ap ) is difficult to implement , and often affects other users performance in the same wifi network .thus , in this paper , our objective is to design a mac layer optimization at the video sender to improve the qoe at the video receiver .there are qoe models in the literature - .these models are essential to qoe - driven network resource allocation - . 
however , most of the qoe models assume the availability of qos parameters such as the video bit rate , frame rate and packet loss probability .these parameters are not suitable for our mac layer optimization , because the video bit rate and frame rate are controlled by the video teleconferencing applications , and the packet loss probability is determined by the wireless channel condition and the cross traffic from other users in the same wifi network .instead , our proposed method tries to reduce the number of frozen frames to achieve better qoe . to avoid visual artifacts ,some video teleconferencing applications , such as facetime , hangouts and skype , choose to present the most recent error - free frame instead of erroneous frames .the video receiver freezes the video during an error propagation period , which is called a frozen interval , and the frames presented during this period are called frozen frames . in this paper , we consider the freeze time as the performance metric .given a constant frame rate , calculating the freeze time is equivalent to counting the number of frozen frames .it is desirable to establish the relationship between the fraction of frozen frames to the mean opinion score ( mos ) , where the frozen frames occur in groups , each lasing for a time period equal to the packet loss feedback delay .therefore , we perform a subjective experiment to characterize how the fraction of frozen frames affects the qoe . we adopt the single stimulus absolute category rating with hidden reference ( acr - hr ) method to obtain the quality scores .ten observers are first asked to watch a training session , containing two basketball passing video sequences with no frozen frames and the most frozen frames , respectively .then , with the training session in mind , the observers view five video sequences in a random order , all of which are the foreman sequences but with the fraction of frozen frames varying from to approximately .these video sequences are generated from the packet losses obtained in the opnet simulations in section [ sec : simu ] . at the end of each video sequence ,the observers are asked to rate the video according to the itu five - point quality scale , where the scores 1 to 5 stand for the following quality levels : very annoying , annoying , slightly annoying , perceptible but not annoying and imperceptible , respectively . with hidden reference removal , the difference meanopinion score ( dmos ) is obtained by subtracting the observer s rating of the reference video ( i.e. , the video with no frozen frames ) from the observer s rating of other videos , and 5 is added to the dmos to make it nonnegative . in fig .[ fig : dmos ] , we show the average dmos of video sequences with different fractions of frozen frames as well as the confidence intervals . clearly , by decreasing the fraction of frozen frames , we improve the viewer s subjective experience , especially when the fraction of frozen frames is less than , which is the motivation of our proposed method .similar subjective test results are also presented in . in , it is claimed that videos with higher rebuffering frequency or longer rebuffering duration have worse qoes compared to those with fewer rebufferings or shorter durations . 
in , it shows that the qoe decreases as the packet lost rate increases , where the packet loss is the cause of frozen frames .confidence intervals , scaledwidth=60.0% ]in the ieee 802.11 standard , the retry limits are the same for all packets .this is not optimal for video teleconferencing traffic . to illustrate, we use the foreman video sequence to show a special property of the ippp encoded video sequences in the presence of packet losses in fig .[ fig : psnr ] .the video quality is measured by psnr , the widely used objective video quality metric .note that the use of psnr here is only for the purpose of illustration .we understand that generally psnr does not correlate well with qoe .however , a drop of as much as 10db in psnr does mean a significant drop in qoe . later in the paper, we will evaluate the video quality by the video freeze time , which highly correlates with qoe as we have shown in fig .[ fig : dmos ] .the loss of frame 5 causes all the subsequent decoded p frames to be erroneous until the next idr frame , and the video quality stays low regardless of whether the subsequent frames are received successfully .thus , the transmission of these frames is less important to the video quality , and we may consider lowering the retry limit for them . in our proposed method, we classify the video frames into three priority categories , and assign retry limit for the video frames with priority , where priority 1 is the highest priority and .first , an idr frame and subsequent frames are assigned retry limit , until a frame is lost or the compatibility constraint ( to be discussed in detail later ) is violated . after generating an idr frame ,we want the decoded video sequence at the receiver to be error - free as long as possible .otherwise , if the network drops a frame shortly after the idr frame , the video quality will decrease dramatically and remain poor until a new idr frame is generated , which will take at least 1 rtt .the benefit of an idr frame that is quickly followed by a packet loss will then be limited to a few video frames .therefore , we prioritize not only the idr frame but also the frames subsequent to the idr frame .when the mac layer discards a packet because it has reached the retry limit , the subsequent frames are assigned the smallest retry limit until a new idr frame is generated , because a higher retry limit would not improve the video quality anyway .all the other frames are assigned retry limit , which is the same as the one in the original ieee 802.11 standard .the reason behind this setting is to simplify the analytic model and the implementation .indeed , it is interesting to find out in the analytic model that very few frames are assigned with the retry limit .notice that the proposed method can be also implemented with fec to improve video quality .however , with fec , when a packet is discarded by the mac layer , the receiver may still be able to decode the video frame because of the redundant fec packets , and thus , the mac layer does not have to lower the retry limit of the subsequent frames .instead , the mac layer only reduces the retry limit of the subsequent frames when the video frame can not be successfully decoded by the receiver . 
in order to decide whether a video frame is decodable, the mac layer has to know how the fec packets are generated , such as the beginning of each fec code block and fec rate .this information can be placed in the extended rtp header as outlined in .the mac layer then reads the information by deep packet inspection . in case of encryption , this information is still transmitted in plaintext in the optional srtp extension field .the proposed method can be easily generalized to the case when fec is performed , and the analytic model is similar . for convenience , in the following sections , we only discuss the proposed method without fec .to facilitate the adoption of our proposed method , we impose a compatibility constraint on our design . in order to make sure that the performance of other access categories ( acs ) is not negatively affected by optimizing the retry limits for the video packets, we try to maintain the same total number of transmission attempts of a video sequence before and after our proposed method is applied . in our study , instead of monitoring the actual number of transmission attempts , we estimate the average number of transmission attempts for the video packets as follows .let be the collision probability of a single transmission attempt at the mac layer of the video sender when our proposed method is used .we adopt the assumption in - that is a constant and independent for every packet , regardless of the number of retransmissions . in our proposed optimization method ,the probability is monitored at the mac layer , and then used as an approximation of collision probability when only the unmodified ieee 802.11 standard is used .the probability that a transmission still fails after attempts is . for a packet with retry limit ,the average number of transmission attempts is given by where is the probability that a packet is successfully transmitted after attempts , and in the second term on the left hand side is the probability that the transmission still fails after attempts . for convenience ,let for , where is the packet loss rate when the retry limit is . since , we have that .let be the total number of video packets received by the mac layer up to now for the case where all stations and the access point ( ap ) use the unmodified ieee 802.11 standard , and then the average number of transmission attempts for the video sequence is given as is the collision probability when the unmodified ieee 802.11 standard is used at the video sender and .let be the total number of video packets received by the mac layer with retry limit when our proposed method is used at the video user .given the maximum transmission unit ( e.g. , the maximum msdu size for wifi is 2304 bytes ) , the video packets corresponding to a video frame received at the mac layer have the same sizes except the last one .then it is reasonable to assume that for every video packets , every single transmission attempt has the same channel occupancy time . to guarantee that our proposed method does not increase the total channel occupancy , we require that the total number of transmission attempts does not increase after we adjust packet retry limits , i.e. , however , when our proposed method is used , both and are unavailable .instead , we consider the following constraint our proposed method is implemented at the mac layer of the video sender with a linear computational complexity , and can be summarized as follows ( see fig .[ fig : frame_priority ] ) : the idr frames are always assigned priority 1 . 
for a subsequent frame ,if its preceding frames are transmitted successfully , it will be assigned priority 1 as long as the compatibility requirement ( [ eqn : compa_ineqn_alg ] ) is satisfied .when the compatibility requirement is violated , the mac assigns priority 2 to the current frame and the subsequent frames until a packet is dropped because of exceeding the retry limit .when a packet with priority 1 or 2 is dropped , the subsequent frames will be assigned priority 3 until the next idr frame .the number of consecutive frames with priority 3 is determined by the duration of error propagation , which is at least 1 rtt .the details of the algorithm are shown in algorithm [ alg1 ] . the accumulative number of packets are calculated from the beginning of the video sequence . however ,when the video duration is large , we may count the accumulative number of packets during a certain time period . ; [ pckt_come ] ; ; ; ; [ step6 ] ; ; [ step8 ] ; ; [ lin : update_m ] ; [ lin:12 ] notice that according to steps [ step6]-[step8 ] in algorithm [ alg1 ] , a frame is assigned priority 2 only when the previous frame was assigned priority 2 or inequality ( [ eqn : compa_ineqn_alg ] ) is violated . hence , when inequality ( [ eqn : compa_ineqn_alg ] ) is always satisfied , no frame will be assigned priority 2 . in section [ sec :model ] , we will use this property to show that after the beginning of the video sequence , all frames are assigned priority 1 or 3 .although increasing the retry limit for the packets with high priority may also increase the video latency , it only increases the actual number of transmission attempts for very few packets , which failed in the previous transmission attempts .moreover , as we show in lemma [ lem : pckt ] , our proposed method reduces the total packets transmitted , and eventually reduces the packet delay , which is confirmed later by the simulation results presented in section [ sec : simu ] .assume that for every idr and non - idr video frame , and packets with the same size are transmitted by the mac layer of the video sender , respectively , where .let be the total number of frames encoded thus far , and be the expected number of packets when the unmodified ieee 802.11 standard is used at the video sender .for fair comparison , we consider the same number of video frames for the case where our proposed method is used .using our proposed method , we assign one of the three priorities to a frame , and denote by the expected number of packets with priority .notice that and are not necessarily equal , because the numbers of idr frames in these two scenarios may be different . in the following analysis ,we are only interested in the case where is large enough and consider the expected number of packets . the inequality ( [ eqn : compa_ineqn_ori ] ) and ( [ eqn : compa_ineqn_alg ] ) can be approximated by the following inequalities : considering a constant frame rate , we suppose that is the number of frames sent during a feedback delay . when a packet is lost in transmission , the packet loss information is received at the video source a feedback delay after the packet was sent . then a new idr frame is generated immediately , which is the -th frame after the frame to which the lost packet belongs . frames are affected by error propagation . 
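to complement the description of algorithm [ alg1 ] above , the sketch below implements the three - priority state machine and the compatibility check based on expected transmission attempts ; the closed form n(r) = sum_{k=1}^{r} k p^{k-1}(1-p) + r p^r is the standard expression implied by the text for a per - attempt collision probability p and retry limit r , and the concrete retry - limit values used in the example are illustrative assumptions .

```python
def expected_attempts(p, retry_limit):
    """E[number of attempts] for one packet when every attempt collides independently w.p. p."""
    mean = sum(k * p ** (k - 1) * (1.0 - p) for k in range(1, retry_limit + 1))
    return mean + retry_limit * p ** retry_limit      # packet dropped after retry_limit failures

def within_budget(packet_counts, retry_limits, baseline_limit, p):
    """Compatibility check: total expected attempts must not exceed the unmodified case."""
    used = sum(n * expected_attempts(p, r) for n, r in zip(packet_counts, retry_limits))
    allowed = sum(packet_counts) * expected_attempts(p, baseline_limit)
    return used <= allowed

def next_priority(prev_priority, is_idr, packet_dropped, budget_ok):
    """One step of the frame-priority state machine sketched from Algorithm 1."""
    if is_idr:
        return 1                    # IDR frames always get the highest priority
    if prev_priority == 3:
        return 3                    # keep the lowest priority until the next IDR frame
    if packet_dropped:
        return 3                    # a drop at priority 1 or 2 opens a frozen interval
    if prev_priority == 2 or not budget_ok:
        return 2                    # fall back to the baseline retry limit
    return 1

# illustrative usage with assumed retry limits (8, 7, 5) and the 802.11 default of 7
p = 0.3
ok = within_budget(packet_counts=[120, 10, 40], retry_limits=(8, 7, 5),
                   baseline_limit=7, p=p)
print(next_priority(prev_priority=1, is_idr=False, packet_dropped=False, budget_ok=ok))
```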
even when the feedback delay is short , at least the frame to which the lost packet belongs is erroneous , so we assume that , and call the interval containing the frozen frames the frozen interval .we assume that the packet loss probability is so small that in each frozen interval , there is only one packet loss ( the very first one ) , when the unmodified ieee 802.11 standard is used at the video user .then , the number of independent frozen intervals is equal to the number of lost packets , which is in an -packet video sequence .thus , the total expected number of erroneous frames , i.e. , frozen frames , is given by using our proposed method , every frozen interval begins with an erroneous frame with priority 1 or 2 , which is followed by frames with priority 3 .the numbers of lost packets with priority 1 and 2 are and , respectively .so , when our proposed method is used , the total expected number of frozen frames is notice that the frames with priority 3 only appear in frozen intervals , and each is encoded into packets .each frozen interval contains frames , of which are assigned priority 3 .thus , the total expected number of packets with priority 3 is given by when ( corresponding to very short feedback delays ) , only one frame ( the frame to which the lost packet belongs ) is transmitted in the frozen interval , and the next frame is an idr frame which stops the frozen interval .in this case , no frame is assigned priority 3 , and . denote by the number of packets which belong to idr frames , when our proposed method is used . notice that except for the first idr frame , other idr frames appear only after the ends of frozen intervals , and each is encoded into packets .thus , the total number of packets belonging to idr frames is given by before presenting the main theorem , we first introduce the following lemmas .[ lem : pckt ] if , when our proposed method is used at the video sender , the expected number of packets is less than that when the original ieee 802.11 standard is used at the video sender , i.e. , [ lem : comp1 ] when , we have furthermore , when the following condition is also satisfied , we have the proofs of lemma [ lem : pckt ] and [ lem : comp1 ] can be found in appendix [ appen : a ] and [ appen : b ] , respectively . in the above lemmas , we assume that , and we want to show that the assumption is sound .consider two scenarios , where the unmodified ieee 802.11 standard and our proposed method are used , respectively .we assume that at the beginning of the video transmission , the collision probabilities are the same , i.e. , .let and be the collision probabilities measured at the end of time interval for the two scenarios , respectively , where a time interval is the duration between two consecutive packet loss probability updates ( see line [ lin:12 ] of algorithm [ alg1 ] ) and should not be confused for the time slot defined in the ieee 802.11 standard .similarly , we define and which are the numbers of packets in the two scenarios , respectively , measured at the end of time interval , where . since and are all random variables and difficult to be modeled , we take their expected values and , for as an approximation by the method of mean - field approximation .as long as the number of packets is large , the approximation is accurate . 
during the first time interval , the collision probability , and then , according to lemma [ lem : comp1 ] , we have implies that the expected number of transmission attempts in the first scenario is not less than that in the second scenario . thus, at the end of the first time interval , the collision probability in the first scenario is also not less than that in the second scenario , i.e. , . as a result , in the second time interval , the expected number of transmission attempts in the first scenario is also not less than that in the second scenario , i.e. the same reasoning holds for the future time intervals .thus , in our proposed method , the expected number of transmission attempts is always not greater than that in the original ieee 802.11 standard at all time intervals , and at the end of each time interval , the collision probabilities satisfy .note that it is also consistent with the simulation results presented in section [ sec : simu ] .now we are ready to present the main result as follows . by comparing the performance of the ieee 802.11 standard and that of our proposed method , we prove that the expected number of frozen frames is reduced when our proposed method is used .[ thm : main ] if and ( [ eqn : cod ] ) are satisfied , when our proposed method is used , the expected number of frozen frames is upper bounded by + 1}\right\ } \label{eqn : result}\end{aligned}\ ] ] the proof of theorem [ thm : main ] contains two parts .we want to show that is upper bounded by the two elements on the right hand side of ( [ eqn : result ] ) . in the remainder of this section ,we prove them in the following two lemmas .if , when our proposed method is used , the expected number of frozen frames is smaller than that when the ieee 802.11 standard is used at the video user , i.e. , denote by and the numbers of idr frames when the ieee 802.11 standard and our proposed method is used at the video user , respectively . as we assumethat every idr frame and non - idr frame are encoded into and packets , respectively , the total numbers of packets when the ieee 802.11 standard is used is given by similarly , when our proposed method is used , the total number of packets is by lemma [ lem : pckt ] , we know that . from the above two equations , we have .notice that every frozen interval triggers the generation of an idr frame , and except the first idr frame , which is the first frame of the video sequence , idr frame only appears immediately after a frozen interval .then , we have consequently , the number of frozen frames when our proposed method is used is smaller than that when the unmodified ieee 802.11 standard is used , i.e. , if and ( [ eqn : cod ] ) are satisfied , when our proposed method is used , the expected number of frozen frames is upper bounded by + 1}\label{eqn : bound2}\end{aligned}\ ] ] in the proof of lemma [ lem : comp1 ] , we obtain \nonumber\end{aligned}\ ] ] where the right hand side of the above inequality is a positive increasing function of .as the number of packets is increasing , the lower bound of the left hand side of the above inequality is also increasing and remains positive .notice that and are the expected values of random variables and .when the left hand side of the above inequality is large enough , the compatibility condition ( [ eqn : compa_ineqn_alg ] ) is always satisfied .it implies that according to algorithm [ alg1 ] , no frame with priority 2 will be generated after the beginning of the video sequence . 
as discussed above , except the beginning of the video sequence , no frame is assigned priority 2 .hence , a frame with priority 1 is followed by another frame with priority 1 , when all packets of the former are transmitted successfully .notice that according to algorithm [ alg1 ] , the priority does nt change within a frame .even if a packet of a frame with priority 1 is dropped , the remaining packets of the same frame still have the same priority and the packets of the subsequent frame are then assigned priority 3 .each frozen interval contains subsequent frames with priority 3 , each of which is encoded into packets .the first packets are followed by another packet with priority 3 with probability 1 , and the last one is followed by a packet with priority 1 , which belongs to the next idr frame , with probability 1 .this process can be modeled by the discrete - time markov chain shown in fig .[ fig : markov ] ., and represent the states for the -th packet of an idr frame , a non idr frame and a frame with priority 3 , respectively.,scaledwidth=60.0% ] in fig .[ fig : markov ] , the states in the last row represent the packets with priority 3 in each frozen interval .the states in the first two rows represent the and packets of an idr frame and a non - idr frame with priority 1 , respectively , where the state is for the -th packet of the idr frame , and the state is for the -th packet of the non - idr frame with priority 1 . after a frozen interval, it is followed by packets of an idr frame with priority 1 .if all these packets are transmitted successfully , they are followed by packets of a non - idr frame , and otherwise , they initialize a new frozen interval . after the transmission of a non - idr frame , it is followed by another non - idr frame unless the transmission fails .suppose that and are the probabilities that the transmissions of an idr frame and a non - idr frame with priority 1 are successful , respectively .the transmission of an idr frame is successful if and only if all the packets of the idr frame are transmitted successfully .for each packet , the packet loss rate is , since it has priority 1 .thus , we have for the non - idr frames , notice that they also have priority 1 .then , the probability is given by when , no frame is assigned priority 3 , and then , we do nt have the states in the last row in fig .[ fig : markov ] .if any frame is dropped in transmission , it will be followed immediately by another idr frame . in this case , the discrete - time markov chain becomes the model in fig .[ fig : markovd1 ] .the following derivation is based on the model in fig .[ fig : markov ] .however , it is also suitable when . ,scaledwidth=60.0% ] let , and , for , and , be the stationary distribution of the markov chain .notice that , and .furthermore , we have from the above equations , we derive that considering the normalization condition it is not difficult to obtain that (1-p_b)+p_ad'}.\end{aligned}\ ] ] let be the probability that a packet belongs to an idr frame , which is given by (1-p_b)+p_ad'}.\ ] ] in a video sequence containing packets , the expected number of packets which belong to an idr frame is obtained by from ( [ eqn : pkt_idr ] ) , we have (1-p_b)+p_ad'}\\ & < & \frac{d(1-p_b)n}{[d+(d-1)d'](1-p_b)+p_ad'}\label{eqn : result1 } % & = & \frac{n_f}{1+(d-1)d'p_1},\label{eqn : result}\end{aligned}\ ] ] where the last inequality follows from the fact that . 
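the stationary distribution derived above can be checked numerically ; the sketch below builds a packet - level chain with the structure of fig . [ fig : markov ] as reconstructed from the text ( one state per packet of the idr frame , of a priority-1 p frame and of the priority-3 frames in a frozen interval ) , uses illustrative parameter values , and reads off the fraction of packets belonging to idr frames from the eigenvector associated with eigenvalue one .

```python
import numpy as np

# illustrative parameters (placeholders, not values from the paper)
pkt_idr, pkt_p = 4, 2        # packets per IDR frame and per P frame
frames_frozen = 5            # frames per frozen interval (set by the feedback delay)
p1 = 0.01                    # packet loss rate at priority 1
p_a = (1 - p1) ** pkt_idr    # an IDR frame succeeds iff all of its packets do
p_b = (1 - p1) ** pkt_p      # same for a priority-1 P frame

n_c = (frames_frozen - 1) * pkt_p            # priority-3 packet states
n = pkt_idr + pkt_p + n_c
T = np.zeros((n, n))

for m in range(pkt_idr - 1):                 # walk through the packets of the IDR frame
    T[m, m + 1] = 1.0
T[pkt_idr - 1, pkt_idr] = p_a                # IDR frame ok -> first packet of a P frame
T[pkt_idr - 1, pkt_idr + pkt_p] = 1 - p_a    # IDR frame dropped -> frozen interval

for m in range(pkt_p - 1):                   # packets of a priority-1 P frame
    T[pkt_idr + m, pkt_idr + m + 1] = 1.0
T[pkt_idr + pkt_p - 1, pkt_idr] = p_b               # P frame ok -> next P frame
T[pkt_idr + pkt_p - 1, pkt_idr + pkt_p] = 1 - p_b   # P frame dropped -> frozen interval

for m in range(n_c - 1):                     # priority-3 packets of the frozen interval
    T[pkt_idr + pkt_p + m, pkt_idr + pkt_p + m + 1] = 1.0
T[-1, 0] = 1.0                               # frozen interval ends -> next IDR frame

vals, vecs = np.linalg.eig(T.T)              # stationary distribution: left eigenvector
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()
print("fraction of packets belonging to IDR frames:", pi[:pkt_idr].sum())
```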
by taylor s theorem ,the probability can be rewritten as where .thus , we have similarly , we obtain that then , applying the above bounds , inequality ( [ eqn : result1 ] ) can be rewritten as (d'p_1-\frac{d'(d'-1)}{2}p_1 ^ 2)+(1-dp_1)d'}\nonumber\\ & = & \frac{dp_1n}{[d+(d-1)d'](p_1-\frac{d'-1}{2}p_1 ^ 2)-dp_1 + 1}\nonumber\\ & = & \frac{dp_0n}{[d+(d-1)d'](p_0-\frac{d'-1}{2}p_0p_1)-dp_0+\frac{p_0}{p_1}}\nonumber\\ & = & \frac{dp_0n}{[(d+(d-1)d')(1-\frac{d'-1}{2}p_1)-d]p_0+\frac{p_0}{p_1}}\nonumber\\ % & < & \frac{n_f}{[d+(d-1)d'](p_0-\frac{(d'-1)}{2}p_0p_1)-dp_0 + 1}\\ & < & \frac{n_f}{[(d+(d-1)d')(1-\frac{d'-1}{2}p_1)-d]p_0 + 1}\nonumber % & = & \frac{dp_0n}{[d+(d-1)d']p_0+\frac{p_0}{p_1}}\\ % & < & \frac{dp_0n}{[d+(d-1)d']p_0 + 1}\\ % & = & \frac{n_f}{[d+(d-1)d']p_0 + 1 } , % \label{eqn : result_appen}\end{aligned}\ ] ] where the last inequality follows from the fact that and . from the result above , we have the following observations : the expected freeze time for our proposed method is always reduced compared to the unmodified ieee 802.11 standard ; the longer the frozen interval , the greater the gain compared to the unmodified ieee 802.11 standard . as shown in fig .[ fig : comparison ] , for the case of the unmodified ieee 802.11 standard , after a packet loss , the video receiver shows frozen frames until the next idr frame , regardless of whether the frames before the idr frame are received successfully .however , for the case of our proposed method , priority 3 is assigned to the frames following a packet loss , and more packets are dropped during the error propagation . as a compensation , the frames from the next idr frame have a higher retry limit , and as a result , a lower packet loss probability is achieved for these frames and the total number of frozen frames is reduced . by increasing the retry limit for the high priority packet, our proposed method concentrates the packet losses into small segments of the entire video sequence to improve the video quality .our proposed scheme is evaluated using the network topology in fig . [fig : net_topo ] , which contains a video teleconferencing session with our proposed qoe - based optimization ( vi-1 and vi-2 ) and other cross traffic , including a voice session , an ftp session , and a video teleconferencing session without our proposed qoe - based optimization ( vi-3 and vi-4 ) . in the simulations , we only consider one - way video transmission from vi-1 to vi-2 , while the video teleconferencing between vi-3 and vi-4 are two - way . vi-1 and vi-3 are in the same wlan with the ftp client and the voice user vo-1 .the access point ap-1 communicates with vi-2 , vi-4 , the ftp server , and the voice user vo-2 through the internet , with a one - way delay of 100 ms in either direction .the h.264 video codec is implemented for vi-1 and vi-2 . without our proposed qoe - based optimization , the retry limit for all packetsis set to 7 , the default value in the ieee 802.11 standard .three levels of video priority are assigned in video teleconferencing sessions with our proposed qoe - based optimization , and the corresponding retry limits are , where as we discuss before , we set because it leads to a tractable analytic model and makes the implementation easier . since larger retry limit may cause longer delay to certain packets , we only increase the retry limit by 1 . 
meanwhile , no matter whether the packets with priority 3 are received successfully or not , their corresponding frames will be frozen , so we assign the retry limit for them . at the video sender ,a packet is discarded when its retry limit is exceeded . in our proposed scheme , the video receiver detects a packet loss when it receives the subsequent packets or it does not receive any packets for a time period .then it sends the packet loss information to the video sender through rtcp , and a new idr frame is generated after the rtcp feedback is received by the video sender . from the time of the lost frame until the next idr frameis received , the video receiver only presents frozen frames .two test video sequences are transmitted from vi-1 to vi-2 .one is the low motion foreman sequence , which has a frame rate 30 frames / sec and a duration of approximately 10 seconds , containing 295 frames .the other one is the high motion basketball passing sequence .its frame rate is 60 frames / sec , and the duration is 5 second , containing 300 frames .all the cross traffic is generated by opnet 17.1 .for the cross video session from vi-3 to vi-4 , the frame rate is 30 frames / sec , and the outgoing and incoming stream frame sizes are both 8500 bytes . for the tcp session between the ftp client and server , the receive bufferis set to 8760 bytes .all the numerical results in this section are averaged over 100 seeds , and for each seed , the data is collected in the durations of the video sequences . to make the network performance comparable to those reported in , we also introduce another wlan to increase the error probability .it consists of an ap and two stations shown as wlan 2 in fig .[ fig : net_topo ] .both of these two ieee 802.11n wlans operate on the same channel .the data rates are 13 mbps , and the transmit powers are 5 mw . the buffer sizes at the aps are 1 mbits .the numbers of spatial streams are set to 1 .the distances of the aps and the stations are set to enable the hidden node problem . in the simulations ,the distance between the two aps is 300 meters , and the distances between vi-1 and ap-1 , and between ap-2 and vi-5 , are both 350 meters . a video teleconferencing session is initiated between vi-5 and vi-6 through ap-2 .the frame rate is 30 frames / sec , and both the incoming and outgoing stream frame sizes are used to adjust the packet loss rate of the video teleconferencing session with our proposed method operating at vi-1 .network events affect the behavior of the video encoder , and vice versa . the former is often ignored in simulation based studies by feeding a pre - determined video sequence to the network . in our simulation study, we dynamically generate the video frames to be fed into the network according to the network events in the network .specifically , we capture the dynamic idr frame insertion that is triggered by reception of the packet loss feedback conveyed by rtcp packets in opnet .the details are as follows : let , , be the video sequence beginning from frame , where frame is an idr frame and all the subsequent frames are p - frames until the end of the video sequence .we start from the transmission of video sequence , and suppose that rtcp feedback is received when we are transmitting frame .after the transmission of the current frame , we switch to the video sequence , which causes the idr frame insertion at frame , and use frame and the subsequent frames of to feed the video sender simulated in opnet . 
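a minimal sketch of this sequence - switching mechanism is given below ; the pre - encoded sequences are assumed to be stored as per - frame packet - size lists indexed by the position of their idr frame , and the switching point ( the frame following the one being transmitted when the rtcp feedback arrives ) follows the description above .

```python
class SequenceSwitcher:
    """Sketch of feeding pre-encoded sequences V_k into the network simulator.

    sequences[k] is assumed to hold the per-frame packet sizes of sequence V_k,
    whose frame k is an IDR frame and whose later frames are all P frames.
    """

    def __init__(self, sequences):
        self.sequences = sequences
        self.active = 0                  # start with V_0 (IDR frame at position 0)
        self.pending_switch = None       # frame index at which to switch, if any

    def on_rtcp_feedback(self, current_frame):
        # after finishing the current frame, switch to the sequence whose IDR
        # frame is the next frame to be transmitted
        self.pending_switch = current_frame + 1

    def packets_for_frame(self, frame_index):
        if self.pending_switch is not None and frame_index >= self.pending_switch:
            self.active = self.pending_switch
            self.pending_switch = None
        return self.sequences[self.active][frame_index]
```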
in fig.[fig : opnet ] , we depict the video sequence , where rtcp feedback is received when frame- and frame-24 are transmitted .note that in the opnet simulation it is the size rather than the content of each packet that is of interest .we encode all possible video sequences , , which is a one - time effort , and store the size of every packet of all video sequences .then , in the simulations , when we receive an rtcp feedback , we switch to the appropriate video sequence . of a single transmission attempt using the ieee 802.11 standard and our proposed method for 100 seeds , scaledwidth=60.0% ] in fig . [fig : collision_prob ] , we present the collision probabilities for 100 seeds , when the ieee 802.11 standard and our proposed method are used at the video user , respectively .the average collision probabilities are 0.350 and 0.339 for the ieee 802.11 standard and our proposed method , respectively , which confirms the assumption . comparing these two values, it is reasonable to use the collision probability from our proposed method as an approximation of collision probability when only the ieee 802.11 standard is applied . in fig .[ fig : num_frm_plr ] , the average fractions of frozen frames using the ieee 802.11 standard and our proposed qoe - based optimization are presented , where the foreman sequence is transmitted . for different application layer load configurations, we tune the cross traffic between vi-5 and vi-6 to obtain different packet loss rates , when the ieee 802.11 standard is used at the video user .the packet loss rates are 0.0023 , 0.0037 , 0.0044 , 0.0052 and 0.0058 , for scenarios 1 to 5 , respectively .then , we run the simulations using our proposed method with the same cross traffic configurations .we also show the upper bound for our proposed method in ( [ eqn : result ] ) , where the parameters , , and are averaged from the simulation results , and it is confirmed that ( [ eqn : cod ] ) is satisfied . in fig . [fig : num_frm_plr_bskt ] , we also present the average fractions of frozen frames of basketball passing video for the same configurations .we observe that the average fraction of frozen frames of our proposed method is strictly less than the upper bound . as the packet loss rate increases ,the average fraction of frozen frames increases regardless of whether our proposed method is used or not , and the performance of our method remains strictly better than that of the corresponding value of the ieee 802.11 standard . in fig .[ fig : num_frm_rtt ] , we show the average fractions of frozen frames for different rtts between video sender and receiver , when the application layer load configuration 3 is applied .the foreman sequence is transmitted .notice that the feedback delay is at least 1 rtt between video sender and receiver . as the rtt increases , the feedback delay increases , the duration of every frozen interval increases ,more frames are affected by each packet loss , and the fraction of frozen frames increases . from the upper bound in ( [ eqn : result ] ) , we infer that the gain of our proposed method compared to the ieee 802.11 standard increases , when a larger rtt is applied and this is consistent with the numerical results in fig . [fig : num_frm_rtt ] .when the rtt is 100 ms , the average fraction of frozen frames using our proposed method is less , compared to that using the ieee 802.11 standard . 
when the rtt is 400 ms , the gain increases to .moreover , the average fractions of frozen frames using the proposed method are always less than the upper bound in ( [ eqn : result ] ) .similar results are also observed for basketball passing video in fig .[ fig : num_frm_rtt_basket ] . in fig .[ fig : delay ] , we present the cdf of the packet end - to - end delay between the video sender and receiver , when the application layer load configuration 3 is applied and the rtt is 100 ms . for our proposed method , because of lemma [ lem : pckt ] , less packets are generated than the original ieee 802.11 standard . as a result , the end - to - end delay may be well reduced when our proposed method is used , which is corroborated by the results in fig .[ fig : delay ] . [ cols="^,^,^,^,^,^ " , ] [ tab : simu4 ] in tables[ tab : simu1 ] and [ tab : simu3 ] , we list the average throughputs for cross traffic in wlan 1 , when foreman sequence is sent with the application layer load configurations 2 and 5 applied , respectively .in addition , the standard deviations for these two scenarios are listed in tables [ tab : simu2 ] and [ tab : simu4 ] , respectively .we can observe that the throughput results for the proposed method are almost the same compared to those for the ieee 802.11 standard .we proposed a qoe - based mac layer enhancement for wifi that dynamically assigned different retry limits to video packets based on the packet loss events in the network subject to a compatibility design constraint . effectively , our proposed method concentrated the packet losses into small segments of the video sequencethe number of frozen frames was reduced compared to the original ieee 802.11 standard .additionally simulation results showed that cross traffic was not negatively affected .the authors would like to thank dr .anantharaman balasubramanian of interdigital labs for helping develop the joint video - opnet simulation , and dr .rahul vanam and dr .louis kerofsky for helping set up the subjective experiment .using the original ieee 802.11 standard , every lost packet will trigger a new idr frame and the first frame of the video sequence is an idr frame .so the expected total number of the idr frames is .the expected total number of packets is given as ',\end{aligned}\ ] ] which yields with our proposed method , every lost packet with priority 1 or 2 will cause the generation of a new idr frame . similar to the case with the original ieee 802.11 standard , the expected total number of packets is given as ',\end{aligned}\]]which yields let . from ( [ eqn : numfrm_ori ] ) and ( [ eqn : numfrm_prop ] ) , we have notice that , and then , we have where the equality follows from ( [ eqn : numpkt ] ) , and the last inequality follows from the fact that . since , we have . then , it follows from ( [ ineq1 ] ) that the first inequality follows from the fact that is generally very small and is then less than 1 . from the above inequality, we have that , i.e. , for the same number of video frames , the expected number of packets when the unmodified ieee 802.11 standard is used at the video user is greater than that when our proposed method is used . considering the compatibility condition ( [ eqn : compat ] ), we have (\delta d-1)+p_3n_3}{1-p}\\ & \geq&0\end{aligned}\ ] ] the first inequality follows from the fact that .the equation is obtained by substituting ( [ eqn : numpkt2 ] ) .the second inequality follows from the facts that , and , and the equality holds when and .h. schwarz , d. marpe , and t. 
wiegand , `` overview of the scalable video coding extension of the h.264/avc standard , '' _ ieee tran .circuits syst .video technol .9 , pp . 1103 - 1120 , sept .2007 .a. khan , l. sun , e. jammeh , and e. ifeachor , `` quality of experience - driven adaptation scheme for video applications over wireless networks , '' _ iet communications special issue on `` video communications over wireless networks , '' _ vol .1337 - 1347 , 2010 .t. jiang , h. wang , and a. v. vasilakos , `` qoe - driven channel allocation schemes for multimedia transmission for priority - based secondary users over cognitive radio networks , '' _ieee j. sel .areas commun .7 , pp . 1215 - 1224 , aug .l. zhou , z. yang , y. wen , h. wang , and m. guizani , `` resource allocation with incomplete information for qoe - driven multimedia communications , '' _ ieee tran .wireless commun ., _ vol . 12 , no . 8 , pp . 3733 - 3745 , aug .y. feng , z. liu , and b. li , `` gestureflow : qoe - aware streaming for multi - touch gestures in interactive multimedia applications , '' _ ieee j. sel .areas commun .1281 - 1294 , aug .
in ieee 802.11 , the retry limit is set to the same value for all packets . in this paper , we dynamically classify video teleconferencing packets based on the type of the video frame that a packet carries and the packet loss events that have happened in the network , and assign them different retry limits . we consider the ippp video encoding structure with instantaneous decoder refresh ( idr ) frame insertion based on packet loss feedback . the loss of a single frame causes error propagation for a period of time equal to the packet loss feedback delay . to optimize the video quality , we propose a method that concentrates the packet losses into small segments of the entire video sequence , and study the performance with an analytic model . our proposed method is implemented only on the stations interested in enhanced video quality , and is compatible with unmodified ieee 802.11 stations and access points in terms of performance . simulation results show that the performance gain can be significant compared to the ieee 802.11 standard without negatively affecting cross traffic . qoe , packet priority , video teleconferencing , ieee 802.11 , retry limit
upsampling result patches of _ art _ from the middleburry dataset . (a ) the noisy low resolution depth map patch and the corresponding color image .( b ) the ground truth .( c ) the upsampling result by the state - of - art method in .the upsampling result by our method ( d ) without adaptive bandwidth selection and ( e ) with adaptive bandwidth selection .( f ) the corresponding bandwidth map obtained by our method.,title="fig : " ] + acquiring the depth information of 3d scenes is essential for many applications in computer vision and graphics such as 3d modeling , 3dtv and augmented reality .recently time - of - flight ( tof ) cameras have shown impressive results and become more and more popular , e.g. , kinect v2.0 sensor .they can obtain dense depth measurements at a high frame rate .however , the resolution is generally very low and the depth map is often corrupted by strong noise .tremendous efforts have been put for improving the resolution of depth maps acquired by tof cameras .the solutions usually go to three categories . in the first category, single low resolution depth map is upsampled through different data - driven approaches .it may be achieved by exploiting similarity across relevant depth patches in the same depth map .the resolution can also be synthetically increased by exploiting an existing generic database containing large numbers of high resolution depth patches which were inspired by the work in .the second approach upsamples the low resolution depth map by integrating multiple low resolution depth maps of the same scene , which may be acquired at the same time but from slightly different views or at the different sensing time .these existing methods assume that the scene is static .the third category achieves upsampling through the supports of the high resolution guided color image . in this category, it is assumed that the depth discontinuity on the depth map and the color edge on the color image co - occur on the corresponding regions .image guided upsampling methods have several advantages against the other two categories and are popular in recent years .they can yield much better upsampling quality than single depth map upsampling . besides, they can achieve larger upsampling factor and do not need any prior database compared with the existing methods .also when compared with the second category , image guided upsampling is not subject to static scene and does not need complicated camera calibration process . despite the obvious advantages against the first two categories of the solutions , the main issues of image guided upsampling are : 1 ) texture copy artifacts on the smooth depth region when the corresponding color image is highly textured ; 2 ) blurring depth discontinuity when the corresponding color image is more homogeneous ; 3 ) performance drops for the case of highly noisy depth maps . 
in this paper , a new image guided upsampling approach is proposed .it well tackles the issues of the existing methods in the same category mentioned above .we formulate a novel optimization framework with the brand new data term and the new smoothness term compared with other state - of - art models .pixel - by - patch validity checking is introduced in the data term in the optimization process instead of pixel - by - pixel checking in the existing methods .also , a new error norm is proposed to replace the l2 norm .these together improve the robustness of the framework against the noise .moreover , the new smoothness term relaxes the strict assumption on the co - occurrence of depth discontinuity and the color edge which is one of major problems in the existing methods of the same category . to further improve the performance of the proposed framework, we propose a new data driven parameter selection scheme to adaptively estimate the parameter in the optimization process .experimental results on synthetic and real data show that our method outperforms other state - of - art methods in both visual quality and quantitative accuracy measurement .encouraging performance retains even for larger upsamping scale such as and .the rest of this paper is organized as follows : section [ secrelatedwork ] is the related work where we briefly present the mrf framework in and its extension in which are highly related to our work and then analyze their shortages which motivate the proposed method . in section [ secourmethod ] , we first detail our upsampling model .then , we give further explanation to show how the newly proposed model is able to handle the constrains mentioned above . in the end , a data driven parameter estimation scheme is proposed to adaptively select the parameter in the model .section [ secexperiments ] shows our experimental results and the comparison with other state - of - art methods .we draw the conclusion in section [ secconclusion ] .+ one of the most well known image guided upsampling methods is based on the markov random fields ( mrf ) .it is further extended in which we denote as nlm - mrf . both methods along with other similar methods such as which adopt the certain cues on the corresponding available color image as the reference to upsample the low resolution depth map .they all assume that the color edge co - occurs with the depth discontinuity . to clearly demonstrate the essential constrains of the existing methods , we brief the existing modeling as follows . given a noisy low resolution depth map and the companion high resolution color image , the task is to upsample the to of which the resolution is the same as that of .the work in is modeled through mrf as : where is the initial value of , which can be obtained by interpolating through the basic interpolation such as bicubic interpolation . represents the coordinate of the high resolution depth map . is the neighborhoods of in the square patch centered at . the first term in eq.([mrf ] )is the data term which enforces the similarity between the corresponding positions of the high resolution depth map and the low resolution depth map . the second term in eq.([mrf ] ) is the smoothness term which enforces the smoothness in the neighboring area .the data term and the smoothness term are balanced by the parameter . is defined as follows : where represents the different channels of the color image . 
is a constant defined by the user .this framework has the following two constrains : first , it is sensitive to the noise in the depth map . to clearly show this constrain , we take the derivative of the objective function in eq.([mrf ] ) with respect to and let it equal to zero ,then we can form an iterative formulation as : it is seen that eq.([mrfupdate ] ) has two important terms : the first term of the numerator is related to the data term in eq.([mrf ] ) and the second term in eq.([mrfupdate ] ) is related to the smoothness term in eq.([mrf ] ) .the denominator in eq.([mrfupdate ] ) can be considered as a constant for a given .it is irrelevant to the depth change .it is therefore skipped in our analysis .the first term contains only the initial value at the corresponding position . for a noise ( including sensing noise and the noise caused by bicubic interpolation operation ) free input, the initial value is close to the groundtruth .however , when the input contains significant noise , the initial value may be far away from the groundtruth .simply adding this noisy initial value will greatly perturb the upsampling result .this implies that the data term in eq.([mrf ] ) is sensitive to the noise . from the data term in eq.([mrf ] ) , it is seen that , during the upsampling process , the validity and the quality of depth value at each depth position are measured pixel - by - pixel in order to enforce the similarity between the the corresponding positions of the high resolution depth map and the low resolution depth map .also , l2 norm is applied to calculate the similarity . from the analysis above, these two factors together make the data term sensitive to the noise .second , from the smoothness term in eq.([mrf ] ) , it is seen that the assumption of the co - occurrence of the color edge and the depth discontinuity is unnecessarily enforced .consequently , the weighting value in this term ( see eq.([mrfupdate ] ) ) is determined by the color difference of the corresponding positions on the color image . however , this assumption does not always well hold .sometimes the locations with small depth difference on the depth map may contain large color difference on the color image . in this case, the obtained upsampled depth map may suffer from the texture copy artifacts .another case is that the positions with large depth difference may contain small color difference on the color image . in such case, the second term will be close to the mean of the depth value in which will result in blurring depth discontinuities .the work in extended the mrf by adding a more regularization term called the non - local means term .moreover , unlike the weight in eq.([colorweight ] ) which is only based on the color image , they further combine segmentation , color information , and edge saliency as well as the bicubic upsampled depth map to define the weight .although the work in somehow improves the performance when compared with , it does not upgrade these two terms significantly and thus the performance improvement is limited .figure [ syntheticcomparison ] illustrates the upsampling results of a synthetic depth map . 
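as a concrete (and simplified) reference point for the iterative formulation in eq.([mrfupdate]), the sketch below implements a jacobi-style update of the mrf model with a gaussian color-guided weight. the weight form, the border handling (borders wrap via np.roll) and all parameter values are assumptions chosen for illustration; this is not the original implementation.

```python
import numpy as np

def mrf_guided_update(D0, color, alpha=1.0, sigma_c=0.1, radius=2, iters=50):
    """Jacobi-style fixed-point iteration for the MRF model sketched above.

    D0    : bicubic-upsampled (noisy) depth map, HxW, values in [0, 1]
    color : guidance image, HxWx3, values in [0, 1]
    The Gaussian color weight below is an assumption; the paper's exact
    weight definition is not reproduced here. Borders wrap for brevity.
    """
    D = D0.copy()
    offs = [(dy, dx) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1) if (dy, dx) != (0, 0)]
    for _ in range(iters):
        num = D0.copy()                 # data-term contribution: the initial value
        den = np.ones_like(D0)
        for dy, dx in offs:
            Dj = np.roll(np.roll(D, dy, axis=0), dx, axis=1)
            cj = np.roll(np.roll(color, dy, axis=0), dx, axis=1)
            w = alpha * np.exp(-np.sum((color - cj) ** 2, axis=2) / (2 * sigma_c ** 2))
            num += w * Dj               # smoothness-term contribution: color-weighted neighbours
            den += w
        D = num / den
    return D
```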
in this synthetic experiment, we show that these two methods are not robust against the noise and can not handle the case where the color edge and the depth discontinuity are not consistent .we will further illustrate the texture copy artifacts of these two methods in the experimental part .in this section , we describe our optimization framework for upsampling the noisy low resolution depth map to a higher resolution one given a companion color image .different from the previous work , we do not necessarily assume the co - occurrence of the color edge and the depth discontinuity .meanwhile , it is more robust against the noise .our upsampling model consists of two terms : the data term and the smoothness term .given a noisy low resolution depth map , it is first interpolated to by bicubic interpolation .then our upsampling model is formulated as : where is the data term that makes the result to be consistent with the input . is the smoothness term that reflects prior knowledge of the smoothness of our solution .these two terms are balanced with the parameter . + * the data term * : the data term is defined as : where is a normalized gaussian window that decreases the influence of where is far from : where is a constant that is defined by the user . is the robust error norm function that we denote as the exponential error norm .this function has long been used in image denoising such as local mode filtering and nonlocal filtering .it is defined as : the proposed exponential error norm is illustrated in figure [ errornormfunctioncomparision](b ) to have a comparison with the l2 norm illustrated in figure [ errornormfunctioncomparision](a ) .our data term is different from that of previous methods which only use pixel - by - pixel depth difference modeled with l2 error norm in the data term . in eq.([mymodel ] ) , it can be seen that the new data term measures the depth differences pixel - by - patch by taking each reference pixel s neighboring region into account . according to ,depth map is quite piece - wise smooth and thus pixel - by - patch difference calculation is more robust to the noise than pixel - by - pixel depth difference calculation .such calculation is further normalized by gaussian window in order to better maintain the local depth similarity in the neighboring area .then , the pixel - by - patch depth difference is further modeled with the exponential error norm defined in eq.([erronormfunction ] ) which is quite robust against the outliers .these together make the data term robust against the noise and will be further theoretically explained in section [ secadvantageovermrf ] . +* the smoothness term * : the smoothness term is guided by the companion color image .it is defined as : we adopt the same function in eq.([erronormfunction ] ) to model the smoothness term , i.e. . the color guided weight is defined as : where is the same as eq.([spatialweight ] ) and is the same as eq.([colorweight ] ) .note that the color guided weight in our model is quite similar as that of the mrf in except that there is an additional gaussian window .however , the mrf in enforces the co - occurrence of the color edge and the depth discontinuity through eq.([colorweight ] ) .depth discontinuity cues are completely based on color image in their work ( as seen in eq.([mrfupdate ] ) ) .instead , the smoothness term ( i.e. eq.([smoothnessterm ] ) ) in the proposed framework relaxes such strict dependence due to . 
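the exact expression of the exponential error norm is not reproduced in the extracted text above, so the sketch below assumes the welsch-type form commonly used in local mode filtering, rho(x) = 1 - exp(-x^2 / (2*lambda^2)); it is bounded, which is what makes large residuals (outliers) contribute far less than under the l2 norm.

```python
import numpy as np

def exp_error_norm(x, lam):
    """Assumed form of the exponential (Welsch-type) error norm:
    rho(x) = 1 - exp(-x^2 / (2 lam^2)). Bounded, so outliers are penalized
    far less than under the quadratic L2 norm."""
    return 1.0 - np.exp(-x ** 2 / (2.0 * lam ** 2))

def exp_error_norm_deriv(x, lam):
    """Derivative of the assumed norm; its exponential factor acts as the
    weight that down-weights noisy neighbourhoods in the update rule."""
    return (x / lam ** 2) * np.exp(-x ** 2 / (2.0 * lam ** 2))

x = np.linspace(-3, 3, 7)
print(exp_error_norm(x, lam=0.5))   # saturates near 1 for large |x|, unlike 0.5*x**2
```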
as we will further show in section [ secadvantageovermrf ], the smoothness term will result in a color image guided bilateral filter on the depth map at each update .depth discontinuity cues are not only based on the color image but also based on the depth map itself during the optimization process .it is more capable of the case where the color edge and the depth discontinuity are not consistent with each other. this property will be the key element that makes our model tackle the texture copy artifacts and blur of depth discontinuities .we will further theoretically explain these in section [ secadvantageovermrf ] .+ we further analyze our upsampling model in order to demonstrate its advantages . by taking the derivative of the objective function of our model in eq.([mymodel ] ) with respect to ,let it equal to zero and form the iterative formulation as : where we define is the derivative of defined in eq.([erronormfunction ] ) .our method has the following advantages : first , it is more robust against the noise in the input .the first term is the weighted sum of the initial depth values in .the weights are based on the difference of the latest updated depth map and the initial depth map , which is further normalized by the gaussian window .this term is related to the data term in the proposed framework .compared with eq.([mrfupdate ] ) , it can be shown that , in the newly proposed data term , the accuracy of upsampling in each round at each pixel is not affected by the initial upsampling value of the reference pixel only .instead , the accuracy of upsampling in each round is based on the local measurement in its neighboring area .this results from the proposed pixel - by - patch depth difference measurement and is more robust against the noise .this effort is further enhanced by introducing which results from the proposed exponential error norm .when the original depth map contains significant noise , the depth value in the neighboring area is unlikely smooth which results in large .this further causes the decrease of . finally , such noisy area will have less effects on in each round of upsamplingconsequently , such data term is more robust to the noise . to our best understanding , it is a brand new data term used in the optimization framework for image guided depth upsampling . the second term in the numerator in eq.([myupdate ] )is related to the smoothness term .compared with the counterpart in eq.([mrfupdate ] ) , it is shown that the existing methods enforce the assumption on the consistence between the color edge and the depth discontinuity , where the weight is only determined by the color difference in the neighboring area on the color image .in eq.([myupdate ] ) , it is shown that we relax such strict assumption by extending color image guided bilateral filtering onto each round of upsampling depth map .the weight value is determined by three factors : 1 ) the color similarity in the local area ( same as the existing method ) ; 2 ) the distance between the reference pixel and its neighboring pixel ( i.e. ) ; 3 ) the difference of depth value between the reference pixel and its neighboring pixels ( i.e. ) . reflects the property of the depth map . for the case where the depth region is homogeneous butthe color information is not smooth at the corresponding area , can eliminate the efforts caused by .thus , it reduces the texture copy artifacts . 
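a minimal sketch of the three-factor weight just listed (color similarity, spatial distance, and the depth difference of the current estimate); the gaussian forms and the parameter values are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def smoothness_weight(color_i, color_j, pos_i, pos_j, depth_i, depth_j,
                      sigma_c=0.1, sigma_s=3.0, lam=0.05):
    """Illustrative product of the three factors listed above. Parameter
    values are placeholders, not the paper's settings."""
    w_color = np.exp(-np.sum((np.asarray(color_i) - np.asarray(color_j)) ** 2)
                     / (2 * sigma_c ** 2))
    w_space = np.exp(-np.sum((np.asarray(pos_i, float) - np.asarray(pos_j, float)) ** 2)
                     / (2 * sigma_s ** 2))
    w_depth = np.exp(-(depth_i - depth_j) ** 2 / (2 * lam ** 2))
    return w_color * w_space * w_depth

# across a depth edge with no color edge the depth factor alone drives the
# weight toward zero, so the discontinuity is not averaged away
print(smoothness_weight([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], (0, 0), (0, 1), 0.40, 0.70))
# in a flat region with flat color the weight stays close to one
print(smoothness_weight([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], (0, 0), (0, 1), 0.40, 0.41))
```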
for the case where depth region contains depth discontinuity butthe color is smooth at the corresponding area on the color image , the second term will be close to a bilateral filter where well preserves the depth discontinuity .thus the proposed smoothness term is more capable of cases where the color edge is not consistent with the depth discontinuity .we do not assume the color edge and the depth discontinuity to be necessarily consistent with each other .figure [ syntheticcomparison](g ) shows the result by our method .it is clear that the noise in the homogeneous regions have been well smoothed and the depth discontinuities can be properly preserved even there is no color edge corresponding to the depth discontinuity on the depth map . in this paper ,the parameter is denoted as the _ bandwidth _ of the exponential error norm function in eq.([erronormfunction ] ) which is similar to the function of the tone term in the bilateral filter .a large has better noise smoothing but may over smooth the depth discontinuities .a small can better preserve the depth discontinuities but performs poorly in noise smoothing . in this section, we describe an optimization method that adapts to each pixel on the depth map .it is a data driven adaptive bandwidth selection . because the depth map is quite piece wise smooth , we assume that the bandwidth is also regular and smooth .we add another regularization term that consists of the l2 norm of the gradient of to the objective function in eq.([mymodel ] ) . that is : by minimizing eq.([bandwidthmodel ] ) with respect to through the steepest gradient descent according to : where is the given updating rate and the derivative of the objective function is given by }+\\ & \alpha \sum\limits_{j\inn\left ( i \right)}{{{{\tilde{\omega } } } _ { i , j}}\left [ 4{{\lambda}_{i}^n}\left ( 1-{{s}^n_{i , j } } \right)-\frac{2{{\left ( { { d}^n_{i}}-{{d}^n_{j } } \right)}^{2}}}{{{\lambda } _ { i}^n}}{{s}^n_{i , j } } \right]}+2\beta \sum\limits_{i\in \omega } { \delta { { \lambda } _ { i}^n } } \end{split}\ ] ] where and are the same as eq.([errornormfunctionderivative ] ) . depth map upsampling and the bandwidth selection are addressed in an iterative way through alternating the bandwidth update in eq.([bandwidthupdate ] ) and the depth map update in eq.([myupdate ] ) . note that most components in eq.([bandwidthderivative ] ) have been already computed in eq.([myupdate ] ) .the computation cost for is also quite small .so the computation cost of the bandwidth selection step is quite small indeed .figure [ coverfigure](f ) illustrates a bandwidth map obtained by our method where dark regions correspond to smaller bandwidth values and bright regions correspond to larger bandwidth values .it is clear that this bandwidth map well corresponds to the character of the depth map shown in figure [ coverfigure](b ) .in this section , we perform the quantitative and qualitative evaluation of our method on both the synthetic data and real data .comparison are performed with other state - of - art methods where the source codes are available .we show that our method can outperform other methods in most cases both quantitatively and qualitatively . 
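before turning to the experiments, the alternating bandwidth update described above can be sketched as follows. the derivative of the data and smoothness terms with respect to the bandwidth is not reproduced here (it is passed in as an array); only the contribution of the quadratic regularizer on the bandwidth gradient, a discrete laplacian, is spelled out, and the step size, floor value and sign conventions are illustrative assumptions.

```python
import numpy as np

def laplacian(x):
    """5-point Laplacian with replicated borders, used for the smoothness
    penalty on the per-pixel bandwidth map."""
    xp = np.pad(x, 1, mode="edge")
    return xp[:-2, 1:-1] + xp[2:, 1:-1] + xp[1:-1, :-2] + xp[1:-1, 2:] - 4 * x

def bandwidth_step(lam, grad_data_term, beta=0.1, eta=1e-3, lam_min=1e-3):
    """One steepest-descent update of the bandwidth map.

    grad_data_term stands for the derivative of the data/smoothness terms
    with respect to lam (its exact expression is not reproduced here); the
    quadratic regularizer contributes -2*beta*Laplacian(lam) to the gradient
    under the usual continuous convention.
    """
    grad = grad_data_term - 2.0 * beta * laplacian(lam)
    return np.maximum(lam - eta * grad, lam_min)   # keep the bandwidth positive

lam = np.full((8, 8), 0.05)
print(bandwidth_step(lam, grad_data_term=np.zeros_like(lam)).mean())  # flat map stays flat
```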
for more experimental results ,please see our supplementary material .we first test our method on the synthetic data form the middleburry dataset .we reuse the data in and compare our method with the mrf in and its extension non - local means mrf in , the image guided anisotropic total generalized variation upsampling in , the joint geodesic upsampling in , the color guided auto - regression upsampling in and the cross - based local multipoint filter in .the upsamling results are evaluated in root mean square error ( rmse ) between the original depth map and the upsampling result .four upsampling factors are tested : .all the values of both the color image and the depth map are normalized into interval ] .the parameters are set as follows : we slightly tune for different upsampling factors which is also the strategy adopted by other methods such as .it is chosen as for upsampling .perturbations on the other parameters marginally affect the final results . in eq.([bandwidthmodel ] ) is set to .the neighborhood is chosen as a square patch . and in eq.([colorspatialweight ] ) is set to and respectively . the initial value of in eq.([bandwidthupdate ] )is set to for all .its updating rate in eq.([bandwidthupdate ] ) is .table [ simulatedrmse ] summarizes the quantitative comparison results on the middleburry dataset .it is clear that our method outperforms other methods in almost all the cases . especially , results by our method have a great improvement over the ones by the mrf in and the non - local means mrf in .even when compared with the most recent proposed methods in and , our method have smaller rmse in almost all the cases .note that the method in needs thousands of iterations to converge which is quite time consuming , namely iterations for and upsampling , iterations for upsampling and iterations for upsampling which needs hours with their own implementation .however , our method only needs a few iterations to converge , for example , less than 5 iterations for upsampling and less than 50 iterations for upsampling which only needs very few minutes .the computation cost of the method in is also quite huge due to its complex shape - based color guided weights .our method is also a magnitude faster than theirs . also , our computational cost doubles the mrf in but is about half of the non - local mrf in .figure [ bandwidthselectioncomparison ] illustrates the improvement of upsampling results by our bandwidth selection . without bandwidth selection, it is quite easy to smooth the depth discontinuities where the corresponding color edges are weak .moreover , our bandwidth selection can further help to preserve the fine details in the depth map .figure [ coverfigure](e ) clearly illustrates the improvement .the small holes in the ring are properly preserved with the adaptive bandwidth selection while this can not be achieved without the adaptive bandwidth selection as shown in figure [ coverfigure](d ) .figure [ 8upsamplingcomparison ] shows the examples of upsampling for visual comparison .it is clear that results by the mrf in and the non - local means mrf in can not well smooth the noise in homogeneous regions and properly preserve the depth discontinuities . 
as shown in the highlighted region of the _ moebius_ , results by their methods suffer from heavy texture copy effect while ours is not .the method in can avoid the texture copy effect , but it also can not preserve the fine details , for example , the small holes on the ring of _ art _ and the small object of _ moebius _ which are shown in the highlighted regions . however , our method can not only well smooth the noise in the homogeneous regions and avoid texture copy but also properly preserve the depth discontinuities even the fine details .figure [ 16upsamplingcomparison ] further shows upsampling results comparison .it is clear that even for such a large upsampling factor , our method can still well smooth the noise and preserve sharp depth discontinuities while none of the compared methods could yield such performance as clearly shown in the figure .upsampling with and without bandwidth selection .the first row are patches from _ dolls _ and the second row are patches from _ reindeer_. ( a ) the groundtruth .( b ) the corresponding color image . ( c )results obtained without bandwidth selection .( d ) results obtained with bandwidth selection and ( e ) the corresponding bandwidth maps.,title="fig : " ] + + + + to test the effectiveness of our method on real tof depth maps , we further test our method on the real tof depth maps from .as far as we know , this is the only real tof dataset that _ provides groundtruth_. three depth maps are included in this dataset , namely _ books _ , _ devil _ and _shark_. all the values in the depth map are the real depth values from the camera to the measured objects ( in mm ) .the upsampling factor is about .table [ realrmse ] summarizes the quantitative comparison on this real dataset .our method outperforms all the other methods on these three depth maps .figure [ realdatacomparison ] illustrates the visual comparison of the _ books _ upsampling results .note that the result of the mrf has strong texture copy effect at the up left corner of the book .the result of the non - local means mrf still contains noise in the homogeneous regions .our method can well smooth the noise in homogeneous regions and preserve sharp depth discontinuities .in this paper , we propose a novel method for depth map upsampling guided by the high resolution color image .we model the upsampling work with an optimization framework that consists of a brand new data term and a new smoothness term .the proposed data term is based on the pixel - patch difference and is modeled with an exponential error norm function .it has been proved to be more robust against the noise than the one based on pixel - pixel difference with l2 norm as the error norm function .we relax the too strict assumption on the consistency between the color edge and the depth discontinuity which is adopted by most existing methods .the new smoothness term makes our model not only obtain the depth discontinuity cues from the guided color image but also the depth map itself .this makes our model well tackle the texture copy artifacts and preserve sharp depth discontinuities even when there is no color edge correspondence .moreover , a data driven scheme is proposed to adaptively select the proper bandwidth of the exponential error norm function .this helps to further improve the upsampling quality where fine details and sharp depth discontinuities could be preserved even for a large upsampling factor , and for example . 
experimental results on both synthetic data and real data have shown that our method outperforms other state - of - the - art methods .
time - of - flight ( tof ) depth sensing cameras are able to obtain depth maps at a high frame rate . however , their low resolution and sensitivity to noise are always a concern . a popular solution is upsampling the obtained noisy low resolution depth map with the guidance of the companion high resolution color image . however , due to the constraints in the existing upsampling models , the high resolution depth map obtained in such a way may suffer from either texture copy artifacts or blur of depth discontinuity . in this paper , a novel optimization framework is proposed with a brand new data term and smoothness term . comprehensive experiments using both synthetic data and real data show that the proposed method well tackles the problem of texture copy artifacts and blur of depth discontinuity . it also demonstrates sufficient robustness to noise . moreover , a data driven scheme is proposed to adaptively estimate the parameter in the upsampling optimization framework . the encouraging performance is maintained even in the case of large upsampling e.g. and .
audio feature extraction and classification methods have been studied by many researchers over the years . in general ,audio classification can be performed in two steps , which involves reducing the audio sound to a small set of parameters using various feature extraction techniques and classifying or categorizing over these parameters .feature commonly exploited for audio classification can be roughly classified into time domain features , transformation domain features , time - transformation domain features or their combinations .many of those features are common to audio signal processing and speech recognition and have many successful performances in various applications .however almost all these features are based on short time duration and in vector form ( it is easy to handle but sometimes not proper ) , although it is believed that long time duration ( seconds ) help a lot in decision making . in this workwe will build robust features on a long time duration in matrix form which is the most natural way using long time audio information . in order to map or smooth the audio segment into a robust matrix space, we introduce the trace norm regularization technique to audio signal processing .the trace norm regularization is a principled approach to learn low - rank matrices through convex optimization problems .these similar problems arise in many machine learning tasks such as matrix completion , multi - task learning , robust principle component antilysis ( robust pca ) , and matrix classification . in this paper ,robust pca is used to extract matrix representation features for audio segments . unlike traditional frame based vector features , these matrix features are extracted based on sequences of audio frames . it is believed that in a short duration the signals are contributed by a few factors . thus it is natural to approximate the frame sequence by low - rank features using robust pca which assumes that the observed matrices are combinations of some low - rank matrices and some corruption noise matrices .having extracted descriptive features , various machine learning methods are used to provide a final classification of the audio events such as rule - based approaches , gaussian mixture models , support vector machines , bayesian networks , and etc . . in most previous work ,these two steps for audio classification are always separate and independent . in this work, we can learn the classifiers in solving similar optimization problems using trace norm regularization .after extraction of the robust low - rank matrix feature , the regularization framework based matrix classification approach proposed by tomioka and aihara in is used to predict the label .the problem of matrix classification ( mc ) with spectral regularization was first proposed by tomioka and aihara in .the goal of the problem is to infer the weight matrix and bias under low trace norm constraints and low deviation of the empirical statistics from their predictions .the trace norm was use to measure the complexity of the weight matrix of the linear classifier for matrix classifications .this kind of inference task belongs to the more general problem of learning low - rank matrix through convex optimization . 
for the matrix rank minimizationis np - hard in general due to the combinatorial nature of the rank function , a commonly - used convex relaxation of the rank function is the trace norm ( nuclear norm ) , defined as the sum of the singular values of the matrix .recent related researches are not focused on matrix classification directly , but rather on general trace norm minimization problem .these general algorithm can be adapted to matrix classification suitably . in these methods ,most are iterative _ batch _ procedures , accessing the whole training set at each iteration in order to minimize a weighted sum of a cost function and the trace norm .this kind of learning procedure can not deal with huge size training set for the data probably can not be loaded into memory simultaneously .furthermore it can not be started until the training data are prepared , hence can not effectively deal with training data appear in sequence , such as audio and video processing . to address these problems , we propose an _ online _ approach that processes the training samples , one at a time , or in mini - batches to learn the weight matrix and the bias for matrix classification .we transform the general batch - mode accelerated proximal gradient ( apg ) method for trace norm minimization to the online learning framework . in this online learning framework , a slight improvement over the exact apg leads an inexact apg ( iapg ) method , which needs less computation in one iteration than using exact apg . in addition , as a special case of general convex optimization problem , we derived the closed - form of the lipschitz constant , hence the step size estimation of the general apg method was omitted in our approach .our main contributions in this work can be summarized as follows : 1 . to our best knowledge ,we are the first to introduce low - rank constraints in audio and speech signal processing , and the results show that these constrains make the systems more robust to noise , especially to large corruptions .we propose online learning algorithms to learn the trace norm minimization based matrix classifier , which make the approaches work in real applications .the paper is organized as follows : section [ sec : matrixfeature ] presents the extraction of matrix representation feature .section [ sec : matrixclassifyaed ] presents the matrix classification problem solving via the general apg method and the proposed audio event detection with matrix classification .the proposed online methods with exact and inexact apg for weight and bias learning are introduced in section [ sec : onlinelearning ] .section [ sec : experimental ] is devoted to experimental results to demonstrate the characteristics and merits of the proposed algorithm .finally we give some concluding remarks in section [ sec : conclusions ] .over the past decades , a lot work has been done on audio and speech features for audio and speech processing . due to convenience and the short - time stationary assumption , these features are mainly in vector form based on frames ,although it is believed that features based on longer duration help a lot in decision making . in order to build long term features ,the consecutive frame signals are made together as rows , then the audio segments become matrices .generally , it is assumed and believed that the consecutive frame signals are influenced by a few factors , thus these matrices are combinations of low - rank components and noise . 
hence it is natural to approximate these matrices by low - rank matrices . in this work ,transformations of these approximate low - rank matrices are used as features .given an observed data matrix , where is the number of frames and represents the number of samples in a frame , it is assumed that it can be decomposed as where is the low - rank component and is the error or noise matrix .the purpose here is to recover the low - rank component without knowing the rank of it . for this problem, pca is a suitable approach that it can find the low - dimensional approximating subspace by forming a low - rank approximation to the data matrix .however , it breaks down under large corruption , even if that corruption affects only a very few of the observation which is often encountered in practice . to solve this problem ,the following convex optimization formulation is proposed where denotes the trace norm of a matrix which is defined as the sum of the singular values , denotes the sum of the absolute values of matrix elements , and is a positive regularization parameter .this optimization is refereed to as _ robust pca _ in for its ability to exactly recover underlying low - rank structure in data even in the presence of large errors or outliers . in order to solve equation ( [ eq : robustpcaformulation ] ) ,several algorithms have been proposed , among which the augmented lagrange multiplier method is the most efficient and accurate at present . in our work , this robust pca method is employed for the low - rank matrix extraction . in order to apply the augmented lagrange multiplier ( alm ) to the robust pca problem , lin et . identify the problem as and the lagrangian function becomes two alm algorithms to solve the above formulation are proposed in .considering a balance between processing speed and accuracy , the robust pca via the inexact alm method is chosen in our work . thus the matrix representation feature extraction process based on this approach is summarized in algorithm [ algo : prcaviaialm ] . in algorithm[ algo : prcaviaialm ] , is defined as the larger one of and , where is the maximum absolute value of the matrix elements .the ] . 5 : //line 6 solves 6 : . ] in algorithm [ algo : apg ] is the soft - thresholding operator introduced in : \doteq \left\ { \begin{array}{l } x-\varepsilon , \textrm{if } x>\varepsilon , \\ x+\varepsilon , \textrm{if } x<-\varepsilon , \\ 0 , \textrm{otherwise } \\ \end{array } \right.\ ] ] where and . for vectors and matrices ,this operator is extended by applying element - wise .[ algo : apg ] batch - mode weight matrix learning via apg * initialize * 1 : * while * not converged * do * 2 : .3 : ^t ] . 15 : . 16 : . 17 : 18 : . 19 : * end while * 20 : 21 : * end for * * output * : our procedure is summarized in algorithm [ algo : onlinelearning ] .the operator in step 6 of the algorithm denotes the kronecker product .given two matrices and , denotes the kronecker product between and , defined as the matrix in , defined by blocks of sizes equal to ]. 14 : . 15 : ^t$ ] . 16 : 17 : // line 18 updates the bias . 
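the inexact alm routine referred to above (algorithm [algo:prcaviaialm]) can be sketched in a few lines of numpy. this is a generic implementation of the published inexact alm for robust pca, with the commonly used default lambda = 1/sqrt(max(m, n)) and mu/rho constants; these may differ from the exact settings used in this work.

```python
import numpy as np

def soft_threshold(X, eps):
    """Entry-wise soft-thresholding operator S_eps[x]."""
    return np.sign(X) * np.maximum(np.abs(X) - eps, 0.0)

def robust_pca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Low-rank / sparse decomposition D = A + E via the inexact ALM.

    Generic implementation; constants (mu, rho, default lam) follow common
    practice and are assumptions rather than the paper's exact settings.
    """
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(D, 2)
    Y = D / max(norm2, np.abs(D).max() / lam)        # dual variable initialisation
    E = np.zeros_like(D)
    mu, rho = 1.25 / norm2, 1.5
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt  # singular value thresholding
        E = soft_threshold(D - A + Y / mu, lam / mu)  # sparse error update
        R = D - A - E
        Y = Y + mu * R
        mu = min(mu * rho, 1e7)
        if np.linalg.norm(R) / np.linalg.norm(D) < tol:
            break
    return A, E
```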
18 : 19 : * end for * * output * : in some conditions , use the classical heuristic in gradient descent algorithm , we may also improve the convergence speed of our algorithm by drawing training samples at each iteration instead of a single one .let us denote by the samples drawn at iteration .we can now replace lines 5 and 9 of algorithm [ algo : onlinelearning ] and [ algo : inexactonlinelearning ] by but in real applications , this batch method may not improve the convergence speed on the whole since the batch past information computation ( equation ( [ eq : minibatchinfoupdata ] ) ) would occupy much of the time .the updating of needs to do kronecher product which spend much of the computing resource .if the computation cost of equation ( [ eq : minibatchinfoupdata ] ) can be ignored or largely decreased , for example by parallel computing , the batch method would increase the convergence speed by a factor of .experiments are conducted on a collected database .we downloaded about 20hours videos from youku , with different programs and different languages .the start and end position of all the applause and laugh of the audio - tracks are manually labeled .the database includes 800 segments of each sound effect .each segment is about 3 - 8s long and totally about 1hour data for each sound effect .all the audio recordings were converted to monaural wave format at a sampling frequency of 8khz and quantized 16bits .furthermore , the audio signals have been normalized , so that they have zero mean amplitude with unit variance in order to remove any factors related to the recording conditions . in this section ,we conduct detailed experiments to demonstrate the characteristics and merits of the online learning for matrix classification problem .five algorithms are compared : the traditional batch algorithm with exact apg algorithm ( apg ) ; the online learning algorithm with exact apg ( ol_apg ) ; the online learning algorithm with inexact apg ( ol_iapg ) ; the online learning algorithm with exact apg and update equation ( [ eq : minibatchinfoupdata ] ) ( ol_apg_batch ) ; the online learning algorithm with inexact apg and update equation ( [ eq : minibatchinfoupdata ] ) ( ol_iapg_batch ) .all algorithms are run in matlab on a personal computer with an intel 3.40ghz dual - core central processing unit ( cpu ) and 2 gb memory . for this experiment, audio streams were windowed into a sequence of short - term frames ( 20 ms long ) with non overlap .13 dimensional mfccs including energy are extracted , and adjacent 50 frames ( one second ) of mfccs form the mfccs matrix feature .the goal is to classify the matrices according to their labels .two learning tasks are used to evaluate the performance of the online learning method , which are laugh / non - laugh segment classifier learning and applause / non - applause segment classifier learning . 
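for the feature/classifier interface used in these experiments, the sketch below stacks adjacent 20 ms mfcc frames (13 coefficients including energy) into one 50 x 13 matrix per one-second segment and scores it with a linear matrix classifier, score = trace(W^T X) + b. the mfcc front end itself is assumed to be available; random arrays stand in for real features and for a learned (W, b).

```python
import numpy as np

def segment_matrices(mfcc, frames_per_segment=50):
    """Stack adjacent MFCC frames (rows) into one matrix feature per segment.

    mfcc : array of shape (num_frames, 13), e.g. 20 ms frames with 13
           coefficients including energy, from any MFCC front end.
    """
    n = (mfcc.shape[0] // frames_per_segment) * frames_per_segment
    return mfcc[:n].reshape(-1, frames_per_segment, mfcc.shape[1])  # (segments, 50, 13)

def matrix_classifier_score(X, W, b):
    """Linear matrix classifier: score = <W, X> + b = trace(W^T X) + b."""
    return np.sum(W * X) + b

# toy usage with random data standing in for real MFCCs and a learned (W, b)
rng = np.random.default_rng(0)
mfcc = rng.standard_normal((500, 13))
segs = segment_matrices(mfcc)
W, b = rng.standard_normal((50, 13)), 0.0
print([np.sign(matrix_classifier_score(X, W, b)) for X in segs[:3]])
```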
for ol_apg and ol_apg_batch algorithms , the parameters in the stopping criteria ( [ eq : stoppingcriteria ] ) are set and or smaller , which are determined by empirical evidence that larger values would make the algorithm diverge .the regularization constant is anchored by the large explicit fixed step size and the matrices involved , this can be seen from in the line 3 in algorithm [ algo : apg ] , which means that in practice the parameter should be set adaptably with the step size in the online process .but due to this variation of , the comparisons between the algorithms would not bring into effect .hence in this work we use throughout .[ fig : performanccomparonline ] compares the five online algorithms .the proposed online algorithm draws samples from the entire training set .we use a logarithmic scale for the computation time .[ fig : performanccomparonline]a shows the values of the target functions as functions of time .it can be seen that the online learning methods without batch or with small batch past information updating converge faster than the methods with large batch past information updating and reason for this has been explained in the last paragraph of section [ sec : onlinelearning ] .after online methods and batch methods converge , the two methods result in almost equal performance .[ fig : performanccomparonline](b)(d ) shows the classification rates for different algorithms respectively . in accordance with the values of the target functions , the classification accuracies of online methods without or with small batch updating become stable quickly than that of methods with batch updating .although the inexact algorithms process samples much fast with less resources than exact ones , they converge slowly .this section is to assess the effectiveness of robust pca extracted low - rank matrix features .original features ( mfccs_matrix ) , corrupted with 0db and -5db white gaussian noise ( wgn snr=5db , 0db , -5db ) and 10% , 30% , 50% random large errors ( le 10% , 30% , 50% ) , and parallelism robust pca extracted features ( rpca ) are compared . in the comparisons , the parameters in the stopping criteria ( [ eq : stoppingcriteria ] )are set and , which are determined by the same method as in section [ sec : onlinelearning ] .the regularization constant is set which is a classical normalization factor according to .the classification accuracy of the one second audio segments is used to evaluate the performance of the methods .[ fig : performanccompar ] shows the performances of the methods with different matrix features under different noise conditions as the functions of the training time used in algorithm [ algo : apg ] .it can be seen that the original mfccs matrix feature is not robust to noises , especially random large errors .if 10% of the elements of the mfccs matrix feature are corrupted with random large errors , then generally there would be a decrease of 25% in audio segments classification accuracy , while for robust pca extracted low - rank features , the decrease are 5% in average . for wgn, the robust pca features also perform better than original features , although not so sharp as in the situation of large errors .the experiments show that the low - rank components are more robust to noises and errors than the original features . 
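the "random large errors" condition can be reproduced in miniature as follows; the corruption fraction and magnitude are illustrative, and the recovered low-rank component would come from a robust pca routine such as the inexact alm sketch given earlier.

```python
import numpy as np

def add_large_errors(X, fraction=0.1, scale=10.0, seed=0):
    """Corrupt a given fraction of entries with large random errors,
    mimicking the 'random large errors' condition described above."""
    rng = np.random.default_rng(seed)
    Xc = X.copy()
    mask = rng.random(X.shape) < fraction
    Xc[mask] += scale * rng.standard_normal(mask.sum())
    return Xc

# toy check: a rank-2 'segment' corrupted in 10% of its entries
rng = np.random.default_rng(1)
clean = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 13))
noisy = add_large_errors(clean, fraction=0.1)
print("relative corruption:", np.linalg.norm(noisy - clean) / np.linalg.norm(clean))
# passing `noisy` through a robust PCA routine (e.g. the inexact ALM sketch
# above) should return a low-rank component much closer to `clean`.
```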
[cols="^,^,^,^,^,^,^,^",options="header " , ] we also compare our method with the state - of - the - art svm classifier with long vector feature ( 650 dimension ) obtained by vectorizing the matrix .the results are summarized in table [ tab : appcomparewithsvm ] and table [ tab : laucomparewithsvm ] for applause / non - applause and laugh / non - laugh classification respectively .the results show that the svm become useless under 5db wight noise and 10% large corruptions , while our methods still works .but for the low - rank component , the svm performs better on some situations for which is due to the robustness of the features .in this work , we present a novel framework based on trace norm minimization for audio segment classification . the novel method unified feature extraction and pattern classification into the same framework . in this framework , robust pca extracted low - rank component of original signal is more robust to corrupted noise and errors , especially to random large errors .we also introduced online learning algorithms for matrices classification tasks .we obtain the closed - form updating rules of the weight matrix and the bias .we derive the explicit form of the lipschitz constant , which saves the computation burden in searching step size .experiments show that even the percent of the original feature elements corrupted with random large errors is up to 50% , the performance of the robust pca extracted features almost have no decrease . in future work , we plan to test this robust feature in other audio or speech processing related applications and extend robust pca , even trace norm minimization related methods from matrices to the more general multi - way arrays ( tensors ) . some work related to learning methods are also worth considering , such that the alternating between minimization with respect to weight matrix and bias may results in fluctuation of target value ( even in batch mode ) , thus optimization algorithm that minimization jointly on weight matrix and bias are required ; for multi - classification problems with more classes , some hierarchy methods may be introduced to improve the classification accuracy .k. a. pradeep , c. m. namunu , and s. k. mohan , `` audio based event detection for multimedia surveillance '' , in _ proceedings of ieee international conference on acoustics , speech and signal processing _ , 2006 .k. umapathy , s. krishnan , and r.k .rao , `` audio signal feature extraction and classification using local discriminant bases '' , in _ ieee transactions on audio , speech , and language processing _ , vol .1 , pp . 1236 - 1246 , 2007 .x. zhuang , x. zhou , t.s .huang , and m. hasegawa - johnson , `` feature analysis and selection for acoustic event detection '' , in _ proceedings of ieee international conference on acoustics , speech and signal processing _17 - 20 , 2008 .g. guo and s.z .li , `` content - based audio classification and retrieval by support vector machines '' , ieee transactions on neural networks vol .1 , pp . 209 - 215 , 2003 .m. fazel , h. hindi , and s. p. boyd , `` a rank minimization heuristic with application to minimum order system approximation '' , in _ proceedings of the american control conference _ , pp .4734 - 4739 , 2001 .j. wright , a. ganesh , s. rao , y. peng , and y. ma , `` robust principal component analysis : exact recovery of corrupted low - rank matrices via convex optimization '' , in _ proceedings of advances in neural information processing systems _ , 2009 .i. t. 
jolliffe , _ principal component analysis _ , springer series in statistics , berlin : springer , 1986 .e. j. candes and b. recht , `` exact matrix completion via convex optimization '' , technical report , ucla computational and applied math , 2008 .
in this paper , a novel framework based on trace norm minimization for audio segment classification is proposed . in this framework , both the feature extraction and classification are obtained by solving corresponding convex optimization problems with trace norm regularization . for feature extraction , robust principal component analysis ( robust pca ) via minimization of a combination of the nuclear norm and the -norm is used to extract low - rank features which are robust to white noise and gross corruption of audio segments . these low - rank features are fed to a linear classifier where the weight and bias are learned by solving similar trace norm constrained problems . for this classifier , most methods find the weight and bias in batch - mode learning , which makes them inefficient for large - scale problems . in this paper , we propose an online framework using the accelerated proximal gradient method . this framework has a main advantage in memory cost . in addition , as a result of the regularization formulation of matrix classification , the lipschitz constant is given explicitly , and hence the step size estimation of the general proximal gradient method is omitted in our approach . experiments on real data sets for laugh / non - laugh and applause / non - applause classification indicate that this novel framework is effective and noise robust .
serotonergic neurons in the drn , which extensively innervate most brain regions , have a large influence on many aspects of behavior , including sleep - wake cycles , mood and impulsivity ( liu et al , 2004 ; miyazaki et al . ,the firing patterns of these cells have been much studied and many properties of the ionic currents underlying action potentials have been investigated ( aghajanian , 1985 ; segal , 1985 ; burlhis and aghajanian , 1987 ; penington et al . , 1991 , 1992 ; penington and fox , 1995 ) . in order to quantitatively analyze action potential generation in drn 5-ht neurons , it is desirable to have accurate knowledge of the activation and inactivation properties of the various ion currents for some of which there is relatively little or no data available . herewe report results for the fast transient potassium current which plays an important role in determining the cell s firing rate .mathematical modeling of electrophysiological dynamics has been pursued for many different nerve and muscle cell types .some well known neuronal examples are thalamic relay cells ( huguenard and mccormick , 1992 ; destexhe et al . ,1998 ; rhodes and llinas , 2005 ) , dopaminergic cells ( komendantov et al . , 2004 ; putzier et al . , 2009 ; kuznetsova et al . ,2010 ) , hippocampal pyramidal cells ( traub et al . , 1991 ; migliore et al . , 1995 ;poirazi et al . , 2003 ; xu and clancy , 2008 ) and neocortical pyramidal cells ( destexhe et al ., 2001 ; traub et al , 2003 ; yu et al . , 2008 ) .cardiac myocytes have also been the subject of numerous computational modeling studies with similar structure and equivalent complexity to that of neurons ( faber et al ., 2007 ; williams et al . , 2010 ) . the methods employed by hodgkin and huxley ( 1952 ) to describe mathematically the time ( and space ) course of nerve membrane potential have , for the most part , endured to the present dayan integral component of their model consists of differential equations for activation and ( if present ) inactivation variables , generically represented in the non - spatial models as and , respectively , where is time and is membrane potential .these equations are = , .2 in = , where and are steady state values and and are time constants which often depend on . for voltage - gated ion channels the currentis often assumed to be i= ( v - v_rev ) m^ph where is the maximal conductance , is the reversal potential for the ion species under consideration and is ( usually ) a small non - negative integer between 1 and 4 .the inactivation is invariably to the power one , as in ( 2 ) .although these equations have solutions which can only yield approximations to actual currents , it is nevertheless desirable to have accurate forms for the steady state activation and inactivation functions and , the time constants and , and the remaining parameters and .our concern in this article is to describe and illustrate a novel and straighforward yet accurate method of estimating these quantities from voltage - clamp data .in order to isolate , currents for activation and inactivation protocols were obtained in 20 mm tea , as described in section 4.1 , for several identified serotonergic neurons of the rat dorsal raphe nucleus . 
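to make the notation above concrete, the sketch below evaluates the closed-form solutions of the gating equations after a step to a clamp potential v and the resulting transient current i(t) = gbar*(v - v_rev)*m^p*h. the boltzmann midpoints, slopes, time constants and p = 4 are placeholder values chosen only to produce an i_a-like transient; they are not the fitted parameters reported later for cell dr5.

```python
import numpy as np

def boltzmann(V, Vhalf, k):
    """Generic steady-state (Boltzmann) function used for m_inf and h_inf."""
    return 1.0 / (1.0 + np.exp((Vhalf - V) / k))

def gate_step(V, Vhalf_m, km, Vhalf_h, kh, tau_m, tau_h,
              gbar=1.0, Vrev=-90.0, p=4, m0=0.0, h0=1.0, T=200.0, dt=0.01):
    """Closed-form solution of dm/dt=(m_inf-m)/tau_m and dh/dt=(h_inf-h)/tau_h
    after a clamp step to V, and the transient current
    I(t) = gbar*(V - Vrev)*m**p*h. All parameter values used below are
    illustrative, not fitted values from the recorded cell."""
    m_inf = boltzmann(V, Vhalf_m, km)
    h_inf = boltzmann(V, Vhalf_h, -kh)   # decreasing sigmoid for inactivation
    t = np.arange(0.0, T, dt)
    m = m_inf + (m0 - m_inf) * np.exp(-t / tau_m)
    h = h_inf + (h0 - h_inf) * np.exp(-t / tau_h)
    return t, gbar * (V - Vrev) * m ** p * h

t, I = gate_step(V=-20.0, Vhalf_m=-45.0, km=10.0, Vhalf_h=-80.0, kh=6.0,
                 tau_m=2.0, tau_h=25.0)
print("peak current at t =", t[np.argmax(I)], "ms")
```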
in many cellsthe outward currents were clearly a mixture of components with different dynamical properties .in some cells , however , the currents decayed smoothly from an early peak to near zero and were apparently a manifestation of a single channel type , presumed to be uncontaminated .thus neither cocl nor cdcl was employed .one cell in particular , dr5 , with an apparently pure set of current traces was singled out for analysis .its activation current traces are shown in figure 1 .results for other cells with composite outward currents will be analyzed in future articles .the current traces of figure 1 were obtained in digitized form and the time constants and and the value of were estimated by the best fitting of the current traces to the analytical formula ( 6 ) .only two of these three parameters can be independently chosen by virtue of the constraint given by formula ( 7 ) for the time of occurrence of the maximum current , which rearranges to _h = ( e^ -1 ) the current was required to pass through the maximum point at and one other point on the trace , called .the value of was found to give excellent fits to the current traces .an example of the resulting curve with best fitting time constants and is shown in figure 2 along with the experimental current for a step from -120 mv to -20 mv .table 1 summarizes the data and estimates of time constants for cell dr5 , all of which quantities are required to estimate the three parameters ( conductance ) , and ( boltzmann parameters for activation ) . in this table is the maximal experimental current over time .activation experiments for in cell dr5 .maximal currents and estimated time constants in [ cols="^,^,^,^ " , ] study ; & sacchi ( 1988 ) ; & mccormick ( 1992 ) ; ( 2000 ) . in order to construct a computational model of action potential generation in drn 5-ht neurons , such that the role of the various ionic components can be properly understood, it is desirable to have accurate knowledge of the activation and inactivation properties of the various ion currents , including that addressed here , .the need for considerable accuracy is made the more important because several components , including the low threshold t - type calcium and operate in overlapping ranges of potential . in this paperwe have illustrated that the often - used procedure of finding activation functions by taking ratios of current to maximal current or conductance to maximal conductance may produce inaccurate results due to the fact that the quantity in ( 8) depends on the clamp voltage and does not cancel .not taking this into account can lead to substantial errors in the estimates of the parameters and .inactivation functions , however , do not have such a complication and can be estimated in the usual way . l - type calcium currents , particularly through channels , have been found to sometimes play a role in pacemaker activity in neurons and cardiac cells ( koschak et al . , 2003 ; putzier et al ., 2009 ; marcantoni et al . , 2010 ) .however , in drn 5-ht cells , there is only a small ( about 4% of total ) contribution from l - type calcium currents ( penington et al . 
, 1991 ) so it is yet to be determined what role these play in the pacemaking activity .these latter results were , however , obtained in dissociated cells .burlhis and aghajanian ( 1987 ) hypothesized that the low threshold calcium t - type current played a key role in pacemaking and segal ( 1985 ) proposed that was also important .however , considering the fact that the resting potential for these cells is about -60 mv and that threshold for spiking is about -50 mv , and judging by the curves for and in figure 4 , it is likely that is only important during the spike itself and not between the commencement of the afterhyperpolarization and the next spike , probably implying a lesser role for in pacemaking as it tends to be switched off during the greater part of the isi .we will explore by means of a computational model the ionic basis of the mechanisms of pacemaking in drn 5-ht cells in future articles .finally we note that the results obtained here for and the time constants bear a resemblance to those given in gutman et al .( 2005 ) for the transient a - type channels ( mv , mv , ms , and two other values ) .this contrasts with the finding by bekkers ( 2000 ) who suggested that in rat layer 5 cortical pyramidal cells , was carried by channels , which accounts for their smaller time constants of inactivation ( gutman et al . , 2005 ) .data of the kind analyzed in this paper were obtained from 2 month - old male sprague - dawley rats that were anesthetized with halothane and then decapitated with a small animal guillotine in accordance with our local animal care and use committee regulations .three coronal slices ( 500 m ) through the brain stem at the level of the drn were prepared using a `` vibroslice '' in a manner that has previously been described ( yao et al . , 2010 ) .pieces of tissue containing the drn were then incubated in a pipes buffer solution containing 0.2 mg / ml trypsin ( sigma type xi ) under pure oxygen for 120 min .the tissue was then triturated in dulbecco s modified eagle s medium to yield individual neuronal cell bodies with vestigial processes .recording was carried out at room temperature , 20-25 .cells recorded from had diameters of about 20 m .steinbusch et al .( 1981 ) showed using immunohistochemistry that most of the cells in a thin raphe slice with a soma diameter greater than 20 m contain 5-ht while the smaller cells were largely gabaergic interneurons . using a method similar to that of yamamoto et al .( 1981 ) , we also found that the proportion of isolated cells with diameters larger than 20 m that stain for 5-ht , was greater than 85% , and a similar percentage responded to serotonin .the cells were voltage - clamped using a switching voltage clamp amplifier ( axoclamp 2a ) and a single patch pipette in the whole - cell configuration ( hamill et al . , 1981 ) .pipettes pulled from thick - walled borosilicate glass ( resistance from 5 - 7 m ) allowed a switching frequency of 8 - 13 khz with a 30% duty cycle .the seal resistance measured by the voltage response to a 50 pa step of current was often greater than 5 g ) .the voltage - clamp data were filtered at 1 khz and digitized for storage at 16 bits , or experiments were run online with a pc and a ced 1401 interface .the external saline was designed to isolate potassium currents and contained : outside the cell ttx 0.2 m and in mm nacl 147 , kcl 2.5 , cacl 2 , mgcl 2 , glucose 10 , hepes 20 , ph 7.3 . 
inside : k gluconate 84 , mgatp 2 , kcl 38 , egta - koh 11 , hepes 10 , cacl 1 , ph 7.3 .the total [ k ] inside was 155 mm ( since 33 mm is added to make the salt of egta ) .the osmolarity was adjusted with sucrose so that the pipette solution was 10 mosm hypoosmotic to the bath solution .drugs were either dissolved in the extracellular solution and added to the perfusate or applied by diffusion from a patch pipette ( tip 15 m ) lowered close to the cell ( 50 m and then removed from the bath ) . as a control ,the same procedure was carried out without addition of the drug to the application pipette ; this did not affect . in some cells , to investigate any influence of currents on either 2 mm cocl or 20 m cdcl added to the bath by a pipette placed near the cell .this sometimes changed the magnitude of but had little effect on its time course .suppose that in a voltage clamp experiment the voltage is , after equilibration at a voltage , suddenly clamped at the new voltage .then , assuming a current of the form of equation ( 2 ) , according to the standard hodgkin - huxley ( 1952 ) theory , the current at time after the switch to is i(t ; v_0 , v_1 ) = ( v_1-v_rev ) [ m_1 - ( m_1-m_0 ) e^-t/_m ] ^p [ h_1 - ( h_1-h_0 ) e^-t/_h ] , where we have employed the abbreviations for the activation , for the inactivation and for the time constants . in activation eperimentsthere is a step up from a relatively hyperpolarized state to several more depolarized states so that one usually assumes that and .this gives the simplified expression for the current i(t ; v_1 ) = ( v_1-v_rev)[m_(v_1 ) ( 1- e^-t/_m)]^p e^-t/_h .finding the maximum by differentiation yields the time at which the maximum occurs as t_max(v_1 ) = _ m ( 1 + ) and its value as i_max(v_1 ) = ( v_1-v_rev)m^p_(v_1)f_p ( ) where f_p ( ) = and where , which depends on , is defined as ( v_1)=. since is here considered a variable , the time course of the current changes as varies . by rearrangementthe steady state activation is given by m_(v_1 ) = ( ) ^ if it is known that for some ( usually well - depolarized ) state one has ( but note that this can only ever be approximately true as can only approach unity asymptotically ) then the maximal conductance can be estimated from voltage clamp experiments from = . 
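To make the peak-based estimation concrete, here is a minimal sketch that computes the time and size of the current maximum implied by equations (6)-(8) and then inverts the peak current to recover the steady-state activation as in (11)-(12). The factor F_p is evaluated numerically at t_max rather than from its closed form, and all numerical values are illustrative placeholders, not the DR5 estimates.

```python
import numpy as np

def peak_of_activation_trace(tau_m, tau_h, p):
    """Time of the current maximum and the dimensionless factor F_p.

    For i(t) proportional to [m_inf*(1 - exp(-t/tau_m))]**p * exp(-t/tau_h),
    the maximum occurs at t_max = tau_m * log(1 + p*tau_h/tau_m) (eq. (7));
    F_p is the value of the bracketed time course at t_max.
    """
    t_max = tau_m * np.log(1.0 + p * tau_h / tau_m)
    F_p = (1.0 - np.exp(-t_max / tau_m))**p * np.exp(-t_max / tau_h)
    return t_max, F_p

def m_inf_from_peak(I_max, V1, V_rev, g_max, tau_m, tau_h, p):
    """Invert the peak current to the steady-state activation (eq. (11))."""
    _, F_p = peak_of_activation_trace(tau_m, tau_h, p)
    return (I_max / (g_max * (V1 - V_rev) * F_p))**(1.0 / p)

# Illustrative numbers only:
t_max, F_p = peak_of_activation_trace(tau_m=2.0, tau_h=15.0, p=4)
m_inf = m_inf_from_peak(I_max=800.0, V1=-20.0, V_rev=-95.0,
                        g_max=20.0, tau_m=2.0, tau_h=15.0, p=4)
print(t_max, F_p, m_inf)
```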
once is known then ( 11 ) can be used to find for various because all remaining quantities , , and are known .in summary , the steps to find the activation function are : \(a ) from the time course of the current find the maximum current and its time of occurrence for a given \(b ) estimate the time constants and and the value of .\(c ) hence find and also .\(d ) using a suitably highly depolarized state estimate according to ( 12 ) .\(e ) hence estimate at various below .\(f ) find the best fitting boltzmann curve passing through the values obtained .however , the formulas ( 6)-(8 ) were derived on the assumption that and this will usually be valid for , where is below the half - activation potential .this means that the half - activation function can be obtained accurately for , and thus by symmetry for .another approach is as follows .if is not known or is uncertain , then least squares estimates can be made as follows .it is assumed that m_(v ) = is the steady state activation function .then one may estimate , and by minimizing the sum of squares of the differences between experimentally obtained maxima of the current at various and the values given by formula ( 8) here is given a brief description of the 4 methods employed to estimate the parameters of and given in table 2 . + * method a * + having estimated the values of the time constants and and the power as described in the text , experimental maximum currents at various are compared with those obtained from formula ( 8) i_max(v_i ) = ( v_i - v_rev)m^p_(v_i)f_p ( ) with evaluated at .thus the parameters , and are estimated by least squares .+ * method b * + it is assumed that , with mv in the present example , so that is given directly by equation ( 12 ) . then can be obtained for various uising equation ( 11 ) . a best fitting boltzmann is then obtained for the various vaules of .+ * method c * + the experimental values of the maximum currents ( over time ) at each test voltage are divided by to give corresponding conductances .the conductances are assumed ( from equation ( 8) without the correction factor ) to be given by g_i(v_i ) = m^p_(v_i ) . with m_(v ) = the parameters , and ( and possibly if not already known ) are estimated by least squares fitting of the experimental values to the predicted ones , without any assumption on ( normalization ) . + * method d * + the same procedure is carried out as in method c , but the values are divided by the maximum value , assumed to occur at ( -20 mv in the above ) , to give the normalized values = .then since there are only two parameters and to estimate with a best fitting boltzmann curve .the maximal conductance can be obtained as = .if there is no inactivation in the course of the clamp experiment then the maxiumum current will be the asymptotic value i_max(v_1 ) = ( v_1-v_rev)m^p_(v_1 ) .so , with as above , is again given by ( 18 ) , and it is straightforward to obtain the required activation function from the remaining measurements . heresteps are made from a number of relatively hyperpolarized states to a fixed relatively depolarized state .it is usually assumed that and , so that the current is approximately i(t ; v_0 , v_1 ) = ( v_1-v_rev)m^p_(v_1)h_(v_0 ) ( 1-e^-t/_m)^p e^-t/_h . herethe time course of the current ( but not its magnitude ) is the same for all .the time of occurrence of the maximum of is again given by ( 7 ) but the value of the maximum is now i_max(v_0,v_1 ) = ( v_1-v_rev)m^p_(v_1)h_(v_0)f_p ( ) . 
assuming that has already been estimated and that is such that , then the inactivation function can be found at from h_(v_0 ) = i_max(v_0,v^ * ) ( v^*-v_rev ) f_p((v^ * ) ) it can be seen that normalizing the currents by dividing by the maximum of the gives a reasonable estimate of the steady state inactivation function .support from the max planck institute is appreciated ( hct ) .we appreciate useful correspondence with professor john huguenard ( stanford university ) and professor john bekkers ( australian national university ) .aghajanian , g.k . , 1985 .modulation of a transient outward current in serotonergic neurones by -adrenoceptors .nature 315 , 501 - 503 .chen , y. , yao , y. , penington , n.j .effect of pertussis toxin and _n_-ethylmaleimide on voltage - dependent and -independent calcium current modulation in serotonergic neurons .neuroscience 111 , 207 - 214 .faber , g.m . , silva , j. , livshitz , l. , rudy , y. , 2007 .kinetic properties of the cardiac l - type ca channel and its role in myocyte electrophysiology : a theoretical investigation .biophys j 92 , 1522 - 1543 .gutman , g.a ., chandy , k.g . ,grissmer , s. , lazdunski , m. , mckinnon , d. , pardo , l.a . ,robertson , g.a . ,rudy , b. , sanguinetti , m.c . , sthmer , w. , wang , x. , 2005 .international union of pharmacology .liii . nomenclature and molecular relationships of voltage - gated potassium channels .pharmacol rev 57 , 473508 .hamill , o.p ., marty , a. , neher , e. , sakmann , b. , sigworth , f .. j . , 1981 . improved patch - clamp techniques for high - resolution current recording from cells and cell - free membrane patches .pflugers arch 391 , 85 - 100 .komendantov , a.o . ,komendantova , o.g . ,johnson , s.w ., canavier , c.c . , 2004 .a modeling study suggests complementary roles for gaba and nmda receptors and the sk channel in regulating the firing pattern in midbrain dopamine neurons .j neurophysiol 91 , 346 - 357 .kuznetsova , a.y . ,huertas , m.a . ,kuznetsov , a.s . ,paladini , c.a . ,canavier , c.c .regulation of firing frequency in a computational model of a midbrain dopaminergic neuron .j comp neurosci 28 , 389 - 403 .liu , r - j . , van den pol , a.n . , aghajanian , g.k . , 2004. hypocretins ( orexins ) regulate serotonin neurons in the dorsal raphe nucleus by excitatory direct and inhibitory indirect actions . j neurosci 22 , 9453 - 9464 .marcantoni , a. , vandael , d.h.f . ,mahapatra , s. , carabelli , v. , sinnegger - brauns , m.j . ,striessnig , j. , carbone , e. , 2010 .loss of channels reveals the critical role of l - type and bk channel coupling in pacemaking mouse adrenal chromaffin cells .j neurosci 30 , 491 - 504 .penington , n.j ., kelly , j.s . ,fox , a.p . , 1992 .action potential waveforms reveal simultaneous changes in i and i produced by 5-ht in rat dorsal raphe neurons .proc r soc lond .b 248 , 171 - 179 .putzier , i. , kullmann , p.h.m . , horn , j.p . ,levitan , e.s .ca.3 channel voltage dependence , not ca selectivity , drives pacemaker activity and amplifies bursts in nigral dopamine neurons .j neurosci 29 , 15414 - 15419 .steinbusch , h.w . ,nieuwenhuys , r. , verhofstad , a.a ., van der kooy , d. , 1981 .the nucleus raphe dorsalis of the rat and its projection upon the caudatoputamen .a combined cytoarchitectonic , immunohistochemical and retrograde transport study . j physiol ( paris ) 77 , 157 - 174. traub , r.d ., buhl , e.h . ,gloveli , t. , whittington , m.a . 
, 2003. Fast rhythmic bursting can be induced in layer 2/3 cortical neurons by enhancing persistent Na+ conductance or by blocking BK channels. J Neurophysiol 89, 909-921.
Yao, Y., Bergold, P., Penington, N.J., 2010. Acute Ca2+-dependent desensitization of 5-HT1A receptors is mediated by activation of protein kinase A (PKA) in rat serotonergic neurons. Neuroscience 169, 87-97.
Voltage-clamp data were analyzed in order to characterize the properties of the fast transient potassium current $I_A$ for a serotonergic neuron of the rat dorsal raphe nucleus (DRN). We obtain the maximal conductance, the time constants of activation and inactivation, and the steady-state activation and inactivation functions $m_\infty$ and $h_\infty$ as Boltzmann curves, defined by half-activation potentials and slope factors. We employ a novel method to accurately obtain the activation function and compare the results with those obtained by other methods. The form of $I_A$ is estimated, together with its maximal conductance (in nS). For activation we estimate the half-activation potential and slope factor (both in mV), whereas for inactivation the corresponding quantities are -91.5 mV and -9.3 mV. We discuss the results in terms of the corresponding properties of $I_A$ in other cell types and their possible relevance to pacemaking activity in 5-HT cells of the DRN. _Abbreviated title:_ Estimation of the activation function for $I_A$ in the DRN. Keywords: serotonin; dorsal raphe nucleus; potassium transient current; activation.
compressed sensing ( cs ) is a very recent field of fast growing interest and whose impact on concrete applications in coding and image acquisition is already remarkable . upto date informations on this new topic may be obtained from the website _ http://nuit - blanche.blogspot.com/_. the foundational paper is where the main problem considered was the one of reconstructing a signal from a few frequency measurements . since then , important contributions to the field have appeared ; see for a survey and references therein . in mathematical terms , the problem can be stated as follows .let be a -sparse vector in , i.e. a vector with no more than nonzero components .the observations are simply given by where and small compared to with , and the goal is to recover exactly from these observations .the main challenges concern the construction of observation matrices which allow to recover with as large as possible for given values of and .the problem of compressed sensing can be solved unambiguously if there is no sparser solution to the linear system ( [ linsys ] ) than .then , recovery is obtained by simply finding the sparsest solution to ( [ linsys ] ) .if for any in we denote by the -norm of , i.e. the cardinal of the set of indices of nonzero components of , the compressed sensing problem is equivalent to we denote by the solution of problem ( [ l0 ] ) and is called a decoder .is not the unique sparsest solution of ( [ l0 ] ) using this approach for recovery is of course possibly not pertinent .moreover , in such a case , this problem has several solutions with equal -``norm '' and one may rather define as an arbitrary element of the solution set . ] thus , the cs problem may be viewed as a combinatorial optimization problem .moreover , the following lemma is well known .[ algcond ] ( see for instance ) if is any matrix and , then the following properties are equivalent : \i .the decoder satisfies , for all , \ii . for any set of indices with , the matrix has rank where stands for the submatrix of composed of the columns indexed by only . the main problem in using the decoder for given observations is that the optimization problem ( [ l0 ] ) is np - hard and can not reasonably be expected to be solved in polynomial time . in order to overcome this difficulty, the original decoder has to be replaced by simpler ones in terms of computational complexity . assuming that is given , two methods have been studied for solving the compressed sensing problem .the first one is the orthognal matching pursuit ( omp ) and the second one is the -relaxation .both methods are not comparable since omp is a greedy algorithm with sublinear complexity and the -relaxation offers usually better performances in terms of recovery at the price of a computational complexity equivalent to the one of linear programming . more precisely , the relaxation is given by in the following , we will denote by the solution of the -relaxation ( [ l1 ] ) . from the computational viewpoint , this relaxation is of great interest since it can be solved in polynomial time . indeed , ( [ l1 ] ) is equivalent to the linear program the main subsequent problem induced by this choice of relaxation is to obtain easy to verify sufficient conditions on for the relaxation to be exact , i.e. to produce the sparsest solution to the underdetermined system ( [ linsys ] ) .a nice condition was given by candes , romberg and tao and is called the restricted isometry property . 
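As a brief aside on the linear-programming formulation of ([l1]) given above, the following sketch solves the $\ell_1$ decoder with an off-the-shelf LP solver by splitting $x$ into its positive and negative parts; the variable names and the random test instance are ours and are not taken from the papers cited.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A @ x = y as a linear program.

    Writing x = u - v with u, v >= 0, the l1 norm equals 1'u + 1'v and the
    equality constraint becomes [A, -A] @ [u; v] = y.
    """
    n, N = A.shape
    c = np.ones(2 * N)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    uv = res.x
    return uv[:N] - uv[N:]

# Small illustration: recover a 2-sparse vector from 10 random measurements.
rng = np.random.default_rng(0)
N, n, S = 40, 10, 2
x_true = np.zeros(N)
x_true[rng.choice(N, S, replace=False)] = rng.normal(size=S)
A = rng.normal(size=(n, N)) / np.sqrt(n)
x_hat = basis_pursuit(A, A @ x_true)
print("max abs error:", np.max(np.abs(x_hat - x_true)))
```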
up to now, this condition could only be proved to hold with great probability in the case where is a subgaussian random matrix .several algorithmic approaches have also been recently proposed in order to garantee the exactness of the relaxation such as in and .the goal of our paper is different .its aim is to present a new method for solving the cs problem generalizing the original -relaxation of ( ) and with much better performance in pratice as measured by success rate of recovery versus original sparsity .recall that the problem of exact reconstruction of sparse signals can be solved using and lemma [ algcond ] .let us start by writing down problem ( [ l0 ] ) , to which is the solution map , as the following equivalent problem subject to where denotes the vector of all ones . here since the sum of the sis maximized , the variable plays the role of an indicator function for the event that .this problem is clearly nonconvex due to the quadratic equality constraints .a simple way to construct a sdp relaxation is to homogenize the quadratic forms in the formulation at hand using a binary variable . indeed , by symmetry, it will suffice to impose since , if the relaxation turns out to be exact and a solution is recovered with , then , as the reader will be able to check at the end of this section , will also solve the relaxed problem .for instance , problem ( [ quad ] ) can be expressed as subject to for .if we choose to keep explicit all the constraints in problem ( [ quadhom ] ) , the lagrange function can be easily be written as where is the concatenation of , , into one vector , ( resp . and ) is the vector of lagrange multipliers associated to the constraints , ( resp . , , and , ) and where all the matrices , , , , and belong to , the set of symmetric real matrices and are defined by a_j= \left [ \begin{array}{ccc } 0 & 0_{1,n } & \frac12 a_j^t \\ 0_{n,1 } & 0_{n , n } & 0_{n , n } \\\frac12 a_j & 0_{n , n } & 0_{n , n } \end{array } \right]\ ] ] for , where is the row of , , e_i= \left [ \begin{array}{ccc } 0 & -e_i^t & 0_{1,n } \\-e_i & 2d(e_i ) & 0_{n , n } \\ 0_{n,1 } & 0_{n , n } & 0_{n , n } \end{array } \right]\ ] ] and \ ] ] for where is the vector with all components equal to zero except the which is set to one , is the vector of all ones , is the diagonal matrix with diagonal vector and where denotes the matrix of all zeros .the dual function is given by and thus with and where is the lwner ordering ( iff is positive semi - definite ) .therefore , the dual problem is given by which is in fact equivalent to the following semi - definite program subject to we can also try and formulate the dual of this semi - definite program which is called the bidual of the initial problem . this bidual problemis easily seen after some computations to be given by subject to now , if is an optimal solution with , then \big ) \big(\pm\left [ \begin{array}{c } z_0^ * \\ z^ * \\ x^ * \end{array } \right]\big)^t\ ] ] and it can be easily verified that all the constraints in ( [ quadhom ] ) are satisfied .moreover , we may additionally impose that ., multiply by the whole vector ] otherwise .* proof*. problem ( [ rlxstp1 ] ) is clearly separable and the solution can be easily computed coordinatewise . 
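The lemma above shows that the $z$-subproblem is solved coordinatewise, so the alternating scheme amounts to successive thresholding. The sketch below is a generic rendering of such an alternating $\ell_1$ scheme under stated assumptions: a weighted $\ell_1$ decode in $x$ followed by a 0/1 update of $z$. The particular down-weighting rule for flagged coordinates and the threshold rule are our own simplifications, not the exact formulas derived from the bidual problem.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_decode(A, y, w):
    """min sum_i w_i |x_i|  subject to  A @ x = y, as a linear program."""
    n, N = A.shape
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * N), method="highs")
    return res.x[:N] - res.x[N:]

def alternating_l1(A, y, lam, eps=1e-3, n_iter=5):
    """Alternate a weighted l1 decode in x with a 0/1 update of z.

    z_i = 1 flags coordinate i as active; flagged coordinates get their l1
    weight reduced to eps (assumed relaxation rule).  With z = 0, the first
    pass is the plain l1 decoder, as noted in the text.
    """
    N = A.shape[1]
    z = np.zeros(N)
    for _ in range(n_iter):
        w = np.where(z > 0, eps, 1.0)
        x = weighted_l1_decode(A, y, w)
        z = (np.abs(x) > lam).astype(float)   # assumed thresholding step (Lemma [01])
    return x, z
```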
this section , using monte carlo experiments , we compare our alternating approach to two recent methods proposed in the litterature : the reweighted of cands , wakin and boyd and the iteratively reweighted least - squares as proposed in .the problem size was chosen to be the same as in chartrand and yin s paper : , . for each sparsity level a hundred different -sparse vectors were drawn as follows : the support was chosen uniformly on all support with cardinal and the nonzero components were drawn from the gaussian distribution .n , the observation matrix was obtained in two steps : first draw a matrix with i.i.d .gaussian entries and then normalize each column to 2 as in .the parameter , namely the lagrange multiplier for the complementarity constraint was tuned as follows : since on the one hand the natural breakdown point for equivalence , i.e. equivalence of using vs. minimization , lies around and on the other hand , the alternating is nothing but a successive thresholding algorithm due to lemma [ 01 ] , we decided to chose the smallest possible so that the largest components the first step of the alternating algorithm ( which is nothing but the plain decoder whatever the value of ) be over .notice that this value of is surely not the solution of the dual problem but our choice is at least motivated by reasonable deduction based on pratical observations whereas the tuning parameter in the other two algorithms is not known to enjoy such an intuitive and meaningful selection rule .we chose in these experiments .the numerical results for the irls and the reweighted were communicated to us by rick chartrand whom we greatly thank for his collaboration .comparison between the success rates the three methods is shown in figure 1 .our alternating method outperformed both the iteratively reweighted least squares and the reweighted methods for the given data size .as noted in , the irls and the reweighted enjoy nearly the same exact recovery success rates .* remark*. the reweighted and the reweighted ls both need a value of ( or even a sequence of such values as in ) which is hard to optimize ahead of time , whereas the value in the alternating is a lagrange multiplier , i.e. a dual variable . in the monte carlo experiments of the previous section, we decided to base our choice of on a simple an intuitive criterion suggested by the well known experimental behavior of the plain relaxation . on the other hand , it should be interesting to explore duality a bit further and perform experiments in the case where is approximately optimized ( using any derivative free procedure for instance ) based on our heuristic alternating approximation of the dual function . hiriart urruty , j .- b . and lemarchal , c. , convex analysis and minimization algorithms ii : advanced theory and bundle methods , springer- verlag , 1993 , 306 , grundlehren der mathematischen wissenschaften .
Compressed sensing is a new methodology for constructing sensors that allow sparse signals to be efficiently recovered using only a small number of observations. The recovery problem can often be stated as that of finding the solution of an underdetermined system of linear equations with the smallest possible support. The most studied relaxation of this hard combinatorial problem is the $\ell_1$-relaxation, which searches for solutions with smallest $\ell_1$-norm. In this short note, based on ideas from Lagrangian duality, we introduce an alternating $\ell_1$ relaxation for the recovery problem that enjoys higher recovery rates in practice than the plain $\ell_1$ relaxation and the recent reweighted $\ell_1$ method of Candès, Wakin and Boyd.
according to fisher [ ( ) , page 311 ] , `` the object of statistical methods is the reduction of data . ''the reduction of data is imperative in the case of discrete - valued networks that may have hundreds of thousands of nodes and billions of edge variables .the collection of such large networks is becoming more and more common , thanks to electronic devices such as cameras and computers .of special interest is the identification of influential subsets of nodes and high - density regions of the network with an eye to break down the large network into smaller , more manageable components .these smaller , more manageable components may be studied by more advanced statistical models , such as advanced exponential family models [ e.g. , ] .an example is given by signed networks , such as trust networks , which arise in world wide web applications .users of internet - based exchange networks are invited to classify other users as either ( untrustworthy ) or ( trustworthy ) .trust networks can be used to protect users and enhance collaboration among users [ ] .a second example is the spread of infectious disease through populations by way of contacts among individuals [ ] . in such applications , it may be of interest to identify potential super - spreaders that is , individuals who are in contact with many other individuals and who could therefore spread the disease to many others and dense regions of the network through which disease could spread rapidly .the current article advances the model - based clustering of large networks in at least four ways .first , we introduce a simple and flexible statistical framework for parameterizing models based on statistical exponential families [ e.g. , ] that advances existing model - based clustering techniques .model - based clustering of networks was pioneered by .the simple , unconstrained parameterizations employed by and others [ e.g. , ] make sense when networks are small , undirected and binary , and when there are no covariates .in general , though , such parameterizations may be unappealing from both a scientific point of view and a statistical point of view , as they may result in nonparsimonious models with hundreds or thousands of parameters .an important advantage of the statistical framework we introduce here is that it gives researchers a choice : they can choose interesting features of the data , specify a model capturing those features , and cluster nodes based on the specified model .the resulting models are therefore both parsimonious and scientifically interesting .second , we introduce approximate maximum likelihood estimates of parameters based on novel variational generalized em ( gem ) algorithms , which take advantage of minorization - maximization ( mm ) algorithms [ ] and have computational advantages . for unconstrained models ,tests suggest that the variational gem algorithms we propose can converge quicker and better avoid local maxima than alternative algorithms ; see sections [ seccomparison ] and [ secapp ] . in the presence of parameter constraints ,we facilitate computations by exploiting the properties of exponential families [ e.g. , ] . in addition, we sketch how the variational gem algorithm can be extended to obtain approximate bayesian estimates .third , we introduce bootstrap standard errors to quantify the uncertainty about the approximate maximum likelihood estimates of the parameters , whereas other work has ignored the uncertainty about the approximate maximum likelihood estimates . 
to facilitate these bootstrap procedures ,we introduce monte carlo simulation algorithms that generate sparse networks in much less time than conventional monte carlo simulation algorithms .in fact , without the more efficient monte carlo simulation algorithms , obtaining bootstrap standard errors would be infeasible . finally ,while model - based clustering has been limited to networks with fewer than 13,000 nodes and 85 million edge variables [ see the largest data set handled to date , ] , we demonstrate that we can handle much larger , nonbinary networks by considering an internet - based data set with more than 131,000 nodes and 17 billion edge variables , where `` edge variables '' comprise all observations , including node pairs between which no edge exists .many internet - based companies and websites , such as http://amazon.com , http://netflix.com and http://epinions.com , allow users to review products and services . because most users of the world wide web do not know each other and thus can not be sure whether to trust each other , readers of reviews may be interested in an indication of the trustworthiness of the reviewers themselves .a convenient and inexpensive approach is based on evaluations of reviewers by readers .the data set we analyze in section [ secapp ] comes from the website http://epinions.com , which collects such data by allowing any user to evaluate any other user as either untrustworthy , coded as , or trustworthy , coded as , where means that user did not evaluate user [ ] .the resulting network consists of ,827 users and ,378,226,102 observations . since each user can only review a relatively small number of other users , the network is sparse : the vast majority of the observations are zero , with only 840,798 negative and positive evaluations .our modeling goal , broadly speaking , is both to cluster the users based on the patterns of trusts and distrusts in this network and to understand the features of the various clusters by examining model parameters .the rest of the article is structured as follows : a scalable model - based clustering framework based on finite mixture models is introduced in section [ secmodel ] .approximate maximum likelihood and bayesian estimation are discussed in sections [ secmle ] and [ secbay ] , respectively , and an algorithm for monte carlo simulation of large networks is described in section [ secsim ] . section [ seccomparison ] compares the variational gem algorithm to the variational em algorithm of .section [ secapp ] applies our methods to the trust network discussed above .we consider nodes , indexed by integers , and edges between pairs of nodes and , where can take values in a finite set of elements . by convention , for all , where signifies `` no relationship . ''we call the set of all edges a discrete - valued network , which we denote by , and we let denote the set of possible values of . special cases of interest are ( a ) undirected binary networks , where is subject to the linear constraint for all ; ( b ) directed binary networks , where for all ; and ( c ) directed signed networks , where for all . 
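For concreteness, the dyad sample spaces corresponding to the three special cases (a)-(c) can be enumerated as below; the convention of representing a dyad as the ordered pair (y_ij, y_ji) is ours.

```python
from itertools import product

# (a) undirected binary: one shared value per pair of nodes
D_undirected_binary = [(y, y) for y in (0, 1)]

# (b) directed binary: ordered pair of 0/1 values
D_directed_binary = list(product((0, 1), repeat=2))

# (c) directed signed (e.g. trust networks): values in {-1, 0, +1}
D_directed_signed = list(product((-1, 0, 1), repeat=2))

print(len(D_undirected_binary), len(D_directed_binary), len(D_directed_signed))
# 2 4 9
```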
a general approach to modeling discrete - valued networks is based on exponential families of distributions [ ] : ,\qquad \mathbf{y}\in{\mathscr{y}},\ ] ] where is the vector of canonical parameters and is the vector of canonical statistics depending on a matrix of covariates , measured on the nodes or the pairs of nodes , and the network , and is given by ,\qquad { \bolds{\theta}}\in \mathbb{r}^p,\ ] ] and ensures that sums to .a number of exponential family models have been proposed [ e.g. , ] . in general , though , exponential family models are not scalable : the computing time to evaluate the likelihood function is , where in the case of undirected edges and in the case of directed edges , which necessitates time - consuming estimation algorithms [ e.g. , ] .we therefore restrict attention to scalable exponential family models , which are characterized by dyadic independence : where corresponds to in the case of undirected edges and in the case of directed edges .the subscripted and superscripted mean that the product in ( [ dyadindependence ] ) should be taken over all pairs with ; the same is true for sums as in ( [ lb ] ) .dyadic independence has at least three advantages : ( a ) it facilitates estimation , because the computing time to evaluate the likelihood function scales linearly with ; ( b ) it facilitates simulation , because dyads are independent ; and ( c ) by design it bypasses the so - called model degeneracy problem : if is large , some exponential family models without dyadic independence tend to be ill - defined and impractical for modeling networks [ ] .a disadvantage is that most exponential families with dyadic independence are either simplistic [ e.g. , models with identically distributed edges , ] or nonparsimonious [ e.g. , the model with parameters , ] .we therefore assume that the probability mass function has a -component mixture form as follows : \\[-8pt ] & = & \sum_{\mathbf{z}\in{\mathscr{z } } } \prod_{i < j}^n p_{{\bolds{\theta}}}(d_{ij } = d_{ij } { \mid}{\mathbf{x } } , { \mathbf{z}}= \mathbf{z } ) p_{{\bolds{\gamma}}}({\mathbf{z}}= \mathbf{z } ) , \nonumber\end{aligned}\ ] ] where denotes the membership indicators with distributions and denotes the support of . in some applications , it may be desired to model the membership indicators as functions of by using multinomial logit or probit models with as the outcome variables and as predictors [ e.g. , ] .we do not elaborate on such models here , but the variational gem algorithms discussed in sections [ secmle ] and [ secbay ] could be adapted to such models .mixture models represent a reasonable compromise between model parsimony and complexity . in particular , the assumption of conditional dyadic independence does _ not _ imply marginal dyadic independence , which means that the mixture model of ( [ mixturemodel ] ) captures some degree of dependence among the dyads .we give two specific examples of mixture models below .the model of for directed , binary - valued networks may be modified using a mixture model .the original models the sequence of in - degrees ( number of incoming edges of nodes ) and out - degrees ( number of outgoing edges of nodes ) as well as reciprociated edges , postulating that the dyads are independent and that the dyadic probabilities are of the form ,\ ] ] where and is a normalizing constant . 
following , the parameters may be interpreted as activity or productivity parameters , representing the tendencies of nodes to `` send '' edges to other nodes ; the parameters may be interpreted as attractiveness or popularity parameters , representing the tendencies of nodes to `` receive '' edges from other nodes ; and the parameter may be interpreted as a mutuality or reciprocity parameter , representing the tendency of nodes and to reciprocate edges .a drawback of this model is that it requires parameters .here , we show how to extend it to a mixture model that is applicable to both directed and undirected networks as well as discrete - valued networks , that is much more parsimonious , and that allows identification of influential nodes .observe that the dyadic probabilities of ( [ dyadicprob ] ) are of the form ,\ ] ] where is the reciprocity parameter and and are the sending and receiving propensities of nodes and , respectively .the corresponding statistics are the reciprocity indicator and the sending and receiving indicators and of nodes and , respectively .a mixture model modification of the model postulates that , conditional on , the dyadic probabilities are independent and of the form \\[-8pt ] & & \qquad \propto\exp\bigl[{\bolds{\theta}}_1^\top { \mathbf{g}}_1(d_{ij } ) + { \bolds{\theta}}_{2k}^\top { \mathbf{g}}_{2k}(d_{ij } ) + { \bolds{\theta}}_{2l}^\top { \mathbf{g}}_{2l}(d_{ij})\bigr],\nonumber\end{aligned}\ ] ] where the parameter vectors and depend on the components and to which the nodes and belong , respectively .the mixture model version of the model is therefore much more parsimonious provided and was proposed by in the case of undirected , binary - valued networks . here, the probabilities of ( [ p1extension1 ] ) and ( [ p1extension2 ] ) are applicable to both undirected and directed networks as well as discrete - valued networks , because the functions and may be customized to fit the situation and may even depend on covariates , though we have suppressed this possibility in the notation .finally , the mixture model version of the model admits model - based clustering of nodes based on indegrees or outdegrees or both . a small number of nodes with high indegree or outdegree or both is considered to be influential : if the corresponding nodes were to be removed , the network structure would be impacted .the mixture model of assumes that , conditional on , the dyads are independent and the conditional dyadic probabilities are of the form in other words , conditional on , the dyad probabilities are constant across dyads and do not depend on covariates .it is straightforward to add covariates by writing the conditional dyad probabilities in canonical form : ,\ ] ] where the canonical statistic vectors and may depend on the covariates . 
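A minimal sketch of how the conditional dyad probabilities of ([p1extension2]) can be tabulated for a directed signed network is given below. The particular statistics chosen here (a mutual-positive-edge indicator together with sending and receiving indicators for the +1 value) are illustrative assumptions rather than the exact statistics used later in the application.

```python
import numpy as np
from itertools import product

# Dyad sample space for a directed signed network: d = (y_ij, y_ji).
DYADS = list(product((-1, 0, 1), repeat=2))

def g1(d):
    """Reciprocity-type statistic (assumed form): both edges positive."""
    return np.array([float(d[0] == 1 and d[1] == 1)])

def g2(d):
    """Sending and receiving indicators for the +1 value (assumed form)."""
    return np.array([float(d[0] == 1), float(d[1] == 1)])

def dyad_probs(theta1, theta2_k, theta2_l):
    """pi_{d;kl}(theta) of ([p1extension2]), normalized over the dyad space."""
    logits = np.array([
        theta1 @ g1(d) + theta2_k @ g2(d) + theta2_l @ g2(d[::-1])
        for d in DYADS          # d[::-1] is node j's view of the dyad
    ])
    w = np.exp(logits - logits.max())
    return dict(zip(DYADS, w / w.sum()))

pi = dyad_probs(theta1=np.array([1.0]),
                theta2_k=np.array([0.5, -0.2]),
                theta2_l=np.array([-1.0, 0.3]))
print(sum(pi.values()))  # 1.0
```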
if the canonical parameter vectors are constrained by the linear constraints , where and are parameter vectors of the same dimension as , then the mixture model version of the model arises .in other words , the mixture model version of the model can be viewed as a constrained version of the model .while the constrained version can be used to cluster nodes based on degree , the unconstrained version can be used to identify , for instance , high - density regions of the network , corresponding to subsets of nodes with large numbers of within - subset edges .these regions may then be studied individually in more detail by using more advanced statistical models such as exponential family models without dyadic independence as proposed by , for example , , , , , or .other mixture models for networks have been proposed by , and . however , these models scale less well to large networks , so we confine attention here to examples 1 and 2 .a standard approach to maximum likelihood estimation of finite mixture models is based on the classical em algorithm , taking the complete data to be , where is unobserved [ ] .however , the e - step of an em algorithm requires the computation of the conditional expectation of the complete data log - likelihood function under the distribution of , which is intractable here even in the simplest cases [ ] . as an alternative, we consider so - called variational em algorithms , which can be considered as generalizations of em algorithms .the basic idea of variational em algorithms is to construct a tractable lower bound on the intractable log - likelihood function and maximize the lower bound , yielding approximate maximum likelihood estimates . shown that approximate maximum likelihood estimators along these lines are at least in the absence of parameter constraints consistent estimators .we assume that all modeling of can be conditional on covariates and define , for ease of presentation , we drop the notational dependence of on and make the homogeneity assumption which is satisfied by the models in examples 1 and 2 .exponential parameterizations of , as in ( [ dyadicprob ] ) and ( [ nowsni2 ] ) , may or may not be convenient .an attractive property of the variational em algorithm proposed here is that it can handle all possible parameterizations of . in some cases ( e.g. , example 1 ) , exponential parameterizations are more advantageous than others , while in other cases ( e.g. , example 2 ) , the reverse holds .let be an auxiliary distribution with support . using jensen s inequality ,the log - likelihood function can be bounded below as follows : a(\mathbf{z } ) \\ & = & e_a\bigl[\log p_{{\bolds{\gamma } } , { \bolds{\theta}}}({\mathbf{y}}= \mathbf{y } , { \mathbf{z}}= \mathbf{z } ) \bigr ] - e_a\bigl[\log a({\mathbf{z}})\bigr ] .\nonumber\end{aligned}\ ] ] some choices of give rise to better lower bounds than others . 
to see which choice gives rise to the best lower bound ,observe that the difference between the log - likelihood function and the lower bound is equal to the kullback leibler divergence from to : a(\mathbf{z } ) \nonumber \\ & & \qquad = \sum_{\mathbf{z}\in{\mathscr{z } } } \bigl[\log p_{{\bolds{\gamma}},{\bolds{\theta}}}({\mathbf{y}}= \mathbf{y } ) \bigr ] a(\mathbf{z } ) - \sum_{\mathbf{z}\in{\mathscr{z } } } \biggl[\log \frac{p_{{\bolds{\gamma } } , { \bolds{\theta}}}({\mathbf{y}}= \mathbf{y } , { \mathbf{z}}= \mathbf { z})}{a(\mathbf{z } ) } \biggr ] a(\mathbf{z } ) \\ & & \qquad = \sum_{\mathbf{z}\in{\mathscr{z } } } \biggl[\log\frac{a(\mathbf { z})}{p_{{\bolds{\gamma } } , { \bolds{\theta}}}({\mathbf{z}}= \mathbf{z}\mid{\mathbf{y}}= \mathbf{y } ) } \biggr ] a(\mathbf{z } ) .\nonumber\end{aligned}\ ] ] if the choice of were unconstrained in the sense that we could choose from the set of all distributions with support , then the best lower bound is obtained by the choice , which reduces the kullback leibler divergence to and makes the lower bound tight . if the optimal choice is intractable , as is the case here , then it is convenient to constrain the choice to a subset of tractable choices and substitute a choice which , within the subset of tractable choices , is as close as possible to the optimal choice in terms of kullback leibler divergence . a natural subset of tractable choices is given by introducing the auxiliary parameters and setting where the marginal auxiliary distributions are multinomial . in this case , the lower bound may be written - e_{{\bolds{\alpha } } } \bigl[\log p_{{\bolds{\alpha}}}({\mathbf{z}})\bigr ] \nonumber\\ & = & \sum_{i< j}^n \sum _ { k=1}^k \sum_{l=1}^k \alpha_{ik } \alpha_{jl } \log\pi_{d_{ij};kl}({\bolds{\theta } } ) \\ & & { } + \sum_{i=1}^n \sum _ { k=1}^k \alpha_{ik } ( \log \gamma_k - \log\alpha_{ik } ) .\nonumber\end{aligned}\ ] ] because equation ( [ aux ] ) assumes independence , the kullback leibler divergence between and , and thus the tightness of the lower bound , is determined by the dependence of the random variables conditional on .if the random variables are independent conditional on , then , for each , there exists such that , which reduces the kullback leibler divergence to and makes the lower bound tight . 
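The lower bound ([lb]) is straightforward to evaluate once the conditional dyad log-probabilities have been tabulated. The dense (n, n, K, K) layout in the sketch below is purely for clarity; a practical implementation for networks of the size considered here would exploit the sparsity of nonbaseline dyads, and the membership probabilities are assumed to be strictly positive.

```python
import numpy as np

def lower_bound(log_pi_dyads, alpha, gamma):
    """Variational lower bound ([lb]).

    log_pi_dyads: array of shape (n, n, K, K); entry [i, j, k, l] is
        log pi_{d_ij;kl}(theta) for the observed dyad value d_ij (i < j).
    alpha: (n, K) variational membership probabilities (strictly positive).
    gamma: (K,) mixing proportions.
    """
    n, K = alpha.shape
    iu = np.triu_indices(n, k=1)
    # sum_{i<j} sum_{k,l} alpha_ik * alpha_jl * log pi_{d_ij;kl}
    pair_terms = np.einsum("mk,ml,mkl->",
                           alpha[iu[0]], alpha[iu[1]], log_pi_dyads[iu])
    # sum_i sum_k alpha_ik * (log gamma_k - log alpha_ik)
    entropy_terms = np.sum(alpha * (np.log(gamma)[None, :] - np.log(alpha)))
    return pair_terms + entropy_terms
```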
in general , the random variables are not independent conditional on and the kullback leibler divergence ( [ kb ] ) is thus positive .approximate maximum likelihood estimates of and can be obtained by maximizing the lower bound in ( [ lb ] ) using variational em algorithms of the following form , where is the iteration number : letting and denote the current values of and , maximize with respect to .let denote the optimal value of and compute ] with respect to and , which is equivalent to maximizing with respect to and .the method ensures that the lower bound is nondecreasing in the iteration number : where inequalities ( [ nondecreasing1 ] ) and ( [ nondecreasing2 ] ) follow from the e - step and m - step , respectively .it is instructive to compare the variational em algorithm to the classical em algorithm as applied to finite mixture models .the e - step of the variational em algorithm minimizes the kullback leibler divergence between and .if the choice of were unconstrained , then the optimal choice would be .therefore , in the unconstrained case , the e - step of the variational em algorithm reduces to the e - step of the classical em algorithm , so the classical em algorithm can be considered to be the optimal variational em algorithm . to implement the e - step, we exploit the fact that the lower bound is nondecreasing as long as the e - step and m - step increase the lower bound . in other words , we do not need to maximize the lower bound in the e - step and m - step .indeed , increasing rather than maximizing the lower bound in the e - step and m - step may have computational advantages when is large . in the literature on em algorithms , the advantages of incremental e - steps and incremental m - stepsare discussed by and , respectively .we refer to the variational em algorithm with either an incremental e - step or an incremental m - step or both as a variational generalized em , or variational gem , algorithm . 
direct maximization of is unattractive : equation ( [ lb ] ) shows that the lower bound depends on the products and , therefore , fixed - point updates of along the lines of [ ] depend on all other .we demonstrate in section [ seccomparison ] that the variational em algorithm with the fixed - point implementation of the e - step can be inferior to the variational gem algorithm when is large .to separate the parameters of the maximization problem , we increase via an mm algorithm [ ] .mm algorithms can be viewed as generalizations of em algorithms [ ] and are based on iteratively constructing and then optimizing surrogate ( minorizing ) functions to facilitate the maximization problem in certain situations .we consider here the surrogate function \\[-8pt ] & & { } + \sum_{i = 1}^n \sum _ { k = 1}^k \alpha_{ik } \biggl(\log \gamma_k^{(t ) } - \log\alpha_{ik}^{(t ) } - \frac{\alpha_{ik}}{\alpha _ { ik}^{(t ) } } + 1 \biggr ) , \nonumber\end{aligned}\ ] ] which we show in appendix [ minorizer ] to have the following two properties : in the language of mm algorithms , conditions ( [ p1 ] ) and ( [ p2 ] ) establish that is a _ minorizer _ of at .the theory of mm algorithms implies that maximizing the minorizer with respect to forces uphill [ ] .this maximization , involving separate quadratic programming problems of k variables under the constraints for all and , may be accomplished quickly using the method described by .when is large , it is much easier to update by maximizing the function , which is the sum of functions of the individual , than by maximizing the function , in which the parameters are not separated in this way. we therefore arrive at the following replacement for the e - step : for , increase as a function of subject to for all and .let denote the new value of . to maximize in the m - step , examination of ( [ lb ] )shows that maximization with respect to and may be accomplished separately .in fact , for , there is a simple , closed - form solution : concerning , if there are no constraints on other than , it is preferable to maximize with respect to rather than , because there are closed - form expressions for but not for .maximization with respect to is accomplished by setting if the homogeneity assumption ( [ homogeneity ] ) does not hold , then closed - form expressions for may not be available . in some cases , as in the presence of categorical covariates , closed form expressions for are available , but the dimension of , and thus computing time , increases with the number of categories . if equations ( [ ergm ] ) and ( [ dyadindependence ] ) hold ,then the exponential parametrization may be inverted to obtain an approximate maximum likelihood estimate of after the approximate mle of is found using the variational gem algorithm .one method for accomplishing this inversion exploits the convex duality of exponential families [ ] and is explained in appendix [ convexduality ] . 
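The closed-form M-step can be sketched as follows. The update for the mixing proportions reproduces the display above; the update for the dyad probabilities in the unconstrained case is the standard weighted relative frequency that maximizes the lower bound under the normalization constraint, written here with an explicit loop over dyads purely for clarity.

```python
import numpy as np
from collections import defaultdict

def m_step(dyads, alpha, dyad_space):
    """Closed-form M-step for the unconstrained mixture model.

    dyads: mapping (i, j) -> d_ij for all i < j (an O(n^2) loop is used here
        for clarity; the implementation described in the text exploits sparsity).
    alpha: (n, K) variational membership probabilities.
    Returns gamma of shape (K,) and pi as a dict {d: (K, K) array}.
    """
    n, K = alpha.shape
    gamma = alpha.mean(axis=0)            # gamma_k = (1/n) sum_i alpha_ik

    num = defaultdict(lambda: np.zeros((K, K)))
    denom = np.zeros((K, K))
    for (i, j), d in dyads.items():       # i < j
        w = np.outer(alpha[i], alpha[j])  # w[k, l] = alpha_ik * alpha_jl
        num[d] += w
        denom += w
    # Weighted relative frequency of each dyad value for component pair (k, l)
    pi = {d: num[d] / denom for d in dyad_space}
    return gamma, pi
```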
if , in addition to the constraint , additional constraints on are present , the maximization with respect to may either decrease or increase computing time .linear constraints on can be enforced by lagrange multipliers and reduce the dimension of and thus computing time .nonlinear constraints on , as in example 1 , may not admit closed form updates of and thus may require iterative methods .if so , and if the nonlinear constraints stem from exponential family parameterizations of with natural parameter vector as in example 1 , then it is convenient to translate the constrained maximization problem into an unconstrained problem by maximizing with respect to and exploiting the fact that is a concave function of owing to the exponential family membership of [ , page 150 ] . we show in appendix [ gradienthessian ] how the exponential family parameterization can be used to derive the gradient and hessian of the lower bound of with respect to , which we exploit in section [ secapp ] using a newton raphson algorithm .although we maximize the lower bound of the log - likelihood function to obtain approximate maximum likelihood estimates , standard errors of the approximate maximum likelihood estimates and based on the curvature of the lower bound may be too small .the reason is that even when the lower bound is close to the log - likelihood function , the lower bound may be more curved than the log - likelihood function [ ] ; indeed , the higher curvature helps ensure that is a lower bound of the log - likelihood function in the first place . as an alternative ,we approximate the standard errors of the approximate maximum likelihood estimates of and by a parametric bootstrap method [ ] that can be described as follows : given the approximate maximum likelihood estimates of and , sample data sets . for each data set , compute the approximate maximum likelihood estimates of and .in addition to fast maximum likelihood algorithms , the parametric bootstrap method requires fast simulation algorithms .we propose such an algorithm in section [ secsim ] . as usual with em - like algorithms, it is a good idea to use multiple different starting values with the variational em due to the existence of distinct local maxima .we find it easiest to use random starts in which we assign the values of and then commence with an m - step .this results in values and , then the algorithm continues with the first e - step , and so on .the initial are chosen independently uniformly randomly on , then each is multiplied by a normalizing constant chosen so that the elements of sum to one for every .the numerical experiments of section [ secapp ] use 100 random restarts each .ideally , more restarts would be used , yet the size of the data sets with which we work makes every run somewhat expensive .we chose the number 100 because we were able to parallelize on a fairly large scale , essentially running 100 separate copies of the algorithm .larger numbers of runs , such as 1000 , would have forced longer run times since we would have had to run some of the trials in series rather than in parallel . as a convergence criterion, we stop the algorithm as soon as approximate bayesian estimation ------------------------------- the key to bayesian model estimation and model selection is the marginal likelihood , defined as where is the prior distribution of and . to ensure that the marginal likelihood is well - defined , we assume that the prior distribution is proper , which is common practice in mixture modeling [ , chapter 4 ]. 
a lower bound on the log marginal likelihood can be derived by introducing an auxiliary distribution with support , where is the parameter space of and is the parameter space of .a natural choice of auxiliary distributions is given by p_{{\bolds{\alpha}}_{{\bolds{\gamma}}}}({\bolds{\gamma } } ) \biggl[\prod _ { i = 1}^l p_{{\bolds{\alpha}}_{{\bolds{\theta}}}}(\theta_i ) \biggr],\ ] ] where denotes the set of auxiliary parameters , and . a lower bound on the log marginal likelihood can be derived by jensen s inequality : \\[-8pt ] & \geq & e_{{\bolds{\alpha}}}\bigl[\log p_{{\bolds{\gamma } } , { \bolds{\theta}}}({\mathbf{y}}= \mathbf{y } , { \mathbf{z}}= \mathbf { z } ) p ( { \bolds{\gamma } } , { \bolds{\theta}})\bigr ] - e_{{\bolds{\alpha}}}\bigl[\log a_{{\bolds{\alpha}}}({\mathbf{z } } , { \bolds{\gamma } } , { \bolds{\theta}})\bigr ] , \nonumber\end{aligned}\ ] ] where the expectations are taken with respect to the auxiliary distribution .we denote the right - hand side of ( [ lbbayesian ] ) by . by an argument along the lines of ( [ kb ] ), one can show that the difference between the log marginal likelihood and is equal to the kullback leibler divergence from the auxiliary distribution to the posterior distribution : a_{{\bolds{\alpha}}}(\mathbf{z } , { \bolds{\gamma } } , { \bolds{\theta } } ) \,d{\bolds{\gamma}}\,d { \bolds{\theta}}\nonumber\\[-8pt]\\[-8pt ] & & \qquad = \int_{\gamma } \int_{\theta } \sum _ { \mathbf{z}\in{\mathscr{z } } } \biggl[\log\frac{a_{{\bolds{\alpha}}}(\mathbf{z } , { \bolds{\gamma } } , { \bolds{\theta}})}{p({\mathbf{z}}= \mathbf{z } , { \bolds{\gamma } } , { \bolds{\theta}}\mid{\mathbf{y}}= \mathbf{y } ) } \biggr ] a_{{\bolds{\alpha } } } ( \mathbf{z } , { \bolds{\gamma } } , { \bolds{\theta } } ) \,d{\bolds{\gamma}}\,d{\bolds{\theta}}. \nonumber\end{aligned}\ ] ] the kullback leibler divergence between the auxiliary distribution and the posterior distribution can be minimized by a variational gem algorithm as follows , where is the iteration number : letting and denote the current values of and , increase with respect to .let denote the new value of .choose new values and that increase with respect to and .by construction , iteration of a variational gem algorithm increases the lower bound : a variational gem algorithm approximates the marginal likelihood as well as the posterior distribution .therefore , it tackles bayesian model estimation and model selection at the same time .variational gem algorithms for approximate bayesian inference are only slightly more complicated to implement than the variational gem algorithms for approximate maximum likelihood estimation presented in section [ secmle ] .to understand the difference , we examine the analogue of ( [ lb ] ) : + e_{{\bolds{\alpha}}}\bigl[\log p_{{\bolds{\gamma}}}({\mathbf{z}}= \mathbf{z})\bigr ] \\ & & \qquad\quad{}+ e_{{\bolds{\alpha}}}\bigl[\log p({\bolds{\gamma } } , { \bolds{\theta}})\bigr ] - e_{{\bolds{\alpha } } } \bigl[\log a({\mathbf{z}}= \mathbf{z } , { \bolds{\gamma } } , { \bolds{\theta}})\bigr ] .\nonumber\end{aligned}\ ] ] if the prior distributions of and are given by independent dirichlet and gaussian distributions and the auxiliary distributions of , and are given by independent multinomial , dirichlet and gaussian distributions , respectively , then the expectations on the right - hand side of ( [ lbbayesian2 ] ) are tractable , with the possible exception of the expectations ] in ( [ lbbayesian2 ] ) by the right - hand side of inequality ( [ lblbbayesian ] ) . 
to save space , we do not address the specific numerical techniques that may be used to implement the variational gem algorithm here .in short , the generalized e - step is based on an mm algorithm along the lines of section [ secmm ] . in the generalized m - step ,numerical gradient - based methods may be used .a detailed treatment of this bayesian estimation method and its implementation , using a more complicated prior distribution , may be found in ; code related to this article is available at http://sites.stat.psu.edu/\textasciitilde dhunter / code/[http://sites.stat.psu.edu/\textasciitilde dhunter / code/ ] .monte carlo simulation of large , discrete - valued networks serves at least three purposes : a. to generate simulated data to be used in simulation studies ; b. to approximate standard errors of the approximate maximum likelihood estimates by parametric bootstrap ; c. to assess model goodness of fit by simulation .a crude monte carlo approach is based on sampling by cycling through all nodes and sampling by cycling through all dyads . however , the running time of such an algorithm is , which is too slow to be useful in practice , because each of the goals listed above tends to require numerous simulated data sets .we propose monte carlo simulation algorithms that exploit the fact that discrete - valued networks tend to be sparse in the sense that one element of is much more common than all other elements of .an example is given by directed , binary - valued networks , where is the sample space of dyads and tends to dominate all other elements of .assume there exists an element of , called the baseline , that dominates the other elements of in the sense that for all and .the monte carlo simulation algorithm exploiting the sparsity of large , discrete - valued networks can be described as follows : 1 .sample by sampling and assigning nodes to component , nodes to component , etc .sample as follows : for each , a. sample the number of dyads with nonbaseline values , , where is the number of pairs of nodes belonging to components and ; b. sample out of pairs of nodes without replacement ; c. for each of the sampled pairs of nodes , sample the nonbaseline value according to the probabilities , , . in general ,if the degree of any node ( i.e. , the number of nonbaseline values for all dyad variables incident on that node ) has a bounded expectation , then the expected number of nonbaseline values in the network scales with and the expected running time of the monte carlo simulation algorithm scales with . if is small and is large , then the monte carlo approach that exploits the sparsity of large , discrete - valued networks is superior to the crude monte carlo approach .we compare the variational em algorithm based on the fixed - point ( fp ) implementation of the e - step along the lines of to the variational gem algorithm based on the mm implementation of the e - step by applying them to two data sets .the first data set comes from the study on political blogs by . 
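Before turning to the comparison, the sparse simulation scheme of Section [secsim] described above can be sketched as follows. For clarity the per-block-pair index arrays are built explicitly, which costs O(n^2) memory; an index-arithmetic mapping from sampled pair indices to node pairs avoids this in a production implementation, and the function and argument names are ours.

```python
import numpy as np

def simulate_sparse_network(n, gamma, pi, dyad_space, baseline, seed=None):
    """Sparse simulation of the mixture network model (Section [secsim]).

    gamma: (K,) mixing proportions; pi: dict {d: (K, K) array} of pi_{d;kl}.
    Returns block labels z and a dict {(i, j): d_ij} holding only the dyads
    whose value differs from the baseline.
    """
    rng = np.random.default_rng(seed)
    K = len(gamma)
    z = rng.choice(K, size=n, p=gamma)                        # step 1
    blocks = [np.flatnonzero(z == k) for k in range(K)]
    nonbase = [d for d in dyad_space if d != baseline]
    edges = {}
    for k in range(K):                                        # step 2
        for l in range(k, K):
            if k == l:
                r, c = np.triu_indices(len(blocks[k]), k=1)
                pairs_i, pairs_j = blocks[k][r], blocks[k][c]
            else:
                pairs_i = np.repeat(blocks[k], len(blocks[l]))
                pairs_j = np.tile(blocks[l], len(blocks[k]))
            n_pairs = len(pairs_i)
            if n_pairs == 0:
                continue
            p_nonbase = 1.0 - pi[baseline][k, l]
            m = rng.binomial(n_pairs, p_nonbase)              # (a) how many
            if m == 0:
                continue
            sel = rng.choice(n_pairs, size=m, replace=False)  # (b) which pairs
            probs = np.array([pi[d][k, l] for d in nonbase]) / p_nonbase
            vals = rng.choice(len(nonbase), size=m, p=probs)  # (c) which values
            for t, v in zip(sel, vals):
                edges[(int(pairs_i[t]), int(pairs_j[t]))] = nonbase[v]
    return z, edges
```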
we convert the binary network of political blogs with two labels , liberal ( ) and conservative ( ) , into a signed network by assigning labels of receivers to the corresponding directed edges .the resulting network has 1490 nodes and 2,218,610 edge variables .the second data set is the epinions data set described in section [ secintroduction ] with more than 131,000 nodes and more than 17 billion edge variables .we compare the two algorithms using the unconstrained network mixture model of ( [ nowsni ] ) with and components .for the first data set , we allow up to 1 hour for components and up to 6 hours for components . for the second data set , we allow up to 12 hours for components and up to 24 hours for components . for each data set , for each number of components and for each algorithm , we carried out 100 runs using random starting values as described in section [ secstartingstopping ] .figure [ figfpvsmmconvergencetraces ] shows trace plots of the lower bound of the log - likelihood function , where red lines refer to the lower bound of the variational em algorithm with fp implementation and blue lines refer to the lower bound of the variational gem algorithm with mm implementation .the variational em algorithm seems to outperform the variational gem algorithm in terms of computing time when and are small .however , when or are large , the variational gem algorithm appears far superior to the variational em algorithm in terms of the lower bounds .the contrast is most striking when is large , though the variational gem seems to outperform the variational em algorithm even when is small and is large .we believe that the superior performance of the variational gem algorithm stems from the fact that it separates the parameters of the maximization problem and reduces the dependence of the updates of the variational parameters , as discussed in section [ secmm ] , while the variational em algorithm tends to be trapped in local maxima . 
@ of the log - likelihood function for 100 runs each of the variational em algorithm with fp implementation ( red ) and variational gem algorithm with mm implementation ( blue ) , applied to the unconstrained network mixture model of ( [ nowsni ] ) for two different data sets.,title="fig : " ] & of the log - likelihood function for 100 runs each of the variational em algorithm with fp implementation ( red ) and variational gem algorithm with mm implementation ( blue ) , applied to the unconstrained network mixture model of ( [ nowsni ] ) for two different data sets.,title="fig : " ] + ( a ) political blogs data set with & ( b ) political blogs data set with + of the log - likelihood function for 100 runs each of the variational em algorithm with fp implementation ( red ) and variational gem algorithm with mm implementation ( blue ) , applied to the unconstrained network mixture model of ( [ nowsni ] ) for two different data sets.,title="fig : " ] & of the log - likelihood function for 100 runs each of the variational em algorithm with fp implementation ( red ) and variational gem algorithm with mm implementation ( blue ) , applied to the unconstrained network mixture model of ( [ nowsni ] ) for two different data sets.,title="fig : " ] + ( c ) epinions data set with & ( d ) epinions data set with thus , if and are small and a computing cluster is available , it seems preferable to carry out a large number of runs using the variational em algorithm in parallel , using random starting values as described in section [ secstartingstopping ] .however , if either or is large , it is preferable to use the variational gem algorithm .since the variational gem algorithm is not prone to be trapped in local maxima , a small number of long runs may be all that is needed .here , we address the problem of clustering the ,000 users of the data set introduced in section [ secintroduction ] according to their levels of trustworthiness , as indicated by the network of and ratings given by fellow users . to this end , we first introduce the individual `` excess trust '' statistics since is the number of positive ratings received by user in excess of the number of negative ratings , it is a natural measure of a user s individual trustworthiness .our contention is that consideration of the overall pattern of network connections results in a more revealing clustering pattern than a mere consideration of the statistics , and we support this claim by considering three different clustering methods : a parsimonious network model using the statistics , the fully unconstrained network model of ( [ nowsni ] ) , and a mixture model that considers only the statistics while ignoring the other network structure . for each method , we assume that the number of categories , , is five . partly , this choice is motivated by the fact that formal model selection methods such as the icl criterion suggested by , which we discuss in section 9 , suggest dozens if not hundreds of categories , which complicate summary and interpretation .since the reduction of data is the primary task of statistics [ ] , we want to keep the number of categories small and follow the standard practice of internet - based companies and websites , such as http://amazon.com and http://netflix.com , which use five categories to classify the trustworthiness of reviewers , sellers and service providers . 
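Since the excess-trust statistic introduced above is just the number of positive ratings received minus the number of negative ratings received, it can be computed in one pass over a sparse edge list; the (sender, receiver, sign) representation used below is an assumption made for illustration.

```python
import numpy as np

def excess_trust(edges, n):
    """t_i = (# positive ratings received by i) - (# negative ratings received)."""
    t = np.zeros(n, dtype=int)
    for _, receiver, sign in edges:
        t[receiver] += 1 if sign > 0 else -1
    return t

# tiny illustrative network: user 2 receives two '+' ratings and one '-'
edges = [(0, 2, +1), (1, 2, +1), (3, 2, -1), (2, 0, -1)]
print(excess_trust(edges, n=4))   # -> [-1  0  1  0]
```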
our parsimonious model , which enjoys benefits over the other two alternatives as we shall see , is based on , \nonumber\end{aligned}\ ] ] where and are indicators of negative and positive edges , respectively .the parameters in model ( [ excessdyadmodel1 ] ) are not identifiable , because and .we therefore constrain the positive edge parameter to be .model ( [ excessdyadmodel1 ] ) assumes in the interest of model parsimony that the propensities to form negative and positive edges and to reciprocate negative and positive edges do not vary across clusters ; however , the flexibility afforded by this modeling framework enables us to define cluster - specific parameters for any of these propensities if we wish .the conditional probability mass function of the whole network is given by ,\nonumber\end{aligned}\ ] ] where is the total excess trust for all nodes in the category .the parameters are therefore measures of the trustworthiness of each of the categories .furthermore , these parameters are estimated in the presence of that is , after correcting for the reciprocity effects as measured by the parameters and , which summarize the overall tendencies of users to reciprocate negative and positive ratings , respectively .thus , and may be considered to measure overall tendencies toward _ lex talionis _ and _ quid pro quo _ behaviors .one alternative model we consider is the unconstrained network model obtained from ( [ nowsni ] ) . with five components , this model comprises four mixing parameters in addition to the parameters , of which there are 105: there are nine types of dyads whenever , contributing parameters , and six types of dyads whenever , contributing an additional parameters . despite the increased flexibility afforded by model ( [ nowsni ] ), we view the loss of interpretability due to the large number of parameters as a detriment .furthermore , more parameters opens up the possibility of overfitting and , as we discuss below , appears to make the lower bound of the log - likelihood function highly multi - modal .our other alternative model is a univariate mixture model applied to the statistics directly , which assumes that the individual excesses are independent random variables sampled from a distribution with density where , and are component - specific mixing proportions , means and standard deviations , respectively , and is the standard normal density .traditional univariate approaches like this are less suitable than network - based clustering approaches not only because by design they neither consider nor inform us about the topology of the network , which may be relevant , but also because the individual excesses are not independent : these are functions of edges , and edges may be dependent owing to reciprocity ( and other forms of dependence not modeled here ) , which decades of research [ e.g. 
, ] have shown to be important in shaping social networks .unlike the univariate mixture model of ( [ gaussianmixture ] ) , the mixture model we employ for networks allows for such dependence .we use a variational gem algorithm to estimate the network model ( [ excessdyadmodel1full ] ) , where the m - step is executed by a newton raphson algorithm using the gradient and hessian derived in appendix [ gradienthessian ] with a maximum of 100 iterations .it stops earlier if the largest absolute value in the gradient vector is less than .by contrast , the unconstrained network model following from ( [ nowsni ] ) employs a variational gem algorithm using the exact m - step update ( [ piupdate ] ) .the variational gem algorithm stops when either the relative change in the objective function is less than or 6000 iterations are performed .most runs require the full 6000 iterations . to estimate the normal mixture model ( [ gaussianmixture ] ), we use the ` r ` package ` mixtools ` [ ] . [ cols="^,^ " , ] to diagnose convergence of the algorithm for fitting the model ( [ excessdyadmodel1full ] ) , we present the trace plot of the lower bound of the log - likelihood function in figure [ figconvergence](a ) and the trace plot of the cluster - specific excess parameters in figure [ figconvergence](b ) . both figures are based on runs , where the starting values are obtained by the procedure described in section [ secstartingstopping ] .the results suggest that all runs seem to converge to roughly the same solution .this fact is somewhat remarkable , since many variational algorithms appear very sensitive to their starting values , converging to multiple distinct local optima [ e.g. , ] .for instance , the 100 runs for the unconstrained network model ( [ nowsni ] ) produced essentially a unique set of estimates for each set of random starting values .similarly , the normal mixture model algorithm produces many different local maxima , even after we try to correct for label - switching by choosing random starting values fairly tightly clustered by their mean values .@ , grouped by highest - probability component of , for parsimonious network mixture model ( [ excessdyadmodel1full ] ) with 12 parameters , normal mixture model ( [ gaussianmixture ] ) with 14 parameters , and unconstrained network mixture model ( [ nowsni ] ) with 109 parameters.,title="fig : " ] & , grouped by highest - probability component of , for parsimonious network mixture model ( [ excessdyadmodel1full ] ) with 12 parameters , normal mixture model ( [ gaussianmixture ] ) with 14 parameters , and unconstrained network mixture model ( [ nowsni ] ) with 109 parameters.,title="fig : " ] + ( a ) network mixture model ( [ excessdyadmodel1full ] ) & ( b ) normal mixture model ( [ gaussianmixture ] ) + + figure [ figepinionsexcesstrust ] shows the observed excesses grouped by clusters for the best solutions , as measured by likelihood or approximate likelihood , found for each of the three clustering methods .it appears that the clustering based on the parsimonious network model does a better job of separating the statistics into distinct subgroups though this is not the sole criterion used than the clusterings for the other two models , which are similar to each other .in addition , if we use a normal mixture model in which the variances are restricted to be constant across components , the results are even worse , with one large cluster and multiple clusters with few nodes . 
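The Newton-Raphson M-step described above can be sketched generically as follows; the two stopping rules mirror those stated in the text (largest absolute gradient entry below a tolerance, or a maximum of 100 iterations), while the tolerance value, the toy quadratic objective and the function names are placeholders rather than the paper's actual gradient and Hessian, which are derived in the appendix.

```python
import numpy as np

def newton_raphson(grad, hess, theta0, tol=1e-8, max_iter=100):
    """Generic Newton-Raphson maximizer used as an M-step.

    Stops when the largest absolute gradient entry falls below `tol`
    or after `max_iter` iterations; `tol` is a placeholder value."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = grad(theta)
        if np.max(np.abs(g)) < tol:
            break
        theta = theta - np.linalg.solve(hess(theta), g)
    return theta

# toy concave objective f(x, y) = -(x - 3)^2 - 0.5 (y + 1)^2, maximized at (3, -1)
grad = lambda th: np.array([-2.0 * (th[0] - 3.0), -(th[1] + 1.0)])
hess = lambda th: np.array([[-2.0, 0.0], [0.0, -1.0]])
print(newton_raphson(grad, hess, [0.0, 0.0]))   # ~ [3. -1.]
```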
in figure [ figepinionsratings ] ,we `` ground truth '' the clustering solutions using external information : the average ratings of 659,290 articles , grouped according to the highest - probability category of the article s author . while in figure [ figepinionsexcesstrust ] the size of each cluster is the number of users in that cluster , in figure [ figepinionsratings ] the size of each cluster is the number of articles written by users in that cluster .the widths of the boxes in figures [ figepinionsexcesstrust ] and [ figepinionsratings ] are proportional to the square roots of the cluster sizes . as an objective criterion to compare the three models , we fit one - way anova models where responses are article ratings and fixed effects are the group indicators of the articles authors .the adjusted r values are , and for the network mixture model , the normal mixture model and the unconstrained network mixture model , respectively . in other words , the latent structure detected by the 12-component network mixture model of ( [ excessdyadmodel1full ] ) explains the variation in article ratings better than the 14-parameter univariate mixture model or the 109-parameter unconstrained network model .@ ) with 12 parameters , normal mixture model ( [ gaussianmixture ] ) with 14 parameters , and unconstrained network mixture model ( [ nowsni ] ) with 109 parameters .the ordering of the five categories , which is the same as in figure [ figepinionsexcesstrust ] , indicates that the unconstrained network mixture model does not even preserve the correct ordering of the median average ratings.,title="fig : " ] & ) with 12 parameters , normal mixture model ( [ gaussianmixture ] ) with 14 parameters , and unconstrained network mixture model ( [ nowsni ] ) with 109 parameters .the ordering of the five categories , which is the same as in figure [ figepinionsexcesstrust ] , indicates that the unconstrained network mixture model does not even preserve the correct ordering of the median average ratings.,title="fig : " ] + ( a ) network mixture model ( [ excessdyadmodel1full ] ) & ( b ) normal mixture model ( [ gaussianmixture ] ) + + .3c@ & & & * confidence * + * parameter * & * statistic * & & * interval * + negative edges ( ) & & -24.020 & + positive edges ( ) & & 0 & + negative reciprocity ( ) & & 8.660 & + positive reciprocity ( ) & & 9.899 & + cluster 1 trustworthiness ( ) & & -6.256 & + cluster 2 trustworthiness ( ) & & -7.658 & + cluster 3 trustworthiness ( ) & & -9.343 & + cluster 4 trustworthiness ( ) & & -11.914 & + cluster 5 trustworthiness ( ) & & -15.212 & + table [ tableepinions ] reports estimates of the parameters from model ( [ excessdyadmodel1full ] ) along with 95% confidence intervals reported in that table obtained by simulating 500 networks using the method of section [ secsim ] and the parameter estimates obtained via our algorithm . 
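The reported 95% confidence intervals are percentile intervals, i.e. the 2.5% and 97.5% sample quantiles of the parameter estimates recomputed on each of the 500 simulated networks; a minimal sketch, with fabricated replicate values used purely to make the snippet runnable:

```python
import numpy as np

def percentile_ci(bootstrap_estimates, level=0.95):
    """Percentile bootstrap confidence interval from the sample quantiles."""
    lo = (1.0 - level) / 2.0
    return np.quantile(bootstrap_estimates, [lo, 1.0 - lo])

# illustrative: 500 fake bootstrap replicates of one trustworthiness parameter
rng = np.random.default_rng(1)
fake_replicates = rng.normal(loc=-6.26, scale=0.15, size=500)  # made-up numbers
print(percentile_ci(fake_replicates))
```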
for each network, we run our algorithm for 1000 iterations starting at the m - step , where the parameters are initialized to reflect the `` true '' component to which each node is assigned by the simulation algorithm by setting for not equal to the true component and otherwise .this is done to eliminate the so - called label - switching problem , which is rooted in the invariance of the likelihood function to switching the labels of the components and which can affect bootstrap samples in the same way it can affect markov chain monte carlo samples from the posterior of finite mixture models [ ] .the sample 2.5% and 97.5% quantiles form the confidence intervals shown .in addition , we give density estimates of the five trustworthiness bootstrap samples in figure [ figbootstrap1 ] . table [ tableepinions ]shows that some clusters of users are much more trustworthy than others .in addition , there is statistically significant evidence that users rate others in accordance with both _lex talionis _ and _ quid pro quo _ , since both and are positive .these findings suggest that the ratings of pairs of users and are , perhaps unsurprisingly , dependent and not free of self - interest .finally , a few remarks concerning the parametric bootstrap are appropriate .while we are encouraged by the fact that bootstrapping is even feasible for problems of this size , there are aspects of our investigation that will need to be addressed with further research .first , the bootstrapping is so time - consuming that we were forced to rely on computing clusters with multiple computing nodes to generate a bootstrap sample in reasonable time .future work could focus on more efficient bootstrapping .some work on efficient bootstrapping was done by , but it is restricted to simple models and not applicable here .second , when the variational gem algorithm is initialized at random locations , it may converge to local maxima whose values are inferior to the solutions attained when the algorithm is initialized at the `` true '' values used to simulate the networks .while it is not surprising that variational gem algorithms converge to local maxima , it is surprising that the issue shows up in some of the simulated data sets but not in the observed data set .one possible explanation is that the structure of the observed data set is clear cut , but that the components of the estimated model are not sufficiently separated .therefore , the estimated model may place nonnegligible probability mass on networks where two or more subsets of nodes are hard to distinguish and the variational gem algorithm may be attracted to local maxima .third , some groups of confidence intervals , such as the first four trustworthiness parameter intervals , have more or less the same width .we do not have a fully satisfying explanation for this result ; it may be a coincidence or it may have some deeper cause related to the difficulty of the computational problem . in summary , we find that the clustering framework we introduce here provides useful results for a very large network .most importantly , the sensible application of statistical modeling ideas , which reduces the unconstrained 109-parameter model to a constrained 12-parameter model , produces vastly superior results in terms of interpretability , numerical stability and predictive performance .the model - based clustering framework outlined here represents several advances . 
an attention to standard statistical modeling ideas relevant in the network context improves model parsimony and interpretability relative to fully unconstrained clustering models , while also suggesting a viable method for assessing precision of estimates obtained .algorithmically , our advances allow us to apply a variational em idea , recently applied to network clustering models in numerous publications [ e.g. , ] , to networks far larger than any that have been considered to date .we have applied our methods to networks with over a hundred thousand nodes and signed edges , indicating how they extend to categorical - valued edges generally or models that incorporate other covariate information . in practice , these methods could have myriad uses , from identifying high - density regions of large networks to selecting among competing models for a single network to testing specific network effects of scientific interest when clustering is present .to achieve these advances , we have focused exclusively on models exhibiting dyadic independence conditional on the cluster memberships of nodes .it is important to remember that these models are _ not _ dyadic independence models overall , since the clustering itself introduces dependence .however , to more fully capture network effects such as transitivity , more complicated models may be needed , such as the latent space models of , or .a major drawback of latent space models is that they tend to be less scalable than the models considered here .an example is given by the variational bayesian algorithm developed by to estimate the latent space model of .the running time of the algorithm is and it has therefore not been applied to networks with more than nodes and ,700 edge variables .an alternative to the variational bayesian algorithm of based on case - control sampling was proposed by . however , while the computing time of this alternative algorithm is , the suggested preprocessing step , which requires determining the shortest path length between pairs of nodes , is . as a result , the largest network analyzeis an undirected network with nodes and ,686,970 edge variables .in contrast , the running time of the variational gem algorithm proposed here is in the constrained and in the unconstrained version of the model , where is the number of edge variables whose value is not equal to the baseline value .it is worth noting that is in the case of sparse graphs and , therefore , the running time of the variational gem algorithm is in the case of sparse graphs . 
indeed , even in the presence of the covariates , the running time of the variational gem algorithm is with categorical covariates , where is the number of covariates and is the number of categories of the covariate .we have demonstrated that the variational gem algorithm can be applied to networks with more than ,000 nodes and billion edge variables .while the running time of shows that the variational gem algorithm scales well with , in practice , the `` g '' in `` gem '' is an important contributor to the speed of the variational gem algorithm : merely increasing the lower bound using an mm algorithm rather than actually maximizing it using a fixed - point algorithm along the lines of appears to save much computing time for large networks , though an exhaustive comparison of these two methods is a topic for further investigation.=-1 an additional increase in speed might be gained by exploiting acceleration methods such as quasi - newton methods [ , section 10.7 ] , which have shown promise in the case of mm algorithms [ ] and which might accelerate the mm algorithm in the e - step of the variational gem algorithm. however , application of these methods is complicated in the current modeling framework because of the exceptionally large number of auxiliary parameters introduced by the variational augmentation .we have neglected here the problem of selecting the number of clusters . making this selection based on the so - called icl criterion , but it is not known how the icl criterion behaves when the intractable incomplete - data log - likelihood function in the icl criterion is replaced by a variational - method lower bound . in our experience ,the magnitude of the changes in the maximum lower bound value achieved with multiple random starting parameters is at least as large as the magnitude of the penalization imposed on the log - likelihood by the icl criterion .thus , we have been unsuccessful in obtaining reliable icl - based results for very large networks .more investigation of this question , and of the selection of the number of clusters in general , seems warranted . by demonstrating that scientifically interesting clustering models can be applied to very large networks by extending the variational - method ideas developed for network data sets recently in the statistical literature, we hope to encourage further investigation of the possibilities of these and related clustering methods .the source code , written in ` c++ ` , and data files used in sections [ seccomparison ] and [ secapp ] are publicly available at http://sites.stat.psu.edu/\textasciitilde dhunter / code[http://sites.stat.psu.edu/\textasciitilde dhunter / code ] .the lower bound of the log - likelihood function can be written as \\[-8pt ] & & { } + \sum _ { i=1}^n \sum_{k=1}^k \alpha_{ik } ( \log\gamma_k - \log\alpha_{ik } ) .\nonumber\end{aligned}\ ] ] since for all , the arithmetic - geometric mean inequality implies that [ ] , with equality if and .in addition , the concavity of the logarithm function gives with equality if .therefore , function as defined in ( [ qdefn ] ) possesses properties ( [ p1 ] ) and ( [ p2 ] ) .we show how closed - form expressions of in terms of can be obtained by exploiting the convex duality of exponential families .let be the legendre fenchel transform of , where $ ] is the mean - value parameter vector and the subscripts and have been dropped . 
by barndorff - nielsen [ ( ) , page 140 ] and wainwright and jordan [ ( ) , pages 67 and 68 ] , the legendre fenchel transform of is self - inverse and , thus , can be written as where and .therefore , closed - form expressions of in terms of may be found by maximizing with respect to .we are interested in the gradient and hessian with respect to the parameter vector of the lower bound in ( [ lbappendix ] ) .the two examples of models considered in section [ secmodel ] assume that the conditional dyad probabilities take the form ,\ ] ] where is a linear function of parameter vector and is a matrix of suitable order depending on components and .it is convenient to absorb the matrix into the statistic vector and write ,\ ] ] where .thus , we may write + \mbox{const},\ ] ] where `` const '' denotes terms which do not depend on and .\ ] ] since the lower bound is a weighted sum of exponential family log - probabilities , it is straightforward to obtain the gradient and hessian of with respect to , which are given by \bigr\}\ ] ] and ,\ ] ] respectively . in other words ,the gradient and hessian of with respect to are weighted sums of expectations the means , variances and covariances of statistics .since the sample space of dyads is finite and , more often than not , small , these expectations may be computed by complete enumeration of all possible values of and their probabilities .we are grateful to paolo massa and kasper souren of http://www.trustlet.org[trustlet.org ] for sharing the http://www.epinions.com[epinion.com ] data
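Referring back to the remark above that the gradient and Hessian of the lower bound are weighted sums of dyad-level means, variances and covariances of the statistics, which can be computed by complete enumeration of the small dyad sample space, the following sketch shows that enumeration step; the statistic rows and parameter values are invented for illustration.

```python
import numpy as np

def dyad_moments(theta, stats):
    """Mean and covariance of the statistic vector of one dyad under an
    exponential-family probability p(d) proportional to exp(theta' s(d)),
    computed by complete enumeration of the finite dyad sample space.

    stats : (m, p) array, one row s(d) per possible dyad value d."""
    eta = stats @ theta
    w = np.exp(eta - eta.max())                  # numerically stable weights
    p = w / w.sum()                              # dyad probabilities
    mean = p @ stats                             # E[s(D)]
    centered = stats - mean
    cov = (centered * p[:, None]).T @ centered   # Cov[s(D)]
    return mean, cov

# illustrative signed dyad with three values and a 2-dimensional statistic
stats = np.array([[0.0, 0.0],    # baseline value
                  [1.0, 0.0],    # negative edge
                  [0.0, 1.0]])   # positive edge
theta = np.array([-2.0, -1.0])
mean, cov = dyad_moments(theta, stats)
# the gradient contribution of a dyad observed at the 'positive' value would
# then be (stats[2] - mean) and its Hessian contribution -cov, each weighted
# by the corresponding variational membership weights.
print(mean, cov, sep="\n")
```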
we describe a network clustering framework , based on finite mixture models , that can be applied to discrete - valued networks with hundreds of thousands of nodes and billions of edge variables . relative to other recent model - based clustering work for networks , we introduce a more flexible modeling framework , improve the variational - approximation estimation algorithm , discuss and implement standard error estimation via a parametric bootstrap approach , and apply these methods to much larger data sets than those seen elsewhere in the literature . the more flexible framework is achieved through introducing novel parameterizations of the model , giving varying degrees of parsimony , using exponential family models whose structure may be exploited in various theoretical and algorithmic ways . the algorithms are based on variational generalized em algorithms , where the e - steps are augmented by a minorization - maximization ( mm ) idea . the bootstrapped standard error estimates are based on an efficient monte carlo network simulation idea . last , we demonstrate the usefulness of the model - based clustering framework by applying it to a discrete - valued network with more than 131,000 nodes and 17 billion edge variables . ,
global optimization problems are considered where the computation of objective function values , using the standard computer arithmetic , is problematic because of either underflows or overflows .a perspective means for solving such problems is the arithmetic of infinity .besides fundamentally new problems of minimization of functions whose computation involves infinite or infinitesimal values , the arithmetic of infinity can be also very helpful for the cases where the computation of objective function values is challenging because of the involvement of numbers differing in many orders of magnitude .for example , in some problems of statistical inference , the values of operands , involved in the computation of objective functions , differ by more than a factor of .the arithmetic of infinity can be applied to the optimization of challenging objective functions in two ways .first , the optimization algorithm can be implemented in the arithmetic of infinity .second , the arithmetic of infinity can be applied to scale the objective function values to be suitable for processing by a conventionally implemented optimization algorithm .the second case is simpler to apply , since the arithmetic of infinity should be applied only to the scaling of function values .if both implementation versions of the algorithm perform identically with respect to the generation of sequences of points where the objective function values are computed , the algorithm is called strongly homogeneous . in the present paper, we show that both implementation versions - of the p - algorithm and of the one - step bayesian algorithm - are strongly homogeneous . to be more precise , let us consider two objective functions and , differing only in scales of function values , i.e. where and are constants that can assume not only finite but also infinite and infinitesimal values expressed by numerals introduced in . in its turn, is defined by using the traditional finite arithmetic .the sequences of points generated by an algorithm , when applied to these functions , are denoted by and respectively .the algorithm that generates the identical sequences is called strongly homogeneous .a weaker property of algorithms is considered in , where the algorithms that generate the identical sequences for the functions and are called homogeneous .since the proper scaling of function values by translation alone is not always possible , in the present paper we consider invariance of the optimization results with respect to a more general ( affine ) transformation of the objective function values .let us consider the minimization problem where the multimodality of the objective function is expected .although the properties of the feasible region are not essential in a further analysis , for the sake of explicitness , is assumed to be a hyper - rectangle . for the arguments justifying the construction of global optimization algorithms using statistical models of objective functions , we refer to .global optimization algorithms based on statistical models implement the ideas of the theory of rational decision making under uncertainty .the p - algorithm is constructed in stating the rationality axioms in the situation of selection of a point of current computation of the value of ; it follows from the axioms that a point should be selected where the probability to improve the current best value is maximal . 
to implement the p - algorithm ,gaussian stochastic functions are used mainly because of their computational advantages ; however such type of statistical models is justified axiomatically and by the results of a psychometric experiment .application for a statistical model of a non - gaussian stochastic function would imply at least serious implementation difficulties .let be the gaussian stochastic function with mean value , variance , and correlation function .the choice of the correlation function normally is based on the supposed properties of the aimed objective functions , and the properties of the corresponding stochastic function , e.g. frequently used correlation functions are , .the parameters and should be estimated using a sample of the objective function values .let be the function values computed during the previous minimization steps . by the p - algorithm the next function valueis computed at the point of maximum probability to overpass the aspiration level : since is the gaussian stochastic function , the maximization in ( [ p ] ) can be reduced to the maximization of where and denote the conditional mean and conditional variance of with respect to .the explicit formulae of and are presented below since they will be needed in a further analysis evaluate the influence of data scaling on the whole optimization process , two objective functions are considered : and , where and are constants .let us assume that the first function values were computed for both functions at the same points .the next points of computation of the values of and are denoted by and .we are interested in the strong homogeneity of the p - algorithm , i.e. in the equality .the parameters of the stochastic function , estimated using the same method but different function values , normally are different .the estimates of and , obtained using the data and , are denoted as and , respectively .it is assumed that and ; as shown below , this natural assumption is satisfied for the two most frequently used estimators . obviously, the unbiased estimates of and of , , and , satisfy the assumptions made . although those estimates are well justified only for independent observations , they sometimes ( especially when only a small number ( ) of observations is available ) are used also for rough estimation of the parameters and of despite the correlation between the .the maximum likelihood estimates also satisfy the assumptions : where , and is the dimensional unit vector .it is easy to show that the maximum likelihood estimates implied by ( [ likl ] ) are equal to it follows from ( [ likl1 ] ) and ( [ likl2 ] ) that , and correspondingly .the aspiration levels are defined depending on the scales of function values : , .the p - algorithm , based on the gaussian model with estimated parameters , is strongly homogeneous . according to the definition of the following equalities are valid taking into account the relation between and and the corresponding relations between the estimates of and , equalities ( [ v ] )can be extended as follows the equality between and means that the sequence of points generated by the p - algorithm is invariant with respect to the scaling of the objective function values . 
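As a small illustration of the criterion and plug-in maximum likelihood estimates discussed above, the sketch below evaluates the P-algorithm selection criterion on a grid and checks numerically that an affine rescaling of the function values leaves it unchanged; the exponential correlation parameter, the aspiration-level rule, the design points and the affine constants are all arbitrary illustrative choices.

```python
import numpy as np

def corr(x1, x2, c=2.0):
    """Exponential correlation exp(-c |x - x'|); c = 2 is arbitrary."""
    return np.exp(-c * np.abs(x1[:, None] - x2[None, :]))

def p_criterion(x_grid, x_obs, y_obs, c=2.0):
    """P-algorithm criterion (y_aspiration - m(x)) / s(x) with the maximum
    likelihood plug-in estimates of the model mean and variance."""
    R = corr(x_obs, x_obs, c)
    Ri = np.linalg.inv(R)
    one = np.ones_like(y_obs)
    mu = one @ Ri @ y_obs / (one @ Ri @ one)      # ML estimate of the mean
    resid = y_obs - mu
    sig2 = resid @ Ri @ resid / len(y_obs)        # ML estimate of the variance
    r = corr(x_grid, x_obs, c)
    m = mu + r @ Ri @ resid                       # conditional mean
    v = sig2 * (1.0 - np.sum((r @ Ri) * r, axis=1))
    s = np.sqrt(np.clip(v, 1e-12, None))          # numerical safeguard only
    y_asp = y_obs.min() - 0.1 * np.ptp(y_obs)     # ad-hoc aspiration level
    return (y_asp - m) / s

x_obs = np.array([0.05, 0.25, 0.55, 0.80, 0.95])
y1 = np.array([-0.30, -0.70, -0.20, -0.50, -0.10])    # invented values
y2 = 2.0 * y1 + 1.0                                   # affine rescaling
xg = np.linspace(0.0, 1.0, 400)
c1, c2 = p_criterion(xg, x_obs, y1), p_criterion(xg, x_obs, y2)
# identical criteria, hence the same next evaluation point for both functions
print(np.allclose(c1, c2), xg[np.argmax(c1)], xg[np.argmax(c2)])
```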
the strong homogeneity of the p - algorithm is proven .as shown in , the p - algorithm and the radial basis function algorithm are equivalent under very general assumptions .therefore the statement on the strong homogeneity of the p - algorithm is also valid for the radial basis function algorithm .statistical models of objective functions are also used to construct bayesian algorithms .let a gaussian stochastic function be chosen for the statistical model as in section [ sec : p ] .an implementable version of the bayesian algorithm is the so called one - step bayesian algorithm defined as follows : the one - step bayesian algorithm , based on the gaussian model with estimated parameters , is strongly homogeneous .the value of the objective function is computed by the one - step bayesian algorithm at the point of maximum average improvement ( [ b ] ) .the formula of conditional mean in ( [ b ] ) can be rewritten as follows where denotes the gaussian probability density with the mean value and variance . for simplicity ,we use in this formula and hereinafter the traditional symbol . obviously ,when one starts to work in the framework of the infinite arithmetic , it should be substituted by an appropriate infinite number that has been defined a priori by the chosen statistical model .integration by parts in ( [ b1 ] ) results in the following formula where is the laplace integral : . from the formulae ( [ cond ] ) , ( [ v1 ] ) ,the equalities follow implying the invariance of the sequence generated by the one - step bayesian algorithm with respect to the scaling of values of the objective function .the strong homogeneity of the one - step bayesian algorithm is proven .although the invariance of the whole optimization process with respect to affine scaling of objective function values seems very natural , not all global optimization algorithms are strongly homogeneous .for example , the rather popular algorithm direct is not strongly homogeneous .we are not going to investigate in detail the properties of direct related to the scaling of objective function values . instead an example is presented contradicting the necessary conditions of strong homogeneity .for the sake of simplicity let us consider the one - dimension version of direct .let the feasible region ( interval ) be partitioned into subintervals $ ] , .the objective function values computed at the points are supposed positive , ; denote .a -th subinterval is said to be potentially optimal if there exists a constant such that where , and is a constant defining the requested relative improvement , .all potentially optimal subintervals are subdivided at the current iteration .let us consider the iteration where the potentially optimal -th subinterval is not the longest one .then for all where .otherwise there exists a constant such that the values and corresponding to the minimum in ( [ l2 ] ) are denoted as and correspondingly , i.e. let the values of the function be computed at the points , and assume that the following inequality is valid , where for the data related to the following inequality holds : and a constant satisfying the inequalities can not exist .therefore the -th subinterval for the function is not potentially optimal because necessary conditions ( analogous to ( [ l2 ] ) and ( [ l3 ] ) for the function ) are not satisfied .to demonstrate the strong homogeneity of the p - algorithm an example of one dimensional optimization is considered . 
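Referring back to the one-step Bayesian criterion above, the following sketch evaluates the standard closed form of the average improvement (equivalent to the expression obtained by integration by parts in the text) and checks that a positive affine rescaling of the function values multiplies the criterion by the scale factor, leaving its maximizer, and hence the generated sequence of points, unchanged; the numerical values are arbitrary.

```python
from math import erf, exp, pi, sqrt

def expected_improvement(m, s, y_best):
    """Average improvement E[max(y_best - xi(x), 0)] for Gaussian xi(x) with
    conditional mean m and conditional standard deviation s."""
    if s <= 0.0:
        return max(y_best - m, 0.0)
    u = (y_best - m) / s
    Phi = 0.5 * (1.0 + erf(u / sqrt(2.0)))        # standard normal cdf
    phi = exp(-0.5 * u * u) / sqrt(2.0 * pi)      # standard normal pdf
    return s * (u * Phi + phi)

# rescaling y -> a*y + b with a > 0 multiplies the criterion by a,
# so its maximizer is unchanged (strong homogeneity)
a, b = 4.0, 3.2                                   # arbitrary constants
m1, s1, ybest1 = -0.7, 0.12, -0.9
ei1 = expected_improvement(m1, s1, ybest1)
ei2 = expected_improvement(a * m1 + b, a * s1, a * ybest1 + b)
print(ei1, ei2, abs(ei2 - a * ei1) < 1e-12)
```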
for a statistical model the stationary gaussian stochasticfunction with correlation function is chosen .let the values of the first objective function ( say ) computed at the points ( 0 , 0.2 , 0.5 , 0.9 , 1 ) be equal to ( -0.8 , -0.9 , -0.65 , -0.85 , -0.55 ) , and the values of the second objective function ( say ) be equal to ( 0 , -0.4 , 0.6 , -0.2 , 0.99 ) .the graphs of the conditional mean and conditional standard deviation for both sets of data are presented in figure [ fig:1 ] . in the section of figure [ fig:1 ] showing the conditional means , the horizontal lines are drawn at the levels and correspondingly . in spite of the obvious difference in the data , the functions expressing the probability of improvement for both cases coincide . therefore , their maximizers which define the next points of function evaluations also coincide .this coincidence is implied by the strong homogeneity of the p - algorithm and the following relation : , where the values of up to five decimal digits are equal to .both the p - algorithm and the one - step bayesian algorithm are strongly homogeneous .the optimization results by these algorithms are invariant with respect to affine scaling of values of the objective function .the implementations of these algorithms using the conventional computer arithmetic combined with the scaling of function values , using the arithmetic of infinity , are applicable to the objective functions with either infinite or infinitesimal values .the optimization results , obtained in this way , would be identical with the results obtained applying the implementations of the algorithms in the arithmetic of infinity .the valuable remarks of two unknown referees facilitated a significant improvement of the presentation of results .jones d.r .( 1993 ) lipschitzian optimization without the lipschitz constant , journal of optimization theory and applications , vol.79 ( 1 ) , 157181 .mockus j. ( 1972 ) on bayesian methods of search for extremum , avtomatika i vychislitelnaja tekhnika , no.3 , 53 - 62 , ( in russian ) .mockus j. ( 1988 ) bayesian approach to global optimization , kluwer academic publishers , dodrecht .sergeyev ya.d .( 2008 ) a new applied approach for executing computations with infinite and infinitesimal quantities , informatica , vol.19(4 ) , 567 - 596 .sergeyev ya.d .( 2009 ) numerical computations and mathematical modelling with infinite and infinitesimal numbers , journal of applied mathematics and computing , vol.29 , 177 - 195 .sergeyev ya.d .( 2010 ) lagrange lecture : methodology of numerical computations with infinities and infinitesimals , rendiconti del seminario matematico delluniversit e del politecnico di torino , vol.68(2 ) , 95113 .strongin r. , sergeyev ya.d .( 2000 ) global optimization with non - convex constraints , kluwer academic publishers , dodrecht .trn a. , ilinskas a. ( 1989 ) global optimization , lecture notes in computer science , vol.350 , 1 - 255 .ilinskas a. ( 1982 ) axiomatic approach to statistical models and their use in multimodal optimization theory , mathematical programming , vol.22 , 104 - 116 .ilinskas a. ( 1985 ) axiomatic characterization of a global optimization algorithm and investigation of its search strategies , operations research letters , vol.4 , 35 - 39 .ilinskas a. ( 2010 ) on similarities between two models of global optimization : statistical models and radial basis functions , journal of global optimization , vol.48 , 173 - 182 .ilinskas a. 
( 2011 ) small sample estimation of parameters for wiener process with noise , communications in statistics - theory and methods , vol.40(16 ) , 3020 - 3028 . ilinskas a. , ilinskas j. ( 2010 ) interval arithmetic based optimization in nonlinear regression , informatica , vol.21(1 ) , 149 - 158 .
the implementation of global optimization algorithms using the arithmetic of infinity is considered . a relatively simple implementation is proposed for algorithms that possess the introduced property of strong homogeneity . it is shown that the p - algorithm and the one - step bayesian algorithm are strongly homogeneous . keywords : arithmetic of infinity , global optimization , statistical models
sensitivity analysis is an important aspect of the responsible use of hydraulic models .such approach allows the identification of the key parameters impacting model performance . global sensitivity analysis( gsa ) aims at ranking the input parameters variability effects on model s output variability .only few studies have performed gsa on hydraulic models . indeed , this type of approach is not straight forward , as it requires adaptation of methods and tools development .these aspects being time consuming , not without standing the heavy computational cost of such type of approach , it represents an important investment for applied practitioners community and is therefore still at an exploratory level . nevertheless , dealing with uncertainties in hydraulic models is a press - forward concern for both practitioners and new guidance . in practical flood event modelling applications ,mostly uses standard industrial codes relying on shallow water equations ( swes ) either in 1d ( _ e.g. _ mascaret , mike 11 _ etc . _ ) or 2d ( _ e.g. _ telemac 2d , mike 21 , isis2d _ etc .the aim of using hydraulic models is to provide information on simulated flood event properties such as maximal water depth or flood spatial extent .eventually outputs of the models are used for design or safety assessment purpose . in hydraulics , deterministic mathematical modelsaim at representing natural phenomena with different levels of complexity in the mathematical formulation of the physical phenomena depending on underlying simplifying assumptions .these assumptions will influence the domain of validity and of application of the models .consequently , it will impact accuracy standards which should be expected from models results .an analytical solution to swe exists from a mathematical point of view only when the problem is well - posed , which is generally not the case in practical river flood modelling engineering applications .moreover , equations are resolved using computer codes , which will discretely approach the continuous solutions of these equations ( when mathematically existing ) .numerical approach implemented in these codes can be various and different level of accuracy can be expected depending on the numerical method . in deterministic hydraulic codes ,input parameters are variables which are known with a certain level of confidence .eventually , modeller choices to design models and computation optimization can introduce high variability in results . in hydraulic models , sources of uncertainties can be classified in three categories : ( ) hypothesis in mathematical description of the natural phenomena , ( ) numerical aspects when solving the equations , and ( ) input parameters of the model .the uncertainties related to input parameters are of prime interest for applied practitioners willing to decrease uncertainties in theirs models results .hydraulic models input parameters are either of hydrological , hydraulic , topographical or numerical type .identification , classification and impact quantification of sources of uncertainties , on a given model output , are a set of analysis steps which will enable to ( ) analyze uncertainties behavior in a given modelling problem , ( ) elaborate methods for reducing uncertainties on a model output , and ( ) communicate on relevant uncertainties . 
uncertainty analysis ( ua ) and sensitivity analysis ( sa ) methods are useful tools , as they allow robustness of model predictions to be checked and help to identify input parameters influences .ua consists in the propagation of uncertainty sources through the model , and then focus on the quantification of uncertainties in model output and propagating them through the model predictions .it allows robustness of model results to be checked .various methods are then available to rank parameters regarding their impact on results variability ( such as sobol index ) .this process goes one step beyond ua and constitutes a global sensitivity analysis ( gsa ) . in practice ,such type of approach is of a great interest , but is still at an exploratory level in applied studies relying on 2d swe codes .indeed , gsa approach implementation is challenging , as it requires specific tools and deals with important computational capacity .with 1d free surface hydraulic codes , applied tools and methodology for uncertainty propagation and for gsa have been developed by irsn and companie nationale du rhne ( cnr ) .the purpose of the study presented in this paper is to apply a protocol and to provide a tool , allowing adaptable and ready - to use gsa for 2d hydraulic modelling applications .sections developed in this paper present the concept of gsa applied to 2d hydraulic modelling approach through the presentation ( ) of the gsa concept , ( ) of implemented protocol and ( ) of developed operational tools .a proof of concept to illustrate feasibility of the approach is given , based on a 2d flood river event modelling in nice ( france ) low var valley .a regular sensitivity analysis aims to study how the variations of model input parameters impact the models outputs .objectives with this approach are mostly to identify the parameter or set of parameters which significantly impact models outputs .for instance , sa screening approach is a variance - base method which allows to identify input variables which have negligible effects , from those which have linear , non - linear or combinatory effects significantly impacting variability of results output . this can be useful for models with a large set of input parameters assumed to introduce uncertainty , to reduce the number of input parameters to consider in a given sa study .local sa focuses on fixed point in the space of the input .the aim is here to address model behavior near parameters nominal value to safely assume local linear dependence on the parameter .more details about these sa methods and their application to practical engineering problems can be found in .a gsa aims to quantify the output uncertainty in the input factors , given by their uncertainty range and distribution . 
to do so ,the deterministic code ( 2d hydraulic code in our case ) is considered as a black box model as described in : where is the model function , are independent input random variables with known distribution and is the output random variable .the principle of gsa method relies on estimation of inputs variables variance contribution to output variance .a unique functional analysis of variance ( anova ) decomposition of any integral function into a sum of elementary functions allows to define the sensitivity indices as explained in .sobol s indices are defined as follow : /var(y).\label{eq : sobol}\ ] ] to implement a gsa approach , it is necessary ( ) to identify inputs and assess their probability distribution , ( ) to propagate uncertainty within the model and ( ) to rank the effects of input variability on the output variability through functional variance decomposition method such as calculation of sobol indices .the first two steps constitute an uncertainty analysis ( ua ) which is a compulsory stage of the gsa . for the first step of the ua, each input parameter is considered as a random value where both , choice of considered input parameter and choice of probability distribution of the input random values , have to be set up .the assessment of selected uncertain parameters and their probability distribution is completed according to expert opinion or by statistical analysis for measured values if sufficient measured data sets are available .the first two steps lead to define probability density function constructed to represent uncertainty of selected input parameters for the study .it has to be emphasized that this step of the gsa process is important and strongly subjective .propagation of uncertainty is then required ( step of the ua ) , all sources of uncertainties are varied simultaneously , which is classically done using monte - carlo techniques or more parsimonious monte - carlo like approach .controlling the convergence of the uncertainty propagation gives an idea if the sample of simulations is large enough to allow consistent statistical analysis . in practice , convergence of estimated sensitivity indices and their confidence interval can be plotted and examined visually .nevertheless , the decision whether the level of convergence is satisfactory or not , depends on arbitrary decision of the operator regarding desired accuracy and confidence interval on the accuracy .eventually , gsa can be performed to calculate sobol index .first - order sobol index indicates the contribution to the output variance of the main effect of each input parameters .total - order sobol index measures the contribution to the output variance including all variance caused by interactions between uncertain input parameters .the production of sobol index spatial distribution map is promising .moreover , such maps have been done in other application fields such as hydrology , hydrogeology and flood risk cost estimation .an overview of gsa approach is presented in fig .this type of general protocol for gsa has already been applied to 1d hydraulic model .common aspects arise between 1d and 2d models hydraulic application to implement a gsa approach and are described step by step below : the first step ( step , in fig .[ fig1 ] ) , identifies the input parameters of the hydraulic code to be considered for the analysis . 
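A generic Monte-Carlo estimator of the first-order Sobol indices defined above can be sketched as follows, with a cheap analytical toy function standing in for the 2D hydraulic code; this is the standard pick-freeze construction, not the estimator implemented in promthe.

```python
import numpy as np

rng = np.random.default_rng(42)

def first_order_sobol(model, sampler, n=100_000):
    """Estimate S_i = Var[E(Y | X_i)] / Var(Y) by the pick-freeze construction:
    two independent input matrices A and B, plus matrices AB_i equal to A with
    column i taken from B."""
    A, B = sampler(n), sampler(n)
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)
    S = []
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S.append(np.mean(yB * (model(ABi) - yA)) / var_y)
    return np.array(S)

# toy model standing in for the hydraulic black box Y = f(X1, X2, X3)
model = lambda X: X[:, 0] + 2.0 * X[:, 1] + 0.1 * X[:, 0] * X[:, 2]
sampler = lambda n: rng.uniform(0.0, 1.0, size=(n, 3))
print(first_order_sobol(model, sampler))   # roughly [0.22, 0.78, 0.00]
```

Total-order indices can be estimated from the same pair of input matrices at little extra cost, which is why this sample design is commonly used when both first- and total-order indices are wanted.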
in hydraulic models , the mains input parameters to consider are : ) geometric parameters , spatial discretization , hydraulic and numerical parameters .the geometric parameters , in 1d models , are represented as cross section , whereas in 2d models digital elevation model ( dem ) are included as component of geometric parameter .another important geometrical aspect to consider as an input parameter introducing uncertainty is the spatial discretization .the spatial discretization can be considered as a geometrical and a numerical parameter .indeed spatial discretization impacts both the level of details of the geometry included in calculation and numerical stability of calculation as , ( and in 2d ) impact cfl criterion .hydraulic parameters are mainly initial , boundary conditions and energy loses coefficient .such parameters are introduced in models under the form of water level , discharge or flow velocity field or friction coefficients .geometric and hydraulic variables can be measured or estimated ; as a result they are often subject to a high level of uncertainty .numerical parameters are related to numerical method and solver included in numerical scheme . broadly speaking, input numerical parameters will be those related to cfl number ( and ) , parameters impacting accuracy of solver ( numbers of iteration or order of solver method for instance ) as well as parameters impacting numerical diffusion and dispersion .once input parameters are selected , a probability density function has to be attributed to each parameter , in order to distribute input parameters .as previously mentioned , probability density function are most often related to expert opinion .the most common used distributions in hydraulic studies are normal or triangular distribution .uniform distribution is used as well , when no clear idea can be made up regarding the distribution of a given parameter variability . by assuming equi - probability of the realization of a variable , uniform distributionwill then maximize the uncertainty ( entropy ) of the input parameter . to apply a gsa ,the input parameters are assumed to be independent .this specific point should be considered at this stage of the gsa , as sometimes selected input parameters can be strongly correlated in some cases .for example , in hydraulic model dependences between flow and linear energy loses properties ( represented by chzy , manning or strickler law ) are present . the second step ( step , in fig . [ fig1 ] )results in the propagation of the distributed input parameters within the selected hydraulic model .hydraulic codes ( 1d and 2d ) are based on the same sets of equations which are the swe , as written as follow in 2d : where the unknowns are the velocities and ] , and where ( ) is the opposite of the slope and ( ) the linear energy losses in -direction ( resp .-direction ) . therefore ,underlying hypothesis in the mathematical description of the physical process are similar for 1d and 2d models : ( ) uniform ( and uniform in 2d ) velocity for a given mesh cell , ( ) horizontal free surface at a given cell , ( ) vertical hydrostatic pressure and ( iv ) energy losses are represented through chzy , manning or strickler formulas . as previously mentioned , the model is considered as a black box as described concerning for application of the gsa . 
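For completeness, the 2D shallow water system referred to above reads, in its standard conservative form with topography and friction source terms (this is the textbook form; the notation may differ slightly from the original):

\begin{aligned}
&\partial_t h + \partial_x (hu) + \partial_y (hv) = 0,\\
&\partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2}gh^2\right) + \partial_y (huv) = gh\,(S_{0,x} - S_{f,x}),\\
&\partial_t (hv) + \partial_x (huv) + \partial_y\!\left(hv^2 + \tfrac{1}{2}gh^2\right) = gh\,(S_{0,y} - S_{f,y}),
\end{aligned}

where h is the water depth, u and v the depth-averaged velocities, g the gravitational acceleration, S_{0,x} and S_{0,y} the opposite of the bed slopes, and S_{f,x} and S_{f,y} the linear energy-loss (friction) slopes given by a Chézy, Manning or Strickler law.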
for each specific source of uncertainty , independent realizationsare generated using monte - carlo techniques .the number of realizations has to be large enough to reach convergence of the interest variable .histograms are commonly plotted to ensure that output parameters follow a normal distribution .moreover , spatially distributed results of the variable of interest can then be analyzed .the third step ( step , in fig .[ fig1 ] ) and final step of gsa approach is the calculation of the sobol indices and the evaluation of the model outputs robustness .analysing gsa results , the model user has a better understanding and quantification of its models uncertainties . to apply a gsa with 2d hydraulic models ,a coupling between promthe a code allowing a parametric environment of other codes , has been performed with fullswof_2d , a two - dimensional swe based hydraulic code .the coupling procedure has taken advantage of previous coupling experience of promthe with 1d swe based hydraulic code .the coupled code promthe - fullswof ( p - fs ) has been performed on a hpc computation structure .fullswof_2d ( full shallow water equation for overland flow in 2 dimensions ) is a code developed as a free software based on 2d swe . in fullswof_2d , the 2d swe are solved thanks to a well - balanced finite volume scheme based on the hydrostatic reconstruction . the finite volume scheme , which is suited for a system of conservation low , is applied on a structured spatial discretization , using regular cartesian mesh . for the temporal discretization ,a variable time step is used based on the cfl criterion .the hydrostatic reconstruction ( which is a well - balanced numerical strategy ) allows to ensure that the numerical treatment of the system preserves water depth positivity and does not create numerical oscillation in case of a steady states , where pressures in the flux are balanced with the source term here ( topography ) .different solvers can be used hll , rusanov , kinetic , vfroe combined with first order or second order ( muscl or eno ) reconstruction .fullswof_2d is an object oriented software developed in c++ .two parallel versions of the code have been developed allowing to run calculations under hpc structures .promthe software is coupled with fullswof_2d .promthe is an environment for parametric computation , allowing carrying out uncertainties propagation study , when coupled to a code .this software is an open source environment developed by irsn ( http://promethee.irsn.org/doku.php ) .the main interest of promthe is the fact that it allows the parameterization of any numerical code .also , it is optimized for intensive computing resources use .moreover , statistical post - treatment can be performed using promthe as it integrates r statistical computing environment .the coupled code promthe / fullswof ( p - fs ) is used to automatically launch parameterized computation through r interface under linux os .a graphic user interface is available under windows os , but in case of large number of simulation launching , the use of this os has shown limitations as described in .a maximum of calculations can be run simultaneously , with the use of `` daemons '' .map during an extreme flood event scenario simulated for the low var river valley using fullswof_2d.,scaledwidth=98.0% ] to test the gsa protocol and p - fs tools , uncertainties introduced in 2d hydraulic models by geometric input parameters related to hr topographic data use have been studied .more specifically this case focuses on 
uncertainty related to 3d classified data use within hydraulic 2d models . in ,the case study is introduced and the outcomes of the gsa applied to uncertainties related to high resolution topographic data use with 2d hydraulic codes are presented in detail .it has to be reminded that the scope of present paper is to give a proof of concept of possibility to apply protocol and to use developed tools for a gsa with 2d swe based hydraulic codes .the main characteristics of that case study are summarized in the current section .the flood event of the var river valley has been modelled .the study area is the lower part of the var river valley , where major inundation occurred during the event .as our analysis focus on uncertainties related to geometric input parameters , hydrologic and hydraulic parameters are treated as constant .an illustration of the maximal water depth ( ) computed in this area for the given hydraulic scenario is illustrated in fig .[ fig2 ] .[ fig3 ] illustrates the gsa approach adapted to the tested proof - of - concept 2d hydraulic modelling study .the three input parameters related to the geometric input parameters are ( ) the topographic input ( called var . ) and ( ) two numerical parameters chosen by modellers ( called var . and var . ) when willing to use hr 3d classified data are considered in this gsa practical case . represents measurement errors in topography . and var . , are discrete values representing operator choices , which are respectively concrete elements structures included in dem ( buildings , walls , street elements _ etc . _ ) and structured mesh resolution .these three parameters var . , and , are assumed independent . is different occurrences of spatially homogenous resolution maps of errors where each cell of the error grids follows a normal probability density function . a given occurrence of var . is the addition of one of the error grid to a reference high resolution dtm . can have discrete values depending on modeler choices when including hr concrete above ground elements to dem .the stands for a row dtm , for a dem which encompasses building elevation information , stands for a dem including elevation information plus concrete walls elevations and eventually , is a dem which including information plus concrete street elements elevation information . can have 5 discrete values being , , , or , representing spatial discretization choice . as the number of considered parameters is limited ,sa screening methods are not considered .the output of interest is the overland flow water surface elevation ( ) reached at points and areas of interest .the gsa ranks influence of selected input parameters variability over variability of .the introduced gsa approach has been followed with a two - dimensional hydraulic model .the coupled tool has been set up on the hpc msocentre and the p - fs couple is transposable on any hpc structures . using r commands , the simulations were launched .however , the calculation running time of our simulation is very long and increases considerably when the mesh resolution increases .the computing cpu cost is respectively , , , , hours for , , , , resolution grid .p - fs tool successfully allow applying gsa protocol to 2d hydraulic modelling .so far , highest resolution ( and ) simulations are not fully completed yet , due to a prohibitive computational time . 
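once such a set of runs is available, the first-order sobol indices discussed in the post-treatment below can be estimated with a standard pick-and-freeze (saltelli-type) scheme; the sketch below is a minimal illustration in which an analytic function on unit-uniform inputs stands in for a p-fs run, and is not the authors' r-based implementation:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def model(x):
    # analytic stand-in for one 2d hydraulic simulation (water surface
    # elevation at a point of interest as a function of three inputs)
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

def first_order_sobol(model, n, k):
    A = rng.uniform(0.0, 1.0, size=(n, k))
    B = rng.uniform(0.0, 1.0, size=(n, k))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                               # resample only parameter i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y    # saltelli-type estimator
    return S

print(first_order_sobol(model, n=50000, k=3))
```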
at these resolutions running simulations on more than cpu is necessary .improvement of parallel version of p - fs is in progress to allow a single simulation to run over more than one node .the r environment allowed performing post - treatment over the results to analyze them efficiently ( see section [ subsec : results - case ] ) .the limit of the approach is the computational time required to run the large number of simulations . in our case , simulations have been run to generate a large set of results , which were sampled afterward , using monte - carlo approach to calculate sobol indices .moreover , generation of dem can be time consuming and can not be entirely generated automatically following chosen probability distribution functions and especially when parameters are discrete . as a result , they have been created before the propagation has been carried out .this remark has to be extended as a current limit of the tool to generate variable spatial input for 2d hydraulic gsa .this requires further development .[ fig4 ] introduces the typical results that are obtained when a gsa is performed .convergence test can be illustrated by the evolution of mean value and the confidence interval ( ci ) when size increases ( fig .[ fig4].a ) .[ fig4].b and [ fig4].b respectively illustrate probability density function analysis of output ( ) when only one parameter ( var . ) is varying ( fig .[ fig4].b ) , and scatter plotting use for distribution analysis when two parameters ( var . and ) are varying ( fig . [ fig4].b ) .first - order sobol indices are represented in fig .[ fig4].c .total order sobol index can be computed as well .these results have been obtained using r scripts , which were written on purpose for such type of analysis using existing r functions .nevertheless , it has to be emphasized that limits of the approach are ( ) needs to fully integrate specialization possibility in p - fs tool for spatially varying input parameters , ( ) computational time required for such an approach application over resources demanding 2d models can be prohibitive for applied practitioner and ( ) spatialization of output statistical analysis still requires few post treatment development to allow a fully holistic spatial gsa application for 2d hydraulic models. eventually an extra round of analysis and research has to be effectuated using feedback of first results of this approach to allow improvement of different steps of gsa for 2d hydraulic models ( regarding identification of parameters independence for instance ) .in this paper , a gsa framework to investigate the impact of uncertainties of deterministic 2d hydraulic models input parameters has been developed and tested over the low var river basin .a coupled tool promthe - fullswof_2d ( p - fs ) has been elaborated over a standard high performance computation ( hpc ) structure .this tool allows parameterization and automatic launching of simulation for uncertainty propagation step of a gsa .achieved test on low var valley constitutes a proof of concept study which has put to the light the promising possibilities of such an approach for identification of most influent uncertain input parameters .indeed , it is possible to go all the way through gsa protocol and after convergence checking , ranking of influent uncertain input parameter is possible .p - fs is ready to use and easily compatible with most of hpc structures .limits and possible improvements of our protocol and tool can be emphasized as follow : 1 . 
generally speaking , efforts are required for characterization of input parameters spatial variability . this step of the process can be time consuming and his application in p - fs tool might not be straight forward .2 . required computational resources to proceed to this type of study are considerable . here, for the finest resolutions ( and ) , we had to consider to increase the number of cpus possible to use for a given simulation .this will enable to reduce running time of the simulations .3 . the next step is to carry out sobol index map to illustrate possibilities of protocol and tool use combination with cross input parameter output variations spatial analysis .photogrametric and photo - interpreted dataset used for this study have been kindly provided by nice cte dazur metropolis for research purpose .this work was granted access to ( i ) the hpc and visualization resources of `` centre de calcul interactif '' hosted by `` universit nice sophia antipolis '' and ( ii ) the hpc resources of aix - marseille universit financed by the project equip ( anr-10-eqpx-29 - 01 ) of the program `` investissements davenir '' supervised by the agence nationale pour la recherche . technical support for codesadaptation on high performance computation centers has been provided by f. lebas . , h. coullon and p. navarro .abily , m. , delestre , o. , gourbesville , p. , bertrand , n. , duluc , c .- m . , and richet , y. ( 2016 ) . global sensitivity analysis with 2d hydraulic codes : application on uncertainties related to high - resolution topographic data . in gourbesville ,p. , cunge , j. , and caignaert , g. , editors , _ advances in hydroinformatics - simhydro 2014 _ , springer water , pages 301315 .springer singapore .alliau , d. , de saint seine , j. , lang , m. , sauquet , e. , and renard , b. ( 2015 ) .etude du risque dinondation dun site industriel par des crues extrmes : de lvaluation des valeurs extrmes aux incertitudes hydrologiques et hydrauliques ., 2:6774 . , coullon , h. , delestre , o. , laguerre , c. , le , m. h. , pierre , d. , and sadaka , g. ( 2013 ) .fullswof paral : comparison of two parallelization strategies ( mpi and skelgis ) on a software designed for hydrology applications ., 43:5979 .cunge , j. a. ( 2014 ) .what do we model ? what results do we get ?an anatomy of modelling systems foundations . in gourbesville , p. , cunge , j. , and caignaert , g. , editors ,_ advances in hydroinformatics _ , springer hydrogeology , pages 518 .springer singapore .delestre , o. , cordier , s. , darboux , f. , du , m. , james , f. , laguerre , c. , lucas , c. , and planchon , o. ( 2014 ) .fullswof : a software for overland flow simulation . in gourbesville ,p. , cunge , j. , and caignaert , g. , editors , _ advances in hydroinformatics _, springer hydrogeology , pages 221231 .springer singapore .goutal , n. , lacombe , j .- m . , zaoui , f. , and el - kadi - abderrezak , k. ( 2012 ) . : a 1-d open - source software for flow hydrodynamic and water quality in open channel networks . in murillomuoz , r. , editor , _ river flow 2012 : proceedings of the international conference on fluvial hydraulics , san jos , costa rica , 5 - 7 september - volume 2 _ , pages 11691174 .crc press .
global sensitivity analysis ( gsa ) methods are useful tools to rank input parameters uncertainties regarding their impact on result variability . in practice , such type of approach is still at an exploratory level for studies relying on 2d shallow water equations ( swe ) codes as gsa requires specific tools and deals with important computational capacity . the aim of this paper is to provide both a protocol and a tool to carry out a gsa for 2d hydraulic modelling applications . a coupled tool between promthe ( a parametric computation environment ) and fullswof_2d ( a code relying on 2d swe ) has been set up : promthe - fullswof_2d ( p - fs ) . the main steps of our protocol are : ) to identify the 2d hydraulic code input parameters of interest and to assign them a probability density function , ) to propagate uncertainties within the model , and ) to rank the effects of each input parameter on the output of interest . for our study case , simulations of a river flood event were run with uncertainties introduced through three parameters using p - fs tool . tests were performed on regular computational mesh , spatially discretizing an urban area , using up to million of computational points . p - fs tool has been installed on a cluster for computation . method and p - fs tool successfully allow the computation of sobol indices maps . uncertainty , flood hazard modelling , global sensitivity analysis , 2d shallow water equation , sobol index . les mthodes danalyse de sensibilit permettent de contrler la robustesse des rsultats de modlisation ainsi que didentifier le degr dinfluence des paramtres dentre sur le rsultat en sortie dun modle . le processus complet constitue une analyse globale de sensibilit ( gsa ) . ce type dapproche prsente un grand intrt pour analyser les incertitudes de rsultats de modlisation , mais est toujours un stade exploratoire dans les tudes appliques mettant en jeu des codes bass sur la rsolution bidimensionnelle des quations de saint - venant . en effet , limplmentation dune gsa est dlicate car elle ncessite des outils de paramtrage automatisable spcifique et requiers dimportante capacit de calcul . lobjectif de cet article est de prsenter un protocole et des outils permettant la mise en uvre dune gsa dans des applications de modlisation hydraulique 2d . un environnement paramtrique de calcul ( promthe ) et un code de calcul bas sur les quations de saint - venant 2d ( fullswof_2d ) ont t adapts pour dvelopper loutil promthe - fullswof_2d ( p - fs ) . un prototype de protocole oprationnel pour la conduite dune gsa avec un code de calcul dhydraulique surface libre 2d est prsent et appliqu un cas test de crue fluvial en milieu urbain . les tapes du protocole sont : ) lidentification des paramtres hydrauliques 2d dintrt et lattribution dune loi de probabilit aux incertitudes associes ces paramtres , ) la propagation des incertitudes dans le modle , et ) le classement des effets des incertitudes des paramtres dentres sur la variance de la sortie dintrt . pour le cas test , des simulations dun scnario ont t effectues avec une incertitude porte sur trois paramtres dentre . loutil p - fs a t utilis sur un cluster de calcul et est aisment transposable sur dautres architectures de calcul intensif . le protocole de gsa et p - fs ont permis de produire des cartes dindices de sobol afin danalyser la variabilit spatiale des contributions des paramtres incertains . 
uncertainty, extreme flood modelling, sensitivity analysis method, 2d shallow water equations, sobol indices.
supertree methods combine the evolutionary relationships of a set of phylogenetic trees * g * , into a single tree * t * .these methods differ from the consensus - based approaches , by allowing input trees to have different but overlapping set of taxa .supertrees are useful in combining input trees generated from completely incomparable approaches , such as statistical analysis of discrete dataset and distance analysis of dna - dna hybridization data .input trees often exhibit conflicting topologies , due to different evolutionary histories of respective genes , stochastic errors in site and taxon sampling , and biological errors due to paralogy , incomplete lineage sorting , or horizontal gene transfer .supertree methods quest for resolving such conflicts in order to produce a ` median tree ' , which minimizes the sum of a given distance measure with respect to the input trees .large scale supertrees are intended towards assembling the _ tree of life _ .our earlier work , and the study in , provide a comprehensive review of various supertree methods . _indirect _ supertree methods first generate intermediate structures like matrices ( as in mrp , minflip , sfit ) or graphs ( as in mincut ( mc ) , modified mincut ) from the input trees , and subsequently resolve these intermediate structures to produce the final supertree . these methods , especially mrp , are quite accurate , but computationally intensive ._ direct _ methods , on the other hand , derives the supertree directly from input tree topologies .these methods may aim for minimizing either the sum of false positive ( fp ) branches ( as in the _ veto _ approaches like physic , scm ) or the sum of robinson - foulds ( rf ) distance ( as in rfs ) between * t * and * g*. another approach named superfine employs greedy heuristics on mrp or quartet maxcut ( qmc ) , to derive the supertree , which may not be completely resolved .supertrees formed by synthesizing the subtrees ( such as triplets , quartets ) of the input trees , exhibit quite high performance .but , time and space complexities of these methods depend on the size of the subtree used .we have previously developed cospedtree , a supertree algorithm using evolutionary relationships among individual pair of taxa ( couplets ) .the method is computationally efficient , but produces somewhat conservative ( not fully resolved ) supertrees , with low number of false positive ( fp ) but high number of false negative ( fn ) branches between * t * and * g*. here we propose its improved version , termed as cospedtree - ii , which produces better resolved supertree , with lower number of fn branches between * t * and * g * , keeping the fp count also low .we have also proposed a mechanism to convert a non - resolved supertree into a strict binary tree , to reduce the fn count .cospedtree - ii requires significantly lower running time than cospedtree and most of the reference methods , particularly for the datasets having high number of trees or taxa .rest of this manuscript is organized as follows .first , we review the basics of cospedtree ( as in ) in section [ sec : overview_cosped ] .the method cospedtree - ii is then described in section [ sec : methodology ] .performance of cospedtree - ii is summarized in section [ sec : results ] .let * g * consist of rooted input trees . 
for an input tree ( ) , let be its set of constituent taxa .suppose a pair of taxa and belong to .further , let and be the parent internal nodes ( points of speciation ) of and , respectively .cospedtree defines four boolean relations ( ) between and , with respect to , as : 1 .earlier speciation of than * ( ) * is true , if is ancestor of in . for the tree in fig .[ fig : source_2 ] , is true , where .similarly , is true for .later speciation of than * ( ) * is true , if is a descendant of .so , and are equivalent .simultaneous speciation of and * ( ) * is true , if = . in fig .[ fig : source_1 ] , and are true .4 . incomparable speciation of and * ( ) * is true , when and occur at different ( and independent ) clades . for the tree in fig .[ fig : source_1 ] , is true . using another taxon ,properties of to can be stated as the following : p1 : : : both and are transitive .thus , + * & .* & .p2 : : : is an equivalence relation . p3 : : : (= ) & , where .p4 : : : (= ) & (= ) (= ) ._ support tree set _ for a couplet ( ) is defined as : the _ frequency _ ( ) of a relation between a couplet ( ) is the number of input trees where and is true .the _ set of allowed relations _ between a couplet ( ) is defined as the following : a couplet ( ) exhibits _ conflict _ if ( where denotes the cardinality of a set ) .the _ consensus relation _ between ( ) is the relation having the maximum frequency ._ priority _ measure for a relation ( ) between a couplet ( ) is defined as the following : cospedtree also defines a _ support score _ for individual relations as the following : the consensus relation between a couplet ( ) exhibits the highest and values .so , corresponding also becomes the highest among all relations between ( ) .final supertree * t * _ resolves _ ( assigns a particular relation to ) individual couplet ( ) with a single relation ( ) between them .maximum agreement property of a supertree quests for resolving individual couplets with their respective consensus relations .but , satisfying such property is np - hard since consensus relations among couplets can be mutually conflicting .thus , order of selection of individual candidate relations ( to resolve the corresponding couplet ( ) ) is crucial . in view of this, cospedtree first constructs a set of relations , such that if a relation , the couplet ( ) is resolved with . to construct , cospedtree applies an iterative greedy approach . at each iteration, it selects a relation to resolve ( ) among all unresolved couplets , provided : = .if the selected relation does not contradict with any of the already selected relations in ( according to the properties p1 to p4 mentioned before ) , it is included in .suppose , l(*g * ) = denotes the complete set of input taxa .then , = (*g*) . using the set of relations , cospedtree partitions l(*g * ) into mutually exclusive taxa clusters ,,, , with the following rule (details are provided in ) : r1 : : : if a pair of taxa and belong to the same cluster ( ) , .r2 : : : suppose and ( , ) are any two distinct taxa clusters. 
then , , and , , where .this property is denoted by saying that is true , or is related with via the relation .cospedtree creates a directed acyclic graph ( dag ) , whose nodes are individual taxa clusters ( ) .a directed edge from to means is true .however , occurrence of one or more of the following properties means this dag needs to be refined to form a tree : 1 .transitive parent problem ( tpp ) : for three nodes a , b , and c , when , , and are simultaneously true , as indicated in fig .[ fig : mult_par_1 ] .2 . multiple parent problem ( mpp ) : when , , and are simultaneously true , as shown in fig .[ fig : mult_par_2 ] .no parent problem ( npp ) ( fig .[ fig : disc_node ] ) : when a node does not have any parent , i.e. so , there exists no node such that is true .cospedtree applies transitive reduction to resolve tpp .the problem mpp is solved by arbitrary parent assignment , while npp is resolved by assigning one hypothetical root node to the isolated node ( as shown in fig .[ fig : leaf_reln_dff ] ) .finally , a depth first traversal of this dag produces the supertree * t*. as there is no restriction regarding the number of taxa in individual taxa clusters ( partitions with respect to the relation ) , * t * may not be strictly binary ( completely resolved ) .cospedtree - ii extends cospedtree by incorporating the following modifications : \1 ) cospedtree - ii skips the formation of .rather , the taxa clusters ( containing one or more taxon ) are first derived , solely by the frequencies of different relations between individual couplets .subsequently , directed edges between individual pairs of clusters are assigned , according to the properties of individual couplets contained within these cluster pairs .such processing on the taxa clusters , rather than the couplets , achieves high speedup and much lower running time .\2 ) in cospedtree , if a relation ( ) between a couplet ( ) is supported in a tree , the frequency is incremented by 1 .cospedtree - ii , on the other hand , uses fractional and dynamic frequency values . in the above case ,cospedtree - ii increments with a _ weight _ ( ) , which varies for individual couplets ( ) , and also for individual trees .\3 ) for the problem mpp , cospedtree - ii proposes a deterministic selection of the parent , for the node having multiple parents .\4 ) cospedtree - ii also suggests a mechanism to convert a non - binary supertree into a binary tree .subsequent sections describe all such improvements .cospedtree - ii applies a fractional frequency value if an input tree supports the relation between a couplet ( ) .value of depends on the set .utility of such a dynamic ( and fractional ) frequency measure is explained by fig .[ fig : reln_priority ] , which shows three input trees ( fig .[ fig : no_edge_1 ] to fig .[ fig : bidir_edge_1 ] ) and corresponding supertree ( fig . [ fig : ex_suptree ] ) . for the couplet ( a , c ) ,all of the relations , and are supported .however , we observe that the relation is supported only because corresponding tree does not include taxa b and d. similarly , the relation occurs due to the absence of the taxon d. when both b and d are present ( fig . 
[ fig : bidir_edge_1 ] ) , the relation ( which is the ideal relation between ( a , c ) ) is satisfied .so , the relation should be given higher weight , since the corresponding tree has higher taxa coverage .so , our proposed dynamic frequency measure varies according to the coverage of taxa of different input trees .considering an input tree ( ) and a couplet ( ) in , first we define the following notations : * : set of nodes ( leaf or internal ) of . * : lowest common ancestor ( lca ) of and in .* : subtree rooted at an internal node . * : set of taxa underlying . with such definitions , the set of _ excess taxa _( excluding the couplet itself ) underlying the lca node of ( ) in , is defined as the following : for ( ) , _ union of all excess taxa _ underlying the respective nodes for all , is : we assign the weight of a relation ( ) between ( ) in an input tree , as : where = 1 if = .thus , the weight equals the proportion of taxa within , that is covered in the input tree .frequency of the relation , is now redefined as the following : cospedtree creates taxa clusters after formation of the _ set of resolving relations_ .cospedtree - ii , on the other hand , creates taxa clusters before resolving any couplets at all .rather , for individual couplets ( ) , cospedtree - ii inspects the values of for individual relations ( ) .creation of taxa clusters requires identifying couplets which can be resolved by the relation .cospedtree - ii places a pair of taxa and in the same taxa cluster ( thereby resolving the couplet ( ) with the relation ) , provided : 1 .either = 1 and ( is already defined in eq .[ eq : allowed_reln ] ) .2 . or = 2 and is majority consensus . in such a case , .3 . if 2 , the couplet ( ) is not placed in the same taxa cluster , even if is majority consensus .this is because , as the couplet exhibits high degree of conflict , we check the relations between , , and other taxa set .the first condition is obvious .a couplet having only as its allowed relation would be preferably resolved with it . on the other hand ,if there exists one more relation ( ) within , we check whether , which ensures that is the majority consensus relation of ( ) . in such a case ,the couplet is highly probable of being resolved with in the final supertree . above mentioned heuristicsare applied for individual couplets , to perform the equivalence partitioning ( taxa clusters ) of the input taxa set l(*g * ) .creation of the taxa clusters is followed by the assignment of directed edges between them .as mentioned in section [ sec : overview_cosped ] , directed edge from a cluster to a cluster corresponds to the relation (= ) being true .in such a case , the cluster pair ( , ) is said to be _ resolved by the relation _ . 
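a minimal sketch of the cluster-formation conditions just described, using a union-find structure over the taxa (the data layout, and the way the weighted frequencies are passed in, are assumptions for the illustration, not the authors' code):

```python
class DisjointSet:
    def __init__(self, items):
        self.parent = {x: x for x in items}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def form_taxa_clusters(taxa, couplet_freq):
    """couplet_freq: dict (x, y) -> {relation: weighted frequency}.
    merge x and y when r3 is the only allowed relation, or when exactly two
    relations are allowed and r3 is the strict majority; couplets with more
    than two allowed relations are left for the later resolution stage."""
    ds = DisjointSet(taxa)
    for (x, y), freq in couplet_freq.items():
        allowed = [r for r, f in freq.items() if f > 0]
        if "r3" not in allowed:
            continue
        if len(allowed) == 1:
            ds.union(x, y)
        elif len(allowed) == 2 and freq["r3"] > max(f for r, f in freq.items() if r != "r3"):
            ds.union(x, y)
    clusters = {}
    for t in taxa:
        clusters.setdefault(ds.find(t), set()).add(t)
    return list(clusters.values())

print(form_taxa_clusters(["a", "b", "c"],
                         {("a", "b"): {"r3": 2.0, "r4": 0.5},
                          ("a", "c"): {"r1": 1.0},
                          ("b", "c"): {"r1": 1.0}}))   # -> clusters {a, b} and {c}
```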
in general , a pair of clusters can be resolved via any one of the relations , or ( no directed edge in this case ) .for individual relations ( ) , we define its frequency with respect to the pair of cluster ( ) , as the following : priority of individual relations ( ) for the cluster pair ( ) is defined as the following : support score of a relation between the cluster pair ( ) is defined as : note that we have used sum , rather than the product , of the priority and frequency measures .this is due to the disparity of signs of frequency ( which is always non - negative ) and the priority ( which can be negative even for a consensus relation ) measures .higher support score of a relation ( between a pair of clusters ) indicates higher frequency and priority of the corresponding relation .the set of support scores for different relations between individual cluster pairs is defined as follows : individual taxa clusters are now resolved by an iterative algorithm , using the set .each iteration extracts a relation ( ) ) from , provided the following : following conditions are checked to see whether the extracted relation can resolve the cluster pair ( , ) . 1 .if ( , ) is already resolved with a different relation , is not applied .if = 1 or 2 , resolving ( , ) with would create a directed edge between the cluster pair . if such an edge forms a cycle with the existing configuration of the taxa clusters , is not applied . for no such above mentioned conflicts, the relation is applied between and .the set is implemented as a max - priority queue , to achieve time complexity for extracting the cluster pair having the maximum support score .iterations continue until becomes empty .however , the final dag may still have the problems tpp , mpp , and npp ( as defined in fig .[ fig : forest_prop ] ) .the problem tpp is removed by transitive reduction ( already described in cospedtree ) .cospedtree - ii employs a better solution for the problem mpp , which is described in the following section . as shown in fig .[ fig : mult_par_2 ] , the problem mpp corresponds to a cluster having ( 2 ) other clusters as its parent , which are not themselves connected by any directed edges . the objective is to assign a unique parent ( ) to the cluster .such assignment was arbitrary in cospedtree .cospedtree - ii proposes a deterministic selection of , by a measure called the _internode count _ between a couplet ( ) , with respect to a rooted tree .the measure was introduced in for unrooted trees . here , the measure is adapted for a rooted tree , as the number of internal nodes between and through the node .as individual trees carry overlapping taxa subsets of l(*g * ) , we define a _ normalized internode count distance _ between and in as : where is defined in the eq .[ eq : weight_couplet_reln ] .so , becomes low only when both is low and is high ( when the tree carries higher proportion of the taxa subset belonging to ) .significance of the internode count distance can be explained by considering a rooted triplet ( shown in the newick format ) , consisting of three taxa , and .here , . in general , lower internode count means corresponding couplet is evolutionarily closer , compared to the other couplets . _ average internode count _ of a couplet , with respect to * g * , is defined by the following expression : the _ internode count distance between a pair of cluster _ and is defined by the following equation : where denotes the cardinality of the taxa cluster . 
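since the exact symbols of these definitions are lost in the extracted text, the sketch below restates the internode-count measure for a rooted tree stored as a child-to-parent dictionary, and assumes that the normalisation simply divides by the taxa-coverage weight of the previous section (consistent with the remark that the distance is low only when the count is low and the coverage is high):

```python
def ancestors(node, parent):
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def internode_count(x, y, parent):
    """number of internal nodes on the path between leaves x and y,
    counted through their lowest common ancestor (counted once)."""
    ax, ay = ancestors(x, parent), ancestors(y, parent)
    lca = next(n for n in ax if n in set(ay))
    path = ax[1:ax.index(lca) + 1] + ay[1:ay.index(lca)]
    return len(path)

def normalized_internode_count(x, y, parent, coverage_weight):
    # assumed normalisation: divide by the taxa-coverage weight w(x, y)
    # of the tree, so better-covered trees give smaller distances
    return internode_count(x, y, parent) / coverage_weight

# rooted triplet ((b, c), a) with root 'r' and internal node 'u' = (b, c)
parent = {"b": "u", "c": "u", "u": "r", "a": "r"}
print(internode_count("b", "c", parent), internode_count("a", "b", parent))   # 1 2
```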
for the mpp problem, cospedtree - ii selects the cluster ( ) as the parent of , provided that has the lowest internode count distance to : such condition is based on the assumption that the cluster pair having lower internode count , is possibly closer in the evolutionary tree , compared to other cluster pairs . after resolving the problemmpp , the refined dag is converted to the supertree * t * , by a depth first traversal procedure ( as described in cospedtree ) .however , the generated supertree * t * may not be completely resolved .cospedtree - ii proposes a refinement strategy which converts * t * into a strict binary tree .suppose , the tree contains an internal multi - furcating node of degree ( ) .let denote the taxa subsets descendant from it , where each taxa subset ( ) consists of one or more taxon named as , etc .union of these taxa subsets is represented by = .suppose , represents the input tree ( ) _ restricted _ to the set of taxa .thus , = .considering fig .[ fig : multi_furc_clust ] as an example , the node represents a multi - furcation with degree 4 .four taxa subsets a , b , c , and d , descend from r. here , .generation of a binary tree requires introducing bifurcations among these taxa subsets .so , for individual input trees , corresponding restricted input tree is produced , as shown in fig .[ fig : multi_furc ] .our proposed binary refinement first generates a tree from the tree , such that the leaves of represent individual taxa subsets ( ) . in other words , individual taxon in replaced by the name of its corresponding taxa subset ( without any duplicate ) .for example , both the taxa and ( belonging to the taxa subset ) are present in the tree ( as shown in fig .[ fig : multi_furc ] ) .so , in , a leaf node labeled is first inserted as a child of the lca node of and .subsequently , the leaves and are deleted from .this process is repeated for other taxa subsets , and as well .[ fig : multi_furc_refine ] shows the tree . for the current set of taxa , each of the input trees processed to generate the corresponding .these trees are then used as input to an existing triplet based supertree approach thtbr , to generate a supertree consisting of the taxa subsets as its leaves .the supertree method is selected since it processes rooted triplets , and generates a rooted output tree .the tree is used as a template , such that its order of bifurcation among individual taxa subsets is replicated to the original multi - furcating node and its descendants .as the degree of multifurcation ( in this case ) is much lower than the total number of taxa ( ) , construction of is very fast .this process is continued until all the multi - furcating nodes are resolved ..results for marsupials dataset ( = 158 , = 267 ) [ cols="<,<,<,<,<,<",options="header " , ] [ tab : perf_cospedtree_mammal ] for input trees covering a total of taxa , both cospedtree and cospedtree - ii incurs time complexity for extracting the couplet based measures from the trees .these methods differ in their subsequent steps .cospedtree first resolves individual couplets in time ( as shown in ) , and subsequently partitions the taxa set according to the relation , to form a dag containing ( ) nodes ( taxa clusters ) .formation of a supertree from this dag involves time complexity .cospedtree - ii , on the other hand , first forms the taxa clusters in time ( processing time for all couplets ) . 
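for concreteness, the priority-queue driven stage whose cost is analysed in the next paragraph can be sketched with python's heapq (scores are negated to obtain a max-priority queue); this is a simplified re-statement of the iterative resolution, not the authors' implementation:

```python
import heapq

def has_path(adj, src, dst):
    # depth-first reachability test, used to reject edges that would close a cycle
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(adj.get(node, ()))
    return False

def resolve_cluster_pairs(scores):
    """scores: list of (support_score, ci, cj, relation), relation in {'r1','r2','r4'}.
    returns the dag as a dict: cluster -> set of child clusters."""
    heap = [(-s, ci, cj, rel) for s, ci, cj, rel in scores]
    heapq.heapify(heap)
    adj, resolved = {}, set()
    while heap:
        _, ci, cj, rel = heapq.heappop(heap)
        if frozenset((ci, cj)) in resolved:
            continue                               # already resolved with another relation
        if rel == "r4":
            resolved.add(frozenset((ci, cj)))      # incomparable: no directed edge
            continue
        src, dst = (ci, cj) if rel == "r1" else (cj, ci)
        if has_path(adj, dst, src):
            continue                               # this edge would create a cycle
        adj.setdefault(src, set()).add(dst)
        resolved.add(frozenset((ci, cj)))
    return adj

print(resolve_cluster_pairs([(5.0, "A", "B", "r1"),
                             (3.0, "B", "C", "r1"),
                             (1.0, "C", "A", "r1")]))   # the last edge is rejected
```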
subsequently , support scores for individual relations between each pair of taxa clusters are placed in the max - priority queue . here ,size of is , considering as the number of taxa clusters . during each iteration ,maintaining the max - priority property of requires time .so , the complete iterative stage to resolve all pairs of clusters ( assigning connectivities between them ) involves time complexity . as in general, is considerably lower than , this iterative step in cospedtree - ii is much faster than cospedtree .resolving individual pair of clusters , rather than the couplets , enables cospedtree - ii to achieve a significant speedup .suppose , denotes the cardinality of a taxa cluster .so , for a pair of taxa clusters and , cospedtree resolves all couplets , and maintains their relations ( and the transitive connectivities inferred from these relations ) .but cospedtree - ii resolves and by processing only one relation between them .so , for this cluster pair , speedup achieved by cospedtree - ii is . for a total of taxa clusters , number of cluster pairsis .thus , overall speedup achieved by cospedtree - ii is . to derive the time complexity associated with the binary refinement of cospedtree - ii ,suppose is the number of internal nodes in * t * having degree .further , suppose ( ) denotes the maximum degree of multi - furcation among all of these nodes .in such a case , applying thtbr for a particular internal node involves maximum time complexity .so , overall complexity of the binary refinement stage is .cospedtree involves a storage complexity of , to store the couplet based measures .cospedtree - ii uses additional storage space for storing the set of excess taxa for individual couplets ( ) .as , the space complexity of cospedtree - ii is .both cospedtree and cospedtree - ii are implemented in python ( version 2.7 ) .tree topologies are processed by the phylogenetic library dendropy .a desktop having intel^^ quad coreintel^^ i5 - 3470 cpu , with 3.2 ghz processor and 8 gb ram , is used to execute these methods .cospedtree - ii is tested with the datasets like marsupials ( 267 taxa and 158 input trees ) , placental mammals ( 726 trees and 116 taxa ) , seabirds ( 121 taxa and 7 trees ) , temperate herbaceous papilionoid legumes ( thpl ) ( 19 trees and 558 taxa ) .work in modified these datasets by removing duplicate taxon names and few infrequent taxa information .we have also experimented with mammal dataset consisting of 12958 trees and 33 taxa .in addition , the dataset cetartiodactyla ( 201 input trees and 299 taxa ) is also tested .performance comparison between cospedtree - ii and the reference approaches , employs the following measures : 1 . *false positive distance * fp(*t * , ) : number of internal branches present in the supertree * t * , but not in the input tree .* false negative distance * fn(*t * , ) : number of internal branches present in but not in * t*. 3 . *robinson - foulds distance * rf(*t * , ) : defined as fp(*t * , ) + fn(*t * , ) .* maximum agreement subtree * mast(*t * , ) : let be the number of taxa contained in the maximum agreement subtree ( mast ) common to and .then , mast(*t * , ) = .this measure is computed using phylonet .above measures are accumulated for all the input trees , to be used as the final performance measures .supertree producing lower values of the sum of fp , fn , and rf values is considered better . 
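these branch-based measures can be computed from the sets of non-trivial clades once both trees are restricted to their common taxa; the sketch below restricts clades by set intersection rather than by properly pruning the supertree, which is a simplification of how the published values are obtained:

```python
def clades(tree, taxa):
    """tree: dict internal_node -> set of descendant leaf labels.
    return the non-trivial clades restricted to a common taxon set."""
    out = set()
    for leaves in tree.values():
        c = frozenset(leaves & taxa)
        if 1 < len(c) < len(taxa):
            out.add(c)
    return out

def fp_fn_rf(supertree, input_tree, input_taxa):
    sup = clades(supertree, input_taxa)   # supertree clades restricted to the input tree's taxa
    inp = clades(input_tree, input_taxa)
    fp = len(sup - inp)                   # in the (restricted) supertree but not in the input tree
    fn = len(inp - sup)                   # in the input tree but not in the supertree
    return fp, fn, fp + fn                # rf = fp + fn

sup = {"n1": {"a", "b"}, "n2": {"a", "b", "c"}, "n3": {"a", "b", "c", "d"}}
inp = {"m1": {"a", "b"}, "m2": {"c", "d"}, "m3": {"a", "b", "c", "d"}}
print(fp_fn_rf(sup, inp, {"a", "b", "c", "d"}))   # -> (1, 1, 2)
```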
on the other hand ,supertree having higher sum of mast score is considered superior .we have reported the results for the following two variations of cospedtree - ii : 1 . * cospedtree - ii * : produces supertree with possible multi - furcations .2 . * cospedtree - ii + b * : produces completely resolved binary supertrees , by applying the binary refinement suggested in section [ subsec : refine_tree ] .tables [ tab : perf_cospedtree_marsupials ] to [ tab : perf_cospedtree_mammal ] compare the performances of both of these variants , and with the reference approaches as well .reference methods marked with a symbol ` * ' , could not be executed in all datasets , either due to the unavailability of corresponding source code , or due to their very high computational complexity .in such a case , we have used their results ( both topological performance and running time ) published in the existing studies .the approaches mrp and superfine require paup * to execute , which is a commercial tool and not available to us .so , these methods could not be tested in all datasets .missing entries are indicated by ` - ' .the methods rfs and supertriplet produced errors in parsing few of the input datasets .entries showing ` er ' indicate these errors .supertrees generated by supertriplet could not be parsed by phylonet .so , we could not compute the mast scores for these trees .finally , a symbol ` f ' indicates that corresponding method could not produce a valid supertree for that dataset .results show that cospedtree - ii produces better resolved supertrees than cospedtree , as indicated by lower fn , and mostly lower rf values for individual datasets .cospedtree - ii also achieves higher mast scores for these datasets . cospedtree - ii+b produces completely resolved binary supertrees .so , the number of fn branches reduces .however , as the input trees may not be fully resolved ( may contain multi - furcating nodes ) , number of fp branches increases considerably . as cospedtree - ii+bproduces completely resolved supertrees , corresponding mast scores are much higher than cospedtree - ii .comparison with reference approaches shows that only rfs produces supertrees with consistently lower rf and higher mast scores than cospedtree - ii .the method superfine performs better than cospedtree - ii for the datasets seabirds and thpl , while our methods perform slightly better ( in terms of lower rf and higher mast score ) for the marsupials and placental mammals dataset .superfine does not always generate strictly binary ( completely resolved ) supertrees ( for example , in the thpl dataset ) , unlike cospedtree - ii+b .such a supertree exhibits much lower rf , but also much lower mast score ( compared to cospedtree - ii+b ) .matrix based methods like minflip , sfit , mmc , are outperformed by cospedtree - ii .veto approaches like scm , physic , produce supertrees with the lowest ( mostly zero ) fp branches , by not including any conflicting clades .in such a case , the number of fn branches becomes very high , and mast scores of these trees also become much lower .cospedtree - ii also produces significantly better results than mrp paup for all the datasets except cetartiodactyla .subtree decomposition based approaches like thspr , thtbr , produce slightly higher mast score values than cospedtree - ii , since these methods directly synthesize input triplets , or in general , subtree topologies . 
considering the measure rf , on the other hand , these methods are mostly outperformed by cospedtree - ii .tables [ tab : perf_cospedtree_marsupials ] to [ tab : perf_cospedtree_mammal ] express the running time of cospedtree - ii and cospedtree - ii+b for different datasets , in the formats ( a+b ) or ( a+b+c ) , respectively , where : 1 . a = time to extract the couplet based measures from the input trees . 2 .b = time to process the couplets and cluster pairs , to produce a ( possibly not binary ) supertree .c = time required to refine the non - resolved supertree into a strict binary tree .we observe that cospedtree - ii incurs a significant fraction of its running time in the stages a and c. the stage a depends on the processing speed of the python based phylogenetic library dendropy . on the other hand ,running time for the stage c depends both on the construction of from individual for all the multi - furcating nodes , and on the execution of thtbr .results show that cospedtree - ii incurs much lower running time than cospedtree . excluding the binary refinement stage , the running timeis decreased by a factor from 2 ( for the dataset mammal ) to 135 ( for the dataset cetartiodactyla ) . when the number of taxa is high ( such as marsupials , cetartiodactyla ) , cospedtree - ii exhibits much lower running time than the triplet based methods , , due to its lower time complexity . for datasets with large number of trees ,cospedtree - ii incurs slightly higher running time than these methods , due to the time associated in extracting the couplet based measures .we have proposed cospedtree - ii , an improved couplet based supertree construction method ( extending our earlier proposed method cospedtree ) .cospedtree - ii produces supertrees with lower topological errors , and incurs much lower running time ( compared to cospedtree ) . a binary refinement to generate a fully resolved supertree , is also suggested . due to its high performance and much lower running time, cospedtree - ii can be applied in large scale biological datasets .executable and the results of cospedtree - ii are provided in the link http://www.facweb.iitkgp.ernet.in/~jay/phtree/cospedtree2/cospedtree2.html .the first author acknowledges tata consultancy services ( tcs ) for providing the research scholarship . o. r. p.bininda - emonds , _ an introduction to supertree construction ( and partitioned phylogenetic analyses ) with a view toward the distinction between gene trees and species trees _ , pp . 4976 .springer , 2014 .v. ranwez , v. berry , a. criscuolo , p. h. fabre , s. guillemot , c. scornavacca , and e. j. p. douzery , `` physic : a veto supertree method with desirable properties , '' _ syst biol _ , vol .56 , no . 5 , pp .798817 , 2007 .u. roshan , b. moret , t. williams , and t. warnow , `` rec - i - dcm3 : a fast algorithmic technique for reconstructing large phylogenetic trees ., '' vol . 3 of _ proc .3rd computational systems bioinformatics conf .( csb04 ) _ , pp . 98109 , ieee , 2004 .m. wojciechowski , m. sanderson , k. steele , and a. liston , `` molecular phylogeny of the ` temperate herbaceous tribes ' of papilionoid legumes : a supertree approach , '' _ adv .legume syst _ ,vol . 9 , pp . 277298 , 2000 .s. a. price , o. r. p. bininda - emonds , and j. l. gittleman , `` a complete phylogeny of the whales , dolphins and even - toed hoofed mammals ( cetartiodactyla ) ., '' _ biological reviews _ , vol .80 , no . 3 , pp .445473 , 2005 .m. j. sanderson , m. j. donoghue , w. h. piel , and t. 
eriksson, ``treebase: a prototype database of phylogenetic analyses and an interactive tool for browsing the phylogeny of life,'' _american journal of botany_, vol. 81, no. 6, p. 183.
a supertree synthesizes the topologies of a set of phylogenetic trees carrying overlapping taxa sets. in the process, conflicts among the input tree topologies are to be resolved, ideally in favour of the consensus clades; this problem is proved to be np-hard. various heuristics based on local search, maximum parsimony, graph cuts, etc., lead to different supertree approaches, of which the most popular methods are based on analyzing fixed-size subtree topologies (such as triplets or quartets). the time and space complexities of these methods, however, depend on the subtree size considered. our earlier supertree method cospedtree uses the evolutionary relationships among individual couplets (taxa pairs) to produce somewhat conservative (not fully resolved) supertrees. here we propose its improved version, cospedtree-ii, which produces better resolved supertrees with fewer missing branches and incurs much lower running time. results on biological datasets show that cospedtree-ii belongs to the category of high-performance and computationally efficient supertree methods. phylogenetic tree, supertree, couplet, directed acyclic graph (dag), equivalence relation, transitive reduction, internode count.
in recent years interest has grown in the detection of very high energy cosmic ray neutrinos which offer an unexplored window on the universe .such particles may be produced in the cosmic particle accelerators which make the charged primaries or they could be produced by the interactions of the primaries with the cosmic microwave background , the so called gzk effect .the flux of neutrinos expected from these two sources has been calculated .this is found to be very low so that large targets are needed for a measurable detection rate .it is interesting to measure this neutrino flux to see if it is compatible with the values expected from these sources , with any incompatibility implying new physics .searches for cosmic ray neutrinos are ongoing in amanda , icecube , antares , nestor , nemo , km3net and at lake baikal detecting upward going muons from the cherenkov light in either ice or water . in general , these experiments are sensitive to lower energies than discussed here since the earth becomes opaque to neutrinos at very high energies .the experiments could detect almost horizontal higher energy neutrinos but have limited target volume due to the attenuation of the light signal in the media .the pierre auger collaboration , using an extended air shower array detector , are searching for upward and almost horizontal showers from neutrino interactions .in addition to these detectors there are ongoing experiments to detect the neutrino interactions by either radio or acoustic emissions from the resulting particle showers .these latter techniques , with much longer attenuation lengths , allow very large target volumes utilising either large ice fields or dry salt domes for radio or ice fields and the oceans for the acoustic technique . in order to test the feasibility of detecting such neutrinos by the acoustic technique it is necessary to understand the production , propagation and detection of the acoustic signal from the shower induced by an interacting neutrino in a medium .this has been treated in some detail in , however , in this treatment it is difficult to incorporate the true attenuation of the sound which has been found to be complex in nature in media such as sea water .such complex attenuation causes dispersion of the acoustic signal and complicates both the propagation of the sound through the water and the signal shape at the detectors .this paper is organised as follows . in section 2 the new approach to calculating the acoustic signal pressureis described and section 3 describes the methods used to model the attenuation of the sound as it propagates through the medium .section 4 describes the detailed calculations of the sound signal in water and in ice as it arrives at the detector starting from the shower simulations described in .finally a new method of simulating signals incorporating shower - to - shower fluctuations is described in section 5 .the standard equations used to determine the thermo - acoustic integrals are outlined in . in this paperwe use a complementary approach . for the thermo - acoustic mechanism even though it is the pressure , , that is detected it is the volume change , , which couples to the velocity potential , , which in turn creates the sound wave .that is detected , it is the current density , , which couples to the magnetic vector potential , , which in turn creates the electromagnetic wave . 
]interestingly , the velocity potential as a concept precedes the magnetic vector potential by over 100 years and was introduced by euler in 1752 .three of the most important variables in acoustics are the pressure change from equilibrium , _ p _ , the particle velocity , _ v _ , and the velocity potential , . assuming zero curl these three variables are related by : where is the density . for sources of acoustic energythis velocity potential has a function in acoustics equivalent to the magnetic vector potential in electromagnetism .we are trying to solced the wave equation to get the pressure pulse at the location of an observer placed at .this can be dome by integrating all the contributions of infinitessimally small sources at locations .for an observer at and a shower event at separated by a distance : where is the time rate of change of an infinitesimal volume and the velocity of sound . in our casewe are interested in integrating over the cascade volume where this volume change is caused by the injection of an energy density ( in ) over this cascade volume . for an infinitesimal volume the volume change starting at time , is given by : where the thermal expansion coefficient , the specific heat capacity , the density and the thermal time constant .the integral term is caused by cooling as the deposited energy , within the volume , conducts or convects away into the surrounding fluid .however as the time constant for this thermal cooling mechanism is of the order of tens of milliseconds , and as we are primarily interested in the case where the energy is injected nearly instantaneously in acoustic terms ( ns ) , the second term in equation 3 can be ignored as it is about six orders of magnitude lower than the first term .equation 3 can then be integrated over the entire volume cascade using green s functions ( see for example ) and yields for an observer at distance , , from the source : where is the retarded time , this simply implies that the contribution from each point in the source , in both space and time , travels to the listener with the speed of sound .equation 4 lends itself to efficient numerical solution .if the energy deposition is modelled using monte carlo points with a density proportional to energy , will be a scaled histogram of the flight times to the observer .the pressure can now be derived from equation 1 .a further simplification can be made if the energy deposition as a function of time is identical at all points in the volume .the velocity potential can then be calculated from the convolution integral : where is a dummy variable used to evaluate the integral .this can be simplified further since can be approximated by for the case we are interested in .the time dependence on the right hand side will disappear as the integral of a delta function is equal to one .this formulation of the thermo - acoustic mechanism leads to a solution in the far field which is proportional to the grneisen coefficient . 
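a minimal numerical sketch of this monte-carlo evaluation of equation 4 is given below; the material constants, the toy source geometry and the sign convention (the usual p = -rho dphi/dt) are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# illustrative, order-of-magnitude constants for sea water (assumptions)
beta, c_p, c_s, rho = 2.0e-4, 3.9e3, 1.5e3, 1.0e3   # 1/K, J/(kg K), m/s, kg/m^3
E_dep, n_mc = 1.0, 200_000                           # deposited energy (J), monte-carlo points

rng = np.random.default_rng(4)
# toy deposition: a thin, elongated gaussian "cascade" along the z axis
src = rng.normal(size=(n_mc, 3)) * np.array([0.01, 0.01, 5.0])   # sigmas in metres
observer = np.array([1000.0, 0.0, 0.0])                          # 1 km away, on the x axis

r = np.linalg.norm(src - observer, axis=1)
t_flight = r / c_s

# velocity potential as a scaled histogram of flight times, each point
# weighted by E_i / R_i (simple-source solution of the wave equation)
t_edges = np.linspace(t_flight.min() - 5e-5, t_flight.max() + 5e-5, 501)
dt = t_edges[1] - t_edges[0]
flux, _ = np.histogram(t_flight, bins=t_edges, weights=(E_dep / n_mc) / r)
phi = -(beta / (4.0 * np.pi * rho * c_p)) * flux / dt

t = 0.5 * (t_edges[:-1] + t_edges[1:])
p = -rho * np.gradient(phi, t)        # bipolar pressure pulse at the observer
```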
if , after an arbitrary 3d rotation , the observer is , for example , at a distance from the source along the axis much further away than the dimensions of the source , then with increasing observer distance , , the and contribution will more and more closely resemble delta functions as the spread in arrival times caused by their contributions approaches zero .equation 5 reduces to : where is the flight time from the centre of the shower to the observer .the velocity potential , , is simply a scaled cross section of the energy deposition and a projection of : the term is the grneisen coefficient and gives the relative acoustic pulse heights for different media . with this formulationthe far field solution in the absence of attenuation makes a number of predictions : 1 .the pressure pulse is a scaled derivative of the projection along the line of sight to the observer ; 2 .only the distribution along the line of sight is important .hence , for example , with an observer on the axis , a 1 j deposition into a gaussian sphere ( i.e. where , and are randomly distributed with a normal distribution with the same standard deviation , ) with cm , centred on the origin , will observe the same pulse as a tri - axial gaussian distribution ( as above where the standard deviations in and are different ) with cm , mm and m centred on the origin .this is the case when ; 3 .the amplitude of the pressure pulse will depend on where is the projected standard deviation in the acoustic pulse height as seen from the observer .hence an observer on the axis will see a pulse one hundred times greater in magnitude than one on the axis for the tri - axial distribution as described in item b ) above ; 4 . at angles greater than a few degrees from the planethe amplitude of the pressure pulse will depend on .this is illustrated in figure [ fig1 ] where each of the sub - figures illustrate the corresponding point in the list above .the acoustic pulse is affected by the medium through which it travels and it acts as a filter causing frequency dependent attenuation , .attenuation of acoustic pulses is caused by a combination of absorption and back scattering .the amplitude is attenuated by a factor given by : where is the distance travelled and is the attenuation coefficient ( nepers / m ) .the resultant pulse can be determined by converting the pulse into the frequency domain by taking a fourier transform , multiplying and taking the inverse fourier transform : where is the velocity potential with attenuation and .alternatively the effect of attenuation can be determined by taking the inverse fourier transform of the frequency characteristic ( i.e. the modification of the pulse between transmission and reception due to the medium ) and convolving this with the un - attenuated pulse : where is a dummy variable used to evaluate the integral and is the pulse created by passing a dirac delta function source through the medium and is referred to as the unit impulse response the equivalence of these methods is a statement of the convolution theorem : multiplication in the frequency domain is convolution in the time domain .values of the pressure attenuation length in sea water ( ice ) are 3.8 km ( 10 km ) at 10 khz and 2.6 km ( 8.5 km ) at 25 khz .attenuation lengths for intensity are a factor of two less . evaluating analytically is normally difficult .however for a liquid in which the attenuation is dominated by viscosity it is straight forward .this is the case for example in distilled water .it can be shown ( see , e.g. 
) that in this case the attenuation coefficient , , and attenuation , , for frequencies of interest , are given by : and is medium dependent .the attenuation is gaussian in shape with a standard deviation of which is proportional to . in distilled water , rad s and lehtinen et al . have used a value of rad s to approximate the attenuation of tropical sea water in the hz region .the unit impulse response is given by the inverse fourier transform as : and is also a gaussian distribution with a standard deviation which gets wider with the square root of the distance .indeed .if the source term is also gaussian the overall pulse profile can be evaluated .the convolution integral of two gaussian distributions yields a gaussian distribution with the standard deviations adding in quadrature . therefore if : then the pressure is given by : this is equivalent to equation 18 in reference , but is derived using a quite different approach .acoustic attenuation in seawater is almost totally caused by absorption . in the 1 - 100khz regionit is dominated by a chemical relaxation process that is connected with the association - dissociation of magnesium sulphate ( ) ions under the pressure of the sound wave .below 1khz a similar mechanism involving boric acid ( ) is responsible for much of the observed attenuation . taken together these mechanisms result in an attenuation of acoustic waves in seawater and a velocity of sound which are both frequency dependent . experimentally however whereas it is straightforward to estimate the magnitude of of the complex attenuation it is very difficult to determine the phase angle and no current measurements exist in the literature .the magnitude of the attenuation however is well measured and the definitive work in this area is considered to be that of francois and garrison . more recently , ainslie and mccolm have published a simplified parameterisation of the magnitude of the attenuation in sea water which maintains a similar accuracy to the parameterisations of francois and garrison .this is a function not only of frequency but also depends on depth , , salinity , , temperature , and ph .whereas there are no direct measurements of the attenuation angle , lieberman gives a clear presentation of the chemical processes causing the attenuation while niess and bertin have published a complex attenuation formula based on mediterranean conditions .here we present a complex version of the ainslie and mccolm formulation , which retains the attenuation magnitude but introduces the phase shifts predicted by lieberman .in essence the attenuation consists of three components , two of these are complex , high pass filters with cut off frequencies ( rad s ) for boric acid and ( rad s ) for magnesium sulphate .the third is the pure water component , which is real .the acorne parameterisation uses values that are the respective attenuation coefficients in db / km : in ice the mechanisms are less well understood .however following price in regions of deep ice of most interest for acoustic neutrino detection , e.g. the south pole , the dominant attenuation mechanisms below a few hundred khz are : absorption due to proton reorientation ( relaxation ) and scattering due to bubbles and grain boundaries. 
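whichever attenuation model is adopted (the real or complex sea-water parameterisations above, or the ice model discussed next), its effect on a simulated pulse can be applied by the fourier-domain route described above (transform, multiply by the attenuation, transform back); a minimal sketch with a placeholder viscous-like attenuation standing in for the full parameterisations:

```python
import numpy as np

def attenuate(pulse, dt, distance, alpha_of_f):
    """apply a frequency-dependent attenuation coefficient (nepers per metre,
    possibly complex so that the phase shift responsible for dispersion is
    included) to a time-domain pulse."""
    n = len(pulse)
    freqs = np.fft.rfftfreq(n, d=dt)
    spectrum = np.fft.rfft(pulse) * np.exp(-alpha_of_f(freqs) * distance)
    return np.fft.irfft(spectrum, n=n)

# placeholder viscous-like water attenuation, alpha proportional to f^2;
# the coefficient is chosen only to make the effect visible at 1 km, and the
# sea-water or ice parameterisations of this section would be substituted here
alpha_water = lambda f: 1.0e-14 * (2.0 * np.pi * f) ** 2

dt = 1.0e-6                                    # 1 MHz sampling
t = np.arange(4096) * dt
pulse = np.gradient(np.exp(-0.5 * ((t - 2.0e-3) / 5.0e-6) ** 2), t)   # bipolar test pulse
received = attenuate(pulse, dt, distance=1000.0, alpha_of_f=alpha_water)
```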
for south pole conditions price predicts the absorption length ( apart from very low frequencies ) to be constant : where is the log decrement at asymptotically high frequencies , is a temperature dependent relaxation time and is the velocity of sound in ice ( 3920 m s ) .the parameters are normally evaluated experimentally . for south pole conditionsthese predict an absorption length of km at frequencies above 100 hz . the dominant scattering mechanism is caused by grain boundaries and for the grain size expected at the south pole the ice will behave as a rayleigh medium .scattering is proportional to the 4th power of frequency , up to khz : where is the mean grain diameter which is 0.2 cm .as the acoustic pulses from neutrino interactions tend to be highly directional we make the assumption that the attenuation is given by the sum of the absorption plus total scattering rather than the more usual assumption that the attenuation is given by the sum of absorption plus back - scattering .a comparison of the acorne attenuation parameterisation and earlier work is presented in figure [ fig2 ] , together with the anticipated attenuation in ice . the approximation assumes the sea behaves like a more viscous form of distilled water by decreasing the value of to match the measured attenuation in the to hz frequency region for tropical waters , consistent with the location of the saund array .the other curves are optimised for mediterranean conditions ( ) . since the ainslie & mccolm / acorne and francois & garrison results matchvery closely they are depicted as the same curve in the figure .phase shifts for a pulse 1 km distant from the neutrino interaction are shown for the complex attenuation case , corresponding to a velocity increase of .005% between 1 and 100 khz . for icethe attenuation is constant up to a frequency khz where rayleigh scattering starts to dominate .the absorption in ice is lower than that of sea water in the important 10 - 100 khz region .the effect of the various forms of attenuation on a point source of energy observed at 1 km from the origin is shown in figure [ fig3 ] .the three real attenuation mechanisms in water give similar symmetric pulses .this gives confidence that the magnitude of the attenuation need not be modelled in fine detail .interestingly however whereas the magnitude of the two complex attenuation models is similar to the other water models , ( and , indeed identical in the case of the acorne and ainslie & mccolm models ) the phase shifts cause considerable pulse distortion from the anticipated symmetric shape .this is a result of the non - linear nature of the phase shifts creating the group delay , , to vary , hence creating dispersion .the phase shift also has the effect of marginally reducing the pulse velocity at lower frequencies hence slightly delaying the pulse . in icethe pulse is again symmetric and is larger in amplitude by a factor of three .this is not , as one might expect , due to the larger value of the grneisen coefficient in ice , but is a direct consequence of the lower attenuation in the 10 - 100 khz region .is 1.38 between ice and water and the term is a factor of 6.83 larger for ice .however , the term only applies to extended sources and can be ignored for a point source . 
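a minimal sketch of the assumption 'attenuation = absorption + total scattering' for ice is given below. the constant absorption length and the rayleigh coefficient are placeholders (the published values are elided in this excerpt), chosen only so that scattering overtakes absorption at a few tens of khz as described above.

import numpy as np

f = np.logspace(2, 6, 400)                 # 100 hz to 1 mhz
L_abs = 9e3                                # assumed constant absorption length in m (placeholder)
alpha_abs = 1.0 / L_abs                    # nepers / m, frequency independent
b = 1.4e-22                                # assumed rayleigh coefficient in nepers / m / hz^4 (placeholder)
alpha_scat = b * f**4                      # rayleigh scattering, proportional to f^4
alpha_ice = alpha_abs + alpha_scat         # attenuation = absorption + total scattering

f_cross = (alpha_abs / b) ** 0.25          # frequency at which scattering starts to dominate
print(f_cross)                             # ~ 3e4 hz for these placeholder values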
]the ripples before and after the main pulse are caused by the abrupt scattering dependence switching on yielding a rapid rise in attenuation .any sharp transition in the frequency domain will tend to cause ripples in the time domain , these ripples are a real physical effect and not a numerical artefact .in this section we compute full 3d simulations of acoustic pulses from our previously published neutrino shower simulations . in our previous work, 1500 showers were modelled using the air shower simulation program corsika modified for a water or ice medium corresponding to 100 showers in each half decade increment from to gev .the deposited energy was binned in a cylinder of 20 m in length and 1.1 m in radius .longitudinally 100 bins of width 20 cm with bin centres from 10 cm to 1990 cm were used .radially , 20 bins were used : ten of width 1 cm with centres from 0.5 to 9.5 cm and 10 of width 10 cm with centres from 15 cm to 105 cm .the contribution of the outer ten bins towards the acoustic pulse is minimal ; they are included to capture all the energy in the shower and are useful for conservation of energy checks .this spacing should be sufficient to accurately model the acoustic frequency components in the pulse on axis up to khz in water and khz in ice . in this section the average shower distribution for each of the 15 half decades in energyis used directly as an input to drive the acoustic integral .the neutrino creates a hadronic shower of approximately 10 m in length and 5 cm in radius .the thermal energy is effectively deposited instantaneously in acoustic terms , creating an acoustic radiator analogous to a broad slit . in the far field , using fraunhofer diffraction , the angular spread of the radiation is approximately .the wavelength is of the order of the diameter of the cylinder and the length of the cylinder , yielding a pancake width in the order of 1 degree .water and ice are different media , the suitability of the medium for acoustic detection will depend largely on the magnitude of the created acoustic pulse and the attenuation . for extended sources of this nature a suitable figure of merit to compare different media is given by the ratio of the product of the grneisen coefficient ( ) and the anticipated energy deposition ( ) : using this expression , ice should produce acoustic pulses which are about an order of magnitude greater in amplitude than water . however because the velocity of sound in ice is about 2.5 times that of water , the frequencies in the pulse will also be about 2.5 times higher causing ice to have a greater attenuation than water . in the subsequent analysisa coordinate system is chosen such that the neutrino interacts at the origin and travels vertically along the axis where the value of increases with depth and the origin .the point is chosen such that the maximum `` pancake '' energy ( i.e. 
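the binning described above can be reproduced directly; the sketch below builds the longitudinal and radial bin edges and centres (in metres) and the resulting 100 x 20 energy-deposition matrix, which is also the shower format used for the svd parameterisation later on.

import numpy as np

# longitudinal binning: 100 bins of 20 cm, centres 10 cm ... 1990 cm
z_edges = np.arange(101) * 0.20                       # 0 ... 20 m
z_centres = np.arange(100) * 0.20 + 0.10

# radial binning: 10 bins of 1 cm (centres 0.5 ... 9.5 cm)
# followed by 10 bins of 10 cm (centres 15 ... 105 cm)
r_edges = np.concatenate([np.arange(11) * 0.01,       # 0 ... 0.10 m
                          0.10 + np.arange(1, 11) * 0.10])
r_centres = 0.5 * (r_edges[:-1] + r_edges[1:])

E = np.zeros((z_centres.size, r_centres.size))        # deposited energy per bin (gev)
print(z_centres[[0, -1]], r_centres[[0, 9, 10, -1]])  # 0.1, 19.9 ; 0.005, 0.095, 0.15, 1.05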
acoustic , and not deposited energy ) at 1 km from the shower is at zero degrees .the value of is energy dependent varying from approximately 460 cm at gev to 780 cm at gev .if the radial cross - section of the shower were constant one would expect the pancake maximum to correspond to the centroid of the energy deposition taken along the longitudinal axis .in reality however the radial distribution gets broader with the age ( depth ) of the shower ; the earlier part contributes more to the pulse energy than the later part of the shower .the point ( 0,0,- ) can be determined in two ways , either from the position of the shower centroid , or from , the primary energy ( in gev ) which defines the pancake plane .a fit to sea water data yields following relationships for : in antarctic ice the pancake depth has to be increased by approximately 6% due to the relative densities of the media. for the initial analysis the observer is positioned at 1 km from the shower in the centre of the acoustic pancake to allow easy comparison with previously published results .complex attenuation ( equations 15 ) was assumed and the acoustic integral calculated for each of the 15 half decades in energy as described above .the results are plotted in figure [ fig4 ] . as the integrals have been calculated with high precision , error - bars are not plotted , as they are less than a line width . as can be seen a characteristic bipolar pulse is produced . in figure[ fig4]a ) the pulse shape is plotted for three energies .the maximum pulse amplitude is normalised by energy .as can be seen the pulse shape is very similar for the three chosen energies ( , and gev ) .the pulse height however seems to scale slightly more rapidly than energy . in figure[ fig4]b ) the maximum and minimum pulse amplitudes are plotted on a log - log plot .the fitted lines are constrained to be proportional to energy .it is clear from figure [ fig4]b ) that any increase in pressure over proportionality is minimal . a linear fit yields : where ( pa ) is the maximum pressure and the energy in gev .the errors quoted are the statistical errors from the fit .this yields the result that , to a good approximation , the maximum pulse height at 1 km is 1.22 ppa per gev in the plane of the pancake .the average frequency ( using the music algorithm ) is very stable with energy and is approximately 26 khz .the asymmetry / is similarly stable and is about 0.2 .consider now figure [ fig5 ] showing the angular spread of the pancake .as the acoustic pulse will most likely be detected by a matched filter ( see , for example , for a full discussion of matched filters ) , which integrates over the pulse , it is the integrated pressure , or square root of the pulse energy which is of most interest .this is plotted in figure [ fig5]a ) , which shows the angular spread of the shower from to .the pancake is extremely narrow and narrows further with increasing energy . in figure[ fig5]b ) the full width of the pancake at a pressure levels of 50% and 10% of maximum is illustrated . 
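since the text notes that the pulse will most likely be detected with a matched filter, a minimal sketch of matched filtering is given below. the pulse shape, delay and noise level are arbitrary illustration values and do not correspond to a simulated shower.

import numpy as np

rng = np.random.default_rng(0)
n, dt = 2048, 1e-6
t = (np.arange(n) - n // 2) * dt
sigma = 10e-6
template = -(t / sigma**2) * np.exp(-t**2 / (2 * sigma**2))
template /= np.linalg.norm(template)                  # unit-energy bipolar template

delay = 300                                           # injected delay in samples (assumed)
trace = np.roll(template, delay) + 0.1 * rng.standard_normal(n)   # toy received trace

mf = np.correlate(trace, template, mode='same')       # matched filter = correlate with the template
print(np.argmax(np.abs(mf)) - n // 2)                 # ~ 300: the injected delay is recovered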
as anticipated from simple far field diffraction theory ( the diffraction minimum is at ) the spread of the pancake decreases with increasing shower energy .this is largely because the shower gets longer as the energy increases causing a narrowing of the pancake .it is interesting to cross check the acoustic mechanism by looking at the energy flowing through a 1 km integrating sphere .the energy of the acoustic pulse flowing through each square metre of surface ( the fluence ) is given by : where is the characteristic impedance ( kg m s for water and kg m s for ice ) and is the angle out of the plane of the pancake , the radiation is cylindrically symmetrical .this can be integrated over a sphere , in figure [ fig5]c ) the result of this integral is displayed .the fit of captured acoustic energy , vs. deposited energy ( see figure [ fig5]d))is given by : where k= 22.8 without attenuation and 23.1 with attenuation . hence at 1 km attenuation reduces the acoustic energy to 50% of its value without attenuation .the linear coefficient is 2.00 to within the statistical accuracy of the showers generated by corsika , indicating that the efficiency rises quadratically with energy . due to the coherent nature of the acoustic emission mechanism, the acoustic pulse amplitude depends linearly and the acoustic energy depends on the square of the deposited energy ( assuming constant shower shape ) .this coherent behaviour breaks down at energies far beyond those of interest here .it is interesting to look at the effect of complex attenuation on the pulse asymmetry in water as there is no guarantee that the pulse will become symmetrical even in very far field .figure [ fig6 ] illustrates the mean pulse frequency , pressure times distance and asymmetry for a 10 energy deposition and for an observer in the pancake plane for distances of 10 m to km . in the absence of attenuation ,once far field has been established ( km ) both and the mean frequency , should be constant and the asymmetry zero . in practice the mean frequency drops with distance , as higher frequencies are more quickly absorbed than lower frequencies , falling from khz close to the shower to hz at km .the product of maximum pressure and distance rises initially as the radiator approaches far field conditions .the maximum pressure then drops both because the energy is absorbed and the pulse becomes more spread out in time .the asymmetry starts with a value of 0.7 dominated by near field effects then drops to .1 at 1 - 10 km and has a peak of .5 at 100 km .the dispersion will be a maximum at frequencies around the resonant peaks of mgso and b(oh) .the b(oh) peak is at .3 khz causing the rise in asymmetry where the mean frequency matches this value . in figure [ fig7 ]the mean frequency , maximum and minimum pressures and asymmetry are plotted versus angle and distance for a 10 gev shower using the acorne complex attenuation ( t=15 , s=37ppt , ph=7.9 , z=2 km ) . as the neutrino is travelling vertically downwards then positive angles ( left hand side in figure [ fig7 ] ) are measured out of the plane of the pancake and towards the surface of the water . in figures [ fig7]a ) and [ fig7]b ) the maximum and minimum values of in db re . 
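the fluence and sphere integration described above can be sketched as follows; the angular model of the pressure trace (a gaussian fall-off out of the pancake plane with an assumed width and amplitude) is purely illustrative, and the impedance is the usual approximate value for water.

import numpy as np

Z = 1.5e6                                  # characteristic impedance of water, kg m^-2 s^-1 (approx.)
R = 1000.0                                 # radius of the integrating sphere in m
dt = 1e-6
t = (np.arange(2048) - 1024) * dt

def pressure_trace(theta, width_deg=1.0, sigma_t=20e-6):
    # toy bipolar pulse whose amplitude falls off with latitude theta (assumed model)
    amp = 1e-9 * np.exp(-0.5 * (np.degrees(theta) / width_deg) ** 2)
    return amp * (-(t / sigma_t**2) * np.exp(-t**2 / (2 * sigma_t**2)))

thetas = np.linspace(-np.pi / 2, np.pi / 2, 2001)
dtheta = thetas[1] - thetas[0]

# fluence (j / m^2) at each latitude, then integrate over the sphere
fluence = np.array([np.sum(pressure_trace(th)**2) * dt / Z for th in thetas])
E_acoustic = np.sum(2 * np.pi * R**2 * np.cos(thetas) * fluence) * dtheta
print(E_acoustic)                          # captured acoustic energy in joules (toy numbers)

because the radiation is cylindrically symmetrical, the azimuthal integral reduces to the factor 2 pi and only the latitude integral is done numerically.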
1 pa are displayed .as anticipated , the pressure decreases with angle and frequency .figure [ fig7]c ) shows the asymmetry where two effects are particularly noteworthy .firstly the dominant effect of complex attenuation drives the pulse towards positive asymmetry .secondly there are two regions with asymmetry of greater than 0.6 . in these regionsthe geometry causes a spike in the velocity potential yielding a very non bipolar pulse .consider now figure [ fig7]d ) showing the mean frequency .as can be seen at angles below and distances below 100 m the frequency with maximum energy is above 40 khz .this reduces with both angle and distance .the decrease of frequency with angle is caused mainly by geometric projection as discussed in section 2 .the decrease of frequency with distance is caused mainly by high frequencies being attenuated more rapidly than low frequencies .following the procedure outlined in section 3.2 and adopting a model of antarctic ice from , the initial analysis was again at 1 km from the cascade in the plane of the pancake .( the pancake is at a depth about 5% further from the shower origin due to the relatively lower density of antarctic ice to that of sea water ) . in figure[ fig8]a ) the detected acoustic pulse is plotted for three energies .the pulse shape again scales with energy but is 5 - 6 ppa per gev , ( about 5 times that of water ) .the pulse is narrower than in water , though not by the factor of 2.6 as predicted by the relative velocities .the pulse is more symmetric as the attenuation is dominated almost entirely by scattering and is non - complex .the pulse also shows a ripple indicative of the sharp nature of rayleigh scattering . in figure[ fig8]b ) the relationship between the maximum pulse height and energy is displayed .again the pulse height grows almost linearly with energy and the best straight line fit of to as in equation 20 yields a slope of 1.0079 ( 0.0017 ) and an intercept of -11.349 ( 0.151 ) , corresponding again to a growth in pulse height which is very slightly greater than energy deposition and an intercept corresponding to 4.48 ppa / gev . in practice it is the higher energies which are of most interest . if the fit is done in the 10 - 10 gev region the intercept corresponds to 5.35 ppa / gev .the average frequency is nearly constant at 39 khz and the asymmetry is nearly zero .consider now figure [ fig9 ] showing the angular spread of the pancake in ice .the analysis is identical to that described in section 4.3 .as can be seen in figures [ fig9]a ) and [ fig9]b ) , the angular spread of the pulse is very similar to that of sea water , but is slightly broader due to the effective wavelength in ice being longer than that of water . in figures [ fig9]c ) and[ fig9]d ) the energy captured on an integrating sphere at 1 km is calculated . in figure[ fig9]d ) the scaling of captured energy with the square of the deposited energy is again evident .the intercepts are -22.19 gev and -21.76 gev with and without attenuation respectively .hence about 11 times as much energy is created in ice as in water .once attenuation is included the energy drops to 37% ; 63% of the energy is lost nearly entirely due to scattering .this loss is slightly more than water largely because ice pulses are at a higher frequency causing more effective attenuation .as with sea water the analysis is extended to distances between 10 m and 50 km and angles between . the mean frequency , maximum and minimum pulse heights and asymmetry are displayed in figure [ fig10 ] . 
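the straight-line fits quoted above are linear fits of log pulse height against log energy; a sketch of the procedure on hypothetical data (not the simulated showers) is given below.

import numpy as np

rng = np.random.default_rng(1)
E = np.logspace(5, 12, 15)                                 # gev, one point per half decade
p_max = 5e-12 * E * (1 + 0.02 * rng.standard_normal(15))   # hypothetical pulse heights

slope, intercept = np.polyfit(np.log10(E), np.log10(p_max), 1)
print(slope, intercept)                                    # slope ~ 1: pulse height roughly proportional to energy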
here the mean frequency is defined as the frequency of a single cycle sine wave that most closely represents the acoustic bipolar pulse , this is usually a little higher than the peak frequency . as anticipatedboth the frequencies and pulse height are greater than that of sea water .also , due to the real nature of the attenuation the asymmetry falls rapidly to zero in the plane of the pancake and the odd symmetry in the far field between plus and minus angles is also more evident .previous work in this area has concentrated on modelling average shower parameters as a function of energy .fluctuations have not been considered . indeed due to the nature of previous parameterisations which involve correlated variables, the inclusion of fluctuations is not feasible as varying one parameter will mean that the other parameters have also to be tuned .the strategy in this section is to parameterise the shower and its fluctuations using an orthogonal basis set .the showers have been modelled with corsika , which uses a thinning process .the stochastic fluctuations in individual showers were smoothed using a non - causal 3rd order butterworth filter . in figure [ fig11 ]the effect of the butterworth filtering is shown for four typical gev showers .the overall shape of the shower is retained but the noise considerably reduced .parameterisation using singular value decomposition ( svd ) is an eigenvector based technique and has a number of advantages which include applicability to data for which functional parameterisation is difficult and , most importantly , the ability to include fluctuations .the ability to include fluctuations stems from the orthogonal nature of the parameterisation ; each parameter can be varied independently as there is no covariance .the standard svd method is matrix based and directly applicable in a 2d parameterisation . in the case of a 3d parameterisation the svd has to be applied a number of times ; we use two in this study . asthe radial distribution is most critical for the acoustic pulse shape , this is used as our primary dimension .each of our 1500 corsika generated showers is treated as a matrix with 100 rows corresponding to longitudinal distance and 20 columns corresponding to radius .these are appended to form : creating an observation matrix * o of size 150,000x20 .a singular value decomposition was performed on the * o matrix : where * w * and * v * are unitary containing the column and row eigenvectors of respectively sorted in decreasing magnitude of the associated eigenvalue and * l * is diagonal containing the square roots of the respective eigenvalues .the and subscripts correspond to signal and noise and are discussed below .if all 20 eigenvectors are used the data can be reproduced perfectly . with fewer eigenvectorsthe data can be approximated with an accuracy given by : * * the matrix has been partitioned into a signal space and noise space by choosing the eigenvectors which contain the most information and assigning those to the signal space , the remaining eigenvectors become the noise space : the choice of the value of can be somewhat subjective , however choosing yields , which is more than sufficient given the other errors in the process . 
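a sketch of the smoothing and of the first svd stage is given below. the showers here are toy profiles generated on the 100 x 20 grid only so that the block is self-contained, and the butterworth cut-off frequency is an assumption; the study itself uses the 1500 corsika showers.

import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(5)
z, r = np.arange(100), np.arange(20)

def toy_shower():
    prof = np.exp(-((z - 30 - 10 * rng.random())**2) / 200.0)    # toy longitudinal profile
    rad = np.exp(-r / 2.0)                                       # toy radial fall-off
    return np.outer(prof, rad) * (1 + 0.3 * rng.standard_normal((100, 20)))  # thinning-like noise

# non-causal (zero-phase) 3rd order butterworth smoothing along the longitudinal axis
b, a = butter(3, 0.2)                                            # cut-off is an assumption
showers = [filtfilt(b, a, toy_shower(), axis=0) for _ in range(1500)]

O = np.vstack(showers)                                           # observation matrix, 150000 x 20
W, L, Vt = np.linalg.svd(O, full_matrices=False)                 # O = W diag(L) Vt
q = 4                                                            # size of the signal space
O_signal = (W[:, :q] * L[:q]) @ Vt[:q, :]                        # rank-4 approximation
print(np.sum(L[:q]**2) / np.sum(L**2))                           # fraction of squared singular values retained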
the matrix ( 20x4 ) consists of the four row eigenvectors .the matrix has four columns ( 4x150,000 ) , this can be partitioned as follows : where each of the sub matrices is of size 100x1 .however these have the shower information sequentially and need to be reshaped to look at the longitudinal correlations between showers .the matrix is reordered into four matrices of size 1500x100 : a further singular value decomposition is applied to each of these matrices in turn .as these matrices contain successively less information four of the column eigenvectors were retained for , three for , two for and one for .the four and matrices of this second svd contain the information yielding the 10 parameter values for each shower .these parameters may be recovered by creating a matrix : the matrix ( 1500x10 ) contains on each row the coefficients to for each shower in turn .the original showers , , can simply be reconstructed by matrix multiplication of the appropriate coefficients : the showers are recreated to within an average accuracy of 5 % in profile and magnitude .lower energy showers have the greatest variation in shape . in figure [ fig11 ] the longitudinal distribution of four of the most extreme showers are plotted .these are chosen by having the greatest and least component of the first two radial eigenvectors .these will be some of the most difficult showers to model .the reconstructed showers though clearly not perfect do however reasonably fit the data . the mean and standard deviation of the parameters in each of the 15 half decades can now be used to reproduce the statistics of the showers , to .interpolation can be used to model intermediate values .care must be taken however as these are not orthogonal as is normally the case with the svd . because of the linear nature of the parameterisation however these correlations can be reintroduced by multiplying by the matrix square root of the correlation matrix : where is a vector containing the average values at a given energy , is a vector of normally distributed random numbers with a mean of 0 and a standard deviation of 1 , are the standard deviations of and is the matrix of the correlation coefficients between parameters within a shower and averaged over all showers at a particular energy .the parameter values are too cumbersome to list here but are available on the acorne web site . in figure [ fig12 ]the anticipated pulses from 100 corsika generated events for each of the energies , , and gev are plotted .these show clear evidence that fluctuations become less important with increasing energy . in figure [ fig13 ]the maximum acoustic pulse height per gev at = 8 m and =1 km from the shower is presented . for boththe original and svd parameterised data one hundred showers are modelled in each half decade .the output directly from corsika is used to create the `` direct '' plot . 
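one standard way of reintroducing the parameter correlations with the matrix square root of the correlation matrix is sketched below; the means, standard deviations and correlation matrix are randomly generated stand-ins for the fitted shower statistics, which are only available on the acorne web site.

import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)

# stand-in statistics for the 10 svd parameters at one energy (illustration only)
p_mean = rng.normal(size=10)
sigma = np.abs(rng.normal(0.1, 0.02, size=10))
X = rng.standard_normal((2000, 10)) @ rng.standard_normal((10, 10))
R = np.corrcoef(X, rowvar=False)                      # a valid 10 x 10 correlation matrix

C = np.real(sqrtm(R))                                 # matrix square root of the correlation matrix
zn = rng.standard_normal((10, 100000))                # independent standard normal draws
samples = p_mean[:, None] + sigma[:, None] * (C @ zn) # correlated parameter vectors

print(np.allclose(np.corrcoef(samples), R, atol=0.02))        # True
print(np.allclose(samples.std(axis=1), sigma, rtol=0.05))     # True

the exact ordering of the scaling and the square root is elided in the text above; the construction used here (scale after mixing) reproduces both the requested marginal standard deviations and the requested correlation matrix.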
the pulse height from interpolated svd parameterisationare plotted at 1/4 and 3/4 decades for clarity .the error - bars are drawn at the 10 and 90 centiles .the earlier acorne parameterisation is shown for comparison .neither of the parameterisations is perfect , however the differences are small ; only a few percent at all energies .the svd parameterisation model seems to slightly under - estimate the fluctuations .this is due to the statistics being slightly non gaussian ( leptokurtic ) .the earlier acorne parameterisation also works well and produces pulses with similar accuracies .the svd technique successfully models the shape of both the radial and longitudinal distributions of the showers to a high accuracy ( of the information is retained ) .however for the acoustic detection of cosmic ray neutrinos it is primarily energies of greater than 10 that are of significance to the acoustic technique . in this regionfluctuations only contribute a few percent to the acoustic pulse amplitude and our earlier parameterisation is perfectly adequate for modelling pulses in this region . in broader termshowever the svd method outlined is a linear technique and does not rely on optimisation and as the accuracy is dictated by the number of eigenvectors , a trade off can be made between accuracy and complexity .a new way of computing the acoustic signal for neutrino showers in water and ice has been described .the method is computationally fast and allows the most up to date knowledge of the attenuation to be incorporated naturally .this is now known to be complex in nature .a parameterisation of this attenuation is given .the properties of the expected acoustic signals from such showers have been described .these properties will be used in a future search for such interactions in an acoustic array . a matrix method of parameterising the signals which includes fluctuations has also been described .this method is based on the svd technique and it is shown to model both the shape of the radial and longitudinal distributions of the showers to a good accuracy .9 proceedings of the international workshop on acoustic and radio eev neutrino detection activities ( arena ) , desy , zeuthen , germany , ( may 2005 ) , world scientific , editors r. nahnhauer and s. bser , + proceedings of the workshop on acoustic and radio eev neutrino detection activities ( arena ) , university of northumbria , uk , ( june 2006 ) , journal of physics : conference series * 81 * ( 2007 ) , editors l. thompson and s. danaher . _end to the cosmic - ray spectrum ? _ , k. griesen , phys .* 16 * ( 1966 ) 748 , + g. t. zaptsepin , v. a. kuzmin , sov .jetp lett .* 4 * ( 1966 ) 78 ._ high energy neutrinos from astrophysical sources : an upper bound _ ,e. waxman and j. bahcall , phys* d59 * ( 1999 ) 023002 ; also http://arxiv.org/abs/hep-ph/9807282 ._ neutrinos from propagation of ultra - high energy protons _ , r. d. engel , d. seckel and t. stanev , phys .d * 64 * ( 2001 ) 093010 ; also http://arxiv.org/abs/astro-ph/0101216 .see , for example , _ the amanda neutrino telescope : principle of operation and first results _, e. andres et al .* 13 * ( 2000 ) 1 - 20 ; also http://arxiv.org/abs/astro-ph/9906203 .see , for example , _ first year performance of the icecube neutrino telescope _ , a. achterberg et al ., astropart . phys .* 26 * ( 2006 ) 155 - 173 .also http://arxiv.org/abs/astro-ph/0604450 .see , for example , _ first results of the instrumentation line for the deep - sea antares neutrino telescope _, j. a. 
aguilar et al ., astropart . phys .* 26 * ( 2006 ) 314 ; also http://arxiv.org/abs/astro-ph/0606229 .see , for example , _ operation and performance of the nestor test detector _, g. aggouras et al . , nucl . instrum . and meth .a * 552 * ( 2005 ) 420 - 439 .see , for example , _ recent achievements of the nemo project _, e. migneco et al .instrum . and meth . a * 588 * ( 2008 ) 111 - 118 .see , for example , _km3net , a new generation neutrino telescope _, e. de wolf , nucl .instrum . and meth . a * 588 * ( 2008 ) 86 - 91 .see , for example , _ the baikal neutrino experiment : status , selected physics results , and perspectives _, v. aynutdinov et al . , nucl .instrum . and meth . a * 588 * ( 2008 ) 99 - 106 .see , for example , _ exploring the ultra - high energy sky : status and first results of the pierre auger observatory _ , v. van elewyck , mod .* a23 * ( 2008 ) 221 ._ acoustic radiation by charged atomic particles in liquids : an analysis _, j. g. learned , phys .d * 19 * , ( 1979 ) 3293 ._ sound propogation in chemically active media _ , l. liebermann , phys* 76 * ( 1949 ) 10 ._ simulation of ultra high energy neutrino interactions in ice and water _ , s. bevan , s. danaher , j. perkin , s. ralph , c. rhodes , l. thompson , t. sloan , d. waters , astropart.phys .* 28 * ( 2007 ) 366 - 379 ; also http://arxiv.org/abs/0704.1025 ._ a macroscopic description of coherent geo - magnetic radiation from cosmic ray air showers_ , o. scholten , k. werner , f. rusydi , astropart.phys . * 29 * ( 2008 ) 94 - 103 ; also http://arxiv.org/abs/0709.2872 ._ de motu fluidorum in genere _ , l. euler ( 1752 ) , + translated as : _ principles of the motion of fluids _ , l. euler , http://arxiv.org/pdf/0804.4802 . _acoustics : an introduction to its physical principles and applications _ , a. d. pierce , mcgraw - hill inc . new york .isbn 0 - 07 - 049961 - 6 ( 1981 ) . c. underwood , northumbria university private communication ( 2004 )see , for example , _ solid state physics _, n. w. ashcroft and n. d. mermin , thomson learning ( 1976 ) 492 - 494 ._ sensitivity of an underwater acoustic array to ultra - high energy neutrinos _ , n. g. lehtinen , s. adam , g. gratta , t. k. berger , and m. j. buckingham , astropart . phys . * 17 * ( 2002 ) 279 ._ sound absorption based on ocean measurements : part i : pure water and magnesium sulfate contributions _ , r. e. francois , g. r. garrison , journal of the acoustical society of america * 72(3 ) * ( 1982 ) 896 - 907 , + _ sound absorption based on ocean measurements : part ii : boric acid contribution and equation for total absorption _, r. e. francois , g. r. garrison , journal of the acoustical society of america * 72(6 ) * ( 1982 ) 1879 - 1890 . _ a simplified formula for viscous and chemical absorption in seawater _ , m. a. ainslie and j. g. mccolm , journal of the acoustical society of america* 103(3 ) * ( 1998 ) 1671 - 1672 ._ the origin of sound absorption in water and in sea water _ , l. n. lieberman , the journal of the acoustical society of america * 20(6 ) * ( 1948 ) 868 - 873 ._ underwater acoustic detection of ultra high energy neutrinos _ , v. niess and v. bertin , astropart. phys . * 26(4 - 5 ) * ( 2006 ) 243 - 256 .also http://arxiv.org/abs/astro-ph/0511617 ._ attenuation of acoustic waves in glacial ice and salt domes _ , p. b. price , journal of geophysical research , solid earth * 111 * ( 2006 ) b02201 ._ corsika : a monte carlo code to simulate extensive air showers _ , d. heck , j. knapp , j. n. capdevielle , g. schatz , t. 
thouw , forschungszentrum karlsruhe report fzka 6019 ( 1998 ) _ music , max .likelihood , and cramer - rao bound _ , p. stoica and a. nehorai , ieee trans . on acoustics , speech , and sig .* 37(5 ) * ( 1989 ) 720 - 741 ._ application of svd in high energy gamma ray astronomy _, s. danaher , d. j. fegan , j. hagan , astropart .phys * 1 * ( 1993 ) 357 .http://www.pppa.group.shef.ac.uk/acorne.php = 1 cm b ) comparison of a spherical ( = 1 cm ) and triaxial deposition ( = 1 cm , = 0.5 cm , = 1 m ) c ) rotating the observer from -axis to the -axis in the plane for the triaxial distribution d ) rotating the observer in the plane from the -axis to the -axis for the triaxial distribution . ] , niess and bertin , francois and garrison , ainslie and mccolm and the acorne parameterisations . since the ainslie & mccolm / acorne and francois & garrison results match very closely they are depicted as the same curve in the figure .the attenuation in antarctic ice is shown for comparison .inset : the phase shifts at 1 km for the two complex attenuation models . ] , niess and bertin , francois and garrison , ainslie and mccolm and acorne parameterisations . the anticipated acoustic pulse for antarctic iceis shown for comparison .the acorne and niess and bertin pulses are delayed due to the use of complex attenuation . ]
the production of acoustic signals from the interactions of ultra - high energy ( uhe ) cosmic ray neutrinos in water and ice has been studied . a new computationally fast and efficient method of deriving the signal is presented . this method allows the implementation of up to date parameterisations of acoustic attenuation in sea water and ice that now includes the effects of complex attenuation , where appropriate . the methods presented here have been used to compute and study the properties of the acoustic signals which would be expected from such interactions . a matrix method of parameterising the signals , which includes the expected fluctuations , is also presented . these methods are used to generate the expected signals that would be detected in acoustic uhe neutrino telescopes . , , , , , , , , acoustic integration , attenuation , neutrino .
inventory management is of great economic importance to industry , but forecasting demand for spare parts is difficult because it is _ intermittent _ : in many time periods the demand is zero .various methods have been proposed for forecasting , some simple and others statistically sophisticated , but relatively little work has been done on intermittent demand .we now list the methods most relevant to this paper . _ single exponential smoothing _ ( ses ) generates estimates of the demand by exponentially weighting previous observations via the formula where is a _ smoothing parameter_. the smaller the value of the less weight is attached to the most recent observations .an up - to - date survey of exponential smoothing algorithms is given in .they perform remarkably well , often beating more complex approaches , but ses is known to perform poorly ( under some measures of accuracy ) on intermittent demand .a well - known method for handling intermittency is _croston s method _ which applies ses to the demand sizes and intervals independently .given smoothed demand and smoothed interval at time , the forecast is both and are updated at each time for which .according to it is hard to conclude from the various studies that croston s method is successful , because the results depend on the data used and on how forecast errors are measured .but it is generally regarded as one of the best methods for intermittent series , and versions of the method are used in leading statistical forecasting software packages such as sap and forecast pro . to remove at least some of the known bias of croston s method on stochastic intermittent demand ( in which demands occur randomly ) , a correction factor is introduced by syntetos & boylan : where is the smoothing factor used for inter - demand intervals , which may be different to the smoothing factor used for demands . because it is used to smooth both and .] this works well for intermittent demand but is incorrect for non - intermittent demand .this problem is cured by syntetos who uses a forecast this reduces bias on non - intermittent demand , but slightly increases forecast variance .another modified croston method is given by levn & segerstedt , who claim that it also removes the bias in the original method but in a simpler way : they apply ses to the ratio of demand size and interval length each time a nonzero demand occurs .that is , they update the forecast using however , this also turns out to be biased .a more recent development is the tsb ( teunter - syntetos - babai ) algorithm , which updates the demand probability instead of the demand interval .this allows it to solve the problem of _ obsolescence _ which was not previously dealt with in the literature .an item is considered obsolete if it has seen no demand for a long time .when many thousands of items are being handled automatically , this may go unnoticed by croston s method and its variants .one of the authors of this paper ( prestwich ) has worked with an inventory company who used croston s method , but were forced to resort to ad hoc rules such as : _ if an item has seen no demand for 2 years then forecast 0 ._ tsb is designed to overcome this problem . instead of a smoothed interval it uses exponential smoothing to estimate a probability where is 1 when demand occurs at time and 0 otherwise .different smoothing factors and are used for and respectively . 
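the update rules summarised above are compact enough to state directly in code. the sketch below gives minimal python implementations of ses, croston (cr), the syntetos-boylan approximation (sba) and tsb for a single demand series; the sba bias-correction factor (1 - beta/2) is taken from the standard literature form because the exact expression is elided in this excerpt, and the initial estimates are arbitrary.

def ses(demand, alpha):
    f = demand[0]
    for y in demand[1:]:
        f = alpha * y + (1 - alpha) * f                  # single exponential smoothing
    return f

def croston(demand, alpha, beta, variant='cr'):
    y_hat, tau_hat, tau = 1.0, 2.0, 1                    # arbitrary initial estimates
    for y in demand:
        if y > 0:
            y_hat = alpha * y + (1 - alpha) * y_hat      # smooth demand sizes
            tau_hat = beta * tau + (1 - beta) * tau_hat  # smooth inter-demand intervals
            tau = 1
        else:
            tau += 1                                     # no update between demands
    f = y_hat / tau_hat
    if variant == 'sba':
        f *= 1 - beta / 2.0                              # standard bias correction (assumed form)
    return f

def tsb(demand, alpha, beta):
    y_hat, p_hat = 1.0, 0.5                              # arbitrary initial estimates
    for y in demand:
        p_hat = beta * (y > 0) + (1 - beta) * p_hat      # demand probability, updated every period
        if y > 0:
            y_hat = alpha * y + (1 - alpha) * y_hat      # demand size, updated at demand epochs only
    return p_hat * y_hat

demand = [0, 3, 0, 0, 2, 0, 1, 0, 0, 0]
print(croston(demand, 0.1, 0.1), croston(demand, 0.1, 0.1, 'sba'), tsb(demand, 0.1, 0.1))

for a forecast in every period, rather than only after the last observation, the running values y_hat / tau_hat (cr and sba) or p_hat * y_hat (tsb) can simply be recorded inside the loop.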
is updated every period , while is only updated when demand occurs .the forecast is in this paper we shall use cr to denote the original method of croston , sba the variant of syntetos & boylan , sy that of syntetos , and tsb that of teunter , syntetos & babai .we explore a new variant of croston s method that is unbiased and handles obsolescence .its novelty is that during long periods of no demand its forecasts decay hyperbolically instead of exponentially ( as in tsb ) , a property that derives from bayesian inference .the new method is described in section [ method ] and evaluated in section [ experiments ] , and conclusions are summarised in section [ conclusion ] .this paper is an extended version of .we take a croston - style approach , separating demands into demand sizes and the inter - demand interval . as in most croston methods , when non - zero demand occurs the estimated demand size and inter - demand period are both exponentially smoothed , using factors and respectively .the novelty of our method is what happens when there is no demand .suppose that at time , up to and including the last non - zero demand we have smoothed demand size and inter - demand period , and that we have observed consecutive periods without demand since the last non - zero demand .what should be our estimate of the probability that a demand will occur in the next period ?a similar question was addressed by laplace : given that the sun has risen times in the past , what is the probability that it will rise again tomorrow ?his solution was to add one to the count of each event ( the sun rising or not rising ) to avoid zero probabilities , and estimate the probability by counting the adjusted frequencies .so if we have observed sunrises and 0 non - sunrises , in the absence of any other knowledge we would estimate the probability of a non - sunrise tomorrow as .this is known as the _ rule of succession_. but he noted that , given any additional knowledge about sunrises , we should adjust this probability .these ideas are encapsulated in the modern _ pseudocount _ method which can be viewed as bayesian inference with a beta prior distribution .we base our discussion on a recent book ( chapter 7 ) that describes the technique we need in the context of bayesian classifiers .for the two possibilities and we add non - negative pseudocounts and respectively to the actual counts and of observations . from the beta distribution hyperparameters , but we already use these symbols for smoothing factors . ]as well as addressing the problem of zero observations , pseudocounts allow us to express the relative importance of prior knowledge and new data when computing the posterior distribution . by bayes rulethe posterior probability of a nonzero demand occurring is estimated by ( this is actually a conditional probability that depends on the recent observations and prior probabilities , but we follow and write for simplicity . ) in our problem we have seen no demand for periods so and : we can eliminate one of the pseudocounts by noting that the prior probability of a demand found by exponential smoothing is , and that the pseudocounts must reflect this : hence and as with tsb , to obtain a forecast we multiply this probability by the smoothed demand size : we can also eliminate by choosing a value that gives an unbiased forecaster on stochastic intermittent demand , as follows . 
consider the demand sequence as a sequence of substrings , each starting with a nonzero demand : for example the sequence has substrings , and . within a substring and remain constant so our forecaster has expected forecast = \mathbb{e}\left [ \frac{\hat{y}_t}{\hat{\tau}_t } \left ( \frac{1}{1 + \tau_t/ \hat{\tau}_tc_1 } \right ) \right ] \approx \mathbb{e}\left [ \frac{\hat{y}_t}{\hat{\tau}_t } \left ( 1 - \frac{\tau_t } { \hat{\tau}_tc_1 } \right ) \right ] = \frac{\hat{y}_t}{\hat{\tau}_t } \left(1- \frac{1}{c_1}\right)\ ] ] the derivation used the linearity of expectation , the constancy of and , the fact that =\hat{\tau}_t ] for .they use two values : to simulate low demand and to simulate lumpy demand .they use values 0.1 , 0.2 and 0.3 , and values 0.01 , 0.02 , 0.03 , 0.04 , 0.05 , 0.1 , 0.2 , 0.3 .we add sy and he s to these experiments , but we drop ses as they found it to have large errors .they take mean results over 10 runs , each with 120 time periods , whereas we use 100 runs .they initialise each forecaster with `` correct '' values whereas we initialise with arbitrary values then run them for periods using demand probability . a final differenceis that instead of mean error and mean squared error we use mase and mmr / u2 respectively .the results are shown in tables [ sta1][sta4 ] .because cr , sba and sy use only one smoothing factor we do not show their results for cases in which .comparing mase best - cases in each table , tsb and sy are least biased , closely followed by he s , then cr and sba . comparing mmr best - cases , sba is best , followed by tsb and he s , then cr and sy . comparing u2 best - cases he s is always at least as good as tsb , though there is little difference . comparing mmr - worst cases ,neither tsb nor he s dominates the other though he s seems slightly better .again sba gives best results , cr and sy generally the worst .comparing u2-worst cases he s beats tsb and seems to be more robust under different smoothing factors .sba again gives best results , while cr and sy have variable performance . to examine the relative best - case performance of he s and tsb more closely , table [ logcomp ] compares tsb and he s using rgrmse and pb . to make this comparison we must choose smoothing factors for both methods ,and we do this in two different ways : those giving the best mmr results and those giving the best u2 results .the table also shows the best factors , denoted by . using the mmr - best factors he s performs rather less well than tsb on lumpy demand , under both rgrmse and pb . 
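a minimal sketch of the he s forecaster implied by the expressions above is given below. the smoothing at demand epochs follows croston, and between demands the forecast decays hyperbolically with the length of the idle run; the constant c, whose unbiasedness-derived value is elided in this excerpt, is exposed as a parameter, and the initial estimates and the indexing of the idle-period counter are assumptions of the sketch rather than the paper's exact specification.

def hes(demand, alpha, beta, c=2.0):
    y_hat, tau_hat, idle = 1.0, 2.0, 0                           # arbitrary initial estimates
    forecasts = []
    for y in demand:
        if y > 0:
            y_hat = alpha * y + (1 - alpha) * y_hat              # smooth demand sizes
            tau_hat = beta * (idle + 1) + (1 - beta) * tau_hat   # smooth intervals, croston-style
            idle = 0
        else:
            idle += 1                                            # periods since the last demand
        # hyperbolic decay between demands; at a demand epoch this reduces to y_hat / tau_hat
        forecasts.append((y_hat / tau_hat) / (1.0 + idle / (tau_hat * c)))
    return forecasts

print(hes([0, 3, 0, 0, 2, 0, 0, 0, 0, 0], 0.1, 0.1))

in contrast to tsb, whose forecast decays exponentially during an idle run, the forecast above decays only hyperbolically, which is the robustness-versus-reactivity trade-off discussed in the experiments.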
butusing u2-best factors there is little difference between the methods .the table also shows that to optimise u2 we should use small factors for both tsb and he s , but to optimise mmr the best factors depend on the form of the demand .another demand distribution that has recieved interest , and for which there is also theoretical and empirical support , is the stuttering poisson distribution in which demand intervals are poisson and demand sizes are geometrically distributed .again we use a discrete version with geometrically distributed intervals .the geometric distribution is characterised by a probability we shall denote by , and is discrete with =(1-g)^{k-1}g$ ] for .we use two values of : 0.2 and 0.8 to simulate low and lumpy demand respectively .otherwise the experiments are as in section [ biasstoch2 ] .the results are shown in tables [ stu1][stu4 ] .though the numbers are different , qualitatively the tsb and he s results are the same as for logarithmic demand sizes . however , cr , sba and sy now give similar results to each other , and sba no longer has the best best - case u2 result , though it still has the best worst - case u2 result .table [ geocomp ] compares tsb and he s using the rgrmse and pb measures .qualitatively the results are the same as with logarithmic demand sizes , except that he s is now worse than tsb in all 4 tables for mmr best - cases though there is still little difference between the u2-best cases .the experiments so far use stationary demand , but teunter _ et al . _also consider nonstationary demand .again demand sizes follow the logarithmic distribution , while the probability of a nonzero demand decreases linearly from in the first period to 0 during the last period .demand sizes are again logarithmically distributed . as pointed out by teunter _et al . _ , none of the forecasters use trending to model the decreasing demand so all are positively biased .the results are shown in tables [ dec1][dec4 ] .tsb clearly has the best best - case mase , mmr and u2 results , while he s is next - best , though sba occasionally beats he s .he s has the worst worst - case mase , mmr and u2 results , followed by tsb .however , if we only consider he s and tsb results for which then cr , sba and sy have the worst worst - cases , with he s also poor on lumpy demand .the best for all methods is larger : as pointed out by teunter _, large smoothing factors are best at handling non - stationary demand , while small factors are best when demand is stationary .the best value is relatively unimportant here , which makes sense as demand sizes are stationary .we repeat the experiments of section [ decreasing ] , but instead of linearly decreasing the probability of demand we reduce it immediately to 0 after half ( 60 ) of the periods , again following teunter __ demand sizes are again logarithmically distributed .the results are shown in tables [ obs1][obs4 ] . 
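the demand series used in these experiments can be generated with a few lines. the sketch below draws demand sizes from the geometric distribution defined above and uses per-period bernoulli occurrences (equivalently, geometrically distributed inter-demand intervals); the occurrence probability p is an assumed placeholder since the values used in the study are elided in this excerpt, while the 120-period horizon and the obsolescence cut-off after 60 periods are as described above.

import numpy as np

def gen_demand(periods=120, p=0.3, g=0.8, obsolete_after=None, seed=0):
    # a nonzero demand occurs in each period with probability p (geometric intervals);
    # sizes are geometric: P[k] = (1 - g)**(k - 1) * g for k = 1, 2, ...
    rng = np.random.default_rng(seed)
    occurs = rng.random(periods) < p
    if obsolete_after is not None:
        occurs[obsolete_after:] = False          # sudden obsolescence
    sizes = rng.geometric(g, size=periods)       # numpy's geometric is supported on 1, 2, ...
    return occurs * sizes

demand = gen_demand(p=0.3, g=0.8, obsolete_after=60)   # intermittent demand that becomes obsolete
print(demand[:20], demand[60:].sum())                  # no demand at all after period 60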
as found by teunter _the differences between tsb and cr , sba and sy are more pronounced because the latter are given no opportunity to adjust to the change in demand pattern .the results are qualitatively similar to those for decreasing demand , but with greater differences .it is noted in that although tsb is unbiased in the above sense , it is biased if we only compare forecasts with expected demand at issue points only ( that is when demand occurs ) .ses is similarly biased , but not croston methods such as sba or sy .the cause is the decay in forecast size between demands , and he s will clearly suffer from a similar bias .we repeated the stationary demand experiments with logarithmically and geometrically distributed demand sizes , and measured the bias of tsb and he s based on issue points only .both had greater bias than sy ( as expected ) but neither dominated the other . on stationary demandsba performs very well , followed by tsb and he s .the relative performance of he s and tsb depends on how they are tuned .if we tune them using mmr then tsb beats he s under both the rgrmse and pb measures , but the best smoothing factors are erratic ; while if we tune them using u2 there is no significant difference between them , and the best smoothing factors are small . we prefer to take the u2-based results , not because they are better for our method but because they are more consistent with other work : teunter _ et al ._ found that small smoothing factors are best for stationary demand , though admittedly this could simply be because they used another ( unscaled ) mean squared error measure .intuitively this makes sense , whereas using mmr we found no consistent results for the best smoothing factors . also recommend tuning by u2 .thus we recommend tuning forecasting methods using u2 rather than mmr , and when doing this small smoothing factors are best for stationary demand .under these conditions tsb and he s seem to be equally good under two different measures , though they are slightly beaten by sba .he s is more robust than tsb under changes to the smoothing factors , with better worst - case behaviour as measured by both mmr and u2 .we believe that this is because he s s hyperbolic decay between demands is slower than tsb s exponential decay , so large smoothing factors are less harmful .but on non - stationary demand , with intervals increasing linearly or abruptly , tsb is best followed by he s , with cr , sba and sy giving worse performance if we consider the same range of smoothing factor settings . heretsb s greater reactivity serves it well . as found by other researchers , and as is intuitively clear , large smoothing factors are best at handling changes in demand patternhowever , he s s robustness means that we can recommend smoothing factors that behave reasonably well on both stationary and non - stationary demand : .the results in all cases are not much worse than with optimally - tuned factors .we presented a new forecasting method called hyperbolic - exponential smoothing ( he s ) , which is based on an application of bayesian inference when no demand occurs .we showed theoretically that he s is approximately unbiased , and compared it empirically with croston variants cr , sba , sy and tsb . on stationary demand we found little difference between tsb and he s , though he s was more robust under changes to smoothing factors , and both performed well against other croston methods . 
on non - stationary demand tsbperformed best , followed by he s .like tsb , he s has two smoothing factors and . in common with other methods ,he s performs best on stationary demand with small smoothing factors , and best on non - stationary demand with larger factors . using smoothing factors gave reasonable results on a variety of demand patterns and we recommend these values .thanks to aris syntetos for helpful advice , and to the anonymous referees for useful criticism .this work was partially funded by enterprise ireland innovation voucher iv-2009 - 2092 . 10 j. s. armstrong , f. collopy .error measures for generalizing about forecasting methods : empirical comparisons ._ international journal of forecasting _ * 8*:6980 , 1992 .j. e. boylan , a. a. syntetos .the accuracy of a modified croston procedure . _ international journal of production economics _ * 107*:511517 , 2007 .d. c. chatfield , j. c. hayyab .all - zero forecasts for lumpy demand : a factorial study . _ international journal of production research _ * 45*(4):935950 , 2007 .j. d. croston .forecasting and stock control for intermittent demands ._ operational research quarterly _ * 23*(3):289304 , 1972 .r. fildes .the evaluation of extrapolative forecasting methods ._ international journal of forecasting _ * 8*(1):8198 , 1992 .r. fildes , k. nikolopoulos , s. f. crone , a. a. syntetos .forecasting and operational research : a review ._ journal of the operational research society _* 59*:11501172 , 2008 .e. s. gardner jr .exponential smoothing : the state of the art part ii ._ international journal of forecasting _ * 22*(4):637666 , 2006 .a. a. ghobbar , c. h. friend .evaluation of forecasting methods for intermittent parts demand in the field of aviation : a predictive model ._ computers & operations research _ * 30*:20972114 , 2003 .j. d. de gooijer , r. j. hyndman .25 years of iif time series forecasting : a selective review , tinbergen institute discussion paper no 05 - 068/4 , tinbergen institute , 2005 .r. j. hyndman , a. b. koehler .another look at measures of forecast accuracy . _ international journal of forecasting _ * 22*(4):679688 , 2006 .s. kolassa , w. schtz .advantages of the mad / mean ratio over the mape ._ foresight _ * 6*:40-43 , 2007 .essai philosophique sur les probabilits .paris : courcier , 1814 .e. levn , a. segerstedt . inventory control with a modified croston procedure and erlang distribution ._ international journal of production economics _ * 90*(3):361 - 367 , 2004 .d. poole , a. mackworth .artificial intelligence : foundations of computational agents .cambridge university press , 2010 .s. d. prestwich , s. a. tarim , r. rossi , and b. hnich . forecasting intermittent demand by hyperbolic - exponential smoothing . _international journal of forecasting _ , 2014 ( to appear ) .a. a. syntetos .forecasting for intermittent demand .unpublished phd thesis , buckinghamshire chilterns university college , brunel university , 2001 .a. a. syntetos , j. e. boylan .the accuracy of intermittent demand estimates . _ international journal of forecasting _ * 21*:303314 , 2005 .a. a. syntetos , j. e. boylan , s. m. disney .forecasting for inventory planning : a 50-year review ._ journal of the operations research society _* 60*:149160 , 2009 .a. syntetos , z. babai , d. lengu , n. altay .distributional assumptions for parametric forecasting of intermittent demand . in : n.altay & a. litteral ( eds . 
) , service parts management : demand forecasting and inventory control , springer verlag , ny , usa , 2011 , pp.3152 .r. teunter , b. sani . on the bias of croston s forecasting method ._ european journal of operations research _ * 194*:177183 , 2007 .r. teunter , a. a. syntetos , m. z. babai .intermittent demand : linking forecasting to inventory obsolescence ._ european journal of operations research _ * 214*(3):606615 , 2011 .applied economic forecasting .rand mcnally , 1966 ..logarithmic demand sizes with [ cols= " > , > , > , > , > , > , > , > , > , > , > " , ]
croston s method is generally viewed as superior to exponential smoothing when demand is intermittent , but it has the drawbacks of bias and an inability to deal with obsolescence , in which an item s demand ceases altogether . several variants have been reported , some of which are unbiased on certain types of demand , but only one recent variant addresses the problem of obsolescence . we describe a new hybrid of croston s method and bayesian inference called hyperbolic - exponential smoothing , which is unbiased on non - intermittent and stochastic intermittent demand , decays hyperbolically when obsolescence occurs and performs well in experiments .
sophisticated solutions have been studied for future cellular systems aiming at improving both the uplink and downlink data rates . for instance , introducing multiple antennas at both base stations ( bss ) and mobile stations ( mss ) greatly increases the achievable rates .furthermore , employing relays in these systems extends the coverage and enhances the performance . however , interference is still the main performance limiting factor in cellular systems .the transmission rate , especially when the mss are located at the cell edges , is greatly influenced by the inter - cell interferences .for instance for a cell edge ms , the received interference signal in the downlink can be severe and even of a comparable strength as the useful signal , which degrades the achieved rate significantly . to enhance the performance in cellular systems , smart spatial signal processing techniques at the bss and the mss , and also at the relays if they are employed in the system , need to be found .apart from joint processing techniques which require data exchange among the cooperating parties , we focus on distributed signal processing techniques which require only the exchange of channel state information . before discussing the sum rate ,we first briefly review two interference reduction techniques which have been studied extensively , i.e. , interference alignment ( ia ) and sum mean square error ( mse ) minimization .ia is achieved by aligning all the interferences in a smaller subspace of the received signal space while keeping the useful signal subspace interference free .ia has received great attention in the last few years . basically , the ia problem has the nice property that it is a multi - affine problem .therefore , it can be tackled by alternatively solving several linear subproblems . for instance , the ia problem is a tri - affine problem if relays are employed .firstly , the filters of the bss are optimized with fixed relay processing matrices and fixed filters at the mss .secondly , the relay processing matrices are optimized with fixed filters at the bss and mss . at the third step, the filters of the mss are optimized with fixed filters at the bss and fixed relay processing matrices . however , since ia ignores the received noise , it performs poorly at low and moderate signal to noise ratios ( snrs ) . on the other hand ,optimizing the spatial filters at the bss and the mss , as well as the processing matrices at the relays if they are employed , for minimizing the sum mse always achieves a compromise between interference reduction and noise reduction . in general , the sum mse is not a convex function .however , it is a convex function of either the filters at the bs , the filters at the mss or the relay processing matrices alone .this multi - convex structure of the sum mse function also allows alternating minimization algorithms to achieve a local minimum .nevertheless , minimizing the sum mse does not necessarily imply achieving the maximum sum rate .furthermore , it is worth to mention here that minimizing the sum bit error rate ( ber ) is an alternative objective to minimizing the sum mse .however , it is complicated to optimize the sum ber in multi - user multi - antenna scenarios as the ber has no closed form solution . 
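the alternating-optimisation idea behind both the multi-affine ia conditions and the multi-convex sum-mse objective can be illustrated on a small bi-convex toy problem; the rank-one fitting problem below stands in for the filter subproblems in structure only, not in physical meaning.

import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 8))

# min_{x, y} || M - x y^T ||_F^2 is convex in x for fixed y and in y for fixed x,
# so alternating between the two closed-form least-squares updates monotonically
# decreases the cost (to a stationary point, here the best rank-one fit).
x, y = rng.standard_normal(6), rng.standard_normal(8)
for _ in range(500):
    x = M @ y / (y @ y)          # convex subproblem in x with y fixed
    y = M.T @ x / (x @ x)        # convex subproblem in y with x fixed

s = np.linalg.svd(M, compute_uv=False)
print(np.linalg.norm(M - np.outer(x, y)), np.linalg.norm(s[1:]))   # the two residuals agree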
besides the techniques mentioned above , directly maximizingthe sum rate is a promising goal for efficiently utilizing the limited available system resources .if the interference is treated as noise and some power constraints are considered , the sum rate maximization problem is a non - convex optimization problem .this non - convexity of the sum rate maximization problem holds even if we optimize over either the filters at the bss only , the filters at the mss only or the relay processing matrices only .therefore , iterative alternating optimization algorithms can not be directly implemented here . in the last decade , a lot of progress has been made in finding efficient sum rate maximization algorithms .algorithms from global optimization theory are proposed for finding the global maximum of the sum rate . nevertheless , these algorithms suffer from high computational complexity which limits their practicality to small scenarios only .unlike the computationally expensive global optimization algorithms , relatively low complexity suboptimum algorithms have also received great attention .basically , the special structure of the sum rate function can be exploited to achieve a near optimum sum rate . in ,an interference broadcast channel is considered .instead of maximizing the sum rate , the authors maximizes the product of the snrs at the mss . rather than optimizing the filters at the bs and the mss all together, it is shown that the problem can be simplified to three subproblems , which are not necessarily convex .each subproblem aims at optimizing either the transmit powers , the bs filters or the ms filters .geometric programming is employed for approximating the solution of the non - convex subproblems . in ,some auxiliary variables are used to simplify the sum rate maximization problem in a broadcast channel .the authors introduce new variables to the problem such that the multiple constraints can be equivalently written as a single constraint .the sum rate function can be written as a difference of two concave functions .accordingly , the authors of linearly relax the second term and solve the resulting problem iteratively .some authors also exploit the minimized mse to maximize the sum rate . from the information theory perspective ,_ _ al . _ have found that there is a linear relationship between the derivative of the mutual information and the minimum mse for gaussian channels .moreover , it is shown in that this relationship holds for any wireless system with linear filters . considering a broadcast channel scenario ,the relationship between the derivative of the mutual information and the minimum mse can be exploited by designing the receive filters such that the mse at the receivers is minimized . in this case, the mse will be a function of the transmit filters .accordingly , the sum rate maximization problem for optimizing the transmit filters can be formulated as a minimization of the sum of log - mses .an approximate solution of this new formulation is found using geometric programming . designing the receive filters to minimize the mse and optimizing the remaining variables to maximize the sum rateis also considered in .it is shown that by relaxing the sum rate maximization problem and adding some auxiliary variables , a successive convex approximation approach can be applied .because of the problem relaxations , this approach does not converge to neither a local maximum nor the global maximum of the original problem . 
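the mmse-rate relationship exploited by these approaches can be checked numerically for a single data stream: with a linear mmse receiver and unit-power symbols, the minimum mse equals 1/(1 + sinr), so the achievable rate log2(1 + sinr) equals -log2(mmse). the effective channels below are randomly drawn purely for illustration.

import numpy as np

rng = np.random.default_rng(3)
N = 4                                                                               # receive antennas (illustration)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)             # desired effective channel
G = (rng.standard_normal((N, 3)) + 1j * rng.standard_normal((N, 3))) / np.sqrt(2)   # interference channels
R = G @ G.conj().T + 0.1 * np.eye(N)                          # interference-plus-noise covariance

u = np.linalg.solve(R + np.outer(h, h.conj()), h)             # lmmse receive filter
mse = np.real(1 - h.conj() @ u)                               # minimum mse
sinr = np.real(h.conj() @ np.linalg.solve(R, h))              # sinr with the mmse receiver

print(np.isclose(mse, 1 / (1 + sinr)))                        # True
print(np.isclose(np.log2(1 + sinr), -np.log2(mse)))           # True: rate = -log2(mmse)

this identity is what allows the receive filters to be fixed at their mmse solution while the remaining variables are optimised for the sum rate.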
for a broadcast channel scenario ,the receive filters are designed aiming at minimizing the mse and by adding some auxiliary variables , the sum rate maximization problem is reformulated as a biconvex optimization problem of the transmit filter and the added auxiliary variables .this work is extended to many different scenarios such as mimo interference channels , interfering broadcast channels , and relay interference channels .the main drawback of this approach is that the receive filters are not optimized to maximize the sum rate . in the present paper ,we aim at formulating the sum rate maximization problem as a multi - convex problem so that it can be efficiently solved by low complexity iterative algorithms .we specifically consider the downlink transmission in a cellular scenario with bss serving multiple mss , although the same approach can be applied to the uplink transmission as well .the transmission from the bss to the mss takes place either through several non - regenerative relays or directly without relays .first , we focus on describing our approach for a two - hop transmission scheme where relays are employed .then , we show that the approach can also be applied to other wireless systems by taking the single - hop transmission scheme without relays as an example .the key idea of our approach is to replace the signal to interference plus noise ratio ( sinr ) at a ms by a new term whose maximal value is found to be 1+sinr . using this new term, we formulate a multi - concave objective function .we will show that this objective function has the same maxima as the sum rate function and , therefore , maximizing this objective function is equivalent to maximizing the sum rate function .the rest of this paper is organized as follows . in the next section ,a two - hop transmission scenario and a single - hop transmission scenario are described . in section [ sec:5 ] ,the two - hop transmission is first investigated and based on it , the multi - convex formulation of the sum rate is derived .an iterative sum rate maximization algorithm is proposed in section [ sec:5.5 ] . to show that our idea is quite general and fits in many scenarios , we derive the multi - convex problem formulation for the single - hop transmission in section [ sec:6 ] .a few additional aspects are discussed in section [ sec : disc ] and the performance of the proposed algorithm is shown in section [ sec:7 ] . in section [ sec:8 ] ,the conclusions are drawn .in this paper , we will consider two related scenarios , i.e. , a two - hop interference broadcast scenario and a single - hop interference broadcast scenario .the former will be described here , and the latter will be described in section [ sec:2.2 ] . a downlink cellular scenario consisting of cells is considered .each cell contains a bs with antennas , and mss with antennas each .we first assume that the direct channels between the bss and the mss are relatively weak due to the radio environment so that they can be neglected . to enable the communication between the bss and the mss , half - duplex relays with antennaseach are deployed in the scenario .the transmission takes place in two subsequent time slots as illustrated in fig .[ fig : sce_1 ] . in the first time slot ,the bss transmit to the relays . in the second time slot ,the relays retransmit a linearly processed version of what they received in the first time slot to the mss .the channels between the communication parties are assumed to remain constant during the transmission . 
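as a rough numerical illustration of the two - hop transmission just described , the following sketch simulates one channel use : the base station precodes its data symbols , the relay forwards a linearly processed ( amplify - and - forward ) version of what it received in the first time slot , and the mobile station applies a linear receive filter . all dimensions , gains and variable names are assumptions made for the sketch and do not follow the paper's notation .

```python
import numpy as np

rng = np.random.default_rng(0)
m_bs, m_rs, m_ms = 4, 2, 2      # antennas at the bs, the relay and the ms (illustrative)
sigma2 = 0.1                    # noise variance at the relay and at the ms (illustrative)

# two unit-power complex data symbols and an (illustrative) linear transmit filter at the bs
d = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2.0)
Q = 0.2 * rng.standard_normal((m_bs, 2))

# i.i.d. complex gaussian channels: bs -> relay and relay -> ms
H1 = (rng.standard_normal((m_rs, m_bs)) + 1j * rng.standard_normal((m_rs, m_bs))) / np.sqrt(2.0)
H2 = (rng.standard_normal((m_ms, m_rs)) + 1j * rng.standard_normal((m_ms, m_rs))) / np.sqrt(2.0)
G = 0.5 * rng.standard_normal((m_rs, m_rs))     # relay processing matrix (amplify-and-forward)

# first time slot: the bs transmits, the relay receives
n_relay = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(m_rs) + 1j * rng.standard_normal(m_rs))
r = H1 @ Q @ d + n_relay

# second time slot: the relay retransmits a linearly processed version of what it received
n_ms = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(m_ms) + 1j * rng.standard_normal(m_ms))
y = H2 @ G @ r + n_ms

# linear receive filter at the ms yields the estimated data symbol
u = rng.standard_normal(m_ms) + 1j * rng.standard_normal(m_ms)
d_hat = np.conj(u) @ y
print("estimated data symbol:", d_hat)
```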
to simplify the discussions, we assume that each ms receives a single desired data symbol from the corresponding bs .accordingly , each bs transmits simultaneously complex valued data symbols with .a direct extension to the case where multiple data symbols are desired will be briefly discussed in section [ sec:7 ] .[ c][c][0.65]bs [ c][c][0.65]bs [ c][c][0.65]relay [ c][c][0.65]relay [ c][c][0.65]ms [ c][c][0.65]ms [ c][c][0.65]ms [ c][c][0.65] [ c][c][0.65]ms [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] cell scenario with bss , relays and mss .the transmissions in the first and the second time slot are illustrated by the dotted and solid arrows , respectively.,title="fig:",width=336 ] + let , , and denote the indices of the bss , the mss , and the relays , respectively .then , the data symbol transmitted by the corresponding bs for the -th ms is denoted by and all the data symbols transmitted by the -th bs are denoted by the vector . for each bs ,the transmitted data symbols are pre - processed by a linear transmit filter denoted by .the signal vector transmitted by the -th bs reads the received signal vector at the -th relay is where denotes the channel matrix between the -th bs and the -th relay , and represents the noises at the different antennas of the relay , which are assumed to be independently identically distributed ( i.i.d . )gaussian with zero mean and variance .it is assumed that the number of antennas at a relay is not large enough to spatially separate the received signals , i.e. , .therefore , the amplify and forward relaying protocol is considered .the -th relay linearly processes its received signals with the matrix and the transmitted signal of the -th relay is denoted by furthermore , the received signal vector at the -th ms is where denotes the channel matrix between the -th relay and the -th ms , and represents the noises at the ms , which are also assumed to be i.i.d .gaussian with zero mean and variance .then the -th ms can linearly post - process its received signal vector using a linear receive filter to obtain the estimated data symbol as suppose the duration of each time slot is normalized to one , it is assumed that the transmitted data symbols are uncorrelated and they have the same average power for all where denotes the expectation .moreover , the sum power constraint at the bss is given by the sum power constraint at the relays is given by the second scenario we consider is similar to the one described in section [ sec:2.1 ] , except that the direct channels between the bss and mss are assumed to be usable and no relays are deployed .the mss receive signals directly from the bss within a single time slot as illustrated in fig .[ fig : sce_2 ] .therefore , the received signal vector at the -th ms reads where denotes the channel matrix between the -th ms and the -th bs .similar to ( [ eq : rec_d_hat_1 ] ) , the estimated data symbol at the -th ms is calculated as furthermore , only the power constraint ( [ eq : tot_bs ] ) at the bss is relevant for the single - hop scenario .[ c][c][0.65]bs [ c][c][0.65]bs [ c][c][0.65]ms [ c][c][0.65]ms [ c][c][0.65]ms [ c][c][0.65] [ c][c][0.65]ms [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] [ c][c][0.7] cell scenario with bss and mss.,title="fig:",width=336 ]to simplify the notations for the rest of this paper , we will partition the system variables into three 
disjoint sets with the variables being kept in a certain order for plugging them in a function argument , namely the tuple of the transmit filters the tuple of the relay processing matrices and the tuple of the receive filters in this section , we formulate the sum rate maximization problem for the two - hop interference broadcast scenario described in section [ sec:2.1 ] . using the notations introduced above, it can be observed from ( [ eq : rec_d_hat_1 ] ) that the estimated data symbol at the -th ms is a tri - affine function of the tuple of the transmit filters , the tuple of the relay processing matrices , and the receive filter .let denote the -th column of .then , ( [ eq : rec_d_hat_1 ] ) can be rewritten as where is the effective useful link of the -th ms including the relays and the transmit filter vector .let be an diagonal matrix where all the diagonal elements are ones except for the -th diagonal element being zero .the received interference plus noise at the -th ms is given by the first term and the second term of ( [ eq : interference_noise ] ) represent the received intra - cell and inter - cell interference , respectively .the last two terms of ( [ eq : interference_noise ] ) describe the noises received by the -th ms , including the noise retransmitted by the relays . based on this ,the receive sinr at the -th ms can be written as and thus , the sum rate is calculated as which is a function of the tuples of variables , , and . for the considered two - hop transmission scheme ,the sum rate maximization problem for optimizing the transmit filters , the relay processing matrices and the receive filters with the sum power constraints at the bss and at the relays can be stated as subject to and where the constraints of ( [ eq : con1.1 ] ) and ( [ eq : con1.2 ] ) follow from ( [ eq : tot_bs ] ) and ( [ eq : tot_relay ] ) , respectively .the sum power constraint of ( [ eq : con1.1 ] ) at the bss is a convex set of the transmit filters .furthermore , the sum power constraint of ( [ eq : con1.2 ] ) at the relays is a biconvex set of the transmit filters and the relay processing matrices .however , the objective function the sum rate function is not a concave function of , , and .therefore , the optimization problem of ( [ eq : obj1])([eq : con1.2 ] ) is a non - convex problem . with a closer look at the structure of the sum rate function of ( [ eq : sumrate ] ) , one can observe that the achieved rate at a ms is a logarithmic function of .basically , the main difficulty of handling the sinr function of ( [ eq : sinr ] ) is that both its nominator and denominator are functions of the system variables , see ( [ eq : eff_ch ] ) and ( [ eq : interference_noise ] ) . in order to reformulate the optimization problem of ( [ eq : obj1])([eq : con1.2 ] ) as a multi - convex optimization problem ,a term related to the sinr is introduced in the following proposition .[ def:1 ] let be a scaling factor which scales the -th transmitted data symbol .then , the function has a single maximum being equal to , where is defined in . using the function ( [ eq : eta_1 ] )can be written as since described in ( [ eq : rec_d_hat_1 ] ) is a tri - affine function of , , and , the function + described in ( [ eq : g_fun ] ) is a tri - convex function of , and for a fixed . 
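the receive sinr of ( [ eq : sinr ] ) and the sum rate of ( [ eq : sumrate ] ) introduced above can be evaluated for given filters once the effective per - stream gains are known . the sketch below uses a generic per - stream model in which interference is treated as noise ; the pre - log factor for the half - duplex two - hop scheme and all dimensions are assumptions , not the paper's exact definitions .

```python
import numpy as np

def receive_sinr(k, eff_links, noise_var):
    """SINR of user k when interference is treated as noise.

    eff_links[k][j] is the (scalar) effective gain from data stream j to the
    output of user k's receive filter; eff_links[k][k] is the useful link.
    """
    useful = np.abs(eff_links[k][k]) ** 2
    interference = sum(np.abs(eff_links[k][j]) ** 2
                       for j in range(len(eff_links)) if j != k)
    return useful / (interference + noise_var)

def sum_rate(eff_links, noise_var, prelog=1.0):
    # prelog would be 1/2 for a half-duplex two-hop scheme (assumption)
    K = len(eff_links)
    return prelog * sum(np.log2(1.0 + receive_sinr(k, eff_links, noise_var)) for k in range(K))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = 6   # illustrative number of data streams / mobile stations
    eff = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2.0)
    print("sum rate [bit / channel use]:", sum_rate(eff, noise_var=0.1, prelog=0.5))
```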
by calculating the general derivative of with respect to and setting the result to zero ,two stationary points can be calculated as and by substituting ( [ eq : w_0 ] ) and ( [ eq : w_opt ] ) in ( [ eq : eta_2 ] ) , the values of at and , respectively , are calculated as and considering the fact that the function described in ( [ eq : eta_1 ] ) is non - negative and the function must achieve its maximum at .the nice property of is that just its denominator is a function of the system variables , , and whereas for defined in ( [ eq : sinr ] ) , both the nominator and the denominator are functions of the system variables .in the previous section , it has been shown that the term is equivalent to the received sinr at the -th ms when the scaling factor is optimized using ( [ eq : w_opt ] ) .let be a vector of the scaling factors and let the elements of be chosen as ( [ eq : w_opt ] ) .then , the function is equivalent to the sum rate function of ( [ eq : sumrate ] ) in the sense that both have the same local and global maxima if holds . to show the concavity of with respect to the tuples , and , using ( [ eq : eta_2 ] ) , ( [ eq : f1 ] ) can be rewritten as in ( [ eq : obj2.1 ] ) , just the second term includes the system variables .although the function + is a tri - convex function of , and when is fixed , + is not necessarily convex .accordingly , we aim at finding a new equivalent objective function which is linear in such that we can exploit the fact that is a multi - convex function of the system variables . in this section ,the optimization problem of ( [ eq : obj1])([eq : con1.2 ] ) is reformulated as a multi - convex optimization problem .let be a vector of additional scaling factors .then , the function is obviously a concave function of . by taking the first order derivative of with respect to and setting the result to zero , the optimum scaling factor is calculated as substituting ( [ eq : t_opt ] ) in ( [ eq : obj2.2 ] ) yields from ( [ eq : obj_equi ] ), it can be concluded that the new objective function is equivalent to the sum rate function in the sense that they both have the same global and local maxima if the optimum scaling factors in and are chosen .moreover , the function has a single maximum at if , , , and are fixed , and the function is * a concave function of if , , , and are fixed because the logarithm is a concave monotonically increasing function , * a concave function of if , , , and are fixed because is a convex function of , * a concave function of if , , , and are fixed because is a convex function of , and * a concave function of if , , , and are fixed because , is a convex function of . accordingly , the sum rate maximization problem of ( [ eq : obj1])([eq : con1.2 ] ) can be equivalently formulated as a multi - convex optimization problem stated as subject to and this problem is a multi - convex problem of , , , and .the vectors and of the scaling factors can be optimized using ( [ eq : w_opt ] ) and ( [ eq : t_opt ] ) , respectively . with fixed scaling factors ,just the last term of ( [ eq : obj2.2 ] ) is relevant for optimizing the system variables and thus , the optimization problem ( [ eq : obj3])([eq : con3.2 ] ) can be stated as subject to and as described previously in section [ sec:5.2 ] , the function is a tri - convex function of , , and for fixed .moreover , the power constraints of ( [ eq : con6.1 ] ) and ( [ eq : con6.2 ] ) are a convex set and a biconvex set , respectively . 
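the two auxiliary - variable steps derived above can be checked in a scalar toy model . part ( a ) verifies the classical identity behind proposition [ def:1 ] , namely that the reciprocal of the minimum mean square error of an optimally scaled estimate equals 1 + sinr , and part ( b ) verifies a standard way of making the logarithm linear in its argument through a second positive scaling factor . the exact expressions of ( [ eq : eta_1 ] ) , ( [ eq : w_opt ] ) and ( [ eq : t_opt ] ) are not reproduced in the text above , so the sketch is an illustrative stand - in rather than the paper's derivation .

```python
import numpy as np

# (a) scalar mmse identity: 1 / min_w E|d - w*dhat|^2 = 1 + sinr,
#     for dhat = q*d + z with a unit-power symbol d and interference-plus-noise power sigma2
q, sigma2 = 1.3 - 0.4j, 0.25                        # illustrative effective gain and noise power
sinr = abs(q) ** 2 / sigma2
w_opt = np.conj(q) / (abs(q) ** 2 + sigma2)         # the minimizing (mmse) scaling factor
mmse = 1.0 - abs(q) ** 2 / (abs(q) ** 2 + sigma2)   # closed-form minimum of E|d - w*dhat|^2
print(1.0 / mmse, 1.0 + sinr)                       # both print the same value

# (b) linearizing the logarithm with a second positive variable t:
#     for x > 0, max over t > 0 of [log(t) - t*x + 1] equals -log(x), attained at t = 1/x
x = mmse
t = np.linspace(1e-3, 20.0, 200001)
surrogate = np.log(t) - t * x + 1.0
print(surrogate.max(), -np.log(x))                  # agree up to the grid resolution
```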
based on this , the optimization problem of ( [ eq : obj6])([eq : con6.2 ] ) is a tri - convex problem for fixed and . by taking the general derivative of with respect to and setting the result to zero, the optimum receive filter is calculated as the problem structure with respect to with fixed , , , and , is a convex quadratically constrained quadratic problem .tools from quadratic optimization can be applied to find the optimum transmit filters .similarly , with fixed , , , and , the optimization problem ( [ eq : obj6])([eq : con6.2 ] ) can be solved for the tuple of the relay processing matrices using the conventional quadratic optimization tools . in this section , an iterative algorithm which alternately maximizes the multi - concave objective function by sequentially optimizing , , , and is described .let be an arbitrarily small tolerance value .then , the proposed algorithm can be summarized as follows : set arbitrary initial values for , and set feasible initial values for and chosen such that the constraints of ( [ eq : con3.1 ] ) and ( [ eq : con3.2 ] ) hold in each iteration calculate given , and using ( [ eq : u_opt ] ) calculate given , , and using quadratic optimization tools calculate given , , and using quadratic optimization tools calculate given , , and using ( [ eq : t_opt ] ) calculate given , and using ( [ eq : w_opt ] ) stop if from the multi - convex optimization literature it is known that the above algorithm converges to a local maximum .in this section , we will show that the proposed multi - convex formulation of the sum rate and the iterative algorithm can be applied to the single - hop interference broadcast scenarios described in section [ sec:2.2 ] as well . from ( [ eq : dhat_direct ] ), one can observe that is a bi - affine function of the tuple of transmit filters and the receive filter .then ( [ eq : dhat_direct ] ) can be rewritten as where is the effective useful link corresponding to the -th ms including the transmit filter vector , and the effective interference plus noise received at the -th ms is in ( [ eq : z_direct ] ) , the first term and the second term represent the received intra - cell interference and the received inter - cell interference , respectively .the noise at the -th ms is described by the last term of ( [ eq : z_direct ] ) . substituting ( [ eq : q_direct ] ) and ( [ eq : z_direct ] ) into( [ eq : sinr ] ) , the receive sinr at the -th ms can be calculated .then , the sum rate can be calculated as and the sum rate maximization problem can be formulated as subject to similar to the two - hop scenario discussed in section [ sec:5.1 ] , this problem is non - convex .for the single - hop transmission , the function described in ( [ eq : g_fun ] ) can be redefined using ( [ eq : rec_ms3 ] ) .the multi - concave objective function can be written as based on this , the sum rate maximization problem can be formulated as a multi - convex optimization problem stated as subject to this problem is a multi - convex optimization problem of and if the optimum and are a priori chosen .as described in section [ sec:5.4 ] , the optimization problem of ( [ eq : obj5])([eq : con5 ] ) can be solved alternatingly over , , , and .the iterative algorithm can be summarized as follows : set arbitrary initial values for , and set feasible initial values for chosen such that the constraint of ( [ eq : con5 ] ) holds . 
in every iteration given and ( [ eq : u_opt ] ) calculate given , and using quadratic optimization tools calculate given , and ( [ eq : t_opt ] ) calculate given and ( [ eq : w_opt ] ) stop if key idea of the algorithm proposed in this paper is to find a multi - concave objective function which is equivalent to the sum rate function in the sense that they have the same maxima , e.g. , the function in ( [ eq : obj2.2 ] ) and the function in ( [ eq : obj5.2 ] ) .it can be observed from the analysis in the previous sections that the new objective function must be a multi - concave function of the system variables as long as the function described in ( [ eq : g_fun ] ) is a multi - convex function .this however only requires that each estimated data symbol is a multi - affine function of the system variables .therefore , we may conclude that for a system in which the estimated data symbols are multi - affine functions of the system variables , the sum rate maximization problem can be equivalently formulated as a multi - convex optimization problem . in this paper, we only considered the case where each ms receives a single desired data symbol from the corresponding bs . if more than one data symbol is desired by each ms , one can expect that the estimated data symbols at a ms are superposed by colored noise .therefore , a pre - whitening filter is required at each ms to decorrelate the noise signals , which results in that the estimated data symbols are no longer multi - affine functions of the system variables . however, if this correlation among the noise signals is ignored and the received data symbols are decoded symbol - wise , an approximate sum rate can still be maximized using the proposed algorithm .in the following simulation results , the performance of the proposed sum rate maximization algorithm is evaluated in a cellular scenario with cells , mss per cell , antennas at each bs , relays , and antennas at each relay and ms . the two - hop transmission scheme is applied . concerning the channel model , we employ an i.i.d . complex gaussian channel model with the average channel gain being normalized to one . to assess the performance of our proposed algorithm , two reference schemes are considered .firstly , an ia algorithm is considered where the tuple of the transmit filters , the tuple of the receive filters , and the tuple of the relay processing matrices are alternatingly optimized to minimize the total interference leakage in the system .the considered ia algorithm is a direct extension of the interference leakage minimization algorithm proposed in to a multiuser relay scenario .the second reference scheme is the sum mse minimization algorithm which minimizes the sum mse by alternatingly optimizing , , and .[ c][c]sum rate in bits use [ ct][ct] in db [ rb][rb]sum rate max .[ rb][rb]sum mse min .[ rb][rb]ia in decibel for a scenario with , , , and ,title="fig:",width=336 ] + firstly , the achieved sum rate per time slot is considered as a performance measure .the performance of the proposed algorithm is considered as a function of the pseudo snr which is defined as the ratio of the sum transmit power of all the bss and relays to the noise variance , i.e. , figure [ fig : result ] shows the performances of the three considered algorithms averaged over many different channel snapshots .it can be seen from fig .[ fig : result ] that the ia algorithm performs poorly as compared to the other two algorithms at low to moderate pseudo snrs . 
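the evaluation loop implied by this setup ( i.i.d . complex gaussian channel snapshots , a pseudo snr defined as the ratio of the total transmit power of all bss and relays to the noise variance , and averaging of the achieved sum rate over many snapshots ) can be sketched as follows . the sum - rate routine used here is a toy placeholder , and all dimensions and powers are assumptions .

```python
import numpy as np

def pseudo_snr_db(p_bs_total, p_relay_total, noise_var):
    # pseudo snr: ratio of the sum transmit power of all bss and relays to the noise variance
    return 10.0 * np.log10((p_bs_total + p_relay_total) / noise_var)

def average_sum_rate(algorithm, snapshots=100, seed=0):
    """Average the sum rate achieved by `algorithm` over i.i.d. complex Gaussian channel snapshots."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(snapshots):
        # unit average channel gain, i.i.d. complex gaussian entries (illustrative 6 x 8 dimensions)
        H = (rng.standard_normal((6, 8)) + 1j * rng.standard_normal((6, 8))) / np.sqrt(2.0)
        rates.append(algorithm(H))
    return float(np.mean(rates))

def toy_algorithm(H, noise_var=0.01):
    # placeholder standing in for a sum-rate maximization scheme: log-det rate of the compound channel
    sign, logdet = np.linalg.slogdet(np.eye(H.shape[0]) + H @ H.conj().T / noise_var)
    return logdet / np.log(2.0)

if __name__ == "__main__":
    print("pseudo snr:", pseudo_snr_db(p_bs_total=1.0, p_relay_total=1.0, noise_var=0.01), "dB")
    print("average sum rate over snapshots:", average_sum_rate(toy_algorithm))
```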
on the one hand, the ia algorithm does not consider noise reduction . on the other hand, the ia algorithm does not intend to improve the received powers of the useful signals when minimizing the interferences .that is to say , the ia algorithm does not maximize the received snrs at the mss . in the pseudo snr region shown in fig .[ fig : result ] , both the sum mse minimization and the sum rate maximization algorithms achieve superior performance as compared to the ia algorithm .however , the sum rate maximization algorithm outperforms the sum mse minimization algorithm on average .this shows that minimizing the sum mse does not necessarily achieve high sum rates . at high pseudo snrs ,interferences become more harmful . as ia aims at perfectly nullifying all the interferences ,the sum rates achieved by the ia algorithm increase approximately linearly with the pseudo snrs and the slope is related to the achieved degrees of freedom ( dofs ) .furthermore , if the sum mse minimization and the sum rate maximization algorithms are able to find the global optima , all three curves should have the same slope at high pseudo snrs .however , as the total available power increases , the feasible region described by the constraint sets of ( [ eq : con1.1 ] ) and ( [ eq : con1.2 ] ) enlarges and this complicates the search for a good local optimum for both the sum mse minimization and the sum rate maximization algorithms . as a result , both algorithms can not achieve the same dofs as the ia algorithm .[ ct][ct]sum rate in bits use [ c][c]probability density [ r][r]sum rate max .[ rb][rb]sum mse min .[ b][b]ia for a scenario with , , , and ,title="fig:",width=336 ] + next we will take a closer look at the convergence of the proposed sum rate maximization algorithm . in fig .[ fig : distribution ] , the approximated probability density of the sum rates achieved by the proposed sum rate maximization algorithm , the sum mse minimization algorithm , and the ia algorithm at a pseudo snr of are shown .one can observe that the ia algorithm sometimes achieves a high sum rate but the average performance remains low .this implies that the snr at each ms may vary across a wide range depending on the channel realization .the performance of the sum mse minimization algorithm is more stable than that of the ia algorithm because the received useful signal powers are forced close to the transmit signal powers , i.e. , the gains of the useful links are close to one .finally , the proposed sum rate maximization algorithm achieves the highest average sum rate with the smallest variance among the three considered algorithms in this case . for a randomly given channel realization, the algorithm converges , with high probability , to a solution which achieves a sum rate in the range between bits per channel use and bits per channel use .however , for some channel realizations , the algorithm may also converge to solutions achieving a sum rate of about bits per channel use or bits per channel use .the reason is that the sum rate maximization algorithm is not guaranteed to achieve a global maximum .if the auxiliary variables and are fixed , the optimization problem of ( [ eq : obj6])([eq : con6.2 ] ) is similar to a weighted sum mse minimization problem . how to find the global optimum for such a problem is still an open question. in fact , alternatingly adapting the sets of optimization variables may result in that one or several users are turned off . 
in our simulation results for instance, it may happen that zero , one , or even two of the six mss are turned off depending on the pseudo snr and the channel realizations . because of this , the ia algorithm can even outperform the proposed sum rate maximization algorithm at very high pseudo snrs .[ c][c]sum rate in bits use [ ct][ct]number of iterations [ cb][cb]sum rate max .[ cb][cb]sum mse min .[ cb][cb]ia for a scenario with , , , and ,title="fig:",width=336 ] + figure [ fig : convergence ] shows the average sum rate versus the number of iterations of the considered algorithms for the first 50 iterations at a pseudo snr of .since reducing interferences does not guarantee an increment of the sum rate , the sum rate achieved by the ia algorithm converges .the average sum rate achieved by the sum mse minimization algorithm slowly increases with the number of iterations . ascompared to the sum mse algorithm , the proposed sum rate maximization algorithm not only converges faster but also converges to a higher sum rate on average .the main reason is that the auxiliary variables and are adapted in every iteration to help maximizing the sum rate .furthermore , since these auxiliary variables can be calculated in closed form , our sum rate maximization algorithm is comparable with the sum mse minimization algorithms in terms of its computational complexity , which is mainly determined by the quadratic optimization tools used for optimizing the transmit filters at the bss and the relay processing matrices .in this paper , the sum rate maximization problem in cellular networks is considered .it is shown that by adding two sets of auxiliary variables , this problem can be formulated as a multi - convex optimization problem .the property of multi - convexity in the new formulation makes it possible to find a local optimum using a low complexity iterative algorithm .the new proposed multi - convex formulation is not limited to our considered scenario , but it can be applied to many multiuser wireless system in which the estimated data symbols are multi - affine functions of the system variables .this work is supported by deutsche forschungsgemeinschaft ( dfg ) , grants no .we2825/11 - 1 and kl907/5 - 1 .hussein al - shatri and anja klein performed this work in the context of the dfg funded collaborative research center ( sfb ) 1053 multi - mechanism - adaptation for the future internet ( maki ) .k. gomadam , v. cadambe , and s. jafar , `` a distributed numerical approach to interference alignment and applications to wireless interference networks , '' _ ieee transactions on information theory _ , vol .57 , no . 6 , pp . 33093322 , june 2011 .h. al - shatri and t. weber , `` interference alignment aided by non - regenerative relays for multiuser wireless networks , '' in _ proc .8th international symposium on wireless communication systems _ , aachen , germany , november 2011 , pp .271275 .s. w. peters and r. w. heath , `` interference alignment via alternating minimization , '' in _ proc .ieee international conference on acoustics , speech , and signal processing _ , taipei , taiwan , april 2009 , pp .24452448 .r. tresch , m. guillaud , and e. riegler , `` on the achievability of interference alignment in the -user constant interference channel , '' in _ proc .ieee / sp 15th workshop on statistical signal processing _ , cardiff , uk , august - september 2009 , pp .277280 .d. schmidt , c. shi , r. berry , m. honig , and w. 
utschick , `` minimum mean squared error interference alignment , '' in _ proc . the 43rd asilomar conference on signals , systems and computers _ , pacific grove , usa , november 2009 , pp .11061110 .s. ma , c. xing , y. fan , y .-c . wu , t .- s .ng , and h. poor , `` iterative transceiver design for relay networks with multiple sources , '' in _ proc .military communications conference _ , princeton , usa , october - november 2010 , pp .369374 .r. s. ganesan , h. al - shatri , t. weber , and a. klein , `` iterative filter design for multi - pair two - way multi - relay networks , '' in _ proc .ieee international conference on communications _ ,budapest , hungary , june 2013 , pp .45224526 .l. qian , y. zhang , and j. huang , `` : achieving global optimality for a non - convex wireless power control problem , '' in _ ieee transactions on wireless communications _ , vol . 8 , no . 3 , march 2009 , pp .15531563 .m. codreanu , a. tlli , m. juntti , and m. latva - aho , `` downlink weighted sum rate maximization with power constraints per antenna groups , '' in _ proc .ieee 65th vehicular technology conference _ , dublin , irland , april 2007 , pp .20482052 .n. vucic , s. shi , and m. schubert , `` programming approach for resource allocation in wireless networks , '' in _ proc .8th international symposium on modeling and optimization in mobile , ad hoc and wireless networks _ , avignon , france , may - june 2010 , pp .380386 .s. s. christensen , r. agarwal , e. de carvalho , and j. cioffi , `` weighted sum - rate maximization using weighted for beamforming design , '' _ ieee transactions on wireless communications _, vol . 7 , no . 12 , pp . 47924799 , december 2008 .j. kaleva , a. tlli , and m. juntti , `` weighted sum rate maximization for interfering broadcast channel via successive convex approximation , '' in _ proc .ieee global communications conference _ , anaheim , usa , december 2012 , pp. 38383843 .q. shi , m. razaviyayn , z .- q .luo , and c. he , `` an iteratively weighted approach to distributed sum - utility maximization for a interfering broadcast channel , '' _ ieee transactions on signal processing _ , vol .59 , no . 9 , pp .43314340 , september 2011 .k. truong and r. heath jr ., `` interference alignment for the multiple - antenna amplify - and - forward relay interference channel , '' in _ proc .45th asilomar conference on signals , systems and computers _ , pacific grove , usa , november 2011 , pp .12881292 .h. al - shatri , x. li , r. s. ganesan , a. klein , and t. weber , `` closed - form solutions for minimizing sum in multiuser relay networks , '' in _ proc .ieee 77th vehicular technology conference _ ,dresden , germany , june 2013 , pp. 1 5 .j. gorski , f. pfeuffer , and k. klamroth , `` biconvex sets and optimization with biconvex functions : a survey and extensions , '' _ mathematical methods of operations research _ , vol .66 , no . 3 , pp . 373407 , december 2007 .
in this paper , we propose a novel algorithm to maximize the sum rate in interference - limited scenarios where each user decodes its own message in the presence of unknown interference and noise , considering the signal - to - interference - plus - noise ratio . it is known that the problem of adapting the transmit and receive filters of the users to maximize the sum rate under a sum transmit power constraint is non - convex . our novel approach is to formulate the sum rate maximization problem as an equivalent multi - convex optimization problem by adding two sets of auxiliary variables . an iterative algorithm which alternatingly adjusts the system variables and the auxiliary variables is proposed to solve the multi - convex optimization problem . the proposed algorithm is applied to a downlink cellular scenario consisting of several cells , each of which contains a base station serving several mobile stations . we examine two cases , with and without several half - duplex amplify - and - forward relays assisting the transmission . a sum power constraint at the base stations and a sum power constraint at the relays are assumed . finally , we show that the proposed multi - convex formulation of the sum rate maximization problem is applicable to many other wireless systems in which the estimated data symbols are multi - affine functions of the system variables . sum rate maximization , interference , multi - convex function , amplify - and - forward relay .
to the current arrangement of traffic shapers and a scheduler in the access switch shown in fig.[fg : isp_traffic_control ] , both subscribers and internet service providers ( isps ) in shared access networks ( e.g. , cable internet or ethernet passive optical network ( epon ) ) can not exploit the benefits of full sharing of resources available in the network ; the capability of allocating available bandwidth by the scheduler ( e.g. , weighted fair queueing ( wfq ) ) is limited to traffic already shaped by token bucket filters ( tbfs ) per service contracts with subscribers .we recently proposed isp traffic control schemes based on core - stateless fair queueing ( csfq ) and token bucket meters ( tbms ) to allocate excess bandwidth among active subscribers in a fair and efficient way , while not compromising the service contracts specified by the original tbf for conformant subscribers . through the use of a common first in , first out ( fifo ) queue for both conformant and non - conformant packets, the proposed traffic control schemes can preserve packet sequence ; handling conformant and non - conformant packets differently at their arrivals , they give priority to the former , while allocating excess bandwidth to the latter proportional to their token generation rates . in this way , the proposed traffic control schemes address the critical issue in traffic shaping based on the original tbf that the excess bandwidth , resulting from the inactivity of some subscribers , can not be allocated to other active subscribers in the long term . the csfq - based schemes do not change negotiated token bucket parameters during their operations unlike modified token bucket algorithms ( e.g. , ) based on the modification of tbf algorithm itself and/or the change of its negotiated parameters during the operation , which may compromise the quality of service ( qos ) of conformant traffic as a result .the use of a common fifo queue in the csfq - based schemes , however , degrades the short - term performance of conformant traffic due to the presence of non - conformant packets already in the queue .also , the rate estimation based on exponential averaging makes it difficult to quickly react to rapid changes in traffic conditions .these may compromise the quality of service ( qos ) for conformant traffic in the short term compared to that under the original tbf , especially for highly bursty traffic like that of transmission control protocol ( tcp ) . to address the issues resulting from the use of common fifo queue and rate estimation based on exponential averaging in the csfq - based schemes, we propose a new isp traffic control scheme based on deficit round - robin ( drr ) , which ideally combines the advantages of both the original tbf ( i.e. , passing short , bursty conformant traffic without shaping ) and the weighted fair queueing ( wfq ) ( i.e. , allocating excess bandwidth among active flows proportional to their weights ) .the proposed isp traffic control scheme is to meet the following requirements in allocating excess bandwidth : the allocation of excess bandwidth should not compromise the qos of traffic conformant to service contracts based on the _ original token bucket _ algorithm ; excess bandwidth should be allocated among active subscribers proportional to their negotiated long - term average rates , i.e. 
, token generation rates .the first requirement implies that conformant packets should have priority over non - conformant ones in queueing and scheduling .the second requirement can be stated formally as follows : let and be _ excess bandwidth _ and _ _ \{__total arrival rate of non - conformant packets } at time for a shared access network with subscribers , i.e. , and , where is the capacity of the access link , the arrival rate of conformant packets for all subscribers , and the arrival rate of non - conformant packets for the subscriber , respectively . if , the _ normalized _ fair rate is a unique solution to where is the weight for the subscriber , which is proportional to the token generation rate ; otherwise , is set to .fig.[fg : isp_traffic_control_schemes ] shows two isp traffic control schemes enabling proportional allocation of excess bandwidth , i.e. , the csfq - based scheme and the proposed drr - based one . unlike the csfq - based scheme , the proposed scheme as it is based on drr uses per - subscriber queues , which separate traffic from different subscribers . also , unlike the drr - based reference scheme described in , the proposed scheme can preserve packet sequence , while giving priority to conformant packets , through managing logically separate queues per flow belonging to the same subscriber , i.e. , one for conformant and the other for non - conformant packets within a physical per - subscriber queue .algorithms[alg : enqueueing ] and[ alg : dequeueing ] show pseudocode of enqueueing and dequeueing procedures of the proposed scheme , where , , and are deficit , conformant byte , non - conformant byte counters and quantum for the subscriber , respectively . and are lists of active queues having conformant and non - conformant packets .the two additional counters ( i.e. , and ) are used to keep track of the number of bytes for conformant and non - conformant packets in a queue , which function as _ logically separate _queues within a common per - subscriber queue for sequence preserving ; to give priority to conformant packets in queueing , when a newly arrived conformant packet is to be discarded due to buffer overflow , is decreased instead , while is increased , which emulates preemptive queueing .+ ( a ) + + ( b ) likewise , the two lists of active queues are used to give conformant packets priority in scheduling by checking first during the dequeueing procedure ; as described in algorithm[alg : dequeueing ] , conformant packets are first scheduled in a round - robin manner ( i.e. , without taking into account deficit counters ) , while non - conformant packets , after serving all conformant packets in the queues , are scheduled based on drr for proportional allocation of excess bandwidth . * on receiving * a packet for the subscriber : * on receiving * a packet when all queues are empty or at the end of packet transmission : note that the proposed scheme does not use the rate estimation based on exponential averaging that makes it difficult for the csfq - based scheme to react promptly to rapid changes in traffic conditions and interact with tcp flows . also note that the proposed drr - based scheme , whose complexity is , has an advantage in complexity over the csfq - based scheme which corresponds to the extreme case of csfq islands , i.e. 
, the node itself is an island , where both the functionalities of edge and core routers of csfq reside in the same node .we compared the proposed scheme with the original tbf and the csfq - based one with buffer - based amendment using the simulation model described in , which is shown in fig.[fg : simulation_model ] .we model the ( optical ) distribution network ( ( o)dn ) using a virtual local area network ( vlan)-aware ethernet switch . to identify each subscriber ,we assign a unique vlan identifier ( vid ) ; the egress classification in the access switch is based on vids and the classified downstream flows go through the isp traffic control scheme as shown in fig.[fg : isp_traffic_control_schemes ] .subscribers are connected through 100-mb / s user - network interfaces ( unis ) to shared access with the same feeder and distribution rates of 100-mb / s , each of which receives packet streams from udp or tcp sources in the application server .the backbone rate ( i.e. , ) and the end - to - end round - trip time are set to 10 gb / s and 10 ms . for details of the simulation models and their implementation, readers are referred to . in the first experiment ,16 subscribers are divided into 4 groups , 4 subscribers per each for direct comparison with the results in :[fg : isp_traffic_control ] ) .it bears note in this regard that 16 subscribers are not much different from those for the deployment of current - generation time division multiplexing ( tdm)-pons ; for instance , the maximum allowable subscribers per pon in epon and gigabit pon ( gpon ) are 32,768 and 128 , respectively , but actual numbers are around 16 and 32 , respectively , due to optical budget in the odns and the average bandwidth per subscriber . ] for groups 1 - 3 , each subscriber receives a 1000-byte packet from a udp source at every 0.5 ms , resulting in the rate of 16 mb / s .token generation rates are set to 2.5 mb / s , 5 mb / s , and 7.5 mb / s for groups 1 , 2 , and 3 , respectively .their starting times are set to 0 s , 60 s , 120 s. for group 4 , each subscriber receives packets from a greedy tcp source with token generation rate of 10 mb / s and starting time of 180 s. token bucket size is set to 1 mb for all subscribers , and peak rate control is not used .the size of per - subscriber queues for both the original tbf ( denoted as `` rr+tbf '' ) and the proposed scheme ( `` proposed '' ) is set to 1 mb ( 16 mb in total ) .for csfq - based scheme ( `` csfq+tbm '' ) , the size of common fifo queue is set to 16 mb to cope with worst - case bursts resulting from 16 token buckets .the averaging constants for the estimation of flow rates ( ) and the normalized fair rate ( ) are set to 100 ms and 200 ms , respectively ; as for the buffer - based amendment , we set a threshold to 64 kb .fig.[fg : thruput_time_mixed ] shows flow throughput averaged over a 1-s interval from one sample run , demonstrating how quickly each scheme can respond to the changes in incoming traffic and allocate excess bandwidth accordingly .as expected , the original tbf scheme can not allocate excess bandwidth to active subscribers except for short periods of time around 60 s , 120 s and 180 s enabled by tokens in the buckets .the csfq - based and the proposed schemes , on the other hand , can allocate well excess bandwidth among udp flows until 180 s when tcp flows start ; of the two , the proposed scheme provides much better performance in terms of fluctuation . 
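to make the enqueueing and dequeueing procedures of algorithms [ alg : enqueueing ] and [ alg : dequeueing ] more concrete , here is a much - simplified sketch of the proposed per - subscriber drr scheme . it treats packets as ( subscriber , size , conformance flag ) tuples , assumes the token bucket meter has already marked each packet , iterates over subscribers in a fixed order instead of maintaining rotating active lists , and is not the authors' implementation .

```python
from collections import deque

class DrrIspShaperSketch:
    """Simplified sketch of the proposed per-subscriber DRR scheme.

    Each subscriber owns one physical FIFO queue; two byte counters emulate
    logically separate conformant / non-conformant queues inside it, so packet
    order is preserved. Queues holding conformant bytes are served first in
    plain round-robin order; the remaining backlog is served with deficit
    round-robin, which shares the excess bandwidth proportionally to the quanta.
    """

    def __init__(self, quanta, buffer_bytes):
        self.quantum = dict(quanta)                   # per-subscriber quantum, proportional to token rate
        self.buffer = buffer_bytes                    # per-subscriber buffer size in bytes
        self.q = {i: deque() for i in self.quantum}   # physical per-subscriber queues
        self.conf = {i: 0 for i in self.quantum}      # conformant bytes queued per subscriber
        self.nonc = {i: 0 for i in self.quantum}      # non-conformant bytes queued per subscriber
        self.deficit = {i: 0 for i in self.quantum}   # DRR deficit counters

    def enqueue(self, sub, size, conformant):
        if self.conf[sub] + self.nonc[sub] + size > self.buffer:
            if not conformant or self.nonc[sub] < size:
                return False                          # buffer full: drop the arriving packet
            # rough emulation of preemptive queueing: charge the arriving
            # conformant bytes against the non-conformant backlog instead of dropping them
            self.nonc[sub] -= size
        self.q[sub].append((size, conformant))
        if conformant:
            self.conf[sub] += size
        else:
            self.nonc[sub] += size
        return True

    def dequeue(self):
        # pass 1: queues holding conformant bytes, plain round robin (no deficit accounting);
        # the head-of-line packet is sent whatever its type, so packet sequence is preserved
        for sub, queue in self.q.items():
            if self.conf[sub] > 0 and queue:
                return self._send(sub)
        # pass 2: remaining (non-conformant) backlog, deficit round robin
        for sub, queue in self.q.items():
            if not queue:
                self.deficit[sub] = 0
                continue
            self.deficit[sub] += self.quantum[sub]
            if queue[0][0] <= self.deficit[sub]:
                self.deficit[sub] -= queue[0][0]
                return self._send(sub)
        # a real scheduler would keep adding quanta until some head packet fits;
        # the sketch simply reports that nothing was eligible this round
        return None

    def _send(self, sub):
        size, conformant = self.q[sub].popleft()
        book = self.conf if conformant else self.nonc
        book[sub] = max(0, book[sub] - size)          # counters may have been pre-charged in enqueue
        return sub, size
```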
due to 1-mb token buckets, there are spikes in the throughput of newly started flows at 60 s and 120 s , during which the throughput of existing flows decreases temporarily .as tcp flows start at 180 s , the difference between the csfq - based and the proposed schemes become even clearer : as for the csfq - based scheme , the buffer - based amendment reduces the transient period , but at the expense of fluctuations in steady states . with the proposed scheme, there is virtually no fluctuation in tcp flow throughputs as well , but there is small increase in tcp throughput which lasts from 180 s to 197 s due to 1-mb token buckets , which is also the case for the original tbf .[ fg : thruput_avg ] shows the average throughput of flows for two 40-s periods i.e. , a subperiod ( 60 s ) minus a transient period ( 20 s ) with 95 percent confidence intervals from 10 repetitions , demonstrating static performance of each scheme ( i.e. , _ how exactly _ it can allocate available bandwidth among subscribers per the requirements described in sec .[ sec-2 ] in a steady state ) .as shown in fig.[fg : thruput_avg ] ( a ) , the csfq - based scheme suffers from the fluctuations observed in fig.[fg : thruput_time_mixed ] , while the proposed scheme allocates excess bandwidth from group 4 , which is inactive during this period , exactly per ( [ eq : fair_rate ] ) .fig.[fg : thruput_avg ] ( b ) shows that with tcp flows , the difference between the actual throughput of a flow and its fair share indicated by dotted lines become larger for the csfq - based scheme ; note that during this period , because there is no excess bandwidth available , each flow should be allocated bandwidth per its token generation rate , which is why the original tbf scheme shows as good performance as the proposed one . to check the scalability of the proposed scheme , we also ran simulations for a system with 1-gb / s access link and 160 subscribers ( each of 4 groups has 40 subscribers instead of 10 ) and obtained results nearly similar to those shown in figs.[fg : thruput_time_mixed ] and [ fg : thruput_avg ] .and ( b ) .,title="fig : " ] + ( a ) + and ( b ) .,title="fig : " ] + ( b ) to investigate transient responses of each scheme in shorter time scale , we also carried out another experiment where we consider 4 subscribers with token generation rate of 10 mb / s and token bucket size of 10 mb ; subscriber 1 receives a 10-mb conformant burst from the application server , while subscribers 2 - 4 receive non - conformant udp traffic with source rate of 50 mb / s .the flow throughput is averaged over a 10ms - interval to better show the details .fig.[fg : thruput_time_onoff ] illustrates the flow throughput before , during , and after the conformant burst for all three traffic control schemes , where we can clearly see that the proposed scheme provides the advantages of both the original tbf ( i.e. , passing the conformant burst without shaping and thereby any additional delay ) and the wfq ( i.e. , proportional allocation of excess bandwidth among active subscribers ) . in case of the csfq - based scheme ,however , the beginning of the burst is delayed by 1.11 s due to the presence of non - conformant packets already in the fifo queue . during the burst ,the allocation of bandwidth is quite distorted ( i.e. 
, subscriber 1 takes all the bandwidth ) because the csfq - based scheme can not respond quickly enough for traffic changes in such a short period of time .there is also delay after the burst in recovering the fair share of each subscriber .we have proposed a new drr - based isp traffic control scheme providing the advantages of both tbf and wfq .simulation results have demonstrated that the proposed scheme can guarantee the qos of conformant packets in all time scales while allocating excess bandwidth among active subscribers proportional to their token generation rates .also , unlike the csfq - based schemes , the proposed traffic control scheme does not have many design parameters affecting the performance of traffic control .now that we have an isp traffic control scheme which can allocate excess bandwidth among active subscribers in the long term while not compromising the qos of conformant traffic in the short term , it is time to investigate the business aspect of isp traffic control exploiting the excess bandwidth allocation as outlined in ; if we develop and implement isp traffic control schemes enabling excess bandwidth allocation and flexible service plans exploiting it , we could better meet user demand for bandwidth and qos even with the existing network infrastructure and save cost and energy for major network upgrade .i. stoica , s. shenker , and h. zhang , `` core - stateless fair queueing : a scalable architecture to approximate fair bandwidth allocations in high - speed networks , '' _ ieee / acm trans ._ , vol . 11 , no . 1 , pp . 3346 , feb . 2003 .d. abendroth , m. e. eckel , and u. killat , `` solving the trade - off between fairness and throughput : token bucket and leaky bucket - based weighted fair queueing schedulers , '' _ aeu - international journal of electronics and communications _ , vol .60 , no . 5 ,404407 , 2006 .
in shared access networks , shaping subscriber traffic based on token bucket filters by isps wastes network resources when there are few active subscribers , because it can not allocate excess bandwidth in the long term . to address this , traffic control schemes based on core - stateless fair queueing ( csfq ) and token bucket meters ( tbms ) have been proposed , which can allocate excess bandwidth among active subscribers proportional to their token generation rates . using a common fifo queue for all packets , however , degrades the short - term performance of conformant traffic due to the presence of non - conformant packets already in the queue . also , the rate estimation based on exponential averaging makes it difficult to react to rapid changes in traffic conditions . in this paper we propose a new traffic control scheme based on deficit round - robin ( drr ) and tbms to guarantee the quality of service of conformant packets in all time scales while allocating excess bandwidth among active subscribers proportional to their token generation rates ; its advantages over the csfq - based schemes are demonstrated through simulation results . access , internet service provider ( isp ) , traffic shaping , fair queueing , deficit round - robin ( drr ) , quality of service ( qos ) .
recent advances in estimation and identification ( see , _ e.g. _ , and the references therein ) stemming from mathematical control theory may be summarized by the two following facts : * their algebraic nature permits to derive exact non - asymptotic formulae for obtaining the unknown quantities in real time .* there is no need to know the statistical properties of the corrupting noises .those techniques have already been applied in many concrete situations , including signal processing ( see the references in ) .their recent and successful extension to discrete - time linear control systems has prompted us to study their relevance to financial time series .the relationship between time series analysis and control theory is well documented ( see , _e.g. _ , and the references therein ) .our viewpoint seems nevertheless to be quite new when compared to the existing literature .the title of this communication is due to its obvious connection with some aspects of _ technical analysis _ , or _ charting _ ( see , _e.g. _ , and the references therein ) , which is widely used among traders and financial professionals . consider the univariate time series : is not regarded here as a stochastic process like in the familiar arma and arima models but is supposed to satisfy `` approximatively '' a linear difference equation where .introduce as in digital signal processing the additive decomposition where * is the _ trendline _ which satisfies eq .exactly ; * the additive `` noise '' is the mismatch between the real data and the trendline .thus where we only assume that the `` ergodic mean '' of is , _i.e. _ , it means that , , the moving average is close to if is large enough .it follows from eq . that also satisfies the properties and .most of the stochastic processes , like finite linear combinations of i.i.d .zero - mean processes , which are associated to time series modeling , do satisfy almost surely such a weak assumption .our analysis * does not make any difference between non - stationary and stationary time series , * does not need the often tedious and cumbersome trend and seasonality decomposition ( our trendlines include the seasonalities , if they exist ). it should be clear that * a concrete time series can not be `` well '' approximated in general by a solution of a `` parsimonious '' eq . , _i.e. _ , a linear difference equation of low order ; * the use of large order linear difference equations , or of nonlinear ones , might lead to a formidable computational burden for their identifications without any clear - cut forecasting benefit .we adopt therefore the quite promising viewpoint of where the control of `` complex '' systems is achieved without trying a global identification but thanks to elementary models which are only valid during a short time interval and are continuously updated .we utilize here low order difference equations .then the window size for the moving average does not need to be `` large ''. sect .[ parameter ] , which considers the identifiability of unknown parameters , extends to the discrete - time case a result in .the convincing computer simulations in sect . 
[ change ] are based on the exchange rates between us dollars and uros .besides forecasting the trendline , we predict * the position of the future rate w.r.t .the forecasted trendline , * the standard deviation w.r.t .the forecasted trendline .those results might lead to a new understanding of volatility and risk management .[ conclusion ] concludes with a short discussion on the notion of _consider again eq . .the -transform of satisfies ( see , _ e.g. _ , ) \\ - \dots - a_{n-1 } z [ x - x(0 ) ] - a_n x = 0 \end{array}\ ] ] it shows that , which is called the _ generating function _ of , is a rational function of , _i.e. _ , : where \\q(z ) = z^n - \dots - a_{n-1}z - a_n \in \mathbb{r}[z ] \end{array}\ ] ] hence [ rationnel ] , , satisfies a linear difference equation if , and only if , its generating function is a rational function .it is obvious that the knowledge of and permits to determine the initial conditions .consider the inhomogeneous linear difference equation where $ ] , .then the -transform of is again rational .it is equivalent saying that , , still satisfies a homogeneous difference equation .let be the field generated over the field of rational numbers by , which are considered as unknown parameters and therefore in our algebraic setting as independent indeterminates .write the algebraic closure of ( see , _e.g. _ , ) . then , _i.e. _ , is a rational function over .moreover is a _ differential field _ ( see , _e.g. _ , ) with respect to the derivation .its subfield of _ constants _ is the algebraically closed field .introduce the square wronskian matrix of order where its -row , , is it follows from eq . that the rank of is if , and only if , does not satisfy a linear difference relation of order strictly less than .hence [ linident ] if does not satisfy a linear difference equation of order strictly less than , then the parameters are _ linearly identifiable_. are uniquely determined by a system of linear equations , the coefficients of which depend on and , . ] for identifying the dynamics , _i.e. _ , , without having to determine the initial conditions consider the wronskian matrix , where its -row , , is it is obtained by taking the -dependent entries in the last rows of type , _i.e. _ , in disregarding the entries depending on .the rank of is again .hence are linearly identifiable .assume now that the dynamics is known but not the numerator in eq . .we obtain from the first rows .hence are linearly identifiable .we proceed as in and in .the unknown linearly identifiable parameters are solutions of a matrix linear equation , the coefficients of which depend on .let us emphasize that we substitute to its filtered value thanks to a discrete - time version of .we are utilizing data from the european central bank , depicted by the blue lines in the figures [ us5a ] and [ us5b ] , which summarize the last daily exchange rates between the us dollars and the uros . in order to forecast the exchange rate days aheadwe apply the rules sketched in sect .[ hint ] and we utilize a linear difference equation of order ( the filtered values of the exchange rates are given by the black lines in the figures [ us5a ] , [ us5b ] ) .[ us5c ] provides the estimated values of the coefficients of the difference equation .the results on the forecasted values of the exchange rates are depicted by the red lines in the figures [ us5a ] and [ us5b ] , which should be viewed as a predicted trendline .consider again the `` error '' in eq . 
and its moving average in eq .forecasting as in sect .[ fotr ] tells us an expected position with respect to the forecasted trendline .the blue line of fig .[ us5d ] displays the result for the window size .the meaning of the _ indicators _ and is clear .table [ tuse ] compares for various window sizes the signs of the predicted values of , which tells us if one should expect to be above or under the trendline , with the true positions of with respect to the trendline .the results are expressed via percentages ..comparison between the sign of the predicted value of and the true position of w.r.t .the trendline ( days ahead ) .[ cols="^,^,^ " , ] figures [ us10a ] , [ us10b ] , [ us10d ] , [ us10e ] display the same type of results as in sections [ fotr ] , [ above ] , [ vola ] via similar computations for a forecasting days ahead .the quality of the computer simulations only slightly deteriorates .the existence of _ trends _ , which is * the key assumption in technical analysis , * quite foreign , to the best of our knowledge , to the academic mathematical finance , where the paradigm of _ random walks _ is prevalent ( see , _ e.g. _ , ) , is fundamental in our approach .a theoretical justification will appear soon .we hope it will lead to a sound foundation of technical analysis , which will bring as a byproduct easily implementable real - time computer programs .blanchet - scalliet c. , diop a. , gibson r. , talay d. , tanr e. ( 2007 ) .technical analysis compared to mathematical models based methods under parameters mis - specification ._ j. banking finance _ , * 31 * , 13511373 .fliess m. , fuchshumer s. , schberl m. , schlacher k. , sira - ramrez h. ( 2008 ) . an introduction to algebraic discrete - time linear parametric identification with a concrete application ._ j. europ ._ , * 42 * , 211232 .fliess m. , sira - ramrez h. ( 2008 ) .closed - loop parametric identification for continuous - time linear systems via new algebraic techniques . in h. garnier ,l. wang ( eds ) : _ identification of continuous - time models from sampled data _ , pp .363391 , springer .markovsky i. , willems j.c . , van huffel s. , de moor b. , pintelon r. ( 2005 ) .application of structured total least squares for system identification and model reduction ._ ieee trans .. control _ ,* 50 * , 1490 - 1500 .
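the model - free forecasting procedure used in the experiments above ( repeatedly identifying a low - order linear difference equation on a short sliding window , iterating it forward to obtain the trendline several days ahead , and monitoring the moving average of the mismatch as an indicator of the position with respect to the trendline ) can be sketched as follows . ordinary least squares replaces the algebraic estimators of the cited works , and the window size , order and horizon are illustrative .

```python
import numpy as np

def fit_difference_equation(x, order):
    """Least-squares fit of x(t+n) = a1*x(t+n-1) + ... + an*x(t) on the given window."""
    rows = [x[i:i + order] for i in range(len(x) - order)]
    A = np.array(rows)[:, ::-1]               # columns ordered as x(t+n-1), ..., x(t)
    b = np.array(x[order:])
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def forecast(x_window, order, horizon):
    """Iterate the identified recursion forward to obtain the predicted trendline."""
    a = fit_difference_equation(x_window, order)
    buf = list(x_window[-order:])             # most recent values, oldest first
    out = []
    for _ in range(horizon):
        nxt = float(np.dot(a, buf[::-1]))     # apply x(t+n) = a1*x(t+n-1) + ... + an*x(t)
        out.append(nxt)
        buf = buf[1:] + [nxt]
    return np.array(out)

def position_indicator(x, trend, window):
    """Moving average of the mismatch x - trend over the last `window` samples;
    its sign suggests whether the series sits above or below the trendline."""
    e = np.asarray(x[-window:]) - np.asarray(trend[-window:])
    return float(e.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(300)
    series = 1.3 + 0.1 * np.sin(2 * np.pi * t / 60) + 0.01 * rng.standard_normal(300)  # synthetic rate
    print("5-step-ahead trendline forecast:", forecast(series[-60:], order=2, horizon=5))
    naive_trend = np.convolve(series, np.ones(10) / 10.0, mode="same")
    print("position indicator (> 0 means above the trendline):",
          position_indicator(series, naive_trend, window=20))
```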
: new fast estimation methods stemming from control theory lead to a fresh look at time series , which bears some resemblance to `` technical analysis '' . the results are applied to a typical object of financial engineering , namely the forecast of foreign exchange rates , via a `` model - free '' setting , _ i.e. _ , via repeated identifications of low order linear difference equations on sliding short time windows . several convincing computer simulations , including the prediction of the position and of the volatility with respect to the forecasted trendline , are provided . -transform and differential algebra are the main mathematical tools . time series , identification , estimation , trends , noises , model - free forecasting , mathematical finance , technical analysis , heteroscedasticity , volatility , foreign exchange rates , linear difference equations , -transform , algebra .
determination of protein structure is important for understanding protein functions .the classical techniques for protein structure determination include x - ray crystallography , nuclear magnetic resonance ( nmr ) , and electron microscopy , etc .these determination techniques , however , suffer from the limitations of both expensive costs and long determination period , leading to the ever - increasing gap between the number of known protein sequences and that of solved protein structures .therefore , computational methods to predict protein structures from sequences are becoming increasing important to narrow down the gap .+ depending on whether protein structure homologues have been empirically solved or not , the protein structure prediction approaches can be categorized into three families : homology modeling , threading , and _ ab initio _ methods .homology modeling approaches exploit the fact that two protein sharing similar sequences often have similar structures , and threading methods compare the target sequence against a set of known protein structures and report the structure with the highest score as the predicted structure .although homology modeling and threading approaches generally yield high quality predictions , the two approaches can not help us understand the thermodynamic mechanism during the the protein folding process .+ in contrast to homology modeling and threading techniques , _ ab initio _ prediction methods work without requirements of known similar protein structures . briefly speaking , _ ab initio_ prediction methods are based on the `` thermodynamic hypothesis '' , i.e. the native structure of a protein should be the highly populated one with sufficiently low energy .for example , rosetta , one of the successful _ ab initio _ prediction tools , employs the monte carlo strategy to search conformations assembled from fragments of known structures , and finally reports the centroid of a large cluster of low - energy conformations .+ one of the key components of _ ab initio _ prediction approaches is designing an effective energy function . typically , an energy function consists of a variety of energy terms characterizing different structural features , especially the interplay between local and global interactions among residues .for example , the hydrophobic interaction term is designed to capture the observed tendency of non - polar residues to aggregate in aqueous solution and exclude water molecules .van der waals force term , is the sum of the attractive or repulsive forces among residues .hydrogen bonding term describes the electromagnetic attractive interaction between polar molecules in which hydrogen is bound to highly electronegative atom oxygen in the carboxyl .+ it is critical for an energy function to be able to distinguish native - like conformations from non - native decoy conformations , and drive as much as possible initial conformations to the native - like one in the conformation search process . to achieve these two objectives ,a widely - used strategy is to maximize the correlation between the energy and the structural similarity of the decoys and the native structure , besides requiring the native structure to be the lowest one. inspired by the `` funnel - shaped free energy surface '' idea , fain et al. proposed a funnel sculpting technique to generate an energy function step by step until a random starting conformation `` roll '' into the native - like neighborhood . 
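A deliberately minimal sketch of the kind of energy function discussed above: the score of a conformation is a weighted linear combination of per-term energies, and the weight vector is what the remainder of the paper optimises. The term names and numbers below are placeholders rather than Rosetta's actual term set; the uniform weights correspond to the starting scheme used later in the optimisation.

```python
import numpy as np

def total_energy(term_values, weights):
    """E(conformation) = sum_i w_i * E_i(conformation)."""
    return float(np.dot(weights, term_values))

terms = np.array([-12.4, 3.1, -5.7])     # e.g. [hydrophobic, van der Waals, H-bond] (placeholder values)
uniform_w = np.ones_like(terms)          # uniform starting weights
print(total_energy(terms, uniform_w))
```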
and funnel landscapes describe how protein topology determines folding kinetics . shell et al. attempts to smooth energy function to make the energy landscape a funnel .+ there are usually multiple terms in energy function , e.g. rosetta utilizes a total of terms in residue - level conformation search phase , and over terms in the full - atom mode ; therefore , it is important to finding the optimal weighting of so many energy terms .schafer et al . proposed a linear programming to ensure the native conformation is more stable than decoys , while in our methods , a series of decoys are sampled to describe the basin near the native conformation .this study focuses on designing an optimal weighting of a total of energy terms used in rosetta .central to our effort is the `` reverse sampling '' technique , i.e. rosetta applies monte carlo technique to approach a native - like conformation , while our model attempts to generate from the native state an ensemble of initial conformations most relevant to the native state .a linear program was proposed to enlarge as much as possible the set of initial conformations that can `` roll '' into the native - like neighborhood .this way , the possibility of successful search and the quality of predicted conformation increase as the `` basin of attraction '' of the native structure is extended .+ the manuscript is organized as follows : section describes the whole framework of our method , and the lp model to optimize protein energy weights as well .section lists experimental results of the optimized energy function . in section , we will discuss some limitations of our method and possible future works .our weight optimizing technique works in an iterative manner ; that is , we start with a uniform weighting scheme , i.e. all terms have the same weight , and proceed with rounds of alternating estimation of native - like neighborhood and enlargement of the neighborhood .the two - steps procedure is repeated until the weights change between successive iterations is sufficiently small .the details of the two steps are described in the following subsections : as mentioned above , it is critical for an energy function to drive as much as possible initial assembled conformations to the native - like structures in the conformation search process .intuitively , the initial assembled conformations are said to lie in the `` attraction basin '' of the native structure , or native - like neighborhood , if the conformations can finally evolve to the native structure during the search process .we first describe how to estimate the native - like neighborhood .+ we utilized a `` reverse sampling '' technique to determine the native - like neighborhood under a specific energy function . here, the term `` reverse sampling '' is introduced to describe the conformation generating process essentially reverse to the conformation search process used by _ ab initio _ methods .+ in particular , _ ab initio _ methods usually employ monte carlo technique to search native - like conformations from a random initial conformation . 
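The Monte Carlo acceptance rule referred to above can be sketched as follows. The "conformation" here is a scalar toy variable and the perturbation a random step, standing in for fragment replacement or torsion-angle sampling, so this only illustrates the Metropolis-Hastings acceptance logic, not Rosetta's actual search; the temperature and step size are arbitrary.

```python
import math, random

def metropolis_accept(e_old, e_new, temperature=1.0):
    """Accept the perturbed conformation if its energy is lower; otherwise
    accept with probability exp(-(e_new - e_old)/T) (Metropolis-Hastings rule)."""
    if e_new <= e_old:
        return True
    return random.random() < math.exp(-(e_new - e_old) / temperature)

def toy_search(energy, x0, steps=1000, step_size=0.1, temperature=1.0):
    """A bare-bones Monte Carlo search loop over a scalar 'conformation'."""
    x, e = x0, energy(x0)
    for _ in range(steps):
        x_new = x + random.uniform(-step_size, step_size)
        e_new = energy(x_new)
        if metropolis_accept(e, e_new, temperature):
            x, e = x_new, e_new
    return x, e

print(toy_search(lambda x: (x - 2.0) ** 2, x0=5.0))
```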
at each step, a perturbation of the current conformation is made via fragment replacing in rosetta or torsion angle sampling in falcon to generate a new conformation .the newly generated conformation is accepted if it has lower energy relative to the original conformation ; otherwise the new conformation will be accepted with a probability according to the metropolis - hasting rule ( see figure 1 , left panel ) .it should be pointed out that starting from some initial conformations , the probability to reach the native - like structures might be very low .+ by `` reverse sampling '' , we mean that the sampling process starts from the native structure , and attempt to find a neighboring conformation with higher energy at each step ( see figure 1 , right panel ) .two constraints were imposed onto the `` reverse sampling '' process : * at each step , the newly generated neighboring conformation should not be too far from the original one . here , we require that the rmsd ( root mean square deviation ) of the two conformations to be less than 0.5 angstrom . *the reverse sampling process ends at a conformation if we failed to find one of its neighboring conformations with higher energy after multiple trials , say 1000 times in the study . informally , this conformation is called `` edge point '' conformation of the attraction basin . this way , a path of conformations is generated by the reverse sampling process , where denotes the native structure, denotes inter - mediate conformations , and denotes the final edge point conformation . for the edge pointconformation , its neighboring conformations are sampled and denoted as . in our study , is set as 1000 .the rmsd between and is calculated as a rough measure of the attraction basin radius . intuitively , if we can `` cut off '' the edge point conformation , the attraction basin of the native structure will be enlarged since the reverse sampling process will not stuck at the conformation yet .the `` cutting off '' operation is accomplished via tuning the weights of the energy terms to decrease the energy of to be less than at least one of its neighbors .meanwhile , the new weighting should still keep the order of the inter - mediate conformations in the path , i.e. the energy of should be still lower than that of the conformation .figure 2 intuitively shows how the conformation path changes during the energy function optimization .the weights tuning process is accomplished using the following linear program : where the vector denotes the weights of energy terms , and denotes the original weights .thus , the objective of the linear program is to find a new weight with change as small as possible . for an inter - mediate conformation in the reverse sampling path, denotes the vector of its energy terms , i.e. . formula ( 1 ) describes a restriction that the original relative order of and should be kept even using the new weights , and formula ( 2 ) is set to `` cut off '' the edge point conformation , i.e. at least one of the neighbors of the edge point conformation has a higher energy .thus , is no longer an edge point conformation under the new weights .for proteins in different classes , different energy terms might emphasized .thus , we evaluate the weights optimizing procedure on three proteins with different scop classes . 
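One possible rendering of the weight-tuning linear program in code, using scipy's linprog, is sketched below. It minimises the l1 change from the current weights subject to (i) keeping the energies increasing along the reverse-sampling path and (ii) raising one chosen neighbour above the edge-point conformation. Because "at least one neighbour has higher energy" is a disjunction, this sketch fixes a single candidate neighbour in advance; the non-negativity bounds on the weights and the margin eps are assumptions of ours, not constraints stated in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def retune_weights(w0, path_terms, edge_terms, neighbor_terms, eps=1e-3):
    """Find new weights w close to w0 (in l1 norm) such that the per-term
    energies `path_terms` (one row per path conformation) stay ordered and
    the chosen neighbour ends up above the edge-point conformation."""
    k = len(w0)
    # decision vector: [w (k entries), u (k entries)] with u_i >= |w_i - w0_i|
    c = np.concatenate([np.zeros(k), np.ones(k)])
    A, b = [], []
    # |w - w0| <= u   ->   w - u <= w0   and   -w - u <= -w0
    A.append(np.hstack([np.eye(k), -np.eye(k)])); b.append(w0)
    A.append(np.hstack([-np.eye(k), -np.eye(k)])); b.append(-w0)
    # path ordering: w . (phi_i - phi_{i+1}) <= -eps  (energy rises along the path)
    for i in range(len(path_terms) - 1):
        row = np.concatenate([path_terms[i] - path_terms[i + 1], np.zeros(k)])
        A.append(row[None, :]); b.append([-eps])
    # cut off the edge point: w . (phi_edge - phi_neighbour) <= -eps
    row = np.concatenate([edge_terms - neighbor_terms, np.zeros(k)])
    A.append(row[None, :]); b.append([-eps])
    res = linprog(c, A_ub=np.vstack(A), b_ub=np.concatenate(b),
                  bounds=[(0, None)] * (2 * k))     # non-negative weights assumed
    return res.x[:k] if res.success else w0

# toy usage: 3 energy terms, a 4-conformation reverse-sampling path
rng = np.random.default_rng(1)
w0 = np.ones(3)
path = np.sort(rng.normal(size=(4, 3)), axis=0)     # term values roughly increasing along the path
print(retune_weights(w0, path, path[-1], path[-1] + rng.normal(size=3)))
```

In practice one would try each sampled neighbour (or the most promising one) as the candidate in constraint (ii) and keep the feasible solution with the smallest weight change.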
the detail information of the three proteins are listed in table 1 .[ info ] .benchmark protein structures used in the study .the proteins come from different scop classes : all ( class a ) , / ( class c ) , + ( class d ) .residue numbers are , , , respectively . columns and shows the number and total length of helices and strands . [ cols="^,^,^,^,^,^",options="header " , ] [ finalweights ] we also investigated whether the `` cutting off edge point '' strategy helps enlarge the attraction basin of the native structure . to estimate the attraction basin ,a total of 100 paths of conformations were generated using the `` reverse sampling '' technique from the native structure .the rmsd between the starting native structure and the edge point conformations are calculate as a rough estimation of the radius of the attraction basin . as shown in figure 3 ,the mean rmsd of the trials is angstrom initially , and increases to nearly angstrom finally .this suggests that the attraction basin is really significantly enlarged during the iteration process .finally we conducted experiments to investigate whether the optimized weights help improve protein structure prediction or not . to achieve this goal , we run rosetta for the testing proteins with weights obtained at each iteration step . for each weighting scheme ,a total of 1000 decoys were generated by rosetta . among these decoys ,a clustering procedure is run and the centroid of the largest cluster is reported as the final prediction .the `` good decoy ratio '' is also calculated . here, we adopted a widely - used criteria that a decoy is called `` good decoy '' if it has a rmsd less than angstrom to the native structure .+ taking protein 1iloa as an example , the best prediction has a rmsd of angstrom ( see figure 6 , left panel ) , and as iteration proceeds the quality of the best prediction improves step by step and finally the best prediction has a small rmsd of only angstrom ( see figure 6 , right panel ) .similar trends were observed for protein 1ctfa and 1iloa ( see figure 4 ) .+ in addition , the `` good decoy ratio '' was also improved significantly ( see figure 5 ) . for example , if using the initial weights , there are only 10 good decoy among the decoys generated by rosetta for protein 1iloa ; in contrast , there are over good decoys among the generated decoys when using the optimized weights .this means that using the optimized weights of energy function , rosetta can generate high - quality decoys more efficiently .+ angstrom and angstrom , respectively .thus , the optimized weights help improve the quality of predicted structures.,scaledwidth=100.0% ] angstrom and angstrom , respectively .thus , the optimized weights help improve the quality of predicted structures.,scaledwidth=100.0% ] angstrom and angstrom , respectively .thus , the optimized weights help improve the quality of predicted structures.,scaledwidth=100.0% ] [ 1iloa ]in the study , we present an attempt to find the optimal weights of energy terms .the basic idea is to estimate the attraction basin of the native structure using `` reverse sampling '' technique , and then enlarge the attraction basin using a linear program .experimental results on several benchmark proteins suggest that the optimized weights can significantly improve rosetta s prediction , and the prediction efficiency is substantially increased as well . + it has been reported that the energy terms apply at different stages of protein folding . 
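The "good decoy ratio" reported below is simply the fraction of generated decoys within a fixed RMSD of the native structure. The numerical cutoff was lost in the text extraction, so in this small sketch it is an explicit argument; the 6-angstrom value in the example is only a placeholder.

```python
import numpy as np

def good_decoy_ratio(rmsds, cutoff):
    """Fraction of decoys whose RMSD to the native structure is below `cutoff`."""
    return float(np.mean(np.asarray(rmsds) < cutoff))

print(good_decoy_ratio([2.1, 7.5, 4.9, 11.0, 5.5], cutoff=6.0))
```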
according to this observation , rosetta employs a multi - step prediction strategy . in particular , rosettafirst uses score function with only hydrophobic core terms , then uses with secondary structure terms , and finally uses to incorporate more energy terms .the study here focuses on the optimization of weights for the third step . +our work focuses on the attraction basin near the native structure .when we get a conformation with over angstrom to the native structure , the reverse sampling ends since a conformation with so large rmsd is usually a random conformation .the random conformations are excluded since they provide little information for the attraction basin .+ one of the limitations of the study presented here is the limited number of benchmark proteins .the weights were trained for each protein individually .ideally , we have only a common weighting scheme for a protein class rather than a specific weighting for an individual protein .how to extend the linear program to achieve this objective remains as one of our future works .this study was funded by the national basic research program of china ( 973 program 2012cb316502 ) , the national natural science foundation of china ( 30800189 , 30800168 , 90914047 ) , and the strategic priority research program of the chinese academy of sciences , grant no .wm zheng was supported in the part by national natural science foundation of china ( 11175224 , 11121403 ) .simons , k.t ._ et al_. ( 1997 ) assembly of protein tertiary structures from fragments with similar local sequences using simulated annealing and bayesian scoring functions ._ , * 268 * , 209 - 225 . fain b. _ et al_. ( 2003 ) funnel sculpting for in silico assembly of secondary structure elements of proteins ._ proceedings of the national academy of sciences of the united states of america _ * 100(19 ) * , 10700 - 10705 .
predicting protein structure from amino acid sequence remains as a challenge in the field of computational biology . if protein structure homologues are not found , one has to construct structural conformations from the very beginning by the so - called _ ab initio _ approach , using some empirical energy functions . a successful algorithm in this category , rosetta , creates an ensemble of decoy conformations by assembling selected best short fragments of known protein structures and then recognizes the native state as the highly populated one with a very low energy . typically , an energy function is a combination of a variety of terms characterizing different structural features , say hydrophobic interactions , van der waals force , hydrogen bonding , etc . it is critical for an energy function to be capable to distinguish native - like conformations from non - native ones and to drive most initial conformations assembled from fragments to a native - like one in a conformation search process . in this paper we propose a linear programming algorithm to optimize weighting of a total of energy terms used in rosetta . we reverse the monte carlo process of rosetta to approach native - like conformations to a process generating from the native state an ensemble of initial conformations most relevant to the native state . intuitively , an ideal weighting scheme would result in a large `` basin of attraction '' of the native structure , which leads to an objective function for the linear programming . we have examined the proposal on several benchmark proteins , and the experimental results suggest that the optimized weights enlarge the attraction basin of the native state and improve the quality of the predicted native states as well . in addition , a comparison of optimal weighting schema for proteins of different classes indicates that in different protein classes energy terms may have different effects . = 13.5pt = 13.5pt
artificial neural network and its application has been a hot topic in recent decades , including the cellular neural networks , hopfield neural networks , and cohen - grossberg neural networks , etc .especially , as a great progress in the history of neural networks , hinton et al . in 2006 proposed the deep learning algorithm by using the generalized back - propagation algorithm for training multi - layer neural networks , which has been shown a powerful role in many practical fields , such as natural language processing , image recognition , bioinformatics , recommendation systems , and so on .generally , the investigated neural network models are mainly defined by ordinary differential equations ( odes ) ; however , partial differential equations ( pdes ) can describe the real things or events more exactly .for example , diffusion effect can not be avoided in the neural networks model when electrons are moving in asymmetric electromagnetic field , and in the chemical reactions many complex patterns are generated by the reaction - diffusion effects .there have been many results about the dynamical behaviors of neural networks with reaction - diffusion terms , for example , the existence , uniqueness and global exponential stability of the equilibrium point of delayed reaction - diffusion recurrent neural networks were investigated in ; a better result on stability analysis was given by analyzing the role of the reaction - diffusion terms in - ; investigated the exponential stability of impulsive neural networks with time - varying delays and reaction - diffusion terms ; studied the global -stability of reaction - diffusion neural networks with unbounded time - varying delays and dirichlet boundary conditions ; and discussed the existence and global exponential stability of anti - periodic mild solution for reaction - diffusion hopfield neural networks systems with time - varying delays . neural network can be categorized into the scope of complex networks , while synchronization and its control problem - has been an important issue for complex networks , which has attracted many researcher interests from various disciplines , like mathematics , computer science , physics , biology , etc .synchronization means that all the nodes have the same dynamical behavior under some local protocols between the communication with each node s neighbours , therefore , synchronization is the collective behavior of the whole network while the communication protocol is distributed .many synchronization patterns have been investigated , like complete synchronization , cluster synchronization , finite - time ( or fixed - time ) synchronization , and so on . in this paper , we will study the complete synchronization and its control problem for linearly coupled neural networks with reaction - diffusion terms .hitherto , many works have been presented on this problem , see - . 
in , the authors discussed the asymptotic and exponential synchronization for a class of neural networks with time - varying and distributed delays and reaction - diffusion terms ; - investigated the adaptive synchronization problem for coupled neural networks with reaction - diffusion terms ; studied the -synchronization problem for coupled neural networks with reaction - diffusion terms and unbounded time - varying delays ; - discussed the synchronization for fuzzy or stochastic neural networks with reaction - diffusion terms and unbounded time delays ; proposed a general array model of coupled reaction - diffusion neural networks with hybrid coupling , which was composed of spatial diffusion coupling and state coupling ; - discussed the synchronization problems of reaction - diffusion neural networks based on the output strict passivity property in which the input and output variables varied with the time and space variables ; and investigated the pinning control problem for two types of coupled neural networks with reaction - diffusion terms : the state coupling and the spatial diffusion coupling .as for the control technique , except for the continuous strategy , discontinuous control strategies are more economic and have better application value , including impulsive control , sample - data control , intermittent control , etc .for example , - discussed the impulsive control and synchronization for delayed neural networks with reaction - diffusion terms ; investigated the synchronization problem of reaction - diffusion neural networks with time - varying delays via stochastic sampled - data controller ; - studied the synchronization for neural networks with reaction - diffusion terms under periodically intermittent control technique . in the real world , the periodically intermittent control is rare , and a more general intermittent control technique , called the aperiodically intermittent control , is proposed to realize synchronization of complex networks , see .moreover , the authors in and also consider the complete synchronization and the cluster synchronization for complex networks with time - varying delays and constant time delay respectively .based on the above discussions , in this paper , we will investigate _ the complete synchronization problem for linearly coupled neural networks with time - varying delay and reaction - diffusion terms and hybrid coupling via aperiodically intermittent control_. the rest of this paper is organized as follows . in section [ pre ] , some necessary definitions , lemmas and notations are given . in section[ main ] , the network model with constant aperiodically intermittent control gain is first proposed and the criteria for complete synchronization are obtained .then , we apply the adaptive approach on the aperiodically intermittent control gain by designing a useful adaptive rule , and its validity is also rigorously proved . in section [ nu ] ,numerical simulations are presented to illustrate the correctness of theoretical results .finally , this paper is concluded in section [ conclude ] .some definitions , assumptions , notations and lemmas used throughout the paper will be presented in this section . 
at first, we describe the single reaction - diffusion neural network with time - varying delays and dirichlet boundary conditions by the following equation : \times\omega \end{array}\right.\end{aligned}\ ] ] where ; is a compact set with smooth boundary and mes in space ; is the state of neuron at time and in space ; means the transmission diffusion coefficient along the neuron ; represents the rate with which the neuron will reset its potential to the resting state ; and are the connection weights with and without delays respectively ; and denote the activation functions of the neuron in space ; denotes the external bias on the neuron ; is the bounded time - varying delay with , where ; the initial values are bounded and continuous functions .denote with these notations , ( [ ys ] ) can be rewritten in the compact form as with the norm defined by ^{\frac{1}{2}}.\end{aligned}\ ] ] as for the activation functions and , [ add ] there exist positive constants and , such that for any vectors and , the following conditions hold : as for the effect of reaction - diffusion , the following lemma plays an important role .[ use ] ( see ) let be a cube , and let be a real - valued function belonging to which vanishes on the boundary , i.e. , . then for a complex network with linearly coupled neural networks ( [ ys2 ] ) , we suppose its coupling configuration ( or the corresponding coupling matrix ) satisfies that : 1 . ; 2 . is irreducible .[ lyasta ] ( see ) for the above coupling matrix and a constant , the new matrix is lyapunov stable .especially , suppose the left eigenvector corresponding to the eigenvalue ` ' of matrix is , then satisfies the following condition without loss of generality , in the following , we assume that . in this paper , we adopt the aperiodically intermittent control strategy firstly proposed in - , whose description can be found in figure 1 . for the -th time span , it is composed of control time span ] denotes the banach space of all continuous functions mapping \times r^m ] , and can be found in ( [ norm ] ) .now , applying lemma [ ht ] to the adaptive network ( [ a-1 ] ) , we can get the following result . [ thm2 ] suppose assumption [ add ] holds , then coupled network ( [ a-1 ] ) with adaptive rules ( [ good ] ) and ( [ a-3 ] ) can realize the complete synchronization asymptotically . the proof will be given in appendix b. similarly , we can derive the following corollaries directly . for the coupled neural networks only with the adaptive state coupling , , i=0,1,2,\cdots\\ \frac{\partial{w^j(t , x)}}{\partial{t}}=&f(w^j(t , x);w^j(t-\tau(t),x))+\sum\limits_{k=1}^n\xi_{jk}\gamma^1 w^k(t , x ) , ~~j=1,2,\cdots , n,\\ & ~\mathrm{if}~~t\in ( s_i , t_{i+1 } ) , i=0,1,2,\cdots \end{array } \right.\end{aligned}\ ] ] where is defined in ( [ origin ] ) .suppose assumption [ add ] holds , then the above coupled network with adaptive rules ( [ good ] ) and ( [ a-3 ] ) can realize the complete synchronization asymptotically . 
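The aperiodically intermittent strategy amounts to switching the controller on during the control spans [t_i, s_i] and off during the rest spans (s_i, t_{i+1}). The toy sketch below, a scalar error equation with made-up constants rather than the coupled reaction-diffusion network, shows how such a schedule enters a simulation: the error decays while the control is active, grows during rest spans, and still converges when the control spans occupy a sufficiently large fraction of time, which is the role of the assumption on inf(s_i - t_i).

```python
def in_control_span(t, spans):
    """True if time t falls in one of the aperiodic control spans [t_i, s_i]."""
    return any(t_i <= t <= s_i for t_i, s_i in spans)

# toy illustration: intermittent proportional control of a scalar error e' = (a - k*u(t)) e
spans = [(0.0, 4.96), (5.0, 9.92), (9.99, 14.89)]   # an aperiodic schedule of the kind used later
a, k, dt = 0.5, 2.0, 1e-3                           # illustrative growth rate and control gain
e, t, trace = 1.0, 0.0, []
while t < 15.0:
    u = 1.0 if in_control_span(t, spans) else 0.0
    e += dt * (a - k * u) * e                       # decays in control spans, grows in rest spans
    t += dt
    trace.append(e)
print(trace[-1])                                    # close to zero: the error has been driven down
```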
for the coupled neural networks only with the adaptive spatial diffusion coupling , , i=0,1,2,\cdots , j=2,3,\cdots , n;\\ \frac{\partial{w^j(t , x)}}{\partial{t}}=&f(w^j(t , x);w^j(t-\tau(t),x))-\sum\limits_{k=1}^n\xi_{jk}\gamma^2 \delta(w^k(t , x ) ) , \\& ~\mathrm{if}~~t\in ( s_i , t_{i+1 } ) , i=0,1,2,\cdots , j=1,2,\cdots , n .\end{array } \right.\end{aligned}\ ] ] where is defined in ( [ origin ] ) .suppose assumption [ add ] holds , then the above coupled network with adaptive rules ( [ good ] ) and ( [ a-3 ] ) can realize the complete synchronization asymptotically .to demonstrate the effectiveness of obtained theoretical results , in this section we will present some numerical simulations . for the uncoupled neural network , we choose the following model : \\ w(t , x)&=&0,\quad\hfill { } ( t , x)\in [ -\tau,+\infty)\times\partial{\omega } \end{array}\right.\end{aligned}\ ] ] where , , and , simple calculations show that assumption [ add ] holds with .the time - varying delay is choose as , therefore , ; matrices are defined by figure [ wf1 ] and figure [ wf2 ] show the dynamical behaviors of and for neural network ( [ nuncoupled ] ) respectively ; moreover , by projecting on the plane , figure [ wf3 ] shows its chaotic behavior explicitly . for neural network ( [ nuncoupled ] ) with the initial values\times\omega],scaledwidth=90.0% ] ) on the plane with the initial values \times\omega ] . as for the coupling matrix , we choose therefore , the left eigenvector corresponding to the zero eigenvalue of matrix can be obtained as : . as for the aperiodically intermittent control scheme , we choose the control time as \cup [ 5,9.92]\cup [ 9.99,14.89]\cup [ 14.92,19.85]\cup [ 19.9,24.83]\cup [ 24.87,29.78]\\ \cup [ 29.84,34.8]\cup [ 34.82,39.78]\cup [39.8,44.73]\cup [ 44.79,49.73]\cup [ 49.78,54.7]\cup \cdots\end{aligned}\ ] ] obviously , we have and defined in assumption [ asutime ] , and defined in ( [ max ] ) .choose and the coupling strength , then the parameters defined in ( [ wa1])-([w8 ] ) can be derived as : , , , , , , , , , and .therefore , conditions in theorem [ thm1 ] are satisfied , i.e. , the complete synchronization can be realized .using the crank - nicolson method for pdes , we can simulate the corresponding numerical examples .in fact , the coupling strength for the complete synchronization can be smaller than the calculated value , and simulations show that if , then the complete synchronization can be realized .figure [ wf4 ] and figure [ wf5 ] illustrate the dynamical behaviors of error and network error defined in ( [ norm ] ) when the coupling strength . 
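Since the simulations above rely on the Crank-Nicolson scheme, the fragment below shows a single Crank-Nicolson step for the pure diffusion part u_t = d u_xx on a one-dimensional grid with homogeneous Dirichlet boundaries. The reaction, delay and coupling terms of the full model would have to be added on top of this, so the sketch only illustrates the time stepping; the grid size and step sizes are arbitrary.

```python
import numpy as np

def crank_nicolson_step(u, d, dt, dx):
    """One Crank-Nicolson step for u_t = d * u_xx with zero Dirichlet boundaries:
    only interior points are updated, the end points stay at zero."""
    n = len(u) - 2                       # number of interior points
    r = d * dt / (2.0 * dx ** 2)
    A = (np.diag(np.full(n, 1.0 + 2.0 * r))
         + np.diag(np.full(n - 1, -r), 1)
         + np.diag(np.full(n - 1, -r), -1))
    B = (np.diag(np.full(n, 1.0 - 2.0 * r))
         + np.diag(np.full(n - 1, r), 1)
         + np.diag(np.full(n - 1, r), -1))
    u_new = u.copy()
    u_new[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    return u_new

x = np.linspace(0.0, 1.0, 51)
u = np.sin(np.pi * x)                    # initial profile vanishing on the boundary
for _ in range(100):
    u = crank_nicolson_step(u, d=1.0, dt=1e-3, dx=x[1] - x[0])
print(u.max())                           # decays towards zero, as expected under Dirichlet conditions
```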
when the coupling strength ] defined in ( [ norm ] ) when the coupling strength ,scaledwidth=90.0% ] in this subsection , we consider the following network with an adaptive coupling strength : , i=0,1,2,\cdots\\ \frac{\partial{w^j(t , x)}}{\partial{t}}=&f(w^j(t , x);w^j(t-\tau(t),x))\\ & + \sum\limits_{k=1}^4\xi_{jk } w^k(t , x)-\sum\limits_{k=1}^4\xi_{jk } \delta(w^k(t , x ) ) , ~j=1,2,3,4,\\ & ~\mathrm{if}~~t\in ( s_i , t_{i+1 } ) , i=0,1,2,\cdots \end{array } \right.\end{aligned}\ ] ] where all the parameters are the same with those in subsection [ n - static ] .to illustrate the effect of aperiodically intermittent control more clearly , here we choose the control time span as \cup [ 5,9]\cup [ 9.5,13]\cup [ 14,18]\cup [ 18.3,22]\cup [ 23,26.5]\\ \cup & [ 27,31]\cup [ 31.7,35]\cup [ 36,40.5]\cup [ 41,45.2]\cup [ 45.9,50]\cup \cdots\end{aligned}\ ] ] in this case , and , since , therefore , ( [ smalltau ] ) holds .we design the adaptive rules as , i=0,1,\cdots\end{aligned}\ ] ] then , according to theorem [ thm2 ] , the complete synchronization can be realized . under the above adaptive scheme ,figure [ wf6 ] shows the dynamical behaviors of ; while figure [ wf7 ] illustrates the dynamics of network error defined in ( [ norm ] ) and the coupling strength . under the adaptive rule ( [ n - ada ] ) ] defined in ( [ norm ] ) and the coupling strength under the adaptive rule ( [ n - ada]),scaledwidth=90.0% ]in this paper , the complete synchronization for linearly coupled neural networks with time - varying delays and reaction - diffusion terms under aperiodically intermittent control is investigated .we first propose a novel spatial coupling protocol for the synchronization , then present some criteria for the complete synchronization with no constraint on the time - varying delay .moreover , for the small delay case , i.e. 
, when the bound of time - varying delay is less than the infimum of the control span , we propose a simple adaptive rule for the coupling strength , and prove its validity rigorously .finally , some simulations are given to demonstrate the effectiveness of obtained theoretical results .define a lyapunov function as follows : where is a definite positive diagonal matrix defined in lemma [ lyasta ] , and .when , i=0,1,2,\cdots$ ] , differentiating , we have \nonumber\\ = & v_1+v_2+v_3+v_4+v_5+v_6.\end{aligned}\ ] ] from green s formula , the dirichlet boundary condition , and lemma [ use ] , we have therefore , according to assumption [ add ] , one can get \\ = & -c\int_{\omega}e^j(t , x)^te^j(t , x)dx\\ & + \int_{\omega}e^j(t , x)^ta\tilde{g}(e^j(t , x))dx+\int_{\omega}e^j(t , x)^tb\tilde{h}(e^j(t-\tau(t),x))dx\\ \le&\int_{\omega}e^j(t , x)^t(-c)e^j(t , x)dx+\int_{\omega}e^j(t , x)^t[\varepsilon_1^{-1}aa^t+\varepsilon_1{g^{\star}}^2]e^j(t , x)dx\\ & + \int_{\omega}e^j(t , x)^t(\varepsilon_2^{-1}bb^t)e^j(t , x)dx+\varepsilon_2{h^{\star}}^2\int_{\omega}e^j(t-\tau(t),x)^te^j(t-\tau(t),x)dx\\ = & \int_{\omega}e^j(t , x)^t(-c+\varepsilon_1^{-1}aa^t+\varepsilon_1{g^{\star}}^2+\varepsilon_2^{-1}bb^t)e^j(t , x)dx\\ & + \varepsilon_2{h^{\star}}^2\int_{\omega}e^j(t-\tau(t),x)^te^j(t-\tau(t),x)dx\\ \le&\frac{\alpha_1}{2}\int_{\omega}e^j(t , x)^te^j(t , x)dx+\varepsilon_2{h^{\star}}^2\int_{\omega}e^j(t-\tau(t),x)^te^j(t-\tau(t),x)dx,\end{aligned}\ ] ] therefore , \nonumber\\ \le&\alpha_1v(t)+\alpha_2v(t-\tau(t)).\end{aligned}\ ] ] according to lemma [ lyasta ] , is negative definite , we have with the same reason , one can also get that therefore , combining with ( [ v0])-([v6 ] ) , one can get that on the other hand , for , recalling the fact that is a symmetric matrix with its column and row sums being zero , so , by the similar analysis , we have combining with ( [ control])-([rest ] ) , and according to lemma [ synlem ] , we can deduce that the complete synchronization can be finally realized , i.e. , , therefore , . the proof is completed .from ( [ a-2 ] ) , we can have the compact form for the error equation with , , i=0,1,2,\cdots\\ \frac{\partial{e(t , x)}}{\partial{t}}=&\tilde{f}(e(t , x);e(t-\tau(t),x))\\ & + ( { \xi}\otimes\gamma^1 ) e(t , x)-({\xi}\otimes\gamma^2 ) \delta(e(t , x))\\ & ~\mathrm{if}~~t\in ( s_i , t_{i+1 } ) , i=0,1,2,\cdots \end{array } \right.\end{aligned}\ ] ] where , and define the lyapunov function as ( [ lyap_1 ] ) , then with the same process as the proof of theorem [ thm1 ] , we have where and are defined in ( [ w4 ] ) and ( [ w5 ] ) , respectively . according to lemma [ ht ] , we can obtain , which implies that the complete synchronization can be realized . the proof is completed .this work was supported by the national science foundation of china under grant nos . 
61203149 , 61233016 , 11471190 and 11301389 ; the national basic research program of china ( 973 program ) under grant no .2010cb328101 ; `` chen guang '' project supported by shanghai municipal education commission and shanghai education development foundation under grant no .11cg22 ; the fundamental research funds for the central universities of tongji university ; the nsf of shandong province under grant no .zr2014am002 , project funded by china postdoctoral science foundation under grant nos .2012m511488 and 2013t60661 , the special funds for postdoctoral innovative projects of shandong province under grant no .201202023 , iif of shandong university under grant no .2012ts019 .lu , global exponential stability and periodicity of reaction - diffusion delayed recurrent neural networks with dirichlet boundary conditions , chaos , solitons fractals 35 ( 1 ) ( 2008 ) 116 - 125 .chen , x.w .liu , w.l .lu , pinning complex networks by a single controller , ieee trans .circuits and systems - i 54 ( 6 ) ( 2007 ) 1317 - 1326 .wang , j.d .cao , synchronization of a class of delayed neural networks with reaction - diffusion terms , physics letters a 369 ( 3 ) ( 2007 ) 201 - 211 .k. wang , z.d .teng , h.j .jiang , adaptive synchronization in an array of linearly coupled neural networks with reaction - diffusion terms and time delays , communications in nonlinear science and numerical simulation 17 ( 10 ) ( 2012 ) 3866 - 3875 .wang , h.n .wu , l. guo , novel adaptive strategies for synchronization of linearly coupled neural networks with reaction - diffusion terms , ieee trans .neural networks and learning systems 25 ( 2 ) ( 2014 ) 429 - 440 .gan , r. xu , p.h .yang , exponential synchronization of stochastic fuzzy cellular neural networks with time delay in the leakage term and reaction - diffusion , communications in nonlinear science and numerical simulation 17 ( 4 ) ( 2012 ) 1862 - 1870 .wang , h.n .huang , s.y .ren , pinning control strategies for synchronization of linearly coupled neural networks with reaction - diffusion terms , ieee trans . neural networks andlearning systems 27 ( 4 ) ( 2016 ) 749 - 761 .yang , j.d .cao , z.c .yang , synchronization of coupled reaction - diffusion neural networks with time - varying delays via pinning - impulsive controller , siam journal on control and optimization 51 ( 5 ) ( 2013 ) 3486 - 3510 .gan , exponential synchronization of stochastic cohen - grossberg neural networks with mixed time - varying delays and reaction - diffusion via periodically intermittent control , neural networks 31 ( 2012 ) 12 - 21 .gan , y. li , exponential synchronization of stochastic reaction - diffusion fuzzy cohen - grossberg neural networks with time - varying delays via periodically intermittent control , journal of dynamic systems , measurement , and control 135 ( 6 ) ( 2013 ) 061009 .gan , h. zhang , j. dong , exponential synchronization for reaction - diffusion neural networks with mixed time - varying delays via periodically intermittent control , nonlinear analysis : modelling and control 19 ( 1 ) ( 2014 ) 1 - 25 .j. mei , m.h .jiang , b. wang , q. liu , w.m .xu , t. liao , exponential -synchronization of non - autonomous cohen - grossberg neural networks with reaction - diffusion terms via periodically intermittent control , neural processing letters 40 ( 2 ) ( 2014 ) 103 - 126 .liu , t.p .chen , cluster synchronization in directed networks via intermittent pinning control , ieee trans .neural networks 22 ( 7 ) ( 2011 ) 1009 - 1020 .
in this paper , the complete synchronization problem of linearly coupled neural networks with reaction - diffusion terms and time - varying delays via aperiodically intermittent pinning control is investigated . the coupling matrix for the network can be asymmetric . compared with state coupling in the synchronization literature , we design a novel distributed coupling protocol by using the reaction - diffusion coupling - spatial coupling , which can accelerate the synchronization process . this can be regarded as the main difference between this paper and previous works . using the lyapunov function and theories in the aperiodically intermittent control , we present some criteria for the complete synchronization with a static coupling strength . in this case , there is no constraint on the bound of time - varying delays , so it can be larger than the length of control span . on the other hand , for the network with an adaptive coupling strength , we propose a simple adaptive rule for the coupling strength and prove its effectiveness rigorously . in this case , the bound of time - varying delay is required to be less than the infimum of the control time span . finally , numerical simulations are given to verify the theoretical results . adaptive , aperiodically intermittent , reaction - diffusion , synchronization
the gravitational wave sensitivity for the laser interferometer space antenna ( lisa ) will be limited at low frequencies by the stray acceleration noise in the orbits of the nominally free - falling test masses that serve as interferometry end mirrors .the test mass ( mass ) acceleration noise is typically divided into a contribution from random , position independent forces ( ) and another from coupling ( with spring constant ) to the noisy motion of the spacecraft shield : the closed loop satellite position noise arises in the noise of the position sensor used to guide the satellite control and in the imperfect compensation of external forces acting on the satellite by the finite gain thruster control loop ( gain ) .lisa aims to limit the test mass acceleration noise spectral density to ( 3 fm / s ) at frequencies down to 0.1 mhz .ltp is a single spacecraft experiment that tests drag - free control for lisa by measuring the differential motion of two test masses , each `` free - falling '' inside a lisa capacitive position sensor , along a single measurement axis ( the ltp configuration and measurement schemes are described in ref . , and a schematic of the apparatus design is shown in fig .[ ltp_figure ] ) .the simpler 1-spacecraft , 1-axis configuration requires use of control forces along the measurement axis , a performance limiting departure from lisa . the main measurement for ltp , in which the satellite is controlled to follow the first test mass ( tm1 ) while the second test mass ( tm2 ) is forced electrostatically to follow the satellite , aims to put an overall acceleration noise upper limit of 30 fm / s at 1 mhz , relaxed by an order of magnitude with respect to lisa in both noise level and frequency .in addition to a global limit on acceleration noise , ltp will fully characterize the satellite coupling term in eqn .[ eqn_lisa ] .the external force level can be extracted from the closed loop position sensor error signal , and the sensor noise can be obtained by comparison with a more precise optical readout .`` stiffness '' is measured by modulating the satellite control setpoint .this paper focuses on the random force measurements that will be possible aboard ltp . while the overall acceleration noise limit possible with ltp is an order of magnitude above the lisa goal , a special control scheme isolates the random force contribution , allowing measurement of to within a factor 2 of the lisa goal for most sources .additionally , several critical noise sources can be modulated coherently for precise measurement of their coupling into test mass acceleration .combined with the measurements of , , and discussed above , the random force measurement pushes the characterization of the lisa noise budget to lower acceleration noise levels .precise measurement of key noise parameters will allow dissection of the random noise measurement as well as a more accurate extrapolation of the noise model to the lower frequencies and lower disturbance levels needed for lisa .the control scheme for the ltp random force measurement is a modified version of that used for the overall acceleration noise measurement mentioned above .the satellite is controlled to follow tm1 , and tm2 is controlled , by nulling the differential displacement interferometric readout , to follow tm1 . 
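The test-mass acceleration noise model of eqn. [eqn_lisa] combines position-independent force noise with stiffness coupling to the residual spacecraft motion. The few lines below evaluate that quadrature sum; every number used (test-mass mass, stiffness, spacecraft position noise, force noise) is an illustrative assumption of ours, not an official LTP or LISA figure.

```python
import numpy as np

def acceleration_noise(f_n, m, k_over_m, x_n):
    """Quadrature sum of the two contributions in the test-mass noise model:
    random position-independent force noise (f_n / m) and stiffness coupling
    (k/m) to the residual spacecraft motion x_n. All inputs are amplitude
    spectral densities at the same frequency."""
    return np.sqrt((f_n / m) ** 2 + (k_over_m * x_n) ** 2)

m = 2.0                      # kg, assumed test-mass mass (order of magnitude only)
f_n = 5e-15 * m              # N/sqrt(Hz), force noise corresponding to 5 fm/s^2/sqrt(Hz)
k_over_m = 1e-6              # 1/s^2, assumed stiffness to the spacecraft
x_n = 5e-9                   # m/sqrt(Hz), assumed closed-loop spacecraft position noise
print(acceleration_noise(f_n, m, k_over_m, x_n))   # result in m/s^2/sqrt(Hz)
```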
in this configuration, the residual error signal measured by the interferometer can be expressed - \delta x \ : { \ensuremath{\omega^2_{2p } } } + x_{n , opt } \left ( { \ensuremath{\omega^2 _ { } } } - { \ensuremath{\omega^2_{2p } } } \right ) \right]\end{aligned}\ ] ] indices 1 and 2 refer to tm1 and tm2 , is the differential interferometry noise , is the distortion of the baseline separating the two position sensors , and is the electrostatic control loop gain for tm2 . herethe satellite motion couples to the differential interferometry signal only through the stiffness difference $ ] , a small quantity which can be measured and , if necessary , tuned electrostatically to negligible magnitude .this control scheme thus minimizes the effect of stiffness , which is the limiting factor in the overall acceleration noise measurement , and thus isolates the random forces and .the noise in the interferometry signal in eqn .[ eq_ltp_m3 ] is thus a measurement of for lisa , with several additional ltp specific `` instrumental '' noise sources .in addition to the negligible contribution from satellite coupling , the baseline distortion term ( ) is also negligible for the high stability zerodur optical bench and 10 k temperature stability projected for ltp .the remaining dominant instrumental noise sources for the ltp measurement of random force noise are then : * _ interferometry noise _ noise in the differential interferometry readout converts into an effective random force noise ( directly from eqn .[ eq_ltp_m3 ] ) the ltp interferometry requirement is roughly 8 pm above 3 mhz and is relaxed as at lower frequencies .this interferometry noise also includes a contribution from the thermal expansion of the test masses and optical windows , though this term should be small .interferometry noise is the dominant instrumental noise source above 3 mhz . *_ actuation noise _ any instability in the applied electrostatic actuation forces produces acceleration noise given by while actuation noise is part of the random force , we include it as an ltp instrumental noise source because it originates in the need to compensate a dc acceleration imbalance ( ) in ltp s 1-axis measurement configuration and is not present in lisa .limiting this noise source requires both high actuation voltage stability levels ( relative fluctuations 1 mhz ) and tight gravitational balancing of the satellite ( 1.3 nm / s ) .this is likely the dominant instrumental limitation to the ltp random force noise measurement below 3 mhz .here we have used eqn .[ eq_ltp_m3 ] to convert these instrumental disturbances into effective force noise , normalized to the random force noise on a single test mass and assuming that and are uncorrelated .the instrument noise limit for measuring such uncorrelated noise sources is roughly 5 fm / s 1 mhz , within a factor 2 of the lisa goal . 
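The two instrumental terms above convert into acceleration noise in a simple way: readout displacement noise scales as omega^2 times x_n, and actuation instability scales as the relative force stability times the DC acceleration being compensated. The relative-stability figure quoted in the text was lost in extraction, so the 1e-6/sqrt(Hz) value below is purely a placeholder, and the stripped equations may carry additional factors (for instance from the quadratic voltage dependence of electrostatic forces) that this sketch omits.

```python
import numpy as np

def interferometer_to_acceleration(f, x_n):
    """omega^2 * x_n: displacement readout noise referred to acceleration."""
    return (2.0 * np.pi * f) ** 2 * x_n

def actuation_noise(rel_stability, dc_imbalance):
    """Actuation instability referred to acceleration: relative force
    fluctuation times the DC acceleration imbalance being compensated."""
    return rel_stability * dc_imbalance

f = 3e-3                                        # Hz
x_n = 8e-12                                     # m/sqrt(Hz), the ~8 pm requirement quoted above
print(interferometer_to_acceleration(f, x_n))   # ~3e-15 m/s^2/sqrt(Hz)
print(actuation_noise(1e-6, 1.3e-9))            # placeholder stability, 1.3 nm/s^2 imbalance goal
```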
while the uncorrelation assumption holds for many sources , including electrostatic effects , circuitry back - action , and brownian noise sources , it is less valid for noise from temperature and magnetic field noise , which can act coherently on the two test masses ( the instrumental noise source conversion factor in the denominator of eqns .[ eqn_interfere ] and [ eqn_actuation ] would range from 0 - 2 depending on the nature and degree of correlation ) .we should point out , however , that these environmental disturbances are dominated by short range , on - board sources , and , additionally , that the physical mechanisms coupling to the environment ( such as the random residual magnetic moment ) are not likely to be equal in the two test masses .as such , a substantial cancellation , which would render ltp insensitive to such effects , is unlikely .. before addressing individual random force noise sources relevant to ltp and lisa , we should clarify the frequency dependence of the sources in fig .[ ltp_instr_noise ] .first , noise sources which are not inherently frequency dependent ( for thermal gradient effects , for example , ) are drawn as flat ( `` white '' ) noise sources , even though they are likely to have a `` pink '' spectral shape due to a very likely low frequency increase in the driving noise source ( temperature fluctuations in this example ) . in the absence of detailed spectral information ,we choose to apply here the low frequency ( 1 mhz for ltp , 0.1 mhz for lisa ) target for the given fluctuations ( thermal , magnetic , sensor noise , etc ) even at higher frequencies where one would expect the noise to improve .for the same reason , we extend the ltp noise curves only to 1 mhz , the official low frequency limit of the mission .ltp will measure to even lower frequencies , but with sensitivity likely deteriorating as or worse , considering both the actuation and interferometer performance .several key random force noise sources are plotted in fig .[ ltp_instr_noise ] for ltp and in fig .[ lisa_noise ] for lisa ( references all address noise sources for ltp , drs , and lisa ) .the ltp random force noise measurement can , for specific known disturbances , be better extrapolated to the frequencies and environmental conditions of lisa by making precise measurements of the parameters that govern these noise sources .additionally , coupled with simultaneous measurements of environmental effects , they also permit debugging of the ltp force measurement itself , through correlation and possible subtraction of disturbances from the interferometry time series .this will also allow characterization of a noise source that might have escaped the random noise measurement because of a coherent cancellation between the two test masses . to characterize a specific noise source , we employ the differential interferometer to detect forces excited by coherently modulating the given disturbance ( on one or both test masses ) , with the low background force noise allowing fn measurement resolution in a 1 hour measurement .envisioned measurements of known sources include : * _ magnetic moment _ interaction of the test mass magnetic moment ( ) with magnetic field ( ) fluctuations produce a force , with moment likely dominated by the remnant ferromagnetic moment , which should be below 0.02 m for the au - pt test mass .the field gradient goal for ltp is 0.25 / m , improving to 0.025 / m for lisa .ltp will have magnetometers and helmholtz coils to measure and apply magnetic fields . 
with two coils oriented along the -axis , symmetrically on either side of a test mass / sensor , the moment be measured by modulating a magnetic gradient ( currents opposing in the two coils ) and measuring the resulting force in .components and can be measured by applying a homogeneous field along ( parallel coil currents ) and observing the torques excited about , respectively , the and axes . in both casesfield levels of order 10 t are sufficient for moment measurement at the percent level .while the test mass magnetic moment will also be measured on the ground , measurement in - flight will check for possible magnetic contamination occurring during final preparations , launch , or flight .* _ temperature gradient effects _ temperature differences between opposing faces in the position sensors that surround the test masses create forces through the radiometric effect and differential radiation pressure . for the 10 pa pressure projected for the ltp position sensor vacuum chambers , differential radiation pressure is likely to be roughly twice as large as the radiometric effect , producing 2 fm / s for the envisioned 1 mhz temperature difference stability of 10 k ( lisa aims to improve this thermal stability by a factor 10 down to 0.1 mhz ) .an additional , less predictable thermal gradient force disturbance could arise in any temperature dependent outgassing of the sensor walls . to measure the total temperature gradient `` feedthrough , '' ltpwill be equipped with thermometers and heaters , to excite and measure temperature differences across the position sensors , with differences as small as 10 mk allowing measurement within 1% of the radiation pressure effect .* _ stray dc electrostatic fields _ stray dc biases on the electrode surfaces of the capacitive position sensors can interact with the noisy test mass charge and dielectric noise to produce low frequency acceleration noise .the dc biases can be measured with modulated sensing voltages and then compensated by applying appropriate `` counter - bias '' voltages with the actuation circuitry .measurement of stray dc biases of tens of mv , and their compensation to within 1 mv , has been demonstrated on ground and will be performed on ltp and lisa to significantly reduce this potentially dangerous noise source ( figs .[ ltp_instr_noise ] and [ lisa_noise ] assume a more modest compensation to within 10 mv ) .longer measurements could also characterize the noise in the stray bias itself , a possible noise source at very low frequencies . 
*_ cross - talk effects _ the coupling of residual satellite motion along the non - measurement translational and rotational degrees of freedom into acceleration along the critical axis is an important and complicated noise source for both ltp and lisa .one example is the gravity gradient , which is dominated by the massive optical bench lying just beneath the axis connecting tm1 and tm2 and could be as large as /s ; this accelerates a test mass in in response to satellite motion in .another example is the acceleration noise arising through the slight rotation of the and actuation forces with the satellite rotational noise .cross - talk effects could be as big as 5 - 10 fm / s in the ltp random force noise measurement , where a modest level of off - axis satellite control is considered acceptable for cost reasons ( ltp requires 70 nm on the and axes , compared to 10 for lisa ) .cross - talk feedthrough coefficients can be measured by coherent modulation , at the m level , of the spacecraft position via the control setpoints on different axes .achievement of ltp s principle scientific objective , demonstrating an overall acceleration noise limit of 30 fm / s at 1 mhz , is sufficient to establish lisa s capability to detect a host of interesting gravitational wave sources . the higher precision measurement of the random force noise discussed here , to within roughly twice the lisa goal at 1 mhz , will further increase confidence in lisa s performance goals .this improves substantially upon the resolution of earth based torsion pendulum measurements of stray forces ( currently roughly two orders of magnitude above lisa s goal at 1 mhz ) and confronts many of the challenges of precision metrology in space , from launch survival to all - axes satellite control , that lisa will face .additionally , the coherent disturbance experiments will extend modelling of key noise sources to lisa s lower frequencies , with noise parameter measurements performed in representative flight conditions subject to possible damage and contamination , from handling , launch , or the satellite environment , that could affect sensitive surface electrostatic , magnetic , or outgassing properties for lisa .
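Each of the coherent-excitation measurements listed above (modulated magnetic gradients, heater-driven temperature differences, modulated setpoints) ultimately reduces to demodulating the force time series derived from the interferometer at the excitation frequency. The sketch below shows single-bin synchronous demodulation recovering a 10 fN-scale coherent force from a noisier background over a one-hour record; the sampling rate, modulation frequency and noise level are illustrative values, not mission parameters.

```python
import numpy as np

def demodulate(signal, t, f_mod):
    """In-phase and quadrature amplitudes of `signal` at the modulation
    frequency (single-bin synchronous demodulation)."""
    c = np.cos(2.0 * np.pi * f_mod * t)
    s = np.sin(2.0 * np.pi * f_mod * t)
    return 2.0 * np.mean(signal * c), 2.0 * np.mean(signal * s)

# toy example: a 10 fN coherent force buried in white force noise, 1 hour record
rng = np.random.default_rng(2)
fs, T, f_mod = 10.0, 3600.0, 0.05          # Hz sampling, record length in s, 50 mHz modulation
t = np.arange(0.0, T, 1.0 / fs)
force = 1e-14 * np.cos(2.0 * np.pi * f_mod * t) + 3e-14 * rng.standard_normal(t.size)
print(demodulate(force, t, f_mod))          # in-phase estimate close to 1e-14 N
```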
the ltp ( lisa testflight package ) , to be flown aboard the esa / nasa lisa pathfinder mission , aims to demonstrate drag - free control for lisa test masses with acceleration noise below 30 fm /s from 1 - 30 mhz . this paper describes the ltp measurement of random , position independent forces acting on the test masses . in addition to putting an overall upper limit for all source of random force noise , ltp will measure the conversion of several key disturbances into acceleration noise and thus allow a more detailed characterization of the drag - free performance to be expected for lisa .
over the years , there has been growing interest in the recovery of high dimensional signals from a small number of measurements .this new paradigm , so called compressed sensing ( cs ) , relies on the fact that many naturally acquired high dimensional signals inherently have low dimensional structure .in fact , since many real world signals can be well approximated as sparse signals ( i.e. , only a few entries of signal vector are nonzero ) , cs techniques have been applied to a variety of applications including data compression , source localization , wireless sensor network , medical imaging , data mining , to name just a few . over the years , various signal recovery algorithms for cs have been proposed . roughly speaking ,these approaches are categorized into two classes .the first approach is based on a deterministic signal model , where an underlying signal is seen as a deterministic vector and the sparsity promoting cost function ( e.g. , -norm ) is employed to solve the problem .these approaches include the basis pursuit ( bp ) , orthogonal matching pursuit ( omp ) , cosamp , and subspace pursuit .the second approach is based on the probabilistic signal model , where the signal sparsity is described by the _ a priori _ distribution of the signal and bayesian framework is employed in finding the sparse solution .when the multiple measurement vectors ( mmv ) from different source signals with common support are available , accuracy of the sparse signal recovery can be improved dramatically by performing joint processing of these vectors .since the algorithms based on mmv usually performs better than those relying on single measurement vector , many efforts have been made in recent years to develop an efficient sparse recovery algorithm .the mmv - based recovery algorithms targeted for the deterministic signal recovery include the mixed - norm solution and convex relaxation while the probabilistic approaches include the mmv sparse bayesian learning ( sbl ) method , block - sbl , auto - regressive sbl , and kalman filtering - based sbl ( ksbl ) . in this work ,we are primarily concerned with the mmv - based signal recovery problem when the observation vectors are sequentially acquired . to be specific ,we express the observation vector acquired at time index as where is the system matrix , is the source signal vector , and are the noise vector .we assume that is modeled as a zero mean complex gaussian random vector , i.e. , .our goal in this setup is to estimate the source signal using the sequence of the observations when 1 ) the source signal is sparse ( i.e. , the number nonzero elements in is small ) and 2 ) the dimension of the observation vector is smaller than that of the source vector . in particular , we focus on the scenario where the nonzero elements of change over time with certain temporal correlations . 
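The sequential observation model just described can be made concrete with a small generator: a support pattern held fixed over a block of measurements, amplitudes on the support following a first-order Gauss-Markov (AR(1)) recursion, and noisy projections through an m-by-n system matrix with m smaller than n. All dimensions, the AR coefficient and the noise levels below are arbitrary illustration values.

```python
import numpy as np

def generate_mmv_block(m=20, n=64, k=4, T=30, rho=0.95, sigma_w=0.3, sigma_n=0.05, seed=0):
    """Generate T sequential measurements y_t = A x_t + n_t where x_t = s * a_t:
    s is a fixed binary support (k nonzero entries) and a_t follows a
    Gauss-Markov recursion a_t = rho * a_{t-1} + w_t."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    support = rng.choice(n, size=k, replace=False)
    s = np.zeros(n); s[support] = 1.0
    a = np.zeros(n)
    X, Y = [], []
    for _ in range(T):
        a = rho * a + sigma_w * rng.standard_normal(n)
        x = s * a                                   # only the common support survives
        y = A @ x + sigma_n * rng.standard_normal(m)
        X.append(x); Y.append(y)
    return A, np.array(X).T, np.array(Y).T, support

A, X, Y, support = generate_mmv_block()
print(Y.shape, sorted(support))
```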
in this scenario, we assume that correlated time - varying signals are well modeled by gauss - markov process .note that this model is useful in capturing local dynamics of signals in linear estimation theory .the main purpose of this paper is to propose a new statistical sparse signal estimation algorithm for the sequential observation model we just described .the underlying assumption used in our model is that the nonzero amplitude of the sparse signals is changing in time , leading to different signal realizations for each measurement vector , yet the support of the signal amplitude is slowly varying so that the support remains unchanged over certain consecutive measurement vectors .we henceforth refer to this model as _ simultaneously sparse signal with locally common support _ since the support of the sparse signal remains constant over the fixed interval under this assumption .many of signal processing and wireless communication systems are characterized by this model .for example , this model matches well with the characteristics of multi - path fading channels for wireless communications where the channel impulse response should be estimated from the received signal .[ fig : cir ] shows a record of the channel impulse responses ( cir ) of underwater acoustic channels ( represented over the propagation delay and time domain ) measured from the experiments conducted in atlantic ocean in usa .we observe that when compared to the amplitude of the channel taps , the sparsity structure of the cir is varying slowly .thus , we can readily characterize this time - varying sparse signal using the correlated random process along with a deterministic binary parameter representing the existence of the signal .in recovering the original signal vector from the measurement vectors , we use the modified expectation - maximization ( em ) algorithm .the proposed scheme , dubbed as sparse - kalman - tree - search ( skts ) , consists of two main operations : 1 ) kalman smoothing to gather the _ a posteriori _ statistics of the source signals from individual measurement vector within the block of interest and 2 ) identification of the support of the sparse signal vector using a greedy tree search algorithm . treating the problem to identify the sparsity structure of the source signal as a combinatorial search , we propose a simple yet effective greedy tree search algorithm that examines the small number of promising candidates among all sparsity parameter vectors in the tree . thereexist several approaches to estimate the time - varying sparse signals under mmv model . in , reweighted optimization has been modified for the sequential dynamic filtering . in , modified sbl algorithm has been suggested to adopt autoregressive modeling . in , em - based adaptive filtering scheme has been proposed in the context of sparse channel estimation .other than these , notable approaches include turbo approximate message passing ( amp ) , lasso - kalman , and kalman filtered cs .we note that our work is distinct from these approaches in the following two aspects .first , in contrast to the previous efforts using continuous ( real - valued ) parameters to describe signal sparsity in , the proposed method employs the deterministic discrete ( binary ) parameter vector that captures the on - off structure of signal sparsity . 
due to the use of deterministic parameter vector ,an effort to deal with the probabilistic model on signal sparsity is unnecessary .also , since the search space is discretized , identification of parameter vector is done by the efficient search algorithm .second , while the recent work in estimates signal amplitude using kalman smoother and then identifies the support of sparse signal by thresholding of the innovation error norm , our work pursues direct estimation of the binary parameter vector using the modified em algorithm .we note that a part of this paper was presented in .the distinctive contribution of the present work is that the algorithm is developed in a more generic system model and practical issues ( e.g. , parameter estimation and iteration control ) and real - time implementation issues are elaborated .further , extensive simulations for the practical applications are conducted to demonstrate the superiority of the proposed method .[ t ] the rest of this paper is organized as follows . in section [ sec : skts_proposed ] , we briefly explain the sparse signal model and then present the proposed method . in section [ sec : chan ] , we discuss the application of the proposed algorithm in the wireless channel estimation . in section [ sec : simul ] , the simulation results are provided , and section [ sec : conclusion ] concludes the paper .notation : uppercase and lowercase letters written in boldface denote matrices and vectors , respectively .superscripts and denote transpose and conjugate transpose ( hermitian operator ) , respectively . denotes the conjugation of the complex number . indicates an -norm of a vector . for the -norm , we abbreviate a subscript for simplicity . is a diagonal matrix having elements only on the main diagonal . and denote the real and imaginary parts of , respectively . ] denotes the conditional expectation of given . ] and . means the probability of the event . denotes a trace operation of the matrix . is the element - by - element product ( hadamard product ) of the matrices and . denotes the coordinate vector .in this section , we consider the statistical estimation of the time - varying sparse signals from the sequentially collected observation vectors . as mentioned , our approach is based on the assumption that the support of the sparse signal varies slowly in time so that the multiple measurement vectors sharing common support can be used to improve the estimation quality of the sparse signals . in this section ,we first describe the simultaneously sparse signal model and then present the proposed sparse signal estimation scheme .we express a time - varying sparse signal as a product of a vector of random processes describing the amplitudes of nonzero entries in and the vector ^{t} ] .as mentioned , we assume that the support of the underlying sparse signals is locally time - invariant so that is constant in a block of consecutive measurement vectors . 
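before moving on, a toy sketch of this simultaneously sparse model may help: a first-order gauss-markov amplitude process is masked by a binary support vector held fixed over the block, and a textbook kalman filter / rts smoother recovers the a posteriori amplitude statistics for a given support, which is the role kalman smoothing plays in the proposed scheme. everything is real-valued for brevity, and the correlation coefficient, noise levels and dimensions are assumptions made only for this example.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n, k, T = 20, 60, 3, 12          # measurements, dimension, sparsity, block length (illustrative)
    a, sigma_w = 0.95, 0.05             # gauss-markov coefficient and noise level (assumed)

    A = rng.standard_normal((m, n)) / np.sqrt(m)
    c = np.zeros(n)
    c[rng.choice(n, size=k, replace=False)] = 1.0      # support, constant over the block

    S, s = [], rng.standard_normal(n)
    for _ in range(T):                                  # s_{t+1} = a s_t + sqrt(1 - a^2) v_t
        s = a * s + np.sqrt(1.0 - a ** 2) * rng.standard_normal(n)
        S.append(s.copy())
    Y = np.stack([A @ (c * s) + sigma_w * rng.standard_normal(m) for s in S], axis=1)

    def kalman_smoother(Y, A, c, a, sigma_w, sigma_s=1.0):
        """forward kalman filter + backward rts pass for the model above;
        returns the smoothed means of s_t given all observations and a support c."""
        n, T = A.shape[1], Y.shape[1]
        B = A * c                                       # effective matrix A diag(c)
        Q = (1.0 - a ** 2) * sigma_s ** 2 * np.eye(n)
        R = sigma_w ** 2 * np.eye(A.shape[0])
        m_f, P_f, m_p, P_p = [], [], [], []
        mean, P = np.zeros(n), sigma_s ** 2 * np.eye(n)
        for t in range(T):
            m_pred, P_pred = a * mean, a * a * P + Q    # predict
            G = P_pred @ B.T @ np.linalg.inv(B @ P_pred @ B.T + R)
            mean = m_pred + G @ (Y[:, t] - B @ m_pred)  # update
            P = P_pred - G @ B @ P_pred
            m_p.append(m_pred); P_p.append(P_pred); m_f.append(mean); P_f.append(P)
        ms = [None] * T
        ms[-1] = m_f[-1]
        for t in range(T - 2, -1, -1):                  # rts backward recursion (means only)
            J = P_f[t] * a @ np.linalg.inv(P_p[t + 1])
            ms[t] = m_f[t] + J @ (ms[t + 1] - m_p[t + 1])
        return np.stack(ms, axis=1)

    S_hat = kalman_smoother(Y, A, c, a, sigma_w)        # a posteriori means given the true support

in skts the same smoother is run inside the e-step for the current support estimate rather than for the true support used here.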
using this together with the observation model in ( [ eq : smodel ] ) , we obtain the simultaneously sparse signal model since follows gaussian distribution for the given , the _ a priori _ distribution of the source signal can be described by )^{h}{\rm cov}\left(\mathbf{h}_n \right)^{-1 } ( \mathbf{h}_{n}-e\left[\mathbf{h}_n ; \mathbf{c}_{i } \right ] ) \right),\end{aligned}\ ] ] where & = { \rm diag}(\mathbf{c}_{i } ) e[\mathbf{s}_{n } ] \nonumber \\ { \rm cov}\left(\mathbf{h}_n \right ) & = { \rm diag}(\mathbf{c}_{i } ) { \rm cov } ( \mathbf{s}_{n}){\rm diag}(\mathbf{c}_{i } ) .\label{eq : sttt}\end{aligned}\ ] ] when the multiple measurement vectors in the block are available , the maximum likelihood ( ml ) estimate of is expressed as where ^{t} ] and ] .+ for + + + let be the element of .+ if the -th entry is already one , skip the loop . otherwise , set the -th entry of to one .+ for , evaluate for all .+ if for any , then the candidate is duplicate node and hence we remove it .+ if , add into .+ + + end + output : .+ [ tb : tree ] in the m - step , we find maximizing in ( [ eq : finalexp ] ) as where . in finding , we need to check all possible combinations satisfying the sparsity constraint . since this brute force search is prohibitive for practical values of , we consider a computationally efficient search algorithm returning a sub - optimal solution to the problem in ( [ eq : max ] ) .the proposed approach , which in essence builds on the greedy tree search algorithm , examines candidate vectors to find out the most promising candidate of in a cost effective manner .the tree structure used for the proposed greedy search algorithm is illustrated in fig .[ fig : tree ] . starting from a root node of the tree ( associated with ^{t} ] .as the layer increases , one additional entry is set to one and thus entries of are set to one in the -th layer ( ) ( see fig . [fig : tree ] ) . at each layer of the tree, we evaluate the cost function for each node and then choose the best nodes whose cost function is maximal .the rest of nodes are discarded from the tree .the candidates of associated with the best nodes at each layer are called survival list " . for each node in the survival list ,we construct the child nodes in the second layer by setting one additional entry of to one^{t} ] . ] .note that since we do not distinguish the order of the bit assertion in , two or more nodes might represent the same realization of during this process ( see fig . [fig : tree ] ) .when duplicate nodes are identified , we keep only one and discard the rest from the tree . 
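a compact sketch of the layer-by-layer support search being described here (its last step is completed just below): supports are handled as frozensets so duplicate nodes collapse automatically, the cost function is an arbitrary callable standing in for the em objective built from the smoothed statistics, and a fixed number of survival nodes is kept per layer. the toy cost at the end is purely illustrative.

    import numpy as np

    def greedy_tree_search(score, n, k, P=4):
        """beam-style search over binary support vectors with exactly k ones;
        score(c) is the quantity to maximize and P the size of the survival list."""
        def to_vec(s):
            c = np.zeros(n); c[list(s)] = 1.0; return c
        survivors = [frozenset()]                      # root node: all-zero support
        for _ in range(k):                             # layer l asserts the l-th one
            children = {node | {j} for node in survivors for j in range(n) if j not in node}
            ranked = sorted(children, key=lambda s: score(to_vec(s)), reverse=True)
            survivors = ranked[:P]                     # keep the P best, drop the rest
        return to_vec(survivors[0])

    # toy usage: the score rewards supports close to a reference support
    ref = np.zeros(16); ref[[2, 5, 11]] = 1.0
    c_hat = greedy_tree_search(lambda c: -np.sum((c - ref) ** 2), n=16, k=3)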
after removing all duplicate nodes , we choose the best nodes and then move on to the next layer .this process is repeated until the tree reaches the bottom layer of the tree .we note that since the tree search complexity is proportional to the depth of the tree ( ) , the dimension of source vector ( ) , and the number of nodes being selected ( ) , one can easily show that the complexity of the proposed tree search is .hence , with small values of and , the computational complexity is reasonably small and proportional to the dimension of the source signal vector .the proposed tree search algorithm is summarized in table [ tb : tree ] .it is worth mentioning that one important issue to be considered is how to estimate the sparsity order .one simple way is to use the simple correlation method , where the observation vectors are correlated with the column vectors of and is chosen as the number of the column vectors whose absolute correlation exceeds the predefined threshold .while this approach is simple to implement , the performance might be affected by the estimation quality of .one can alternatively consider a simple heuristic that terminates the tree search when a big drop in the cost metric is observed .after all iterations are finished ( i.e. ) and is obtained , we use the kalman smoother once again to compute using the newly updated .the final estimate of is expressed as [ cols="<",options="header " , ] [ tb : lte ] note that the channel taps in the standard lte channel model are only approximately sparse . in order to determine the parameters of the gauss - markov process and for a given , we minimize the approximation error between the gauss - markov process and the jake s model as suggested in . using the cir estimates obtained by the sparse signal recovery algorithms ,the transmitted symbols are detected by the mmse equalizer in frequency domain .then , the channel decoder is followed to detect the information bits . to evaluate the performance of the recovery algorithms , we measure bit error rate ( ber ) at the output of the channel decoder .we test the performance of the channel estimators when the exact -sparse channels are used .the sparsity order for these channels is set to and the dimension of the measurement vector is set to .note that when , the pilot resources occupy 3.12% of the overall ofdm resources .we assume that the sparsity structure remains unchanged over the block of pilot containing ofdm symbols .we set the doppler rate to . in fig .[ fig : perf ] ( a ) and ( b ) , we plot the mse and ber performance of the recovery algorithms as a function of snr . from the figure, we clearly observe that the skts algorithm performs best among all algorithms under test and also performs close to that of the oracle - based kalman smoother .we next investigate the performance of the proposed skts algorithm when the practical lte channel models are used . in this test, we observe the behavior of the algorithms for four distinctive scenarios : a ) eva channel with and , b ) eva channel with and , c ) epa channel with and , and d ) epa channel with , .we set and for eva and epa channel models since the eva channel exhibits longer delay spread . in fig .[ fig:3gpp ] , we observe that the skts algorithm maintains the performance gain over the competing algorithms for wide range of doppler rates . 
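returning for a moment to the sparsity-order issue raised above, the simple correlation test can be sketched in a few lines; the normalization and the threshold value are arbitrary choices made for this example.

    import numpy as np

    def estimate_sparsity_order(Y, A, threshold=0.5):
        """count the columns of A whose aggregate absolute correlation with the
        block of observations Y exceeds a predefined threshold."""
        corr = np.abs(A.conj().T @ Y).sum(axis=1)      # aggregate over the block
        return int(np.sum(corr / corr.max() > threshold))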
note that when compared to the results of the exact -sparse channel model , we see that the performance gap between the skts and ksbl is a bit reduced .next , we compare the performance of the rt - skts described in section [ sec : skts_online ] with the original skts algorithm .in this simulations , we set and . for the rt - skts algorithm , we set . in order to test the performance in a harsh condition, we arbitrarily change the delay structure of the cir for every 30 observation vectors .to ensure the convergence of the online update strategy in ( [ eq : rup1 ] ) and ( [ eq : rup2 ] ) , we use the first 10 observation vectors for warming up purpose and then use the rest for measuring the mse performance . notethat in practice , such warming up period would not be necessary since the support of channel vector would not be changed abruptly in many real applications . in fig .[ fig : on ] , we see that the rt - skts algorithm performs close to the original skts algorithm in low and mid range snr regime . in the high snr regime , however , the rt - skts algorithm suffers slight performance loss due to the approximation step of and .nevertheless , as shown in fig .[ fig : on ] and fig .[ fig : perf ] ( a ) , the rt - skts algorithm maintains the performance gain over the conventional channel estimators . in this subsection , we investigate the performance of the skts algorithms in the reconstruction of the dynamic mri images . in our test , we use a sequence of dimensional cardiac images shown in fig .[ fig : cardiac ] images .the raw image data is available online . ] .we generate the measurements by performing two dimensional discrete wavelet transform ( dwt ) with a 2-level daubechies-4 wavelet , applying two dimensional dft matrix and taking the randomly chosen frequency - domain image samples . after adding the gaussian noise to the image, we recover the original image using the recovery algorithms .we set , which corresponds to about 35% of the image size ( i.e. , ) .we could empirically observe that the location of nonzero coefficients in wavelet image is slowly changing ( i.e. , support change occurs for only a few places ) , which matches well with our simultaneous sparse signal model . in order to capture the most of signal energy , we set for all images to the the number of coefficient containing % of the signal energy . ] . in fig[ fig : cardiac ] , we plot the mse of the several image recovery schemes obtained for each image .the skts algorithm outperforms the basis pursuit denoising ( bpdn ) and rw1l - df and also performs close to the oracle - based kalman smoother .note that we could not include modified cs scheme in in our numerical experiments since large number of measurement samples is required for the first image .in this paper , we studied the problem to estimate the time - varying sparse signals when the sequence of the correlated observation vectors are available . in many signal processing and wireless communication applications , the support of sparse signals changes slowly in time and thus can be well modeled as simultaneously sparse signal , we proposed a new sparse signal recovery algorithm , referred to as sparse kalman tree search ( skts ) , that identifies the support of the sparse signal using multiple measurement vectors .the proposed skts scheme performs the kalman smoothing to extract the _ a posteriori _ statistics of the source signals and the greedy tree search to identify the support of the signal . 
from the case study of sparse channel estimation problem in orthogonal frequency division multiplexing ( ofdm ) and image reconstruction in dynamic mri, we demonstrated that the proposed skts algorithm is effective in recovering the dynamic sparse signal vectors .from ( [ eq : estep ] ) and ( [ eq : vv2 ] ) , we get \\ = & c '' + \frac{1}{\sigma_{w}^{2 } } \sum_{n = ti+1}^{t(i+1)}\bigg\ { e\left[{\rm tr } \left[2 { \rm re}\left ( \mathbf{b}_{n } { \rm diag}(\mathbf{c}_{k } ) \mathbf{s}_{n } \mathbf{y}_{n}^{h } \right)\right ] \bigg| \mathbf{y}_{1:t } ; \hat{\mathbf{c}}_i^{(l ) } \right ] \nonumber \\ & - e\left[{\rm tr}\left [ \mathbf{b}_{n } { \rm diag}(\mathbf{c}_{i } ) \mathbf{s}_{n}\mathbf{s}_{n}^{h } { \rm diag}(\mathbf{c}_{i } ) \mathbf{b}_{n}^{h } \right]\bigg| \mathbf{y}_{1:t } ; \hat{\mathbf{c}}_i^{(l ) } \right ] \bigg\ } \\ = & c '' + \frac{1}{\sigma_{w}^{2 } } \sum_{n = ti+1}^{t(i+1)}\bigg\ { { \rm tr } \left[2 { \rm re}\left ( \mathbf{b}_{n } { \rm diag}(\mathbf{c}_{i } ) e\left[\mathbf{s}_{n}\bigg| \mathbf{y}_{1:t } ; \hat{\mathbf{c}}_i^{(l ) } \right ] \mathbf{y}_{n}^{h } \right)\right ] \nonumber \\ & - { \rm tr}\left [ \mathbf{b}_{n } { \rm diag}(\mathbf{c}_{i } ) e\left [ \mathbf{s}_{n}\mathbf{s}_{n}^{h}\bigg| \mathbf{y}_{1:t } ; \hat{\mathbf{c}}_i^{(l ) } \right ] { \rm diag}(\mathbf{c}_{i } ) \mathbf{b}_{n}^{h } \right ] \bigg\ } , \\\end{aligned}\ ] ] where and are the terms independent of . using the property of the trace ,i.e , , we have \right ) \nonumber \\ & - { \rm tr}\left [ \mathbf{b}_{n } { \rm diag}(\mathbf{c}_{i } ) e\left [ \mathbf{s}_{n}\mathbf{s}_{n}^{h}\bigg| \mathbf{y}_{1:t } ; \hat{\mathbf{c}}_i^{(l ) } \right ] { \rm diag}(\mathbf{c}_{i } ) \mathbf{b}_{n}^{h } \right ] \bigg\}\end{aligned}\ ] ]denoting as the transpose of the row vector of , we can express the lefthand term of ( [ eq : sterm ] ) as since and , we further have \mathbf{c}_{i},\end{aligned}\ ] ] and hence we finally have e. j. candes , j. romberg , and t. tao , robust uncertainty principles : exact signal reconstruction from highly incomplete frequency information , " _ ieee trans . information theory _ , vol .489 - 509 , feb . 2006 .z. zhang and b. d. rao , sparse signal recovery with temporally correlated source vectors using sparse bayesian learning , " _ ieee journal of selected topics in signal processing _ , vol . 5 , pp . 912 - 926 , sept .2011 .r. prasad , c. r. murphy and b. d. rao , joint approximately sparse channel estimation and data detection in ofdm systems using sparse bayesian learning , " _ ieee trans . signal process .62 , no . 14 , pp . 3591 - 3603 , july 2014 .j. w. choi , t. j. riedl , k. kim , a. c. singer , and j. c. preisig , adaptive linear turbo equalization over doubly selective channels , " _ ieee journal of oceanic engineering _ , vol .473 - 489 , oct . 2011 .j. w. choi , k. kim , t. j. riedl , and a. c. singer , iterative estimation of sparse and doubly - selective multi - input multi - output ( mimo ) channel , " _ proc .signals , systems and computers asilomar conference _ , nov .2009 , pp .620 - 624 .w. u. bajwa , j. haupt , a. m. sayeed and r. nowak , compressed channel sensing : a new approach to estimating sparse multipath channels , " _ proceedings of the ieee _98 , pp . 1058 - 1076 , june 2010 . c. r. berger , s. zhou , j. c. preisig and p. willett , sparse channel estimation for multicarrier underwater acoustic communication : from subspace methods to compressed sensing , " _ ieee trans. signal process .1708 - 1721 , march 2010 .
in this paper, we propose a new sparse signal recovery algorithm, referred to as sparse kalman tree search (skts), that provides a robust reconstruction of the sparse vector when a sequence of correlated observation vectors is available. the proposed skts algorithm builds on the expectation-maximization (em) algorithm and consists of two main operations: 1) kalman smoothing to obtain the _a posteriori_ statistics of the source signal vectors and 2) a greedy tree search to estimate the support of the signal vectors. through numerical experiments, we demonstrate that the proposed skts algorithm is effective in recovering the sparse signals and performs close to the oracle (genie-based) kalman estimator. keywords: compressed sensing, simultaneously sparse signal, multiple measurement vector, expectation-maximization (em) algorithm, maximum likelihood estimation
conflicting information is often unavoidable for large - sized knowledge bases ( kbs for short ) .thus , analyzing conflicts has gained a considerable attention in artificial intelligence research . in the same vein ,measuring inconsistency has proved useful and attractive in diverse scenarios , including software specifications , e - commerce protocols , belief merging , news reports , integrity constraints , requirements engineering , databases , semantic web , and network intrusion detection .inconsistency measuring is helpful to compare different knowledge bases and to evaluate their quality .a number of logic - based inconsistency measures have been studied , including the maximal -consistency , measures based on variables or via multi - valued models , n - consistency and n - probability , minimal inconsistent subsets based inconsistency measures , shapley inconsistency value , and more recently the inconsistency measurement based on minimal proofs .there are different ways to categorize the proposed measures .one way is with respect to their dependence on syntax or semantics : semantic based ones aim to compute the proportion of the language that is affected by the inconsistency , via for example paraconsistent semantics .whilst , syntax based ones are concerned with the minimal number of formulae that cause inconsistencies , often through minimal inconsistent subsets .different measures can also be classified by being formula or knowledge base oriented .for example , the inconsistency measures in consist in quantifying the contribution of a formula to the inconsistency of a whole knowledge base containing it , while the other mentioned measures aim to quantify the inconsistency degree a the whole knowledge base . some basic properties such as _ consistency , monotony , free formula independence _, are also proposed to evaluate the quality of inconsistency measures . in this paper, we propose a syntax - based framework to measure inconsistencies using a novel methodology allowing to resolve inconsistencies in a parallel way . to this end , _distributable mus - decomposition _ and _ distribution index _ of a kb are introduced .intuitively , a distributable mus - decomposition gives a reasonable partition of a kb such that it allows multiple experts to solve inconsistencies in parallel ; and the distribution index is the maximal components that a kb can be partitioned into .this methodology is of great importance in a scenario where the information in a kb is precious , large , and complex such that removing or weakening information requires intensive and time - consuming interactions with human experts .consider .intuitively , contains a large number of inconsistencies . and interestingly, our approach can recognize as distributable parts of such that each expert can focus on verifying a single part carefully and independently .in contrast , classical approaches follow the idea of resolving inconsistency as a whole without being able to break a kb into independent pieces .take , for example , the classical hitting set approach which identifies a minimal set of formulae , e.g. of , to remove for restoring consistency .note that has many such hitting sets of a big size .therefore , even if working in parallel , each expert needs to verify a large number of formulae , which is time consuming . 
more problematic in general , there are often overlaps among hitting sets so that multiple experts have to waste time in unnecessarily rechecking the overlaps .this is the same if we simply distribute one minimal inconsistent subsets to an expert .however , the proposed distributable mus - decomposition avoids this problem because it gives a disjoint decomposition of a kb .the methodology is inspired and a side - product of our exploration of the _ decomposition _ property defined for inconsistency measures , which is rarely discussed in the literature due to its modeling difficulty .our technical contributions are as follows : * we propose _ independent decomposability _ as a more reasonable characterization of inconsistency measures .* we define a _ graph representation _ of kbs to analyze connections between minimal inconsistent subsets by exploiting the structure of the graph .such a representation is then used to improve an existing inconsistency measure to satisfy the independent decomposability . *based on the graph representation , a series of _ mus - decompositions _ are introduced and used for defining the _ distribution - based inconsistency measure . we show the interesting properties of and give a comparison with other measures , which indicates its rationality . * we study the complexity of ( via an _ extended set packing problem _ ) and we provide encodings as a 0/1 linear program or min cost satisfiability for its computation .the paper is organized as follows : sections [ sec : preliminaries ] and [ sec : miv ] give basis notions and recall some inconsistency measures relevant to the present work . in section [ sec : partition ] , we propose a graph representation of a kb and use it to revise an existing measure . section [ sec : kbmeasure ] focuses on _ mus - decomposition _ and _ distribution - based inconsistency measure_. section [ sec : computation ] gives the complexity results of the proposed measure and its computation algorithms whose efficiency is evaluated in section [ sec : experiments ] .section [ sec : conclusion ] concludes the paper with some perspectives .through this paper , we consider the propositional language built over a finite set of propositional symbols using classical logical connectives .we will use letters such as and to denote propositional variables , greek letters like and to denote propositional formulae .the symbols and denote tautology and contradiction , respectively .a knowledge base consists of a finite set of propositional formulae .sometimes , a propositional formula can be in conjunctive normal form ( cnf ) i.e. a conjunction of clauses .where a clause is a disjunction literals , and a literal is either a propositional variable ( ) or its negation ( ) . for a set , denotes its cardinality .moreover , a kb is inconsistent if there is a formula such that and , where is the deduction in classical propositional logic . if is inconsistent , _minimal unsatisfiable subsets ( mus ) _ of are defined as follows : let be a kb and . is a minimal unsatisfiable ( inconsistent ) subset ( mus ) of iff and , .the set of all minimal unsatisfiable subsets of is denoted .clearly , an inconsistent kb can have multiple minimal inconsistent subsets .when a is singleton , the single formula in it , is called a _ self - contradictory formula_. we denote the set of self - contradictory formulae of by . a formula that is not involved in any mus of is called _free formula_. 
the set of free formulae of is written , and its complement is named _ unfree formulae _ set , defined as . moreover , the _ maximal consistent subset _ and _ hitting set _ are defined as follows : let be a kb and be a subset of . is a maximal satisfiable ( consistent ) subset ( mss ) of iff and , .the set of all maximal satisfiable subsets is denoted .[ def3 ] given a universe of elements and a collection of subsets of , is a hitting set of if . is a minimal hitting set of if is a hitting set of and each is not a hitting set of .we review the inconsistency measures relevant to the ones proposed in this paper .there have been several contributions for measuring inconsistency in knowledge bases defined through minimal inconsistent subsets theories . in ,hunter and konieczny introduce a scoring function allowing to measure the degree of inconsistency of a subset of formulae of a given knowledge base . in other words , for a subset , the scoring function is defined as the reduction of the number of minimal inconsistent subsets obtained by removing from ( i.e. ) . by extending the scoring function, the authors introduce an inconsistency measure of the whole base , defined as the number of minimal inconsistent subsets of .formally , + measure also leads to an interesting shapley inconsistency value with desirable properties .combining both minimal inconsistent subsets and maximal consistent subsets is another way to define inconsistency degree .we consider the inconsistency value that counts for a given kb , the number of its and its self - contradictory formulae ( subtraction of 1 is required to make when is consistent ) : another inconsistency measure considered in this paper is defined as the minimum hitting set of : is the size of the smallest hitting set of w.r.t .its cardinality .in addition , a set of properties have been proposed to characterize an inconsistency measure .[ bim ] given two knowledge bases and , and formulae and in , * consistency : iff is consistent * monotony : * free formula independence : if is a free formula in , then * mininc : if then .the monotony property shows that the inconsistency value of a kb increases with the addition of new formulae .the free formula independence property states that the set of formulae not involved in any minimal inconsistent subset does not influence the inconsistency measure .the mininc is used to characterize the shapley inconsistency value by in .there are common properties that we examine for an inconsistency measure ( definition [ bim ] ) , while leaving another property , called _decomposability _ or _ additivity _ ,debatable due to its modelling difficulty .indeed , properties in definition [ bim ] have an inspiring root from the axioms of shapley value . as mentioned in , one of the main limitation of the original additivitylies in the fact that the interactions of sub - games are not considered .moreover , argue that a direct translation of shapley s additivity has little sense for inconsistency measures . for this reason , _ pre - decomposability _ and decomposability are defined for formula - oriented inconsistency measures . 
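the notions recalled above are easy to exercise on toy bases. the brute-force sketch below encodes each formula as a set of clauses over integer-coded literals, tests satisfiability by truth table, and enumerates the minimal unsatisfiable subsets; the number of muses returned is the value of the mus-counting measure recalled above. this is exponential and intended only for small hand-made examples.

    from itertools import combinations, product

    def satisfiable(formulas, n_vars):
        """truth-table satisfiability test; a formula is a list of clauses and a
        clause a set of literals (i = variable i, -i = its negation)."""
        clauses = [cl for f in formulas for cl in f]
        return any(
            all(any((lit > 0) == assign[abs(lit) - 1] for lit in cl) for cl in clauses)
            for assign in product([False, True], repeat=n_vars)
        )

    def muses(kb, n_vars):
        """all minimal unsatisfiable subsets of kb, returned as index tuples."""
        found = []
        for r in range(1, len(kb) + 1):
            for idx in combinations(range(len(kb)), r):
                if any(set(m) <= set(idx) for m in found):
                    continue                    # a strict subset is already a mus
                if not satisfiable([kb[i] for i in idx], n_vars):
                    found.append(idx)
        return found

    # toy base over variables 1 = a, 2 = b:  a, not a, a or b, not b
    kb = [[{1}], [{-1}], [{1, 2}], [{-2}]]
    print(muses(kb, n_vars=2))                  # [(0, 1), (1, 2, 3)] -> two muses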
in this section ,we analyze the limitation of existing decomposability property and propose an _ independent decomposability _ which is more intuitive .we then derive a new measure by modifying to satisfy the independent decomposability property by considering the interactions between muses through _ mus - graph representation _ of a kb .let us recall pre - decomposability and decomposability properties .[ def : additivity ] let be knowledge bases and an inconsistency measure . satisfies pre - decomposability if it satisfies the following condition : if of a set by , i.e. , and , then .pre - decomposability ensures that the inconsistency degree of a kb can be obtained by summing up the degrees of its sub - bases under the condition that is a partition of .[ def : additivity ] satisfies decomposability if it satisfies the following condition : if , then .compared to pre - decomposability , decomposability characterizes a weaker condition that consider only muses cardinalities of and .although pre - decomposability and decomposability can characterize some kind of interactions .we argue that this condition is not sufficient .let us consider the following example : [ exam : enh - additivity ] let , each of which contains only one single mus . consider two bases .clearly , = , and . for any measure , if satisfies the decomposability property ( definition [ def : additivity ] ) , we have and . moreover , if satisfies the mininc property . then , k and k will have the same value , which is counter - intuitive because the components of are unrelated , whereas those of are overlapping .consequently , the components of are more spread than those of .one can expect that should contain more inconsistencies than .this example illustrates the necessity to characterize the interactions among sub - bases whose inconsistency measures can be summed up . to this end, we propose the following independent decomposability property : [ def : enh - additivity ] let be knowledge bases and an inconsistency measure . if and for all , then . is then called ind - decomposable . to perform additivity for a given measure, the independent decomposability requires an additional precondition expressing that pairwise sub - bases should not share unfree formulae , which encodes a stronger independence among sub - bases . indeed , the independent decomposability avoids the counter - intuitive conclusion illustrated in example [ exam : enh - additivity ] . to illustrate this ,suppose that satisfies independent decomposability , then we have , but not necessarily as and share the formula .hence can be different from . clearly , the following relations hold among different decomposability conditions .decomposability implies pre - decomposability ; pre - decomposability implies independent decomposability . indeed , as shown by example [ exam : enh - additivity ], the strong constraints of pre - decomposability and decomposability would make an inconsistency measure behavior counter - intuitive .in contrast , the independence between sub - bases required in the independent decomposability property make it more intuitive . while we can see that the measure is pre - decomposable , decomposable , and ind - decomposable , it is not the case for measure as shown below .[ prop : decom - relation ] the measure is not pre - decomposable , neither decomposable and nor ind - decomposable . 
consider the counter example : , and .it is easy to check that and satisfy the conditions of pre - decomposability , decomposability , and independent decomposability .we have while .consequently , .thus , is not pre - decomposable , neither decomposable and nor ind - decomposable .indeed , the following theorem states that under certain constraints , is multiplicative instead of additive .let be kbs such that and , for all with , . then , iff where .by induction on .the case of is trivial .we now consider the case of .let .using induction hypothesis , we have iff where .+ _ part . let .then , there exist and such that . if ( resp . ) then there exists such that ( resp . ) is consistent . using and , is consistent and we get a contradiction. therefore , and .+ _ part . let and . then, the set is consistent , since we have and .let us now show that is in .assume that is not in .then , there exists such that is consistent . if ( resp . ) , then ( resp . ) is consistent and we get a contradiction .therefore , is in . using this theorem, we deduce the following corollary : [ prop : indmcs ] let be kbs such that and , for all with , .then , . as the independent decomposability gives a more intuitive characterization of the interaction among subsets , in the following , we are interested in restoring the independent decomposability property of the measure .let us first define two fundamental concepts : _ mus - graph _ and _ mus - decomposition_. [ def : gmus ] the mus - graph of of a kb , denoted , is an undirected graph where : * is the set of vertices ; and * , is an edge iff .a mus - graph of gives us a structural representation of the connection between minimal unsatisfiable subsets .[ ex : graph ] let .we have where , , , , and .so is as follows : : mus - graph of ,width=151 ] moreover , leads to a partition of a kb , named _ mus - decomposition _ , as defined below .[ def : musdec ] a mus - decomposition of is a set such that and are the connected components of . by the fact that and the uniqueness of the connected components of a graph , we can easily see : mus - decomposition exists and is unique for an inconsistent kb .( example [ ex : graph ] contd . )the mus - decomposition of contains two components of : and by noting that .obviously , the mus - decomposition of a kb can be computed in polynomial time given its mus - graph .interestingly , we can see that the partition satisfies the application conditions of independent decomposability .that is , if an inconsistency measure is ind - decomposable and free - formula independent , then . in the following ,based on mus - decomposition , we present an alternative to the inconsistency measure ( defined in section [ sec : preliminaries ] ) so as to make it ind - decomposable .let be a kb with its mus - decomposition .the measure is defined as follows : \mathit{0 } & \mbox{otherwise}. \end{array } \right.\ ] ] that is , instead of as in , the maximal consistent subsets of mus - decomposition of are used in .( example [ ex : graph ] contd . ) [ ex : partitionmus ] we have and . then .[ prop2 ] measure is ind - decomposable .let be a kb such that and , for all with , .one can easily see that if and only if , for all , .we now consider the case of .we denote by the set of connected components in for .thus , is the set of connected components in , since .moreover , it is obvious that .let be the mus - decomposition of for .we have , since is the mus - decomposition of . 
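a small sketch of the mus-graph machinery: vertices are muses, two muses are joined when they share a formula, and the connected components (computed here with a plain depth-first walk) give the mus-decomposition once free formulae are set aside. the three muses in the toy call are arbitrary index sets chosen for illustration.

    def mus_decomposition(muses):
        """connected components of the mus-graph, returned as sets of formula indices."""
        comps, seen = [], [False] * len(muses)
        for i in range(len(muses)):
            if seen[i]:
                continue
            stack, comp = [i], set()
            seen[i] = True
            while stack:                              # depth-first walk over muses
                u = stack.pop()
                comp |= set(muses[u])
                for v in range(len(muses)):
                    if not seen[v] and set(muses[u]) & set(muses[v]):
                        seen[v] = True
                        stack.append(v)
            comps.append(comp)
        return comps

    # three muses, the first two overlapping: components {0, 1, 2} and {3, 4}
    print(mus_decomposition([(0, 1), (1, 2), (3, 4)]))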
that is , by taking into account the connections between minimal inconsistent subsets , mus - decomposition gives us a way to define an inconsistency measure which still satisfies the independent decomposability .recall that we want to have a way to resolve inconsistencies in a parallel way as mentioned in section [ sec : intro ] .indeed , mus - decomposition defines a disjoint partitions of a kb .however , it is inadequate for this purpose .consider again .the mus - decomposition can not divide into smaller pieces because its mus - graph contains only one connected component .a solution to this problem is via a more fine - grained analysis of a mus - graph by taking into account its inner structures . to this end , we propose _ partial _ and _ distributable _ mus - decompositions , based on which a new inconsistency measure is proposed and shown having more interesting properties .let us first study a general characterization of inconsistency measures with respect to the independent decomposability property .[ def : dimk ] let be a kb , the mus - decomposition of and a function from to .the mus - decomposition based inconsistency measure of with respect to , denoted , is defined as follows : a range of possible measures can be defined using the above general definition .let us review some existing instances of according to some functions .the simplest one is obtained when . in this case , we get a measure that assigns to the number of its connected components. however , this measure in not monotonic . indeed , adding new formulae to a kb can decrease the number of connected components .for instance , consider the kb that contains two singleton connected components and . now , adding the formula to leads to a new kb containing a unique connected component .besides , this simple measure considers each connected component as an inseparable entity . moreover , when we take ( the number of involved in the connected component ) , is equal to measure i.e. .this measure again does not take into account the inner structure of minimal inconsistent subsets of a .we now modify to take into account interactions between muses .in particular , we deeply explore the independent decomposability and the monotony properties to define a new inconsistency measure , while keeping other desired properties satisfied .to this end , we first introduce the partial mus - decomposition notion .[ def : ccpartition ] let be a kb and subsets of .the set is called a partial mus - decomposition of if the following conditions are satisfied : * for ; * ; * , .we denote the set of partial mus - decompositions of .the following proposition comes from the fact that the mus - decomposition of a kb is in .any inconsistent kb has at least one partial mus - decomposition . unlike the uniqueness of mus - decomposition, a kb can have multiple partial mus - decompositions as shown in the following example .[ ex : icc3 ] consider .figure [ fig : ubimuslb ] depicts the graph representation of which contains two connected components and where and .so the mus - decomposition of is .however , there are many partial mus - decompositions with some examples listed below : * , and .* , , and . 
* , and .note that and can not form a partial mus - decomposition due to the violation of the condition ( 2 ) in definition [ def : ccpartition ] .this also shows that condition ( 3 ) alone can not guarantee to satisfy the condition 2 in the definition ., width=302 ] [ def : max - pmusd ] a partial mus - decomposition is called maximal if where is defined by moreover , is called the distribution index of . that is , the maximal partial mus - decomposition has the largest cardinality among all partial mus - decompositions .and the distribution index is the cardinality of maximal mus - decompositions .( example [ ex : icc3 ] contd.)[ex : icc3-max ] among maximal partial mus - decompositions is .note that contains highly connected formulae that can not be separated into a partial mus - decomposition of size larger than 2 .although a ( maximal ) partial mus - decomposition can be formed by any subsets of , the next proposition indicates that only are needed to obtain a ( maximal ) partial mus - decomposition .[ lem : pmusd ] let be an inconsistent kb .there exist distinct muses such that is a maximal partial mus - decomposition of .suppose is a maximal mus - decomposition of .let for , then it is easy to verify that is a partial mus - decomposition whose cardinality is the distribution index of , so it is a maximal pmusd .that is , each element of a maximal partial mus - decomposition can be some minimal unsatisfiable subsets of , as in example [ ex : icc3 ] .moreover , the following proposition tells that we can have another special format of maximal mus - decomposition .[ lem : pmusd1 ] let be an inconsistent kb . there exist distinct for , such that is a maximal partial mus - decomposition of and is maximal w.r.t .set inclusion .we call such a maximal partial mus - decomposition a distributable mus - decomposition . by lemma [ lem : pmusd ] , take a maximal mus - decomposition of the form . denote the connected component of such that .now consider such that is still a partial mus - decomposition of .such exists because we can take . is finite , so are and . now taking that is maximal w.r.t .set - inclusion with such a property , the conclusion follows .( example [ ex : icc3 ] contd.)[ex : dmusd ] is a distributable mus - decomposition , but is not because .[ ex : dmusd - intro ] recall the example in section [ sec : intro ] : .the distributable mus - decomposition of is .a distributable mus - decomposition defines a way to separate a whole kb into maximal number of disjoint inconsistent components .the decomposed components , such as , can in turn be delivered to different experts to repair in parallel . in the case where resolving inconsistency is a serious and time - consuming decision, this can advance task time by a distributed manipulation of maximal experts .indeed , the rational in distribution mus - decomposition related to inconsistency resolving is given in the following proposition .given an inconsistent base and is a distributable mus - decomposition of .suppose is a consistent base obtained by removing or weakening formulae in .then is consistent .that is , inconsistencies in each component can be resolved separately and the merged kb afterwards is consistent .however , note that where is not necessarily consistent if each expert only removes one formula from . ] .for instance , in example [ ex : icc3-max ] , if we have and after expert verification , we still have inconsistency in . 
in this case, we can drop because have been manually chosen by experts ; or for carefulness , we can retrigger the same process to resolve the rest inconsistencies . as we can see above that a distributable mus - decompositiongives a reasonable disjoint partition of a kb . in this section, we study the distribution index which rises an interesting inconsistency measure with desired properties .[ def : measuremip ] let be a kb , the distribution - based inconsistency degree is defined as : intuitively , characterizes how many experts are demanded to repair inconsistencies in parallel .the higher the value is , more labor force is required .( example [ ex : dmusd ] contd . )since is a distributable mus - decomposition , we have .indeed , the so defined measure satisfies several important properties for an inconsistency measure . satisfies consistency , monotony , free formula independence , mininc , and independent decomposability ._ consistency _ : if is consistent , the partial mus - decomposition set is empty , so .+ _ monotony _ : for any kb and , it is easy to see that a partial mus - decomposition of is a partial mus - decomposition of . therefore , .+ _ free formula independence _ : it follows from the obvious fact that free formula do not effect the set of partial mus - decompositions .+ _ mininc _ : for , clearly , the only partial decomposition of is , so .+ _ independent decomposability _: let two bases satisfying and .for any partial mus - decompositions of and : and , it is easy to see .moreover , is of the maximal cardinality in .otherwise , by lemma [ lem : pmusd ] , there are that form a partial mus - decomposition of : with .since , we have either or for all .so at least one of and has a partial mus - decomposition whose cardinality is stricter larger than its distribution index . a contradiction with the definition of distribution index .so . consequently , satisfies independent decomposability property . moreover , the distribution - based inconsistency measure is a lower bound of inconsistency measures which satisfy monotony , independent decomposability , and mininc properties .given an inconsistency measure that satisfies monotony , independent decomposability , and mininc , we have . for any partial mus - decomposition of , we have .so by monotony , . moreover , since satisfies independent decomposability , . taking a maximal partial mus - decomposition , one can deduce that .by mininc and monotony , , so .( example [ exam : enh - additivity ] contd . )[ ex : ids ] for different measures based on muses , we have * and ; * and ; * and ; * and . so all and give a conclusion that is less inconsistent than , which coincides with our intuition , but it is not the case of .in example [ ex : ids ] , we have and of the same value .but it is not the general case as shown in the following example .( example [ ex : icc3 ] contd . ) for the connected component , while its distribution index is 1 .however , the following proposition gives a general relationship between and .[ prop : icc1hs ] let be a kb .we have as can be partitioned into disjoint components of minimal inconsistent subsets of , a minimal hitting set of must contain at least one formula from each component .that is , .( example [ ex : dmusd - intro ] contd . 
)we have and .but the former means that can be distributed to experts to resolve inconsistency in parallel and each expert only verifies two elements because of the distribution mus - decomposition is ; whilst the latter means that each expert needs to verify at least formulae to confirm an inconsistency resolving plan .and different experts have to do repetition work due to overlapping among different hitting sets .this example shows that the proposed mus - decomposition gives a more competitive inconsistency handling methodology than the hitting set based approach albeit the occasionally equivalent value of the deduced inconsistency measures and .in this section , we consider the computational issues of distribution - based inconsistency measure by generalizing the classical set packing problem , and then show two encodings of , which is aiming at practical algorithms for its solution .we first look at the following proposition which is a simple conclusion of lemma [ lem : pmusd ] .[ prop : mumus ] let be a kb . is the maximal cardinality of satisfying 1 . .proposition [ prop : mumus ] states that is the largest number of ( pairwise disjoint ) muses of such that their union will not rise any new mus , which gives a way to compute .next we study this computation in the framework of maximum closed set packing ( mcsp ) defined in the following .the maximum set packing problem is one of the basic optimization problems ( see , e.g. , ) .it is related to other well - known optimization problems , such as the maximum independent set and maximum clique problems .we here introduce a variant of this problem , called the _ maximum closed set packing problem_. we show that this variant is np - hard by providing a reduction from the maximum set packing problem which is np - hard . in this work ,the maximum closed set packing problem is used to compute the distribution - based inconsistency measure .let be universe and be a family of subsets of .a _ set packing _ is a subset such that , for all with , .our variant is obtained from the maximum set packing problem by further requiring that the union of selected subsets does not contain unselected subsets in as defined below .a _ closed set packing _is a set packing such that , for all , is not a subset of .the _ maximum ( free ) set packing problem _ consists in founding a ( free ) set packing with maximum cardinality , written msp ( mcsp ) .[ thm : mfsp - complexity ] mcsp is np - hard .we construct a reduction from the maximum set packing problem to the maximum closed set packing problem .let be a universe , a family of subsets of and are distinct elements which do not belong to .define and .we have is a solution of the maximum set packing problem for if and only if is a solution of the maximum closed set packing problem for . since maximum setpacking msp is np - hard , so is the mcsp .we here provide an encoding of the maximum closed set packing problem in linear integer programming .let be a universe and a set of subsets of .we associate a binary variable ( ) to each subset in .we also associate a binary variable to each element in .+ the first linear inequalities allow us to only consider the pairwise disjoint subsets in : the following inequalities allow us to have if and only if , for all , : where , for all , . 
indeed ,if then , using inequality ( [ eq2 ] ) , we have , for all , .otherwise , we have and , using inequality ( [ eq3 ] ) , there exists such that .+ finally , the objective function is defined as follows : the linear inequalities in , and with the objective function is a correct encoding of mcsp .let be a subset of that corresponds to a solution of the linear integer program . using the inequalities in ( [ eq1 ] ), we have , for all with , .thus , corresponds to a set packing . using the inequalities and , we have , for all , if and only if , for all , .hence , for all , there exists such that , so is not a subset of .therefore , is a closed set packing . finally , from maximizing the objective function in , we deduce that is a solution of the maximum closed set packing for . in this section, we describe our encoding of the maximum closed set packing problem as a mincostsat instance .let be a cnf formula and a cost function that associates a non - negative cost to each variable in .the mincostsat problem is the problem of finding a model for that minimizes the objective function : let be a universe and a set of subsets of .we associate a boolean variable ( resp . ) to each ( resp . ) .the inequalities in in our previous integer linear program correspond to instances of the atmostone constraint which is a special case of the well - known cardinality constraint .several efficient encodings of the cardinality constraint to cnf have been proposed , most of them try to improve the efficiency of constraint propagation ( e.g. ) . we here consider the encoding using sequential counter . in this case , the inequality is encoded as follows ( we fix ) : where is a fresh boolean variable for all .regarding to the inequalities in ( [ eq2 ] ) , it can be encoded by the following clauses : indeed , these clauses are equivalent to the following ones : the inequalities in can be simply encoded as : contrary to mcsp , the optimization process in mincostsat consists in minimizing the objective function . in order to encode mcsp as an mincostsat instance, we rename each variable with ( is a fresh boolean variable ) in ( [ atmost ] ) , ( [ ceq2 ] ) and ( [ ceq3 ] ) , for all .the mincostsat instance encoding the maximum closed set packing problem for is where is the cnf formula obtained from by the renaming described previously and is defined as follows : * for all , ; and * for all , .note that the optimization process in consists in minimizing and that corresponds to maximizing .in this section , we present a preliminary experimental evaluation of our proposed approach .all experiments was performed on a xeon 3.2ghz ( 2 gb ram ) cluster .we conduced two kinds of experiments .the first one deals with instances coming from classical muses enumeration problem . for this category we use two complementary state - of - the art muses enumeration solvers and then we apply our encoding into mcsp to compute the values of .when enumerating all muses is infeasible we use emus instead of camus to enumerate a subset of muses .indeed , emus is a real time solver that outperforms camus when we deal with partial muses enumeration .the instances where emus is used are indicated with an asterisk . in the second experiment ,the instances are randomly generated .to represent a kb with formulae involving muses , called mfsp_m_n , we first generate randomly a family of sets of positive integers from the interval $ ] .we suppose that each set of numbers represents a mus .we randomly set the size of . 
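both computations described in this section are straightforward to prototype. the first function below computes the distribution index by brute force, directly from its characterization as a maximum closed set packing of the muses (pairwise disjoint muses whose union contains no further mus); the second writes down the 0/1 linear program sketched above using the pulp package (assumed available; any ilp or partial maxsat backend would serve equally well). the toy mus family and all variable names are illustrative.

    from itertools import combinations
    from pulp import LpProblem, LpVariable, LpMaximize, lpSum, value

    def distribution_index(muses):
        """largest number of pairwise disjoint muses whose union contains no other
        mus of the base; exponential in the number of muses (toy inputs only)."""
        muses = [frozenset(m) for m in muses]
        for r in range(len(muses), 0, -1):
            for pick in combinations(muses, r):
                union = frozenset().union(*pick)
                disjoint = sum(len(m) for m in pick) == len(union)
                closed = all(m in pick or not m <= union for m in muses)
                if disjoint and closed:
                    return r
        return 0

    def max_closed_set_packing(universe, sets):
        """0/1 linear program along the lines of the encoding above: x_i selects
        set S_i, y_e marks covered elements, and a fully covered set must itself
        be selected (closedness)."""
        prob = LpProblem("mcsp", LpMaximize)
        x = [LpVariable(f"x_{i}", cat="Binary") for i in range(len(sets))]
        y = {e: LpVariable(f"y_{e}", cat="Binary") for e in universe}
        prob += lpSum(x)                                   # objective: number of selected sets
        for e in universe:                                 # selected sets are pairwise disjoint
            prob += lpSum(x[i] for i, S in enumerate(sets) if e in S) <= 1
        for i, S in enumerate(sets):                       # x_i = 1  =>  every element of S_i is covered
            for e in S:
                prob += y[e] >= x[i]
        for e in universe:                                 # y_e = 1  =>  some selected set contains e
            prob += y[e] <= lpSum(x[i] for i, S in enumerate(sets) if e in S)
        for i, S in enumerate(sets):                       # closedness: a fully covered set must be selected
            prob += lpSum(y[e] for e in S) <= len(S) - 1 + x[i]
        prob.solve()
        return [list(S) for i, S in enumerate(sets) if value(x[i]) > 0.5]

    muses = [{0, 1}, {1, 2}, {3, 4}]                       # toy mus family
    print(distribution_index(muses))                       # 2
    print(max_closed_set_packing({0, 1, 2, 3, 4}, muses))

on this toy family both routines agree, finding two distributable muses.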
in our experiments , we consider . in table[ tab : exp ] , for each instance , we report the number of muses ( ) , the value of the inconsistency measure ( ) and the time ( in seconds ) needed to compute . to solve the encoded instances , we use partial maxsat solver .as we can observe , the value is much smaller than the number of muses .furthermore , the computation time globally increases as increases .note that for instances whose value is equal to 1 , it means that they are strongly interconnected ..computation of ( real - world and random instances ) [ tab : exp ] [ cols="<,<,<,<",options="header " , ]we studied in this paper a new framework for characterizing inconsistency based on the proposed independent decomposability property and mus - decomposition . such defined inconsistency measures ( i.e. and )are shown with desired properties .the distributable mus - decomposition allows to resolve inconsistencies in a parallel way , which is a rarely considered methodology for handling large knowledge bases with important informations .complexity and practical algorithms are studied based on the advance of mus enumeration .we will study the lower bound complexity of the measure and explore applications of the proposed methodology in the future .bailleux , o. , and boufkhad , y. 2003 .efficient cnf encoding of boolean cardinality constraints . in _ 9th international conference on principles and practice of constraint programming - cp 2003 _ , 108122 .
measuring inconsistency is viewed as an important issue related to handling inconsistencies. good measures are supposed to satisfy a set of rational properties. however, defining sound properties is sometimes problematic. in this paper, we emphasize one such property, named _decomposability_, which is rarely discussed in the literature due to its modeling difficulties. to this end, we propose an independent decomposability property which is more intuitive than existing proposals. to analyze inconsistency in a more fine-grained way, we introduce a graph representation of a knowledge base and various mus-decompositions. one particular mus-decomposition, named _distributable mus-decomposition_, leads to an interesting partition of the inconsistencies in a knowledge base such that multiple experts can check inconsistencies in parallel, which is impossible under existing measures. this particular mus-decomposition results in an inconsistency measure that satisfies a number of desired properties. moreover, we give an upper bound on the complexity of the measure, which can be computed using 0/1 linear programming or min-cost satisfiability problems, and we conduct preliminary experiments to show its feasibility.